19. Application and/or data hosting. This is related to the preceding idea, but not identical. And again, while we’ve already funded several startups in this area, it’s probably going to be big enough that it contains several rich markets.
It may turn out that 4, 18, and 19 all have the same answer. Or rather, that there will be things that answer all three. But the way to find such a grand, overarching solution is probably not to approach it directly, but to start by solving smaller, specific problems, then gradually expand your scope. Start by writing Basic for the Altair.
My Idea – UniversalAPI
“Start by writing Basic for the Altair.” This is probably my favorite line from the original 30-ideas post. After reading the first paragraph I thought to myself, “OK, here we go – what obscure twist can I put on the over-saturated store-your-photos-in-the-cloud genre?” But that last bit really got me thinking outside of the box.net (pun intended). One idea that came to mind is a universal API that would act as a single interface to the multitude of cloud-based storage services out there today. As much as I love APIs, there are a few big issues that API consumers face when deciding to utilize a particular API:
- Betting on a winner – When you choose to build your service on top of someone else’s API, you’re placing a lot of trust in that provider. Not only are you expecting a basic level of uptime and responsiveness, you also need to be confident that the service you pick will stick around for a while. Given the high failure rate of web startups, this can be pretty tough to do.
- Steep learning curves – It can take a while to get a new API connection up and running. Besides requesting a developer key and all that, there’s lots of documentation to read and new keywords and interfaces to learn before getting started. Efforts like OpenSocial are a good start, but there’s still no truly universal API standard (that I know of) that you can learn once and apply to all of the major startups in a given area (photos, videos, file storage, etc.).
- Failover options – When S3 goes down, if you’re using their API, you’re going down right along with it. Same goes for Twitter, or pretty much any other API-based service. Smart developers can write code to store critical files locally in case of emergency, but it’s expensive and far from automatic.
My idea is a service that provides a simple, standard set of APIs to handle the basic nuts-and-bolts operations for Web 2.0. The service would start by covering three basic areas – photo, video, and file storage. It would act as a business layer that takes a simple API request from the developer and maps it on the backend to the various custom API specifications of the major online storage services. For example, the developer might write a single line of code like “pUniversalAPI -> StorePhoto(fileHandle, auth)” and the service would take that file and post it to PhotoBucket, Box.net, drop.io, etc. Besides getting mapped to the best possible service, the developer learns a single API and doesn’t have to worry about the messy details of every unique API across the various services. There are lots of details that would need to be worked out around handling developer keys (e.g. throw a master dev key + URL through a hash to map to the unique dev key for each service) and user authentication, but let’s skip that for now and jump to a few key features:
- Automatic mapping to the best service for a scenario – Instead of spending hours evaluating which photo-sharing service to use, just call the features you want through the UniversalAPI and the system will figure out the best backend. For example, if you are storing extra-large files, the UniversalAPI will map those files to a service that specializes in large files. If the goal is fast file retrieval times, UAPI would choose the optimal service for that. Of course, if developers want greater control over which service to use, they can add a flag to the call to ensure the file gets mapped to their preferred service. Another advantage of this approach is that if one service goes out of business, the system can automatically re-route those files to a different service without the developer needing to write new code or deal with any lost user files.
- Local storage and caching to improve performance and uptime – The UniversalAPI service would optionally be able to store files on its own local storage system in case of emergency (or to minimize the latency caused by the extra layer of abstraction). This way, if the file storage service is experiencing downtime, the service would be able to return a cached local copy of the file. Of course, this would be a premium feature that developers would have to pay for. What’s nice is that devs could start off with the free version while they are still in the early stages (read: dirt poor), and upgrade to the premium service once revenue/funding is available.
- Code snippets to provide direct access to an API – One big problem with this plan is that it introduces another potential point of failure into your system. Now instead of worrying about one service going down, you have two services to worry about. The solution? The UniversalAPI service could allow the developer to embed a “failover function” into their code that would bypass the UniversalAPI service and instead interact directly with the storage APIs. In pseudo-code this would be “If UniversalAPI.Fail then RunFailover”, where RunFailover would contain code to submit the file to S3 or whatever service(s) the UniversalAPI is mapping that particular developer’s calls to.
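To make the single-call-plus-routing idea concrete, here is a minimal Python sketch of what the dispatch layer might look like. Everything here is an illustrative assumption – the provider names, the `max_file_mb` and `avg_latency_ms` numbers, and the selection rule are invented for the example, not real service specs:

```python
# Hypothetical sketch of the UniversalAPI dispatch layer: one call the
# developer learns, with backend selection hidden behind it.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    max_file_mb: int      # largest file the service handles well (assumed)
    avg_latency_ms: int   # typical retrieval latency (assumed)

PROVIDERS = [
    Provider("PhotoBucket", max_file_mb=10, avg_latency_ms=120),
    Provider("Box.net", max_file_mb=100, avg_latency_ms=200),
    Provider("drop.io", max_file_mb=50, avg_latency_ms=90),
]

def pick_provider(file_mb, prefer_fast=False, forced=None):
    """Map a request to the best backend, or honor an explicit preference."""
    if forced is not None:
        return next(p for p in PROVIDERS if p.name == forced)
    eligible = [p for p in PROVIDERS if p.max_file_mb >= file_mb]
    if prefer_fast:
        return min(eligible, key=lambda p: p.avg_latency_ms)
    # Default: pick the service with the most headroom for large files.
    return max(eligible, key=lambda p: p.max_file_mb)

def store_photo(file_handle, auth, file_mb, **hints):
    """The single StorePhoto-style call; routing happens behind it."""
    provider = pick_provider(file_mb, **hints)
    # ...here the request would be translated to provider's native API...
    return provider.name
```

With this sketch, an 80 MB file routes to the only eligible service, a small file with `prefer_fast=True` routes to the lowest-latency one, and a `forced` flag gives the developer the explicit-control override described above.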
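The local-cache tier could be as simple as a wrapper that refreshes a local copy on every successful fetch and serves that copy when the upstream service is down. A sketch under assumed names (`UpstreamDown`, `CachingFetcher` are invented for illustration):

```python
# Illustrative sketch of the optional local-storage/caching feature:
# serve a cached copy of a file when the backing service is unavailable.

class UpstreamDown(Exception):
    """Raised (hypothetically) when the backing storage API is down."""

class CachingFetcher:
    def __init__(self, fetch_remote):
        self._fetch_remote = fetch_remote   # callable hitting the backend API
        self._cache = {}                    # file_id -> bytes (local copy)

    def fetch(self, file_id):
        try:
            data = self._fetch_remote(file_id)
            self._cache[file_id] = data     # refresh the local copy on success
            return data
        except UpstreamDown:
            if file_id in self._cache:
                return self._cache[file_id] # backend down: serve cached copy
            raise                           # nothing cached; surface the outage
```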
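The “If UniversalAPI.Fail then RunFailover” pseudo-code might translate into real code as a small wrapper; the function names below are hypothetical stand-ins for the UniversalAPI call and the developer's direct-to-storage fallback:

```python
# Concrete version of the failover-function idea: try the UniversalAPI
# path first, and bypass it on failure by calling the storage API directly.

def store_with_failover(file_handle, auth, universal_store, direct_store):
    """universal_store goes through the UniversalAPI layer; direct_store is
    developer-supplied code hitting S3 (or whichever mapped service) directly."""
    try:
        return universal_store(file_handle, auth)
    except Exception:
        # The UniversalAPI layer itself is the failing link here, so go
        # straight to the underlying storage service.
        return direct_store(file_handle, auth)
```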
I’m fully aware that there are a ton of holes in this idea as presented. That being said, what do you guys think of the high-level concept? Also, sorry for missing a day, I’ll try to make up for it by doing a bonus post this weekend.