Incentives and Cloud Computing Interoperability

To succeed, cloud interoperability must drive down costs for cloud computing vendors, both established and aspiring. This is how interop has been achieved throughout history - look at the car industry, the railroads (settling on a rail gauge), and so on, or see the Wikipedia article on standardization in general. Indeed, for something to be adopted, it must make sense for all parties to reap the benefits. Time and again, standardization reduced supply chain costs (interchangeable car parts, identical freight cars), so everyone was interested and everyone participated.

When interop does not reduce costs for vendors and there are competing proposals, history shows that we end up with format wars - Betamax vs VHS, Blu-ray vs HD-DVD. Want an example of something close to a format war in technology? How about the use of the slash (/) versus the backslash (\) in file paths.

How is this related to cloud computing interoperability, you might ask. Customers get many things from cloud interop - easier migrations, no lock-in, standard interfaces, standard integration libraries, flexibility, and vendors competing on price rather than on the substance of their offerings.

But what will interop do for cloud vendors?

On a high-level, a cloud vendor faces the following major categories of costs:

  • materials (datacenters, computers, storage, network gear)
  • operations (electricity)
  • labor (engineers)
  • costs associated with acquiring new customers (mainly facilitating systems integration)

The point is that for established vendors, cloud interop does not significantly drive down any of these cost categories. For all other market participants, or those who want to enter the market, interop can significantly drive down only the last category - the cost of acquiring new customers.

Imagine if 90% of the world's railroad traffic were carried on 10”, 11” and 12” rail gauges. Would it make sense for the established railroads to sit down with newcomers and discuss interoperability? Or would newcomers be better off adopting whatever rail gauge was already there? Looks like a rhetorical question to me.

I believe that this is why we won't see the Big 2 (Amazon and Google) actively involved in cloud interop at this point, and frankly this is exactly why I think cloud interop efforts are premature.

If you are interested in learning more about how standards emerge, please check out this paper titled The Emergence Of Standards: A Meta-Analysis.

I am not saying that interop efforts cannot or should not continue if the Big 2 do not join - as a developer and systems architect, I will use whatever is easier to use and has an active community. I just wanted to highlight this aspect of interop because I think it has been missing from the general discussion.

Additionally, I would like to point out that there could be other efforts where customers and vendors, established and aspiring alike, collaborate to achieve win-win results - for example, the Cloud Security Alliance. CSA is very different from interop efforts in how its incentives are positioned: both vendors and customers will benefit immensely if the industry reaches a better consensus and understanding on host security, data security and compliance in the cloud. With less security-related FUD, more customers will embrace the cloud (a win for vendors) and enterprises already in the cloud will feel safer (a win for customers). Hence, from an incentives perspective, there is no reason why the Big 2 won't join or contribute to the CSA cause.

Correctly aligned incentives are a powerful engine of innovation and should not be overlooked.

Categories: cloud-computing | economics |

My Comment on Open Federated Clouds

I left the following comment at CloudAve yesterday, on a post titled Open Federated Clouds And Sun's Cloud Announcement.

Interesting. Looks to me like it all depends on how you look at different clouds - as infrastructure providers or as software platforms.

The former case is roughly similar to buying Internet connectivity for your office from 2 different ISPs for redundancy.

The latter case, however, is roughly similar to the process of selecting a platform for a project - say, between WebLogic and JBoss. For a new project, a single platform is usually selected - I don't think there are many cases where an app is built on top of both for better resiliency or increased capacity (though I admit it's not impossible).

In both cases, the products are very similar or nearly identical to a certain extent, but the way you look at them makes you select two in one case and only one in the other.

Right now, I think choosing a cloud is akin to selecting a software platform, so one will choose only one. However, the future may very well change this trend, like you said, especially as interop gets better and each cloud's strengths and weaknesses become better defined.

Categories: cloud-computing |

The Ultimate Twitter

Microblogging started in large part as a medium to keep a close circle of friends updated on one's activities and whereabouts on the go (in other words, without access to a computer). Since then, as most know, it has grown and expanded its scope to include meeting new people, forming new clubs and communities both online and offline, marketing, branding and image building, and much more. Nobody on Twitter is surprised these days by notification emails about strangers starting to follow them, or by @replies from people many time zones away whom they have never met or heard of.

Yesterday I read a great post on ReadWriteWeb about the reverse network effect that comes with scale. It's something that I am sure many people have been thinking about, but Bernard Lunn was the one who skillfully put the thoughts together in his post.

My conclusion after reading it? Twitter as it exists today is just the beginning, and in its current form it won't be able to realize the full potential of microblogging. Twitter is currently constrained to following individual Twitter accounts; we assume that we want to hear everything a given person or brand says. But this thinking applies only to a close circle of friends (see the first paragraph). In the future, I want to follow topics, themes, places, discussions, communities, threads and news. And I don't want to do it on the client side (hashtags, keywords) but on the server side (semantic recognition, associations, weights). I think this “ultimate” Twitter eliminates reverse network effects by design.

This may well be impossible with the technology and academic research available today, but I am happy that we as a society have made the first step toward improving the distribution of information, and that there is room to grow!

Categories: blogging |

Adjustable Per-URI Backend Capacity in Rabbitbal

I recently pushed a Rabbitbal update to Github - http://github.com/somic/rabbitbal.

The biggest enhancement (IMHO) is the ability to increase or decrease the number of backend consumers based on any HTTP request headers. In “table” routing mode (see rabbitbal.yml), you can now specify an array of tests against which incoming request headers will be matched, causing a request to be published with the matching key (note :key). Your backend consumers use the same YAML file and can bind to all or only some of the queues, giving you flexibility in adjusting capacity. The old functionality is available via the “topic” routing mode.
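As a rough illustration of the idea (the config shape, header names and routing keys below are hypothetical examples, not Rabbitbal's actual format or API), a header-based routing table could behave like this:

```ruby
# Hypothetical routing table: each entry tests one request header
# against a pattern; the first match decides the routing key the
# request is published with. Names and keys here are illustrative.
ROUTING_TABLE = [
  { 'header' => 'Host',       'pattern' => /\Aimages\./, 'key' => 'request.images' },
  { 'header' => 'User-Agent', 'pattern' => /bot/i,       'key' => 'request.bots'   },
]

DEFAULT_KEY = 'request.default'

def routing_key_for(headers)
  entry = ROUTING_TABLE.find do |t|
    headers[t['header']] =~ t['pattern']
  end
  entry ? entry['key'] : DEFAULT_KEY
end

puts routing_key_for('Host' => 'images.example.com')   # request.images
puts routing_key_for('User-Agent' => 'Googlebot/2.1')  # request.bots
puts routing_key_for('Host' => 'www.example.com')      # request.default
```

Backend consumers that bind only to the queue for `request.images` then receive only that slice of traffic, which is what makes per-URI capacity adjustable.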

Note that I still use a topic exchange, because I wanted to support the use case where you aggregate all incoming requests into separate queues (with a routing key pattern like “request.#”) for bot detection, access log aggregation, etc. In other words, each request ultimately must end up in a single queue where it will be picked up by backend servers, while at the same time it can also be duplicated into other queues for other purposes.
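To see why a topic exchange enables this duplication, here is a minimal sketch of AMQP topic matching (queue names and binding patterns are illustrative; this simplified matcher treats “#” after a dot as one-or-more words, whereas real AMQP “#” also matches zero words):

```ruby
# Simplified AMQP topic-exchange matching: a message published with
# one routing key is delivered to EVERY queue whose binding pattern
# matches, so "request.#" can shadow-copy all traffic into an
# aggregation queue without diverting it from the backend queue.
def topic_match?(pattern, key)
  regex = pattern.split('.').map do |part|
    case part
    when '*' then '[^.]+'  # "*" matches exactly one word
    when '#' then '.+'     # "#" matches one or more words (simplified)
    else Regexp.escape(part)
    end
  end.join('\.')
  !!(key =~ /\A#{regex}\z/)
end

bindings = {
  'backend_images' => 'request.images',  # one backend's dedicated queue
  'firehose'       => 'request.#',       # aggregation queue sees everything
}

matched = bindings.select { |_, pat| topic_match?(pat, 'request.images') }.keys
# => ["backend_images", "firehose"]
```

The same request lands in both queues: the dedicated backend queue processes it, while the firehose queue can feed bot detection or log aggregation.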

Categories: rabbitmq | ruby | software-engineering |

CohesiveFT Launches VPN-Cubed For Amazon EC2

Update: CohesiveFT now also offers IPsec connectivity to VPN-Cubed running inside Amazon EC2. Read more.

Today the CohesiveFT team officially launches VPN-Cubed for Amazon EC2, a product that has been in beta for several weeks. Check out the announcement on the Elastic Server blog, which covers both the Pay and Free Editions, or check out the product page.

VPN-Cubed for EC2 is a self-service, preconfigured solution that allows you to build overlay networks inside the Amazon EC2 cloud, either in a single region (US or EU) or spanning multiple regions. Building a private network across the Atlantic could not be easier or cheaper than this! All you need to get started is familiarity with EC2 - we packaged the rest into AMIs and wrote detailed step-by-step documentation.

The product has all the benefits of our regular VPN-Cubed offering:

  • customer-assigned IP addresses in the cloud
  • encrypted communications between all hosts
  • built-in high availability and failover, no single points of failure (there is no single master server in case you are wondering)
  • support for IP multicast inside EC2 cloud (without VPN-Cubed, your multicast-based applications will not work in EC2)

In addition, we created an easy-to-use web-based admin tool to make configuring and monitoring your private topology in the cloud even simpler.

VPN-Cubed for EC2 is a great way to quickly try it out, see how it works, and see how it can help you take your cloud operations to the next level. And if you need greater flexibility, more complex interconnects, customized discovery, agent-based monitoring, further traffic optimization, or want to use VPN-Cubed outside of EC2 - contact us and we can tailor VPN-Cubed to your needs.

Categories: cloud-computing | cohesiveft |
