This post is about stuff I work on at my current job. I do not speak for my employer on this blog, however, so please consider the thoughts and opinions below strictly my own, not necessarily endorsed or approved by CohesiveFT.
It has been about 6 months since I last blogged about work, so I figured an update may be in order, especially since today CohesiveFT announced availability of VPN-Cubed on Flexiant’s cloud offerings.
We’ve been very busy on the VPN-Cubed engineering side. Along with features already on the roadmap, we delivered several enhancements that were directly driven or requested by VPN-Cubed users. On the product support side, we continued to expand the range of devices with which VPN-Cubed can do IPsec interop, which now includes some I personally had never heard of before. We grew our experience and expertise in troubleshooting intra-cloud and cloud-to-datacenter connectivity issues (there are many!). We also worked on a few projects that required non-trivial topologies or interconnects, successfully mapping customer requirements to VPN-Cubed capabilities.
One theme I have had in my head for some time now is VPN-Cubed as the networking fabric of the Intercloud. Let me explain.
VPS hosting was a predecessor of modern IaaS clouds. In VPS land, boxes are usually provisioned individually, one by one; a typical VPS setup consisted of one, two, or three boxes. Networking three independent boxes together is relatively straightforward.
At the beginning of the IaaS era, I imagine most setups were also one or two boxes. But as IaaS gains ground, topologies headed to the cloud are getting bigger, more complex, and more dependent on access to external resources. Setting up networking consistently is becoming a bigger deal. And that’s not the end of it.
One of the roles of the Intercloud is providing customers with an alternative (competition, in other words) - if one doesn’t like cloud A, she can take her entire topology to cloud B. I’d say 99 of 100 public cloud justification documents being submitted to CIOs worldwide today include a statement along these lines: “If this cloud provider fails to deliver what we need at the level we need it, we will switch to another provider.” In practice, this is not as easy as it may sound.
Each cloud’s networking has unique aspects; no two are alike. Public IPs, private IPs, dynamic or static assignment, customer-assignable or not, eth0 private or public, whether a cloud-provided firewall exists, the peculiarities of that firewall - these are just some of the differences (as of today, I have set up boxes in 6 IaaS clouds with admin endpoints facing the public Internet, so I have seen many network setups). Taking images of N boxes from one cloud and dropping them into another is well understood; recreating one cloud's networking in another cloud is where the challenge lies.
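To make the eth0 difference concrete, here is a minimal sketch (my own illustration, not anything from VPN-Cubed) of a check a cross-cloud bootstrap script might need: on some clouds eth0 carries an RFC 1918 private address, on others a publicly routable one, and code that assumes one or the other breaks on migration.

```python
import ipaddress

def classify_addr(addr):
    """Classify an interface address as 'private' (RFC 1918 and
    similar reserved ranges) or 'public'. Two guests that look
    identical at the OS level can differ here between clouds."""
    ip = ipaddress.ip_address(addr)
    return "private" if ip.is_private else "public"

# A typical cloud-assigned private eth0 address:
print(classify_addr("10.1.2.3"))   # private
# A cloud that binds a routable address directly to eth0:
print(classify_addr("8.8.8.8"))    # public
```

Anything keyed off this distinction (firewall rules, service bind addresses, peer discovery) has to be re-derived per cloud, which is exactly the kind of drift an overlay network hides.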
This is where I think VPN-Cubed shines as a customer-controlled network abstraction: it's an overlay built on top of the service provider's network, which allows it to look identical no matter what the underlying infrastructure is.
The same story plays out when an application is hyper-distributed and runs in multiple clouds, or in multiple regions of one cloud (where regions are connected via the public Internet). Here too, VPN-Cubed provides an abstraction that lets one treat all compute resources as being on the same network, regardless of where they are actually located at the moment.
At the same time, VPN-Cubed can be appealing for topologies that don’t care about the Intercloud at all. Networking and network security are areas that don’t get enough attention from cloud developers today, because developers are used to working within a perimeter. Excessively wide-open security group setups, using public IPs instead of private ones for communication, disabled local firewalls - these are all time bombs. They don’t affect the app right now (“look, it works!”), but they can be catastrophic later, once they become an attack vector. For such topologies, VPN-Cubed provides a virtual perimeter that confines authorized communications to a mutually-authenticated tunnel encrypted end-to-end. (Are you sure you want to keep forcing your incoming web traffic to HTTPS while leaving reads and writes between app servers and the database unencrypted? Or do you think application-level encryption would be better, faster, or easier to maintain than transport-level?)
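To illustrate what “mutually-authenticated” means here (this is a general TLS sketch of my own, not how VPN-Cubed is implemented - VPN-Cubed is an overlay, not application code), note the difference from ordinary HTTPS: the server also demands a certificate from the connecting peer, so an app server can't talk to the database tier without proving its identity. The file-path parameters below are hypothetical placeholders for your own PKI material.

```python
import ssl

def make_mutual_tls_context(cert=None, key=None, ca=None):
    """Build a server-side TLS context that requires the peer
    (e.g. an app server connecting to a database front end) to
    present a certificate signed by a trusted CA - mutual
    authentication, not just one-way server TLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if cert and key:
        ctx.load_cert_chain(cert, key)    # this side's identity
    if ca:
        ctx.load_verify_locations(ca)     # CA that signed peer certs
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject peers without a valid cert
    return ctx
```

With `CERT_REQUIRED`, the handshake itself fails for any client that can't present a CA-signed certificate, so the "virtual perimeter" is enforced cryptographically rather than by IP-based firewall rules alone.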
So where is the lock-in? If it's not the hypervisor, what makes moving from one cloud to another so difficult? Simply put: architectural differences. Every cloud chooses to do storage and networking differently.