Unexpected Downside of a SaaS

It just occurred to me today that despite all the positive things that SaaS companies continue to deliver, a lot of them have an interesting weak spot.

In order to be successful, a SaaS must deliver a superior product to what I would otherwise be able to build myself, and deliver it at a price I'd be willing to pay. Since the price of the service is a factor in sales, it's better to price the service at a reasonable level. This in turn drives SaaS companies to pursue economies of scale: it's much better for them to build solutions that can be sold to many customers than to build a customized solution for every customer.

As a result, one thing many SaaS companies do is put every customer's data into the same data store of some sort and enforce data confidentiality at the service layer.
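
To make that pattern concrete, here is a minimal sketch of what such a shared, multi-tenant store might look like: a single table holding every customer's rows, with confidentiality resting entirely on a tenant_id filter applied in the service code. The schema and function names are hypothetical, purely for illustration.

```python
import sqlite3

# One shared table for all customers; isolation is purely a
# service-layer convention (every query must filter by tenant).
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE records (
           tenant_id TEXT NOT NULL,
           key       TEXT NOT NULL,
           value     TEXT NOT NULL
       )"""
)

def put_record(tenant_id: str, key: str, value: str) -> None:
    conn.execute(
        "INSERT INTO records (tenant_id, key, value) VALUES (?, ?, ?)",
        (tenant_id, key, value),
    )

def get_records(tenant_id: str) -> list[tuple[str, str]]:
    # Confidentiality hinges on this WHERE clause: forget it once,
    # anywhere in the codebase, and tenants see each other's data.
    cur = conn.execute(
        "SELECT key, value FROM records WHERE tenant_id = ?",
        (tenant_id,),
    )
    return cur.fetchall()

put_record("acme", "plan", "enterprise")
put_record("globex", "plan", "starter")
print(get_records("acme"))  # [('plan', 'enterprise')]
```

Nothing in the storage layer itself separates tenants; the isolation guarantee lives entirely in the application code.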

And here lies the rub. By doing this, they end up with much bigger datasets than the original problem domain called for. Bigger datasets are harder to work with: they require more optimization, more security, more planning, more of everything. Sometimes the size of a SaaS dataset even prevents the company from building functionality that I could easily build had I needed it only for myself, on my small dataset.
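
As an illustration, consider something like an arbitrary substring search across all of my records. On my own small dataset, the brute-force scan sketched below (names hypothetical) answers it instantly; for the SaaS, the same feature has to hold up against its largest tenants and the aggregate load of all of them, which turns a casual full scan into a capacity-planning problem.

```python
def search_my_records(
    records: list[tuple[str, str]], needle: str
) -> list[tuple[str, str]]:
    # Brute-force substring match: no ordinary index can serve an
    # arbitrary "contains" query, but over one customer's few
    # thousand rows a full scan finishes in milliseconds. Over a
    # shared multi-tenant table, the same approach is a non-starter.
    return [(k, v) for k, v in records if needle in k or needle in v]

# Example usage over a toy single-tenant dataset.
mine = [("plan", "enterprise"), ("notes", "renew the enterprise plan in May")]
print(search_my_records(mine, "enterprise"))
```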

This post was inspired by the realization that one of the SaaS services I use lacks a certain API that would be trivial for me to implement if I had all of the SaaS functionality under my own control, operating only on my own data.

Could it be that bigger is not always better in this context? Does this line of thought affect the technical architecture of SaaS data stores? Should it?

Categories: software-engineering