Scarcity & Abundance in 2015

Computer architecture has always trended towards optimizing use of what’s scarce and wasting what’s abundant.

In the mainframe era, everything was expensive – processors, memory, storage space, and network bandwidth (when it existed at all). Computing time was strictly rationed. Mini-computers were much more cost-effective, but they were still shared and expensive enough to be used judiciously.

[Image: a typical mini-computer of the era]

The PC presented the first really inexpensive compute cycles, and it triggered the evolution to the Client/Server architecture: PC compute cycles were cheap, abundant, and dedicated to a single user, while server processing was scarcer, more expensive, and shared. As a result, Client/Server drove an application design that offloaded everything possible and practical to the client while simultaneously expanding the workloads. The UI and some business logic were handled by the PC, while shared resources such as the database and storage were served by the servers. It’s also worth noting that only with the PC’s inexpensive compute cycles did it become cost-effective to spend compute cycles on ease of use, treating the user’s time as more valuable than the computer’s cycles.

The cloud era is changing the economic equations, and with them application architectures, again. Processing power is cheap, at an all-time low for both servers and client devices, and both classes of systems are shipping with many more processing cores. With the cloud, those cores are effectively available on demand. Storage capacity has also plummeted in cost, with prices at a mind-boggling $0.03 per GB for the latest mechanical disks. Network bandwidth is cheaper and faster than ever as well.

What’s expensive today? Time is what’s expensive. Businesses lose customers’ attention if they don’t respond quickly enough to queries. Time is money in big data as well; responding in real time with conclusions drawn from large, dynamic data sets faster than competitors is a huge business advantage. And what wastes a lot of both the end user’s and the computer system’s time in mobile/cloud applications? Synchronizing data.

The cost of synchronizing data shows up in several places. For example, having multiple servers process and coordinate portions of the same data slows all of them down. Moreover, the faster our processors are, and the more of them there are, the more potential instructions are lost while waiting for synchronization. Whether it’s synchronizing processors within a server or synchronizing distributed state across a network, the more processors stalled during the synchronization, the more aggregate time and associated processing cycles are wasted.
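To make that stall concrete, here is a toy Python sketch (my own illustration, not from the original post): several worker threads incrementing a single lock-protected counter must coordinate on every update, while giving each worker its own counter and merging once at the end spends a little extra memory to skip almost all of that coordination.

```python
# Toy sketch of coordination cost (an illustration, not a benchmark):
# workers updating one shared counter take a lock on every update and
# serialize; workers with private counters merged at the end trade a
# little extra memory for far less synchronization.
import threading

NUM_WORKERS = 8
UPDATES_PER_WORKER = 100_000

# Shared-state version: every update contends for the same lock.
shared_total = 0
shared_lock = threading.Lock()

def contended_worker():
    global shared_total
    for _ in range(UPDATES_PER_WORKER):
        with shared_lock:          # every iteration synchronizes
            shared_total += 1

# Abundant-memory version: each worker owns a private counter, merged once.
local_totals = [0] * NUM_WORKERS

def independent_worker(slot):
    count = 0
    for _ in range(UPDATES_PER_WORKER):
        count += 1                 # no coordination inside the loop
    local_totals[slot] = count

def run(target, args_list):
    threads = [threading.Thread(target=target, args=args) for args in args_list]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

run(contended_worker, [()] * NUM_WORKERS)
run(independent_worker, [(i,) for i in range(NUM_WORKERS)])
print(shared_total, sum(local_totals))  # both report 800000
```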

This leads me to what I’d call an interesting trend in the mobile/cloud era: computer systems should evolve to use the abundance of compute, memory, network, and storage capacity to avoid distributed synchronization! And guess what? That’s actually happening…

We’re starting to see this, particularly in storage. The migration of databases away from transactions and towards key/value pairs and eventual consistency is one example. Another is the movement away from in-place read/write and towards write/append and versioned objects for cloud applications. These techniques use a lot of storage space, because we keep many copies of slightly different objects, and a lot of bandwidth for replication. Both of those resources are abundant and inexpensive. And if an entity is never changed after it’s created, and instead we replace it with a newer version (while keeping the old one around), much less complex synchronization and coordination is needed. It is that coordination that is scarce and expensive in 2015.
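As a rough illustration of the write/append-with-versioning idea (a minimal sketch of the general pattern, not any particular product’s API), the toy store below never overwrites an object in place: each put appends a new immutable version and readers simply pick up the latest one, spending cheap storage to simplify coordination.

```python
# Minimal sketch of a versioned, append-only key/value store (illustrative
# only): updates never mutate an existing object; they append a new version
# and keep the old ones around.
from collections import defaultdict

class VersionedStore:
    def __init__(self):
        # key -> list of immutable versions, oldest first
        self._versions = defaultdict(list)

    def put(self, key, value):
        """Append a new version; never modify an existing one."""
        self._versions[key].append(value)
        return len(self._versions[key]) - 1   # version number just written

    def get(self, key, version=None):
        """Read the latest version by default, or any historical one."""
        versions = self._versions[key]
        return versions[-1 if version is None else version]

store = VersionedStore()
store.put("profile:42", {"name": "Ada"})
store.put("profile:42", {"name": "Ada", "city": "London"})  # new version, old kept
print(store.get("profile:42"))             # latest version
print(store.get("profile:42", version=0))  # original still readable
```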
