2016 Trends Inside and Outside the Data Center

It’s that time of year for CTO predictions, and the accelerating rate of innovation and disruption across IT certainly provides ample opportunity for comment. Although there is significant disruptive change under way across many disciplines, I want to focus primarily on the influence of cloud and web-scale computing on data center infrastructure, along with some storage observations for 2016.

Back to the Future 2016

HCI Pivots to an IT Architecture

Hyperconverged infrastructure (HCI) has gained a lot of attention in recent years for its model of tightly integrating networking, compute, virtualization and storage into a single building block, packaged and sold as an appliance. An HCI node usually comprises a server with direct-attached disks and PCIe flash cards, along with all the software needed to run virtual workloads: hypervisor, systems management, configuration tools and virtual networking. Most significantly for storage, there is always a software-defined storage (SDS) stack that virtualizes the disk and flash hardware into a virtual storage array and provides storage management capabilities. This SDS stack delivers all the storage services consumed by the virtual machines.
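To make the building-block idea concrete, here is an illustrative-only sketch of the composition described above. All class names and capacity figures are hypothetical; the point is simply that each appliance bundles compute, a hypervisor and an SDS layer that pools its local disks and flash into shared capacity.

```python
from dataclasses import dataclass

@dataclass
class SDSStack:
    """Software-defined storage: virtualizes local media into a shared pool."""
    disks_tb: float
    flash_tb: float

    def pooled_capacity_tb(self) -> float:
        # The SDS layer presents local disk + flash as one virtual array.
        return self.disks_tb + self.flash_tb

@dataclass
class HCINode:
    """One hyperconverged building block: server, hypervisor, SDS stack."""
    cpu_cores: int
    ram_gb: int
    hypervisor: str
    storage: SDSStack

# Three identical nodes form a minimal cluster; their SDS stacks
# aggregate into a single virtual storage array serving all VMs.
cluster = [HCINode(32, 512, "ESXi", SDSStack(disks_tb=24, flash_tb=4))
           for _ in range(3)]
total_tb = sum(node.storage.pooled_capacity_tb() for node in cluster)
print(f"Cluster pooled capacity: {total_tb} TB")
```

Scaling the cluster means adding another identical node, which grows compute and storage in lockstep — exactly the property that makes HCI simple to buy but, as argued below, less flexible to tailor.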

I predict that in 2016, the HCI approach to data center architecture will be increasingly decoupled from its integrated packaging and sales strategy. In VMware’s EVO:Rail offering, for example, VMware Virtual SAN (VSAN) is the integrated storage stack. Now enterprise-proven and rich in enterprise features, VSAN will see broader adoption within the data center. Those who don’t want to embrace the one-size-fits-all HCI packaging can still use server-side, software-defined storage solutions like VSAN that are both high performance and cost effective. For sophisticated enterprise data center use cases, IT leaders will increasingly choose the most appropriate, customizable and flexible components, guided by HCI architecture principles.

Web-Scale’s Growing Influence over Processor & Server Design

Web-scale applications have always been constructed differently than traditional enterprise apps from a software architecture point of view. Speed and scale are paramount. Application change is frequent and continuous. Applications are composed of many scale-out components or microservices, and each application is responsible for maintaining its own integrity while running on unreliable infrastructure. Everything is automated. An eventually consistent data model has largely replaced transactional integrity. Failure processing is now steady state, not a separate recovery phase.

2016 will see a widespread changing of the guard: the most important drivers pushing the envelope of system design, and of Intel in particular, are now the web-scale service vendors (Facebook, Google, Amazon, etc.), no longer the traditional server vendors catering to traditional IT. Traditional IT, with its best practices and standardized servers, is no longer where the growth and innovation are.

Cloud Winners and Losers

Long the darling of the industry, VMware just slipped behind AWS for new deployments of workloads. The long term trend is clearly away from enterprise data center hosted applications where VMware is king, and towards cloud-hosted deployments (especially for next generation cloud native applications).

The public cloud appears to be especially dominant for new, next generation applications: the so-called third platform/cloud native apps. While these public clouds utilize virtual machines (as well as containers), they are not dominated by the VMware vSphere platform and its rich enterprise feature set. This market shift will shepherd in a new collection of complementary technologies and services for deployment, migration and management of cloud applications. This is looking more and more like an inevitable changing of the guard.

While most organizations don’t seem to be moving their legacy applications to the cloud entirely, more and more of them are putting at least some part of their enterprise environment and workloads there. In this market, AWS is the dominant leader and continues to innovate at an impressive pace, followed by Microsoft and Google, while everyone else looks increasingly like a marginal “also-ran”. In my opinion, Google just improved its odds of success in this market with the best “acqui-hire” since Apple bought NeXT all those years ago to bring back Steve Jobs: Google acquired BeBop and appointed former VMware founder and CEO Diane Greene as its chief cloud executive. Google has always had the technical chops to dominate the public/hybrid cloud, but lacked the enterprise expertise. My friend and former boss’s boss brings premier enterprise executive expertise, and she has already shepherded one highly disruptive data center infrastructure technology, VMware, to ubiquity. If that isn’t the missing ingredient, I don’t know what is!

The All-Flash Data Center Is Not Happening, Even as Flash Use Continues to Grow

While the all-flash array market continues to grow and flash prices continue to fall, the reality of flash production is that the industry does not have the manufacturing capacity necessary for flash to supplant hard disk drives. A recent Register article quoted Samsung and Gartner data suggesting that by 2020 the NAND flash industry could produce 253 exabytes, roughly 3x current manufacturing capacity, at a build-out cost of approximately $23B.

The kicker is that 253EB would be only 10% of expected industry storage capacity demand! The article further estimated that roughly $2 trillion would have to be invested in manufacturing facilities to produce enough flash to meet projected industry storage capacity requirements.
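The gap is easy to see with a back-of-envelope calculation. The constants below are taken directly from the article’s quoted Samsung/Gartner figures; the arithmetic is illustrative only.

```python
# Back-of-envelope check of the article's flash-capacity figures.
projected_2020_output_eb = 253            # projected NAND flash output by 2020, in exabytes
current_capacity_eb = projected_2020_output_eb / 3   # article: 253 EB is ~3x current capacity
share_of_demand = 0.10                    # article: 253 EB covers only ~10% of expected demand

implied_demand_eb = projected_2020_output_eb / share_of_demand
shortfall_eb = implied_demand_eb - projected_2020_output_eb

print(f"Current capacity:    ~{current_capacity_eb:.0f} EB")
print(f"Implied 2020 demand: ~{implied_demand_eb:.0f} EB")
print(f"Shortfall (to disk): ~{shortfall_eb:.0f} EB")
```

Even after tripling capacity, flash would leave on the order of 2,200+ exabytes of projected demand to be served by something else, which is the shortfall hard drives will continue to absorb.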

Hard drives are far from dead; they will continue to dominate storage from a capacity perspective, further bolstering the case for architectures that separate performance from capacity.

Emergence of Storage-Class Memory

Readers of my blog already know that I am a fan of Storage Class Memory. Toward the end of 2016, we’ll see the initial emergence of the technology that I expect will succeed flash. Storage-class memory (SCM) will fundamentally change today’s storage industry, just as flash changed the hard drive industry. Intel/Micron dubbed one version 3D XPoint and HP/SanDisk joined forces to create another variant.

SCM is a persistent memory technology. Plugged into memory slots, SCM devices are accessed like memory, but with different performance characteristics. While slower than DRAM, Intel claims that 3D XPoint will be 1,000 times faster than flash and 1,000 times more resilient, with symmetric read/write performance. Uniquely, SCM can be addressed atomically at either byte or block granularity. Although SCM devices can be presented as very fast block storage for compatibility, the bigger disruption will be application access via direct memory-mapped “files”, which will let next-generation applications take advantage of finer-grained persistence algorithms.
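The memory-mapped programming model is worth sketching. The example below uses an ordinary file via mmap as a stand-in; the filename is hypothetical, and on real SCM hardware loads and stores would hit persistent media directly (for example through DAX-mapped files with CPU cache flushes) rather than going through the page cache.

```python
import mmap
import os

PATH = "demo.dat"  # hypothetical file standing in for an SCM region

# Reserve one page of "persistent" space.
with open(PATH, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it into the address space and update five bytes in place:
# a byte-granular store with no block I/O system call in the data path.
with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)
    buf[0:5] = b"hello"
    buf.flush()   # ensure durability (a CPU cache flush on real SCM)
    buf.close()

# The fine-grained update is now persistent and visible to any reader.
with open(PATH, "rb") as f:
    assert f.read(5) == b"hello"

os.remove(PATH)
```

Contrast this with a block device, where updating those same five bytes means reading, modifying and rewriting an entire 4KB block — exactly the overhead that byte-addressable persistence removes.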

SCM will provide unprecedented storage performance, upend the database and file system structures to which we’ve grown accustomed, and advance the trend toward server-side storage processing, thereby transforming everything from storage economics to application design.

As you can see, there is another exciting year in store for our industry. The mobile/cloud era marches inexorably forward, crowning new winners and losers. And the seeds of the next major Data Center disruptions have already been planted and are beginning to germinate.
