Infinio – A Look Under the Hood


I discussed in a previous post that one of the things that excited me about joining Infinio was its innovative, operationally non-disruptive approach to server-side caching; what I’m calling a software-defined storage service. Here, I’m going to take a closer technical look at how Infinio actually works.

Infinio’s initial product is a software-based accelerator, packaged as a virtual appliance, that uses server-side caching to reduce both the I/O load on central storage and latency. Infinio Accelerator lets you separate performance from capacity. For a fraction of the cost of hardware-based products, Accelerator offers a software-only solution that extends the performance life of the storage you already have (initially NAS) through better resource utilization.

Infinio works without any additional hardware – no SSDs required – instead using just a small amount of idle memory from each ESXi server to create a pooled, de-duplicated cache in memory. There’s no data migration; it’s a zero-downtime installation (and uninstall). There are no operational changes to your storage infrastructure.

Infinio is currently packaged as a VSA that serves as a transparent NFS proxy. Let me draw an analogy as to how this works: the atomic insertion and removal point is akin to the crossover point in a vMotion operation, where the live VM instance is stopped on one node and resumed on another. Similarly, the proxy takes on the address of the real storage server and then forwards packets as required, using private network ports and VLAN tagging that were automatically configured during setup, prior to the atomic insertion. Think of this mechanism as an example of Software Defined Networking (SDN) principles in action.
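The proxying half of that mechanism can be sketched as a tiny transparent TCP forwarder. This is purely illustrative: the function names and the absence of any NFS RPC parsing are my simplifications, not Infinio's implementation, which would intercept NFS traffic and answer cache hits locally.

```python
# Minimal sketch of a transparent TCP forwarder (illustrative only;
# Infinio's actual data path, cache lookup, and VLAN handling differ).
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until the source closes its end."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def serve_proxy(listen_addr, backend_addr, ready):
    """Accept one client and splice it to the real storage server."""
    srv = socket.socket()
    srv.bind(listen_addr)
    srv.listen(1)
    ready.set()
    client, _ = srv.accept()
    backend = socket.create_connection(backend_addr)
    # A real accelerator would inspect NFS requests here and serve
    # cache hits from memory instead of forwarding them to the NAS.
    t = threading.Thread(target=pipe, args=(backend, client))
    t.start()
    pipe(client, backend)
    t.join()
```

Because the proxy holds the storage server's address, clients need no reconfiguration; insertion and removal reduce to a single atomic address handoff.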

Infinio’s core engine is a highly scalable, transparent, distributed content-addressable memory cache. Content addressability is something I am quite familiar with, having been, years ago at VMware, the original inventor of what became View Storage Accelerator/CBRC.

Content addressability is a form of virtualization: it abstracts the address space of data elements based on their contents. This is a powerful technology, and Infinio is initially applying it to caching for vSphere workloads. The distributed, content-aware indexing scheme makes optimal use of the available cache memory by keeping only one copy of any given page, regardless of its address, across the cluster. This makes for a uniquely effective cache architecture that has far greater impact than its resources would imply.
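The idea can be illustrated with a toy content-addressed cache. The class and field names here are my own assumptions, not Infinio's data structures: a directory maps each logical block address to a fingerprint of its contents, and the store keeps exactly one copy per fingerprint.

```python
import hashlib

class ContentCache:
    """Toy content-addressed block cache: one stored copy per unique
    block content, however many (vm, block) addresses point at it."""

    def __init__(self):
        self.directory = {}   # (vm_id, block_addr) -> content fingerprint
        self.store = {}       # fingerprint -> block bytes

    def put(self, vm_id, block_addr, data):
        fp = hashlib.sha256(data).hexdigest()
        self.directory[(vm_id, block_addr)] = fp
        self.store.setdefault(fp, data)   # dedupe: store content only once

    def get(self, vm_id, block_addr):
        fp = self.directory.get((vm_id, block_addr))
        return self.store.get(fp) if fp else None

# Ten VMs cache the same guest-OS block at ten different addresses:
cache = ContentCache()
for vm in range(10):
    cache.put(vm, 0x1000 + vm, b"identical OS page contents")
# Result: 10 directory entries, but only 1 copy of the data is held.
```

Ten logically distinct cached blocks consume the memory of one, which is why the effective cache footprint can so far exceed the raw memory contributed.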

Infinio Distributed Cache


With this design, blocks that have the same content but different addresses across the VMs running in a vSphere cluster are captured as a single copy of the data in the distributed cache. The directory structure is a highly efficient, optimized design built on loosely coupled, scale-out principles. Cache bookkeeping is kept to a minimum and happens asynchronously to the I/O path.
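One way such a loosely coupled, scale-out directory can be organized (an illustrative sketch, not Infinio's published design) is to shard fingerprints across hosts by hashing, so any node can compute an entry's owner locally without consulting a central directory server:

```python
import hashlib

def owner_node(fingerprint, nodes):
    """Hash a content fingerprint onto the list of hosts. Every node
    computes the same answer independently, so locating the cache
    segment that owns an entry requires no central coordination."""
    h = int(hashlib.sha256(fingerprint.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

# Hypothetical four-host cluster; host names are made up for the example.
nodes = ["esx01", "esx02", "esx03", "esx04"]
owner = owner_node("sha256-of-some-block", nodes)
```

Simple modulo placement is the crudest form of this; real systems typically use consistent hashing so that adding or removing a host remaps only a fraction of the entries.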

One workload that is particularly ripe for this type of technology is VDI. It’s widely recognized that VDI is a somewhat unique workload: when many instances execute on a single server, it reflects both the workload’s characteristics and Microsoft client OS implementation decisions.

High consolidation ratios are needed to make VDI cost effective, but the load and load variation they impart, coupled with interactive user-experience expectations, can overwhelm the infrastructure and deliver a poor user experience. Hence a technology that addresses this conundrum seamlessly and transparently is very valuable! Enter Infinio.

This distributed content-aware memory cache automatically and dynamically identifies frequently referenced pages across all the VMs running within the VDI cluster and delivers them from its memory cache instead of from storage. This not only offloads the infrastructure but can directly improve user experience by speeding up content delivery of all types, whether that is OS and application executables, application content, or even cached document or web content. This leads to a much better, more predictable user experience and a much less taxed storage infrastructure for your VDI deployment.
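The effect on a read-heavy VDI workload can be sketched with a simple LRU read cache in front of a slow backing store. This is toy code under my own assumptions (Infinio's eviction policy is not described in the post), but it shows why repeated access to a shared working set, such as many desktops booting the same OS image, yields very high hit rates:

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache in front of a slow backing store."""
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing          # addr -> data, standing in for the NAS
        self.lru = OrderedDict()
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.lru:
            self.hits += 1
            self.lru.move_to_end(addr)  # mark as most recently used
            return self.lru[addr]
        self.misses += 1
        data = self.backing[addr]       # fetch from storage on a miss
        self.lru[addr] = data
        if len(self.lru) > self.capacity:
            self.lru.popitem(last=False)  # evict least recently used
        return data

# 100 simulated desktop "boots" repeatedly read the same 10 OS blocks:
nas = {addr: b"block-%d" % addr for addr in range(20)}
cache = ReadCache(capacity=16, backing=nas)
for _ in range(100):
    for addr in range(10):
        cache.read(addr)
hit_rate = cache.hits / (cache.hits + cache.misses)
```

Only the first pass misses; every subsequent read of the shared working set is served from memory, so the hit rate approaches 99% in this toy run.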

The result? Infinio can offload 75% or more of the read requests from your NAS. Each server contributes 8 GB of memory to the aggregated cache and the content-based dedupe mechanism makes that 5x more effective in practice.
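To put numbers on that: the 8 GB per-host contribution and the 5x dedupe multiplier come from the figures above, while the cluster sizes below are just examples.

```python
def effective_cache_gb(num_hosts, contribution_gb=8, dedupe_factor=5):
    """Raw pooled memory times the dedupe multiplier gives the
    effective cache footprint, per the figures quoted in the post."""
    return num_hosts * contribution_gb * dedupe_factor

# An 8-host cluster pools 64 GB of RAM yet behaves like ~320 GB of cache:
print(effective_cache_gb(8))  # -> 320
```

The multiplier is workload-dependent; highly similar VM images (as in VDI) dedupe best.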

Infinio Accelerator Sample Results


All this is accomplished with the resources you already have in your servers – it’s simply more intelligent use of memory. There is no need to buy and migrate to a new storage array architecture or converged-infrastructure server. You don’t have to purchase and install server-side flash/SSD cards. And there is no migration to a new datastore. It’s a strong, cost-effective alternative for meeting your storage performance needs.


This entry was posted in Evolution of Storage Architecture.
