Hyperconverged Infrastructure (HCI) is an architectural approach that combines the compute, storage, and networking resources an application or workload needs into a single system or platform. Instead of using separate hardware components, such as servers, storage systems, and network switches, a hyperconverged infrastructure consolidates these elements into a set of highly integrated servers.
A hyperconverged infrastructure is managed entirely through software. A centralized management layer provides a unified view of all resources and allows them to be administered efficiently. This layer can also include features such as virtualization, data replication, data deduplication and compression, and virtual machine migration.
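To make one of those data services concrete, here is a minimal, illustrative sketch of inline block-level deduplication in Python. The block size and structure names are our own assumptions for the example, not any vendor's actual implementation:

```python
import hashlib

# Minimal sketch of inline block-level deduplication, one of the data
# services a hyperconverged management layer typically provides.
# Block size and structure names are illustrative, not a vendor API.

BLOCK_SIZE = 4096  # bytes per block (illustrative choice)

block_store = {}   # fingerprint -> unique block payload
volume_map = []    # logical volume: ordered list of fingerprints

def write(data: bytes) -> None:
    """Split incoming data into blocks; store only unseen blocks."""
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in block_store:      # new content: keep one copy
            block_store[fp] = block
        volume_map.append(fp)          # the logical view always grows

def read() -> bytes:
    """Rebuild the logical volume from the deduplicated store."""
    return b"".join(block_store[fp] for fp in volume_map)

# Writing the same 4 KiB twice consumes physical space only once.
write(b"A" * BLOCK_SIZE)
write(b"A" * BLOCK_SIZE)
assert read() == b"A" * BLOCK_SIZE * 2
assert len(block_store) == 1  # only one physical copy stored
```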
Many IT managers recommend implementing HCI in their companies for the benefits it brings. However, there is also some confusion about the solution, and it is normal for questions to arise: How does it differ from traditional infrastructure? Is it worth implementing in my company? Here are some of the key concepts for understanding this system.
In the IT world, the term infrastructure refers to the set of hardware and software components that serve as the foundation for the applications companies use to run their business.
When we talk about hyperconverged infrastructure, the term “infrastructure” refers to the minimum components typically needed to run an application. For example, for a browser to work, the computer must have a processor, memory, and enough storage. In a data center, we find the same components on a much larger scale.
In "traditional" infrastructure, the compute or processor and memory are provided by physical servers, the storage by specific storage components (SAN arrays) and, additionally, a component (SAN switch) is needed to connect the storage with the servers and allow the data flow; this occurs when we have two or more physical servers, which also need to access a shared storage. The term SAN comes from storage terminology, where there are other types such as NAS and DAS.
Thus, we have three main components: servers, SAN switches, and SAN arrays, which together form a basic infrastructure unit for running applications. To administer and manage these components properly, we need very specific knowledge of each of their technologies, as well as dedicated management tools for each of them. That means technical staff whose knowledge covers every product in our "traditional solution", which makes the day-to-day operation of the infrastructure more expensive, both in the number of technicians needed and in the training and continual updating they require.
Another important aspect of traditional infrastructures is their single points of failure, which force us to duplicate physical components and add software, further increasing the complexity of managing the environment, on top of the extra cost of these additions.
To alleviate this situation, converged infrastructures appeared first: a single manufacturer offered the same components as before, but pre-integrated and centrally managed, simplifying the administration, management, and operation of the infrastructure while keeping all the physical components of a traditional solution.
However, this model of separate components could no longer meet the needs of large cloud environments in terms of simplicity of management and cost efficiency. Thus, in 2012, hyperconverged infrastructure (HCI) emerged.
If a physical server can host its own storage (DAS), why not use software to bring together the storage of several servers into a single storage entity and present it, in a comprehensive and unified way, like a SAN array?
This is precisely what hyperconvergence is: joining several physical servers that already provided compute and memory, having them provide storage as well, and adding a software layer that presents all the disks of all the servers as a single storage entity.
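As a rough sketch of that idea, with hypothetical node names and disk sizes (real products such as Nutanix AOS or VMware vSAN also handle replication, failure domains, and rebalancing), the software layer essentially does the following:

```python
# Minimal sketch of the hyperconverged storage idea: the local (DAS)
# capacity of several nodes is aggregated by software into one logical
# pool, which is then carved into volumes for virtual machines.
# Node names and sizes below are hypothetical.

class Node:
    def __init__(self, name: str, disks_gb: list[int]):
        self.name = name
        self.disks_gb = disks_gb  # capacities of the local drives

class StoragePool:
    """Presents the sum of all nodes' local disks as one entity."""
    def __init__(self, nodes: list[Node]):
        self.nodes = nodes
        self.allocated_gb = 0

    @property
    def capacity_gb(self) -> int:
        return sum(sum(n.disks_gb) for n in self.nodes)

    def create_volume(self, size_gb: int) -> str:
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        return f"vol-{self.allocated_gb}"  # opaque handle for a VM

pool = StoragePool([
    Node("node-1", [960, 960]),  # each node contributes its DAS drives
    Node("node-2", [960, 960]),
    Node("node-3", [960, 960]),
])
print(pool.capacity_gb)          # 5760 GB seen as a single entity
vol = pool.create_volume(500)    # consumers never see individual disks
```

The key point is the last line: virtual machines never address an individual disk, only the single logical entity, which is exactly what lets HCI dispense with the dedicated SAN array.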
Before hyperconvergence, and because many data centers contained multiple storage systems, Software-Defined Storage (SDS) was developed, where software running on a central storage node could group several physical arrays into one logical entity. In reality, however, it did not simplify the data center's infrastructure; it added even more complexity, and cost.
Another factor favoring hyperconvergence is that very high-speed Ethernet connections (10 Gb/s or more) have become common and cheap enough that data movement between the disks of a hyperconverged system is fast enough to compete with, and even outperform, a traditional SAN.
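As a back-of-the-envelope illustration (nominal line rates only, ignoring encoding and protocol overhead), here is how common Ethernet speeds compare with typical Fibre Channel SAN links:

```python
# Nominal link rates compared, showing why commodity 10/25 GbE can
# carry inter-node storage traffic that once required a dedicated SAN.
# Encoding and protocol overhead are deliberately ignored here.

links_gbps = {
    "8G Fibre Channel (SAN)": 8,
    "16G Fibre Channel (SAN)": 16,
    "10 GbE (HCI inter-node)": 10,
    "25 GbE (HCI inter-node)": 25,
}

for link, gbps in links_gbps.items():
    gbytes_per_s = gbps / 8  # 8 bits per byte
    print(f"{link}: ~{gbytes_per_s:.2f} GB/s nominal")
```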
With hyperconverged systems we have eliminated two components that introduced complexity and cost: SAN switches and SAN arrays. However, a new component to manage appears: hyperconverged storage.
Since the servers now provide all the necessary drives, some manufacturers have built a management system that covers both servers and storage, and even the server virtualization layer, as Nutanix does with its AOS hyperconverged software; others have integrated the hyperconverged layer into their existing server virtualization system, as VMware does with its vSAN hyperconverged software.
We hope this has cleared up any doubts about this system. If you have additional questions or are considering implementing a hyperconverged infrastructure in your company, Serban Group, together with Nutanix, offers this demo where you can try the Nutanix software, launch an HCI platform, and see for yourself all the benefits it can offer.