Evolving Virtualization Infrastructure and Memory Storage
Today, we are excited to welcome back Duncan Epping, Chief Technologist, Cloud Infrastructure at VMware. Duncan is a veteran of cloud technology, and his work helps individuals and organizations become more innovative.
In this interview episode, we discussed accelerating virtualized applications, software-defined memory implementation, cloud storage optimization platforms and much more.
Watch & Listen to the full interview episode!
Duncan Epping
Chief Technologist, Cloud Infrastructure at VMware
Duncan, you are a prominent advocate of VM technologies in tech communities. I wondered about VMware User Groups. How has your participation in the communities changed due to the pandemic restrictions?
We realized we were not going back to physical events soon, so we created virtual roadshows. We took a look at live user community events and their typical structure of several hour-long sessions: a keynote and a few breakout sessions. We then changed the format for our virtual events, creating shorter sessions, about 30 minutes long, and adding live interaction with the audience, encouraging participants to ask questions, either in the chat or the Q&A window. And that worked like magic, because people started typing questions and engaging in the conversation. That format landed really well, and we’ve had over 90 events worldwide, most of them in Europe, of course.
You have recently launched a new podcast series called Unexplored Territory. Who do you plan to invite to the podcast, and what will you build your discussions around?
We want to make sure that we have people talking about VMware’s vision and the strategic direction of the industry itself. We’re inviting guests who have an exciting topic to discuss, like Kit Colbert, VMware’s new CTO.
I read the news from VMware and noticed a post about a new concept called the Federated Storage Platform. Could you briefly explain the main benefits of Federated Storage?
At VMware, we have a management solution called vCenter Server that manages hosts and can manage vSAN as well as many other storage vendor platforms, like Dell EMC, Pure Storage, HP, or IBM. We are now looking into creating a control plane capable of managing all of these storage platforms across multiple vCenter Server instances.
Our large enterprise customers often need to manage vSAN clusters and their other storage systems separately. So when they place a virtual machine, or create a new cloud-native application and want to store its data, they have to make a decision about where to place the data and where to place the virtual machine.
The Federated Storage Platform aims to help provision virtual machines or cloud-native applications by connecting all storage components across the datacenter. The platform looks at the available storage platforms and tells you precisely which storage system an application should land on. It provides the backend storage connection across vCenter instances and saves you the hassle of running around with cables and plugging everything into everything.
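To make the placement idea more concrete, here is a purely illustrative Python sketch, not VMware’s implementation: the class, field names, and policy labels are invented. It shows how a federated control plane might pick a datastore across multiple vCenter instances based on free capacity and a required storage policy.

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    vcenter: str    # which vCenter instance manages this datastore (hypothetical)
    name: str       # datastore name, e.g. a vSAN cluster or an external array
    free_gb: int    # free capacity in GB
    policies: set   # storage policies this datastore can satisfy

def pick_datastore(datastores, required_gb, required_policy):
    """Return the datastore with the most free space that satisfies
    the requested capacity and storage policy, across all vCenters."""
    candidates = [
        ds for ds in datastores
        if ds.free_gb >= required_gb and required_policy in ds.policies
    ]
    if not candidates:
        raise RuntimeError("no datastore satisfies the request")
    return max(candidates, key=lambda ds: ds.free_gb)

# Hypothetical inventory spanning two vCenter instances.
inventory = [
    Datastore("vcenter-01", "vsan-cluster-a", 2048, {"raid1", "raid5"}),
    Datastore("vcenter-02", "pure-array-01", 8192, {"raid1"}),
]

print(pick_datastore(inventory, required_gb=500, required_policy="raid1").name)
```

The point of the sketch is simply that the placement decision moves out of the administrator’s head and into a control plane with a global view of every storage system.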
Another project I noticed is called “Project Monterey”. Could you explain why this is an important innovation and the problems it solves?
In the VMware stack, we have compute, networking, and security components, with a storage component on top of that. Project Monterey allows you to offload some of these components to a SmartNIC (Ed.: a network interface card). The project leaves you more cycles for your traditional or cloud-native applications that may run on the x86 platform.
VMware has partnered with many companies to build new products and tech solutions. What are the benefits for partners of working with VMware?
At VMware, we have many partnerships with various companies. We recently announced a partnership with Nvidia to develop the AI Ready Platform, which enables customers to deploy a solution that has been certified by both Nvidia and VMware and comes with the components needed to bring AI/ML solutions into production extremely fast.
We tend to align with partners, figure out their roadmap in building a product, and then define how we can enable them to sell more of their solutions on top of our platform.
With Project Capitola, I see VMware is targeting the problem of efficient memory use in vSphere installations using the concept of Software-Defined Memory. Can you briefly describe what VMware is looking to achieve with this new project?
The number one challenge for enterprise organizations in virtualization and compute platforms is finding a way to have more memory at a lower cost. Some see a solution in adding more memory modules to the host, though this isn’t as easy as it may sound: you need to have DIMM slots available (and most of us are limited in the number of DIMM slots on the motherboard), and it comes at a pretty high cost because you’re essentially throwing away the old memory modules and buying new ones.
With Project Capitola, we are researching a form of memory tiering. The plan is to have multiple types of memory within your system that all look the same to the application, while the platform schedules access to that memory in a way that is optimal for performance and most efficient from a cost perspective. Project Capitola and Project Monterey could potentially be used together by organizations trying to lower the total cost of ownership of their cloud platform.
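As a rough illustration only (this is not Project Capitola’s actual mechanism; the tier names and the access threshold are assumptions), a tiering policy can be thought of as keeping frequently accessed pages in fast, expensive DRAM while moving colder pages to a slower, cheaper tier:

```python
# Minimal sketch of a memory-tiering placement heuristic (hypothetical values).
# DRAM is fast and expensive; the second tier (for example, a slower but
# cheaper memory technology) costs less per GB.

HOT_ACCESS_THRESHOLD = 100  # accesses per sampling interval (arbitrary choice)

def choose_tier(accesses_per_interval: int) -> str:
    """Place hot pages in DRAM and cold pages in the cheaper tier."""
    return "dram" if accesses_per_interval >= HOT_ACCESS_THRESHOLD else "slow_tier"

# The application sees one flat address space; the platform decides where
# each page physically lives based on observed access frequency.
for page, accesses in {"page-1": 500, "page-2": 3}.items():
    print(page, "->", choose_tier(accesses))
```

The sketch captures the core trade-off Duncan describes: the application is unaware of the tiers, and the platform quietly balances performance against the cost per gigabyte.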