Virtualization, Containers, and Hyperconverged Infrastructure
Recently, The Prime View had the privilege of sitting down with Duncan Epping, Chief Technologist in the Office of the CTO, Cloud Platform at VMware. Duncan pushes the envelope of technology innovation and is the driving force behind the next generation of cloud technology, virtualization, and hyperconverged infrastructure solutions at VMware.
In the interview we talked about the future trends in hyper-converged infrastructure (HCI), the benefits of the VMware Kubernetes cluster solution, how and why VMware selects acquisition targets, and more.
Listen & Watch the Full Interview Episode.
Read the interview extract with the key points from the conversation below.
Duncan, you’ve been working with many technology products. What is the most exciting technology for you personally? And where do you see the most significant potential?
A lot of exciting things are happening in the IT space right now. We’ve been talking about Kubernetes and containers for a while. Now, many enterprise organizations are actively looking into how they can move from traditional platforms to more cloud-native platforms. And that comes with challenges, because many of their legacy applications were never built for a cloud-native environment. As a result, they need to refactor a lot of those applications.
That’s where the complexity comes into play. The chances of having the expertise in-house are slim: people who fully understand how that whole stack works and why it works in a particular way. Everyone is figuring out: How can we move these applications to a newer platform? What are the benefits of a newer platform? Should we even be moving them? And if they move, what would that bring to the company? That is one interesting trend I’ve noticed.
The other trend is the AI/ML space, which is interesting from a technology point of view. In the past 4-5 years, VMware has focused a lot not only on cloud-native workloads but also on AI/ML and how we can enable those types of workloads on top of our platform, while partnering with larger vendors that provide technology catering to these workloads.
For me, AI/ML is interesting mainly because of my storage background. We will end up running these algorithms against the data sets. Now, the data sets need to be stored somewhere, and they need to move from storage into memory. People will occasionally hit the challenge of having to move these massive data sets from A to B. The available infrastructure may not be suitable for running those types of workloads or moving those massive amounts of data around.
VMware offers various products and solutions for running applications, like Tanzu Services, VMware Cloud, Hybrid Cloud, and hyperconverged infrastructure. How do you advise your potential clients on the best option for them?
I work as a technologist, which means I have no sales quota, but whenever I talk to a customer, I would of course prefer them to use VMware technology. At VMware, we ask many questions to understand our customers: Do they want to have that equipment on-premises? Are they looking into adopting several public clouds? If they are moving to the public cloud, could they do a combination of a public cloud and a private cloud? Or could they do a combination of a native public cloud, for instance native AWS, and VMware Cloud on AWS, which sits in the same data center?
So, the most important thing is always to have a discussion with the customer first, to figure out what they are doing and which direction they want to move in. Only then can you create a roadmap from a business perspective and figure out how to align it with different technologies to ensure they can meet their business goals.
Managed Kubernetes clusters have been a hot topic in the last few years, and all major clouds now offer such capabilities. What is the benefit of the VMware Kubernetes cluster solution?
The most important aspect of the solution that we offer is simplicity. Anyone who has done anything with Kubernetes, or containers in general, probably figured out within the first few minutes that it’s exceptionally complex not just to install and configure, but more importantly, to manage. And that’s when our solution comes into play.
Our solution is best for customers who want to run on-premises or in a private data center, combine on-premises with public cloud environments, or even span multiple public cloud offerings. From day one we have focused on the ability for customers not only to run workloads within their own data center but also to manage clusters in the public cloud, creating a hybrid cloud experience where applications can be cross-connected between those clouds or move between them. We take care of installation, configuration, upgrades, and updates. Those things are essential for customers, and that’s truly the strength of our platform.
Our offering runs on top of our hypervisor, on top of our virtualization platform, and is fully integrated. You can run new types of workloads on our platform while your legacy workloads sit right next to them; you don’t need to create an entirely separate environment dedicated to cloud-native apps. You can share the platform across those different applications, which makes it easy to deploy, easy to manage, and efficient from a cost perspective. And because we do this across clouds, it also provides a lot of flexibility.
Networking and storage virtualization are parts of hyperconverged infrastructure. As an expert in this area, what future trends do you see there?
That’s a fascinating conversation to have right now, mainly because we see new technologies being implemented. For instance, when we discussed networking speed with our customers, one of the challenges they had was the price of these higher-speed network configurations. But now the cost has significantly decreased, and many customers are deploying configurations with 25 GbE NICs, or 40, 50, even 100 GbE; many are leveraging RDMA, remote direct memory access technology, which gives a lower-latency connection between hosts and allows us to move data around faster. That will open up a lot more opportunities from a storage point of view. It will also allow us to use newer, faster, more efficient storage devices. So far, we had high-speed storage devices, but the network wasn’t fast enough; now that the network has become faster, we can use those storage devices more efficiently and start reaching their actual performance levels.
For a long time the focus was on the storage platform and on core data services like replication, stretched clustering, deduplication, and compression. Now, however, more and more platforms are starting to deliver additional types of data services. Our solution is called VMware vSAN, and with vSAN we have the Data Persistence platform, a framework created for partners. It allows our partners to run their applications directly on top of vSAN in a fully automated fashion.
For instance, if you’d like to deploy an S3 object storage-based solution on top of the platform, we have a plugin for that. Typically, if you want to deploy a certain type of S3 solution, you need to go to the website, download the solution, install it, configure it, figure out whether it works, and then, when you need to update, go through the whole process again. But now, by working closely with partners and providing them the framework, they can offer a solution that runs directly on top of our platform and is easy to install, easy to update, and easy to manage.
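For readers unfamiliar with the S3 model Duncan mentions, the idea is simply buckets holding keyed objects behind a uniform put/get/list API. A minimal toy sketch in Python of those semantics (illustrative only; the class and method names are hypothetical and are not VMware’s or any partner’s API):

```python
# Toy in-memory sketch of the S3 object-storage model: named buckets
# holding byte objects under string keys. Purely illustrative.

class ToyObjectStore:
    """Minimal bucket/key object store mimicking S3's put/get/list semantics."""

    def __init__(self):
        self._buckets = {}  # bucket name -> {key: bytes}

    def create_bucket(self, name):
        self._buckets.setdefault(name, {})

    def put_object(self, bucket, key, data):
        if bucket not in self._buckets:
            raise KeyError(f"no such bucket: {bucket}")
        self._buckets[bucket][key] = data

    def get_object(self, bucket, key):
        return self._buckets[bucket][key]

    def list_objects(self, bucket, prefix=""):
        # S3-style listing: keys filtered by prefix, returned in sorted order.
        return sorted(k for k in self._buckets[bucket] if k.startswith(prefix))


store = ToyObjectStore()
store.create_bucket("backups")
store.put_object("backups", "vm01/disk.vmdk", b"...")
store.put_object("backups", "vm02/disk.vmdk", b"...")
print(store.list_objects("backups", prefix="vm01/"))  # ['vm01/disk.vmdk']
```

Real S3-compatible stores expose the same bucket/key/prefix model over HTTP, which is why applications written against the S3 API can target any of them.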
VMware runs an impressive, diversified portfolio of products and many of them are the result of past acquisitions. How does the company select potential acquisition targets?
Not too long ago, we acquired a company called Datrium, which is now part of our business unit. One of the reasons we acquired that company was that we noticed many customers were looking for a solution where they could easily replicate data to a cheap cloud storage target and then recover from that replicated data. And this is not an easy problem to solve.
So, before deciding on an acquisition target, we typically evaluate whether it’s a solution we could develop ourselves, the number of hours involved, the size of the project, and the cost involved in a project like that. Depending on the cost, the amount of time we would need to complete the project, and whether there is an alternative way to get that solution on board without multiple years of development, especially when there’s a need to go to market fast, that’s where an acquisition comes into play.
When we acquired Datrium, we literally had a beta running within a few months. A couple of months later, we had a GA release. Integration is something we do exceptionally well, and the whole due-diligence process is something VMware is good at. A lot of the products we have available right now came in through acquisition, but for many of them you wouldn’t even notice anymore. In some cases the logo gives a hint that the product was acquired, but in most cases you won’t see it.
VMware is a global company with diversified teams working across the globe. How is the development work on diversified product lines organized? Do your distributed offices “own” individual products, or do global, distributed teams work across the portfolio?
We have offices all over the world, but in the majority of cases those are sales offices. If you look at our engineering offices, we try to make sure that people working on a particular feature, feature set, or product are at least in the same time zone.
For a huge product like vSphere, we can’t have all of the engineers sitting in the same building or the same area. In that particular case, they are scattered across the globe: in the US, China, and Bulgaria. Still, teams working on the same feature are typically located next to each other, because it’s easier to work that way.
Many big cloud providers are building certification and partnership programs to bring top talent on board. What is VMware doing to attract talented tech community leaders?
Attracting top talent is a tricky thing, especially in Palo Alto. But I know for sure that one of the reasons people join VMware is the company’s culture. Diversity and inclusion are things we focus on, and we aim to hire more people from diverse backgrounds. More importantly, we try to include everyone in the conversation. You can hire all the people you want, but if you don’t include them in certain parts of the process, you’re still missing out on the benefit of having hired a diverse range of people. So having that as part of the culture is extremely important.
Duncan, thank you for the in-depth conversation and for pursuing solutions to some of the most intriguing problems in cloud technology.
Stay tuned for more great interviews coming your way!