What is edge computing?


What is edge computing? We define edge computing as placing workloads as close as possible to the edge: to where data is being created and where actions are being taken. So, let's think about that for a moment.
Where does data come from? We often think of data as living in the cloud, where we run the analytics and AI that process it, but that is not where the data was originally created. Data is created by us as human beings, in our world, in the environments where we operate and do our work. It comes from our interactions with the equipment we use as we perform various tasks, and it comes from the equipment itself, produced as a byproduct of our use of it.

So let's take this a little further. If we want to make use of the edge and place workloads there, we have to start by thinking about what data ends up coming back to the cloud. And when we talk about clouds, let's include both private and public clouds without distinguishing them, because where we put that data, and where we process it for things like aggregate analytics and trend analysis, is still likely to be the cloud, the hybrid cloud.

Now, it turns out the network providers are also looking at the facilities they provide and at how they can bring workloads into the network itself. We can label that the network edge; that is how they refer to it themselves, and you will often hear network providers use the term edge to mean their own network. 5G opens up the opportunity to communicate into the premises where work is performed: onto the factory floor, into distribution centers, warehouses, retail stores, banks, hotels, you name it. There is an opportunity to introduce compute capacity into those environments and to communicate with it over 5G networks.
Now, there are two kinds of edge computing capabilities that we often find in these environments. One is what we call an edge server. An edge server is basically a piece of IT equipment. It could be a half rack of four or eight blades, or it could be an industrial PC, but it is equipment built for the purpose of running IT workloads. The other place where we can perform work at the edge, in on-premises locations, is what we think of as edge devices. An edge device is interesting because it is, first and foremost, a piece of equipment that was built for some other purpose. It could be an assembly machine, a turbine engine, a robot, or a car. These things were built first and foremost to perform those functions; they just happen to have compute capacity on them. In fact, over the last few years many of the pieces of equipment we used to refer to as IoT devices have grown up, gaining more and more compute capacity. Take a car, for example: the average car today has around 50 CPUs on it. Almost all new industrial equipment has compute capacity built in. And these computers are being opened up. They often run Linux, and they can host containerized workloads, which means they become places where we can do work that we could not do before.

Let's say, for example, you have a video camera built into an assembly machine that is making parts, maybe metal boxes of some sort. You can put an analytic on that camera that watches the quality of what the machine is producing, as sketched below.
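To make that kind of edge analytic concrete, here is a minimal sketch in Python using OpenCV. The camera index, the dark-pixel threshold, and the idea of flagging a frame as a possible defect are hypothetical placeholders for illustration, not anything prescribed by a particular product; a real inspection analytic would use a trained vision model.

```python
# Minimal sketch of a quality-inspection analytic running next to the camera
# on an edge device. The threshold-based "defect" check is a hypothetical
# stand-in for a real vision model.
import cv2

CAMERA_INDEX = 0          # hypothetical: the assembly machine's built-in camera
DARK_RATIO_LIMIT = 0.30   # hypothetical: flag frames where >30% of pixels are dark

def frame_looks_defective(frame) -> bool:
    """Very rough check: too many dark pixels might mean a malformed part."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dark_pixels = (gray < 60).sum()
    return dark_pixels / gray.size > DARK_RATIO_LIMIT

def main() -> None:
    capture = cv2.VideoCapture(CAMERA_INDEX)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if frame_looks_defective(frame):
                # In practice this would raise an alert or stop the line;
                # here we just log locally, keeping the raw video at the edge.
                print("possible defect detected")
    finally:
        capture.release()

if __name__ == "__main__":
    main()
```

The point of the sketch is where the work happens: the frames are analyzed on the device itself, and only the result needs to leave the edge.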
Now, it is very common that a lot of these operating environments also have edge servers. Remember, these are pieces of IT equipment, so it might be a half rack sitting on a factory floor that today is used to model the production processes, or to monitor production optimization: whether production is running as efficiently, and with as much yield, as we want. The same thing may occur in a distribution center, managing all of the conveyor belts, the stackers, the sorters, and everything else used in a distribution center. So these are places where work can be performed.

Edge servers, being pieces of IT equipment, are often much bigger than edge devices. So it is common that a containerized workload we are managing onto one of these edge devices will run on a plain Docker runtime, without the benefit that Kubernetes brings to the table. On an edge server, by contrast, not only do we have the capacity to run Kubernetes, but more importantly we have the need: the need for elastic scale, high availability, and continuous availability from the workloads deployed there, because frankly those workloads are being used on behalf of many of these edge devices.
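To make that contrast concrete, here is a minimal sketch in Python using the `docker` and `kubernetes` client libraries. The registry paths, hostnames, and namespace are hypothetical placeholders, and the exposed Docker API endpoint is assumed purely for illustration; a real setup would secure it.

```python
# Sketch: the same style of containerized workload managed two ways.
import docker
from kubernetes import client, config

# 1) On a constrained edge device: a plain Docker runtime, no Kubernetes.
#    (Assumes the device's Docker API is reachable; real setups would use TLS.)
device = docker.DockerClient(base_url="tcp://assembly-machine-07.local:2375")
device.containers.run(
    image="registry.example.com/vision/quality-inspection:1.4",  # hypothetical image
    detach=True,
    restart_policy={"Name": "always"},  # keep the analytic running across reboots
)

# 2) On an edge server: a Kubernetes Deployment, giving elastic scale and
#    availability for a workload that serves many edge devices.
config.load_kube_config()  # assumes a kubeconfig for the edge server's cluster
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="line-monitor"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # more than one replica for availability
        selector=client.V1LabelSelector(match_labels={"app": "line-monitor"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "line-monitor"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="line-monitor",
                        image="registry.example.com/factory/line-monitor:2.0",  # hypothetical
                    )
                ]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="factory-floor", body=deployment)
```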
So, with that in mind, we can start to think about what happens in these environments and how we manage them. How do we make sure the right workloads are placed in the right place at the right time?

First of all, think about what we have done in the cloud. We know that in the cloud it is important to build workloads as containers. That is something we developed for scaling, efficiency, and consistency, and almost all of the public cloud providers, and certainly most of the private cloud suppliers, now enable it with Kubernetes running in the cloud. We can take that same concept and use it to package workloads and manage their distribution out into these edge computing scenarios. Secondly, because these workloads are often built for hybrid cloud scenarios where we already have hybrid cloud management, we can reuse those concepts as a technique for handling the distribution of those containers into these edge locations.

But there are several problems. One of them is simply volume: the sheer number of devices out there. We estimate there are about 15 billion edge devices in the marketplace today, that this grows to about 55 billion by 2022, and some estimates say it will reach about 150 billion by 2025. If that is true, it means every enterprise will have literally tens of thousands, hundreds of thousands, maybe even millions of devices to manage from their central operations. We need management techniques that can distribute workloads into these places at massive scale, without requiring individual administrators to go out and assign those workloads to individual devices; a sketch of what that kind of policy-driven placement might look like follows.
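Here is a minimal, hypothetical sketch of that idea in Python: workloads declare constraints, edge nodes advertise properties, and placement is computed automatically rather than assigned device by device. None of the names or fields come from a specific product; they are placeholders to illustrate the pattern.

```python
# Hypothetical sketch of policy-driven placement: instead of an administrator
# assigning workloads to individual devices, each workload declares constraints
# and each edge node advertises properties; matching happens automatically.
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    name: str
    properties: dict = field(default_factory=dict)   # e.g. {"arch": "arm64", "camera": True}

@dataclass
class Workload:
    image: str
    constraints: dict = field(default_factory=dict)  # properties a node must satisfy

def matches(node: EdgeNode, workload: Workload) -> bool:
    """A node is eligible if it satisfies every constraint the workload declares."""
    return all(node.properties.get(k) == v for k, v in workload.constraints.items())

def plan_placements(nodes, workloads):
    """Return (node, workload) pairs; in a real system an agent on each node
    would pull its assignments rather than being configured by hand."""
    return [(n.name, w.image) for w in workloads for n in nodes if matches(n, w)]

# Toy usage, with a handful of nodes standing in for tens of thousands.
nodes = [
    EdgeNode("assembly-machine-07", {"arch": "arm64", "camera": True}),
    EdgeNode("factory-edge-server-1", {"arch": "amd64", "kubernetes": True}),
]
workloads = [Workload("vision/quality-inspection:1.4", {"camera": True})]
print(plan_placements(nodes, workloads))
```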
We also have an issue of diversity. These devices come in many different forms. They have different purposes and different utility, and they make different assumptions about their footprint, about what operating system they run, and about what kinds of work they will perform.

Finally, there is the issue of security. These devices out at the edge exist outside the boundaries of the IT data center. They do not have the protections we typically associate with hybrid cloud environments: the physical protection, the uniformity, the consistency we usually look for when we certify security in those environments. We have to think about how we ensure that workloads do not get tampered with when they are deployed out to these systems, and how we make sure that if the machine itself is tampered with, that is something we can detect, respond to, and remediate. We also have to make sure the data associated with these workloads is properly protected, not only because that data may be moved back through the network and into the cloud, but because the movement itself is a point of vulnerability. If we can move the workloads to the edge and avoid having to move sensitive data back to other locations, we have actually reduced the potential for attacks on that data. A small sketch of one piece of this, checking that a workload artifact has not been tampered with before it runs, follows.
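As a minimal, hypothetical illustration of one such tamper check, the Python sketch below compares the SHA-256 digest of a downloaded workload artifact against the digest published by central operations. The file path and digest value are placeholders; real deployments would more likely rely on signed container images and registry-level verification, so this only illustrates the idea.

```python
# Hypothetical sketch: before an edge node runs a workload it downloaded, it
# checks the artifact's SHA-256 digest against the digest published by central
# operations, refusing to run anything that changed in transit.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_workload(artifact: Path, expected_digest: str) -> bool:
    """Return True only if the artifact matches the digest we were told to expect."""
    return sha256_of(artifact) == expected_digest

# Example usage with placeholder values.
artifact = Path("/var/edge/workloads/quality-inspection-1.4.tar")
expected = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
if artifact.exists() and verify_workload(artifact, expected):
    print("digest verified; safe to run")
else:
    print("digest mismatch or missing artifact; refusing to run")
```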

So, all of these things together will, on the one hand, inhibit the use of edge computing, and on the other hand become an opportunity: an opportunity for vendors to introduce management controls that can handle that diversity and dynamism, protect data in the right places at the right time, and, finally, build an ecosystem, which of course is just as important as everything else.

To wrap this all up, it is important to acknowledge that the edge computing world is growing, and growing rapidly. It will have as much impact on enterprise computing as mobile phones had on consumer computing. If you think about the changes that have occurred over the last 10 years as a consequence of mobile phones, you are going to see as much change in enterprise computing as a consequence of edge computing. So this is a world that is growing, a world with a lot of interesting complexity, but one where, if we can solve these issues, there is an enormous amount of value for our customers. Thank you for watching this video on edge computing. If you liked it, please like and subscribe, and we will bring you more.
