Explaining Edge Computing

Welcome to another video from ExplainingComputers.com. This time I’m going to talk about edge computing. This places networked computing resources as close as possible to where data is created. As we’ll see, edge computing is associated
with the Internet of Things, with mesh networks, and with the application of small computing devices like these. So, let’s go and delve more deeply into
computing on the network edge. To understand edge computing we need to reflect
on the rise of the cloud. In recent years, cloud computing has been
one of the biggest digital trends, and involves the delivery of computing resources over the Internet. In the early days, most of the devices that
accessed cloud services were PCs and other end-user hardware. But increasingly, devices accessing cloud services are also Internet of Things, or IoT, appliances that transmit data for analysis online. Connecting cameras and other sensors to the
Internet facilitates the creation of smart factories and smart homes. However, transmitting an ever-increasing volume of data for remote, centralized processing is becoming problematic. Not least, transmitting video from online cameras to cloud-based vision recognition services can overload available network capacity and result in slow response times. And this is the reason for the rise of edge
computing. Edge computing allows devices that would have relied on the cloud to process some of their own data. So, for example, a networked camera may perform
local vision recognition. This can improve latency — or the time taken to generate a response from a data input — as well as reduce the cost of, and the need for, mass data transmission.
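As a back-of-the-envelope illustration of that latency trade-off, consider the sketch below. The timings are invented for the example; real figures depend on the hardware and the network.

```python
# Toy illustration of why round trips matter: compare local inference time
# against faster data-center inference plus a network round trip.
# All numbers are invented for illustration only.

EDGE_INFERENCE_MS = 40        # run the model on the device itself
CLOUD_INFERENCE_MS = 10       # faster data-center hardware...
NETWORK_ROUND_TRIP_MS = 120   # ...but the data must travel there and back

edge_latency = EDGE_INFERENCE_MS
cloud_latency = CLOUD_INFERENCE_MS + NETWORK_ROUND_TRIP_MS
print(edge_latency, cloud_latency)  # 40 130: the edge wins despite slower compute
```

Even when the remote server computes faster, the journey to it and back can dominate the response time, which is exactly the gap edge computing closes.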
Staying with our previous example, let’s consider more deeply the application of artificial neural networks for vision recognition. Today, Amazon, Google, IBM and Microsoft all
offer cloud vision recognition services that can receive a still image or video feed and
return a cognitive response. These cloud AI services rely on neural networks that have been pre-trained on data center servers. When an input is received, they perform inference — again on a cloud data center server — to determine what the camera is looking at. Alternatively, in an edge computing scenario, a neural network is usually still trained on a data center server, as training requires a lot of computational power. So, for example, a neural network for use in a factory may be shown images of correctly produced and then defective products so that it can learn to distinguish between the two. But once training is complete, a copy of the neural network is deployed to a networked camera connected to edge computing hardware. This allows it to identify defective products without transmitting any video over the network. Latency is therefore improved, and the demands on the network are decreased, as data only has to be reported back when defective products are identified.
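To make the pattern concrete, here is a minimal Python sketch of edge-side inference. It is illustrative only: classify() stands in for a real pre-trained neural network, frames are stubbed as plain numbers, and report_defect() represents the only message that ever crosses the network.

```python
# Minimal sketch of edge inference (illustrative only): a pre-trained model
# runs locally on each camera frame, and only defect events are reported
# over the network -- raw video never leaves the device.

def classify(frame):
    """Stand-in for local neural-network inference.

    A real deployment would load a model trained in the data center and
    run it on the frame; here a frame value above 0.5 is treated as a
    defective product so the control flow is runnable without hardware.
    """
    return "defect" if frame > 0.5 else "ok"

def report_defect(frame_id):
    """Stand-in for the only upstream network traffic: a tiny defect report."""
    return {"event": "defect", "frame": frame_id}

def process_stream(frames):
    """Run inference at the edge and collect only the defect reports."""
    reports = []
    for frame_id, frame in enumerate(frames):
        if classify(frame) == "defect":              # local inference
            reports.append(report_defect(frame_id))  # small message, not video
    return reports

reports = process_stream([0.1, 0.9, 0.3, 0.7])
print(reports)  # two defect events; the other frames generate no traffic at all
```

In a real deployment the model would come from data-center training and the reports would go to a message broker or cloud endpoint, but the essence of the pattern is the control flow: infer locally, transmit only when something needs reporting.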
This scenario of training a neural network centrally and deploying copies for execution at the edge has amazing potential. Here I’ve indicated how it could be used in vision recognition, but the same concept is equally applicable to the edge processing of audio and sensor data, and to the local control of robots and other cyber-physical systems. In fact, edge hardware can be useful in any
scenario where the roll-out of local computing power at the extremities of a network can
reduce reliance on the cloud. One of the challenges of both the Internet
of Things, and of edge computing, is providing an adequate network connection to a vast number
of cameras, sensors and other devices. Today, the majority of devices connected wirelessly to a local network communicate directly with a WiFi router. However, an alternative model is to create
a mesh network in which all individual nodes dynamically interconnect on an ad-hoc basis to facilitate data exchange. Consider, for example, the placement of moisture and temperature sensors in a large industrial greenhouse. If all of these devices had to have direct wired or wireless connectivity, a lot of infrastructure would need to be put in place. But if the sensors can be connected to edge computing devices that can establish a mesh network, then only one wired or wireless connection to the local network may be required.
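As a rough illustration of why a mesh helps, the sketch below uses an invented four-node topology and finds the hop-by-hop path a sensor’s reading would take to the single gateway that holds the outside connection. Real mesh protocols such as Zigbee handle routing dynamically; this shows only the idea in miniature.

```python
# Illustrative mesh-routing sketch (not a real mesh protocol): sensor nodes
# relay readings hop by hop towards one gateway, so only the gateway needs
# a wired or WiFi connection to the wider network.
from collections import deque

mesh = {                      # each node lists the neighbours it can reach
    "sensor_a": ["sensor_b"],
    "sensor_b": ["sensor_a", "sensor_c", "gateway"],
    "sensor_c": ["sensor_b"],
    "gateway":  ["sensor_b"],
}

def route_to_gateway(mesh, start):
    """Breadth-first search for a hop path from a sensor to the gateway."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == "gateway":
            return path
        for neighbour in mesh[path[-1]]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route exists

print(route_to_gateway(mesh, "sensor_c"))  # ['sensor_c', 'sensor_b', 'gateway']
```

Note that sensor_c has no direct link to the gateway at all; its data still arrives by relaying through sensor_b, which is exactly the infrastructure saving described above.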
Edge computing hardware is defined by its location, not its size, and so some edge devices may be very powerful local servers. But this said, a lot of edge computing is destined to take place on small devices, such as single board computers. Here, for example, we have a LattePanda Alpha and a UDOO BOLT, both of which could be deployed to process data at the edge. Other potential edge devices include the Edge-V from Khadas, as we can see here — this has even got “edge” in its name — and it’s got multiple camera connectors, which is very useful for edge applications. And then over here we have a Jetson Nano SoM, a system-on-a-module, and this is a particularly interesting single board computer because
it’s got a 128 CUDA core GPU. So it’s very good for vision recognition
processing at the edge. Another slightly different and very interesting device is this, the Intel Neural Compute Stick 2, or NCS2. This features a Movidius Myriad X vision processing unit, or VPU, and it’s a development kit for prototyping AI edge applications. And if I take off the end here you’ll see
this is a cap, and this is actually a USB device. The idea is that you can plug it into a single board computer, such as a Raspberry Pi, to significantly increase that board’s capability to run edge applications like vision recognition. The exact definition of edge computing remains
a little blurry. This said, all major players agree that it
places networked computing resources as close as possible to where data is created. To provide you with some more extensive definitions, IBM note that “Edge computing is an important emerging paradigm that can expand your operating model by virtualizing your cloud beyond a data center or cloud computing center. Edge computing moves application workloads from a centralized location to remote locations, such as factory floors, warehouses, distribution centers, retail stores, transportation centers, and more”. Similarly, the Open Glossary of Edge Computing
from the Linux Foundation defines edge computing as “The delivery of computing capabilities
to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services. By shortening the distances between devices and the cloud resources that serve them, and also reducing network hops, edge computing mitigates the latency and bandwidth constraints of today’s Internet, ushering in new classes
of applications”. Cisco have also introduced the term “fog
computing”, which it describes as “. . .a standard that defines how edge computing should work, and [which] facilitates the operation of compute, storage and networking services between end devices and cloud computing data centers”. What this means is that fog computing refers to resources that lie close to the metaphorical ground, or between the edges of a network
and the remote cloud. It may be, for example, that in a factory some edge sensors communicate with local fog resources, which in turn communicate as necessary with a cloud data center. It should be noted that the term “fog computing” is mainly used by Cisco, and is viewed by some as a marketing term rather than an entirely distinct paradigm from edge computing.
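A hedged sketch of that factory arrangement follows, with invented thresholds and message shapes: readings are filtered at the edge, summarized at the fog tier, and only a compact summary ever reaches the cloud.

```python
# Illustrative edge -> fog -> cloud tiering (thresholds and message shapes
# are invented for the example; real systems differ).

def edge_filter(readings, threshold=80.0):
    """Edge tier: keep only readings worth forwarding (e.g. over-temperature)."""
    return [r for r in readings if r > threshold]

def fog_aggregate(filtered):
    """Fog tier: condense the filtered readings into one summary for the cloud."""
    if not filtered:
        return None                        # nothing to report upstream
    return {"alerts": len(filtered), "max": max(filtered)}

def cloud_ingest(summary, log):
    """Cloud tier: store only the compact summary, never the raw sensor data."""
    if summary is not None:
        log.append(summary)
    return log

log = []
raw = [72.0, 85.5, 79.9, 91.2]             # readings from several edge sensors
cloud_ingest(fog_aggregate(edge_filter(raw)), log)
print(log)  # [{'alerts': 2, 'max': 91.2}]
```

Four raw readings become a single small record in the cloud; most of the data, and most of the processing, stays near the metaphorical ground.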
Edge computing is emerging for two reasons. The first is rising pressure on network capacity, while the second is our growing demand for ever-faster responses from AI and related applications. As a result, while for a decade we’ve been pushing computing power out to the cloud, increasingly we’re also pushing it in the opposite direction, to the local extremities of our networks. More information on a wide range of computing developments — including AI, blockchain and quantum computing — can be found here on the ExplainingComputers YouTube channel. But now that’s it for another video. If you’ve enjoyed what you’ve seen here
please press that like button. If you haven’t subscribed, please subscribe. And I hope to talk to you again very soon.


  • John Miller

    Process control instruments have been moving in this direction for many years with I/O that can be connected to instrumentation nearby and mounted directly on or near the equipment. The I/O communicates to the PLC via Ethernet.
    Now, we have highly accurate devices with built in diagnostics that connect to the PLC over Ethernet. Previously, multiple wires for analog and discrete signals would have been required to send the information for mass flow, density, temperature, flow total and device health from a coriolis meter and now, using Ethernet IP or Modbus TCP, all of that information and more is sent directly to the PLC, HMI, or process historian.
    I think the key to acceptance of SBC devices on the plant/factory floor will be availability of "open" standards like OPC, EthernetIP and ModbusTCP.

    Thanks for another thought provoking video.

  • Max

    Thank you for the video, which explains that edge computing has nothing to do with internet explorer… And now for something completely different, I was just wondering, if you think of reviewing the Nvidia Jetson Xavier, which is still a sbc, but a very expensive one – 650 $ on Amazon US. I am thinking of importing one for Cuda development and use it also to possibly replace a Pi4 to run network security and home automation. It is supposed to have a power consumption of less than 10 W at the lowest CPU setting.

  • Richard Collins

    As the hardware can be of any size and so not necessarily a new form factor, would you say this is process innovation vs technology innovation?

  • Balsey Dean De Witt, Jr.

    They might know this by now, but they should prioritize what data they actually need to be sent online and keep everything else, that has to be processed, locally. Right now, I presume, they are sending everything online. This would help if they trimmed off what actually has to be sent online. Just my thought!

  • David Hedges

    So pulling back processing power from the cloud to local devices, just like mainframes and dumb terminals, went to PC's. then went to internet devices, then "The Cloud" … here we go round again …

  • Joe Belmont

    Thanks ExplainingComputers Sensei and finding this edge computing subject fascinating; not so FOG-gy on this edge computing subject anymore. Again thanks.

  • QuietStorm

    There is lots of good information in this video as an introduction to edge computing. Unfortunately, you have, IMO, made a very large omission. The vast majority of edge devices, such as smart light bulbs, smart thermostats, etc. do not use Single Board Computers. They use micro-controllers. Your video gives the incorrect impression that SBCs are the dominant edge computing device. They are not. Perhaps you intentionally conflated SBCs and microcontrollers to simplify your presentation. I hope not as it is an important and meaningful distinction.

  • Bill Gross

    This was an excellent video! Along similar lines, could you do a video on mesh networking, particularly with the RPi or similar SBC? Thank you!

  • Ian McCluskey

    Beyond my mental and educational resources. Sorry, in other words that an Englishman will comprehend, the ball went straight through to the wicket keeper.

  • cgwworldministries

    Sounds like normal computing to me. Install a processing unit on hardware to lessen internet load. Like graphics cards for video games. There’s a reason the cloud died in the 80s.


    Thank you Christopher for this video. I'm new in the software consulting business and we are going to work with these things. What you have done is very pedagogically explain how everything is related.

  • jason campbell

    This week’s video ties in nicely with last week’s really doesn’t it? What I’m not clear on yet is what EDGE is really for- I get that in enterprise it can have benefits, it’s a sort of half-way house driven in great part by relatively cheap SBC. At home I am struggling to see the benefits, mind you I am usually late to the party on these things- as it develops the benefits will probably become more obvious.

    I get that there’s a lot of AI involved in a lot of EDGE (even if they are separate things), and I guess there’s bandwidth considerations there. At the same time for me the closest I get to EDGE is a NAS really. Over time I’ve moved away from local storage so it feels a bit strange to be coming back to it. Time will tell I suppose, not everything translates into the consumer space.

    I've done a bit of reading on EDGE and I can see some synchronicity with modular computing- which I guess is touched on with the Intel Compute Stick. Nice video, food for thought.

    Is anyone using an EDGE approach at home, would be interesting to learn how they are using it- if the cloud is just 'somebody elses' computer' (I think it is more than that) is EDGE really just a home network, with a new name, using the emerging hardware that is now available?

  • brostenen

    I am not a fan of cloud computing. Mostly. Sharing content on youtube is ok, and streaming movies or music is in it self a kind of cloud computing. Yet what I store is always local.

  • john clough

    Computing like fashion always goes in cycles, moving from centralising to decentralising; Unix (centralised), Windows clients (decentralised), Cloud (centralised), and now Edge (decentralised), give it another 10 years and everything will be heading back to the centre again…..

  • Rob R

    I see a lot of comments where there is a misunderstanding of data vs data *processing*. The end data is still sent to some external service to be consumed.

  • Hassan Sultan

    Isnt that how things were. Hard fix that usb in your computer and it becomes a permanent of your computer… exactly how things were. But that fog thing is tricky, a new network administrator controling my computer

  • Paul Andrew Mitchell

    Another excellent presentation, Chris. I hereby reserve all U.S. utility patent rights to a complete clone of all brain cells within your cranium, for reduplication throughout the known Universe. This way, if anybody asks how they might speak directly to Chris, all we need to tell them is to access the nearest EC-EDGE ™ for a faithful yet artificially intelligent hologram of your handsome good looks! p.s. "tm" stands for trademark, NOT transcendental meditation!

  • John Nguyen

    This concept of Edge Computing is very interesting, I’m going to research a bit more about this.

    Oh and Happy Sunday ExplainingComputers!

  • ChuckBleedinNorris

    "Edge computing" aka "on premise" for those who still believe the cloud is the only way forward but refuse to accept that you still need local tin.

  • William James

    An interesting and well explained video – thanks. It shows how the internet added the ability to draw on more powerful resources, not available locally, to process data. But, as the quantity of date increased that travels over the internet, the internet's attributes became the limiting factor; as does the number of requests. The availability of powerful and relatively cheap single board computers has allowed processing of date at the edge, where is is created, of a central location, being the cloud. Thus, there is an opportunity for individuals to have control and ownership of their own data. Rather than hand it over to large corporations and lose control of it. Moreover, in edge computing there is opportunity to learn about AI in good detail, without a need to work for a large corporation to gain this knowledge. As always we live in interesting times, as change appears to be the only certainty.

  • Troy Stover

    No matter what the circumstance is, you're still going to have latency and be limited by the transfer speed and power of each processor it has to go through. There is such a thing as latency in a processor. The slowest speed is what you're gonna get at the end result. It's like cars going through a roundabout. For more important traffic, a direct lane is better in some cases. A city builder is a great example of how this type of computing works. If you destroy a lane it will take longer to get to the destination. When a product fails, where does the data go for that device? You would need to think of Edge as a RAID Array.

  • Timpraetor

    Christopher, you continue to deliver great info in these — except for this one. This sounds like you've fallen into the "Confabulator" trap based on the marketing BS that these tech organizations are pushing to come up with a new reason to convince you to spend new money with them … Still gave it a like, but that's some serious double speak you had going on there.

  • Jason Carlson

    Brilliant explanation of the this technology. Thanks for taking the time to explain in such detail and with common language. Really enjoy your channel. Keep up the great work!

  • Robert McKenzie

    People aren’t ‘Willing’ to share their data they are both being kept ignorant and forced by licensing to give up their data… can you imagine if a door to door salesman knocked on your door and said ‘ yes you can have this gizmo, that you’ll find really useful (though it isn’t) but first, I need to make a copy of all your friends and families names and addresses from your address book, ok?’ Truth is, governments need to pass law, a digital ‘unfair contract terms act’ so to speak . The general public need protecting at a global level by democratic government.

  • Art McTeagle

    Very good video! I've never heard of this before, but it strikes me that 'Edge computing' will be necessary for self driving, autonomous cars in the future.

  • Jedzia Dex

    Excuse me, Christopher. But I can not resist explaining you the future and present at the same time, which is definitely a rookie mistake:) More thoughts:
    There is a zero cost solution(in regards of money and feasibility) for the computing power problem: Just wait 2 years and you'll get 10 times the speed. Or 4 years and have 100 times the computational power and so on. Sounds silly, but in principle is true:)
    With this in mind you can ask a fundamental question: Why should i send you(foreign data center/cloud) my data? Instead, send me your program and i do the computation. An Example: I want to book a hotel in London. So i tell 10 different agencies or hotel companies my personal data to find the one that suits me. Their only job is to provide a room for the night. That's what they (should) make money with. Not with my name, my birthday, my habits or my click behavior.
    With this "Inversion of Responsibility" from above, we can solve a big problem of today: Privacy.
    You may think, with this solution the hotel/travel agency/any-other-service has to transfer their data-set to you? Just the executing front-end is enough. The same privacy also applies to the other endpoint. If its only you, then with enough computational power they can still extract personal information by your requests to the database. But when all transactions are done in this manner it is a O(N) problem and simply not attractive anymore to press some money out of you as human-data-cattle.
    Let's put the focus back to image recognition and the present: Even in the present, it is possible to send the data stream to an independent data center. The high-tech AI vendor can send in their program, complete the data training phase, and get paid for what it should be paid for. Not the misuse of your pictures, data and personal information, what has happened too much in the past for my understanding.
    "Where data is, desires also arise…" (Yoda?)

  • Robyn Edwards

    The wheel turns and history repeats.

    Main frame, IBM 36, PDP and so on, cloud.
    Rise of the PC with central network storage. Novell, Win server, Edge.
    The internet. Fog.
    RDP and thin client, Cloud.
    Central buy as a service cloud.
    Now, local processing, edge.

    Down loading a preconfigured neural net is not that far from downloading an application, is it?

    Round and round we go and where we stop nobody knows.

  • codigoBinario

    A new term was born not so long ago: Mist Computing
    More info for people interesting in privacy and decrease costs (compared to Amazon cloud cost) in this paper from a research center: https://ieeexplore.ieee.org/abstract/document/8819993

  • Geekboy NZ

    Another great video Chris!

    It would be great to see a future video explaining the technical security challenges of 5G hardware. I think understanding the tech can be apolitical, in the same way that we can understand general challenges of internet security without knowing specific bad actors. You have a knack of making it simple to understand!

  • Patrick G.

    Wyze Cameras Use Xnore Ai and the algorithm is trained then placed on the firmware therefore processed on the camera then transmitted once a particular value ( detects a person) is met, this is interesting, as I was just learning more about this myself to help resolve some resource/network bottlenecks, Excellent Video!

  • DarknessFX

    Great video and content, as always! If I may suggest, check M5Stack.com they have some innovative ESP32 devices and recently launched a RISC-V (+cam, +lcd, +battery, +usbC) for Edge Image Recognition called M5StickV for cheap (I think is under $25) and they are about to launch a new version that include Wireless, almost an all-in-one device the size of a matchbox.

  • Brainstorm4300

    Some people are commenting as if cloud is something bad and services are reverting back to "good old days" where local machines did all the processing. That is a fundamental misunderstanding of cloud and edge computing paradigm. Some services cannot be processed at the edge and more computationally intensive operations are still tasked to the cloud. Modern SBCs are powerful enough to lend a hand to the cloud. This seamless sharing of resources which has ushered in a host of new applications and services has never existed before.

  • Nigel Johnson

    The concept of the cloud was driven more by the corporate marketing departments rather than their R&D labs. The cloud fits the service provider model that links the customers wallet to the corporates bank account via a subscription agreement.
    The fog solution recognises the unacceptable load that universal adoption of directly connected IoT will place on the internet and appears to be an attempt to salvage some of the cash generating power of subscription services from the obvious practical problems of send all that low level data to remote servers.
    Most engineers who have thought about the problem have concluded that distributed computing is the answer with only necessary data being moved to the increasingly remote parts of the network. As has been stated in this video, this solves many of the latency problems and limits the data load placed on the internet. Not mentioned here, is that if the standards are well designed it will increase the resilience of the system by reducing the dependence of local nodes on each other, but also on access to the remote servers. The test being what happens if the internet connection is broken, does the local network continue to operate and at what level of functionality.
    In defence of the technical advantages of cloud computing, it must be said that the cloud has provided users with access to significant processing power and allowed the development of AI applications that would otherwise be impossible to afford, but he data load produced by mass adoption along with advances in affordable computer power will return the location of the processing engines to their rightful place, as local and close to the action as possible.

  • Michael Bishop

    Fascinating. I foresee a day when thousands or millions of small neural networks send their analyses to other, larger neural networks closer to the cloud, for meta-analysis, which in turn send their data up the chain, in a hierarchy of neural networks, allowing heretofore undreamed of levels of abstraction in the interpretation of data. Which path should we take to the Singularity? All of them!

  • DumbledoreMcCracken

    The only problem I see this solving is autonomous interdependent vehicles (AIVs), who cognitively interoperate to share observations and plans that avoid accidents and speed throughput.

  • Ana E

    This technology has been a great equaliser in bringing the quality of Australian takeaway up to Yank and Pom standards.. Missed going abroad and having my takeaway be right 90% of the time, [as opposed to 65% at best here in straya]. Machine learning+Cameras working their magic at Dominoes pizza ( ͡° ͜ʖ ͡°)

  • Jesse C

    Watching computers evolve the way organic biology did is amazing. This is just like a video I watched about "robot skin", how there's too many points of data to track even for a human mind so a lot of the sensing data is computed by the nerves before it sends a signal, basically just significant changes are sent to your brain, and this is the exact same thing for silicon based intelligence. Watching the entire planet gain sentience via computing is wild, it's a crazy time to be alive to watch this all happening so fast right in front of us.

    It took people billions of years to reach this point, and computers will surpass it in a hundred. What massive and exponentially more potent use of matter, it's almost insane to think about.

    This of course, is completely leaving out the ramifications, the philosophy behind this. It's just a fact, computers are becoming more organic but with the ability to evolve in seconds, not millennia. It's organic life, but with the "edit" mode unlocked.

  • PrivateSi

    The back and forth between dumb terminals and clever clients will not stop…. it's price vs performance. I don't like dumb terminals and fully centralised systems………….. anywhere.

  • freeman239

    This will be perfect for China to keep track of its citizens for its social credit system. I'm sure it will be used in the UK soon, as well as North America eventually!

  • Earnest Williams

    I'd love a video on what home users can do with a device like the NCS2. For instance, if we run something like openHab, could we utilize video processing or facial recognition at a reasonable cost to improve the quality of home video monitoring?

    Another great video, btw!

  • Daniel Segel

    The intel NCS and Google Coral allow for training something like a visual recognition system on a raspberry pi. They’re not needed to run a recognition system. The same advantage the Jetson Nano has with its Cuda cores. They’re used for TRAINING, not implementation.

  • Dave Boyer

    In the 1970s my Community College had "dumb" mechanical teletype terminals connected to another college to run their software programming projects. We didn't call that a "cloud connection" at that time. I favor local control and processing of inputs and programs where it makes sense. Thanks for another great video.

  • S C

    oh Lord, this is the most wierdest video on this channel as it is completely and utterly about nothing ) it is as abstract as human mind can go: look, for decades we were trying to be as 'remote' as possible with remote being both marketing term and scientific declaration, and all of a sudden 'lets store data locally'? Who's benefitting?)

  • Rufus

    So, it's back to the local PC/Server with Internet model, got it. Oh except it's called "Edge Computing" now, that's so progressive.
    One note, I'm glad you don't use "leverage and leveraged" among all the necessary buzz words.

  • Major Dick

    I would totally use a Digital Assistant that sounded like Chris.

    It could even be called Christopher.

    Or just an EC how to video on making Google Assistant sound like him.

  • Samyojeet Dey

    Hi I have a request can you make a video on Deepin OS please? The link below shows that it is better, but I want your expertise and advice in this.

  • Funky Monkey

    Good idea..have all your data stored on somebody else's hardware then have all your appliances hooked into the interweb so {{{they}}} can spy on you

  • Ron Lewenberg

    Given the increasing processing power in phone and tablet system on chips, most notably the newest iPhones and iPad Pro, would you expect to see Edge like features move over to these devices as well? This would allow for a lower latency, and I could see a new system where there is localized learning.
    And do you think that in your own that works or chips for these will be built into Intel and AMD chipsets in the near future? It seems to me that the PC is being left behind.

    PS. There already are implementations for active directory in Windows Server 2016 and 2019, which creates an edge like paradigm. The active directory for managing all of the computers, devices, and accounts is centralized in a cloud-based server with the local servers being copies. In some cases these local servers can accept and update data, but in others they are simple copies. Either way, there's regular synchronisation at set times.

  • An Kaz

    Is "cloud" not still essentially a marketing term?
    I never stopped seeing it that way. Just meaning "web service," be it storage, processing, hosting, software, etc.
    It's humorous thinking companies decided to make the "thing" in their IoT less useless to save on bandwith costs.
    Edge computing as a concept is somewhat understandable, but to the end-user it may aswell be a device that half-works on its own but is still dependent on the net and phoning-home, at least with the AI example given.

  • Jamie Whitehorn

    Another great video of just the right length. No waffle, no banter, just the information you need to understand the concept.

    It makes me wonder, is this process cyclic? 50 years ago compute power was expensive so processing was centralised with mainframes and dumb terminals. Then PCs change the landscape with cheap processing and we de-centralised. Then we upped the amount of processing we needed to handle the vast amounts of data we are now collecting so the Cloud was born and we centralised again. Now we've got dedicated devices like VPUs and pre-trained neural nets that can offload the processing and a limited resource of bandwidth, so we're decentralising again. I wonder if we "fix" the bandwidth problem will we centralise again …😀

  • cicada

    way to make the video 10 minutes bro. this 10 minute thing is getting ridiculous and makes me not want to watch YouTube anymore.
