What is edge AI and what are its applications?
Edge AI has been the cornerstone of many transformations in imaging systems used across industries such as agriculture, medical, retail, industrial, and smart cities. It uses artificial intelligence to automate tasks, improving the efficiency and performance of machines.
But what is edge AI? What is the difference between AI and edge AI? Does edge AI come with certain benefits? We attempt to answer these questions in this article.
What are artificial intelligence, machine learning, deep learning, and computer vision?
To learn what edge AI is, it is important to understand fundamental concepts such as AI (Artificial Intelligence), ML (Machine Learning), DL (Deep Learning), and CV (Computer Vision).
Artificial Intelligence is a generic term for the ability of systems and machines to perform tasks that usually require human intelligence. Natural language processing and computer vision are two examples. Machine learning is a subcategory of AI in which machines learn and improve automatically through the experience of carrying out a task. This learning can be supervised or unsupervised.
Deep learning, in turn, refers to a family of learning algorithms based on multiple layers of small, densely interconnected learning units called nodes. This structure is called a neural network, and it is inspired by how neurons are connected inside the brain. The more layers a neural network has, the deeper it is, hence the term ‘Deep Learning’.
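To make the layered structure concrete, here is a minimal sketch of a forward pass through a toy two-layer network in plain Python. The weights and biases are arbitrary illustrative values, not a trained model:

```python
def relu(x):
    # A common activation function: negative values become zero.
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    # Each output node sums its weighted inputs and adds a bias.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A toy network: 3 inputs -> 4 hidden nodes -> 2 outputs.
w1 = [[0.2, -0.5, 0.1], [0.4, 0.3, -0.2],
      [-0.1, 0.6, 0.2], [0.05, -0.3, 0.4]]
b1 = [0.1, 0.0, -0.1, 0.2]
w2 = [[0.3, -0.2, 0.5, 0.1], [-0.4, 0.2, 0.1, 0.3]]
b2 = [0.0, 0.1]

x = [1.0, 0.5, -0.2]
hidden = relu(layer(x, w1, b1))   # first layer of interconnected nodes
output = layer(hidden, w2, b2)    # a deeper layer built on the first
print(output)
```

Stacking more such layers is what makes a network "deep"; real networks simply repeat this pattern at much larger scale, with weights learned from data.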
Now let’s come to Computer Vision.
It is a field of AI in which machines interpret image or video data and draw intelligent inferences from it. The complexity of such a task varies with the goal. For example, detecting whether an object of interest is present in an image is a basic computer vision task.
What is edge AI?
Let us now come to the core topic of discussion – edge AI.
ML models have traditionally been deployed in the cloud due to their sheer size and complexity. The machines send new data to the cloud system, where inference is performed, and receive the prediction in return.
With powerful computing hardware becoming portable and affordable, and many ML models becoming lighter, these algorithms can now run on the machines themselves without the need for a cloud-based computing platform. This is called ML at the edge (or edge ML). And when this ‘learning’ is used to perform intelligent real-world tasks, it’s called edge AI.
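As a rough illustration, an edge-AI processing step can be sketched as a capture-and-infer loop that never leaves the device. Both `capture_frame` and `run_local_model` below are hypothetical placeholders standing in for a real camera driver and a real on-device model (for example, a quantized network):

```python
def capture_frame():
    # Placeholder: a real system would read a frame from the camera sensor.
    return [[0.0] * 4 for _ in range(4)]   # tiny dummy "image"

def run_local_model(frame):
    # Placeholder inference: a real system would run a compiled or
    # quantized model here. We report whether the mean pixel value
    # crosses an arbitrary threshold.
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels) > 0.5

def edge_ai_step():
    # No network round trip: capture and inference both happen on-device,
    # which is what distinguishes edge AI from cloud-based inference.
    frame = capture_frame()
    return run_local_model(frame)

print(edge_ai_step())
```

In the cloud-based setup described earlier, `run_local_model` would instead be a network call to a remote service, adding the latency and bandwidth costs that edge AI avoids.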
Advantages of edge AI
Some of the most important advantages of edge AI include:
- On-device data analysis is needed when decisions based on the data must be made immediately. In such scenarios, where bandwidth is limited and latency is critical, edge processing is more efficient than a cloud round trip.
- Running the ML inference at the edge means that the application will continue to run even if access to the network is disrupted. This decentralization makes it more reliable. This is particularly helpful in applications or systems that are deployed in locations where network interruptions are frequent.
- In edge AI, the data is processed and stays local in the device and need not be sent over a network. This reduces the risk of data privacy violations.
- In terms of operating cost and power consumption, edge AI typically outperforms cloud-based computing, since data does not have to be continuously transmitted to and processed in remote data centers.
Popular edge AI-based embedded vision applications
Now that we understand what edge AI is, let us look at some of the ‘cool’ embedded vision applications where edge AI is leveraged to improve the overall effectiveness of the system.
AI in medical diagnostics

AI in medical diagnostics has primarily been developed to improve efficiency and effectiveness in clinical caregiving. For example, in modern clinical practice, digital pathology plays a crucial role in the laboratory environment.
Diagnosing conditions manually is a time-consuming process that is prone to inaccuracies. Also, according to several studies, the shortage of pathologists is a major issue in developing countries, with as few as one pathologist for every million people. This is where camera modules and edge AI can come together to ease pathological procedures.
Advancements in cameras have made it much easier to produce digitized images of whole tissue slides at microscopic resolution, and machine learning algorithms have evolved in parallel to aid diagnosis.
Typical digital image analysis comprises segmentation, detection, and classification, along with quantification and grading. Tasks such as the detection of cells or nuclei are among those where deep learning techniques have made a tremendous contribution to digital pathology. Deep convolutional neural networks have shown great results in tumor classification and segmentation, mutation classification, and outcome prediction.
In the future, AI may not replace a pathologist. But a pathologist with training in AI will certainly replace one without it.
Remote patient monitoring
Of all the applications in the medical space where edge AI and camera technology come together, remote patient monitoring or RPM has seen a wide adoption given the possible benefits.
Remote patient monitoring is dependent on one or more digital imaging solutions that are integrated into hospital networks. It allows health workers to monitor multiple patients, and clinical teams to assess patient conditions without being physically present in the same room.
By leveraging edge AI, remote patient monitoring is moving from mere video monitoring to behavioral analysis such as fall detection, tracking patient movements, and monitoring the number of people in a room. The patient’s behavior is then classified and analyzed using models like NVIDIA’s PeopleNet to prevent falls from happening.
For example, imagine a heavily sedated patient trying to get up from a hospital bed. An AI model trained to recognize a patient stirring from a minimally conscious state can send an alert to a caretaker or nurse for immediate attention. This is made possible by continuously capturing video of the patient with the camera in the RPM device, feeding the required image and video data to the AI model.
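The alerting step described above might be sketched as follows. `should_alert` and `notify_nurse` are hypothetical names, the movement scores are simulated model outputs, and the threshold is an assumed tuning parameter rather than a value from any real product:

```python
ALERT_THRESHOLD = 0.8   # assumed tuning parameter, not a real product value

def should_alert(movement_score, patient_is_sedated):
    # Only a sedated patient attempting to get up should trigger an alert.
    return patient_is_sedated and movement_score >= ALERT_THRESHOLD

alerts = []

def notify_nurse(patient_id):
    # Placeholder for a pager, app, or nurse-station notification.
    alerts.append(patient_id)

# Simulated stream of per-frame movement scores from the model.
for score in [0.1, 0.2, 0.85, 0.3]:
    if should_alert(score, patient_is_sedated=True):
        notify_nurse(patient_id="bed-12")

print(alerts)
```

In a real deployment the scores would come from a vision model running on each camera frame, and the notification would go out over the hospital network rather than into a list.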
Now let us see what the future holds when it comes to the use of embedded camera technology and edge AI in patient monitoring.
Given increased patient privacy concerns, questions are being raised about storing patients’ video data for analysis purposes. This is likely to drive the adoption of 3D depth-mapping technologies in remote patient monitoring.
Using 3D imaging systems such as time-of-flight or stereo cameras, patient movement can be monitored by collecting depth data alone instead of color data or real video. This enhances the privacy of patients and gives them peace of mind. e-con Systems is already equipped for this change, offering 3D depth cameras including a time-of-flight camera called DepthVista and a stereo camera called Tara.
To learn more about the impact of AI and camera technology on remote patient monitoring and patient care in general, please check out the article How cameras and Artificial Intelligence are changing the way we look at patient care?
Autonomous shopping systems

The last application we will look at is autonomous shopping systems. These devices use one or more cameras to capture images of products and shoppers.
There are two types of autonomous shopping systems:
- Smart trolley
- Smart checkout system
Smart trolleys look like regular grocery carts but typically have tablet-sized devices attached near the cart handle. When a consumer places an item in a smart cart, a camera attached to the device automatically detects and recognizes the item without needing a barcode scan.
A smart checkout system detects when shoppers enter the store and tracks them until they exit. An advanced network of cameras running machine learning algorithms identifies the items purchased by the shopper. And the system automatically processes the payment when the shopper leaves the store.
Product identification in these systems is done through object recognition using ML algorithms. Shoppers may place an item in any orientation, and the system should still be able to identify it. High-quality cameras, large training datasets, and careful algorithm tuning are needed to achieve the desired levels of accuracy in these systems.
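One common way to tolerate arbitrary item orientation is test-time augmentation: classify several rotated copies of the image and pool the results. The sketch below uses a stub classifier in place of a real trained model, with a tiny list-of-lists "image" standing in for a camera frame:

```python
def rotate90(img):
    # Rotate a square image 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def classify(img):
    # Stub model: "recognizes" the item by its total pixel mass, which
    # happens to be rotation-invariant here. A real system would run a
    # trained object-recognition network instead.
    total = sum(sum(row) for row in img)
    return "apple" if total > 2 else "unknown"

def predict_any_orientation(img):
    # Classify all four 90-degree orientations and take a majority vote.
    votes = []
    for _ in range(4):
        votes.append(classify(img))
        img = rotate90(img)
    return max(set(votes), key=votes.count)

item = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]   # toy "product image"
print(predict_any_orientation(item))
```

In practice, orientation robustness is usually trained into the model itself by augmenting the training data with rotated examples; pooling predictions at inference time, as shown here, is a complementary technique.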
By tapping into the power of embedded vision and edge AI, retailers are further improving consumer experience. To learn more about how embedded vision is changing autonomous shopping systems, please have a look at the below articles:
- How embedded vision accelerates smart trolley and smart checkout journeys
- Key camera-related features of smart trolley and smart checkout systems
Edge AI is fast evolving. With high-performance processing platforms like the NVIDIA Orin series and new-age embedded cameras, the future of edge AI and embedded vision technology looks bright. If you have any questions on the topic, please do leave a comment.
If you are looking for help in integrating cameras into your system, please don’t hesitate to write to us at email@example.com. Meanwhile, you can check out the Camera Selector to browse through our entire portfolio of cameras.
Prabu is the Chief Technology Officer and Head of Camera Products at e-con Systems, with more than 15 years of experience in the embedded vision space. He brings deep knowledge of USB cameras, embedded vision cameras, vision algorithms, and FPGAs. He has built 50+ camera solutions spanning domains such as medical, industrial, agriculture, retail, biometrics, and more, and also has expertise in device driver and BSP development. Currently, Prabu’s focus is on building smart camera solutions that power new-age AI-based applications.