Time-of-Flight (ToF) technology has gone from being a buzzword in the embedded vision world to a fully-fledged depth-sensing solution that is taking several markets by storm. The journey began in the late 1970s, when a paper published by the Stanford Research Institute theorized the use of laser sensors for 3D analysis. Several technical challenges held it back from becoming a viable solution in those days. However, ToF technology has come a long way, overcoming these challenges to blaze a new trail in advanced 3D imaging for mobile robots. In industrial environments like warehouses in particular, ToF cameras have helped mobile robots perceive their surroundings with unprecedented accuracy. But what can mobile robots powered by Time-of-Flight cameras do to significantly improve warehouse management? And how exactly do they go about it? Let's find out in today's article.
How ToF technology has revolutionized mobile robots
As you may be aware, stereo vision technology has also evolved to meet modern application requirements. A stereo system uses an IR pattern projector to illuminate the scene and compares the disparities between images from two 2D sensors, ensuring superior low-light performance. Time-of-Flight cameras take this a step further: with a sensor, a lighting unit, and a depth processing unit, they enable mobile robots to calculate depth directly.
Since depth is computed on the camera itself, ToF cameras can be leveraged out of the box without further calibration. While performing scene-based tasks, the ToF camera is mounted on the mobile robot to extract 3D images at high frame rates, with rapid segmentation of the foreground from the background. And since ToF cameras use active lighting components, mobile robots can perform tasks in brightly lit conditions or in complete darkness.

Read: How Time-of-Flight (ToF) compares with other 3D depth mapping technologies
Role of ToF-powered mobile robots in automated warehouse operations
Time-of-Flight (ToF) cameras have been reinventing warehouse operations by equipping AGVs (Automated Guided Vehicles) and AMRs (Autonomous Mobile Robots) with depth-sensing intelligence. These cameras help them perceive their surroundings and capture depth imaging data to undertake business-critical functions with accuracy, convenience, and speed. These include:
- Localization
- Mapping
- Navigation
- Obstacle Avoidance
- Object Detection
- Odometry
Localization

A Time-of-Flight camera helps AMRs identify their position on a known map by scanning the environment and matching the collected information against known data. A traditional approach would rely on a GPS signal, but in indoor environments like warehouses, GPS is unavailable. Hence, AMRs need localization features that operate entirely locally. With ToF cameras, they can capture 3D depth data and measure their distance from reference points on the map. Then, using triangulation, the AMR can pinpoint its exact position. This enables seamless localization, making navigation easy and safe.
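As a rough illustration of the triangulation step, here is a minimal 2D trilateration sketch: given measured distances to three known landmarks on the map, it solves for the robot's position by linearizing the circle equations. The landmark coordinates and distances are hypothetical values for illustration only; a real AMR would solve an over-determined least-squares version of this using many reference points and noisy depth measurements.

```python
def trilaterate_2d(landmarks, dists):
    """Estimate (x, y) from distances to three known 2D landmarks.

    Linearizes the circle equations (x - xi)^2 + (y - yi)^2 = di^2 by
    subtracting the first from the other two, leaving a 2x2 linear
    system solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    d1, d2, d3 = dists
    # Coefficients of 2*(xi - x1)*x + 2*(yi - y1)*y = rhs_i
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
```

With landmarks at (0, 0), (4, 0), and (0, 3) and distances measured from the point (1, 1), the function recovers that position exactly.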
Mapping

ToF cameras help AMRs create a map of unknown environments by measuring the transit time of light reflected from target objects. This 3D depth data feeds SLAM (Simultaneous Localization and Mapping) algorithms, which provide accurate mapping information. For instance, using this 3D depth sensing, predetermined paths can be defined within the premises for mobile robots to move along.
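The underlying distance measurement follows directly from the round-trip time of light: depth is c·t/2. A minimal sketch with illustrative values (note that real ToF sensors typically infer transit time indirectly, e.g. from the phase shift of modulated light, rather than timing a single pulse):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def depth_from_transit(t_seconds):
    """Depth to the target is half the round-trip distance the
    reflected light pulse travels in t_seconds."""
    return SPEED_OF_LIGHT * t_seconds / 2.0

# A reflection arriving ~6.67 nanoseconds after emission corresponds
# to a target roughly one metre away.
d = depth_from_transit(6.67e-9)
```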
Learn how e-con Systems helped a leading Autonomous Mobile Robot manufacturer enhance warehouse automation by integrating cameras to enable accurate object detection and error-free barcode reading.
Navigation

Mobile robots come with navigation capabilities to move from point A to point B on a known map. They can also do the same in an unknown environment by utilizing SLAM algorithms. Leveraging ToF technology, AMRs can quickly understand their environment in 3D before deciding on the path to take.
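Once a map is available, the path-planning step can be sketched very simply. The example below runs a breadth-first search over a 2D occupancy grid, a common simplified representation of a SLAM-generated map; the grid encoding and function names are illustrative, not from any particular robotics framework:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid
    (0 = free cell, 1 = occupied). Returns a list of (row, col)
    cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + backpointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

In practice, AMRs use weighted planners such as A* or D* Lite over costmaps, but the principle of searching a map built from depth data is the same.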
Obstacle Avoidance

AMRs are likely to encounter objects during the course of their navigation within the warehouse. Hence, they require precise information about their surroundings to avoid any obstacles in their path. If they do run into one, ToF cameras let them easily plan a new path to reach the destination.
Object Detection

The traditional approach was to integrate an AI-enabled monocular camera into an AMR to detect objects. With ToF technology, 3D images can be used instead, matching an object's shape against the predefined parameters of known objects. If a match is found, the AMR can make a suitable decision.
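A toy version of this shape-matching idea might compare an object's measured bounding-box dimensions, derived from the ToF point cloud, against a table of known objects. All labels, dimensions, and the tolerance below are hypothetical; production systems would use far richer 3D descriptors than a bounding box:

```python
def classify_object(measured, known_objects, tol=0.05):
    """Match a measured (width, height, depth) in metres against a
    table of known object dimensions. Returns the first label whose
    dimensions all fall within tol metres, else None."""
    w, h, d = measured
    for label, (kw, kh, kd) in known_objects.items():
        if abs(w - kw) <= tol and abs(h - kh) <= tol and abs(d - kd) <= tol:
            return label
    return None

# Hypothetical catalogue of objects the AMR may encounter
KNOWN_OBJECTS = {
    "pallet": (1.2, 0.15, 1.0),
    "tote":   (0.6, 0.40, 0.4),
}
```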
Odometry

Odometry is the process of estimating the change in a mobile robot's position over time by analyzing data from motion sensors. Previously, a combination of a gyroscope and a wheel encoder was the most popular solution. In recent years, however, ToF technology has shown that it can be fused with other sensors to improve the accuracy of AMRs.
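One simple way such sensor fusion is often sketched is a complementary filter that blends the wheel-encoder pose estimate with a ToF/visual one. The weighting and pose representation here are purely illustrative; real systems typically use an extended Kalman filter with proper noise models for each sensor:

```python
def fuse_odometry(wheel_est, tof_est, alpha=0.7):
    """Complementary filter: blend a wheel-encoder pose estimate with
    a ToF/visual pose estimate, component by component.

    alpha weights the wheel estimate (smooth but drift-prone);
    (1 - alpha) weights the ToF estimate (noisier but drift-free).
    Both estimates are tuples like (x, y) or (x, y, heading)."""
    return tuple(alpha * w + (1 - alpha) * t
                 for w, t in zip(wheel_est, tof_est))
```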
e-con Systems’ Time-of-Flight camera for industrial mobile robots
e-con Systems has been working with depth-sensing technologies for over ten years. We have designed and developed DepthVista, a cutting-edge Time-of-Flight camera that delivers depth information and RGB data in a single frame. This is tremendously useful for simultaneous depth measurement and object recognition, and is made possible by combining a CCD sensor (for depth measurement) with the AR0234 color global shutter sensor from Onsemi (for object recognition). The depth sensor streams at a resolution of 640 x 480 @ 30fps, while the color global shutter sensor streams HD and FHD @ 30fps. Please visit the DepthVista product page to learn more about the features and other application use cases of the Time-of-Flight camera. Alternatively, you can watch the video below for a quick overview of the product:
We hope you are now in a better position to understand the significance of ToF technology in the evolution of mobile robots and how it enables them to perform mission-critical functions. If you are looking for help with integrating e-con's Time-of-Flight camera into your AMR, please write to us at firstname.lastname@example.org. Visit the Camera Selector to see our complete portfolio.
Dilip Kumar is a computer vision solutions architect with more than 8 years of experience in camera solutions development and edge computing. He has spearheaded the research and development of computer vision and AI products for the currently nascent edge AI industry, and has been at the forefront of building multiple vision-based products on embedded SoCs for industrial use cases such as autonomous mobile robots, AI-based video analytics systems, and drone-based inspection and surveillance systems.