At e-con Systems, we have been working on 3D camera technologies for more than a decade – starting with developing our first passive stereo camera. Since then, we have been exploring different ways of improving the generation of 3D data. After all, 3D visualization has been invaluable in helping several industries break new frontiers in innovation. Being merely equipped with 2D camera data is restrictive for applications that require robust and reliable data for making real-time decisions.
While we received great feedback for our STEEREOCAM camera solution, we also started to dig deep and explore a new 3D depth mapping technology called Time-of-Flight (ToF). Even though ToF has been around for a while (ever since the introduction of the lock-in CCD technique in the 1990s), it has only recently started to mature. As camera features start to evolve, ToF technology is becoming more viable for non-mobile markets like industrial, retail, etc.
What is a time of flight camera?
Time-of-Flight (ToF) cameras use infrared light to extract depth information – making it easy to evaluate distances in a full scene with just one laser pulse. Hence, they have seen wide adoption in industries where applications rely on real-time depth and visual information.
To learn more about time of flight cameras and their key components, visit the article What is a time of flight sensor? What are the key components of a time of flight camera?
Popular use cases of time of flight cameras
Time-of-Flight (ToF) cameras play a vital role in the industrial sector. Modern industrial AGVs (Automated Guided Vehicles) and AMRs (Autonomous Mobile Robots) depend on their ability to draw insights about their surroundings to avoid collisions. This is possible by capturing accurate depth data. They also amplify volume dimensioning capabilities in warehouses – especially to meet the demands of the e-commerce sector, where speed and accuracy are competitive differentiators. ToF cameras provide 3D data to help pinpoint the dimensions of products – saving a lot of time and effort.
Furthermore, ToF cameras have also witnessed tremendous growth in the biometrics sector, considering the push for face recognition-based authentication protocols to address spoofing and other security concerns.
Stereo Vision and 3D Mapping: How it works
Before we compare Time-of-Flight with other 3D mapping technologies, let’s take a deeper look at one of its challengers – Stereo Vision.
As you may already know, human binocular vision is based on the depth being perceived by using stereo disparity (difference in image location of an object seen by the left and right eye). Then, the brain uses this binocular disparity to extract depth information from the 2D retinal images (known as stereopsis).
Similarly, stereo vision cameras like Tara and TaraXL try to mimic this technique of human vision to perceive depth by using a geometric approach called triangulation. Some of the properties considered are:
Baseline: It is the distance between the two cameras (typically about 50–75 mm, comparable to the human interpupillary distance).
Resolution: Higher resolution gives more pixels to search, and therefore more disparity levels and finer depth precision – but at a higher computational load.
Focal length: It sets the trade-off between depth range and Field of View. A longer focal length lets the camera resolve depth at greater distances but narrows the Field of View; a shorter focal length widens the Field of View at the cost of depth range.
After capturing two 2D images from different positions, stereo vision cameras correlate the two views to compute disparity and generate a depth image. This makes stereo vision cameras suitable for outdoor applications with a large field of view. However, both images must contain sufficient detail and texture (non-uniformity); if the scene lacks texture, you can add it by illuminating the scene with structured lighting to achieve better quality.
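The triangulation described above reduces to a single formula for a rectified stereo pair: depth Z = f × B / d, where f is the focal length in pixels, B the baseline, and d the disparity. Here is a minimal sketch of that relationship; the numbers are illustrative, not taken from any particular camera.

```python
# Minimal sketch of stereo triangulation for an idealized, rectified
# stereo pair. All numbers below are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_px     -- focal length expressed in pixels
    baseline_mm  -- distance between the two camera centers
    disparity_px -- horizontal shift of a feature between left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# A feature shifted 64 px between the two images, with f = 800 px, B = 60 mm:
z = depth_from_disparity(800.0, 60.0, 64.0)  # 750.0 mm
```

Note how a smaller disparity maps to a larger depth – which is why far-away objects are harder to resolve and why a longer baseline or focal length extends the usable depth range.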
What about Structured Light imaging?
Structured light involves using a light source (laser/LED) to project a known light pattern onto a surface; the camera detects distortions of the illuminated pattern and geometrically reconstructs the 3D surface. Using triangulation across several captured images, it can assess an object’s dimensions even when the shape is complex. Basically, this approach ensures that cameras can capture moving scenes from various perspectives before quickly building a 3D reconstruction.
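In the simplest model, the projector acts as a "virtual" second camera: for each camera pixel, the matched pattern column in the projector gives a disparity, and the same triangulation formula as stereo applies. The sketch below assumes this simplified, rectified camera–projector geometry; the values are hypothetical.

```python
# Illustrative sketch of structured-light triangulation, assuming a
# simplified setup where the projector behaves like a second camera.
# For each camera pixel, the decoded projector column yields a disparity,
# and depth follows from the stereo triangulation formula Z = f * B / d.

def depth_from_pattern_shift(focal_px: float, baseline_mm: float,
                             camera_col: float, projector_col: float) -> float:
    """Depth from the shift between observed and projected pattern columns."""
    shift = camera_col - projector_col
    if shift <= 0:
        raise ValueError("pattern shift must be positive for a finite depth")
    return focal_px * baseline_mm / shift

# A pattern column projected at 120 is observed at camera column 150,
# with f = 600 px and a camera-projector baseline of 80 mm:
z = depth_from_pattern_shift(600.0, 80.0, 150.0, 120.0)  # 1600.0 mm
```

The practical difficulty lies in reliably decoding which projector column each pixel sees (e.g. via Gray-code or phase-shift patterns), which is why several images are typically captured.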
Effective vision solutions can also capture multiple images of structured light simultaneously. Some sectors that harness this approach include biometrics, entertainment, manufacturing, healthcare, security, etc.
How does ToF compare with other 3D mapping technologies?
Every embedded vision technology available for 3D image mapping has its own pros and cons. Let’s see how Time-of-Flight (ToF) cameras fare in comparison to the other two 3D technologies – stereo vision and structured light.
Following is a pictorial representation of how the three 3D mapping technologies work:
Stereo Vision vs. Structured Light vs. Time-of-Flight (ToF)
The following table gives a detailed comparison of the three 3D mapping technologies by parameters such as cost, accuracy, depth range, low light performance etc.
| | STEREO VISION | STRUCTURED LIGHT | TIME-OF-FLIGHT |
|---|---|---|---|
| Principle | Compares disparities of stereo images from two 2D sensors | Detects distortions of illuminated patterns by the 3D surface | Measures the transit time of reflected light from the target object |
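The ToF principle in the table reduces to a one-line calculation: distance is half the round-trip time of the light pulse multiplied by the speed of light. A minimal sketch of that direct-ToF arithmetic, with an illustrative timing value:

```python
# Sketch of the direct Time-of-Flight principle: distance is half the
# round-trip time of an emitted light pulse times the speed of light.

C_MM_PER_NS = 299.792458  # speed of light in mm per nanosecond

def distance_from_round_trip(t_ns: float) -> float:
    """Distance in mm for a measured round-trip time in nanoseconds."""
    return C_MM_PER_NS * t_ns / 2.0

# A pulse returning after ~6.67 ns corresponds to roughly one meter:
d = distance_from_round_trip(6.67)  # ~999.8 mm
```

The nanosecond timescale involved here is why real ToF sensors measure phase shifts of modulated light or use specialized timing circuits rather than timing each pulse naively.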
Why Time-of-Flight (ToF) camera is a better choice for 3D mapping
As evidenced in the above comparison table, ToF cameras are ahead in the race to achieve excellence in 3D image performance. Some of their key differentiators are:
Reduced software complexity
ToF cameras provide depth data directly from the module – thereby avoiding complications like running depth-matching algorithms on the host platform.
Higher imaging accuracy
ToF cameras provide better output in terms of image quality since they rely on accurate, active laser illumination.
More depth scalability
ToF cameras have a scalable depth range based on the number of VCSELs used for illumination.
Better low light performance
ToF cameras perform better in low-light conditions due to their active and reliable light source.
Compact form factor
ToF cameras boast an impressive form factor with their compactness – attributed to the fact that the sensor and illumination can be placed together.
e-con Systems & ToF cameras: What’s happening now?
As mentioned earlier, e-con Systems is making giant strides in maximizing the effectiveness of Time-of-Flight (ToF) camera solutions. We are well aware that designing a ToF-based depth-sensing camera is certainly no walk in the park. It can be a complex journey, given that it involves factors like optical calibration, temperature drifts, VCSEL pulse timing patterns, etc. Each one of these has the potential to affect depth accuracy.
We also know that it’s a time-consuming process – anyone who wants to design a ToF system should be prepared for a very long design cycle. Having said that, it definitely helps that e-con Systems has over a decade’s worth of specialized experience in working with stereo vision-based 3D camera technologies! Over the years, we have helped customers across the world successfully integrate our cameras into live products.
And today, we are proud to say that we have been working on a state-of-the-art ToF camera product for the past year.
As part of this journey, we are excited to share more details with you soon in this Technology Thursday blog series.
See you next Thursday!
Prabu is the Chief Technology Officer and Head of Camera Products at e-con Systems, and brings more than 15 years of rich experience in the embedded vision space. He has deep knowledge of USB cameras, embedded vision cameras, vision algorithms, and FPGAs. He has built 50+ camera solutions spanning various domains such as medical, industrial, agriculture, retail, biometrics, and more. He also has expertise in device driver development and BSP development. Currently, Prabu’s focus is on building smart camera solutions that power new-age AI-based applications.