
What are depth-sensing cameras? How do they work?

Depth-sensing cameras help new-age autonomous vehicles seamlessly navigate their environments. Learn all that you need to know about depth-sensing cameras: the types, their working principles, and the key embedded vision applications where they are used.


Advancements in machine learning, artificial intelligence, embedded vision, and processing technology have helped innovators build autonomous machines that have the ability to navigate an environment with little human supervision. Examples of such devices include AMRs (Autonomous Mobile Robots), autonomous tractors, automated forklifts, etc.

Making these devices truly autonomous requires them to move around without any manual navigation. This, in turn, requires the capability to measure depth for mapping, localization, path planning, and obstacle detection & avoidance. This is where depth-sensing cameras come into play.

In this article, we will learn what depth-sensing cameras are, the different types, their working principle, and finally a quick look at the most popular embedded vision applications that use these new-age cameras.

What are depth-sensing cameras and their different types?

Depth sensing is simply the measurement of the distance from a device to an object, or of the distance between two objects. A depth-sensing camera automates this: it detects the presence of nearby objects and measures the distance to them on the go. This helps the device or equipment it is integrated into move autonomously by making intelligent decisions in real time.

Of all the depth-sensing technologies available today, the three most popular and commonly used ones are:

  1. Stereo vision
  2. Time of flight
  3. Structured light

Next, let us look at the working principle of each in detail.

Stereo vision

A stereo camera relies on the same principle as human sight: binocular vision. Human binocular vision uses what is called stereo disparity to measure the depth of an object. Stereo disparity is the technique of measuring the distance to an object using the difference in the object’s apparent position as seen by two different sensors or cameras (our two eyes, in the case of humans).

The image below illustrates this concept:

Figure 1 – Stereo disparity

In the case of a stereo camera, the depth is calculated by an algorithm that usually runs on the host platform. For the camera to function effectively, however, the two images need to contain sufficient detail and texture. Owing to this, stereo cameras are recommended for outdoor applications with a large field of view.
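To make the disparity-to-depth relationship concrete, here is a minimal sketch in Python (using OpenCV) of how a host platform might compute depth from a rectified stereo pair. The file names, focal length, and baseline are placeholder values for illustration, not those of any particular stereo camera:

```python
import cv2
import numpy as np

# Load a rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, the horizontal shift (disparity)
# between the two views. The parameters here are illustrative, not tuned.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM outputs fixed-point values

# Depth from disparity: Z = f * B / d, where f is the focal length in
# pixels and B is the baseline between the two sensors (assumed values).
FOCAL_PX = 700.0
BASELINE_M = 0.06

depth_m = np.zeros_like(disparity)
valid = disparity > 0
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```

The formula also shows why texture matters: block matching can only estimate disparity where the two images contain enough detail to find corresponding patches.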

To learn more about the working principle of stereo cameras, please visit the article What is a stereo vision camera?

Time of flight camera

Time of flight (ToF) refers to the time taken by light to travel a given distance. Time of flight cameras work on this principle: the distance to an object is estimated from the time taken by emitted light to return to the sensor after reflecting off the object’s surface.

A time of flight camera has three major components:

  1. ToF sensor and sensor module
  2. Light source
  3. Depth processor

The architecture of a time of flight camera is given below:

Figure 2 – Architecture of a time of flight camera

The sensor and sensor module are responsible for collecting the light reflected from the target object; the sensor converts the collected light into raw pixel data. The light source is either a VCSEL or an LED that typically emits light in the NIR (Near InfraRed) region. The depth processor converts the raw pixel data from the sensor into depth information. It also performs noise filtering and provides 2D IR images that the end application can use for other purposes.
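As a rough illustration of the math a depth processor performs, here is a minimal Python sketch of the continuous-wave (indirect) ToF principle commonly used with modulated VCSEL/LED sources. The modulation frequency, the raw sample values, and the sign convention in the arctangent are assumptions for illustration; a real depth processor does this in hardware, with calibration and noise filtering on top:

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)
F_MOD = 20e6        # assumed modulation frequency of the NIR light source (Hz)

def cw_tof_depth(a0, a1, a2, a3):
    """Depth from four samples of the reflected signal taken at 0, 90, 180,
    and 270 degrees of the modulation period (4-phase demodulation)."""
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)  # phase shift of the echo
    # Light travels to the object and back, hence the extra factor of 2 in 4*pi.
    return C * phase / (4 * np.pi * F_MOD)

# Example with made-up raw pixel samples.
print(cw_tof_depth(180.0, 120.0, 60.0, 140.0))
```

At 20 MHz modulation, the unambiguous range works out to C / (2 * F_MOD) = 7.5 m; commercial ToF cameras often combine several modulation frequencies to extend it.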

If you are interested in knowing more about how a time-of-flight camera works, check out the article What is a ToF sensor? What are the key components of a ToF camera? You can also learn how ToF compares with other depth-sensing technologies by reading How Time-of-Flight (ToF) compares with other 3D depth mapping technologies.

Structured light camera

A structured light depth-sensing camera uses a laser or LED light source to project light patterns (most often stripes) onto the target object. From the way the pattern is distorted by the object’s surface, the distance to the object can be calculated. A structured light 3D scanner is often used to reconstruct the 3D model of an object.
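The depth computation is again triangulation, much like stereo vision, with the projector taking the place of the second camera. Here is a hypothetical sketch assuming a rectified camera-projector pair; the focal length, baseline, and stripe shifts are made-up values:

```python
import numpy as np

# Camera-projector triangulation: the shift between where a stripe appears
# in the camera image and where the projector emits it (the projector
# treated as an inverse camera) plays the same role as stereo disparity.
FOCAL_PX = 800.0    # camera focal length in pixels (assumed)
BASELINE_M = 0.10   # camera-to-projector baseline in metres (assumed)

def depth_from_stripe_shift(shift_px):
    """Depth from observed stripe displacement: Z = f * B / shift."""
    shift = np.asarray(shift_px, dtype=np.float64)
    depth = np.full(shift.shape, np.inf)   # zero shift = point at infinity
    nonzero = shift > 0
    depth[nonzero] = FOCAL_PX * BASELINE_M / shift[nonzero]
    return depth

print(depth_from_stripe_shift([8.0, 16.0, 32.0]))  # larger shift -> closer surface
```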

Comparison between the three depth sensing technologies

All three of the 3D depth-mapping technologies discussed above come with their own pros and cons. The right choice depends entirely on the specifics of your end application. It is always recommended to take the help of an imaging expert like e-con Systems to guide you through the camera evaluation and integration process.

The three technologies can be compared across 10 parameters. A detailed comparison is given in the table below:

Parameter | Stereo Vision | Structured Light | Time-of-Flight
Principle | Compares disparities of stereo images from two 2D sensors | Detects distortions of an illuminated pattern by the 3D surface | Measures the transit time of light reflected from the target object
Software Complexity | High | Medium | Low
Material Cost | Low | High | Medium
Depth (“z”) Accuracy | cm | µm to cm | mm to cm
Depth Range | Limited | Scalable | Scalable
Low-light Performance | Weak | Good | Good
Outdoor Performance | Good | Weak | Fair
Response Time | Medium | Slow | Fast
Compactness | Low | High | Low
Power Consumption | Low | Medium | Scalable

Popular embedded vision applications that use depth sensing cameras

As mentioned before, depth sensing is required in any device that has to navigate autonomously. Given below are some of the most popular embedded vision applications that rely on 3D depth cameras for their seamless functioning:

  • Autonomous Mobile Robots (AMR)
  • Autonomous tractors
  • People counting and facial anti-spoofing systems
  • Remote patient monitoring

Autonomous Mobile Robots

Autonomous Mobile Robots (AMR) have helped automate various tasks across industrial, retail, agricultural, and medical applications. Following are a few examples of AMRs used in warehouses, retail stores, hospitals, office buildings, agricultural fields, etc.

  • Goods to person robots
  • Pick and place robots
  • Telepresence robots
  • Harvesting robots
  • Automated weeders
  • Patrol robots
  • Cleaning robots

Whatever the type, any robot that has to move autonomously without human supervision needs a 3D depth camera. Some robots combine automated and human-aided navigation; even in those cases, depth cameras help detect obstacles and avoid accidents. A delivery robot is one example of this type.

To develop a better understanding of how depth cameras function in AMRs, please have a look at the article How does an Autonomous Mobile Robot use time of flight technology?

Learn how e-con Systems helped a leading Autonomous Mobile Robot manufacturer enhance warehouse automation by integrating cameras to enable accurate object detection and error-free barcode reading.

View the case study

Autonomous tractors

Autonomous tractors are used to automate key farming processes such as plowing, weed & bug detection, and crop monitoring. They work similarly to AMRs when it comes to depth sensing: depth cameras help them measure the distance to obstacles and nearby objects in order to move from one point to another. This ability to move autonomously is a game-changer, given the labor shortage in the agricultural industry.

People counting and facial anti-spoofing systems

People counting and facial anti-spoofing systems are used to count people and to detect fraud in identity management and access control. 3D depth cameras such as stereo and time of flight cameras are needed here to pinpoint the exact position of a person during counting or facial recognition.

Remote patient monitoring

Modern remote patient monitoring systems leverage artificial intelligence and camera technology to detect key events like patient falls, enabling completely human-free, 24×7 patient monitoring. However, detecting falls normally requires capturing the patient’s video, then sharing and storing it for analysis, which raises privacy concerns. This is where a 3D depth camera can make a difference.

With the help of these cutting-edge camera systems, falls and patient movements can be tracked using depth data alone. This ensures privacy and gives the patient peace of mind, as no visually identifiable image or video is processed by the remote patient monitoring camera.

If you wish to read further on how 3D depth cameras (especially time-of-flight cameras) help patient monitoring systems improve privacy, please read How does a Time-of-Flight camera make remote patient monitoring more secure and private?

That’s all about depth-sensing cameras. In case you have any further queries on the topic, please feel free to leave a comment.

If you are looking for help in integrating 3D depth cameras into your autonomous vehicle, please write to us at camerasolutions@e-consystems.com. You could also visit the Camera Selector to have a look at our complete portfolio of cameras.
