Technology Deep Dive

Take a deep dive into various camera technologies to understand how they enhance the performance of camera-based devices and help machines see better.

Autonomous Mobile Robots, Camera Applications, Technology Deep Dive

Why Multi-Robot Autonomous Mapping Is Becoming Essential for Large-Scale Facilities (Part 1)

Arun Asokan
For years, facility digitization has relied on a single, expensive robot slowly traversing every corridor of a warehouse, laboratory, or industrial plant. While effective, this approach scales linearly, and it struggles to keep pace as facilities grow larger and the demand for up-to-date digital twins intensifies. It means that...
Edge AI Vision Kits, Our Product Insights, Technology Deep Dive

Lattice FPGA–Based Holoscan Cameras on NVIDIA AGX Thor & Orin for Scalable Multi-Sensor Edge AI Systems

Prabu Kumar
Present-day edge AI systems rely heavily on multi-modal sensor fusion, combining cameras, LiDAR, and radar to enable accurate, real-time decision-making. Existing platforms such as the NVIDIA Jetson Orin NX already support multi-camera use cases adequately. To advance this further, NVIDIA’s latest Jetson Thor series modules have been combined...
3D Depth Cameras, Our Product Insights, Technology Deep Dive, Time-of-Flight (ToF)

A Detailed Guide to Confidence Filtering in Indirect Time of Flight Cameras

Prabu Kumar
Indirect Time-of-Flight (iToF) sensors are among the most widely used 3D camera technologies for depth estimation. However, noisy depth measurements can limit their ability to deliver reliable, high-quality depth data. Applying appropriate filtering techniques helps improve accuracy by suppressing noisy pixels and enhancing depth reliability....
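As a rough illustration of the idea behind confidence filtering (a generic sketch, not the specific pipeline described in the article), low-confidence depth pixels can simply be invalidated; the array names and threshold value below are illustrative assumptions.

```python
import numpy as np

def confidence_filter(depth_map, confidence_map, threshold=0.2):
    """Invalidate depth pixels whose per-pixel confidence falls below the threshold.

    depth_map, confidence_map: 2D float arrays from an iToF camera
    (illustrative names; actual SDK outputs vary by vendor).
    """
    filtered = depth_map.copy()
    filtered[confidence_map < threshold] = np.nan  # mark low-confidence pixels as invalid
    return filtered

# Example with synthetic data
depth = np.random.uniform(0.5, 4.0, (480, 640))       # depth in meters
confidence = np.random.uniform(0.0, 1.0, (480, 640))  # normalized confidence
clean_depth = confidence_filter(depth, confidence, threshold=0.3)
```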
Sensor and ISP, Technology Deep Dive

Clear HDR vs DOL HDR: Which HDR Technology is Right for Your Application?

Prabu Kumar
Embedded vision applications require the camera image sensor to capture a clear image under challenging illumination conditions, with no motion artifacts. The Sony STARVIS 2 image sensor family meets this requirement by supporting advanced HDR modes, including Clear HDR and DOL HDR, enabling high-end embedded vision cameras to deliver...
Camera Applications, Smart Surveillance, Technology Deep Dive

3D Mobile Mapping for Digital Twins: Camera Features That Ensure Accuracy

Ram Prasad
Digital twins depend on how accurately physical environments are captured, reconstructed, and updated over time. Mobile mapping systems feed imaging data of streets, facilities, and structures into photogrammetry and SLAM pipelines to create virtual models. Therefore, camera performance determines if a digital twin can support simulation, planning, and monitoring with...
Sensor and ISP, Technology Deep Dive

Sony Pregius IMX264 vs. IMX568: A Detailed Sensor Comparison Guide

Prabu Kumar
IMX264 and IMX568 both belong to the Sony Pregius family of global-shutter image sensors. Both sensors are renowned for their sensitivity, low noise, and distortion-free imaging, and are well suited to high-speed vision applications, particularly those that demand strong low-light performance. While both sensors belong to the same Pregius family,...
Sensor and ISP, Technology Deep Dive

What is a dust denoising filter in a ToF camera, and how does it remove noise artifacts in vision systems?

Prabu Kumar
Time-of-Flight (ToF) cameras with IR sensors are susceptible to performance variations caused by environmental dust. This dust can create 'dust noise' in the output depth map, directly impacting camera accuracy and, consequently, the reliability of critical embedded vision applications....
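The article covers the dust denoising filter itself; as a generic illustration of how isolated, dust-like outliers can be suppressed in a depth map (a neighborhood-consistency sketch, not necessarily the filter discussed here), pixels that deviate strongly from their local median can be invalidated. The window size and deviation limit below are assumed values.

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_isolated_depth_noise(depth_map, window=5, max_deviation=0.15):
    """Invalidate depth pixels that deviate strongly from their local median.

    window: neighborhood size in pixels; max_deviation: allowed difference in
    meters. Both are illustrative, not values from a specific ToF camera.
    """
    local_median = median_filter(depth_map, size=window)
    outliers = np.abs(depth_map - local_median) > max_deviation
    cleaned = depth_map.copy()
    cleaned[outliers] = np.nan  # mark dust-like outliers as invalid
    return cleaned
```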
Technology Deep Dive, Time-of-Flight (ToF)

Indirect Time-of-Flight: Continuous-Wave or Pulsed – Which Suits Your Needs?

Prabu Kumar
Indirect ToF has emerged as a practical depth-sensing method for automation, robotics, and mobility systems. Continuous-wave (CW) and pulsed approaches achieve depth measurement differently, which shapes their performance in controlled and outdoor conditions. CW systems leverage phase shifts for fine mapping and richer scene data, while pulsed systems rely on timing offsets...
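For context, the standard textbook relations behind the two approaches can be written as follows, with c the speed of light, f_mod the modulation frequency, Δφ the measured phase shift, and Δt the measured round-trip delay; these are generic formulas, not values tied to any particular camera.

```latex
% CW iToF: depth from the phase shift measured at the modulation frequency
d_{\mathrm{CW}} = \frac{c}{4\pi f_{\mathrm{mod}}}\,\Delta\varphi

% Pulsed ToF: depth from the measured round-trip delay
d_{\mathrm{pulsed}} = \frac{c\,\Delta t}{2}

% The unambiguous range of a CW system is set by the modulation frequency
d_{\mathrm{max}} = \frac{c}{2 f_{\mathrm{mod}}}
```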
Optics, Technology Deep Dive

TLens vs VCM Autofocus Technology

Prabu Kumar
Autofocus is crucial in embedded vision applications across various industries, including medical devices, robotics, and autonomous vehicles. Traditional VCM-based autofocus systems, however, face several challenges: slow response, friction, vibration, motion blur, heat generation, and reduced reliability....