Key Takeaways
- Why mobility workloads strain perception when hardware is fragmented
- How consolidated vision units simplify deployment and upgrades
- What real-world challenges are addressed by unified Edge AI vision boxes
- How integrated platforms support long-duration mobility operation
Mobility systems operate in environments filled with movement, glare, outdoor exposure, and unpredictable surroundings, which places heavy pressure on perception pipelines. Robots, AMRs, and other mobility platforms depend on vision solutions to understand complex scenes while maintaining real-time responses. Given how quickly mobility workloads are growing, more and more teams are looking for a single solution that can handle advanced autonomy requirements.
e-con Systems®, a global leader in embedded vision solutions, recently launched its first Edge AI Vision Box, Darsi™ Pro, powered by the NVIDIA® Jetson Orin™ NX platform. Darsi Pro delivers up to 100 TOPS of AI performance, supports e-con’s broad camera ecosystem, enables powerful cloud-based device management, provides multi-sensor connectivity, and ensures rugged industrial-grade reliability. This production-ready platform is built for next-generation autonomous mobility systems.
In this blog, you’ll learn about the demanding vision challenges faced by mobility applications – and how e-con Systems’ Darsi Pro addresses those challenges.
Why Mobility Systems Need One Unified Vision Unit
Mobility platforms operate under demanding conditions such as continuous motion, rapidly changing lighting, vibration, and long-duty operation. These stresses expose weaknesses in perception systems, especially when cameras, sensors, and AI processing are sourced and integrated as separate units. In such setups, differences in latency, buffering, and synchronization introduce timing jitter and data misalignment, degrading perception reliability and forcing conservative system behavior.
Bringing everything under one vision unit helps teams avoid those gaps and gives mobility robots a steadier path from intake to output. A unified setup helps mobility developers handle upgrades and field deployment with fewer steps. Developers also get a single reference point to work with, and fleets experience fewer disruptions during software changes or maintenance cycles.
- Mobility robots have one environment for camera intake and AI-driven tasks
- Integration steps are reduced when cameras, sensors, and processing share a common unit
- Timing becomes easier to manage when perception elements follow the same design
- Large fleets rely on one system for updates, monitoring, and long-duration operation
Mobility Vision Challenges – And How Darsi Pro Helps Overcome Them
1) Multi-camera support
Challenge Faced:
Mobility platforms rely on several viewpoints to understand their surroundings in warehouses, factory lanes, or outdoor routes, especially when speed increases or lighting shifts. Robots dealing with cluttered floors or glare-heavy zones lose clarity when intake timing breaks down. Continuous video intake also grows harder to use when scenes change faster than the cameras can be aligned.
How Darsi Pro Helps:
Darsi Pro supports up to 8 GMSL or GMSL2 cameras through FAKRA connectors for mobility systems that depend on consistent intake from several viewpoints. A synchronized set of streams helps robots follow paths in warehouses and open areas, even when lighting changes unexpectedly. Movement at higher speeds still stays readable when each view arrives on time.
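To illustrate what synchronized intake looks like in software, here is a minimal sketch that groups frames from multiple cameras into a set whose timestamps agree within a tolerance. This is a generic illustration with hypothetical names, not Darsi Pro’s actual API; it assumes every frame is stamped against a shared clock.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Frame:
    camera_id: int
    timestamp_us: int  # microseconds since a shared clock epoch (assumption)

def group_synchronized(frames: List[Frame], num_cameras: int,
                       tolerance_us: int = 2000) -> Optional[List[Frame]]:
    """Return one frame per camera whose timestamps all fall within
    `tolerance_us` of each other, or None if no such set exists."""
    by_cam: Dict[int, List[Frame]] = {}
    for f in frames:
        by_cam.setdefault(f.camera_id, []).append(f)
    if len(by_cam) < num_cameras:
        return None
    # Anchor on one camera's frames and look for close matches elsewhere.
    anchor_id = min(by_cam)
    for anchor in by_cam[anchor_id]:
        group = [anchor]
        for cam_id, cam_frames in by_cam.items():
            if cam_id == anchor_id:
                continue
            match = min(cam_frames,
                        key=lambda f: abs(f.timestamp_us - anchor.timestamp_us))
            if abs(match.timestamp_us - anchor.timestamp_us) <= tolerance_us:
                group.append(match)
        if len(group) == num_cameras:
            return group
    return None
```

A real perception stack would do this continuously on streaming buffers, but the core idea is the same: frames only count as one scene observation when their timestamps agree.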
2) AI Processing
Challenge Faced:
Mobility units depend on detection, tracking, and classification to react in real time, but heavy workloads can strain the processing margin. Glare, motion, and long-running operations introduce moments where the AI pipeline falls behind. Once that slowdown begins, responses lose the pace needed for active routes.
How Darsi Pro Helps:
e-con Systems’ Darsi Pro delivers up to 100 TOPS and can operate in Super Mode (with customization) for workloads that need up to 157 TOPS, supporting mobility units that rely on detection, tracking, and classification. Robots that work with continuous video intake receive smooth imaging output because AI throughput stays dependable during long operation cycles, so glare or rapid motion does not slow the pipeline. Mobility use cases such as intelligent video analytics and smart DVRs can take advantage of that higher performance ceiling.
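One common way to detect that an AI pipeline is losing its real-time margin is to track per-frame processing time against a frame budget (about 33.3 ms per frame at 30 fps). The sketch below is a generic, hypothetical monitor, not part of any Darsi Pro SDK:

```python
from collections import deque

class LatencyMonitor:
    """Track recent per-frame processing times against a real-time budget
    and flag sustained overruns (hypothetical helper for illustration)."""
    def __init__(self, budget_ms: float, window: int = 30):
        self.budget_ms = budget_ms
        self.samples = deque(maxlen=window)  # keep only recent frames

    def record(self, elapsed_ms: float) -> None:
        self.samples.append(elapsed_ms)

    def falling_behind(self, overrun_fraction: float = 0.2) -> bool:
        """True when more than `overrun_fraction` of recent frames
        exceeded the budget -- a signal to shed load or drop frames."""
        if not self.samples:
            return False
        overruns = sum(1 for s in self.samples if s > self.budget_ms)
        return overruns / len(self.samples) > overrun_fraction
```

The occasional slow frame is tolerated; only a sustained pattern of overruns triggers the flag, which matches how a mobility stack would decide to degrade gracefully rather than react to a single spike.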
3) Multi-sensor fusion
Challenge Faced:
LiDAR, radar, cameras, and IMUs must deliver inputs that share the same timing for a robot to interpret movement clearly. Small timing gaps create uncertainty during turns or narrow passages. Robots using continuous sensing feel the impact quickly because their routes demand real-time input.
How Darsi Pro Helps:
This Edge AI Vision Box works with LiDAR, radar, cameras, IMUs, and other sensors using Precision Time Protocol (PTP), so mobility systems receive time-aligned inputs. Robots moving through factory lanes or outdoor paths gain reliable perception when timing does not drift, and navigation reacts more predictably when all sensing points follow one timing source.
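Once PTP gives every sensor a shared clock, fusion largely reduces to matching samples by timestamp. Below is a hedged sketch of nearest-timestamp lookup, for example finding the IMU reading closest to a camera frame; the function name and units are our own, not from any vendor API:

```python
import bisect
from typing import List, Tuple

def nearest_sample(timestamps: List[int], query_ts: int) -> Tuple[int, int]:
    """Given sorted sensor timestamps (e.g. IMU readings stamped by a
    shared PTP clock, in microseconds), return (index, |offset|) of the
    sample closest to a query timestamp such as a camera frame's."""
    i = bisect.bisect_left(timestamps, query_ts)
    candidates = []
    if i > 0:
        candidates.append(i - 1)  # last sample before the query
    if i < len(timestamps):
        candidates.append(i)      # first sample at/after the query
    best = min(candidates, key=lambda j: abs(timestamps[j] - query_ts))
    return best, abs(timestamps[best] - query_ts)
```

In practice a fusion stack would also interpolate between neighboring samples and reject matches whose offset exceeds a threshold, but the timestamp lookup is the piece that a shared PTP clock makes trustworthy.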
4) Camera compatibility
Challenge Faced:
Mobility routes include bright highlights, shadowed areas, and rapid movement, and camera modules do not always handle those changes cleanly. When lighting varies from one moment to the next, detail drops before the system can correct exposure. Fast motion adds another layer that makes scene capture harder to keep consistent.
How Darsi Pro Helps:
This Edge AI Vision Box works with multiple camera configurations, including 3 MP cameras for surround view, a 4K camera for front-facing applications, and a DMS camera (subject to camera compatibility). It also covers a wide range of e-con Systems’ GMSL camera modules. Hence, robots operating under glare, bright highlights, or shadowed scenes retain more detail through HDR options that manage both ends of the light range. Tasks that involve fast movement benefit from global shutter imaging that avoids distortion. Moreover, mobility units in indoor and outdoor routes rely on these imaging options to maintain perception when lighting keeps changing.
5) Performance in rugged environments
Challenge Faced:
Robots face dust, vibration, and temperature variation during long operation cycles, and many units lose stability under those conditions. Imaging and processing can slip when the environment shifts faster than the hardware can compensate. Extended duty pushes weak systems further until perception no longer remains dependable.
How Darsi Pro Helps:
Darsi Pro uses a rugged enclosure, a fanless build, and a wide operating temperature range suited for indoor and outdoor mobility environments. Robots running long cycles on warehouse floors or open areas remain operational when dust or vibration is present, and imaging stays reliable even during extended duty.
6) Wide interface support
Challenge Faced:
Mobility workloads require several links for cameras, sensors, displays, and networks, and scattered hardware leaves gaps in those paths. When an interface does not match the rest of the setup, data slows or drops entirely. This can be a major hassle because robots moving through large sites need every connection to respond without hesitation.
How Darsi Pro Helps:
e-con Systems’ Darsi Pro comes with Dual GbE with PoE, USB 3.2, HDMI, CAN, and support for wireless modules that mobility units rely on. AMRs and warehouse robots can bring together multiple capabilities without needing extra components. Their perception tasks become easier when all the interfaces are available in one unit. Also, deployment is a lot simpler because the I/O already supports mobility layouts used in real environments.
7) Cloud-based fleet management
Challenge Faced:
Large fleets spread across warehouses or outdoor areas need regular updates and monitoring, yet manual checks take time. When units fall out of sync, perception accuracy varies from one robot to another. The result? Mobility teams struggle to sync up deployments without a central way to adjust settings or send updates.
How Darsi Pro Helps:
This Edge AI Vision Box works with CloVis Central™ for remote monitoring, configuration, health checks, and OTA updates used by mobility fleets. Robots deployed in warehouses or outdoor sites can be on the same page because updates do not rely on repeated on-site visits. Diagnostics reach deployed units through a single cloud system linked to the box. Hence, mobility operations that run for long hours maintain consistency through that centralized support.
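A central fleet view also makes version drift easy to spot. The snippet below is a generic illustration of flagging units that still need an OTA update; it is not CloVis Central’s API, and the unit IDs and version strings are hypothetical:

```python
from typing import Dict, List

def stale_units(fleet_versions: Dict[str, str], target: str) -> List[str]:
    """Return unit IDs not yet on the target firmware version --
    candidates for the next OTA rollout wave (illustrative only)."""
    return sorted(uid for uid, ver in fleet_versions.items() if ver != target)
```

Keeping every robot on the same software baseline is what makes perception behavior comparable across a fleet, which is the consistency problem centralized management tools exist to solve.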
e-con Systems’ Darsi Pro – a Cutting-Edge AI Vision Box
Since 2003, e-con Systems has been designing, developing, and manufacturing OEM cameras. Over the years, we have deployed several camera solutions for mobility use cases, and that experience has guided the creation of Darsi Pro, including the tuning and testing methods used to prepare vision hardware for challenging conditions.
If you want to evaluate this Edge AI Vision Box or need more information about how it fits your requirements, please reach out.
Please write to camerasolutions@e-consystems.com.
Prabu is the Chief Technology Officer and Head of Camera Products at e-con Systems, bringing more than 15 years of experience in the embedded vision space. He has deep knowledge of USB cameras, embedded vision cameras, vision algorithms, and FPGAs, and has built 50+ camera solutions spanning domains such as medical, industrial, agriculture, retail, biometrics, and more. He also has expertise in device driver and BSP development. Currently, Prabu’s focus is on building smart camera solutions that power new-age AI-based applications.