
Integrating a synchronized multi-camera system into automated sports broadcasting systems

Automated sports broadcasting systems are gaining in popularity globally. They need a multi-camera system to stream live matches and capture images & videos for sports analytics purposes. Learn everything you need to know about integrating a multi-camera solution into these new-age devices.

The birth of multi-camera systems can be attributed to the rapid twin evolution of processing platforms and cameras. While advancements in processors made edge-based analytics possible, new-age cameras helped capture high-quality images. Of course, with innovation becoming the mantra of several industries like automated sports broadcasting, many new use cases also emerged in which a single camera was simply not enough to meet the demands of new-age applications. Instead, they required multiple cameras to be connected to a single host processor in the same device.

In this blog, let’s take a deeper look at the role of synchronized multi-camera systems in automated sports broadcasting, and the key factors to consider while integrating these cameras into them.

What is a synchronized multi-camera system?

A multi-camera system, as the term suggests, is a camera setup or configuration that involves more than one camera. Depending on the use case, the cameras can serve a single purpose or multiple purposes. There are two ways in which you can achieve multi-camera synchronization – software-level synchronization and hardware-level synchronization.

Software synchronization involves passing a trigger command to the cameras to initiate image capture. However, since there is a possibility of different cameras in the system having different frame starts, the synchronization might not be accurate. Hence, this method is typically recommended only in applications where the environment remains static.
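To make the idea concrete, here is a minimal sketch of software-level triggering: a single software "trigger" releases all camera threads at once, and we measure the residual skew. The `capture_frame` function is a hypothetical stand-in for a driver or SDK capture call; on real hardware, differing frame starts add further misalignment on top of what this simulation shows.

```python
import threading
import time

# Hypothetical stand-in for a per-camera capture call (e.g., a driver/SDK
# "grab frame" command). Here it simply records when the software trigger
# actually reached each camera thread.
def capture_frame(cam_id, barrier, timestamps):
    barrier.wait()  # all threads are released together by the software trigger
    timestamps[cam_id] = time.perf_counter()

NUM_CAMERAS = 3
timestamps = [0.0] * NUM_CAMERAS
barrier = threading.Barrier(NUM_CAMERAS)

threads = [threading.Thread(target=capture_frame, args=(i, barrier, timestamps))
           for i in range(NUM_CAMERAS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Residual skew: even with a shared trigger, OS scheduling (and, on real
# cameras, different frame starts) keeps the captures from aligning exactly.
skew_us = (max(timestamps) - min(timestamps)) * 1e6
print(f"trigger-to-capture skew across cameras: {skew_us:.1f} us")
```

This nonzero skew is exactly why software synchronization is recommended only for static scenes.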

In hardware synchronization, simultaneous image capture is initiated in all the cameras by means of a hardware trigger, which is nothing but an external PWM (Pulse Width Modulation) signal. This ensures that all the cameras have the same frame start, and hence all the frames align with each other perfectly in terms of the moment of capture.
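As a quick sketch of how the hardware trigger relates to the frame rate: the PWM trigger frequency equals the desired synchronized frame rate, and the pulse width follows from the duty cycle. The 10% duty cycle below is purely an assumption – the actual pulse width requirement comes from the sensor's trigger specification.

```python
# Deriving PWM trigger parameters for hardware synchronization.
# The trigger frequency equals the target frame rate; the duty cycle
# is an assumed value and depends on the sensor's trigger spec.
fps = 30                 # target synchronized frame rate
duty_cycle = 0.10        # assumed 10% high time for the trigger pulse

period_ms = 1000.0 / fps             # one trigger pulse per frame
pulse_width_ms = period_ms * duty_cycle

print(f"PWM period: {period_ms:.2f} ms, pulse width: {pulse_width_ms:.2f} ms")
```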

Why are multi-camera systems popular in sports broadcasting?

Automated sports broadcasting, as a term, broadly means streaming and broadcasting live sports matches without the help of field operators or crew. Unlike professional sports broadcasting, which needs an army of people to operate the cameras and associated systems, automated sports broadcasting relies on embedded cameras deployed on the field that automatically send a feed to be telecast on TV or a streaming platform.

To learn more about what automated sports broadcasting systems are and the importance of cameras in them, please visit the article Role of a camera in AI-driven automated sports broadcasting.

Now, a single camera will not be enough to cover the entire Field of View (FoV) while ensuring superior image quality. Typically, a minimum of 3 cameras is recommended for FoV coverage, better resolution, and reduced lens distortion. We will discuss these in detail in the next section as we go through the factors to consider while picking and embedding a multi-camera system into automated sports broadcasting systems.

Interested in finding the FOV required for your application? Jump right into our FOV calculator

Go to FOV Calculator

Factors to consider while integrating multi-camera solutions into sports broadcasting applications

Selecting a multi-camera system involves the evaluation of parameters like resolution, frame rate, sensitivity, dynamic range, etc. More importantly, your end application must be analyzed in detail, since there is no one-size-fits-all approach here. Having said that, let’s look at some of the key factors to consider while integrating a multi-camera system into automated sports broadcasting systems.

Lighting conditions

The very first factor you need to look at while integrating a multi-camera system into an automated sports broadcasting system is the environment in which the cameras operate. If they are placed in an outdoor sports arena, it is recommended to go with cameras that support High Dynamic Range (HDR). This would greatly help in coping with varying lighting conditions and ensuring suitable image quality.

Quality of streaming

Another key factor to consider while choosing a multi-camera system for this application is streaming quality. Basically, you need cameras with high resolution – typically 1080p or 4K. Along with this, you need to transfer the streams over a fast network to enable an audience-friendly viewing experience.

You can also choose the resolution based on the level of detail required in the captured images. For instance, if your application involves ball tracking and player analysis, integrated AI algorithms are needed to derive actionable insights about player and team performance. This, in turn, requires the images to be cropped. Hence, 4K resolution is generally preferred over 1080p, since it retains more detail after cropping and ensures a better viewing experience.
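The benefit of 4K for cropping can be shown with simple arithmetic. In this sketch, the 25% crop fraction is an illustrative assumption – it stands in for a region of interest such as the area around the ball.

```python
# Comparing the pixel detail left after cropping the same region of interest
# from a 1080p frame versus a 4K (UHD) frame. The crop fraction is illustrative.
res_1080p = (1920, 1080)
res_4k = (3840, 2160)
crop_fraction = 0.25   # assume the ROI spans 25% of each dimension

def roi_pixel_count(width, height, frac):
    return int(width * frac) * int(height * frac)

pixels_1080p = roi_pixel_count(*res_1080p, crop_fraction)
pixels_4k = roi_pixel_count(*res_4k, crop_fraction)

# The 4K crop carries 4x the pixels of the 1080p crop for the same ROI.
print(pixels_1080p, pixels_4k, pixels_4k // pixels_1080p)
```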

Field of View (FoV)

Although a 180-degree Field of View can be achieved using a single camera, it might lead to lens distortion. If the lens FoV is larger than the sensor's field of view, the image might appear shrunk or squeezed to fit into the frame. Hence, a multi-camera system with 2-3 cameras is preferred to cover the entire wide field of view. If we are to go with 3 cameras, each camera should have a field of view of around 70 degrees (leaving some overlap between adjacent cameras for stitching) in order to cover the 180-degree field of view without lens distortion. When it comes to the frame rate, 30 fps at 4K would suffice in most cases.
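A quick calculation shows why ~70 degrees per camera works out for a 3-camera, 180-degree setup: the excess coverage becomes the overlap budget for the stitching seams. The exact overlap in a real rig comes from calibration; this is only the geometric sketch.

```python
# Checking that three ~70-degree cameras cover 180 degrees with enough
# seam overlap for stitching. Real rigs derive the overlap from calibration.
num_cameras = 3
per_camera_fov = 70      # degrees
target_fov = 180         # degrees

# Total coverage minus the target gives the overlap budget shared by the seams.
overlap_budget = num_cameras * per_camera_fov - target_fov
overlap_per_seam = overlap_budget / (num_cameras - 1)

print(f"{overlap_per_seam:.0f} degrees of overlap at each of the "
      f"{num_cameras - 1} seams")
```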

Number of cameras

Deciding upon the number of cameras is governed by lens distortion and the extent of detail you want to cover in the desired region of interest. A higher number of cameras helps eliminate lens distortion. At the same time, it also offers a higher effective resolution, as more pixels become available for the same field of view. This is extremely helpful while carrying out operations like cropping for the purposes of ball tracking and player analysis.

e-con Systems has worked with multiple sports broadcasting system providers where we have used 3 or more cameras to cover the entire field of view of a sports field. One of e-con’s key differentiators here is our proprietary 180-degree stitching algorithm using which the images from multiple cameras are stitched together to get a 180-degree output image.

Type of camera interface

Since the system involves transferring data from three cameras at a high resolution and frame rate, the MIPI interface is the best choice. This is because the USB 3.0 interface offers a maximum theoretical bandwidth of 5 Gbps, while MIPI CSI-2 can provide a bandwidth of up to 10 Gbps with four lanes.
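A rough bandwidth estimate makes the interface choice concrete. The sketch below assumes uncompressed 10-bit raw output per camera – the actual figure depends on the sensor's pixel format – and compares the aggregate of three 4K@30fps streams against the two interface limits quoted above.

```python
# Rough bandwidth check: three uncompressed 4K@30fps streams vs. USB 3.0
# (5 Gbps) and a 4-lane MIPI CSI-2 link (~10 Gbps). Assumes 10-bit raw
# pixels; real numbers depend on the sensor's output format.
width, height, fps, bits_per_pixel = 3840, 2160, 30, 10

per_camera_gbps = width * height * fps * bits_per_pixel / 1e9
total_gbps = 3 * per_camera_gbps

print(f"per camera: {per_camera_gbps:.2f} Gbps, three cameras: {total_gbps:.2f} Gbps")
print("fits in USB 3.0 (5 Gbps)?", total_gbps < 5)
print("fits in 4-lane MIPI (~10 Gbps)?", total_gbps < 10)
```

Even before protocol overhead, the three streams together exceed what a single USB 3.0 link can carry, while they fit within the MIPI budget.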

Host platform or processor

The host platform used in an automated sports broadcasting system has to be capable of handling the high throughput from the cameras. Hence, high-end processors like the NVIDIA Jetson AGX Xavier or AGX Orin are recommended.

Method of synchronization

For the 180-degree stitching in sports broadcasting cameras to work effectively, frame-level synchronization has to happen, which demands hardware-based camera synchronization. This ensures that there are no deformations in the stitched image.

Image processing and analysis

In image processing, the ISP (Image Signal Processor) is one of the most important factors to consider. There are two options here. The first is to use the internal ISP that comes with platforms like NVIDIA or Qualcomm. The other is to use an external ISP that is independent of the host platform.

Using an external ISP comes with its own advantages. First of all, you don’t have to worry about tuning the image quality specifically for each platform. Another advantage is that you reduce the load that the three camera pipelines would otherwise place on the host processor.

Once the basic image processing is done, the synchronized frames from the three cameras are received at the MIPI receiver on the processing platform. The MIPI and camera drivers make the camera frames accessible to the GPU. Using pre-calibrated image stitching vectors, the GPU stitches the frames from the three camera sources into a single 180-degree frame. The output frames are published as an EGLStream so that any supporting framework can consume them and carry out the necessary processing.
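To illustrate the seam handling inside the stitching step, here is a minimal sketch that blends one scanline from each camera. It assumes the per-camera alignment (the calibrated stitching vectors) has already been applied, so each camera contributes a strip that overlaps its neighbour by a few pixels; real pipelines do this per pixel on the GPU.

```python
# Minimal seam-blending sketch: cross-fade overlapping strips from three
# already-aligned cameras into one panoramic scanline. The overlap width
# is illustrative; a real system derives it from calibration.
OVERLAP = 4  # pixels of overlap at each seam

def blend_strips(left, right, overlap):
    """Linearly cross-fade `left` into `right` over `overlap` samples."""
    out = left[:-overlap]
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # ramp weight from left to right
        out.append(left[-overlap + i] * (1 - w) + right[i] * w)
    out.extend(right[overlap:])
    return out

cam1 = [10.0] * 8   # one scanline from each (already-warped) camera strip
cam2 = [20.0] * 8
cam3 = [30.0] * 8

panorama = blend_strips(blend_strips(cam1, cam2, OVERLAP), cam3, OVERLAP)
print(len(panorama))   # 3 strips of 8 px, minus 2 seams of 4 px overlap
```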

Advanced analytics using the AI algorithm can be done either completely on the cloud, or partially on the edge and partially on the cloud.

In the former, the stitched images are compressed in the H.264 format and streamed to the cloud right away for further analyses like ball tracking, image cropping, and player performance analysis. In the latter, cropping of the specific regions of interest is done on the edge, while the rest of the analyses are carried out in the cloud. Here, the images are compressed and sent to the cloud after cropping.
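The appeal of cropping on the edge is easy to quantify: only the region of interest travels to the cloud. The ROI size below is an assumption for illustration, and the comparison is in uncompressed pixel counts – H.264 compression would shrink both streams further.

```python
# Estimating the data reduction from cropping the region of interest on the
# edge before upload. Sizes are uncompressed pixel counts; H.264 compression
# would reduce both further. The ROI dimensions are assumed.
frame_w, frame_h = 3840, 2160   # full 4K source frame
roi_w, roi_h = 1280, 720        # assumed ROI around the ball/player

full_pixels = frame_w * frame_h
roi_pixels = roi_w * roi_h
reduction = full_pixels / roi_pixels

print(f"cropping on the edge sends {reduction:.0f}x fewer pixels to the cloud")
```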

The method you follow for image processing and analysis can be chosen based on the flexibility you need. Keep in mind that edge processing has its limitations and doesn’t offer as much computational and analytical capability as the cloud.

Get to know how e-con Systems helped a global amateur sports broadcaster automate soccer broadcasting and improve the viewing experience.

View Case study

e-con Systems’ expertise in developing multi-camera-enabled sports broadcasting applications

e-con’s expertise spans USB cameras, MIPI cameras, and GMSL cameras with a wide range of options for each. Being an elite partner to NVIDIA, we have camera modules for all the Jetson processors including the latest NVIDIA Jetson AGX Orin. Other Jetson platforms supported by our multi-camera solutions include Xavier NX, TX2, TX2 NX, and AGX Xavier. Their resolutions range from Full HD to 18 MP with sensors from Sony, Onsemi, and Omnivision.

As mentioned earlier, our 180-degree stitching solution is based on our 13 MP camera modules. They can help capture 4K images from multiple cameras to produce a 24 MP RGBA output image with a 180-degree field of view. Apart from the 13 MP camera module, we can also enable stitching with any of e-con’s other camera modules.

Check out our Camera Selector to have a look at our entire portfolio. If you need help in integrating a multi-camera system into your automated sports broadcasting systems, please write to us at
