Nowadays, surveillance cameras are mandatory in workplaces and public spaces for security purposes.
Have you ever wondered what happens behind a surveillance camera?
This article briefly explains how we get live video from a surveillance camera.
Surveillance Camera - An Embedded System Product
A surveillance camera is an embedded system made up of the following:
Camera sensor - Device that captures the live image.
Processor unit - Receives the captured live image in a RAW format (RGB, YUV, BAYER) and processes it. Generally, a microprocessor or microcontroller does this job.
Network device - Medium used by the processor unit to send the processed image to a desktop PC, laptop or any network server.
Important terms in Surveillance camera
⚫ Frame is image data consisting of Width x Height pixels.
⚫ Resolution of a frame is defined by its width and height. Width is the number of pixels per line, and height is the number of lines in the frame.
⚫ Pixel is the data that represents the colour information of the captured image.
⚫ Camera Preview is showing live video, i.e. the continuous frames captured from the camera, on a connected display.
⚫ Camera Streaming is sending the live video to a connected network device for viewing at the receiving end.
Camera Streaming components
When we say streaming, there are two components: 1. Stream Generator 2. Stream Receiver.
The Stream Generator consists of:
⚫ Camera sensor
⚫ Capture unit in the processor
⚫ Network device
⚫ Display device (optional) - can be used to see the live video (not yet encoded) at the generating end.
The Stream Receiver consists of:
⚫ Network device
⚫ Display device
Key Points in Camera Streaming Performance
In camera streaming over a network, there are two key points to focus on.
Video Lagging: the time difference between when a frame is captured and when it is displayed. Possible causes of lagging are:
Encoding in software
Decoding in software
Frame Drop: the number of frames per second (fps) is the frame rate. A typical frame rate for live video is 25 fps. If the rate falls below this, you will notice dropped frames on moving objects. Possible causes of frame drop are:
Display frame rate
Encoding in software
Tips on Camera Streaming Performance Improvement
Make sure your camera sensor is delivering the configured frame rate.
If the camera sensor uses a parallel interface, probing the VSYNC line shows each frame received from the camera, and you can calculate the fps by counting VSYNC pulses within one second.
If the camera sensor uses a MIPI interface, the corresponding driver reports frame-reception details in /proc/interrupts.
After starting camera streaming with VIDIOC_STREAMON, you can calculate the fps by counting the VIDIOC_DQBUF and VIDIOC_QBUF V4L2 ioctls completed in one second. Unlike the methods above, this measurement includes copying the frame to allocated memory.
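The ioctl-counting method above can be sketched as follows. This is a minimal sketch, assuming a V4L2 capture device that has already been configured and started streaming with memory-mapped buffers; setup (format negotiation, buffer allocation, VIDIOC_STREAMON) and error handling are omitted:

```c
#include <time.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* frames dequeued / elapsed seconds = measured frame rate */
static double measured_fps(unsigned frames, double seconds)
{
    return frames / seconds;
}

/* Count VIDIOC_DQBUF/VIDIOC_QBUF round trips completed in one second.
 * 'fd' is an open V4L2 capture device that is already streaming. */
static double count_fps(int fd)
{
    struct v4l2_buffer buf = {0};
    unsigned frames = 0;
    time_t start = time(NULL);

    while (time(NULL) - start < 1) {
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {  /* frame received   */
            ioctl(fd, VIDIOC_QBUF, &buf);          /* re-queue buffer  */
            frames++;
        }
    }
    return measured_fps(frames, 1.0);
}
```

Comparing this figure against the sensor's configured rate tells you whether frames are being lost before or after the capture unit.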
Use DMA for copying frames from the capture unit to allocated memory.
Encoding must be used so that compressed frames are sent instead of the raw frames captured from the camera sensor. Otherwise, there will be huge video lag at the receiving end.
Software encoding consumes CPU load and hence causes video lagging. Instead, use the HW codec available in your system.
If no HW codec is available in your system, try to reduce CPU load by running the SW encoding on a co-processor such as the NEON SIMD unit.
Set the frequency of the HW codec to maximum, and configure the quantization level in the encoder for the required video quality.
The encoder expects input data in I420 format, but most camera sensors do not support this format. Use the HW image converter unit to produce I420 data from the captured camera frame.
If there is no HW image converter, write a SW image-conversion routine and make it runnable on a co-processor such as the NEON core.
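For grayscale sources such a conversion is almost trivial, because I420's Y plane is exactly the grayscale data and neutral chroma is the constant 128. A minimal sketch in plain C (a NEON version would use SIMD intrinsics, omitted here; the function name is our own):

```c
#include <stdint.h>
#include <string.h>

/* Convert a GRAY8 frame to I420.
 * I420 layout: Y plane (w*h bytes), then U and V planes (w/2 x h/2 each).
 * For grayscale input, Y is the input itself and chroma is neutral (128).
 * 'w' and 'h' are assumed even; 'i420' must hold w*h*3/2 bytes. */
void gray8_to_i420(const uint8_t *gray, uint8_t *i420, int w, int h)
{
    size_t luma = (size_t)w * h;
    memcpy(i420, gray, luma);            /* Y plane: copy luma as-is */
    memset(i420 + luma, 128, luma / 2);  /* U + V planes: no colour  */
}
```

Because the chroma planes are constant, this costs one copy and one fill per frame, which is far cheaper than a true colour-space conversion.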
Avoid memory-copy operations between the capture unit and the encoding unit. This can be done by sharing the same memory allocated by one of these units; use the respective V4L2 APIs for this operation.
e-con's experience with monochrome camera streaming performance
A monochrome camera generates grayscale images instead of colour images.
While implementing monochrome camera streaming, we faced the following issues and solved them with the tips above.
Encoding was not functioning.
This was because the encoder expects I420 as input, while the camera generates only GRAY8.
We tried to use the HW image converter for GRAY8-to-I420 conversion, but the iMX6 does not support this conversion.
With software conversion we could encode and stream, but frames were dropped.
This was due to the SW conversion and the resulting high CPU load.
To get rid of this issue, we implemented a simple SW conversion on NEON. This reduced the frame drops, but they were still noticeable.
To improve further, we modified the capture driver source to generate I420 frames directly, without SW conversion on NEON or in the user application. This gave us the expected frame rate, and the frame-drop issue was solved.
I hope this article gave you useful points to note and implement in your camera streaming. Try our eSOMiMX6 Ankaa board and cameras for experimenting with your camera streaming.