David Henderson, Director of the Industrial Segment for Micron Technology’s Embedded Business Unit.
As artificial intelligence technologies and applications have multiplied, integrators and manufacturers across many industries have worked to place more powerful computing devices in the field, as close as possible to where the data is generated and used. This move to “edge computing” offers significant benefits for the gathering and analysis of video security data.
This Industry Influencer Q&A, sponsored by video security storage manufacturer Micron Technology Inc., features David Henderson, Director of Micron’s Industrial Segment, who explains how to enable AI at the edge and outlines the memory and storage requirements for AI-enabled edge devices in video security applications.
Henderson: AI implementation in video security applications should consider the system architecture from an end-to-end, edge-to-cloud perspective. With the increase in processing power and algorithm development at edge devices in recent years, AI-enabled cameras can run advanced video analytics to extract valuable, actionable insights from captured data.
By enabling AI functionality in the camera, AI workloads can be split between camera devices and cloud data centers to improve operating efficiency, provide faster responses (lower latency) and reduce bandwidth consumption. For example, in a situation where a security system needs an instant response and action after analyzing a face or a number plate, sending data to the cloud and waiting for an answer is not feasible.
This is why edge analytics are becoming an area of significant investment for video security systems. They keep transmission bandwidth consumption low because only necessary data is sent to the cloud. Enabling AI at the camera ensures quicker alerts in case of threat detection, allowing faster data-driven analysis and decision-making. Edge-based analytics also come with lower hardware and deployment costs because fewer on-premises server resources are needed for the security solution.
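The edge/cloud split described above can be sketched in a few lines. This is a hypothetical illustration (the function and event names are assumptions, not a Micron or camera-vendor API): latency-critical detections are acted on locally, and only compact metadata records, never raw video, are forwarded to the cloud.

```python
# Hypothetical sketch of splitting AI workloads between camera and cloud.
import json
import time

# Events that need an immediate local response (assumed categories).
LATENCY_CRITICAL = {"intrusion", "blacklisted_plate"}

def analyze_frame(frame):
    """Stand-in for an on-camera neural-network inference call."""
    # A real camera SoC would run a DNN here; we pass detections through.
    return {"event": frame.get("event"), "confidence": frame.get("confidence", 0.0)}

def handle_frame(frame, cloud_queue, local_alarm):
    detection = analyze_frame(frame)
    if detection["event"] in LATENCY_CRITICAL and detection["confidence"] > 0.8:
        # Act immediately at the edge -- no cloud round-trip, no added latency.
        local_alarm(detection)
    if detection["event"] is not None:
        # Send only a small metadata record upstream, not the raw video stream.
        cloud_queue.append(json.dumps({"ts": time.time(), **detection}))

alerts, queue = [], []
handle_frame({"event": "intrusion", "confidence": 0.95}, queue, alerts.append)
handle_frame({"event": None}, queue, alerts.append)  # nothing of interest: nothing sent
```

The bandwidth saving comes from the last branch: frames with no detected event generate no upstream traffic at all.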
AI capabilities are becoming even more essential as large-scale projects such as smart cities grow and gain more advanced functionality. Such trends lead to the development of new applications and AI-enabled camera solutions.
Multi-imager cameras, sometimes referred to as multi-sensor cameras, typically use multiple camera lenses to offer panoramic video coverage of 180- to 360-degree scenes. Multi-directional overviews combined with artificial intelligence make them ideal for wide-area coverage applications such as traffic intersections, retail and public spaces.
These cameras capture up to four independent video streams, and each image sensor can be configured with unique video analytics parameters to detect only the people and objects of interest in its scene while automatically optimizing images. In retail, a loitering-detection feature can provide alerts when people linger at a location longer than usual. Facial recognition matched against watch lists can immediately detect people who may be known threats.
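The loitering-detection feature mentioned above reduces, at its core, to tracking dwell time per person in a zone. The sketch below is an assumed, simplified version of that logic (real analytics use object tracking across frames; the threshold and data shape here are illustrative):

```python
# Minimal loitering-detection sketch (assumed logic, not a vendor algorithm):
# alert on any tracked person whose dwell time in the zone exceeds a threshold.

LOITER_THRESHOLD_S = 60.0  # illustrative threshold

def loitering_alerts(observations, threshold=LOITER_THRESHOLD_S):
    """observations: list of (track_id, timestamp) sightings inside the zone.
    Returns the set of track ids whose dwell time exceeds the threshold."""
    first_seen, last_seen = {}, {}
    for track_id, ts in observations:
        first_seen.setdefault(track_id, ts)
        last_seen[track_id] = ts
    return {tid for tid in first_seen
            if last_seen[tid] - first_seen[tid] > threshold}

# Person "a" lingers for 90 s; person "b" passes through in 5 s.
sightings = [("a", 0.0), ("b", 10.0), ("b", 15.0), ("a", 90.0)]
print(loitering_alerts(sightings))  # -> {'a'}
```

Running this per sensor, with a different threshold per scene, is one way the per-imager customization described above could be expressed.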
The use of automatic number plate recognition (ANPR) cameras has risen thanks to a wide range of applications, including traffic management, smart parking, toll automation and intelligent transportation systems in smart cities. For business organizations, cameras with embedded AI-powered ANPR software can automatically recognize license plates and store the metadata in a database for future searches. At the same time, plates can be compared against watch lists to identify whitelisted, blacklisted or suspect vehicles and trigger actions such as opening a barrier gate for access control at restricted sites.
Nowadays, smart cities can integrate ANPR cameras with city sensors for vehicle behavior analysis and traffic optimization. The cameras can address free-flow tolling and real-time road monitoring challenges. Automated parking systems and on-street and off-street ANPR parking solutions reduce the need for gates, tickets and operators, and ease traffic congestion.
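The watch-list step of an ANPR pipeline can be sketched as a simple lookup once the plate text has been read. Everything below is an assumption for illustration (the plate values, list names and action strings are hypothetical; real deployments use OCR/DNN plate readers and vendor-specific access-control integrations):

```python
# Hedged sketch of on-camera ANPR watch-list matching (hypothetical data).

WHITELIST = {"AB12CDE"}   # vehicles allowed through the barrier gate
BLACKLIST = {"ZZ99ZZZ"}   # suspect vehicles that should raise an alert

def classify_plate(plate):
    plate = plate.replace(" ", "").upper()  # normalize before matching
    if plate in WHITELIST:
        return "open_gate"     # grant access to the restricted site
    if plate in BLACKLIST:
        return "raise_alert"   # notify operators of a suspect vehicle
    return "log_only"          # just store metadata for later searches

events = [classify_plate(p) for p in ("ab12 cde", "ZZ99ZZZ", "XY00XYZ")]
print(events)  # -> ['open_gate', 'raise_alert', 'log_only']
```

The normalization line matters in practice: plate readers return inconsistent spacing and casing, so matching raw strings against a watch list would silently miss entries.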
New video security cameras are now being designed to process multiple video streams and higher resolutions (4K and above) to provide AI algorithms with the large dataset of detailed images and videos required for analysis. In addition, increasing amounts of metadata are captured and stored on the device to enable operators to quickly search and find relevant video footage. Much of the processing now occurs at the device level.
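The metadata-driven search described above can be pictured as filtering indexed clip records by detected attributes. The record schema and field names below are invented for illustration; on-device implementations would use an embedded database rather than a Python list:

```python
# Illustrative sketch of searching on-device video metadata (assumed schema):
# each record indexes a stored clip by timestamp and detected-object attributes.

records = [
    {"clip": "c1.mp4", "ts": 100, "object": "person",  "color": "red"},
    {"clip": "c2.mp4", "ts": 160, "object": "vehicle", "color": "blue"},
    {"clip": "c3.mp4", "ts": 220, "object": "person",  "color": "blue"},
]

def search(records, **criteria):
    """Return the clips whose metadata matches every given attribute."""
    return [r["clip"] for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

print(search(records, object="person", color="blue"))  # -> ['c3.mp4']
```

Because the operator queries a few bytes of metadata per clip instead of scanning video, searches stay fast even as recorded footage grows.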
Implementing AI at the camera brings unique challenges, including power constraints, performance limits, durability issues and environmental impacts. Here are some factors that camera manufacturers should consider when designing AI-enabled cameras.
With more computing power from new chipsets enabling deep neural network processing on the camera itself for edge intelligence, memory and storage technologies need to keep up with these evolving changes in processing and workload requirements. Here’s a look at Micron’s memory and storage solutions for AI-enabled cameras (see Micron’s respective product datasheets for detailed specifications):
LPDDR4 and LPDDR5 for edge computing
e.MMC for code/application storage
Industrial microSD cards for data storage