Chipsets Bring AI to the Edge

March 11, 2020
Technology born in the consumer industry will bring better cameras, sensors and intelligence, which means bigger business for security
This article originally appeared in the March 2020 issue of Security Business magazine. When sharing, don’t forget to mention @SecBusinessMag!

Research firm IHS Markit predicts that sales of semiconductors for artificial intelligence applications will reach $128.9 billion in 2025, up from $42.8 billion last year. AI chip sales to solution providers – including those in the business of electronic security – will triple by 2025, the report says.

AI chipsets enable devices and systems to better understand their surrounding environment and user roles, and to execute continuously improving security functions – giving security cameras, in particular, the ability to learn without explicitly programmed algorithms.

There is no question that modern advances in AI and Deep Learning technologies have enabled organizations to greatly scale their defensive capabilities and better support safe cities and smart spaces. Between detecting evolving threats, automating discovery, fighting dynamic attacks, and even freeing up time for IT professionals, AI-fueled automation has been a boon for system defenders.

Many of these chipset innovations are born in the consumer market, where they power cutting-edge applications such as self-driving vehicles. Most importantly, these chipsets enable AI and Deep Learning analysis at the edge instead of in a data center.

What Edge AI is

Edge processing happens inside a device: security data is processed locally, without the need to upload it to cloud services for further analysis. In a private or hybrid cloud model, processing is decentralized, so a percentage of the data is still processed in the cloud, but keeping the rest at the edge saves bandwidth and reduces latency between the edge device and the cloud service. This is one area where 5G will impact our industry, masking latency where possible in the future, as well as reducing packet loss during data transmission and enhancing data security.
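
As a rough illustration of edge-first processing, the hypothetical pipeline below analyzes frames on the device and sends only compact event metadata upstream; the function names are placeholders, not any vendor's actual API.

```python
# Illustrative edge-first pipeline: inference stays on the device and only
# small event records travel to the cloud. run_local_model() and
# send_to_cloud() are hypothetical placeholders, not a real product API.
import json
import time

def run_local_model(frame):
    # Stand-in for an inference call on the camera's AI accelerator.
    return [{"label": "person", "confidence": 0.97}]

def send_to_cloud(payload: dict):
    # Stand-in for an upload to a private or hybrid cloud service.
    print("uploading", json.dumps(payload))

def process_frame(frame, camera_id: str):
    detections = run_local_model(frame)                 # processed locally
    events = [d for d in detections if d["confidence"] > 0.9]
    if events:
        # A few hundred bytes of metadata leave the device instead of the
        # raw frame, saving bandwidth and reducing round-trip latency.
        send_to_cloud({"camera": camera_id,
                       "timestamp": time.time(),
                       "events": events})
```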

Edge AI is heavily favored for electronic security and smart home devices, and is a necessity for autonomous vehicles. Combining Edge AI with object-based storage can create extensive metadata that can be searched, indexed and processed, regardless of location.
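
As a purely hypothetical example of what that pairing can look like, each stored clip might carry a metadata record like the one below, which any indexer can search no matter where the object physically resides.

```python
# Hypothetical metadata record attached to a stored video object so it can be
# searched, indexed and processed regardless of where the object lives.
clip_metadata = {
    "object_key": "site-12/cam-03/2020-03-11T14:02:55Z.mp4",
    "camera": "cam-03",
    "duration_s": 12.4,
    "objects": [
        {"label": "vehicle", "color": "white", "confidence": 0.94},
        {"label": "person", "confidence": 0.98},
    ],
    "location": {"lat": 33.45, "lon": -112.07},  # illustrative coordinates
}
```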

Edge AI and IoT Come Together

The Artificial Intelligence of Things (AIoT) blends Edge AI and the Internet of Things. Supported by a new breed of efficient AI chipsets, products such as light bulbs, monitors, cameras and acoustic sensors can handle voice assistant activations efficiently.

Saying “Turn on the alarm” while in the room triggers a small AI accelerator chip embedded in the speaker itself to activate the perimeter security function. “Turn on the exit path” instantly arms the facility while identifying a safe exit path lit by light bulbs containing this chipset, which can themselves recognize 20-30 action keywords. While a sound or image may be streamed from an internet service or captured locally, recognition is processed entirely on-device. This improves the overall security experience: low latency and fully responsive to what the user wants.
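
A minimal sketch of that on-device flow, assuming a hypothetical keyword-spotting model on the accelerator (all names below are placeholders): the audio never leaves the device, and a recognized phrase simply maps to a local security action.

```python
# Sketch of on-device keyword handling. detect_keyword() stands in for the
# chipset's local speech model; nothing here calls a cloud service.
from typing import Optional

ACTIONS = {
    "turn on the alarm": "arm_perimeter",
    "turn on the exit path": "arm_and_light_exit_path",
}

def detect_keyword(audio_frame) -> Optional[str]:
    # Placeholder for the accelerator's local keyword-spotting inference.
    return None

def handle_audio(audio_frame, controller):
    phrase = detect_keyword(audio_frame)
    if phrase in ACTIONS:
        # Recognition and dispatch both happen on-device, so the response
        # is low-latency and keeps working without connectivity.
        controller.execute(ACTIONS[phrase])
```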

Powerful Edge AI hardware is available in many devices, including smartphones, license plate recognition (LPR) and traffic machine-vision cameras, and Time-of-Flight (ToF) cameras. For many of these platforms, edge processing delivers 200 images per second, and accuracy has increased from 50% to better than 99%, greatly reducing the need for expensive video management application licensing.

The MediaTek MT8175 is one example of an AI vision platform that can be used in smart TVs, smart cameras, in-vehicle entertainment systems and other smart-screen devices, bringing the AI experience from smartphones to products such as surveillance cameras. AI-based smart speakers will support the next wave of acoustic signature recognition of glass breaking, accidents and fighting by offering ultra-low-power voice wake-up from standby, low-power far-field commands, locally processed voiceprint recognition and voice commands.

Applying it to Security

Collaborations between AI chipset vendors, sensor manufacturers and solution providers are advancing impactful 3D image processing use cases for security and access control. Case in point: Chip maker Ambarella announced a partnership at CES 2020 with Lumentum, a manufacturer of optical and photonic products, and CMOS image sensor solution provider ON Semiconductor to create a platform that leverages a single CMOS image sensor to obtain both a visible image for viewing and an infrared image for depth sensing.

The Ambarella CV25 chipset powers depth processing, anti-spoofing algorithms, 3D facial recognition algorithms, and video encoding on a single chip, significantly reducing system complexity while improving performance.
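
To make the single-sensor idea concrete, here is a toy sketch (not Ambarella's or ON Semiconductor's actual pipeline) of separating an RGB-IR mosaic readout into a visible image and an IR channel that a depth or anti-spoofing stage could consume; the 2x2 mosaic layout is assumed purely for illustration.

```python
# Toy illustration of the RGB-IR concept: one sensor readout yields both a
# visible image and an IR channel. The 2x2 mosaic layout assumed here
# (R, G / B, IR) is for illustration only, not the actual sensor spec.
import numpy as np

def split_rgb_ir(raw: np.ndarray):
    r = raw[0::2, 0::2]    # red sample sites
    g = raw[0::2, 1::2]    # green sample sites
    b = raw[1::2, 0::2]    # blue sample sites
    ir = raw[1::2, 1::2]   # infrared sites feed depth sensing / anti-spoofing
    visible = np.stack([r, g, b], axis=-1)  # crude visible image, no demosaic
    return visible, ir

raw_frame = np.random.randint(0, 1024, size=(480, 640), dtype=np.uint16)
visible_img, ir_img = split_rgb_ir(raw_frame)
```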

“ON Semiconductor’s RGB-IR sensor technology enables single-sensor solutions to provide both visible and IR images in security and vision IoT applications,” Gianluca Colli, VP and general manager of the Commercial Sensing Division at ON Semiconductor, said in a press release. “Ambarella’s CV25 computer vision SoC, with its next-generation image signal processor (ISP), brings out the best image quality of our RGB-IR sensor, while providing powerful AI processing capability for innovative use cases in security applications.”

AI processors from Hailo, another CES exhibitor, support a wide variety of applications, often at the same time, where quick processing at the edge is needed. The Hailo deep learning chip improves the performance of sensors in advanced driver-assistance systems (ADAS), engine control units and side-view mirror warnings. Autonomous vehicles harness the power of a full “data center on wheels,” with near latency-free computing that is critical to empowering the age of the autonomous automobile.

In smart cities, these fast AI chips enable cameras to detect threats to public safety and assist with critical tasks such as locating missing persons, finding stolen vehicles, or enforcing traffic laws more effectively. Multiple full-HD video streams from traffic systems, airports, transportation hubs and city centers can be processed locally and quickly, with strong privacy.

Solution providers are delivering edge devices, including AI cameras, drones and facial recognition entry systems, built on efficient Edge AI processors such as Hailo's that can handle diverse workloads in constrained environments at ultra-low power consumption. In its demonstration at CES, a single Hailo Edge AI processor handled no fewer than ten streams processed in real time (a simplified version of this kind of multi-stream loop is sketched after the list):

  1. Skeletal pose estimation for multiple people
  2. Room entrance, identifying static tables and objects, and people in motion
  3. Multiple vehicle identification on a two-lane highway
  4. Hockey game with players and puck identified
  5. Car race
  6. Vehicle intersection with cars, trucks and people, all in motion
  7. Train moving on rail tracks
  8. Traffic intersection lights
  9. Airplane takeoff
  10. City center with pedestrians and vehicles
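
A simplified sketch of what serving several streams from one edge accelerator can look like; the `accelerator.infer()` call below is a stand-in, not the Hailo SDK, and the point is simply that a single device time-shares many feeds without a cloud round trip.

```python
# Simplified multi-stream inference loop on a single edge AI accelerator.
# 'accelerator' and 'display' are hypothetical stand-ins, not the Hailo API.
from itertools import cycle

def run_streams(streams, accelerator, display):
    # streams: list of (name, frame_iterator) pairs, e.g. ten demo feeds.
    for name, frames in cycle(streams):        # round-robin across streams
        frame = next(frames, None)
        if frame is None:
            continue                           # that feed has ended
        detections = accelerator.infer(frame)  # on-device, no cloud round trip
        display.annotate(name, frame, detections)
```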

AI Chips and Cybersecurity

Before we get too comfortable, integrators and security users need to remember that there is another side of AI – one where enhanced adversarial capabilities and the challenges of defending machine learning systems are opening new attack surfaces. How do companies secure the chip itself, as well as the data within?

Although Europe’s focus has been more on the privacy side of things, the US is catching up quickly with the Cybersecurity Act and guidance from NIST (National Institute of Standards and Technology).

That said, there are some basic steps manufacturers can take, such as not shipping devices with simple defaults like admin/admin as the username/password combination along with administrative privileges, because people will simply connect them to an unsecured network or the Internet anyway.
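
One illustrative alternative (the names and storage below are entirely hypothetical): generate a unique credential per device at first boot and force the installer to change it before any service is exposed.

```python
# Illustrative first-boot provisioning: never ship or fall back to admin/admin.
import hashlib
import secrets

def hash_password(password: str, salt: bytes) -> str:
    # PBKDF2 as a stand-in; a production device might use bcrypt or Argon2.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000).hex()

def provision_first_boot(device_store: dict) -> str:
    if "password_hash" not in device_store:
        # Unique, random per-device secret instead of a shared factory default.
        temp_password = secrets.token_urlsafe(12)
        salt = secrets.token_bytes(16)
        device_store["salt"] = salt.hex()
        device_store["password_hash"] = hash_password(temp_password, salt)
        device_store["must_change_password"] = True  # force rotation at login
        return temp_password  # delivered out-of-band, e.g. on the device label
```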

Once two-factor authentication becomes commonplace in our industry, every process in these AI devices that touches potentially sensitive or personally identifiable information (PII) will become “trusted.”

Encryption is a tool, but it cannot be achieved by simply dropping an AES block into an AI chipset. The point at which data is most vulnerable is the point at which encryption and decryption occur – the keys must be held where nobody can access or export them.
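
A tiny sketch of that “keys never leave the hardware” pattern; the SecureElement interface below is hypothetical, standing in for whatever secure element, TPM or trusted execution environment a given chipset exposes.

```python
# Sketch of the "keys never leave the hardware" pattern. SecureElement is a
# hypothetical interface; real chips expose similar operations via vendor
# SDKs, a TPM, or a trusted execution environment.
class SecureElement:
    def encrypt(self, key_handle: int, plaintext: bytes) -> bytes:
        # On a real device this happens inside the element; the raw key
        # material is never exported to the host application.
        raise NotImplementedError

def store_clip(se: SecureElement, key_handle: int, clip: bytes) -> bytes:
    # The application only ever holds an opaque handle, not key material, so a
    # compromised host process cannot read or exfiltrate the key itself.
    return se.encrypt(key_handle, clip)
```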

Security in AI device processes today may have more to do with Star Trek and less with the “guy in the chair” hacking away at a node, never seeming to look at the device. For many chipsets currently on the market, theft of the “keys to the kingdom” can be as simple as monitoring power supply noise during the encryption process for 1,000 to 2,000 cycles and then recovering the keys through statistical analysis of those measurements. AES encryption alone will only delay the “crack” by roughly another 4,000 cycles, so protecting PII in the AI’s encryption-decryption process has far more to do with a willingness to pay for a more expensive chipset than with the “security management theorist” opinion of simply not letting the AI have the data to train on.
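
A toy demonstration of why that matters, using a deliberately simplified leakage model (power proportional to the Hamming weight of a plaintext byte XOR a key byte, plus noise) rather than any real chipset’s behavior: with a couple of thousand noisy “cycles,” simple correlation picks out the key byte.

```python
# Toy power-analysis demo with a simplified leakage model; real attacks on
# real chipsets are more involved, but the principle is the same.
import numpy as np

rng = np.random.default_rng(0)
secret_key_byte = 0x3C
plaintexts = rng.integers(0, 256, size=2000)   # ~2,000 observed operations

def hamming_weight(x):
    bits = np.unpackbits(np.asarray(x, dtype=np.uint8)[..., None], axis=-1)
    return bits.sum(axis=-1)

# Simulated power measurements: leakage plus measurement noise.
traces = hamming_weight(plaintexts ^ secret_key_byte) + rng.normal(0, 1.0, 2000)

# The attacker correlates every key guess against the measured traces; the
# correct guess produces the strongest correlation.
correlations = [np.corrcoef(hamming_weight(plaintexts ^ g), traces)[0, 1]
                for g in range(256)]
print("recovered key byte:", hex(int(np.argmax(correlations))))   # 0x3c
```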

Be smart: understand that data security begins with silicon, recognize the value (code for perhaps a 10% price increase), and your customers will reap benefits like saving lives from active assailants and wildfires.

Steve Surfaro is Chairman of the Public Safety Working Group for the Security Industry Association (SIA) and has more than 30 years of security industry experience. Follow him on Twitter, @stevesurf.