Machines that can understand and predictively respond to what we do, say, or touch without a constant dependence on the cloud promise to revolutionize the Internet of Things (IoT), delivering unprecedented levels of privacy, convenience and productivity in our lives.
Artificial intelligence (AI) is leading the charge away from traditional human interface devices like mice and keyboards towards more natural human interfaces such as touch, voice and image sensing. It’s enabling a new breed of machines that are not only capable of understanding commands but also able to learn from us, developing a sense of intent, recognizing personal preferences and responding in near real-time.
Device makers and consumers agree that in today’s smart home, commercial and industrial environments, whatever processing can be performed locally should be. Local caching, sensor fusion and secure inferencing from machine learning (ML) algorithms are all enabling a greater use of on-device processing, providing enormous cost, latency, user experience and data security benefits.
And thanks to advancements in device-level AI technology from companies like Synaptics, the processing burden for many of these functions can be handled intelligently and securely on the device itself.
Synaptics was born in the 1980s, out of a vision to build silicon that computes as effectively as the human brain. What began with capacitive sensing for touchpads, touchscreens and fingerprint sensors has grown to encompass a broad portfolio of human interface technologies – brilliant displays that engage your visual senses, audio products creating all-new sound experiences, computer vision chips capable of processing trillions of neural network operations per second and voice input technology that recognizes what you say. Now, using AI, we’re able to take these technologies to the next level. We call it ‘ambient computing’: intelligence anywhere and everywhere.
“Thanks to Arm, we’ll be able to keep innovating at the cutting edge of human-centric technology, long into the future.”
A new level of smart home intelligence
Voice and vision are two human interface technologies that stand to benefit most from on-device AI. Voice continues to be redefined by ongoing technological innovations such as far-field sensing, contextual awareness and voice biometrics. AI vision is already being used to great effect in smartphones, providing biometric security features as well as face-warping entertainment. The next step will be for vision technology to expand into IoT and smart home devices, combining voice, gestures, gaze, biometrics and touch in order to personalize the experience for individual users.
This new wave of devices won’t need a wake word like ‘Hey Siri’ or ‘Alexa’. They will differentiate between people’s voices and learn their individual preferences, using that context to deliver a bespoke, relevant response.
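As an illustrative sketch only (not Synaptics’ actual implementation), differentiating between speakers is commonly done by comparing a voice embedding against enrolled reference embeddings. The vectors and threshold below are synthetic assumptions; a real device would produce embeddings with an on-device speaker-verification neural network:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(embedding, enrolled, threshold=0.8):
    """Return the enrolled speaker whose reference embedding is most
    similar to the input, or None if nothing clears the threshold
    (i.e. an unknown speaker)."""
    best_name, best_score = None, threshold
    for name, ref in enrolled.items():
        score = cosine_similarity(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Synthetic enrolled embeddings for illustration; real embeddings
# would be high-dimensional outputs of a trained model.
enrolled = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob":   np.array([0.1, 0.9, 0.3]),
}

print(identify_speaker(np.array([0.88, 0.12, 0.21]), enrolled))  # → alice
```

Once the speaker is identified, the device can look up that person’s stored preferences to personalize its response, all without sending audio to the cloud.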
Until recently, the considerable computation required to achieve this level of intelligence has been reserved for high-value devices like smartphones. However, a new generation of smart SoCs such as Synaptics’ Arm-based AS3xx series offers secure neural network acceleration at price points targeting consumer smart home devices such as smart speakers, sound bars, smart displays, set-top boxes, media streamers, household appliances and mesh Wi-Fi routers.
Arm Flexible Access in an evolving market
Arm IP powers many of our IoT solutions: high-end Arm Cortex-A processors for our edge computing SoCs, which target smart home multimedia and over-the-top (OTT) streaming applications, and low-power Cortex-M microcontrollers for our personal audio solutions.
However, the consumer electronics market that Synaptics participates in changes constantly and rapidly. Our product requirements shift significantly several times over the course of a product’s development, and our IP needs change with them. More than ever, it’s critical that we’re able to experiment with, evaluate and test innovative new design concepts freely before we commit to production silicon.
That’s why we were ready, pen in hand, to sign up for the new Arm Flexible Access program when it launched earlier this year. The program gives us that agility and freedom of choice without having to worry about complicated licensing terms.
Freedom to innovate
Prior to the Arm Flexible Access program, our product development teams struggled to react as quickly as changing market requirements demanded. Now, we have unfettered evaluation access to a portfolio of IP solutions, tools and support, which greatly reduces the turnaround time for new feature evaluations and helps us optimize product development schedules.
We’ve come a long way since the 1980s. Our founders were pioneers in machine learning and artificial neural networks. We’ve taught smart devices to feel, hear and see. Synaptics continues to lead advances in human interface technology, now using modern deep neural network technologies to give devices the local intelligence to respond with near human-like cognition and speed. And thanks to Arm, we’ll be able to keep innovating at the cutting edge of human-centric technology, long into the future.
Jumpstart your concept-to-compute journey and join one of the world’s largest, most prolific and creative communities of technology leaders with Arm Flexible Access.