Embedded AI – Are You Ready to Take the Leap of Faith?

Learn how AI models are deployed on different embedded devices, and where AI and NLP fit into the process

With an increasing number of smart devices being launched, we can be sure that AI has already found its place in embedded devices: a smart camera capable of advanced object detection, a wearable with an advanced smart assistant, the examples are plenty. As device manufacturers and solution providers derive value beyond an enhanced user experience, such as driving business decisions and reducing costs through improved serviceability, more industries are hopping onto the Embedded AI bandwagon. The question, however, is what the way ahead looks like, and whether we should take a leap of faith in adopting this trend.

Using natural language processing (NLP) and emerging technologies such as generative AI, knowledge graphs, and composite AI, organizations are increasingly using AI solutions to create new products, improve existing ones, and grow their customer base. As per the Gartner Hype Cycle for AI, Edge AI sits at the Peak of Inflated Expectations. The advantages are substantial: reduced or negligible latency, no dependency on connectivity and bandwidth, reduced reliance on data centres, and stronger security and privacy. Even so, current market conditions pose a few challenges to adoption. Some industries are nevertheless well along the adoption curve, for example ADAS in automotive, security (surveillance and biometrics), and predictive maintenance.

Challenge of Standardization with Embedded AI

All the tech giants today, including Tesla, Google, Meta, and many more, are driving independent approaches, frameworks, and platforms for Embedded AI. The current expectation is that the semiconductor or system vendor will solve the platform challenges, and each semiconductor vendor solves them differently. This makes standardization difficult: if we change the platform, the chance of reuse is minimal. AI is all about continuous training feeding improved inference, so an embedded AI deployment strategy should consider how to collect data for continued training and how updated models can be deployed regularly for better inference. With specialized processing architectures still evolving, deploying a non-modular, fixed framework is not recommended. These deployments should be capable of the updates expected in the future, not just model optimization, but model updates themselves.
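As a minimal sketch of what a modular update path might look like on the device, the snippet below compares a deployed model's version against a downloaded manifest and swaps the model file only when a newer one is available. The manifest fields and file names are illustrative assumptions, not any vendor's actual update protocol.

```python
# Hypothetical sketch of an on-device model-update check. In a real
# deployment the device would also verify a signature or checksum
# before swapping the model binary.

def needs_update(local_manifest: dict, remote_manifest: dict) -> bool:
    """Return True when the remote model is newer than the deployed one."""
    return remote_manifest["model_version"] > local_manifest["model_version"]

# Assumed manifest format: a version number plus the model artifact name.
local = {"model_version": 3, "file": "detector_v3.tflite"}
remote = {"model_version": 4, "file": "detector_v4.tflite"}

if needs_update(local, remote):
    print(f"updating {local['file']} -> {remote['file']}")
```

Keeping the model a replaceable artifact behind a version check, rather than baking it into the firmware image, is what makes the regular model updates described above practical.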

Most of these algorithms rely heavily on linear algebra: matrix and vector data operations. Standard CPU architectures are neither defined for such workloads nor optimized for them. Hence, the sector is seeing a lot of custom innovation in architectures optimized for these workloads, balancing the model requirements (performance, size, latency, accuracy, type, etc.) against cost, power, and modularity/programmability. This is also driving custom SoC (System on Chip) designs that address specific requirements which are otherwise difficult to achieve using standard SoCs from semiconductor companies.
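One concrete reason these architectures favor low-precision integer units can be shown with a small NumPy sketch, illustrative only, that quantizes a float32 weight matrix to int8: storage drops 4x and the matrix work can run on integer hardware, at a bounded accuracy cost.

```python
import numpy as np

# Illustrative sketch: symmetric per-tensor int8 quantization of a
# weight matrix, the kind of transformation embedded accelerators
# exploit to run matrix multiplies in low-precision integer units.
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 64)).astype(np.float32)

scale = np.abs(weights).max() / 127.0            # one scale for the tensor
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequant = q_weights.astype(np.float32) * scale   # reconstruct for comparison

# int8 storage is 4x smaller than float32; the rounding error per
# element is bounded by half the quantization step (scale / 2).
size_ratio = weights.nbytes / q_weights.nbytes
max_err = np.abs(weights - dequant).max()
print(f"storage reduced {size_ratio:.0f}x, max abs error {max_err:.4f}")
```

Real toolchains (e.g. post-training quantization in TensorFlow Lite) add per-channel scales and calibration data, but the size/accuracy trade-off is the same one sketched here.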

Deployment of Embedded AI

The following is a typical cycle for deploying an AI model on an embedded device. At each phase, it is important to consider the simulation capability, the support for diverse models, and the tooling available.

Given the current challenges, a model can only be deployed efficiently on an embedded system if it is architected and designed with the embedded target in mind.
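The cycle can be sketched as a gated pipeline: train, optimize for the target, simulate on the host, and only then deploy. The phase functions and the accuracy gate below are placeholder assumptions for illustration; real pipelines wire each phase to actual tooling (training framework, converter, device flasher).

```python
# Illustrative sketch of the deployment cycle described above.
# Metrics and thresholds are assumed values, not real benchmarks.

def train():
    return {"accuracy": 0.91}                      # placeholder metrics

def optimize(model):
    # e.g. pruning / quantization typically costs some accuracy
    return {**model, "accuracy": model["accuracy"] - 0.02}

def simulate(model):
    return model["accuracy"]                       # host-side check

MIN_ACCURACY = 0.85  # gate: never flash a model that regressed too far

model = train()
compact = optimize(model)
if simulate(compact) >= MIN_ACCURACY:
    stage = "deploy_to_device"                     # only now touch hardware
else:
    stage = "back_to_training"
print(stage)
```

The point of the host-side simulation gate is that a failed model loops back to training instead of ever reaching the device.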

The decision to embark on Embedded AI should be based on the critical success factors and deployment scenarios, not made as a style statement. For example, if latency and connectivity are not a problem, do not attempt Edge AI now, as it will reduce your flexibility to deploy and learn. In that case, data privacy can still be achieved with some level of data pre-processing (though a hard requirement that no data moves back must be handled separately, as it runs counter to the AI norm of using collected data for training). The reality is that not every model or ML/DL algorithm can be deployed on every hardware, and the choice of algorithm also depends on its internals. Given these restrictions, careful analysis is needed in choosing the algorithm (thinking embedded AI during design), the model, the compression, and the hardware/processor.
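One simple form that careful analysis can take is screening candidate model/hardware pairings against hard embedded constraints before any porting work. The model names, sizes, and budgets below are illustrative assumptions, not measured figures.

```python
# Hypothetical feasibility screen: keep only candidate models that fit
# both the storage and the latency budget of the target device.

CANDIDATES = [
    # (model name, size in MB, estimated latency in ms on target)
    ("mobilenet_v2_int8", 3.5, 18.0),
    ("resnet50_fp32", 98.0, 240.0),
    ("tiny_cnn_int8", 0.9, 6.0),
]

FLASH_BUDGET_MB = 8.0     # storage available for the model artifact
LATENCY_BUDGET_MS = 33.0  # e.g. one frame at 30 fps

def feasible(size_mb: float, latency_ms: float) -> bool:
    """A candidate qualifies only if it fits both hard budgets."""
    return size_mb <= FLASH_BUDGET_MB and latency_ms <= LATENCY_BUDGET_MS

shortlist = [name for name, size, lat in CANDIDATES if feasible(size, lat)]
print(shortlist)
```

In this sketch the fp32 ResNet is screened out by both budgets, which is exactly the point: compression and hardware choice must be evaluated together, early, rather than after the model is built.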

Way Ahead

With privacy, latency, and reliability requirements becoming more stringent, edge processing is becoming critical for business and will be an important element in the future. Embedded AI will be a basic building block for predictive and reactive maintenance, latency- and privacy-sensitive processing, and many other use cases. This will be relevant in industries like automotive, healthcare, aviation, hi-tech, manufacturing, energy, and retail. Examples include embedded AI vision use cases in connectivity-restricted regions such as visual inspection, gesture control systems, smart robots for home and industry, medical use cases like personalized health guidance, faster failure detection and action on the factory floor, autonomous driving, and fraud prevention in fintech.

Author

Tinku Malayil Jose, CoE leader (Embedded Product Engineering) at QuEST Global

Analytics Insight
www.analyticsinsight.net