TECH TALK

Hands-On Edge AI: Developing Embedded AI/ML Applications

Our experts build an Edge AI application step-by-step, from model integration to deployment on an embedded device. See an embedded AI application come to life.

About this Tech Talk

AI/ML at the edge is rapidly transforming how IoT devices sense, interpret, and respond to the world around them. But moving from concept to a working embedded AI application can feel complex and fragmented.

This Tech Talk is designed to simplify that journey.

In this session, we will build an AI/ML application step-by-step, walking through the full development flow from model integration to deployment on an embedded device. The focus will be on practical implementation — understanding how models move from development to embedded inference, how performance and memory tradeoffs are evaluated, and how tooling supports efficient iteration.

By the end of the session, you’ll have a clearer roadmap for bringing AI/ML capabilities into your own embedded designs.

Speakers

Tamas Daranyi

Product Marketing Manager, AI/ML
Silicon Labs

Zsombor Almási

Applications Engineer, AI/ML
Silicon Labs

Duration

56 Minute Presentation

Transcript

Hello, everyone. Welcome to the Tech Talk from Silicon Labs. 
  
My name is Rui, and I am the product manager for the machine learning software and tools. 
  
Today, we're going to explore a very exciting topic, which is edge AI. 
  
We have two speakers for today's Tech Talk. 
  
We'll have Tamas, who is the product manager for our machine learning products. He's going to give us some fascinating application examples and an overview of our hardware and software offerings for AI/ML. 
  
And we also have Zsombor, who is our expert application engineer, and he will give us a very good walkthrough of the developer journey. 
  
What do you need to start an ML example? 
  
What do you need to bring your own custom ML model into an application on Silicon Labs devices? 
  
And without further ado, I'll pass the mic to Tamas. 
  
Let's get started. 
  
Hello, everyone. 
  
I hope you can hear me well. 
  
So, welcome. 
  
I'm glad to see there is a lot of interest in this topic, so let's jump right into this overview of the machine learning applications we currently provide at Silicon Labs. 
  
On the agenda, I will first talk in a broader context about what's going on at the edge, then bring up some application examples and talk a bit about the software tools and the partner ecosystem we have. 
  
We are going to show you a very nice demo that has been recorded by Gabor, and I will touch a little bit on the hardware offering, too. 
  
So there's no question that machine learning and AI are at the peak of their hype curve, and maybe that hype curve is still continuing to grow. 
  
But maybe it's not just hype; it's becoming more and more important for edge devices, too, in general, for IoT, for edge AI or AIoT or TinyML or whatever buzzword we are using. 
  
It's getting more and more important, and I'll give you some reasoning why. 
  
So according to some forecasts and research, in the past few years, literally zettabytes of data, which is a lot of zeros after the one, have been generated at edge devices. 
  
So there is absolutely a reason to move the business logic, the machine learning algorithm, the decision making, as close to the edge device as possible. 
  
Not just for the common-sense reasons: it's the fastest way and, of course, the most secure way, so it guarantees the most privacy and gives hackers the least chance to get at that data. 
  
Also, it enables offline operation, no question. 
  
If there is a network outage, you still need to do decision making. 
  
You still need to have the device operable. 
  
The shades in your home, the lights, the PIR sensor, or anything should work as expected. 
  
Also, network bandwidth can be an issue. There are remote applications on long-range, low-power protocols, like animal tracking, agriculture use cases, or city and municipality use cases, where you just literally cannot afford to upload that big volume of data and make the decision somewhere else. 
  
You must do it right there at the source of the data. 
  
This is especially true for vision applications with cameras. 
  
Also, latency, that's pretty obvious. 
  
So if you do that right away, no network latency, no connectivity latency. 
  
It happens instantly. 
  
And one of the most important driving factors is actually cost reduction, and there are two aspects to that. 
  
One aspect is that you can save a lot of cloud cost, because the cloud is not free. 
  
So all the data ingestion, all the cloud processing, all the cloud compute costs a lot of money. 
  
You also need to have a team who maintains that, a DevOps team maintaining your service 24/7. 
  
On the flip side, you can also think about cost reduction as being able to do more on the same device, and this is what machine learning at the edge enables: enhancing your use cases and doing even more things on your device. 
  
With the same device, with the same price point, you can provide more to your customers. 
  
And this is where Silicon Labs is trying to help. 
  
We are well known as the number one radio provider in the IoT space. 
  
But we are not just that. 
  
We are opening up more and more on the application side. 
  
So with the radio modem, you can do even more. 
  
With our parts that have a dedicated machine learning hardware accelerator, we can be six, eight, ten times faster than a pure Cortex-M33 alone. 
  
We can be more efficient, so you can save more battery if you use the hardware accelerator to run machine learning. 
  
And on the other hand, we have ambitious plans on the compute side. 
  
We pay a lot of attention to security, so we have Arm PSA Level 3 security with our parts. 
  
And of course, our radio offering is still as good as it was, and it is continuously developing. 
  
So as I told you, it's not just about the wireless connectivity. 
  
From my point of view, my projects really look at the radio as a peripheral, and my main focus is always: what kinds of applications can we enable, what kind of machine learning engine, hardware-wise and software-wise, can we provide to the customer to enable all those enhanced use cases across the wide range of verticals we are supporting. 
  
We have consumer, smart home, smart health, industrial, and commercial, so there are tons of use cases we can enable. 
  
Here, I'll list a few in the upcoming slides. 
  
Basically, I categorized these use cases into four different categories. 
  
And of course, this is not a comprehensive list. 
  
These are just examples to help you think. 
  
If you want to kick off a project with one of our parts, you may want to take a look at these; maybe they will give you ideas. 
  
Also, at the bottom of the slide, you can see some existing examples that we host either on our GitHub or even in our Simplicity Studio SDK, available with literally just a few clicks, so you can kick off a project immediately. 
  
So on the sensor side, the time-series data processing side, these are the simplest applications. 
  
As I see it, even the compute requirement is the smallest there. 
  
You can have a very easy and lightweight anomaly detection or predictive maintenance algorithm. 
  
The inference time is really short, so even on the lowest-end parts you can do accelerometer use cases like a magic wand or vibration detection, whatever IoT application you have there. 
  
We have a few POCs, and of course, we have a lot of partner-enabled demos. 
  
At the end of the slide deck, I will have a link where you can access all of these. 
  
The audio application space is getting more and more important. 
  
You can think about, of course, the most obvious keyword detection examples and use cases, where our radio part could be the watchdog of the system. 
  
So in many cases, it's attached to a bigger host MCU or MPU, like a Cortex-A-class or even higher compute. 
  
And since the radio is always on, it's a wise idea to move the keyword detection functionality into the radio part, so that it can be done in a very power-efficient way. 
  
The rest of the system can sleep, so you can save power. 
  
And the radio part detects the keyword in real time; we have the capability for that with our embedded machine learning accelerator, so we can do it in literally 30 to 40 milliseconds of inference time. 
  
Then it wakes up the rest of the system. 
  
This is one very obvious use case, but we have a lot of other examples like glass break detection or other security-related applications like shot detection, scream detection, or someone saying help or yelling anywhere in the space being monitored. 
  
And of course, noise suppression and other audio-related use cases like human voice activity detection are also popular ones. 
  
It's in our scope. 
  
The vision part is still a bit more constrained on our parts, because the RAM always limits the resolution that you can process. 
  
But still we can do decent work in many different use cases. 
  
And in a lot of the applications, you don't really need high resolution. 
  
So you can do object detection, and you can even do face identification, with literally an image like 120 by 120, and this is what we can actually process with our latest MG26 part, for example. 
  
Or for home automation sensors with an IR camera ensuring privacy, you do not need high resolution. 
  
We can do that with our parts, so you can actually build a standalone sensor with the radio and the vision application together. 
  
We also have a very nice example of people flow counting on the GitHub, as you will see. 
  
And the fourth category is maybe the best fit for Silicon Labs as a radio provider company. 
  
We are doing a lot of interesting use cases just processing the radio signals, the radio data that we are getting out of our radio abstraction layer. 
  
One of the interesting ones is a tire pressure monitoring system use case, where we can locate the tires with the help of machine learning without any complicated algorithm. 
  
There is another nice example. 
  
We have just recently announced a few blog posts and cooperation with a company called Emanate. 
  
They are doing an industrial-grade locating system with the help of our real-time location system and machine learning running on our MG26 part. 
  
They can do very accurate asset tracking in hospitals. 
  
Actually, we won the Edge AI Foundation Award this year with this use case, just two weeks ago. 
  
And maybe I'm jumping into the enablement part. 
  
So what do we provide for you to kick off this journey, if you haven't done it already? 
  
So the first and the most important part is the discovery. 
  
That's why I was showing this slide. 
  
What do you want to achieve? 
  
Is it achievable with machine learning? 
  
Is machine learning the right way to do that? 
  
Or vice versa, I have a problem, can it be solved with machine learning? 
  
So you need to think through all of that and kick off the project. 
  
Our software enablement is mostly focused on the part where you already have a machine learning model and want to optimize and deploy it onto our parts. 
  
If you need to build your application and you don't have the expertise to do that, we have a very rich partner ecosystem to kick this off. 
  
So whatever is in that black rectangle here, you either do on your own (some of our customers are already doing it, so it's not impossible), or you rely on a partner. 
  
Let me show you some of the partner examples I have here. 
  
So as I said, we are categorizing the offering into three silos. 
  
For those customers who know machine learning, who know Python coding, who know how to build a machine learning model or how to achieve their goals with machine learning, we have a very comprehensive machine learning SDK. 
  
This is an extension to our Simplicity SDK, the GSDK, sorry, the Gecko SDK. 
  
You can install it as an extension. 
  
We support the TensorFlow Lite Micro runtime natively on all parts, so you are using all the standard CMSIS-NN packages without needing to know anything about our hardware accelerator. 
  
So all the kernels are implemented. 
  
So you actually just use the same CMSIS-NN kernels and everything else. 
  
The offloading of the compute happens automatically, and whatever is not possible to offload, because it is some special layer that you designed, falls back to the Cortex-M MCU to be executed. 
  
But we are supporting all the standard TensorFlow operations. 
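  
To make that concrete, here is a minimal sketch of what the standard TensorFlow Lite Micro setup looks like, assuming a model_data array generated from a .tflite file and an illustrative set of operators. This is generic TFLM code, not the exact Silicon Labs component code, and on our parts the accelerated kernel implementations are selected by the SDK build, so the application code does not change. 
  
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer generated from a .tflite file (name is illustrative).
extern const unsigned char model_data[];

// Working memory for the model's tensors; sized per model.
constexpr int kArenaSize = 20 * 1024;
static uint8_t tensor_arena[kArenaSize];

void ml_init(void) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators the model actually uses. Whether these run
  // on the MVP accelerator, CMSIS-NN, or plain reference kernels is decided
  // by the SDK build, not by this code.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize);
  interpreter.AllocateTensors();
}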
  
We have a nice developer journey. 
  
So if you click this link, this will go to our marketing webpage. 
  
You will find some decent examples there, and how to kick off projects depending on your expertise and what you want to achieve. 
  
I would encourage everyone to start there. 
  
The middle tier is the low-code or AutoML solutions, where you get a tool that helps you create the machine learning application without any deep coding or deep machine learning knowledge. 
  
One of these partners is Edge Impulse; this is a well-known company in this space. 
  
There is a UI. 
  
You are guided through the entire machine learning development process, from data gathering through model creation, building, and training, to deployment. 
  
They also have a lot of canned solutions and sample applications, so you can leverage those. 
  
At the end, the tool spits out a project file that you can easily pull into your Simplicity project and deploy, already optimized for our devices. 
  
The same goes for the company called SensiML, and for the company called ModelCat, formerly known as Eta Compute. 
  
These guys are mostly focused on this low-resolution vision with hardware constraints. 
  
So in the platform you can set how much memory you can sacrifice for the ML model, or the constraints you have in the hardware, and they actually create a machine learning model, with the help of machine learning, optimized for the given design. 
  
So it's one of my favorites, because it's really a device-aware training tool, and it's very easy to use. 
  
They have a lot of datasets already available. 
  
You can also upload your own dataset. 
  
They have their own datasets plus open-source datasets. 
  
So if you need low-resolution vision, I would really recommend this platform. 
  
And of course, there is the third column, vendors, which is the easiest way to go. 
  
Of course, this is not for free. 
  
There are companies who are providing turnkey solutions like Sensory. 
  
They are the leader in the keyword detection AI market. 
  
They have various kinds of offerings with different kinds of machine learning solutions. 
  
These are basically license-based. 
  
Or you can go with one of our system integrator partners; there are both bigger and smaller ones. 
  
All of these are very familiar with machine learning, also very familiar with the Silicon Labs offering. 
  
So if you have a problem, you have a project, you can easily go to one of these partner companies and they will be able to help you. 
  
Once my next slide comes up... 
  
Yeah, I brought you some of the latest and greatest demonstrations, which we have done jointly with these partners, just to show you some examples. 
  
One of them is the glass break reference design. 
  
So this daughterboard was actually designed to be put onto our FG28 Explorer Kit. 
  
FG28, because FG28 is the sub-GHz part, and the vast majority of these security sensors and home sensors, like glass break detection and smoke detection, run on sub-GHz protocols. 
  
We have designed this daughterboard. 
  
If you are interested, you can reach out to your sales contact and you can get some samples of it. 
  
It's not publicly available yet, but through our sales, you can request one. 
  
And we have partnered up with the company Aizip. 
  
They provide a high-end, production-ready glass break detection algorithm, while we do all the hardware and the full sample application, which ensures three years of battery life. 
  
You are able to test, you are able to measure the algorithm. 
  
Of course, you need to pay a license for that. But for testing, if you are interested in this application, it is available already. 
  
The next one is a very nice demonstration of two use cases at once. 
  
We are doing nice gesture detection, where the camera recognizes your hand gestures, and we also provide a demo of face detection, which is an object detection algorithm. 
  
Both of the demos, as you see, run with a very decent inference time. 
  
And okay, these numbers could be better, of course, but just consider that this is a Cortex-M33 running at only 79 MHz. 
  
Plus, it is doing the radio communication in the meantime, whether it is Bluetooth or Zigbee, and compared to that, these numbers are very good. 
  
Also, the energy consumption numbers are very good, because we offload the vast majority of the machine learning compute to the MVP accelerator engine that we have on the die. 
  
And the other benefit of this ModelCat platform, as I mentioned earlier, is that they create a machine learning model, with the help of machine learning, that is actually optimized for this device, leveraging those accelerated kernels. 
  
That's why we are able to do this face detection at seven to ten frames per second, which is pretty compelling considering that this is an M33 running at 79 MHz. 
  
And the third example was done with the partner Unikie. 
  
We have shown this demo at the Embedded World Trade Show. 
  
The picture has been taken there. 
  
This is the tire pressure monitoring system I mentioned. 
  
There is this car. 
  
In the car, there is a two-by-two antenna array, plus our BG24 part. 
  
And the machine learning algorithm actually gets signals through this two-by-two antenna array from these nodes. 
  
Those are BG22 nodes acting as beacons. 
  
The machine learning analyzes those incoming radio signals, the space is divided into four quadrants, and the machine learning tells you which quadrant the radio signals are coming from. 
  
Why is it important? 
  
Because the standard positioning algorithm would be very complicated, very slow, and also memory-consuming. 
  
While with machine learning, this algorithm is just super simple and super fast. 
  
And it goes even beyond this use case. The car was an obvious choice for the demo, but you can think further: if you are a smart door lock maker, it's very easy to tell if something is inside or outside the door, or to the left or right of you, or up or down. 
  
So these problems, which were not really trivial to resolve earlier, are getting very easy with the help of machine learning. 
  
So this is why this demo is so close to my heart. 
  
Just getting back to the software pipeline. 
  
So once you already have the model, whether you got it from a partner or made it yourself, we have various tools for deploying it onto our devices. 
  
And deployment also means optimization and preparation of the model: converting it, optimizing it, and deploying it. 
  
So as I said, we support the TensorFlow Lite ecosystem, which is now called LiteRT, so don't be confused. 
  
Nothing has changed. 
  
Google has just renamed the TensorFlow Lite to LiteRT. 
  
That's the new name of their edge AI runtime: TensorFlow Lite's new name is LiteRT. 
  
Interestingly, TensorFlow Lite Micro is still called TensorFlow Lite Micro, so that could cause some confusion, but nothing has changed. 
  
What we provide here, and it's already released, is a model profiler. 
  
That's one of the most important parts of the optimization. 
  
Once you have the TensorFlow Lite model, you pull it into this model profiler, and it will actually tell you how the model is going to operate and run on our device. 
  
This means you get the inference time, the inference energy, and a layer-by-layer analysis, so you will be able to see if any layer is failing or if you are stuck in the execution, and you will be able to see whether everything fits in the tensor arena. 
  
It's a pretty comprehensive tool, and we also have a compiler that helps you to optimize the model execution. 
  
So if you store the model in external memory, which is typically the case because it's too big to keep in the SRAM, we have a prefetch compiler that optimizes the execution of the model, and we can achieve much better performance than if you just stored it there without using the compiler. 
  
I know the time is going very fast, and I would like to give some time for the questions. 
  
So let me jump into the demonstration where Zsombor is going to show how to use the profiler I was talking about and how to create this voice control application. 
  
Hi, I'm Zsombor, and in this video, I show you how to get started with the AI/ML development on Silicon Labs devices. 
  
We'll set up the required tools, create a voice control example, look at how a model is added to the project, profile it, deploy it to hardware, and take a look at the code that turns inference results into application behavior. The best place to start is the AI/ML developer journey page. 
  
After a short introduction, a step-by-step guide can be found at the bottom of the page. 
  
First, you will need hardware that is ideal for running machine learning applications. 
  
Here you can see our suggestions to get started with the most important functionalities and properties listed. 
  
As you can see, all these parts have an MVP, which is short for matrix vector processor. 
  
The purpose of this hardware is to accelerate floating-point operations and, by offloading computationally intensive operations, save energy. 
  
Today, I am going to demonstrate a voice control example, which requires a microphone, and for that I chose the EFR32xG26 dev kit. 
  
A Silicon Labs account is helpful for the full developer experience, but for this demo, the main requirements are the board and the software developer tools that I'm going to show soon. 
  
The next step is setting up the development environment. 
  
On this page, you can download the Simplicity installer, which will give you access to the software tools needed for developing with Silicon Labs devices. 
  
Choose the correct installer based on your system. 
  
In the installer, there are several options to get the required software packages. 
  
If you choose the technology install, the most important packages will already be selected. 
  
Make sure to add the AI/ML option from the optional packages. 
  
This way, the newest Simplicity SDK, the AI/ML SDK, and also the machine learning profiler tool will be installed. 
  
The last application we need is Visual Studio Code with the Simplicity Studio for VS Code extension, which you can find on the extensions page. 
  
Let's get started with building the application. 
  
For a quick demo, choose one of our examples in Simplicity Studio 6. 
  
Go to the project page and select create new project. 
  
At the select device option, choose your hardware either by selecting the connected board at the top or by searching for the board. 
  
Scroll down in the filters and select machine learning to see all available example projects for your device. 
  
Today, I'm showing the AI/ML SoC voice control light example. 
  
After creating the project, the SLCP file will be automatically opened. 
  
This file describes the components included in the project. 
  
For example, LED and microphone drivers, IO stream for debug logging, and also one of the most important components for AI projects, the TensorFlow Lite Micro, which is designed to run machine learning models on microcontrollers. 
  
When adding this component, the tensor arena size has to be configured. 
  
This example is ready to run with the current value, but it can be set to minus one to tell the system to dynamically determine the optimal arena size at runtime. 
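  
As a rough sketch of what that configuration amounts to (the exact header and macro names may differ in your generated project, so treat these as illustrative), the arena size is just a compile-time value: 
  
// Generated TensorFlow Lite Micro configuration (names are illustrative).
// Tensor arena size in bytes used for the model's input, output, and
// intermediate tensors:
// #define SL_TFLITE_MICRO_ARENA_SIZE  (10240)

// Setting it to -1 asks the runtime to estimate the optimal arena size at
// startup instead of using a fixed value:
#define SL_TFLITE_MICRO_ARENA_SIZE  (-1)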
  
This example has debug logging using IO stream enabled, so we will be able to look at some debug messages when running the project. 
  
You can click the open in VS Code button to quickly jump to the editor. 
  
Looking at the structure of the project, you can find the main.c, app.c, and voice control light files that describe the application code. 
  
The config/tflite folder contains the TensorFlow Lite model used by the application, which is the .tflite file. 
  
This exact model is built to recognize the on and off keywords. 
  
When you replace or add the model here, the project's conversion flow generates the C source artifacts needed to compile that model into the firmware image. 
  
To demonstrate this automatic generation, I created a new empty example. 
  
I only added the AI/ML extension and the TensorFlow Lite Micro component. 
  
Now simply create the tflite folder inside config and drop the .tflite model file in there. 
  
The C source files are now added to the autogen folder. 
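  
In essence, the generated sources embed the flatbuffer as a constant byte array that is compiled and linked into the firmware image, roughly along these lines (file and symbol names are illustrative; the real ones come from the SDK's conversion step): 
  
// autogen model source (illustrative)
#include <stdint.h>

// The .tflite flatbuffer, byte for byte, so the runtime can read it
// directly from flash at startup.
const uint8_t sl_tflite_model_array[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, /* ... */
};
const uint32_t sl_tflite_model_array_len = sizeof(sl_tflite_model_array);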
  
Let's take a look at the next step in the developer journey, which is the build your own solution page. 
  
We already have a model and know how it can be added to the project, but before deploying, it should be validated. 
  
For that, I would like to show you another tool which is called ML Profiler. 
  
This tool can be found in the tools menu inside Studio 6. 
  
You can either browse or simply just drag and drop your tflite model file here, select your device, and click profile. 
  
As soon as profiling finishes, the profiling summary page appears. 
  
Here you can find many details about how your model performs. 
  
It shows the flash usage, RAM usage, the number of MCU and accelerator cycles and stalls, and a whole per-layer summary where you can take a look at each layer and its execution time on both the MCU and the hardware accelerator. These metrics are useful for validating and optimizing your model before deploying it to production. 
  
You also have the option to save the report in JSON or text format. 
  
On the trace view, you can also look at a timeline of performance data and even zoom into specific sections. 
  
After validation, we can proceed to deploy the model. 
  
The main file calls app_init, the function which runs user code. 
  
This calls the voice control initialization that creates the voice control task. 
  
The task periodically updates the input features, runs the inference, and processes the output. 
  
Here you can see how the code updates the input tensor and runs the inference. 
  
The output is stored in the global output tensor. 
  
The application controls the LED based on the output evaluation and also prints debug logs. 
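  
Put together, one iteration of that task looks roughly like the sketch below. This is a simplified illustration, and the helper names (update_input_features, led_turn_on, led_turn_off) and the class ordering are assumptions rather than the exact code of the voice control light example. 
  
#include <cstdio>
#include "tensorflow/lite/micro/micro_interpreter.h"

// Assumed to be created and allocated during initialization.
extern tflite::MicroInterpreter* interpreter;

// Hypothetical helpers standing in for the example's audio frontend and
// LED driver calls.
extern void update_input_features(TfLiteTensor* input);
extern void led_turn_on(void);
extern void led_turn_off(void);

// One periodic step: refresh the input tensor, run inference, act on output.
void voice_control_step(void) {
  update_input_features(interpreter->input(0));

  if (interpreter->Invoke() != kTfLiteOk) {
    printf("Inference failed\n");
    return;
  }

  TfLiteTensor* output = interpreter->output(0);
  const float on_score  = output->data.f[0];  // class order is model-specific
  const float off_score = output->data.f[1];
  const float threshold = 0.8f;

  if (on_score > threshold) {
    led_turn_on();
    printf("Detected: on\n");
  } else if (off_score > threshold) {
    led_turn_off();
    printf("Detected: off\n");
  }
}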
  
Now let's try the example. 
  
On the Simplicity Studio extensions view, hover your mouse over the name of the project and look for the hammer icon to build. 
  
Once it's finished, click on the chip icon next to the build option to flash the application to the device. 
  
As this example has debug logs enabled and contains some prints based on the output, we can check those. 
  
I'm going to use the Simplicity Commander tool, which can be found in the Tools menu in Simplicity Studio 6. 
  
Let's select the kit, go to the VCOM view, and connect to the device. 
  
Now try saying the on and off keywords. 
  
On. Off. On. Off. 
  
From now, you can swap in your own model, use ML Profiler to evaluate whether the model is a good fit for your target device, and adjust the code that processes the inputs and outputs. 
  
Thank you for your attention. 
  
Thank you, Zsombor. 
  
That was a great presentation on how easy it is to kick off machine learning projects, and I hope you are going to be successful with this in the future. 
  
And as I mentioned, this was a very simple demo. 
  
We have way more examples in the GitHub repo. 
  
You can check it out. 
  
You can play with those. 
  
I would encourage everyone to try, for example, our Pac-Man demo, where you can play a real-time Pac-Man game with voice control on the same developer kit. 
  
That's so fun. 
  
And once I'm able to change my slides, I would like to give some insights into our currently available machine learning parts, the machine learning-focused parts. 
  
Before talking about that, I would like to highlight that these parts have the machine learning accelerator, which is best for these kinds of applications, but we actually support machine learning on all of our parts. 
  
So that's a common misconception. 
  
You do not necessarily need an accelerated part. 
  
If the application is very simple and you do not need that much horsepower, you can use a part without an accelerator, and you can execute machine learning on the core itself. 
  
But these parts are special because we have this MVP accelerator there. 
  
MVP stands for Matrix Vector Processor. 
  
So this is a dedicated engine sitting on the die next to the Cortex-M, tightly coupled to the memory interfaces, and it actually has DMA access. 
  
So that enables that sub-processor to do the machine learning compute while the CPU is doing other tasks or the CPU is actually sleeping. 
  
So that's one of the key benefits. 
  
So it's not just faster by default, and not just more efficient by default because this engine was optimized for multiplying and accumulating matrices, and all these CNNs are full of those operators. 
  
But since it offloads the CPU, and the CPU can do other things in the meantime, you can get things done faster and get back to sleep faster, so you save more battery life if you are battery powered, or you are just more real-time if you need the speed, and you still won't miss any messages on Bluetooth or on whatever mesh network you are using. 
  
The MG24 part is the first one; we launched it a while ago. 
  
The big brother of that is the MG26. 
  
Both of these parts are primarily 2.4 GHz radios. 
  
They're also multiprotocol, so you can run Zigbee, Thread, Matter, Bluetooth, or even proprietary applications. 
  
The BG parts, the B in this xG naming, are Bluetooth-focused parts. 
  
But both of these have the ML accelerator. I would recommend going first with the 26 because that has the most RAM and the most flash in our product offering. And if you don't need that, you can later optimize down to the 24, but on the 26 you won't see any constraints. 
  
The 28, that's our sub-GHz part, as well as 2.4 GHz, because all of our parts support Bluetooth LE communication, and it has the same machine learning accelerator; and we have the Wi-Fi part, which also has the same machine learning engine. 
  
However, it's a bit of a different architecture, because this is an M4 running at a higher clock frequency. 
  
But all the parts have the common machine learning support, as you have seen in the presentation by Zsombor. 
  
You will follow the same track, whichever part you use from here. 
  
And what I would recommend, if you are really starting or kicking off a project, is to go with our so-called developer kits or explorer kits, because each and every part has its own. 
  
So for the 24, 26, we have the dev kit with the same form factor, with the same set of sensors. 
  
It's pretty handy. 
  
There is also a coin cell battery holder on the opposite side, so you can really do remote-sensor, battery-operated simulations or demos. 
  
A decent set of sensors ensures that you can kick off with all the microphone- or accelerometer/IMU-based use cases. 
  
The majority of our demos are based on microphones or IMU data. 
  
Of course, the camera is either I2C or SPI or parallel. 
  
On the other hand, these boards also have this Qwiic connector here, which lets you plug in all kinds of additional third-party sensors that you can buy, for instance, from SparkFun Electronics or MikroE, which support these expansion bus platforms. 
  
And the other kit, it's called the explorer kit, has a bit different form factor. 
  
It has no sensors, but it has this mikroBUS connectivity, so you can plug and play those devices from SparkFun or MikroE that support mikroBUS. 
  
So there is plenty of options here. 
  
These are commercially available on our webpage or from any of the distributors like Digi-Key or Mouser. 
  
I would recommend kicking off the project with these. 
  
We have the same kind of dev kit and explorer kit for the 28, and a very similar developer kit for our Wi-Fi part. 
  
And all of them have a USB debugger interface that also powers them up, so they're very easy to use. 
  
With Simplicity Studio, it's plug and play. 
  
You will immediately have it listed there, and you can kick off a project in just a few minutes. 
  
And with this, I believe that's a wrap-up. 
  
Just a bit more information on how to learn more about this. 
  
So the machine learning webpage is one source of truth where we have the developer journey that you have also seen in the video. 
  
We have a dedicated page for the model profiler. 
  
I would also recommend to check out the Works With presentations. 
  
Works With is our annual conference, which has both an in-person and a digital online version. 
  
And every year, we have at least two, three, four machine learning-related sessions by our partners or internally. 
  
I would recommend watching those back. 
  
Very nice sessions, a lot of learning there. 
  
And of course, if you have any questions, you can go to our salespeople globally or visit our community page, where anyone can ask any question and you will surely get an answer in a few days. 
