Integrating artificial intelligence (AI) and machine learning (ML) into edge devices is one of the most highly anticipated developments in IoT. Smart devices that are trainable, actionable, and capable of extracting information and learning from their environment are becoming more contextually aware, and ultimately more useful. Performing AI at the edge brings significant advantages, including low latency, reduced bandwidth, lower power and cost, and improved privacy and security. AI enables capabilities in small microcontrollers that were historically out of reach with conventional code, allowing better decision-making in edge nodes. Adding embedded intelligence to IoT devices will create new opportunities for manufacturers – this is at the heart of why we are teaming up with SensiML, a leading provider of AI and ML software.
Accelerating AI IoT Development
SensiML offers cutting-edge software that enables ultra-low power IoT endpoints that implement AI and transform raw sensor data into meaningful insights at the device itself. SensiML’s Analytics Studio also provides a comprehensive development platform that enables developers with minimal data science expertise to build intelligent endpoints up to 5X faster than what’s possible with hand-coded solutions. This means that customers can fast-track their development projects and get AI/ML embedded into their design in weeks instead of the couple of years that data science projects usually take. The combination of SensiML Analytics Studio and Silicon Labs’ wireless SoCs and MCUs will make it possible for developers to add features, reduce complexity, and take advantage of low-power, low-cost, small-footprint designs. The SensiML Analytics Toolkit suite automates each step of the process for creating optimized AI IoT sensor recognition code.
What is the Difference Between AI and ML?
Both AI and ML belong to the same field of computer science, but while many people use the terms interchangeably, they do have different meanings.
AI is the study of "intelligent agents:" any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.
ML is the study of computer algorithms that improve automatically through experience.
An AI system is concerned with maximizing its chances of success.
ML is a subset of AI that allows a machine to learn automatically from past data without being explicitly programmed.
AI can help simple MCU-based systems solve complex problems.
ML algorithms are used where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
The Benefits of Automated Machine Learning and How it Works
Automating the process of constructing machine learning models brings a host of benefits to developers when it comes to tasks that would otherwise require specialized backgrounds. For example, without automated machine learning, or AutoML, the following tasks are left to the modeler to determine based on their own understanding of the problem, desired model performance, and – most critically – their expertise in the proper application of signal processing and machine learning classifiers:
AutoML helps by employing high-performance computing and search optimization algorithms to augment the user's knowledge in the task of constructing models. The advantages of AutoML include the ability to evaluate hundreds of thousands or even millions of model permutations in the time it would take a human data science expert to evaluate just a few. With directed search constraints, AutoML in the hands of a skilled user can focus searches on the most promising permutations rather than simply executing brute-force grid searches. This makes AutoML a powerful tool for algorithm development, whether it's being used by an AI novice or a seasoned data science expert.
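To make the search idea concrete, here is a minimal, purely illustrative sketch of what an AutoML-style search does: enumerate combinations of feature extractors and classifier parameters over labeled sensor windows and keep the best-scoring pipeline. The data, feature names, and thresholds are all hypothetical; real tools such as SensiML's Analytics Studio search a vastly larger space with far smarter optimization than this brute-force grid.

```python
import statistics
from itertools import product

# Hypothetical labeled sensor windows: (samples, class label)
windows = [([0.1, 0.2, 0.1, 0.3], 0), ([2.0, 2.2, 1.9, 2.1], 1),
           ([0.2, 0.1, 0.2, 0.2], 0), ([1.8, 2.3, 2.0, 1.9], 1)]

# Candidate feature extractors and decision thresholds to search over
features = {"mean": statistics.fmean,
            "peak": max,
            "range": lambda w: max(w) - min(w)}
thresholds = [0.5, 1.0, 1.5]

def accuracy(feat, thr):
    # Score a one-feature threshold classifier over all labeled windows
    hits = sum((feat(w) > thr) == bool(label) for w, label in windows)
    return hits / len(windows)

# Brute-force "grid search" over every feature/threshold permutation
name, thr, score = max(
    ((n, t, accuracy(f, t)) for (n, f), t in product(features.items(), thresholds)),
    key=lambda candidate: candidate[2])
```

A directed search would prune this grid to the most promising regions instead of scoring every permutation, which is where AutoML's speed advantage over manual modeling comes from.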
With this partnership, we get closer to living in a smarter, more connected world, and we are proud to have SensiML as a partner in this journey. For more information on SensiML and our technology partner network, please visit our Design Partner Networks.
To learn more about what we are doing with artificial intelligence and machine learning click here.
You have probably read or heard that phase noise is the frequency domain equivalent of jitter in the time domain. That is essentially correct except for what would appear to be a somewhat arbitrary dividing line. Phase noise below 10 Hz offset frequency is generally considered wander as opposed to jitter.
Consider the screen capture below where I have measured phase noise down to 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left hand side and jitter is on the right hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter and why do we care? From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s both slow in terms of phase modulation and in how long it takes to measure.
The topic of wander covers a lot of material. Even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go into some detail regarding a primary wander metric: MTIE, or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV, or Time Deviation. Finally, I plan to wrap up with some example lab data.
Some Formal Definitions
The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96), Definitions and terminology for synchronization networks, defines jitter and wander as follows.
4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz).
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz).
Similarly, the SONET standard Telcordia GR-253-CORE states in a footnote:
“Short-term variations” implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy.
Wander and jitter are clearly very similar since they are both “variations of the significant instants of a timing signal from their ideal positions in time”. They are also both ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference.
Consider by analogy the electromagnetic radiation spectrum, which is divided into several different bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense, these are all “light”. However, the different types of EM radiation are generated and detected differently and interact with materials differently. So it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander versus jitter case in that these categories of phase fluctuations differ technologically.
Why 10 Hz?
So, how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would just track wander, and so it could accumulate. Networks have to use other means such as buffers or pointer adjustments to accommodate or mitigate wander. Think of the phase noise offset region, 10 Hz and above, as “PLL Land”.
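This "PLL Land" intuition can be illustrated with a simple first-order low-pass model of a PLL's jitter transfer function. The model and the 10 Hz loop bandwidth below are assumptions for illustration only: phase fluctuations well below the loop bandwidth pass through nearly untouched (the PLL tracks wander), while fluctuations well above it are attenuated.

```python
import math

def jitter_transfer_mag(f_hz, f3db_hz):
    """Magnitude of a first-order low-pass model of PLL jitter transfer."""
    return 1.0 / math.sqrt(1.0 + (f_hz / f3db_hz) ** 2)

# Hypothetical PLL with a 10 Hz loop bandwidth
for f in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"{f:7.1f} Hz offset -> |H| = {jitter_transfer_mag(f, 10.0):.4f}")
```

Sub-10 Hz wander sails through with a gain near unity, which is why networks needed other means, such as buffers and pointer adjustments, to manage it.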
Things have changed since these standards were written. Back in the day, it was uncommon or impractical to measure phase noise below a 10 Hz offset. Now phase noise test equipment can go down to 1 Hz or below. Likewise, with digital and FW/SW PLLs, it is possible to build very narrowband loops that can provide some "wander attenuation". Nonetheless, the 10 Hz offset remains a useful dividing line and lives on in the standards.
Clock jitter is due to the relatively high frequency inherent or intrinsic jitter of an oscillator or other reference ultimately caused by flicker noise, shot noise, and thermal noise. Post processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systemic or deterministic jitter components also can occur due to crosstalk, EMI, power supply noise, reflections etc.
Wander, on the other hand, is caused by slower processes. These include lower frequency offset oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see Understanding Jitter and Wander Measurements and Standards in the references below.
Since wander mechanisms are different, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards.
Wander Terminology and Metrics
You may recall the use of the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements.
By contrast, wander is typically measured with time domain equipment. Counterpart definitions apply as listed below.
Wander has its own peculiar metrics too. In particular, standards bodies such as the ITU rely on masks that provide limits to wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See, for example, ITU-T G.8262.
Very briefly, MTIE looks at peak-peak clock noise over intervals of time as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering. We will discuss TDEV next time.
Before going into detail about MTIE, let’s discuss the foundational measurements Time Error and TIE (Time Interval Error). These are both defined in the previously cited ITU-T G.810.
Time Error (TE)
The Time Error function x(t) for a measured clock generating time T(t) versus a reference clock generating time Tref(t) is defined as x(t) = T(t) - Tref(t). The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
Time Interval Error (TIE)
Similarly, the Time Interval Error function is defined as TIE(t; tau) = x(t + tau) - x(t), where the lower case Greek letter tau is the time interval or observation interval.
Maximum Time Interval Error (MTIE)
MTIE measures the maximum peak-peak variation of TIE for all observation times of length tau = n*tau0 within measurement period T. ITU-T G.810 gives the following formula for estimating MTIE from N samples of Time Error x(i):

MTIE(n*tau0) ~ max over 1 <= k <= N-n of [ max over k <= i <= k+n of x(i) - min over k <= i <= k+n of x(i) ], for n = 1, 2, ..., N-1
The sampling period tau0 represents the minimum measurement interval or observation interval. Many synonymous terms are used in the industry and should be recognizable in context: averaging time, sampling interval, sampling time, etc. If you are using an oscilloscope to capture TIE data, tau0 could be every nominal clock period. However, most practical measurements over long periods of time only sample the clock; tau0 would then correspond to a frequency counter's "gate time", for example, when post-processing frequency data to obtain phase data.
An MTIE Example
It's easiest to show the general idea at this point with an example. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau = 1*tau0 observation interval, or window, as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0, as is customary, to show changes in Time Error or phase since the start of the measurement.) The initial xppk peak-peak value at the location shown is about 1.1 ns – 0 ns = 1.1 ns.
Now slide the tau=1*tau0 observation interval right and the next xppk peak-peak value is 1.4 ns – 1.1 ns = 0.3 ns.
If we continue in this vein to the end of the data, we will find the worst case to be between 17*tau0 and 18*tau0 and the value is 7.0 ns – 4.0 ns = 3.0 ns. Therefore, the MTIE for tau=1*tau0 is 3.0 ns.
I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx. Note that the first value in the plot is 3 ns as just mentioned. This is a relatively simple example for illustration only. MTIE data typically spans many decades and are plotted against masks on logarithmic scales.
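The sliding-window procedure above is also straightforward to code. Below is a minimal Python sketch (the sample Time Error data are made up for illustration, not taken from the spreadsheet) that computes MTIE for each observation interval tau = n*tau0 exactly as described: slide a window of n+1 samples across the Time Error data, record the peak-peak value in each window, and keep the worst case.

```python
def mtie(x, n):
    """MTIE for observation interval tau = n*tau0, given Time Error
    samples x taken every tau0. Each window spans n+1 samples."""
    return max(max(x[k:k + n + 1]) - min(x[k:k + n + 1])
               for k in range(len(x) - n))

# Made-up Time Error samples in ns, starting at 0 as is customary
te_ns = [0, 1, 3, 2, 5, 4, 7, 6]

# MTIE versus observation interval, for n = 1 up to the full record
mtie_curve = [mtie(te_ns, n) for n in range(1, len(te_ns))]
```

Note that the resulting curve can never decrease as tau grows, since a longer window always contains every shorter window's excursion; this monotonic behavior is characteristic of MTIE plots.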
However, even this simple example suggests a couple of items to note about MTIE plots:
Why is MTIE Useful?
MTIE is a relatively computation-intensive measurement, so what good are these types of plots? There are at least two good reasons besides standards compliance:
In this post, I have discussed the differences between wander and jitter, the motivation for understanding wander, and delved into MTIE, a wander metric important to standards compliance and useful in sizing buffers.
I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV or Time Deviation.
As always, if you have topic suggestions or questions appropriate for this blog, please send them to firstname.lastname@example.org with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on.
ITU-T G.810, Definitions and terminology for synchronization networks
Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria
The official version is orderable, but not free. My old copy is Issue 3, September 2000, but the fundamentals have not changed with the newer issues.
Understanding Jitter and Wander Measurements and Standards, 2003
This old Agilent (now Keysight) document remains a treasure, especially for SONET/SDH jitter and wander. See “Cause of wander” starting on p. 118.
ITU-T G.8262, Timing characteristics of a synchronous equipment slave clock
K. Shenoi, Clocks, Oscillators, and PLLs: An introduction to synchronization and timing in telecommunications, WSTS 2013, San Jose, April 16–18, 2013
An excellent tutorial. See slide 12.
L. Cossart, Timing Measurement Fundamentals, ITSF, November 2006
Another excellent tutorial. See slides 40–41.
Industrial environments demand a lot from control systems. Devices such as programmable logic controllers (PLCs) must operate continuously with various components and as little maintenance and downtime as possible. However, a PLC is only as good as the input/output capabilities of the digital channels connected to the industrial ecosystem. Harsh, noisy environments and various unknown factors can all contribute to design challenges that affect digital channel reliability, resulting in possible circuit damage, downtime, and system failure. In the dual webinar sessions Protecting 24 V Digital Outputs from the Unknown and Factories are Dirty – Protecting Industrial Digital Inputs, senior product manager Asa Kirby and applications engineers Travis Lenz and Kevin Huang describe the design challenges specific to industrial digital channels and how to mitigate them using Silicon Labs' Si834x and Si838x digital isolator devices.
Industrial ecosystems present a multitude of conditions that can result in damage to digital input and output channels. The most common challenges include:
Input/output-specific challenges include managing overload conditions and driving inductive loads for outputs, and device compatibility and assembly/installation protection for inputs. Industrial systems must be able to handle all these varied design challenges while operating in harsh environments.
Silicon Labs’ Digital Isolator Solutions
Silicon Labs' digital isolators provide optimal solutions to the unique challenges of industrial environments. Our Si834x isolated smart switches are ideal for driving resistive and inductive loads, including solenoids, relays, and lamps commonly found in industrial control systems. They are fully compliant with IEC 61131-2, so they interoperate well with other channels. Each switch can detect an open-circuit condition and is protected against over-current, over-voltage from demagnetization (inductive kick or flyback voltage), and over-temperature conditions. An innovative multi-voltage smart clamp can manage an unlimited amount of demagnetization energy (EAS). Si834x switches are available with parallel or SPI input types and sourcing or sinking output types. With substantial power savings and a compact 9x9 DFN package, these switches reduce board space and design headaches.
Our Si838x isolated multi-channel input isolators are high-density, highly flexible devices that are ideal replacements for traditional optocouplers. They offer eight channels of 24 V digital field interface in a single compact QSOP package with integrated safety rated isolation. With a few external components, this structure provides compliance to IEC 61131-2 switch types 1, 2, or 3. The input interface is built on our ground-breaking CMOS-based LED emulator technology, which means the devices can handle sourcing or sinking configurations without a power supply on the field side. By utilizing our proprietary silicon isolation technology, these devices support up to 2.5 kV RMS withstand voltage, enabling high-speed capability, high noise immunity of 25 kV/µs, reduced variation with temperature and age, and better part-to-part matching. One Si838x isolator can replace eight traditional optocouplers, making them ideal solutions for space-constrained industrial facilities.
Watch these webinars to learn more about how our digital isolators provide optimal solutions to the unique challenges and harsh conditions of industrial environments: Protecting 24 V Digital Outputs and Factories are Dirty. To learn more about our Si834x and Si838x devices, contact your Silicon Labs sales representative.
Silicon Labs recently received the highest level of certification available (see press release) through the well-known Platform Security Architecture, or PSA. This Level 3 certification, designed to provide laboratory assessment of IoT chips with substantial security capabilities, represents a significant milestone for chip vendors targeting connected devices. We're actually the first silicon provider to achieve this, but what does it mean, and why should any device manufacturer care?
What is Platform Security Architecture?
Before Arm developed PSA Certified and shared it with the world, it was essentially left to each silicon vendor to develop its own security systems. Of course, this resulted in varying degrees of robustness and confusing terminology describing the different solutions. Arm responded by spending several years talking to security experts in the semiconductor world and distilling all of those good ideas into a single security architecture specification they named the “Platform Security Architecture”, with the mission of providing what they called a “Hardware Root of Trust” in a secure microcontroller.
Some tenets of this “Hardware Root of Trust” philosophy include the following functions:
Enter PSA Certified
If Arm had stopped there, customers would still be taking the word of silicon vendors about its PSA implementation. Arm recognized this and created the PSA Certification process. They formed psacertified.org, joining other heavy hitters in the security certification industry including Brightsight, Riscure, UL Security Solutions, and TrustCB.
PSA Certified's first priority was to write a simplified protection profile, starting with the PSA architecture as a base requirement and then adding assurance levels on top of that. Protection Profiles define “what” security a vendor is claiming in a particular component. The assurance level simply indicates to what extent the security features in the Protection Profile are evaluated or tested.
So PSA Certified set about creating three separate documents. The first was a Level 1 questionnaire, a self-assessment of how a vendor meets the PSA “Root of Trust”. This questionnaire is submitted to TrustCB for scrutiny to prevent manipulation. The other two documents were Protection Profiles for two different levels of assurance against software and physical attacks.
By far the most common attacks are software attacks, which can be either local (the device is in your hands) or remote (you are connecting to the device, wired or wirelessly, via some communication medium). The PSA Level 2 Protection Profile specifically addresses scalable software attacks and details the security functions necessary in the silicon to prevent those types of attacks. PSA Level 2 is not simply a questionnaire; it also requires independent third-party labs to spend a specified amount of time using various methods to try to break the prescribed Level 2 security functions.
PSA Level 3 adds hardware attacks (again, either local or remote), which have historically required more time, more experience, and much more expensive equipment to execute. So, if local hardware attacks aren't as common as software attacks, why would Silicon Labs, or any other vendor, go through the trouble of getting this high level of certification? The answer is that tools are reaching the market that effectively remove two of these barriers by bringing down the experience required and the cost of equipment for a physical attack. For example, NewAE has a product called ChipWhisperer, and for a mere $3,800 you can get a starter kit that makes it possible to perform some pretty effective side-channel analysis attacks, stealing secret keys from the device as they are being used in crypto operations. The same company also sells a $3,300 tool called ChipShouter, an inexpensive EMF fault injection tool that can cause the software in a product to glitch (often called glitch attacks) and allow malware to be injected into the product or do things like unlock a locked debug port. I am sure there are even more advanced and deadly tools available on the dark web; these are just examples of tools that are easily bought by anyone.
The Growing Risks of Inaction Against Physical Attacks
With these relatively cheap tools, a criminal enterprise can pretty easily do some serious damage to a brand, ecosystem, or the bottom line of a company. An easy way to make money if you’re an organized cyber criminal is to steal the intellectual property of a company and sell it to someone who has the resources to produce knock-offs of those devices. It’s estimated that 10 percent of consumer electronic devices sold on the web are counterfeit, including sophisticated devices like a Wi-Fi router. Companies try to protect against IP theft by locking the debug port to prevent someone from simply dumping the whole contents of the product. With the ChipShouter tool, you can simply perform a glitch attack on the software that locks the debug port and boom, all the IP comes spilling out.
Another example might be when you have a sophisticated attestation procedure for your ecosystem to protect against rogue or fake devices joining your network. This requires a secure identity in the device and a secure handshake to verify your device is authentic. With a ChipWhisperer and a real device in your hands, you can steal that secret identity and clone the device easily.
Silicon Labs is committed to anticipating our customers' security needs and addressing them before they become an issue. That's why we've adopted the PSA architecture and achieved its highest level of certification: to create products that proactively stay ahead of this ‘cyber mafia’ rather than being forced to react to them after they've wreaked havoc.
For more information on how Silicon Labs is securing the IoT, visit silabs.com/security.
The healthcare industry is very focused on treating chronic diseases, providing effective aging-in-place support for an increasingly elderly population, and ensuring a smooth transition between inpatient hospital care and outpatient home care. The coronavirus and its impact on remote care have underscored and accelerated the importance of and demand for continuous patient monitoring provided by intelligent sensor solutions connected remotely to a cloud-based infrastructure. This has triggered the need to build secure, low-power wireless end-products that keep end-user data privacy at the core of their security architecture.
That was the topic of discussion I had the pleasure of participating in during a recent Parks Associates Connected Health Summit panel discussion regarding smart medical devices.
I encourage you to watch the discussion, which spanned a range of challenges and opportunities facing smart medical devices, perhaps most importantly the necessity to ensure healthcare data is kept private and secure.
The rise of connected medical devices has caught the attention of hackers, who are launching more attacks on operational and infrastructure targets, typically using ransomware schemes to enrich organized crime groups. As highlighted at the RSA conference in early 2020, the level of sophistication of these ransomware attacks is growing exponentially, and – if left unprotected – vulnerable wireless devices are an effective means to compromise systems remotely using a wide variety of attacks. In order to combat the threat of cybercrime, it’s clear that the individual components being used in medical devices must have an enhanced level of security robustness that delivers security from chip to cloud.
Bluetooth® Low Energy (BLE) has become the most popular wireless connectivity solution for patient monitoring products, and the Bluetooth SIG began introducing protocol-level security features in 2015 with the ratification of BLE 4.2.
In addition to the BLE 4.2 security protocol, more stringent system-level security augmentations must be deployed to most effectively secure data and privacy. This is especially true for BLE, as end-user / patient information is often communicated to the cloud via a smartphone and software application that jointly offer vulnerabilities to hackers attempting to gain control of medical sensors.
Additional security starts with the need to identify the end-product application and the silicon ICs used the first time those ICs initiate a connection to the cloud infrastructure. It is also critical to understand that embedded systems assume the proper software is being executed. To achieve this, a Root of Trust (RoT) must be in place so that true software authentication is performed before any code execution. This ensures that malicious software can be detected and reported, and that additional measures can be deployed as needed, such as immediately cutting off the potentially infected medical product from the network.
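As a purely illustrative sketch of the Root of Trust idea, the toy code below authenticates a firmware image before any of it would run. All names, the key, and the image are hypothetical; a real RoT verifies an asymmetric signature against a public key anchored in immutable hardware, whereas the symmetric HMAC here is a simplification for brevity.

```python
import hashlib
import hmac

# Hypothetical provisioned secret; a real Root of Trust instead anchors
# a public key in immutable hardware and checks an asymmetric signature.
PROVISIONED_KEY = b"device-unique-secret"

def sign_image(image: bytes) -> bytes:
    """Produce an authentication tag for a firmware image."""
    return hmac.new(PROVISIONED_KEY, image, hashlib.sha256).digest()

def secure_boot(image: bytes, tag: bytes) -> bool:
    """Authenticate the image before any of its code is executed."""
    # Constant-time comparison; boot only if the image verifies
    return hmac.compare_digest(sign_image(image), tag)

firmware = b"\x00\x01\x02 application code..."
tag = sign_image(firmware)
```

If even one byte of the image is tampered with, `secure_boot` returns False and the device can report the failure instead of executing malicious code.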
The lifecycle of many medical products is long, often available for purchase for several years after they are first produced. All the while, hacking techniques continue to evolve. New tools can help expose weaknesses, new hacks can occur, and new flaws can be discovered. It is therefore critical that connected medical devices are equipped to be remotely updated through secure over-the-air (OTA) updates.
Silicon Labs made a major announcement in 2020 with its Secure Vault technology on EFR32 Series 2. Secure Vault offers an impressive list of hardware and software security features that can be used to develop extremely robust, secure IoT wireless solutions. These features include Secure Loader with Root of Trust, Secure Debug with lock and unlock capabilities, secure key generation and storage, and advanced hardware cryptography with DPA countermeasures. Secure Vault has achieved tremendous recognition in the market and earned a gold medal in the Connectivity category of the 2020 LEAP (Leadership in Engineering Achievement Program) Awards.
PSA Certified – a respected security certification body for Internet of Things (IoT) hardware, software, and devices created by Arm Holdings – has officially certified Silicon Labs' EFR32MG21 wireless SoCs with Secure Vault at Level 3. Silicon Labs is the world's first silicon innovator to achieve PSA Certified's highest level of IoT hardware and software security protection.