Official Blog of Silicon Labs

      • Silicon Labs Acquires Z-Wave, Bringing Complete Smart Home Connectivity Under One Roof

        Lance Looper | 04/19/2018 | 07:47 AM

        Today, we’ve announced the acquisition of Sigma Designs’ Z-Wave business. Adding Z-Wave to our wireless portfolio gives ecosystem providers and developers of smart home solutions access to the broadest range of wireless connectivity options available today.

        Together, we’ll open the door to millions of potential users of smart home technologies by expanding access to a large and varied network of ecosystems and partners. Z-Wave’s reputation as a leading mesh networking technology for the smart home, with more than 2,400 certified interoperable Z-Wave devices from more than 700 manufacturers and service providers worldwide, coupled with Silicon Labs’ position as the leader in silicon, software, and solutions for the IoT, makes this a great match.

        Silicon Labs has been actively driving the IoT for years, and we recognize the large following Z-Wave has among developers and end customers in the smart home. With our experience in mesh technologies, we are uniquely positioned to advance the standard and grow adoption with input from the Z-Wave Alliance and partners.

        Adding Z-Wave to Silicon Labs’ extensive portfolio of connectivity options allows us to create a unified vision for the technologies underpinning the smart home market: a secure, interoperable customer experience is at the heart of how smart home products are designed, deployed and managed. Our vision for the smart home is one where various technologies work securely together, where any device using any of our connectivity technologies easily joins the home network, and where security updates or feature upgrades occur automatically or on a pre-determined schedule.

        You can learn more in the press release as well as our website.

      • IMONT IoT Hero Creates Cloudless Connections for IoT Devices

        Lance Looper | 04/02/2018 | 02:31 PM

        Silicon Labs recently had the opportunity to speak with Larry Poon, chief operating officer of IMONT, a start-up software company taking a radical approach to connecting IoT devices by circumventing the cloud. Larry shared how IMONT’s interoperable software connects any type of device to other devices, regardless of the manufacturer. Graham Nice from Skelmir, one of IMONT’s key integration partners, joined our conversation to explain how companies are reacting to IMONT’s new IoT connectivity option – and how he sees a potential future move away from the cloud.

        So tell me about IMONT – what exactly do you offer?

        We develop device connectivity software. If a company wants software to connect their devices to other devices, we can help them do so in a unique way.

        We lower the barrier to entry and the ongoing operational costs of scaling out – we do this by being cloudless and hubless. We’re also much more secure, and we’re interoperable. For example, if a utility company wants to offer a smart home solution that includes devices from other manufacturers, it can connect them all using our software. Otherwise, it would have to use different apps to connect the different manufacturers’ products. By not using the cloud, we save a lot of money for certain customers, such as smart home operators. And obviously, if you don’t use the cloud, it’s more secure.

        Can you tell me how your platform avoids using the cloud? And why is it more secure?

        The software is mesh-based, and we do everything locally. So if we have to do any transaction or use analytics, we do it at the edge. That is a big advantage of our system: we never have to connect the device to the cloud. Also, when I say we have no hub, I mean any device in the configuration can be the hub – we don’t require a separate hub. All of the data is within each device itself; therefore, you don’t have to move anything to the cloud. But the cloud option is there, because we have made the software flexible enough, via MQTT, to transmit to the cloud if a customer wants it.

        You can offer this because of your software expertise, whereas a hardware company needs a hub, unless they write software for the edge?

        That’s right. Let’s say Samsung, a device manufacturer, wants its products to connect to other devices in a smart home. Everyone wants choices, so it’s hard to find a home with all Samsung devices. In order for all of those devices to be connected, Samsung would typically create a hub, then use their cloud service to interoperate with the other manufacturers’ cloud services, which is not the most efficient way of doing it. But with our system, we’re already there, we’ve already written the code to connect manufacturers; therefore, we are able to avoid using the cloud and a hub.

        How do you approach customers with your value proposition?

        We’ve been around since August 2016 – so awareness is key right now. We’re a young company, small and lean. We’re knocking on the doors of anyone offering IoT systems, but we also partner with companies like Silicon Labs to offer this solution to your customers, who could be looking for this type of solution, and with implementation partners who can get it done for them.

        Have you seen people searching for your type of solution, or are you educating people about the option?

        It’s a little of both. Every time we talk to someone about it, they say exactly what you said – “oh, this is kind of novel, I never thought about it that way.” But then there’s a certain group of people who are beginning to say, “we don’t really need the cloud.” New articles are starting to crop up about cloudless approaches, but the idea is only just getting noticed. Anyone we end up talking to likes it once they hear it – but to go so far as to say people are actively looking for a cloudless solution? We’re slowly getting there.

        Is data an issue if you’re not using the cloud?

        No, our customers can collect all of the data they want – we give them that flexibility, and they can move it to the cloud if they want.

        So there’s no real drawback to moving away from the cloud?

        No, we don’t think there is. People have no option but to move away from the cloud; data is too expensive.

        Graham, tell me about the Java integration and how your companies work together? 

        Our company is turning 20 years old this year. We started out providing our virtual machine for running Java on set-top boxes in the German-speaking European pay-TV market. Since then, our customers have deployed over 120 million devices using various iterations of that core virtual machine. We have a history of deploying predominantly in the digital TV space around the world.

        In the past six years, we’ve worked in the IoT market, supporting Java-based IoT industry standards and proprietary solutions. In the case of IMONT, we had worked with one of the founders previously and he reached out to us to use our VM to host his new solution.

        Since IMONT’s software runs on Java, our role is to help IMONT’s customers get up and running extremely quickly on various platforms and devices.

        As a close partner, what is your impression of the market reaction to IMONT?

        IMONT has a disruptive approach to deploying IoT. Everybody is all about the cloud, but the cloud has some significant drawbacks. For one, it’s horrendously expensive, and you have vast amounts of data constantly feeding up to the cloud, chewing up bandwidth. You also still have privacy concerns – a lot of consumers have an issue with their personal data being moved to the cloud. All of that data incurs costs for operators. The first reaction IMONT gets from service providers is that it can’t be done. But then IMONT proves them wrong. Yes, it can be done, and when operators see the cost benefits, it becomes a very compelling proposition. A lot of people are realizing that the cloud isn’t the way forward and that edge computing makes more sense. IMONT provides the framework for edge computing, and hopefully we provide the vehicle to get their technology running on low-end devices, bringing the cost point down for service providers in the home. And it’s not just the home – industrial IoT applications are a market for IMONT as well.

        Larry, how did you start using Silicon Labs’ products?

        Our partnership with D-Link strengthened our ties with Silicon Labs. D-Link offers a lot of devices built with Silicon Labs’ technology, so we started making our software work with Silicon Labs.

        Where do you see IoT going in the next 5-8 years?

        From our perspective, we see devices getting smarter than they already are, yielding greater power efficiency and eventually operating independently of the cloud. We also expect the number and types of IoT device deployments to continue to explode, but consumers are pushing for greater security and seamless connectivity, so we will see significant improvements in those areas, as well.


      • Micrium OS Video Series Gets You Started with Kernel Programming

        Lance Looper | 03/29/2018 | 12:43 PM

        If you’re planning to develop IoT applications for the EFM32 Giant Gecko or Pearl Gecko, you’re probably already thinking about using a real-time operating system.

        It’s quite true that many embedded developers can get by with less sophisticated software based on a simple loop. But the latest EFM32 microcontrollers are packed with complex peripherals that require correspondingly complex application software. And designing IoT devices means dealing with both elevated user expectations and challenging design requirements. All this means that it’s become increasingly difficult for your projects to succeed without an operating system.

        So how do you get started? It can be daunting to make the sudden jump from bare-metal programming to kernel-based application development. To help you overcome that hurdle, we're producing a ten-episode video series to smooth the way: Getting Started with Micrium OS.

        The series is hosted by Matt Gordon, who has spent more than 10 years helping developers learn how to maximize the potential of the Micrium real-time operating system. He helped establish the Micrium training program, and is currently RTOS Product Manager at Silicon Labs.

        The first episodes in the series start with some basic information about what a kernel does and how kernel-based applications are structured. Matt covers initialization, how the kernel performs task scheduling, and how context switches pass control of the CPU from one task to another. Later in the series, Matt will discuss synchronization, resource protection, and inter-task communication. The series will leave you with a cohesive picture of real-time kernels and Micrium OS.

        That’s not all: this series is supplemented with some of the best developer documentation ever produced for embedded systems programming. You can visit to learn much more about kernel-based application development and the networking and communication stacks that make up Micrium OS.

        The Micrium OS kernel is available for free download through Simplicity Studio v4. To download and to find out more about Micrium OS, visit:

        To find the series on YouTube, visit:

        And be sure to subscribe to the Silicon Labs YouTube channel to receive notifications of new episodes!

        Check out the first video in the series here:


      • Smart Meter Deployments Still on the Rise, Meters Evolve as Technology Options Increase

        Lance Looper | 03/26/2018 | 10:54 AM

        Although not an entirely new concept, the smart meter market continues to be a major global growth market, based on the device’s ability to greatly improve efficiencies for both utility companies and consumers. Markets and Markets estimated the smart meter market to be worth $12.79 billion in 2017, and it is expected to grow at a CAGR of 9.34 percent from 2017 to 2022.
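        The compounding arithmetic behind these figures is straightforward. A quick sketch (the function name and rounding are mine, not from the report):

```python
# CAGR projection: compound the 2017 market value at the quoted growth rate.
def project_market(base_value_billions, cagr, years):
    """Compound a base market value at an annual growth rate."""
    return base_value_billions * (1 + cagr) ** years

# $12.79B in 2017 at a 9.34% CAGR over the 5 years to 2022:
size_2022 = project_market(12.79, 0.0934, 5)
print(f"Projected 2022 market: ${size_2022:.2f}B")  # ≈ $19.99B
```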

        Interestingly, the first smart meter was developed pre-Internet, in the 1970s, and it wasn’t until the mid-nineties – after the U.S. National Energy Policy Act and similar utility deregulation efforts across the globe – that smart metering really took off. Widespread deregulation set up a market-driven pricing environment for utility companies, creating an immediate need for them to understand their customers’ energy consumption in order to keep costs down. Hence, a crucial need for smart meters was born.

        Modern smart meters record and report, via a communications network, the consumption of electricity, gas, water, or heating/cooling. By obtaining this level of consumption detail in real time, utilities can reduce costs while increasing customer satisfaction, making smart meter deployments a valuable investment for any type of utility company. Smart meters also play a key role in helping regions meet aggressive climate goals set by state and federal governments in many countries.

        The benefits are obvious, but from a designer’s perspective, the types of metering technologies are vast and require detailed knowledge of the market.

        Meter Types

        The most common type of smart meter uses one-way, transmit-only communications and is called an Automatic Meter Reading (AMR) meter. These meters started out as walk-by or drive-by meters, but have since become fully automated with wireless capability, running on a Wide Area Network (WAN).

        Advanced Metering Infrastructure (AMI) meters use two-way communications networks that not only produce a reading, but also control the meter and equipment: they allow the utility to connect or disconnect customers; monitor and anticipate usage changes, enabling smart grid operation; and receive software and security updates.

        Traditional metrology equipment was used in the earliest smart meters, but today almost all new smart meter designs use electronic equipment, referred to in the industry as static meters.

        Electricity meters are probably what most people think of when they hear the term smart meter, and there are two primary kinds. Current transformer (CT) meters were the original type, though now a wide range of MCU-based meters exist, which don’t have the problems associated with transformer-based meters, such as the tendency to saturate under heavy currents and the susceptibility to tampering.

        One of the more popular types of smart meters deployed extensively in Europe and urban areas are Heat Cost Allocator (HCA) devices. These meters are typically used in multi-tenant residential and commercial buildings, and enable a fair cost allocation of a shared heating system, giving tenants heating bills proportional to their usage of the heating system. This meter is hailed by energy conservationists, as it encourages users to reduce consumption, unlike a flat heating bill that doesn’t reward tenants for reduced energy consumption behavior.
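        The proportional allocation an HCA enables can be sketched in a few lines (the function and the sample readings below are hypothetical, purely to illustrate the billing model):

```python
# Heat cost allocation: split a shared heating bill across tenants in
# proportion to their HCA meter readings.
def allocate_heating_cost(total_cost, readings):
    """Return each tenant's share, proportional to usage units."""
    total_units = sum(readings.values())
    return {tenant: round(total_cost * units / total_units, 2)
            for tenant, units in readings.items()}

bill = allocate_heating_cost(1200.0, {"flat_1": 300, "flat_2": 150, "flat_3": 50})
print(bill)  # flat_1 pays 720.0, flat_2 pays 360.0, flat_3 pays 120.0
```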

        In-Home Displays (IHDs) are another desired piece of metering equipment, and they are common in homes that are part of the GB Smart Energy program in Great Britain. These devices have direct wireless connections to the smart meters in the home, and typically use a Zigbee mesh network to display cumulative and real-time usage rates for the various utilities.

        Communications Technologies

        To the surprise of no embedded designer, there are numerous communications technologies to choose from when designing a smart meter.

        Typical installations use a sub-GHz Field Area Network (FAN) with a star or mesh topology, though another popular option is building WAN capability directly into the meter with an M2M connection over 2G, 3G, or 4G. The new NarrowBand IoT standard has improved the power and cost performance of this approach, and numerous unlicensed-band Low Power Wide Area Network (LPWAN) technology providers compete in the same space. Another major communications network is the Zigbee-based Home Area Network (HAN), which is already deployed in more than 23 million homes in the U.K. The HAN meters have a built-in Zigbee radio and come with an IHD.

        Wi-Fi, Bluetooth, and Z-Wave, however, are nowhere to be found in smart meter deployments, due primarily to power constraints. But Bluetooth Low Energy is a viable option if based on a 2.4 GHz radio using a multiprotocol SoC, such as a Silicon Labs Mighty Gecko.

        The Power Play

        Power is not an issue for electricity meters, since they have their own power supply, but it becomes a pivotal issue for heating, gas, and water meters. Specialized lithium batteries lasting close to 20 years have been created for smart meters in recent years, but not all markets embrace these batteries. China is a good example, as it requires utility customers to replace their AA batteries every 12-18 months.

        Maximizing battery life is an important part of smart meter designs, making the underlying technology components critical to creating a high-performance smart meter unburdened by power restrictions.
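        A rough way to see why battery choice dominates these designs is the basic lifetime budget below; the cell capacity, average current, and derating factor are illustrative assumptions, not figures from any particular meter:

```python
# Back-of-the-envelope battery life estimate for a battery-powered meter.
def battery_life_years(capacity_mah, avg_current_ua, derating=0.85):
    """Estimate lifetime in years, derated for self-discharge and temperature."""
    usable_mah = capacity_mah * derating
    hours = usable_mah / (avg_current_ua / 1000.0)
    return hours / (24 * 365)

# A 19 Ah lithium cell at a 100 uA average draw lands near the ~20-year mark:
print(f"{battery_life_years(19000, 100):.1f} years")
```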

        Whatever electronic design is pursued, smart meters will continue to prove their worth as a highly efficient way for utilities to compete and run more efficiently, for consumers to save money, and for societies at large to reduce their environmental footprint.

      • System Integration Considerations for Optical Heart Rate Sensing Designs

        Lance Looper | 03/22/2018 | 09:54 AM

        Morrie Altmejd, a senior staff engineer at Silicon Labs, wrote this article, which recently appeared in Electronic Products Magazine.

        Designing and implementing an optical heart rate monitoring (HRM) system, also known as photoplethysmography (PPG), is a complex, multidisciplinary project. Design factors include human ergonomics, signal processing and filtering, optical and mechanical design, low-noise signal receiving circuits and low-noise current pulse creation.

        Wearable manufacturers are increasingly adding HRM capabilities to their health and fitness products. Integration is helping to drive down the cost of sensors used in HRM applications. Many HRM sensors now combine discrete components such as analog front ends (AFE), photodetectors and light-emitting diodes (LEDs) into highly integrated modules. These modules enable a simpler implementation that reduces the cost and complexity of adding HRM to wearable products.

        Wearable form factors are steadily changing too. While chest straps have effectively served the health and fitness market for years, HRM is now migrating to wrist-based wearables. Advances in optical sensing technology and high-performance, low-power processors have enabled the wrist-based form factor to be viable for many designs. HRM algorithms also have reached a level of sophistication to be acceptable in wrist form factors. Other new wearable sensing form factors and locations are emerging, such as headbands, sport and fitness clothing, and earbuds. However, the majority of wearable biometric sensing will be done on the wrist.

        No two HRM applications are alike. System developers must consider many design tradeoffs: end-user comfort, sensing accuracy, system cost, power consumption, sunlight rejection, how to deal with many skin types, motion rejection, development time and physical size. All of these design considerations impact system integration choices, whether to use highly integrated module-based solutions or architectures incorporating more discrete components.

        Figure 1 shows the fundamentals of measuring heart rate signals, which depend on the heart rate pressure wave being optically extracted from tissue, along with the travel path of the light entering the skin. The expansion and contraction of the capillaries, caused by the heart rate pressure wave, modulates the light signal injected into the tissue by the green LEDs. The received signal is greatly attenuated by the travel through the skin and is picked up by a photodiode and sent to the electronic subsystem for processing. The amplitude modulation due to the pulse is detected (filtering out motion noise), analyzed, and displayed.

        Figure 1. Principles of operation for optical heart rate monitoring.

        A fundamental approach to HRM system design uses a custom-programmed, off-the-shelf microcontroller (MCU) that controls the pulsing of external LED drivers and simultaneously reads the current output of a discrete photodiode. Note that the current output of the photodiode must be converted to voltage to drive most analog-to-digital (A/D) blocks. The Figure 2 schematic shows the outline of such a system. Note that the I-to-V converter creates a voltage equal to VREF at 0 photodiode current, and the voltage decreases with increasing current.
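        The I-to-V stage described above behaves like a transimpedance amplifier referenced to VREF. A minimal model, with VREF and the feedback resistance chosen purely for illustration:

```python
# I-to-V (transimpedance) stage: output sits at VREF at zero photodiode
# current and falls linearly as current increases.
VREF = 1.2          # volts, A/D reference (assumed value)
R_FEEDBACK = 100e3  # ohms, transimpedance gain (assumed value)

def itov_output(photodiode_current_a):
    """Converter output voltage for a given photodiode current."""
    return VREF - photodiode_current_a * R_FEEDBACK

print(itov_output(0.0))   # 1.2 V at zero current
print(itov_output(5e-6))  # ~0.7 V at 5 uA
```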

        The current pulses generally used in heart rate systems are between 2 mA and 300 mA, depending on the color of the subject’s skin and the intensity of sunlight with which the desired signal needs to compete. The infrared (IR) radiation in sunlight passes through skin tissue with little attenuation, unlike the desired green LED light, and can swamp the desired signal unless the green light is very strong or an expensive IR-blocking filter is added. Generally speaking, the intensity of the green LED light where it enters the skin is between 0.1x and 3x the intensity of sunlight. Due to heavy attenuation by the tissue, the signal that arrives at the photodiode is quite weak and generates just enough current to allow for a reasonable signal-to-noise ratio (SNR) of 70 to 100 dB, limited by shot noise even in the presence of perfect, noise-free op amps and A/D converters. The shot noise is due to the finite number of electrons received for every reading, which occurs at 25 Hz. The photodiode sizes used in these designs are between 0.1 mm² and 7 mm². However, above 1 mm² there are diminishing returns due to the effect of sunlight.
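        The shot-noise limit mentioned above follows directly from counting statistics: with N electrons collected per reading, the noise is sqrt(N), so the amplitude SNR is sqrt(N), or 10·log10(N) in dB. A quick check of the electron counts implied by the 70 to 100 dB range:

```python
import math

# Shot-noise-limited SNR: N electrons per reading gives SNR = sqrt(N),
# i.e. 10*log10(N) dB.
def shot_noise_snr_db(electrons):
    return 10 * math.log10(electrons)

print(f"{shot_noise_snr_db(1e7):.0f} dB")   # 70 dB needs ~1e7 electrons
print(f"{shot_noise_snr_db(1e10):.0f} dB")  # 100 dB needs ~1e10 electrons
```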


        Figure 2. The basic electronics required to capture optical heart rate.


        The difficult and costly function blocks to implement in an optical heart rate system design, as shown in Figure 2, are the fast, high-current V-to-I converters that drive the LEDs, the current-to-voltage converter for the photodiode, and a reliable algorithm in the MCU that sequences the pulses under host control. A low-noise (75 to 100 dB SNR) 300 mA LED driver that can be set to currents as low as 2 mA while still creating light pulses as narrow as 10 µs is an expensive block to achieve with discrete op amps.

        The narrow pulses of light down to 10 µs shown in Figure 2 allow the system to tolerate motion and sunlight. Typically two fast light measurements are made for each 25 Hz sample. One measurement is taken with the LEDs turned off and one with the LEDs turned on. The calculated difference removes the effect of ambient light and gives the desired raw optical signal measurement that is, most importantly, insensitive to flickering background light. 
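        The two-measurement scheme can be sketched as a simple subtraction per 25 Hz sample (the readings below are made-up ADC counts):

```python
# Ambient cancellation: one reading with the LED off, one with it on;
# the difference removes the ambient-light component common to both.
def ppg_sample(led_off_reading, led_on_reading):
    """Raw PPG value with ambient light subtracted out."""
    return led_on_reading - led_off_reading

# A 5000-count sunlight baseline present in both readings cancels,
# leaving only the LED-induced signal:
print(ppg_sample(5000, 5230))  # 230 counts of usable signal
```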

        The short duration of the optical pulses both allows and requires a relatively strong light pulse. It is essential to stay brighter than any sunlight signal that may be present, so that the PPG signal carrier is not dwarfed by it. If the sunlight signal is larger than the PPG carrier, then although it may be removed by subtraction, it can be so large that external modulation, such as swinging an arm in and out of shadow, creates difficult-to-remove artifacts. As a result, systems that use low-current LED drivers and large photodiodes to compensate suffer severely from motion artifacts in bright-light situations.

        Much of the desired HRM sensing functionality is available pre-designed and integrated into a single device. Packing most of this functionality into one piece of silicon results in a relatively small 3 mm x 3 mm package that can even integrate the photodiode (PD) itself.

        Figure 3 shows an example of a schematic with an Si118x optical sensor from Silicon Labs. This HRM design is relatively easy to implement. The engineer just needs to focus on the optical portion of the design, which includes optical blocking between the parts on the board and coupling the system to the skin.

        Figure 3. An integrated heart rate sensor requiring only external LEDs.


        While the approach shown in Figure 3 results in a high-performance HRM solution, it is not as small or power efficient as some designers would like. To achieve an even smaller solution, the LED die and the control silicon must be integrated into a single package that incorporates all essential functions including the optical blocking and the lenses that improve the LED output. Figure 4 illustrates this more integrated approach, based on a Silicon Labs Si117x optical sensor.

        No external LEDs are required for this HRM design. The LEDs and photodiode are all internal to the module, which can be installed right below the optical ports at the back of a wearable product such as a smartwatch. This advantageous approach enables a shorter distance between the LEDs and the photodiode than is possible with a discrete design. The reduced distance allows operation at extremely low power due to lower optical losses traversing the skin. 

        Integrating the LEDs also addresses the issue of light leakage between the LEDs and the photodiode. As a result, the designer does not have to add optical blocking to the printed circuit board (PCB). The alternative to this approach is to handle the blocking with plastic or foam inserts and special copper layers on the PCB. 

        Figure 4. A highly integrated HRM sensor module incorporating all essential components.


        There is one more part of an HRM design that the developer does not necessarily need to create: the HRM algorithm. This software block residing on the host processor is quite complex due to the signal corruption that occurs during exercise and motion in general. End-user motion often creates its own signal that spoofs the actual heart rate signal and is sometimes falsely recognized as the heart rate beat.

        If a wearable developer or manufacturer does not have the resources to develop the algorithm, third-party vendors provide this software on a licensed basis. Silicon Labs also offers a heart rate algorithm for its Si117x/8x optical sensors that can be compiled to run on most host processors.

        It is up to the designer to decide how much integration is right for the HRM application. The developer can simplify the design process and speed time to market by opting for a highly integrated module-based approach using a licensed algorithm. Developers with in-depth optical sensing expertise, time and resources may opt to use separate components (sensors, photodiodes, lenses, etc.) and do their own system integration, and even create their own HRM algorithm. Ultimately, when it comes to HRM system design, the developer has a choice of doing it all or purchasing it all.

      • Introducing New PoE Devices

        Lance Looper | 03/05/2018 | 09:54 PM

        This week we’re at APEC 2018, and we’ve just introduced two new PoE powered device families designed for best-in-class efficiency and integration for the IoT. Power over Ethernet is ideally suited for applications that require both power and data at a device connected to an Ethernet switch. Its advantages include lower equipment and installation costs compared to separate data and power cables. It also makes use of the massive installed base of UTP cabling for wired Ethernet networks, and it is part of IEEE’s 802.3at Ethernet standard, which specifies the technical requirements for the safe and reliable distribution of power over the same CAT-5 UTP cabling.

        Our new Si3406x and Si3404 devices offer the highest level of integration available for high-voltage devices on a single power delivery chip and support IEEE 802.3at PoE+ power functionality, power conversion options with up to 90 percent efficiency, robust sleep/wake/LED support modes, and electromagnetic interference (EMI) performance. These features will help developers reduce system cost and help them get to market faster with high-power, high-efficiency PoE PD-powered applications.

        Designers face a number of challenges in creating new devices, including low power conversion efficiency, electromagnetic interference problems, oversized PCBs with high BOM counts, and running out of power headroom. The Si3406x and Si3404 can help relieve all of these through high efficiency, proven EMI results with suppression and control techniques, superior BOM integration, and 30 W of power headroom.

        IP cameras are a good use case because two cables are normally needed: one for power and one for data. With PoE, these two cables are combined into one. With a complete power supply built with Si3406x or Si3404 PD devices, designers can focus on the more value-added portions of an IP camera design.

        The growth of the IoT is raising demand for PoE+ connectivity across application areas. With the increasing popularity of the PoE+ standard, coupled with the requirement to support 30 W designs, these parts represent the next movement in PD interface solutions for homes, businesses, and industrial environments.

        The Si3406x family integrates the control and power management functions needed for PoE+ PD applications, converting the high voltage supplied over a 10/100/1000BASE-T Ethernet connection to a regulated, low-voltage output supply. The highly integrated architecture minimizes printed circuit board (PCB) footprint and external BOM cost by enabling the use of economical external components while maintaining high performance.

        Its high-power PoE+ capabilities also make it possible to develop advanced IoT products including IP cameras with pan/tilt/zoom and heater elements and newer protocol 802.11 wireless access points that demand much from power supplies. The Si3406x family’s on-chip current-mode-controlled switching regulator supports multiple isolated and non-isolated power supply topologies. This flexibility, along with Silicon Labs’ comprehensive PoE/PD reference designs, makes it easier and faster for developers to deploy critical power supply subsystems.

        The Si3406x and Si3404 families bring a large number of additional benefits over our previous single offering, the Si3402.

        • EFFICIENT: With 90% efficiency options enabled by added BOM (such as a FET switch replacing a diode for synchronous rectification), the family can make the best use of 30 W. Further, with the best high-voltage device and BOM integration in the industry, customers enjoy the best cost and size.
        • VERSATILE: By supporting the major topologies (buck, flyback, isolated, non-isolated), the family is flexible enough for any PD application type. It also supports switching between PoE and AC adapter-supplied power.
        • ADAPTABLE: The Si3406x supports sleep and wake modes for the lowest possible standby power consumption. Each IC is resilient to surges per the IEEE specification, and the tunable switching frequency helps the system designer control and eliminate unwanted harmonic emissions.
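        To see where the 30 W headroom lands in practice, note that IEEE 802.3at Type 2 guarantees at least 25.5 W at the PD input after cable losses, so the usable output depends mostly on converter efficiency. A quick sketch with illustrative numbers:

```python
# PoE+ power budget: IEEE 802.3at Type 2 guarantees >= 25.5 W at the PD
# input; the usable DC output then scales with converter efficiency.
def pd_output_power(pd_input_w=25.5, converter_efficiency=0.90):
    """Usable power after DC-DC conversion at the powered device."""
    return pd_input_w * converter_efficiency

print(f"{pd_output_power():.2f} W usable")  # ~22.95 W at 90% efficiency
```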

        For more information, visit: or





      • Timing 101 #7: The Case of the Spurious Phase Noise Part II

        Lance Looper | 02/26/2018 | 07:17 PM

        Hello and welcome to another chapter in our Timing 101 series from Silicon Labs' Kevin Smith.


        In this article, I want to continue last month’s discussion regarding spurs in clock phase noise measurements. There were a few items I just couldn’t include previously due to lack of time and space.

        You will recall from last time that spurs are discrete frequency components in clock phase noise plots. Spurs are typically few in number and low in amplitude, but they are generally undesirable as they contribute to a clock’s total jitter.

        However, spurs can also be used for evaluation and characterization of timing devices. We can use lab sources configured for low-level modulation to apply spurious frequency components, directly or indirectly, as input stimuli to a clock device or system. The resulting output clock spurs are then measured with a spectrum analyzer or phase noise analyzer.  

        In this post, The Case of the Spurious Phase Noise Part II, I will briefly review suitable signal modulation options.  Next I will discuss some notable measurements.  Finally, I will give results for a select example, jitter transfer.


        Modulation Selection, i.e. Not All Spurs are Created Equal

        There are three basic analog modulation options available on most lab-grade generators, i.e., AM, FM, or PM, referring to Amplitude Modulation, Frequency Modulation, and Phase Modulation, respectively. Each has its place in our “spur toolbox.” But first, a digression. Consider each of the spectrum analyzer screen caps below. The carrier is a nominal 100 MHz, and there is a pair of symmetric spurs on each side at 100 kHz offset from the carrier.  Each spur is about 60 dB down from the carrier.

        Can you tell which screen cap corresponds to AM, FM, or PM? No, not really, not without additional information. In this particular example, the images are in alphabetical order.

        So, why are they so hard to distinguish? There are several reasons:

        1. A spectrum analyzer measures only the amplitude of the spectra, but not the phase.  In this sense, it acts like a voltmeter.  See for example Keysight Technologies’ Spectrum Analysis Basics app note.
        2. FM and PM are both angular modulation methods that behave the same way and really only differ by their modulating function. An FM signal can produce PM and vice-versa.
        3. Finally, at low modulation indices, AM, FM and PM sideband amplitudes look very similar.

        Let’s look at the last couple of points in some detail.  The following relations are adapted from the appendix in Keysight Technologies’ Spectrum Analysis Amplitude and Frequency Modulation app note.


        Note: These FM components are the same magnitude as for AM, but unlike AM there is a minus sign in front of the lower sideband. However, since the spectrum analyzer does not preserve phase information, low modulation AM, FM, and PM components look the same.

        In general, the SSB (single sideband) spur-to-carrier ratio for AM, FM, or PM is 20*log10(Modulation Index/2). For example, given a 200 Hz peak-peak frequency deviation and a 100 kHz modulation frequency, we expect an SSB spur level as follows:

        SL = 20 log10 {(200/2)/100E3} = -60 dBc
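
        This arithmetic is easy to script as a sanity check. The sketch below (purely illustrative) reproduces the worked example above: the peak-to-peak deviation is halved to get the peak deviation, which is then referenced to the modulation frequency:

```python
import math

def ssb_spur_dbc(dev_pp_hz: float, fm_hz: float) -> float:
    """Single sideband spur-to-carrier ratio (dBc) for low-index FM.

    dev_pp_hz: peak-to-peak frequency deviation in Hz
    fm_hz:     modulation (spur offset) frequency in Hz
    """
    peak_dev_hz = dev_pp_hz / 2.0               # pk-pk -> peak deviation
    return 20.0 * math.log10(peak_dev_hz / fm_hz)

# The worked example: 200 Hz pk-pk deviation at 100 kHz offset
print(ssb_spur_dbc(200.0, 100e3))               # ≈ -60 dBc
```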

        Now, here’s the practical aspect of using FM versus PM. If your source supports PM, then you can directly enter the amount of peak phase modulation. You need not change this setting as you step the modulation frequency or spur offset frequency.  However, if your source only supports FM, then the frequency modulation index must be maintained per the following relation.

        In this case, you will need to scale the peak frequency deviation Delta-f together with the modulation frequency fm in order to keep the modulation index Beta = Delta-f/fm constant.
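
        As a small illustration of that bookkeeping, the sweep below steps the offset frequency while rescaling the deviation so the index stays fixed. The Beta value of 0.001 is an assumed number chosen to match the -60 dBc example earlier, not a recommended setting:

```python
# Target peak modulation index; 0.001 matches the -60 dBc example above
# (an assumed value, for illustration only).
BETA = 0.001

# Step the spur offset (modulation) frequency and scale the peak
# frequency deviation so that beta = delta_f / fm stays constant.
for fm_hz in (1e3, 10e3, 100e3, 1e6):
    delta_f_hz = BETA * fm_hz                   # required peak deviation
    print(f"fm = {fm_hz:>9.0f} Hz -> set peak deviation {delta_f_hz:.1f} Hz")
```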

        So What Tests Can We Do with Modulation Spurs?

        Generally, we will measure output clock spurs in the frequency domain using either a spectrum analyzer or a phase noise analyzer. We choose different modulation methods depending on what stimulus we need to apply to the system.  The table below summarizes some notable measurements. I will briefly discuss each of these tests and then focus on the last one in a bit more detail.

        You will note that either FM or PM can be used to generate input clock spurs for jitter transfer testing. The only thing you will need to keep track of is the phase or frequency modulation index. Modern AWGs (Arbitrary Waveform Generators) typically support AM, FM, and PM.  Higher frequency RF and Microwave signal generators also support at least FM.


        Here are some more details about each of the tests mentioned in the table.

        Input AM-to-PM Conversion

        A well-designed, high-gain clock buffer will tend to reject AM and pass along only phase (timing) error. However, no input clock buffer is perfect, and some AM-to-PM conversion can take place. The mechanism and amount of such conversion will generally differ depending on the modulation frequency.

        The set-up for this test is straightforward, i.e. apply an input clock with AM and then check for an output clock spur offset at the amplitude modulation frequency.  There are a few considerations to keep in mind when doing this type of test:

        • Keep the modulation index low so there is practically only a single sideband spur of consequence.
        • Vary the modulation frequency over the regions of interest. 
        • Use a limiter on the input to the spectrum analyzer or phase analyzer so that we needn’t worry about AM-to-PM conversion in the instrument.


        Power Supply Rejection

        PSR, or Power Supply Rejection, is similar to the previous test in that AM is applied. However, in this case, it is not the input clock that is modulated.  Rather, AM is introduced indirectly via the power supply, and spurs are then measured as before.  This type of measurement also goes by other names, such as PSRR (Power Supply Rejection Ratio) or power supply ripple testing.

        In addition to the earlier AM-to-PM considerations, there are a few others:

        • We usually want to remove all the bypass capacitors if possible. This eliminates one variable and makes it easier to inject fixed amplitude ripple, e.g. 100 mVpp, into the power rail over the frequency range of interest. It is also fairer when comparing devices.
        • AM needs to be injected into the power supply without impacting instruments or other system components.  We generally use a Bias Tee for this purpose.
        • Consistency is important for low level spur measurements, so try to keep set-ups the same when comparing devices.


        This topic alone deserves separate treatment.  Please see Silicon Labs app note AN491: Power Supply Rejection for Low-Jitter Clocks for further details. Where there are multiple rails, or where removing bypass caps would be a performance issue, you can leave them in and simply do straightforward performance testing as described in Silicon Labs’ app note AN887: Si534x and Power Supply Noise.


        Jitter Transfer

        A relatively quick way to check the transfer curve of a clock PLL chip is to apply low level PM or FM spurs and step the modulation offset frequency from well below the expected loop bandwidth to well above it. Then using a phase noise analyzer, with Max Hold enabled, you will see how the applied spurs roll off. The intersection of the asymptotes of the spur amplitudes allows one to estimate the loop bandwidth.

        You can get a rough sense of what’s going on by looking at the phase noise alone, but using a fixed modulation index input clock allows us to measure the transfer function more precisely. The two screen caps below were taken by applying a phase-modulated 25 MHz input clock (0.2° phase deviation) to an Si5345 jitter attenuator and measuring the phase noise continuously in Max Hold for a 100 MHz output clock.

        In the first case, figure below, the DSPLL bandwidth is set to 400 Hz. The plot shows that the annotated asymptotic lines intersect right around 400 Hz, as expected. The roll-off in the vicinity of the corner frequency is a little over 30 dB/dec.


        In the second case, figure below, the DSPLL bandwidth is set to 4 kHz. This time the plot shows that the annotated asymptotic lines intersect around 4.5 kHz, which is a little wider than the nominal target. The roll-off in the vicinity of the corner frequency here looks closer to 25 dB/dec.


        The use of the Max Hold feature allows us to make a “quick and dirty” manual measurement. However, we could make more careful measurements using averaging and storing spur amplitudes across ensembles of runs in order to accurately characterize the loop bandwidth of the DUT.
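
        The asymptote-intersection estimate itself is easy to automate once spur amplitudes have been recorded. The sketch below fits a flat in-band line and a log-frequency roll-off line, then solves for their crossing; the spur table is made up for illustration and is not data from the plots above:

```python
import math

# Hypothetical spur amplitudes (dBc) vs. offset frequency (Hz) from a
# Max Hold sweep; illustrative numbers only, not measured data.
offsets_hz = [100, 200, 400, 800, 1600, 3200, 6400]
spurs_dbc  = [-60, -60, -60, -63, -69, -75, -81]

# In-band asymptote: flat at the low-offset spur level.
flat_dbc = sum(spurs_dbc[:3]) / 3

# Out-of-band asymptote: least-squares line in log10(f) vs. dBc using
# the last three points (slope comes out in dB/decade).
xs = [math.log10(f) for f in offsets_hz[-3:]]
ys = spurs_dbc[-3:]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
intercept = (sum(ys) - slope * sum(xs)) / n

# The asymptotes cross where flat_dbc == slope * log10(f) + intercept.
bw_hz = 10 ** ((flat_dbc - intercept) / slope)
print(f"Roll-off ~{slope:.0f} dB/decade, estimated bandwidth ~{bw_hz:.0f} Hz")
```

        With real data, averaging spur amplitudes across several runs before the fit, as suggested above, would tighten the estimate.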


        Well, that’s it for this month. In this post, I’ve extended our discussion on spurs in phase noise measurements to include some thoughts on using them for test purposes. I hope you have enjoyed this Timing 101 article.  

        As always, if you have topic suggestions, or questions you would like answered that are appropriate for this blog, please send them to with the words Timing 101 in the subject line.  I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on.




      • Simplify Low-Power, Cloud-Connected Development

        Lance Looper | 02/54/2018 | 08:24 AM

        For the upcoming Embedded World tradeshow in Nuremberg, Germany, the Silicon Labs MCU team is showing off some unique ways to ease the challenges of developing cloud-connected applications. The demo consists of the EFM32 Giant Gecko 11 MCU, which runs Micrium OS and connects to Amazon Web Services via the new XBee3 cellular module from Digi International.


        This particular demo is quite simple – a closed-loop system with an MCU monitoring a temp sensor and controlling a fan. However, the real-world use cases that these building blocks and tools can scale to serve are much more profound.

        For example, many smart city applications including bridge sensors, parking meters, waste management sensors, and others often consist of portable sensor devices that require seamless long-range connectivity to the cloud. They may be battery powered with user demands of 10+ year battery life. They may have lots of sensor inputs and extra features like button inputs and local displays. Finally, they might need to be designed quickly, but with a long field-upgradeable lifetime in mind. These are the types of applications that this demo speaks to, with Micrium OS, Giant Gecko 11, and Digi’s XBee3.

        Micrium OS is running on the MCU and helps modularize the application functions. It’s helping the MCU maintain communication with the cellular module, monitor the temp sensor, drive the TFT display, and update control settings when local push buttons are pressed. By using Micrium, these various pieces can easily be divided and coded in parallel without having to worry about any messy integration at the end. In fact, this is exactly what the Embedded World demo team did – three different development teams in three different cities built the demo, and Micrium was the underlying glue that made it seamlessly come together.

        Another challenge being addressed here is the connectivity piece. As devices are now adding wireless connectivity, there are lots of hurdles to clear: RF design in some cases, FCC certifications, understanding wireless networking, security, and more. Not only does Silicon Labs offer homegrown, low power SoCs and modules, but now Digi helps add simple cellular connectivity. The Digi XBee3 is a plug-and-play NB-IoT module that has built-in security and is pin-compatible with 3G and LTE-M modules. It’s programmable via MicroPython and comes pre-certified so developers can focus more on the application itself.

        This brings us to the developer’s main focus, the application. The Giant Gecko 11 is a new 32-bit, energy-friendly microcontroller from Silicon Labs, and our most capable yet. It helps simplify complex, cloud-connected applications with its large on-chip memory (2 MB flash/512 kB RAM), many flexible sensor interfaces, software and pin compatibility with other EFM32 MCUs, and unique low-power capability to help prolong battery life. For example, not only does Giant Gecko 11 allow for autonomous analog sensing in “Stop Mode” (1.6 uA), but it also has an Octal SPI interface for external data logging, which could be used to reduce cellular transmission duty cycling.

        There is one more unique offering in this demo. Considering that cellular connectivity might not be the solution for all IoT applications, the SW compatibility of Giant Gecko 11 and all EFM32s with Silicon Labs Wireless Geckos makes it easy to migrate to another wireless SoC or module, if needed. For example, some use cases and markets may use NB-IoT (such as this demo), while others might need their own proprietary sub-GHz solution (Flex Gecko).

        For more information about what we’re doing at Embedded World, click here:

      • IoT Hero Play Impossible Puts a New Spin on Playtime

        Lance Looper | 02/47/2018 | 09:26 AM

        Play Impossible has reinvented the ball by connecting it to phones and tablets. They’ve managed to do this while maintaining the look and feel like a ball found on any gymnasium floor. Launched in October of last year, Play Impossible won first place at the Last Gadget Standing competition at CES in December. With rave reviews from USA Today, CNN, and Mashable, Play Impossible’s Gameball is capturing the hands and minds of kids as it provides another way to play ball with the modern insight of today’s connected devices. Silicon Labs had the opportunity to sit down with cofounder and CTO Kevin Langdon to hear how the company got its start and what he sees for the future.


        How did Play Impossible come about?

        All of the founders of the company are dads. And as parents, we have all struggled with the amount of time our kids spend on devices. This particular problem was the impetus for the company - how do we get our kids up off the couch and engaged in what we call active play? Active play is physical and involves movement, but it’s also social and creative in nature. These are important things that many kids today aren’t getting enough of, and there are plenty of studies saying this is only getting worse. Getting kids to move and play is what Play Impossible is all about.


        The quality of Gameball is amazing - it’s a real ball.

        Yes. If you couldn’t see the charging part, most people would not know there are electronics inside of the ball. The quality of the ball was important to us, but that aspect of the product definitely was not in our wheelhouse, and we didn’t want to reinvent the process. So we partnered with Baden Sports, which specializes in sports equipment, to build the ball.


        What were some of the original design requirements when you set out to create the ball?

        We really wanted to create something with a reasonable price point, especially when it’s sitting on a shelf next to $5 balls in a retail setting. The connection range of the device was critical as well. We needed a Bluetooth connection that would stay connected as far as you could throw the ball. Silicon Labs played a big role in helping us do this. Power was another issue – creating a solution that didn’t get in the way in terms of charging.


        What was Silicon Labs’ value proposition in the beginning?

        I first started looking at Blue Gecko when I was working on another product for SkyGolf. And then with Gameball, we looked at a lot of modules and realized the range and low-power functions were two pieces that we knew Silicon Labs could help with.


        Were there any unforeseen challenges that you came across, such as weight, size, etc.

        The hardest part for us was getting the durability right with all of the electronics inside. We also came up with a unique solution for the power. There is no battery in the ball; it runs entirely on supercapacitors. We needed to do that both for cost reasons and to maintain the durability. I’m pretty happy with the solution we came up with - it’s a real jaw dropper when people see our ball charge up in 20 seconds.


        What was the Last Gadget Standing competition at CES like?

        There were hundreds of applicants and they narrowed it down to 10 gadgets on stage. I had no expectations of being selected, but when we were, we were honored. One of the gadgets was a Star Wars VR gadget, and it was two months after Star Wars had hit movie theaters. But it went really well and was a lot of fun. The host, David Pogue, was tough and asked a bunch of questions, but he loved the product.


        What types of pressures are you under to be innovative – is it developing new games, cost of goods, talent?

        It’s definitely creating new games. It’s a combination of making the ball new again. Anyone who has a kid knows kids typically like a new toy for a few days, but by the fourth day, the toy tends to be thrown into the closet. We want to make sure our product is played with long beyond those four days. The new games we create make the ball new again and give the kid a reason to get the Gameball back out. We are driven to create hit games that everyone is talking about.


        Is all of the production for Gameball done in house?

        When we first started, we hired an experienced gaming designer to build the game, as it’s definitely not a traditional game. We had to do a lot of heavy prototyping and understand the software and hardware capabilities. We had to figure out what the product would be capable of doing socially and with Bluetooth and power. We definitely pushed the limits in terms of what we could do with those functionalities. For example, with a lot of IoT products, real time doesn’t matter. Of course it’s always important to be quick, but real time isn’t critical. With us, if you look at other playables on the market with Bluetooth, I don’t think there are any products as fast as Gameball. The game requires feedback from your fingers on the ball as quickly as possible to get the gestures from the beginning with the ball.


        Where do you see the future of IoT going? And where do you see it expanding for the everyday person?

        Right now, expectations are low among the average consumer of what IoT is all about. When our product is sitting on a shelf at a retail location, no matter how much we put on that box, there is little a consumer can understand about the product until they actually play with it. It’s going to take years for consumers to change and expect connectivity in everything. The nice thing is it’ll be much easier at that point for businesses such as ours. But today, it’s a critical issue for us in terms of marketing and sales. We see ourselves as a software platform that can interact with many different devices. Gameball is just the first of many devices and accessories that will change how we play in the future.

      • Selecting the Best Mesh Networking Standard

        Lance Looper | 02/43/2018 | 10:23 AM

        The benefits of mesh technology continue to gain traction among IoT developers as end-users experience sizeable application performance gains from IoT devices tapping this type of wireless interconnection network.

        In the new whitepaper, “Selecting the Appropriate Wireless Mesh Network Technology,” we give IoT developers much-needed advice into considerations required for selecting wireless mesh networks for IoT applications, such as lighting systems, retail beacon systems, or building or home automation.  

        Mesh networks use connected devices as nodes to extend connectivity, shortening the distance each hop must cover and allowing device-to-device communication, often without the need for a cloud gateway. For instance, the connectivity range of a lighting system is extended every time a new light is added to the system, enabling any light switch action to stay within the mesh network instead of being transmitted over a cloud gateway. One of the main benefits of mesh networks is their ability to reduce latency and speed up device application performance.

        The new whitepaper hits briefly on some of the applications benefiting from mesh networks, yet focuses mainly on explaining the nuances of integrating IoT devices into wireless mesh networks.

        Interoperability with already-deployed wireless protocols, such as Zigbee and Bluetooth, is discussed at length in the paper, along with best practices for using the Thread mesh protocol. Different service providers have requirements for a specific protocol and/or multiple protocols; therefore, designers must be aware of these details when selecting the appropriate connectivity solution. Many existing devices use Zigbee, and for new devices based on a technology such as Bluetooth mesh, an interoperability strategy (either through the end device or a gateway supporting multiple protocols) needs to be considered. Several other important interoperability insights are discussed in the paper, as well as the importance of addressing the entire connectivity ecosystem and, where needed, successfully adapting IP at the gateway.

        Another valuable theme conveyed is the use of wireless standards and how to apply the protocols depending on the type of device and application. Of the three standards discussed in the paper, the Thread mesh standard is the only protocol based on IPv6, providing several unique benefits, such as end-to-end routing and addressability on the same or across networks. Development tips are also discussed, such as the fact that Bluetooth low energy can be combined with Zigbee to simplify device setup via Bluetooth commissioning, using smartphones for Zigbee devices, or to provide the Bluetooth connectivity needed to support Apple HomeKit.

        Silicon Labs has a multiprotocol software and hardware solution designed to solve many of the issues detailed in this article, helping designers build a single product that supports multiple wireless connectivity protocols. This can be a device capable of connecting to multiple protocols simultaneously in the field, or a device that can be configured in the field or factory to one of a number of different wireless protocols.

        As is often the case, one protocol may not be able to meet the needs of all products and markets, though this paper provides a fair amount of insight into which one to consider depending on the type of application the designer is tackling.

        Download the whitepaper.