The most successful IoT products make Bluetooth and Wi-Fi connectivity easy for end customers to set up and use. For companies with limited in-house wireless design expertise, however, the shortage of time and resources can lead to slipped delivery schedules and multiple product redesigns.
There are four typical stages of the wireless development process: firmware development, hardware design, mobile application development, and cloud connectivity.
The development process can take up to a year to complete. Let’s discuss and highlight the unique challenges presented in each development stage.
In the firmware development stage, developers using unprogrammed modules must become experts in Bluetooth or Wi-Fi protocols and in vendor-specific software stacks. In a traditional Bluetooth or Wi-Fi design, developers must create an embedded host plus network co-processor architecture, with a communication link that operates at a low level to control the network co-processor. Half of the development work is writing firmware code; the other half is spent on testing.
Choosing the right hardware is critical to wireless functionality and the integrity of the system design. Using unprogrammed modules to add wireless connectivity poses a variety of problems, including potential delays, antenna design issues, and RF certification hurdles. FCC certification alone can cost thousands of dollars and take months of testing and validation, and good RF performance is a critical design challenge in its own right.
Mobile Applications Development
The mobile app development stage is often the most challenging for companies because many don't have in-house developers with mobile application experience. For this stage, developers must become experts in both Android and iOS development, which means more APIs to understand. Companies often outsource to vendors who build the mobile infrastructure, perform testing, and so on, which can be very time consuming and costly given the difficulty of finding subject matter experts in both iOS and Android development.
This stage of product development is a critical one, and it can be challenging and prone to errors and launch delays that affect the success of IoT applications. Establishing and maintaining reliable cloud connectivity and properly collecting data are huge concerns in IoT applications today. It's almost impossible for companies, especially small ones, to develop a cloud-connected framework and infrastructure from scratch. Developers also often struggle with unreliable links and connectivity, and they might be restricted by the MCU. Reliable connectivity links are a critical piece of product longevity and customer satisfaction. Firmware updates are also an important part of product maintenance and are usually outsourced. Using an integrated solution that already provides the infrastructure for adding cloud connectivity can save developers months of framework development.
Benefits of Pre-Programmed Wireless Modules
IoT developers today want robust functionality in the smallest footprint possible, and they want solutions that support easy Wi-Fi and Bluetooth connectivity. Leveraging integrated modules that already include pre-programmed firmware, pre-certified RF hardware, an easy mobile app framework, and cloud connectivity streamlines the development process and takes the guesswork out of successful connectivity.
Key Points to Consider
Silicon Labs Wireless Xpress products, powered by Gecko OS and application firmware running on pre-certified Silicon Labs modules, combine these optimizations across the product development cycle to provide streamlined embedded-to-phone and embedded-to-cloud connectivity.
Value of Gecko OS
Gecko OS is a highly optimized IoT operating system designed specifically to power hardware platforms with secure Wi-Fi networking capability, and it is the best choice for resource-constrained devices. Hardware running Gecko OS provides products with a powerful and secure wireless connection to a mobile device or the cloud. The Gecko OS API is a huge benefit to IoT developers because it provides a common software foundation across multiple product lines.
Gecko OS products maintain much of the wireless interface without external MCU intervention, only exposing critical variables and commands for external MCU control.
To learn more about how Wireless Xpress can help IoT developers deliver ease-of-use to end customers, read the full whitepaper:
There is a huge demand today for adding Wi-Fi connectivity to IoT applications because of its many advantages over other wireless protocols (Zigbee, Bluetooth, etc.), such as longer range, native IP connectivity, and high bandwidth. For millions of IoT applications, including industrial machines and sensors, Wi-Fi is often the best choice for connectivity because of its robust infrastructure and global reach: Wi-Fi exists almost everywhere in the world today.
Challenges for developers: The biggest challenge for developers has been the high power consumption of Wi-Fi in IoT systems. Wi-Fi protocols were designed primarily to optimize bandwidth, range, and throughput, not power consumption, which makes standard Wi-Fi a poor choice for power-constrained applications that rely on batteries. Of the various drawbacks of standard Wi-Fi protocols, high power consumption is the most impactful (range limitations and busy networks are drawbacks as well). Until now, developers have avoided adding Wi-Fi to their IoT applications because there hasn't been a viable, low-power option for adding Wi-Fi connectivity to battery-operated devices.
These are the four key challenges when adding Wi-Fi connectivity:
Power modes: Power consumption in Wi-Fi varies dramatically across the various modes of operation, and it's important to understand the different modes and optimize them to reduce overall power consumption. One strategy is to stay in the lowest power mode as much as possible and transmit/receive data quickly when needed.
RF performance: Unlike many wireless protocols, Wi-Fi power consumption is significantly impacted by RF performance and network conditions. This is a significant problem with the increasingly crowded Wi-Fi networks today. A busy network leads to many retries/retransmissions which consumes a high level of power. Developers must focus on reducing retransmissions and controlling link budgets to be successful.
Wi-Fi devices typically consume significant power in both Transmit (Tx) and Receive (Rx) modes. There are several ways to reduce power consumption and optimize Tx and Rx modes. First, choose devices with high selectivity and out-of-band rejection. Also choose devices with high Rx sensitivity and, if possible, operate on uncrowded channels. This might mean using channels not occupied by chatty connections such as video streaming.
Applications: Power consumption is highly dependent on the application and use case. IoT applications typically fall into one of three categories:
Always on/connected - These devices are always on, which allows users to access the device remotely at any time via the cloud or a mobile application. A Wi-Fi video camera is a good example of this use case. Latency is a critical factor in these applications, and power consumption is dominated by the transmit mode (the highest power draw), since the device is constantly transmitting data and it would be detrimental for it to be inactive or inaccessible.
Periodically connected - These devices are connected to a remote server or cloud platform and only need to transmit occasionally. A good example is a temperature or humidity sensor that sends data every few minutes and can tolerate the small amount of time it takes to become active. Latency is not a major concern, and power consumption is dominated by receive and sleep currents. The device stays at intermediate power levels, never completely awake or asleep, so it wakes up faster.
Event-driven - An online shopping order button is a good example of event-driven Wi-Fi connectivity. It's almost always inactive/asleep, meaning there is no data transmission. An event, such as a user pressing the order button, triggers wakeup, which happens infrequently and takes longer from this mode. This mode is dominated by the lowest sleep current and is best when an IoT application needs to use the least amount of power possible.
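To make this concrete, here is a simple duty-cycle estimate for the periodically connected case. The numbers are illustrative ones of my own, not measurements from any particular device:

```latex
I_{avg} = \frac{I_{tx}\,t_{tx} + I_{sleep}\,t_{sleep}}{T}
        = \frac{(200\,\text{mA})(0.05\,\text{s}) + (0.05\,\text{mA})(59.95\,\text{s})}{60\,\text{s}}
        \approx 0.22\,\text{mA}
```

At a 0.22 mA average, a 2,000 mAh battery lasts roughly a year, while the same radio transmitting continuously at 200 mA would drain it in about 10 hours, which is why staying in the lowest power mode as long as possible dominates battery life.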
Design issues - Lowering Wi-Fi power consumption is also a design system issue and is a critical challenge for developers today. Power management and extended battery life are major factors when developing IoT applications. Although standard Wi-Fi protocols weren’t designed initially for low power operations, there are many techniques to help significantly reduce power consumption. These techniques include optimizing Rx and Tx modes, optimizing power-saving modes (sleep modes, WMM, DTIM, shutdown/standby), choosing the right hardware, using built-in specifications, optimizing RF performance, and system level optimization. Developers must understand all the contributing factors to overall energy consumption in IoT devices.
They must also understand both system-level factors and deep application factors in order to achieve low energy consumption in their applications. Finding the right mix of power-saving Wi-Fi modes and selecting the right hardware are the keys to dramatically reducing power consumption. Leveraging hardware and software designed specifically for IoT devices and low power consumption can reduce long term costs, overcome development challenges, extend battery life, and potentially enhance the life of products and customer satisfaction.
We solve these power management issues for IoT developers by providing drop-in Wi-Fi solutions, including pre-programmed modules (WF200 and WGM160) that can cut power consumption in half. These solutions are designed from the outset with low-power IoT applications in mind and work in a wide range of applications, from home automation to commercial, retail, security, and consumer healthcare products. Pre-programmed modules enable quick prototyping, which helps developers get products to market faster.
To read the full whitepaper on this topic, click here:
Recently, we had the opportunity to speak with Alex Rogers, Professor of Computer Science at Oxford University. One of his recent projects exploring technology and zoology resulted in the creation of a small, low-power acoustic device built to record the songs of a potentially extinct cicada. The project began a little more than two years ago and has since morphed into a start-up called Open Acoustic Devices spinning out of the university.
The Open Acoustic device, known as the AudioMoth, is already in the hands of many ecologists and conservation organizations that are using it to track and study hard-to-detect wildlife and potential threats to wildlife, such as gunshots from illegal poachers or chainsaws in protected forests. Previously, if ecologists or wildlife enthusiasts needed a highly sensitive audio recorder for field research, they had to pay nearly $1,000 per recorder. Or they could opt for an open-source recorder built from a low-cost single-board computer, which required large battery packs -- sometimes even car batteries! The AudioMoth, on the other hand, is slightly larger than a smartphone (batteries included) and costs roughly $50.
Check out our conversation below about how a small university project scaled itself to commercialize a one-of-a-kind audio recorder for wildlife.
Tell me a little bit about yourself and how Open Acoustic Devices came about.
As a professor of computer science, my interest has been in deploying machine learning algorithms on devices constrained by computing power and battery power.
My interest in conservation technology stemmed from an event at the Zoology Dept. at Oxford, which was exploring new technology for biodiversity monitoring. The department was interested in using low-cost phones to change how people conduct environmental monitoring. With PhD student Davide Zilli, we set out to use smartphones to listen for a rare cicada insect in the U.K., which we still don’t know is extinct, hidden or just rare. The cicada sings at a very high frequency, at about 15 kilohertz, which most adults can’t hear, but smartphones can.
We didn’t find the cicada with the smartphones, but we started thinking about how we could design a small acoustic device to automatically detect the song of this insect. Two new PhD students, Andy Hill and Peter Prince, joined the project, and we ended up building a prototype device, and then made it available to others about a year ago.
We soon discovered a huge appetite for low-cost, open-source acoustic recorders. We are now working with ecologists who use our device to record bats, birds, insects and other wildlife. Until now, professional ecologists typically had been surveying wildlife with commercial equipment.
The cost advantage of AudioMoth completely changes the science people can do. It means ecologists can do research that would have been cost-prohibitive before. Previously, if an ecologist had a small budget, they could maybe only deploy three or four recorders. Now they can potentially deploy 100 recorders, meaning different types of wildlife surveys can be conducted.
Who is your buying audience?
It’s a big mix – split equally between university researchers (ecologists) and conservation organizations. We’ve done some large bat survey deployments with the Zoological Society of London and the Bat Conservation Trust. But then there’s a whole pool of individuals and enthusiasts recording birds and bats on their own.
Can you tell me about the performance of the device?
From the beginning, we were looking to create a minimal device we could run smart algorithms on to only record when hearing a sound of interest. In the first instance, this was the New Forest cicada.
We combined an inexpensive MEMS microphone, similar to what’s inside a smartphone, with an SD card and MCU to create a programmable and highly mobile device. Because of the small size, the microphones are extremely sensitive to high frequencies -- perfect for people interested in bats, where they are recording at 100 kilohertz.
We have a lot of deployments in remote jungles and forests with extremely limited Internet access, but we are still planning to add low-power wireless connectivity to new versions of the device for alerting, streaming and research purposes.
Did you have any design challenges?
The key challenge for a battery-powered device is power -- we knew we had to focus on low power from the beginning. Our users worry most about how much data they will end up recording. We used Silicon Labs’ Wonder Gecko microcontrollers because of their low power capabilities, which results in smaller batteries and longer life in the field.
The non-commercial, open-source recorder alternative is typically based on Raspberry Pi, which uses a much more capable processor running a Linux operating system, and as a result requires a much larger battery pack. In many wildlife applications, the devices have to be carried to the deployment sites in backpacks, making the size and weight of the batteries critical.
Can you give me some idea of the power gains experienced by using the Gecko MCU?
To give an example, right now we have a deployment in Belize that involves listening for gunshots to detect illegal hunting in tropical forests. With a small battery pack (a 6V lantern battery), we can deploy a sensor that lasts for 12 months and listens continuously for 12 hours a day, only making recordings if it thinks it has detected a gunshot. With the Gecko MCU, we can do nearly all the listening while the processor sleeps; it then wakes up to run the detection algorithms across a 4-second sound buffer.
How did the Gecko get on your radar?
We originally used an NXP processor and the Arm Mbed development platform in our prototype. We really liked the development platform, but the processor used too much power. Silicon Labs ended up being a better option because of the integrated tool chain, allowing us to directly measure and optimize energy consumption. We can also distribute the code, knowing that the development tools are free and are available on all operating systems, which is a critical benefit.
As a university project, how did you manufacture these devices?
To keep costs low, we started exploring alternative manufacturing routes. With Alasdair Davies of the Arribada Initiative (an organization promoting open, affordable conservation technology), we started running group purchasing campaigns through GroupGets, a platform that facilitates low-cost group purchasing. After testing the market with some relatively small orders, GroupGets enabled us to run off a batch of 1,500 devices from a PCB assembler, providing real economy of scale.
This model allows designers to offer various types of devices yet manufacture at low risk. We’ve manufactured close to 4,000 devices so far and have a live campaign running at the moment that will likely result in another 1,500 orders. As a small university project, there is no way we would have been able to do this without this model.
We also used CircuitHub, which enabled us to post our hardware design and bill of materials on its website. The concept essentially hacks low volume manufacturing. Suddenly, people can share and distribute hardware in the same way people have been able to share and distribute software.
Where do you see IoT going in the next 5-8 years?
Computation on devices is always more energy efficient than storing or transmitting data, meaning devices will continue to become smarter and handle more processing on their own. Many of the deep learning algorithms that researchers are exploring at the moment are still too complex to run on very low-power small devices, but there’s already a huge amount of interest in figuring out how to push these algorithms down to small, low-power devices.
Tile is the leading maker of Bluetooth trackers that help you track your belongings such as keys, backpacks, teddy bears or, more generally, any object you want to be able to locate easily when needed. There are two basic building blocks: the Tile hardware device that you attach to the item you want to track, and the Tile mobile application that you use to locate the Tile.
With the introduction of Find with Tile, Tile technology can now be embedded into any product quickly and easily.
For a generic introduction to the Tile, see the “how-it-works” section on the Tile website.
Find with Tile hardware (i.e., devices with Tile technology embedded) is typically physically small and needs to consume minimal energy so that it can be battery powered. The required range of operation is on the order of tens of meters, making it a perfect match for Bluetooth Low Energy, which was designed for exactly this kind of use case.
The Tile embedded application (i.e., the code that runs on the Tile itself) can run on Silicon Labs EFR32-based SoCs and modules, and it can be used with any Blue Gecko part. For easy demonstration, this article includes a project that runs on the Thunderboard Sense 2.
Thunderboard Sense 2 is small and low cost, and it can be powered from a CR2032 coin cell battery, so it's a good platform for prototyping Tile functionality. The Tile embedded code is available as a C library that can also be ported to other Silicon Labs Bluetooth development kits.
At the end of this article you will find pre-built binaries that you can program to your Thunderboard Sense 2. Using the pre-built demo is the easiest way to get started. The entire demo project is also available if you want a closer look at how it's done.
The pre-built binaries for the Tile demo on Thunderboard Sense 2 are provided at the end of this article as attachments. The demo consists of two files:
· Gecko bootloader for Thunderboard Sense 2 (gecko_bootloader_TBS2-combined.s37)
· The Tile demo application (Tile_on_TBS2.hex)
You can flash the binaries to your TBS2 using Simplicity Commander which is a utility that comes with Simplicity Studio.
Follow these steps to program the demo:
1- Launch Simplicity Studio and connect your TBS2 to your PC with the USB cable
2- Make sure the board is detected by Simplicity Studio as shown below:
3- Launch Simplicity Commander from the Tools menu:
Commander launches in a separate window. The following screenshot shows the required steps to flash the binaries to the target.
Press Connect button to connect to the embedded J-Link debugger on the TBS2
Press Connect (next to the Target label) to connect to the EFR32MG12 device on the TBS2
Open the Flash tab from the left side menu
Use the Browse… button to locate the binary files on your computer
Press Flash button to program the file to target
First program the bootloader file (gecko_bootloader_TBS2-combined.s37). Then repeat the same procedure for the application image (Tile_on_TBS2.hex).
After programming the two binaries, you can either keep the Tile running powered from the USB cable, or unplug the cable and insert a CR2032 coin cell to power the device.
Once you have programmed the Thunderboard Sense 2 with the provided bootloader and application image, the device starts to advertise using the name “Tile”. You can observe it with any Bluetooth LE test utility, but to take advantage of the Tile features you need the Tile mobile app, which is available for Android and iOS. Install the app on your mobile device from the Google Play Store or the Apple App Store.
The first step in taking a new Tile into use is registering it with the mobile app. The procedure is the same as with “real” Tiles, as explained on the Tile website. Note, however, that with this demo you do not need to press any button on the Thunderboard Sense 2 to activate it. The device starts to advertise as soon as the board is powered up (either through the USB cable or a CR2032 coin cell).
After registering the Tile with the mobile app, the UI should look something like this:
You can give the tile any name you want. In the above screenshot, it is named “Tbsense 2”.
Whenever the Tile is in range of the phone, the app will make a connection with it. The green Find button in the UI indicates that the connection is established. You can press the Find button to make the Tile “ring”. The Thunderboard Sense 2 does not include a speaker, so the ring feature is indicated by flashing the RGB LEDs on each corner of the PCB. The flashing continues until you press “Done” in the mobile app.
The mobile app keeps a connection open with the Tile as long as it is within range. Maintaining a Bluetooth connection is very power-efficient because the protocol has been optimized for battery-operated devices. Estimated average current consumption for the different modes is summarized below. Note that the current consumption has not been optimized to the last detail; these figures give only a rough estimate.
Note that the bright RGB LEDs draw a lot of current, so the current consumption while the device is ringing is high. Do not leave the ring on for long durations, to avoid draining the coin cell battery.
The demo project is also provided as an archived Simplicity Studio project (*.sls file). You can import it in Simplicity Studio via the Project -> Import -> MCU Project… menu.
Note: the project is only for the application; it does not contain the bootloader. You can use the pre-built bootloader image, or alternatively create a Gecko bootloader project for Thunderboard Sense 2 in Simplicity Studio and build the bootloader yourself.
The archived project is created using Bluetooth SDK 2.10.1 and GCC toolchain 7.2.1. You need to have these installed in Simplicity Studio to be able to use the project.
Here are some tips on how to navigate the project contents:
The application main loop includes handlers for basic events such as connection opened/closed. You can add any custom functionality to the application as you wish. The Tile application is integrated into the project so that at the end of the event loop (your own code), the same Bluetooth stack event is forwarded to the Tile event handler by calling tile_on_ble_evt(), as sketched below.
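A minimal sketch of that forwarding, using the Bluetooth SDK 2.x "gecko" API. The header name tile_lib.h and the exact signature of tile_on_ble_evt() are assumptions; match them to the shipped Tile library:

```c
#include "native_gecko.h"
#include "tile_lib.h"   /* assumed header exposing tile_on_ble_evt() */

void app_main_loop(void)
{
  while (1) {
    /* Block until the Bluetooth stack produces an event. */
    struct gecko_cmd_packet *evt = gecko_wait_event();

    switch (BGLIB_MSG_ID(evt->header)) {
      case gecko_evt_le_connection_opened_id:
        /* your application code, e.g. store the connection handle */
        break;
      case gecko_evt_le_connection_closed_id:
        /* your application code, e.g. restart advertising */
        break;
      default:
        break;
    }

    /* Forward the same stack event to the Tile event handler. */
    tile_on_ble_evt(evt);
  }
}
```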
The Tile app requires a valid Tile ID and key pair. These can be obtained from the Tile website by filling out a form here.
The credentials you receive from Tile must be inserted into the file Tile_Common\tile_storage.c. Replace the dummy values in the variables interim_tile_id and interim_tile_key with the actual credentials you have received from Tile, Inc., as sketched below. Registration of the Tile will not work unless you replace the dummy values with valid credentials.
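The edit looks roughly like the following. The array sizes shown here are assumptions; match whatever lengths the shipped tile_storage.c uses:

```c
#include <stdint.h>

/* Tile_Common/tile_storage.c -- replace the dummy values with your credentials. */
uint8_t interim_tile_id[8] = {
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00   /* dummy: paste the Tile ID here */
};
uint8_t interim_tile_key[16] = {
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00   /* dummy: paste the Tile key here */
};
```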
The example uses a custom advertising packet format, and the advertising payload is set in the function setup_custom_advertising() (in application.c). You can modify the payload if you wish, as long as the Tile service UUID is included. The advertised name is also set in setup_custom_advertising(), and the default name is “Tile”.
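A minimal sketch of what setup_custom_advertising() might contain, using the SDK 2.x API. The 16-bit Tile service UUID is shown as 0xFEED here, which is an assumption you should verify against the Tile documentation:

```c
#include "native_gecko.h"

static void setup_custom_advertising(void)
{
  static const uint8_t adv_data[] = {
    0x02, 0x01, 0x06,               /* Flags: LE general discoverable, BR/EDR not supported */
    0x03, 0x03, 0xED, 0xFE,         /* Complete 16-bit service UUID list: 0xFEED (assumed Tile UUID) */
    0x05, 0x09, 'T', 'i', 'l', 'e'  /* Complete local name: "Tile" */
  };

  /* First argument 0 = advertising data (1 would set the scan response). */
  gecko_cmd_le_gap_set_adv_data(0, sizeof(adv_data), adv_data);
  gecko_cmd_le_gap_set_mode(le_gap_general_discoverable, le_gap_undirected_connectable);
}
```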
In addition to passing BLE stack events to the Tile handler, the demo application needs hooks to start and stop the ring function that is triggered when you press the Find button in the mobile app. The callback functions to start and stop ringing are found in the file Tile_Common/tile_service.c; play_ring_song() starts the ring, and its stop counterpart is defined alongside it in the same file.
In this demo implementation, the above callbacks call the function gecko_external_signal(), which triggers an event in the Bluetooth stack. The event is then handled in the application main loop, as sketched below. The mechanism is similar to what is typically used to detect button presses or other interrupts in a Bluetooth application. More details can be found in UG136, Chapter 6, Interrupts.
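Put together, the ring hooks might look like this sketch. The signal bit values are my own choices for illustration, not fixed by the SDK:

```c
#include <stdint.h>
#include "native_gecko.h"

#define EXT_SIGNAL_RING_START  (1 << 0)   /* arbitrary bit assignments for this demo */
#define EXT_SIGNAL_RING_STOP   (1 << 1)

/* Callback invoked by the Tile library when the user presses Find in the app. */
void play_ring_song(void)
{
  /* gecko_external_signal() is safe to call from callbacks and interrupt context. */
  gecko_external_signal(EXT_SIGNAL_RING_START);
}

/* In the main event loop, the signal surfaces as a stack event: */
void handle_external_signal(struct gecko_cmd_packet *evt)
{
  if (BGLIB_MSG_ID(evt->header) == gecko_evt_system_external_signal_id) {
    uint32_t signals = evt->data.evt_system_external_signal.extsignals;
    if (signals & EXT_SIGNAL_RING_START) {
      /* start flashing the RGB LEDs */
    }
    if (signals & EXT_SIGNAL_RING_STOP) {
      /* stop flashing */
    }
  }
}
```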
The objective of this blog is to show you the steps necessary to use an existing Micrium OS USBD example and add a different class and demo using the EFM32GG11.
Since the Gecko SDK currently ships with a ‘micriumos_usbdhidmouse’ project for the SLSTK3701A_EFM32GG11 board, we can make a copy of it and rename it ‘micriumos_usbdvendor’. The convenience of making a copy of the project is to modify it according to our needs without breaking the Gecko SDK default projects. Start by locating the ‘micriumos_usbdhidmouse’ folder in your Simplicity Studio installation. The project location is at ‘C:\SiliconLabs\SimplicityStudio\v4\developer\sdks\gecko_sdk_suite\v2.5\app\mcu_example\SLSTK3701A_EFM32GG11’
Once you have found the folder, make a copy of it and rename it ‘micriumos_usbdvendor’. Make sure to keep the new folder at the same path as the original. Locate the ‘SLSTK3701A_micriumos_usbdhidmouse.slsproj’ file inside your new folder, ‘micriumos_usbdvendor\SimplicityStudio’, and rename it ‘SLSTK3701A_micriumos_usbdvendor.slsproj’.
We will be adding our new workspace, so launch Simplicity Studio and connect the SLSTK3701A_EFM32GG11 board to the PC.
Add the workspace by right-clicking anywhere inside the Project Explorer box and selecting Import > MCU Project.
Use the Browse button to locate the ‘SLSTK3701A_micriumos_usbdvendor.slsproj’ and click Next>
File Location: `C:\SiliconLabs\SimplicityStudio\v4\developer\sdks\gecko_sdk_suite\v2.5\app\mcu_example\SLSTK3701A_EFM32GG11\micriumos_usbdvendor\SimplicityStudio`
Since you already have your board connected, it should all be auto-detected. Leave everything at its defaults, making sure an SDK is selected, then click Next>
You can either change the name of the project or keep the default, then click Finish.
We will now need to modify the project configuration files to include the Micrium OS USBD Vendor class as part of our build. Start by expanding the Includes section in the Project Explorer panel, then expand the configuration folder as shown in the image below. After that, double-click on rtos_description.h to open it in the editor.
As soon as you try to edit rtos_description.h, you will be presented with a warning indicating that you are editing an SDK file. Click on Edit in SDK.
In rtos_description.h, remove the #define that enables the HID class and add the #define that enables the VENDOR class, telling Micrium OS which USB device class you want to build. Both changes are sketched below.
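This sketch assumes Micrium OS's usual RTOS_MODULE_USB_DEV_<CLASS>_AVAIL naming convention; verify the exact macro names in your copy of rtos_description.h:

```c
/* rtos_description.h */

/* Remove (or comment out) the HID class availability: */
/* #define  RTOS_MODULE_USB_DEV_HID_AVAIL */

/* Add the Vendor class availability: */
#define  RTOS_MODULE_USB_DEV_VENDOR_AVAIL
```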
Tell Micrium OS you want to use the USBD VENDOR demo by modifying ex_description.h. Expand the Includes section in the Project Explorer panel, then expand the project folder as shown in the image below. After that, double-click on the ex_description.h to open it in the editor.
As soon as you try to edit ex_description.h, you will be presented with a warning indicating that you are editing an SDK file. Click on Edit in SDK.
In ex_description.h, remove the #define that selects the HID mouse example and add the #define that selects the VENDOR example, telling Micrium OS that you want to run the VENDOR demo. Both changes are sketched below.
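The macro names in this sketch are hypothetical placeholders following the file's example-enable pattern; check ex_description.h in your SDK for the actual defines:

```c
/* ex_description.h -- macro names here are illustrative placeholders */

/* Remove (or comment out) the HID mouse example selection: */
/* #define  EX_USBD_HID_MOUSE_EN */

/* Add the Vendor loopback example selection: */
#define  EX_USBD_VENDOR_EN
```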
Expand the src section in the Project Explorer panel and remove the ‘ex_usbd_hid_mouse.c’ linked file.
Select Import > MCU Project by right-clicking on the src section as shown in the image below.
Choose ‘More Import Options…’ and select File System in the next window that pops up, as shown in the images below.
Add the ‘ex_usbd_vendor_loopback.c’ example as shown in the image below, and click Finish.
File location: 'C:\SiliconLabs\SimplicityStudio\v4\developer\sdks\gecko_sdk_suite\v2.5\app\micrium_os_example\usb\device\all'
Expand the usb > source > device > class section in the Project Explorer panel and right-click on class. Select Import > MCU Project and add the USBD VENDOR class file as shown in the images below.
Use the Browse button to locate the VENDOR class files to be added as shown below.
File location: 'C:\SiliconLabs\SimplicityStudio\v4\developer\sdks\gecko_sdk_suite\v2.5\platform\micrium_os\usb\source\device\class'
You can now build your application and flash it on the board. Once the application starts running, you should see LED0 on the board blinking which means all the initialization was done correctly; therefore, we can now test the USBD VENDOR demo.
Use a Micro-USB B cable to connect the PC to the EFM32GG11 board. As soon as you connect it, Windows will enumerate the device and display it in 'Universal Serial Bus Devices' as shown in the image below.
Execute the Windows USB application provided in the attachment (Located at 'App\Host\OS\Windows\Vendor\Visual Studio 2010\exe\x86') and provide the number of transfers.
Silicon Labs has an unusually broad perspective on the smart home market because we provide both chipset and wireless solutions to a vast array of global smart home customers. What makes us especially unique is that we support nearly all of the major smart home connectivity protocols, and we even offer solutions to help customers create their own wireless protocols. Wireless connectivity is complicated, but it’s getting remarkably easier for both designers and users, and as it does, the smart home is getting much smarter.
The smart home market as we know it started in the early 2000s, and for many years the question has been: when will mass adoption happen? No one knows for sure, yet we are confident adoption rates will increase substantially this coming year. According to Statista, there were already nearly 35 million smart homes in the U.S. in 2018, with growth expected toward 60 million homes by 2023. People have been using smart home thermostats, lighting, and security products for quite a few years now, but recently introduced smart speakers have been an explosive driver for the smart home. More than 50 percent of smart speaker owners have gone on to buy other smart home products, and Gartner predicts that 75 percent of U.S. households will have smart speakers by 2020.
So what’s coming up in 2019 that will be different for the smart home? Silicon Labs shares some predictions below.
Professionals take a backseat: One of the shortcomings of the smart home thus far has been that people buy the application they want, but once they get the package home, the installation is too complicated and an outside professional is required to install the device. Thanks to new highly interoperable smart home platforms, such as Silicon Labs Z-Wave SmartStart, the installation of products is becoming surprisingly easier. Ring is a good example of a new plug-and-play smart home security product: it just needs to be plugged in, and then the user sees the application on their phone. It’s that easy.
AI and smart home unite: Wireless and mesh connectivity solutions have improved dramatically in range and power consumption in recent years, enabling low-cost sensors to be deployed across the home (and yard). No longer limited by short ranges and power constraints, ubiquitous devices are giving the smart home the ability to react intelligently to changing conditions. The smart home has already seen the first iterations of AI, otherwise known as context-aware intelligence, in consumer products, and more are on the way. A popular example is the smart thermostat that learns family preferences. New smart thermostats will sense how many people are in which rooms of the house and adjust accordingly. They will know what time of day energy prices drop and react for optimal economy.
Insurance industry adoption: More than ten years ago we saw smart home thermostat products disrupt the utility market, and we’re going to see those kinds of dynamics happen again in other markets. Smart home insurance IoT products are something to watch closely this year. Context-aware smart homes are allowing the insurance industry to move its central business paradigm from reactive claim services into proactive loss prevention. A draft in the home can be traced to a roof in need of costly repair. Moisture in the garage can distinguish between a simple worn valve and an expensive leak in the foundation. Water Hero, an IoT product that detects a water leak in the house before it escalates, is the first of many new insurance IoT products that will continue to hit the market in the coming year.
Homes get even smarter: Some of the early smart home consumer products centered around video monitoring, yet a more sophisticated kind of sensing is materializing. New smart home products for aging in place are a great example. Keeping close watch on older and more fragile family members doesn’t mean they need to be watched via obtrusive video cameras. Instead, data can be collected about their daily habits from invisible sensors in appliances, lights, rooms, medicine cabinets, and so on. If the data shows unusual irregularities, family members can be notified.
Costs decrease, longevity increases: The beauty of a maturing technology market is that as the technology advances, the costs come down, and this dynamic will be no different in 2019 for the smart home. Besides decreasing consumer costs, we’ll also see major gains in battery life and low-power operation. A truly smart environment features embedded sensing throughout the entire space, including areas where direct electrical power is either impossible or impractical. Battery-operated devices are a necessary mainstay of the smart home landscape. Due to their need for continual battery replacement, service providers and end users often limit the deployment of these devices, thus limiting the life cycle of the system. The recently released Silicon Labs Z-Wave 700 platform is so efficient that it can allow battery-operated devices to provide ten years of service on a single coin cell battery. We will start seeing the benefits of this battery development in the coming year as applications roll out based on the technology.
We'd love to hear about what you're expecting from the smart home market this year.
Get the latest improvements, bug fixes and security updates for Silicon Labs Bluetooth, Thread, Zigbee and MCU product families in our latest SDK.
If you have questions or need help, contact our technical support team.
In this follow-up post, The Case of the Noisy Source Clock Tree Part 2, I will discuss in more detail exactly how to calculate the total jitter for a noisy source clock tree that includes a jitter attenuator. I will also provide a measurement and spreadsheet example.
In Part 1, I first discussed the low jitter source canonical clock tree, how to calculate the total jitter by Root Sum Square, and reviewed the terms jitter transfer, jitter generation, and additive jitter. I then moved on to the noisy source clock tree, the motivation for adding jitter attenuation, and introduced how to calculate its total jitter.
As I mentioned last time, following the clock signal from the source through the clock tree components to the sink or destination is best viewed as a system that processes phase noise. That is, if we know the phase noise characteristics of each clock tree component, we should be able to estimate the end clock phase noise and its phase jitter over a particular jitter bandwidth.
By best I mean that this approach is more universal and accurate. It can be applied to all types of clock trees, with or without noisy sources and jitter attenuators.
The Basic Idea
The general approach is illustrated below. Every clock tree can be regarded as a cascade of phase noise processing elements, each of which, in the most general sense, can be modeled as the Root Sum Square (RSS) of the Jitter Generation (JGEN) phase noise contribution and the Jitter Transfer Function (JTF) applied to the scaled input clock phase noise.
Scaling is required so that the components contributing to the RSS are all at the same carrier frequency. The details will be made clearer in the example that follows.
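Written out, a sketch of the model in my own notation (not taken verbatim from the figure): the input phase noise is scaled to the output carrier, shaped by the JTF, and then root-sum-squared with the jitter generation:

```latex
L_{scaled}(f) = L_{in}(f) + 20\log_{10}\!\left(\frac{f_{out}}{f_{in}}\right)

L_{out}(f) = 10\log_{10}\!\left(10^{\left[L_{scaled}(f)+JTF(f)\right]/10} + 10^{JGEN(f)/10}\right)
```

In integrated terms, the output phase jitter is approximately sqrt(J_transferred^2 + J_JGEN^2), which is the RSS referred to above.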
The ith element above illustrates the general clock tree component model. All the contributions shown apply in the case of a jitter attenuator which also multiplies or divides the input clock. However, in practice, not all aspects of the general model apply, or are readily available, for every clock tree component:
A Practical Measurement Example
Consider the following simplified block diagram. I used an Arbitrary Waveform Generator or AWG as my noisy 50 MHz input clock source and followed it with an Si5345 evaluation board. The Si5345 does both jitter attenuation and clock multiplication as is the common practical case. I then followed the jitter attenuator (JA) with an Si53301 clock buffer evaluation board. The output clocks for both the jitter attenuator and clock buffer are 156.25 MHz.
No baluns or limiters were used. Just straightforward single-ended connections to and from test equipment and differential connections between the jitter attenuator and clock buffer. The unmeasured output clock polarity was terminated on the clock buffer EVB.
Calculating Phase Jitter
In the work that follows, we often want to calculate RMS phase jitter, i.e. integrated phase noise over a select frequency range, from a dataset of phase noise L(f) (dBc/Hz) versus offset frequency f (Hz). For the purpose of this exercise, we ignore spurs though they can certainly be included.
The general procedure is as follows.
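The worksheets implement the standard conversion: linearize the phase noise, integrate it over the offset-frequency band of interest (12 kHz to 20 MHz here), and convert the integrated noise power A to seconds at the carrier frequency f_c:

```latex
A = \int_{f_1}^{f_2} 10^{L(f)/10}\,df
\qquad
t_{jitter,\,RMS} = \frac{\sqrt{2A}}{2\pi f_c}
```

Since the measured data are log-spaced, the integral is evaluated trapezoid by trapezoid between adjacent offset-frequency points.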
As will be seen, the worksheets that use this technique calculate values very close to what the phase noise instrument reports.
I have attached a spreadsheet, PhaseJitterCalcsClockTreeWithoutSpurs.xlsx, that records the results and compares them to lab measurements in the form of Agilent E5052B screen caps. There are 9 measurement steps, listed below in worksheet order with additional details. The convention on the calculation worksheets is that input data are in yellow cells.
1. 50 MHz AWG Meas Data - Measure the AWG’s 50 MHz phase noise. This is the clock phase noise that will be input to the jitter attenuator. This worksheet imports the CSV file containing the measured phase noise for an Arbitrary Waveform Generator (AWG) operating at nominal 0 dBm and 50 MHz.
2. 50 MHz AWG Calcs - Calculate the AWG’s 50 MHz phase jitter based on the measured phase noise data, using the procedure described earlier. In this context, the term “phase jitter” always refers to the RMS quantity based on integrating phase noise over the 12 kHz – 20 MHz offset frequency range. The calculated result is 7791.7 fs, or 7.7917 ps, which is noisy indeed. This result is within 0.1% of the figure reported by the Agilent E5052B screen cap.
3. 50 MHz Sig Gen Meas Data - Measure the signal generator’s 50 MHz phase noise. This worksheet imports the CSV file containing the measured phase noise for a signal generator also operating at nominal 0 dBm and 50 MHz.
Note: This is the clock phase noise that will be input to the jitter attenuator (JA) to estimate its Jitter Generation (JGEN) at the output frequency. It was presumed, based on previous experience, that the sig gen’s performance would be better than the AWG’s and so would be a good candidate for this role. However, this had to be confirmed.
4. 50 MHz Sig Gen Calcs - Calculate the sig gen’s 50 MHz phase jitter. The calculated result is 785 fs, within 0.02% of the figure reported by the Agilent E5052B.
5. 50 MHz AWG vs Sig Gen Plots - Here I compare the AWG’s phase noise against the signal generator’s, both operating at 50 MHz. Generally speaking, the sig gen’s phase noise is close to, or better than, the AWG’s. The previous worksheets’ calculations showed that the signal generator was roughly an order of magnitude better than the AWG in terms of phase jitter. Given this confirmation, it is reasonable to select the sig gen as the low noise source for the subsequently estimated JGEN.
Note: It may also be expedient to operate a jitter attenuator in Free Run mode and use its output clock phase noise as a stand-in for JGEN. It does not account for all noise sources but can often get within 5% for typical jitter bandwidths. However, it will not be as accurate at low offset frequencies.
6. JGEN 156.25 MHz Meas Data - This worksheet imports the CSV file containing the JA’s 156.25 MHz output clock measured phase noise, where the low noise RF signal generator supplies the 50 MHz input clock. This data is used to estimate the JA’s JGEN.
7. JGEN 156.25 MHz Calcs - Calculate the JA’s JGEN phase jitter. The calculated result is 83.5 fs, within 0.03% of the figure reported by the Agilent E5052B.
8. Clock Tree Calcs - This is the clock tree calculations worksheet that puts everything together. Inspecting the columns going from left to right you can see the following operations:
Two different E5052B screen caps are copied on to this last calculations worksheet, one for the JA output clock, and one for the buffer output clock.
9. Clock Tree Plots - All of the relevant input, interim, and output curves for the jitter attenuator are plotted here.
So how did it go? It went reasonably well with a caveat at the end.
The JA output clock phase jitter was calculated to be 83.47 fs which was 0.9% lower than the measured 84.19 fs. The shape of the measured phase noise plot looks close to the expected plot except close in.
The buffer output clock phase jitter was calculated to be 167.31 fs, which was 5.2% lower than measured, based on using the datasheet typical value for additive jitter, 12 kHz – 20 MHz. We don’t simply see +1 or +2 dB added everywhere to the JA output clock phase noise. Rather, the measured phase noise plot showed more phase noise at far offset frequencies, where the “floor” rose from about -162 to -152 dBc/Hz.
Bottom line: These results are good from a phase jitter point of view. However, it appears that a buffer JTF would be needed to better predict the end phase noise plot.
I hope you have enjoyed this Timing 101 article. This is the last post for 2018. Happy Holidays and Happy New Year to you all! I look forward to exploring more topics with you in 2019.
As always, if you have topic suggestions, or there are questions you would like answered, appropriate for this blog, please send them to firstname.lastname@example.org with the words Timing 101 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on.
Using isolated gate drivers as discrete components in system design can reduce overall system costs due to package size requirements. In this blog, we take a look at isolated gate drivers, discrete gate drivers, component integration and various solutions. We also discuss the benefits and tradeoffs to integration and why it’s not always the best solution.
Component integration has been the driving force of the semiconductor industry for more than 60 years. It’s right there in the industry term, “integrated circuits,” and year after year diligent circuit designers, engineers and product marketers look for opportunities to take chips to the next level of integration to reduce cost, shrink device and board size, and minimize bill of materials (BOM).
Why not? There are many good reasons and advantages for system designers to integrate more functionality into IC devices. First is convenience. Soldering down one device is always better than having to solder down two. Next is interoperability. Integrated components are, of course, designed to work together. There is no need to worry about matching digital interfaces, impedances or messy glue logic. Finally, cost is a big incentive for component integration. Cost reduction has been the promise of integration realized now in economical computing systems and low-cost microcontrollers (MCUs) with an ever-increasing slew of functions.
When functions are complementary in achieving a system goal, then integration makes a lot of sense. The integration of high-performance op amps with analog-to-digital converters (ADCs) is a good example. The next step is integrating these analog components with an MCU. Together, they accomplish a system requirement with all the advantages of integration. Now, the further integration of wireless components is the next waypoint on the trail of semiconductor progress.
Integration Isn't Always the Best Solution
Not all integration incurs advantages without significant disadvantages or tradeoffs. In some cases, the better choice for a system design may be to continue with discrete components. Often, the deciding factor in whether to integrate or not is the effect of noise on the various components. Sensitive analog measurement integrated with noisy switching components rarely results in an improved system. Another instance when integration comes into question is when there are parts of the system that are space critical. This is generally related to the parasitic capacitors, loops and inductors in the system. When one parameter must be minimized, it often takes precedence over any advantages that may be gained by integration. Finally, the cost benefit of integration can sometimes reverse. This situation is seen with power MOSFETs where discrete components end up being cheaper than equivalent integrated devices because of the specialized fab process and packaging associated with them.
Isolated Gate Drivers
A common component that exemplifies the advantages of discrete over integrated components is the isolated gate driver. Isolated gate drivers are used when switching high-voltage rails in power conversion systems. Besides the requirements associated with effective driving of switch gates – fast current sourcing, low propagation delay and high transient immunity – there are also distinct requirements associated with the isolation such as package spacing.
There are clear reasons why an isolated gate driver is not a good candidate for integration into its paired system controller. For example, the fast, high-voltage switching of a field-effect transistor (FET) gate is inherently noisy. The gate voltage on the high-side switch travels through the entire range between the lower rail and the upper rail during the typical switching cycle. In some areas of the switching cycle, it can change by hundreds of volts or more in tens of nanoseconds or less. This fluctuation produces huge transients on the gate driver output. Dedicated gate drivers are designed to reject these transients, but introducing this noise into the package can affect all circuits present on the die. If those circuits were sensitive analog circuits or time-critical digital circuits, they would be overwhelmed and their functions rendered useless.
Another reason integration is not an option for these components is that the gate driver needs to be close to the switch it is responsible for. The switch used and its associated requirements for heatsink mass and airflow often set the size for the switching subsystem. For switching half-bridges, and especially for full-bridges, integrated components make it impossible to locate the gate driver close to all of the FETs being used – at least two but often four or more devices. When designing a half-bridge or full-bridge circuit, component placement and printed circuit board (PCB) layout is critical to performance. To get the best performance, current return paths and the effect of parasitic elements – stray capacitance and inductance – must be minimized. Parasitic capacitance and inductance are unavoidable but keeping the driver close to the FET minimizes adverse effects.
Finally, the unique creepage requirements associated with the galvanic isolation deter integration of this component. Creepage is defined as the spacing along the package between exposed metal on the outside of the IC. Generally, as the bus voltage increases, the creepage must be larger. Typical creepage for isolated gate drivers runs from about 4 mm to 8 mm and even larger.
In the theoretical case of integrating an isolated gate driver, this creepage requirement places a large burden on the rest of the components. Integration with a system controller would require the package to grow in size and a large area left free of pins or exposed metal that might reduce creepage. This might significantly reduce the peripherals available to controllers that usually have pins around four sides of a device with functions assigned to each. Increasing the package size and accommodating the requirements of the isolation barrier will surely increase system cost.
Silicon Labs Discrete Gate Drivers
We offer several families of high-performance discrete isolated gate drivers. Some include options for single gate drivers that can be placed very close to the power switch. Other families have high-side/low-side pairs, which provide the same benefits of a discrete driver in noise immunity and cost optimization. However, care must be taken in layout of these devices to maintain symmetric parasitic environments.
The Si827x driver family, for example, provides a very high level of transient noise immunity. The device operates as expected even in the presence of 200 kV/µs common mode transients. Other gate driver families, such as the Si8239x, offer up to 5 kV isolation ratings in packages with 8 mm creepage. Achieving these specifications and distances, while keeping the solution cost-effective, would be difficult, if not impossible.
Integration of components into a more capable single device makes sense in many cases. Integration of analog and mixed-signal functions, memory and high-performance digital logic has been a boon for the semiconductor industry for decades. The integration model falls short in some application cases, though. Gate drivers used in switching circuits for power converters must remain discrete components to keep noise from interfering with system controller functioning and to allow drivers to be placed close to switches to reduce parasitic effects. Using isolated gate drivers as discrete components in a system design can reduce overall system costs due to the unique package size requirements. Attempting to integrate these components creates a distinct burden that can only be addressed with expensive, non-standard packaging.
To get started, check out these isolation development tools.
In the past few years, the IoT industry has aggressively expanded its scope toward 2.4 GHz-based protocols such as Bluetooth, Wi-Fi, and Zigbee. These protocols have their pros and cons, but there is one common denominator: they are not controlled by telecom operators and their interests. Companies could implement IoT systems quite flexibly, the way they wanted, and interoperability was mostly managed by protocol alliances such as the Bluetooth SIG, guaranteeing interoperability between different vendors. But protocol compliance testing is just a minor part of the total cost of ownership for an IoT device. How do you manage CE, FCC, and other country regulations? It is quite straightforward to manage global regulatory certifications for a single device, but what if your product portfolio includes tens of devices sold around the world?
When a considerable share of the products in a company’s mix start to include IoT functions, the company begins to realize that managing regulatory certification for discrete designs becomes a burden, and the need for affordable, small, high-quality, pre-certified modules rapidly increases.
Number of Standards-Based IoT Devices is Exploding
There are thousands upon thousands of applications currently enabled by standards-based IoT protocols, and the number is expanding rapidly. It will soon be nearly impossible to find electronic devices that are not somehow IoT enabled; nearly everything is about to become connected, as the integration costs are reasonable. As a result, a large number of companies that are not used to hiring and maintaining electronics/RF engineers or protocol experts, and that in the past developed products not generally considered technology products, are joining the IoT revolution en masse.
Construction equipment suppliers, agriculture device makers, and home automation companies are a few examples. These kinds of companies have mainly focused on mechanics or quite simple electronic functions; advanced RF engineering for IoT functionality is not part of their core know-how. The relevant question is: how can these companies efficiently, and with reasonable investment, transform their products for the IoT era and meet the compatibility requirements? They need something easy to implement and manage. A very good solution is to consider new system-in-package (SiP) modules, which strike the right balance of time to market, pre-certification, size, and cost.
Growing Number of IoT Ecosystems
Companies are forming ecosystems around their IoT-enabled devices, and they are inviting partners and subcontractors to join. These ecosystem builders face the interoperability challenge: do the devices work seamlessly together with the best possible performance? How do you make sure the devices in the ecosystem fulfill end-user expectations? A good example of these ecosystems is building automation systems: connected lighting, whitegoods, and so on. How do these companies ensure a successful journey into the IoT ecosystem?
A very good candidate to solve this issue is a pre-certified SiP module, guaranteeing flexible designs, interoperability, performance, the right cost, easy product management, and fast time-to-market.
Why System-in-Package (SiP) Modules Are Beneficial for Ecosystem Development
SiP is a term for advanced semiconductor packaging where an IC is assembled together with passives on a substrate. The look and feel of a SiP IoT module is just like an IC/SoC, but unlike an IC, the SiP module integrates all the functionality needed for IoT operation at the same size and scale as an SoC. In other words, SiP modules are completely integrated, certified systems ready for IoT functionality.
What makes the Silicon Labs SiP modules so feasible for IoT ecosystem creation? A proper high-performance RF design is not trivial, nor easy to implement and manage in a way that ensures good radio performance, which is the key to robust functionality. The RF design burden is taken away when designers use a completely integrated SiP module. The SiP allows flexible placement of the module in any electronic device with a small footprint, and the compact size allows the rest of the device to be flexibly engineered.
Why Are SiP Modules So Convenient?
The patent-pending antenna of our SiP modules is in the substrate, and it is engineered in a way that achieves 70% antenna efficiency. Another great benefit of our SiP modules is that they do not easily detune off the band, and if one does, it is easy to fix by simple means without time-consuming RF engineering. This 70% antenna efficiency is hard to beat, even by a seasoned RF engineer designing the system from discrete components with a significant time and testing budget. The SiP modules achieve high performance and small size on a scale no longer easily reachable with discrete designs.
The ecosystems utilizing such precisely engineered SiP modules will have significant advantages in compatibility, RF range, robustness, time-to-market and pre-certification.
It is crucial for an IoT device to have good wireless range to make sure the RF link is robust. Even at short distances, an RF design is only as good as the amount of interference it can tolerate while still achieving fast data rates and low power consumption. Another great benefit of the SiP module for the ecosystem is its full protocol and regulatory certifications, such as FCC and CE. This means the end user of the module inherits certifications from Silicon Labs and avoids RF or protocol testing completely. It is our responsibility to make sure our products comply, leaving the IoT developer free from compliance worries and the regular re-certifications required to meet ever-evolving RF regulations.
What’s New With Silicon Labs SiP Modules?
Silicon Labs just released its most advanced SiP module, the BGM13S. The module is built on the BG13 die and supports state-of-the-art Bluetooth 5 Low Energy and Bluetooth mesh, including the long-range coded PHY. The module has 512 kB of memory, enough to support over-the-air updates. Its several TX power variants offer line-of-sight range up to 700 meters, an extremely good figure considering the module’s size of only 6.5 x 6.5 mm, including an antenna that uses the customer PCB as part of the antenna structure. This advanced SiP design makes it possible for an OEM to optimize RF range without any RF engineering. We have also improved the module’s manufacturability by using a popular 0.5 mm soldering pitch; the more relaxed pitch makes it possible to manufacture devices even at the lowest-cost contract manufacturers.
Learn more about Silicon Labs SiP modules
Bluetooth 5.0 and Bluetooth Mesh: BGM13S (Available now, 2018)
Zigbee/15.4, Thread: MGM13S (Available now, 2018)
Bluetooth 5.0 (no Bluetooth Mesh): BGM11S, BGM121, BGM123 (Available since November 2016)