Official Blog of Silicon Labs

      • Nanoleaf’s Award-Winning Lights Get Wirelessly Connected with Silicon Labs ZigBee

        Anonymous | 10/302/2015 | 04:00 PM

        [Image: Nanoleaf Smarter Kit with Silicon Labs ZigBee inside]

        Nanoleaf is a green technology company founded to build products that make a difference to the planet. We are very proud that their latest introduction, the Smarter Kit, includes our ZigBee SoCs and communications stack. They selected our ZigBee hardware, stack and tools because our implementation is very low power, mature and easy to use.


        The Smarter Kit is also one of the first connected home products to work within Apple’s HomeKit ecosystem, allowing it to work with iPhones, iPads and Macs out of the box. The product is being crowdfunded on Indiegogo. Sign up there now to preorder your own.


        Nanoleaf is a very cool company. It was founded in 2012 and ran a hugely successful Kickstarter campaign in 2013 for its Nanoleaf One light bulb. The newest light bulb, the Nanoleaf Ivy, improves on the Nanoleaf One by reducing the number of LEDs on each “petal.” While this reduces the overall light output, it also makes the Nanoleaf Ivy a candidate for ENERGY STAR™ certification, which can garner tax reductions for buyers.


        While writing this blog, I found myself digging into their fascinating products and technology on their news/blog site. They have great coverage from a host of interesting magazines. They also won 2015 Red Dot Design Awards for the high design quality of their Nanoleaf One and Nanoleaf Bloom light bulbs. The bulbs use innovative design and LED technology to provide light at better than twice the efficiency of other LED bulbs, while outputting the same amount of light as a standard 75 W, 100 W or even 115 W incandescent bulb.


        [Image: Nanoleaf light bulb, unfolded]


        The bulb shape, shown unfolded above with the Nanoleaf-printed design PCBs instead of the black PCBs on the site, is the brainchild of the three founders. The various innovative design ideas and technologies are detailed in a very cool section of their site.


        Another great aspect of the site is that each product has a section providing an estimate of its cost savings over the course of its 27-year life.


        We are very happy to be partnered with Nanoleaf. They are a great company making great products.


        Find out more about the low-power ZigBee offerings inside the Nanoleaf products.

      • Chapter 7: Create a Sprinkler Timer Part 5 – Complete the Timer Circuit and Save Start/Stop Times

        lynchtron | 10/301/2015 | 12:53 PM


        This is the fifth part of our series about how to build a sprinkler timer with the EFM32.  At this point, you should be able to control an onboard LED with a timer that you programmed to start and stop.  The EFM32 should be able to keep track of time and use the LCD together with the buttons on the starter kit to set the start and stop times, similar to an alarm clock.  In this section, we will learn how to save those start and stop times so that they don’t need to be reprogrammed every time the EFM32 loses power.  This is a good thing to know how to do for any gadget.  Finally, we will complete the electrical connections to the water solenoid valve.
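Before diving into storage, it helps to see the shape of the alarm-clock-style comparison the timer has to make. The helper below is a sketch of my own (not the chapter's code, and the name is hypothetical): given the current time and the stored start/stop times as minutes since midnight, it decides whether the valve should be open, including schedules that wrap past midnight.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: decide whether the valve should be open, given
 * the current time and the stored start/stop times, all expressed as
 * minutes since midnight.  Handles watering windows that wrap past
 * midnight (e.g. start 23:50, stop 00:10). */
bool valve_should_be_open(uint16_t now, uint16_t start, uint16_t stop)
{
  if (start <= stop)
  {
    /* Normal window contained within one day. */
    return (now >= start) && (now < stop);
  }
  /* The watering window wraps around midnight. */
  return (now >= start) || (now < stop);
}
```

In the real timer, `now` would come from the RTC counter and `start`/`stop` from the values the user programmed with the kit buttons.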


        Onboard Non-Volatile Flash Storage

        In order to power down the device and retain the user-programmed start and stop times, we need some non-volatile memory space to store them.  That place is a user page in the onboard flash memory.  This is the same flash memory that stores the program instructions every time you use Simplicity Studio to upload a new program to the MCU.  If we fail to store the user’s programmed times in the user page, the start/stop times will be lost when the RAM loses power.  Flash memory retains its contents even when no power is applied to the device. 


        We have to be careful when programming flash memory that we don’t write to it too many times over the lifetime of the device.  The EFM32 spec guarantees 20,000 erase cycles (including flashing the program), which is fine for our purposes here because we don’t expect the user to program the start/stop times that many times in a lifetime.  However, any automatic writing of data to the flash memory can burn out a location very quickly because we have the ability to write things to memory millions of times per second. 
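A simple way to guard against burning out the flash is to write only when the value has actually changed. The sketch below illustrates the pattern with a plain variable standing in for the user-page word (my own illustration, not the chapter's code); on real hardware the assignment would be an erase plus `MSC_WriteWord`.

```c
#include <stdint.h>

/* Simulated flash word and write counter -- stand-ins for the real
 * user page, used here only to illustrate the guard. */
static uint32_t stored_value = 0xFFFFFFFFu;
static uint32_t write_count  = 0;

/* Only touch flash when the value actually changed; every skipped
 * write is one less erase/write cycle consumed from the 20,000. */
void save_if_changed(uint32_t new_value)
{
  if (new_value == stored_value)
  {
    return;                    /* nothing to do -- spare the flash */
  }
  stored_value = new_value;    /* would be erase + MSC_WriteWord() */
  write_count++;
}
```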


        If we were to extend this lesson to include the backup power domain, we could actually run the timer in EM4 mode and keep it going on a coin cell battery in a barely-powered state.  But every time you drop into EM4, it is as if the system is being reset.  In order to fetch the start and stop times after such a reset, they have to be placed into non-volatile memory.  The following code lets you store and retrieve data across reset or power events. 


        #include "em_msc.h"

        #define USER_PAGE_ADDR        0x0FE00000

        msc_Return_TypeDef flash_write(uint32_t number)
        {
              msc_Return_TypeDef ret;
              uint32_t *addr = (uint32_t *)USER_PAGE_ADDR;

              MSC_Init();
              ret = MSC_WriteWord(addr, &number, sizeof(number));
              MSC_Deinit();
              return ret;
        }

        uint32_t flash_read(void)
        {
              uint32_t *data = (uint32_t *)USER_PAGE_ADDR;
              return *data;
        }

        msc_Return_TypeDef userpage_erase(void)
        {
              msc_Return_TypeDef ret;
              uint32_t *addr = (uint32_t *)USER_PAGE_ADDR;

              MSC_Init();
              ret = MSC_ErasePage(addr);
              MSC_Deinit();
              return ret;
        }

        Any data that you store with the above functions will persist even across flashing the device during normal debugging and programming.  Therefore, you can use this memory to hold a board serial number, for example.  You must call the userpage_erase function before you call flash_write: a flash write can only clear bits from one to zero, while an erase operation sets every bit in the page back to one.  The flash_read function can be executed at any time and has no read limit.  I will not implement these functions in this particular timer, and I leave that as an extra-credit exercise for the reader.  This functionality is very important to many types of gadget projects.
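The erase-before-write rule falls directly out of how NOR flash behaves: a write can only clear bits, which behaves like a bitwise AND of the new data into the cell. This little model (my own illustration, not emlib code) makes the point concrete.

```c
#include <stdint.h>

#define ERASED_WORD 0xFFFFFFFFu

/* Model of a NOR-flash word: a write can only clear bits (1 -> 0),
 * so the hardware effectively ANDs the new data into the cell. */
uint32_t flash_model_write(uint32_t cell, uint32_t data)
{
  return cell & data;
}

/* An erase is the only operation that sets bits back to one, and it
 * works on a whole page at a time. */
uint32_t flash_model_erase(void)
{
  return ERASED_WORD;
}
```

Writing 0x1234 to an erased cell works fine, but writing 0xFF00 over a cell already holding 0x00FF yields 0x0000, not 0xFF00 — which is why you must erase first.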


        Interfacing an External Solenoid

        Now that our sprinkler timer is fully programmed and tested on a harmless LED, it is time to physically connect the solenoid valve and finish the job.  It is actually quite simple, and we have already done something similar when driving the 12V LEDs.  All that is needed is a simple NPN transistor, a 12V power supply or battery pack, the 12V solenoid and a diode to handle the kickback from the collapsing magnetic field when the solenoid is switched off.  We didn’t need a diode in the LED lesson because the LEDs did not generate a magnetic field.  In this case, however, the kickback from the solenoid is powerful enough to damage the rest of the circuit, including the EFM32, so don’t connect anything without the diode in place.


        A solenoid valve contains a movable magnetic plunger with an electromagnet coil wrapped around it.  Applying power to the coil creates a magnetic field that moves the plunger and opens the water valve.


        Connect pin PD1 from the Starter Kit through a 1k-ohm resistor to the base of the NPN transistor.  Connect the negative terminal of the 12V battery pack and the emitter of the transistor to the ground pins on the Starter Kit, and connect the ground wire of the solenoid to the collector of the transistor.  Then, connect the diode across the solenoid terminals in the direction shown in the figure below and connect the positive terminal of the battery pack to the positive lead on the solenoid like so:



        DO NOT CONNECT THE +12V POSITIVE TERMINAL OF THE BATTERY PACK TO ANYWHERE ON THE STARTER KIT!  The 12V battery pack is only to be connected to a terminal on the solenoid (along with the striped side of the diode).  The terminals on the solenoid are interchangeable.  All that matters is that there is a voltage across the terminals.  The orientation of the diode, however, is very important with respect to ground and +12V.  The diode has to be "pointing" toward positive voltage, or else your batteries could be shorted directly to ground when the transistor is in the on state.  That would cause the batteries to warm up, and it could be dangerous, so pay attention!


        I wired up my parts on a breadboard first, then crimped and soldered the transistor and diode right into the spade connectors on the solenoid valve before adding heatshrink around the components:


        Test this out on the bench first.  WARNING: If you touch the terminals on the solenoid just as power is removed, you may get a shock.  The energy built up in the solenoid’s magnetic field has no place to go.  This is why we use a diode to shunt that energy back into the solenoid until the natural resistance of the system drains it down. 


        Once you hear that the solenoid is opening and closing like it should, disconnect the Starter Kit and place the CR2032 battery in the slot on the Starter Kit.  Slide the switch over to the left to run the MCU from the CR2032 coin cell battery.  Now, program your timer and place it in the waterproof enclosure.  You can use a large cell phone or tablet soft case that will allow you to see the LCD and LED inside, and also let you push the programming buttons.  You will have to cut a small hole somewhere in the case to pass through the wire that leads to the solenoid.  You can seal that up with some silicone caulk to keep it waterproof. 


        This completes the chapter on how to build a basic sprinkler timer.  Note that it is really just the crude beginnings of a much more feature-rich device.  Any modern sprinkler timer should have a soil moisture sensor to prevent watering in the rain.  In addition, the solenoid draws about 300 mA whenever the valve is in the open position.  The energy consumption could be improved by using a valve controlled by a stepper motor, which can maintain the valve position without power.  


        In the next lesson we will branch out and start communicating with other smart devices.



      • ARM TechCON (CA)

        Siliconlabs | 10/296/2015 | 06:11 AM

        Location: Santa Clara, CA / mbed zone Pod # 512_5

        Date: November 10-12, 2015


        Silicon Labs will showcase its Thread connected lighting and connected home demo in the ARM mbed Zone at ARM TechCon.



        Speaking Sessions:

        Using Dual-Band/Multi-Protocol Wireless SoCs
        Presenter: Greg Fyke, Director of Marketing for IoT Wireless Products - Silicon Labs
        Date/Time: Tuesday, November 10 at 1:30pm - 2:20pm
        Location: Ballroom G

        Processor Loading for a Router in a Connected Home
        Presenter: Skip Ashton, Vice President of Software Engineering - Silicon Labs
        Date/Time: Wednesday, November 11 at 4:30pm - 5:20pm
        Location: Ballroom G

        Simplifying software development for SoCs containing multiple Cortex-M based processors
        Panel Participant: Øivind Loe, Senior Manager of Strategic Marketing - Silicon Labs
        Date/Time: Thursday, November 12 at 4:30pm - 5:20pm
        Location: Ballroom F



      • IoT Hero Julia Park Innovates Smart Home Design

        deirdrewalsh | 10/294/2015 | 11:56 AM
        Recently, I got to meet Julia Park, an emerging IoT Hero from The University of Texas.  Her unique smart home design places importance on monitoring and educating, rather than automating and controlling. I sat down with her to find out more about the project. 
        Hi, Julia. So, tell me a little bit about yourself. 
        Hello! My name is Julia Park, student leader of the NexSmart system, and my team members are Abhishek Pratapa, Alex Best, and Ignacio Urena. A couple of us are UT Austin students (hook 'em!) and the others are graduates, but just as passionate about using technology to promote sustainability! We're all from different walks of life and have unique backgrounds. I am studying architecture and engineering, while the rest of the team is composed of computer science and electrical engineering backgrounds.

        Great. I understand you guys are working on a cool application. 
        Yes. A little bit about the project – Our smart home project is a part of a larger effort to build a fully solar-powered home to be entered into this year's Department of Energy Solar Decathlon competition.  The home, called NexusHaus, is a design collaboration between The University of Texas at Austin and the Technische Universität München.
        The home's name comes from the idea of 'Nexus' which combines ideas about energy, water, food, and density and incorporates the concepts into a self-sustaining design. Initially, I was involved with the architectural design of the home, but at some point I realized that in order to truly influence a modern homeowner's lifestyle, we have to use technology because it dominates nearly every facet of our lives today. I realized we could build some sort of system to facilitate the homeowner's lifestyle, and so I found some other students who were interested in this idea to build a 'smart green home' and here we are!

        Early on in the project, we decided that a notion of 'extreme automation' such as being able to remotely control lights from our phones did not fit with the overall concept of the home. We felt that this type of remote automation might lead to a 'forgetful' or 'lazy' attitude where the importance of turning off the lights in the first place might become lost. We wanted to encourage actively leading a sustainable life, instead of potentially facilitating a further disconnection from understanding how much energy we are using.
        "Therefore, our NexSmart system is a smart home design that places importance on monitoring and educating, rather than automating and controlling."
        The concept behind our home smart management system (NexSmart) is that it is custom designed technology for the NexusHaus, and it promotes understanding about energy consumption and water conservation. Instead of using 'luxury' smart home technologies from various manufacturers and awkwardly placing them together in the home, we wanted to integrate everything into one cohesive smart home system that emphasizes and encourages a sustainable lifestyle. We feel that if we can gather data about various elements of the home and display them in a non-obtrusive manner, we can subtly influence the homeowner to live a more sustainable lifestyle.
        The system is based on two concepts:
        1: NexSmart will collect information (via sensors on the Silicon Labs Sensor Puck) about the environment and display suggestions on how to improve the home conditions and save energy.

        2: The smart home should not be composed by random gadgets tacked onto the house, but should be integrated and expressed into the architecture of NexusHaus. In other words, it should have a presence in the home. We decided to locate a lot of the hardware in our SenseBar, which is a linear acrylic piece in the home that houses the tablet and glows depending on how much energy or water the user might be consuming.
        Tell me a bit more about how you're using Silicon Labs products? 
        We are using the fantastic Sensor Puck product! It came out just this year and it's a great little wireless board loaded with sensors. We're using it primarily for temperature and humidity sensing, but it's very flexible and adaptable should we decide to add onto the smart home system later and use its other sensors in the future. The temperature and humidity sensors detect indoor thermal comfort levels and control the HVAC system. We are considering utilizing the light sensor as well to detect illuminance levels inside the home after we finish the current version. 

        Fascinating! What's the biggest thing you've learned throughout this project? 
        The absolute biggest thing we have learned through this experience is the importance of a cross-disciplinary education – it was a great challenge and experience to try to meld technology and architecture and to learn about the different disciplines. The next biggest thing is probably communication, and the sheer amount of dedication it takes to make a project like this work during school.
        I know it's a big question, but in your opinion, what does the future of IoT look like? 
        I think the IoT movement is an incredibly powerful one, and I believe it's applicable in so many different fields. I think it will keep growing, and in particular I hope more people think of ways to integrate connectivity into architecture. Not only is it convenient and safe to be able to see and control aspects of buildings remotely; IoT also has great potential as an informative, educational tool. We spend 90% of our time indoors, yet a lot of us don't know anything about the buildings we inhabit. IoT makes knowledge about our environment very easily accessible. Having connectivity in the physical world opens up so many possibilities – it's exciting to imagine what they could be!
        For more from Julia, watch this quick video interview. 


      • Choosing Between an 8-bit or 32-bit MCU - Part 2

        Anonymous | 10/289/2015 | 03:13 PM



        Introduction – Part 2


        This blog series compares use cases for 8-bit and 32-bit MCUs and serves as a guide on how to choose between the two MCU architectures. Most 32-bit examples focus on ARM Cortex-M devices, which behave very similarly across MCU vendor portfolios.


        There is a lot more architectural variation on the 8-bit MCU side, so it’s harder to apply apples-to-apples comparisons among 8-bit vendors. For the sake of comparison, we use the widely used, well-understood 8051 8-bit architecture, which remains popular among embedded developers.


        Part 2 – Architecture Specifics and Conclusion: A More Nuanced View of Applications


        Part 1 of this blog series painted the basic picture for the 8-bit and 32-bit trade-offs.


        Now it's time to look at a more detailed analysis of applications where each architecture excels and where our general guidelines in Part 1 break down.


        To compare these MCUs, you need to measure them, and there are a lot of tools to choose from. I’ve selected scenarios I believe provide the fairest comparison and are most representative of real-world developer experiences. The ARM numbers below were generated with GCC plus the nano C library and -O3 optimization.


        I made no attempt to optimize the code for either device. I simply implemented the most obvious “normal” code that 90 percent of developers would come up with.


        It is much more interesting to see what the average developer will see than what can be achieved under ideal circumstances.




        There is a noticeable difference in interrupt and function-call latency between the two architectures, with 8051 being faster than an ARM Cortex-M core. In addition, having peripherals on the Advanced Peripheral Bus (APB) can also impact latency since data must flow across the bridge between the APB and the AMBA High-Performance Bus (AHB). Finally, many Cortex-M-based MCUs require the APB clock to be divided when high-frequency core clocks are used, which increases peripheral latency.


        I created a simple experiment in which an interrupt is triggered by an I/O pin. The interrupt does some signaling on pins and updates a flag based on which pin triggered the interrupt. I then measured several parameters, shown in the following table, for the 32-bit implementation.


        [Figure 2: I/O interrupt experiment]
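The experiment's code isn't reproduced in text here, so the following is a representative sketch of my own (not the original listing) of the ISR logic just described, with the pending-interrupt flags passed in as a parameter so it can be exercised on a host; on a real Cortex-M part they would be read from the GPIO interrupt-flag register and cleared before returning.

```c
#include <stdint.h>

#define PIN0_MASK (1u << 0)
#define PIN1_MASK (1u << 1)

volatile uint8_t which_pin;   /* flag the "ISR" updates */

/* Representative body of the experiment's ISR: inspect the pending
 * interrupt flags and record which pin fired. */
void gpio_isr_body(uint32_t pending)
{
  if (pending & PIN0_MASK)
  {
    which_pin = 0;
  }
  else if (pending & PIN1_MASK)
  {
    which_pin = 1;
  }
}
```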


        The 8051 core shows an advantage in Interrupt Service Routine (ISR) entry and exit times. However, as the ISR gets bigger and its execution time increases, those delays will become insignificant.


        In keeping with the established theme, the larger the system gets, the less the 8051 advantage matters. In addition, the advantage in ISR execution time will swing to the ARM core if the ISR involves a significant amount of data movement or math on integers wider than 8 bits. For example, an ADC ISR that updates a 16- or 32-bit rolling average with a new sample would probably execute faster on the ARM device.
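The kind of ADC ISR math described above can be sketched in a few lines. This is my own minimal version (assuming a simple 16-sample moving accumulator, not any particular vendor example): exactly the 16/32-bit arithmetic where a 32-bit core finishes faster.

```c
#include <stdint.h>

/* Rolling average over roughly the last 16 samples, kept as a 32-bit
 * accumulator: subtract one "average-sized" share, add the new
 * 16-bit sample.  All of this is 32-bit math -- one instruction per
 * operation on ARM, several on an 8051. */
static uint32_t accum;

uint16_t rolling_average(uint16_t sample)
{
  accum -= accum / 16;           /* drop one share of the average */
  accum += sample;               /* fold in the new ADC sample    */
  return (uint16_t)(accum / 16); /* current average               */
}
```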


        Control vs. Processing


        The fundamental competency of an 8051 core is control code, where the accesses to variables are spread around and a lot of control logic is used (if, case, etc.). The 8051 core is also very efficient at processing 8-bit data while an ARM Cortex-M core excels at data processing and 32-bit math. In addition, the 32-bit data path enables efficient copying of large chunks of data since an ARM MCU can move 4 bytes at a time while the 8051 has to move it 1 byte at a time.
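The data-path point can be made concrete with a word-at-a-time copy loop, a sketch of my own assuming 4-byte-aligned buffers and a length that is a multiple of 4. A 32-bit core runs this loop in a quarter of the iterations an 8-bit core would need for the same bytes.

```c
#include <stddef.h>
#include <stdint.h>

/* Copy `len` bytes a word at a time.  Assumes both buffers are
 * 4-byte aligned and len is a multiple of 4; an 8051 would have to
 * run four times as many iterations, one byte each. */
void copy_words(uint32_t *dst, const uint32_t *src, size_t len)
{
  size_t words = len / 4;
  while (words--)
  {
    *dst++ = *src++;   /* moves 4 bytes per iteration */
  }
}
```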


        As a result, applications that primarily stream data from one place to another (UART to CRC or to USB) are better-suited to ARM processor-based systems.


        Consider this simple experiment. I compiled the function below on both architectures for variable sizes of uint8_t, uint16_t and uint32_t.


        [Figure 3: Data size experiment]


        As the data size increases, the 8051 core requires more and more code to do the job, eventually surpassing the size of the ARM function. The 16-bit case is pretty much a wash in terms of code size, and slightly favors the 32-bit core in execution speed since equal code generally represents fewer cycles. It’s also important to note that this comparison is only valid when compiling the ARM code with optimization. Un-optimized code is several times larger.


        This doesn't mean applications with a lot of data movement or 32-bit math shouldn't be done on an 8051 core.


        In many cases, other considerations will outweigh the efficiency advantage of the ARM core, or that advantage will be irrelevant. Consider the implementation of a UART-to-SPI bridge. This application spends most of its time copying data between the peripherals, a task the ARM core will do much more efficiently. However, it's also a very small application, probably small enough to fit into a 2 KB part. Even though an 8051 core is less efficient, it still has plenty of processing power to handle high data rates in that application. The extra cycles available to the ARM device are probably going to be spent sitting in an idle loop or a “WFI” (wait for interrupt), waiting for the next piece of data to come in.


        In this case, the 8051 core still makes the most sense, since the extra CPU cycles are worthless while the smaller flash footprint yields cost savings.


        If we had something useful to do with the extra cycles, then the extra efficiency would be important, and the scales may tip in favor of the ARM core.




        8051 devices do not have a unified memory map like ARM devices, and instead have different instructions for accessing code (flash), IDATA (internal RAM) and XDATA (external RAM).


        To enable efficient code generation, a pointer in 8051 code will declare what space it's pointing to. However, in some cases, we use a generic pointer that can point to any space, and this style of pointer is inefficient to access.


        For example, consider a function that takes a pointer to a buffer and sends that buffer out the UART. If the pointer is an XDATA pointer, then an XDATA array can be sent out the UART, but an array in code space would first need to be copied into XDATA. A generic pointer would be able to point to both code and XDATA space, but is slower and requires more code to access.


        Segment-specific pointers work in most cases, but generic pointers can come in handy when writing reusable code where the use case isn't well known. If this happens often in the application, then the 8051 starts to lose its efficiency advantage.
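The cost of a generic pointer comes from its extra space tag. The host-side model below is my own illustration (real 8051 compilers such as Keil C51 handle this in emitted library code, not user code): the tag makes the pointer 3 bytes instead of 2, and every dereference pays for a dispatch on it, where a space-specific pointer would compile down to a single MOVC or MOVX instruction.

```c
#include <stdint.h>

/* Host-side model of an 8051 "generic" pointer: an address plus a
 * tag saying which memory space it points into.  The tag is the
 * extra byte that makes generic pointers 3 bytes instead of 2. */
enum mem_space { SPACE_CODE, SPACE_IDATA, SPACE_XDATA };

struct generic_ptr {
  enum mem_space space;
  const uint8_t *addr;
};

/* Every dereference dispatches on the space tag, much like the
 * library routine an 8051 compiler emits for generic pointers. */
uint8_t generic_read(struct generic_ptr p)
{
  switch (p.space)
  {
  case SPACE_CODE:   /* would be MOVC A,@A+DPTR on real hardware */
  case SPACE_IDATA:  /* would be MOV  A,@Ri                      */
  case SPACE_XDATA:  /* would be MOVX A,@DPTR                    */
  default:
    return *p.addr;  /* the host has one flat address space      */
  }
}
```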


        Identifying the “Core” Strengths


        I've noted several times that math leans towards ARM, and control leans towards 8051, but no application focuses solely on math or control. How can we characterize an application in broad terms and figure out where on the spectrum it lies?


        Let’s consider a hypothetical application composed of 10% 32-bit math, 25% control code and 65% general code that doesn’t clearly fall into an 8 or 32-bit category. The application also values code space over execution speed, since it does not need all the available MIPS and must be optimized for cost.


        The fact that cost is more important than application speed will give the 8051 core a slight advantage in the general code. In addition, the 8051 core has moderate advantages in the control code. The ARM core has the upper hand in 32-bit math, but that’s only 10% in the example. Taking all these variables into consideration, this particular application is a better fit for an 8051 core.


        [Figure 4: Application code breakout percentages]


        If we make a change to our example and say that the 32-bit math is 30% and general code only 45%, then the ARM core becomes a much more competitive player.


        Obviously, there is a lot of estimation in this process, but the technique of deconstructing the application and then evaluating each component will help identify cases where there is a significant advantage to be had for one architecture over the other.


        Power Consumption


        When looking at data sheets, it's easy to come to the conclusion that one MCU edges out the other for power consumption. While it's true that the sleep mode and active mode currents will favor certain types of MCUs, that assessment can be extremely misleading.


        Duty cycle (how much time is spent in each power mode) will always dominate energy consumption.


        Consider a system where the device wakes up, adds a 16-bit ADC sample to a rolling average and goes back to sleep until the next sample. That task involves a significant amount of 16-bit and 32-bit math. The ARM device is going to be able to make the calculations and go back to sleep faster than an 8051 device. In this case, illustrated below, the ARM core may have higher sleep currents, but results in a lower power system.
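The duty-cycle argument is just a weighted average. The helper below captures it; the figures in the usage note are illustrative numbers of my own, not datasheet values.

```c
/* Average current of a duty-cycled system:
 *   I_avg = (t_active * I_active + t_sleep * I_sleep)
 *           / (t_active + t_sleep)
 * Times in ms, currents in microamps; values are illustrative. */
double average_current_ua(double t_active_ms, double i_active_ua,
                          double t_sleep_ms,  double i_sleep_ua)
{
  return (t_active_ms * i_active_ua + t_sleep_ms * i_sleep_ua)
         / (t_active_ms + t_sleep_ms);
}
```

For example, a part that finishes the math in 0.1 ms at 2000 µA and then sleeps at 1 µA for the rest of a 100 ms period averages about 3 µA, while a slower part that needs 0.4 ms awake at 1500 µA averages nearly 7 µA despite a better 0.8 µA sleep current: the longer active time swamps the better sleep number.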


        [Figure 5: MCU duty cycle impacts power]


        Peripheral features can also skew power consumption one way or the other. For example, most of Silicon Labs’ EFM32 32-bit MCUs have a low-energy UART (LEUART) that can receive data while in a low-power mode, while only two of the EFM8 MCUs offer this feature. This peripheral affects the power duty cycle and heavily favors the EFM32 MCUs over EFM8 devices without the LEUART.


        8-bit or 32-bit? I still can't decide!


        What happens if, after considering all of these variables, it's still not clear which MCU architecture is the best choice? Congratulations! That means they are both good options, and it doesn't really matter which architecture you use.


        Rely on your past experience and personal preferences if there is no clear technical advantage.


        This is also a great time to look at future projects. If most future projects are going to be well-suited to ARM devices, then go with ARM, and if future projects are more focused on driving down cost and size, then go with 8051.      


        What does it all mean?


        8-bit MCUs still have a lot to offer embedded developers and their ever-growing focus on the Internet of Things. Whenever a developer begins a design, it's important to make sure that the right tool is coming out of the toolbox.


        The difficult truth is that choosing an MCU architecture can't be distilled into one or two bullet points on a Marketing PowerPoint presentation.


        However, making the best decision isn't hard once you have the right information and are willing to spend a little time applying it.


        ← Part 1

      • Choosing Between an 8-bit or 32-bit MCU - Part 1

        Anonymous | 10/289/2015 | 02:14 PM





        This blog series compares use cases for 8-bit and 32-bit MCUs and serves as a guide on how to choose between the two MCU architectures. Most 32-bit examples focus on ARM Cortex-M devices, which behave very similarly across MCU vendor portfolios.


        There is a lot more architectural variation on the 8-bit MCU side, so it’s harder to apply apples-to-apples comparisons among 8-bit vendors. For the sake of comparison, we use the widely used, well-understood 8051 8-bit architecture, which remains popular among embedded developers.


        Part 1 – The Basics, and Obvious Applications for 8 v 32-bit Architectures


        I was in the middle of the show floor talking to an excitable man with a glorious accent. When I told him about our 8-bit MCU offerings, he stopped me and asked, “But why would I want to use an 8-bit MCU?"


        This wasn't the first time I had heard the question, and it certainly won’t be the last.


        It's a natural assumption that just as the horse-drawn buggy gave way to the automobile and snail mail gave way to email, 8-bit MCUs have been eclipsed by 32-bit devices. While that transition may yet happen in some distant future, the current situation isn't quite that simple. It turns out that 8- and 32-bit MCUs are still complementary technologies, each excelling at certain tasks while performing at parity in others.


        The trick is figuring out when a particular application lends itself to a particular MCU architecture.


        Star Trek is Better Than Star Wars


        Asking "Is Star Trek better than Star Wars?" is similar to asking, “Is ARM Cortex better than 8051?”.


        The truth is that while both questions are interesting, neither one is logical. Each fits different applications very well. (And Star Wars is clearly superior. Just kidding. Please don’t comment-bomb me.)


        For MCUs, the much better question to ask is "Which MCU will best help me solve the problem I'm working on today?" Different jobs require different tools, and the goal is to understand how best to apply the available 8-bit and 32-bit devices.


        A Note on Tools and Updated Technology


        Before we begin comparing architectures, it's important to note that I am comparing modern 8-bit technology with modern 32-bit technology. I am using Silicon Labs’ EFM8 line of 8051-based MCUs, which, built on modern process technology, are far more efficient than the original 8051 architecture.


        Development tools are also important. Modern embedded firmware development requires a fully-featured IDE, ready-made firmware libraries, extensive examples, comprehensive evaluation and starter kits, and helper applications to simplify things like hardware configuration, library management and production programming.


        ARM has an army of tool developers supporting its impressive IDEs. Again, I used the Silicon Labs 8-bit IDE, Simplicity Studio, and it compares nicely with the various suites available for both ARM and 8-bit development.


        Obvious Choices for 8-bit and 32-bit MCUs


        System Size


        The first generality is that ARM Cortex-M cores excel in large systems (> 64 KB of code), while 8051 devices excel in smaller systems (< 8 KB of code). The middle ground could go either way, depending on what the system is doing. It's also important to note that in many cases, peripheral mix will play an important role. If you need three UARTs, an LCD controller, four timers and two ADCs, chances are you won't find all of those on an 8-bit part, while many 32-bit parts support that feature set.


        Ease-of-Use vs. Lowest Cost and Smallest Size


        For systems sitting in the middle ground where either architecture might do the job, the big trade-off is between the ease of use that comes with an ARM core and the cost and physical size advantages that can be gained with an 8051 device.


        The unified memory model of the ARM Cortex-M architecture, coupled with full C99 support in all common compilers, makes it very easy to write firmware for this architecture. In addition, there is a huge set of libraries and third-party code to draw from. Of course, the penalty for that ease-of-use is cost. Ease-of-use is an important factor for applications with high complexity, short time-to-market or inexperienced firmware developers.


        While there is some cost advantage when comparing equivalent 8- and 32-bit parts, the real difference is in the cost floor. It's common to find 8-bit parts as small as 2 KB/512 bytes (flash/RAM), while 32-bit parts rarely go below 8 KB/2 KB. This range of memory sizes allows a system developer to move down to a significantly lower-cost solution in systems that don't need a lot of resources. For this reason, applications that are extremely cost-sensitive or can fit in a very small memory footprint will favor an 8051 solution.


        8-bit parts also generally have an advantage in physical size. For example, the smallest 32-bit QFN package offered by Silicon Labs is 4 mm x 4 mm, while our 8051-based 8-bit parts are as small as 2 mm x 2 mm in QFN packages. Applications that are severely space-constrained often need to use an 8051 device to satisfy that constraint.


        General Code and RAM Efficiency


        One of the major reasons for the lower cost of an 8051 MCU is that it generally uses flash and RAM more efficiently than an ARM Cortex-M core, which allows systems to be implemented with fewer resources. The larger the system, the less impact this will have.


        However, this 8-bit memory resource advantage does not always hold. In some situations, an ARM core will be as efficient as, or more efficient than, an 8051 core. For example, 32-bit math operations require only one instruction on an ARM device, while requiring multiple 8-bit instructions on an 8051 MCU.


        The ARM architecture has two major disadvantages at small flash/RAM sizes: code-space efficiency and predictability of RAM usage.


        The first and most obvious issue is general code-space efficiency. The 8051 core uses 1-, 2- or 3-byte instructions, and ARM cores use 2- or 4-byte instructions. The 8051 instructions are smaller on average, but that advantage is mitigated by the fact that a lot of the time, the ARM core can do more work with one instruction than the 8051. The 32-bit math case is just one such example. In practice, instruction width results in only moderately more dense code on the 8051.


        In systems that contain distributed access to variables, the load/store architecture of the ARM architecture is often more important than the instruction width. Consider the implementation of a semaphore where a variable needs to be decremented (allocated) or incremented (freed) in numerous locations scattered around code. An ARM core must load the variable into a register, operate on it and then store it back, which takes three instructions. The 8051 core, on the other hand, can operate directly on the memory location and requires only one instruction. As the amount of work done on a variable at one time goes up, the overhead due to load/store becomes negligible, but for situations where only a little work is done at a time, load/store can dominate and give the 8051 a clear efficiency advantage.


        While semaphores are not common constructs in embedded software, simple counters and flags are used extensively in control-oriented applications and behave the same way. A lot of common MCU code falls into this category.
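        The counter/flag pattern described above can be sketched in plain C. This is an illustrative example (the names are mine, not from any benchmark): on an 8051, a compiler can typically decrement a direct-addressed byte like free_buffers with a single instruction, while a load/store core must load it into a register, modify it, and store it back.

```c
#include <stdint.h>
#include <stdbool.h>

/* A counter used as a simple resource semaphore. On an 8051 the
 * decrement below can compile to one direct read-modify-write
 * instruction; a load/store ARM core needs load, modify, store. */
volatile uint8_t free_buffers = 4;

/* Returns true if a buffer was successfully allocated. */
bool alloc_buffer(void)
{
    if (free_buffers == 0)
        return false;
    free_buffers--;   /* the small, frequent operation discussed above */
    return true;
}

void free_buffer(void)
{
    free_buffers++;
}

uint8_t buffers_available(void)
{
    return free_buffers;
}
```

        Because so little work is done per access, the three-instruction load/store sequence dominates the cost on an ARM core, which is exactly the scenario where the 8051 gains its efficiency advantage.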


        The other piece of the puzzle involves the fact that an ARM processor makes much more liberal use of the stack than an 8051 core. In general, 8051 devices only store return addresses (2 bytes) on the stack for each function call, handling many variables that would normally live on the stack as statically allocated variables instead. In some cases, this creates an opportunity for problems, since it makes functions non-reentrant by default. However, it also means that the amount of stack space that must be reserved is small and fairly predictable, which matters in MCUs with limited RAM.


        As a simple example, I created the following program. Then I measured the stack depth inside funcB and found that the M0+ core's stack consumed 48 bytes, while the 8051 core's stack consumed only 16 bytes. Of course, the 8051 core also statically allocated 8 bytes of RAM, consuming 24 bytes total. In larger systems, the difference is negligible, but in a system that only has 256 bytes of RAM, it becomes important.


        Stack Depth Benchmark Code.png
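        The benchmark listing survives only as an image above, so here is a minimal sketch of the kind of program it describes: main calls funcA, which calls funcB, and the stack depth is measured inside funcB. The function names match the text; the bodies and local variables are illustrative, not the original source.

```c
#include <stdint.h>

/* Innermost function: on hardware, the stack pointer would be
 * sampled here to measure total stack consumption for the two
 * nested calls, saved registers and locals. */
uint32_t funcB(uint32_t x)
{
    uint32_t a = x + 1;
    uint32_t b = x + 2;
    return a * b;        /* stack depth sampled at this point */
}

uint32_t funcA(uint32_t x)
{
    uint32_t tmp = x * 2;
    return funcB(tmp);
}
```

        On the M0+ the call chain consumed 48 bytes of stack; on the 8051, 16 bytes of stack plus 8 statically allocated bytes.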


        Next post will dive into Architecture Specifics, and a more nuanced look at where each architecture excels.


        PART 2 -->

      • Chapter 7: Create a Sprinkler Timer Part 4 – Create the Timer Program Logic

        lynchtron | 10/286/2015 | 07:57 PM



        This is part four of a five-part series on how to build a sprinkler timer using the EFM32 series of MCUs. At this point in the chapter, we have created software to control the LCD and the input buttons that adjust the start/stop times, and implemented a clock to keep track of time.  In this section, we will create the logic that pulls all of those pieces together and creates a functioning gadget. In addition, we will figure out how to test our code without waiting minutes or hours for the time to pass.


        As a reminder, the complete code for the examples in this and all chapters is located on GitHub here.


        Programming the Timer

        Now that I have all of the subsystems ready to go, I can finally implement the logic to step through the sequence that I described earlier in the user interface figure.  Here it is again for reference:



        During the programming of the clock and timer, I must store the time in the data struct for the RTC, so I built a set_clock_time function to do that:


        // Stores the hours/mins for clock time, start time, or stop time
        void set_clock_time(int index, uint16_t hours, uint16_t minutes)
        {
              // Set the time clock
              if (index == 0)
              {
                    // Midnight is time zero
                    RTC->CNT = hours * 3600 + minutes * 60;
              }
              else if (index == 1)
              {
                    // Add 1 second so that RTC interrupt source is clear,
                    // and not to confuse with the clock minute updates
                    time_keeper.timer_start_seconds = hours * 3600 + minutes * 60 + 1;
                    // Set up the RTC to trigger a start event
                    RTC_CompareSet(0, time_keeper.timer_start_seconds);
                    // Save for next program event
                    start_hours = hours;
                    start_minutes = minutes;
              }
              else if (index == 2)
              {
                    // Add 1 second so that RTC interrupt source is clear,
                    // and not to confuse with the clock minute updates
                    time_keeper.timer_stop_seconds = hours * 3600 + minutes * 60 + 1;
                    // Save for next program event
                    stop_hours = hours;
                    stop_minutes = minutes;
              }
        }
        Here I have reused the same function with different indices: for index 0 the clock is set, for index 1 the timer start time is set, and for index 2 the timer stop time is set.  This function remembers the values programmed into the timer for the next program operation, which is just for display purposes, and it also programs the RTC compare registers.  These register settings are what trigger the RTC interrupt to do its job at the right time, which is to advance the clock every minute or start/stop the sprinkler valve.
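        The hours/minutes-to-seconds arithmetic at the heart of set_clock_time can be checked on its own. This hypothetical helper mirrors the hours * 3600 + minutes * 60 expression from the listing (without the +1 offset the start/stop paths add):

```c
#include <stdint.h>

/* Seconds elapsed since midnight (time zero), mirroring the
 * conversion that set_clock_time writes into the RTC registers. */
uint32_t to_seconds_since_midnight(uint16_t hours, uint16_t minutes)
{
    return (uint32_t)hours * 3600u + (uint32_t)minutes * 60u;
}
```

        For example, 6:30 AM becomes 23400 seconds, and 23:59 becomes 86340 seconds, one minute shy of the 86400-second day.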


        Finally, I am ready to implement the programming sequence described earlier.  This function is what controls the blinking of the hour or minute, as well as displaying the right help text to the user and calling upon set_clock_time to set the compare interrupts.


        #define BUTTON_DELAY                300
        #define FASTER_BUTTON_DELAY         100
        // Use buttons to set time, start, stop
        // Stores those values to memory
        void program_timer()
        {
              static bool initial_programming = true;
              // Disable RTC interrupts while in here...
              for (int i=0; i < 3; i++)
              {
                    if (i == 0)
                    {
                          // Index 0: clock time, start from the current display
                    }
                    else if (i == 1)
                    {
                          if (initial_programming)
                          {
                                start_hours = display_hours;
                                start_minutes = display_minutes;
                          }
                          display_hours = start_hours;
                          display_minutes = start_minutes;
                    }
                    else if (i == 2)
                    {
                          if (initial_programming)
                          {
                                stop_hours = start_hours;
                                stop_minutes = start_minutes;
                          }
                          display_hours = stop_hours;
                          display_minutes = stop_minutes;
                    }
                    display_time(display_hours, display_minutes);
                    // Set the hours
                    while (!set_button.short_press)
                    {
                          if (adjust_button.short_press)
                          {
                                display_hours++;
                                if (display_hours > 23)
                                      display_hours = 1;
                                display_time(display_hours, display_minutes);
                                // Delay so that we don't get double presses
                          }
                    }
                    display_time(display_hours, display_minutes);
                    // Set the minutes
                    while (!set_button.short_press)
                    {
                          if (adjust_button.short_press)
                          {
                                display_minutes++;
                                if (display_minutes > 59)
                                      display_minutes = 0;
                                display_time(display_hours, display_minutes);
                                // Delay so that we don't get double presses
                          }
                    }
                    // Commit the clock hours and minutes to memory for the given index
                    set_clock_time(i, display_hours, display_minutes);
              }
              display_hours = get_time(HOURS);
              display_minutes = get_time(MINUTES);
              display_time(display_hours, display_minutes);
              // Delay so that we don't get double presses
              initial_programming = false;
              // Trigger interrupt every 60 seconds to update the time
              RTC_CompareSet(1, RTC->CNT + 60);
              // Now that programming is done, clear and enable interrupts
        }
        The first thing that I do when entering programming mode is disable the RTC interrupts, which would otherwise switch the display back to the current time every 60 seconds and interfere with the programming task.  I also add delays of BUTTON_DELAY between each state in this function so that the buttons don't double count when only a single press was intended, while still supporting a press-and-hold rate of counting up.  A shorter FASTER_BUTTON_DELAY is used when programming the minutes.


        This function does some basic range checking of hours to ensure that they never go over 23, and that minutes never go over 59.  If the user can’t program invalid values, then the code won’t have to check for invalid values.  Note that my timer is a 24-hour timer and therefore avoids the whole AM/PM debacle.
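        The wrap-around checks can be isolated into small helpers for clarity. This is a sketch mirroring the listing's behavior (note that the listing wraps hours back to 1, not 0, after 23), not code taken from the project:

```c
#include <stdint.h>

/* Increment-and-wrap for the hours field: past 23 wraps to 1,
 * matching the range check in program_timer(). */
uint16_t next_hour(uint16_t hours)
{
    hours++;
    if (hours > 23)
          hours = 1;
    return hours;
}

/* Increment-and-wrap for the minutes field: past 59 wraps to 0. */
uint16_t next_minute(uint16_t minutes)
{
    minutes++;
    if (minutes > 59)
          minutes = 0;
    return minutes;
}
```

        Because every button press goes through these checks, no sequence of presses can leave an out-of-range time in the timer.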


        Before I leave this function, I re-display the current time and re-enable the RTC compare interrupts.  The timer is alive now, and all that is left is a bit of code in main to kick off programming and keep the user informed of the current mode.


        The Main State Machine

        Here is the final code that runs the sprinkler timer.  I have moved much of the setup of the RTC, LCD, GPIOs and buttons into helper functions.  What remains is a state machine, which is a useful way to keep track of things in embedded devices. 


        int main(void)
        {
              // Chip errata
              // Set 1ms SysTick
              if (SysTick_Config(CMU_ClockFreqGet(cmuClock_CORE) / 1000))
                    while (1);  // Something went wrong, stay here
              typedef enum { INIT, PROGRAM, ON, OFF } program_modes;
              program_modes mode = INIT;
              program_modes last_mode = INIT;
              while (1)
              {
                    if (!set_button.short_press)
                          EMU_EnterEM2(true);  // Sleep until an interrupt occurs
                    switch (mode)
                    {
                    case INIT:
                          if (set_button.short_press || program_button.long_press)
                                mode = PROGRAM;
                          break;
                    case PROGRAM:
                          program_timer();
                          last_mode = PROGRAM;
                          mode = ON;
                          break;
                    case ON:
                          if (mode != last_mode)
                          {
                                timer_on = true;
                                last_mode = ON;
                                RTC_CompareSet(0, time_keeper.timer_start_seconds);
                                //SysTick->CTRL = 0;
                          }
                          if (program_button.long_press)
                                mode = PROGRAM;
                          else if (set_button.short_press)
                          {
                                mode = OFF;
                                // Delay so that we don't get double presses
                          }
                          break;
                    case OFF:
                          if (mode != last_mode)
                          {
                                timer_on = false;
                                last_mode = OFF;
                                //SysTick->CTRL = 0;
                          }
                          if (program_button.long_press)
                                mode = PROGRAM;
                          else if (set_button.short_press)
                          {
                                mode = ON;
                                // Delay so that we don't get double presses
                          }
                          break;
                    }
              }
        }
        Notice that I am placing the machine into EM2 energy state at the top of each pass through the state machine as long as the buttons are not currently pressed.  This will save energy while the device waits for something to happen.  If any interrupt occurs, the device will exit EM2 and run through the state machine again before going right back to sleep. 


        The state machine has four states, defined in the program_modes enum: INIT, PROGRAM, ON and OFF.  Each state takes care of what it needs to do upon entry, and the ON/OFF states do their entry work only the first time they are entered from another state.  The states should take care of themselves and not try to do work that belongs in another state.  The last thing a state does before exiting is set the next state to transition to, and that state then takes care of its own business.  Most of these states look for short or long button presses to figure out what to do next.  The PROGRAM state makes no decisions at all: it enters, does its business, and then transitions to the ON state every time.
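        The transition logic described above can also be captured as a pure function, which makes it easy to test away from the hardware. This is a hypothetical refactoring of the switch statement, not code from the project:

```c
#include <stdbool.h>

typedef enum { INIT, PROGRAM, ON, OFF } program_modes;

/* Pure transition function mirroring the decisions in main():
 * PROGRAM always falls through to ON; ON and OFF toggle on a short
 * press of the Set button; a long press of the Program button
 * re-enters PROGRAM from any running state. */
program_modes next_mode(program_modes mode, bool set_short, bool prog_long)
{
    switch (mode)
    {
    case INIT:
          return (set_short || prog_long) ? PROGRAM : INIT;
    case PROGRAM:
          return ON;             /* no decision: always proceed to ON */
    case ON:
          if (prog_long) return PROGRAM;
          return set_short ? OFF : ON;
    case OFF:
          if (prog_long) return PROGRAM;
          return set_short ? ON : OFF;
    }
    return INIT;
}
```

        Separating the transitions from the entry actions keeps each state's responsibilities small, which is the property the paragraph above argues for.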


        The EM2 sleep at the top of my state machine requires that I set up the GPIO pin for the Set button as an interrupt source, with the following code inside the setup_gpio_and_buttons function.  This will wake up the device so that button presses can be captured.


              // Enable GPIO_EVEN interrupt vector in NVIC
              NVIC_EnableIRQ(GPIO_EVEN_IRQn);
              // Configure interrupt on falling edge of SET_BUTTON_PIN
              GPIO_IntConfig(BUTTON_PORT, SET_BUTTON_PIN, false, true, true);

        Odd-numbered GPIO pins all share one interrupt source, and even-numbered GPIO pins all share another.  From there, the interrupt flags can be used to determine which GPIO pin (0 to 15) triggered the interrupt.  Therefore, I need to create an interrupt handler to catch this interrupt and clear it.  Since SET_BUTTON_PIN is on PB10, the even GPIO interrupt handler is the one that fires.



        void GPIO_EVEN_IRQHandler(void)
        {
              // Clear the interrupt flags for all pins
              GPIO_IntClear(0xffff);
        }
        In my case, I only have one interrupt that I care about, so I can just clear all channels indiscriminately.  Had I wanted to clear only the interrupt source for the SET_BUTTON_PIN, I could have passed 1 << SET_BUTTON_PIN as the mask instead of 0xffff.
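        To make the masks concrete, here is a small sketch (illustrative, not project code) comparing the clear-everything mask with the single-pin mask for SET_BUTTON_PIN on pin 10:

```c
#include <stdint.h>

#define SET_BUTTON_PIN 10   /* PB10 in this design */

/* Mask that clears the interrupt flags for all 16 pin channels. */
uint32_t all_flags_mask(void)
{
    return 0xffff;
}

/* Mask that clears only the flag for one pin: a single set bit. */
uint32_t one_flag_mask(uint8_t pin)
{
    return 1UL << pin;
}
```

        For pin 10, the single-pin mask is 0x0400, which is of course contained in the 0xffff blast-everything mask.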


        This is our timer, in a nutshell.  We can test it now and see an LED on the Starter Kit turn on and off at the right times in our programmed sequence.  If you want to test things without waiting hours for that to happen, just change the clock divider for the RTC to something smaller and watch the time fly!  Since the RTC is not connected to the SysTick timer at all, the blink rate and the button interface will work as usual while the time scrolls by at light speed.


        //CMU_ClockDivSet(cmuClock_RTC, cmuClkDiv_32768);
        CMU_ClockDivSet(cmuClock_RTC, cmuClkDiv_512);   // DEBUG: TODO: REMOVE

        You should get in the habit of commenting out the line you want to restore later and adding a TODO or other searchable string to the replacement line.  In fact, Simplicity Studio automatically indexes comments that contain the TODO token and places them in the Tasks view.  You can work through the Tasks view until all of the TODOs are fixed. 


        This divider of 512 makes the minutes fly by like seconds.  Just make sure to set the start and stop times at least an hour away from the current time, or the clock will pass them while you are still programming the timer at this accelerated rate.
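        The arithmetic behind the speedup: the normal divider of 32768 gives one RTC tick per second, so dropping the divider to 512 runs the RTC 64 times faster, and a displayed "minute" passes in under a second of real time. A quick sketch:

```c
#include <stdint.h>

/* Speedup factor gained by shrinking the RTC clock divider from its
 * normal value (32768 for 1 Hz ticks) to a smaller debug value. */
uint32_t rtc_speedup(uint32_t normal_div, uint32_t debug_div)
{
    return normal_div / debug_div;
}
```

        With rtc_speedup(32768, 512) == 64, each displayed minute takes 60/64, or roughly 0.94 seconds, of real time.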


        We have a few steps left to finish our sprinkler timer: storing the start/stop times in non-volatile memory and connecting the Starter Kit to the solenoid.  We will wrap all of that up in the next lesson.



      • Webinar: Bluetooth® Smart Development

        Anonymous | 10/286/2015 | 12:59 AM

        BGM111 webinar image - Oct 2015.png


        This webinar from our Bluetooth product experts provides details on the technical resources needed to develop with the BGM111 Bluetooth Smart module. We also discuss why a module is a financially sound decision versus an SoC, saving thousands of dollars in development and certification costs.


        When: October 28, 2015

        Time: 10:00 AM Greenwich Mean Time

        Duration: 1 hour


        Click Register to Attend Bluetooth Smart Webinar Now (Oct 28).png

      • Fully Qualified ZigBee Remote Control Adds Voice and Saves Money with Soft Codec

        Anonymous | 10/280/2015 | 12:06 PM

        Adding voice command capability to a remote control makes a lot of sense. With well-thought-out voice control, hunting for the buttons to play a favorite movie, fast forward, pause or stop is no longer challenging and frustrating.


        After all, if our TVs have gotten so smart, why are the remotes so dumb?!


        Our ZigBee Remote Control Reference Design (part number: EM34X-VREVK) supports voice control, infrared (IR) control with an IR database, a backlit keyboard, and an acceleration sensor for activating the backlight. It's a slick remote control and is in mass production with one of our leading customers, a large remote control provider.


        ZigBee Remote Control ZRC image.jpg


        The remote is designed to be cost efficient while supporting the requirements of various cloud-based "voice-to-text" software providers. Voice control typically requires an internet connection to transmit the voice command into the cloud, where it is converted to text. This is the model many service providers have adopted, and the remote supports these specifications as shown in the table below.


        One way we saved money for our customer and others who adopt this remote control is by integrating the standalone codec functionality into our ZigBee SoC, the EM341, removing the standalone codec and its bill of materials (BOM). According to Digi-Key pricing, this can save between $0.50 and $1.50 per device, though the exact savings depend on the volumes for the remote control.


        The reference design is orderable and configurable for both hardware codec and software codec. Find more information on Silicon Labs ZigBee Remote Control solutions at


        Read more about adding voice control in our whitepaper here:


        ZigBee Remote Control ZRC Table.jpg


      • Chapter 7: Sprinkler Timer Part 3: Real Time Clock (RTC)

        lynchtron | 10/274/2015 | 01:16 AM


        This is part three of a five-part series on how to build a sprinkler timer using the EFM32 series of MCUs.  In the first two parts, we learned how to use the onboard LCD screen and onboard pushbuttons to create an on-screen display, similar to what you would see on, well, a sprinkler timer.  The user interface (UI), however crude, is complete.  Now, we will begin to put some substance behind the empty interface.  We looked at fast-running timers in chapter five, but those aren't necessarily the best choice for long-running timekeeping purposes such as tracking the time of day and the day of the year.  The best timer for that purpose is the Real Time Clock (RTC), which is the topic of this section.


        Real Time Clock (RTC)

        Since I am building a sprinkler timer, I need a suitable timer to keep track of time.  None of the clocks that I covered in the last lesson were ideal for long-term timekeeping.  They could certainly work for the purpose, but they are generally intended to operate hardware peripherals, generate PWM waveforms, or keep track of tiny amounts of time in the millisecond-to-second range.  Beyond that range, we turn to the Real Time Clock (RTC), which counts a much larger number of ticks, making it suitable for tracking seconds, hours, days, weeks, months and years, and it does so in a very low power state.  This is the right clock for the sprinkler timer, which needs to keep track of time over days and weeks.


        There are two separate RTCs in the Wonder Gecko: the normal RTC, which is included in all EFM32 models, and the Backup Real Time Clock (BURTC), which can continue operating all the way down to the lowest EM4 energy state and consumes even less energy than the RTC.  The BURTC also has its own backup power supply, called BU_VIN, that is separate from the VIN to the MCU and is routed to PD8 on the Wonder Gecko pinout.  The BURTC is only reset by a loss of power; normal system resets do not reset it.  The BURTC is only available in some models of the EFM32 lineup, so keep that in mind when choosing your MCU.


        The RTC is a 24-bit timer whose tick period can be configured in the microseconds-to-seconds range.  This allows for a counting range of 512 seconds to 194 days before timer rollover, depending on the length of each tick.  The BURTC is a 32-bit timer.
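        Those two figures follow directly from the 24-bit counter width: 2^24 ticks at the fastest common rate of 32768 ticks per second is 512 seconds, and at one tick per second it is about 194 days. A small sketch of the arithmetic:

```c
#include <stdint.h>

/* Seconds until a 24-bit counter rolls over, for a given tick rate:
 * 2^24 ticks divided by ticks per second. */
uint32_t rtc_rollover_seconds(uint32_t ticks_per_second)
{
    return (1UL << 24) / ticks_per_second;
}
```

        At 32768 Hz the counter rolls over after 512 seconds; at 1 Hz, 16777216 seconds is 194 full days.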


        I chose the RTC instead of the BURTC for this lesson because I will be running the LCD all of the time, which requires EM2, so there is no need to drop all the way down to EM4.  You can see which peripherals run in each energy state in the EMU section of the Reference Manual.  Even if I later decided to turn off the LCD and drop to EM3, the standby currents in EM2 and EM3 are similar according to the Wonder Gecko Data Sheet, at around 4 µA.


        The RTC clock has 24 bits of resolution, which would let me count for 194 days at one-second ticks.  But since my sprinkler timer has no concept of days, I will let it count for 24 hours and then reset it to zero.  I will consider zero seconds to be midnight, and keep track of all time from zero.  The RTC has two compare registers that can trigger interrupts when they are reached.  I will use one interrupt to trigger every 60 seconds so that I can update the time.  I will use the other compare register to store the start and/or stop times of the sprinkler valve.  When either of these compare registers is reached, the RTC interrupt handler will run and take care of advancing the clock or starting/stopping the sprinkler.


        I first configure the required clock sources for RTC, set the timer for 1 second ticks, and enable the interrupts in the NVIC controller.  This is the setup function:

        // This is the timekeeping clock
        void setup_rtc()
        {
              // Ensure LE modules are accessible
              CMU_ClockEnable(cmuClock_CORELE, true);
              // Enable LFACLK in CMU (will also enable oscillator if not enabled)
              CMU_ClockSelectSet(cmuClock_LFA, cmuSelect_LFRCO);
              // Use the prescaler to reduce power consumption. 2^15
              CMU_ClockDivSet(cmuClock_RTC, cmuClkDiv_32768);
              // Enable clock to RTC module
              CMU_ClockEnable(cmuClock_RTC, true);
              // Set up the Real Time Clock as long-time timekeeping
              RTC_Init_TypeDef init = RTC_INIT_DEFAULT;
              init.comp0Top = false;
              RTC_Init(&init);
              // Enabling Interrupt from RTC
              RTC_IntEnable(RTC_IEN_COMP0 | RTC_IEN_COMP1);
              NVIC_EnableIRQ(RTC_IRQn);
        }

        You can see that the clock divisor is huge at 32768.  The Reference Manual expresses the RTC divisor as an integer exponent from 1 to 15, while the CMU library function requires the actual division factor, 2^exponent; note the difference between the two.  That is why I had to pass in such a large divider.  I also disabled the comp0Top parameter, which defaults to true and would cause the RTC to roll over when compare channel 0 is reached.  I don't want that because I will be using both compare channels for the start and stop of the timer, and I will manually reset the counter at midnight.
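        The exponent-versus-divider relationship can be made concrete with a little arithmetic. This sketch (illustrative, not project code) computes the resulting tick rate from the 32768 Hz LFA clock:

```c
#include <stdint.h>

/* RTC tick rate for a given low-frequency clock and prescaler
 * exponent: the hardware divides the clock by 2^exponent, while the
 * library call takes the full division factor (1 << exponent). */
uint32_t rtc_tick_hz(uint32_t lfa_hz, uint32_t prescale_exponent)
{
    return lfa_hz / (1UL << prescale_exponent);
}
```

        An exponent of 15 (divider 32768) turns the 32768 Hz LFRCO into exactly one tick per second; the debug divider of 512 used later corresponds to an exponent of 9 and yields 64 ticks per second.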


        Next, I set up a struct to hold the RTC timer data and create a get_time function to extract the time from the RTC clock.  I then set up the interrupt handler for the RTC to do something when either of the compare registers trigger an RTC interrupt.  I also define the all-important run_sprinkler and stop_sprinkler functions that will be connected to the 12V solenoid.


        #define SOLENOID_PORT                     gpioPortD
        #define SOLENOID_PIN                      1
        typedef struct rtc_struct_type
        {
              uint32_t timer_start_seconds;
              uint32_t timer_stop_seconds;
        } rtc_struct;
        rtc_struct time_keeper;
        #define END_OF_DAY                        60*60*24  // secs * mins * hrs
        uint16_t get_time(uint16_t segment)
        {
              if (segment == HOURS)
              {
                    return RTC->CNT / 3600;  // Translate seconds in RTC->CNT to hours
              }
              else if (segment == MINUTES)
              {
                    uint16_t leftover_seconds = RTC->CNT % 3600;
                    return leftover_seconds / 60;
              }
              return 0;
        }
        void run_sprinkler()
        {
              // Set test LED to indicate solenoid is ON
              GPIO_PinModeSet(LED_PORT, LED0_PIN, gpioModePushPull, 1);
              GPIO_PinModeSet(SOLENOID_PORT, SOLENOID_PIN, gpioModePushPull, 1);
        }
        void stop_sprinkler()
        {
              // Clear test LED to indicate solenoid is OFF
              GPIO_PinModeSet(LED_PORT, LED0_PIN, gpioModePushPull, 0);
              GPIO_PinModeSet(SOLENOID_PORT, SOLENOID_PIN, gpioModePushPull, 0);
        }
        void RTC_IRQHandler(void)
        {
              // Check to see which counter has triggered
              if (RTC->IF & RTC_IF_COMP0)
              {
                    RTC_IntClear(RTC_IF_COMP0);
                    // Within 10 seconds allows us to speed up the clock for testing
                    if ((RTC->CNT - time_keeper.timer_start_seconds) < 10)
                    {
                          run_sprinkler();
                          RTC_CompareSet(0, time_keeper.timer_stop_seconds);
                    }
                    else
                    {
                          stop_sprinkler();
                    }
              }
              else  // A minute update has occurred
              {
                    RTC_IntClear(RTC_IF_COMP1);
                    if (RTC->CNT >= END_OF_DAY)
                    {
                          RTC->CNT = 0;
                    }
                    RTC_CompareSet(1, RTC->CNT + 60);
                    display_hours = get_time(HOURS);
                    display_minutes = get_time(MINUTES);
                    display_time(display_hours, display_minutes);
              }
        }
        You should notice that I am directly accessing and manipulating the RTC counter value in RTC->CNT.  Peripherals in the EFM32 libraries are generally accessed through a pointer to a fixed memory address.  The libraries set up a #define for the identifier RTC that holds the base address of the RTC register space.  From that address, individual registers are accessed with the -> operator, so the count maintained by the RTC resides at RTC->CNT.  The programmer has full access to read or write this value, and the hardware updates it as well.
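        This pattern is easy to model on a host machine. The sketch below uses an ordinary variable in place of the memory-mapped block; the address mentioned in the comment is illustrative, not taken from the EFM32 headers:

```c
#include <stdint.h>

/* A struct laid over a peripheral's register space: each field lines
 * up with one hardware register, so RTC->CNT is just a field access
 * through a pointer. Here a plain variable stands in for hardware. */
typedef struct
{
    volatile uint32_t CTRL;
    volatile uint32_t CNT;   /* the running counter */
} MOCK_RTC_TypeDef;

MOCK_RTC_TypeDef mock_rtc_block;

/* In the real headers this would be a fixed base address, e.g.
 * #define RTC ((RTC_TypeDef *)0x4...UL) -- address illustrative. */
#define MOCK_RTC (&mock_rtc_block)

void set_count(uint32_t value)
{
    MOCK_RTC->CNT = value;   /* write through the pointer */
}

uint32_t read_count(void)
{
    return MOCK_RTC->CNT;    /* read through the pointer */
}
```

        On real hardware the only difference is that the pointer targets a fixed bus address and the hardware itself also updates CNT.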


        With this code, the clock can now keep track of time and update every 60 seconds, as well as start and stop the sprinkler valve.  You can see that the run_sprinkler and stop_sprinkler functions above simply set or clear an LED, which is all we need for testing purposes.  The physical connection to the live 12V solenoid can come later.


        So now we have a clock to base our sprinkler timer upon.  In the next section, we will add the logic that ties the user interface to the RTC and enables you to do the job of programming it.