Official Blog of Silicon Labs

      • RTOS Development Made Easy

        Alf | 10/12/2016 | 09:26 AM

        Update: This article has been edited to better reflect the needed toolchains.

         

        Ever wondered how to get started on your first RTOS project? Want to get a visual feel for how an RTOS operates and schedules events? This blog is a super-quick walkthrough on how to get started with μC/OS-III from Micrium on the Silicon Labs EFM32 Giant Gecko, using SystemView from SEGGER.

         

         

        Prerequisites:

        1. You need the latest version of Simplicity Studio to run this demo. You can download that here: www.silabs.com/simplicity
        2. You need to update the kit firmware to the latest version to support SEGGER Real-Time Transfer (RTT). The "Update firmware" button should be easily recognizable from the "Launcher" in Simplicity Studio.
        3. You need the latest version of the J-Link software to support RTT. You can download that here: https://www.segger.com/downloads/jlink
        4. You need IAR Embedded Workbench for compiling; this demo has only been ported to the IAR compiler. You can download a free trial here: https://www.iar.com/iar-embedded-workbench/#!?device=EFM32GG990F1024&architecture=ARM

         

        Steps:

        1. The SystemView application that runs on the host computer is available for Windows, Mac and Linux. Download and install the SystemView installer for the host PC from the SEGGER website.
        2. The zip file from this link https://www.dropbox.com/s/xvd9if3dqrjd64z/SystemView-uCOS-EFM32-GiantGecko.zip?dl=0 contains a Simplicity Studio project for the EFM32 Giant Gecko Starter Kit. Extract the zip file to a folder on your computer.
        3. Open Simplicity Studio, click the Simplicity IDE button to switch to the Development perspective, click File -> Import…, select the option Existing Projects into Workspace under the General category and click Next to import the project.
        4. Configure the root directory by browsing to the folder where you extracted the zip file in Step 2, select Copy projects into workspace and click Finish.
        5. Click Project and then Clean… to rebuild the project.
        6. Connect the Giant Gecko Starter Kit to your computer via the J-Link USB port.
        7. Expand the Binaries node in the Project Explorer and select the ELF file (extension .out).
        8. Create a new Debug Configuration by dropping down the Debug button on the top toolbar and selecting Debug Configurations…
        9. Select Silicon Labs ARM Program and click the New Launch Configuration button on the top toolbar of the dialog window.
        10. Click the Debug button.
        11. Press F8 to resume execution.
        12. Halt the CPU by clicking the Pause button on the top toolbar of the IDE and then click the button right next to it to disconnect the debugger from the target.
        13. Open SystemView and click Target in the menu at the top.
        14. Click Start Recording.
        15. Enter the device name EFM32GG990F1024 as shown below and click OK to start streaming events.

        segger_systemview.png

         

         

        And voilà: SystemView is streaming the RTOS events directly to your computer:

         

        segger_systemview2.png
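
        If you want to push your own application messages into the same trace, SystemView's target API can be called directly from μC/OS-III task code. Below is a minimal sketch, not part of the downloaded project; the header names follow the standard SystemView and μC/OS-III sources, while the task name, delay and surrounding task setup are illustrative and omitted where not needed.

        /* Minimal sketch: a μC/OS-III task that emits a marker into the
         * SystemView event stream every 500 ms. Illustrative only. */
        #include <os.h>                 /* μC/OS-III kernel API          */
        #include "SEGGER_SYSVIEW.h"     /* SEGGER SystemView target API  */

        static void BlinkTask(void *p_arg)
        {
            OS_ERR err;
            (void)p_arg;

            SEGGER_SYSVIEW_Conf();      /* configure SystemView, if the project
                                           has not already done so */

            for (;;) {
                SEGGER_SYSVIEW_PrintfHost("Blink!");            /* shows up in the event log */
                OSTimeDlyHMSM(0u, 0u, 0u, 500u,
                              OS_OPT_TIME_HMSM_STRICT, &err);   /* sleep 500 ms */
            }
        }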

         

        To learn more about SystemView, click here. 

      • Choose Wisely: An MCU is Only As Good as Its Development Tools

        Alf | 07/05/2016 | 09:32 PM

        Selecting an MCU for your next project? Looking only at datasheets is no longer enough. Other important factors are well-written documentation, nicely organized software, and development tools. And when you start to look at development tools? Big, small, square, round, black, green, red, pin-out, features? There are a lot of factors to take into account when deciding which vendor to go with.

         

        Our starter kits (STKs) are so easy that you can pick them up and start coding within five minutes of opening the box. But they do not stop there. They continue to grow with you as you take all the steps of development. Want a more professional IDE? No problem, all major IDEs are supported. Want to optimize the battery life of your product? No problem, all our STKs support Advanced Energy Monitoring (AEM) to accurately measure the energy consumption of your code. Want to create your own PCB? No problem, the STK doubles as an external debugger, allowing you to use the same 30-dollar kit even for production programming, as we do on our own production line.

         

        EFR32 Starter Kits.png

         

        So, if you want to try out our MCUs and development kits, get a kit from here and head over to mbed.com to check out the online IDE. There you can get your first programs up and running in a matter of minutes. Make sure to check out the example mbed_blinky_low_power to see why the EFM32 will give you 1000x the battery life of a competing MCU when using mbed. Then, when you've gotten a feel for the platform and are starting your proper design, download our Simplicity Studio and get all the software and supporting tools in one easy package.

         

        Not sure which kit to choose? Check out these videos for a run-down of the different chips and kits:

         

      • The (G) Force is Strong with these RGB LEDs

        Lance Looper | 01/25/2016 | 09:35 AM


        Customizable RGB LEDs are finding their way into all kinds of devices, particularly gaming applications where the trend has gone from enabling customizable light patterns to actually being incorporated into game-play. That’s all fun and games, but check out what happened when we paired our Blue Gecko BGM111 Bluetooth Smart Module with a 3-axis accelerometer on a custom board with the BLED112 Bluetooth Smart Dongle.

         

        To show multiple Bluetooth devices in one setup, we've configured these mini kits to use g-force to control Bluetooth-enabled lighting boards. Both lighting reference boards act as Bluetooth peripherals, while the PC with the BLED112 dongle is configured as the Bluetooth central device. 

         

        The firmware monitors the X, Y and Z axes, and the combined g-force determines the color of the LEDs. The lighting reference boards run firmware capable of receiving the LED colors, and the information is sent to the PC either from the keys or from the accelerometer, which emulates the keys in this application.

         

        When the PC receives this information, it turns the corresponding LEDs on or off (see the sketch after this list):

        1 G (the board is lying still) is translated to no key pressed.

        >1 G (added acceleration in any direction) is translated to the blue key being pressed.

        <1 G (reduced acceleration, as in free fall) is translated to the red key being pressed.
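
        A minimal sketch of that mapping, assuming raw accelerometer readings already scaled to g; the thresholds and the function name are illustrative, not the reference design's firmware:

        #include <math.h>

        typedef enum { KEY_NONE, KEY_BLUE, KEY_RED } emulated_key_t;

        /* Map the total g-force measured on the three axes to an emulated key. */
        emulated_key_t map_gforce_to_key(float gx, float gy, float gz)
        {
            float g = sqrtf(gx * gx + gy * gy + gz * gz);   /* magnitude in g */

            if (g > 1.1f)        /* added acceleration in any direction */
                return KEY_BLUE;
            if (g < 0.9f)        /* reduced acceleration, e.g. free fall */
                return KEY_RED;
            return KEY_NONE;     /* board lying still at ~1 g */
        }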

         

      • 2015: A Look Back at the Content You Found Most Useful

        Lance Looper | 01/18/2016 | 09:57 AM

        We’re going to start with the top five mbed-enabled EFM32 kits of last year. 2015 was the year when mbed turned energy friendly as ARM and Silicon Labs provided the community with ARM mbed power management APIs for embedded developers creating battery-operated, low-energy products for the IoT.

         

        mbedboy-2015.jpeg


        We picked the top 5 most popular EFM32 platforms based on how many times they were imported into the mbed online compiler last year:


        EFM32 USB-enabled Happy Gecko
        EFM32 Giant Gecko
        EFM32 Zero Gecko
        EFM32 Wonder Gecko
        EFM32 Leopard Gecko

        Click here to learn more about mbed, mbed OS and EFM32.

      • Build a Super Mario-Inspired Coin Cube!

        Lance Looper | 01/05/2016 | 05:18 PM

        Level 1 – Hit Reset and go Back to 1983
        This nifty little project might not have happened if not for a few important events. One happened back in 1983, when Nintendo launched the arcade game Mario Bros., which, together with the pseudo-sequel Super Mario Bros. two years later, introduced the world to the iconic sound we hear in this hack. Another is Silicon Labs' annual holiday ornament, which Jordan Wills, an employee, experienced hacker and maker, was tasked with creating.

        These two events provided Jordan the inspiration and motivation for a pretty cool project:



        Level 2 – Get Creative with PCB

        The Mario Coin Ornament is a cube whose PCB features finger joints and see-through sections shaped like question marks. With some creative soldering, it combines five LEDs (one of which lights the familiar coin itself, fitted inside a laser-etched acrylic disc), a large capacitive touch pad, an 8-bit MCU and an external EEPROM, and the whole thing can be used as a Christmas ornament as well.

         

        Level 3 – Build your own
        To learn how to make your own, head over to Jordan’s step-by-step guide.

      • Choosing Between an 8-bit or 32-bit MCU - Part 2

        Anonymous | 10/16/2015 | 03:13 PM

        8-bit MCU v 32-bit MCU - Which One to Use - cover.png

         

        Introduction – Part 2

         

        This blog series compares use cases for 8-bit and 32-bit MCUs and serves as a guide on how to choose between the two MCU architectures. Most 32-bit examples focus on ARM Cortex-M devices, which behave very similarly across MCU vendor portfolios.

         

        There is a lot more architectural variation on the 8-bit MCU side, so it’s harder to apply apples-to-apples comparisons among 8-bit vendors. For the sake of comparison, we use the widely used, well-understood 8051 8-bit architecture, which remains popular among embedded developers.

         

        Part 2 – Architecture Specifics and Conclusion: A More Nuanced View of Applications

         

        Part 1 of this blog series painted the basic picture for the 8-bit and 32-bit trade-offs.

         

        Now it's time to look at a more detailed analysis of applications where each architecture excels and where our general guidelines in Part 1 break down.

         

        To compare these MCUs, you need to measure them, and there are a lot of tools to choose from. I’ve selected scenarios I believe provide the fairest comparison and are most representative of real-world developer experiences. The ARM numbers below were generated with GCC plus the nano C library and -O3 optimization.

         

        I made no attempt to optimize the code for either device. I simply implemented the most obvious “normal” code that 90 percent of developers would come up with.

         

        It is much more interesting to see what the average developer will see than what can be achieved under ideal circumstances.

         

        Latency

         

        There is a noticeable difference in interrupt and function-call latency between the two architectures, with 8051 being faster than an ARM Cortex-M core. In addition, having peripherals on the Advanced Peripheral Bus (APB) can also impact latency since data must flow across the bridge between the APB and the AMBA High-Performance Bus (AHB). Finally, many Cortex-M-based MCUs require the APB clock to be divided when high-frequency core clocks are used, which increases peripheral latency.

         

        I created a simple experiment where an interrupt was triggered by an I/O pin. The interrupt does some signaling on pins and updates a flag based on which pin triggered the interrupt. I then measured several parameters, shown in the table that follows; a representative sketch of the 32-bit implementation is included below.
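
        A minimal, illustrative sketch of that kind of ISR; the handler name, pin masks and register stand-ins are assumptions, not the original 32-bit listing:

        #include <stdint.h>

        /* Hypothetical pin masks and register stand-ins, for illustration only */
        #define PIN0_MASK   (1u << 0)
        #define PIN1_MASK   (1u << 1)

        volatile uint8_t  wake_source;
        volatile uint32_t debug_port;        /* stand-in for a GPIO data register        */
        volatile uint32_t gpio_int_flags;    /* stand-in for the interrupt flag register */

        void GPIO_IRQHandler(void)
        {
            uint32_t flags = gpio_int_flags; /* which pin triggered the interrupt? */
            gpio_int_flags = 0;              /* clear the pending flags            */

            debug_port ^= PIN0_MASK;         /* signal ISR entry on a spare pin    */

            if (flags & PIN0_MASK)
                wake_source = 0;
            else if (flags & PIN1_MASK)
                wake_source = 1;

            debug_port ^= PIN0_MASK;         /* signal ISR exit                    */
        }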

         

        Figure 2 - IO Interrupt Experiment.png

         

        The 8051 core shows an advantage in Interrupt Service Routine (ISR) entry and exit times. However, as the ISR gets bigger and its execution time increases, those delays will become insignificant.

         

        In keeping with the established theme, the larger the system gets, the less the 8051 advantage matters. In addition, the advantage in ISR execution time will swing to the ARM core if the ISR involves a significant amount of data movement or math on integers wider than 8 bits. For example, an ADC ISR that updates a 16- or 32-bit rolling average with a new sample would probably execute faster on the ARM device.

         

        Control vs. Processing

         

        The fundamental competency of an 8051 core is control code, where the accesses to variables are spread around and a lot of control logic is used (if, case, etc.). The 8051 core is also very efficient at processing 8-bit data, while an ARM Cortex-M core excels at data processing and 32-bit math. In addition, the 32-bit data path enables efficient copying of large chunks of data, since an ARM MCU can move 4 bytes at a time while the 8051 has to move them 1 byte at a time.

         

        As a result, applications that primarily stream data from one place to another (UART to CRC or to USB) are better-suited to ARM processor-based systems.

         

        Consider this simple experiment. I compiled the function below on both architectures for variable sizes of uint8_t, uint16_t and uint32_t.

         

        Figure 3 - Data Size Experiment.png
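
        The exact function is only available as the image above, so here is a representative stand-in illustrating the same idea: the same C source is compiled for each data width on each architecture (the function name and body are assumptions):

        #include <stdint.h>

        typedef uint16_t T;   /* change to uint8_t or uint32_t for the other cases */

        T accumulate(const T *data, T len)
        {
            T sum = 0;
            for (T i = 0; i < len; i++)
                sum += data[i];
            return sum;
        }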

         

        As the data size increases, the 8051 core requires more and more code to do the job, eventually surpassing the size of the ARM function. The 16-bit case is pretty much a wash in terms of code size, and it slightly favors the 32-bit core in execution speed, since equal code size on the ARM core generally represents fewer cycles. It’s also important to note that this comparison is only valid when compiling the ARM code with optimization; un-optimized code is several times larger.

         

        This doesn't mean applications with a lot of data movement or 32-bit math shouldn't be done on an 8051 core.

         

        In many cases, other considerations will outweigh the efficiency advantage of the ARM core, or that advantage will be irrelevant. Consider the implementation of a UART-to-SPI bridge. This application spends most of its time copying data between the peripherals, a task the ARM core will do much more efficiently. However, it's also a very small application, probably small enough to fit into a 2 KB part. Even though an 8051 core is less efficient, it still has plenty of processing power to handle high data rates in that application. The extra cycles available to the ARM device are probably going to be spent sitting in an idle loop or a “WFI” (wait for interrupt), waiting for the next piece of data to come in.
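
        For illustration, a minimal sketch of such a bridge's main loop, assuming simple polled drivers; the uart_*/spi_* helpers are hypothetical:

        #include <stdint.h>

        /* Hypothetical polled driver helpers */
        extern int     uart_rx_ready(void);
        extern uint8_t uart_read(void);
        extern void    uart_write(uint8_t b);
        extern int     spi_rx_ready(void);
        extern uint8_t spi_read(void);
        extern void    spi_write(uint8_t b);

        void bridge_loop(void)
        {
            for (;;) {
                if (uart_rx_ready())
                    spi_write(uart_read());      /* forward UART bytes to SPI */
                if (spi_rx_ready())
                    uart_write(spi_read());      /* forward SPI bytes to UART */
                /* otherwise: idle, or a WFI on the ARM part */
            }
        }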

         

        In this case, the 8051 core still makes the most sense, since the extra CPU cycles are worthless while the smaller flash footprint yields cost savings.

         

        If we had something useful to do with the extra cycles, then the extra efficiency would be important, and the scales may tip in favor of the ARM core.

         

        Pointers

         

        8051 devices do not have a unified memory map like ARM devices, and instead have different instructions for accessing code (flash), IDATA (internal RAM) and XDATA (external RAM).

         

        To enable efficient code generation, a pointer in 8051 code will declare what space it's pointing to. However, in some cases, we use a generic pointer that can point to any space, and this style of pointer is inefficient to access.

         

        For example, consider a function that takes a pointer to a buffer and sends that buffer out the UART. If the pointer is an XDATA pointer, then an XDATA array can be sent out the UART, but an array in code space would first need to be copied into XDATA. A generic pointer would be able to point to both code and XDATA space, but is slower and requires more code to access.
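
        A short sketch of the two pointer styles in Keil C51-style syntax (the memory-space keyword is a compiler extension, and the UART send loop here is illustrative):

        #include <reg51.h>   /* standard 8051 SFR declarations (SBUF, TI) in Keil C51 */

        /* Fast access, but can only send buffers located in XDATA: */
        void uart_send_xdata(unsigned char xdata *buf, unsigned char len)
        {
            while (len--) {
                SBUF = *buf++;       /* write the next byte to the UART     */
                while (!TI) ;        /* wait for the transmit-complete flag */
                TI = 0;
            }
        }

        /* Works for buffers in CODE or XDATA, but every *buf access goes through
         * a slower, larger generic-pointer library routine: */
        void uart_send_generic(unsigned char *buf, unsigned char len)
        {
            while (len--) {
                SBUF = *buf++;
                while (!TI) ;
                TI = 0;
            }
        }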

         

        Segment-specific pointers work in most cases, but generic pointers can come in handy when writing reusable code where the use case isn't well known. If this happens often in the application, then the 8051 starts to lose its efficiency advantage.

         

        Identifying the “Core” Strengths

         

        I've noted several times that math leans toward ARM and control leans toward 8051, but no application focuses solely on math or control. How can we characterize an application in broad terms and figure out where it lies on the spectrum?

         

        Let’s consider a hypothetical application composed of 10% 32-bit math, 25% control code and 65% general code that doesn’t clearly fall into an 8 or 32-bit category. The application also values code space over execution speed, since it does not need all the available MIPS and must be optimized for cost.

         

        The fact that cost is more important than application speed will give the 8051 core a slight advantage in the general code. In addition, the 8051 core has moderate advantages in the control code. The ARM core has the upper hand in 32-bit math, but that’s only 10% in the example. Taking all these variables into consideration, this particular application is a better fit for an 8051 core.

         

        Figure 4 - Application Code Breakout Percentages.png

         

        If we make a change to our example and say that the 32-bit math is 30% and general code only 45%, then the ARM core becomes a much more competitive player.

         

        Obviously, there is a lot of estimation in this process, but the technique of deconstructing the application and then evaluating each component will help identify cases where there is a significant advantage to be had for one architecture over the other.

         

        Power Consumption

         

        When looking at data sheets, it's easy to come to the conclusion that one MCU edges out the other for power consumption. While it's true that the sleep mode and active mode currents will favor certain types of MCUs, that assessment can be extremely misleading.

         

        Duty cycle (how much time is spent in each power mode) will always dominate energy consumption.

         

        Consider a system where the device wakes up, adds a 16-bit ADC sample to a rolling average and goes back to sleep until the next sample. That task involves a significant amount of 16-bit and 32-bit math. The ARM device is going to be able to make the calculations and go back to sleep faster than an 8051 device. In this case, illustrated below, the ARM core may have higher sleep currents, but results in a lower power system.
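
        A minimal sketch of that duty cycle, with hypothetical read_adc_sample() and enter_sleep() helpers standing in for the real drivers:

        #include <stdint.h>

        #define AVG_WEIGHT  16                    /* 16-sample exponential rolling average  */

        extern uint16_t read_adc_sample(void);    /* hypothetical: one 16-bit ADC reading   */
        extern void     enter_sleep(void);        /* hypothetical: drop into low-power mode */

        static int32_t rolling_avg;

        void sample_and_sleep(void)
        {
            uint16_t sample = read_adc_sample();

            /* 16/32-bit math: avg += (sample - avg) / 16 */
            rolling_avg += ((int32_t)sample - rolling_avg) / AVG_WEIGHT;

            enter_sleep();                        /* back to sleep until the next sample */
        }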

         

        Figure 5 - MCU Duty Cycle Impacts Power.png

         

        Peripheral features can also skew power consumption one way or the other. For example, most of Silicon Labs’ EFM32 32-bit MCUs have a low-energy UART (LEUART) that can receive data while in low power mode, while only two of the EFM8 MCUs offer this feature. This peripheral affects the power duty cycle and heavily favors the EFM32 MCUs over EFM8 devices without an LEUART.

         

        8-bit or 32-bit? I still can't decide!

         

        What happens if, after considering all of these variables, it's still not clear which MCU architecture is the best choice? Congratulations! That means they are both good options, and it doesn't really matter which architecture you use.

         

        Rely on your past experience and personal preferences if there is no clear technical advantage.

         

        This is also a great time to look at future projects. If most future projects are going to be well-suited to ARM devices, then go with ARM, and if future projects are more focused on driving down cost and size, then go with 8051.      

         

        What does it all mean?

         

        8-bit MCUs still have a lot to offer embedded developers and their ever-growing focus on the Internet of Things. Whenever a developer begins a design, it's important to make sure that the right tool is coming out of the toolbox.

         

        The difficult truth is that choosing an MCU architecture can't be distilled into one or two bullet points on a Marketing PowerPoint presentation.

         

        However, making the best decision isn't hard once you have the right information and are willing to spend a little time applying it.

         

        <-- PART 1

      • Choosing Between an 8-bit or 32-bit MCU - Part 1

        Anonymous | 10/16/2015 | 02:14 PM

        8-bit MCU v 32-bit MCU - Which One to Use - cover.png

         

        Introduction

         

        This blog series compares use cases for 8-bit and 32-bit MCUs and serves as a guide on how to choose between the two MCU architectures. Most 32-bit examples focus on ARM Cortex-M devices, which behave very similarly across MCU vendor portfolios.

         

        There is a lot more architectural variation on the 8-bit MCU side, so it’s harder to apply apples-to-apples comparisons among 8-bit vendors. For the sake of comparison, we use the widely used, well-understood 8051 8-bit architecture, which remains popular among embedded developers.

         

        Part 1 – The Basics, and Obvious Applications for 8 v 32-bit Architectures

         

        I was in the middle of the show floor talking to an excitable man with a glorious accent. When I told him about our 8-bit MCU offerings, he stopped me and asked, “But why would I want to use an 8-bit MCU?"

         

        This wasn't the first time I had heard the question, and it certainly won’t be the last.

         

        It's a natural assumption that just as the horse-drawn buggy gave way to the automobile and snail mail gave way to email, 8-bit MCUs have been eclipsed by 32-bit devices. While that transition may indeed happen in some distant future, the current situation isn't quite that simple. It turns out that 8-bit and 32-bit MCUs are still complementary technologies, each excelling at certain tasks relative to the other while performing at parity in others.

         

        The trick is figuring out when a particular application lends itself to a particular MCU architecture.

         

        Star Trek is Better Than Star Wars

         

        Is "Star Trek better than Star Wars?" is similar to asking, “Is ARM Cortex better than 8051?”.

         

        The truth is that while both questions are interesting, neither one is logical. Each fits different applications very well. (And Star Wars is clearly superior. Just kidding. Please don’t comment-bomb me.)

         

        For MCUs, the much better question to ask is "Which MCU will best help me solve the problem I'm working on today?" Different jobs require different tools, and the goal is to understand how best to apply the available 8-bit and 32-bit devices.

         

        A Note on Tools and Updated Technology

         

        Before we begin comparing architectures, it's important to note that I am comparing modern 8-bit technology with modern 32-bit technology. I am using Silicon Labs’ EFM8 line of 8051-based MCUs, which are far more efficient than the original 8051 architecture and are built on modern process technology.

         

        Development tools are also important. Modern embedded firmware development requires a fully-featured IDE, ready-made firmware libraries, extensive examples, comprehensive evaluation and starter kits, and helper applications to simplify things like hardware configuration, library management and production programming.

         

        ARM has an army of tool developers supporting its impressive IDEs. Again, on the 8-bit side I used Silicon Labs' Simplicity Studio, which compares nicely with the various suites available for both ARM and 8-bit development.

         

        Obvious Choices for 8-bit and 32-bit MCUs

         

        System Size

         

        The first generality is that ARM Cortex-M cores excel in large systems (> 64 KB of code), while 8051 devices excel in smaller systems (< 8 KB of code). The middle ground could go either way, depending on what the system is doing. It's also important to note that in many cases, peripheral mix will play an important role. If you need three UARTs, an LCD controller, four timers and two ADCs, chances are you won't find all of those on an 8-bit part, while many 32-bit parts support that feature set.

         

        Ease-of-Use vs. Lowest Cost and Smallest Size

         

        For systems sitting in the middle ground where either architecture might do the job, the big trade-off is between the ease of use that comes with an ARM core and the cost and physical size advantages that can be gained with an 8051 device.

         

        The unified memory model of the ARM Cortex-M architecture, coupled with full C99 support in all common compilers, makes it very easy to write firmware for this architecture. In addition, there is a huge set of libraries and third-party code to draw from. Of course, the penalty for that ease-of-use is cost. Ease-of-use is an important factor for applications with high complexity, short time-to-market or inexperienced firmware developers.

         

        While there is some cost advantage when comparing equivalent 8- and 32-bit parts, the real difference is in the cost floor. It's common to find 8-bit parts as small as 2 KB/512 bytes (flash/RAM), while 32-bit parts rarely go below 8 KB/2 KB. This range of memory sizes allows a system developer to move down to a significantly lower-cost solution in systems that don't need a lot of resources. For this reason, applications that are extremely cost-sensitive or can fit in a very small memory footprint will favor an 8051 solution.

         

        8-bit parts also generally have an advantage in physical size. For example, the smallest 32-bit QFN package offered by Silicon Labs is 4 mm x 4 mm, while our 8051-based 8-bit parts are as small as 2 mm x 2 mm in QFN packages. Applications that are severely space-constrained often need to use an 8051 device to satisfy that constraint.

         

        General Code and RAM efficiency

         

        One of the major reasons for the lower cost of an 8051 MCU is that it generally uses flash and RAM more efficiently than an ARM Cortex-M core, which allows systems to be implemented with fewer resources. The larger the system, the less impact this will have.

         

        However, this 8-bit memory resource advantage does not always hold. In some situations, an ARM core will be as efficient as, or more efficient than, an 8051 core. For example, 32-bit math operations require only one instruction on an ARM device, while requiring multiple 8-bit instructions on an 8051 MCU.

         

        The ARM architecture has two major disadvantages at small flash/RAM sizes: code-space efficiency and predictability of RAM usage.

         

        The first and most obvious issue is general code-space efficiency. The 8051 core uses 1-, 2- or 3-byte instructions, and ARM cores use 2- or 4-byte instructions. The 8051 instructions are smaller on average, but that advantage is mitigated by the fact that a lot of the time, the ARM core can do more work with one instruction than the 8051. The 32-bit math case is just one such example. In practice, instruction width results in only moderately more dense code on the 8051.

         

        In systems that contain distributed access to variables, the load/store architecture of the ARM architecture is often more important than the instruction width. Consider the implementation of a semaphore where a variable needs to be decremented (allocated) or incremented (freed) in numerous locations scattered around code. An ARM core must load the variable into a register, operate on it and then store it back, which takes three instructions. The 8051 core, on the other hand, can operate directly on the memory location and requires only one instruction. As the amount of work done on a variable at one time goes up, the overhead due to load/store becomes negligible, but for situations where only a little work is done at a time, load/store can dominate and give the 8051 a clear efficiency advantage.
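
        A tiny sketch of the pattern; the instruction sequences in the comments describe the typical code generation discussed above, not the exact output of any particular compiler:

        #include <stdint.h>

        volatile uint8_t sem_count;

        void sem_free(void)
        {
            sem_count++;
            /* 8051:     typically one INC direct instruction on the memory location   */
            /* Cortex-M: typically a load (LDRB), modify (ADDS), store (STRB) sequence */
        }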

         

        While semaphores are not common constructs in embedded software, simple counters and flags are used extensively in control-oriented applications and behave the same way. A lot of common MCU code falls into this category.

         

        The other piece of the puzzle involves the fact that an ARM processor makes much more liberal use of the stack than an 8051 core. In general, 8051 devices only store return addresses (2 bytes) on the stack for each function call, handling much of the data normally kept on the stack through statically allocated variables instead. In some cases, this creates an opportunity for problems, since it causes functions not to be re-entrant by default. However, it also means that the amount of stack space that must be reserved is small and fairly predictable, which matters in MCUs with limited RAM.

         

        As a simple example, I created the following program. Then I measured the stack depth inside funcB and found that the M0+ core's stack consumed 48 bytes, while the 8051 core's stack consumed only 16 bytes. Of course, the 8051 core also statically allocated 8 bytes of RAM, consuming 24 bytes total. In larger systems, the difference is negligible, but in a system that only has 256 bytes of RAM, it becomes important.

         

        Stack Depth Benchmark Code.png
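
        The benchmark program itself is only shown as the image above; the following is a representative reconstruction of the call chain, not the exact code from the post:

        #include <stdint.h>

        volatile uint8_t depth_marker;

        static void funcB(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
        {
            /* Stack depth was measured at this point on both cores. */
            depth_marker = (uint8_t)(a + b + c + d);
        }

        static void funcA(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
        {
            funcB((uint8_t)(a + 1), (uint8_t)(b + 1), (uint8_t)(c + 1), (uint8_t)(d + 1));
        }

        int main(void)
        {
            funcA(1u, 2u, 3u, 4u);
            for (;;) ;
        }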

         

        Next post will dive into Architecture Specifics, and a more nuanced look at where each architecture excels.

         

        PART 2 -->

      • Fully Qualified ZigBee Remote Control Adds Voice and Saves Money with Soft Codec

        Anonymous | 10/07/2015 | 12:06 PM

        Adding voice command capability to a remote control makes a lot of sense. With well-thought-out voice control, hunting for the right buttons to watch a favorite movie, or to fast forward, pause or stop, is no longer so challenging and frustrating.

         

        After all, if our TVs have gotten so smart, why are the remotes so dumb?!

         

        Our ZigBee Remote Control Reference Design (part number: EM34X-VREVK) supports voice control, infrared (IR) control with an IR database, a backlit keyboard, and an acceleration sensor for activating the backlight. It’s a slick remote control and is in mass production with one of our leading customers, a large remote control provider.

         

        ZigBee Remote Control ZRC image.jpg

         

        The remote is designed to be cost efficient while supporting the requirements of various cloud-based “voice-to-text” software providers. Voice control typically requires an internet connection to transmit the voice command into the cloud, where it is converted to text. This is the model many service providers have adopted, and the remote supports these specifications as shown in the table below.

         

        One way we saved money for our customer, and for others who adopt this remote control, is by integrating the standalone codec functionality and its bill of materials (BOM) into our ZigBee SoC, the EM341. According to Digi-Key pricing, this can save between $0.50 and $1.50 per device by removing the standalone codec and its BOM; the exact savings, of course, depend on the volumes for the remote control.

         

        The reference design is orderable and configurable for both hardware codec and software codec. Find more information on Silicon Labs ZigBee Remote Control solutions at http://bit.ly/1O4YdED

         

        Read more about adding voice control in our whitepaper here: http://bit.ly/1OmoMFH

         

        ZigBee Remote Control ZRC Table.jpg

         

      • Superfast Sensor Evaluation Using the EFM8 Sleepy Bee

        Anonymous | 08/27/2015 | 04:13 PM

        Quickly Evaluating a Sensor with EFM8 Sleepy Bee MCU

        Getting a sensor and an MCU to communicate reliably can be a challenge—especially if you are new to the MCU. When you just want to quickly evaluate a sensor, what you really need is a fast way to configure the peripherals (an ADC in this case) and capture the output of the different sensors.

         

        To illustrate this, I designed an experiment to test a flex sensor with the Silicon Labs EFM8 Sleepy Bee SB1 8-bit MCU. I chose the EFM8 Sleepy Bee MCU because my (fictional) target application will run on a battery and needs to be ultra-low power.

         

        EFM8SB Kit.jpg

        EFM8 Sleepy Bee EVB

         

        Sleepy Bee is a great fit for this application because its ADC operates in configurable low-power modes, a feature that is hard to find in a low-cost MCU. The ADC supports 12-bit conversions at 75 ksps, or 300 ksps in 10-bit mode, which is the mode I chose so I could take many rapid readings from the sensor. 

         

        Which Sensors?

        Next I selected the sensors I wanted to evaluate: one from Spectra Symbol (part number SEN-08606) and one from Flexpoint Sensor Systems (part number 176-3-001).

         

        Flex Sensor.jpg

        Flex Sensor Options from Spectra Symbol and Flexpoint

         

        Each sensor can be powered at 5 V or 3.3 V, depending on the performance and signal level needed by the MCU. They work as variable resistors: as the bend of the sensor changes, its resistance changes as well, altering the voltage across it. This voltage drop corresponds to how far the sensor is bent or arched. I used 3.3 V for this design to reduce power consumption, so the resistor divider built around the sensor returns a voltage somewhere in the 0.5-2.5 V range (see the conversion sketch below).
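
        As referenced above, a minimal sketch of converting a raw 10-bit reading back to the sensor voltage, assuming a 3.3 V full-scale reference (values are illustrative):

        #include <stdint.h>

        #define VREF_MILLIVOLTS 3300u
        #define ADC_FULL_SCALE  1023u   /* 10-bit mode */

        /* e.g. a raw code of 620 maps to roughly 620 * 3300 / 1023 ≈ 2000 mV */
        static uint16_t adc_to_millivolts(uint16_t raw_code)
        {
            return (uint16_t)(((uint32_t)raw_code * VREF_MILLIVOLTS) / ADC_FULL_SCALE);
        }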

         

        Configuring the EFM8 Sleepy Bee ADC

        Next I needed to configure the ADC on the MCU. For this, I downloaded Simplicity Studio, which includes a free IDE and compiler.

         

        Simplicity Studio also includes an ADC reference design, which made getting the ADC up and running super fast. Loading the reference design was simple, and it had the ADC configured with the settings I needed. 

         

        Simp Studio Check Boxes.png

        Simplicity Studio ADC Input Pin Dialog Box

         

        Simp Studio Config Boxes.png

        Simplicity Studio ADC Configuration Dialog

         

        Once I finished configuring the ADC settings and pins, the Simplicity Studio reference design spit out the code for me, too. In other words, I had to write no code to evaluate these sensors. All I had to do was select configuration options and check boxes.

         

        Simp Studio Code Example.png

        Simplicity Studio Code Snippet
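
        The generated snippet appears only as the image above; as a rough, hypothetical stand-in, a polled read on an 8051-class MCU looks something like this (the register names are illustrative, not the exact EFM8 SFR names):

        #include <stdint.h>

        /* Hypothetical stand-ins for the real ADC special function registers */
        extern volatile uint8_t  ADC_START_AND_BUSY;   /* write 1 to start, reads 1 while busy */
        extern volatile uint16_t ADC_RESULT;           /* 10-bit conversion result             */

        static uint16_t adc_read_polled(void)
        {
            ADC_START_AND_BUSY = 1;          /* start a conversion          */
            while (ADC_START_AND_BUSY) ;     /* wait for it to complete     */
            return ADC_RESULT;               /* read back the 10-bit result */
        }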

         

        Sensor Evaluation Set-Up

        Next I configured the setup on my desktop. In the sensor setup image, the red wire runs 3.3 V to the sensor, the green wire is the input to the ADC, and the brown wire goes to a potentiometer and then to ground so I can adjust the pot’s resistance to create the largest voltage swing.

         

        Set-up Image.jpg

        The last step was to see if my set-up worked, and to take some power measurements.

         

        As I bent the sensor, the voltage drop increased and the voltage read back decreased. Success! My set-up was working.

         

        Voltage Image.png

        Output of Voltage Readback from Sensor

         

         

        Simplicity Studio Energy Profiler

        Now to test power consumption. Simplicity Studio’s Energy Profiler allowed me to see the auto-generated code’s power consumption in real time.

         

        Simp Studio Energy Profiler.png

        Simplicity Studio Energy Profiler Power Consumption Readings

         

        Energy Profiler also allows the code to be broken down to provide insight into where the most power is consumed. This is a simple example where I am reading the ADC, so the current consumption is relatively flat. When I am ready to develop the application further, this will be a powerful tool for extending battery life.

         

        Summary

        Ten minutes is all it took to evaluate two sensors and compare their current consumption. The EFM8 Sleepy Bee is super flexible and has a perfectly suited low-power ADC, and Simplicity Studio has a ton of EFM8 examples that make it easy to configure and use its peripherals, including the ADC, SPI, LCD, and many more.

         

        Check out the EFM8 Sleepy Bee

        Check out Simplicity Studio

         

         

         

      • Add USB Easy as 1-2-3

        lethawicker | 08/24/2015 | 09:22 AM

        Sometimes we forget to add USB to our designs, or we need USB to access the design more efficiently from our development platform.

         

        Don’t worry. It’s easy to drop USB connectivity into any design, old or new, with the fixed-function CP210x USB-to-UART bridge family from Silicon Labs. In fact, you can do it in just three quick steps.

         

        Step 1 – Connect the CP210x EVK to your Windows PC and launch the Windows driver installer to walk through the wizard and set the driver name and configuration.

         

        Connect

         

        Step 2 – Install the driver on the target device and reboot so Windows recognizes it. No additional code writing is necessary. 

         

        This is the setup: two wires run from the UART ports on my device to the TX and RX ports of the Silicon Labs CP2102, and the USB cable goes to the host computer, where the terminal is viewed.

         

        USB 123

         

        Step 3 – Once the drivers are in place and the device is recognized, open a COM port and, USB-am!, start sending and receiving USB data.

         

        com port

         

        Learn more at the Silicon Labs CP210x Page

        CP210x devices

         

        Download AN721 for more detailed instructions

        AN721, adding USB walkthrough

         

         

        Buy the CP210x EVK to get started

        Evaluation kit

         

         

        Customize the USB driver

        Custom driver info

         

        Feel free to share your thoughts in the comments!