Official Blog of Silicon Labs

      • Detecting User Input with Capacitive Touch and Passive Infrared (PIR) - Part 2

        lynchtron | 03/89/2017 | 09:37 AM

        15_title.png

         

        In Part 1, we configured the capacitive sensor interface to sample touches on the Wonder Gecko touch slider.  In this section, we will get our computer ready to run Python and the freely available pyqtgraph library, which is a cross-platform graphing tool that is easy to set up and use.

         

        Graphing the Capacitive Touch Count


        Meeting the signal-to-noise requirement described below requires a way to graph the count value and observe it over time.  The easiest way to do that is to feed the data from the EFM32 to a host computer, where we can run a graphing program in real time.

         

        capacitive_sense_graph.png

        Per AN0040, we need to characterize the number of pulses on a touch pad with no touch (i.e. the ambient value) and when a user is touching the pad.  For a reliable system, the count when not touched should be at least 5x the count when touched.

         

        In the first example, we configured the SysTick for 1ms interrupts, then counted 100 ticks, or 100ms of time, before reading the CNT register in TIMER1.  This is an easy way to get started, but it is not efficient.  The SysTick interrupt brings the system out of EM1 every millisecond just to add one to a counter.  Rather than rely on the SysTick interrupt and a busy while loop, we will rewrite the application to use TIMER0 to keep track of the sample time.  This allows the MCU to stay in the EM1 sleep state until the sample window is complete, and then fetch the TIMER1 CNT value inside the TIMER0 interrupt handler.

         

        Set up TIMER0 in the setup_capsense() function:

          

              // Set up TIMER0 for sampling TIMER1
              CMU_ClockEnable(cmuClock_TIMER0, true);
         
              /* Initialize TIMER0 - Prescaler 2^9, top value 10, interrupt on overflow */
              TIMER0->CTRL = TIMER_CTRL_PRESC_DIV512;
              TIMER0->TOP  = 10;
              TIMER0->IEN  = TIMER_IEN_OF;
              TIMER0->CNT  = 0;
         
              /* Enable TIMER0 interrupt */
              NVIC_EnableIRQ(TIMER0_IRQn);

         

        The TOP value is set to 10 in this code, with the prescaler set to 512.  Once again, our sample window can be any length we like.  If we set TOP to a small value or use a small prescaler, we will get a very responsive interface at the cost of higher energy consumption.  If we set TOP to a large value or use a large prescaler, the TIMER0 interrupt will occur less often, saving energy but creating a less responsive interface.  It is up to you to find values that work best for your application.
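
        If you want to estimate the resulting window length, it is roughly prescale × (TOP + 1) divided by the timer's clock frequency.  A small helper like the one below (the function name is just an example, not part of the project code) computes it, assuming TIMER0 is clocked from HFPERCLK, which is the default:

              // Illustrative helper, assuming TIMER0 is clocked from HFPERCLK (the default)
              // Window length in microseconds = prescale * (TOP + 1) / HFPERCLK
              uint32_t sample_window_us(uint32_t prescale, uint32_t top)
              {
                    uint32_t hfperclk = CMU_ClockFreqGet(cmuClock_HFPER);
                    return (uint32_t)(((uint64_t)prescale * (top + 1) * 1000000ULL) / hfperclk);
              }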

         

        Now, to fetch the count, we can make the count variable global, along with a measurement_complete flag, which is set to true in the TIMER0 interrupt handler.  The resulting main code and TIMER0 interrupt handler are shown here:

         

         

        // Global variables
        volatile unsigned int count = 0;
        volatile bool measurement_complete = false;
        
        int main(void)
        {
        	/* Chip errata */
        	CHIP_Init();
        
        	setup_utilities();
        
        	setup_capsense();
        
        	while (1)
        	{
        		// Clear the count and start the timers
        		measurement_complete = false;
        		TIMER0->CNT = 0;
        		TIMER1->CNT = 0;
        		TIMER0->CMD = TIMER_CMD_START;
        		TIMER1->CMD = TIMER_CMD_START;
        
        		// Now, wait for TIMER0 interrupt to set the complete flag
        		while(!measurement_complete)
        		{
        			EMU_EnterEM1();
        		}
        
        		// Now observe the count, send it out the USART
        		print_count();
        
        		// Delay to not overwhelm the serial port
        		delay(100);
        	}
        }
        
        void TIMER0_IRQHandler(void)
        {
        	// Stop timers
        	TIMER0->CMD = TIMER_CMD_STOP;
        	TIMER1->CMD = TIMER_CMD_STOP;
        
        	// Clear interrupt flag
        	TIMER0->IFC = TIMER_IFC_OF;
        
        	// Read out value of TIMER1
        	count = TIMER1->CNT;
        
        	measurement_complete = true;
        }

         

        All that is left to do is define the print_count() function, which we will use in lieu of the hardware breakpoint from our earlier experiments.

         

        We would normally use the SWO output from the Starter Kit to the host computer for simple debug print messages, but it is difficult to reroute those messages to another program for further analysis.  Therefore, we will use the USART and a USB-to-UART adapter to route the messages to a serial port on the host computer.

         

        Recall that we used the Silicon Labs USB-to-UART CP2104 MINIEK board in chapter 8.  We can reuse that board here and connect it per the following table:

         

        Starter Kit              CP2104 MINIEK
        PC0 - USART0 TX          RXD
        PC1 - USART0 RX          TXD
        GND                      GND

         

        cp2104.png

        Add the following to your setup_capsense() function to enable USART0:

          

              // Set up USART0 for graphing via the PC serial port (TX on pin PC0)
              CMU_ClockEnable(cmuClock_USART0, true);
              USART_InitAsync_TypeDef usart_settings = USART_INITASYNC_DEFAULT;
              USART_InitAsync(USART0, &usart_settings);

              // Enable TX only at location 5
              USART0->ROUTE = (USART0->ROUTE & ~_USART_ROUTE_LOCATION_MASK) | USART_ROUTE_LOCATION_LOC5;
              USART0->ROUTE |= USART_ROUTE_TXPEN;

              // Set the pin as push/pull in the GPIO (the GPIO clock must be enabled first)
              CMU_ClockEnable(cmuClock_GPIO, true);
              GPIO_PinModeSet(gpioPortC, 0, gpioModePushPull, 0);

         

         Then, the following function is defined to send the count value out through USART0:

          

        // Prints out the global count variable to the USART
        void print_count()
        {
              // Create a place to store a message
              char message[6];
         
              // Format the string as count, encoded in hex, with a space
              sprintf(message, "%x ", count);
         
              // A string pointer to the start of the message
              char * string = message;
         
              // While the data pointed to by the string is not null
              while (*string != 0)
              {
                    // Send the dereferenced value of the pointer to the USART
                    // And then the pointer is incremented to the next address with ++
                    USART_Tx(USART0, *string++);
              }
        }

        Open a terminal emulator program as we did in chapter 8.  You can use PuTTY or Tera Term on Windows, or the screen utility on Mac and Linux.  Set the baud rate to 115200.  Make sure that you select the serial port that is assigned to the MINIEK board and open it in the terminal emulator.  If all goes well, you will see a list of count values that scrolls forever.  Touch the left-most touch pad on the Starter Kit, and you will see the values change.

          

        4a 48 48 4a 48 48 48 48 48 48 4a 48 48 4a 48 48 4a 48 
        48 4a 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48
        4a 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48 48
        48 48 48 4a 48 48 4a 4a 48 48 48 48 48 4a 48 48 4a 48
        48 48 4a 48 48 48 44 36 2c 2c 2c 2c 2c 2c 2a 2a 2c 2c
        2c 2a 2c 2c 2c 2c 2c 2a 2c

         

        The touch event can be seen at the end of this listing.

         

        Configuring Your Computer to Run Python and pyqtgraph

        We can graph the data arriving at the host computer’s serial port in real time with the help of a Python script and Python libraries called pySerial and pyqtgraph.  

         

        The Python scripting language has many benefits over the C language that we have been using thus far in programming the EFM32.  The first thing that you need to know is that Python is dynamically typed.  What this means is that you don’t need to declare a variable as a specific type, like uint32_t.  Any variable can hold any data and can be changed at runtime; Python automatically manages this for you, as it also manages memory.  Also, Python is object oriented, which allows for each object to keep track of its own state, allowing more powerful and reusable code.

         

        Python is interpreted, which means that we don’t compile and link during a build process.  There is no resulting executable or binary file.  Every script file is evaluated line-by-line as it runs, and you can even invoke Python by itself as an “interpreter,” which allows you to write code and explore functions and return values in a live, interactive session.  There are plenty of Python IDEs with breakpoint and single-stepping functionality, but they are not strictly required, as the interpreter provides some of those benefits. 

        Python and the libraries are cross-platform, meaning that they will work on Windows, Mac, and Linux computers.

         

        IMPORTANT NOTE: Python uses tabs and/or whitespace to indicate an indentation block.  You don’t need to use curly braces {} after an if statement.  This is a great feature that saves keystrokes, but it can be maddening if you use an editor that mixes tabs and spaces.  Therefore, make sure to use a good editor or IDE that converts tabs into spaces when editing Python scripts.  For example, Notepad++, Komodo Edit, PyCharm, or PyDev, the last of which runs in Eclipse.

         

        If you have a Mac or Linux computer, Python is likely already installed.  If not, just visit http://python.org and follow the instructions to install it. 

         

        If you have a Windows computer, download the Python 2.7 “Windows x86 MSI installer” from https://www.python.org/downloads/.  At the time of this writing, the minor version was 2.7.12, but any newer 2.7.x version should work fine.  This code has not been tested on the newer Python 3.x branches.

         

        IMPORTANT:  For Windows users, make sure that you modify the default setting “Add python.exe to Path” to “Will be installed on local hard drive” as shown in the figure below.  This is required to run the commands listed in this chapter. 

        python_install.png

        Once Python is installed, open a command window (or console) on your computer and type “python --version” at the prompt.  You should see a message that displays the python version.  If this doesn’t work for you, resolve this problem before continuing.

          

        C:\WINDOWS\system32>python --version
        Python 2.7.12

         

        Next, install the libraries needed for the graphing script.  Type the following commands at your OS command prompt:

         

        python -m pip install --upgrade pip
        pip install pyserial
        pip install -U PySide
        pip install pyqtgraph
         

        Then, at the OS command prompt, type python:

        C:\WINDOWS\system32>python
        Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        >>> 

         

        The three greater-than symbols are the Python interpreter prompt.  Type the following commands to ensure that all libraries are installed:

         

        >>> import serial
        >>> import pyqtgraph.examples
        >>> pyqtgraph.examples.run()

         

        When you run these commands, you should not see any error messages, and a window should open on your computer that looks like the following.  If so, it confirms that you are ready to run the graphing code.  This is the demo application for pyqtgraph, and it demonstrates all the neat things you can do with it.

        pyqtgraph_window.png

        In the next section, we will use our serial data port and Python GUI tool to create a program that graphs the output from the capacitive sensor in real time.

      • Betting Big on the Connected Home

        Lance Looper | 03/88/2017 | 09:50 AM

        EE Times’ Chief International Correspondent Junko Yoshida recently interviewed Yorbe Zhang, EE Times China’s editor-in-chief, focusing on China’s electronics industry and its outlook for 2017. Zhang singled out connected devices for the home as among the top five areas of interest for Chinese consumers in 2017.

         

        To get another perspective on Zhang’s IoT outlook, we followed up with Sunray Liu, the chief analyst of New Synergy Consulting, a consulting partner of Silicon Labs. Liu has more than 25 years’ experience in the semiconductor industry and co-founded New Synergy in 2005 as a professional marketing consultancy in China.

         

        Sunray, how does the EE Times’ opinion on Chinese consumer interests reflect what New Synergy is seeing in the market?

        I agree with Yorbe’s points about the top five electronic products, especially the IoT products. Actually, consumption and demand for smartphones and passenger cars are still strong, but the growth rates are starting to flatten to single digits in China. These markets will gradually move into a red-ocean situation unless there are disruptive innovations. The evidence is that companies like Huawei are focusing on profit now, not just scale of business. Huawei is asking its consumer business group to set profitability as its first priority, even though the group shipped 140 million smartphones in 2016.

         

        IDC’s research on the global smartphone market found that it grew only about 2 percent in 2016, with total shipments of about 1.47 billion phones.

         

        IoT-Solutions-house_Blog.png

         

        What do you think about connected devices at home being included?

        The best future opportunity is still in the IoT, specifically in areas like the connected home.  Yorbe mentioned that large OEMs like Haier, Huawei and others are betting big on the connected home market, and I very much agree with this point. I have believed for some time that the final winners in the networked home market are more likely to be these big OEMs. Small companies and makers will be winners in areas like standalone device markets, especially in the early-adoption stage of new IoT devices.

         

        Why do you think big OEMs will win out in the connected home market?

        Large OEMs will win due to the nature of the IoT market. It will gradually become a technology-plus-service market instead of a pure device market. For example, many Chinese customers prefer to hire a locksmith to change or repair their mechanical locks. They will definitely not buy a Bluetooth-connected lock from an e-commerce site or store and install it themselves. So the big OEMs will win the biggest pieces of the pie because they have widespread sales and service channels in China.

         

        As we analyzed in our 2017 marketing proposal, IoT businesses should include four layers: consumer-served devices such as watches, professional-served markets including the smart home, system integration businesses like building the wireless factory, and fully customized IoT systems for specific business models or customer operations. The value of services is climbing across all four layers, so IoT service is as crucial as development of the IoT product itself. And services are highly relevant to the future business of chip vendors.

         

        How will chip vendors win in future service-driven IoT market sectors?

        Because IoT service professionals are becoming an important part of the whole industry, we believe that we should extend marketing efforts beyond design engineers and product managers to include service professionals. These professionals may accelerate the growth of the market because they will promote products to end consumers. They’ll make these recommendations based on two factors: first, the channel compensation or service fee from the OEMs, which is out of our control; and second, the technologies and products they know best, which means they can easily introduce and install those products with higher efficiency. Time is money for these people.

         

        China will need a lot of IoT product design engineers and many more IoT service people to meet this demand, right? Where will they come from?

        They should come from the top technology universities like Tsinghua and my university, the University of Electronic Science and Technology of China, as well as many other colleges. A packaged IoT education program covering wireless technologies, MCUs, development tools, embedded OSes and sensors will be very important for the future success of IoT chip vendors. We describe this kind of university program as a strategic marketing approach for IoT vendors and believe that it’s still an important opportunity for all of us. Our company recently worked with an HTML5 development platform vendor to publish a textbook and to cooperate with 15 colleges in China.

         

        About Sunray Zhaohui

        Sunray Zhaohui Liu is the chief analyst of New Synergy Consulting in Beijing, China. Mr. Liu started his career as an assistant analyst for China’s Ministry of Electronic Industry in 1990. He then joined CMP Media as China Bureau Chief of EE Times, where he wrote many news reports on China’s electronics industry and semiconductor market. After returning from his MBA studies in the US, Mr. Liu worked as a director of operations or director of marketing at technology companies in Beijing and Shanghai. In 2005, he co-founded New Synergy Consulting Co. Ltd., which provides strategic marketing consulting and public relations services for clients along the electronics value chain. New Synergy was a member of the Global Semiconductor Alliance. Sunray is a speaker and a freelancer for events and media in and outside of China. New Synergy Consulting has conducted many research projects for industry and government. Mr. Liu received his MBA from the University of Illinois and his BSEE from the University of Electronic Science and Technology of China.

         

      • IoT Hero Anrim Technologies: Taking a Different Road with Connected Cars & More

        deirdrewalsh | 03/86/2017 | 11:28 AM

         

        Banner-anrim.jpg

         

        We took the opportunity to talk to CEO Jason Harris of Anrim Technologies, an engineering and consulting firm focused on developing IoT products and capabilities. Based in Maryland, Jason and his team are currently invested in uncovering previously inaccessible opportunities in the connected car market.

         

        Tell us about Anrim and your exploration of the connected car.

        Anrim Technologies provides engineering and consulting services for our clients. We have also developed our own in-house product called DRIVE that deals with vehicle analytics in the connected car space.

         

        In the auto service industry, dealerships and service centers currently use antiquated techniques such as emails, paper flyers, and mailers to market to their customers. These techniques rely on educated guesses as to when vehicle services may be needed, but ultimately rely on customer intuition to reach back out to the service center to have the work performed. The yield is low with this style of marketing, and it allows competitors to easily steal customers away.

         

        DRIVE changes things up by providing a cost-effective solution for connecting directly with customers and their vehicles. A service center can now have real-time insight into vehicle health, allowing them to call a customer about known service needs, increasing customer response, loyalty, and ultimately revenue.

         

        For vehicle owners, DRIVE makes maintenance hassle-free. You no longer need to check the sticker in the windshield that says when the oil needs to be changed or worry about getting stranded on the side of the road when the Check Engine Light comes on. Now your service center will reach out to you when your vehicle needs service, giving you peace of mind that someone is watching out for your car’s well-being.

         

        Because we are invested in providing a quality customer experience, we work with dealerships and service centers to offer DRIVE as a complementary service for the vehicle owner.

         

        Was that your vision for DRIVE from the start, and how does it relate to your other offerings?

        DRIVE was envisioned to be extremely low cost, but one of the significant costs of these types of services is the mobile network operators (MNOs) that provide cellular data connectivity. DRIVE was designed to eliminate the need for MNOs and enable the vehicle to connect to the DRIVE Service via the user’s smartphone. Users only need to have the DRIVE app installed on their phone. There is no need to open the app or press a sync button for the phone to relay the information between the vehicle and the cloud.

         

        After seeing the need for such a service in other markets, we decided to market the data connectivity capabilities of DRIVE as a standalone offering called FlightPipe. FlightPipe has already solved the complexities of implementing a low-cost data connectivity solution for IoT. We are now providing this solution to other IoT providers, enabling them to focus on their core application instead of worrying about how to move data between their devices and the cloud.

         

        Anrim_Drive_Image.png

         

         

        So FlightPipe was also really born out of a real focus on data security as well?

        Absolutely. Security is essential. The IoT is connecting millions of new devices to the internet every day and anything that connects is vulnerable to hacking. FlightPipe uses industry-standard cryptography and authentication mechanisms to provide a point-to-point VPN connection between the IoT service and the IoT endpoint device in the field.

         

        I really like the cascade effect of you setting out to create one product and ending up creating two as a result of the real innovation you were tapping into. How did Silicon Labs help, and what was the selection process like?

        After evaluating several different Bluetooth SoCs to meet our requirements for DRIVE, the Silicon Labs BGM123 SiP module was the singular solution that met all our needs. Additionally, it was very convenient to have pre-certification from the FCC; that enabled us to finalize the hardware portion of our design in record time. All this ultimately enabled us to meet our very aggressive go-to-market schedule.

         

        Our team also benefited from working with Silicon Labs to get access to prerelease software and SDKs so we could finish our development quickly. Between the tools and the resources that were provided, Silicon Labs was extremely helpful in enabling us to meet our aggressive schedule.

         

        In your opinion, what does the future of IoT look like? What are your ideas on what’s to come or to consider?

        There is a lot of great opportunity for innovation in this space. I think the industry needs to be careful to appreciate that it is not the technological capabilities that will drive success, but ultimately the marketplace. That said, it will be exciting to see what is to come in the next few years.

         

      • Embedded World Recap

        Lance Looper | 03/80/2017 | 10:28 AM

        We're back from Embedded World and what a week! We started by announcing an expansion to our Wireless Gecko portfolio that includes more memory and offers features like over-the-air software updates to support application enhancements and evolving protocol needs in the field.

         

        We also had the opportunity to share demos with some of the more than 30,000 attendees, including our latest capabilities in MCUs, multiprotocol, and power management.  We also spoke at nine sessions over the course of the show. 

         

        Embedded World 2017 Recap_Sense.jpg

        Our Thunderboard Sense display was nice and subtle.

         

         

        Embedded World 2017 Recap_Mesh Demo.jpg

        Showing off the latest in Mesh networking.

         

         

        Embedded World 2017 Recap_Micrium Booth.jpg

        Jean Labrosse demonstrates Micrium OS, the latest version of the µC/OS™ RTOS.

         

         

        Embedded World 2017 Recap_Bluetooth_HomeKit Demo.jpg

        Of course Blue Gecko got in on the action.

         

         

        Embedded World 2017 Recap_Wifi.jpg

         

         

        In the video below Mark Tekippe explains how our zigbee and Thread reference designs can help beat the competition to market:

         

        Alex Koepsel and Josh Norem explain the visualization capabilities of Gecko MCUs:

         

      • Detecting User Input with Capacitive Touch and Passive Infrared (PIR) - Part 1

        lynchtron | 03/79/2017 | 03:21 PM

        15_title.png

         

        In this chapter, we will explore the common task of detecting user input via a capacitive button.  In Chapter 12, we interfaced with a capacitive sense controller via I2C.  That controller chip did all the work for us, so all our software needed to do was read some registers to calculate the position of a finger touch on the touchscreen. 

         

        In addition to touch, we will add a Passive Infrared (PIR) sensor that will detect the presence of a user to wake up the EFM32 from EM4 (deep sleep) state, illuminate an LED, and begin scanning the capacitive sense buttons for a touch input.  This example application could be used as the starting point to build a touch-enabled and motion-activated doorbell, which could then load sound files, display an image on a display, or perhaps send the request on to other hardware. 

         

        pir_sensor2.jpg

        Materials Needed for this Chapter

        • Adafruit PIR (motion) sensor

        Capacitive Sensing Overview

        As figure 2.1 from AN0040 Hardware Design for Capacitive Touch shows, a capacitive sensor is formed by observing the capacitive changes on a pad of conducting material when a human body nears the pad. The human body adds capacitance to the pad, and we can detect that change in capacitance, which registers as a touch event.

         

        capacitive_sense_pad.png

        There are several ways to detect changes in capacitance from a microcontroller.  The way we will do it in this chapter is to build a clock oscillator with the pad as part of the circuit, and measure the change in capacitance through the change in frequency of the oscillator.  Note that the actual frequency we use is not important.  It is only the change in frequency that indicates a touch or non-touch.
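
        Roughly speaking, the ACMP’s capacitive sense mode forms an RC relaxation oscillator: the pad capacitance is charged and discharged through an internal resistor between two comparator thresholds, so the oscillation frequency is approximately inversely proportional to the pad capacitance (a simplified model, not an exact formula):

              f_osc ∝ 1 / (R_internal × C_pad)

        A finger adds capacitance, so a touch lowers the frequency, which is exactly the change we will count.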

         

        The EFM32 Analog Comparator (ACMP) will be used with a hardware TIMER to measure the change in clock frequency over time.  There are helpful tools in the emlib software library and in the ACMP hardware to set up the clock oscillator for our capacitive sensing needs.  We must use hardware to count the pulses because they run in a range of 100kHz to 1.5MHz, which is faster than we can count with a simple GPIO input trigger and software.  We must set up the ACMP to trigger a PRS event, which is then counted by a hardware timer.

         

        The capacitive sensor overview is covered in Silicon Labs application notes AN0028 Low Energy Sensor Interface - Capacitive Sense and AN0040 Hardware Design for Capacitive Touch.  You should read those application notes for a detailed understanding of this topic. 

         

        IMPORTANT NOTE: The latest versions of AN0828 - Capacitive Sensing Library Overview and AN0829 - Capacitive Sensing Library Configuration Guide make use of the capacitive sensing library (CSLIB) for EFM32 devices, but they are not covered in this chapter.  Those application notes provide information about additional tools to help you characterize your capacitive sense application.


        The touch slider that is built in to the Starter Kit is shown in Figure 5.3 of the Wonder Gecko Starter Kit User Guide.

        15_touch_slider_schematic.png

        capacitive_sense_buttons.png

         

        There are four capacitive pads on the Starter Kit that make up a slider.  These pads are attached to EFM32 pins PC8, PC9, PC10, and PC11.  We can still use these pins for other things because the touch pads simply add capacitance to each of these signals.  We need a very specific timing circuit to detect a touch, but these pads are not likely to impact other digital uses of these pins.

         

        Programming the Capacitive Touch Interface

         

        As a reminder, all completed code for this project can be found on GitHub here.

         

        We will be using the ACMP to generate the clock oscillator that uses the pad capacitance as part of the circuit.  The ACMP will then generate an event on every comparison (each rising edge of the clock), which is fed via PRS into the TIMER to count those pulses over a set period of time, allowing us to compare frequency changes over time.  The following function performs this operation on a single button, the left-most pad of the slider, Pad 1 (PC8).

         

        void setup_capsense()
        {
              /* Use the default STK capacitive sensing setup */
              ACMP_CapsenseInit_TypeDef capsenseInit = ACMP_CAPSENSE_INIT_DEFAULT;
         
              CMU_ClockEnable(cmuClock_HFPER, true);
              CMU_ClockEnable(cmuClock_ACMP1, true);
         
              /* Set up ACMP1 in capsense mode */
              ACMP_CapsenseInit(ACMP1, &capsenseInit);
         
              // This is all that is needed to set up PC8, the left-most slider pad
              // i.e. no GPIO routes or GPIO clocks need to be configured
              ACMP_CapsenseChannelSet(ACMP1, acmpChannel0);
         
              // Enable the ACMP1 interrupt
              ACMP_IntEnable(ACMP1, ACMP_IEN_EDGE);
              ACMP1->CTRL = ACMP1->CTRL | ACMP_CTRL_IRISE_ENABLED;
         
              // Wait until ACMP warms up
              while (!(ACMP1->STATUS & ACMP_STATUS_ACMPACT)) ;
         
              CMU_ClockEnable(cmuClock_PRS, true);
              CMU_ClockEnable(cmuClock_TIMER1, true);
         
              // Use TIMER1 to count ACMP events (rising edges)
              // It will be clocked by the capture/compare feature
              TIMER_Init_TypeDef timer_settings = TIMER_INIT_DEFAULT;
              timer_settings.clkSel = timerClkSelCC1;
              timer_settings.prescale = timerPrescale1024;
              TIMER_Init(TIMER1, &timer_settings);
              TIMER1->TOP  = 0xFFFF;
         
              // Set up TIMER1's capture/compare feature, to act as the source clock
              TIMER_InitCC_TypeDef timer_cc_settings = TIMER_INITCC_DEFAULT;
              timer_cc_settings.mode = timerCCModeCapture;
              timer_cc_settings.prsInput = true;
              timer_cc_settings.prsSel = timerPRSSELCh0;
              timer_cc_settings.eventCtrl = timerEventRising;
              timer_cc_settings.edge = timerEdgeBoth;
              TIMER_InitCC(TIMER1, 1, &timer_cc_settings);
         
              // Set up PRS so that TIMER1 CC1 can observe the event produced by ACMP1
              PRS_SourceSignalSet(0, PRS_CH_CTRL_SOURCESEL_ACMP1, PRS_CH_CTRL_SIGSEL_ACMP1OUT, prsEdgePos);
         
        }

        The setup_capsense() function can be summarized as follows:

        • Enable the clocks to all peripherals: HFPER, ACMP1, TIMER1, and PRS
        • Configure the ACMP for capacitive sensing, i.e. create a clock oscillator
        • Wait for the ACMP to warm up 
        • Enable ACMP interrupts, such that rising edge events can be sent on via PRS
        • Configure the TIMER to be clocked by the capture feature, which in turn acts on events from the PRS channel that is tied to the ACMP interrupts

        The comments in the code mention that the ACMP and other analog peripherals have no route registers or GPIO enable functions.  The physical pins are tied directly to the analog peripherals, so all you need to do is enable and configure the analog peripheral for those pins to become active.  You don’t even need to turn on the clock to the GPIO peripheral.  The datasheet specifies these kinds of analog pins as location zero, even though these interfaces “do not have alternate settings or a LOCATION bitfield. In these cases, the pinout is shown in the column corresponding to LOCATION 0.”

        Once this function is called in the main() function, and setup_utilities() is called to set up the SysTick interrupt so that the expired_ms() function works, a while loop can be constructed to fetch the current count:

         

        #define ACMP_PERIOD_MS  100
         
              // Setup the systick for 1ms interrupts
              setup_utilities();
         
              setup_capsense();
         
              while (1)
              {
                    // Clear the count
                    count = 0;
                    TIMER1->CMD = TIMER_CMD_START;
         
                    // Start a timer based on systick
                    int32_t timer = set_timeout_ms(ACMP_PERIOD_MS);
         
                    while (!expired_ms(timer))
                    {
                          EMU_EnterEM1();
                    }
         
                    // Now observe the count and reset
                    TIMER1->CMD = TIMER_CMD_STOP;
                    count = TIMER1->CNT;
                    TIMER1->CNT = 0;
         
              }

        If you set a breakpoint on the last statement in the while loop, just after count is assigned a value, you can observe the value of the count when there is a touch and when there is not by running the Simplicity Studio IDE one loop at a time.  Your count observations may vary, depending on humidity and other factors.  This is OK, as long as you are seeing a clear difference in the count between a touch and a no-touch loop.
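
        Once you have a pair of representative counts, one simple (hypothetical) way to turn them into a touch decision is a fixed threshold, as sketched below.  The value shown is only a placeholder; replace it with something between your own observed touched and untouched counts.

              // Hypothetical threshold check - not part of the project code.
              // A touch adds capacitance, which lowers the oscillator frequency and therefore the count.
              #define TOUCH_THRESHOLD 500   // Placeholder: pick a value between your observed counts
              bool touched = (count < TOUCH_THRESHOLD);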

         

        Most of the work here is done by the ACMP, PRS and TIMER hardware (and SysTick to some extent).  The only thing that the main loop is doing is starting the hardware timer, waiting 100ms, stopping the timer and reading its count, and then resetting the timer count before the loop starts all over again.  Note that the ACMP is running the whole time, and we only sleep long enough to wait for the count to accumulate.

         

        The application note AN0028 Low Energy Sensor Interface - Capacitive Sense performs this while loop over all four touch pads without any looping software.  Think of the LESENSE peripheral as your main software while loop built into a programmable hardware peripheral.  And since LESENSE has its own clock and counter, no TIMER is necessary at all.  In the software examples that are given for AN0028, the only thing to do in software is enable and configure ACMP for capacitive sense, as the LESENSE takes over from there, once it is also configured to do so.  The example uses all four touch pads to slide a message on the LCD screen left or right according to your finger position.  The use of LESENSE is beyond the scope of this chapter.

         

        If you want to see the actual clock oscillator generated by ACMP, and to observe the change in frequency produced by your finger, add the following lines to your setup_capsense() function:

         

              // Configure the ACMP1 output to pin PD7 (location 2)
              CMU_ClockEnable(cmuClock_GPIO, true);
              GPIO_PinModeSet(gpioPortD, 7, gpioModePushPull, 0);
              ACMP_GPIOSetup(ACMP1, 2, true, false);

         

        You can then view the internal ACMP clock oscillator directly on an oscilloscope and observe the frequency differences between a touch and a no-touch event.

         

        capacitive_sense_raw_frequency.png

        In the next section, we will set up our computer with a simple graphing script in the Python programming language, so that we can use it to observe the capacitive sense count over time and characterize the performance of the button.

      • Helping Home Appliance Developers Comply with International Safety Certification

        Lance Looper | 03/76/2017 | 09:45 AM

        We recently released a new software safety package designed to help developers ensure the safe, reliable operation of a wide range of home appliances such as washing machines, refrigerators, ovens, vacuum cleaners, blenders, and automatic gates. Available free of charge as a set of software libraries, the package enables end products using Silicon Labs’ EFM8 microcontrollers (MCUs) to quickly comply with the International Electro-technical Commission’s IEC-60730 Class B standard.

         

        Appliances_Banner.jpg

         

        Our new safety software library has been certified by external compliance bodies to adhere to the IEC-60730 Class B standard. The IEC certification applies to systems or components of end products that use the EFM8 family of 8-bit MCUs along with the new software library. Home appliances or components used in home applications that could cause a fire, leak or personal injury must be IEC-certified, especially for the European market. End products will pass IEC certification more easily if all components and sub-systems have been pre-certified.

         

        The library consists of test functions that continually monitor the state of the device, ensuring that the EFM8 MCU will not malfunction and cause a failure of the home appliance, endangering the home or its occupants. The software tests the EFM8 MCU’s system clock, timers, volatile memory (RAM), non-volatile memory (flash), UART interface, ADC and DAC, registers and other critical peripheral functions. The library includes all the Power On Self Test (POST) functions executed when a device is first powered on, as well as Built-In Self Test (BIST) functions that are called periodically to ensure correct operation.

         

        BIST Call Fequency.png

         

        We’re committed to helping home appliance developers pass stringent IEC safety certification testing all the way down to the component level. The combination of our IEC-60730 software package, EFM8 MCU portfolio, Simplicity Studio development tools and extensive documents provides developers with the fastest, easiest path to safety certification for global markets.

         

        Pricing and Availability

        Silicon Labs’ IEC-60730-compliant software package libraries offer everything developers need to achieve IEC-60730 certification for a final product or module. The software package is available now to developers free of charge in Silicon Labs’ comprehensive Simplicity Studio™ suite of tools. For more information on the libraries, visit silabs.com/appliances.

         

        Software Safety Standard.png

      • Webinar: Getting the Most Out of Bluetooth 5

        Siliconlabs | 03/74/2017 | 10:51 AM

        800x300.jpg

         

        Webinar: Getting the Most Out of Bluetooth 5
        Date: Wednesday, May 17, 2017
        Duration: 1 hour

         

        Bluetooth 5 is the most significant update to the Bluetooth specification since the introduction of Bluetooth low energy. With the launch of Bluetooth 5, the technology continues to evolve to meet the needs of the industry as the global wireless standard for simple, secure connectivity. With 4x range, 2x speed and 8x broadcasting message capacity, the enhancements of Bluetooth 5 focus on increasing the functionality of Bluetooth for the IoT. In this webinar, we'll explore the new capabilities of Bluetooth 5 and how to get the most out of it using the new EFR32 Blue Gecko SoCs. If your applications need more range and performance with Bluetooth, this is the webinar for you.

         

        original.png

      • These New EFR32 Gecko SoCs are Ready for Whatever Comes Next

        Lance Looper | 03/73/2017 | 08:31 AM

        We’re at Embedded World in Nuremberg, Germany this week where we’ve announced an expansion to our Wireless Gecko SoC portfolio. The new EFR32xG12 SoCs can help make sure your devices are ready for whatever comes next.

         

        Update a protocol? No problem. Add another protocol? No sweat. The new Wireless Gecko has more memory and offers features including over-the-air software updates to support application enhancements and evolving protocol needs in the field. Superior RF performance and robust wireless software stacks make it possible to deliver more reliable, differentiated products to market fast.

         

        Blog_banner-wireless-gecko.jpg

         

        Teach Your Device New Tricks

        The new Wireless Geckos make it easier to add multiprotocol switching capabilities to complex IoT applications, regardless of skill level, while supporting a wider range of multiprotocol, multiband use cases for home automation, connected lighting, and wearables.

         

        Wireless Gecko SoCs support zigbee® and Thread mesh networking, Bluetooth® 5, as well as proprietary wireless protocols, and we’ve optimized the wireless protocol stack architecture for efficient switching between different network protocols. Now device makers can use a single chip to commission and configure devices over Bluetooth with a smartphone, and then join a zigbee or Thread mesh network to connect to dozens or even hundreds of end nodes.

         

        We also released new Jade and Pearl Gecko 32-bit MCUs that make it possible for developers to easily add touch-control interfaces, powerful security capabilities and multiple low-power sensors to IoT devices.  The new MCUs are optimized for high performance, low-energy applications and support over-the-air (OTA) updates to deployed end products.

         

        Jade and Pearl Gecko MCUs offer hardware cryptography technology featuring an energy-efficient security accelerator, a true random number generator and a security management unit (SMU), making it possible to secure connectivity for IoT devices without sacrificing battery life. The encryption/decryption accelerator runs the most up-to-date security algorithms with higher performance and lower power than conventional software implementations. In addition to the conventional memory protection unit, the SMU enables software to set up fine-grained security for peripheral access. Peripherals may be secured by hardware on an individual basis, allowing only privileged access to the peripheral’s register interface.

         

        The new MCUs offer more flash memory (up to 1024 kB with a dual-bank architecture) and RAM (up to 256 kB) than previous-generation Jade and Pearl Gecko products, making it easier to develop feature-rich embedded applications supporting real-time operating systems such as Micrium OS. The dual-bank memory architecture enables robust in-field update capabilities after product deployment.

         

        The new Jade and Pearl MCUs are software compatible with the full range of EFM32 Gecko MCUs and Wireless Gecko SoCs, enabling broad software reuse and reduced development time and cost for developers.

         

         

      • Elite Athletes and Wearables: Turning Data into Results

        Lance Looper | 03/72/2017 | 05:13 PM

        So far one of my favorite topics from SXSW 2017 has been this discussion on the impact wearables can have on college and pro sports teams. NBA analyst Tom Haberstroh moderated a panel that included UC San Francisco sleep and performance expert Cheri Mah and Marcus Elliott, the founder of P3 Peak Performance.

         

        Wearables Panel.jpg

         

        Their discussion focused on the performance aspects of wearable technology, specifically how the data generated by these devices is being used by athletes and teams to improve movement, identify problem mechanics, and even predict potential injuries. They also touched on how far the technology has come, and how far it still needs to go in order to realize some of the most promising benefits.

         

        Elite athletes are ravenous for anything that will give them a competitive edge, and wearables offer at least some insight that could be used to improve some aspect of their game. For example, some of the work Elliott is doing at P3 Peak Performance involves measuring the force applied to an athlete’s body during certain movements. Through inertial sensors, he can tell if there’s an imbalance in the way a basketball player lands after a rebound. If the player’s movement overstresses part of the body, the athlete risks injury over time. That conclusion isn’t exactly groundbreaking, but without the insight provided by the technology, a player may not know there was a problem until it’s too late. Of the four major league sports, Elliott called out the NBA as being the most rigorous. Grueling schedules that include cross-country travel take their toll on players, and a player may overcompensate for a tender left foot by overtaxing the right. Over the course of an 82-game season, an injury is almost a certainty.

         

        Because of the money involved, teams have an incentive to invest in this area. Of course, as athletes, and even teams, become more interested in collecting actionable intelligence, they’re generating more data than they can process. This avalanche of data is leading teams to hire data scientists to interpret it.

         

        The panel pointed out that this mirrors what we experience in our everyday lives, but where it might take 20 years for you to notice a bad knee caused by favoring an ankle, these issues show up much sooner in the fast and furious, and sometimes short, career of a professional athlete.

         

        There wasn’t a technologist on the panel, but there were several in the audience and they asked some great questions during Q&A about limitations of current technology and onboarding of data.

         

        What obstacle do you think the wearables market will overcome next?

      • Essential Best Practices and Debugging Tips for EFM32 Project Success - Part 6

        lynchtron | 03/69/2017 | 01:49 PM

        debug_teaser.png

        In this final part 6 of the Best Practices and Debug Tips chapter, you will learn more about probably the most valuable skill in your embedded toolbox: debugging your live code in hardware.

         

        Debug Issues Like a Genius
        The Simplicity Studio IDE’s most powerful feature is the ability to debug live programs running on your EFM32 device over a JTAG connection. With this tool, you are not limited to simply programming your embedded device and then hoping it works. Instead, you have introspection tools to interrogate the software as it runs, as well as to artificially alter the value of variables and memory for test coverage or to track down hard-to-find bugs.

         

        1. Build the project before running the debugger
        Whenever you begin debugging, build the code first rather than going straight to the debugger. The debugger icon in the toolbar will do both steps at once, but if your build has errors, the debugger will still launch after the failed build, which results in confusing errors from loading a program into the debugger that doesn’t exist.


        2. Turn on all debugging features
        Make sure that you are using a “Debug” build configuration by selecting the appropriate project from the drop down menu next to the debug icon in the toolbar.

         

        debug_as.png

        Also ensure that your debug build has no optimizations (the default) and that the debug level is set to -g2 or -g3, as found in the Project > Properties > C/C++ Build Settings menu.


        In order to get the maximum benefit out of your debug session, you will want to make sure that the debugger stops and breaks in whenever anything “bad” or unusual happens. Under the dropdown next to the debug button, click on Debug Configurations. In the Debug Configurations window, in the Exceptions tab, click on all of the types of exceptions that you would like the debugger to catch. These are not enabled by default.

         debug_options.png

         

        3. Unlock debug access when the debugger refuses to load
        Sometimes when you develop code for embedded applications, you do something bad that locks up your MCU early in the boot sequence. There is no OS or other supervisor in the system to rescue your program. When this happens, the normal flash routine over JTAG fails, and the IDE informs you that the debugger cannot start. You should try all of the obvious things, like disconnecting the hardware from your computer and perhaps restarting your computer or the Simplicity Studio IDE, but if that doesn’t fix it, you can try unlocking debug access to your MCU.


        The Unlock Debug Access option in the Flash Programmer tile will erase the contents of the flash memory on your MCU and leave the MCU in a state that allows it to be programmed again. Just beware that if you flash it again with code that does something bad early in the boot sequence, you will need to unlock debug access again.


        Just remember that you have to make sure that your Detected Hardware is set to Detect Target Part in the main Simplicity Studio home screen, and that your Debug Mode is set to MCU or Out in the Kit Manager in order for the Flash Programmer to display the proper options. See the Connect to your own PCB over JTAG as if it were a Starter Kit section in this chapter for more information on how to connect to your hardware, whether it is a Starter Kit or your own hardware board.

        flash_programmer.png

         

        4. Use a blank project to fix debugger launch issues
        Whenever you are starting a new project, build a completely empty project (e.g., empty_project.c) first. Then, launch the debugger on that empty project to prove that all of the connections between the host computer and the EFM32 hardware platform are working. Sometimes, you will get mismatches between the type of EFM32 part detected and the type of EFM32 part that is specified in your project. Solve that problem before you try to add your own code and library files to the mix.


        Once you have your code up and running, sometimes things just break and your code refuses to load into your hardware. After you have tried to unlock debug access and other hardware tricks in the previous steps, your problem might lie in the project settings. Something could have changed in your environment. Load an empty project and see if you can get it to work again, without any of your own code. If it works, then the problem is buried somewhere in your project settings. You can choose to hunt down the problem in your existing project or create a new project and copy all of your source files into the new project, and then modify the Project Properties settings of the new project to match that of the old project. This process should get you back in business.


        5. Write code to help the debugger
        When developing code to be used during a debugging session, there are some things that you can do to help make the process easier:

        • Define the size of arrays that you want to examine in watch windows with compile-time constants (a short sketch follows this list). If the debugger can tell how big an array is at compile time, it can offer you the appropriate number of elements to inspect. Otherwise, you will have to manually add a range of indices in the debugger watch window every time you launch the debugger.
        • Use a volatile variable to control execution of a block of code that you don’t want the MCU to automatically execute every time. If you are debugging code that erases a big chunk of flash memory, or some other dangerous operation right at startup, it will run every time the MCU boots. You can prevent that by encapsulating the dangerous code inside a block that will only execute if you use the debugger to make it happen. For example:
          volatile int i=0;
          if (i)
          {
          // Do something dangerous, which could reset the system, etc.
          }

          Put a breakpoint on the if statement, then use “Move to Line” in the debugger to jump to the first line inside the if statement that you want to execute. On subsequent resets, the code inside the if statement will never execute on its own. The volatile declaration of i prevents the compiler from optimizing away the block of code inside the if statement.
        • Don’t use identical variable names at multiple scopes within a module. The debugger can get confused and show you the wrong value when you hover your mouse on a variable in a live debugging session. If you have a global variable named foo and a local variable named foo, the debugger doesn’t always show you the correct value of the foo variable based on the scope. Keep variable names unique and the debugger will give you the proper value of your inspected variables.
        • Don’t set breakpoints or play around too much in the IDE while your project is running live and Simplicity Studio is looking for a breakpoint. This can alter the timing of your code and cause it to miss interrupts. Keep your hands off the IDE as much as possible until a breakpoint is reached.
        • If you have trouble setting breakpoints on a line of code, it means that either you have too many breakpoints already set in the current project, or the compiler thinks that the code that you are trying to break on is unreachable. For example, an if (0) statement will never execute, and the compiler knows this. Disable all other breakpoints, restart the IDE if necessary, clean the project and rebuild. Then, try setting breakpoints again.
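
        As a small illustration of the first tip above (the names are just examples, not from the project code):

          // A compile-time size lets the debugger watch window show every element
          // of the array without you specifying an index range by hand.
          #define NUM_SAMPLES 16
          static uint32_t samples[NUM_SAMPLES];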

        6. Attach to running instances without resetting the MCU
        If you are running a project on your own hardware that was loaded with a Debug configuration, you can attach to it and inspect it without resetting the device. This is particularly helpful if your system has mysteriously locked up and you want to get in there to inspect things without starting a new debugging session. Note that sometimes the act of connecting the JTAG cables from the Starter Kit to your custom project can cause a reset. You may be able to mitigate this issue by grounding your custom project to your Starter Kit first with a jumper cable, and then connecting the JTAG cable, or just leave the JTAG cable attached throughout the test process.


        In order to attach to a running project, select Run > Attach to menu option and then select the correct project. Note that the version of software running on your target hardware and the project selected from the Attach to menu must be the same version or you will get nonsensical results. Once code compiles and the debugger attaches, you may have to double-click on the entry in the top left window, called Silicon Labs ARM MCU: etc., expand the dropdown icon next to it, and then find the <project>.axf file. Once you find that and highlight it, you can press the pause button or set breakpoints in your code as if you had launched it from the debug tool.

        debug_scope.png


        7. Force activity on a configured peripheral
        When you are configuring a peripheral for the first time, it can be a struggle to get that first glimmer of activity confirming that you have the right configuration of GPIO pins, routes, and peripheral settings, and that you have properly interpreted all of the necessary instructions.  In summary, here are the steps necessary to enable any peripheral in an EFM32 device (a minimal skeleton follows the list):
        1) Enable the GPIO clock
        2) Enable the peripheral clock (e.g. USART, I2C, DAC) and any other necessary clock sources
        3) Configure and enable the peripheral
        4) Route the pins used by the peripheral through to the GPIO
        5) Configure the pins used by the peripheral in the GPIO (e.g. push-pull, input)
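
        The sketch below maps those five steps onto emlib calls for a hypothetical USART1 in asynchronous mode; the peripheral, location, and pins are examples only, so substitute your own.

          // Skeleton of the five steps, using USART1 at location 1 (TX on PD0) as an example
          CMU_ClockEnable(cmuClock_GPIO, true);                            // 1) GPIO clock
          CMU_ClockEnable(cmuClock_USART1, true);                          // 2) Peripheral clock
          USART_InitAsync_TypeDef init = USART_INITASYNC_DEFAULT;
          USART_InitAsync(USART1, &init);                                  // 3) Configure and enable the peripheral
          USART1->ROUTE |= USART_ROUTE_TXPEN | USART_ROUTE_LOCATION_LOC1;  // 4) Route the pin to the GPIO
          GPIO_PinModeSet(gpioPortD, 0, gpioModePushPull, 1);              // 5) Configure the pin mode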


        To be sure that the GPIO pins for your chosen peripheral are connected to the appropriate place in your hardware, you can create an empty project and simply set or clear the GPIO. Verify that you see the change on an oscilloscope or multimeter. Once you are sure that you are connected to the correct GPIO pins, you can resume debugging the peripheral you are trying to program. If you are like me, you forgot to enable the peripheral clock.
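
        For the pin check itself, a throwaway sketch might look like the following (PC0 is just a placeholder for whichever pin you are verifying):

          // Toggle the pin forever and confirm the square wave on a scope or meter
          CMU_ClockEnable(cmuClock_GPIO, true);
          GPIO_PinModeSet(gpioPortC, 0, gpioModePushPull, 0);
          while (1)
          {
                GPIO_PinOutToggle(gpioPortC, 0);
                for (volatile int i = 0; i < 10000; i++);   // Crude delay so the edges are easy to see
          }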


        8. Don’t use print statements over UART to debug timing issues
        Debugging on an MCU is a tricky business because the limited resources mean that any attempt to observe the system will impact the performance of the system. The timing changes caused by debug print statements can cause the issue that you are trying to find to disappear. You don’t have the luxury of multiple cores and deep memory resources to handle such debug printing in the background.


        You can mitigate the timing effects of debug print statements by using a print buffer with an interrupt-based mechanism. If you were to simply place a character from a print statement onto the UART and then wait for that character to finish before placing another character on the UART, you would slow down execution of your single-threaded embedded application. By using a ring buffer and an interrupt to feed the UART, you lessen the impact on your embedded application.
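
        Here is a minimal sketch of such an interrupt-fed print buffer, assuming USART0 is already configured for TX as earlier in this series.  The buffer size and the dbg_putc() name are illustrative only, and you would also need to enable USART0_TX_IRQn in the NVIC during setup.

          // Illustrative ring buffer that feeds USART0 from the TXBL interrupt
          #define DBG_BUF_SIZE 256
          static volatile char dbg_buf[DBG_BUF_SIZE];
          static volatile uint16_t dbg_head = 0, dbg_tail = 0;

          void dbg_putc(char c)
          {
                uint16_t next = (dbg_head + 1) % DBG_BUF_SIZE;
                if (next == dbg_tail)
                      return;                                   // Buffer full: drop the character
                dbg_buf[dbg_head] = c;
                dbg_head = next;
                USART_IntEnable(USART0, USART_IEN_TXBL);        // Start (or continue) draining the buffer
          }

          void USART0_TX_IRQHandler(void)
          {
                if (dbg_tail != dbg_head)
                {
                      USART_Tx(USART0, dbg_buf[dbg_tail]);      // TXBL is set, so this returns immediately
                      dbg_tail = (dbg_tail + 1) % DBG_BUF_SIZE;
                }
                else
                {
                      USART_IntDisable(USART0, USART_IEN_TXBL); // Nothing left to send
                }
          }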


        If a UART is not available for debug output, or if the time spent servicing the UART print statements is still causing timing issues, you can place all of your debug print statements in a debug buffer and examine the contents of the buffer through the Simplicity Studio debugger. Simply examine the value of your print buffer in the debugger and it will automatically translate the buffer to ASCII text, allowing you to see whatever message was transferred into it before the error occurred.


        Finally, toggling different GPIO pins in various places in the code can help illustrate the flow of the program and show whether things are executing in the order you expect. You can connect those GPIOs to LEDs for events that happen slowly enough to observe, or to an oscilloscope, which will provide precise timing information.
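
        For instance (LED0 on the Wonder Gecko Starter Kit happens to be on PE2, but any spare pin works, and do_something_interesting() is just a placeholder for your own code):

          // Bracket the code of interest with a pin toggle and watch it on an LED or scope
          GPIO_PinOutSet(gpioPortE, 2);      // Entering the section under test
          do_something_interesting();        // Placeholder for your own code
          GPIO_PinOutClear(gpioPortE, 2);    // Leaving the section under test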


        Conclusion
        I hope that this guide has helped you gain a better understanding of how to set up, develop, and debug your project for success. Don’t get discouraged if you run into a lot of issues. Embedded development is never easy. It requires a serious investment of effort and time to get it right. There are opportunities everywhere to learn something new. Stick with it, and then gloat over your amazing accomplishment!