The Project board is for sharing projects based on Silicon Labs components with other community members. View Projects Guidelines ›

Projects

    Publish
     
      • IoT Party Button

        Mark Mulrooney | 02/33/2018 | 05:58 PM

        The following is a project write-up from a recent hackathon that took place with the Silicon Labs MCU and Micrium Application Engineering teams. The members of this team were Mark Mulrooney, Michael Dean, Alan Sy and Joe Stine.

         

        Project Summary:

        The goal of this project was to create an IoT-enabled Party Button that would allow a user to press a button and trigger a number of party lights to all turn on at the same time. This was accomplished using a combination of Silicon Labs EFM32GG11 Starter Kits, Silicon Labs EFR32MG12 Starter Kits, Silicon Labs Smart Outlets, Silicon Labs Si8751-KIT Isolators, a Dream Cheeky Big Red Button, and a lot of party lights/disco balls. Using this hardware, a signal was sent from the Big Red Button to an MQTT broker, which then propagated out to Giant Gecko kits that were listening for it. Some of the GG11s had the isolator connected directly to the board and would toggle their specific party light; other GG11s were connected over serial to a Mighty Gecko kit. The Mighty Gecko would send a ZCL on/off message over Zigbee to other Mighty Geckos or to the Silicon Labs Smart Outlet to control the remaining party lights.

         

        Project Background:

        Since our team typically works on the EFM32 platform or with software other than Micrium OS, our main goals of this project were to become familiar with the EFR32 chips/tools and to use Micrium OS to add internet connectivity to a LAN IoT ecosystem such as a Zigbee network. As we found out, the project did prove to be a good exercise in both the EFR32 and Micrium OS.

        The project can be divided into three main sections: MQTT, Zigbee and Isolation. Because the project was somewhat complicated and involved a lot of moving parts, this division allowed our team members to work on different parts of the project without holding up the rest of the team. The following sections describe the different parts of the project and how they operated.

         

        MQTT:

        MQTT Diagram

        The IoT Party Button project used MQTT as the communication protocol between the Big Red Button and the GG11 nodes. MQTT is a lightweight publish-subscribe IoT protocol that sits on top of TCP. For our project we used the Mosquitto broker as our MQTT broker for all of the clients to connect to. The Mosquitto broker was hosted on an AWS EC2 instance and implemented a simple username/password for some basic security. In a real-world application you would ideally use TLS in conjunction with MQTT to encrypt your connection to an MQTT broker.
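        To make the "lightweight" claim concrete, the sketch below builds a minimal MQTT 3.1.1 PUBLISH packet for a QoS 0 message. The topic name and payload are hypothetical placeholders (the actual topic used in the project is not given above); in practice a client library such as the Micrium OS Net MQTT client or libmosquitto assembles this for you.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Build a minimal MQTT 3.1.1 PUBLISH packet (QoS 0, remaining length < 128).
 * Topic and payload here are hypothetical placeholders.
 * Returns the total packet length written into buf. */
static size_t mqtt_publish(uint8_t *buf, const char *topic, const char *payload)
{
  size_t tlen = strlen(topic);
  size_t plen = strlen(payload);
  size_t i = 0;

  buf[i++] = 0x30;                        /* PUBLISH, DUP=0, QoS=0, RETAIN=0 */
  buf[i++] = (uint8_t)(2 + tlen + plen);  /* remaining length (single byte)  */
  buf[i++] = (uint8_t)(tlen >> 8);        /* topic length, MSB               */
  buf[i++] = (uint8_t)(tlen & 0xFF);      /* topic length, LSB               */
  memcpy(&buf[i], topic, tlen);   i += tlen;
  memcpy(&buf[i], payload, plen); i += plen;  /* no packet ID at QoS 0 */
  return i;
}
```

        A two-character payload on a 13-character topic costs only 19 bytes on the wire, which is why MQTT suits constrained nodes like the GG11.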

        The project used MQTT for control of the trigger for a few reasons. The biggest reason was flexibility. Initially, during the planning of the project we discussed the possibility of plugging the Big Red Button into a Giant Gecko. This would have allowed us to use the Micrium OS USB stack to detect the button press and Micrium OS Net’s MQTT client to publish the button push to the MQTT broker. Since we only had a few days to complete this project we were unsure if there would be enough time to complete this portion, so we set that part aside.

        For testing purposes, we created a simple button simulator in Node.JS that could run on anyone’s computer and publish a message to the MQTT topic for a button press. Since we did not have enough time to complete the USB portion on a GG11, we ended up using a Node.JS script to listen for a button press while the button was plugged into one of our computers. When the Node.JS script detected the button press it sent an MQTT message to the trigger topic.

        Another advantage of using MQTT for control of the trigger is it opened up the ability to have the trigger sent from a number of places. We use Slack as a communication tool in the office, but we also have a helper bot that you can send commands to. It is possible that we could have had the bot send the same MQTT command to the MQTT broker to trigger the IoT party.

        All of the GG11s that were subscribed to the MQTT topic for the button trigger used Micrium OS and the Micrium OS Network MQTT client. Once the Micrium OS portion was set up, the subscribed nodes had one of two functions: either trigger an isolator connected to it or send a serial command to a Mighty Gecko to trigger its local network via Zigbee. For simplicity's sake, we used the same application on all of the GG11s, which let us program them all without individual code changes.

         

        Zigbee:

        Zigbee diagram

         

        Because the Giant Gecko kit only has Ethernet, we found it was not practical to use a Giant Gecko for every node in our project. Instead, it was easier to use one Giant Gecko with an Ethernet connection and a Mighty Gecko to send the command out wirelessly to all nodes in its network. This also allowed us to use some of the Silicon Labs Smart Outlets as nodes in our project.

        Zigbee networks typically have three different types of nodes in them: Coordinator, Router and End Device. In our project, the coordinator was connected to the Giant Gecko via serial to receive the on or off command and would then relay that command out to all of the nodes in the network. The coordinator was configured using AppBuilder in Simplicity Studio. AppBuilder allows you to specify what packages should be included and generates the necessary code. Since the Giant Gecko was connected to the Mighty Gecko over serial, we enabled the command line as a simple way for the Giant Gecko to send commands to the Mighty Gecko.
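        Concretely, the serial traffic from the Giant Gecko is just text lines typed into the Mighty Gecko's command line. A sketch of what the Giant Gecko might send, assuming the standard EmberZNet ZCL CLI (exact syntax varies by SDK version, and the destination node ID and endpoints below are placeholders, not values from the project):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format the two CLI lines that buffer a ZCL On command and send it to a node.
 * "zcl on-off on" stages the command; "send <nodeId> <srcEp> <dstEp>" transmits it.
 * The node ID and endpoints (1, 1) are placeholder values for illustration. */
static int build_on_cmd(char *out, size_t len, unsigned short dest_node_id)
{
  return snprintf(out, len, "zcl on-off on\nsend 0x%04X 1 1\n", dest_node_id);
}
```

        On the real hardware these strings would simply be written to the UART connecting the two kits.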

        We took advantage of the Zigbee Cluster Library in this project to simplify the format of the on/off message being sent to the nodes. Also, the Silicon Labs Smart Outlets use the ZCL on/off library by default, so we did not have to do any configuration on the outlets. Once the command line and the ZCL on/off library were enabled, AppBuilder generated our project and we were able to flash our coordinator.

        The rest of the nodes in our project were configured as either routers or end devices. As with the coordinator, we used AppBuilder to generate a project that listens for ZCL on/off messages, but in this case we did not need the command line. We did, however, have to add code to the ZCL on/off hooks to toggle a GPIO, which would in turn toggle the power switch connected to our party lights.
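        As a sketch of those hooks (the callback names are modeled on the ones AppBuilder generates for the ZCL on/off cluster; the GPIO calls are stubbed here for illustration, standing in for the emlib GPIO functions driving the isolator input):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in state for the GPIO pin driving the Si8751 isolator input. */
static bool light_on = false;

/* Stubs for the emlib GPIO calls used on the real hardware. */
static void GPIO_PinOutSet_stub(void)   { light_on = true;  }
static void GPIO_PinOutClear_stub(void) { light_on = false; }

/* Modeled on the ZCL on/off command callbacks generated by AppBuilder. */
bool emberAfOnOffClusterOnCallback(void)
{
  GPIO_PinOutSet_stub();     /* drive the isolator: party light on */
  return true;               /* command handled */
}

bool emberAfOnOffClusterOffCallback(void)
{
  GPIO_PinOutClear_stub();   /* party light off */
  return true;
}
```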

         

        Power Switching:

        To be energy friendly, we decided our system should control the power of the disco light. Our MCU board runs off DC power, but the disco light is powered by AC from a standard wall outlet, so we needed to control AC power from a DC system. To do this safely, we had to isolate the AC power from the DC power and use a high-power MOSFET to switch the AC power to the disco light on and off. Fortunately, Silicon Labs makes an evaluation kit that does just this.

        The Si8751-KIT contains an evaluation board that takes care of isolating two power systems and allows for a digital input on the low voltage, DC side to control the MOSFET on the high power, AC side. Set up was as simple as configuring a few jumpers on the board, connecting the low power side to the VDD, GND, and a GPIO of the MCU, and then connecting the high-power side to the AC outlet and the disco light.

        We also had another disco light that operated from 12V DC, and fortunately, the SI8751-KIT also has high voltage DC isolation capabilities. So, we used a second Si8751-KIT to isolate the 12V DC from our low voltage DC system on the wireless MCU.

         

        Lessons Learned:

        This project required a fine balance between several different protocols all within the same network. This meant there were a lot of moving parts to deal with so sometimes it was difficult to determine where a problem may be occurring. Over the course of the week our debugging skills became a little more fine-tuned but we definitely had some hiccups at first.

        By far our biggest challenge was working with Zigbee. This was mainly because none of us were familiar with the tools or the development kits. The Zigbee tools, as we found out, have a bit of a learning curve and a few tricks to them. We also got unlucky when the first example project we chose to try didn’t work because of a software problem in a newly released SDK. After determining the issue was with the project we moved on to a known working example, Dynamic Multi-Protocol. Once we started working with that project we quickly realized that we were using an example that had a lot of extra overhead we did not need and was confusing us.

        After our failed experiments with some sample projects we decided to start from scratch and build up our own project in the App Builder. After jumping through a few hoops we were able to get a project configured the way we wanted. We found that starting small and building off that was a much better approach than trying to use a complicated example and trim off the excess features. We also found that complicated projects like Zigbee can have a steep learning curve and we underestimated the amount of time it would take to complete the Zigbee portion. Luckily, we were able to complete the Micrium OS portion on the Giant Gecko rather quickly which gave us extra time to focus on Zigbee.

         

        Next Steps:

        Due to some issues with our Zigbee configuration, our project was not complete at the end of the hackathon week. Our final presentation had the ability to send a message from the Big Red Button to the MQTT broker and down to the EFM32GG11 boards, the ability to send a serial command from the EFM32GG11 to the EFR32MG12, and the ability to switch the isolators from either the EFM32GG11 or EFR32MG12. The one gap in the project was sending the ZCL on/off message correctly to all of the nodes in our network. An obvious next step for this project would be to rectify the issues in our Zigbee network configuration.

        Beyond getting the Zigbee network configured correctly, we had a few other improvements that could be implemented. First, the Big Red Button could be connected to an EFM32GG11 running Micrium OS USB Host to read the button state and send it via MQTT using Micrium OS Network. The second improvement we discussed was hooking up a Slack chat bot with a command to trigger the party instead of using the Big Red Button.

         

        Conclusion:

        While we were not able to get the project working as intended, it proved to be a very valuable exercise to explore the EFR32MG12 and a fun way to do it.

      • Wireless PC Remote for volume and media control

        BrianL | 02/33/2018 | 04:00 PM

        Recently, our MCU Applications team and our Micrium OS team decided to spend a few days, in teams, on a "Hackathon". This gave us the opportunity to work on a larger, real-world application in an effort to gain more insight into our products and their uses.

        Our team consisted of Brian Lampkin, Janos Magasrevy, and Yanko Sosa. For our project, we decided to create a PC media controller for wireless volume and media control. This would consist of a wireless USB dongle, connected to a wireless remote controller, to provide media controls such as volume up/down, next track/last track, mute, etc.

        1. Requirements

        1.1 Hardware

        1. A USB ‘Dongle’ to provide wireless connectivity to the controller.

          This required a USB interface MCU to communicate with the host PC and a radio MCU to communicate with the remote. For the USB MCU, we chose an EFM32HG, since it is relatively small, and our application – a simple UART to USB HID command bridge – would require little flash. For our Radio MCU, we chose an EFR32FG12 device, which could cover any proprietary protocol we chose to implement. This would provide our UART to Wireless bridge.
           
        2. A wireless ‘Remote’ to provide the user interface for the media controller.

          We chose another EFR32FG12 radio MCU, to pair with the other on the USB dongle. Since this was to be a battery powered remote, we needed an MCU that could be run in a low duty-cycle, low power mode. To provide the user interface, buttons and a joystick on an expansion board were used.

          The completed remote and dongle hardware, using an EFM32HG STK with a Wireless Expansion Board and EFR32FG12, along with an EFR32FG12 Wireless STK with a Joystick Expansion Board, are shown below:

         

        1.2 Software

        An additional requirement was added – the project must integrate Micrium OS in some manner. We chose to implement this on the EFR32FG12 wireless devices to help manage wireless connectivity and low power features.

        2. System Overview

        The system block diagram is as follows:

         

        Button and joystick input is taken by the remote’s Flex Gecko MCU and converted into wireless packets that represent media commands. These are transmitted to the dongle’s Flex Gecko, then converted into UART transmissions to the dongle’s Happy Gecko MCU. Finally, these are interpreted as HID media control commands and sent to the host PC over USB.

        The joystick expansion board was mapped to the following media control functions:

         

        2.1 Wireless Protocol

        Our project has a very simple wireless communication requirement: when a button is pressed on the remote, the button status must be transmitted from the remote to the dongle’s receiver. Since there are few functions, a single-byte payload was used to transmit this data. The remote never needs to receive any information from the dongle, so the dongle can be kept in RX mode while the remote transmits a byte whenever the state of its buttons changes. This is an extremely simple communication protocol, so we decided to use the lower-level Radio Abstraction Interface Layer (RAIL) directly rather than a stack such as Zigbee or Connect.
         

        Since no stack is used, the protocol is effectively proprietary. 2.4 GHz was chosen for the radio’s communication band, as opposed to a sub-GHz band, as this allows for a smaller antenna, useful for a handheld remote.
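        The antenna-size argument is simple arithmetic: a quarter-wave monopole at 2.4 GHz is roughly a third the length of one at a sub-GHz frequency such as 868 MHz. As a quick check:

```c
#include <assert.h>

/* Quarter-wavelength in meters for a given carrier frequency in Hz. */
static double quarter_wave_m(double freq_hz)
{
  const double c = 299792458.0;   /* speed of light, m/s */
  return (c / freq_hz) / 4.0;
}
```

        At 2.4 GHz the quarter-wavelength is about 3.1 cm, versus about 8.6 cm at 868 MHz, which matters for a handheld remote.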

        2.2 Energy Concerns

        As a battery-powered device, low energy consumption is a huge priority for the remote. However, since the dongle is USB powered, there is little reason to limit the power consumption there. Thus, the dongle can be awake and in RX mode continuously with little drawback when connected to the PC's USB. On the remote side, however, consideration was given to keeping the device in lower energy modes whenever possible. Because the dongle is always in RX mode, we can effectively keep the remote in a low energy state until a button press is made, triggering a new media function update. In our design, this means that the remote only wakes to transmit a packet, then immediately re-enters sleep mode.
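        To put rough numbers on this (all values hypothetical for illustration, not measured on our hardware): suppose the remote sleeps at 2 µA, transmitting costs 10 mA for 1 ms, and the user presses a button 100 times an hour. The transmit duty cycle is then so small that the average current is dominated by sleep:

```c
#include <assert.h>

/* Average current for a sleep/wake duty cycle.
 * All inputs are hypothetical illustration values, not measurements. */
static double avg_current_a(double i_sleep_a, double i_active_a,
                            double t_active_s, double events_per_s)
{
  double duty = t_active_s * events_per_s;          /* fraction of time awake */
  return i_sleep_a * (1.0 - duty) + i_active_a * duty;
}
```

        With these assumed numbers the average draw is about 2.3 µA, so even a small coin cell would last for years.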

        3. The Dongle

        3.1 USB HID Media Device

        The first step in the project was to create a device that could communicate with the PC as a media controller. We decided to implement a HID device, which allows driverless communication with a PC host for a limited set of known functions. For this project, we implemented what is called a USB HID “Consumer Control” device, described in the table in section 15, "Consumer Page", of the USB HID Usage Tables document available on USB.org.

        This interface includes many of the media controls you would normally use with a media application on a PC (playing video, music, etc.): Play, Pause, Record, and so on. In this project, we chose to implement the following commands on our remote:

        1. Play/Pause – ID: 0xCD
        2. Scan Next Track – ID: 0xB5
        3. Scan Previous Track – ID: 0xB6
        4. Mute – ID: 0xE2
        5. Volume Increment – ID: 0xE9
        6. Volume Decrement – ID: 0xEA
        7. Play (Unused) – ID: 0xB0
        8. Stop (Unused) – ID: 0xB7

        We eventually decided not to use the Play and Stop commands, as the Play/Pause command that we found implemented the functionality we desired, and allowed us to reduce the total number of inputs to six, which would map neatly to our expansion board’s two buttons and joystick with four cardinal directions.

        3.1.1 HID Report Descriptor

        To interface with a host using the HID interface, a HID report descriptor, describing the functionality of the device, must be constructed. We used the HID Usage Tables document, which included several examples (Specifically, Appendix A.1 Volume Control contained a useful example on volume +/-), and the HID Descriptor Tool to construct the following HID Descriptor:

        // HID Report Descriptor for Interface 0
        const char hid_reportDesc[39] SL_ATTRIBUTE_ALIGN(4) =
        {
          0x05, 0x0C,       // USAGE_PAGE (Consumer)
          0x09, 0x01,       // USAGE (Consumer Control)
          0xA1, 0x01,       // COLLECTION (Application)
          0x15, 0x00,       //   LOGICAL_MINIMUM (0)
          0x25, 0x01,       //   LOGICAL_MAXIMUM (1)
          0x75, 0x01,       //   REPORT SIZE (1)
          0x95, 0x08,       //   REPORT COUNT (8)
          0x09, 0xCD,       //   USAGE (Play/Pause)
          0x09, 0xB5,       //   USAGE (Scan Next Track)
          0x09, 0xB6,       //   USAGE (Scan Previous Track)
          0x09, 0xE2,       //   USAGE (Mute)
          0x09, 0xE9,       //   USAGE (Volume Increment)
          0x09, 0xEA,       //   USAGE (Volume Decrement)
          0x09, 0xB0,       //   USAGE (Play)
          0x09, 0xB7,       //   USAGE (Stop)
          0x81, 0x02,       //   INPUT (Data,Var,Abs)
          0x75, 0x08,       //   REPORT SIZE (8)
          0x95, 0x01,       //   REPORT COUNT (1)
          0x81, 0x03,       //   INPUT (Cnst,Var,Abs)
          0xC0              // END_COLLECTION
        };
        

        This constructs a HID report with two bytes of data. The first byte implements 8 bit options, one for each of the HID commands. The second is a placeholder byte, unused by our application (but could be used for additional functions in the future).
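        Since usages are assigned to report bits in declaration order (the first usage, Play/Pause, is bit 0), a first report byte of 0x10, for example, means Volume Increment. A small sketch of decoding that byte, with the bit-to-name table taken from the descriptor above:

```c
#include <assert.h>
#include <stdint.h>

/* Bit positions follow the usage order in the HID report descriptor:
 * the first usage (Play/Pause) is bit 0, the next (Scan Next Track) bit 1, etc. */
static const char *decode_first_set_bit(uint8_t report)
{
  static const char *names[8] = {
    "Play/Pause", "Scan Next Track", "Scan Previous Track",
    "Mute", "Volume Increment", "Volume Decrement", "Play", "Stop"
  };
  for (int bit = 0; bit < 8; bit++) {
    if (report & (1u << bit)) {
      return names[bit];
    }
  }
  return "None";   /* all buttons released */
}
```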

        As a basis for our EFM32HG USB project, we used the usbhidkbd example, which implements a USB HID Keyboard. The conversion for this was rather simple, as the USB side only required a quick swap from the HID Keyboard descriptor to the new HID Consumer Device descriptor, above. With this change, the EFM32HG device now enumerated on the host PC as a media controller.

        3.1.2 USB to Radio Interface

        The next step in the process was to develop an interface that could communicate between the dongle’s radio MCU and the USB MCU to tell the PC when a media button had been pressed. For this, we implemented a simple UART interface.

        The media control functions are represented by single bits in the HID report's first byte's bitfield, as described below:

        typedef enum {
          PLAY      = 0x01,  // bit 0: Play/Pause
          SCAN_NEXT = 0x02,  // bit 1: Scan Next Track
          SCAN_LAST = 0x04,  // bit 2: Scan Previous Track
          MUTE      = 0x08,  // bit 3: Mute
          VOL_UP    = 0x10,  // bit 4: Volume Increment
          VOL_DOWN  = 0x20,  // bit 5: Volume Decrement
        } reports_t;
        

        On the dongle side, the radio MCU merely sends one byte of data over UART with the appropriate bit set for the desired media function. This is then transmitted over USB by sending a HID report. When the EFM32HG MCU receives a byte over UART, the report is updated:

        void USART0_RX_IRQHandler(void)
        {
        	USART_IntClear(USART0, USART_IF_RXDATAV);
        	report = USART0->RXDATA;
        }
        

         

        Then, in the main loop, if the report has changed since the last one that was sent to the host, it is sent over USB:

            if (report != lastReport) {
        	  /* Pass keyboard report on to the HID keyboard driver. */
        	  HIDKBD_KeyboardEvent(&report);
        	  lastReport = report;
            }
        

        Note the function names and comments left over from the usbhidkbd example - a result of the limited modifications we had to make to this example to implement the media controller.

        3.2 Dongle Radio Receiver

        The Dongle’s radio receiver was built with an EFR32FG12 Wireless MCU, using RAIL as the radio interface layer. The firmware is extremely simple: a single byte packet is received from the remote device, and this packet is transmitted to the EFM32HG USB device over UART.

        3.2.1 Radio Configuration

        The EFR32FG12’s radio was configured using AppBuilder. We used the default settings for a 2.4 GHz, 1 Mbps PHY, modifying it for single byte packets. No other changes were made to this default profile’s settings.

        3.2.2 Radio to UART Implementation

        Once configured, the radio initialization is simple – the device’s radio is initialized and put into RX mode, while an RX callback is registered to handle the reception of the packet and its transmission over UART. The device then waits forever in a while loop to receive packets. The initialization routines are simply:

          // Configure RAIL callbacks
          RAIL_ConfigEvents(railHandle,
                            RAIL_EVENTS_ALL,
                            (RAIL_EVENT_RX_PACKET_RECEIVED));
        
          RAIL_Idle(railHandle, RAIL_IDLE, true);
          RAIL_StartRx(railHandle, channel, NULL);
          while (1) {
          }
        

         

        In the RX callback, the packet is received, the radio is put back into RX mode, and the packet is transmitted over UART:

        void RAILCb_Generic(RAIL_Handle_t railHandle, RAIL_Events_t events) {
          report_t packet;
          if (events & RAIL_EVENT_RX_PACKET_RECEIVED) {
            RAIL_RxPacketInfo_t packetInfo;
            RAIL_GetRxPacketInfo(railHandle,
                                 RAIL_RX_PACKET_HANDLE_NEWEST, 
                                 &packetInfo);
        
            // Receive the packet's one-byte payload
            packet = *(packetInfo.firstPortionData);
        
            RAIL_Idle(railHandle, RAIL_IDLE, true);
            RAIL_StartRx(railHandle, channel, NULL);
        
            // TX Packet over UART to EFM32HG
            USART_Tx(USART0, (uint8_t) packet);
          }
        }
        

         

        3.3 The Remote

        The remote has two main components – the user interface and the radio, used for transmitting user inputs.

        3.3.1 User Interface

        The user interface of the remote uses a joystick expansion board, which provides two buttons and an analog joystick for inputs. This expansion board is described in section 8 of this document: https://www.silabs.com/documents/login/user-guides/ug122-brd4300a-user-guide.pdf

        The analog joystick has a single output pin whose voltage changes depending on the direction in which the joystick is pressed. To interface the joystick with the EFR32FG12, the device’s ADC samples the voltage on the joystick’s output every 25 ms, triggered by the RTCC. The voltage is then converted into a direction in the ADC’s interrupt handler.

        #define ADC_MAX_CODES (0x0FFF)
        #define JOY_NONE_THRESH  (0.93 * ADC_MAX_CODES)
        #define JOY_UP_THRESH    (0.81 * ADC_MAX_CODES)
        #define JOY_RIGHT_THRESH (0.68 * ADC_MAX_CODES)
        #define JOY_LEFT_THRESH  (0.55 * ADC_MAX_CODES)
        #define JOY_DOWN_THRESH  (0)
        void ADC0_IRQHandler(void)
        {
          uint16_t sample;
          ADC_IntClear(ADC0, ADC_IF_SINGLE);
        
          sample = ADC0->SINGLEDATA;
        
          if (sample > JOY_NONE_THRESH) {
            joyState = JOY_NONE;
          } else if (sample > JOY_UP_THRESH) {
            joyState = JOY_UP;
          } else if (sample > JOY_RIGHT_THRESH) {
            joyState = JOY_RIGHT;
          } else if (sample > JOY_LEFT_THRESH) {
            joyState = JOY_LEFT;
          } else {
            joyState = JOY_DOWN;
          }
        }
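        The same threshold ladder can be checked off-target. Below is the handler's decision logic refactored as a pure function for testing (the enum values here are local stand-ins; on the real device joyState feeds the HID report bits):

```c
#include <assert.h>
#include <stdint.h>

#define ADC_MAX_CODES (0x0FFF)

/* Local stand-ins for the firmware's joystick states. */
typedef enum { JOY_NONE, JOY_UP, JOY_RIGHT, JOY_LEFT, JOY_DOWN } joy_t;

/* Same threshold ladder as ADC0_IRQHandler above, as a testable function. */
static joy_t joy_from_sample(uint16_t sample)
{
  if (sample > (uint16_t)(0.93 * ADC_MAX_CODES)) return JOY_NONE;
  if (sample > (uint16_t)(0.81 * ADC_MAX_CODES)) return JOY_UP;
  if (sample > (uint16_t)(0.68 * ADC_MAX_CODES)) return JOY_RIGHT;
  if (sample > (uint16_t)(0.55 * ADC_MAX_CODES)) return JOY_LEFT;
  return JOY_DOWN;
}
```

        This style makes it easy to verify the bands: a mid-scale reading of about 60% of full scale, for instance, decodes as "left".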
        

         

        For the pushbuttons, GPIO interrupts were enabled for each pushbutton pin, which update the status of the buttons.

        void BTN_Handler(void)
        {
          bool BTN2, BTN3;
        
          BTN2 = GPIO_PinInGet(BTN2_PORT, BTN2_PIN);
          BTN3 = GPIO_PinInGet(BTN3_PORT, BTN3_PIN);
        
          if (BTN2 == BUTTON_PRESSED) {
            BTN2State = BTN2_PRESSED;
          } else {
            BTN2State = BTN2_RELEASED;
          }
        
          if (BTN3 == BUTTON_PRESSED) {
            BTN3State = BTN3_PRESSED;
          } else {
            BTN3State = BTN3_RELEASED;
          }
        }
        

         

        3.3.2 Radio and Packet Transmission

        The radio on the EFR32FG12 device was configured exactly the same as on the dongle. In fact, the exact same AppBuilder project was used as a basis for both devices. Instead of remaining in RX mode, however, the remote is powered down between ADC measurements and button state changes. If the state of the inputs has changed (i.e. a button has been pressed or released since last sleeping), the Report Handler constructs a new report packet and transmits it. When the packet has been transmitted, the device is permitted to transition back to sleep mode.

        To construct the report packet, the states of each button and the joystick are simply ORed together, since these states are mapped to the respective bit of their function in the HID report bitfield:

        void Report_Handler(void)
        {
          report_t report_current;
          static report_t report_previous = 0;
        
          while (1) {
            report_current = BTN3State | BTN2State | joyState;
            if (report_current != report_previous) {
              report_previous = report_current;
              TX_byte((uint8_t)report_current);
            } else {
              break;
            }
          }
        }
        

         

        4. Integration of Micrium OS

        As an additional challenge, we were required to integrate Micrium OS into our project. For this, we decided to integrate this only on our EFR32FG12 devices, since the EFM32HG USB device was limited in flash, and it would not benefit from the addition of an operating system due to the simplicity of the firmware running on the device.

        Adding Micrium OS to the Flex Gecko EFR32FG12

        One of the challenges we faced early in the project was that Micrium OS did not natively support the EFR32FG12, so we first had to develop a Micrium OS board support package (BSP) for it.

        1. Micrium OS Board Support Package (BSP)

        1. Compiler-specific Startup (Micrium_OS/bsp/siliconlabs/efr32fg12/source/startup/iar/startup_efr32fg12p.s)

          We first created the standard Micrium OS BSP folder structure within the Micrium_OS/bsp/siliconlabs folder, using the EFM32GG11 BSP as our reference due to its similarities in startup code. We then modified the compiler-specific startup file for the EFR32FG12. This step was fairly straightforward given that most ARM Cortex-M devices share the same initialization code, the obvious difference being the number of interrupt vector sources among the various devices.

          The Micrium OS kernel port relies on two ARM Cortex-M core interrupt sources: PendSV and SysTick. In our compiler-specific startup code, we had to reference the two handlers from the Micrium OS kernel port using the EXTERN assembly directive:

        EXTERN  OS_CPU_PendSVHandler

        EXTERN  OS_CPU_SysTickHandler

        Then we placed the two handler addresses in the vector table with:

        DCD    OS_CPU_PendSVHandler

        DCD    OS_CPU_SysTickHandler

        We now have a Micrium OS compatible compiler-specific startup file.
         

        2. Device-specific Startup (Micrium_OS/bsp/siliconlabs/efr32fg12/source/startup/system_efr32fg12p.c)

          A device-specific startup file was required for the clock initialization. For this, we looked inside the Gecko SDK and found the corresponding startup file for the EFR32FG12P (system_efr32fg12p.c). This file was added as-is into the Micrium OS BSP.
           
        3. Micrium OS Tick BSP (Micrium_OS/bsp/siliconlabs/efr32fg12/source/bsp_os.c)

          The Micrium OS Tick BSP file essentially handles the kernel tick initialization, in either periodic mode or dynamic mode depending on the power consumption requirements of the project. We left this file the same as the one found in the EFM32GG11 and ran it in periodic mode. As a potential later improvement, we could switch to dynamic tick to reduce the power consumption of our device.
           
        4. Micrium OS CPU BSP (Micrium_OS/bsp/siliconlabs/efr32fg12/source/bsp_cpu.c)

          The Micrium OS CPU BSP file deals with the setup of the timestamp timers that are required by the OS for statistical purposes and other features. This was once again left the same as in the EFM32GG11.
           
        5. Micrium OS Interrupt Source definitions (Micrium_OS/bsp/siliconlabs/efr32fg12/include/bsp_int.h)

          In this file, the various interrupt source definitions are specified. Although not necessary for our project, this file is included in bsp_os.c to assign BSP_INT_ID_RTCC as a kernel-aware interrupt source when dynamic tick is enabled.
           
        6. Micrium OS generic BSP API (Micrium_OS/bsp/include/bsp.h)

          This is the final piece of the Micrium OS BSP puzzle. In this file, the prototypes for BSP_SystemInit(), BSP_TickInit(), and BSP_PeriphInit() are defined. Some of these functions will later be used in our program main().

         

        2. Micrium OS main.c

         

        1. main()

          In the standard Micrium OS main(), the CPU is initialized with CPU_Init(), followed by board initialization via BSP_initDevice() and BSP_initBoard(), both from the Gecko SDK. After the CPU and the board clocks are initialized, OSInit() initializes the kernel. Once the OS is initialized, our startup task is created by calling OSTaskCreate() (see the StartupTask description below). Finally, the kernel is started by calling OSStart(), which begins executing the startup task.
           
        2. StartupTask

          In the Startup Task, the kernel tick is initialized using BSP_TickInit() from the Micrium OS BSP. Other services such as the UART are also initialized here. In our case, USART2 is used. It is important to mention that the Startup Task has a 500-millisecond delay inside an infinite loop in order for it to yield CPU time to other tasks when running in a multithreaded environment.

          Since our project utilizes proprietary wireless, the RAIL library is included and therefore initialized in the Startup Task at 2.4GHz.

          In order to demonstrate different kernel services, a RAIL receive (Rx) semaphore object is created in this task.
           
        3. RAIL Rx Task

          Our model consists of two tasks: Startup Task and the RAIL Rx Task.

          In the RAIL Rx Task, the program pends on the RAIL Rx semaphore created in the Startup Task. Once data from the wireless remote is received by our device, an interrupt fires and a callback function dissects the packet and posts the first byte of data to the RAIL Rx semaphore. The RAIL Rx task then transmits the data received via USART2 to the Happy Gecko. The callback function briefly puts the radio in an idle state before waking the receiver once again to obtain the next radio packet.
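        The pend/post pattern itself is generic. Below is a sketch using a POSIX semaphore standing in for the kernel's RAIL Rx semaphore (Micrium's OSSemPend()/OSSemPost() follow the same shape: the radio callback posts, the task pends, then forwards the byte):

```c
#include <assert.h>
#include <semaphore.h>
#include <stdint.h>

static sem_t rx_sem;     /* stands in for the kernel's RAIL Rx semaphore */
static uint8_t rx_byte;  /* one-byte payload handed from callback to task */

/* Radio RX callback: stash the byte and post, like OSSemPost() in the real code. */
static void rx_callback(uint8_t payload)
{
  rx_byte = payload;
  sem_post(&rx_sem);
}

/* One iteration of the RAIL Rx task body: pend, then forward the byte. */
static uint8_t rx_task_once(void)
{
  sem_wait(&rx_sem);     /* like OSSemPend(): blocks until the callback posts */
  return rx_byte;        /* would be transmitted on USART2 on the real hardware */
}
```

        Splitting the work this way keeps the interrupt-context callback short while the task does the slower UART transmission at thread priority.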

         

        5. Next Steps

        With the project complete and functional using STKs and pre-made expansion boards, we want to pursue creating custom PCBs for both the remote and dongle. This would require a fair amount of work: laying out two MCUs plus a USB connector on the dongle board, and another MCU in a reasonable hand-held form factor for the wireless remote. Additional challenges may arise in laying out the wireless-specific portions of the board, especially with regard to antenna design and placement. We hope to accomplish this early this year and have remotes and dongles constructed for each team member to use. Overall, this has been an interesting and challenging project, and it would be great to see it through to completion with a physical, practical media remote designed and built.

        6. Attached Projects

        All firmware projects can be found here: https://www.dropbox.com/s/tt55ky7m7h5hmeq/PC_Media_Remote.zip?dl=0
        This includes firmware to run on the dongle's EFM32HG USB MCU and EFR32FG12 Wireless MCU, and the remote's EFR32FG12 Wireless MCU. These are:

        1. Dongle_EFM32HG - firmware for the dongle's EFM32HG to perform UART to USB HID Media Control

        2. Dongle_EFR32FG12_Micrium - firmware for the dongle's EFR32FG12 wireless receiver, with Micrium OS integration

        3. Dongle_EFR32FG12_simple - firmware for the dongle's EFR32FG12 wireless receiver, before Micrium OS integration (simple while loop)

        4. Remote_EFR32FG12 - firmware for the remote's EFR32FG12 wireless transmitter, which captures user input

      • Building a Digital Tuner from Scratch

        JohnB | 02/32/2018 | 07:54 PM
        by Silicon Labs MCU and Micrium OS applications team members John Bodnar, Sharbel Bousemaan, Mitch Crooks, and Fernando Flores

        What is tuning and why does it matter?

        If you play a musical instrument, especially if you play a wind instrument, you’re going to want to tune once you’re sufficiently warmed up. For the not so musically-inclined, tuning is the process of making an adjustment to your instrument so that the notes you play, in particular notes which correspond naturally to the construction of the instrument, are produced accurately. Electronically speaking, you could say the notes are reproduced with the correct frequency.

        Figure 1. Clarinet
         

        For a simple example, consider the clarinet shown above. As with any tubular musical instrument, its fundamental pitch (the note it most naturally plays) is inversely proportional to its length. In particular, the clarinet above is a B-flat clarinet, so by slightly lengthening or shortening it, the fundamental note it produces can be made to match a concert B-flat.

        Without going into too much detail, modern instruments tune relative to A = 440 Hz, the A above middle C (think of a key near the middle of a piano keyboard). In the case of a B-flat clarinet, its fundamental pitch has a frequency of 466.164 Hz. Thus, a clarinet is “in tune” when a player adjusts his or her embouchure (the relative tension of the facial muscles and positioning of the lips and teeth) to play a B-flat and the sound that comes out of the instrument has a frequency of 466.164 Hz.

        If the sound that comes out of a clarinet when attempting to play a B-flat has a frequency that is lower than expected, the instrument is said to be flat. Similarly, if the frequency of the sound is too high, the instrument is said to be sharp.

        On a clarinet, the mouthpiece, which is the plastic and metal assembly against which the player blows, can be pushed in or pulled out slightly to adjust its tuning. So, if the player’s B-flat is sharp, the mouthpiece can be pulled out a little to lower the frequency and bring it in tune. Likewise, if the B-flat is flat (too low), the mouthpiece is pushed in slightly to raise the instrument’s pitch. Tuning an instrument to its proper concert pitch (B-flat in the case of our clarinet example) is a necessary first step to getting the other notes it can produce to also be in tune when they are played.

        What is a tuner?

        Experienced musicians and people with perfect pitch can tune by ear simply by listening to the note produced and adjusting the instrument’s tuning mechanism accordingly. The rest of us generally rely upon a device called a tuner that compares the frequency of the note we play to its mathematically calculated frequency. A modern digital tuner can be a standalone electronic device or even an application for a smartphone.  An example is shown in Figure 2.

         

        Figure 2. OEM Digital Tuner

        These devices, which can be had for as little as $15, are generally powered by inexpensive, 8-bit microcontrollers. Knowing this, we can probably assume that such a tuner does not make use of digital signal processing (e.g. finding the fundamental frequency by means of an FFT). Instead, we figured such a device would probably operate in the time domain, comparing the frequency of the note played directly against the frequency it should ideally have.

        Every instrument produces a unique sound that is colored by timbral impurities. These impurities are introduced by the shape of the instrument, the materials from which it's constructed, and by the uniqueness of the musician's embouchure (for wind players) or touch (for string and percussion players). The net result of these variations is that the waveform of the sound produced on a given instrument, as played by any one musician, is not spectrally pure but instead consists of a fundamental frequency with various superimposed overtones (integer multiples of the fundamental frequency).

        Knowing this, we felt that a tuning method that operates purely in the time domain with no consideration of spectral content would be most suitable for a low-cost processor. Considering that dedicated digital tuners run off one or two AA or AAA batteries, such an approach would also have the benefit of being particularly energy efficient.

        Project Summary

        Our goal was to construct a digital instrument tuner that could:

        1. distinguish the fundamental frequency of the note being played,
        2. determine the nearest equal temperament musical note (based on A = 440 Hz modern tuning),
        3. visually display whether the note is sharp or flat relative to the target note/frequency, and
        4. function without resorting to computationally intensive DSP concepts in order to minimize energy use.

        We implemented what might be considered a very simple analog-to-digital converter that takes the output from an analog microphone and turns it into a pulse train whose frequency equals that of the note's fundamental. The pulse train is then easily captured by a microcontroller, which can then perform all of the aforementioned tasks.

        Detailed Description

        For hardware, we opted to use the EFM32 Series 1 Giant Gecko Starter Kit. While the Series 1 Giant Gecko microcontroller might be a bit overkill for the project at hand, the starter kit has a nice dot matrix memory LCD to use for output. Optimization for a smaller EFM32 microcontroller could follow later once the whole concept and application code had been proven.

        To capture the sound from the instrument and output the pulse train, we used an analog MEMS microphone with an amplifier circuit connected to a 74VHC14 Schmitt-triggered inverter. Because we would need to both measure the frequency of the note being played and keep the tuner display updated, a multi-threaded software foundation was a no-brainer.

        We used Micrium’s µC/OS real-time kernel to provide this environment, along with the kernel services needed to protect shared resources and synchronize the tasks. This RTOS foundation allowed us to simplify the design and implementation of our application code, which, at its most basic level, required just two tasks: one for sampling the pulse train and the other for displaying the tuner’s output.

        For the display task to know what pitch is being detected, the measurement task needs some way to communicate results. To do this, we opted for a simple shared variable which the measurement task updates after each sampling period.

        In multi-threaded applications, shared data must be protected by a kernel mechanism, such as a semaphore. Pending on a semaphore usually means that a task might block (be stuck waiting) until the semaphore becomes available. This behavior is undesirable for our measurement task because it must sample the pulse train periodically.

        µC/OS allows a non-blocking pend on a semaphore, which provides resource safety without the risk of having a task block indefinitely. The drawback of this method is that some measurements might never be communicated to the display task. In practice, this is not an issue for the tuner because we are more concerned with a fast response to changes in pitch.
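        The pattern is easier to see in code. This sketch uses POSIX semaphores as a stand-in for the µC/OS API (sem_trywait() plays the role of a pend with the non-blocking option): the measurement task tries to take the lock, publishes the new value if it succeeds, and simply drops the sample otherwise, so it never misses its sampling deadline.

```c
#include <semaphore.h>

/* Illustrative only: POSIX stand-in for µC/OS's non-blocking pend. */
static double g_frequency;   /* variable shared with the display task */

/* Returns 1 if the shared frequency variable was updated, 0 if the
 * semaphore was busy and this measurement was dropped. */
static int try_publish(sem_t *lock, double freq)
{
    if (sem_trywait(lock) == 0) {    /* acquired without blocking */
        g_frequency = freq;
        sem_post(lock);
        return 1;
    }
    return 0;                        /* busy: drop this measurement */
}
```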

        Display Task

        The design of our display task (Figure 3) follows the general outline of the flowchart below. However, we have included a signaling semaphore that the measurement task uses to notify the display task when the frequency variable has been updated. The display task blocks on this semaphore to avoid updating the display multiple times with the same data. This helps to reduce the overall energy use of the application.

        Figure 3. Display Task Flowchart
         

        Once it is signaled, the display task tries to access the shared frequency variable. It does so by pending on the tuner semaphore, which protects the shared data. Eventually, the semaphore becomes available, allowing us to read the frequency value. The frequency is converted into a pitch using a simple lookup table. The remainder of the task deals with how the user interface looks when the pitch is displayed. We decided on a minimal design which provides a visual representation of how in or out of tune the played note is, as shown in Figure 4.

        Figure 4. Tuner Display Output

        Measurement Task

        The measurement task (Figure 5) implements an algorithm for reading the pulse train and calculating its average frequency. Pulses are captured using the WTIMER0 peripheral, while the LDMA reads a timestamp from WTIMER0 for each pulse detected. The timestamps are copied into a memory buffer over a period of 125 ms, as measured by the CRYOTIMER peripheral. Once the 125 ms has elapsed, the CRYOTIMER interrupt notifies the measurement task that the sample is ready.

        Figure 5. Measurement Task Flowchart
         

        The task averages the periods between pulses to calculate the frequency of the pulse train. This value is reported to the display task using the mechanisms described above. The LDMA and timer peripherals are then reinitialized for the next sample, and the task waits for the next CRYOTIMER interrupt.
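        The averaging step itself is just arithmetic on the captured timestamps: with N captures there are N-1 periods, so the average period is (last - first)/(N - 1) timer ticks. A sketch with a hypothetical helper (not the project's exact code):

```c
#include <stdint.h>

/* Average frequency of a pulse train from n timer-capture timestamps.
 * Unsigned subtraction keeps the span correct across a single 32-bit
 * timer wraparound. */
static double freq_from_timestamps(const uint32_t *ts, int n,
                                   double timer_clock_hz)
{
    if (n < 2)
        return 0.0;                        /* not enough pulses */
    uint32_t span = ts[n - 1] - ts[0];     /* total ticks over n-1 periods */
    return timer_clock_hz * (double)(n - 1) / (double)span;
}
```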

        Microphone and Pulse Generation Circuit

        A primary goal of this project was to devise a means of detecting the frequency of a note played by a musical instrument without the use of complex and computationally expensive signal processing algorithms.  Use of such techniques, for example, FFTs, complicates software development, requires a substantial number of processing cycles, and increases energy use.

        We needed a computationally simpler and less energy-intensive solution that would still permit reliable detection of the frequency of the note being played.  A combination of hardware signal processing and software capture and analysis allowed us to do this with a substantially smaller computational footprint and, thus, less energy, than an FFT-based or similar approach.

        The hardware front end of the frequency measurement portion of the tuner consists of an ADMP401 analog MEMS microphone with preamplifier circuitry followed by a 74VHC14 inverting Schmitt trigger.  Figure 6 shows the signal flow through each stage of the hardware.

        Figure 6. Audio Signal Flowchart through Hardware Front-end Stages
         

        Although not implemented in the project due to time constraints, a digitally tunable low-pass filter could be placed between the microphone and the Schmitt trigger in a future revision. Ideally, this would filter out harmonics (overtones) above the fundamental frequency of the note being played in order to improve the quality of the input to the Schmitt trigger. Figure 7 shows the hardware front-end prototype.

        Figure 7. Hardware Front-end Prototype
         

        The MEMS microphone captures the note played and passes an analog signal to the input of the Schmitt trigger. Depending on the instrument being played, this analog waveform will have a different envelope or shape, but it will still be periodic in nature and have the fundamental frequency of that note. As such, it will periodically cross the high and low Schmitt trigger threshold voltages if the input signal is properly scaled. In Figure 8, CH1 shows the input analog waveform at a frequency of about 880 Hz.

        Figure 8. Oscilloscope Capture showing analog audio signal (microphone output) on CH1 and Schmitt-triggered output on CH2

        A Schmitt trigger is essentially a comparator circuit with hysteresis, and, in this application, it functions as a 1-bit analog-to-digital converter. Thus, as the input signal rises above the Schmitt trigger input high threshold voltage (VIH), the inverting Schmitt trigger output transitions from logic high to logic low. Likewise, when the analog signal falls below the input low threshold voltage (VIL), the inverting Schmitt trigger output transitions from logic low to logic high.

        Note that the Schmitt trigger implements hysteresis, where VIH > VIL, resulting in a more stable digital output in the presence of a noisy or non-monotonic input signal. The resulting output is a pulse train with the same frequency as the input analog waveform, in this case about 880 Hz. This digital signal is then routed to one of the MCU's timer input pins, where its edges are captured and used to quickly calculate the frequency of the note being played.

        Results and Lessons Learned

        Surprisingly, the application ran almost exactly as expected when first tested with simulated instrument sounds. However, we did encounter a problem at higher frequencies, which produced pulse trains that did not correspond to the expected frequencies. The cause was a failure to reset the timer before each new sampling period: any pulses occurring after one sampling period ended were still counted and skewed the frequency calculated in the next run of the measurement task.

        The system responded well to clean inputs from frequency generators, sine waves recorded through the microphone, and some simulated instrument sounds. However, accuracy began to degrade when more complex tones were played, such as those from a brass instrument.

        As noted above, one aspect of the project originally conceived but not yet implemented is a low-pass filter that can be tuned to strip out harmonics coloring the sound from instruments as played by real people. Time constraints prevented this feature from being integrated into the demonstrated project. Naturally, more effort can still be spent to optimize energy use and get the entire system to provide substantial operating life from one or two alkaline batteries.