The Projects board is for sharing projects based on Silicon Labs' components with other community members. View Projects Guidelines ›

Projects

    Publish
     
      • Building a Magnetic Alarm System for the Giant Gecko Series 1 STK

        Siliconlabs | 08/242/2017 | 09:17 AM

        This project was created by Silicon Labs’ summer intern Rikin Tanna. 

         

        Project:

         

        A magnetic alarm system uses a Hall Effect magnetic sensor and a supporting magnet attached to the door frame to determine whether the door is open or closed. For added security, this project includes a notification service that sends a message to your mobile phone when the alarm is triggered. Because the system has no moving parts, it is very reliable.

         

        spectrum-1.jpg

         

        Materials Used:

         

        EFM32 Giant Gecko GG11 Starter Kit (SLSTK3701A)

        WGM110 Wi-Fi Expansion Kit (SLEXP4320A)

        Hall Effect Magnetic Sensor (Si7210)

         

        Background:

         

        My goal with this project was to demonstrate an important use case for the Si7210 as a component of an alarm system. Given that Silicon Labs is an IoT company, I figured it would be beneficial to use IFTTT, an IoT workflow service that connects two or more internet-enabled devices, to demonstrate our GG11 STK being used with an established IoT enabler. With this service included, I could also showcase the WGM110 Wi-Fi Expansion Kit working with the GG11 STK. The GG11 STK was chosen for its onboard Si7210 sensor, its compatibility with the WGM110 Wi-Fi Expansion Kit, and its recent launch (to expand its demo portfolio).

         

        Operation:

         

        The demo is split into 3 phases:

        1. Wi-Fi Setup – The first phase of the demo configures the WGM110. Here, the WGM110 boots up, sets the operating mode to client, connects to the user’s access point, and enters a low-power state (Power Mode 4) to wait for further commands. As it configures, status messages are displayed on the LCD as the GG11 receives responses from the WGM110.
        2. Calibration – The second phase of the demo calibrates the Si7210 digital output pin thresholds based on the user’s “closed door” position (magnetic field reading).
        3. Operation – The third phase is operation. Closing and opening the door (crossing the magnetic field threshold) causes the Si7210 digital output pin to toggle, which results in LED0 flashing. The output pin will also toggle if a second magnet is brought in to tamper with the alarm system. When the alarm is triggered, the GG11 commands the WGM110 to contact the IFTTT servers to send a message to the user’s mobile phone.
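
        The calibration and tamper logic above can be sketched in plain Python (the margin value and function names are illustrative assumptions; the real demo writes the thresholds to the Si7210's registers over I2C):

```python
def calibrate(closed_field_mt, margin_mt=0.5):
    """Derive alarm thresholds from the user's "closed door" field reading.

    margin_mt is an illustrative value; the demo derives the actual
    thresholds during its calibration phase.
    """
    open_threshold = closed_field_mt - margin_mt    # field drops as the door opens
    tamper_threshold = closed_field_mt + margin_mt  # field rises if a second magnet appears
    return open_threshold, tamper_threshold

def alarm_triggered(field_mt, open_threshold, tamper_threshold):
    # Door opened (field below threshold) or tampering (field above threshold)
    return field_mt < open_threshold or field_mt > tamper_threshold

open_t, tamper_t = calibrate(10.0)                   # door closed reads ~10 mT
assert not alarm_triggered(10.0, open_t, tamper_t)   # closed door: no alarm
assert alarm_triggered(2.0, open_t, tamper_t)        # door opened: alarm
assert alarm_triggered(12.0, open_t, tamper_t)       # second magnet: tamper alarm
```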

        Explanation:

         

        GG11 STK:

        The GG11 STK was programmed using Simplicity Studio. Simplicity provides an ample array of examples and demos to help beginners get started with Silicon Labs MCUs (this was my first experience with SiLabs products).

         

        Below is a representation of data flow for the project. 

         

        spectrum-2.png

         

        WGM110:

         

        The WGM110 is a versatile part: it can act as a Wi-Fi client, a local access point, or a Wi-Fi Direct (Wi-Fi P2P) interface. In this system, the WGM110 acts as a client and is a slave to the host GG11 MCU. Communication is based on the BGAPI command-response protocol over a serial interface. Debugging this proved difficult, as there are two individual MCUs involved, but the Saleae Logic analyzer allowed me to view the communication between the devices and fix the issues I encountered. Below is a capture of a typical boot correspondence.

         

        spectrum-3.png

         

        spectrum-4.png

         

        When the alarm is triggered, the WGM110 establishes a TCP connection with the IFTTT server and sends an HTTP GET request to the extension specified by the IFTTT applet I created. Unfortunately, IFTTT only allows free users to create private applets, but creating the applet was simple: step-by-step instructions for creating my applet can be found in the project ReadMe file.
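
        The request itself is just a raw HTTP GET over the TCP connection. A sketch of how the request string could be assembled (the event name and key below are placeholders for the values from your own IFTTT applet, and the URL scheme assumes IFTTT's Maker Webhooks service):

```python
HOST = "maker.ifttt.com"

def ifttt_get_request(event, key):
    """Build the raw HTTP GET request the GG11 would hand to the WGM110.

    event and key are placeholders for the user's own IFTTT applet values.
    """
    path = "/trigger/{}/with/key/{}".format(event, key)
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n").format(path, HOST)

req = ifttt_get_request("door_alarm", "YOUR_KEY")
assert req.startswith("GET /trigger/door_alarm/with/key/YOUR_KEY HTTP/1.1")
```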

         

        Si7210:  

         

        The GG11 STK comes with an onboard Si7210 Hall Effect magnetic sensor. It can detect changes in magnetic field down to hundredths of a millitesla (mT), which is more than enough sensitivity for this use case. The part has multiple OTP registers that store various part configurations, and the calibration process described earlier writes, over I2C, to the register that determines the digital output threshold. The Si7210 also features a tamper threshold, in case someone tries to fool the alarm by using a second magnet to replace the original magnet as the door opens. This threshold is configured to be slightly greater than the original calibration threshold to detect even the slightest tamper. When either threshold is crossed, the part automatically toggles its digital output pin, allowing any programmer to easily interface the sensor into their designs.

         

        Using this Project:

         

        This project provides a good starting point for anyone who wants to utilize the Si7210 Hall Effect sensor and/or the WGM110 Wi-Fi Expansion kit working in sync with the GG11 STK. The expansion kit can also be used with the PG1 or PG12 boards, but my code may require a few changes in initialization, depending on which specific peripherals are used. 

         

        Below is a slide that details all the various features that I utilized for each part. Feel free to download the project (link below) and use my code to get started on your own projects!

         

        spectrum-5.png

         

        Source Files: 

         

        • Magnetic alarm zip file (attached) 

         

        Other EFM32GG11 Projects: 

      • Building a Spectrum Analyzer for the Giant Gecko Series 1

        Siliconlabs | 08/241/2017 | 12:35 PM

        This project was created by Silicon Labs’ summer intern David Schwarz. 

         

        spectrum GG11.png

         

        Project:

         

        A real-time embedded spectrum analyzer with a waterfall spectrogram display. The spectrum analyzer displays the most recently captured magnitude response, and the spectrogram provides a running history of the changes in frequency content over time.

         

        Background:

         

        The original intent of this project was to demonstrate real-time digital signal processing (DSP) using the Giant Gecko 11 MCU and the CMSIS DSP library. Since many use cases for real-time DSP on an embedded platform pertain to signal characterization and analysis, I decided that a spectrum analyzer would be a good demonstration.

         

        Description:

         

        The spectrum analyzer works by capturing a buffer of data from a user-selected input source: either the microphone on the Giant Gecko 11 Starter Kit (STK) or the channel X input of the primary analog-to-digital converter (ADC0) on the Giant Gecko 11 device. It then obtains and displays the frequency response of that data. The display also shows a spectrogram to give the user information about how a signal is changing over time. The format used here is a ‘waterfall’ spectrogram, where the X axis represents frequency, the Y axis represents time, and the color of the pixel at each coordinate corresponds to the magnitude.
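
        The per-buffer processing, turning a captured buffer into one row of magnitudes, can be illustrated with a naive DFT (the firmware uses the optimized CMSIS-DSP FFT; this stdlib-only sketch just shows the math):

```python
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitude for the first N/2 bins.

    Bin k corresponds to frequency k * fs / N. The real firmware uses the
    CMSIS-DSP FFT, which computes the same result far faster.
    """
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# A pure tone at bin 4 produces a single dominant peak in that bin.
n = 64
tone = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
mags = magnitude_spectrum(tone)
assert mags.index(max(mags)) == 4
```

Each call produces one row of the waterfall; stacking successive rows and mapping magnitude to color gives the spectrogram described above.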

         

        Below is a video demonstration of the final project; the legend on the right shows how the spectrogram color scale relates to intensity.

         

         

        There are two parts to the video. One is for the mic input using classical music. The other is sweeping the ADC input using a function generator.

         

        Spectrogram Data flow Block Diagram (1).png

         

        The block diagram above shows the steps required to convert the incoming time domain data to visual content. Certain parts of the process demanded specific implementations in order to function in real time.

         

        I found it necessary to implement dual buffering to allow for simultaneous data capture and processing, which allowed for lower overall latency without losing sections of incoming data.

         

        The microphone data also required further processing to properly format the incoming bytes. This needed to be done post capture, as input data was obtained using direct memory access (DMA).

         

        Finally, I chose to normalize and display only the 0 to 8 kHz frequency data, since most common audio sources, including recorded music, don’t contain much signal energy above 8 kHz. However, to avoid harmonic aliasing, I decided to oversample at a frequency of 34133 Hz. I used this specific sampling frequency to obtain 512 samples (one of the few buffer sizes the ARM FFT function supports) in 15 milliseconds. This 15-millisecond time constraint is very important for maintaining real-time functionality, as humans are very sensitive to latency when a video source lags audio.
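
        The sampling numbers above follow directly from the constraints and can be checked with a little arithmetic:

```python
# Deriving the sampling parameters quoted above (values from the text).
FFT_SIZE = 512        # one of the few buffer sizes the ARM FFT supports
FRAME_TIME_S = 0.015  # 15 ms per frame keeps the display real-time

sample_rate_hz = FFT_SIZE / FRAME_TIME_S
assert round(sample_rate_hz) == 34133        # the oversampled rate above

# Only bins up to 8 kHz are normalized and displayed:
bin_width_hz = sample_rate_hz / FFT_SIZE     # ~66.7 Hz per bin
display_bins = round(8000 / bin_width_hz)    # derived count, not from the text
assert display_bins == 120
```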

         

        Using This Project:

         

        This project provides a good starting point for anyone wanting to implement real time DSP on the Giant Gecko microcontroller. It can be run on an out of the box Giant Gecko Series 1 STK, or it can be configured with an analog circuit or module that generates a 0 to 5V signal as the input source. The complete source code and Simplicity Studio project files are linked below, along with inline and additional documentation that should be useful in understanding how the application works.

         

        The ADC input mode and DSP functionality of this project are also fully compatible with any Silicon Labs STK using an ARM Cortex-M4 core (e.g., Wonder, Pearl, Flex, Blue, and Mighty Geckos). The microphone and color LCD, however, are not present on other STKs.

         

        Source Files:

         

        https://www.dropbox.com/s/wvuk5yk192xywfl/spectrum_analyzer.zip?dl=0

      • EFM32 Voice Recognition Project Using Giant Gecko's Temperature /Humidity Sensor

        Siliconlabs | 08/237/2017 | 12:37 PM

        This project was created by Silicon Labs’ summer intern Cole Morgan.

         

        Background and motivation:

         

        This project is a program that implements voice recognition for the Giant Gecko 11 (GG11) using the starter kit’s temperature and humidity sensor and the Wizard Gecko Module. My motivation to work on this project was mainly that I wrote another project that implemented voice recognition for the GG11 using the starter kit’s LEDs, and I wanted a more advanced application for my voice recognition algorithm.

         

        The program works by first learning the user’s voice through a small training protocol in which the user says each keyword a couple of times when prompted. Once it has learned the user’s voice, the user can set either a temperature or humidity threshold by saying “set” followed by either “temp” for temperature or “humid” for humidity. The user then says a number from 0-99 one digit at a time to set the threshold value; for example, “one nine” is interpreted as 19, so saying “set humid four two” sets a humidity threshold of 42% humidity. Then, if the humidity measured by the onboard sensor crosses this threshold, the user will receive a text.
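
        The command structure described above can be sketched as a small parser (the keyword spellings and the two-digit form are assumptions based on the examples given):

```python
DIGITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
          "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

def parse_command(words):
    """Parse a recognized keyword sequence such as
    ["set", "humid", "four", "two"] -> ("humid", 42).

    "set" acts as the trigger word; returns None for invalid sequences.
    """
    if len(words) != 4 or words[0] != "set":
        return None
    if words[1] not in ("temp", "humid"):
        return None
    if words[2] not in DIGITS or words[3] not in DIGITS:
        return None
    return words[1], DIGITS[words[2]] * 10 + DIGITS[words[3]]

assert parse_command(["set", "humid", "four", "two"]) == ("humid", 42)
assert parse_command(["set", "temp", "one", "nine"]) == ("temp", 19)
assert parse_command(["humid", "four", "two", "set"]) is None
```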

         

        Description:

         

        Using my previous voice recognition project as a base, I first added the support for multiple word commands using the first command word “set” as a kind of trigger so that the program won’t get stuck in the wrong stage of a command. One side effect of using a lot more keywords than the previous project was that I had to stop storing the reference MFCC values in Flash, as there wasn’t enough space for all of them.

         

        The next stage in my development was to interface the Si7021 temperature/humidity sensor on the GG11 starter kit. This stage was quite simple because there was already a demo for the GG11 that interacted with the Si7021, so all I had to do was integrate the LCD.

         

        Then, I interfaced the Wizard Gecko Module (WGM) to connect to IFTTT via Wi-Fi and send an HTTP GET request. This was the most difficult part of the project because I had never worked with communication over Wi-Fi or sending HTTP requests. I designed two different IFTTT triggers for temperature and humidity so that the SMS alert message could be tailored to the type of threshold trigger.

         

         

         

        Accomplishments:

         

        • I adapted my voice recognition to work accurately and quickly with a larger bank of keywords
        • I successfully created two IFTTT applets to send alerts quickly to a phone number
        • The program is written in a way that is very easily adaptable for many different uses
          • It is well modularized, so if any part of the program is useful to a specific application, it can be easily separated from the rest of the code

         

        Lessons Learned:

        • I learned how to scale an algorithm to work with a larger set of data
        • I learned how to use web requests to interface a microcontroller with applications through the Internet
        • I learned about large program organization and good general coding practice: this was the biggest software project I have written by far

         

        Potential Use Cases:

         

        • Voice-controlled Nest thermostat
        • A shipping container application where temperature or humidity in an area needs to be monitored to make sure it is at a certain level

         

        Materials Used:

         

        • GG11 STK with Si7021 and microphone
        • Pop filter for STK microphone
        • Wizard Gecko Module
        • Simplicity Studio IDE
        • CMSIS DSP Library

         

        Source Code: 

         

        • VRTempHumid (attached) 

      • EFM32 Voice Recognition Project Using Giant Gecko's LEDs

        Siliconlabs | 08/237/2017 | 12:26 PM

        This project was created by Silicon Labs’ summer intern Cole Morgan.

         

        Background and motivation:

         

        This project is a program that implements voice recognition for the GG11 using the starter kit’s onboard LEDs. My motivation to work on this project was mainly that I had never done anything remotely close to voice recognition before, and I thought it would be a good challenge. Another motivation was that I am very interested in the Amazon Echo and other emerging home-assistant technologies.

        The program works by first learning the user’s voice through a small training protocol in which the user says each keyword a couple of times when prompted. After the program has learned the user’s voice, the user can turn the LED on, red, blue, green, or off simply by saying “on”, “blue”, “red”, “green”, or “off”.

         

        Description:

         

        My first step was getting audio input from the microphone into the microcontroller and storing it. This proved a little more difficult than I expected because I hadn’t worked with SPI or I2S before. In addition, I had to design a sound detection system that captures as much significant sound as possible. I did this by squaring and summing the elements of the state buffer of the band-pass FIR filter that I apply to each sample, and then setting a threshold on the result of that operation. This system turned out to be extremely useful because, in addition to saving processor time, it also time-aligned the data to be processed.
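
        The sound-detection step can be sketched as follows (the threshold value is an illustrative assumption; in practice it would be tuned for the microphone and environment):

```python
def sound_detected(filter_state, threshold):
    """Square and sum the band-pass filter's state buffer, then compare
    the resulting energy against a threshold, as described above."""
    energy = sum(x * x for x in filter_state)
    return energy > threshold

quiet = [0.01, -0.02, 0.015, -0.01]   # background noise
speech = [0.4, -0.55, 0.62, -0.38]    # start of an utterance
assert not sound_detected(quiet, threshold=0.05)
assert sound_detected(speech, threshold=0.05)
```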

         

        After this step, I began to implement the actual voice recognition. At first, I thought I could just find a library online and implement it easily, but this turned out to be far from true. Most voice recognition libraries are much too big for a microcontroller, even one with a very large Flash memory of 2 MB like the GG11. There was one library I found that was written for Arduino, but it didn’t work very well. So, I began the process of writing my own voice recognition algorithm.

         

        After a lot of research, I decided to use Mel-Frequency Cepstral Coefficients (MFCCs) as the basis for my algorithm. There are a number of other audio feature coefficients, but MFCCs seemed to be the most effective. Calculating MFCCs is essentially several signal processing techniques applied in a specific order, so I used the CMSIS ARM DSP library for those functions.

         

        After beginning work on this, I created a voice training algorithm to allow the program to learn any voice and adapt to any user. The training program has the user say each word a configurable number of times, and then calculates the MFCCs of that person’s pronunciation of the keyword and stores them in flash memory.

         

        Next, because the input data was time-aligned, I could simply put all the MFCCs for the 4 buffers in one array and use that as the basis for comparison. In addition to this, I also calculated and stored the first derivative (delta coefficients) of the MFCC data to increase accuracy.
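
        Matching an utterance against the stored keywords then reduces to a nearest-neighbor comparison of feature vectors. A sketch (the distance metric and the toy coefficient values are assumptions; the actual algorithm is in the attached source):

```python
def delta(coeffs):
    # First-order differences: the "delta coefficients" described above
    return [b - a for a, b in zip(coeffs, coeffs[1:])]

def feature_vector(mfccs):
    # Concatenate the MFCCs with their delta coefficients
    return mfccs + delta(mfccs)

def closest_keyword(sample, references):
    """Return the keyword whose stored feature vector is nearest to the
    new utterance (squared Euclidean distance; a toy metric for illustration)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda kw: dist(sample, references[kw]))

# Toy reference vectors; real ones come from the training protocol.
refs = {"on": feature_vector([1.0, 2.0, 3.0, 2.0]),
        "off": feature_vector([3.0, 1.0, 0.5, 2.5])}
assert closest_keyword(feature_vector([1.1, 2.1, 2.9, 2.0]), refs) == "on"
```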

         

        Coefficient.png

         

         

         

        Accomplishments:

         

        • I wrote my own voice recognition algorithm for microcontrollers with relatively little RAM and flash memory usage
          • Can store up to 10 keywords in Flash and up to 1,150 keywords in RAM (this number would require modifying the program to not store in Flash and to use fewer trainings)
        • Successfully created a voice recognition and training technique that works for everyone, no matter their accent or voice, with an excellent success rate
        • The program is written in a way that is very easily adaptable for many different uses
          • It is well modularized, so if any part of the program is useful to a specific application, it can be easily separated from the rest of the code

        Lessons Learned and Next Steps:

         

        • I learned how voice recognition algorithms generally work and how to implement them
        • I learned lots of signal processing, as I didn’t know anything about it before
        • I learned how to read a large library like emlib more efficiently
        • I learned about large program organization and good general coding practice: this was the biggest software project I have written by far

        My next step is to apply the voice recognition to a temperature/humidity controller application, which should be easier than this LED application, as its keywords are very different from each other, unlike “on” and “off”.

         

        Materials Used:

        • GG11 STK with microphone and LEDs
        • Pop filter for STK microphone
        • Simplicity Studio IDE
        • CMSIS DSP Library

        Source Files: 

        • VRLEDs (attached) 

      • Wireless Encrypted Voice Communication with the EFM32GG11

        Siliconlabs | 08/237/2017 | 11:55 AM

        This project was created by Silicon Labs’ summer intern Kevin Black.

         

        EFM32GG11-1.jpg

         

        Project Summary:

         

        The goal of this project was to perform one-way, encrypted, real-time, wireless voice communication from an embedded system to an arbitrary client like a laptop or tablet. This was accomplished using the EFM32GG11 starter kit for audio input/processing and the Wizard Gecko Wi-Fi expansion kit for wireless transmission. Audio data is sampled from the starter kit’s onboard microphone and encrypted with AES using the GG11 32-bit MCU; it is then streamed to any clients connected to the Wizard Gecko’s Wi-Fi access point, where it can be decrypted and played back only with the correct password.

         

        Background and Motivation:

         

        My project’s primary purpose was to demonstrate useful features of both the EFM32GG11 starter kit and the Wizard Gecko Wi-Fi expansion kit, as well as the two working smoothly in conjunction through the EXP header.

         

        The first main feature it demonstrates is the EFM32GG11’s CRYPTO module, which exists on all EFM32 Series 1 devices and provides fast hardware-accelerated encryption. The project utilizes the mbed TLS library configured to use the CRYPTO module, which speeds it up significantly. It demonstrates the high throughput of the CRYPTO module (up to ~123 Mbps*) by encrypting uncompressed audio in real time with plenty of headroom. The encryption used is 256-bit AES in CBC mode, which is currently considered secure.

        (*Assuming 256-bit AES on the GG11 driven by HFRCO at 72 MHz)

         

        Another motivation behind the project was to demonstrate two features of the GG11 starter kit itself: the onboard microphone, and the ability of the Wi-Fi expansion kit to easily attach to and be controlled through the EXP header. No examples existed for the microphone, and very few firmware examples existed for the Wizard Gecko in externally hosted mode. My project demonstrates the quality of the built-in microphone by allowing the user to listen to the audio, and shows how to use the BGLib C library to communicate with the Wizard Gecko from an external host. Additionally, it demonstrates the throughput of a transparent/streaming endpoint on the Wizard Gecko.

         

        Project Description:

         

        EFM32GG11-2.png

         

        Block diagram of data flow through transmitter device

         

        Microphone Input:

         

        The GG11 starter kit provides an onboard audio codec that automatically converts the PDM (pulse density modulation) data from the onboard MEMS microphones into PCM (pulse code modulation) data and outputs it on a serial interface in I2S format. The codec’s serial interface is connected to the GG11 USART3 location 0 pins, so reading in the audio data is simply a matter of initializing USART3 to I2S with the correct settings, enabling autoTx, and asserting an additional microphone enable pin.

         

        The audio data arrives in 32-bit words, so the sample rate is controlled by setting the I2S baud rate to 64 times the desired sample rate (2 channels, 32 bits each). Each word contains a single 20-bit sample of audio, but very few systems support 20-bit audio, so for my project, I ignore the least significant 4 bits of each sample and only read 16 bits from each word. I also ignore samples from the right microphone, meaning the final audio data I obtain for processing is in 16-bit mono PCM format. The sample rate is easily configurable, but in the end I settled on 20 kHz, as that seems to be the upper limit of what the Wizard Gecko can handle while being high enough to cover the frequency range of speech and provide clear and understandable audio.
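
        The per-word bit manipulation can be sketched as follows (the exact alignment of the 20-bit sample inside the 32-bit I2S word depends on the codec's frame format and is omitted here):

```python
# I2S bit clock: 2 channels x 32 bits per sample frame = 64 bits per sample
assert 64 * 20000 == 1280000  # 20 kHz sample rate -> 1.28 MHz bit clock

def sample16(sample20):
    """Reduce a 20-bit codec sample to 16 bits by dropping the 4 LSBs,
    then sign-extend the result as 16-bit two's complement."""
    s = (sample20 >> 4) & 0xFFFF
    if s & 0x8000:
        s -= 0x10000
    return s

assert sample16(0x00010) == 1       # +16 in 20-bit -> +1 in 16-bit
assert sample16(0xFFFF0) == -1      # -16 in 20-bit -> -1 in 16-bit
assert sample16(0x7FFFF) == 0x7FFF  # 20-bit max -> 16-bit max
```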

         

        The audio input data is transferred into memory using LDMA in order to save CPU cycles. The right channel data is repeatedly written to a single byte in order to discard it, while the left channel data is alternately transferred into two 16-byte buffers; when one buffer is being filled, the other is being processed by the CPU.

         

        Encryption & Transmission:

         

        When a left channel transfer completes, it triggers an interrupt that switches the current process buffer and signals that the next packet is ready to be processed. The GG11 then encrypts the current 16-byte buffer (16 bytes is the AES block size) using the mbed TLS library configured to use the CRYPTO module. In CBC (cipher block chaining) mode, the library automatically XORs the plaintext with the previous ciphertext before encryption.
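
        The chaining step can be illustrated with a stand-in block transform (a byte-wise XOR with the key; this is NOT AES and offers no security, it only shows how CBC feeds each ciphertext block into the next encryption):

```python
BLOCK = 16  # AES block size in bytes

def toy_block_cipher(block, key):
    # Stand-in for AES-256: any keyed, invertible block transform works
    # for illustrating the chaining. The real code uses mbed TLS + CRYPTO.
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(blocks, key, iv=bytes(BLOCK)):
    prev, out = iv, []
    for block in blocks:
        # CBC: XOR the plaintext with the previous ciphertext, then encrypt
        mixed = bytes(p ^ c for p, c in zip(block, prev))
        prev = toy_block_cipher(mixed, key)
        out.append(prev)
    return out

key = bytes(range(BLOCK))
plaintext = [b"0123456789abcdef", b"0123456789abcdef"]
ciphertext = cbc_encrypt(plaintext, key)
# Identical plaintext blocks yield different ciphertext thanks to chaining
assert ciphertext[0] != ciphertext[1]
```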

         

        The 256-bit key used for encryption is derived from a password using SHA-256. Only clients with the same password can obtain the correct key by hashing the password.
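
        This derivation step maps directly onto a one-line hash (hashlib stands in here for the mbed TLS SHA-256 used in the firmware):

```python
import hashlib

def derive_key(password):
    """Hash the shared password down to a 256-bit AES key.

    Both the transmitter firmware and the receiver client perform the
    same hash, so only matching passwords yield matching keys.
    """
    return hashlib.sha256(password.encode("utf-8")).digest()

key = derive_key("gecko123")          # the demo's default password
assert len(key) == 32                 # 256 bits
assert derive_key("gecko123") == key  # same password -> same key
assert derive_key("wrong") != key     # wrong password -> wrong key
```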

         

        In my project, I decided to fix the initialization vector as all zeros. Normally, initialization vector reuse is considered bad practice and weak security; however, it only has the potential to leak data from the first few blocks of data streams with identical prefixes, and that poses an insignificant threat to my project due to the enormous quantity of blocks and the amount of noise in a meaningful segment of audio.

         

        Once a block is encrypted, it is put into a first-in, first-out queue from which it is transmitted over UART through the EXP header to the Wizard Gecko. Flow control is implemented using an additional CTS (clear to send) pin connected to the Wizard Gecko; the module can drive CTS high when it cannot keep up with the transmission rate, in which case the transmission halts and the queue fills up. The transmission is driven by interrupts, which allows it to run “in the background” while the next buffer is being encrypted, and it does not block the main thread when the Wizard Gecko raises CTS.

         

        The baud rate for UART transmission is configurable as long as the GG11 and the Wizard Gecko are both configured to the same value. Interestingly, however, the Wizard Gecko seemed to perform better (raise CTS for less time) at higher baud rates, perhaps because that increases the gap between packets, so I settled on 3 MHz.

         

        Wi-Fi:

         

        The Wizard Gecko Wi-Fi module, when connected to an external MCU in hosted mode, operates in a command-response format. The GG11 sends commands through the EXP header via SPI, formatted with a binary protocol called BGAPI. When the Wizard Gecko is ready to send a response (or an event) back to the MCU, it raises a notify pin (also connected to the EXP header) that tells the GG11 to read and handle the message. All of the BGAPI commands and responses are defined in a C library called BGLib.

         

        Upon initialization, my project configures the Wizard Gecko to be a hidden wireless access point and a TCP server. When a client connected to the access point opens a connection to the IP address and port of the TCP server, it triggers an event that is forwarded back to the GG11. The GG11 then enables the microphone and begins encrypting and transmitting audio via UART to the Wizard Gecko’s second USART interface (the one not used for BGAPI commands). That interface is configured in transparent/streaming mode, which means it forwards all received data unmodified to a single endpoint. Before the encryption starts, the GG11 configures this endpoint to be that of the connected client.

         

        Accomplishments, Flaws, and Next Steps:

         

        Ultimately, the project was successful and met its end goal of building a one-way encrypted voice communication device. Speech is clear and comprehensible at up to several inches away from the onboard microphone, and the real-time encryption is secure.

         

        The primary flaw in the final implementation is that the Wizard Gecko itself has trouble constantly streaming a large quantity of data without interruptions. The module will occasionally “choke” for 1-2 seconds, during which it will stop transmitting and refuse to accept data by raising CTS. Performance is inconsistent, and the device will go anywhere from 10 to more than 60 seconds between “chokes”. This causes frustrating gaps in the audio, much like a cell phone connection that is “breaking up”, although on average the project is still quite usable for talking to someone. I added a blue LED that turns on whenever CTS is raised, so the user can at least tell when the device is not transmitting by observing the LED light up solid blue.

         

        In the future, this behavior could likely be eliminated by changing the protocol that the device uses to transmit. Bluetooth would have much more bandwidth, or if the Wizard Gecko is still used, Wi-Fi Direct or a TCP connection over a third-party local area network (rather than using the Wizard Gecko as the access point). The last two options would make the demo much more difficult to use, so Bluetooth would be the ideal solution; this explains why Bluetooth has become so popular for real-life products with similar functionality.

         

        Using this Project:

         

        Follow the instructions in the readme of the encrypted voice transmitter folder to configure the Wizard Gecko and GG11 to act as the transmitter portion of the project.

         

        To use the receiver, download the executable Java applet below and run the .exe file inside (no JVM installation required). Unless the IP address and port were changed in the firmware, leave those fields blank. Enter the password defined in the firmware (default “gecko123”).

         

        After booting up the transmitter, wait for the LCD output to reach “waiting for client”, and then connect to the hidden access point that the device has created (default SSID is “Encrypted Voice Demo”).

         

        EFM32GG11-3.png

         

        Once the LCD displays “client joined”, click “Connect” on the Java applet’s dialog. When the status message below the connect button displays “Connected” in green, audio from the microphone should begin playing back on the PC.

         

        EFM32GG11-4.png

         

        Source Files: 

         https://www.dropbox.com/s/1uofaidpdz061ti/encrypted-voice-master.zip?dl=0

         

        [zip file containing encrypted_voice_transmitter (firmware source code)]
        [zip file containing executable Java applet]

        [zip file containing encrypted_voice_receiver (Java source code)]

         

         

      • Sensor node network with Thunderboard Sense and MicroPython

        ThomasFK | 04/92/2017 | 08:04 PM

        I am a member of NUTS, the NTNU Student Test Satellite. The main goal is to create a CubeSat, a tiny satellite that piggybacks on the launch of a larger satellite.

         

        Another goal of NUTS is trying to promote aerospace/STEM topics among other students. Last fall we participated in "Researchers Night" at NTNU, which is used to promote STEM education among high school students. A lot of institutes and organizations show up at Researchers Night with really flashy displays, such as flamethrowers or slightly violent chemical reactions.

         

        At our disposal we had a vacuum chamber, a DC motor, space-grade and regular solar panels, and several Thunderboard Senses. Showing off how marshmallows behave in a vacuum, and how the DC motor behaves when connected to the different solar panels, might be interesting enough in and of itself. However, we decided to add some Thunderboards to spice it up a bit.

        Using a budding implementation of MicroPython for Thunderboard Sense (which will be released soon), we brainstormed and programmed a small sensor network for our stand, simulating logging telemetry data from our satellite. The Thunderboards were utilized as follows:

        • Glued to the DC motor, transmitting gyroscope data from the IMU.
        • Inside the vacuum chamber transmitting pressure.
        • Transmitting the light-level with the light-sensor.
        • Sampling the sound-level with the microphone.
        • A master that could tune into transmissions from any of the other Thunderboards, logging the output to screen and also showing how much the slave deviated from "normal" status using the RGB LEDs.

        I have embedded two videos. The first gives a short overview of the entire project, while the second shows the setup in action, logging data from the vacuum chamber.

         

        Our stand was a great success! We had several people standing around for up to half an hour discussing the intricacies of satellite development, which also gave us an opportunity to talk more about the satellite radio link.

         

        Finally, I want to brag a bit about how neat this code turned out with MicroPython, and how ideal MicroPython was for bringing up a project like this in such a short time. The code for reading data from the IMU and transmitting it ended up under 40 LOC.

        from tbsense import *
        from radio import *
        from math import sqrt
        
        rdio = Rail()
        i = IMU(gyro_scale = IMU.GYRO_SCALE_2000DPS, gyro_bw = IMU.GYRO_BW_12100HZ)
        
        def float_to_pkt(flt):
            # Split the float into its integer part and fractional thousandths
            integer = int(flt)
            decimal = round(flt, 3) - integer
            decimal = int(decimal*1000)
            # Pack big-endian: 4 bytes of integer part, 2 bytes of fraction
            ret = bytearray(6)
            ret[0] = (integer >> 24) & 0xFF
            ret[1] = (integer >> 16) & 0xFF
            ret[2] = (integer >> 8)  & 0xFF
            ret[3] = integer & 0xFF
            ret[4] = (decimal >> 8) & 0xFF
            ret[5] = decimal & 0xFF
            return ret
        
        def loop():
            meas = i.gyro_measurement()
            meas = sqrt((meas[0]**2)+(meas[1]**2)+(meas[2]**2))
            pkt = float_to_pkt(meas)
            rdio.tx(pkt)
            delay(200)
            
        def init():
            rdio.init()
            rdio.channel(MODE_IMU)
            i.init()
        
        delay(2000)
        init()
        while True:
            loop()
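        For completeness, the master on the receiving side can reverse this packing. A minimal sketch in plain Python (pkt_to_float is my own name for an assumed helper; it is not part of the MicroPython port):

```python
def pkt_to_float(pkt):
    """Reverse of float_to_pkt: rebuild the float from a 6-byte packet."""
    # First four bytes hold the integer part, big-endian
    integer = (pkt[0] << 24) | (pkt[1] << 16) | (pkt[2] << 8) | pkt[3]
    # Last two bytes hold the fractional part in thousandths
    decimal = (pkt[4] << 8) | pkt[5]
    return integer + decimal / 1000.0
```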

         

         

      • Big red button on Thunderboard Sense and Samsung Artik 5

        DanilBorchevkin | 02/32/2017 | 12:19 AM

        As a weekend project, I decided to implement a big red button based on the Thunderboard Sense and the Samsung Artik 5. Why a big red button? Because everybody likes big red buttons. They even make films about them (The Box, 2009).

         

        Sorry for any possible mistakes - I am not a native English speaker/writer. By the way, this article is also available in Russian.

         

        What is needed for the project

         

        1. Big red button - the part this whole project was conceived around. The button should be normally open
        2. Thunderboard Sense Kit
        3. Samsung Artik 5
        4. Seeed Studio Arduino Base Shield
        5. Seeed Studio Grove Buzzer

         

         

        Setup Thunderboard Sense

         

        Fortunately, we do not need to change anything in the code of the Thunderboard Sense; otherwise it would become a headache - the Bluetooth SDK for this device requires the IAR ARM 7.80 compiler, which would be a big problem for many.

         

        The SW1 button acts as the trigger - all the logic of the project is tied to this button.

         

         

        The default firmware does not need any changes, because it already has the following logic:

        1. The board sleeps when inactive and can't accept incoming connections. To switch it to connectable mode, push SW1.
        2. After SW1 is pushed, the green LED starts blinking and the board can be connected to.
        3. When a disconnect happens, the Thunderboard Sense goes back to sleep.

        Thunderboard Sense provides various BLE services and characteristics (the full list is available in the Android app repository). Only one characteristic interests us - CHARACTERISTIC_PUSH_BUTTONS, which has the UUID fcb89c40-c601-59f3-7dc3-5ece444a401b and consists of a single uint8_t value with the following states:

        1. Zero (=0) if no button is pressed;
        2. One (=1) if SW1 is pressed;
        3. Two (=2) if SW2 is pressed;
        4. Three (=3) if both SW1 and SW2 are pressed.
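        The four states are effectively a bitmask (SW1 sets bit 0, SW2 sets bit 1), so the value can be decoded per button. A small sketch in plain Python (the Artik-side code in this article is Node.js; the function name here is mine, for illustration):

```python
def decode_push_buttons(value):
    """Decode the uint8_t value of CHARACTERISTIC_PUSH_BUTTONS
    into per-button booleans (SW1 = bit 0, SW2 = bit 1)."""
    return {"sw1": bool(value & 0x01), "sw2": bool(value & 0x02)}
```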

        It is noteworthy that this characteristic has only the read property, which means we have to poll the value periodically.

         

        Setup Artik 5

        If this is your first time working with the Artik 5, you need to do the basic setup according to the Quick Start Guide. To start developing, you should upgrade to the latest firmware and set up an internet connection.

         

        The pre-installed package manager for the Fedora OS on the Artik is dnf, so all the following instructions use it.

         

        First install all software needed for Bluetooth:

         

        dnf install bluez bluez-libs bluez-libs-devel

        Also install Git:

        dnf install git

        Then install Node.js and npm:

        dnf install nodejs npm

        After this, we need to install the main module for working with BLE - noble. This module serves BLE connections when the controller has to play the central role (for the inverse situation there is the bleno module):

        npm install noble

        Now we are ready for coding!

         

        Pairing

         

        To pair the devices we use the interactive utility bluetoothctl. First, start it:

         

        bluetoothctl

        After starting, enable pairing:

         

        pairable on

        Then activate the Thunderboard Sense by pressing SW1 and start scanning:

         

        scan on

        The target device can be identified by its friendly name or by its address.

         

         

        Once the needed device is found, stop the scanning:

        scan off

        Most importantly, note the address of your Thunderboard Sense, then execute the following commands:

        pair 00:0B:57:36:71:82
        trust 00:0B:57:36:71:82
        connect 00:0B:57:36:71:82

        At this stage we get the message "Connection successful", and now we can request information about the connected device:

        info 00:0B:57:36:71:82

        The output will be similar to mine:

         

         

        Now input:

        exit

        and ... we are ready to write Node.js code!

         

        Work with Bluetooth on Artik 5

         

        noble lets us develop code for the central device, so we have to implement the following logic to work with the Thunderboard Sense:

        1. Scan to find a connectable Thunderboard Sense.
        2. Connect to the device with a known address.
        3. Get the lists of services and characteristics.
        4. Read the button characteristic's value.

        If you just want working code, go to https://bitbucket.org/sithings/app_thunderboardtoartik

         

        Before scanning we must make sure the Bluetooth controller is powered on; only then can we start scanning:

        /* Event of the state change of the BLE controller on board */
        noble.on('stateChange', function(state) {
        	if (state === "poweredOn") {
        		/*
        		 * If BLE started normally, start scanning for any service UUID (without duplicates)
        		 */
        		console.log("\x1b[36m", "\nBLE is poweredOn. Start scanning", "\x1b[0m");
        		noble.startScanning([], false);
        	} else {
        		console.log("\nStopScanning. Status is %s", state);
        		noble.stopScanning();
        	}
        });

        After scanning starts, we must push SW1 on the Thunderboard Sense to wake it up. Once the device becomes available, the script will attempt to connect to it:

        noble.on("discover", function(peripheral) {
        	
                ...
        
        	/* If the discovered device is our big red button */
        	if(config.bigRedButton.deviceAddress === peripheral.address) {
        		peripheral.connect( function(error) {
                        
                        ...
        
        		});
        	}
        });

        After a successful connection we need to get all the services and characteristics:

        peripheral.connect( function(error) {
        	peripheral.discoverAllServicesAndCharacteristics(function(error, services, characteristics) {
        		...
        	});
        });

        In my code I find the desired characteristic by iterating over the list:

         

        for(i = 0; i < characteristics.length; i++) {
        	/* If we find the characteristic holding the button state, remember it */
        	if (characteristics[i].uuid === config.bigRedButton.characteristicUUID) {
        		buttonStateChar = characteristics[i];
        	}
        	...

        The desired characteristic, which contains the button state, has only one property - read. To acquire the button states we need to implement periodic reading of this value. After finding the desired characteristic, we set a polling interval.

         

        readingInterval = setInterval(readButtonCallback, config.bigRedButton.pollingInterval);

        Listing of the callback:

         

        /* Button callback started by setInterval */
        function readButtonCallback() {
        	buttonStateChar.read(function(error, data) {
        		buf = data[0];
        		console.log("\nData: %d", buf);
        		if (buf === 1) {
        			console.log("SW1 was pressed");
        			/* Enable buzzer */
        			config.buzzer.set();
        		}
        		else if (buf === 2) {
        			console.log("SW2 was pressed");
        		}
        		else if (buf === 0) {
        			console.log("No button pressed");
        			/* Disable buzzer */
        			config.buzzer.clear();
        		} 
        	});
        }

         

        Making it buzz

         

        Unfortunately, I could not get GPIO to work through the artik-sdk module, so I decided to work with GPIO through sysfs. In my solution the Grove Buzzer is connected through the Seeed Studio Base Shield to pin gpio121 (pin 2 of the Arduino expansion).

         

        When the script starts, the desired pin must be initialized as an output:

         

        exec("echo 121 > /sys/class/gpio/export");
        exec('echo "out" > /sys/class/gpio/gpio121/direction');

        To make the buzzer squeak, the pin must be driven high:

         

        exec('echo "1" > /sys/class/gpio/gpio121/value');

        To silence the buzzer, the pin must be driven low:

         

        exec('echo "0" > /sys/class/gpio/gpio121/value');

        When exiting the script, the buzzer's pin must be released:

         

        exec("echo 121 > /sys/class/gpio/unexport");

        All of this was implemented in config.js of the project. Link to repository.
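        The four exec() calls follow one pattern: write a string to a file under /sys/class/gpio. As a compact summary, here is a Python sketch that builds those same shell commands (the function name and dictionary layout are mine, for illustration):

```python
def sysfs_gpio_cmds(pin):
    """Build the shell commands for driving a GPIO pin via sysfs,
    mirroring the exec() calls used in the Node.js script."""
    base = "/sys/class/gpio"
    return {
        "export": 'echo {} > {}/export'.format(pin, base),
        "out": 'echo "out" > {}/gpio{}/direction'.format(base, pin),
        "high": 'echo "1" > {}/gpio{}/value'.format(base, pin),
        "low": 'echo "0" > {}/gpio{}/value'.format(base, pin),
        "unexport": 'echo {} > {}/unexport'.format(pin, base),
    }
```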

         

        Connecting the big red button to the Thunderboard Sense

         

        I have this kind of button:

         

         

        Interesting fact - the button is more expensive than the Thunderboard Sense Kit.

         

        Connect the button by soldering it as shown in the following schematic:

         

         

        In my case, it looks like this:

         

         

        Testing

         

        Video with test:

         

         

         

         

        But it isn't serious, because the buzzer is too weak. I changed the buzzer to a relay with a signal lamp:

         

         

         

         

        Now the big red button is really cool!

         

        Results

         

        It works. The repository with the project: https://bitbucket.org/sithings/app_thunderboardtoartik

         

        Links

         

      • Thunderboard Sense - a review and some experiments

        hlipka | 01/24/2017 | 10:25 PM

        Thanks to the generous folks at Silicon Labs, two months ago I found a package containing a Thunderboard Sense at my doorstep. Many thanks for giving me the chance to review this board! (And a big sorry I didn't finish this review sooner)

         

        Small Introduction
        This wasn't the first board from Silabs that I got to see, but it was the first one to come in a different package size. My first thought was "**bleep**, that thing is small!"
        Nice touch: this was the first Silabs (or Energy Micro) board that came in smaller packaging, which makes sense since it really is just the size of two CR2032 cells (OK, just a little bit bigger). Although the parcel it came in was way too big, as usual.

        I applied for this review because when I saw this board, I immediately had three different applications in mind:

        • run this as a small weather station on top of our house
        • use this as head-impact monitor during skiing
        • monitor the heating system for our house for function and gas leakage

        Given the number of sensors on this board, it seems quite natural to use it to monitor environmental conditions. It would be quite interesting to add a large battery and a solar panel and then mount it on top of our house (there is already a satellite antenna beam there) so I get my own weather station. It would need a clear side wall to measure ambient light, and some holes so it can measure humidity. Most sensible would be the use of a radiation shield (such as these). But since it's winter here in Europe, this doesn't look like an inviting project at the moment...


        Skiing is more interesting. Two years ago I created the "Skier impact monitor" (SkiMon for short) for an Element14 challenge. SkiMon was designed so I could track impacts to the head whenever my son (who started downhill skiing back then) would tumble or hit something.
        The Sense board would add a gyro sensor to the mix, and have it together with the acceleration sensor and the BLE chip on one small board. This means it could be even smaller than my original solution. I had this in mind for the Thunderboard React board already, but that board doesn't come with programming capabilities on board, so it was difficult there. For this project you really need to program the BLE chip, since you need to gather data at a high frequency (about 1000 samples per second) to aggregate the samples and calculate the effects of any impact (see this posting about the Head Injury Criterion). Unfortunately there are two roadblocks for this project: First, I would need to change the firmware to do all the calculations, and development requires an IAR Embedded Workbench license (more on that further down). Second, the acceleration sensor must be able to handle high impact forces (a hard impact can easily reach 100g for short moments), and the ICM-20648 can only handle 16g. Bummer. (I still know of only three chips, from Analog Devices, which can handle such forces - back then I used the ADXL375.)
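        For readers curious what such firmware would have to compute: the Head Injury Criterion scores the worst time window of the averaged acceleration. A rough sketch in Python (the function and parameter names are mine; it assumes non-negative acceleration magnitudes in g, sampled at a fixed rate):

```python
def hic(samples, rate_hz, max_window_s=0.015):
    """Head Injury Criterion over acceleration magnitudes (in g).

    HIC = max over windows [t1, t2] of (t2 - t1) * (mean acceleration)^2.5,
    with the window length capped (15 ms for HIC15).
    """
    dt = 1.0 / rate_hz
    max_len = max(1, int(round(max_window_s * rate_hz)))
    best = 0.0
    for i in range(len(samples)):
        total = 0.0
        # Grow the window one sample at a time and keep the worst score
        for j in range(i, min(len(samples), i + max_len)):
            total += samples[j]
            duration = (j - i + 1) * dt
            avg = total / (j - i + 1)
            best = max(best, duration * avg ** 2.5)
    return best
```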


        This leaves the heating system monitor project. The idea for this also came from a skiing vacation - some years ago the heating system went down while we were skiing. And since it was -20°C outside, the house was just above the freezing point when we came back. Since then I have an mbed board running which pushes the current temperature of one room to the cloud so I can look for problems. It would be nice to have a solution in place which is much smaller, and which also alerts me (by text message or email). Since I already have an OpenHAB server running, this would be a perfect integration. And the air quality sensors can tell me when something else is wrong. So I went out to see how well the Sense board works as an environmental sensor.

         

        Having a deeper look
        As I said already - the Thunderboard Sense is much smaller than the other Silabs (or Energy Micro) boards (apart from its sibling, the Thunderboard React). Both are about 45x30mm - as a comparison, you could fit two CR2032 cells on them.
        While the React board comes with just a motion sensor (acceleration and rotation) and a temperature+humidity+light sensor, the Sense kit (as the name implies) packs many more sensors onto the board:

        • acceleration and rotation
        • humidity and temperature
        • ambient light (and UV index)
        • air pressure
        • audio (MEMS microphone)
        • indoor air quality
        • (there is a hall-effect sensor mentioned, but it's not populated on my board)


        All of these do fit on such a small board, together with

         

        • a CR2032 cell
        • a Bluetooth LE SoC with antenna
        • a debugging interface (JLink - how did they manage to fit that on this board too?)
        • 4 RGB LEDs
        • and two push buttons


        This is quite a powerful package! Unfortunately it also means the board is so densely packed that there is no real silk screen designating the components (there are no part numbers for the ICs on the board). Fortunately all of the sensors are accompanied by small images showing their function (although it took me a while to discover this - the gold-coloured silk screen is quite unobtrusive).

        So after inspecting the board, the first step was to download the Thunderboard app. Since I have a set of devices, I tested on a range of them. I used my iPad Air for iOS, and my Nexus 5 for Android. Since I was not able to connect the Thunderboard React to the Nexus 5 when I played with that board, I also took my old Samsung S4 mini out of its archiving box (just to see whether it fared better). The download was easy, and I was able to connect with all three devices. So the next step was playing around with the app and taking measurements.

         

        The Thunderboard app comes with multiple pages which can be selected from the start screen (after connecting to the board). The Motion page shows the accelerometer/gyro data (even in 3D), the Environment page shows all the sensor data, and the IO page allows controlling the LEDs on the board. When looking at the sensor data, it's possible to stream all data to the cloud (Thundercloud, that is) so you can get historical data and nice graphs.

         

        Time for some measurements
        The temperature seemed to be a bit on the high side - it showed 28.5°C where my other thermometers show only 27°C (and they mostly agree with each other). But it settled down after a while, so maybe this was me handling the board. (The Sense board itself was in my room for several hours before testing, so it should have been settled.) So this seems to be fine (and probably still within spec of all sensors involved).

        Since I was in my study, I looked at workplace lighting (hey, there is an ambient light sensor in this thing, why shouldn't I use it?). After finding out how to orient the board properly (look for the small silk screen symbols for the sensors), it showed that my work desk gets about 740 lux, and with an additional direct light it goes up to 1200 lux. According to this report this is more than sufficient. But OTOH the air in my room is too dry (it's a typical winter problem).

        The next test was looking at the microphone. I originally ignored this sensor, but the Thunderboard app uses it to show the ambient noise level. And since I have a 3D printer right beneath my desk, it was interesting to see how loud it really gets. The app showed between 43 and 50 dB when the printer was idle (this is due to the fans), and it depended a little bit on the orientation of the board. So I put it in a fixed spot right at the printer's base plate (it's mounted on a wooden board). When running, the noise goes up to 60 dB with spikes up to 67 dB. Interestingly the noise level didn't account for mechanical vibrations at lower frequencies which (at least to my subjective feeling) are even louder. Either the microphone cannot handle them or they are filtered by the board's firmware somehow. As a comparison: the noise floor in my study is at about 38 dB with just my (nearly silent) PC running and the windows closed. This is quite OK, and the printer seems to be at an acceptable level.

        While doing these test prints I also looked at the air quality. I'm printing with PLA so there shouldn't be a problem, but it's always better to verify. The air quality sensor needs some start-up time, even though it shows up in the app after about 30 seconds or so. The carbon dioxide level was at 400 ppm when I started (which is nearly outside air quality - I was venting the room before), and the VOC level was at 0 ppb (which seemed strange). During printing the VOC level rose to 3 ppb, and CO2 stayed constant. Unfortunately I did not find any actual data on which VOCs the sensor measures. And all the documents I found about air quality differed in the recommended threshold levels because they looked at different kinds of VOCs. But 3 ppb should be fine according to all of them...

        For acceleration and rotation I did not run tests, since I looked at them with the React board already (although I still need to write the review for that one too, shame on me). The biggest drawback I see here is that not enough measurements are sent from the board to the app, so any graphs you make are not fine enough in their time resolution.

        And I also did not look at the air pressure - I have nothing to compare it to, and on its own it is not really interesting. I just assume that it will work properly, like the rest of the board does.

        I did experiment with the LEDs a little bit, and can confirm that you can indeed control them from the app (provided that you power the board from USB and not via the coin cell).

        Unfortunately I did not have time to measure power consumption. One would either need a connector with 1.25mm pin pitch, or need to solder to the expansion connectors. But the Sense board uses an elaborate scheme to power all the sensors only when they are needed. This way it should keep the overall power consumption quite low, especially when the sensors are needed only once in a while.

         

        How to monitor a heating system
        My next test was to see how well I can use the Thunderboard Sense (together with the app) to monitor my heating system. It's a gas heater (nearly 20 years old, but quite efficient for its time, and built robustly). I placed the board about 50 cm away and powered it from a USB supply. I also powered the S4 mini via USB (and disabled any standby function so it would run until disabled). Then I connected the board to the app, enabled cloud streaming (so I would get nice graphs) and went to work. Unfortunately I found out that the app stopped sending any data after about an hour and said it got disconnected from the board. When repeating this experiment, the app disconnected after 30 minutes. That's not nice! When I tested with my Nexus 5, it got disconnected even faster. I then tested again with the React board - it could not connect to that one either (as before). Then I switched to my iPad Air, and lo and behold, it was able to stream data to the cloud for as long as I wanted it to (or at least for as long as I could abstain from using the iPad...). So it seems either Android has a general problem with BLE (which I cannot confirm from my own work with it) or it's the Thunderboard Android app.

        So here is the graph I got from my experiments:

        environmental_data_heater

        Thundercloud allows you to download the captured data. For that I sent the link to the last data set from the app via email, opened it with my browser, and from there one can navigate and download. The CSV files can be opened with LibreOffice, where I then created a diagram showing what I need.

        It's interesting to see that the temperature curve is nearly flat. Even though the heating is configured to lower the temperature during the day by several degrees, the room with the heater stays at the same temperature. And the board was far enough away not to capture the heat from the actual flame.
        The VOC levels are interesting - I started my measurements in the morning, right after the heater had been running for longer periods to get the water hot and the rooms warm. So there were plenty of VOCs around, but their level fell over the day (since the heater runs less frequently - it doesn't need to keep the water and the rooms that warm). The same can be seen with carbon dioxide, just not as pronounced. In the afternoon the heating started again, because it's configured to heat up the rooms before we are back home.

         

        It's interesting that there are large spikes in the VOC and CO2 levels. Either the heater sometimes really produces these levels, or something peculiar happens with the air quality sensor. Unfortunately I was not able to find any data sheet for the CCS811 sensor, so I cannot dive deeper into that. (It's similar for the ICM-20648 - there is a data sheet but you need to sign an NDA for it...)

         

        Most surprising for me was that the microphone was a really good indicator of whether the heater is running or not. The flame is loud enough to be picked up well enough that one can see the heating periods, and even the longer runs in the afternoon. I found that interesting, and would try to incorporate it into a monitoring solution.

        I later ran a test with the board nearer to the heater and the flame, and then the heating periods were more visible, but still not by much. It seems one needs a thermocouple really close, or an IR/thermopile sensor, for that.

        So after all these tests I can say that the Thunderboard Sense really is a nice and very capable board. The application support for Android seems to be sub-optimal for the moment, unfortunately. To create a real monitoring solution (or a weather station), one probably needs a BLE app running on a PC (or maybe a Raspberry Pi), and customized firmware on the Sense board.

         

        Firmware development
        And this is my last topic for this review: firmware development. In the past I did several projects and experiments with the EFM32 MCUs, and was quite spoiled by Simplicity Studio and all the support. I always found development easy and well-supported. This is especially true since I'm a long-time Linux user and do all my work there whenever possible. And Energy Micro, and later Silabs, supported that well. But when I looked into the documentation for BLE development, the picture was completely different. For BLE development one needs an IAR Embedded Workbench license, since the precompiled BLE stack library only works with that. I can understand that the BLE stack is delivered as a library (regulatory reasons, most probably), but requiring a separate tool for working with it? IAR only runs on Windows, and they do not even state prices on their web site (but from what I found it seems to be several thousand euros).

        Fortunately, after some swearing and spending time with Google, I found two ways around that:
        First, there is work underway to have the BLE stack compile with GCC (so it should work with Simplicity Studio again). It's not officially released, but a preview is available from the knowledge base (and also see the discussion). Maybe in some months this can be a stable solution.

        The second solution is to use BGScript. I always thought it was intended for the BlueGiga modules only, but the developer's guide explains how to use it for other EFR32 chips too. You just need to know the correct EFR32 part number and hand it to the BGScript compiler. So I tested this with one of the example BGScript firmwares (which implements the temperature sensor BLE profile), compiled it and uploaded it to the Sense board (via the "Flash Programmer" tool), and it appeared as a different BLE device (and needs a different app). Even though I could connect fine to the board, the app could not read the temperature. The reason: the firmware doesn't know about the power and interrupt controller of the Sense board, so the sensor is still disabled. So the biggest problem with using BGScript is that you cannot re-use the Sense board demo firmware to talk to all the sensors, but need to implement everything from scratch. And since no data sheets are available for some of them, this seems difficult.
        So it seems I should wait for the GCC BLE stack to be officially supported.
         

        Aftermath

        After playing around with programming I wanted to re-install the original firmware (to get some screenshots of the app for this review). It is only available when BLE stack 2.0.1 is installed (which is not the newest version), and can be selected from the "Getting started" section of Studio. Unfortunately, after doing so, only the S4 mini would find the board in the Thunderboard app. I suspect that the board initially came with a newer firmware which is not available in the BLE SDK (see the forum discussion). So it seems I really need to look into the development setup...

         

        Outlook

        So what's my verdict for this review? The Thunderboard Sense surely is a fine board, but the software support (especially for Android) needs some more work. Also, BLE development is only feasible right now if you are willing to invest some real money (at least for hobbyists and small companies) or willing to work with beta and preview versions (BGScript doesn't seem capable enough for my needs right now, but I need to have a deeper look into it).

        I surely will experiment with, and follow the state of, the BLE stack GCC support. When it reaches a usable state I will come back and look at how to implement my own firmware. Only then will I decide whether it will end up on my roof or near the heater.

         

      • Thunderboard Sense - a review and some experiments

        hlipka | 01/24/2017 | 10:25 PM

        Thanks to the generous folks at Silicon labs, two months ago I found a package containing a Thunderboard Sense at my doorstep. Many thanks for giving me the chance to review this board! (And a big sorry I didn't finish this review sooner)

         

        Small Introduction
        This wasn't the first board from Silabs that I got to see, but it was the first one to come in a different package size. My first thought was "**bleep**, that thing is small!"
        Nice touch: this was the first Silabs (or Energy Micro) board that came in a smaller packaging, which makes sense since it really is just the size of 2 CR2032 cells (OK, just a little bit bigger). Although the parcel it came in was way too big as usual Robot Sad

        I applied for this review since when I saw this board I had immediately three different applications in mind:

        • run this as a small weather station on top of our house
        • use this as head-impact monitor during skiing
        • monitor the heating system for our house for function and gas leakage

        Given the number of sensors on this board it seams quite natural to use it to monitor environmental conditions. It would be quite interesting to add a large battery and a solar panel and then mount it on the top of our house (there is already a satellite antenna beam there) so I get my own weather station. It would need a clear side wall to measure ambient light, and some holes so it can measure humidity. Most sensible would be the usage of a radiation shield (such as these). But since its winter here in Europe this doesn't look like an inviting project at the moment...


        Skiing is more interesting. Two years ago I create the "Skier impact monitor" (SkiMon for short) for an Element14 challenge. SkiMon was designed so I could track the effects to the head whenever my son (who started downhill skiing back then) would tumble or hit something.
        The Sense Board would add a Gyro sensor to the mix, and have them together with the acceleration monitor and the BLE chip on one small board. This means it could be even smaller than my original solution. I had this in mind for the Thunderboard React board already, but this doesn't come with programming capabilities on board, so it was difficult there. For this project you really need to program the BLE chip, since you need to gather data with a high frequency (about 1000 samples per second) to aggregate them together and calculate the effects of any impact (see this posting about the Head Injury Criterion). Unfortunately there are two road blocks for this project: First I need to change the firmware to do all the calculations, and development requires an IAR embedded workbench license (more to that further down). Second the acceleration sensor must be able to handle high impact forces (a hard impact can easily reach 100g for some short moments). And  the ICM-20648 can only handle 16g. Bummer Robot Sad (I still do know only about three chips from Analog Devices which can handle such forces - back then I used the ADXL375).


        This leaves the heating system monitor project. The idea for this also came from a skiing vacation - some years ago the heating system went down while we were away skiing. And since it was -20°C outside, the house was just above the freezing point when we came back. Since then I have had an mbed board running which pushes the current temperature of one room to the cloud so I can watch for problems. It would be nice to have a solution in place which is much smaller, and which also alerts me (by text message or email). Since I already have an OpenHAB server running, this would be a perfect integration. And the air quality sensors can tell me when something else is wrong. So I went out to see how well the Sense board works as an environmental sensor.

         

        Having a deeper look
        As I said already - the Thunderboard Sense is much smaller than the other Silabs (or Energy Micro) boards (apart from its sibling, the Thunderboard React). Both are about 45x30mm - as a comparison, you could fit two CR2032 cells on them.
        While the React board comes with just a motion sensor (acceleration and rotation) and a temperature+humidity+light sensor, the Sense kit (as the name implies) packs many more sensors onto the board:

        • acceleration and rotation
        • humidity and temperature
        • ambient light (and UV index)
        • air pressure
        • audio (MEMS microphone)
        • indoor air quality
        • (there is a hall-effect sensor mentioned, but it's not populated on my board)


        All of these do fit on such a small board, together with

         

        • a CR2032 cell
        • a Bluetooth LE SoC with antenna
        • a debugging interface (JLink - how did they manage to fit that on this board too?)
        • 4 RGB LEDs
        • and two push buttons


        This is quite a powerful package! Unfortunately it also means the board is so densely packed that there is no real silk screen designating the components (there are no part numbers for the ICs on the board). Fortunately all of the sensors are accompanied by small images showing their function (although it took me a while to discover this - the gold-coloured silk screen is quite unobtrusive).

        So after inspecting the board, the first step was to download the Thunderboard app. Since I have a set of devices, I tested on a range of them. I used my iPad Air for iOS, and my Nexus 5 for Android. Since I was not able to connect the Thunderboard React to the Nexus 5 when I played with that board, I also took my old Samsung S4 mini from its archiving box (just to see whether it fared better). Download was easy, and I was able to connect with all three devices. So the next step was playing around with the app and taking measurements.

         

        The Thunderboard app comes with multiple pages which can be selected from the start screen (after connecting to the board). The Motion page shows the accelerometer / gyro data (even in 3D Robot Happy), the Environment page shows all the sensor data, and the IO page allows you to control the LEDs on the board. When looking at the sensor data, it's possible to stream all data to the cloud (Thundercloud, that is) so you can get historical data and nice graphs.

         

        Time for some measurements
        The temperature seemed to be a bit on the high side - it showed 28.5°C where my other thermometers show only 27° (and they mostly agree with each other). But it settled down after a while, so maybe this was me handling the board. (The Sense board itself had been in my room for several hours before testing, so it should have been settled.) So this seems to be fine (and probably still within spec of all sensors involved).

        Since I was in my study, I looked at workplace lighting (hey, there is an ambient light sensor in this thing, why shouldn't I use it?). After finding out how to orient the board properly (look for the small silk screen symbols for the sensors) it showed that my work desk has about 740 lux, and with an additional direct light it goes up to 1200 lux. According to this report this is more than sufficient. But OTOH the air in my room is too dry (it's a typical winter problem).

        The next test was looking at the microphone. I had originally ignored this sensor, but the Thunderboard app uses it to show the ambient noise level. And since I have a 3D printer right beneath my desk, it was interesting to see how loud it really gets. The app showed between 43 and 50 dB when the printer was idle (this is due to the fans), and the reading depended a little bit on the orientation of the board. So I put it in a fixed spot right at the printer's base plate (it's mounted on a wooden board). When running, the noise goes up to 60 dB with spikes up to 67 dB. Interestingly the noise level didn't account for mechanical vibrations at lower frequencies which (at least to my subjective feeling) are even louder. Either the microphone cannot handle them or they are filtered by the board's firmware somehow. As a comparison: the noise floor in my study is at about 38 dB with just my (nearly silent) PC running and the windows closed. This is quite OK, and the printer seems to be at an acceptable level.

        While doing these test prints I also looked at the air quality. I'm printing with PLA so there shouldn't be a problem, but it's always better to verify Robot Happy The air quality sensor needs some start-up time, even though it shows up in the app after about 30 seconds or so. The carbon dioxide level was at 400 ppm when I started (which is nearly outside air quality - I had been venting the room before), and the VOC level was at 0 ppb (which seemed strange). During printing the VOC level rose to 3 ppb, and CO2 stayed constant. Unfortunately I did not find any actual data on which VOCs the sensor measures. And all the documents I found about air quality differed in the recommended threshold levels because they looked at different kinds of VOCs. But 3 ppb should be fine according to all of them...

        For acceleration and rotation I did not run tests, since I had looked at them with the React board already (although I still need to write the review for that too, shame on me). The biggest drawback I see here is that not enough measurements are sent from the board to the app, so any graphs you make are not fine enough in their time resolution.

        And I also did not look at the air pressure - I have nothing to compare it to, and on its own it is not really interesting. I just assume that it will work properly, like the rest of the board does.

        I did experiment with the LEDs a little bit, and can confirm that you can indeed control them from the app (provided that you power the board from USB and not via the coin cell).

        Unfortunately I did not have time to measure power consumption. One would either need a connector with 1.25mm pin pitch, or need to solder to the expansion connectors. But the Sense board uses an elaborate scheme to power all the sensors only when they are needed. This way it should keep the overall power consumption quite low, especially when the sensors are needed only once in a while.

         

        How to monitor a heating system
        My next test was to see how well I can use the Thunderboard Sense (together with the app) to monitor my heating system. It's a gas heater (nearly 20 years old, but quite efficient for that time, and built robustly). I placed the board about 50 cm away and powered it from a USB supply. I also powered the S4 mini via USB (and disabled any standby function so it would run until disabled). Then I connected the board to the app, enabled cloud streaming (so I would get nice graphs) and went to work. Unfortunately I found out that the app stopped sending any data after about an hour and said it got disconnected from the board. When repeating this experiment, the app disconnected after 30 minutes. That's not nice! When I tested with my Nexus 5, it got disconnected even faster. I then tested again with the React board - it could not connect to that one either (as before). Then I switched to my iPad Air, and lo and behold, it was able to stream data to the cloud for as long as I wanted it to (or at least as long as I could abstain from using it...). So it seems either Android has a general problem with BLE (which I cannot confirm from my own work with it) or it's the Thunderboard Android app.

        So here is the graph I got from my experiments:

        environmental_data_heater

        Thundercloud allows you to download the captured data. For that I sent the link to the last data set from the app via email, opened it with my browser, and from there one can navigate and download. The CSV files can be opened with LibreOffice, and from there I created a diagram showing what I need.

        It's interesting to see that the temperature curve is nearly flat. Even though the heating is configured to lower the temperature during the day by several degrees, the room with the heater stays at the same temperature. And the board was far enough away not to capture the heat from the actual flame.
        The VOC levels are interesting - I started my measurements in the morning, right after the heater had been running for longer periods to get the water hot and the rooms warm. So there were plenty of VOCs around, but their level fell over the day (since the heater runs less frequently - it doesn't need to keep the water and the rooms that warm). The same can be seen with carbon dioxide, just not as pronounced. In the afternoon the heating started again, because it's configured to heat up the rooms before we are back at home.

         

        It's interesting that there are large spikes in the VOC and CO2 levels. Either the heater sometimes really produces these levels, or something peculiar happens with the air quality sensor. Unfortunately I was not able to find any data sheet for the CCS811 sensor, so I cannot dive deeper into that. (It's similar for the ICM-20648 - there is a data sheet, but you need to sign an NDA for it...)

         

        Most surprising for me was that the microphone was a really good indicator of whether the heater is running or not. The flame is loud enough to be picked up, so one can clearly see the heating periods, and even the longer runs in the afternoon. I found that interesting, and would try to incorporate it into a monitoring solution.

        I later ran a test with the board nearer to the heater and the flame, and then the heating periods were more visible in the temperature, but still not by much. It seems one would really need a thermocouple close by, or an IR / thermopile sensor, for that.

        So after all these tests I can say that the Thunderboard Sense really is a nice board and very capable. Unfortunately, the application support for Android seems to be sub-optimal at the moment. To create a real monitoring solution (or a weather station), one probably needs a BLE app running on a PC (or maybe a Raspberry Pi), and customized firmware on the Sense board.

         

        Firmware development
        And this is my last topic for this review: firmware development. In the past I did several projects and experiments with the EFM32 MCUs, and was quite spoiled by Simplicity Studio and all the support. I always found development easy and well-supported. This is especially true since I'm a long-time Linux user and do all my work there whenever possible. And Energy Micro, and later Silabs, supported that well. But when I looked into the documentation for BLE development, the picture was completely different. For BLE development one needs an IAR Embedded Workbench license, since the precompiled BLE stack library only works with that. I can understand that the BLE stack is delivered as a library (regulatory reasons, most probably), but requiring a separate tool for working with it? IAR only runs on Windows, and they do not even state prices on their web site (but from what I found it seems to be several thousand Euros).

        Fortunately after some swearing and spending time with Google I found two ways around that:
        First, there is work underway to have the BLE stack compile with GCC (and so it should work with Simplicity Studio again). It's not officially released, but a preview is available from the knowledge base (and also see the discussion). Maybe in some months this can be a stable solution.

        The second solution is to use BGScript. I always thought that this is intended for the BlueGiga modules only, but the developer's guide explains how to use it for other EFR32 chips too. You just need to know the correct EFR32 part number, and hand it to the BGScript compiler. So I tested this with one of the example BGScript firmwares (which implements the temperature sensor BLE profile), compiled it and uploaded it to the Sense board (via the "Flash Programmer" tool), and it then appeared as a different BLE device (and needs a different app). Even though I could connect fine to the board, the app could not read the temperature. The reason: it doesn't know about the power and interrupt controller of the Sense board, so the sensor stays disabled. So the biggest problem with using BGScript is that you cannot re-use the Sense board demo firmware to talk to all the sensors, but need to implement everything from scratch. And since for some of them no data sheets are available, this seems difficult.
        So it seems I should wait for the GCC BLE stack to be officially supported.

        Aftermath

        After playing around with programming I wanted to re-install the original firmware (to get some screenshots of the app for this review). It is only available when BLE stack 2.0.1 is installed (which is not the newest version), and can be selected from the "Getting started" section of Studio. Unfortunately, after doing so, only the S4 mini would find the board in the Thunderboard app. I suspect that the board initially came with a newer firmware which is not available in the BLE SDK (see the forum discussion). So it seems I really need to look into the development setup...

         

        Outlook

        So what's my verdict for this review? The Thunderboard Sense surely is a fine board, but the software support (especially for Android) needs some more work. Also, BLE development is currently only feasible if you are willing to invest some real money (at least for hobbyists and small companies) or willing to work with beta and preview versions (BGScript doesn't seem capable enough for my needs right now, but I need to have a deeper look into it).

        I surely will experiment with, and follow the state of, the BLE stack GCC support. When it reaches a usable state I will come back and look at how to implement my own firmware. Only then will I decide whether it will end up on my roof or near the heater Robot Happy

         

      • Threading Christmas - Thunderboard Sense on Steroids

        Alf | 01/03/2017 | 12:16 PM

        This project is an upgrade of my previous project, so not too much code will be included here. What I wanted to do was to take the Thunderboard Sense and create wireless Christmas tree lights. Luckily enough we have some silicone balls around here that fit a Thunderboard Sense and a double-A battery holder, which is perfect for the purpose of creating electronic Christmas baubles. You can see the boards, the battery holders and the silicone balls in this workshop picture, taken on Christmas Eve:

        Thread - Workshop

         

        And here's a couple more workshop pictures:

        Thread - Christmas lights

        My dad installing the wires for holding the baubles on the tree (obligatory white shirt on Christmas Eve).

         

        Thread - W

        Several units getting their soldering finished.

         

        Thread - Workshop

        Installing the electronics in the silicone cases.

         

        Thread - Christmas tree lights

        All the baubles ready for the tree.

         

        Thread - Christmas tree lights

        Getting help with putting them up.

         

        Thread - Christmas tree lights

        And the last two baubles. Of course they need to go on the exact same branch.

         

        The program has been updated slightly: the clients now poll at a one-second interval instead of every 10 seconds, which is done with this line of code:

        #define REPORT_PERIOD_MS (1 * MILLISECOND_TICKS_PER_SECOND)

         

        The server was also ported to a Thunderboard Sense so I can have a sexier remote. It is now also possible to turn the lights off by pressing both buttons simultaneously, so the new halButtonIsr looks like this:

        void halButtonIsr(uint8_t button, uint8_t state)
        {
          // button: BUTTON0 or BUTTON1
          // state:  BUTTON_RELEASED or BUTTON_PRESSED

          // Track each button in its own bit of buttonsPressed.
          if(button == BUTTON0)
            buttonsPressed = (buttonsPressed & ~0x1) | (state << 0x0);
          else if(button == BUTTON1)
            buttonsPressed = (buttonsPressed & ~0x2) | (state << 0x1);

          if(buttonsPressed == 0x3)
          {
            // Both buttons pressed: turn the lights off.
            colorOff     = 0x1;
            colorStep    = 0;
            colorFixed   = 0x1;
            colorStepped = 0;
          }
          else if((button == BUTTON0) && (state == BUTTON_PRESSED))
          {
            // BUTTON0: switch to the next fixed color.
            colorOff   = 0;
            colorStep  = 0;
            colorFixed = 0x1;
            incrementColorIndex();
          }
          else if((button == BUTTON1) && (state == BUTTON_PRESSED))
          {
            // BUTTON1: step through the colors instead.
            colorOff     = 0;
            colorStep++;
            colorFixed   = 0;
            colorStepped = 0;
          }

          // Show the selected color on the server's own LEDs
          // (white when the lights have just been turned off).
          if(!colorOff)
          {
            BOARD_rgbledEnable( true, 0xf );
            BOARD_rgbledSetColor(colorTableSine[colorIndex][0],
                                 colorTableSine[colorIndex][1],
                                 colorTableSine[colorIndex][2]);
            ledOn = 0x1;
          }
          else
          {
            BOARD_rgbledEnable( true, 0xf );
            BOARD_rgbledSetColor(0x80, 0x80, 0x80);
            ledOn = 0x1;
          }
          colorHold = 0;
        }

         

        You can also see that the LEDs of the server are lit up according to the new color. They are turned off again in the handler for GET requests:

        void clientGetHandler(const EmberCoapMessage *request)
        {
          // Requests from clients are sent as CoAP GET requests to the "client/get"
          // URI.

          EmberCoapCode responseCode;

          if (state != ADVERTISE) {
            responseCode = EMBER_COAP_CODE_503_SERVICE_UNAVAILABLE;
          } else {
            if (!colorFixed)
              incrementColorIndex();
            else if (colorHold <= 5)
              colorHold++;

            // Disabling the server LED after a while
            if (ledOn && colorHold > 5)
            {
              BOARD_rgbledEnable( false, 0xf );
              ledOn = 0x0;
            }

            coapmessage[0] = colorOff ? 0x0 : colorTableSine[colorIndex][0];
            coapmessage[1] = colorOff ? 0x0 : colorTableSine[colorIndex][1];
            coapmessage[2] = colorOff ? 0x0 : colorTableSine[colorIndex][2];

            emberAfCorePrint("Sending %ld %ld %ld to client at ",
                             coapmessage[0], coapmessage[1], coapmessage[2]);
            emberAfCoreDebugExec(emberAfPrintIpv6Address(&request->remoteAddress));
            emberAfCorePrintln("");
            responseCode = EMBER_COAP_CODE_205_CONTENT;
          }

          if (emberCoapIsSuccessResponse(responseCode)
              || request->localAddress.bytes[0] != 0xFF) { // not multicast
            emberCoapRespond(responseCode, coapmessage, 3); // Payload
          }
        }

        And that's it! Here's a video demonstrating the new features on our Christmas tree: