• Building a Magnetic Alarm System for the Giant Gecko Series 1 STK

        Siliconlabs | 08/242/2017 | 05:17 AM

        This project is created by Silicon Labs’ summer intern Rikin Tanna. 




        A magnetic alarm system uses a Hall effect magnetic sensor and a supporting magnet attached to the door frame to determine whether the door is open or closed. For added security, this project includes a notification service that sends a message to your mobile phone when the alarm is triggered. Because the system has no moving parts, it is very reliable.




        Materials Used:


        EFM32 Giant Gecko GG11 Starter Kit (SLSTK3701A)

        WGM110 Wi-Fi Expansion Kit (SLEXP4320A)

        Hall Effect Magnetic Sensor (Si7210)




        My goal with this project was to demonstrate an important use case for the Si7210 as a component of an alarm system. Given that Silicon Labs is an IoT company, I figured it would be beneficial to use IFTTT, an IoT workflow service that connects two or more internet-enabled devices, to demonstrate our GG11 STK being used with an established IoT enabler. With this service included, I could also showcase the WGM110 Wi-Fi Expansion Kit working with the GG11 STK. The GG11 STK was chosen for its onboard Si7210 sensor, its compatibility with the WGM110 Wi-Fi Expansion Kit, and its recent launch (to expand its demo portfolio).




        The demo is split into 3 phases:

        1. Wi-Fi Setup – The first phase of the demo configures the WGM110. Here, the WGM110 boots up, sets its operating mode to client, connects to the user’s access point, and enters a low-power state (Power Mode 4) to wait for further commands. During configuration, status messages are displayed on the LCD as the GG11 receives responses from the WGM110.
        2. Calibration – The second phase of the demo calibrates the Si7210 digital output pin thresholds based on the user’s “closed door” position (magnetic field reading).
        3. Operation – The third phase is operation. Closing and opening the door (crossing the magnetic field threshold) will cause the Si7210 digital output pin to toggle, which will result in LED0 flashing. Additionally, the output pin will toggle if a second magnet is brought in to try and tamper with the alarm system. When the alarm is triggered, the GG11 will command the WGM110 to contact the IFTTT servers to send a message to the user’s mobile phone.





        GG11 STK:

        The GG11 STK was programmed using Simplicity Studio. Simplicity provides an ample array of examples and demos to help beginners get started with Silicon Labs MCUs (this was my first experience with SiLabs products).


        Below is a representation of data flow for the project. 






        The WGM110 is a versatile part: it can act as a Wi-Fi client, a local access point, or a Wi-Fi Direct (Wi-Fi P2P) interface. In this system, the WGM110 acts as a client and is a slave to the host GG11 MCU. Communication uses the BGAPI command-response protocol over SPI. Debugging this proved difficult, as two individual MCUs are involved, but the Saleae Logic analyzer allowed me to view the communication between the devices and fix the issues I encountered. Below is a capture of a typical boot correspondence.





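        The command-response exchange can be pictured with a small sketch. The actual BGAPI byte layout is defined by Silicon Labs' BGLib headers; the four-byte header below (type, payload length, class, method) is a simplified stand-in for illustration only, as are the numeric IDs.

```python
# Illustrative sketch of a command-response exchange in the style of BGAPI.
# The real byte layout lives in the BGLib headers; this header (type, length,
# class, method) and the MSG_TYPE_COMMAND value are simplified stand-ins.

def build_command(cls_id, method_id, payload=b""):
    """Frame a command: 1-byte type, 1-byte payload length, class, method."""
    MSG_TYPE_COMMAND = 0x00  # hypothetical value, for illustration only
    return bytes([MSG_TYPE_COMMAND, len(payload), cls_id, method_id]) + payload

def parse_response(frame):
    """Split a response frame back into its header fields and payload."""
    length = frame[1]
    cls_id, method_id = frame[2], frame[3]
    return cls_id, method_id, frame[4:4 + length]

cmd = build_command(0x01, 0x02, b"\x05")         # e.g. "set operating mode"
cls_id, method_id, payload = parse_response(cmd)  # loopback, for demonstration
```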

        When the alarm is triggered, the WGM110 establishes a TCP connection with the IFTTT server and sends an HTTP GET request to the path specified by the IFTTT applet I created. Unfortunately, IFTTT only allows free users to create private applets, but creating the applet was simple: step-by-step instructions for creating my applet can be found in the project ReadMe file.
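        For reference, the request itself is a plain HTTP GET against an IFTTT Webhooks-style trigger URL. The event name and key below are placeholders; the exact path depends on the applet you create.

```python
# Sketch of the raw HTTP GET request the GG11 asks the WGM110 to send once a
# TCP connection to the IFTTT server is open. "door_opened" and the key are
# placeholders; /trigger/{event}/with/key/{key} is IFTTT's Webhooks URL scheme.

def build_ifttt_request(event, key, host="maker.ifttt.com"):
    path = "/trigger/%s/with/key/%s" % (event, key)
    return ("GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n"
            "\r\n" % (path, host))

request = build_ifttt_request("door_opened", "YOUR_IFTTT_KEY")
```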




        The GG11 STK comes with an onboard Si7210 Hall effect magnetic sensor. It can detect changes in magnetic field down to a hundredth of a millitesla (mT), which is more than enough sensitivity for this use case. The part has multiple OTP registers that store various part configurations, and the calibration process described earlier writes, over I2C, to the register that determines the digital output threshold. The Si7210 also features a tamper threshold, in case someone tries to fool the alarm by using a second magnet to replace the original magnet as the door opens. This threshold is configured to be slightly greater than the original calibration threshold to detect even the slightest tamper. When either threshold is crossed, the part automatically toggles its digital output pin, allowing any programmer to easily interface the sensor into their designs.


        Using this Project:


        This project provides a good starting point for anyone who wants to utilize the Si7210 Hall Effect sensor and/or the WGM110 Wi-Fi Expansion kit working in sync with the GG11 STK. The expansion kit can also be used with the PG1 or PG12 boards, but my code may require a few changes in initialization, depending on which specific peripherals are used. 


        Below is a slide that details all the various features that I utilized for each part. Feel free to download the project (link below) and use my code to get started on your own projects!




        Source Files: 


        • Magnetic alarm zip file (attached) 


        Other EFM32GG11 Projects: 

      • Building a Spectrum Analyzer for the Giant Gecko Series 1

        Siliconlabs | 08/241/2017 | 08:35 AM

        This project is created by Silicon Labs’ summer intern David Schwarz. 


        [Image: spectrum GG11.png]




        This project is a real-time embedded spectrum analyzer with a waterfall spectrogram display. The spectrum analyzer displays the most recently captured magnitude response, and the spectrogram provides a running history of the changes in frequency content over time.




        The original intent of this project was to demonstrate real-time digital signal processing (DSP) using the Giant Gecko 11 MCU and the CMSIS DSP library. Since many use cases for real-time DSP on an embedded platform pertain to signal characterization and analysis, I decided that a spectrum analyzer would be a good demonstration.




        The spectrum analyzer works by capturing a buffer of data from a user-selected input source: either the microphone on the Giant Gecko 11 Starter Kit (STK) or the channel X input of the primary analog-to-digital converter (ADC0) on the Giant Gecko 11 device. It then computes and displays the frequency response of that data. The display also shows a spectrogram to give the user information about how a signal is changing over time. The format used here is a ‘waterfall’ spectrogram, where the X axis represents frequency, the Y axis represents time, and the color of the pixel at each coordinate corresponds to the magnitude.


        Below is a video demonstration of the final project; the legend on the right shows how the spectrogram color scale relates to intensity.



        There are two parts to the video. One is for the mic input using classical music. The other is sweeping the ADC input using a function generator.


        [Image: Spectrogram Data flow Block Diagram (1).png]


        The block diagram above shows the steps required to convert the incoming time domain data to visual content. Certain parts of the process demanded specific implementations in order to function in real time.


        I found it necessary to implement dual buffering to allow for simultaneous data capture and processing, which allowed for lower overall latency without losing sections of incoming data.


        The microphone data also required further processing to properly format the incoming bytes. This needed to be done post capture, as input data was obtained using direct memory access (DMA).


        Finally, I chose to only normalize and display 0 to 8 kHz frequency data, since most common audio sources, including recorded music, don’t contain much signal energy above 8 kHz. However, to avoid harmonic aliasing, I decided to oversample at a frequency of 34133 Hz. I used this specific sampling frequency to give me 512 samples (one of the few buffer sizes the ARM FFT function supports) in 15 milliseconds. This 15-millisecond time constraint is very important for maintaining real-time functionality, as humans are very sensitive to latency when a video source lags audio.
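        A quick check of the arithmetic above confirms the buffer timing and the size of the displayed band:

```python
# Verifying the sampling arithmetic: 512 samples at 34133 Hz fill one FFT
# buffer in roughly 15 ms, and the displayed band is limited to 0-8 kHz.

FS = 34133                            # oversampled rate (Hz)
N = 512                               # FFT size supported by CMSIS-DSP
buffer_time_ms = N / FS * 1000.0      # ~15 ms per buffer
bin_width = FS / N                    # ~66.7 Hz per FFT bin
display_bins = int(8000 / bin_width)  # number of bins covering 0-8 kHz
```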


        Using This Project:


        This project provides a good starting point for anyone wanting to implement real time DSP on the Giant Gecko microcontroller. It can be run on an out of the box Giant Gecko Series 1 STK, or it can be configured with an analog circuit or module that generates a 0 to 5V signal as the input source. The complete source code and Simplicity Studio project files are linked below, along with inline and additional documentation that should be useful in understanding how the application works.


        The ADC input mode and DSP functionality of this project are also fully compatible with any Silicon Labs STK using an ARM Cortex-M4 core (e.g., Wonder, Pearl, Flex, Blue, and Mighty Geckos). The microphone and color LCD, however, are not present on other STKs.


        Source Files:


      • EFM32 Voice Recognition Project Using Giant Gecko's Temperature/Humidity Sensor

        Siliconlabs | 08/237/2017 | 08:37 AM

        This project is created by Silicon Labs’ summer intern Cole Morgan.


        Background and motivation:


        This project is a program that implements voice recognition for the Giant Gecko 11 (GG11) using the starter kit’s temperature and humidity sensor and the Wizard Gecko Module. My motivation to work on this project was mainly that I had written another project implementing voice recognition for the GG11 using the starter kit’s LEDs, and I wanted a more advanced application for my voice recognition algorithm.


        The program works by first learning the user’s voice through a short training protocol in which the user says each keyword a couple of times when prompted. After it has learned the user’s voice, the user can set either a temperature or a humidity threshold by saying “set” followed by either “temp” for temperature or “humid” for humidity. The user then says a number from 0-99 one digit at a time to set the threshold value; for example, “one nine” is interpreted as 19, so saying “set humid four two” would set a humidity threshold of 42%. Then, if the humidity measured by the onboard sensor crosses this threshold, the user receives a text.
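        The multi-word command flow can be sketched as a tiny parser; the keyword table and return format here are illustrative, not the project's actual code:

```python
# Sketch of the command flow: "set" arms the parser, the next keyword picks
# the quantity, and two digit words build the threshold one digit at a time.

DIGITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
          "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

def parse_command(words):
    """Return (quantity, threshold) for e.g. ['set', 'humid', 'four', 'two']."""
    if len(words) != 4 or words[0] != "set":
        return None                       # "set" is the trigger word
    if words[1] not in ("temp", "humid"):
        return None
    tens, ones = DIGITS[words[2]], DIGITS[words[3]]
    return words[1], tens * 10 + ones

print(parse_command(["set", "humid", "four", "two"]))  # -> ('humid', 42)
```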




        Using my previous voice recognition project as a base, I first added the support for multiple word commands using the first command word “set” as a kind of trigger so that the program won’t get stuck in the wrong stage of a command. One side effect of using a lot more keywords than the previous project was that I had to stop storing the reference MFCC values in Flash, as there wasn’t enough space for all of them.


        The next stage in my development was to interface with the Si7021 temperature/humidity sensor on the GG11 starter kit. This stage was quite simple because there was already a demo for the GG11 that interacted with the Si7021, so all I had to do was integrate the LCD.


        Then, I interfaced the Wizard Gecko Module (WGM) to connect to IFTTT via Wi-Fi and send an HTTP GET request. This part was the most difficult of the project because I had never worked with communication over Wi-Fi or sending HTTP requests. I designed two different IFTTT triggers, one for temperature and one for humidity, so that the SMS alert message could be tailored to the type of threshold trigger.







        Accomplishments:

        • I adapted my voice recognition to work accurately and quickly with a larger bank of keywords
        • I successfully created two IFTTT applets to send alerts quickly to a phone number
        • The program is written in a way that is very easily adaptable for many different uses
          • It is well modularized, so if any part of the program is useful to a specific application, it can be easily separated from the rest of the code


        Lessons Learned:

        • I learned how to scale an algorithm to work with a larger set of data
        • I learned how to use web requests to interface a microcontroller with applications through the Internet
        • I learned about large program organization and good general coding practice: this was the biggest software project I have written by far


        Potential Use Cases:


        • Voice-controlled Nest thermostat
        • A shipping container application where temperature or humidity in an area needs to be monitored to make sure it is at a certain level


        Materials Used:


        • GG11 STK with Si7021 and microphone
        • Pop filter for STK microphone
        • Wizard Gecko Module
        • Simplicity Studio IDE
        • CMSIS DSP Library


        Source Code: 


        • VRTempHumid (attached) 
      • EFM32 Voice Recognition Project Using Giant Gecko's LEDs

        Siliconlabs | 08/237/2017 | 08:26 AM

        This project is created by Silicon Labs’ summer intern Cole Morgan.


        Background and motivation:


        This project is a program that implements voice recognition for the GG11 using the starter kit’s onboard LEDs. My motivation to work on this project was mainly that I have never done anything remotely close to voice recognition before, and I thought it would be a good challenge. But another motivation was also that I am very interested in the Amazon Echo and the other emerging home assistant technologies.
        The program works by first learning your voice through a small training protocol where the user says each keyword a couple of times when prompted. After the program has learned the user’s voice, the user can turn the LED on, red, blue, green, or off simply by saying “on”, “red”, “blue”, “green”, or “off”.




        My first step was getting audio input from the microphone into the microcontroller and storing it. This proved a little more difficult than I expected because I hadn’t worked with SPI or I2S before. In addition to this, I also had to design a sound detection system that captures as much significant sound as possible. I did this by squaring and summing the elements of the state buffer of the bandpass FIR filter that I apply on each sample and then setting a threshold for the result of that operation. This system turned out to be extremely useful because, in addition to saving processor time, it also time-aligned the data to be processed.
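        The detection idea can be sketched in a few lines. The threshold below is an arbitrary placeholder; in the project it was applied to the state buffer of the bandpass FIR filter and tuned by hand:

```python
# Sketch of the sound-detection idea: square and sum the samples in the
# filter's state buffer and compare the total energy against a threshold.

def is_sound_present(state_buffer, threshold=1000.0):
    energy = sum(x * x for x in state_buffer)
    return energy > threshold

silence = [0.1] * 64             # low-level noise floor
speech = [5.0, -4.0, 6.0] * 32   # louder, structured signal
```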


        After this step, I began to implement the actual voice recognition. At first, I thought I could just find a library online and integrate it easily, but this turned out to be far from true. Most voice recognition libraries are much too big for a microcontroller, even one with a very large Flash memory of 2 MB like the GG11. There was one library I found that was written for Arduino, but it didn’t work very well. So, I began the process of writing my own voice recognition algorithm.


        After a lot of research, I decided to use Mel-frequency cepstral coefficients (MFCCs) as the basis for my algorithm. There are a number of other audio feature coefficients, but MFCCs seemed to be the most effective. The calculation of MFCCs is essentially several signal processing techniques applied in a specific order, so I used the CMSIS ARM DSP library for those functions.


        After beginning work on this, I created a voice training algorithm to allow the program to learn any voice and adapt to any user. The training program has the user say each word a configurable number of times, and then calculates the MFCCs of that person’s pronunciation of the keyword and stores them in flash memory.


        Next, because the input data was time-aligned, I could simply put all the MFCCs for the 4 buffers in one array and use that as the basis for comparison. In addition to this, I also calculated and stored the first derivative (delta coefficients) of the MFCC data to increase accuracy.
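        As a sketch, the delta coefficients can be computed as simple frame-to-frame differences; the project's exact delta formula may differ:

```python
# Delta (first-derivative) coefficients as plain differences between
# neighbouring MFCC frames; each frame is a vector of coefficients.

def delta_coefficients(frames):
    """frames: list of per-buffer MFCC vectors; returns frame-to-frame deltas."""
    deltas = []
    for prev, curr in zip(frames, frames[1:]):
        deltas.append([c - p for p, c in zip(prev, curr)])
    return deltas

mfccs = [[1.0, 2.0], [1.5, 1.0], [2.0, 3.0]]  # toy 2-coefficient frames
```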







        Accomplishments:

        • I wrote my own voice recognition algorithm for microcontrollers with relatively little RAM and Flash memory usage
          • Can store up to 10 keywords in Flash and up to 1,150 keywords in RAM (reaching this number would require modifying the program to skip Flash storage and use fewer trainings)
        • Successfully created a voice recognition and training technique that works for everyone, no matter their accent or voice, with an excellent success rate
        • The program is written in a way that is very easily adaptable for many different uses
          • It is well modularized, so if any part of the program is useful to a specific application, it can be easily separated from the rest of the code

        Lessons Learned and Next Steps:


        • I learned how voice recognition algorithms generally work and how to implement them
        • I learned lots of signal processing, as I didn’t know anything about it before
        • I learned how to read a large library like emlib more efficiently
        • I learned about large program organization and good general coding practice: this was the biggest software project I have written by far

        My next steps are to apply the voice recognition to a temperature/humidity controller application, which should be easier than this LED application because its keywords are more distinct from one another than “on” and “off”.


        Materials Used:

        • GG11 STK with microphone and LEDs
        • Pop filter for STK microphone
        • Simplicity Studio IDE
        • CMSIS DSP Library

        Source Files: 

        • VRLEDs (attached) 
      • Wireless Encrypted Voice Communication with the EFM32GG11

        Siliconlabs | 08/237/2017 | 07:55 AM

        This project is created by Silicon Labs’ summer intern Kevin Black.




        Project Summary:


        The goal of this project was to perform one-way, encrypted, real-time, wireless voice communication from an embedded system to an arbitrary client like a laptop or tablet. This was accomplished using the EFM32GG11 starter kit for audio input/processing and the Wizard Gecko Wi-Fi expansion kit for wireless transmission. Audio data is sampled from the starter kit’s onboard microphone and encrypted with AES using the GG11 32-bit MCU; it is then streamed to any clients connected to the Wizard Gecko’s Wi-Fi access point, where it can be decrypted and played back only with the correct password.


        Background and Motivation:


        My project’s primary purpose was to demonstrate useful features of both the EFM32GG11 starter kit and the Wizard Gecko Wi-Fi expansion kit, as well as the two working smoothly together through the EXP header.


        The first main feature it demonstrates is the EFM32GG11’s CRYPTO module, which exists on all EFM32 Series 1 devices and provides fast hardware-accelerated encryption. The project utilizes the mbed TLS library configured to use the CRYPTO module, which speeds it up significantly. It demonstrates the high throughput of the CRYPTO module (up to ~123 Mbps max*) by encrypting uncompressed audio in real time with plenty of overhead. The encryption used is 256-bit AES in CBC mode, which is currently widely considered secure.

        (*Assuming 256-bit AES on the GG11 driven by HFRCO at 72 MHz)


        Another motivation behind the project was to demonstrate two features of the GG11 starter kit itself: the onboard microphone, and the ability of the Wi-Fi expansion kit to easily attach to and be controlled through the EXP header. No examples existed for the microphone, and very few firmware examples existed for the Wizard Gecko in externally hosted mode. My project demonstrates the quality of the built-in microphone by allowing the user to listen to the audio, and shows how to use the BGLib C library to communicate with the Wizard Gecko from an external host. Additionally, it demonstrates the throughput of a transparent/streaming endpoint on the Wizard Gecko.


        Project Description:




        Block diagram of data flow through transmitter device


        Microphone Input:


        The GG11 starter kit provides an onboard audio codec that automatically converts the PDM (pulse density modulation) data from the onboard MEMS microphones into PCM (pulse code modulation) data and outputs it on a serial interface in I2S format. The codec’s serial interface is connected to the GG11 USART3 location 0 pins, so reading in the audio data is simply a matter of initializing USART3 to I2S with the correct settings, enabling autoTx, and asserting an additional microphone enable pin.


        The audio data arrives in 32-bit words, so the sample rate is controlled by setting the I2S baud rate to 64 times the desired sample rate (2 channels, 32 bits each). Each word contains a single 20-bit sample of audio, but very few systems support 20-bit audio, so for my project I ignore the least significant 4 bits of each sample and only read 16 bits from each word. I also ignore samples from the right microphone, meaning the final audio data I obtain for processing is in 16-bit mono PCM format. The sample rate is easily configurable, but in the end I settled on 20 kHz, as that seems to be the upper limit of what the Wizard Gecko can handle while being high enough to cover most of the audible range and provide clear and understandable audio.
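        The per-word extraction described above can be sketched as follows. The assumption that the 20-bit sample occupies bits [31:12] of the word is for illustration; the actual alignment is determined by the codec's I2S format:

```python
# Extracting a 16-bit sample from one 32-bit I2S word: the word carries a
# 20-bit sample (assumed here to sit in bits [31:12]), and the 4 least
# significant bits of the sample are dropped to get 16-bit PCM.

def word_to_sample(word):
    """word: unsigned 32-bit I2S word holding one 20-bit audio sample."""
    sample20 = (word >> 12) & 0xFFFFF  # assumed position of the 20-bit sample
    sample16 = sample20 >> 4           # discard the 4 least significant bits
    if sample16 & 0x8000:              # sign-extend from 16 bits
        sample16 -= 0x10000
    return sample16
```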


        The audio input data is transferred into memory using LDMA in order to save CPU cycles. The right channel data is repeatedly written to a single byte in order to discard it, while the left channel data is alternately transferred into two 16-byte buffers; when one buffer is being filled, the other is being processed by the CPU.


        Encryption & Transmission:


        When a left channel transfer completes, it triggers an interrupt that switches the current process buffer and signals that the next packet is ready to be processed. The GG11 then encrypts the current 16-byte buffer (16 bytes is the AES block size) using the mbed TLS library configured to use the CRYPTO module. In CBC (cipher block chaining) mode, the library automatically XORs the plaintext with the previous ciphertext before encryption.
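        The chaining structure can be illustrated with a toy cipher. The real firmware uses AES-256 through mbed TLS; the XOR "cipher" below is not secure and exists only to show how each ciphertext block feeds the encryption of the next:

```python
# Toy illustration of CBC chaining: each 16-byte plaintext block is XORed
# with the previous ciphertext block (the IV for the first block) before
# being encrypted. toy_encrypt_block is a stand-in for AES, NOT secure.

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt_block(block, key):
    return xor_blocks(block, key)  # stand-in for AES-256

def cbc_encrypt(blocks, key, iv=bytes(16)):
    prev, out = iv, []
    for block in blocks:
        prev = toy_encrypt_block(xor_blocks(block, prev), key)
        out.append(prev)
    return out

key = bytes(range(16))  # toy 16-byte key, for illustration only
ciphertext = cbc_encrypt([b"A" * 16, b"B" * 16], key)
```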


        The 256-bit key used for encryption is derived from a password using SHA-256. Only clients with the same password can obtain the correct key by hashing the password.
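        With a SHA-256 implementation at hand, this derivation step is a one-liner; the password below is the firmware default quoted later in the post:

```python
import hashlib

# Key derivation: hashing the shared password with SHA-256 yields the
# 256-bit AES key, so both ends derive the same key without transmitting it.

password = b"gecko123"
key = hashlib.sha256(password).digest()
```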


        In my project, I decided to fix the initialization vector as all zeros. Normally, initialization vector reuse is considered bad practice and weak security; however, it only has the potential to leak data from the first few blocks of data streams with identical prefixes, and that poses an insignificant threat to my project due to the enormous quantity of blocks and the amount of noise in a meaningful segment of audio.


        Once a block is encrypted, it is put into a first-in-first-out queue where it is transmitted over UART through the EXP header to the Wizard Gecko. Flow control is implemented using an additional CTS (clear to send) pin connected to the Wizard Gecko; the module can drive CTS high when it cannot keep up with the transmission rate, in which case the transmission halts and the queue fills up. The transmission is driven by interrupts, which allows it to run “in the background” while the next buffer is being encrypted, and does not block the main thread when the Wizard Gecko raises CTS.


        The baud rate for UART transmission is configurable as long as the GG11 and the Wizard Gecko are both configured to the same value. Interestingly, however, the Wizard Gecko seemed to perform better (raise CTS for less time) at higher baud rates, perhaps because they increase the gap between packets, so I settled on 3 MHz.




        The Wizard Gecko Wi-Fi module, when connected to an external MCU in hosted mode, operates in a command-response format. The GG11 sends commands through the EXP header via SPI, formatted with a binary protocol called BGAPI. When the Wizard Gecko is ready to send a response (or an event) back to the MCU, it raises a notify pin (also connected to the EXP header) that tells the GG11 to read and handle the message. All of the BGAPI commands and responses are defined in a C library called BGLib.


        Upon initialization, my project configures the Wizard Gecko to be a hidden wireless access point and a TCP server. When a client connected to the access point opens a connection to the IP address and port of the TCP server, it triggers an event that is forwarded back to the GG11. The GG11 then enables the microphone and begins encrypting and transmitting audio via UART to the Wizard Gecko’s second USART interface (the one not used for BGAPI commands). That interface is configured in transparent/streaming mode, which means it forwards all received data unmodified to a single endpoint. Before the encryption starts, the GG11 configures this endpoint to be that of the connected client.


        Accomplishments, Flaws, and Next Steps:


        Ultimately, the project was successful and met its end goal of building a one-way encrypted voice communication device. Speech is clear and comprehensible at up to several inches away from the onboard microphone, and the real-time encryption is secure.


        The primary flaw in the final implementation is that the Wizard Gecko itself has trouble constantly streaming a large quantity of data without interruptions. The module will occasionally “choke” for 1-2 seconds, during which it will stop transmitting and refuse to accept data by raising CTS. Performance is inconsistent, and the device will go anywhere from 10 to more than 60 seconds in between “chokes”. This causes frustrating gaps in the audio, much like a cell phone connection that is “breaking up”; although on average, the project is still quite usable for talking to someone. I added a blue LED that turns on whenever CTS is raised, so the user can at least tell when the device is not transmitting by observing the LED light up solid blue.


        In the future, this behavior could likely be eliminated by changing the protocol that the device uses to transmit. Bluetooth would have much more bandwidth, or if the Wizard Gecko is still used, Wi-Fi Direct or a TCP connection over a third-party local area network (rather than using the Wizard Gecko as the access point). The last two options would make the demo much more difficult to use, so Bluetooth would be the ideal solution; this explains why Bluetooth has become so popular for real-life products with similar functionality.


        Using this Project:


        Follow the instructions in the readme of the encrypted voice transmitter folder to configure the Wizard Gecko and GG11 to act as the transmitter portion of the project.


        To use the receiver, download the executable Java applet below and run the .exe file inside (no JVM installation required). Unless the IP address and port were changed in the firmware, leave those fields blank. Enter the password defined in the firmware (default “gecko123”).


        After booting up the transmitter, wait for the LCD output to reach “waiting for client”, and then connect to the hidden access point that the device has created (default SSID is “Encrypted Voice Demo”).




        Once the LCD displays “client joined”, click “Connect” on the Java applet’s dialog. When the status message below the connect button displays “Connected” in green, audio from the microphone should begin playing back on the PC.




        Source Files: 


        [zip file containing encrypted_voice_transmitter (firmware source code)]
        [zip file containing executable Java applet]

        [zip file containing encrypted_voice_receiver (Java source code)]



      • Project Completed and Working a Treat (TB Sense and Pi)

        neal_tommy | 08/232/2017 | 12:20 PM



        Whilst I've received much assistance from this community, I thought it time I gave back with feedback on my working project (thanks to all who helped along the way). 


        Essentially, I have a TB Sense connected via BLE to an RPi3. I made some changes to the code on the TB Sense to make it continuously advertise, and a Python script on the Pi collects data once every 10 minutes. 


        This data is fed to Thingspeak (I'm considering alternative options here) and graphed for viewing. I'm still in the phase of looking at some daily / weekly averages and seeing what changes they would suggest to general lifestyle. I'm collecting data from 6 environmental sensors (sound, temp., humidity, pressure, TVOC and eCO2). 


        Board holder


        Overall 3D printed enclosure (enough to let some air in for measurement)



        I've also got a cool 3D printed enclosure made which houses the TB Sense in a nice looking (and acceptable by the wife) designed box whilst on the table top. The Pi is sitting next to my router collecting the data. 


        So far I've collected a couple of days of data, as shown below. It all seems to be working and is ready for a power cut and a suitable reboot / reconnect if that happens (common here in South Africa). 




        Happy to answer any questions on this, and to share details. It is by no means a complex project, however it did keep me busy for a few weekends. There are still some areas I'd like to improve and then work from there (probably the efficiency of the Python code). 




        from __future__ import division
        import sys
        from bluepy.btle import *
        import struct
        import thread
        from time import sleep
        import urllib2
        # Base URL of Thingspeak
        baseURL = ''
        def vReadSENSE():
            scanner = Scanner(0)
            devices = scanner.scan(2)
            for dev in devices:
                print "Device %s (%s), RSSI=%d dB" % (dev.addr, dev.addrType, dev.rssi)
                for (adtype, desc, value) in dev.getScanData():
                    print "  %s = %s" % (desc, value)
            num_ble = len(devices)
            print num_ble
            if num_ble == 0:
                return None
            ble_service = []
            char_sensor = 0
            TVOC_char = None
            eCO2_char = None
            Pressure_char = None
            Sound_char = None
            temperature_char = None
            humidity_char = None
            #bat_char = None
            count = 15
            for i in range(num_ble):
                # connect to each discovered device
                ble_service.append(Peripheral())
                ble_service[char_sensor].connect(devices[i].addr, devices[i].addrType)
                char_sensor = char_sensor + 1
                print "Connected %s device with addr %s " % (char_sensor, devices[i].addr)
            for i in range(char_sensor):
                characteristics = ble_service[i].getCharacteristics()
                for k in characteristics:
                    print k
                    if k.uuid == "efd658ae-c401-ef33-76e7-91b00019103b":
                        print "eCO2 Level"
                        eCO2_char = k
                    if k.uuid == "efd658ae-c402-ef33-76e7-91b00019103b":
                        print "TVOC Level"
                        TVOC_char = k
                    if k.uuid == "00002a6d-0000-1000-8000-00805f9b34fb":
                        print "Pressure Level"
                        Pressure_char = k
                    if k.uuid == "c8546913-bf02-45eb-8dde-9f8754f4a32e":
                        print "Sound Level"
                        Sound_char = k
                    if k.uuid == "00002a6e-0000-1000-8000-00805f9b34fb":
                        print "Temperature"
                        temperature_char = k
                    if k.uuid == "00002a6f-0000-1000-8000-00805f9b34fb":
                        print "Humidity"
                        humidity_char = k
                    #if k.uuid == "2a19":
                        #print "Battery Level"
                        #bat_char = k
            while True:
                # TVOC, in units of ppb (little-endian 16-bit value)
                TVOC_data =
                TVOC_data_value = (ord(TVOC_data[1]) << 8) + ord(TVOC_data[0])
                # eCO2, in units of ppm
                eCO2_data =
                eCO2_data_value = (ord(eCO2_data[1]) << 8) + ord(eCO2_data[0])
                # pressure is in units of 0.1Pa (little-endian 32-bit value); divide by 10 for Pa
                Pressure_data =
                Pressure_data_value = ((ord(Pressure_data[3]) << 24) + (ord(Pressure_data[2]) << 16) +
                                       (ord(Pressure_data[1]) << 8) + ord(Pressure_data[0])) / 10
                # sound level is in units of 0.01dB; divide by 100 for dB
                Sound_data =
                Sound_data_value = ((ord(Sound_data[1]) << 8) + ord(Sound_data[0])) / 100
                #bat_data =
                #bat_data_value = ord(bat_data[0])
                # temperature is in units of 0.01 degrees Celsius
                temperature_data =
                temperature_data_value = (ord(temperature_data[1]) << 8) + ord(temperature_data[0])
                float_temperature_data_value = temperature_data_value / 100.0
                humidity_data =
                humidity_data_value = (ord(humidity_data[1]) << 8) + ord(humidity_data[0])
                print "TVOC: ", TVOC_data_value
                print "eCO2: ", eCO2_data_value
                print "Pressure: ", Pressure_data_value
                print "Sound: ", Sound_data_value
                print "Temperature: ", float_temperature_data_value
                print "Humidity: ", humidity_data_value
                # upload to the cloud once every 15 samples
                if count > 14:
                    f = urllib.urlopen(baseURL + PRIVATE_KEY + "&field1=%s&field2=%s&field3=%s&field4=%s&field5=%s&field6=%s" % (TVOC_data_value, eCO2_data_value, Pressure_data_value, Sound_data_value, float_temperature_data_value, humidity_data_value))
                    count = 0
                count = count + 1
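        The upload in the loop above follows the ThingSpeak-style channel-update pattern: the write key plus numbered field parameters appended to a base URL (baseURL and PRIVATE_KEY are defined elsewhere in the script). A minimal sketch of that URL construction; the function name and example values are hypothetical:

        ```python
        def build_update_url(base_url, write_key, readings):
            # map the ordered readings onto the field1..fieldN query parameters
            fields = "&".join("field%d=%s" % (n + 1, v) for n, v in enumerate(readings))
            return "%s%s&%s" % (base_url, write_key, fields)

        # hypothetical example values
        url = build_update_url("", "KEY", [12, 400])
        # url == ""
        ```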
      • Setting BLE characteristic values – a Thunderboard Sense practical approach

        m_dobrea | 07/10/2017 | 02:45 AM

             A few months ago, I received a Thunderboard Sense kit from Silicon Labs. Surveying the market for applications capable of working with this device, I found many applications that run on operating systems such as iOS, Android, or Linux, but I did not find a professional Windows application able to work with it. As a result, I decided to develop one - the BLE SensorTags application.

             Right now, the BLE (Bluetooth Low Energy) SensorTag application - BlessTags (BLE SensorTags) - works with two different sensor tags from Silicon Labs: the Thunderboard React and the Thunderboard Sense.

             The BlessTags (BLE SensorTags) application has the following functionalities:

        1. To configure, communicate with, use and display (in graphic and numerical form) the information from all the sensors included on the SensorTags presented above. The supported sensors and characteristics are:
          • For ThunderBoard React: accelerometer, orientation, temperature, humidity, light (ambient & UV), keys and output LEDs. 
          • For ThunderBoard Sense: accelerometer, orientation, barometer, temperature, humidity, air quality (CO2 & TVOC), light (ambient & UV), sound level, keys and output LEDs (2 x low power LEDs & 4 x power LEDs).
        2. In developer mode, the software provides the user with detailed messages from the communication process with a specific SensorTag - these messages let the user identify communication/configuration setbacks and other problems.
        3. The software also makes it possible to interrogate different types of unknown BLE devices and obtain their complete GATT attribute table. This link presents the entire procedure I used to obtain the complete GATT table for the Thunderboard Sense and, at the end, a PDF file containing all the results.
        4. To obtain and tune the optimal Kalman filter parameters;
        5. ... and the most exciting feature: the gadgets. The gadgets are practical applications that use one or more sensors from the SensorTag to achieve a concrete, fully functional and useful result. For instance, using the two buttons on the Thunderboard React or Thunderboard Sense, these SensorTags turn into wireless presenters for PowerPoint.

             For the development of the application I made intensive use of: (a) the documentation provided by Microsoft and (b) the source code (BluetoothLowEnergy.cpp) developed by Donald Ness and publicly offered at this address:

             The following code sequence, developed entirely by me, completes the program provided by Mr. Donald Ness. With this addition, we can not only read data from the descriptors of a characteristic but also write characteristic values - and in this way influence the state of the SensorTag.

             To illustrate writing to a characteristic, I will customize the code for the Profile User Interface service (UUID: FCB89C40-C600-59F3-7DC3-5ECE444A401B). All the characteristics of this service are presented in the figure below. Here we will focus only on the 0xC603 characteristic (UUID: FCB89C40-C603-59F3-7DC3-5ECE444A401B), which allows us to control the intensity and the color of the high-brightness RGB LEDs placed on the SensorTag.
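             As described below in step G, the value written to the 0xC603 characteristic is four bytes: which high-brightness LEDs are on, followed by the red, green and blue levels. A small Python helper illustrating this payload layout (the helper itself is hypothetical, not part of BlessTags):

        ```python
        def rgb_payload(led_mask, red, green, blue):
            # byte 0: which high-brightness LEDs are on; bytes 1-3: R, G, B levels [0, 255]
            return bytearray([led_mask & 0xFF, red & 0xFF, green & 0xFF, blue & 0xFF])

        # example: all four LEDs on, full red
        payload = rgb_payload(0x0F, 255, 0, 0)
        # payload == bytearray(b'\x0f\xff\x00\x00')
        ```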




             In order to work correctly, the code below must be inserted after the code sequence from Step 3, presented in the ConnectBLEDevice() function in the BluetoothLowEnergy.cpp file. At this point, and for the 0xC600 service, pCharBuffer is a buffer with 4 elements. Each element stores a data structure related to one of the 4 characteristics presented in the table above (UUID1: FCB89C40-C601-59F3-7DC3-5ECE444A401B, UUID2: FCB89C40-C602-59F3-7DC3-5ECE444A401B, UUID3: FCB89C40-C603-59F3-7DC3-5ECE444A401B and UUID4: FCB89C40-C604-59F3-7DC3-5ECE444A401B).


        if (pCharBuffer == NULL)	//step A
        	{				//error: no characteristics buffer
        	free (pServiceBuffer);	pServiceBuffer = NULL; 		//free the service buffer
        	CloseHandle (hLEDevice); 				//close the BLE device handle
        	return -4;
        	}
        if (numChars > 4)		//step B
        	{				//error: we have more than 4 characteristics
        	free (pCharBuffer);	pCharBuffer = NULL; 		//free the characteristics buffer
        	free (pServiceBuffer); 	pServiceBuffer = NULL;		//free the service buffer
        	CloseHandle (hLEDevice); 				//close the BLE device handle
        	return -5;
        	}

        //Get the specific characteristic from which the high brightness RGB LEDs can be controlled.
        //Here only the first 48 bits are checked - FCB89C40-C603-59F3-7DC3-5ECE444A401B - in
        //   order to identify this specific characteristic.
        currGattCharTB = NULL;		//step C
        for (i = 0; i < numChars; i++)
        	if (pCharBuffer[i].CharacteristicUuid.IsShortUuid == 0)
        		if (pCharBuffer[i].CharacteristicUuid.Value.LongUuid.Data1 == 0xFCB89C40 &&
        		     pCharBuffer[i].CharacteristicUuid.Value.LongUuid.Data2 == 0xC603)
        			{
        			currGattCharTB = &pCharBuffer[i];
        			break;
        			}

        if (currGattCharTB == NULL)	//step D
        	{	//if no such characteristic can be found: free all the data structures and return
        	free (pCharBuffer);	pCharBuffer = NULL;
        	free (pServiceBuffer);	pServiceBuffer = NULL;
        	CloseHandle (hLEDevice);
        	return -6;
        	}

        typedef union			//step E
        	{
        	BTH_LE_GATT_CHARACTERISTIC_VALUE newValue;
        	struct
        		{
        		ULONG DataSize;
        		UCHAR Data[4];
        		} myValue;
        	} rezolvare;

        //step F
        rezolvare newValue_base;
        RtlZeroMemory (&newValue_base.newValue, sizeof(rezolvare));

        //step G - fill the structure with the required data
        newValue_base.newValue.DataSize = sizeof (UCHAR) * 4;
        newValue_base.myValue.Data[0] = comLED;		//which LEDs will be on
        newValue_base.myValue.Data[1] = valR;		//red level [0, 255]
        newValue_base.myValue.Data[2] = valG;		//green level [0, 255]
        newValue_base.myValue.Data[3] = valB;		//blue level [0, 255]

        //step H
        hr = BluetoothGATTSetCharacteristicValue (hLEDevice, currGattCharTB,
        		&newValue_base.newValue, 0, BLUETOOTH_GATT_FLAG_NONE);

        //step I
        if (S_OK != hr)
        	{
        	if (dbgMode)
        		{
        		InsertTextBoxLine (panelHandleDbg, PANEL_Dbg_TEXTBOX, -1,
        			"Error at: BluetoothGATTSetCharacteristicValue - impossible to set the I/O lines (LEDs) !");
        		HRESULTtoErrTxt (hr, buffErr);
        		sprintf (buffDeAfisat, " - %s", buffErr);
        		InsertTextBoxLine (panelHandleDbg, PANEL_Dbg_TEXTBOX, -1, buffDeAfisat);
        		}
        	free (pCharBuffer);	pCharBuffer = NULL;
        	free (pServiceBuffer);	pServiceBuffer = NULL;
        	CloseHandle (hLEDevice);
        	return -4;
        	}

        //step J - all is OK now; release all resources previously used
        free (pCharBuffer);	pCharBuffer = NULL;
        free (pServiceBuffer);	pServiceBuffer = NULL;
        CloseHandle (hLEDevice);

             The first two “if” statements (steps A and B) check the correctness of the service data against our previous knowledge of this service. If everything is correct, we go on to identify the specific characteristic capable of influencing the state of the LEDs - step C; this is done in the “for” loop.

             In step D, the software checks whether this specific characteristic (0xC603) was found. If this test passes, we can dispatch data to the SensorTag via the BluetoothGATTSetCharacteristicValue function - step H. But to do this, a new data type is defined in step E, a new variable of this type is declared and zero-initialized in step F, and, finally, the variable is filled with the required data in step G.

             Analyzing steps E, F, G and H, one could argue that a simpler approach exists. True - but such an approach works only with a compiler that supports C++ (like Visual Studio). The BlessTags application has been fully developed in LabWindows/CVI, whose compiler supports only ANSI C, and for this compiler this was the only functional solution found so far.

             Going further, any errors that occur in the SensorTag communication process are handled in step I. At the end, all the data structures used are released - step J.

             For the correct operation of the previous code, make sure the device is powered from the USB port; otherwise the RGB LEDs will be disabled by the Thunderboard Sense to conserve the coin cell battery.

             And now a video showing the main functions of BlessTags:



             The BlessTags (BLE SensorTags) application can be downloaded from the Windows Store. For more information, demos, practical applications, examples, etc., please visit the following blog:


      • Sensor node network with Thunderboard Sense and MicroPython

        ThomasFK | 04/02/2017 | 04:04 PM

        I am a member of NUTS, the NTNU Student Test Satellite project. Its main goal is to create a CubeSat, a tiny satellite that piggybacks on the launch of a larger satellite.


        Another goal of NUTS is trying to promote aerospace/STEM topics among other students. Last fall we participated in "Researchers Night" at NTNU, which is used to promote STEM education among high school students. A lot of institutes and organizations show up at Researchers Night with really flashy displays, such as flamethrowers or slightly violent chemical reactions.


        At our disposal we had a vacuum chamber, a DC motor, space-grade and regular solar panels, and several Thunderboard Senses. Showing off how marshmallows behave in vacuum, and how the DC motor behaves when connected to the different solar panels, might be interesting enough in and of itself. However, we decided to add some Thunderboards to spice it up a bit.

        Using a budding implementation of MicroPython for the Thunderboard Sense (which will be released soon), we brainstormed and programmed a small sensor network for our stand, simulating the logging of telemetry data from our satellite. The Thunderboards were used as follows:

        • Glued to the DC motor, transmitting gyroscope data from the IMU.
        • Inside the vacuum chamber transmitting pressure.
        • Transmitting the light-level with the light-sensor.
        • Sampling the sound-level with the microphone.
        • A master that could tune into transmissions from either of the other Thunderboards, logging the output to screen and also showing how much the slave deviated from "normal" status by using the  RGB LEDs.
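        The master's "deviation from normal" indication can be implemented by mapping how far a reading strays from its baseline onto the RGB LED color, blending from green to red. A hypothetical sketch of such a mapping (the baseline and scaling values are illustrative, not the ones we used):

        ```python
        def deviation_color(value, baseline, max_dev):
            # scale |value - baseline| into [0, 1], then blend green -> red
            frac = min(abs(value - baseline) / float(max_dev), 1.0)
            red = int(255 * frac)
            green = int(255 * (1.0 - frac))
            return (red, green, 0)

        # examples
        # deviation_color(10.0, 10.0, 5.0) -> (0, 255, 0)   no deviation: green
        # deviation_color(15.0, 10.0, 5.0) -> (255, 0, 0)   at/above max deviation: red
        ```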

        I have embedded two videos. The first gives a short overview of the entire project, while the second shows the setup in action, logging data from the vacuum chamber.


        Our stand was a great success! We got several people standing around for up to half an hour discussing the intricacies of satellite development, which also gave us an opportunity to talk more about the satellite radio link.


        Finally, I want to brag a bit about how neat this code turned out with MicroPython, and how MicroPython really was ideal for bringing up a project like this in such a short time. The code for reading data from the IMU and transmitting it ended up under 40 lines.

        from tbsense import *
        from radio import *
        from math import sqrt
        rdio = Rail()
        i = IMU(gyro_scale = IMU.GYRO_SCALE_2000DPS, gyro_bw = IMU.GYRO_BW_12100HZ)
        def float_to_pkt(flt):
            integer = int(flt)
            decimal = round(flt, 3) - integer
            decimal = int(decimal*1000)
            ret = bytearray(6)
            ret[0] = (integer >> 24) & 0xFF
            ret[1] = (integer >> 16) & 0xFF
            ret[2] = (integer >> 8)  & 0xFF
            ret[3] = integer & 0xFF
            ret[4] = (decimal >> 8) & 0xFF
            ret[5] = decimal & 0xFF
            return ret
        def loop():
            meas = i.gyro_measurement()
            meas = sqrt((meas[0]**2)+(meas[1]**2)+(meas[2]**2))
            pkt = float_to_pkt(meas)
            rdio.tx(pkt)    # transmit the packet (exact radio API name may differ)
        def init():
            pass            # radio and IMU are already configured above
        init()
        while True:
            loop()
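        On the receiving side, the six-byte packet can be unpacked back into a float. A minimal sketch of the inverse of float_to_pkt above (note this scheme assumes non-negative values):

        ```python
        def pkt_to_float(pkt):
            # rebuild the 32-bit integer part (big-endian byte order)
            integer = (pkt[0] << 24) | (pkt[1] << 16) | (pkt[2] << 8) | pkt[3]
            # rebuild the decimal part, which was stored in thousandths
            decimal = (pkt[4] << 8) | pkt[5]
            return integer + decimal / 1000.0

        # example: a packet encoding 12.345 (integer part 12, decimal part 345)
        value = pkt_to_float(bytearray([0, 0, 0, 12, 1, 89]))
        # value == 12.345
        ```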



      • Thunderboard Sense with Raspberry Pi and Python

        DDB | 03/08/2017 | 08:16 PM

        The goal of this project was to make a very simple python script that runs on a Raspberry Pi and collects data from one or more Thunderboard Sense devices, using the same Google Firebase backend and web application that the official app uses.


        To get started, go to the project repository and clone it. Follow the instructions to set up a Firebase account and project, replacing the database name in src/main.js as instructed. For this example, I have not yet gotten authentication working, so for now change database.rules.json to allow anyone to write (basically replacing "auth.uid === 'YsGcsiI8hkwjImSrr25uZdqlNw3Qkgi8vUWx9MU6'" with "true"):


        {
          "rules": {
            ".read": false,
            ".write": false,
            "thunderboard": {
              ".read": true,
              ".write": true
            },
            "sessions": {
              ".read": true,
              ".write": true,
              ".indexOn": ["startTime", "contactInfo/deviceName"],
              "$session": { //2592000000 is 30 days
                ".read": "data.child('startTime').val() > (now - 2592000000)",
                ".write": true
              }
            }
          }
        }


        Now, deploy the app to Firebase using the firebase tools



        sudo npm install -g firebase-tools
        firebase login
        firebase deploy --project <your firebase project id>


        In the Firebase console you should now see your deployed rules, as well as the application url. Following the URL should bring you to the default page:

        Screenshot 2017-03-09 01.35.43.png

        Now the next step is to log onto the Raspberry Pi, and install the necessary tools. The script uses bluepy to communicate with the sensor, and python-firebase to push data to the cloud. I did have some trouble installing python-firebase because of the specific version of the requests library, but eventually got it installed.


        Remember to put in your own database URL in the script:

        from firebase import firebase
        import uuid
        import time
        class Thundercloud:
           def __init__(self):
              self.addr     = 'https://-- Your database URL'
              self.firebase = firebase.FirebaseApplication(self.addr, None)


        The script continuously looks for advertising thunderboards, and automatically connects to the board if a new one is discovered.


        $ sudo python3 
        No Thunderboard Sense devices found!
        No Thunderboard Sense devices found!
        Starting thread <Thread(Thread-1, initial)> for 37020
        Thunder Sense #37020
        Ambient Light:	0.25 Lux
        Sound Level:	37.83
        tVOC:		0
        Humidity:	28.25 %RH
        Temperature:	29.86 C
        Pressure:	951.586
        UV Index:	0
        eCO2:		0

        When a Thunderboard Sense has been discovered and connected to, the script will print out the read data periodically. Take note of the number in "Thunder Sense #37020". We will need to give the number to the web application.


        The script generates a session ID for each board connected, and then continuously generates json strings for the data read from the Thunderboard Sense. The json string is then inserted into the appropriate location in the database. Firebase has a useful live database view that shows us that our data is indeed being pushed into the cloud:

        Screenshot 2017-03-09 01.20.16.png

        Finally, if you go to the app URL and append your Thunderboard Sense id, you should see the data being displayed: https://<project id>

        Screenshot 2017-03-09 01.19.34.png

         Screenshot 2017-03-09 01.19.47.png
 simply discovers and sets up handles to the different characteristics. It also contains functions to read these characteristics:



        from bluepy.btle import *
        import struct
        from time import sleep
        class Thunderboard:
           def __init__(self, dev):
              self.dev      = dev
              self.char = dict()
     = ''
              self.session = ''
              self.coinCell = False
              # Get device name and characteristics
              scanData = dev.getScanData()
              for (adtype, desc, value) in scanData:
                 if (desc == 'Complete Local Name'):
           = value
              ble_service = Peripheral()
              ble_service.connect(dev.addr, dev.addrType)
              characteristics = ble_service.getCharacteristics()
              for k in characteristics:
                 if k.uuid == '2a6e':
                    self.char['temperature'] = k
                 elif k.uuid == '2a6f':
                    self.char['humidity'] = k
                 elif k.uuid == '2a76':
                    self.char['uvIndex'] = k
                 elif k.uuid == '2a6d':
                    self.char['pressure'] = k
                 elif k.uuid == 'c8546913-bfd9-45eb-8dde-9f8754f4a32e':
                    self.char['ambientLight'] = k
                 elif k.uuid == 'c8546913-bf02-45eb-8dde-9f8754f4a32e':
                    self.char['sound'] = k
                 elif k.uuid == 'efd658ae-c401-ef33-76e7-91b00019103b':
                    self.char['co2'] = k
                 elif k.uuid == 'efd658ae-c402-ef33-76e7-91b00019103b':
                    self.char['voc'] = k
                 elif k.uuid == 'ec61a454-ed01-a5e8-b8f9-de9ec026ec51':
                    self.char['power_source_type'] = k
           def readTemperature(self):
              value = self.char['temperature'].read()
              value = struct.unpack('<H', value)
              value = value[0] / 100
              return value
           def readHumidity(self):
              value = self.char['humidity'].read()
              value = struct.unpack('<H', value)
              value = value[0] / 100
              return value
           def readAmbientLight(self):
              value = self.char['ambientLight'].read()
              value = struct.unpack('<L', value)
              value = value[0] / 100
              return value
           def readUvIndex(self):
              value = self.char['uvIndex'].read()
              value = ord(value)
              return value
           def readCo2(self):
              value = self.char['co2'].read()
              value = struct.unpack('<h', value)
              value = value[0]
              return value
           def readVoc(self):
              value = self.char['voc'].read()
              value = struct.unpack('<h', value)
              value = value[0]
              return value
           def readSound(self):
              value = self.char['sound'].read()
              value = struct.unpack('<h', value)
              value = value[0] / 100
              return value
           def readPressure(self):
              value = self.char['pressure'].read()
              value = struct.unpack('<L', value)
              value = value[0] / 1000
              return value


 handles the connection to the Firebase database. getSession() generates a new session ID and is called once for every new Thunderboard Sense connection. putEnvironmentData() inserts the data and updates the timestamps:



        from firebase import firebase
        import uuid
        import time
        class Thundercloud:
           def __init__(self):
              self.addr     = 'https://-- Firebase Database Name --'
              self.firebase = firebase.FirebaseApplication(self.addr, None)
           def getSession(self, deviceId):
              timestamp = int(time.time() * 1000)
              guid = str(uuid.uuid1())
              url = 'thunderboard/{}/sessions'.format(deviceId)
              self.firebase.put(url, timestamp, guid)
              d = {
                    "startTime" : timestamp,
                    "endTime" : timestamp,
                    "shortURL": '',
                    "contactInfo" : {
                         "fullName":"First and last name",
                         "deviceName": 'Thunderboard #{}'.format(deviceId)
                    },
                    "temperatureUnits" : 0,
                    "measurementUnits" : 0
              }
              url = 'sessions'
              self.firebase.put(url, guid, d)
              return guid
           def putEnvironmentData(self, guid, data):
              timestamp = int(time.time() * 1000)
              url = 'sessions/{}/environment/data'.format(guid)
              self.firebase.put(url, timestamp, data)
              url = 'sessions/{}'.format(guid)
              self.firebase.put(url, 'endTime', timestamp)


        Finally, the main script continuously searches for new Thunderboard Sense devices, and spawns a new thread for each one it successfully connects to:


        from bluepy.btle import *
        import struct
        from time import sleep
        from tbsense import Thunderboard
        from thundercloud import Thundercloud
        import threading
        def getThunderboards():
            scanner = Scanner(0)
            devices = scanner.scan(3)
            tbsense = dict()
            for dev in devices:
                scanData = dev.getScanData()
                for (adtype, desc, value) in scanData:
                    if desc == 'Complete Local Name':
                        if 'Thunder Sense #' in value:
                            deviceId = int(value.split('#')[-1])
                            tbsense[deviceId] = Thunderboard(dev)
            return tbsense
        def sensorLoop(fb, tb, devId):
            session = fb.getSession(devId)
            tb.session = session
            value = tb.char['power_source_type'].read()
            if ord(value) == 0x04:
                tb.coinCell = True
            while True:
                text = ''
                text += '\n' + + '\n'
                data = dict()
                for key in tb.char.keys():
                    if key == 'temperature':
                        data['temperature'] = tb.readTemperature()
                        text += 'Temperature:\t{} C\n'.format(data['temperature'])
                    elif key == 'humidity':
                        data['humidity'] = tb.readHumidity()
                        text += 'Humidity:\t{} %RH\n'.format(data['humidity'])
                    elif key == 'ambientLight':
                        data['ambientLight'] = tb.readAmbientLight()
                        text += 'Ambient Light:\t{} Lux\n'.format(data['ambientLight'])
                    elif key == 'uvIndex':
                        data['uvIndex'] = tb.readUvIndex()
                        text += 'UV Index:\t{}\n'.format(data['uvIndex'])
                    elif key == 'co2' and tb.coinCell == False:
                        data['co2'] = tb.readCo2()
                        text += 'eCO2:\t\t{}\n'.format(data['co2'])
                    elif key == 'voc' and tb.coinCell == False:
                        data['voc'] = tb.readVoc()
                        text += 'tVOC:\t\t{}\n'.format(data['voc'])
                    elif key == 'sound':
                        data['sound'] = tb.readSound()
                        text += 'Sound Level:\t{}\n'.format(data['sound'])
                    elif key == 'pressure':
                        data['pressure'] = tb.readPressure()
                        text += 'Pressure:\t{}\n'.format(data['pressure'])
                print(text)
                fb.putEnvironmentData(session, data)
        def dataLoop(fb, thunderboards):
            threads = []
            for devId, tb in thunderboards.items():
                t = threading.Thread(target=sensorLoop, args=(fb, tb, devId))
                print('Starting thread {} for {}'.format(t, devId))
                t.start()
                threads.append(t)
            for t in threads:
                t.join()
        if __name__ == '__main__':
            fb = Thundercloud()
            while True:
                thunderboards = getThunderboards()
                if len(thunderboards) == 0:
                    print("No Thunderboard Sense devices found!")
                else:
                    dataLoop(fb, thunderboards)


      • EFM8-Powered Plug And Play Solar Concept

        nikodean1 | 02/02/2017 | 01:46 PM

        Hey everyone,


        I used the EFM8UB1 starter kit to complete construction of my plug-and-play solar system concept. The goal of the concept is to make solar system installation as easy (and hopefully as cheap) as possible by reducing the amount of on-site assembly required.


        It has a built-in inverter, charge controller, 12 Volt, 12 Ah UPS battery, 12 Volt solar panel input, a 5 VDC power rail for a later centralized USB power project, and 120 Volt AC outlets. You just connect a solar panel, flip a switch, and you have a 120 VAC solar power source.


        I used the EFM8UB1 to construct an automatic transfer switch for it, so it can automatically switch appliances (or a house, if it is scaled up) to the grid if there is a shortage of solar power. This is convenient for those who want to minimize battery costs without running the risk of a blackout.
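        The transfer-switch decision itself comes down to comparing the battery voltage against two thresholds with some hysteresis, so the relay doesn't chatter around the cutoff. A sketch of that logic in Python (the 11.5 V / 12.6 V thresholds are hypothetical values typical for a 12 V lead-acid battery, not the ones used in this build):

        ```python
        LOW_CUTOFF = 11.5   # assumed: transfer to grid below this battery voltage
        RESTORE    = 12.6   # assumed: return to solar once recharged to this

        def use_grid(battery_voltage, currently_on_grid):
            # hysteresis: two thresholds prevent rapid toggling near the cutoff
            if currently_on_grid:
                return battery_voltage < RESTORE   # stay on grid until the battery recovers
            return battery_voltage < LOW_CUTOFF    # leave solar only when the battery sags

        # examples
        # use_grid(11.0, False) -> True   battery low: transfer to grid
        # use_grid(12.0, True)  -> True   recovering, but below 12.6: stay on grid
        # use_grid(12.8, True)  -> False  recharged: back to solar
        ```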


        Here is a video of me discussing and demonstrating it. 


        The EFM8 made it very easy by providing the option of a low-energy USB port (which the white USB cable is connected to) and a built-in CR2032 battery slot.


        This is important because the system is battery-powered, and I couldn't afford to have it deplete the battery during a cloudy week or, in general, waste the power generated by the 20-watt solar panel that recharges it.


        In addition, configuring the analog-to-digital converter was very quick and easy.