The Projects board is for sharing projects based on Silicon Labs’ components with other community members.

Projects

      • Building a Magnetic Alarm System for the Giant Gecko Series 1 STK

        Siliconlabs | 08/242/2017 | 09:17 AM

        This project was created by Silicon Labs’ summer intern Rikin Tanna.

         

        Project:

         

        A magnetic alarm system uses a Hall Effect magnetic sensor and a supporting magnet attached to the door frame to determine whether the door is open or closed. This project includes a notification service that sends a message to your mobile phone when the alarm is triggered, for added security. Because the system has no moving parts, the magnetic alarm system is very reliable.

         

        [Image: spectrum-1.jpg]

         

        Materials Used:

         

        EFM32 Giant Gecko GG11 Starter Kit (SLSTK3701A)

        WGM110 Wi-Fi Expansion Kit (SLEXP4320A)

        Hall Effect Magnetic Sensor (Si7210)

         

        Background:

         

        My goal with this project was to demonstrate an important use case for the Si7210 as a component of an alarm system. Given that Silicon Labs is an IoT company, I figured it would be beneficial to use IFTTT, an IoT workflow service that connects two or more internet-enabled devices, to demonstrate our GG11 STK being used with an established IoT enabler. With this service included, I could also showcase the WGM110 Wi-Fi Expansion Kit working with the GG11 STK. The GG11 STK was chosen for its onboard Si7210 sensor, its compatibility with the WGM110 Wi-Fi Expansion Kit, and its recent launch (to expand its demo portfolio).

         

        Operation:

         

        The demo is split into three phases (a minimal state-machine sketch of this flow follows the list):

        1. Wi-Fi Setup – The first phase of the demo configures the WGM110. Here, the WGM110 boots up, sets the operating mode to client, connects to the user’s access point, and enters a low power state (Power Mode 4) to wait for further commands. As it configures, status messages are displayed on the LCD as the GG11 receives responses from the WGM110.
        2. Calibration – The second phase of the demo calibrates the Si7210 digital output pin thresholds based on the user’s “closed door” position (magnetic field reading).
        3. Operation – The third phase is operation. Closing and opening the door (crossing the magnetic field threshold) will cause the Si7210 digital output pin to toggle, which will result in LED0 flashing. Additionally, the output pin will toggle if a second magnet is brought in to try and tamper with the alarm system. When the alarm is triggered, the GG11 will command the WGM110 to contact the IFTTT servers to send a message to the user’s mobile phone.
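
        For reference, here is a minimal sketch of how such a three-phase flow could be structured in C; the enum, the stub functions, and their behavior are illustrative stand-ins, not the actual project code.

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical stubs standing in for the real drivers in the project. */
        static void wifi_configure_and_connect(void) { puts("Wi-Fi: connected"); }
        static void si7210_calibrate(void)           { puts("Si7210: threshold stored"); }
        static bool alarm_triggered(void)            { return false; /* poll output pin */ }
        static void ifttt_send_alert(void)           { puts("IFTTT: alert sent"); }

        typedef enum { PHASE_WIFI_SETUP, PHASE_CALIBRATION, PHASE_OPERATION } demo_phase_t;

        int main(void)
        {
          demo_phase_t phase = PHASE_WIFI_SETUP;

          while (true) {
            switch (phase) {
            case PHASE_WIFI_SETUP:                  /* phase 1: configure the WGM110       */
              wifi_configure_and_connect();
              phase = PHASE_CALIBRATION;
              break;
            case PHASE_CALIBRATION:                 /* phase 2: learn the "closed" reading */
              si7210_calibrate();
              phase = PHASE_OPERATION;
              break;
            case PHASE_OPERATION:                   /* phase 3: watch for the alarm        */
              if (alarm_triggered()) {
                ifttt_send_alert();
              }
              break;
            }
          }
        }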

        Explanation:

         

        GG11 STK:

        The GG11 STK was programmed using Simplicity Studio. Simplicity Studio provides an ample array of examples and demos to help beginners get started with Silicon Labs MCUs (this was my first experience with SiLabs products).

         

        Below is a representation of data flow for the project. 

         

        [Image: spectrum-2.png]

         

        WGM110:

         

        The WGM110 is a versatile part: it can act as a Wi-Fi client, a local access point, or a Wi-Fi Direct (Wi-Fi P2P) interface. In this system, the WGM110 acts as a client, and it is a slave to the host GG11 MCU. Communication is based on the BGAPI command-response protocol over the SPI/UART host interface. Debugging this proved to be difficult, as there are two individual MCUs involved, but the Saleae Logic analyzer allowed me to view the communication between the devices and fix any issues I encountered. Below is a capture of a typical boot correspondence.
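
        For readers new to hosted Wi-Fi modules, the sketch below shows the general shape of a framed command/response round trip in the style of BGAPI. The header layout and the link_write/link_read helpers are schematic placeholders, not the real BGLib API or wire format.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Hypothetical transport stubs; the project uses the serial link to the WGM110. */
        static void link_write(const uint8_t *buf, size_t len) { (void)buf; (void)len; }
        static void link_read(uint8_t *buf, size_t len)        { memset(buf, 0, len); }

        /* Send a framed command, then block until the module returns a framed
         * response. The 4-byte header here (type, length, class, command) is
         * schematic, not the exact BGAPI wire format. */
        static void send_command(uint8_t cls, uint8_t cmd,
                                 const uint8_t *payload, uint8_t len)
        {
          uint8_t header[4] = { 0x20 /* command */, len, cls, cmd };
          link_write(header, sizeof header);
          if (len) link_write(payload, len);
        }

        static uint8_t wait_response(uint8_t *payload, uint8_t max)
        {
          uint8_t header[4];
          link_read(header, sizeof header);            /* type, length, class, id */
          uint8_t len = header[1] <= max ? header[1] : max;
          link_read(payload, len);
          return len;
        }

        int main(void)
        {
          uint8_t resp[64];
          send_command(0x01, 0x02, NULL, 0);           /* e.g. a "hello"-style ping */
          uint8_t n = wait_response(resp, sizeof resp);
          printf("response payload: %u bytes\n", (unsigned)n);
          return 0;
        }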

         

        [Image: spectrum-3.png]

         

        [Image: spectrum-4.png]

         

        When the alarm is triggered, the WGM110 establishes a TCP connection with the IFTTT server and sends an HTTP GET request to the URL specified by the IFTTT applet I created. Unfortunately, IFTTT only allows free users to create private applets, but creating the applet was simple: step-by-step instructions for creating my applet can be found in the project ReadMe file.
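
        For context, if the applet is triggered through IFTTT’s Webhooks (“Maker”) service, the raw request handed to the WGM110 would look roughly like the sketch below; the event name and key are placeholders, and the exact URL comes from the user’s own applet (see the ReadMe).

        #include <stdio.h>

        int main(void)
        {
          /* Placeholder event name and key; the real values come from the
           * user's IFTTT applet (see the project ReadMe). */
          const char *event = "door_alarm";
          const char *key   = "YOUR_IFTTT_KEY";
          char request[256];

          /* A minimal HTTP/1.1 GET in the style of IFTTT's Webhooks service;
           * this is the text the GG11 asks the WGM110 to push over its TCP socket. */
          snprintf(request, sizeof request,
                   "GET /trigger/%s/with/key/%s HTTP/1.1\r\n"
                   "Host: maker.ifttt.com\r\n"
                   "Connection: close\r\n\r\n",
                   event, key);

          fputs(request, stdout);   /* in the real project this goes to the WGM110 */
          return 0;
        }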

         

        Si7210:  

         

        The GG11 STK comes with an onboard Si7210 Hall Effect Magnetic sensor. It can detect changes in magnetic field down to hundredths of a millitesla (mT), which is more than enough sensitivity for this use case. The part has multiple OTP registers that store various part configurations, and the calibration process specified earlier writes, over I2C, to the register that determines the digital output threshold. The Si7210 also features a tamper threshold, in case someone tries to fool the alarm by using a second magnet to replace the original magnet as the door opens. This threshold is configured to be slightly greater than the original calibration threshold to detect even the slightest tamper. When either threshold is crossed, the part automatically toggles its digital output pin, making it easy to interface the sensor into any design.
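
        As an illustration of the calibration idea, the sketch below programs an output threshold just below the “closed door” reading and a tamper threshold just above it. The I2C address, register numbers, and encodings are placeholders, not values from the Si7210 data sheet or from the project source.

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical I2C helper; on the GG11 this would wrap the emlib I2C driver. */
        static void i2c_write_reg(uint8_t addr, uint8_t reg, uint8_t value)
        {
          printf("I2C 0x%02X: reg 0x%02X <= 0x%02X\n", addr, reg, value);
        }

        #define SI7210_I2C_ADDR   0x30   /* placeholder 7-bit address             */
        #define SI7210_REG_THRESH 0x00   /* placeholder output-threshold register */
        #define SI7210_REG_TAMPER 0x01   /* placeholder tamper-threshold register */

        /* Take the "closed door" field reading and program the output threshold
         * just below it, with the tamper threshold slightly above it. */
        static void calibrate(uint8_t closed_field_code)
        {
          i2c_write_reg(SI7210_I2C_ADDR, SI7210_REG_THRESH, closed_field_code - 2);
          i2c_write_reg(SI7210_I2C_ADDR, SI7210_REG_TAMPER, closed_field_code + 2);
        }

        int main(void)
        {
          calibrate(0x40);   /* pretend field reading taken with the door closed */
          return 0;
        }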

         

        Using this Project:

         

        This project provides a good starting point for anyone who wants to utilize the Si7210 Hall Effect sensor and/or the WGM110 Wi-Fi Expansion kit working in sync with the GG11 STK. The expansion kit can also be used with the PG1 or PG12 boards, but my code may require a few changes in initialization, depending on which specific peripherals are used. 

         

        Below is a slide that details all the various features that I utilized for each part. Feel free to download the project (link below) and use my code to get started on your own projects!

         

        [Image: spectrum-5.png]

         

        Source Files: 

         

        • Magnetic alarm zip file (attached) 

         

        Other EFM32GG11 Projects: 

      • Building a Spectrum Analyzer for the Giant Gecko Series 1

        Siliconlabs | 08/241/2017 | 12:35 PM

        This project was created by Silicon Labs’ summer intern David Schwarz.

         

        [Image: spectrum GG11.png]

         

        Project:

         

        A real-time embedded spectrum analyzer with a waterfall spectrogram display. The spectrum analyzer displays the most recently captured magnitude response, and the spectrogram provides a running history of the changes in frequency content over time.

         

        Background:

         

        The original intent of this project was to demonstrate real-time digital signal processing (DSP) using the Giant Gecko 11 MCU and the CMSIS DSP library. Since many use cases for real-time DSP on an embedded platform pertain to signal characterization and analysis, I decided that a spectrum analyzer would be a good demonstration.

         

        Description:

         

        The spectrum analyzer works by capturing a buffer of data from a user-selected input source: either the microphone on the Giant Gecko 11 Starter Kit (STK) or the channel X input of the primary analog-to-digital converter (ADC0) on the Giant Gecko 11 device. It then obtains and displays the frequency response of that data. The display also shows a spectrogram to give the user information about how a signal is changing over time. The format used here is a ‘waterfall’ spectrogram, where the X axis represents frequency, the Y axis represents time, and the color of the pixel at each coordinate corresponds to the magnitude.

         

        Below is a video demonstration of the final project; the legend on the right shows how the spectrogram color scale relates to intensity.

         

         

        There are two parts to the video: one uses the mic input with classical music, and the other sweeps the ADC input using a function generator.

         

        [Image: Spectrogram data flow block diagram]

         

        The block diagram above shows the steps required to convert the incoming time domain data to visual content. Certain parts of the process demanded specific implementations in order to function in real time.

         

        I found it necessary to implement dual buffering to allow for simultaneous data capture and processing, which allowed for lower overall latency without losing sections of incoming data.
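
        A minimal host-side sketch of the ping-pong idea is shown below; the buffer size and function names are mine, not the project’s LDMA code.

        #include <stdint.h>
        #include <stdio.h>

        #define BUF_LEN 512

        /* Two capture buffers: while DMA (simulated here) fills one, the CPU
         * processes the other, so no incoming samples are dropped. */
        static int16_t bufA[BUF_LEN], bufB[BUF_LEN];

        static void capture(int16_t *dst)        /* stands in for the DMA transfer   */
        {
          for (int i = 0; i < BUF_LEN; i++) dst[i] = (int16_t)i;
        }

        static void process(const int16_t *src)  /* stands in for FFT + display step */
        {
          long sum = 0;
          for (int i = 0; i < BUF_LEN; i++) sum += src[i];
          printf("processed buffer, checksum %ld\n", sum);
        }

        int main(void)
        {
          int16_t *fill = bufA, *work = bufB;

          for (int frame = 0; frame < 4; frame++) {
            capture(fill);                       /* in hardware this runs in parallel */
            process(work);
            int16_t *tmp = fill; fill = work; work = tmp;   /* swap roles each frame  */
          }
          return 0;
        }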

         

        The microphone data also required further processing to properly format the incoming bytes. This needed to be done post capture, as input data was obtained using direct memory access (DMA).

         

        Finally, I chose to only normalize and display 0 to 8 kHz frequency data, since most common audio sources, including recorded music, don’t contain much signal energy above 8 kHz. However, to avoid harmonic aliasing, I decided to oversample at a frequency of 34133 Hz. I used this specific sampling frequency to get 512 samples (one of the few buffer sizes the ARM FFT function supports) every 15 milliseconds. This 15 millisecond time constraint is very important for maintaining real-time functionality, as humans are very sensitive to latency when video lags audio.
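
        As a rough sketch of the FFT step, assuming the CMSIS-DSP real FFT is used on the 512-sample buffer (the buffer names and surrounding structure are mine, not the project’s):

        #include "arm_math.h"   /* CMSIS-DSP */

        #define FFT_LEN 512

        static float32_t samples[FFT_LEN];        /* one 15 ms capture at 34133 Hz  */
        static float32_t spectrum[FFT_LEN];       /* interleaved re/im FFT output   */
        static float32_t magnitude[FFT_LEN / 2];  /* 256 bins from DC up to Nyquist */

        void compute_spectrum(void)
        {
          arm_rfft_fast_instance_f32 fft;
          arm_rfft_fast_init_f32(&fft, FFT_LEN);

          /* Real FFT of the 512-sample buffer, then per-bin magnitudes. Bin
           * spacing is 34133 / 512, roughly 66.7 Hz, so about the first 120 bins
           * cover 0 to 8 kHz, the range the display normalizes and draws. */
          arm_rfft_fast_f32(&fft, samples, spectrum, 0);
          arm_cmplx_mag_f32(spectrum, magnitude, FFT_LEN / 2);
        }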

         

        Using This Project:

         

        This project provides a good starting point for anyone wanting to implement real-time DSP on the Giant Gecko microcontroller. It can be run on an out-of-the-box Giant Gecko Series 1 STK, or it can be configured with an analog circuit or module that generates a 0 to 5 V signal as the input source. The complete source code and Simplicity Studio project files are linked below, along with inline and additional documentation that should be useful in understanding how the application works.

         

        The ADC input mode and DSP functionality of this project are also fully compatible with any Silicon Labs STK using an ARM Cortex-M4 core (e.g., Wonder, Pearl, Flex, Blue, and Mighty Geckos). The microphone and color LCD, however, are not present on other STKs.

         

        Source Files:

         

        https://www.dropbox.com/s/wvuk5yk192xywfl/spectrum_analyzer.zip?dl=0

      • EFM32 Voice Recognition Project Using Giant Gecko's Temperature/Humidity Sensor

        Siliconlabs | 08/237/2017 | 12:37 PM

        This project was created by Silicon Labs’ summer intern Cole Morgan.

         

        Background and motivation:

         

        This project is a program that implements voice recognition for the Giant Gecko 11 (GG11) using the starter kit’s temperature and humidity sensor and the Wizard Gecko Module. My motivation to work on this project was mainly that I wrote another project that implemented voice recognition for the GG11 using the starter kit’s LEDs, and I wanted a more advanced application for my voice recognition algorithm.

         

        The program works by first learning your voice through a small training protocol in which the user says each keyword a couple of times when prompted. After it has learned the user’s voice, the user can set either a temperature or humidity threshold by saying “set” followed by either “temp” for temperature or “humid” for humidity. After this, the user says a number from 0 to 99 one digit at a time to set the threshold value; for example, “one nine” is interpreted as 19, so saying “set humid four two” sets a humidity threshold of 42% humidity. Then, if the humidity measured by the onboard sensor crosses this threshold, the user will receive a text.
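
        To illustrate how such a command could be assembled from single keywords, here is a toy parser; the state names, keyword strings, and example input are illustrative only and are not taken from the project source.

        #include <stdio.h>
        #include <string.h>

        /* Recognized keywords arrive one at a time from the voice-recognition
         * stage; this parser assembles "set temp|humid <digit> <digit>" into a
         * threshold value. */
        typedef enum { WAIT_SET, WAIT_TYPE, WAIT_DIGIT1, WAIT_DIGIT2 } parse_state_t;

        static const char *digits[10] =
          { "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine" };

        static int digit_value(const char *word)
        {
          for (int i = 0; i < 10; i++)
            if (strcmp(word, digits[i]) == 0) return i;
          return -1;
        }

        int main(void)
        {
          const char *spoken[] = { "set", "humid", "four", "two" };   /* example input */
          parse_state_t state = WAIT_SET;
          int is_humidity = 0, threshold = 0;

          for (unsigned i = 0; i < sizeof spoken / sizeof spoken[0]; i++) {
            const char *w = spoken[i];
            switch (state) {
            case WAIT_SET:
              if (strcmp(w, "set") == 0) state = WAIT_TYPE;           /* trigger word */
              break;
            case WAIT_TYPE:
              is_humidity = (strcmp(w, "humid") == 0);
              state = WAIT_DIGIT1;
              break;
            case WAIT_DIGIT1:
              threshold = 10 * digit_value(w);
              state = WAIT_DIGIT2;
              break;
            case WAIT_DIGIT2:
              threshold += digit_value(w);
              printf("%s threshold set to %d\n",
                     is_humidity ? "humidity" : "temperature", threshold);
              state = WAIT_SET;
              break;
            }
          }
          return 0;
        }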

         

        Description:

         

        Using my previous voice recognition project as a base, I first added the support for multiple word commands using the first command word “set” as a kind of trigger so that the program won’t get stuck in the wrong stage of a command. One side effect of using a lot more keywords than the previous project was that I had to stop storing the reference MFCC values in Flash, as there wasn’t enough space for all of them.

         

        The next stage in my development was to interface the Si7021 temperature/humidity sensor on the GG11 starter kit. This stage was quite simple because there was already a demo for the GG11 that interacted with the Si7021, so all I had to do was integrate the LCD.

         

        Then, I interfaced the Wizard Gecko Module (WGM) to connect to IFTTT via Wi-Fi and send an HTTP GET request. This part was the most difficult of this project because I had never worked with communication over Wi-Fi or sending HTTP requests. I designed two different IFTTT triggers for temperature and humidity so that the SMS alert message could be tailored to the type of threshold trigger.

         

         

         

        Accomplishments:

         

        • I adapted my voice recognition to work accurately and quickly with a larger bank of keywords
        • I successfully created two IFTTT applets to send alerts quickly to a phone number
        • The program is written in a way that is very easily adaptable for many different uses
          • It is well modularized, so if any part of the program is useful to a specific application, it can be easily separated from the rest of the code

         

        Lessons Learned:

        • I learned how to scale an algorithm to work with a larger set of data
        • I learned how to use web requests to interface a microcontroller with applications through the Internet
        • I learned about large program organization and good general coding practice: this was the biggest software project I have written by far

         

        Potential Use Cases:

         

        • Voice-controlled Nest thermostat
        • A shipping container application where temperature or humidity in an area needs to be monitored to make sure it is at a certain level

         

        Materials Used:

         

        • GG11 STK with Si7021 and microphone
        • Pop filter for STK microphone
        • Wizard Gecko Module
        • Simplicity Studio IDE
        • CMSIS DSP Library

         

        Source Code: 

         

        • VRTempHumid (attached) 

      • EFM32 Voice Recognition Project Using Giant Gecko's LEDs

        Siliconlabs | 08/237/2017 | 12:26 PM

        This project was created by Silicon Labs’ summer intern Cole Morgan.

         

        Background and motivation:

         

        This project is a program that implements voice recognition for the GG11 using the starter kit’s onboard LEDs. My motivation to work on this project was mainly that I had never done anything remotely close to voice recognition before, and I thought it would be a good challenge. Another motivation was that I am very interested in the Amazon Echo and the other emerging home assistant technologies.

        The program works by first learning your voice through a small training protocol in which the user says each keyword a couple of times when prompted. After the program has learned the user’s voice, the user can turn the LED on, red, blue, green, or off simply by saying “on”, “red”, “blue”, “green”, or “off”.

         

        Description:

         

        My first step was getting audio input from the microphone into the microcontroller and storing it. This proved a little more difficult than I expected because I hadn’t worked with SPI or I2S before. In addition to this, I also had to design a sound detection system that captures as much significant sound as possible. I did this by squaring and summing the elements of the state buffer of the bandpass FIR filter that I apply on each sample and then setting a threshold for the result of that operation. This system turned out to be extremely useful because, in addition to saving processor time, it also time-aligned the data to be processed.
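
        A minimal sketch of that energy-threshold idea is shown below; the filter length, threshold value, and names are illustrative, and the FIR filtering itself is omitted.

        #include <stdint.h>
        #include <stdio.h>

        #define TAPS      32        /* length of the FIR state buffer     */
        #define THRESHOLD 1.0e6f    /* tuned empirically in a real system */

        /* Square and sum the FIR state buffer after each new sample and compare
         * the energy against a threshold to decide whether sound is present. */
        static float state[TAPS];

        static int sample_is_loud(int16_t sample)
        {
          /* shift the new sample into the filter state (filtering itself omitted) */
          for (int i = TAPS - 1; i > 0; i--) state[i] = state[i - 1];
          state[0] = (float)sample;

          float energy = 0.0f;
          for (int i = 0; i < TAPS; i++) energy += state[i] * state[i];
          return energy > THRESHOLD;
        }

        int main(void)
        {
          int16_t quiet = 10, loud = 20000;
          for (int i = 0; i < TAPS; i++) sample_is_loud(quiet);
          printf("quiet: %d\n", sample_is_loud(quiet));   /* prints 0 */
          for (int i = 0; i < TAPS; i++) sample_is_loud(loud);
          printf("loud:  %d\n", sample_is_loud(loud));    /* prints 1 */
          return 0;
        }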

         

        After this step, I began to implement the actual voice recognition. At first, I thought I could just find a library online and implement it easily, but this turned out to be far from true. Most voice recognition libraries are much too big for a microcontroller, even one with a very large Flash memory of 2 MB like the GG11. There was one library I found that was written for Arduino, but it didn’t work very well. So, I began the process of writing my own voice recognition algorithm.

         

        After a lot of research, I decided I would use Mel-frequency cepstral coefficients (MFCCs) as the basis for my algorithm. There are a number of other audio feature coefficients, but MFCCs seemed to be the most effective. The calculation of MFCCs is basically several signal processing techniques applied in a specific order, so I used the CMSIS ARM DSP library for those functions.
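
        For orientation, the usual MFCC recipe looks roughly like the sketch below: window, magnitude spectrum, mel filterbank, log, then a DCT. The filterbank and spectrum steps are stubbed out, and all sizes and names are illustrative rather than taken from the project.

        #include <math.h>

        #define FRAME_LEN 512
        #define N_MEL      20       /* mel filterbank bands (illustrative)  */
        #define N_MFCC     12       /* cepstral coefficients kept per frame */
        #define PI_F       3.14159265f

        /* Stubs so the sketch is self-contained; a real implementation computes
         * these with an FFT (e.g. CMSIS-DSP) and a triangular mel filterbank. */
        static void magnitude_spectrum(const float *frame, float *mag)
        {
          for (int i = 0; i < FRAME_LEN / 2; i++) mag[i] = fabsf(frame[i]);
        }
        static void mel_filterbank(const float *mag, float *mel)
        {
          for (int m = 0; m < N_MEL; m++) mel[m] = 1.0f + mag[m];
        }

        void compute_mfcc(const float *frame, float *mfcc)
        {
          float windowed[FRAME_LEN], mag[FRAME_LEN / 2], mel[N_MEL];

          /* 1. Hamming window to reduce spectral leakage */
          for (int i = 0; i < FRAME_LEN; i++)
            windowed[i] = frame[i] *
                          (0.54f - 0.46f * cosf(2.0f * PI_F * i / (FRAME_LEN - 1)));

          magnitude_spectrum(windowed, mag);    /* 2. FFT magnitudes (stubbed)        */
          mel_filterbank(mag, mel);             /* 3. triangular band energies (stub) */

          for (int m = 0; m < N_MEL; m++)       /* 4. log compression                 */
            mel[m] = logf(mel[m] + 1e-6f);

          for (int k = 0; k < N_MFCC; k++) {    /* 5. DCT-II gives the coefficients   */
            float acc = 0.0f;
            for (int m = 0; m < N_MEL; m++)
              acc += mel[m] * cosf(PI_F * k * (m + 0.5f) / N_MEL);
            mfcc[k] = acc;
          }
        }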

         

        After beginning work on this, I created a voice training algorithm to allow the program to learn any voice and adapt to any user. The training program has the user say each word a configurable number of times, and then calculates the MFCCs of that person’s pronunciation of the keyword and stores them in flash memory.

         

        Next, because the input data was time-aligned, I could simply put all the MFCCs for the 4 buffers in one array and use that as the basis for comparison. In addition to this, I also calculated and stored the first derivative (delta coefficients) of the MFCC data to increase accuracy.

         

        [Image: Coefficient.png]

         

         

         

        Accomplishments:

         

        • I wrote my own voice recognition algorithm for microcontrollers with relatively little RAM and flash memory usage
          • Can store up to 10 keywords in Flash and up to 1,150 keywords in RAM (the latter would require modifying the program to skip Flash storage and use fewer training repetitions)
        • Successfully created a voice recognition and training technique that works for everyone, no matter their accent or voice, with an excellent success rate
        • The program is written in a way that is very easily adaptable for many different uses
          • It is well modularized, so if any part of the program is useful to a specific application, it can be easily separated from the rest of the code

        Lessons Learned and Next Steps:

         

        • I learned how voice recognition algorithms generally work and how to implement them
        • I learned lots of signal processing, as I didn’t know anything about it before
        • I learned how to read a large library like emlib more efficiently
        • I learned about large program organization and good general coding practice: this was the biggest software project I have written by far

        My next steps are to apply the voice recognition to a temperature/humidity controller application, which should be easier than this LED application because the keywords are more distinct from each other than “on” and “off” are.

         

        Materials Used:

        • GG11 STK with microphone and LEDs
        • Pop filter for STK microphone
        • Simplicity Studio IDE
        • CMSIS DSP Library

        Source Files: 

        • VRLEDs (attached) 

      • Wireless Encrypted Voice Communication with the EFM32GG11

        Siliconlabs | 08/237/2017 | 11:55 AM

        This project was created by Silicon Labs’ summer intern Kevin Black.

         

        [Image: EFM32GG11-1.jpg]

         

        Project Summary:

         

        The goal of this project was to perform one-way, encrypted, real-time, wireless voice communication from an embedded system to an arbitrary client like a laptop or tablet. This was accomplished using the EFM32GG11 starter kit for audio input/processing and the Wizard Gecko Wi-Fi expansion kit for wireless transmission. Audio data is sampled from the starter kit’s onboard microphone and encrypted with AES using the GG11 32-bit MCU; it is then streamed to any clients connected to the Wizard Gecko’s Wi-Fi access point, where it can be decrypted and played back only with the correct password.

         

        Background and Motivation:

         

        My project’s primary purpose was to demonstrate useful features of both the EFM32GG11 starter kit and the Wizard Gecko Wi-Fi expansion kit, as well as the two working smoothly in conjunction through the EXP header.

         

        The first main feature it demonstrates is the EFM32GG11’s CRYPTO module, which exists on all the EFM32 Series 1 devices and provides fast hardware-accelerated encryption. The project utilizes the mbed TLS library configured to use the CRYPTO module, which speeds it up significantly. It demonstrates the high throughput of the CRYPTO module (up to ~123 Mbps max*) by encrypting uncompressed audio in real time with plenty of headroom. The type of encryption is 256-bit AES in CBC mode, which is currently considered secure for all practical purposes.

        (*Assuming 256-bit AES on the GG11 driven by HFRCO at 72 MHz)

         

        Another motivation behind the project was to demonstrate two features of the GG11 starter kit itself: the onboard microphone, and the ability of the Wi-Fi expansion kit to easily attach to and be controlled through the EXP header. No examples existed for the microphone, and very few firmware examples existed for the Wizard Gecko in externally hosted mode. My project demonstrates the quality of the built-in microphone by allowing the user to listen to the audio, and shows how to use the BGLib C library to communicate with the Wizard Gecko from an external host. Additionally, it demonstrates the throughput of a transparent/streaming endpoint on the Wizard Gecko.

         

        Project Description:

         

        [Image: EFM32GG11-2.png]

         

        Block diagram of data flow through transmitter device

         

        Microphone Input:

         

        The GG11 starter kit provides an onboard audio codec that automatically converts the PDM (pulse density modulation) data from the onboard MEMS microphones into PCM (pulse code modulation) data and outputs it on a serial interface in I2S format. The codec’s serial interface is connected to the GG11 USART3 location 0 pins, so reading in the audio data is simply a matter of initializing USART3 to I2S with the correct settings, enabling autoTx, and asserting an additional microphone enable pin.

         

        The audio data arrives in 32-bit words, so the sample rate is controlled by setting the I2S baud rate to 64 times the desired sample rate (2 channels, 32 bits each). Each word contains a single 20-bit sample of audio, but very few systems support 20-bit audio, so for my project, I ignore the least significant 4 bits of each sample and only read 16 bits from each word. I also ignore samples from the right microphone, meaning the final audio data I obtain for processing is in 16-bit mono PCM format. The sample rate is easily configurable, but in the end, I settled on 20 kHz, as that seems to be the upper limit of what the Wizard Gecko can handle while still being high enough to cover the most important part of the audible range and provide clear and understandable audio.
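
        A small sketch of that unpacking step, assuming the 20-bit sample is left-justified in each 32-bit word (the alignment is my assumption, not taken from the project source):

        #include <stdint.h>
        #include <stdio.h>

        /* The codec delivers alternating left/right 32-bit words; keeping the top
         * 16 bits of each left-channel word drops the 4 least significant bits of
         * the 20-bit sample and discards the right channel entirely. */
        static size_t extract_mono16(const uint32_t *i2s_words, size_t n_words,
                                     int16_t *out)
        {
          size_t n = 0;
          for (size_t i = 0; i < n_words; i += 2)       /* even index = left channel */
            out[n++] = (int16_t)(i2s_words[i] >> 16);   /* top 16 bits of the sample */
          return n;
        }

        int main(void)
        {
          uint32_t words[4] = { 0x12345000, 0xAAAAA000, 0xFEDCB000, 0x55555000 };
          int16_t mono[2];
          size_t n = extract_mono16(words, 4, mono);
          printf("%u samples: 0x%04X 0x%04X\n", (unsigned)n,
                 (unsigned)(uint16_t)mono[0], (unsigned)(uint16_t)mono[1]);
          return 0;
        }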

         

        The audio input data is transferred into memory using LDMA in order to save CPU cycles. The right channel data is repeatedly written to a single byte in order to discard it, while the left channel data is alternately transferred into two 16-byte buffers; when one buffer is being filled, the other is being processed by the CPU.

         

        Encryption & Transmission:

         

        When a left channel transfer completes, it triggers an interrupt that switches the current process buffer and signals that the next packet is ready to be processed. The GG11 then encrypts the current 16-byte buffer (16 bytes is the AES block size) using the mbed TLS library configured to use the CRYPTO module. In CBC (cipher block chaining) mode, the library automatically XORs the plaintext with the previous ciphertext before encryption.

         

        The 256-bit key used for encryption is derived from a password using SHA-256. Only clients with the same password can obtain the correct key by hashing the password.
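
        A minimal sketch of the key derivation and per-block encryption, using the mbed TLS one-shot SHA-256 and AES-CBC calls; the password shown is the demo default mentioned later in this post, and the block contents are a stand-in for real audio.

        #include <string.h>
        #include <stdio.h>
        #include "mbedtls/aes.h"
        #include "mbedtls/sha256.h"

        /* Derive a 256-bit AES key from a password with SHA-256, then encrypt one
         * 16-byte block in CBC mode with an all-zero IV.  On the GG11 the library
         * is configured to offload AES to the CRYPTO peripheral; the calls look
         * the same on any platform. */
        int main(void)
        {
          const char *password = "gecko123";
          unsigned char key[32], iv[16] = {0};
          unsigned char block[16] = "16 bytes of PCM";   /* stand-in audio block */
          unsigned char cipher[16];

          /* one-shot SHA-256 (named mbedtls_sha256_ret() in some 2.x releases) */
          mbedtls_sha256((const unsigned char *)password, strlen(password), key, 0);

          mbedtls_aes_context aes;
          mbedtls_aes_init(&aes);
          mbedtls_aes_setkey_enc(&aes, key, 256);
          mbedtls_aes_crypt_cbc(&aes, MBEDTLS_AES_ENCRYPT, sizeof block, iv, block, cipher);
          mbedtls_aes_free(&aes);

          for (int i = 0; i < 16; i++) printf("%02X", cipher[i]);
          printf("\n");
          return 0;
        }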

         

        In my project, I decided to fix the initialization vector as all zeros. Normally, initialization vector reuse is considered bad practice and weak security; however, it only has the potential to leak data from the first few blocks of data streams with identical prefixes, and that poses an insignificant threat to my project due to the enormous quantity of blocks and the amount of noise in a meaningful segment of audio.

         

        Once a block is encrypted, it is put into a first-in-first-out queue where it is transmitted over UART through the EXP header to the Wizard Gecko. Flow control is implemented using an additional CTS (clear to send) pin connected to the Wizard Gecko; the module can drive CTS high when it cannot keep up with the transmission rate, in which case the transmission halts and the queue fills up. The transmission is driven by interrupts, which allows it to run “in the background” while the next buffer is being encrypted, and does not block the main thread when the Wizard Gecko raises CTS.
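
        A simplified sketch of such a byte queue between the encryption step and the UART transmit path; the uart_ready_for_byte() and uart_send_byte() helpers stand in for the real TX-buffer interrupt and the CTS pin check, and are not the project’s code.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define QUEUE_SIZE 1024   /* power of two so the index mask works */

        /* Simple byte FIFO between the encryption step and the UART TX path. */
        static uint8_t  queue[QUEUE_SIZE];
        static volatile uint32_t head, tail;

        static bool uart_ready_for_byte(void) { return true; }   /* i.e. !CTS and TX buffer free */
        static void uart_send_byte(uint8_t b) { putchar(b); }

        static bool enqueue(uint8_t b)
        {
          if (head - tail == QUEUE_SIZE) return false;           /* queue full */
          queue[head++ & (QUEUE_SIZE - 1)] = b;
          return true;
        }

        /* In the real firmware this runs from the UART TX interrupt, so encryption
         * of the next block continues while earlier bytes drain out. */
        static void drain(void)
        {
          while (tail != head && uart_ready_for_byte())
            uart_send_byte(queue[tail++ & (QUEUE_SIZE - 1)]);
        }

        int main(void)
        {
          const char *msg = "ciphertext block\n";
          for (const char *p = msg; *p; p++) enqueue((uint8_t)*p);
          drain();
          return 0;
        }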

         

        The baud rate for UART transmission is configurable as long as the GG11 and the Wizard Gecko are both configured to the same value. Interestingly, however, the Wizard Gecko seemed to perform better (raise CTS for less time) at higher baud rates, perhaps because that increases the gap between packets, so I settled on 3 MHz.

         

        Wi-Fi:

         

        The Wizard Gecko Wi-Fi module, when connected to an external MCU in hosted mode, operates in a command-response format. The GG11 sends commands through the EXP header via SPI, formatted with a binary protocol called BGAPI. When the Wizard Gecko is ready to send a response (or an event) back to the MCU, it raises a notify pin (also connected to the EXP header) that tells the GG11 to read and handle the message. All of the BGAPI commands and responses are defined in a C library called BGLib.

         

        Upon initialization, my project configures the Wizard Gecko to be a hidden wireless access point and a TCP server. When a client connected to the access point opens a connection to the IP address and port of the TCP server, it triggers an event that is forwarded back to the GG11. The GG11 then enables the microphone and begins encrypting and transmitting audio via UART to the Wizard Gecko’s second USART interface (the one not used for BGAPI commands). That interface is configured in transparent/streaming mode, which means it forwards all received data unmodified to a single endpoint. Before the encryption starts, the GG11 configures this endpoint to be that of the connected client.

         

        Accomplishments, Flaws, and Next Steps:

         

        Ultimately, the project was successful and met its end goal of building a one-way encrypted voice communication device. Speech is clear and comprehensible at up to several inches away from the onboard microphone, and the real-time encryption is secure.

         

        The primary flaw in the final implementation is that the Wizard Gecko itself has trouble constantly streaming a large quantity of data without interruptions. The module will occasionally “choke” for 1-2 seconds, during which it will stop transmitting and refuse to accept data by raising CTS. Performance is inconsistent, and the device will go anywhere from 10 to more than 60 seconds in between “chokes”. This causes frustrating gaps in the audio, much like a cell phone connection that is “breaking up”, although on average, the project is still quite usable for talking to someone. I added a blue LED that turns on whenever CTS is raised, so the user can at least tell when the device is not transmitting by observing the LED light up solid blue.

         

        In the future, this behavior could likely be eliminated by changing the protocol that the device uses to transmit. Bluetooth would have much more bandwidth; alternatively, if the Wizard Gecko is still used, Wi-Fi Direct or a TCP connection over a third-party local area network (rather than using the Wizard Gecko as the access point) could work. The last two options would make the demo much more difficult to use, so Bluetooth would be the ideal solution; this explains why Bluetooth has become so popular for real-life products with similar functionality.

         

        Using this Project:

         

        Follow the instructions in the readme of the encrypted voice transmitter folder to configure the Wizard Gecko and GG11 to act as the transmitter portion of the project.

         

        To use the receiver, download the executable Java applet below and run the .exe file inside (no JVM installation required). Unless the IP address and port were changed in the firmware, leave those fields blank. Enter the password defined in the firmware (default “gecko123”).

         

        After booting up the transmitter, wait for the LCD output to reach “waiting for client”, and then connect to the hidden access point that the device has created (default SSID is “Encrypted Voice Demo”).

         

        [Image: EFM32GG11-3.png]

         

        Once the LCD displays “client joined”, click “Connect” on the Java applet’s dialog. When the status message below the connect button displays “Connected” in green, audio from the microphone should begin playing back on the PC.

         

        [Image: EFM32GG11-4.png]

         

        Source Files: 

         https://www.dropbox.com/s/1uofaidpdz061ti/encrypted-voice-master.zip?dl=0

         

        [zip file containing encrypted_voice_transmitter (firmware source code)]
        [zip file containing executable Java applet]

        [zip file containing encrypted_voice_receiver (Java source code)]

         

         

      • Project Completed and Working a Treat (TB Sense and Pi)

        neal_tommy | 08/232/2017 | 04:20 PM

        All, 

         

        Whilst I've received much assistance from this community, I thought it time I give back and report on my working project (thanks to all who helped along the way).

         

        Essentially I have a TB Sense connected via BLE to an RPi3. I made some changes to the code on the TB Sense to make it continuously advertise, and a Python script on the Pi then collects data once every 10 minutes.

         

        This data is fed to Thingspeak (I'm considering alternative options here) and graphed for viewing. I'm still in the phase of looking at some daily / weekly averages and seeing what changes they would suggest to general lifestyle. I'm collecting data from 6 environmental sensors (sound, temperature, humidity, pressure, TVOC and eCO2).

         

        [Image: Board holder]

         

        Overall 3D printed enclosure (enough to let some air in for measurement)

        [Image: Enclosure]

         

        I've also got a cool 3D printed enclosure made, which houses the TB Sense in a nice looking (and acceptable to the wife) box whilst on the table top. The Pi is sitting next to my router collecting the data.

         

        So far I've collected a couple of days of data, as shown below. It all seems to be working and is ready for a power cut and suitable reboot / reconnect if that happens (common here in South Africa).

         

        [Image: Capture.JPG]

         

        Happy to answer any questions on this, and share details. It is by no means a complex project; however, it did keep me busy for a few weekends. There are still some areas I'd like to improve and then work from there (probably the efficiency of the Python code).

         

        Ciao. 

         

        from __future__ import division
        from bluepy.btle import Scanner, Peripheral
        import struct
        import urllib2
        from time import sleep
        
        PRIVATE_KEY = 'H4HMW1TRAGNYUPBJ'
        
        # Base URL of Thingspeak
        baseURL = 'https://api.thingspeak.com/update?api_key='
        
        
        def vReadSENSE():
            scanner = Scanner(0)
            devices = scanner.scan(2)
            for dev in devices:
                print "Device %s (%s), RSSI=%d dB" % (dev.addr, dev.addrType, dev.rssi)
                for (adtype, desc, value) in dev.getScanData():
                    print "  %s = %s" % (desc, value)
        
            num_ble = len(devices)
            print num_ble
            if num_ble == 0:
                return None
        
            ble_service = []
            char_sensor = 0
            non_sensor = 0
            eCO2_char = None
            TVOC_char = None
            Pressure_char = None
            Sound_char = None
            temperature_char = None
            humidity_char = None
        
            count = 15
        
            for i in range(num_ble):
                try:
                    devices[i].getScanData()
                    ble_service.append(Peripheral())
                    ble_service[char_sensor].connect('00:0b:57:36:63:ff', devices[i].addrType)
                    # ble_service[char_sensor].connect(devices[i].addr, devices[i].addrType)
                    char_sensor = char_sensor + 1
                    print "Connected %s device with addr %s " % (char_sensor, devices[i].addr)
                except:
                    non_sensor = non_sensor + 1
        
            try:
                for i in range(char_sensor):
                    characteristics = ble_service[i].getCharacteristics()
                    for k in characteristics:
                        print k
                        if k.uuid == "efd658ae-c401-ef33-76e7-91b00019103b":
                            print "eCO2 Level"
                            eCO2_char = k
                        if k.uuid == "efd658ae-c402-ef33-76e7-91b00019103b":
                            print "TVOC Level"
                            TVOC_char = k
                        if k.uuid == "00002a6d-0000-1000-8000-00805f9b34fb":
                            print "Pressure Level"
                            Pressure_char = k
                        if k.uuid == "c8546913-bf02-45eb-8dde-9f8754f4a32e":
                            print "Sound Level"
                            Sound_char = k
                        if k.uuid == "00002a6e-0000-1000-8000-00805f9b34fb":
                            print "Temperature"
                            temperature_char = k
                        if k.uuid == "00002a6f-0000-1000-8000-00805f9b34fb":
                            print "Humidity"
                            humidity_char = k
            except:
                return None
        
            while True:
                try:
                    # TVOC: uint16, units of ppb
                    TVOC_data_value = struct.unpack('<H', TVOC_char.read())[0]
        
                    # eCO2: uint16, units of ppm
                    eCO2_data_value = struct.unpack('<H', eCO2_char.read())[0]
        
                    # pressure: uint32, units of 0.1 Pa -> convert to Pa
                    Pressure_data_value = struct.unpack('<I', Pressure_char.read())[0] / 10
        
                    # sound level: int16, units of 0.01 dB -> convert to dB
                    Sound_data_value = struct.unpack('<h', Sound_char.read())[0] / 100
        
                    # temperature: int16, units of 0.01 degrees C -> convert to degrees C
                    float_temperature_data_value = struct.unpack('<h', temperature_char.read())[0] / 100
        
                    # humidity: uint16, units of 0.01 %RH -> convert to %RH
                    humidity_data_value = struct.unpack('<H', humidity_char.read())[0] / 100
                except:
                    # lost the connection - drop back to the outer loop and rescan
                    return None
        
                print "TVOC: ", TVOC_data_value
                print "eCO2: ", eCO2_data_value
                print "Pressure: ", Pressure_data_value
                print "Sound: ", Sound_data_value
                print "Temperature: ", float_temperature_data_value
                print "Humidity: ", humidity_data_value
        
                # upload to Thingspeak once every 15 readings
                # (adjust the count threshold / sleep for the 10-minute cadence)
                if count > 14:
                    f = urllib2.urlopen(baseURL + PRIVATE_KEY +
                                        "&field1=%s&field2=%s&field3=%s&field4=%s&field5=%s&field6=%s" %
                                        (TVOC_data_value, eCO2_data_value, Pressure_data_value,
                                         Sound_data_value, float_temperature_data_value,
                                         humidity_data_value))
                    print f.read()
                    f.close()
                    count = 0
        
                count = count + 1
                sleep(1)
        
        
        while True:
            vReadSENSE()