Official Blog of Silicon Labs

      • 2016 Silicon Labs Community Highlights

        Nari | 12/29/2016 | 08:00 AM

         

        2016 has been a great year, with lots of exciting news, events, and campaigns to keep us busy. As 2017 approaches we wanted to recap some of the highlights that made the Silicon Labs Community even better this year.

         

        2016.png

         

         

        All-Time High Page Views and Registrations 

         

        Did you know that more than 2 million pages were viewed and about 4,000 members registered for the community this year? Both are new records for the community. We want to thank all of you for making this community friendly to new members and helping them find answers!

         

        new registrations.png

         

        3 New Wireless Forum Boards

         

        With the rapid growth of the Wireless forum in 2016, we decided to divide it into three sub-forums covering different wireless solutions: Bluetooth/Wi-Fi, Mesh, and Proprietary. Now it’s easier for you to find the right place to ask your questions and get them escalated to the right party faster.

         

        Wireless.png

         

        34 User-Generated Projects

         

        At the heart of development are the design projects that lead to inspiration and creation. Throughout 2016, 34 projects were created by our internal and external community members. Check out some of the projects from the past year to get an idea for your next project:

         

        Creating F1-Style Race Starting Lights with EZR32 Wireless MCUs

        EFR32WG Range Test

        Blue Gecko Digital Tokens

        Portable sensor logger with LCD, Bluetooth LE & cloud connection

        Thunderboard Sense kit Sound Alarm and notification in Telegram

        IoT demo using Thunderboard

         

        Projects.jpg

         

        15 IoT Hero Stories 

         

        Since kicking off the IoT Hero Series in May 2015, we have interviewed 28 IoT Heroes, and 15 of them were featured in the community this year as blog posts. These IoT Heroes are the innovators creating real-world IoT applications based on Silicon Labs’ parts.

         

        Check out the IoT Hero Series here.

         

        original-IoTHeroes.png

         

        Community Member Spotlight

         

        In addition to the IoT Heroes, we also featured selected community members each month. We began spotlighting members who are active or new to the community on a monthly basis to help members connect with each other.

         

        Special thanks to @rev, @sureshjoshi, @Timur, @klangdon79, @operator, @hollie, @spiritonline, @Yuriy, and @Scotty for being featured members of 2016 and sharing your thoughts on Silicon Labs’ products and the community!

         

        If you haven’t met the featured members yet, please follow the link here to read their stories. 

         

        monthly member spotlight.png

         

        Product Launches

         

        In 2016, exciting new products and software solutions were added to our portfolio. The multiprotocol wireless SoC families such as Blue Gecko, Flex Gecko, and Mighty Gecko were launched to help developers create a multitude of IoT applications such as connected home, lighting, building automation, wearables, and smart metering. Read the announcement here.

         

        The Wizard Gecko was created for applications where leading RF performance, low power consumption and fast time to market are key requirements. Read the announcement here.

         

        To help you bring your IoT product ideas to life, Thunderboard React, a small demo board, was introduced as well. Read the announcement here.

         

        Lastly, an upgraded version of Simplicity Studio 4 was announced through the community with a series of useful training videos. You can find the announcement here.

         

        We will continue to update existing product offerings and introduce new solutions in 2017. So please stay tuned for an update!

         

        Wireless Gecko announcement.png

         

        Community Super Users

         

        As we sum up 2016, we want to take this opportunity to give our sincerest thanks to the active super users who achieved the three highest community ranks (see how our ranking system works here). By the end of 2016, we had 21 Hero members, 2 Master members, and 1 Legend member.

         

        A special mention goes to @erikm, our first ever Legend rank holder. He has been an active member of the community for 13 years. Congratulations, Erik, and thanks for your contribution.

         

        The Silicon Labs team wants to let all of you know that we appreciate you for keeping the community active and valuable.

         

        super hero2.png

         

        Whether you visited the community just once or became a super user, I want to say thank you to each and every one of you for making the community greater this year. Without your participation and engagement, none of the highlights and milestones mentioned above would have been possible.

         

        Lastly, I wish you a wonderful holiday season and look forward to seeing you in the community in 2017.

         

        Happy holidays!

         

         

        Nari Shin | Community Manager

      • Heading to Vegas for CES 2017

        Lance Looper | 12/22/2016 | 07:41 AM

        It’s holiday season again, which means CES is right around the corner. This is the show’s 50th anniversary and we’re bringing some of our coolest demos. If you plan on attending, we can be found at:

         

        Location: The Venetian Palazzo Hospitality Suite, Level 3, Toscana 3709 & 3710, 3708

        Date: January 5-8, 2017

         CES Banner.png

         

        And we'll be showing off some of our latest stuff: 

         

        Integrated Solutions for Multiple Devices  

        Check out our high-performance, low-power heart rate monitoring solution for wearables. Our module provides a compact but extensible platform to support next-generation biometrics.

         

        Connected Lighting for a Smarter Home

        We will be demoing our robust Smart Home and Connected Lighting solutions built with the Mighty Gecko multi-band, multiprotocol wireless SoC and best-in-class wireless mesh networking software and tools.

         

        Digital Audio Solutions

        Demo our audio solutions for iPhone and Android apps with flexible user interface configuration, updated sensors and much more for quick migration to compatible phones.

         

        Smart Home Ecosystem

        See how our Bluetooth solutions seamlessly sync with Apple HomeKit and Bluetooth LE applications. With our Blue Gecko and voice-over-Bluetooth software and hardware, you can enhance your third-party Bluetooth-enabled devices.

         

         CES CTA.png

         

         

      • Build Your own Lightsaber (Sounds)! - Part 5

        lynchtron | 12/14/2016 | 01:22 AM

        14_title.png

        In the last section, we configured the internal DAC to play audio from the MicroSD card.  In this section, we will develop the ability to blend multiple sound effects on the fly and then use that new capability to play the appropriate sound whenever an accelerometer sees activity.

         

        Blending Sound Samples on the Fly

        In order to create a lightsaber sound effects generator, we need to always play a primary sound that handles the idle “humming” sound, and then other sounds such as swings and clashes that play on top of the first sound.  Since we don’t know ahead of time when these things are going to happen, we can’t simply create static sound effects that cover all possible combinations of sound events.  We have to dynamically blend sounds together at run time.  This code starts in lightsaber_effects_player.c, while most of the work is once again performed in dac_helpers.c.

         

        If we had infinite time and infinite RAM, our lives as embedded developers would be so much easier.  Our resources are limited, but they are perfectly capable of doing the job.  It just requires us to think a little bit differently.  We have to keep data flowing from multiple sources to “feed” the MCU with new audio data before the next sample is due at the inputs to the speaker.  This requires more complicated code than simply reading the entire contents of sound file A, then sound file B, and mixing them into a finished sound file C that just sits there waiting for its data to be accessed.  Instead, we have to do all of this source-data fetching and mixing on the fly, just before the speaker needs a new sample.  This is most efficiently accomplished with the DMA engine, which frees your core up to do other things or allows the MCU to drop into a lower power state to save energy.
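
        As a rough sketch of that streaming pattern (my own illustration using emlib’s DMA driver, not code from this project; fill_buffer_from_file and the buffer names are hypothetical), a ping pong callback refills whichever buffer just drained while the DMA engine keeps streaming the other:

        #include "em_dma.h"
         
        #define BUFFERSIZE 512
        static uint16_t pingBuffer[BUFFERSIZE];   // hypothetical ping pong buffers
        static uint16_t pongBuffer[BUFFERSIZE];
         
        // Called by the DMA driver when one buffer has been fully consumed
        static void transferComplete(unsigned int channel, bool primary, void *user)
        {
              uint16_t *drained = primary ? pingBuffer : pongBuffer;
         
              // Refill the drained buffer from the SD card while the other one plays
              bool more = fill_buffer_from_file(drained, BUFFERSIZE);   // hypothetical helper
         
              // Hand the refilled buffer back to the DMA engine, keeping the existing
              // addresses (NULL dst/src); tell it to stop when the file runs out
              DMA_RefreshPingPong(channel, primary, false, NULL, NULL,
                                  BUFFERSIZE - 1, !more);
        }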

         

        Surprisingly, the only thing that needs to be done to blend sounds is to add two samples from different files together (or from different channels, in the case of stereo-to-mono conversion).  No averaging is needed, and the result sounds naturally blended.  Most of the audio data is contained in the lower quarter of the possible digital range, with peaks that occasionally jump above the three-quarter mark.  Adding sounds together works because the peaks in the audio files don’t usually happen at the exact same moment in time.  When they do, you will get some clipping, but it should work for the simple sound effects used in this example.
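
        If clipping ever becomes audible, a saturating add is a cheap guard.  Here is a minimal sketch (my own illustration, not code from the project) that mixes two 16-bit PCM samples and clamps the result instead of letting it wrap:

        #include <stdint.h>
         
        // Mix two 16-bit PCM samples, clamping instead of wrapping on overflow
        static int16_t mix_samples(int16_t a, int16_t b)
        {
              int32_t sum = (int32_t)a + (int32_t)b;   // widen to avoid overflow
              if (sum > INT16_MAX) sum = INT16_MAX;    // clip positive peaks
              if (sum < INT16_MIN) sum = INT16_MIN;    // clip negative peaks
              return (int16_t)sum;
        }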

         

        In order to blend more than a single sound effect, we fetch data from multiple files on the SD card and do the blending right as we load the RAM buffers, keeping separate file pointers and byte counts for each file read.  In order to reuse the same example from before and keep things simple, I have chosen to create a new main file called lightsaber_effects_player.c and a new add_track function in dac_helpers.c that can only add sounds to the currently playing track.  A better solution would be to rewrite the whole of dac_helpers.c to handle any number of sounds played at any time, but I wanted the difference between playing a single sound and playing multiple sounds to be clear.  I have marked the places in dac_helpers.c where extra code was added to play multiple blended sounds so that you can clearly see that code.

         

        Here is the add_track function, plus an array of new data structures that I created to handle the dynamically added tracks.

         

        typedef struct
        {
              FIL file_object;
              uint32_t total_bytes;
              uint32_t bytes_processed;
        } additional_tracks_struct;
         
        static additional_tracks_struct additional_tracks[MAX_TRACKS];
        static uint8_t num_of_additional_tracks = 0;
         
        void add_track(char * filename)
        {
              // If we have exceeded the number of available tracks, return
              if (num_of_additional_tracks >= MAX_TRACKS) return;
         
              // Find an available track
              uint8_t track_num = 0;
              while (track_num < MAX_TRACKS)
              {
                    if (additional_tracks[track_num].file_object.fs == 0)
                    {
                          break;
                    }
                    track_num++;
              }
         
              if (track_num >= MAX_TRACKS) return;
         
              /* Open wav file from SD-card */
              if (f_open(&additional_tracks[track_num].file_object, filename, FA_READ) != FR_OK)
              {
                    /* No micro-SD with FAT32, or no WAV_FILENAME found */
                    DEBUG_BREAK
              }
         
              /* Read header and place in header struct */
              WAV_Header_TypeDef tmp_header;
              UINT bytes_read;
              f_read(&additional_tracks[track_num].file_object, &tmp_header, sizeof(tmp_header), &bytes_read);
              additional_tracks[track_num].total_bytes = tmp_header.bytes_in_data;
              num_of_additional_tracks++;
        }

        Notice that add_track does not call prepare_microsd_card; it relies on the play_sound function to have already done that.

         

        Inside of the FillBufferFromSDcard function, I loop over the additional_tracks array looking for additional sounds to fetch from the MicroSD card:

         

              /* Read data into temporary buffer. */
              if (WAVfile.fs != 0)
              {
                    f_read(&WAVfile, ramBufferTemporaryMono, BUFFERSIZE*2, &bytes_read);
                    ByteCounter += bytes_read;
              }
         
              /////////////////////////////////////////////////////////////////////////////////////
              // This part added for lightsaber_effects_player.c
              uint8_t track_num = 0;
              int16_t tmp_buffer[MAX_TRACKS][BUFFERSIZE*2];
              while (track_num < num_of_additional_tracks)
              {
                    if (additional_tracks[track_num].total_bytes > 0)
                    {
                          // Add more sounds to ramBufferTemporaryMono as necessary
                          f_read(&additional_tracks[track_num].file_object, tmp_buffer[track_num],
                                      BUFFERSIZE*2, &bytes_read);
                          additional_tracks[track_num].bytes_processed += bytes_read;
                    }
                    track_num++;
              }
              /////////////////////////////////////////////////////////////////////////////////////

        The first call to f_read handles the initial sound played, addressed with the global WAVfile file pointer.  Any additional sounds are read into a new tmp_buffer for each track found to contain a non-zero file pointer in the additional_tracks array.  The bytes read by the f_read function are stored separately for each new sound effect to be blended.

         

        After all of the sound effects are loaded into buffers, the data is added to the single sample being processed before it is converted to 12-bit data and stored back in the RAM buffer for later retrieval by the DMA engine:

         

            j = 0;
            for (i = 0; i < (2 * BUFFERSIZE) - 1; i += 2)
            {
                  tmp = ramBufferTemporaryMono[j];
         
                  /////////////////////////////////////////////////////////////////////////////////////
                  // This part added for lightsaber_effects_player.c
                  track_num = 0;
                  while (track_num < num_of_additional_tracks)
                  {
                        if (additional_tracks[track_num].total_bytes > 0)
                        {
                              // Add more sounds as necessary
                              tmp += tmp_buffer[track_num][j];
                        }
                        track_num++;
                  }
                  /////////////////////////////////////////////////////////////////////////////////////
         
                  /* Convert to 12 bits */
                  tmp >>= 4;
         
                  buffer[ i     ] = tmp;
                  buffer[ i + 1 ] = tmp;
                  j++;
            }

        Finally, in the DMA callback PingPongTransferComplete function, the additional_tracks array is searched for sounds that are still playing, which keeps the DMA engine active even if the primary sound effect has stopped playing.  Any sounds that have finished are closed so that their file pointers are set back to zero.

         

          /* Stop DMA if bytecounter is equal to datasize or larger */
          bool stop = true;
         
          if (ByteCounter >= wavHeader.bytes_in_data)
          {
                f_close(&WAVfile);
          }
          else
          {
                stop = false;
          }
         
          /////////////////////////////////////////////////////////////////////////////////////
          // This part added for lightsaber_effects_player.c
          for (int track_num = 0; track_num < MAX_TRACKS; track_num++)
          {
                if (additional_tracks[track_num].total_bytes > 0)
                {
                      if (additional_tracks[track_num].bytes_processed >=
                                additional_tracks[track_num].total_bytes)
                      {
                            f_close(&additional_tracks[track_num].file_object);
                            additional_tracks[track_num].bytes_processed = 0;
                            additional_tracks[track_num].total_bytes = 0;
                            num_of_additional_tracks--;
                      }
                      else
                      {
                            stop = false;
                      }
                }
          }
          /////////////////////////////////////////////////////////////////////////////////////

        Connect the Accelerometer for Motion Triggered Sound Effects

        Now that we have a sound effects engine that can blend the lightsaber hum with the lightsaber swing sounds, all on the fly, we can connect this to our ADXL345 accelerometer from chapter 10 and play those sounds at the appropriate time, based on accelerometer events.  This creates a motion-activated sound effects generator.

         

        The simplest approach for prototyping is to configure the ADXL345 for motion-activated interrupts and then poll the interrupt register for the required interrupt.  But the accelerometer also drives those interrupts onto pins, which would allow your MCU to stay in a low power state until an interesting event happens.   The interrupt pins should be used for a production solution.
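
        For example, a production version might route the ADXL345’s INT1 output to a GPIO pin and sleep until it fires.  A minimal sketch of that pattern on the EFM32, assuming INT1 is wired to a hypothetical pin PD3:

        #include "em_gpio.h"
         
        // Assumes the ADXL345 INT1 output is wired to PD3 (a hypothetical choice)
        void adxl_int_setup(void)
        {
              // Configure PD3 as an input and enable a rising-edge interrupt on it
              GPIO_PinModeSet(gpioPortD, 3, gpioModeInput, 0);
              GPIO_IntConfig(gpioPortD, 3, true, false, true);
         
              // Pin 3 is odd, so its interrupt arrives on the GPIO_ODD vector
              NVIC_ClearPendingIRQ(GPIO_ODD_IRQn);
              NVIC_EnableIRQ(GPIO_ODD_IRQn);
        }
         
        void GPIO_ODD_IRQHandler(void)
        {
              GPIO_IntClear(1 << 3);    // acknowledge the pin interrupt
              // Wake-up only; the main loop reads INT_SOURCE and starts the sound
        }

        The main loop could then sleep between events with something like EMU_EnterEM1() instead of spinning; the DAC, TIMER, and DMA still need the high-frequency clock, so deeper energy modes are not an option while audio is playing.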

         

        Connect the ADXL345 to the MCU as follows:

        Starter Kit                               ADXL345 Card Breakout

        VMCU (since we are out of 3V3 pins)       VIN  (Be careful, DO NOT connect to 3V3 here)

        GND                                       GND

        PD6 (SDA)                                 SDA

        PD7 (SCL)                                 SCL

         

        Once connected, I reused some of the setup files from chapter 10 and made sure that communication was established with the ADXL345 by checking that the ADXL345_REG_DEVID register could be read from the device.  I then configured the ADXL345 interrupts for our purposes in adxl.c:

        void i2c_init_registers()
        {
              // Set range to 16g and FULL resolution
              i2c_write_register(ADXL345_REG_DATA_FORMAT, ADXL345_RANGE_16_G);
         
              // ADXL_BW_RATE = 50 Hz, limited by I2C speed
              i2c_write_register(ADXL345_REG_BW_RATE, ADXL345_DATARATE_50_HZ);
         
              // Set up threshold for activity, found through trial and error
              i2c_write_register(ADXL345_REG_THRESH_ACT, 0x90);
         
              // Turn on the axes that will participate
              i2c_write_register(ADXL345_REG_ACT_INACT_CTL, ADXL345_ACT_ac_dc | ADXL345_ACT_X | ADXL345_ACT_Y | ADXL345_ACT_Z);
         
              // Set up interrupt outputs, sent to INT1 pin by default
              i2c_write_register(ADXL345_REG_INT_ENABLE, ADXL345_INT_Activity);
         
              // Clear interrupts by reading the INT_SOURCE register
              i2c_read_register(ADXL345_REG_INT_SOURCE);
         
              // Start measurement
              i2c_write_register(ADXL345_REG_POWER_CTL, 0x08);
        }

         

        This is all called from main in lightsaber_effects_player.c, which follows:

         

        #include "em_device.h"
        #include "em_chip.h"
        #include "em_cmu.h"
        #include "em_timer.h"
        #include "em_gpio.h"
        #include "dac_helpers.h"
        #include "utilities.h"
        #include <stdlib.h>
        #include "adxl.h"
        #include "utilities.h"
         
        int main(void)
        {
              CHIP_Init();
         
              // Need to boost the clock to get above 7kHz
              CMU_ClockSelectSet(cmuClock_HF, cmuSelect_HFXO);
         
              CMU_ClockEnable(cmuClock_GPIO, true);
              CMU_ClockEnable(cmuClock_DMA, true);
         
              // Enable GPIO output for Timer3, Compare Channel 2 on PE2
              GPIO_PinModeSet(gpioPortE, 2, gpioModePushPull, 0);
         
              // Show the sample rate on PE1 for debug
              GPIO_PinModeSet(gpioPortE, 1, gpioModePushPull, 0);
         
              // Get the systick running for delay() functions
              if (SysTick_Config(CMU_ClockFreqGet(cmuClock_CORE) / 1000))
              {
                    DEBUG_BREAK;
              }
         
              i2c_setup();
         
              // Offset zero is Device ID
              uint16_t value = i2c_read_register(ADXL345_REG_DEVID);
         
              if (value != DEVICE_ID)
              {
                    DEBUG_BREAK
              }
         
              i2c_init_registers();
         
              // We need to start the sound, which starts DMA, before we set up the DAC_TIMER
              play_sound(SABER_IDLE);
              DAC_setup();
              DAC_TIMER_setup();
         
              uint32_t adxl_debounce_timer = set_timeout_ms(500);
         
              while (1)
              {
                    play_sound(SABER_IDLE);
         
                    // Clear interrupts by reading the INT_SOURCE register
                    uint8_t int_source = i2c_read_register(ADXL345_REG_INT_SOURCE);
                    if (int_source & ADXL345_INT_Activity)
                    {
                          // Clear interrupts by reading the INT_SOURCE register
                      i2c_read_register(ADXL345_REG_INT_SOURCE);
         
                          if (expired_ms(adxl_debounce_timer))
                          {
                                add_track(SABER_SWING);
                                adxl_debounce_timer = set_timeout_ms(500);
                          }
                    }
              }
        }

        At idle, we get a nice crackling hum.  When you tap the ADXL345 breakout board, you should hear a swing sound blended with the hum.  You could add additional reads of the ADXL345 acceleration registers described in chapter 10 to determine whether the motion was a swing or an impact, and load up the appropriate sound at that point.  I leave that as an exercise for the reader.  Give it a try, and use the Force, Luke!
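
        As a starting point for that exercise, here is one possible sketch (mine, untested) that samples the acceleration registers with the chapter 10 I2C helpers and picks a sound based on a magnitude threshold.  SABER_CLASH, CLASH_THRESHOLD, and the ADXL345_REG_DATA* names follow the chapter’s conventions but are assumptions here:

        #include <stdlib.h>     // for abs()
         
        // Hypothetical threshold separating a gentle swing from a hard impact
        #define CLASH_THRESHOLD 400
         
        // Read one signed 16-bit axis value from a pair of ADXL345 data registers.
        // (A production driver would burst-read all six data bytes in one
        // transaction to avoid tearing between the low and high bytes.)
        static int16_t read_axis(uint8_t reg_low)
        {
              uint16_t low  = i2c_read_register(reg_low);
              uint16_t high = i2c_read_register(reg_low + 1);
              return (int16_t)((high << 8) | low);
        }
         
        // Decide which sound to blend in, based on the strongest axis reading
        void play_motion_sound(void)
        {
              int16_t x = read_axis(ADXL345_REG_DATAX0);
              int16_t y = read_axis(ADXL345_REG_DATAY0);
              int16_t z = read_axis(ADXL345_REG_DATAZ0);
         
              int peak = abs(x);
              if (abs(y) > peak) peak = abs(y);
              if (abs(z) > peak) peak = abs(z);
         
              add_track(peak > CLASH_THRESHOLD ? SABER_CLASH : SABER_SWING);
        }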

         

        This wraps up our foray into sound.  I hope that you have an easier time creating loud and clear sound than I did getting to this point.  Don’t be intimidated by the vast amount of information on this topic that you will find on the Internet.

      • Happy Holidays: Bringing a Futuristic Smart Home to Life

        AlexK | 12/13/2016 | 06:04 PM

        As the holiday season approaches, I’ve got gadgets and gifts on my mind.  Thanks to the explosion in popularity of IoT products, there’s lots of great stuff to choose from.  Verizon is out advertising their connected-car Hum device, the Google Home just recently launched to compete with Amazon’s wildly successful Echo, and hundreds of other home automation products are now available to make our lives a little bit easier.

         

        To illustrate this point, just look at a handful of the VC-backed startup companies in the Home Automation space, shown below in the graphic from CB Insights.  Chances are you’ve heard of many of these, such as Canary (camera), ecobee (thermostat), and LIFX (lighting).  This is just the beginning of an increasingly long list.

         

        CB Insights Graphic.png                                                                                                                                              

        According to Gartner research, a typical home could contain more than 500 smart devices by 2022.  That’s astonishing, and soon: only five years away.  In addition, a survey of 1,600 North American consumers conducted by iControl found that 50 percent of consumers planned on buying at least one smart home device this past year.  On the list of most desired smart home devices, the top five were self-adjusting thermostats, door locks, master remotes, home monitoring cameras, and automatically adjustable outdoor lighting.

         

        Making all these statistics and predictions a reality requires some careful consideration and design by the product providers.  Some of the downward pressures that could hinder consumer excitement are cost, ease of use, interoperability, and reliability.

         

        Icontrol Report Graphic.png

                                                                                                                                                  

         

        This is why having a state-of-the-art and low cost solution is essential to adoption and widespread consumer enjoyment.  As consumers, we demand easy-to-use products at affordable prices.  More importantly, we prefer seamless experiences with products that can interoperate across our entire home network, enabling capabilities such as quick set up with smartphones and tablets and the flexibility to grow and easily connect with existing and future home automation devices.

         

        As a major part of our mission, Silicon Labs has doubled down on helping companies large and small simplify their IoT challenges, including the ones noted above.  Since the uptick in home automation (HA) that started with the Nest thermostat, we’ve helped HA product innovators develop applications that sense, compute, and/or wirelessly connect.

         

        In addition to offering low-power sensors, MCUs, and wireless SoCs and modules, we also provide complete hardware and software solutions for home automation, lighting, and other applications.  For home automation, we’re proud to announce two new products: a passive infrared motion sensor and a wireless smart plug.  Both are complete reference designs that ship with a pre-loaded ZigBee stack and HA 1.2 profiles, and both are equipped with a multiprotocol wireless SoC. This means that with some software modifications the same hardware design could support Thread and Bluetooth, a critical capability that gives consumers the ability to control their products over Bluetooth and interoperate with other HA systems and products.

         

        As smart home technology continues to grow in popularity, we’re proud to lend our silicon hardware, software stacks, development tools, and expertise and lessons learned so that you can quickly launch your own wireless product. 

         

        For more information, visit the Silicon Labs Connected Home page.

      • IoT Hero Xively Wants to Ensure a Jetsons-like Future

        deirdrewalsh | 12/9/2016 | 10:09 AM

        We recently got to chat with Xively’s Paul Caponneti, Director of Engineering, and Adam Lewis, Alliance Manager for IoT. Under the LogMeIn umbrella of companies, Xively is an award-winning enterprise IoT platform helping companies navigate the IoT landscape and intelligently build connected products and services. Check out how they make the magic happen and meaningfully expand the IoT ecosystem one customer at a time.

         

        Gentlemen, tell us the heart of Xively’s business offering. What’s going on up there in Boston?

        Xively’s IoT platform is best-of-breed in terms of security, scalability, performance — the key components that any company will absolutely need to have a successful connected product. We partner with other leading companies like Silicon Labs to build a complete end-to-end IoT solution for our customers trying to figure out how to really enter the IoT space.

         

        A lot of these companies have the vision for an IoT product. They know where they want to go, but the road is often murky and undefined. Our job is to take them through all of the considerations, whether business or technical, in order to ultimately deliver a successful product to their end customer.

         

        We make it easy for companies to very quickly, cost-effectively, and with minimal risk stand up and manage a connected business at scale. Our purpose is to make the journey as painless as possible.

         

        Xiveley Offering.png

         

        What’s the biggest challenge for relative newcomers to the IoT in your minds?

        We think there’s a lot of noise out there about the IoT, and that it can be really confusing and discouraging to customers taking their first steps. Also, some vendors take a myopic approach to what the IoT actually “is” — that the IoT is only about singular things like data or machine learning. We think the IoT is actually the sum of many things, and we want to help customers see that broader thinking and potential for innovation as well.

         

        A lot of organizations we help don’t necessarily have a deep technical background, so parts of the IoT can seem like alchemy to them. But we make it very easy for them to instead focus their energies on creating products that their customers want, finding new and exciting ways to connect with end users, and creating features and functionality that really add value to people’s lives. They let us handle the IoT infrastructure for them.

         

        It sounds like performance, scalability, and security together are ultimately your golden triad. What’s Xively’s philosophy on security?

        Because we’re part of a larger company that’s built on the platform of secure, remote connectivity, we get a lot of clients coming to us because of that very reason. They just have the assumption that a LogMeIn product is and will always be secure, and they’re correct. We have a robust security-focused team, and their work is never-ending. We’ve actually turned away work because we thought that a client was asking us to cut corners in our security processes. Those would-be clients then unfortunately did indeed get hacked.

         

        Tell us what Silicon Labs’ products you’re using as you help craft solutions for your customers and why you picked it/them?

        Sure. The Thunderboard React is a great prototyping tool which we used for our wearables projects and any other solutions requiring a BLE personal area network. It’s a tremendous tool — it’s got some sensors on it, some buttons, some LEDs; it makes for a nice pairing and comes with an Android/iOS app that helps us bootstrap a sales demo.

         

        Silicon Labs is also a leader in the 802.15.4 market. We have used Silicon Labs’ ZigBee hardware for our customers looking for low power wireless solutions. In fact, Xively can easily run on the Silicon Labs ZigBee gateways, making it easier for customers to get their hardware developed fast.

         

        Xively is a member of Thread Group, which we think represents the future of consumer IoT.

         

        Xiveley_Thunderboard.png

         

        Excellent. Now for the Bonus Question: Where do you gentlemen see the IoT going in the next 5–8 years given your experience helping so many companies in so many different fields?

        At LogMeIn and Xively we believe that possibilities increase with connectivity. We work tirelessly to simplify the way that people connect with each other and the world around them — the IoT is no exception.

        As technology barriers decrease, and the expectation for seamless customer experiences increases, we see the IoT becoming more mainstream. It will be less about gadgetry and more about overarching, truly useful applications. The IoT provides companies an opportunity to get closer to their customers and truly deliver what their customers seek, both implicitly and explicitly. Additionally, the IoT will allow companies to create new revenue channels, streamline business processes, and design innovative products and services that will transform our everyday lives.

         

        From a more technical perspective, the IoT is going to be about true interoperability among devices and moving toward ubiquitous and robust interfaces that manufacturers can put their trust and development time into. Right now, for example, people have 1,000 IoT devices, and as such have 1,000 apps to run all of them; it’s confusing and cumbersome. As the IoT continues to mature, standards and protocols will emerge both on the hardware and software side, ultimately ensuring a truly Jetsons-like future for us all.

         

      • Friday Fun: Iliad of the Transistor

        Lance Looper | 12/9/2016 | 10:00 AM

        Engineering and art are kindred disciplines, leaning on one another where properties like time, space, form, and function have to be considered to deliver the best possible experience. With that in mind, we'd like to share a poem from one of our very own engineers. We hope it starts your weekend off right.

         

        Iliad3.jpg

      • December 2016 Member Spotlight: Scotty

        Siliconlabs | 12/8/2016 | 10:58 AM

        Each month, we feature one of the Silicon Labs Community members who is active or new in the community, to help members connect with each other.

         

        Meet our December member of the month: Scotty

        profile.png

         

        Q: Congrats on becoming our featured member of the month! Can you introduce yourself to our community members?

         

        Hello Community :)

         

        First, I have to say that Scotty isn't my real name. I chose this name because I'm a Star Trek fan, and maybe you know, Mr. Scott was the chief engineer on the starship Enterprise. I have been interested in both Star Trek and electronics since I was a child, so that's why I chose this name...

        I'm located in the south of Germany, in the Black Forest. I made my hobby my job, which means that I'm working as a hardware engineer, creating microcontroller circuits and corresponding PCB layouts as well as small test software for initial circuit bring-up. Additionally, at home, I choose small projects on topics I'm interested in. Believe me, building an analog Theremin circuit as a mainly digital engineer isn't easy... ;)

         

        Q: How did you learn about the Silicon Labs Community?

         

        A big part of my electronics education (about 16 years ago) was programming the original Intel 8032 in assembly language. As soon as I understood the architecture (which definitely happens when using assembler), I wanted to do my own projects, which led to collecting information about the 8032 implementations of the many manufacturers like Atmel, Dallas/Maxim, Infineon, etc., and there also was SiLabs. In short, for me, SiLabs provided the best mix of speed, peripherals, usable documentation, and software, so I decided to choose SiLabs.


        And as my experience increased, I changed from asking questions to giving answers to community members, to give at least something back from what I got from the community.

         

        Q: What features, products, services would you like to see in the future from Silicon Labs?

         

        Good question :) Keep up with the broad product range, but also make polls to ask the community about this. If you have a new device in mind, ask the community about its 'wish list'.

         

        Q: What advice would you give to someone new to the community?

         

        If you're new to programming, then, at least:

         

        • Make comments in your source code about what the code does (or what you think it does) - this makes it easier to find errors, believe me
        • Keep your comments up to date
        • Make small steps - don't begin with an MP3 player, a Bluetooth application, or something like that; just try to blink an LED, then let the LED fade in and out, and so on - move from simple port manipulation to peripherals like timers, then to interfaces like UART, SPI, etc. Even if those small 'practice projects' consume a lot of your time before you begin the main project, you'll save a lot more time later thanks to the experience.
        • When you post your first questions, post your code too - as a beginner, there's nothing worth keeping secret - posting the code helps others see what you're doing (or what you want to do versus what you're really doing)
        • When posting a problem, describe it in as much detail as possible - even facts which sound useless might be helpful - the more information you give, the more (and faster) help you'll get
        • Don't code a line without a concept - divide the things to do into small tasks, and make a flowchart for each task; it will help you, believe me

        Q: Thanks for answering the questions. Any final comment?

         

        Yes! As mentioned above, I really appreciate that SiLabs keeps its focus on the community - keep this up!

      • Build your own lightsaber (sounds)! - Part 4

        lynchtron | 12/6/2016 | 05:35 PM

        14_title.png

        In the last section, we connected the EFM32 DAC to an amplifier and speaker and connected the MicroSD card.  In this section, we will fetch the sounds from the MicroSD card and send those sounds to the speaker.

         

        MicroSD and DAC Software

        When possible, I like to begin with an example application and then modify it for my needs, at least for the prototyping stage.  In chapter 13, I used the DK3850 wavplayer example to create sound effects with an I2S audio chip.  I had to remove some Board Support Package (BSP) files from the example in order to get the wavplayer.c file to run on my Wonder Gecko, because the example was taken from a different kit.  Once I did that, I found a few errors here and there that I had to correct before I could play sound files from a MicroSD card.  Once I got the example working, only then did I start making changes to add my own functionality.  I also used a local Git revision control system so that I could see differences between file revisions and could always roll back changes if I broke something that was once working.  I recommend a revision control system even for experimentation, because one small change in a file can be hard to find later.

         

        For this chapter, I started with the code from chapter 13.  I tell you this so that you can take a look at the differences between those files and see how things have evolved. 

         

        The first thing that I did was comment out all support for I2S.  I didn’t want to look at those blocks of code in my new DAC implementation anymore.  I left them around for a while so that I could refer to them as I made the switch to the DAC, and then deleted most of those lines entirely once the DAC was working.  My main function in sound_effects_player.c is very simple:

         

        #include "em_device.h"
        #include "em_chip.h"
        #include "em_cmu.h"
        #include "em_timer.h"
        #include "em_gpio.h"
        #include "dac_helpers.h"
        #include "utilities.h"
         
        int main(void)
        {
              CHIP_Init();
         
              // Need to boost the clock to get above 7kHz
              CMU_ClockSelectSet(cmuClock_HF, cmuSelect_HFXO);
         
              CMU_ClockEnable(cmuClock_GPIO, true);
              CMU_ClockEnable(cmuClock_DMA, true);
         
              // Enable GPIO output for Timer3, Compare Channel 2 on PE2
              GPIO_PinModeSet(gpioPortE, 2, gpioModePushPull, 0);
         
              // Show the sample rate on PE1 for debug
              GPIO_PinModeSet(gpioPortE, 1, gpioModePushPull, 0);
         
              // Get the systick running for delay() functions
              if (SysTick_Config(CMU_ClockFreqGet(cmuClock_CORE) / 1000))
              {
                    DEBUG_BREAK;
              }
         
              play_sound(TEST_SOUND4);
              DAC_setup();
              DAC_TIMER_setup();
         
              while (1) ;
        }

        Most of the work is done in the play_sound function, located in dac_helpers.c, which opens the file passed in, in this case “sweet4.wav”, defined as TEST_SOUND4.  The play_sound function loads two ping pong RAM buffers and initializes the DMA transfers.  We have to call play_sound before we set up the DAC and sample rate timer because the timing of the DAC depends on the sample rate, and we don’t know that until we have read it from the sound file.  The DAC is then set up to work with DMA, and a timer is used to tell the DAC when to start each digital-to-analog conversion according to the sample rate contained in the .wav file header.
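
        For reference, the WAV_Header_TypeDef used throughout this code maps onto the canonical 44-byte RIFF/WAVE header.  Here is roughly what it contains; the code in this chapter only relies on channels, frequency, and bytes_in_data, and the other field names are my own guesses at the project’s struct:

        // Approximate layout of a canonical 44-byte RIFF/WAVE header
        typedef struct
        {
              uint8_t  id[4];                // "RIFF"
              uint32_t len;                  // file length minus 8 bytes
              uint8_t  wavid[4];             // "WAVE"
              uint8_t  fid[4];               // "fmt "
              uint32_t format_len;           // length of the format chunk (16 for PCM)
              uint16_t format_tag;           // 1 = uncompressed PCM
              uint16_t channels;             // 1 = mono, 2 = stereo
              uint32_t frequency;            // sample rate in Hz, drives DAC_TIMER
              uint32_t avg_bytes_per_sec;    // frequency * block_align
              uint16_t block_align;          // channels * bits_per_sample / 8
              uint16_t bits_per_sample;      // 16 for the files used here
              uint8_t  data_id[4];           // "data"
              uint32_t bytes_in_data;        // payload size, used to know when to stop
        } WAV_Header_TypeDef;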

         

        The dac_helpers.c file contains all of the code necessary to open files, load data from those files into two ping pong DMA buffers, set up the sample rate timer, set up the DAC, set up the DMA, and handle the callback on DMA cycle completion.  The following figure is a graphical overview of how the code works.

         

         14_file_load_diagram.png

         

        The code starts by mounting the MicroSD card and ensuring that there is a file system, and then opens the named file.  If any of those steps fail, the debugger will break at one of several DEBUG_BREAK lines to show you where things went wrong.  If you are not connected to the debugger (Simplicity Studio IDE), your firmware will simply halt and refuse to function, so DEBUG_BREAK statements are only for experimenting and learning in the IDE.
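
        If you are building this outside the book’s project files, note that DEBUG_BREAK is not a standard macro.  A common definition on Cortex-M parts, and my assumption of what the project uses, is a breakpoint instruction:

        // Halts the core when a debugger is attached; without one, the MCU just stops
        #define DEBUG_BREAK     __asm__("BKPT #0");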

         

        The open_file function has a static variable that remembers whether the MicroSD card has been initialized; if it has not been initialized yet, open_file calls prepare_microsd_card to do the one-time initialization and mount the FAT file system.

        void prepare_microsd_card()
        {
              /* Initialize filesystem */
              MICROSD_Init();
         
              FRESULT res = f_mount(0, &Fatfs);
              if (res != FR_OK)
              {
                    /* No micro-SD with FAT32 is present */
                    DEBUG_BREAK
              }
        }
         
        void open_file(char * filename)
        {
              static bool first_time = true;
         
              if (first_time)
              {
                    prepare_microsd_card();
              }
         
              /* Open wav file from SD-card */
              if (f_open(&WAVfile, filename, FA_READ) != FR_OK)
              {
                    /* No micro-SD with FAT32, or no WAV_FILENAME found */
                    DEBUG_BREAK
              }
         
              ByteCounter = 0;
         
              /* Read header and place in header struct */
              f_read(&WAVfile, &wavHeader, sizeof(wavHeader), &bytes_read);
         
              if (first_time)
              {
                    /* Fill both primary and alternate RAM-buffer before start */
                    FillBufferFromSDcard(wavHeader.channels, true);
                    FillBufferFromSDcard(wavHeader.channels, false);
                    first_time = false;
              }
        }

        When the file is first opened, the ping pong buffers in RAM are filled with the first chunks of data.  The FillBufferFromSDcard function sets up DMA after the RAM buffers are loaded, but the DMA engine waits for a request from the DAC before doing anything.  As soon as the DAC is configured and started, it requests data from the DMA engine whenever the sample rate timer signals, over the Peripheral Reflex System (PRS), that it is time for a new sample.  Therefore, the TIMER drives the DAC, which drives the DMA and everything else.  This is exactly as it should be, since sound is based on a sample rate.

         

        Note that two “ping pong” buffers are used so that the DMA engine can be working on one buffer while the other is loaded from the MicroSD card.  The same two buffers are used for both mono and stereo files, holding either interlaced left and right channels or a single mono channel.  Stereo data is always interlaced in .wav files, and I have modified the stereo logic of FillBufferFromSDcard to blend the left and right channels into a single channel for our mono DAC:

         

        if (channels > 1)
        {
              /* Stereo: store Left and Right data interlaced, as in the wav file.
               * DMA writes the data to the combined register as interlaced data. */
         
              /* First buffer is filled from SD-card */
              f_read(&WAVfile, buffer, 4 * BUFFERSIZE, &bytes_read);
              ByteCounter += bytes_read;
         
              for (i = 0; i < 2 * BUFFERSIZE; i++)
              {
                    if (!(i & 1))
                    {
                          tmp = 0;
                    }
                    tmp += buffer[i];
         
                    if (i & 1)
                    {
                          /* Convert to 12 bits */
                          tmp >>= 4;
         
                          buffer[i-1] = tmp;
                          buffer[i] = tmp;
                    }
              }
        }

        In this code, buffer points to whichever of the two global ping pong buffers is not currently in use by the DMA engine.  The code fills that buffer with data from the MicroSD card with the f_read function and keeps track of how much data has been processed with the ByteCounter variable.  But more processing is needed after that.  For stereo .wav files, we have to convert the samples from two stereo channels into a single mono channel for our mono DAC.  This is accomplished by reading two samples, adding them together, and then placing the blended sample back into both the left and right channel positions in the buffer.  This code was originally used on an I2S chip that was capable of stereo, which is why we have the extra space allocated for stereo sound in the buffer.  We are still not done.  The samples in .wav files are 16 bits and we only have a 12-bit DAC, so we do a right shift by four bits to create a 12-bit sample, and then place this modified sample back into the global RAM ping pong buffer.

         

        After the RAM buffers are ready to go, a DMA transfer is initiated.  The DMA engine waits for the DAC to tell it when to provide another sample, and the DAC waits for the sample rate TIMER0 (named DAC_TIMER in the code) to tell it when to process each sample.  The whole conversion process starts at TIMER0, which is the heartbeat that controls the overall cadence of all audio conversion.  The DAC processes data on that heartbeat and is fed by the DMA engine at the exact moment it requires 32 bits of data to perform the next digital-to-analog conversion of a single audio sample.  When the DMA engine has processed all of the data in one of the ping pong buffers, it triggers a Ping Pong Complete callback to a function that loads more data from the MicroSD card, processing the data on the fly to blend samples, adjust for the 12-bit DAC, etc., right as it loads the RAM buffers.  This whole process repeats until the number of bytes processed by FillBufferFromSDcard equals the number of bytes in the source file, and then the DMA engine is told to stop.  At that point, TIMER0 continues to run, but the DAC doesn’t do anything because it never receives any more data from the DMA engine.

         

        The DAC in this implementation is set up to operate in differential mode, which allows the 2’s complement signed data from normal .wav files to be placed into the DAC registers for conversion.  The DAC automatically configures the integrated op amps for the primary differential outputs.  In this function, we also set up the PRS system to watch for a signal from the DAC timer and request new samples from the DMA engine.

         

        void DAC_setup(void)
        {
          CMU_ClockEnable(cmuClock_DAC0, true);
          CMU_ClockEnable(cmuClock_PRS, true);
         
          DAC_Init_TypeDef        init        = DAC_INIT_DEFAULT;
          DAC_InitChannel_TypeDef initChannel = DAC_INITCHANNEL_DEFAULT;
         
          /* Calculate the DAC clock prescaler value that will result in a DAC clock
           * close to 1 MHz. The second parameter is zero: if the HFPERCLK value is 0,
           * the function will check what the HFPERCLK actually is. */
          init.prescale = DAC_PrescaleCalc(1000000, 0);
         
          // Differential mode
          init.diff = true;
         
          // Higher reference of 3.3V
          init.reference = dacRefVDD;
         
          /* Initialize the DAC. */
          DAC_Init(DAC0, &init);
         
          /* Enable prs to trigger samples at the right time with the timer */
          initChannel.prsEnable = true;
          initChannel.prsSel    = dacPRSSELCh0;
         
          /* Both channels can be configured the same
           * and be triggered by the same prs-signal. */
          DAC_InitChannel(DAC0, &initChannel, 0);
          DAC_InitChannel(DAC0, &initChannel, 1);
         
          DAC_Enable(DAC0, 0, true);
          DAC_Enable(DAC0, 1, true);
         
          // By default, the DAC output is on PB11 and PB12.  Use the following code
          // to move it to the alternate pins if necessary:
          //  DAC0->OPA0MUX |= DAC_OPA0MUX_OUTMODE_ALT | DAC_OPA0MUX_OUTPEN_OUT4;
          //  DAC0->OPA1MUX |= DAC_OPA1MUX_OUTMODE_ALT | DAC_OPA1MUX_OUTPEN_OUT4;
        }

        TIMER0 is likewise set up to trigger a PRS signal according to the sample rate contained in the sound file, using the same PRS channel that the DAC is watching to start each digital-to-analog conversion.

         

        void DAC_TIMER_setup(void)
        {
              CMU_ClockEnable(cmuClock_TIMER0, true);
         
          uint32_t timerTopValue;
          /* Use default timer configuration, overflow on counter top and start counting
           * from 0 again. */
          TIMER_Init_TypeDef timerInit = TIMER_INIT_DEFAULT;
         
          TIMER_Init(TIMER0, &timerInit);
         
          /* PRS setup */
          /* Select TIMER0 as source and TIMER0OF (Timer0 overflow) as signal (rising edge) */
          PRS_SourceSignalSet(0, PRS_CH_CTRL_SOURCESEL_TIMER0, PRS_CH_CTRL_SIGSEL_TIMER0OF, prsEdgePos);
         
          /* Calculate the proper overflow value */
          timerTopValue = CMU_ClockFreqGet(cmuClock_TIMER0) / wavHeader.frequency;
         
          /* Write new topValue */
          TIMER_TopBufSet(TIMER0, timerTopValue);
        }

        The configuration and initialization of the DMA transfers were detailed in chapter 13; the only change in this chapter is to switch the target and data increment size from the USART, which was needed for the I2S chip, to the DAC, the new target.  The USART allowed two 32-bit words to be transferred at a time, while the DAC only accepts one 32-bit word at a time.
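
        As a sketch of what that change might look like with emlib’s DMA driver (the channel number and callback variable are placeholders; the exact code lives in the chapter’s dac_helpers.c), the channel is pointed at the DAC’s data request, and the descriptor moves one 32-bit word per request into the fixed COMBDATA register:

        #include "em_dma.h"
         
        #define DMA_CHANNEL 0                  // hypothetical channel assignment
         
        static DMA_CB_TypeDef dacCallback;     // cbFunc set elsewhere, e.g. to PingPongTransferComplete
         
        void DMA_setup_for_dac(void)
        {
              DMA_CfgChannel_TypeDef chnlCfg;
              DMA_CfgDescr_TypeDef   descrCfg;
         
              // Request a transfer whenever DAC0 channel 0 needs a new sample
              chnlCfg.highPri   = false;
              chnlCfg.enableInt = true;
              chnlCfg.select    = DMAREQ_DAC0_CH0;
              chnlCfg.cb        = &dacCallback;
              DMA_CfgChannel(DMA_CHANNEL, &chnlCfg);
         
              // Move one 32-bit word per request; the DAC register never moves,
              // while the source pointer walks through the RAM ping pong buffer
              descrCfg.dstInc  = dmaDataIncNone;
              descrCfg.srcInc  = dmaDataInc4;
              descrCfg.size    = dmaDataSize4;
              descrCfg.arbRate = dmaArbitrate1;
              descrCfg.hprot   = 0;
              DMA_CfgDescr(DMA_CHANNEL, true, &descrCfg);    // primary descriptor
              DMA_CfgDescr(DMA_CHANNEL, false, &descrCfg);   // alternate descriptor
        }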

         

        In the next section, we will add some sound blending to the audio playback algorithm so that multiple sound effects can play at the same time.  We will then connect the accelerometer from chapter 10 to trigger some motion-activated sounds.

      • On the Cutting Edge of Smart Grid Innovation: IoT Hero Nexgrid

        deirdrewalsh | 12/6/2016 | 03:01 PM

         

        Nexgrid Banner

        We had the opportunity to speak with Nexgrid Founder and CEO Costa Apostolakis this month about the unique IoT journey of a utility sector innovator. Founded in 2009, Nexgrid is a manufacturer and integrator of self-managing devices that offer utility companies unrestricted monitoring and control of metering and data for electric, water, and gas.

         

        For those who may be unfamiliar with Nexgrid, tell us a little about your business.

        Nexgrid is in the smart grid space. Long story short, we focus on providing full turnkey solutions for utilities to manage their electric, water, and gas metering needs. Our products also include monitoring and control for street lights, as well as in-home tools for thermostats and remotely managed devices for hot water heaters, pool pumps, and other high-consumption devices.

         

        Essentially, we build very large wireless networks that communicate from the utility all the way into homes without using the customer’s Internet connection. All the smart meters and smart devices communicate in real time over one network, using Nexgrid’s technology, which is ultimately a utility-owned network for the smart grid — unlike some of our competitors’ products that use a proprietary wireless technology or require the consumer’s home Wi-Fi modem. We create a high-speed, secure wireless mesh network that connects the home or business all the way back to the utility. And even more important, the entire network uses standards-based communication, so the utility can add new products from third-party vendors if they so choose.

         

        Screenshot 2016-12-06 13.57.39.png

         

        That’s incredible. It sounds like you’re really democratizing access to energy consumption data for impoverished utility customers who maybe only have their mobile phone as their main point of Internet contact. Maybe they can’t afford formal monthly Internet or all the crazy expensive energy. But they can still download a phone app from their provider that enables communication about their energy usage because your products are doing all the talking back to the utility over this incredible network. Wow.

        Absolutely. For people who really need to gauge their monthly energy bills and avoid surprises, especially in prepaid situations, they can easily see where their usage is on any given day.

         

        Tell me what specific Silicon Labs product you’re using to help make all this happen, and why did Nexgrid select it?

        We're using the EM357 ZigBee wireless radios. It’s in every one of our products, truly every last one. Additionally, our smart grid gateways have an additional Wi-Fi radio. So ultimately, all our units can use each other as repeaters if needed. For example, a thermostat can wirelessly mesh to a hot water heater controller, then to an electric meter, then out to a streetlight and to another streetlight and so forth until it gets back to the utility.

         

        At the company’s inception we felt it was critical to use only standardized methods of communication throughout our entire platform, specifically 802.15.4 and 802.11.  At that time the lack of standardized communication was widespread, and we felt this detail was an important missing piece in the smart grid space. What we didn’t want to do was build a big proprietary wireless network that a utility was locked into; we wanted to build a truly open smart grid. That said, the ZigBee platform was very robust and provided the security and high data rates that we needed. Then we found the Ember chipset platform, which looked to us like the most powerful and most proven technology. And that’s how we ultimately found Silicon Labs. Today we have hundreds of thousands of devices deployed and the most powerful smart grid technology on the market, so we clearly made the right choice.

         

        Last question: How do you see the IoT continuing to unfold in the next 5–7 years, given your experience in a really future forward space?

        I think a lot of doors are now being opened. With the falling cost of solar panels and wind turbines, and the introduction of geothermal technology, consumers are sometimes producing more power in their own home or business than they draw from their utility, for the first time ever. And they’re putting that power back onto the grid. The electric grid of the future will no longer receive power only from nuclear, coal, or gas-powered plants; it will come from individual homes and businesses.

         

        There’s a reverse flow happening from the consumer side, where residences and businesses can actually generate clean power, enough to power their home and potentially a few neighbors’ homes, and it is happening now; it completely changes an electric grid which has remained the same for over 100 years.  And today Nexgrid’s technology supports monitoring of the electricity that not only comes into the home or business but also how much goes back onto the grid.

        Screenshot 2016-12-06 13.57.26.png