Official Blog of Silicon Labs

      • Build your own lightsaber (sounds) - Part 3

        lynchtron | 11/334/2016 | 08:55 PM

        14_title.png

        In the last section, we set up a GPIO pin with the Pulse Width Modulation (PWM) mode of a timer to create a simple audio tone and observed that waveform on an oscilloscope.  We then amplified that tone with a crude single-transistor amplifier.  In this section, we will connect the MicroSD card to the Starter Kit in order to play audio stored in a file, and introduce the onboard Digital-to-Analog Converter (DAC) in both single-ended and differential modes.  We will also connect a Class D differential amplifier to the output of our DAC.

         

        Drive Digital Audio Files to Your Speaker

        Now that we have proven that we can make sounds using only digital outputs, we will re-use our MicroSD card example from chapter 13 and send those files through the onboard DAC, which produces a true analog output that needs no filtering before the amplification stage.

         

        Download the lightsaber sound profiles from the repo (originally found on freesound.org thanks to Joe Barlow) and load them from your computer into your MicroSD card.  These sound files are already in the .wav format, and that format will work for our needs in this chapter.  If you had found sound effects in any other format, such as .mp3, you would need to use the Audacity program as discussed in the last chapter to convert those sounds to .wav format or build a .mp3 software decoder into your firmware solution.

         

        Note that the FF (FatFs) library used by the MicroSD library requires all of your filenames to be in the “8.3” naming format, which harkens back to the 1980’s DOS days.  This means that your filenames must be no more than eight characters, followed by a period, and then a three-character extension.  If you use longer names or don’t follow this format, your files won’t open.  So I renamed the files from freesound.org to a few files called swing0.wav and idle1.wav on my SD card and placed those in the source code repo.  If you still have the voice tracks on the card from before, it’s OK to leave them on there so that you have some voice tracks for experiments.  Then, connect the MicroSD card reader to the Starter Kit as follows:

         

        14_microsd_connection.png

         

        Starter Kit                                                    MicroSD Card Breakout

        3V3                                                               VCC

        GND                                                              GND

        PC2 – US2_TX, Location 0                          DI

        PC3 – US2_RX, Location 0                          DO

        PC4 – US2_CLK, Location 0                        SCK

        PC5 – US2_CS, Location 0                          CS

         

        To ensure that the MicroSD card is working, load the sound_effects_player.c file, set a breakpoint in dac_helpers.c in the open_file function, and verify that you see “RIFF” in the wavHeader variable.  This confirms that all of those connections are correct and that at least the first file the code looks for is on the card.  If you get stopped on a DEBUG_BREAK statement, fix the hardware connections and try again.
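        If you want a quick, standalone sanity check of the card before running the full project, a sketch along these lines will do it.  This assumes the MicroSD/SPI driver from chapter 13 is already initialized; the check_wav_file function name is mine, and the real project performs this check inside open_file in dac_helpers.c.  Note that the f_mount signature differs between FatFs versions (older releases bundled with the SDK use f_mount(0, &fatfs)).

        #include <stdbool.h>
        #include <string.h>
        #include "ff.h"
         
        FATFS fatfs;            /* File system object for the MicroSD volume */
        FIL   fil;              /* File object */
         
        /* Returns true if the named file exists and starts with the "RIFF" marker
         * that every .wav file carries in its first four bytes */
        bool check_wav_file(const char *filename)
        {
              char wavHeader[4];
              UINT bytes_read;
         
              if (f_mount(&fatfs, "", 0) != FR_OK)      /* Older FatFs: f_mount(0, &fatfs) */
                    return false;
         
              if (f_open(&fil, filename, FA_READ) != FR_OK)
                    return false;
         
              if (f_read(&fil, wavHeader, 4, &bytes_read) != FR_OK || bytes_read != 4)
              {
                    f_close(&fil);
                    return false;
              }
         
              f_close(&fil);
              return (memcmp(wavHeader, "RIFF", 4) == 0);
        }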

         

        Connect the Onboard Differential DAC

        Some of the EFM32 family, including our Wonder Gecko on the Starter Kit, contain an onboard DAC peripheral and an integrated Op Amp that can be used to play audio.  The DAC is 12 bits, which is a little shy of what is considered high-quality audio, but certainly good enough for our sound effects purposes.  The nice part about the DAC is that it can output an analog voltage, i.e. a voltage that is not limited to either 0 V or 3.3 V; it can drive any of 2^12, or 4096, values in between those two voltages.  This means that an audio filter design is not required on the MCU outputs, and we can send the data from the DAC directly to any ordinary amplifier circuit.

         

        The DAC is differential, which means that it can double the effective volume of your sounds by driving the opposite voltage to each pin of the speaker.  Before, we simply connected one pin of the speaker to ground, but now we will drive the two pins of the speaker in opposite directions between 0 V and 3.3 V, resulting in up to 6.6 V of total signal swing as seen by the speaker.

         

        14_waveforms_diff_vs_single_ended.png

        Note that sometimes in embedded development you need more MCU pins than are available.  If this is the case and you need to save pins, you could use the Single Ended mode of the DAC and then send the single-ended signal through an inverter circuit.  Just be sure to send both speaker inputs through the same circuit so that the signal timing will be identical.  In addition, the way that you interpret and feed data to the DAC registers changes depending on single-ended or differential mode, so you will have to take that into account.  This is beyond the scope of this chapter and left as an exercise for the reader.

        14_diff_vs_single_ended.png 

        The integrated Op Amp in the EFM32 is used automatically by the DAC and does not need to be configured separately, other than to route the DAC output pins to alternate locations if needed.  In differential mode, two of the integrated Op Amps are used by the DAC, leaving one free for other purposes.  The Op Amp is still limited to 20 mA of GPIO pin drive and the 3.3 V power supply voltage; it cannot add any more power to the circuit than those limits allow.
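        As a preview of what the next section covers in more detail, here is a minimal emlib sketch of bringing up the DAC in differential mode.  The reference, clock rate, and channel settings here are illustrative assumptions, not necessarily the exact values this series ends up using:

        #include "em_cmu.h"
        #include "em_dac.h"
         
        void dac_setup_differential(void)
        {
              CMU_ClockEnable(cmuClock_DAC0, true);
         
              DAC_Init_TypeDef        init        = DAC_INIT_DEFAULT;
              DAC_InitChannel_TypeDef initChannel = DAC_INITCHANNEL_DEFAULT;
         
              // Use VDD as the reference so the outputs can swing close to the rails
              init.reference = dacRefVDD;
         
              // Differential mode drives the two DAC outputs with opposite polarity
              init.diff = true;
         
              // Aim for roughly a 1 MHz DAC clock derived from the current HFPER clock
              init.prescale = DAC_PrescaleCalc(1000000, 0);
         
              DAC_Init(DAC0, &init);
         
              // Enable both converter channels; in differential mode the pair is
              // driven from the channel 0 data register as a signed (2's complement) value
              initChannel.enable = true;
              DAC_InitChannel(DAC0, &initChannel, 0);
              DAC_InitChannel(DAC0, &initChannel, 1);
         
              // Mid-scale (0) gives 0 V of differential output
              DAC0->CH0DATA = 0;
        }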

         

        Connect the TI TPA2005D1 Class D Amplifier

        The volume from the integrated differential DAC is fine for driving other onboard audio devices, but it may not be loud enough for the volume you want from our little speaker.  Since the DAC drives an analog voltage, it is easy to amplify with many types of available amplifiers.  Keep in mind that every amplifier is limited by the supply voltage and current available on your board.  In our case, we can use either 3.3 V or 5 V on the Starter Kit, and up to 500 mA of current from the USB port on the connected computer.  If we draw more than 500 mA through our amplifier, the connected computer will likely shut down the USB port to avoid damage.

         

        Each GPIO on the Wonder Gecko is capable of driving 20 mA at 3.3 V on our Starter Kit.  By driving differentially from two pins, this looks like 20 mA across 6.6 V to our speaker, or 132 mW of power.  If we want to drive more than that, we can turn to a differential amplifier, such as the TI TPA2005D1.

        14_tpa_breakout.png 

        The TPA2005D1 is listed as a 1.4 W differential Class D mono amplifier.  It can operate at up to 6 V, which means that it is able to drive up to 12 V of differential voltage to our speaker, while delivering up to 242 mA of current per output pin.

        14_tpa_diagram.png 

        The gain of the audio amplifier is set by two input resistors (RI) that are in series with the signal coming from our DAC.  The gain is the amplification factor that you wish to achieve with your amplifier.  The recommended setting is a gain of two, which should double your volume, and it is set by choosing 150 kΩ resistors for those two series resistors RI per the following equation:

        14_gain_equation.png 

        If you set these resistors too low, for example to 0 Ω, the amp will protect itself by going into shutdown mode.  Start with 150 kΩ and then go from there.  I have successfully run it at a gain of six by using 50 kΩ resistors.
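        For reference, the relationship shown in the gain equation image above, per the TPA2005D1 datasheet, can be written as

        \[ \text{Gain} = 2 \times \frac{150\,\mathrm{k\Omega}}{R_I} \]

        which works out to a gain of 2 for RI = 150 kΩ, 4 for RI = 75 kΩ, and 6 for RI = 50 kΩ.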

         

        The gain resistors on the TPA2005D1 breakout are set to 150 kΩ with a pair of surface-mount 0603-size resistors just to the left of the input pins.  If you would like to change the gain, you must remove those resistors or calculate the parallel resistance formed by soldering new through-hole resistors over the top of the surface-mounted 0603 resistors.  For example, by adding two new 150 kΩ resistors on top of the surface-mounted 0603 resistors, you will drop the resistance to 75 kΩ for a gain of four.  Just be sure to use 1% resistors and apply the same change to both inputs, because the differential amplifier works best when the two sides are evenly matched.

         

        The SHUTDOWN pin on the TPA2005D1 is important for designs that need to save power.  When the amplifier is enabled (SHUTDOWN pin high), the amplifier consumes up to 4.5 mA of supply current.  With the amplifier disabled (SHUTDOWN pin low), it consumes only about 50 µA.
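        In the wiring table below, SHUTDOWN is simply tied to 3V3 so that the amplifier is always on.  If you would rather gate it from firmware to save power, route it to a spare GPIO instead.  The pin chosen below (PD3) is arbitrary and just for illustration:

        #include <stdbool.h>
        #include "em_cmu.h"
        #include "em_gpio.h"
         
        // Hypothetical pin choice; use any free GPIO on your Starter Kit
        #define AMP_SHUTDOWN_PORT     gpioPortD
        #define AMP_SHUTDOWN_PIN      3
         
        void amp_enable(bool on)
        {
              CMU_ClockEnable(cmuClock_GPIO, true);
         
              // SHUTDOWN is active low: drive it high to run, low for the ~50 uA shutdown state
              GPIO_PinModeSet(AMP_SHUTDOWN_PORT, AMP_SHUTDOWN_PIN, gpioModePushPull, on ? 1 : 0);
        }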

         

        Connect the TPA2005D1 as follows:

        Starter Kit                                                    TPA2005D1 Breakout                                    Speaker

        5V                                                                 PWR +

        GND                                                              PWR -

        3V3                                                               S (the shutdown pin)

        PB11                                                             IN +

        PB12                                                             IN -

                                                                              OUT +                                                           Speaker +

                                                                              OUT -                                                            Speaker –

         

        Note that the polarities of PB11/PB12 and Speaker +/– are not important.  All that matters is that you have two wires from PB11 and PB12 going to IN + and IN – on the TPA2005D1, and two wires from OUT + and OUT – going to the two (likely unlabeled) pins on your plastic speaker.

         

        In the next section, we will learn how to configure the differential DAC on the EFM32 to play higher quality audio from a MicroSD card.

         

        PREVIOUS 

      • Touch and Go: 3 Reasons Our New EFM8 MCUs Will Make Your Next Car Your Favorite

        Lance Looper | 11/326/2016 | 09:30 AM

        This week we introduced two families of automotive-grade EFM8 MCUs: the ultra-low-power EFM8SB1 Sleepy Bee and the EFM8BB1/BB2 Busy Bee. These new automotive-grade 8-bit MCUs are designed specifically for in-cabin touch interface and motor control applications, and here are three reasons they’re going to change the way you look at your car’s mirrors, headlights, and seats.

         

        Touch and Go.jpg

         

        Performance

        The AEC-Q100-qualified EFM8SB1 family provides advanced on-chip capacitive touch technology, supplanting physical buttons. And like all EFM8 MCUs, these devices deliver best-in-class 8-bit performance through advanced features and capabilities, including a high-speed pipelined 8051 core, ultra-low power consumption, precision analog and enhanced communication peripherals, on-chip oscillators, small-footprint packages, and a patented crossbar architecture that enables flexible digital and analog peripheral multiplexing to simplify PCB design and I/O pin routing.

         

        The automotive-grade EFM8SB1 devices support -40 to +85 °C ambient temperatures, core speeds up to 25 MHz and flash sizes up to 8 kB. The MCUs integrate a 12-bit analog-to-digital converter (ADC), high-performance timers, a temperature sensor, and enhanced SPI, I2C and UART serial ports. An on-chip high-resolution capacitive-to-digital converter (CDC) offers an ultra-low < 1 µA wake-on-touch capability and 12 robust capacitive touch channels, eliminating the need for on/off switches in many applications.

         

        automotive-banner-touch-control-panel.png

         

        Supporting an extended temperature range of -40 to +125 °C, the EFM8BB1/BB2 devices are suitable for applications that must meet tough automotive qualifications and operate over a wide temperature range while delivering high performance at all temperatures. The EFM8BB1 devices offer optimal price/performance for cost-sensitive designs, while the BB2 products deliver enhanced analog and digital peripheral performance. These MCUs are a good choice for analog-intensive automotive body control applications such as seat adjustment, fan control, window lifters and fuel tank sensors.

         

        Value

        EFM8BB1/BB2 Busy Bee MCUs provide the right balance of no-compromise performance, energy efficiency and value for cost-sensitive applications. With core speeds scaling up to 50 MHz and 2-64 kB flash sizes, the MCUs offer an array of high-performance peripherals including a high-resolution 12-bit ADC, high-speed 12-bit DACs, low-power comparators, voltage reference, enhanced-throughput communication peripherals and internal oscillators in packages as small as 3 mm x 3 mm. This exceptional single-chip integration eliminates the need for discrete analog components, reducing system cost and board space.

         

        Simplicity

        Designed to handle a wide range of in-cabin touch interface and body electronics motor control applications, the EFM8SB1 family provides advanced on-chip capacitive touch technology enabling easy replacement of physical buttons with touch control. The EFM8BB1/BB2 Busy Bee family features high-performance analog and digital peripherals, making these devices a versatile choice for controlling motorized rear view mirrors, headlights and seats.

         

        automotive-touch-rear-view-mirror.png 

         

        Silicon Labs supports touch-sense interface design with its Capacitive Sense Library available within the Simplicity Studio™ development tool suite, offering all of the features and algorithms required to add capacitive sensing interfaces to automotive applications. Simplicity Studio provides designers with production-ready firmware, from scanning buttons to filtering noise. By using the Capacitive Sense Profiler to visualize real-time data and the noise levels of cap-sense buttons, developers can easily customize touch and no-touch thresholds and noise filtering settings, greatly simplifying the addition of capacitive touch to in-vehicle user interfaces.

         

         


      • Build your own lightsaber (sounds)! - Part 2

        lynchtron | 11/323/2016 | 02:05 PM

        14_title.png

        In the last section, we learned more about digital audio, Digital-to-Analog Converters (DACs), and different types of amplification.  In this section, we will take a step back to the fundamentals of digital audio and use the Pulse Width Modulation (PWM) mode of a timer peripheral, output to a GPIO pin, to create a tone. 

         

        Create an Audio Tone with a PWM Timer

        For the first experiment with digitally-produced sound, we will create simple tones only, using nothing but a digital PWM output.  We will construct a program that can output any frequency under 24 kHz.  The program is pretty short, so I will provide the complete source here and then discuss it below the source code.

         

        #include "em_device.h"
        #include "em_chip.h"
        #include "em_cmu.h"
        #include "em_timer.h"
        #include "em_gpio.h"
         
        #define PWM_TIMER_CHANNEL           2
        #define PWM_TIMER                   TIMER3
        #define SAMPLE_TIMER                TIMER1
        #define SAMPLE_TIMER_INT            TIMER1_IRQn
        #define TIMER_PRESCALE              timerPrescale2
        #define TONE_FREQ                   3500
         
        // Storage for the max frequency available to this program
        uint32_t max_frequency;
         
        uint32_t top_value(uint32_t frequency)
        {
              if (frequency == 0) return 0;
         
              return (100*max_frequency/frequency);
        }
         
        // This is the timer that controls the PWM rate and PWM value per sample
        // This timer is routed to a GPIO and needs no interrupt to set the PWM value
        // when the compare register is reached in the timer count
        void setup_pwm_timer3()
        {
              // Create the timer count control object initializer
              TIMER_InitCC_TypeDef timerCCInit = TIMER_INITCC_DEFAULT;
              timerCCInit.mode = timerCCModePWM;
              timerCCInit.cmoa = timerOutputActionClear;
              timerCCInit.cofoa = timerOutputActionSet;
         
              // Configure Compare Channel 2
              TIMER_InitCC(PWM_TIMER, PWM_TIMER_CHANNEL, &timerCCInit);
         
              // Route CC2 to location 1 (PE2) and enable the pin for CC2
              PWM_TIMER->ROUTE |= (TIMER_ROUTE_CC2PEN | TIMER_ROUTE_LOCATION_LOC1);
         
              // Create a timerInit object, and set the freq to maximum
              TIMER_Init_TypeDef timerInit = TIMER_INIT_DEFAULT;
              timerInit.prescale = TIMER_PRESCALE;
         
              // Set Top Value
              TIMER_TopSet(PWM_TIMER, top_value(TONE_FREQ) / 10);
         
              TIMER_Init(PWM_TIMER, &timerInit);
        }
         
        // This is the timer that is used to know when to fetch a sample to process
        // This timer is only used to generate interrupts
        void setup_sample_rate_timer1()
        {
              // Create a timerInit object, and set the freq to maximum
              TIMER_Init_TypeDef timerInit = TIMER_INIT_DEFAULT;
              timerInit.prescale = TIMER_PRESCALE;
         
              // Set Top Value
              TIMER_TopSet(SAMPLE_TIMER, top_value(TONE_FREQ) );
         
              TIMER_Init(SAMPLE_TIMER, &timerInit);
         
              TIMER_IntEnable(SAMPLE_TIMER, TIMER_IF_OF);
         
              // Enable the TIMER1 (sample timer) interrupt vector in the NVIC
              NVIC_EnableIRQ(SAMPLE_TIMER_INT);
        }
         
         
        uint32_t sine_wave_generator()
        {
              const uint8_t lookup_table[10] = {5,8,10,10,8,5,2,0,0,2};
              static uint8_t count = 0;
         
              // Lookup the value
              uint32_t result = lookup_table[count];
         
              // Adjust count for next time and correct for overflow
              count++; if (count >= 10) count = 0;
         
              return result * top_value(TONE_FREQ) / 100;
        }
         
        int main(void)
        {
              CHIP_Init();
         
              CMU_ClockEnable(cmuClock_GPIO, true);
              CMU_ClockEnable(cmuClock_TIMER1, true);
              CMU_ClockEnable(cmuClock_TIMER3, true);
         
              // Need to boost the clock to get above 7kHz
              CMU_ClockSelectSet(cmuClock_HF, cmuSelect_HFXO);
         
              // Calculate the max frequency supported by this program
              uint32_t timer_freq = CMU_ClockFreqGet(cmuClock_TIMER3);
              max_frequency = (timer_freq / (1 << TIMER_PRESCALE)) / 1000;
         
              // Set up our timers that do almost all the work
              setup_sample_rate_timer1();
              setup_pwm_timer3();
         
              // Enable GPIO output for Timer3, Compare Channel 2 on PE2
              GPIO_PinModeSet(gpioPortE, 2, gpioModePushPull, 0);
         
              // Show the sample rate on PE1 for debug
              GPIO_PinModeSet(gpioPortE, 1, gpioModePushPull, 0);
         
              while (1)
              {
              }
        }
         
        void TIMER1_IRQHandler(void)
        {
              // Clear the interrupt
              TIMER_IntClear(SAMPLE_TIMER, TIMER_IF_OF);
         
              // This is for debug viewing on a scope
              GPIO_PinOutToggle(gpioPortE, 1);
         
              // Get the next sine value
              uint32_t pwm_duty = sine_wave_generator();
         
              // Set the new duty cycle for this sample based on the generator
              TIMER_CompareBufSet(PWM_TIMER, PWM_TIMER_CHANNEL, pwm_duty);
        }

         

        First, we initialize clocks and GPIOs in the main part of the program, then call upon two different timers, TIMER1 defined as SAMPLE_TIMER for the sample rate and TIMER3 defined as PWM_TIMER for the PWM function. 

         

        The sample timer is configured to trigger an interrupt every time that a new sample is required.  This sets the overall cadence of the tone generator.  When the timer expires, the TIMER1 interrupt handler springs into action, fetching the next value of a sine wave, then placing this value into the PWM compare registers through the TIMER_CompareBufSet function.

         

        Audible sound spans roughly 50 Hz to 20,000 Hz, so I needed a clock that would allow my sample rate timer to run up to 200,000 times a second and my PWM timer at 2,000,000 times a second.  At first, I tried to run everything from the HFRCO clock source, which only allowed my top frequency to reach 7 kHz.  So I switched to the HFXO as the clock source, which gives a 48 MHz HFPER clock, and then used a timer prescaler of 2 for a 24 MHz timer clock.  I used a spreadsheet to make all of this easier to figure out.  In order to get a PWM timer running at 10x the sample rate timer, I set my sample rate TOP register to 100 and the PWM timer TOP register to 10.  This results in a 240 kHz sample rate, which makes the maximum tone frequency that this code can generate 24 kHz; that value is stored in the max_frequency global variable.  This is a higher sample rate than what is required for 20 kHz audio.  It is calculated in the code by finding the timer frequency, dividing by the timer prescaler, and then dividing that result by 1000, which accounts for the two TOP values, i.e. 100 x 10 = 1000.

         

        Here is another way of looking at the same algorithm: since both timers run at 24 MHz, setting SAMPLE_TIMER->TOP to overflow every 100 ticks creates a maximum sample rate of 24 MHz / 100 = 240 kHz.  We want the PWM_TIMER to reset its cycle at 10x that sample rate, so its TOP value is set to 1/10th of the sample rate timer’s, making it overflow 2.4 million times a second.  Because the sine lookup table holds ten samples per period, the maximum tone that results is 240 kHz / 10 = 24 kHz.
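        As a compact summary of the clock math above:

        \[ \frac{48\,\mathrm{MHz\ (HFXO)}}{2} = 24\,\mathrm{MHz}, \qquad \frac{24\,\mathrm{MHz}}{\mathrm{TOP}=100} = 240\,\mathrm{kHz\ sample\ rate}, \qquad \frac{240\,\mathrm{kHz}}{10\ \mathrm{samples/period}} = 24\,\mathrm{kHz\ max\ tone} \]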

         

        The sine_wave_generator function simply returns a value between 0 and 10 each time it is called, incrementing the static count variable, and then scales the return value so that it fits within the PWM_TIMER->TOP register range.  The values in the lookup_table array were generated with an online tool (linked here) that takes the number of points and the range of values and constructs one period of a sine wave.  You don’t need a lot of data points when the sound data is a simple tone.
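        If you would rather not rely on an online tool, a few lines of C can build the same ten-entry table at startup.  This is just a sketch; the build_sine_table name is mine, and it reproduces the hard-coded lookup_table values above:

        #include <math.h>
        #include <stdint.h>
         
        #ifndef M_PI
        #define M_PI 3.14159265358979323846
        #endif
         
        // Fill a 10-entry table with one period of a sine wave scaled to 0..10,
        // matching the values used by sine_wave_generator()
        void build_sine_table(uint8_t table[10])
        {
              for (int i = 0; i < 10; i++)
              {
                    // 5 + 5*sin() maps the wave into the 0..10 range; +0.5 rounds to nearest
                    table[i] = (uint8_t)(5.0 + 5.0 * sin(2.0 * M_PI * i / 10.0) + 0.5);
              }
        }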


        In order to test my implementation, I constructed a test circuit like so:

        14_scope_setup.png

         

        The low-pass RC filter in this circuit smoothed out the digital PWM pulses and allowed the frequency to be studied.  The cutoff frequency of the RC filter is

        eq1.png

        where I have chosen R = 10 kΩ.  Rearranging, and setting Fcutoff to 7 kHz, we can find the value of C:

        eq2.png

        where C works out to around 2 nF, or 0.002 µF.  I placed the resistor and capacitor on the output of my PE2 pin and recorded the following waveforms.
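        For completeness, the two equations shown in the images above are the standard first-order RC filter relations:

        \[ F_{cutoff} = \frac{1}{2\pi R C} \quad\Rightarrow\quad C = \frac{1}{2\pi R F_{cutoff}} = \frac{1}{2\pi \cdot 10\,\mathrm{k\Omega} \cdot 7\,\mathrm{kHz}} \approx 2.3\,\mathrm{nF} \]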

         

        14_pwm_vs_filtered.png

         

        As you can see, the purely digital PWM waveform is translated into a nice sinusoidal waveform after the RC filter.  The sample rate indicator on channel 2 toggles whenever a new PWM value is loaded into the PWM compare buffer register.

         

        Therefore, you don't always need a DAC to create audio.  The key to getting the tone we wanted, in this case, is external analog filtering.

         

        In the next section, we will connect the MicroSD card to the starter kit, introduce the internal EFM32 DAC peripheral and cover single-ended versus differential mode output. 

         

        PREVIOUS | NEXT

      • Simplifying RAN Timing Using DSPLL Technology

        Lance Looper | 11/320/2016 | 11:12 AM

        We recently published this e-book describing some of the requirements, architectures, and system requirements that come with deploying heterogeneous networks by telecom companies to meet capacity and coverage requirements.  

         

        RAN Clocking_Cover.png

         

        RAN Clocking_2.png

          

        RAN Clocking_3.png

         

        RAN Clocking_4.png

         

        To read the rest of the eBook, click here

      • Webinar: Miniaturizing IoT Designs

        Nari | 11/315/2016 | 10:41 AM

        exploded990x320.jpg

         

        Webinar: Miniaturizing IoT Designs
        Date: Wednesday, January 11, 2017
        Duration: 1 hour


        As we wirelessly connect more and more devices to the Internet, electronics engineers face several challenges, including packaging a radio transmitter into existing device real estate and the demand to build increasingly smaller devices. In this webinar, we’ll explore some of the obstacles that come with the size expectations of IoT designs, from concerns around antenna integration to new packaging options that can help solve issues like detuning and size limitations.

         

        Join our hour-long webinar on January 11, 2017 at 10:00 AM and get your questions answered during our Live Q&A session at the end.

         

        original.png

      • Build your own lightsaber (sounds)! - Part 1

        lynchtron | 11/315/2016 | 12:05 AM

        14_title.png

        In the last chapter, we learned all about how to create sound using an external I2S chip and digital sound files.  In this chapter, we will get a little bit more in depth with sound and generate simple tones with a GPIO pin.  Then, we will use the integrated 12-bit DAC within the Wonder Gecko to produce sound with a more analog nature and run that through an external audio amplifier.  We will blend together sounds from multiple lightsaber sound effects and trigger those sounds with the accelerometer from chapter 10, resulting in our very own lightsaber sound effects generator.

         

        Sound effects for consumer gadgets can be more forgiving than high-end musical audio.  We can get by on just a bare GPIO pin if we are careful about the type of sounds that we try to reproduce.  For some gadgets, perhaps a few beeps, clicks, and scratchy noises will do the trick.  As we move to voice tracks that have a wider range of audio frequencies, more fidelity is needed.  The 12-bit DAC on the Wonder Gecko (also available in most EFM32 models, check the datasheet) can be used to generate sound effects, voices, and lower-quality music.

         

        The EFM32 can drive up to 20 mA on a single GPIO pin.  This is probably not going to do the trick for the volume that we need on a small speaker.  If we tried to drive a speaker directly from a GPIO pin, it could damage the GPIO or just result in low-volume audio.  In order to boost the volume, we need to amplify the signal in voltage swing, current capacity, or both, to produce the desired power and sound levels for the situation.

         

        Since we have already learned how to use an accelerometer over I2C in chapter 10, we can dust that off now and use it to detect motion that triggers lightsaber sounds stored on the MicroSD card that we used in the last chapter.  We will drive all of that out of the DAC on the EFM32 and then on to a TI differential mono amplifier.  But before we go there, we will take a step back and learn how to make the most rudimentary sound with nothing but a GPIO pin and a speaker.

         

        The source code for all examples in this book can be found on github, here.

         

        Materials Required for this Chapter:

        Sound and Amplifier Theory

        Audio theory will melt your brain.  When you go looking for information on how to build audio circuits for your electronic gadgets, you will quickly run into discussions of physics, calculus, and all sorts of horrendous-looking mathematical expressions.  But all I want to do is play some sound effects!  I hear you.  I ran into this problem when I tried to build my first audio circuit.  I still have a lot to learn, but I will share some simple things that I have learned that should give you a good foundation to learn more. 

         

        In the old days, when we played music on a record player or a cassette tape in a tape player, the audio information was stored and retrieved in a completely analog format.  The resulting audio signal consisted of sine waves and was fed into an analog amplifier, then routed through a speaker and turned into sound that we can hear.  


        Today, most of the music that we enjoy is stored or transmitted as a digital representation of an analog signal, so digital is the starting point for your gadget sound effects.  In order to turn that digital information into an analog signal suitable for amplification, a Digital-to-Analog Converter (DAC) is used, such as the I2S DAC that we learned about in the last chapter.  A DAC is capable of outputting an analog voltage, that is, a voltage that is not a binary 0 or 1 but some small division of voltage in between.  Any discussion of DACs then requires an explanation of sample rates and bit counts, and the minimum of each needed to accurately reproduce the audio for our ears.

         14_analog_vs_digital.png

        On one end of the spectrum, we have the old-fashioned, warm tones of the analog audio format, and on the other end, we have the precisely calculated, perfect-in-every-way digital recordings of the modern era.  Yet there is middle ground!  By using nothing more than a GPIO pin and purely digital signaling, we can produce analog sound.  The reason this works is filtering.  The digital pulses that we use to generate audio from a GPIO are fast-switching square waves, and the speaker’s coil is an inductor, which acts as a low-pass filter.  It simply won’t let the high frequencies of the digital signal through to the speaker cone, so what is left on the cone is an average voltage over time.  If we can alter that average voltage many times a second, we have a cheap DAC.
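        Put as a formula, a PWM signal switching between 0 V and VDD with duty cycle D leaves an average of

        \[ V_{avg} = D \cdot V_{DD} \]

        behind the low-pass filter, so a 50% duty cycle at 3.3 V averages out to roughly 1.65 V.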

         

        Since we have to control the GPIO many times per second, we will use the EFM32 timer peripheral connected to a GPIO in Pulse Width Modulation (PWM) mode to make the quick changes necessary from instant to instant in order to reproduce a sinusoidal waveform.  The rule of thumb is to run a sample rate that is 10 times the frequency of the sound we want to create, and a PWM switching frequency that is 10 times the sample rate.  So if we want to create a 4 kHz tone, for example, we should use a 40 kHz sample rate and a 400 kHz PWM switching frequency.  We will vary the PWM duty cycle every audio sample, and each sample will then run for 10 PWM clocks before the duty cycle is changed for the next sample.
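        That rule of thumb is easy to capture in code.  The names below are purely illustrative (the chapter’s actual code works in timer TOP values instead, as we will see in the next section):

        #include <stdint.h>
         
        typedef struct
        {
              uint32_t sample_rate_hz;      // 10x the tone frequency
              uint32_t pwm_rate_hz;         // 10x the sample rate
        } pwm_audio_rates;
         
        // Apply the 10x rule of thumb: a 4 kHz tone needs a 40 kHz sample rate
        // and a 400 kHz PWM switching frequency
        pwm_audio_rates rates_for_tone(uint32_t tone_hz)
        {
              pwm_audio_rates rates;
              rates.sample_rate_hz = tone_hz * 10;
              rates.pwm_rate_hz    = rates.sample_rate_hz * 10;
              return rates;
        }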

         

        Once we have created an analog signal from the digital audio data, we need to amplify it if we want to be able to hear it loud and clear.  This is another area of electronics where your research can cause your head to explode.  There is an enormous amount of information and opinions on the Internet, and it can get you sidetracked from finding your way to a solution.  There are many ways to amplify a signal.  The type of amplification that we will be learning about in this chapter is known as Class D, but also goes by the name of PWM or simply digital amplification.

         

        Class D amplifiers use the same PWM principle that we have just learned about, but are connected to more powerful current and voltage sources than the source audio device.  They are up to 95% efficient because when there is no audio data, the PWM duty cycle is 50% and the output stays quiescent on both inputs to the speaker.  If there is no change in output voltage on either pin of the speaker, then no current is consumed by the speaker.  Class D amps are more efficient than Class A, B, or AB designs, which are all more analog in nature and can waste power when there is no audio data to amplify.  Later in the chapter, we will use TI’s TPA2005D1 Class D differential mono amplifier.

         

        In the next section, we will create a simple audio tone on a GPIO pin and examine that waveform on an oscilloscope.

         

         PREVIOUS | NEXT

      • Make it Small, Make it Fast, Make it First: Introducing the World’s Smallest Bluetooth SiP Module

        Lance Looper | 11/313/2016 | 09:00 AM

        We know designers face a host of stressful design challenges, day in and day out, as well as the never-ending rush to get their products to market ahead of the pack. That’s why we’re always genuinely excited when we can release truly cutting-edge, industry-first innovations that can help our customers — innovations such as our new BGM12x system-in-package (SiP) module.

         

        As the industry’s smallest Bluetooth low-energy SiP module with an integrated antenna, the BGM12x offers designers a way to finally and effectively miniaturize many IoT applications in a manner that’s both cost-effective and doesn’t compromise on performance. We think the BGM12x is poised to help lead innovations in many application types, including sports and fitness wearables, smartwatches, personal medical devices, wireless sensor nodes, and other space-constrained connected devices.

         

        SiP_Ruler.png

         

        One of the biggest benefits of the BGM12x is simply relieving pressure on designers by shortening the traditional development cycle, thanks to the integrated antenna, radio, MCU, and software stack. With the BGM12x, engineering departments don’t have to spend any time on antenna design and certification testing. And quite simply, there is nothing else on the market with this size and performance combination that includes a high-performance antenna and an always up-to-date Bluetooth stack together. On top of this, designers get to utilize the favored ARM Cortex-M4 processor.

         

        Additionally, there is just the overall benefit that using a wireless module reduces design time and long-term costs by lowering risks. With a proven, ready-to-go system, designers can skip complex RF design and move on to adding value to their core applications – what meaningful IoT development is all about anyway. And the exceptionally small size makes it very easy to use in two-layer PCB designs where board clearance continues to remain a critical concern.

         

        We’re also happy that like all our Blue Gecko modules, designers can begin with a module-based design but easily migrate to a Blue Gecko SoC with minimal system redesign and full software reuse because of very similar technical features and identical APIs. Again, it’s about giving designers huge time savings, flexibility, and reduced headaches in the long term and helping work seem more a labor of love vs. going down a rabbit hole of chaotic alternatives.

         

         Screen Shot 2016-11-08 at 7.50.37 AM.png

         

        And speaking of flexibility, we’re also excited that BGM12x is supported by our efficient wireless software development kit (SDK) that gives designers the ability to use either a host or fully standalone operation with BGScript. Our Bluegiga BGScript tool lets designers create Bluetooth applications quickly without using an external MCU to run the application logic, ultimately reducing system cost, board cost, and time to market as well.

         

        BGM12x Features Overview:

        • Best-in-class SiP module size: 6.5 mm x 6.5 mm x 1.5 mm
        • Integrated chip antenna with exceptional RF performance (70% antenna efficiency) and an RF pad option for connecting to an external antenna
        • Output power: +3 dBm to +8 dBm supporting ranges from 10 meters to 200 meters
        • Based on Silicon Labs’ Blue Gecko SoC combining a 2.4 GHz transceiver with a 40 MHz ARM Cortex-M4 processor core and 256 kB flash and 32 kB RAM
        • Energy-efficient Bluetooth solution consuming 9.0 mA (peak receive mode) and 8.2 mA @ 0 dBm (peak transmit mode)
        • Hardware cryptography accelerator supporting advanced AES, ECC and SHA algorithms
        • Industry-proven Silicon Labs Bluetooth 4.2 stack with frequent feature enhancements
        • Rapid time to market with global RF certifications
        • Easy-to-use development tools: Simplicity Studio, Energy Profiler, BGScript
        • Worldwide application engineering support

         

        How Do I Get My Hands on One?
        We always encourage people to grab a Starter Kit or sample module to explore our new technologies hands-on, and the BGM12x Blue Gecko is no exception. Learn how to get both here to check out the technology first-hand.

         

        SIP Cover.png

        Miniaturizing IoT Designs

        To better understand and consider the major impact the BGM12x will have on wireless application development, we highly encourage you to check out one of our current whitepapers, Miniaturizing IoT Designs, penned by our own Tom Nordman and Pasi Rahikkala.

         

        This whitepaper gives a great overview of valuable information for designers to understand as they start to really push the envelope of IoT development, including antenna integration and how system-in-package modules can assist. We’re excited to see the innovations our customers discover and implement in their own product portfolios using the BGM12x module as we all help make the IoT realm a richer, even more dynamic environment.

      • Designing for Signal and Power Supply Isolation

        Lance Looper | 11/309/2016 | 09:56 AM

        Signal Isolation Basics

        Isolating signals is necessary to provide the following design-critical functions:

         

        • Protection from high voltages: Isolation provides a dielectric barrier that acts as an insulator against high voltages in systems where higher power levels are required.
        • Level translation: Enabling noise-free data transfer between circuits that operate at different voltage rails is a common challenge for electronics designers. Although there are many non-isolated level shifters available to circumvent this problem, using an isolator provides several solid advantages. Isolators are the most noise-free and robust solution, and they prevent parasitic paths that may inadvertently switch devices on or off.
        • Noise elimination: Isolated products restrict the ground current (return path) of an electrical circuit to only one side of the barrier, enabling a noise-free environment for sensitive measurements on the other side.

        System Considerations

        In order to ensure that true isolation has been achieved, it is important for the circuit designer to eliminate all possible coupling paths from one circuit (Circuit A in Figure 1) to the other that needs to be isolated (Circuit B in the illustration below). Hence, when isolating signals, it is equally important to isolate the power supplies. For a circuit designer, the challenge of isolating signals is really two-fold: to provide safe, reliable and accurate signal isolation as well as power isolation. There are multiple solutions available for signal isolation to suit the needs of designers – based on data rate capabilities, jitter restrictions, noise immunity concerns, high voltage capability, compliance with the various isolation component safety standards etc. However, for many applications where only a watt or so of isolated power is required, there have not been readily available or easily implementable solutions for power isolation.

         

        Iso1.png

         

        Application Example

        Factory automation systems depend on efficient and reliable real time distributed networks to monitor and control complex manufacturing processes. A typical and simplified hierarchical structure used in these systems is shown in Figure 6. Human machine interface in the control room at the top is linked to an intermediary controller level and finally down to the physical layer where the sensors and actuators are situated as part of motor drive units or machines controlled by PLC’s (programmable logic controllers).

         

        PLC Diagram.png

         

        The physical layer connects the sensors and actuators in a process module and across the factory floor or plant. As shown above, a CAN-based bus communicates with the various motor control units while an RS-485-based bus (PROFIBUS) communicates with the various machines on the factory floor. These physical layers are used commonly in industrial automation because they are very robust even in a noisy environment and support the long distance, multi-point communication needed on a factory floor that may cover hundreds of square meters.

         

        These buses have multiple nodes that connect to the bus through a CAN or an RS-485 transceiver. Isolating these interfaces is critical to protect against high voltages, high electromagnetic (EM) noise and large ground potential differences within the network.

         

        The illustration below shows a detailed diagram of an RS-485 transceiver node that has been isolated from the processor. The isolated power solution is referred to as the isolated dc-dc converter block. Very few easy-to-deploy, high-performance isolated power solutions are available to developers. Designers frequently have to design their own solutions from scratch to provide isolated power to the secondary side of the isolator and to the RS-485 transceiver on the isolated side.

         

         

        Isolating an RS-485.png

         

         

        The transceiver in the illustration below is a half-duplex device with receive and transmit lines connected together. It communicates with the RS-485 bus through differential I/Os labelled A and B in Figure 3. The transceiver provides the interface to the processor through its single-ended digital I/Os labelled Rx (receiver) and Tx (transmitter) and an EN (enable pin) signal that controls the transmitter.

         

        The transceiver typically has two to four digital signals that require fast and accurate digital isolation and needs 0.5W to 1W of power, which has to be supplied by a dedicated isolated source with the following characteristics:

         

        • Compact solution: Depending on the particular application, space may be at a premium, and, in general, a smaller BOM is always better for manufacturability, reliability and cost.
        • High efficiency: It is important to have a compact solution with high efficiency so that heat is kept to a minimum and green energy standards can be maintained.
        • Low EMI: It is critical to keep the overall system noise to a minimum for sensitive measurements. To fine tune the emissions spectrum to a specific use case, it is preferable to have a programmable frequency option which lets users choose the switching frequency of the DC/DC converter.
        • Safety features: In industrial environments where safety is a top concern, it is recommended that the device have a soft start option to avoid inrush currents, current limiting capability and thermal detection and auto shutdown in case of excessive heat conditions.
        • Multiple isolation channels: Lastly, the solution needs to support multiple isolation channels with a minimum of 2.5kVrms rated isolation capability for meeting safety standards. The isolator needs to have excellent signal integrity even in a high noise environment.

         

        Solutions for industrial isolation

        There are only a few products on the market that strike the right balance between compactness and the ability to deliver power and between minimizing emissions while maximizing efficiency.

         

        Discrete solutions that use FETs, controllers, and single-channel isolators (or optos) for feedback, as well as other supporting BOM for power isolation, are very common. Such solutions have to be designed from scratch, take specialized experience and skill, and could take multiple iterations to get right.

         

        Some solutions integrate digital isolation and the power transformer in a single IC package. These air-core transformers have poor coupling coefficients and need to be driven at much higher frequencies to deliver equivalent power. This results in a much higher emissions profile for EMI, which is a strong deterrent for many designers.

         

        In addition, the power converter efficiency of such products is usually low, from 10-35%. In applications where space is at a premium, efficiency is a “don’t-care” and high emissions are not a problem, these might work. But more often than not, such solutions are not compelling.

         

        There are other solutions that integrate the signal isolators and the dc-dc converter and are designed to work with a discrete transformer. This approach is optimized for the highest efficiency and integration. These are total solutions that are compact and can deliver up to 2 W of power at about 78 percent efficiency.

         

        For example, Silicon Labs’ Si88xx isolation products combine quad digital isolators with a modified fly-back topology dc-dc converter with built-in secondary sensing feedback control. The Si88xx devices have been designed for very low emissions by employing dithering techniques.

         

        Additional features include a soft start capability to avoid inrush currents on startup, cycle-by-cycle current limiting, thermal detection and shutdown for over-temperature events, and cycle skipping to reduce switching losses and thus boost efficiency at lighter loads.

         

        Options for the Si88xx isolators are available for various voltage levels from 5 V to 24 V and for various combinations of digital isolation channels and their directionality. This solution leverages Silicon Labs’ proprietary signal isolation technology, with its signature low EMI profile, to provide high integration, high efficiency and very low EMI.

         

        Figure 4 provides a simplified block diagram of an Si88xx isolator. In addition to the four high-speed digital isolation channels, the Si88xx device integrates a dc-dc controller and internal FET switches that modulate power to the external transformer. The output side incorporates feedback through an external resistor divider to provide excellent line and load regulation.

         

        The dc-dc converter uses dithering techniques to minimize EMI peaks and a zero voltage switching (ZVS) scheme to minimize power loss when modulating power to the transformer. The device uses cycle skipping at light loads to minimize switching losses and boost efficiency. Multiple safety features include cycle-by-cycle current limiting, soft start to avoid inrush currents and thermal shutdown. The device also incorporates several user-programmable features such as soft start time control, a shutdown option for the dc-dc converter and switching frequency control to fine-tune the EMI profile.

         

         

        High Speed Iso with dc-dc converter.png

         

        In the application example above, the Si88xx is an ideal fit as shown in Figure 5 below. The isolated transformer is rated to 2.5kVrms and is designed to work with the Si88xx IC. By adding a few other components like resistors, diodes and capacitors, a complete power and signal isolation solution is available.

         

         

        Si88x solutions.png

         

        Elegant solutions that combine excellent digital isolation characteristics with high power conversion efficiency and extremely low EMI emissions are now available that make development easier for the digital designer. These are plug and play solutions that eliminate costly design time and iterations and take the guesswork completely out of the picture, ensuring first time success and the fastest time to market.

         

        Check out the Si88X Isolator Evaluation Kit here. 

      • High Quality Audio with I2S - Part 3

        lynchtron | 11/306/2016 | 03:37 PM

        13_title.png

        In the last section, we generated some sound on our headphones that originated from an audio file on a MicroSD card.  We made use of one of Simplicity Studio’s examples to make this happen, so in this section we will build a helper library from that example to customize the I2S driver for our application.  We will also create, fetch, and play stereo audio this time around, and dive a little deeper into the role of DMA in this example.

         

        As a reminder, complete code for the Maker's Guide can be found on Github.

         

        Building Our Own MicroSD and I2S Library

        At this point, we have some code that will play an audio file at power on, but it has to be modified so that it can be called when an event happens, like a button press.  I decided to simplify my main.c file and put most of the supporting driver functions in an i2s_helper.c file.  That left me with a more succinct and easy-to-follow main.c file:

         

        // main.c
        // Rebuilt solution for playing .wav files over I2S
        // Chapter 13
         
        #include "em_device.h"
        #include "em_system.h"
        #include "em_chip.h"
        #include "em_cmu.h"
         
        #include "i2s_helpers.h"
        #include "utilities.h"
         
        int main(void)
        {
              CHIP_Init();
         
              /* Use 32MHZ HFXO as core clock frequency, need high speed for 44.1kHz stereo */
              CMU_ClockSelectSet(cmuClock_HF, cmuSelect_HFXO);
         
              /* Start clocks */
              CMU_ClockEnable(cmuClock_DMA, true);
         
              // Get the systick running for delay() functions
              if (SysTick_Config(CMU_ClockFreqGet(cmuClock_CORE) / 1000))
              {
                    DEBUG_BREAK;
              }
         
              // This creates MCLK for the I2S chip from TIMER1
              create_gpio_clock();
         
              // Enable the PB1 pushbutton on the devkit
              setup_pushbutton();
         
              // Enable the I2S output pins
              I2S_init();
         
              // Give the I2S chip time to get started
              delay(100);
         
              while (true)
              {
                    int track = get_next_track();
                    play_sound(track);
         
                    // Debounce the switch...
                    delay(750);
         
                    while (!get_button())
                          ;
              }
        }
         
        // Define the systick for the delay function
        extern uint32_t msTicks;
        void SysTick_Handler(void)
        {
              msTicks++;
        }

        As you can see, I added a button press to rotate through sounds defined in the i2s_helper.h and i2s_helper.c files:

         

         

        typedef struct filename_data
        {
              char filename[15];
        } wav_files;
        #define MAX_TRACKS 3
        wav_files file_array[MAX_TRACKS] = {{"sweet4.wav"}, {"sweet5.wav"}, {"sweet6.wav"}};

        That enabled me to make a simple function to get the next track:

        int get_next_track()
        {
              static int track = 0;
              int result = track;
              track++;
              if (track == MAX_TRACKS) track = 0;
              return result;
        }

        I used Audacity to convert the first two sounds to 32000 Hz to demonstrate that a different sample rate would still work.  This required a change of the MCLK to 8 MHz, and you can see that change in the i2s_helpers.c file in the create_gpio_clock function. 

         

        For the third file, I created a stereo .wav file by combining the two mono sources in sweet4.wav and sweet5.wav.  You can do that by using the controls on each track to specify the first as right and the second as left, and then exporting the combination as a .wav file:


         13_audacity_channel_control.png

         

        In the i2s_helper.c file, I cleaned up some things that weren’t exactly right in the wavplayer.c file.  For example, the original wavplayer.c was designed to run on the internal DAC within the EFM32 part.  We will cover the internal DAC in the next chapter, but it is limited to 12 bits, and the example has it configured in single-ended mode, which only accepts unsigned integers.  The I2S DAC, however, is capable of 24 bits and also accepts 2’s complement data.  I was able to put a compile-time switch in the code that skips the 12-bit conversion and boosts the volume when using the I2S DAC:

         

        #ifndef USE_I2S
              tmp = buffer[i];
         
              /* Convert from signed to unsigned */
              tmp = buffer[i] + 0x8000;
         
              /* Convert to 12 bits */
              tmp >>= 4;
         
              buffer[i] = tmp;
        #endif

         

        Direct Memory Access (DMA) - Ping Pong Mode

        One thing that is new in this project is the use of the Ping Pong DMA mode.  Up until now, we have used single-descriptor DMA tables or the DMADRV driver tool.  In this example, the author of the original wavplayer.c file needed very fast DMA transfers to keep up with 44.1 kHz audio files, which requires the use of ping pong mode.  I did not have to change a line of code in that section, but the following diagram shows what it is doing.

         13_dma_ping_pong.png

        One thing to note is that the DMA is transferring data from a RAM source to either the DAC or the I2S destination, depending on the USE_I2S parameter.  In our example, we have the USE_I2S parameter set, so the destination is the USART in I2S mode.  The RAM buffers are filled from the MicroSD card over the SPI bus whenever a buffer is empty.  DMA is not involved with those SPI transfers, other than to signal when it is time to fill up the empty RAM buffers.  Note that it probably could have been used there too, as long as it used a different channel than the one being used for the RAM to I2S/DAC transfers.

         

        The first thing that the author does in the DMA_setup function is initialize the DMA peripheral.  This only needs to be done once, but I have actually called it repeatedly after reset with no harm.  Here, the author chooses unprivileged DMA transfers with hprot = 0, which is an ARM core signal that means “user mode,” and uses the standard control block that is included as part of the em_dma library.

         

        /* DMA configuration structs */
          DMA_Init_TypeDef       dmaInit;
          DMA_CfgChannel_TypeDef chnlCfg;
          DMA_CfgDescr_TypeDef   descrCfg;
         
          /* Initializing the DMA */
          dmaInit.hprot        = 0;
          dmaInit.controlBlock = dmaControlBlock;
          DMA_Init(&dmaInit);

        In the next section, the author configures the DMA channel that is specific to I2S or DAC transfers.  Whenever a channel in the DMA completes, there is a callback that is your code’s entry point into that event.  PingPongTransferComplete is the function that is defined in wavplayer.c and gets called when each DMA transfer is complete. 

         

        You can see that the chnlCfg.select variable tells the DMA controller which DMA request line to use for this channel.  Since this example sends data externally onto the USART1 bus and onto the I2S CS4344 device, it uses the DMAREQ_USART1_TXBL parameter.  Had it been using the internal DAC, it would be pointing to the DMAREQ_DAC0_CH0 parameter. 

         

        DMAcallBack is registered with chnlCfg.cb, and then the chnlCfg object is passed into the DMA channel config function called DMA_CfgChannel, along with the DMA channel number.

         

         

          /* Set the interrupt callback routine */
          DMAcallBack.cbFunc = PingPongTransferComplete;
         
          /* Callback doesn't need userpointer */
          DMAcallBack.userPtr = NULL;
         
          /* Setting up channel */
          chnlCfg.highPri   = false; /* Can't use with peripherals */
          chnlCfg.enableInt = true;  /* Interrupt needed when buffers are used */
         
          /* channel 0 and 1 will need data at the same time,
           * can use channel 0 as trigger */
         
        #ifdef USE_I2S
          chnlCfg.select = DMAREQ_USART1_TXBL;
        #else
          chnlCfg.select = DMAREQ_DAC0_CH0;
        #endif
         
          chnlCfg.cb = &DMAcallBack;
          DMA_CfgChannel(0, &chnlCfg);

Now that the DMA channel is set up, descriptors must be configured for it.  Descriptors tell the DMA peripheral where to fetch data, where to send it, and how much data to move in a single transaction.  A ping pong transfer requires two descriptors, primary and alternate, that are used in turns, hence the “ping pong” name.  Both descriptors belong to the same DMA channel and share the same interrupt callback.  This can be seen in the FillBufferFromSDcard function, which checks which descriptor it is servicing so it knows which RAM buffer to fill.

         

The following code configures the primary and alternate descriptors identically, so the two buffers can be swapped back and forth without a gap and the system can keep up with the time-sensitive demands of streaming audio.  The dstInc parameter controls how much to increment the destination address on each DMA transaction.  It is set to dmaDataIncNone because there is no address to increment in either the USART or DAC case; the data is simply written to a fixed peripheral register when the time is right.

         

On the source side of the DMA transfer, the address into the RAM buffer is incremented according to how much data the destination can accept in a single transaction.  Since I2S runs over the USART, the best we can do is 2 bytes at a time; if we fetched four bytes, we would overwrite the data in the USART before it was ready for more.  If the destination were the DAC, which can accept four bytes at a time, we would fetch four bytes from the RAM buffer in each DMA transaction.

         

         

        /* Setting up channel descriptor */
          /* Destination is DAC/USART register and doesn't move */
          descrCfg.dstInc = dmaDataIncNone;
         
  /* Transfer 16 bits per transaction to the USART_TXDOUBLE register (I2S) or 32 bits to DAC_COMBDATA */
        #ifdef USE_I2S
          descrCfg.srcInc = dmaDataInc2;
          descrCfg.size   = dmaDataSize2;
        #else
          descrCfg.srcInc = dmaDataInc4;
          descrCfg.size   = dmaDataSize4;
        #endif
         
          /* We have time to arbitrate again for each sample */
          descrCfg.arbRate = dmaArbitrate1;
          descrCfg.hprot   = 0;
         
          /* Configure both primary and secondary descriptor alike */
          DMA_CfgDescr(0, true, &descrCfg);
          DMA_CfgDescr(0, false, &descrCfg);

         

Finally, the DMA transfers are started in the code below.  This DMA_setup function is somewhat misnamed; it should really be called DMA_setup_and_start.  I learned that when I paused the debugger after this function and still saw transactions going out on the scope hooked up to the I2S bus.  I rearranged the code to set up I2S first and then call DMA_setup, so that I could single-step through the I2S_setup function before the DMA transfers started.
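
For reference, the ordering I ended up with looks something like the sketch below.  It uses the I2S_setup and DMA_setup names from this project, but the rest of the main routine (clocks, GPIO, MicroSD setup, and so on) is omitted and the prototypes are assumed:

void I2S_setup(void);   /* project helper; configures USART1 in I2S mode */
void DMA_setup(void);   /* project helper; also starts the ping pong transfers */

int main(void)
{
  /* ...clock, GPIO, and MicroSD card setup omitted... */

  /* Configure the USART for I2S first, while nothing is moving yet. */
  I2S_setup();

  /* Then configure the DMA, which immediately kicks off the transfers,
   * so it has to come last. */
  DMA_setup();

  while (1)
  {
    /* wait for button presses and trigger sounds */
  }
}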

         

When the DMA is activated with DMA_ActivatePingPong, we provide the source and destination addresses and the length of the entire cycle for each descriptor.  When setting up the descriptors, we only specified how much data to move in each individual DMA transfer; in this function we specify exactly where to send the data (USART1->TXDOUBLE or DAC0->COMBDATA) and how many transfers make up an entire DMA cycle, passed as a count minus one: (2 * BUFFERSIZE) - 1 in the I2S case or BUFFERSIZE - 1 in the DAC case.

         

With a ping pong transfer, the DMA hardware moves a full cycle’s worth of data for one descriptor, for example the primary, while the alternate descriptor is idle and its RAM buffer is being refilled.  This allows seamless streaming of audio data from the MicroSD card to RAM, across the USART bus, and into the CS4344 I2S DAC.

         

          /* Enabling PingPong Transfer*/
          DMA_ActivatePingPong(0,
                               false,
        #ifdef USE_I2S
                       (void *) &(USART1->TXDOUBLE),
                               (void *) &ramBufferDacData0Stereo,
                               (2 * BUFFERSIZE) - 1,
                               (void *) &(USART1->TXDOUBLE),
                               (void *) &ramBufferDacData1Stereo,
                               (2 * BUFFERSIZE) - 1);
        #else
                               (void *) &(DAC0->COMBDATA),
                               (void *) &ramBufferDacData0Stereo,
                               BUFFERSIZE - 1,
                               (void *) &(DAC0->COMBDATA),
                               (void *) &ramBufferDacData1Stereo,
                               BUFFERSIZE - 1);
        #endif
        }

Setting up the DMA involves a lot of steps and a lot of places to go wrong.  To make matters worse, it cannot easily be single-stepped or debugged.  Once the DMA peripheral, channels, and descriptors are all configured and the transfer is started, the hardware takes care of the rest automatically in the background.  Any buffer overruns in that process can cause hard faults or logic errors in your program, so you have to get the setup right.

         

        Final Thoughts on I2S Library

The i2s_helpers.c library that I have created is not perfect.  It could use some fixes that I will leave to the reader as an exercise.  The first issue is that the CS4344 seems to power down when there are no transactions on the I2S bus for a while, which means that sometimes when you press the button, the sound file stutters at the beginning.  Therefore, the FUDGE factor that is controlled by the need_to_rewind variable should instead be controlled by a timeout: if it has been a while since the last sound was played, the part has probably powered down and we need the rewind; if we just played something, the .wav file doesn’t need to be rewound before the next play event.
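
As a sketch of how that timeout could work, the snippet below assumes a millisecond tick counter incremented from the SysTick interrupt and an arbitrary idle threshold; msTicks, last_play_ms, CS4344_IDLE_TIMEOUT_MS, and should_rewind are illustrative names, not part of the project:

#include <stdbool.h>
#include <stdint.h>

#define CS4344_IDLE_TIMEOUT_MS  2000UL   /* illustrative threshold */

extern volatile uint32_t msTicks;        /* assumed SysTick millisecond counter */
static uint32_t last_play_ms = 0;

/* Call right before starting a sound to decide whether to rewind. */
static bool should_rewind(void)
{
  bool idle_too_long = (msTicks - last_play_ms) > CS4344_IDLE_TIMEOUT_MS;
  last_play_ms = msTicks;

  /* If the CS4344 sat idle long enough to power down, rewind past the FUDGE
   * region so the start of the clip isn't swallowed by the stutter. */
  return idle_too_long;
}

The need_to_rewind variable could then be set from a check like this instead of unconditionally.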

         

The second issue with this library is a resolution mismatch.  The CS4344 is a 24-bit DAC and I2S is based on 32-bit transactions, but our .wav files contain 16-bit samples.  The first bits that enter the DAC are the Most Significant Bits (MSB), so when the DAC sees our 16-bit data it left-justifies it, placing those bits at positions 23:8 of its range.  This creates an audible pop when the sound is done playing, because a value of 1 in our file takes on a meaning of (1 << 8), or 256 voltage steps.  In order to fix this, we would need to:

1. Change the I2S setup in the USART to 32-bit wide transactions by setting the format member of the I2S init structure to usartI2sFormatW32D32 (init.format = usartI2sFormatW32D32).
2. Change the RAM buffers to int32_t and sign-extend each 16-bit 2’s complement sample to 32 bits, so that the high-order bits are filled with zeros or ones depending on whether the sample is positive or negative (see the sketch after this list).

        This should eliminate the pop or click at the end of each sound.
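
Here is a minimal sketch of the sign extension in step 2, assuming the RAM buffers become int32_t and the samples arrive from the .wav file as 16-bit 2’s complement values; the function and parameter names are illustrative:

#include <stddef.h>
#include <stdint.h>

static void widen_samples(const int16_t *src, int32_t *dst, size_t count)
{
  for (size_t i = 0; i < count; i++)
  {
    /* Assigning an int16_t to an int32_t sign-extends it: the upper 16 bits
     * become all zeros for positive samples and all ones for negative ones. */
    dst[i] = (int32_t) src[i];
  }
}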

         

Note that whenever you are dealing with large arrays or buffers like the RAM buffers in this example project, you can run out of available RAM on the device quickly.  This example reserves 2.5 KB of RAM for buffers, which isn’t too bad since the Wonder Gecko has 32 KB of RAM in total, but Zero Gecko and Tiny Gecko parts are limited to as little as 2 KB, so keep that in mind.  When you use too much RAM, sometimes you will get linker errors, but other times you will just get a runtime fault that can be difficult to debug.

         

        This wraps up the chapter on I2S sound generation.  In the next chapter, we will cover other ways to create sound and how to use the internal DAC instead of an external DAC.

         

         PREVIOUS | NEXT