After hours of debugging and testing with the Si8900, I discovered that its functioning differs greatly from what is explained in the datasheet. Although I use the Si8900, I believe the situation should be identical for the Si8901 and Si8902.

The Si890x datasheet clearly explains how the configuration and communication protocols work. Basically, the master sends a configuration byte. The Si890x then sends the configuration byte back to the master, followed by the data bytes.


I discovered that this is not generally true. For example, if a burst mode configuration byte selecting ch. AIN0 is initially transmitted to the Si890x, it sends back the configuration byte followed by a continuous stream of data bytes for ch. AIN0, exactly as expected. However, if another burst mode configuration byte is later sent selecting ch. AIN1, the Si890x does not send it back. Instead, the data stream is modified so that the ADC data alternate between ch. AIN0 and ch. AIN1 (... ADC_H (AIN0), ADC_L (AIN0), ADC_H (AIN1), ADC_L (AIN1), ADC_H (AIN0), ...). From the datasheet, one would rather assume that the new configuration byte would be echoed back by the Si890x, and that the following data stream would consist only of ADC data from ch. AIN1. This is not the case. The datasheet contains no information about this fundamental behaviour of the Si890x, which in my opinion is quite unacceptable, as it directly affects how the module is controlled and used.
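For anyone hitting this, the interleaved stream described above can at least be demultiplexed in software. Below is a minimal C sketch that sorts the raw ADC_H/ADC_L byte pairs back onto their channels; the function name and the fixed buffer size are my own, and it deliberately does not interpret the 10-bit packing inside ADC_H/ADC_L (check that against your own captures and the datasheet):

```c
#include <stdint.h>
#include <stddef.h>

/* Demultiplex the interleaved burst stream observed above:
 * ... ADC_H(AIN0), ADC_L(AIN0), ADC_H(AIN1), ADC_L(AIN1), ... repeating.
 * num_channels is the number of channels in the burst cycle.
 * This only sorts raw byte pairs per channel; it does not decode
 * the 10-bit ADC value, whose packing is not described in this thread.
 */
static void demux_burst(const uint8_t *stream, size_t len,
                        int num_channels,
                        uint8_t out_h[][16], uint8_t out_l[][16],
                        size_t count[])
{
    size_t pair = 0;
    for (size_t i = 0; i + 1 < len; i += 2, pair++) {
        int ch = (int)(pair % (size_t)num_channels);
        out_h[ch][count[ch]] = stream[i];     /* ADC_H byte */
        out_l[ch][count[ch]] = stream[i + 1]; /* ADC_L byte */
        count[ch]++;
    }
}
```

With two channels in the cycle, even pairs land on AIN0 and odd pairs on AIN1, matching the alternation described above.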


This post is simply intended to warn anybody designing with the Si890x about this particular behaviour. It is also a message to the hardware team at Silicon Labs: the Si890x datasheet needs an update describing its functioning in more complete detail.


  • Hi Race,

    Thanks a lot for pointing this out to the community. The Silicon Labs applications engineering team is aware of this issue with the Si890x family of parts and is working to address it in the near future through updates to the device and to the documentation.


  • so it has been 6 months and not so much as a note on here describing the function???  


    ideally you would be able to add one or all of the ADC channels to the burst mode cycle, with the chip saving the PGA and reference settings for each one as well ... BUT...WE...JUST...DON'T...KNOW


    why couldn't someone have simply hammered out a text file and thrown it on the website as an app note, so the information was there until someone else got around to making it pretty? at least the website isn't ENTIRELY flash BS like some are - all engineers care about is getting to technical data in the most expedient manner, not waiting for BS flash content to load!

  • Dear Forum, Dear Fellow Posters,

    After two weeks of heavy fighting to get the Si8902 into meaningful communication with an Atmel µC, I am about to give up.
    The problem is not so much "real nonsense" in the SPI communication (which would point to signal transmission problems/bit errors) but rather the byte replies from the Si8902.
    They seem to be "mixed up" and "delayed", and I simply could not get the part to transmit the sequence stated in the datasheet. Most prominently, I was always missing the ADC_L byte.
    The best result after all measures described below was:

        Byte:    0          1          2          3
                 10000000   11001111   10000000   00000000

    • byte 0 just "garbage" (or some "old" ADC_H byte of a former measurement) while transmitting the first CONFG_0
    • byte 1 being the mirrored CONFG_0 byte as expected
    • byte 2 an ADC_H (verified by varying the AIN0 input)
    • byte 3 just empty (not a zero-readout ADC_L, verified)

    What I (at least seemingly) found out is that the transmit sequence of the master should not be:

        Byte 0              Byte 1       Byte 2       Byte 3
        CONFG_0 - (delay) - don't care - don't care - don't care

    but rather:

        Byte 0              Byte 1       Byte 2       Byte 3
        CONFG_0 - (delay) - CONFG_0    - CONFG_0    - CONFG_0
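    In code, the working sequence amounts to filling every byte slot of the transmit frame with CONFG_0 instead of don't-care filler. A trivial sketch (the function name and the 4-byte frame length are mine, the frame length being the echo + ADC_H + ADC_L exchange discussed in this thread):

```c
#include <stdint.h>
#include <stddef.h>

/* Build the master's transmit frame for one read: clock the
 * configuration byte in EVERY slot, not 0x00/0xFF filler, per the
 * observation above that don't-care bytes do not yield ADC_L.
 */
static void build_tx_frame(uint8_t confg0, uint8_t tx[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        tx[i] = confg0; /* repeat CONFG_0 in each byte slot */
}
```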

    What I have tried up to now with the limited results stated above:

    • Read through all the literature available to me (datasheet, AN637, AN638, EvalBoard description) and followed every hint
    • Searched the net for resources (outside the SiLabs realm, with very limited results)
    • Software tweaking:
    1. Tried all SPI data modes (0-3), with MODE_3 (CPOL=1 [clock active low] and CPHA=1 [sample on rising edge]) working best - the datasheet does not explicitly state this, but it can be inferred from Figures 1 and 13.
    2. Tried clock frequencies between 250 kHz and 2 MHz.
    3. Varied the mandatory delay time (8 µs) between CONFG_0 and subsequent transfers, from 0 to 1000 µs.
    4. Varied the delay times between individual bytes, from 0 to 250 µs.
    5. Tried transfer frames longer than the 4 bytes (which seemed to me the obvious frame length from reading the datasheet).
    6. Followed the sentence "To ensure that the SPI port is reset, the master must toggle its /EN output each time a data byte is transmitted or received." (AN637, p. 6). To me that is ambiguous, because "toggle" could mean a single transition or a double one. Nevertheless, I tried all combinations of toggling (after every single byte, or only once per transmit cycle after sending the CONFG_0 byte; single transition or double).
    • Hardware side (signal integrity): the hardware setup was exactly as in the datasheet/AppNotes/EvalBoard. The one real difference was the longer signal line lengths in my original design (and certainly on a breadboard). High-frequency signal transmission is admittedly an area where my knowledge is limited, so I tried these things out:
    1. Line termination by series resistors matching the (estimated) line impedance
    2. Low-pass filtering the clock signal
    3. Putting the whole design on a dedicated test PCB with signal line lengths of max. 8.5 mm (SDI/SDO/SCLK) and 16 mm (/EN) respectively
    4. Measured the signal integrity with the scope: not too bad in my opinion (a photo of the SCK line in a breadboard setup is attached; clock frequency is 1 MHz)

    I post here because I share the opinion of the honored fore-posters that the datasheet could be more verbose and deserves an update. The developer simply has too little help when problems arise. For example, why is an essential (valid?) piece of information about "toggling the /EN line" hidden somewhere in an AppNote? It would also be good to have some (pseudo-)code available as a guideline. That alone would answer most of the questions.
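    In the spirit of the pseudo-code asked for above, here is a hedged C sketch of one demand-mode conversion based purely on the sequence worked out in this thread. The hardware access is abstracted as a callback so the logic can be desk-checked; `spi_xfer_fn`, `si890x_demand_read` and `script_xfer` are MY names, not a SiLabs or vendor API, and the /EN toggling and 8 µs delay are left as comments because their exact requirement is disputed in this very thread:

```c
#include <stdint.h>

/* One full-duplex SPI byte exchange: send 'out', return the reply.
 * HYPOTHETICAL abstraction; wire this to your MCU's SPI driver. */
typedef uint8_t (*spi_xfer_fn)(uint8_t out, void *ctx);

/* Returns 0 on success (config echo seen), -1 otherwise.
 * *adc_h / *adc_l receive the raw data bytes; the 10-bit packing
 * inside them is left to the datasheet. */
static int si890x_demand_read(uint8_t confg0, spi_xfer_fn xfer, void *ctx,
                              uint8_t *adc_h, uint8_t *adc_l)
{
    /* Byte 0: send CONFG_0; the reply may be stale/garbage. */
    (void)xfer(confg0, ctx);
    /* ...insert the mandatory ~8 us track-and-hold delay here,
     * and any /EN toggling per AN637, if your setup needs them... */
    /* Byte 1: keep clocking CONFG_0; expect the echoed config byte. */
    if (xfer(confg0, ctx) != confg0)
        return -1;
    /* Bytes 2 and 3: still clock CONFG_0; collect ADC_H then ADC_L. */
    *adc_h = xfer(confg0, ctx);
    *adc_l = xfer(confg0, ctx);
    return 0;
}

/* A scripted fake transport for desk-checking the sequence. */
typedef struct { const uint8_t *replies; int i; } script_t;

static uint8_t script_xfer(uint8_t out, void *ctx)
{
    (void)out;
    script_t *s = (script_t *)ctx;
    return s->replies[s->i++];
}
```

    This is only a sketch of the sequence reported in this thread, not a reference implementation; validate the echo check and timing against your own captures.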

    But, concluding, I am here for help and not for complaining!

    Regards from



    P.S.: Product category for this thread seems wrong...

    I got mine working with a PIC32 after a bit of frustration; ultimately it worked perfectly by just switching to mode 3 (CPOL = 1, CKE(NCPHA) = 0 - check which mode notation you're using!).


    I did not ultimately have to do any tricky stuff to account for the supposed 8 µs track-and-hold delay or the supposed need to toggle CS for every byte. See the attached image for a logic analyzer capture operating in demand mode (CNFG_0 = 0b11000011 - CH_0 with external reference) and the correct ADC data showing a full-scale reading of 1023 counts.

    I do not know if anyone cares at this point, as I had thought they would have posted it here. I got a response on 10/15 from a support ticket and thought for sure things would have been updated... but they were not. The response was:



    I have reviewed the firmware of this product. It stores the VREF selection and PGA gain parameters for each burst channel in an array, then uses that array to set VREF and PGA specifically for that channel before triggering ADC sampling. So the answer to your question is yes.
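    The bookkeeping described in that support reply can be pictured with a small C sketch. Everything here is illustrative - the struct, field widths, and the three-channel count (AIN0..AIN2 on the Si8900) are my own rendering of the reply, not the actual firmware:

```c
#include <stdint.h>

enum { SI890X_NUM_CHANNELS = 3 }; /* AIN0..AIN2 on the Si8900 */

/* Per-channel settings as described by support: the firmware keeps
 * VREF and PGA for each burst channel and re-applies them before
 * sampling that channel, rather than overwriting one global setting. */
struct chan_cfg {
    uint8_t in_use; /* channel participates in the burst cycle */
    uint8_t vref;   /* stored VREF selection */
    uint8_t pga;    /* stored PGA gain selection */
};

static struct chan_cfg burst_cfg[SI890X_NUM_CHANNELS];

/* Record a channel's settings when its config byte is accepted. */
static void store_chan(int ch, uint8_t vref, uint8_t pga)
{
    burst_cfg[ch].in_use = 1;
    burst_cfg[ch].vref = vref;
    burst_cfg[ch].pga = pga;
}

/* Look up the stored settings to re-apply before sampling channel ch. */
static const struct chan_cfg *cfg_for(int ch)
{
    return &burst_cfg[ch];
}
```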

    I agree that our datasheet does not have specifications for a very low reference voltage. I believe the intent of offering an external reference option was to give customers the option of using a low-temperature-drift reference IC. Since this ADC architecture is a SAR, the ADC should still function with a low VREF; however, the offset error will be a higher percentage of the full-scale signal.



    I had earlier been told that burst mode continuously cycles through the configured channels, which tells me the device saves the channel settings instead of overwriting them. I would venture a guess that "not used" is an actual option, allowing you to choose which channels are in the continuous read; "burst" being a poor choice of terms in the datasheet.