Hello and welcome to the “Clocktoberfest” edition of the Timing 101 blog from Silicon Labs.
Nice weather is starting to arrive here in central Texas. By late September and October we get hints of cooler and less humid days to come. Those of us of German descent, and plenty who are not, may celebrate Oktoberfest or some local variant such as Wurstfest in nearby New Braunfels. So “Happy Clocktoberfest!” from the Timing group.
This month I am going to discuss a very common question that arises when measuring the phase noise and phase jitter of relatively low clock frequencies. All things being equal, we generally expect divided-down lower frequency clocks to yield lower phase noise than higher frequency clocks. Quantitatively, you may recall this as the 20log(N) rule.
However, the 20log(N) rule only applies to phase noise and not integrated phase noise or phase jitter. Phase jitter should generally measure about the same. Further, as we get low enough in frequency we do not find this relation to hold true in actual measurements. So the question this month is - why is that?
The 20log(N) Rule
First, a quick review of the 20log(N) rule for those who may not be familiar with it:
If the carrier frequency of a clock is divided down by a factor of N then we expect the phase noise to decrease by 20log(N). For example, every division by a factor 2 should result in a decrease of phase noise by 20log(2) or about 6 dB. The primary assumption here is a noiseless conventional digital divider.
Why is this? The output of a practical digital divider consists of rising and falling edges, with the signal otherwise sitting at a logic high or low level. Jitter is present only at the rising and falling edges. Dividing the clock down means fewer edges per unit time, so the proportion of each clock period occupied by jittery edges is reduced. Our intuition may suggest that if we reduce the number of jittery edges then we reduce the jitter transmitted by the divided-down clock. That turns out to be correct.
Formally, for an ideal noiseless divide-by-N, this can be written as follows:

L_divided(f) = L(f) - 20log10(N) [dBc/Hz]
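As a quick numerical sanity check, the 20log(N) rule can be sketched in a few lines of Python (the function name and example values are mine, for illustration):

```python
import math

def divided_phase_noise(l_dbc_hz: float, n: int) -> float:
    """Expected phase noise at a given offset after dividing the carrier
    by N, assuming an ideal (noiseless) digital divider."""
    return l_dbc_hz - 20 * math.log10(n)

# Dividing 800 MHz down to 100 MHz (N = 8) should lower the phase noise
# at every offset by 20*log10(8), i.e. about 18 dB:
print(round(divided_phase_noise(-130.0, 8), 1))  # → -148.1
```

Each divide-by-2 subtracts 20log10(2) ≈ 6.02 dB, which is where the "about 6 dB per division by 2" rule of thumb comes from.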
What About Phase Jitter?
We integrate SSB phase noise L(f) [dBc/Hz] to obtain RMS phase jitter in seconds as follows, for "brick wall" integration from offset frequency f1 to f2 in Hz, where f0 is the carrier or clock frequency:

RMS phase jitter [s] = sqrt( 2 × ∫ from f1 to f2 of 10^(L(f)/10) df ) / (2π × f0)
In practice the quantities involved are small enough for good clocks that the RMS phase jitter, for a 12 kHz to 20 MHz jitter bandwidth, is on the order of 10s to 100s of femtoseconds.
Note that the rms phase jitter in seconds is inversely proportional to f0. When frequency is divided down, the phase noise, L(f), goes down by a factor of 20log(N). However, since the frequency goes down by N also, the phase jitter expressed in units of time is constant. Therefore, phase noise curves, related by 20log(N), with the same phase noise shape over the jitter bandwidth, are expected to yield the same phase jitter in seconds.
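The cancellation described above is easy to demonstrate numerically. Below is a minimal sketch of the brick-wall integration using simple trapezoidal integration of the linearized noise power (real calculators typically interpolate L(f) log-log between points; the flat two-point dataset here is an assumption chosen to keep the example exact):

```python
import math

def rms_phase_jitter_s(offsets_hz, l_dbc_hz, f0_hz):
    """Integrate SSB phase noise L(f) [dBc/Hz] from f1 to f2 and convert
    to RMS phase jitter in seconds: sqrt(2 * area) / (2 * pi * f0)."""
    powers = [10 ** (l / 10) for l in l_dbc_hz]  # dBc/Hz -> linear 1/Hz
    area = sum(0.5 * (powers[i] + powers[i + 1]) * (offsets_hz[i + 1] - offsets_hz[i])
               for i in range(len(offsets_hz) - 1))
    return math.sqrt(2 * area) / (2 * math.pi * f0_hz)

offsets = [12e3, 20e6]  # 12 kHz to 20 MHz jitter bandwidth

# Hypothetical flat -140 dBc/Hz phase noise at an 800 MHz carrier...
j_800 = rms_phase_jitter_s(offsets, [-140.0, -140.0], 800e6)

# ...divided by N = 8: L(f) drops by 20*log10(8) dB, f0 drops by 8x.
delta_db = 20 * math.log10(8)
j_100 = rms_phase_jitter_s(offsets, [-140.0 - delta_db] * 2, 100e6)

print(f"{j_800*1e15:.0f} fs vs {j_100*1e15:.0f} fs")  # identical, ~126 fs each
```

The 18 dB drop in L(f) and the factor-of-8 drop in f0 cancel exactly, so the phase jitter in seconds is unchanged, consistent with the femtosecond-scale figures quoted above.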
Let’s look at a specific example. As an experiment, I took an Si5345 jitter attenuator, input a 25 MHz clock, and configured it so that I only changed an (internal) output divider by factors of 2 to obtain frequencies running from 800 MHz down to 50 MHz. I then measured the phase noise using an Agilent (now Keysight) E5052B and compared the phase noise and phase jitter for each case. Five runs were averaged and correlated for each frequency. I omitted any spurs for clarity and to simplify the experiment.
Through the magic of MS Paint and use of the “Transparent Selection” feature I am able to overlay all of the E5052B screen caps as follows. (If the runs are identical each time only unique text is obscured.) In the figure below, the traces generally run top to bottom in descending carrier frequency, i.e. 800 MHz, then 400 MHz, etc. on down to 50 MHz. The shapes of the curves are the same except where the curves are compressed at the highest offset frequencies.
I then tabulated the measured phase jitter results over the 12 kHz to 20 MHz jitter bandwidth as follows:
There are two immediate observations we can make from the overlaid plots and the table.
Despite the 20log(N) rule, the phase jitter is getting worse as I decrease the output clock frequency, especially below 200 MHz. These lower frequency clocks measure far jitterier than expected. Thus arises the case of the jitterier divided-down clock. So what’s going on?
Curve compression due to the apparent phase noise floor appears responsible for the differences in the calculated RMS phase jitter. Let’s verify that by comparing the data from 10 kHz to 20 MHz offset for the 800 MHz and 100 MHz cases. All of the spot phase noise data came from the original markers plotted except for the 20 MHz points which were estimated from the screen cap plots. (Note that for a factor of 8 or 23 we would expect a delta of 3 x 6 dB or 18 dB in phase noise.)
Taking just these values and entering them into the Silicon Labs online Phase Noise to Jitter Calculator, we obtain the following.
Not too shabby for the online calculator considering it only had 5 data points to work with!
Now let’s modify the 100 MHz dataset to remove the higher offset frequency compression as follows. The 18 dB Δ is what we would otherwise expect from applying the 20log(N) rule.
Entering the modified values into the online calculator, we add its calculation to the table as highlighted:
This exercise confirms that the curve compression accounts for the significant difference in phase jitter measured between the 800 MHz and 100 MHz cases.
The Noise Floor
All of the traces flatten or get close to flattening by 20 MHz offset. So, what is the apparent or effective noise floor? Note that in general this will be some RSS (Root Sum Square) combination of the instrument phase noise floor and the DUT’s far offset phase noise. For example, if both the DUT and the instrument had an effective phase noise of –153 dBc/Hz at 20 MHz offset then the RSS result would be 3 dB higher or –150 dBc/Hz.
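The RSS combination is just a power sum in linear units. A small sketch (function name mine):

```python
import math

def rss_floor_dbc_hz(dut_floor: float, instrument_floor: float) -> float:
    """Combine DUT and instrument noise floors (dBc/Hz) as a power sum."""
    total = 10 ** (dut_floor / 10) + 10 ** (instrument_floor / 10)
    return 10 * math.log10(total)

# Equal -153 dBc/Hz floors combine to a result 3 dB higher:
print(round(rss_floor_dbc_hz(-153.0, -153.0), 1))  # → -150.0
```

When one floor is 10 dB or more below the other, its contribution is under 0.5 dB, which is why a measurement is usually trusted only when the instrument floor sits comfortably below the DUT's.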
If the instrument noise floor were well below the DUT's, we would expect the spot phase noise at 20 MHz offset to decrease by 6 dB for every division by a factor of 2, relative to that measured for the 800 MHz clock. But that is not what happened. See the table and accompanying figure below:
The phase noise floor is not varying monotonically, which suggests multiple factors may be involved. Reviewing the E5052B specs indicates that the SSB phase noise sensitivity should decrease slightly as the carrier frequency is lowered. Also, far offset phase noise from the DUT (Device Under Test) is typically dominated by the output driver’s phase noise, and that is unlikely to vary in this way. We are most likely running into a combination of the instrument's "actual" phase noise floor as a function of input frequency plus aliasing on the part of the DUT. The Si5345's frequency divider edges can be regarded as sampling the phase noise of the internal clock presented to the divider. This factor is independent of the instrument. It is understood that aliasing can occur, but quantifying the specific contribution due to aliasing can be problematic.
This paper suggests that, provided the noise BW of the input signal is > 4 x the divider output frequency v0, the divided PM (Phase Modulation) noise will degrade via aliasing by 10log[(BW/2v0) + 1]. The aliasing described primarily impacts the far offsets, which are exactly where our interest lies here.
The authors write:
"Aliasing of the broadband noise generally has a much smaller effect on the close-to-carrier noise because it is typically many orders of magnitude higher than the wideband noise."
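For a feel for the magnitudes involved, the degradation expression from SenGupta and Walls can be evaluated directly (the 1 GHz noise bandwidth below is a made-up illustrative value, not measured data from this experiment):

```python
import math

def aliasing_degradation_db(noise_bw_hz: float, f_out_hz: float) -> float:
    """Far-offset PM noise degradation from divider aliasing:
    10*log10(BW/(2*v0) + 1), valid when BW > 4 * v0."""
    if noise_bw_hz <= 4 * f_out_hz:
        raise ValueError("formula assumes noise BW > 4 x output frequency")
    return 10 * math.log10(noise_bw_hz / (2 * f_out_hz) + 1)

# A hypothetical 1 GHz noise bandwidth sampled by a 50 MHz divider output:
print(round(aliasing_degradation_db(1e9, 50e6), 1))  # → 10.4 dB
```

Note the trend: once BW/(2v0) is much greater than 1, each halving of the output frequency adds roughly 3 dB of degradation, consistent with lower-frequency divided clocks suffering more at far offsets.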
In these particular measurements, the estimated noise floor degradation for the lowest carrier frequencies is plausible assuming a given BW and instrument noise floor. However, no single explanation appears to accommodate all the data. Properly sorting this out might require operating the device at the highest output frequency and then employing external dividers and filters. Perhaps in some future post.
While this month's post has concentrated on phase noise, it should be noted that divided spurs can be aliased or folded in the same way as the authors cited above discuss. One of my colleagues has demonstrated this also and I recommend his article for further reading.
We have reviewed the impact that a phase noise instrument’s apparent or effective phase noise floor can have on both the phase noise curve and phase jitter measurement of sufficiently low frequency clocks. After you have worked with your DUTs and phase noise equipment for some time, you will recognize what a typical phase noise curve looks like, the approximate phase noise floor of the equipment, and what reasonable expectations for phase jitter are. Certainly, for the cases above, we would have to take phase jitter measurements below 200 MHz with a grain of salt. If in doubt, try a similar configuration at a higher frequency for comparison. You will only miss the secondary phase noise degradation due to any instrument noise floor variation and/or aliasing due to higher division factors.
As always, if you have topic suggestions, or there are questions you would like answered, appropriate for this blog, please send them to firstname.lastname@example.org with the words Timing 101 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on.
1. The derivation here is adapted from a slim but information-packed book by Dr. William “Bill” Egan called Frequency Synthesis by Phase Lock, published in 1981. See section 4.5, “Effect of Modulation of a Divided Signal”, pages 75-76. (There is a later and much expanded edition of this book.) Dr. Egan has passed on, but he was a great engineer, author, and teacher in Silicon Valley. He wrote several excellent, clear, and precise books on frequency synthesis, PLLs, and RF systems design.
2. A. SenGupta and F.L. Walls, “Effect of Aliasing on Spurs and PM Noise in Frequency Dividers”, 2000 IEEE/EIA International Frequency Control Symposium and Exhibition, pages 541-548. Retrieved from http://tf.boulder.nist.gov/general/pdf/1380.pdf.
3. H. Mitchell, "Perfect Timing: performing clock division with jitter and phase noise measurements", EE Times, 8/25/2011. Howell's paper covers this topic from a different perspective with some additional detail using data from the previous generation Si5324. He also demonstrates spur aliasing by mixing the outputs of separate RF signal generators. Retrieved from
This week we’ve released the new Si522xx PCIe clock generators, bringing best-in-industry jitter performance and energy efficiency to PCI Express® (PCIe®) Gen1/2/3/4 applications. This new clock family delivers on the stringent requirements of PCIe Gen 4 and Separate Reference Independent Spread (SRIS) standards with 20 percent jitter margin to spare, and its jitter performance (0.4 ps RMS) also provides up to 60 percent jitter margin for PCIe Gen 3.
The PCIe standard, originally developed as a serial interconnect for desktop PCs, has become popular in blade servers, storage equipment, embedded computing, IP gateways, industrial systems, and consumer electronics. High-output clock generators like the Si522xx family reduce the number of buffers needed as data bus usage expands in these types of systems. Designed specifically for clock-distribution-intensive applications, the Si522xx family supports up to 12 outputs from a single device. This higher output count per device reduces BOM cost. The clocks’ output drivers take advantage of our innovative push-pull HCSL technology, eliminating the external resistors required by conventional constant-current output drivers.
Additionally, internal power filtering prevents power supply noise from affecting jitter performance while reducing component count, saving about 30 percent of board space compared to competing solutions.
Developers designing battery-powered applications like digital cameras are especially concerned about power consumption. The 2-output Si52202 clock is optimized for low-power 1.5 V to 1.8 V applications, offering the lowest power consumption for PCIe applications. Packaged in a small 3 mm x 3 mm 20-pin QFN, the clock is also 45 percent smaller than competing solutions.
For more information, visit www.silabs.com/pcie-learningcenter.
Hello and welcome to another Timing 101 blog article.
In this post, I will go over an interesting and curious clock chip feedback arrangement that comes up from time to time. It can arise accidentally, or as an attempted recovery or test mode, but should generally be avoided, as explained below. Further, understanding the Ouroboros clock might help explain some odd behavior in a complicated timing application. Before diving into exactly what I mean by an "Ouroboros" clock, let's review some basic clock switching terminology and the standard input clock switching configuration.
Some Basic Clock Switching Terminology
Clock chips often support switching from one input clock to another based on some qualifying criteria such as LOS (Loss of Signal) or an OOF (Out of Frequency) condition. Here’s the terminology most often used:
Freerun: Output clock based on an attached crystal, other resonator, or substitute external reference clock. The output clock's frequency stability, wander, and jitter characteristics are determined by, for example, the chip's crystal oscillator, independent of any input clock.
Holdover: Output clock based on historical frequency data of a selected input clock, employed when the input clock is lost and no valid alternate is available. Usually, historical data must be collected over some minimum time window to be considered valid. The frequency accuracy is only as good as the data collected.
Locked: Output clock frequency and phase locked to a selected input clock, i.e. normal operation.
The Standard Input Clock Switching Configuration
Consider the illustration in the figure below where two jitter attenuator clock ICs are cascaded. This could be for additional jitter attenuation or for optimizing frequency plans and distribution. For the purposes of illustration, the devices are depicted as very simplified Si5345 block diagrams. In this figure there are two input clocks supplied to Device #1, IN0 and IN3. In typical applications one clock may be regarded as the "primary" clock and the other as the "secondary" or backup clock. The primary clock might be recovered from network data while the secondary clock relies on a local oscillator. If the primary clock fails or is disqualified by LOS or OOF, then the clock chip switches to the secondary clock. This is usually intended to keep "downstream" devices up and running. If the primary clock returns and is valid then the clock IC may revert to it, or not, depending on the option selected.
The presumption here is that as long as either of these two clocks is present, a valid locked mode clock will be yielded at OUT0, supplying an input clock to downstream Device #2. In fact, if both input clocks to Device #1 were lost, the device could go into holdover mode, as described above, or even freerun mode, and still yield a reasonable temporary output clock.
The Ouroboros Clock Configuration
In standard applications, downstream clocks are not fed back to upstream clock inputs. Rather they are usually scaled or jitter attenuated versions of upstream independent stable or data-derived clocks.
But what if we did attempt the configuration shown in Figure 2 below? In this case, one of the outputs of downstream Device #2 is being fed back in to upstream Device #1. This might be intended as a temporary expedient backup clock.
Now what happens when we lose our primary clock IN0, as suggested by Figure 3 below? The secondary or backup clock IN3 to Device #1 relies on the output of Device #2. Note that this is just a locked version of Device #1's own output. We generally do not see this sort of connection with one device, but it is proposed occasionally in applications involving two devices. Even then, engineers will usually intuit that we are trying to get away with something.
This is the Ouroboros clock configuration. (And yes, it does sound almost like a Big Bang Theory episode title.) The Ouroboros clock configuration is so named because its feedback resembles the mythological symbol of a snake biting or devouring its own tail. According to the Wiktionary entry, the word comes from the Greek words ourá for "tail" and bóros for "devouring or swallowing". See the illustration below in Figure 4. It is an ancient symbol for cyclic infinity, and the term fits this application.
Let’s consider a simplified gedanken or thought experiment consisting of a single basic PLL. Then assume that it has successfully been placed into the Ouroboros configuration, as shown in Figure 5 below.
Now we can think through the probable consequences. If everything is ideal and there is no PFD (Phase Frequency Detector) error output, then the situation is at least marginally stable. However, even ignoring loop noise, in a practical PLL there will most likely be a fixed phase offset between the clocks presented at PFD (+) and PFD (-). In normal PLL operation the VCO can be adjusted so as to frequency and phase lock the output clock to an independent input clock. In the Ouroboros configuration, there is nothing the VCO can do to reduce the phase error.
Assume the output clock at PFD (+) is measured as just a little faster in phase than the clock at PFD (-). The loop will then attempt to correct for that by tuning the VCO to a higher frequency. But the relative phase difference will still be present. So the loop will continue attempting to correct for the measured phase error until the VCO is “railed” at its highest frequency. Note that, to generalize, the VCO could be tuned either higher or lower in frequency depending on the polarity of the phase difference. All that matters is that the PFD sees a phase delta that leads to a runaway condition.
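A toy discrete-time model makes the runaway easy to see. This is not a model of any real PLL or of the Si5345; the gain, offset, and rail values are arbitrary assumptions chosen only to show the integrator wind-up:

```python
def ouroboros_vco_hz(phase_offset_rad=0.01, ki=1e6,
                     f_center=100e6, f_min=80e6, f_max=120e6, steps=5000):
    """Integrate a PFD phase error that the VCO can never remove.
    In the Ouroboros configuration the error is a fixed offset, so the
    loop integrator winds up until the VCO hits a tuning-range rail."""
    integrator = 0.0
    f_vco = f_center
    for _ in range(steps):
        integrator += ki * phase_offset_rad          # error never shrinks
        f_vco = min(f_max, max(f_min, f_center + integrator))  # finite range
    return f_vco

print(ouroboros_vco_hz())       # → 120000000.0 (railed high)
print(ouroboros_vco_hz(-0.01))  # → 80000000.0 (railed low)
```

Flipping the sign of phase_offset_rad drives the VCO to the opposite rail, matching the observation that the ramp direction depends only on the polarity of the phase delta.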
Trying to accomplish this with two Si5345s is just this problem writ large, albeit with further complications due to clock validation and switching logic. In addition, there will always be slight part-to-part variations in output frequency and calculated HO (holdover) frequency. These can also drive the PFD in one direction or another where two separate devices are involved.
So, what really happens in the lab? Consider a project plan with these attributes:
Now take such a project plan and apply it to 2 Si5345 evaluation boards, configured as shown in the second figure above, except using IN1 instead of IN3 as the secondary or backup input clock.
Apply a signal generator to Device #1 IN0 and let both boards run until HOLD_HIST_VALID is true. What happens when you remove the 100 MHz input clock at IN0?
Initially only LOS is reported by Device #1, and otherwise all is well. However, the output clock frequency from Device #2 starts ramping (it can ramp up or down in general, but happened to ramp up in my particular experiment).
Eventually the output clock from Device #2 being used as the backup input clock goes far enough out of frequency that it fails Device #1’s OOF criterion. The settled conditions are as follows:
Note that in general there is no reason why the devices could not be stable with each in the opposite states. Our experience has been that most of the time there is a preferred set of states but you will see the alternate set from time to time, almost as if there is a chaotic element to the results.
In this case, the Ouroboros configuration didn’t really buy us anything except perhaps a little time. However, note that the output frequency was ramping the entire time until Device #1’s OOF asserted and Device #2 still ends up relying on Device #1 HO clock. That’s just one potential issue for this impractical configuration. But there’s another potentially worse effect.
This configuration can also result in a positive feedback system that can be made to oscillate, leading to puzzling and odd behavior. In particular, this can happen if one of the devices can be made to enter and exit HO. For example, this phenomenon can be observed if the project plan OOF specs are tightened as follows.
Now the two devices will interact with each other and may never settle. Below is an annotated frequency plot of Device #2 output clock data from a logging frequency recorder. You can see that the Device #2 output frequency is slowly oscillating, with a varying period on the order of 8 or 9 seconds.
There are three features noted on the plot above about the state of Device #1 as Device #2's output frequency varies:
During this time period no alarms are issued by Device #2. This state can last indefinitely. I started one trial of this experiment on a Friday afternoon and it was still cycling on Monday morning. The devices can even exchange roles as to which one is in the HO state!
Having a device constantly entering and exiting HO is even worse than simply going straight into HO.
The bottom line is that the Ouroboros clock configuration either does nothing useful except delay entering HO or can even trigger an oscillation which produces repetitive wander in the output clock. Downstream clocks should generally stay downstream.
I hope you have enjoyed this Timing 101 article and will understand the implications if you spot an Ouroboros clock configuration.
As always, if you have topic suggestions, or there are questions you would like answered, appropriate for this blog, please send them to email@example.com with the words Timing 101 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading.
Keep calm and clock on,
Silicon Labs provides RF range calculators for customers to help estimate the actual range of their wireless applications. Simple RF Range Calculator is available to download here.
RF range depends on the following parameters:
- Propagation factor, which depends on the environment
Simple RF Range Calculator
This simple RF range calculator is for those customers who don’t want to deal with difficult RF questions and simply would like to get fast and reasonable results for both outdoor and indoor environments.
Simple RF Range Calculator provides fast and reasonably accurate results once the customer selects the frequency band and sets the TX and RX parameters:
Simple RF Range Calculator with frequency band selection
Frequency bands and custom frequency channels can also be selected:
Simple RF Range Calculator with custom frequency channel set up
TX Output Power and RX Sensitivity need to be set based on the radio device’s actual link parameters from its data sheet. If the exact antenna parameters are unknown, the notes at the right side can help determine the closest values:
Simple RF Range Calculator with notes
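Under the hood, range estimates like these come from a link budget combined with a path-loss model. The sketch below uses a simple log-distance model with a 1 m free-space reference; this is my illustrative assumption, not the calculator's actual algorithm, and the example numbers are made up:

```python
import math

def estimated_range_m(tx_power_dbm, rx_sens_dbm, tx_gain_dbi, rx_gain_dbi,
                      freq_hz, prop_factor=2.0):
    """Solve the link budget for distance using a log-distance path-loss
    model. prop_factor ~2 for outdoor line-of-sight, ~3-4 indoors."""
    link_budget_db = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - rx_sens_dbm
    wavelength_m = 3e8 / freq_hz
    pl_at_1m_db = 20 * math.log10(4 * math.pi / wavelength_m)  # free space, 1 m
    return 10 ** ((link_budget_db - pl_at_1m_db) / (10 * prop_factor))

# +10 dBm TX, -110 dBm sensitivity, 0 dBi antennas, 868 MHz:
outdoor = estimated_range_m(10, -110, 0, 0, 868e6)  # ideal free space
indoor = estimated_range_m(10, -110, 0, 0, 868e6, prop_factor=3.5)
print(f"outdoor ~{outdoor/1000:.1f} km, indoor ~{indoor:.0f} m")
```

Real tools also subtract a fade margin, so the ideal outdoor figure should be treated as an upper bound rather than an expected result.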
Hello and welcome to this inaugural article for Timing 101. My goal is to introduce and review technical topics of interest to board and systems designers who apply timing components or ICs (aka “clock chips”). Clock chips deliver frequency and phase information via clock waveforms, and in some cases packetized time information.
In this post, I will go over a common test set-up measurement situation whose results may be unexpected when one initially encounters jitter attenuators. I will first review some requisite background material, then present the "mystery" and its root cause, and finally suggest an improved test set-up.
Jitter and Phase Noise in a Nutshell
Briefly, clocks are periodic signals with digital signal levels used to sample data in a synchronous digital system. In other words, clocks provide the “heartbeat” or cadence necessary for sampling and sequentially processing data in synchronous digital circuits or systems. They are usually, but not always, at or near 50% duty cycle.
Ideal clocks would provide a perfect specified frequency and phase to optimize this process. However, practical clocks have timing jitter which can be defined as the short-term timing variation of the clock edges from their ideal values. One reason to care about clock jitter for synchronous digital systems is that it eats into the timing margins and, therefore, the reliability and validity of the data.
There is also a frequency domain counterpart to jitter: phase noise. Phase noise measures the random short-term phase fluctuations of a clock. It’s an indication of the spectral purity of the clock.
In short, phase noise data is presented as a tabular or graphical plot of L(f) [script ell of f]: the noise power in one phase modulation sideband relative to the carrier power, at frequency offsets from the carrier. For example, -70 dBc/Hz at 100 kHz offset and -150 dBc/Hz at 20 MHz offset. The dBc/Hz units refer to power in dB relative to the carrier power, per hertz of bandwidth. Phase noise is typically measured using a phase noise analyzer or a spectrum analyzer with a phase noise option.
Often shown on the same plot are non-random short-term clock phase fluctuations referred to as spurs or spurious. These spurs, depicted as discrete components, have units of dBc.
As with other systems analyses, we will generally find it easier to understand clock devices and clock distribution networks or clock trees in the frequency domain. (I plan to cover phase noise and spurs in more detail in a subsequent post.)
The Role of Jitter Attenuators
It’s not uncommon to have to work with (or at least start with) relatively noisy or jittery clocks. These can arise for a number of reasons. For example, when the clock is:
In such cases, we need a particular type of clock device, a jitter attenuator or "jitter cleaner", to attenuate or minimize phase noise and spurs over the offset frequencies of interest. The resulting output clock is then distributed to the devices that need its improved jitter performance.
The distinguishing characteristic of jitter attenuators is that they are essentially narrowband Phase Locked Loops (PLLs) with a "low pass" jitter transfer function. That is, these devices attenuate jitter components whose frequencies are greater than the PLL's loop bandwidth (BW). Modern jitter attenuators often have loop BWs programmable over a wide range, from as low as 0.1 Hz to as high as one or a few kHz.
By contrast, another category of clock chip, the clock generator, is a wideband PLL used primarily for clock multiplication from a low jitter source. These devices usually have fixed loop bandwidths on the order of 100s of kHz to 1 MHz.
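The low-pass behavior can be sketched with a first-order transfer function. Real jitter attenuator loops are higher order with some peaking, so this is only a rough model:

```python
import math

def jitter_transfer_db(f_jitter_hz: float, loop_bw_hz: float) -> float:
    """Magnitude (dB) of a first-order low-pass jitter transfer function:
    jitter below the loop BW is tracked, jitter above it is attenuated."""
    mag = 1.0 / math.sqrt(1.0 + (f_jitter_hz / loop_bw_hz) ** 2)
    return 20 * math.log10(mag)

# 1 kHz input jitter through two different loop bandwidths:
print(round(jitter_transfer_db(1e3, 100.0), 1))   # 100 Hz BW → ≈ -20.0 dB
print(round(jitter_transfer_db(1e3, 4000.0), 1))  # 4 kHz BW  → ≈ -0.3 dB
```

This matches the scope observations later in the post: with a 4 kHz loop BW, 1 kHz modulation passes essentially untouched, while a 100 Hz loop BW attenuates it by roughly 20 dB.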
The Measurement Problem
So what's the problem? Well, every so often a customer will contact us and write something like "We're testing one of your clock chips and comparing the output clock versus the input clock, and it seems surprisingly jittery." Invariably we will find that the test set-up boils down to something like the following, where the oscilloscope is being triggered by the jittery input clock.
The result will often look similar to that shown below. In this example the jitter attenuator is an Si5347 with loop BW = 100 Hz. The top yellow trace is the input clock, which is a 25 MHz sine wave from a signal generator with 1 kHz FM and 100 Hz deviation applied. The bottom green trace is the output clock, which is also 25 MHz just to keep things simple.
Shouldn't the output clock be less jittery? Is it jitter attenuated or not? This is the case of the (apparently) jittery jitter attenuated clock.
Given the measurement set-up shown earlier, three factors must be present to observe this apparent mystery:
Now you should be able to recognize the basic problem even if disguised in a more complicated application.
Note that if you triggered on the output clock then the input clock would look jittery by comparison. See below. Which clock appears jittery then is just a question of trigger perspective. This particular scope measurement is not conclusive without knowing which clock was more jittery a priori.
Diagnosis by Loop Bandwidth
You can obtain some insight as to what's really going on with this particular test configuration by playing with the Jitter Attenuator's loop bandwidth. Try narrowing and widening the BW and then observing the results on the scope.
Assuming a jittery input clock, you should generally see that widening the BW makes the output clock appear less jittery versus the input clock. This is because widening the BW means the PLL will track the input clock more, jitter and all. In Figure 4 below, the Si5347’s loop BW has been widened to 4 kHz. There is essentially no jitter attenuation and the output clock does not appear jittery compared to the input clock.
Conversely, narrowing the BW makes the output clock appear more jittery versus the input clock. That's because a narrower loop BW corresponds to more jitter attenuation. Ironically, it is the very success of the jitter attenuator in this test configuration that is the root cause of the apparent mystery. If the output clock simply tracked the input clock then the trigger source would be irrelevant. In the figure below, the Si5347’s loop BW is narrowed back down to 100 Hz.
A jitter attenuated clock is generally different from its jittery input clock, above and beyond any frequency scaling. If its spectrum has significantly changed, this should be relatively obvious when measuring and comparing the phase noise of each clock. However, as I mentioned before, this takes specialized equipment such as a phase noise analyzer or a spectrum analyzer with a phase noise option.
Third Party Arbitration
OK, so what's a better way to simultaneously compare the jittery input and jitter attenuated output clocks if all you have is a scope? Find a third party to arbitrate. In other words, find or generate a low jitter reference clock integer-related and synchronous to both the input and output clocks. Then use this reference as the trigger for both the input and output clocks. See the revised test set-up diagram in the figure below. Now you can clearly and fairly compare the jitter of the input and output clocks simultaneously in the time domain.
Here are a couple of example plots in which all the oscilloscope traces are 25 MHz as before. The top yellow trace is the jittery (frequency modulated) input clock and the middle green trace is the jitter attenuator’s output clock. The bottom blue trace is the new low jitter reference clock being used as the trigger. In the first instance, in the figure below, the jitter attenuator’s loop BW is 4 kHz and the output clock is fairly jittery just like the input clock.
In the second instance, in the figure below, the jitter attenuator’s loop BW is 100 Hz and the output clock is much less jittery. In this particular example, the standard deviation of the jitter attenuated clock’s cycle to cycle jitter dropped from 8.2 ps to 1.1 ps when the loop BW was decreased from 4 kHz to 100 Hz.
I hope you have enjoyed this first Timing 101 article and, if you are new to the field, that you won't be caught off guard should you run into this scenario.
Some of the subjects I hope to cover in later installments include a deeper dive into jitter, phase noise, clock trees, wander, and output clock formats. If you have topic suggestions, or there are questions you would like answered, appropriate for this blog, please send them to firstname.lastname@example.org with the words Timing 101 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading.
Keep calm and clock on. And click here for more information
The more power generated by a smaller power supply, the more cost-effective it is and the less room it takes in the design. Designers can improve W/mm3 with faster switching. Switching technology using Gallium Nitride (GaN) or Silicon Carbide (SiC) allows switching rates up to 20 times faster than are currently available.
The technologies’ lateral structures, compared to silicon's vertical structure, make them low-charge devices capable of switching hundreds of volts in mere nanoseconds (ns). The challenge is that they require isolated gate drivers to work, and today’s technology typically cannot provide the noise immunity required by their superfast switching.
Faster Switching - SMPS Want It
Fast power switching is most prevalent in switched mode power supplies (SMPS). SMPS convert their input power from ac to dc (ac-dc) or from dc to dc (dc-dc). In most cases, they also change voltage levels to suit the needs of the application.
Typical ac-dc SMPS block diagram
The new GaN and SiC switches are faster and more efficient than today’s silicon switches. But their faster switching causes higher switching transients, as shown in the figure below with a typical 600 V high-side rail.
Switching transients in a power converter
GaN switching times are typically about 5 ns, or about 10x to 20x faster than conventional systems. In this case, the 600 V high-voltage rail results in a 120 kV/µs transient (600 V / 5 ns = 120 V/ns = 120 kV/µs).
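The slew-rate arithmetic above is worth making explicit, since it is what drives the CMTI requirement discussed next. A quick sketch using the numbers from the text (600 V rail, 5 ns switching time):

```python
# dV/dt of the switching transient described above.
# Values come from the text: a 600 V high-side rail switched in ~5 ns.
rail_voltage_v = 600.0   # high-side rail, volts
switch_time_s = 5e-9     # GaN switching time, seconds

dv_dt_v_per_s = rail_voltage_v / switch_time_s
dv_dt_kv_per_us = dv_dt_v_per_s / 1e9   # 1 kV/us = 1e9 V/s

print(f"Transient slew rate: {dv_dt_kv_per_us:.0f} kV/us")  # 120 kV/us
```

Any isolated gate driver in this path must therefore be rated for at least 120 kV/µs of common mode transient immunity.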
CMTI – Common Mode Transient Immunity – the Key Isolated Gate Driver Spec
The isolated gate drivers controlling the power switches have to be designed to withstand these noise transients without creating glitches or latching up. The ability of the driver to withstand these common mode noise transients is generally defined as CMTI (common mode transient immunity) and is expressed in units of kV/µs.
What are the Isolated Gate Driver Options?
Isolated gate drivers must preserve the integrity of the isolation from the primary to the secondary side. There are a number of isolated gate driver solutions available today:
There are other advantages for these drivers. Their latency (propagation delay) can be as much as 10x better than popular optocoupled gate drivers, and their part-to-part matching can be more than 10x better. This provides the designer with another key advantage—the system’s overall modulation scheme can be fine-tuned for maximum efficiency (W/mm3) and safety without having to accommodate specification slop. They also support lower voltage operation (2.5 V compared to 5 V) and a wider operating temperature range.
Plus, they offer advanced features such as input noise filters, asynchronous shutdown capability, and multiple configurations such as half bridge or dual independent drivers in a single package.
Finally, they are rated to 60 years of operating life at high voltage conditions, longer than any other comparable solution.
Power supply designers want to maximize their W/mm3 using the fastest power switching technology available. The latest GaN- and SiC-based switches are the fastest technology available today, but they require isolated gate drivers with very high noise immunity (CMTI).
The Si827x isolated gate drivers from Silicon Labs meet GaN’s and SiC’s noise immunity requirements with margin to spare (120 kV/µs required, 200 kV/µs supplied).
Signal Isolation Basics
Isolating signals is necessary to provide the following design-critical functions:
In order to ensure that true isolation has been achieved, it is important for the circuit designer to eliminate all possible coupling paths from one circuit (Circuit A in Figure 1) to the circuit that needs to be isolated from it (Circuit B). Hence, when isolating signals, it is equally important to isolate the power supplies. For a circuit designer, the challenge of isolating signals is really two-fold: to provide safe, reliable and accurate signal isolation as well as power isolation. There are multiple solutions available for signal isolation to suit the needs of designers, based on data rate capabilities, jitter restrictions, noise immunity concerns, high voltage capability, and compliance with the various isolation component safety standards. However, for many applications where only a watt or so of isolated power is required, there have not been readily available or easily implementable solutions for power isolation.
Factory automation systems depend on efficient and reliable real-time distributed networks to monitor and control complex manufacturing processes. A typical and simplified hierarchical structure used in these systems is shown in Figure 6. The human-machine interface in the control room at the top is linked to an intermediary controller level and finally down to the physical layer where the sensors and actuators are situated as part of motor drive units or machines controlled by PLCs (programmable logic controllers).
The physical layer connects the sensors and actuators in a process module and across the factory floor or plant. As shown above, a CAN-based bus communicates with the various motor control units while an RS-485-based bus (PROFIBUS) communicates with the various machines on the factory floor. These physical layers are used commonly in industrial automation because they are very robust even in a noisy environment and support the long distance, multi-point communication needed on a factory floor that may cover hundreds of square meters.
These buses have multiple nodes that connect to the bus through a CAN or an RS-485 transceiver. Isolating these interfaces is critical to protect against high voltages, high electromagnetic (EM) noise and large ground potential differences within the network.
The illustration below shows a detailed diagram of an RS-485 transceiver node that has been isolated from the processor. The isolated power solution is referred to as the isolated dc-dc converter block. Very few easy-to-deploy, high-performance isolated power solutions are available to developers. Designers frequently have to design their own solutions from scratch to provide isolated power to the secondary side of the isolator and to the RS-485 transceiver on the isolated side.
The transceiver in the illustration below is a half-duplex device with receive and transmit lines connected together. It communicates with the RS-485 bus through differential I/Os labelled A and B in Figure 3. The transceiver provides the interface to the processor through its single-ended digital I/Os labelled Rx (receiver) and Tx (transmitter) and an EN (enable pin) signal that controls the transmitter.
The transceiver typically has two to four digital signals that require fast and accurate digital isolation and needs 0.5 W to 1 W of power, which has to be supplied by a dedicated isolated source with the following characteristics:
Solutions for industrial isolation
There are only a few products on the market that strike the right balance between compactness and the ability to deliver power and between minimizing emissions while maximizing efficiency.
Discrete solutions that use FETs, controllers, and single-channel isolators (or optocouplers) for feedback, along with other supporting BOM components for power isolation, are very common. Such solutions have to be designed from scratch, demand specialized experience and skill, and can take multiple iterations to get right.
Some solutions integrate digital isolation and the power transformer in a single IC package. These air core transformers have poor coupling coefficients and need to be driven at much higher frequencies to deliver equivalent power. This results in a much higher emissions profile for EMI, which is a strong deterrent for many designers.
In addition, the power converter efficiency of such products is usually low, from 10 to 35 percent. In applications where space is at a premium, efficiency is a “don’t-care” and high emissions are not a problem, these might work. But more often than not, such solutions are not compelling.
There are other solutions that integrate the signal isolators and the dc-dc converter and are designed to work with a discrete transformer. This approach is optimized for the highest efficiency and integration. These compact, total solutions can deliver up to 2 W of power at about 78 percent efficiency.
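The efficiency gap between the two approaches is easier to appreciate as dissipated heat. A small sketch comparing the figures quoted above (10 to 35 percent for the integrated air-core approach versus about 78 percent for the discrete-transformer approach), at the roughly 1 W load the RS-485 node needs:

```python
# Power dissipated as heat in the isolated dc-dc stage for a given
# output power and conversion efficiency. The efficiency figures are
# the ones quoted in the text; 1 W is a representative load.
def dissipated_power(p_out_w: float, efficiency: float) -> float:
    """Heat lost in the converter: input power minus output power."""
    p_in_w = p_out_w / efficiency
    return p_in_w - p_out_w

p_out = 1.0  # about 1 W of isolated power for the RS-485 transceiver node
print(f"At 30% efficiency: {dissipated_power(p_out, 0.30):.2f} W lost as heat")
print(f"At 78% efficiency: {dissipated_power(p_out, 0.78):.2f} W lost as heat")
```

Roughly 2.3 W of heat at 30 percent efficiency versus about 0.3 W at 78 percent, which is the difference between a thermal problem and a non-issue in a compact node.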
For example, Silicon Labs’ Si88xx isolation products combine quad digital isolators with a modified flyback topology dc-dc converter with built-in secondary sensing feedback control. The Si88xx devices have been designed for very low emissions by employing dithering techniques.
Additional features include a soft start capability to avoid inrush currents on startup, cycle-by-cycle current limiting, thermal detection and shutdown for over-temperature events, and cycle skipping to reduce switching losses and thus boost efficiency at lighter loads.
Options for the Si88xx isolators are available for various voltage levels from 5 V to 24 V and for various combinations of digital isolation channels and their directionality. This solution leverages Silicon Labs’ proprietary signal isolation technology, with its signature low EMI profile, to provide high integration, high efficiency and very low EMI.
Figure 4 provides a simplified block diagram of an Si88xx isolator. In addition to the four high-speed digital isolation channels, the Si88xx device integrates a dc-dc controller and internal FET switches that modulate power to the external transformer. The output side incorporates feedback through an external resistor divider to provide excellent line and load regulation.
The dc-dc converter uses dithering techniques to minimize EMI peaks and a zero voltage switching (ZVS) scheme to minimize power loss when modulating power to the transformer. The device uses cycle skipping at light loads to minimize switching losses and boost efficiency. Multiple safety features include cycle-by-cycle current limiting, soft start to avoid inrush currents and thermal shutdown. The device also incorporates several user-programmable features such as soft start time control, a shutdown option for the dc-dc converter and switching frequency control to fine-tune the EMI profile.
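The external resistor-divider feedback mentioned above follows the usual secondary-sensing relationship: the converter regulates until the divided-down output matches an internal reference. A minimal sketch, where the reference voltage and resistor values are hypothetical illustrations and not Si88xx datasheet figures:

```python
# Sketch of resistor-divider feedback regulation. The converter servos
# Vout * Rb / (Rt + Rb) to an internal reference Vref, so:
#     Vout = Vref * (Rt + Rb) / Rb
# Vref = 1.05 V and the resistor values are hypothetical examples.
def output_voltage(v_ref: float, r_top: float, r_bottom: float) -> float:
    """Regulated output voltage for a given reference and divider."""
    return v_ref * (r_top + r_bottom) / r_bottom

# Example: a divider chosen for a ~5 V isolated output
vout = output_voltage(v_ref=1.05, r_top=37.6e3, r_bottom=10e3)
print(f"Regulated output: {vout:.2f} V")
```

Because the divider senses the actual secondary-side output, line and load variations are corrected directly rather than inferred from the primary side.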
The Si88xx is an ideal fit for the application example discussed above, as shown in Figure 5. The isolation transformer is rated to 2.5 kVrms and is designed to work with the Si88xx IC. By adding a few other components such as resistors, diodes and capacitors, a complete power and signal isolation solution is available.
Elegant solutions that combine excellent digital isolation characteristics with high power conversion efficiency and extremely low EMI emissions are now available that make development easier for the digital designer. These are plug and play solutions that eliminate costly design time and iterations and take the guesswork completely out of the picture, ensuring first time success and the fastest time to market.
Check out the Si88X Isolator Evaluation Kit here.
Electromagnetic relays (EMRs) are broadly used in motor control, automotive, HVAC, valve control, solar inverter and many other industrial applications. Over the last 10 years, solid-state relays (SSRs) have seen fast growth as they begin to replace EMRs. Designers have found that SSRs can address most of the limitations of EMRs. But as is often the case with alternative solutions, SSRs have their own set of tradeoffs that can challenge designers. A third alternative exists: using custom SSRs. Let’s examine the limitations and tradeoffs of each approach.
Fast Growth of SSRs at the Expense of EMRs
Electromechanical relays use a coil that, when sufficiently energized, moves an armature to switch contacts based on the magnetic flux generated. EMRs have the benefit of being truly off, with no leakage current, when not energized. However, they have many limitations that SSRs can address, which is contributing to SSR market growth. SSRs have no moving parts as they are based on semiconductor technology, which directly contributes to better reliability, much longer lifetime and fast switching speed. As EMRs switch, the contacts generate both acoustical and electrical noise along with arcing, which makes them unsuitable in some applications. EMRs are bulky, often impacting industrial design and placement options on a printed circuit board (PCB).
The typically large, through-hole EMRs also increase manufacturing cost versus smaller surface-mount SSRs.
Design Challenges of Optocoupler based SSRs
As designers look to SSRs to address EMR limitations, they are finding a different set of challenges. SSRs offer small board space when used in low-power switching applications. However, higher power switching applications must use larger custom packages to deal with the power dissipation and heat of the integrated FETs. Quite often the SSR user must compromise on FET performance, power or cost as there are limited choices of integrated FETs.
SSRs typically use optocoupler-based designs to achieve isolation. These optocoupler designs have inherent LED limitations such as poor reliability and stability across temperature and time. A key optocoupler wear-out mechanism is the decline in LED light output. As LEDs age their light output declines, which negatively impacts timing. The degradation in light output grows worse over time with increased temperature and higher currents.
Other common issues include unstable input current thresholds and complicated current transfer ratio (CTR) behavior. Designers are forced to use more current and add external components to address these issues, or use alternatives to optocoupler-based isolation. More and more industrial applications such as industrial drives, solar inverters, factory automation and metering are targeting 20+ years of system life, so it is important for the designer to carefully consider these effects on the system lifetime.
Alternative Custom SSR Using Optocoupler-Based Isolation
Many system designers prefer to use existing high-volume and cost-effective discrete FETs, as their performance and thermal characteristics are well understood in contrast to the often unknown integrated FETs of SSRs in non-standard packaging. A custom SSR enables the use of these application-optimized FETs instead of the typically compromised FETs that are integrated into SSRs. There is a tradeoff in board space versus low-power switching SSRs, but this tradeoff becomes less important with higher-power SSRs due to the heat dissipation challenges of integrated SSRs.
The figure below shows a custom SSR based on traditional optocoupler based isolation. A secondary, switch side power supply is usually required with these types of solutions as power is not transferred across the isolation barrier.
Alternative Custom SSR Using CMOS-Based Isolation
Over the last few years, multiple semiconductor suppliers have introduced more advanced CMOS-based isolation products with double-digit growth over traditional optocoupler-based isolation. This is especially true in high temperature and high reliability industrial applications. Traditional optocoupler-based custom SSRs can address the limitations of integrated FETs but require an additional power supply. Even then, custom SSRs built around an optocoupler cannot address the inherent limitations of using LEDs. Another alternative is now available that enables developers to use their choice of application-specific, high-volume FETs without the disadvantages of optocoupler-based designs, as shown below.
The Silicon Labs Si875x family features the industry’s first isolated FET drivers designed to transfer power across an integrated CMOS isolation barrier, eliminating the need for isolated secondary switch-side power supplies and reducing system cost and complexity. Since Si875x drivers do not use LEDs or optical components, they provide superior stability over time and temperature with up to 125˚C automotive operation. A single Si875x can support either dc or ac load switching with one FET required for dc load switching as shown in Figure 2 or two FETs for ac load switching as shown here.
Developers have the option to use a CMOS digital input with the Si8751 (Figure 2) or the diode emulation input of the Si8752 (Figure 3). The Si8752 isolated FET driver makes it easy for developers to migrate from optocoupler-based solutions while efficiently generating a nominal 10.3 V gate drive using only 1 mA of input current. Optional Miller clamp inputs allow the addition of a capacitor to eliminate the possibility of inductive kickback changing the state of the switch in applications with high dV/dt present on the FET’s drain. The Si8751 easily interfaces with low-power controllers down to 2.25 V and provides a unique low-power TT mode that delivers exceptionally fast turn-on speeds, as fast as 100 µs, while dropping static holding current as much as 90 percent. An optional capacitor is tied to ground using the TT pin to enable this power-saving feature. This approach allows the device to draw more current to initially switch the external FET on quickly, yet draw less supply current in the steady state. Total power over time is reduced while maintaining the FET’s fast switching speed.
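The power saving from TT mode comes from time-weighting: a brief high-current turn-on burst followed by a much lower holding current. A back-of-the-envelope sketch, where the burst current, burst duration, and ON interval are hypothetical illustrations (only the 1 mA nominal input current and the 90 percent holding-current reduction come from the text):

```python
# Time-weighted average supply current over one ON interval with a
# TT-mode-style profile: a short high-current turn-on burst, then a
# reduced holding current. Burst amplitude/duration and the ON time
# are hypothetical example values, not Si8751 datasheet figures.
def avg_current(i_burst_a, t_burst_s, i_hold_a, t_on_s):
    """Average current = charge delivered per ON interval / interval."""
    return (i_burst_a * t_burst_s + i_hold_a * (t_on_s - t_burst_s)) / t_on_s

i_full = 1e-3             # 1 mA nominal input current (from the text)
i_tt_hold = 0.1 * i_full  # holding current cut by 90%

# Hypothetical profile: 10 mA burst for 100 us, then hold for a 10 ms ON time
i_avg = avg_current(10e-3, 100e-6, i_tt_hold, 10e-3)
print(f"Average supply current: {i_avg * 1e3:.3f} mA")
```

Even with a burst ten times the nominal current, the long low-current holding phase dominates the average, which is the point of the scheme.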
Developers face continual challenges to implement next-generation designs with lower system cost, higher performance and better reliability. Offering a unique combination of robust, reliable CMOS-based isolation technology and advanced capability to transfer power across the isolation barrier, new isolated FET drivers now provide a much-needed replacement solution for antiquated EMRs and optocoupler-based SSRs. CMOS-based isolated FET drivers give developers the flexibility to choose a cost-effective FET customized to their application needs, creating an easy migration to state-of-the-art solid-state switching.
For more information on the Si875x isolated FET driver family, click here.
This article originally appeared in PowerPulse.net
Green standards are challenging power designers to deliver more energy-efficient, cost-effective, smaller, and more reliable power delivery systems. A critical building block within ac-dc and isolated dc-dc power supplies is the isolated gate driver. These trends push the need for greater power efficiency and increased isolation-device integration.
Optocoupler-based solutions and gate-drive transformers have been the mainstay for switch-mode power supply (SMPS) systems for many years, but fully integrated isolated gate driver products based on RF technology and mainstream CMOS provide more reliable, smaller, and power-efficient solutions.
Anatomy of an Isolated Power Converter
Isolated power converters require power stage and signal isolation to comply with safety standards. The example below shows a typical ac-dc converter for 500 W to 5 kW power systems, such as those used in high-efficiency data center power supplies.
From a high-level perspective, this two-stage system has a power factor correction (PFC) circuit that forces power system ac line current draw to be sinusoidal and in-phase with the ac line voltage; thus, it appears to the line as a purely resistive load for greater input power efficiency.
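The benefit of forcing the line current sinusoidal and in phase can be quantified: for a sinusoidal current, the displacement power factor is the cosine of the phase angle between line voltage and current, and a PFC stage drives that angle toward zero. A minimal sketch (the 45-degree uncorrected angle is a hypothetical example):

```python
# Displacement power factor for a sinusoidal line current: cos(theta),
# where theta is the phase angle between line voltage and current.
# A PFC front end drives theta toward zero so the supply looks like a
# purely resistive load. The 45-degree example angle is hypothetical.
import math

def power_factor(phase_deg: float) -> float:
    """Displacement power factor for a given phase angle in degrees."""
    return math.cos(math.radians(phase_deg))

print(f"Without PFC, 45 deg lag: PF = {power_factor(45):.2f}")
print(f"With PFC, ~0 deg lag:    PF = {power_factor(0):.2f}")
```

At the same delivered real power, a lower power factor means proportionally higher RMS line current, and thus more distribution loss, which is why efficiency standards push power factor toward unity.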
The high-side switch driver inputs above are referenced to the primary-side ground, and its outputs are referenced to the high-side MOSFET source pins. The high-side drivers must be able to withstand the 400 VDC common-mode voltage present at the source pin during high-side drive, a need traditionally served by high-voltage drivers (HVIC).
The corresponding low-side drivers operate from a low voltage supply (e.g., 18 V) and are referenced to the primary-side ground. The two ac current sensors in the low-side legs of the bridge monitor the current in each leg to facilitate flux balancing when voltage mode control is used. The isolation barrier is provided to ensure that there is no current flow between the primary- and secondary-side grounds; consequently, the drivers for synchronous MOSFETs Q5 and Q6 must be isolated.
The secondary-side feedback path must also be isolated for the same reason.
Gate Driver Solutions
Although optocouplers are commonly used for feedback isolation, their propagation delay performance is not fast enough to achieve the full benefit of the synchronous MOSFET gate-drive isolation circuit.
Optocouplers with faster delay-time specifications are available, but they tend to be expensive while still exhibiting some of the same performance and reliability issues found in lower-cost optocouplers. These include unstable operating characteristics over temperature, device aging, and marginal common mode transient immunity (CMTI) resulting from a single-ended architecture with high internal coupling capacitance. In addition, the Gallium Arsenide-based process technologies common in optocouplers create an intrinsic wear-out mechanism (“Light Output” or LOP) that causes the LED to lose brightness over time.
Gate Drive Transformers
Given the above considerations, gate drive transformers have become a more popular method of providing isolated gate drive. Gate drive transformers are miniature toroidal transformers preferred over optocouplers because of their shorter delay times. However, they cannot propagate a dc level or low-frequency ac signal, and they can pass only a finite voltage-time product across the isolation boundary, thereby restricting ON time (tON) and duty cycle ranges.
These transformers must also be reset after each ON cycle to prevent core saturation, necessitating external circuitry. Finally, transformer-based designs are inefficient, have high EMI, and occupy excessive board space.
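The volt-second limitation above translates directly into a maximum ON time: the core's volt-second rating divided by the gate-drive amplitude. A quick sketch, where the 50 V·µs rating and 12 V drive are hypothetical example values:

```python
# Maximum ON time a gate drive transformer can support before its core
# saturates: the core's volt-second rating divided by the drive voltage.
# Both numbers below are hypothetical example values for illustration.
v_s_product = 50e-6    # volt-second rating of the core (50 V·us)
gate_drive_v = 12.0    # gate drive amplitude, volts

t_on_max_s = v_s_product / gate_drive_v
print(f"Max ON time before saturation: {t_on_max_s * 1e6:.2f} us")
```

Any modulation scheme whose ON time or duty cycle exceeds this budget needs reset circuitry or a different isolation approach, which is the opening for the CMOS-based drivers discussed next.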
CMOS-based Isolated Gate Drivers
Fortunately, better alternatives to gate drive transformers and optocouplers are now available. Advancements in CMOS-based isolation technology have enabled isolated gate drive solutions that offer exceptional performance, power efficiency, integration, and reliability. Isolated gate drivers, such as Silicon Labs’ Si823x ISOdriver family, combine isolation technology with gate driver circuits, providing integrated, low-latency isolated driver solutions for MOSFET and insulated-gate bipolar transistor (IGBT) applications.
The Si823x ISOdriver products are available in three basic configurations (see Figure 2), including:
The Si823x ISOdriver family supports 0.5 A and 4.0 A peak output drive options and is available in 1 kV, 2.5 kV, and 5 kV isolation ratings. The high-side/low-side versions have built-in overlap protection and an adjustable dead time generator (dual ISOdriver versions contain no overlap protection or dead time generator). As such, the dual ISOdriver can be used as a dual low-side, dual high-side or high-side/low-side isolated driver.
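The overlap protection and dead time generation mentioned above can be illustrated in discrete time: delay each rising edge of the complementary high-side/low-side drive signals so both switches are never on simultaneously. This is a behavioral sketch of the concept, not the Si823x implementation:

```python
# Behavioral sketch of a dead-time generator: given one PWM signal,
# produce complementary high-side and low-side drives whose rising
# edges are delayed by a fixed dead time so both are never high at
# once (no shoot-through). Discrete-time model for illustration only.
def add_dead_time(pwm, dead_ticks):
    """Return (high, low) drive sequences from 0/1 PWM samples."""
    def delay_rise(sig):
        out, run = [], 0
        for s in sig:
            run = run + 1 if s else 0          # length of current high run
            out.append(1 if run > dead_ticks else 0)
        return out
    high = delay_rise(pwm)                     # high-side follows pwm
    low = delay_rise([1 - s for s in pwm])     # low-side is the complement
    return high, low

pwm = [1, 1, 1, 1, 0, 0, 0, 0]
high, low = add_dead_time(pwm, dead_ticks=1)
assert not any(h and l for h, l in zip(high, low))  # never both on
print("high:", high)
print("low: ", low)
```

Each drive waits one sample after its commanded rising edge before turning on, guaranteeing a gap during which both switches are off while each falls immediately, which is the essential overlap-protection behavior.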
These devices have a three-die architecture in which each drive channel is isolated from the others as well as from the input side. This allows the polarity of the high-side and low-side channels to reverse without latch-up or other damage.
Read the Whitepaper
To learn more about how isolated gate drivers can significantly increase the efficiency, performance, and reliability of switch-mode power supplies compared to legacy solutions, check out this whitepaper.