Timing 201 #9: The Case of the Really Slow Jitter – Part 1
Introduction

You have probably read or heard that phase noise is the frequency-domain equivalent of jitter in the time domain. That is essentially correct, except for what would appear to be a somewhat arbitrary dividing line: phase noise below 10 Hz offset frequency is generally considered wander, as opposed to jitter. Consider the screen capture below, where I have measured phase noise down to a 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left-hand side and jitter is on the right-hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter, and why do we care?

From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s slow both in terms of phase modulation and in how long it takes to measure.

The topic of wander covers a lot of material; even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go into some detail regarding a primary wander metric: MTIE, or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV, or Time Deviation. Finally, I plan to wrap up with some example lab data.

Some Formal Definitions

The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96), Definitions and terminology for synchronization networks [1], defines jitter and wander as follows.

4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz).
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz).

Similarly, the SONET standard Telcordia GR-253-CORE [2] states in a footnote: “‘Short-term variations’ implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy.”

Wander and jitter are clearly very similar, since both are “variations of the significant instants of a timing signal from their ideal positions in time”. Both are ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference.

Consider, by analogy, the electromagnetic radiation spectrum, which is divided into several bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense these are all “light”. However, the different types of EM radiation are generated and detected differently, and interact with materials differently, so it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander-versus-jitter case, in that these categories of phase fluctuations differ technologically.

So how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would simply track wander, and so it could accumulate. Networks have to use other means, such as buffers or pointer adjustments, to accommodate or mitigate wander. Think of the phase noise offset region 10 Hz and above as “PLL Land”.

Things have changed since these standards were written. Back in the day, it was uncommon or impractical to measure phase noise below 10 Hz offset.
Now phase noise test equipment can go down to 1 Hz or below. Likewise, with digital and FW/SW PLLs it is possible to build very narrowband PLLs that provide some “wander attenuation”. Nonetheless, 10 Hz offset remains a useful dividing line and lives on in the standards.

Wander Mechanisms

Clock jitter is due to the relatively high-frequency inherent or intrinsic jitter of an oscillator or other reference, ultimately caused by flicker noise, shot noise, and thermal noise. Post-processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systematic or deterministic jitter components can also occur due to crosstalk, EMI, power supply noise, reflections, etc.

Wander, on the other hand, is caused by slower processes. These include lower-frequency-offset oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see [3]. Since wander mechanisms differ, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards.

Wander Terminology and Metrics

You may recall the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency-domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements. By contrast, wander is typically measured with time-domain equipment. Counterpart definitions apply, as listed below.
Wander has its own peculiar metrics, too. In particular, standards bodies such as the ITU rely on masks that provide limits to wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See, for example, ITU-T G.8262 [4].
Very briefly, MTIE looks at peak-peak clock noise over intervals of time, as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering; we will discuss TDEV next time. Before going into detail about MTIE, let’s discuss the foundational measurements Time Error and TIE (Time Interval Error). Both are defined in the previously cited ITU-T G.810.

Time Error (TE)

The Time Error function x(t) is defined as follows for a measured clock generating time T(t) versus a reference clock generating time Tref(t):

x(t) = T(t) − Tref(t)

The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
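As a quick illustration of the Time Error definition, here is a minimal Python sketch. The timestamp data are hypothetical (a clock running 10 ppm fast), chosen only to make the arithmetic visible; they do not come from any measurement in this post.

```python
# Time Error per ITU-T G.810: x(t) = T(t) - Tref(t).
# Hypothetical example: a clock running 10 ppm fast, sampled once per second.

def time_error(measured_times, reference_times):
    """Return the Time Error sequence x[n] = T[n] - Tref[n]."""
    return [t - tref for t, tref in zip(measured_times, reference_times)]

# Reference clock regarded as ideal: Tref(t) = t.
tref = [float(n) for n in range(5)]              # seconds
t_meas = [n * (1.0 + 10e-6) for n in range(5)]   # 10 ppm frequency offset

x = time_error(t_meas, tref)
print(x)  # grows ~10 microseconds per second: roughly [0, 1e-5, 2e-5, 3e-5, 4e-5]
```

Note that a pure frequency offset shows up as a Time Error ramp; this is exactly the kind of slow, unbounded phase movement that wander metrics are designed to capture.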
Similarly, the Time Interval Error function is defined as follows, where the lower-case Greek letter tau is the time interval or observation interval:

TIE(t; tau) = x(t + tau) − x(t)

Maximum Time Interval Error (MTIE)
MTIE for an observation interval tau is the largest peak-to-peak variation of the Time Error x(t) seen in any window of length tau during the measurement period.

The sampling period tau0 represents the minimum measurement or observation interval. Many synonymous terms are used in the industry and should be recognizable in context: averaging time, sampling interval, sampling time, etc. If you are using an oscilloscope to capture TIE data, the sampling period could mean every nominal clock period. However, most practical measurements over long periods of time only sample the clock; in that case the sampling period would correspond, for example, to a frequency counter’s “gate time” when post-processing frequency data to obtain phase data.

An MTIE Example

It’s easier to show the general idea at this point. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau = 1*tau0 observation interval, or window, as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0, as is customary, to show changes in Time Error or phase since the start of the measurement.) The initial xppk peak-peak value at the location shown is about 1.1 ns − 0 ns = 1.1 ns.
Now slide the tau = 1*tau0 observation interval to the right; the next xppk peak-peak value is 1.4 ns − 1.1 ns = 0.3 ns. If we continue in this vein to the end of the data, we find the worst case between 17*tau0 and 18*tau0, where the value is 7.0 ns − 4.0 ns = 3.0 ns. Therefore, the MTIE for tau = 1*tau0 is 3.0 ns. I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx; note that the first value in the plot is the 3 ns just mentioned. This is a relatively simple example, for illustration only. MTIE data typically span many decades and are plotted against masks on logarithmic scales. However, even this simple example suggests a couple of items to note about MTIE plots:
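The sliding-window procedure just described is easy to prototype. Here is a minimal brute-force Python sketch; the Time Error samples are made up for illustration and are not the figure’s dataset, though they reproduce the same kind of worst-case swing.

```python
# Brute-force MTIE: for each observation interval tau = n*tau0, slide a
# window of n+1 samples across the Time Error data and record the largest
# peak-to-peak variation found at any window position.

def mtie(x, n):
    """MTIE for tau = n*tau0, where x is a list of Time Error samples
    taken every tau0 seconds."""
    window = n + 1  # a window of length n*tau0 spans n+1 samples
    return max(max(x[i:i + window]) - min(x[i:i + window])
               for i in range(len(x) - window + 1))

# Hypothetical Time Error samples in ns (one per tau0):
x = [0.0, 1.1, 1.4, 0.9, 2.0, 4.0, 7.0, 6.5]

print(mtie(x, 1))  # worst single-step swing: 7.0 - 4.0 = 3.0 ns
print(mtie(x, 3))  # MTIE can only grow (or stay flat) as tau increases
```

This O(N^2) approach is fine for a sketch; production tools use faster algorithms, since real MTIE datasets span many decades of tau over very long captures.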
Why is MTIE Useful?

MTIE is a relatively computation-intensive measurement, so what good are these types of plots? There are at least two good reasons, besides standards compliance:
Conclusion

In this post, I have discussed the differences between wander and jitter and the motivation for understanding wander, and delved into MTIE, a wander metric important for standards compliance and useful in sizing buffers. I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV, or Time Deviation.

As always, if you have topic suggestions or questions appropriate for this blog, please send them to kevin.smith@silabs.com with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in.

Thanks for reading. Keep calm and clock on.

Cheers,

References
[1] ITU-T G.810, Definitions and terminology for synchronization networks
[2] Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria
[3] Understanding Jitter and Wander Measurements and Standards, 2003
[4] ITU-T G.8262, Timing characteristics of a synchronous equipment slave clock
[5] K. Shenoi, Clocks, Oscillators, and PLLs: An introduction to synchronization and timing in telecommunications, WSTS 2013, San Jose, April 16–18, 2013
[6] L. Cossart, Timing Measurement Fundamentals, ITSF, November 2006
|
32 days ago |
|
Updated
Timing 201 #9: The Case of the Really Slow Jitter – Part 1 on Blog
Introduction You have probably read or heard that phase noise is the frequency domain equivalent of jitter in the time domain. That is essentially correct except for what would appear to be a somewhat arbitrary dividing line. Phase noise below 10 Hz offset frequency is generally considered wander as opposed to jitter. Consider the screen capture below where I have measured phase noise down to 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left hand side and jitter is on the right hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter and why do we care? From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s both slow in terms of phase modulation and in how long it takes to measure. The topic of wander covers a lot of material. Even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go in to some detail regarding a primary wander metric: MTIE or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV or Time Deviation. Finally, I plan to wrap up with some example lab data. Some Formal Definitions The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96) Definitions and terminology for synchronization networks [1] defines jitter and wander as follows. 4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz). 
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz). Similarly, the SONET standard Telcordia GR-253-CORE [2] states in a footnote “Short-term variations” implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy. Wander and jitter are clearly very similar since they are both “variations of the significant instants of a timing signal from their ideal positions in time”. They are also both ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference. Consider by analogy the electromagnetic radiation spectrum, which is divided into several different bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense, these are all “light”. However, the different types of EM radiation are generated and detected differently and interact with materials differently. So it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander versus jitter case in that these categories of phase fluctuations differ technologically. So, how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would just track wander, and so it could accumulate. Networks have to use other means such as buffers or pointer adjustments to accommodate or mitigate wander. Think of the phase noise offset region, 10 Hz and above, as “PLL Land”. Things have changed since these standards. Back in the day it was uncommon or impractical to measure phase noise below 10 Hz offset. 
Now phase noise test equipment can go down to 1 Hz or below. Likewise with digital and FW/SW PLLs it is possible to have very narrowband PLLs which can provide some “wander attenuation”. Nonetheless, 10 Hz offset remains a useful dividing line and lives on in the standards. Wander Mechanisms Clock jitter is due to the relatively high frequency inherent or intrinsic jitter of an oscillator or other reference ultimately caused by flicker noise, shot noise, and thermal noise. Post processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systemic or deterministic jitter components also can occur due to crosstalk, EMI, power supply noise, reflections etc. Wander, on the other hand, is caused by slower processes. These include lower frequency offset oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see [3]. Since wander mechanisms are different, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards. Wander Terminology and Metrics You may recall the use of the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements. By contrast, wander is typically measured with time domain equipment. Counterpart definitions apply as listed below.
Wander has its own peculiar metrics too. In particular, standards bodies such as the ITU rely on masks that provide limits to wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See for example ITU-T 8262 [4].
Very briefly, MTIE looks at peak-peak clock noise over intervals of time as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering. We will discuss TDEV next time. Before going into detail about MTIE, let’s discuss the foundational measurements Time Error and TIE (Time Interval Error). These are both defined in the previously cited ITU-T G.810. Time Error (TE) The Time Error function x(t) is defined as follows for a measured clock generating time T(t) versus a reference clock generating time Tref(t). The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
Similarly, the Time Interval Error function is then defined as follows, where the lower case Greek letter "tau" is the time interval or observation interval. Maximum Time Interval Error (MTIE)
The sampling period represents the minimum measurement interval or observation interval. There are many terms used in the industry that are synonymous and should be recognizable in context: averaging time, sampling interval, sampling time, etc. This could mean every nominal period if you are using an oscilloscope to capture TIE data. However, most practical measurements over long periods of time are only sampling clocks. This would correspond to a frequency counter’s “gate time”, for example, if post-processing frequency data to obtain phase data. An MTIE Example It’s better to show you the general idea at this point. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau=1*tau0 observation interval or window as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0 as is customary to show changes in Time Error or phase since the start of the measurement.) The initial xppk peak-peak value at the location shown is about 1.1 ns – 0 ns = 1.1 ns.
Now slide the tau=1*tau0 observation interval right and the next xppk peak-peak value is 1.4 ns – 1.1 ns = 0.3 ns. If we continue in this vein to the end of the data, we will find the worst case to be between 17*tau0 and 18*tau0 and the value is 7.0 ns – 4.0 ns = 3.0 ns. Therefore, the MTIE for tau=1*tau0 is 3.0 ns. I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx. Note that the first value in the plot is 3 ns as just mentioned. This is a relatively simple example for illustration only. MTIE data typically spans many decades and are plotted against masks on logarithmic scales. However, even this simple example suggests a couple of items to note about MTIE plots:
Why is MTIE Useful? MTIE is a relatively computation intensive measurement. So what good are these type of plots? There are at least two good reasons besides standards compliance:
Conclusion In this post, I have discussed the differences between wander and jitter, the motivation for understanding wander, and delved in to MTIE, a wander metric important to standards compliance and useful in sizing buffers. I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV or Time Deviation. As always, if you have topic suggestions or questions appropriate for this blog, please send them to kevin.smith@silabs.com with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on. Cheers, References [1] ITU-T G.810 Definitions and terminology for synchronization networks [2] Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria [3] Understanding Jitter and Wander Measurements and Standards, 2003 [4] ITU-T G.8262 Timing characteristics of a synchronous equipment slave clock [5] K. Shenoi, Clocks, Oscillators, and PLLs, An introduction to synchronization and timing in telecommunications, WSTS – 2013, San Jose, April 16-18, 2013 [6] L. Cossart, Timing Measurement Fundamentals, ITSF November 2006.
|
32 days ago |
|
Updated
Timing 201 #9: The Case of the Really Slow Jitter – Part 1 on Blog
Introduction You have probably read or heard that phase noise is the frequency domain equivalent of jitter in the time domain. That is essentially correct except for what would appear to be a somewhat arbitrary dividing line. Phase noise below 10 Hz offset frequency is generally considered wander as opposed to jitter. Consider the screen capture below where I have measured phase noise down to 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left hand side and jitter is on the right hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter and why do we care? From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s both slow in terms of phase modulation and in how long it takes to measure. The topic of wander covers a lot of material. Even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go in to some detail regarding a primary wander metric: MTIE or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV or Time Deviation. Finally, I plan to wrap up with some example lab data. Some Formal Definitions The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96) Definitions and terminology for synchronization networks [1] defines jitter and wander as follows. 4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz). 
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz). Similarly, the SONET standard Telcordia GR-253-CORE [2] states in a footnote “Short-term variations” implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy. Wander and jitter are clearly very similar since they are both “variations of the significant instants of a timing signal from their ideal positions in time”. They are also both ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference. Consider by analogy the electromagnetic radiation spectrum, which is divided into several different bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense, these are all “light”. However, the different types of EM radiation are generated and detected differently and interact with materials differently. So it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander versus jitter case in that these categories of phase fluctuations differ technologically. So, how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would just track wander, and so it could accumulate. Networks have to use other means such as buffers or pointer adjustments to accommodate or mitigate wander. Think of the phase noise offset region, 10 Hz and above, as “PLL Land”. Things have changed since these standards. Back in the day it was uncommon or impractical to measure phase noise below 10 Hz offset. 
Now phase noise test equipment can go down to 1 Hz or below. Likewise with digital and FW/SW PLLs it is possible to have very narrowband PLLs which can provide some “wander attenuation”. Nonetheless, 10 Hz offset remains a useful dividing line and lives on in the standards. Wander Mechanisms Clock jitter is due to the relatively high frequency inherent or intrinsic jitter of an oscillator or other reference ultimately caused by flicker noise, shot noise, and thermal noise. Post processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systemic or deterministic jitter components also can occur due to crosstalk, EMI, power supply noise, reflections etc. Wander, on the other hand, is caused by slower processes. These include lower frequency offset oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see [3]. Since wander mechanisms are different, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards. Wander Terminology and Metrics You may recall the use of the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements. By contrast, wander is typically measured with time domain equipment. Counterpart definitions apply as listed below.
Wander has its own peculiar metrics too. In particular, standards bodies such as the ITU rely on masks that provide limits to wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See for example ITU-T 8262 [4].
Very briefly, MTIE looks at peak-peak clock noise over intervals of time as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering. We will discuss TDEV next time. Before going into detail about MTIE, let’s discuss the foundational measurements Time Error and TIE (Time Interval Error). These are both defined in the previously cited ITU-T G.810. Time Error (TE) The Time Error function x(t) is defined as follows for a measured clock generating time T(t) versus a reference clock generating time Tref(t). The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
Similarly, the Time Interval Error function is then defined as follows, where the lower case Greek letter "tau" is the time interval or observation interval. Maximum Time Interval Error (MTIE)
The sampling period represents the minimum measurement interval or observation interval. There are many terms used in the industry that are synonymous and should be recognizable in context: averaging time, sampling interval, sampling time, etc. This could mean every nominal period if you are using an oscilloscope to capture TIE data. However, most practical measurements over long periods of time are only sampling clocks. This would correspond to a frequency counter’s “gate time”, for example, if post-processing frequency data to obtain phase data. An MTIE Example It’s better to show you the general idea at this point. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau=1*tau0 observation interval or window as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0 as is customary to show changes in Time Error or phase since the start of the measurement.) The initial xppk peak-peak value at the location shown is about 1.1 ns – 0 ns = 1.1 ns.
Now slide the tau=1*tau0 observation interval right and the next xppk peak-peak value is 1.4 ns – 1.1 ns = 0.3 ns. If we continue in this vein to the end of the data, we will find the worst case to be between 17*tau0 and 18*tau0 and the value is 7.0 ns – 4.0 ns = 3.0 ns. Therefore, the MTIE for tau=1*tau0 is 3.0 ns. I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx. Note that the first value in the plot is 3 ns as just mentioned. This is a relatively simple example for illustration only. MTIE data typically spans many decades and are plotted against masks on logarithmic scales. However, even this simple example suggests a couple of items to note about MTIE plots:
Why is MTIE Useful? MTIE is a relatively computation intensive measurement. So what good are these type of plots? There are at least two good reasons besides standards compliance:
Conclusion In this post, I have discussed the differences between wander and jitter, the motivation for understanding wander, and delved in to MTIE, a wander metric important to standards compliance and useful in sizing buffers. I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV or Time Deviation. As always, if you have topic suggestions or questions appropriate for this blog, please send them to kevin.smith@silabs.com with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on. Cheers, References [1] ITU-T G.810 Definitions and terminology for synchronization networks [2] Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria [3] Understanding Jitter and Wander Measurements and Standards, 2003 [4] ITU-T G.8262 Timing characteristics of a synchronous equipment slave clock [5] K. Shenoi, Clocks, Oscillators, and PLLs, An introduction to synchronization and timing in telecommunications, WSTS – 2013, San Jose, April 16-18, 2013 [6] L. Cossart, Timing Measurement Fundamentals, ITSF November 2006.
|
32 days ago |
|
Updated
Timing 201 #9: The Case of the Really Slow Jitter – Part 1 on Blog
Introduction You have probably read or heard that phase noise is the frequency domain equivalent of jitter in the time domain. That is essentially correct except for what would appear to be a somewhat arbitrary dividing line. Phase noise below 10 Hz offset frequency is generally considered wander as opposed to jitter. Consider the screen capture below where I have measured phase noise down to 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left hand side and jitter is on the right hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter and why do we care? From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s both slow in terms of phase modulation and in how long it takes to measure. The topic of wander covers a lot of material. Even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go in to some detail regarding a primary wander metric: MTIE or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV or Time Deviation. Finally, I plan to wrap up with some example lab data. Some Formal Definitions The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96) Definitions and terminology for synchronization networks [1] defines jitter and wander as follows. 4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz). 
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz). Similarly, the SONET standard Telcordia GR-253-CORE [2] states in a footnote “Short-term variations” implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy. Wander and jitter are clearly very similar since they are both “variations of the significant instants of a timing signal from their ideal positions in time”. They are also both ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference. Consider by analogy the electromagnetic radiation spectrum, which is divided into several different bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense, these are all “light”. However, the different types of EM radiation are generated and detected differently and interact with materials differently. So it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander versus jitter case in that these categories of phase fluctuations differ technologically. So, how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would just track wander, and so it could accumulate. Networks have to use other means such as buffers or pointer adjustments to accommodate or mitigate wander. Think of the phase noise offset region, 10 Hz and above, as “PLL Land”. Things have changed since these standards. Back in the day it was uncommon or impractical to measure phase noise below 10 Hz offset. 
Now phase noise test equipment can go down to 1 Hz or below. Likewise, with digital and FW/SW PLLs it is possible to build very narrowband PLLs that can provide some “wander attenuation”. Nonetheless, 10 Hz offset remains a useful dividing line and lives on in the standards.

Wander Mechanisms

Clock jitter is due to the relatively high-frequency inherent or intrinsic jitter of an oscillator or other reference, ultimately caused by flicker noise, shot noise, and thermal noise. Post-processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systemic or deterministic jitter components can also occur due to crosstalk, EMI, power supply noise, reflections, etc. Wander, on the other hand, is caused by slower processes. These include lower-offset-frequency oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see [3]. Since wander mechanisms are different, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards.

Wander Terminology and Metrics

You may recall the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements. By contrast, wander is typically measured with time domain equipment. Counterpart definitions apply as listed below.
Wander has its own particular metrics as well. In particular, standards bodies such as the ITU rely on masks that set limits on wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See, for example, ITU-T G.8262 [4].
Very briefly, MTIE looks at the peak-to-peak clock noise over intervals of time, as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering; we will discuss TDEV next time. Before going into detail about MTIE, let’s discuss the foundational measurements Time Error (TE) and Time Interval Error (TIE). These are both defined in the previously cited ITU-T G.810.

Time Error (TE)

The Time Error function x(t) is defined as follows for a measured clock generating time T(t) versus a reference clock generating time Tref(t). The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
Time Interval Error (TIE)

Similarly, the Time Interval Error function is then defined as follows, where the lowercase Greek letter tau is the time interval or observation interval.

Maximum Time Interval Error (MTIE)
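For reference, the three quantities above can be written compactly. This is a sketch following the G.810 definitions, where the x_i are TIE samples taken every tau0 seconds and N is the total number of samples:

```latex
x(t) = T(t) - T_{\mathrm{ref}}(t)
\qquad
\mathrm{TIE}(t;\,\tau) = x(t+\tau) - x(t)
```

```latex
\mathrm{MTIE}(n\tau_0) \approx
\max_{1 \le k \le N-n}
\left[ \max_{k \le i \le k+n} x_i \;-\; \min_{k \le i \le k+n} x_i \right],
\quad n = 1, 2, \ldots, N-1
```

In words: MTIE at observation interval tau = n*tau0 is the worst-case peak-to-peak Time Error found as a window of that width slides across the entire measurement.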
The sampling period represents the minimum measurement interval or observation interval. There are many industry terms that are synonymous and should be recognizable in context: averaging time, sampling interval, sampling time, etc. If you are using an oscilloscope to capture TIE data, this could mean every nominal period. However, most practical measurements over long periods of time only sample the clocks. If post-processing frequency data to obtain phase data, for example, the sampling period would correspond to a frequency counter’s “gate time”.

An MTIE Example

At this point, it’s easiest to show the general idea. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau = 1*tau0 observation interval, or window, as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0, as is customary, to show changes in Time Error or phase since the start of the measurement.) The initial xppk peak-to-peak value at the location shown is about 1.1 ns – 0 ns = 1.1 ns.
Now slide the tau = 1*tau0 observation interval to the right, and the next xppk peak-to-peak value is 1.4 ns – 1.1 ns = 0.3 ns. If we continue in this vein to the end of the data, we will find the worst case to be between 17*tau0 and 18*tau0, where the value is 7.0 ns – 4.0 ns = 3.0 ns. Therefore, the MTIE for tau = 1*tau0 is 3.0 ns. I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx. Note that the first value in the plot is 3.0 ns, as just mentioned. This is a relatively simple example for illustration only. MTIE data typically span many decades and are plotted against masks on logarithmic scales. However, even this simple example suggests a couple of items to note about MTIE plots:
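The sliding-window computation just described can be sketched in a few lines of Python. The dataset below is illustrative only: it is constructed to reproduce the values quoted above (a first window of 1.1 ns, a second of 0.3 ns, and a worst case of 3.0 ns between 17*tau0 and 18*tau0), and is not the data from the figure or the spreadsheet.

```python
def mtie(x, n):
    """MTIE for observation interval tau = n*tau0, given TIE samples x
    taken every tau0 seconds: the worst-case peak-to-peak excursion over
    every window of n+1 consecutive samples (endpoints n intervals apart)."""
    assert 1 <= n < len(x)
    return max(
        max(x[k:k + n + 1]) - min(x[k:k + n + 1])
        for k in range(len(x) - n)
    )

# Illustrative TIE samples in ns at tau0 spacing. The first window spans
# 0 -> 1.1 ns, and the largest single-interval step (4.0 -> 7.0 ns,
# between samples 17 and 18) sets MTIE(1*tau0) = 3.0 ns.
tie_ns = [0.0, 1.1, 1.4, 1.2, 2.0, 2.5, 2.2, 3.0, 3.5, 3.2,
          4.1, 3.8, 4.5, 4.2, 5.0, 4.6, 4.3, 4.0, 7.0, 6.5]
print(mtie(tie_ns, 1))  # -> 3.0
```

Note that computing a full MTIE plot repeats this for every n of interest, which is why MTIE is relatively computation intensive for long records.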
Why is MTIE Useful?

MTIE is a relatively computation-intensive measurement, so what good are these types of plots? There are at least two good reasons besides standards compliance:
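One such reason, anticipated in the conclusion, is buffer sizing: MTIE bounds the peak-to-peak phase excursion a downstream elastic buffer must absorb over an interval. A minimal sketch of that idea, assuming a simple overflow/underflow model with no jitter margin (the function name and the example numbers are illustrative, not from the post):

```python
import math

def min_buffer_bits(mtie_s, bit_rate_hz):
    """Lower bound on FIFO depth (in bits) needed so that wander of the
    given MTIE (in seconds) cannot overflow or underflow the buffer.
    Real designs add margin for jitter and for initial buffer centering."""
    return math.ceil(mtie_s * bit_rate_hz)

# Hypothetical example: 3 us of MTIE on a 155.52 Mb/s (STM-1 rate) link.
print(min_buffer_bits(3e-6, 155.52e6))  # -> 467
```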
Conclusion

In this post, I have discussed the differences between wander and jitter and the motivation for understanding wander, and delved into MTIE, a wander metric important to standards compliance and useful in sizing buffers. I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV, or Time Deviation.

As always, if you have topic suggestions or questions appropriate for this blog, please send them to kevin.smith@silabs.com with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in.

Thanks for reading. Keep calm and clock on.

Cheers,

References

[1] ITU-T G.810, Definitions and terminology for synchronization networks
[2] Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria
[3] Understanding Jitter and Wander Measurements and Standards, 2003
[4] ITU-T G.8262, Timing characteristics of a synchronous equipment slave clock
[5] K. Shenoi, Clocks, Oscillators, and PLLs: An introduction to synchronization and timing in telecommunications, WSTS 2013, San Jose, April 16-18, 2013
[6] L. Cossart, Timing Measurement Fundamentals, ITSF, November 2006
|
32 days ago |
|
Updated
Timing 201 #9: The Case of the Really Slow Jitter – Part 1 on Blog
Introduction You have probably read or heard that phase noise is the frequency domain equivalent of jitter in the time domain. That is essentially correct except for what would appear to be a somewhat arbitrary dividing line. Phase noise below 10 Hz offset frequency is generally considered wander as opposed to jitter. Consider the screen capture below where I have measured phase noise down to 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left hand side and jitter is on the right hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter and why do we care? From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s both slow in terms of phase modulation and in how long it takes to measure. The topic of wander covers a lot of material. Even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go in to some detail regarding a primary wander metric: MTIE or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV or Time Deviation. Finally, I plan to wrap up with some example lab data. Some Formal Definitions The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96) Definitions and terminology for synchronization networks [1] defines jitter and wander as follows. 4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz). 
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz). Similarly, the SONET standard Telcordia GR-253-CORE [2] states in a footnote “Short-term variations” implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy. Wander and jitter are clearly very similar since they are both “variations of the significant instants of a timing signal from their ideal positions in time”. They are also both ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference. Consider by analogy the electromagnetic radiation spectrum, which is divided into several different bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense, these are all “light”. However, the different types of EM radiation are generated and detected differently and interact with materials differently. So it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander versus jitter case in that these categories of phase fluctuations differ technologically. Why 10Hz? So, how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would just track wander, and so it could accumulate. Networks have to use other means such as buffers or pointer adjustments to accommodate or mitigate wander. Think of the phase noise offset region, 10 Hz and above, as “PLL Land”. Things have changed since these standards. Back in the day it was uncommon or impractical to measure phase noise below 10 Hz offset. 
Now phase noise test equipment can go down to 1 Hz or below. Likewise with digital and FW/SW PLLs it is possible to have very narrowband PLLs which can provide some “wander attenuation”. Nonetheless, 10 Hz offset remains a useful dividing line and lives on in the standards. Wander Mechanisms Clock jitter is due to the relatively high frequency inherent or intrinsic jitter of an oscillator or other reference ultimately caused by flicker noise, shot noise, and thermal noise. Post processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systemic or deterministic jitter components also can occur due to crosstalk, EMI, power supply noise, reflections etc. Wander, on the other hand, is caused by slower processes. These include lower frequency offset oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see [3]. Since wander mechanisms are different, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards. Wander Terminology and Metrics You may recall the use of the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements. By contrast, wander is typically measured with time domain equipment. Counterpart definitions apply as listed below.
Wander has its own peculiar metrics too. In particular, standards bodies such as the ITU rely on masks that provide limits to wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See for example ITU-T 8262 [4].
Very briefly, MTIE looks at peak-peak clock noise over intervals of time as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering. We will discuss TDEV next time. Before going into detail about MTIE, let’s discuss the foundational measurements Time Error and TIE (Time Interval Error). These are both defined in the previously cited ITU-T G.810. Time Error (TE) The Time Error function x(t) is defined as follows for a measured clock generating time T(t) versus a reference clock generating time Tref(t). The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
Similarly, the Time Interval Error function is then defined as follows, where the lower case Greek letter "tau" is the time interval or observation interval. Maximum Time Interval Error (MTIE)
The sampling period represents the minimum measurement interval or observation interval. There are many terms used in the industry that are synonymous and should be recognizable in context: averaging time, sampling interval, sampling time, etc. This could mean every nominal period if you are using an oscilloscope to capture TIE data. However, most practical measurements over long periods of time are only sampling clocks. This would correspond to a frequency counter’s “gate time”, for example, if post-processing frequency data to obtain phase data. An MTIE Example It’s better to show you the general idea at this point. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau=1*tau0 observation interval or window as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0 as is customary to show changes in Time Error or phase since the start of the measurement.) The initial xppk peak-peak value at the location shown is about 1.1 ns – 0 ns = 1.1 ns.
Now slide the tau=1*tau0 observation interval right and the next xppk peak-peak value is 1.4 ns – 1.1 ns = 0.3 ns. If we continue in this vein to the end of the data, we will find the worst case to be between 17*tau0 and 18*tau0 and the value is 7.0 ns – 4.0 ns = 3.0 ns. Therefore, the MTIE for tau=1*tau0 is 3.0 ns. I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx. Note that the first value in the plot is 3 ns as just mentioned. This is a relatively simple example for illustration only. MTIE data typically spans many decades and are plotted against masks on logarithmic scales. However, even this simple example suggests a couple of items to note about MTIE plots:
Why is MTIE Useful? MTIE is a relatively computation intensive measurement. So what good are these type of plots? There are at least two good reasons besides standards compliance:
Conclusion In this post, I have discussed the differences between wander and jitter, the motivation for understanding wander, and delved in to MTIE, a wander metric important to standards compliance and useful in sizing buffers. I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV or Time Deviation. As always, if you have topic suggestions or questions appropriate for this blog, please send them to kevin.smith@silabs.com with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on. Cheers, References [1] ITU-T G.810 Definitions and terminology for synchronization networks [2] Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria [3] Understanding Jitter and Wander Measurements and Standards, 2003 [4] ITU-T G.8262 Timing characteristics of a synchronous equipment slave clock [5] K. Shenoi, Clocks, Oscillators, and PLLs, An introduction to synchronization and timing in telecommunications, WSTS – 2013, San Jose, April 16-18, 2013 [6] L. Cossart, Timing Measurement Fundamentals, ITSF November 2006.
|
32 days ago |
|
Updated
Timing 201 #9: The Case of the Really Slow Jitter – Part 1 on Blog
Introduction You have probably read or heard that phase noise is the frequency domain equivalent of jitter in the time domain. That is essentially correct except for what would appear to be a somewhat arbitrary dividing line. Phase noise below 10 Hz offset frequency is generally considered wander as opposed to jitter. Consider the screen capture below where I have measured phase noise down to 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left hand side and jitter is on the right hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter and why do we care? From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s both slow in terms of phase modulation and in how long it takes to measure. The topic of wander covers a lot of material. Even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go in to some detail regarding a primary wander metric: MTIE or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV or Time Deviation. Finally, I plan to wrap up with some example lab data. Some Formal Definitions The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96) Definitions and terminology for synchronization networks [1] defines jitter and wander as follows. 4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz). 
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz). Similarly, the SONET standard Telcordia GR-253-CORE [2] states in a footnote “Short-term variations” implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy. Wander and jitter are clearly very similar since they are both “variations of the significant instants of a timing signal from their ideal positions in time”. They are also both ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference. Consider by analogy the electromagnetic radiation spectrum, which is divided into several different bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense, these are all “light”. However, the different types of EM radiation are generated and detected differently and interact with materials differently. So it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander versus jitter case in that these categories of phase fluctuations differ technologically. Why 10Hz? So, how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would just track wander, and so it could accumulate. Networks have to use other means such as buffers or pointer adjustments to accommodate or mitigate wander. Think of the phase noise offset region, 10 Hz and above, as “PLL Land”. Things have changed since these standards. Back in the day it was uncommon or impractical to measure phase noise below 10 Hz offset. 
Now phase noise test equipment can go down to 1 Hz or below. Likewise with digital and FW/SW PLLs it is possible to have very narrowband PLLs which can provide some “wander attenuation”. Nonetheless, 10 Hz offset remains a useful dividing line and lives on in the standards. Wander Mechanisms Clock jitter is due to the relatively high frequency inherent or intrinsic jitter of an oscillator or other reference ultimately caused by flicker noise, shot noise, and thermal noise. Post processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systemic or deterministic jitter components also can occur due to crosstalk, EMI, power supply noise, reflections etc. Wander, on the other hand, is caused by slower processes. These include lower frequency offset oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see [3]. Since wander mechanisms are different, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards. Wander Terminology and Metrics You may recall the use of the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements. By contrast, wander is typically measured with time domain equipment. Counterpart definitions apply as listed below.
Wander has its own peculiar metrics too. In particular, standards bodies such as the ITU rely on masks that provide limits to wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See for example ITU-T 8262 [4].
Very briefly, MTIE looks at peak-peak clock noise over intervals of time as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering. We will discuss TDEV next time. Before going into detail about MTIE, let’s discuss the foundational measurements Time Error and TIE (Time Interval Error). These are both defined in the previously cited ITU-T G.810. Time Error (TE) The Time Error function x(t) is defined as follows for a measured clock generating time T(t) versus a reference clock generating time Tref(t). The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
Time Interval Error (TIE) Similarly, the Time Interval Error function is then defined as follows, where the lower case Greek letter "tau" is the time interval or observation interval. Maximum Time Interval Error (MTIE)
The sampling period represents the minimum measurement interval or observation interval. There are many terms used in the industry that are synonymous and should be recognizable in context: averaging time, sampling interval, sampling time, etc. This could mean every nominal period if you are using an oscilloscope to capture TIE data. However, most practical measurements over long periods of time are only sampling clocks. This would correspond to a frequency counter’s “gate time”, for example, if post-processing frequency data to obtain phase data. An MTIE Example It’s better to show you the general idea at this point. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau=1*tau0 observation interval or window as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0 as is customary to show changes in Time Error or phase since the start of the measurement.) The initial xppk peak-peak value at the location shown is about 1.1 ns – 0 ns = 1.1 ns.
Now slide the tau=1*tau0 observation interval right and the next xppk peak-peak value is 1.4 ns – 1.1 ns = 0.3 ns. If we continue in this vein to the end of the data, we will find the worst case to be between 17*tau0 and 18*tau0 and the value is 7.0 ns – 4.0 ns = 3.0 ns. Therefore, the MTIE for tau=1*tau0 is 3.0 ns. I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx. Note that the first value in the plot is 3 ns as just mentioned. This is a relatively simple example for illustration only. MTIE data typically spans many decades and are plotted against masks on logarithmic scales. However, even this simple example suggests a couple of items to note about MTIE plots:
Why is MTIE Useful? MTIE is a relatively computation intensive measurement. So what good are these type of plots? There are at least two good reasons besides standards compliance:
Conclusion In this post, I have discussed the differences between wander and jitter, the motivation for understanding wander, and delved in to MTIE, a wander metric important to standards compliance and useful in sizing buffers. I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV or Time Deviation. As always, if you have topic suggestions or questions appropriate for this blog, please send them to kevin.smith@silabs.com with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on. Cheers, References [1] ITU-T G.810 Definitions and terminology for synchronization networks [2] Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria [3] Understanding Jitter and Wander Measurements and Standards, 2003 [4] ITU-T G.8262 Timing characteristics of a synchronous equipment slave clock [5] K. Shenoi, Clocks, Oscillators, and PLLs, An introduction to synchronization and timing in telecommunications, WSTS – 2013, San Jose, April 16-18, 2013 [6] L. Cossart, Timing Measurement Fundamentals, ITSF November 2006.
|
32 days ago |
|
Updated
Timing 201 #9: The Case of the Really Slow Jitter – Part 1 on Blog
Introduction You have probably read or heard that phase noise is the frequency domain equivalent of jitter in the time domain. That is essentially correct except for what would appear to be a somewhat arbitrary dividing line. Phase noise below 10 Hz offset frequency is generally considered wander as opposed to jitter. Consider the screen capture below where I have measured phase noise down to 1 Hz minimum offset and explicitly noted the 10 Hz dividing line. Wander is on the left hand side and jitter is on the right hand side. The phase noise plot trends as one might expect right through the 10 Hz line. So what’s different about wander as opposed to jitter and why do we care? From the perspective of someone who takes a lot of phase noise plots, I consider this the case of the really slow jitter. It’s both slow in terms of phase modulation and in how long it takes to measure. The topic of wander covers a lot of material. Even introducing the highlights will take more than one blog article. In this first post, I will discuss the differences between wander and jitter, the motivation for understanding wander, and go in to some detail regarding a primary wander metric: MTIE or Maximum Time Interval Error. Next in this mini-series, I will discuss TDEV or Time Deviation. Finally, I plan to wrap up with some example lab data. Some Formal Definitions The 10 Hz dividing line, in common use today, has been used in synchronous optical networking (SONET) and synchronous digital hierarchy (SDH) standards for years. For example, ITU-T G.810 (08/96) Definitions and terminology for synchronization networks [1] defines jitter and wander as follows. 4.1.12 (timing) jitter: The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz). 
4.1.15 wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long-term implies that these variations are of frequency less than 10 Hz). Similarly, the SONET standard Telcordia GR-253-CORE [2] states in a footnote “Short-term variations” implies phase oscillations of frequency greater than or equal to some demarcation frequency. Currently, 10 Hz is the demarcation between jitter and wander in the DS1 to DS3 North American Hierarchy. Wander and jitter are clearly very similar since they are both “variations of the significant instants of a timing signal from their ideal positions in time”. They are also both ways of looking at phase fluctuations or angle modulation (PM or FM). Their only difference would appear to be scale. However, that can be a significant practical difference. Consider by analogy the electromagnetic radiation spectrum, which is divided into several different bands such as infrared, visible light, radio waves, microwaves, and so forth. In some sense, these are all “light”. However, the different types of EM radiation are generated and detected differently and interact with materials differently. So it has always made historical and practical sense to divide the spectrum into bands. This is roughly analogous to the wander versus jitter case in that these categories of phase fluctuations differ technologically. So, how did this 10 Hz demarcation frequency come about? Generally speaking, wander represented timing fluctuations that could not be attenuated by typical PLLs of the day. PLLs in the network elements would just track wander, and so it could accumulate. Networks have to use other means such as buffers or pointer adjustments to accommodate or mitigate wander. Think of the phase noise offset region, 10 Hz and above, as “PLL Land”. Things have changed since these standards. Back in the day it was uncommon or impractical to measure phase noise below 10 Hz offset. 
Now phase noise test equipment can go down to 1 Hz or below. Likewise with digital and FW/SW PLLs it is possible to have very narrowband PLLs which can provide some “wander attenuation”. Nonetheless, 10 Hz offset remains a useful dividing line and lives on in the standards. Wander Mechanisms Clock jitter is due to the relatively high frequency inherent or intrinsic jitter of an oscillator or other reference ultimately caused by flicker noise, shot noise, and thermal noise. Post processing by succeeding devices such as clock buffers, clock generators, and jitter attenuators can contribute to or attenuate this random noise. Systemic or deterministic jitter components also can occur due to crosstalk, EMI, power supply noise, reflections etc. Wander, on the other hand, is caused by slower processes. These include lower frequency offset oscillator and clock device noise components, plus the following.
For a good discussion of some of these wander mechanisms and their impact on a network, see [3]. Since wander mechanisms are different, at least in scale, and networks tend to pass or accumulate wander, industry has focused on understanding and limiting wander through specifications and standards. Wander Terminology and Metrics You may recall the use of the terms jitter generation, jitter transfer, and jitter tolerance. These measurements can be summarized as follows.
These definitions generally apply to phase noise measurements made with frequency domain equipment such as phase noise analyzers or spectrum analyzers. They are useful when cascading network elements. By contrast, wander is typically measured with time domain equipment. Counterpart definitions apply as listed below.
Wander has its own peculiar metrics too. In particular, standards bodies such as the ITU rely on masks that provide limits to wander generation, tolerance, and transfer based on one or both of the following two wander parameters. See for example ITU-T 8262 [4].
Very briefly, MTIE looks at peak-peak clock noise over intervals of time as we will discuss below. TDEV is a sort of standard deviation of the clock noise after some filtering. We will discuss TDEV next time. Before going into detail about MTIE, let’s discuss the foundational measurements Time Error and TIE (Time Interval Error). These are both defined in the previously cited ITU-T G.810. Time Error (TE) The Time Error function x(t) is defined as follows for a measured clock generating time T(t) versus a reference clock generating time Tref(t). The frequency standard Tref(t) can be regarded as ideal, i.e., Tref(t) = t.
Time Interval Error (TIE) Similarly, the Time Interval Error function is then defined as follows, where the lowercase Greek letter "tau" is the time interval or observation interval. Maximum Time Interval Error (MTIE)
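The defining equations appear as figures in the original post and may not render here. Paraphrasing the ITU-T G.810 definitions in the notation above (a restatement, not a verbatim copy of the standard's formulas):

```latex
% Time Error of the measured clock T(t) against the reference
x(t) = T(t) - T_{\mathrm{ref}}(t)

% Time Interval Error over an observation interval \tau
\mathrm{TIE}(t;\tau) = x(t+\tau) - x(t)

% MTIE: worst-case peak-to-peak Time Error over any window of length \tau
\mathrm{MTIE}(\tau) = \max_{t_0}\left[\,\max_{t_0 \le t \le t_0+\tau} x(t)
                   \;-\; \min_{t_0 \le t \le t_0+\tau} x(t)\,\right]
```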
The sampling period tau0 represents the minimum measurement interval or observation interval. Many synonymous terms are used in the industry and should be recognizable in context: averaging time, sampling interval, sampling time, etc. If you are using an oscilloscope to capture TIE data, this could mean every nominal clock period. However, most practical measurements over long periods of time only sample the clock. If post-processing frequency data to obtain phase data, the sampling period would correspond to a frequency counter's “gate time”, for example. An MTIE Example At this point, it's easiest to show the general idea with an example. Below, I have modified an illustration after ITU-T G.810 Figure II.1 and indicated a tau = 1*tau0 observation interval, or window, as it is moved across the data. (The data are for example only and do not come from the standard. I have also started at 0, as is customary, to show changes in Time Error or phase since the start of the measurement.) The initial peak-to-peak value x_ppk at the location shown is about 1.1 ns – 0 ns = 1.1 ns.
Now slide the tau = 1*tau0 observation interval to the right, and the next peak-to-peak value is 1.4 ns – 1.1 ns = 0.3 ns. If we continue in this vein to the end of the data, we find the worst case between 17*tau0 and 18*tau0: 7.0 ns – 4.0 ns = 3.0 ns. Therefore, the MTIE for tau = 1*tau0 is 3.0 ns. I have calculated the MTIE plot for this dataset in the attached Excel spreadsheet Example_MTIE_Calcs.xlsx. Note that the first value in the plot is the 3 ns just mentioned. This is a relatively simple example for illustration only. MTIE data typically span many decades and are plotted against masks on logarithmic scales. However, even this simple example suggests a couple of items to note about MTIE plots:
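The sliding-window procedure just described can be sketched in Python. This is a hypothetical brute-force helper for illustration only (not from the post or the standard), where x holds Time Error samples spaced one sampling interval tau0 apart:

```python
def mtie(x):
    """Brute-force MTIE from a list of Time Error samples x (seconds),
    spaced tau0 apart. Entry n-1 of the result is MTIE(n * tau0).
    O(N^2) for clarity; far too slow for real multi-decade datasets.
    """
    n_samples = len(x)
    result = []
    for n in range(1, n_samples):            # window spans n intervals
        worst = 0.0
        for i in range(n_samples - n):       # slide the window along
            window = x[i:i + n + 1]          # n + 1 samples per window
            ppk = max(window) - min(window)  # peak-to-peak in window
            worst = max(worst, ppk)          # keep the worst case seen
        result.append(worst)
    return result
```

For the toy input [0.0, 1.0, 3.0, 2.0] this returns [2.0, 3.0, 3.0]; note the values never decrease as the window grows, which is true of any MTIE plot.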
Why is MTIE Useful? MTIE is a relatively computation-intensive measurement, so what good is this type of plot? There are at least two good reasons besides standards compliance:
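One classic application is sizing the elastic buffers (FIFOs) that absorb wander between two clock domains: the store must be at least as deep as the worst-case peak-to-peak time error times the line rate. A back-of-the-envelope sketch, with a hypothetical helper name of my own (real designs add margin for jitter, metastability, and centering):

```python
import math

def min_buffer_bits(mtie_seconds, bit_rate_hz):
    """Rough elastic-store sizing: worst-case peak-to-peak wander
    (MTIE over the relevant interval) times the line rate, rounded
    up to whole bits. A simplistic floor, not a complete design."""
    return math.ceil(mtie_seconds * bit_rate_hz)
```

For example, 3.2 ns of MTIE on a 1 Gb/s link implies at least 4 bits of elastic store.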
Conclusion In this post, I have discussed the differences between wander and jitter, the motivation for understanding wander, and delved into MTIE, a wander metric important for standards compliance and useful in sizing buffers. I hope you have enjoyed this Timing 201 article. In the Part 2 follow-up post, I will discuss another important wander metric: TDEV or Time Deviation. As always, if you have topic suggestions or questions appropriate for this blog, please send them to kevin.smith@silabs.com with the words Timing 201 in the subject line. I will give them consideration and see if I can fit them in. Thanks for reading. Keep calm and clock on. Cheers, References [1] ITU-T G.810, Definitions and terminology for synchronization networks [2] Telcordia GR-253-CORE, Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria [3] Understanding Jitter and Wander Measurements and Standards, 2003 [4] ITU-T G.8262, Timing characteristics of a synchronous equipment slave clock [5] K. Shenoi, Clocks, Oscillators, and PLLs: An introduction to synchronization and timing in telecommunications, WSTS 2013, San Jose, April 16-18, 2013 [6] L. Cossart, Timing Measurement Fundamentals, ITSF, November 2006.