Once an auto Rx frequency hopping configuration has been devised and loaded onto the chip, there are a few relatively easy tests to verify functionality and performance.
This article goes through three test methods that check for (Test#1) functionality, (Test#2) preamble margin on signal detection, and (Test#3) timeout configuration margin on no-signal detection.
The easiest way to test functionality is to transmit to the receiver and observe some of the diagnostic signals on the GPIOs. The transmitter may as well be another EZRadioPro node running a custom packet Tx project from WDS. Make sure the preamble length at the Tx side is set to the recommended value based on the calculation presented earlier in this article series.
Connect the TX_STATE signal to one of the GPIOs; this signal will serve as the trigger event for our measurements.
At the Rx side connect the RX_DATA, HOPPED, SYNC_WORD_DETECT and HOP_TABLE_WRAP signals to GPIOs and hook them up to a scope.
All of the tests below can be applied to all three no-signal detection schemes.
Test#1: Transmit a packet at any of the frequencies that reside in the channel plan and observe the scope screen. You should be looking at something like the capture below. (This capture was taken with an Rx configuration of 5 channels based at 916 MHz, a 100 kbps 2GFSK modulation with the preamble-timeout no-signal detection scheme, and a 40-byte preamble at the Tx side.) If the packet has been successfully received you will also hear a beep from the platform.
The trigger event (TX_STATE signal from the Tx) is marked with a full orange triangle at the 1st vertical division line from the left. This event signals the beginning of the packet in the air.
First off, the SYNC_DETECT trace tells us that there has been a successful signal detection. From the HOP_WRAP signal's transition (which signals the beginning of a new cycle in the hopping sequence) you can count the number of hops on the HOPPED trace and tell which channel reception occurred on (hopefully this is the same channel you transmitted on). In this particular example it is the 3rd channel. (Note that this number will also appear on the LCD display on the platform.) Try transmitting at each channel (or at least a subset of channels if you have too many) and check for functionality the same way!
Test#2: Transmit many packets randomly and place the scope's display in infinite persistence mode. You will see a screen similar to the one below. I traded the HOP_WRAP signal for VALID_PREAMBLE as it is a touch more interesting in this case. The purpose of this experiment is to make the receiver see all kinds of different timing situations (including worst case scenarios discussed in the Timing article) and check for the margin in preamble length.
For this test I used a lab signal generator to transmit packets so that I could control the power level. The test was performed at a -102 dBm power level, which is 3 dB above the 5% PER sensitivity level. With this I not only verified functionality but also checked for performance. If you don't have a signal generator you can keep on using the Tx node approach.
First off, the receiver should have received all packets. You can assess this by examining the SYNC_DETECT trace; all trace instances on the display must go up at detection, and if there is one that has not, that packet was missed.
Now, place a vertical cursor on the SYNC_DETECT rising edge and measure back the sync word length plus the signal detection time. In this particular example this is 16 bits for the sync word and 40 bits for signal detection, yielding 56 bits, which in turn is 560 µs at a 100 kbps data rate. Now, move the cursor from the SYNC_DETECT edge to the rightmost trace on the HOPPED signal and check the distance to the previously established timeline.
The measured time (50 µs, i.e., 5 Tb) is the margin on preamble length for signal detection. Now, is this number good enough? Taking into account that this number includes the propagation delay through the Rx chain (T_PropDel), which is typically 4 Tb, our margin shrinks to 1 Tb, which seems a bit tight. We did, however, expect a tight margin, as our preamble length calculation is based on worst case scenarios, the very scenarios we were trying to replicate in this test. The conclusion, therefore, is that this low margin is expected and is considered good enough.
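The margin arithmetic above can be sketched as a small calculation (Tb is one bit time at the configured data rate; all figures are the ones from this example):

```python
# Margin calculation for Test#2 (preamble-timeout scheme, 100 kbps 2GFSK).
data_rate_bps = 100_000
Tb_us = 1e6 / data_rate_bps            # one bit time in microseconds -> 10 us

sync_word_bits = 16                    # sync word length
signal_detect_bits = 40                # signal detection time in bit times
lookback_bits = sync_word_bits + signal_detect_bits   # 56 bits
lookback_us = lookback_bits * Tb_us                   # 560 us back from SYNC_DETECT

measured_margin_us = 50                # distance to the rightmost HOPPED trace
measured_margin_Tb = measured_margin_us / Tb_us       # 5 Tb

T_PropDel_Tb = 4                       # typical Rx-chain propagation delay
net_margin_Tb = measured_margin_Tb - T_PropDel_Tb     # 1 Tb of real margin left

print(lookback_us, measured_margin_Tb, net_margin_Tb)  # 560.0 5.0 1.0
```

At other data rates, only `data_rate_bps` changes; the bit-time counts stay the same.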
For a thorough check, repeat the same test at the worst case expected frequency offset between Tx and Rx. I have done an experiment assuming 20 ppm XO specifications, which resulted in a 37 kHz worst case offset at my operating frequency.
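The 37 kHz figure follows if we assume both crystals can drift in opposite directions, so their tolerances add:

```python
# Worst-case Tx-Rx frequency offset from crystal tolerance.
f_carrier_hz = 916e6     # operating frequency used in this article
xo_ppm = 20              # XO spec for each node

# Assumption: Tx and Rx crystals err in opposite directions, so the
# per-node tolerances add up to 2 * 20 ppm = 40 ppm total.
worst_case_offset_hz = f_carrier_hz * 2 * xo_ppm * 1e-6
print(round(worst_case_offset_hz / 1e3))   # ~37 (kHz)
```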
Test#3: The previous test checks for preamble length margin on signal detection; it does not, however, tell us much about how well we chose the timeout (in the case of the LoP and RSSI measurement methods) for no-signal detection. As an example, we may as well have chosen a timeout value that is way higher than the minimum required (e.g., 100 Tb) with a correspondingly longer preamble at the Tx side (as per our formula), and we would still measure the margin on signal detection to be the same as with the previous method. In other words, the previous test checks how good the formula is for calculating the preamble length but does not tell us much about how well we estimated the input parameters (most notably the maximum no-signal detection time) to the formula.
To check the margin on no-signal detection performance, test a particular configuration with the recommended preamble length, then keep reducing the timeout parameters nibble by nibble until the performance starts degrading. What we do with this experiment is make the receiver decide on a no-signal condition faster, with the implication that the receiver will start missing valid packets due to premature no-signal detections.
I undertook this test by measuring the PER floor with a test signal at -102 dBm and -60 kHz frequency offset. You can, however, get to the same result (albeit less accurately) if you stick with the missed-packet hunt on the scope.
To make this experiment a bit more exciting I calculated the preamble length based on a 37 Tb no-signal detection time (on the grounds that, as I hinted before, preamble detection based timings have a 3 Tb margin built into them) and got 39 bytes for a revB1 chip.
From the graph you can tell that decreasing the preamble timeout from 10 nibbles immediately introduces a PER floor (however low that may be), and decreasing it even further degrades the performance dramatically. Our estimate with the 37 Tb no-signal detection time did not leave any margin within the resolution of the preamble timeout (which is what we expected, as I deliberately removed the built-in 3 Tb margin from the no-signal detection times).
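The numbers above are consistent with one timeout nibble corresponding to 4 bit times (a TO of 10 nibbles matching the 40 Tb detection time mentioned below); under that assumption, converting timeout settings is straightforward:

```python
# Preamble timeout register resolution, assuming one nibble = 4 bit times
# (consistent with TO 10 nibbles <-> 40 Tb in this article).
BITS_PER_NIBBLE = 4

def timeout_tb(nibbles):
    """Convert a preamble-timeout setting in nibbles to bit times (Tb)."""
    return nibbles * BITS_PER_NIBBLE

def timeout_us(nibbles, data_rate_bps):
    """The same timeout expressed in microseconds at a given data rate."""
    return timeout_tb(nibbles) * 1e6 / data_rate_bps

# TO = 10 nibbles at 100 kbps: 40 Tb, i.e. 400 us on air.
print(timeout_tb(10), timeout_us(10, 100_000))  # 40 400.0
```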
Interestingly, the performance starts degrading again with increasing timeout values. This is because the prolonged no-signal detection times prevent the receiver from arriving at the correct channel in time to capture the preamble in extreme cases. (Essentially this is the same effect we checked for in Test#2.)
Had I stuck with the 40 Tb no-signal detection time, my preamble length would have come to 41 bytes, and we would have seen two points on the graph (at TO 10 and 11) with a 0% PER floor.
Now you can ask the question: can I squeeze the margin out of the LoP scheme no-signal detection times too? Answer: if you are squeezed for preamble length to fit inside a spec, yes, go ahead; however, it will only buy you a relatively small preamble length reduction. Otherwise, stick with the recommended detection times; what this buys you is the guarantee that it will never break, as these are the numbers millions of tests have been performed with.
Test#2 with the DSA scheme: Generally speaking, Test#2 checks for margin on signal detection whereas Test#3 checks for margin on no-signal detection. With the DSA scheme, however, Test#2 checks for both. The reason for this is that in the DSA scheme no-signal detection is not a timeout-driven mechanism. The receiver simply stays on an empty channel for as long as needed for a no-signal detection; no more, no less. It follows then that Test#2 in this case checks for the overall margin on preamble length.
The screenshot below was taken with the DSA detection hopping scheme and the following parameters: 5 channels starting from 916 MHz, 10 kbps 2GFSK H=1 modulation, and a 7-byte preamble at the Tx side.
On the plot above I measured 32 Tb back from sync word detection, counting 16 Tb for the sync word and another 16 Tb for the worst case detection time.
On this plot I moved the sync word cursor to the rightmost HOPPED trace to measure the margin on preamble length; I got 5 Tb (or 9% of the 7-byte preamble); comfortable enough.
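The DSA-scheme numbers work out as follows (same style of calculation as in Test#2, now at 10 kbps):

```python
# DSA-scheme measurements: look-back window and preamble margin.
sync_word_Tb = 16
worst_case_detect_Tb = 16
lookback_Tb = sync_word_Tb + worst_case_detect_Tb   # 32 Tb back from sync detect

preamble_bytes = 7
preamble_Tb = preamble_bytes * 8                    # 56 Tb of preamble on air
margin_Tb = 5                                       # measured on the scope
margin_pct = 100 * margin_Tb / preamble_Tb          # ~9% of the preamble

print(lookback_Tb, round(margin_pct))               # 32 9
```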
You can see the power of the DSA detection scheme here: with preamble detection one would need 4 bytes of preamble to detect a signal on a single channel, whereas with DSA we can scan 5 times as many channels with less than twice the preamble length; quite attractive!
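A back-of-the-envelope comparison based on the figures above (4 bytes per channel for plain preamble detection is the article's number; the naive scaling to 5 channels is my extrapolation):

```python
# Preamble budget: plain preamble detection vs. DSA, per the figures above.
plain_bytes_per_channel = 4      # bytes needed to detect on ONE channel
channels = 5

# Naive extrapolation: scanning 5 channels with plain preamble detection
# would scale the preamble requirement with the channel count.
plain_scaled_bytes = plain_bytes_per_channel * channels   # 20 bytes

# DSA: the same 5 channels are covered by a single 7-byte preamble,
# i.e. less than twice the single-channel requirement.
dsa_bytes = 7
print(plain_scaled_bytes, dsa_bytes / plain_bytes_per_channel)  # 20 1.75
```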
This article is part of a series that discusses various aspects of auto frequency hopping. Find the links to the other articles below.