I'm using the WMB930 evaluation card with a Si4463 transceiver daughter card. I generated the application code automatically from WDS for the bi-directional packet application. Then I'm inserting my own application code, which mostly hands the packet off to a PC via a UART connection. When I retrieve the received packet from the buffer using the API buffer "customRadioPacket" in radio.c, there is always an extra nibble at the start of the buffer, and the actual packet is shifted by 4 bits, truncating the last 4 bits. I cannot find where this nibble is coming from and have tried various changes to radio_config.h (via WDS) without any luck. Does anyone have any insight into this issue? Is it some kind of indicator for RSSI or something? Is there some setting in WDS that I'm missing? Yes, I verified it's really in the radio buffer and not due to some UART problem. Thanks in advance.
Well, I messed around with the WDS configuration some more and that seems to have resolved the issue. Not sure what the magic bullet was, but I changed the preamble length and sync word. I think the packet handler was treating part of the preamble as the sync byte and, in turn, part of the sync byte as data. I've attached the .xml files from WDS for the working and the non-working configuration if anyone wants to take a look.
si4463_revb1_bidirectional_packet.xml - has the issue
si4463_revb1_bidirectional_packet_1200bps_2400dev_optimized_radio&preamble_params.xml - appears to work
I'm trying to optimize this for long-distance, low-data-rate communication, but the physical layer of the chip is so poorly documented it's driving me nuts. You really have no idea what effect some of the WDS or radio_config.h parameters will have on performance or functionality. I'm ready to dump this kit in the trash for another vendor that just has a simple memory-mapped set of configuration registers with a clear data sheet.
Discovered it was due to the preamble and sync config. Long story short: I'm adding forward error correction to the packet, and I want the preamble and sync to be tolerant of about a 10% BER. So I shortened the sync to 1 byte and allowed 2 bit errors. I also changed the sync word from 0x2D to something else. Big mistake. The physical layer would detect a false sync at the preamble-to-sync transition. Imagine an 8-bit sliding window moving over the transition, with any two bits allowed to be inverted, and you can see how a false sync pattern could be encountered. 0x2D appears to be optimal in the sense that it does not allow a false sync when 2 bit errors are permitted in the sync word.
I ended up writing a MATLAB script to prove it to myself. It's attached for your enjoyment, but you have to change it back to a .m file. You can call it like this for the case under consideration:
ew = sync_analyzer( 'AA','2D',2);
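For anyone without MATLAB, here is a rough Python sketch of the same check. To be clear, this is my own reconstruction of the idea, not the attached script: the function name, the two-preamble-byte stream length, and the "minimum distance" return value are all assumptions. It slides an 8-bit window over a bit stream of repeated preamble bytes followed by the sync byte (MSB first) and reports the smallest Hamming distance to the sync word at any position before the true sync boundary. If that minimum exceeds the allowed bit-error count (2 in my case), no false sync is possible at the transition.

```python
def min_false_sync_distance(preamble_hex, sync_hex, preamble_bytes=2):
    """Minimum Hamming distance between the sync word and any 8-bit
    window starting before the true sync position, for a stream of
    repeated preamble bytes followed by the sync byte (MSB first)."""
    pre = int(preamble_hex, 16)
    sync = int(sync_hex, 16)

    def byte_bits(b):
        # Expand a byte into 8 bits, MSB first (over-the-air order).
        return [(b >> (7 - i)) & 1 for i in range(8)]

    stream = byte_bits(pre) * preamble_bytes + byte_bits(sync)
    sync_bits = byte_bits(sync)
    true_offset = preamble_bytes * 8  # where the real sync word begins

    # Hamming distance of every window that starts before the real sync.
    dists = [
        sum(a != b for a, b in zip(stream[off:off + 8], sync_bits))
        for off in range(true_offset)
    ]
    return min(dists)


# 0xAA preamble with 0x2D sync: the closest false window is 3 bits away,
# so allowing 2 bit errors in the sync word still cannot false-trigger.
print(min_false_sync_distance('AA', '2D'))  # -> 3

# A poor choice like 0x55 matches the preamble pattern exactly (distance
# 0), so it would false-sync immediately even with zero errors allowed.
print(min_false_sync_distance('AA', '55'))  # -> 0
```

This agrees with the observation above: with a 0xAA preamble and a 2-bit error allowance, 0x2D leaves a 1-bit safety margin at the transition, while sync words resembling the alternating preamble pattern have no margin at all.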