While working at Savoir-faire Linux (SFL) in Montreal, I was commissioned to perform some Linux experimentation on an old SBC, using Buildroot, the PREEMPT-RT kernel build option, and FTrace. The first two articles have been posted on SFL’s web site – part 1 and part 2.
Although the work dried up and they let me go, I still believe in the folks at SFL and their work.
SMS text messages are historically limited to 160 characters (70 if the message uses Unicode). Years ago, devices would refuse to send a longer message, forcing users to break the message up themselves. These days, however, most devices allow a user to compose a much longer message.
So, what happens when you send a longer message?
There are two ways to handle SMS messages of excess length:
1. Split the text into multiple shorter messages, which the receiver may reassemble into one message or present as separate messages. Each piece is shortened by a few characters to make room for a reconstruction header that the receiver uses to put the original back together.
2. Convert the text to an MMS message, which is actually delivered over the data plan.
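For the curious, the splitting arithmetic is simple. Here's a sketch, assuming the usual GSM-7 limits: 160 characters for a single message, 153 per part once the reconstruction header takes its cut:

```python
def split_sms(text, single=160, segment=153):
    """Split a message the way concatenated SMS does.  A message that
    fits in one SMS (160 GSM-7 characters) goes as-is; longer ones are
    chopped into 153-character parts, the missing characters paying
    for the reconstruction header on each part."""
    if len(text) <= single:
        return [text]
    return [text[i:i + segment] for i in range(0, len(text), segment)]
```

For Unicode (UCS-2) messages the corresponding limits drop to 70 and 67.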
What is this MMS of Which You Speak?
MMS is an alternative messaging format that can also carry images, sound files, etc. On most devices, if you try to send a text with a picture or other media file, the entire message is automatically upgraded to MMS. As mentioned, the content travels over the data plan, while the SMS channel carries metadata that advises the receiving device to load the actual message using its data plan.
“Legacy” SMS Support on VoIP.ms
Up until August 2020, VoIP.ms did not support MMS, and neither did the two most common means of sending and receiving text messages on their platform. They do provide an API against which you can write your own interface (as Michael did for item 1 below).
1. The Android application VoIP.ms SMS is an open source project developed by Michael Kourlas, an independent programmer. It supports only SMS and has never done MMS. The application also relies on his servers to transfer data – a service I'm thankful for!
2. The “legacy” web application has no MMS support.
In both “legacy” programs,
1. If you try to send a long message, it splits it up automatically, according to the standard.
2. You can’t send any media.
3. If someone sends you a long text message, it’s a crap shoot – if it is split up by the sender, the pieces come through. If it isn’t split up, the message is just silently lost.
4. If someone sends you media, the message is just silently lost.
“New” MMS Support on VoIP.ms
A few months ago (around the end of August), VoIP.ms started supporting MMS using a new web interface from the main page -> DID Numbers -> SMS/MMS Message Centre, or directly at the SMS/MMS portal. Here, you can do full SMS and MMS send and receive.
Now, can you use VoIP.ms as a “complete” alternative to your cell phone texting? Well… yes, and no.
Upgrade for the Android Application Some Day?
The old “legacy” app and web site still work, but they have the same limitations that they always had. This is unfortunate, especially for the Android app – it sure would be nice to have full MMS support on mobile devices. I’ve contacted Michael and asked if he planned to support it eventually, and he said “yes”, but had no further details. I thought I might assist, but when I looked at the code… well, it’s Android code, and it’s pretty opaque.
Not All Providers Can Send to the VoIP.ms Text System?
Yes, if it works, then you are away to the races. I use it all the time to keep contact with folks in the US and elsewhere.
The interesting thing is that some texting systems will not send to the VoIP.ms text system, and I'm not sure why. There is some kind of “provider matrix” used by each telco for delivery of their texts to another telco, and for whatever reason, VoIP.ms isn’t on some lists. For instance, my bank in the USA tries and tries to send SMS texts to my VoIP.ms number to confirm my login, but it never works. The telco automation provider Twilio sometimes has trouble too.
It all seems kind of hit-and-miss. Maybe harassing the VoIP.ms guys would get them to chase it down and get onto those “choice matrix” lists, but I have not tried.
Text Message Provider Matrix
This text message “provider matrix” thing has been around for a long time. A long time ago, I found that every telco has an incoming E-mail to text message portal – and found out that you can just carpet bomb all of them with telephone_number@each_telco. Those telcos who don’t have the specified number silently ignore the message. By extension, the connection between telcos is probably done the same way. If VoIP.ms is not on a sender's list, it never gets the message, and you never receive the text.
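To illustrate the “carpet bomb” idea, here's a sketch. The gateway domains below are examples only – every telco runs its own portal, and the addresses change over time, so don't treat this list as authoritative:

```python
import smtplib
from email.message import EmailMessage

# Example gateway domains only -- every telco runs its own portal,
# and these change over time.
GATEWAYS = ["txt.att.net", "tmomail.net", "vtext.com", "msg.telus.com"]

def gateway_addresses(number, gateways=GATEWAYS):
    """Address the same number at every telco's E-mail-to-SMS portal.
    Telcos that don't own the number silently ignore the message."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return [f"{digits}@{gw}" for gw in gateways]

def carpet_bomb(number, body, smtp_host="localhost"):
    """Send one message to all the gateways at once (this needs a
    reachable SMTP relay at smtp_host)."""
    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = ", ".join(gateway_addresses(number))
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(msg)
```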
Other Means of Getting Your VoIP.ms Text Messages
You can also have text messages forwarded to a “real” cell phone, but remember that any reply will come from that “real” cell phone – your correspondents will send to one number and get replies from another. It's worse if that number is out of country: your reply goes out from your “real” cell phone, and you get charged for it.
You can also have incoming text messages bundled into an E-mail, but I'm not sure how you would reply to that.
So, here is how to set up VoIP.ms for SMS texting and call forwarding.
Important Term “DID”
The first thing to know is a specific term. DID, or “direct inward dial”, is your telephone number that can be called. In other contexts, it can mean the phone that will be connected to, when someone dials that number. In our context, it will just be the number.
Why the distinction? Outbound calls use a different system. Simple as that. Leave that for another time.
Follow the Money?
Bear in mind that, although the base cost for each DID is ridiculously low, nothing is free. SMS texts cost something like ¾ cent each, inbound and outbound calls cost by the minute (something around 1 cent per minute). I find this acceptable, because there’s no way that I could ever even come close to the cost of my old phone bill!
OK, first thing is to create your account and fund it. All amounts are in USD. I would suggest using PayPal to fund it, and I’d suggest putting in US$20 or US$30 to start. You can set an “alarm” on your account to send an E-mail when your balance falls below a certain amount.
Once you’ve got your account and it is funded, then for the task at hand, here are the main management menu items you will use.
Set Up Call Forwarding Target
First, set up a call forwarding target. Select “Call Forwarding” and create an entry pointing to where you want calls to be potentially redirected to. You can create more than one – you can select which one is the actual target, for each DID.
Here’s a sample entry screen. You don’t have to touch anything else except to put the 10 digit phone number of the target to forward calls to.
Ordering Arbitrary DID(s)
If you don’t have any DID(s) yet, you will have to go to “Order DID” first, and create them. The word “order” is a bit of a misnomer, because it’s all automatic and practically immediate. You can create as many as you want, and it’s quick. You can pick a telephone number in pretty well any area code in North America, and some numbers overseas. Be conscious of cost – they don't all carry the same monthly rate or the same inbound and outbound per-minute charges.
Porting Your Number In
You can also “port” your existing number from a cell phone or landline phone carrier to VoIP.ms. It’s a bit of a process – not that hard, you just have to read the procedure and go through the steps. The telcos are anal about making sure you follow the steps – they are trying to prevent port-out fraud, which has happened in the past, with disastrous consequences – think “SIM hijacking”, not nice.
Anyway, it is standard practice these days to set a port PIN on any mobile DID. This is wise to do. Be sure to keep it private. If set, then without this PIN, port requests are ignored. Of course, keep track of that port PIN, or else you won’t be able to perform a port either 😊 Keep these things in your password manager (use LastPass – don’t pass “go”, don’t do anything else – just do it).
When you port your cell phone number, be sure to indicate that it’s a mobile/cell phone.
Managing the DID Settings
Anyway, once you have any kind of DID in your account, go to “Manage DID(s)”:
On the left, under “Actions”, you will see three coloured icons – an orange pencil & paper (edit this DID's settings), a blue paper with lines on it (a read-only view of this DID's settings), and optionally a green cell phone, which indicates that this DID supports SMS & MMS.
Click the orange pencil & paper icon, which should bring you to this screen:
Select “call forwarding”, and if necessary, drop the selection box and choose where to route the call.
Scroll down and choose the DID point of presence:
This is simply the Internet server location that you will connect to when you come around to using a VoIP phone. I would select one close to your primary use location. You can change it later, but for your VoIP phone to work (inbound and outbound), your VoIP phone must point to the same server name.
Continue to scroll and you will see the SMS settings. Above them are a few key settings related to the cost of calls; review each one.
For SMS, you have to “enable SMS/MMS” and, if you want SMS/MMS forwarding, select that option and enter the 10-digit target telephone number here as well.
Installation of Android App to Support Near-Native Texting
There are a few limitations, but it works very well for me. It’s how I keep in touch with my friends in Phoenix (and formerly of Phoenix 😊). I ported my US cell phone number to VoIP.ms and use this app to text with them.
Now, the difficult part – setting it up. The app is open source, and its help page looks like this:
You have to enter that string into the DID’s “callback” entry (see above), then enable the API connection back on VoIP.ms; see below.
Enabling the API Connection
From the VoIP.ms main page, select “Main Menu”, then “SOAP and REST/JSON API”:
Put in an API password (this will be what you give to the Android app, above), enable the API, and ensure that the “Enable IP Address” is set to 0.0.0.0. You can restrict the IP address here, if it is well defined and won’t change.
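If you'd rather script it than use the portal, here's a minimal sketch of sending a text through the REST API. I'm assuming the method name and parameters (`sendSMS`, `did`, `dst`, `message`) match the current VoIP.ms API docs, so verify before relying on this:

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://voip.ms/api/v1/rest.php"

def sms_request_url(api_username, api_password, did, dst, message):
    """Build the REST request URL for method=sendSMS."""
    params = urllib.parse.urlencode({
        "api_username": api_username,
        "api_password": api_password,   # the API password set above
        "method": "sendSMS",
        "did": did,                     # your DID (the 'from' number)
        "dst": dst,                     # destination number
        "message": message,
    })
    return f"{API_URL}?{params}"

def send_sms(api_username, api_password, did, dst, message):
    url = sms_request_url(api_username, api_password, did, dst, message)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)          # e.g. {"status": "success", ...}
```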
Reyax RYLR896 module (I have 3 of them) – contains the RYLR890 RF module (which apparently contains a Semtech SX1276 RF chip) plus an STMicro STM32L151C8T6A MCU. It has a TTL async interface, so I used a USB-to-TTL serial adapter to talk to it. Communication is reliable but weird – it uses an “AT” command set. I see this as a possible replacement for the EOL’d Linx Technologies FHSS radios on the RMM – but the command protocol is different, so I’d have to rework the Microchip PIC18LF4321 MCU code. To replace the CM, I’d use this module plus, say, the Silicon Labs EFM8UB2 Universal Bee to translate the weird commands and emulate the old Linx FHSS radio. The EFM8UB2 has a built-in USB interface, SPI, an 8051 core, and lots more – I used it in the DC Module at ERLPhase, and I like the chip. I have the dev kit for it.
Ronoth LoStik (I have 2 of them) – contains a Microchip RN2903 RF chip plus a USB interface. I got the two of them talking to each other – that was easy. I’d love to use this to replace the Collector module, but its settings are very different from the Reyax RYLR896’s, and I haven’t been able to make the two intercommunicate. The internal design and internal software do not appear to be open source.
HopeRF RFM95W little LoRa module (have 2) – pretty small, almost small enough to use on a puck – castellated “postage stamp” SMT mount. Apparently it has a Semtech SX1276 RF chip inside – which should make it compatible with the Reyax RYLR896… but it needs synchronous serial (SPI), and I have no easy way to talk to it.
I retrieved my personal Raspberry Pi 3 B+ from the office – where I had it placed to facilitate my staff being able to navigate to an important source scanning site from their homes – no longer required because the local IT folks spun up an Ubuntu virtual machine to use instead. I put the Waveshare SX1262 LoRa HAT for Raspberry Pi onto my Raspberry Pi, and gave it a try. Well, I had to remove the Raspberry Pi from its nice little plastic Adafruit box, because the LoRa HAT interfered with the internal ribs.
Anyway, after some fiddling, I was able to get the first stage of talking to the LoRa HAT to work… but after ser.inWaiting() said there were characters waiting at the port, the subsequent ser.read() call caused an unceremonious abort. No message, no nothing. I wasn’t happy with this, but after playing around for a bit, I abandoned my efforts. Oh, well.
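If I were to try again, I'd wrap the reads defensively. A sketch – `port` here is any pyserial-like object, and `in_waiting` is pyserial's newer spelling of `inWaiting()`:

```python
import time

def read_available(port, timeout=1.0):
    """Drain whatever the port has within `timeout` seconds, guarding
    each call -- on my setup, ser.read() after ser.inWaiting() reported
    pending characters aborted outright, so contain the blast."""
    buf = bytearray()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            n = port.in_waiting           # pyserial's newer name for inWaiting()
        except OSError:
            break                         # port vanished or driver hiccuped
        if n:
            try:
                buf += port.read(n)
            except Exception:             # the unceremonious abort, contained
                break
        else:
            time.sleep(0.01)
    return bytes(buf)
```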
So, overall with the Waveshare SX1262 LoRa HAT for Raspberry Pi, it seems that the mode pins are important and have to be twiddled in order for the board to work. That’s probably why I couldn’t make it work over USB from my LINUX computer. That’s a problem for another time, perhaps.
Back to the Waveshare SX1262 LoRa HAT for Raspberry Pi. By manually twiddling the pins, I can talk to the RF module in configuration mode, and I got it to respond. There seems to be a “temporary” configuration and a non-volatile configuration. Even when I’ve written the parameters I want to non-volatile configuration, then unplugged, switched the mode bits to talk through the radio, and set the Reyax RYLR896 module chattering away beside it, I get nothing.
What I have is one Reyax RYLR896 module set as Network ID 6, Node ID 10, alternating transmissions to Node IDs 20 and 30 (yes, in decimal – I checked). Then I have a second Reyax RYLR896 module set as Network ID 6, Node ID 20, receiving the alternate transmissions. So, I set the Waveshare SX1262 LoRa HAT for Raspberry Pi to what I think is the same frequency, and as Network ID 6, Node ID 30 (0x1E)… and receive nothing.
I’m not absolutely sure they are set to the same transmission parameters. I see no way on the Waveshare LoRa HAT to set Bandwidth, Spreading Factor, Preamble, or Coding Rate. Hmm, and the frequency is 850.125 MHz + ((0 to 80) x 1 MHz) – so I set it to 915.125 MHz. Does the 0.125 MHz offset matter? Argh!
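For reference, here's roughly how I framed the RYLR896 side of that setup. The command names (`AT+NETWORKID`, `AT+ADDRESS`, `AT+BAND`, `AT+SEND`) are from the Reyax manual as I read it – double-check against your firmware version:

```python
def at(cmd):
    """RYLR896 commands are case sensitive and must end in CR+LF."""
    return (cmd + "\r\n").encode("ascii")

# The setup described above, for the transmitting module: Network ID 6,
# this node is ID 10, 915 MHz band.
SETUP = [at("AT+NETWORKID=6"), at("AT+ADDRESS=10"), at("AT+BAND=915000000")]

def send_to(node_id, payload):
    """AT+SEND=<address>,<length>,<data>; length is the payload byte count."""
    return at(f"AT+SEND={node_id},{len(payload)},{payload}")
```

Write each of these to the serial port in turn (115200, 8N1) and wait for the `+OK` before sending the next.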
The EByte E22-900T22S RF module, which is the RF module on the Waveshare LoRa HAT, has rather sparse documentation. It documents only 9 registers to set (although it does show all the bit settings), from locations 0x00 through 0x08, and then another 7 bytes of identification at 0x80. I cycled through the entire space from location 0x00 through the end of the identification. I found several more non-zero values up to location 0x17. Perhaps they hold the key to interoperation with the Reyax RYLR896… then again, maybe not.
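The cycling itself is trivial once you know the read command. A sketch, assuming I read the E22 manual right (0xC1, start address, count, with the module in configuration mode via the M0/M1 pins at 9600 8N1):

```python
def read_regs_cmd(start, count):
    """E22 configuration-mode register read: 0xC1, start address, count
    (as I understand the E22-900T22S manual -- verify before trusting)."""
    return bytes([0xC1, start, count])

def parse_read_reply(reply):
    """The reply echoes 0xC1, start, count, then the register bytes."""
    if len(reply) < 3 or reply[0] != 0xC1:
        raise ValueError("bad reply")
    count = reply[2]
    return reply[1], reply[3:3 + count]

def dump_plan(last=0x17, chunk=8):
    """Read commands covering 0x00 through `last`, in chunks -- how I
    cycled through the register space."""
    cmds, addr = [], 0
    while addr <= last:
        n = min(chunk, last - addr + 1)
        cmds.append(read_regs_cmd(addr, n))
        addr += n
    return cmds
```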
I went to the datasheet for the Semtech SX1262 radio chip that’s inside the EByte E22-900T22S RF module, just to see if I could infer more from that – if the existing E22-900T22S commands were reflected in the SX1262, there might be details of other commands. Apparently not; the structure is completely different. It makes sense, because the SX1262 communicates solely by SPI interface… and the E22-900T22S talks by async serial – so there must be a chip inside to do the interpretation, and perhaps other things in the protocol. I do seem to recall seeing something about this MCU, but from the EByte E22-900T22S documentation, it does not appear to be exposed to the outside world, so working on that could be a severely uphill battle.
Oh well, I’m honing my Python serial skills, and almost ready for the other modules to arrive – Seeed Systems Dragino LoRa/GPS HAT for Raspberry Pi should be here this week – the package is currently in Cincinnati, OH, at the DHL warehouse. Apparently, the two pieces should arrive on Tuesday. I hope so – I had to pay duties & fees on it 🙁 I will still have to switch the radio module on both pieces – although I might test it at the (wrong for North America) frequency of 868 MHz first.
The Ronoth LoDev has not shipped from CrowdSupply after a week; I suppose that means that it’s not in stock in their warehouse… or maybe this COVID-19 trouble has got them waylaid. That’s unfortunate, because I see the S76S module on the LoDev as the ultimate long-term solution – it has the MCU on-board, I can program it, it has a good example, and can support USB – well heck, maybe the LoDev itself would be the CM replacement. Anyway, it could work well in both the RMM and the remote in-the-ice measurement “puck”.
The RFM95W Approach
The Seeed Systems Dragino LoRa/GPS HAT for Raspberry Pi has the European frequency LoRa chip on it, but the same footprint as the North American frequency HopeRF RFM95W module, which I have here. I can switch the modules so that the unit is on North American frequencies, then put this onto the Raspberry Pi, load up the GitHub code to do LoRa, and try to talk to the Reyax RYLR896. I’m not sure this will lead to a viable solution – it would be more economical to buy Reyax RYLR896 modules and add the EFM8UB2 Universal Bee to translate between the silly “AT” commands and the old Linx Technologies FHSS protocols. I’d like to replace the firmware on the STMicro STM32L151 MCU on the RYLR896 board, but the available code doesn’t seem complete – it looks like it might be a bear to get up and running… still, it might be worth the trouble if I could reprogram the STM32L151 MCU to do the whole job and emulate the Linx Technologies FHSS protocols directly (instead of “AT” commands).
The S76S Approach
Assuming that I can get the LoDev from Ronoth to talk, and can program the MCU inside of it, then I could develop a version that would emulate the old Linx Technologies FHSS protocols inside the MCU. That MCU has USB capability, so it could be a single-chip solution for a revised Collector Module (CM) and still talk to the legacy MS-Windows software using this emulation. On the other hand, it also has async serial capability, so it could be a single-chip solution to replace the Linx Technologies FHSS modules inside the Remote Measurement Module (RMM) without changing the programming of the Microchip MCU inside it, again using this emulation. Thirdly, the S76S chip has I2C and SPI capability, and it’s tiny, so with a temperature sensor, it could become the basis of a new “puck” to be buried in the ice as well.
Common Ground between the Approaches
Since both the Reyax RYLR896 and S76S modules have the Semtech SX1276 RF chip inside, it’s also possible that they could both be interoperable with each other… although, based on my recent experience, I won’t hold my breath on that. Having options is always a good thing!
Beating Up on Some Existing Ones a Bit More
I might work on the Waveshare SX1262 LoRa HAT for Raspberry Pi a bit more. I seem to recall that I got it to answer to my settings etc., but upon further review of the Python code, maybe I didn’t 🙁 I should review again.
A second review of the Ronoth LoStik shows that its settings aren’t so different from the RYLR896’s. Maybe I need to revisit this one.
STMicro ST-LINK Debug Adapters
I’ve also got 2 different ST-LINK/V2 debug adapters – one “official” STMicro one, and one cheap knock-off (because it arrived quicker and was really inexpensive), so if I could figure out how to set up the whole STM32L151 development environment to reprogram the Reyax RYLR896, I could do that.
The LoRa modules came in: 1 x Reyax RYLR896 module, 1 x Ronoth LoStik USB LoRa stick. I realized that I didn’t have any TTL-level serial devices near at hand to talk to the RYLR896, so I ordered 2 x Covvy CP2102 USB to 3.3V/5V TTL serial devices.
There are some Python code examples available for the LoStik on github, so I snagged them and gave them a run. The easiest one is to toggle the on-board LEDs, and that worked fine, but the transmit/receive didn’t do much of course, because I don’t yet have anything set up to transmit/receive with! Sigh.
I did set up a Raspberry Pi 3+ to talk to the LoStik though – with WiFi access, VNC desktop, and full Python implementation.
Once the CP2102s arrived, I wired one up to the RYLR896, plugged it in, found that it enumerated as /dev/ttyUSB0 on my system, and fired up minicom with the default settings of 115200 bps, 8/N/1. Sure enough, I could get a response from the RYLR896, but it constantly said “+ERR=1” with every character. I’m pretty sure that it timed out between characters. Try as I might, I couldn’t get minicom to batch up the characters and only send when I hit <RETURN>. Well, that, and it was confusing how to get <CR><LF> at the end of the line… let’s see… stty cooked </dev/ttyUSB0… what else??? Argh.
So I installed and tried PuTTY. It’s in the repos, apparently! Wow. Same thing, argh.
Then I tripped over a YouTube video showing use of the RYLR896, and it showed a terminal emulator called CoolTerm, available for LINUX, Mac and MS-Windows. Sure enough, it’s the real deal, with a GUI, and the right options to allow “Line Mode”, Local Echo, and CR+LF on <ENTER>. It worked!
The RYLR896 “AT” commands are case sensitive.
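CoolTerm's “Line Mode” is really the whole trick – batch up a complete command, then send it with CR+LF in one shot, so the module's inter-character timeout never fires. A pyserial-style sketch of the same idea (`port` is an open 115200 8N1 Serial object, or anything with the same `write`/`read_until` methods):

```python
def line_mode(lines, port):
    """Send each command as one complete line ending in CR+LF, then
    read one reply line back.  Sending whole lines at once is what
    keeps the RYLR896 from timing out between characters ("+ERR=1")."""
    replies = []
    for line in lines:
        port.write(line.rstrip("\r\n").encode("ascii") + b"\r\n")
        reply = port.read_until(b"\r\n")
        replies.append(reply.decode("ascii", "replace").rstrip())
    return replies
```

With a real port: `line_mode(["AT"], serial.Serial("/dev/ttyUSB0", 115200))` – and remember the commands are case sensitive.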
Now, to see if I can get the LoStik to talk to the RYLR896…
I’ve ordered a second LoStik and an SX1262 LoRa HAT for the Raspberry Pi.
Filipe and I have been looking into expanding and extending the Eye on the Ice(tm) system for Hans Wuthrich of Ice Consultants International. The original system was developed around 2008/2009 by my team at Norscan Instruments Ltd., back when we were diversifying our product offering. I was Product Development Manager with a fantastic group of very talented people working with me. They did a great job of the design of the Remote Measurement Module (RMM), the Collector Module (CM), and the MS-Windows software that went with it. It’s a great system!
Alas, RF modules eventually go obsolete, and the ones used in the Eye on the Ice system will likely soon be unavailable. They might be around for a few years, who knows, but the writing is on the wall – technology changes, RF emissions regulations tighten, and products change.
Last fall, and into the new year, Filipe and I did a proof-of-concept for an in-ice Eye on the Ice sensor which would use the Silicon Labs Zen Gecko EFR32ZG14. Although the system did work, we found that the low-level control of a mostly-asleep sensor was going to be a huge job. The examples given, and the information provided, just weren’t enough to get us over the challenge. In addition, the Z-Wave devices appear to “retry themselves to death” if the central station goes away – not a good thing for a low-cost, sealed temperature sensor device.
One thing that looks like an interesting alternative is the LoRa system – long range, low data rate, adaptable for different jurisdictions. Probably an excellent alternative for transmission of data like Eye on the Ice.
The Reyax web site has a few interesting LoRa modules. Their modules use “AT commands” to set centre frequency, spectral parameters, and data rate. With that in mind, these modules could almost be a drop-in replacement for the present Linx FHSS modules that Eye on the Ice uses!
Then I found the HopeRF series of LoRa modules – very interesting, especially the RFM95W. It looks like it’s just what we need. It can do simpler modulation as well, just by flipping a bit in its configuration. You would lose all the error correction, hyper-sensitivity, and built-in spread spectrum, but it would be an easy way to establish simple communication. There seems to be plenty of documentation on the chip, the RF95. If you Google “HopeRF RF95”, you will find RFM95_96_97_98W.pdf on the SparkFun web page, which gives comprehensive documentation of the chip, its operation, and its registers. I suspect SparkFun stocks them, and Digi-Key has them as well. There appears to be a nice demonstration kit, the RFDK_RFM95, but I can’t seem to find a vendor… GorillaBulderz in Australia lists them but has no inventory, and no indication of when it will be in stock.
I’ve been playing, on and off, with IRIG-B decoding – first, modifying NTP source code to continuously dump decoded data from what it reads on the audio input port… then extracting the IRIG-B decoding code to a stand-alone program which would read from the audio input port and output decoded data… then added unmodulated IRIG-B decoding (which was a challenge, due to audio bandpass limitations, but I made it work)… then putting it to an LCD display, then another, then another with keypad…
Through it all, I struggled to find a practical way of making it useful. After a discussion with Norbert of ERLPhase, I think maybe we came up with something.
Mobile Device Use?
Rather than expect someone to load up LINUX onto their laptop, drag the laptop to the site where the IRIG-B source is, and use a special cable to connect to the audio input port… why not use the mobile device that everyone seems to carry around with them?
I briefly tried to capture the IRIG-B signal (modulated and unmodulated) into a mobile device, without success 🙁 I tried both my Samsung Galaxy S5 (SM-G900T) and my Samsung Galaxy Tab S2. No luck. My physical setup might have been a bit precarious – I found out later that my audio jack connection may have been suspect – but the result, even when I did seem to get signal, was not good. A severely attenuated low-frequency response (basically gone below about 100 Hz) made it almost impossible to decode the audio captured by either device.
Create a USB OTG Device?
Having used the Silicon Labs’ Universal Bee EFM8UB2 processor last year while working at ERLPhase, and after talking to Norbert… why not capture the IRIG-B signal on the UB2’s analog input port, and then funnel it to any customer’s mobile device through the UB2’s well-integrated USB port? After all, most modern Android devices support USB On-The-Go, where the mobile device can act as a USB host (like a computer) or a USB device (like a USB flash drive, camera, or MP3 player).
I thought maybe the EFM8 could do simple 8 ksamples/second signal acquisition (the sample rate used by NTP and later read_irig programs), then send packets of data to the mobile device, where it would be saved to a file that would be submitted for post-acquisition analysis. This way, the burden of processing would be moved from real-time to remote post-acquisition, maximizing the likelihood of successful data capture.
The SLKSTK has a neat little graphical LCD display and a few buttons on it, and one example supplied was a simple oscilloscope program EFM8UB2_Oscilloscope which could acquire data on an analog input at 24 to 500 ksamples/second. I created a custom version of this software locked at a sample rate of 8 ksamples/second with favourable amplitude and trigger settings. I was readily able to see unmodulated IRIG-B waveforms! Unfortunately, the EFM8 device only does unipolar conversions, and SLKSTK doesn’t have any circuitry to enable bipolar input – so modulated IRIG-B would only show half cycles.
I introduced a crude offset by wiring a 47k bias resistor to the analog input from the 3.3V supply bus. Together with the 10k-22k-10k divider chain on the input, this gave a reasonable DC offset, so both modulated and unmodulated IRIG-B could be seen.
A second example supplied was a program to emulate a Silicon Labs CP210x serial port on the USB interface, EFM8UB2_VCPXpress_Echo, which performed a simple reflection echo of characters sent out.
To prove that it was working, and not just a local echo from minicom – ugh sometimes minicom frustrates me – I modified the Echo program to echo every character twice, then follow every character with an arbitrary string “Burp!” and carriage return. It took a bit of doing, but it worked.
Oscilloscope with added VCPXpress USB Transmission – Failed
Well, I had the 8 kS/S data acquisition in my modified Oscilloscope project, so I mashed it together with the VCPXpress libraries… and got crap. It seems as though the whole thing messed up the link process so badly that symbols resolved, but overlapping memory areas caused unpredictable behaviour… and it would not run. There was just too much gratuitous complexity in the Oscilloscope project. So, I thought I would work the problem from the other end.
Starting with VCPXpress
I started with EFM8UB2_VCPXpress_Echo, putting out canned strings. I wrote a simple Python script on my LINUX machine to accept the strings. That worked.
I added framing and a packet structure to the strings, and decoding into the Python script. That worked too.
I had thought that I’d send the full 10 bit ADC values, packing them as needed into the serial stream. However, the nominal line rate is 115 kbaud, or about 11,500 characters per second. Sending 8 kS/S of 10 bit data would take 10,000 bytes per second. With 5 bytes of overhead per packet, and 50 bytes of data per packet, then this 10% overhead would result in 11,000 bytes per second, not enough margin to make me feel comfortable that it would be a robust transfer.
In fact, I decided to add 2 more bytes of overhead per packet – a running binary sample count, to tell if overflow or underflow had occurred – so the overhead is now 14%.
Now, the original NTP code only used 8 bit samples, and seemed to work just fine. So, why not reduce the data size to 8 bits per sample? Then 8 kS/S of data would be 8,000 bytes per second, and with 14% overhead, still “only” 9,120 bytes per second.
It turns out that the 115 kbaud line rate is conservative, and really only supported to make legacy software happy – the USB connection is far faster and will transfer much more data than the stated rate. So, in the end, with 8 bits per sample, the link is very solid.
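The byte-budget arithmetic above, in executable form:

```python
RATE = 8000                  # samples per second
CAPACITY = 115200 // 10      # ~11,520 characters/s at 115 kbaud, 8N1

def bytes_per_second(bits_per_sample, data_per_pkt=50, overhead=7):
    """Raw sample bytes per second, plus the per-packet overhead."""
    payload = RATE * bits_per_sample / 8
    packets = payload / data_per_pkt
    return payload + packets * overhead

# 10-bit samples with the original 5-byte overhead just about fill the
# nominal line rate; 8-bit samples with 7 bytes of overhead leave margin.
assert bytes_per_second(10, overhead=5) == 11000.0
assert bytes_per_second(8, overhead=7) == 9120.0
```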
Here’s the packet format:
00 --------- SOH
01 --------- TYPE - Echo of Command (presently fixed)
02 --------- DATLEN - Data Length (binary)
. +- DATLEN bytes of binary data
DATLEN+3 --- CKSUM - sum TYPE to CKSUM (inclusive) is zero
DATLEN+4 --- EM - end marker - end of frame
The binary data that I typically send is 52 bytes total, including 50 bytes of analog data:
00 --------- Acquisition Count High Byte
01 --------- Acquisition Count Low Byte
02 --------- First Data Sample
DATLEN-1 --- DATLEN-2th Data Sample
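In Python terms, building and checking a packet looks roughly like this. The actual SOH and EM byte values aren't given above, so the constants here are placeholders:

```python
SOH, EM = 0x01, 0x04   # placeholder values -- the format above names
                       # the markers but doesn't pin down their bytes

def build_packet(ptype, data):
    """SOH, TYPE, DATLEN, data..., CKSUM, EM.  CKSUM is chosen so the
    byte sum from TYPE through CKSUM inclusive is zero (mod 256)."""
    body = bytes([ptype, len(data)]) + bytes(data)
    cksum = (-sum(body)) & 0xFF
    return bytes([SOH]) + body + bytes([cksum, EM])

def parse_packet(pkt):
    """Validate framing and checksum, then peel off the two-byte
    running sample count that leads the data."""
    if pkt[0] != SOH or pkt[-1] != EM:
        raise ValueError("bad framing")
    if sum(pkt[1:-1]) & 0xFF:
        raise ValueError("bad checksum")
    datlen = pkt[2]
    data = pkt[3:3 + datlen]
    count = (data[0] << 8) | data[1]
    return pkt[1], count, data[2:]   # TYPE, sample count, samples
```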
Data Acquisition and Transmission
I added code to perform the 8 kS/S ADC acquisition – directly timer-driven for jitter reduction, with an interrupt at the end for buffer stuffing. The data was reduced from 10 bits to 8 bits at interrupt level. Two swing buffers were employed, with main-line code creating the packet and firing it off to the VCPXpress code for transmission.
Data Reception and Processing
I modified the Python script to receive the analog samples and stuff them into a list, keeping track of whether packets were valid and all samples were present, then writing them to a CSV for processing in a spreadsheet. I was able to verify that the data was transferred intact and complete, with reasonable fidelity.
I then proceeded to decode the unmodulated IRIG-B signal bits as (0/1/Position Indicator/Invalid), and add them to the CSV file output. Then, I decoded the signal bits and composed a string of characters to represent each one-second frame of IRIG-B data. Lastly, if the frame met proper format criteria (PIs in the right places, 1/0s in the right places, not too many bits), I pulled the data out and performed full decoding just as I had done with read_irig in the past. I also added this to the CSV file output.
I actually created three CSV file outputs:
1. Raw data sample file – with extended columns for bit-decode internal data
2. Bit decode sample file – each bit as it was decoded – with an extended column for full decode output when successfully performed
3. Full decode file – a bit string for each one-second frame, plus fully decoded data
It was difficult to consistently get good thresholds for decoding, so I added rescaling. The maximum and minimum values of signal in the file are calculated. An offset is subtracted to centre the signal around zero, then a gain is multiplied to make the signal approximately -3 dB of full scale, or about +/-25,000 counts.
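The rescaling is just an offset and a gain. A sketch, with ±25,000 counts as the target per the above:

```python
def rescale(samples, target=25000.0):
    """Subtract an offset to centre the capture around zero, then apply
    a gain so the peaks land near +/-25,000 counts (about -3 dB of
    16-bit full scale)."""
    lo, hi = min(samples), max(samples)
    offset = (lo + hi) / 2.0
    half_span = max((hi - lo) / 2.0, 1e-9)   # guard against a flat capture
    gain = target / half_span
    return [(s - offset) * gain for s in samples]
```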
Modulated IRIG-B Processing
It was difficult to decide how to work with modulated IRIG-B. The original NTP code did a kind of phase-locked loop decode on the signal, so it could recover the bit stream, but also the precise zero cross time, so the local clock could be time sync’d to the IRIG-B signal. I didn’t want to do this. The code is hard to understand, we don’t need time sync for post processing, and I’d rather not incorporate someone else’s code if I don’t have to.
Instead, I went back to the way that Filipe and I designed modulated IRIG-B processing so many years ago: watching for positive-side pulses where the sinusoid crosses above zero (Polarity), and for "high" amplitude where the sinusoid also crosses above a higher threshold (Peaks). Polarity without a Peak means a low-level cycle; Polarity with a Peak means a high-level cycle.
One problem was an unknown baseline, since, as mentioned, an arbitrary offset had been applied. The maximum and minimum signal values in the file are calculated, and the zero cross is assumed to be at the halfway point between them (the average of maximum and minimum); this gives Polarity.
The amplitude of a low level cycle is defined as 1/3 the amplitude of a high level cycle. The threshold would be halfway between these two levels, or 2/3 the amplitude of a high level cycle, or 5/6 of the way to maximum level –
minimum + (5/6 x (maximum – minimum))
Now, I tracked how many high level cycles in a row happened before low level cycles resumed. This would determine whether a bit was a 0/1/Position Indicator.
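Putting the two thresholds and the run-counting together: the 1 kHz carrier gives ten cycles per 10 ms bit cell, so nominally a 0 shows 2 consecutive high-level cycles, a 1 shows 5, and a Position Indicator shows 8. A sketch under those assumptions (the tolerance windows are mine):

```python
def thresholds(samples):
    """Derive the zero-cross and peak thresholds from the signal extremes."""
    lo, hi = min(samples), max(samples)
    polarity = (lo + hi) / 2.0            # zero cross at the midpoint
    peak = lo + (5.0 / 6.0) * (hi - lo)   # minimum + (5/6 x (maximum - minimum))
    return polarity, peak

def classify_high_run(high_cycles):
    """Map a run of consecutive high-level cycles to a bit value."""
    if 1 <= high_cycles <= 3:
        return "0"            # nominal 2 high cycles
    if 4 <= high_cycles <= 6:
        return "1"            # nominal 5 high cycles
    if 7 <= high_cycles <= 9:
        return "P"            # Position Indicator, nominal 8 high cycles
    return "Invalid"
```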
Tarball and Steganography
I added a feature to roll the three CSV files up into a GZIPped tarball. Because the files are ASCII, they compress well. Then the original CSV files are deleted. A tarball is easier to transfer for post processing.
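The roll-up step is straightforward with Python's standard tarfile module; a minimal sketch, with illustrative file names:

```python
import os
import tarfile

def make_tarball(csv_files, tar_name="irig_capture.tar.gz"):
    """Roll the CSV files into a gzipped tarball, then delete the originals."""
    with tarfile.open(tar_name, "w:gz") as tar:   # "w:gz" = write with gzip
        for path in csv_files:
            tar.add(path)
    for path in csv_files:                        # originals no longer needed
        os.remove(path)
    return tar_name
```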
A tarball might not pass E-mail inspection, so I added a steganography library to encode the tarball into an arbitrary JPEG image. The JPEG image would likely get through the E-mail system more easily. However, this processing took a very long time (over 10 minutes) and created a huge image file (original 100k, over 5 Meg afterwards), so this was abandoned.
Automatic Gain Control
By putting the EFM8 MCU’s VREF on an RC-integrated PWM output, and using the same signal as a DC bias offset on the analog input, I could change the bias and the ADC span under program control.
I added code to the EFM8 to track the maximum and minimum ADC input levels (single ended) and calculate whether the system gain should be increased or decreased to make the span approximately 128 counts, or about half scale. Margin is maintained, but gain is maximized. The system is designed to track within a second or two of amplitude change.
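The gain-tracking decision can be illustrated like this; the real code runs in C on the EFM8, and the names, margin, and step size here are assumptions of mine, not the actual firmware:

```python
def agc_step(span, pwm, target=128, margin=16, step=1, pwm_max=255):
    """Nudge the PWM duty (which sets VREF) so the ADC span approaches target.

    span is the observed max-min of recent ADC readings, in counts.
    """
    if span > target + margin and pwm < pwm_max:
        pwm += step    # raise VREF -> wider full-scale range -> less gain
    elif span < target - margin and pwm > 0:
        pwm -= step    # lower VREF -> narrower full-scale range -> more gain
    return pwm         # within the dead band: leave the gain alone
```

Calling this once per measurement window gives the second-or-two settling behaviour described above: each window moves the gain one step toward the target span.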
Building it Onto a Breadboard
The analog circuitry was put onto a breadboard and wired to the EFM8 MCU on the development kit.
Next: connect this to a mobile device, invoke a serial port driver, and capture the data there for post-processing analysis. That's easier said than done!
I purchased a Crystalfontz CFA635-YYK-KU 4 line LCD display with integrated USB interface, 4 bicolour LEDs and 6 button keypad, then proceeded to modify the read_irig program to talk to it.
I’ve modified the keyboard interface a bit as well. Keys now supported:
a - Change backlight intensity (CFA635 only)
d / keypad "DOWN" - Change display format: DECODED, RAW, TITLED
h / keypad "CHECKMARK" - Hold/unhold display
u / keypad "UP" - Reverse RAW or TITLED display order MSbit <-> LSbit
r / keypad "RIGHT" - Shift display data right
l / keypad "LEFT" - Shift display data left
v - Diagnostic dump of display data to terminal
f - Change format of terminal displayed data: RAW+DECODED, DECODED only, RAW only
q / keypad "X CANCEL" twice consecutively - Exit program
I've also made the top LED blink green (bright and dim) each time a time update arrives, go solid red when the IRIG input fails and a timeout is declared, and blink orange (bright and dim) slowly when the display is in hold mode.