There is a huge demand today for adding Wi-Fi connectivity to IoT applications because of its many advantages over other wireless protocols (Zigbee, Bluetooth, etc.), such as longer range, native IP connectivity, and high bandwidth. For millions of IoT applications, including industrial machines and sensors, Wi-Fi is often the best choice for connectivity because of its robust infrastructure and global reach: Wi-Fi exists almost everywhere in the world today.
Challenges for developers: The biggest challenge for developers has been the high power consumption of Wi-Fi in IoT systems. Wi-Fi protocols were designed primarily to optimize bandwidth, range, and throughput, not power consumption, which makes standard Wi-Fi a poor choice for power-constrained applications that rely on battery power. Of the various drawbacks of standard Wi-Fi (range limitations and busy networks among them), high power consumption is the most impactful. Until now, developers have avoided adding Wi-Fi to battery-operated IoT devices because there hasn't been a viable low-power option for doing so.
These are the four key challenges when adding Wi-Fi connectivity:
Power modes: Power consumption in Wi-Fi varies dramatically across the various modes of operation, so it's important to understand the different modes and optimize their use to reduce overall power consumption. One strategy is to stay in the lowest power mode as much as possible and transmit/receive data quickly when needed.
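As a back-of-envelope illustration of that strategy, the sketch below estimates average current from per-mode currents and duty cycles. The mode names and current values are illustrative assumptions, not figures from any particular datasheet.

```python
# Estimate average current from Wi-Fi mode duty cycles.
# Mode currents below are assumed, illustrative values only.

def average_current_ma(modes):
    """modes: list of (current_mA, fraction_of_time) tuples; fractions sum to 1."""
    assert abs(sum(f for _, f in modes) - 1.0) < 1e-9
    return sum(current * fraction for current, fraction in modes)

# Example: a device that transmits 0.1% of the time, receives 0.4%,
# idles while associated 1.5%, and sleeps the rest.
profile = [
    (250.0, 0.001),   # Tx (assumed)
    (50.0,  0.004),   # Rx (assumed)
    (15.0,  0.015),   # idle/associated (assumed)
    (0.05,  0.980),   # deep sleep (assumed)
]
print(f"Average current: {average_current_ma(profile):.3f} mA")
```

Even with a Tx current thousands of times higher than the sleep current, spending 98% of the time asleep keeps the average well under 1 mA, which is the essence of the stay-asleep strategy.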
RF performance: Unlike many wireless protocols, Wi-Fi power consumption is significantly impacted by RF performance and network conditions. This is a significant problem on today's increasingly crowded Wi-Fi networks: a busy network leads to many retries/retransmissions, which consume significant power. Developers must focus on reducing retransmissions and controlling link budgets to be successful.
Wi-Fi devices typically consume significant power in both transmit (Tx) and receive (Rx) modes. There are several ways to reduce power consumption and optimize these modes: first, choose devices with high selectivity/out-of-band rejection; also choose devices with high Rx sensitivity; and, if possible, operate on uncrowded channels. This might mean avoiding channels used by chatty connections such as video streaming.
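To see why retransmissions matter so much, the hedged sketch below computes the expected energy to deliver one packet when each attempt fails independently with a given probability. The per-transmission energy and retry limit are assumed values for illustration only.

```python
def expected_delivery_energy_uj(e_per_tx_uj, loss_prob, max_retries):
    """Expected energy to deliver one packet: attempt k+1 occurs only if
    the first k attempts all failed (probability loss_prob ** k)."""
    return sum(e_per_tx_uj * loss_prob ** k for k in range(max_retries + 1))

# Assumed: 100 uJ per transmission attempt, up to 7 retries.
for p in (0.1, 0.5):
    e = expected_delivery_energy_uj(100.0, p, 7)
    print(f"loss probability {p:.0%}: {e:.1f} uJ per delivered packet")
```

On a quiet channel (10% loss) the cost is close to a single transmission; at 50% loss it nearly doubles, which is why crowded networks drain batteries noticeably faster.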
Applications: Power consumption is highly dependent on the application and use case. IoT applications typically fall into one of three categories:
Always on/connected: These devices are always on, which allows users to access them remotely at any time via a cloud or mobile application. A Wi-Fi video camera is a good example of this use case. Latency is a critical factor in these applications, and power consumption is dominated by the transmit mode (the highest-power mode), since the device is constantly transmitting data and cannot afford to be inactive or inaccessible.
Periodically connected: These devices are connected to a remote server or cloud platform and only need to transmit occasionally. A good example is a temperature or humidity sensor that sends data every few minutes and can tolerate the small amount of time it takes to become active. Latency is not a major concern, and power consumption is dominated by receive and sleep currents. The device stays at an intermediate power level, never fully awake or asleep, so it can wake quickly.
Event-driven: An online shopping order button is a good example of event-driven Wi-Fi connectivity. The device is almost always inactive/asleep, transmitting no data, and wakes infrequently in response to an event, such as a user pressing the order button; waking from this mode takes longer. Power consumption is dominated by the lowest sleep current, making this mode the best choice when an IoT application must use as little power as possible.
Design issues: Lowering Wi-Fi power consumption is also a system design issue and a critical challenge for developers today. Power management and extended battery life are major factors when developing IoT applications. Although standard Wi-Fi protocols weren't initially designed for low-power operation, there are many techniques that help significantly reduce power consumption. These include optimizing Rx and Tx modes, optimizing power-saving modes (sleep modes, WMM power save, DTIM intervals, shutdown/standby), choosing the right hardware, leveraging the power-saving features built into the Wi-Fi specification, optimizing RF performance, and system-level optimization. Developers must understand all the factors contributing to overall energy consumption in IoT devices.
They must also understand both system-level factors and deep application factors in order to achieve low energy consumption in their applications. Finding the right mix of power-saving Wi-Fi modes and selecting the right hardware are the keys to dramatically reducing power consumption. Leveraging hardware and software designed specifically for IoT devices and low power consumption can reduce long term costs, overcome development challenges, extend battery life, and potentially enhance the life of products and customer satisfaction.
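As one concrete example of these power-saving modes, the sketch below models a station that sleeps between DTIM beacons. The 102.4 ms beacon interval is the common default; the currents and wake duration are assumed illustrative values, not figures for any specific device.

```python
BEACON_INTERVAL_S = 0.1024  # common default Wi-Fi beacon interval

def dtim_average_current_ma(dtim_period, rx_ma, sleep_ma, wake_s):
    """Average current for a station that wakes for wake_s seconds of Rx
    every dtim_period beacons and sleeps for the rest of the cycle."""
    cycle_s = dtim_period * BEACON_INTERVAL_S
    return (rx_ma * wake_s + sleep_ma * (cycle_s - wake_s)) / cycle_s

# Assumed: 50 mA Rx, 0.05 mA sleep, 5 ms awake per wakeup.
for dtim in (1, 3, 10):
    avg = dtim_average_current_ma(dtim, 50.0, 0.05, 0.005)
    print(f"DTIM {dtim:2d}: {avg:.2f} mA average")
```

A longer DTIM period cuts the average current substantially, at the cost of higher latency for downlink traffic; that trade-off mirrors the always-on versus periodically connected distinction described earlier.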
We solve these power management issues for IoT developers by providing drop-in Wi-Fi solutions, including pre-programmed modules (the WF200 and WGM160) that can cut power consumption in half. These solutions are designed from the start with low-power IoT applications in mind and work in a wide range of applications, from home automation to commercial, retail, security, and consumer health-care products. Pre-programmed modules make it quick to build a prototype, helping developers get products to market faster.
To read the full whitepaper on this topic, click here.
Recently, we had the opportunity to speak with Alex Rogers, Professor of Computer Science at Oxford University. One of his recent projects, exploring the intersection of technology and zoology, resulted in a small, low-power acoustic device built to record the songs of a potentially extinct cicada. The project began a little more than two years ago and has since morphed into Open Acoustic Devices, a start-up spinning out of the university.
The Open Acoustic Devices recorder, known as the AudioMoth, is already in the hands of many ecologists and conservation organizations, which use it to track and study hard-to-detect wildlife and potential threats to wildlife, such as gunshots from illegal poachers or chainsaws in protected forests. Previously, if ecologists or wildlife enthusiasts needed a highly sensitive audio recorder for field research, they had to pay nearly $1,000 per recorder. Or they could opt for an open-source recorder built from a low-cost single-board computer, which required large battery packs -- sometimes even car batteries! The AudioMoth, on the other hand, is slightly larger than a smartphone (batteries included) and costs roughly $50.
Check out our conversation below about how a small university project scaled itself to commercialize a one-of-a-kind audio recorder for wildlife.
Tell me a little bit about yourself and how Open Acoustic Devices came about.
As a professor of computer science, my interest has been in deploying machine learning algorithms on devices constrained by computing power and battery power.
My interest in conservation technology stemmed from an event at the Zoology Dept. at Oxford, which was exploring new technology for biodiversity monitoring. The department was interested in using low-cost phones to change how people conduct environmental monitoring. With PhD student Davide Zilli, we set out to use smartphones to listen for a rare cicada insect in the U.K., which we still don’t know is extinct, hidden or just rare. The cicada sings at a very high frequency, at about 15 kilohertz, which most adults can’t hear, but smartphones can.
We didn’t find the cicada with the smartphones, but we started thinking about how we could design a small acoustic device to automatically detect the song of this insect. Two new PhD students, Andy Hill and Peter Prince, joined the project, and we ended up building a prototype device, and then made it available to others about a year ago.
We soon discovered a huge appetite for low-cost, open-source acoustic recorders. We are now working with ecologists who use our device to record bats, birds, insects and other wildlife. Until now, professional ecologists typically had been surveying wildlife with commercial equipment.
The cost advantage of AudioMoth completely changes the science people can do. It means ecologists can do research that would have been cost-prohibitive before. Previously, if an ecologist had a small budget, they could maybe only deploy three or four recorders. Now they can potentially deploy 100 recorders, meaning different types of wildlife surveys can be conducted.
Who is your buying audience?
It’s a big mix – it’s a split equally between university researchers (ecologists) and conservation organizations. We’ve done some large bat survey deployments with the Zoological Society of London and the Bat Conservation Trust. But then there’s a whole pool of individuals and enthusiasts recording birds and bats on their own.
Can you tell me about the performance of the device?
From the beginning, we were looking to create a minimal device we could run smart algorithms on to only record when hearing a sound of interest. In the first instance, this was the New Forest cicada.
We combined an inexpensive MEMS microphone, similar to what’s inside a smartphone, with an SD card and MCU to create a programmable and highly mobile device. Because of the small size, the microphones are extremely sensitive to high frequencies -- perfect for people interested in bats, where they are recording at 100 kilohertz.
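For a rough sense of what ultrasonic recording implies for storage, the arithmetic below assumes a 256 kHz sample rate (above the 200 kHz Nyquist rate needed to capture 100 kHz bat calls) and 16-bit mono samples; these are illustrative assumptions, not AudioMoth specifications.

```python
# Assumed: 256 kHz sample rate, 16-bit mono, uncompressed storage.
SAMPLE_RATE_HZ = 256_000
BYTES_PER_SAMPLE = 2

bytes_per_second = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
bytes_per_hour = bytes_per_second * 3600
print(f"{bytes_per_hour / 1e9:.2f} GB per hour of continuous recording")
```

At nearly 2 GB per hour of continuous ultrasonic recording, it becomes clear why on-device detection algorithms that record only sounds of interest matter so much for a device built around an SD card.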
We have a lot of deployments in remote jungles and forests with extremely limited Internet access, but we are still planning to add low-power wireless connectivity to new versions of the device for alerting, streaming and research purposes.
Did you have any design challenges?
The key challenge for a battery-powered device is power -- we knew we had to focus on low power from the beginning. Our users worry most about how much data they will end up recording. We used Silicon Labs’ Wonder Gecko microcontrollers because of their low power capabilities, which results in smaller batteries and longer life in the field.
The non-commercial, open-source recorder alternative is typically based on Raspberry Pi, which uses a much more capable processor running a Linux operating system, and as a result requires a much larger battery pack. In many wildlife applications, the devices have to be carried to the deployment sites in backpacks, making the size and weight of the batteries critical.
Can you give me some idea of the power gains experienced by using the Gecko MCU?
To give an example, right now we have a deployment in Belize that involves listening for gunshots to detect illegal hunting in tropical forests. With a small battery pack (a 6V lantern battery), we can deploy a sensor that lasts for 12 months and listens continuously for 12 hours a day, only making recordings if it thinks it has detected a gunshot. With the Gecko MCU, we can do nearly all the listening while the processor sleeps; it then wakes up to run the detection algorithms across a 4-second sound buffer.
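A hedged back-of-envelope check of that kind of deployment is sketched below. The battery capacity and currents are assumed illustrative values, not AudioMoth measurements.

```python
# Assumed values for illustration only.
BATTERY_MAH = 26000.0   # a typical 6 V alkaline lantern battery (assumed)
LISTEN_MA = 1.0         # average current while listening: MCU mostly asleep,
                        # waking briefly to run detection on the sound buffer
SLEEP_MA = 0.02         # average current during the 12 off-hours per day

daily_mah = LISTEN_MA * 12 + SLEEP_MA * 12
lifetime_days = BATTERY_MAH / daily_mah
print(f"Estimated lifetime: {lifetime_days:.0f} days")
```

Under these assumptions, an average listening current on the order of a milliamp clears a 12-month target with margin to spare, which is only plausible because the processor sleeps through most of the listening window.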
How did the Gecko get on your radar?
We originally used an NXP processor and the Arm Mbed development platform in our prototype. We really liked the development platform, but the processor used too much power. Silicon Labs ended up being a better option because of the integrated tool chain, allowing us to directly measure and optimize energy consumption. We can also distribute the code, knowing that the development tools are free and are available on all operating systems, which is a critical benefit.
As a university project, how did you manufacture these devices?
To keep costs low, we started exploring alternative manufacturing routes. With Alasdair Davies of the Arribada Initiative (an organization promoting open, affordable conservation technology), we started running group purchasing campaigns through GroupGets, a platform that facilitates low-cost group buys. After testing the market with some relatively small orders, GroupGets enabled us to run off a batch of 1,500 devices from a PCB assembler, providing real economy of scale.
This model gives designers the ability to offer various types of devices while manufacturing at low risk. We've manufactured close to 4,000 devices so far and have a live campaign running at the moment that will likely result in another 1,500 orders. As a small university project, there is no way we would have been able to do this without this model.
We also used CircuitHub, which enabled us to post our hardware design and bill of materials on its website. The concept essentially hacks low volume manufacturing. Suddenly, people can share and distribute hardware in the same way people have been able to share and distribute software.
Where do you see IoT going in the next 5-8 years?
Computation on devices is always more energy efficient than storing or transmitting data, meaning devices will continue to become smarter and handle more processing on their own. Many of the deep learning algorithms that researchers are exploring at the moment are still too complex to run on very low-power small devices, but there’s already a huge amount of interest in figuring out how to push these algorithms down to small, low-power devices.