This article is part of my 1000² Smart Home project. Check out the Architecture and Software and then move on to Hardware. And most importantly, check the Lessons learned and why I did what I did. There are already some first results, connecting all parts of the system.
This page covers things I have learned and why I did what I did in my 1000² project. Some experience, random calculations, opinions and thoughts.
So what is the battery philosophy? Should I make everything last as long as possible and change batteries each time some device dies? What if I have 100 devices at home? Should I use the smallest battery that lasts 1 year and change all of them every year? The expected lifetime for motion sensors will be close to 1 year, while devices like temperature sensors could go for up to 10 years on AA batteries, with others falling in between.
I am staying out of energy harvesting, be it simple solar or something more complicated, as it is not a universal solution. Long operating times are a must: randomly needing to replace the batteries of many devices every few months becomes quite a hassle, even for a small home.
On top of that, no weird shapes: just CR2032, AAA and AA, as they are easy to find anywhere.
So here is an analysis of battery usage featuring the possible sizes (CR2032, AAA, AA) and a few configurations: simple sensors with low idle current (e.g. temperature), or sensors combined with motion detectors (or similar) requiring currents in the order of 50 µA. Depending on the types of sensors, there are regulator options: no regulator, an LDO or a step-up converter.
Some conclusions can be drawn from here: with low-power sensors, one can achieve many years of sensor lifetime with regular AA batteries, provided the size is not an issue. For sensors requiring a higher supply voltage, the LDO option takes up the most space, as it needs 3 batteries. Alternatively, a switch-mode supply can be used, boosting 1 or 2 cells, at the cost of a shorter lifetime. But…
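To get a feel for the numbers behind these conclusions, here is a minimal lifetime estimate. The capacities and average currents are my own illustrative assumptions, not measurements from the analysis above:

```python
# Rough, ideal battery-life estimate: capacity divided by average current.
# Capacities are typical datasheet-ish values (assumptions, not measured).
CAPACITY_MAH = {"CR2032": 220, "AAA": 1000, "AA": 2000}

def lifetime_years(battery: str, avg_current_ua: float) -> float:
    """Ideal lifetime in years, ignoring self-discharge and cutoff voltage."""
    hours = CAPACITY_MAH[battery] * 1000 / avg_current_ua  # mAh -> uAh
    return hours / (24 * 365)

# A low-power temperature node sleeping at ~5 uA average:
print(f"AA, 5 uA:  {lifetime_years('AA', 5):.1f} years")      # ~46 years (self-discharge wins long before)
# A motion-sensor node idling at ~50 uA:
print(f"AA, 50 uA: {lifetime_years('AA', 50):.1f} years")     # ~4.6 years
print(f"CR2032, 50 uA: {lifetime_years('CR2032', 50):.1f} years")  # well under a year
```

This matches the shape of the conclusion: a quiet temperature node is limited by shelf life rather than capacity, while a 50 µA motion node burns through a CR2032 far too quickly.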
Step up or step down
A step-up regulator used for this application, like the TLV61225, does not squeeze every last bit out of the battery when feeding a power-hungry radio. The RFM69 radio I will use needs 45 mA while transmitting (or even 130 mA for the high-power version, which the TLV cannot provide). Supplying this current at 3.3 V from a pair of discharged AA(A) batteries through a step-up regulator proved problematic: because the internal resistance of a discharged battery reaches the order of ohms, the voltage drops too much, shutting down the node with significant charge left in the battery. Step-up converters are best left for very low-power radios, such as the RFM75, which needs significantly less current. Obviously the RFM69 can also transmit at similarly low power, requiring similarly low current, but I am interested in designing a supply that can handle all situations.
The other alternative, using an LDO fed from 3 AA(A) batteries, provides a longer battery life, as it discharges them at half the current (taking the different number of batteries into account). Efficiency is not that bad either: with 3.3 V out and a fresh set of batteries (4.5 V), it starts at 73%. As the batteries discharge, the efficiency increases (funny, ha).
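The LDO efficiency figure is just the output/input voltage ratio, since the series pass element burns the difference as heat. A two-line sanity check:

```python
# LDO efficiency = Vout / Vin (the pass element drops the rest as heat).
def ldo_efficiency(vout: float, vin: float) -> float:
    return vout / vin

print(ldo_efficiency(3.3, 4.5))  # fresh 3xAA(A): ~0.73
print(ldo_efficiency(3.3, 3.6))  # nearly flat cells: ~0.92 (efficiency rises as they discharge)
```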
A charge pump is perfect for motion sensors
I tried using the PIR sensors with a charge pump. This effectively doubles the battery voltage, bringing 2 AA(A) cells to the range of 4 to 6 V, which is right in the range for a motion sensor. It is very efficient compared to a switch-mode step-up converter. The pump is driven by routing the internal precision 32 kHz clock used for the RTC to a pin. A 100 nF capacitor is used as the flying capacitor, while a 47 µF electrolytic is the final filter. For diodes I used the tiny and cute BAT54J, which have almost no loss at the required current of 15 µA. Overall, the total sleep current is 35 µA: 30 µA for the motion sensor and the rest for the micro, radio, temperature and light sensors. With cheaper BISS0001-based sensors I would expect about 105 µA of consumption.
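The ideal output of such a voltage doubler is easy to estimate: twice the battery voltage minus two diode drops. The diode drop value below is an assumption on my part; at 15 µA a Schottky like the BAT54 drops very little:

```python
# Voltage-doubler charge pump output, ignoring load droop and ripple:
#   Vout ≈ 2*Vbat - 2*Vdiode   (two diode drops per cycle)
def doubler_vout(vbat: float, vdiode: float) -> float:
    return 2 * vbat - 2 * vdiode

VDIODE = 0.15  # assumed Schottky forward drop at ~15 uA

print(doubler_vout(3.0, VDIODE))  # fresh 2xAA(A): 5.7 V
print(doubler_vout(2.2, VDIODE))  # mostly discharged pair: 4.1 V, still in the PIR range
```

The diode drops are why low-leakage Schottkys matter here: with ordinary silicon diodes (~0.6 V each) the depleted-battery case would fall below the 4 V the motion sensor wants.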
This setup is much better for the sensitive motion sensor than using a TLV61225, which would be pushed to operate in discontinuous mode, resulting in a much higher output ripple.
Inside the devices, the brains will be a microcontroller, while a radio will take care of communication; we'll call this pair 'the core'. Finding the optimal pair is not an easy task. There will be a huge diversity of devices and tasks, and one-size-fits-all might not be the best way to go. At the opposite end of the spectrum, optimizing the core for each individual function would blow up into an unmanageable diversity. I know how little it takes to get a button and an LED on the internet, a few times more than that to measure a bunch of LEDs, and quite a lot more to get a nice user interface. I am aiming for 2 flavours of the core: devices that need to be very power efficient, and devices that can feast from an all-you-can-drink buffet of electrons. If the 2 flavours can merge into one, giving both power efficiency and performance, even better.
Universal – particular
Going full universal and making a node board suited for all the required tasks is complicated. Let's just look at the power supply: some devices can work across most of the discharge range of 2 AA batteries, some need at least 3 V, some will need more. Combine that with USB power when connected, while sipping every last bit from the batteries otherwise, and it already gets complicated to manage.
Using a pair of boards suited for a specific task serves as the best compromise for the majority of things. First, a 'Core' contains the microcontroller and radio, along with programming options and possibly a regulator. Attached to this is an application-specific board. This split allows for faster design times, as a new application board contains only the minimum, specific parts.
In my experience with small-scale assembly, getting 40 simpler boards (20 & 20) designed and assembled is easier than designing a more universal one and then assembling it with missing components and different configurations.
Drop by the Hardware page to check the solutions.
But how complicated?
I counted about 100 simple devices that I want to measure, control or interact with in my home for now. A simple device is a switch, a light, a motion sensor, a temperature sensor, a button, an LED, a small display, a power meter, a plant humidity sensor etc. The combinations of these will make up the functionality of the system, like a motion-activated light. In order to minimize the amount of hardware, I have grouped them by functionality and location. Keeping the universal – particular balance, I grouped the devices by "common denominators". The surprise here was that each core needs to handle just 2 devices on average.
This indicates the complexity required for each 'core'. Drop by the Hardware page to check the solutions.
Software and hardware modularity go hand in hand. While OOP is not my favourite on microcontrollers, it is the best way here. Fundamentally, most devices will fall into just a few classes (GPIO, analog I/O, read/write at a specific I2C address, PWM, counter etc). Allow these classes to operate on any hardware resource and the platform becomes highly modular.
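As a rough illustration of the "few classes cover most devices" idea, here is a sketch in Python. All the class and attribute names are made up for the example; the real firmware would of course be compiled code on the micro:

```python
# Sketch of the modular channel idea: a handful of generic classes, each
# bindable to any hardware resource. Hypothetical names, not real firmware.

class Channel:
    """Base class: anything the system can read, identified by name."""
    def __init__(self, name: str):
        self.name = name
    def read(self):
        raise NotImplementedError

class GPIOChannel(Channel):
    """Digital in/out bound to a pin (pin handling stubbed out here)."""
    def __init__(self, name: str, pin: int):
        super().__init__(name)
        self.pin = pin
        self.state = False
    def read(self) -> int:
        return int(self.state)
    def write(self, value) -> None:
        self.state = bool(value)

class AnalogChannel(Channel):
    """ADC-style channel with a scale factor applied on read."""
    def __init__(self, name: str, pin: int, scale: float = 1.0):
        super().__init__(name)
        self.pin = pin
        self.scale = scale
        self.raw = 0  # would be filled by the ADC driver
    def read(self) -> float:
        return self.raw * self.scale

# A concrete device is just a generic channel bound to a resource:
button = GPIOChannel("desk_button", pin=4)
button.write(True)
print(button.read())  # 1
```

A new device type then usually means instantiating an existing class on a different pin or I2C address, not writing new driver code.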
How much does it cost to keep an ESP8266 running per year?
Assuming always on, 70 mA from a standard 5 V supply through a linear regulator: 0.35 W consumed (excluding the screen!!). Adding the inefficiency and self-consumption of the DC/DC converter, we get to 0.6 W from the mains (I measured it). This is 5 kWh per year, or about €1.1 at the average EU price. If the price does not tell you anything, think of the effort for a human to produce this: a not-so-sporty but decently conditioned person would spend 50 hours on a bike to generate this electricity.
A pair of AA batteries will set me back €0.4 (in some quantity) and has the potential to keep a low-power radio running for a few years. So a battery-powered, low-power radio temperature sensor will cost less in the long run than a cheap, mains-powered ESP8266 temperature sensor.
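The arithmetic behind the comparison, with the electricity price and battery lifetime as my assumed round numbers:

```python
# Yearly running cost: always-on ESP8266 node vs. a battery-powered node.
HOURS_PER_YEAR = 24 * 365

wall_power_w = 0.6              # measured at the mains, incl. DC/DC losses
kwh_per_year = wall_power_w * HOURS_PER_YEAR / 1000
eur_per_kwh = 0.22              # assumed average EU price
mains_eur = kwh_per_year * eur_per_kwh
print(f"mains node:   {kwh_per_year:.1f} kWh/year, {mains_eur:.2f} EUR/year")

# A 0.40 EUR pair of AAs lasting an assumed ~3 years:
battery_eur = 0.40 / 3
print(f"battery node: {battery_eur:.2f} EUR/year")
```

Even with cheap electricity, the battery node comes out several times cheaper per year, before counting the freedom of not needing a mains socket.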
NRF24L01 goes a long way….or does it?
I have used a hell of a lot of NRFs in my project Stockie and encountered no major issues. They are super cheap and easy to get… but the very low transmission power limits the range. Not so fast! By combining the low-power modules on the Stockies with a high-power module and antenna on the gateway, I could get a lot of range (±1 floor and anywhere inside the apartment). With a caveat that did not matter for that application: allowing a lot of retries, just to play it safe. The reason is simple: in urban areas the 2.4 GHz band is swamped with WiFi networks, which can use a much higher power, so the NRF needs to find the right moment in time to get through.
I started working on this project with the RFM75 in mind, which is also a clone of the NRF. Mine come from a distributor, because of my 'no cheap china' rule. The first lesson: automatic ACK does not work between cloned NRF modules and RFM75 modules. Rumor has it that Nordic hid some things in the datasheet and some bits are actually flipped in the packet, with the NRF clones following the datasheet while the RFM follows the over-the-air transmission. Hence, automatic packet ACK and retry does not work between the sensor nodes (RFM75) and an NRF24L01 PA LNA (= NRF clone) gateway. Implementing the ACK in software (I used the RadioHead library) resulted in a higher on-air time for the modules, hence more power. This really shows the advantage of the auto ACK the module does in hardware.
Moreover, modules located far from the gateway quite frequently required a significant number of retries before getting a packet through. This leads to higher battery consumption and unpleasant delays, which could even reach a second, and that pretty much killed this module for me. I tried the system with a single node, so the only pollution came from the nearby WiFi networks, though there were quite a lot of them.
So I moved on to my next favourite, the RFM69, which is superior due to operating in the sub-GHz band, with more power and a lower data rate. Check out the hardware section for more thoughts on this and my wishes for the perfect radio.
Bad Watchdog, Bad!
As a way to increase the chance that everything keeps working, and because sometimes my RFM69 modules were hanging (similar to this, I still don't know why), I started using the watchdog timer (WDT). Courtesy of the complex and flexible clock generation of the SAMD21, the WDT is fed with a 32 Hz clock, which allows for a maximum timeout of 8-9 minutes. This covers my longest sleep time of 5 minutes.
Soon after, I realized that my light dimmers were behaving weirdly when transitioning, something that is controlled within the infinite loop, which normally needs 3-4 ms to pass through everything. Cue some debugging, and it turned out the WDT required over 50 ms to reset. Why? Because ARM micros can have multiple clock domains, with synchronisation registers between them. Feed one of those domains with a slow 32 Hz clock and the synchronisation takes many of its cycles; that is why it takes 50 ms to reset the WDT. So I made a compromise: increase the WDT clock, reduce the timeout to 2 minutes and limit the sleep time to 1 minute. This shortened the reset time, at the expense of not allowing some nodes to sleep longer.
Sleepy nodes will need about 30 ms to reset the WDT every wake cycle (the default sleep time is 1 minute), which, combined with about 2.5 mA of CPU current, results in an average current of 1.25 µA: little, but not insignificant. The always-awake nodes reset the WDT every 100 ms and simply don't wait for the register synchronisation (it happens in the background).
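The two numbers above come out of simple arithmetic. The 16384-cycle maximum WDT period is what I recall from the SAMD21 watchdog; treat it as an assumption:

```python
# SAMD21 WDT arithmetic for the setup described above.

# Maximum timeout with a 32 Hz input clock (16384 cycles assumed to be
# the longest selectable WDT period):
wdt_clock_hz = 32
max_period_cycles = 16384
max_timeout_min = max_period_cycles / wdt_clock_hz / 60
print(f"max timeout: {max_timeout_min:.1f} min")   # ~8.5 min

# Average current cost of the slow, synchronised WDT reset on a sleepy node:
reset_ms, cpu_ma, cycle_s = 30, 2.5, 60
avg_ua = cpu_ma * 1000 * (reset_ms / 1000) / cycle_s
print(f"average cost: {avg_ua:.2f} uA")            # 1.25 uA
```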
Note that a similar situation can happen with the RTC, requiring a long read time, but that one is more flexible because the RTC has an internal prescaler as well, allowing for a higher input clock.
When dealing with battery-powered radio nodes, it is naturally desirable to send as little information as possible, as each extra bit requires energy. Therefore, when a node sends its data, the minimum needed is an identification and the value, provided that the receiving end knows what type of device hides behind that identification. Given the size of the network, I was considering allocating a byte for the node address (the hardware), another byte for the channel number (so I can have an identification for each channel, e.g. voltage 1, 2, 3) and another byte for the type of channel (e.g. voltage). Finally, the value is sent in whatever way it is stored (float, integer, string) and the gateway needs to know how to decode it.
But, there is a catch: having predefined types means restrictions and having to configure all those predefined types in multiple places. It is one of the reasons why I stayed away from MySensors: they use predefined types and adding one requires updating the whole library.
My solution was to replace the channel number and type by simply sending channel_name=value in text format over the radio. This gives me some advantages. First, the originating node has total control over how the data is displayed (it is normal to have 3 digits for an 8-bit ADC value or 5 for a 16-bit one, even though you will always send a float number). Second, there is no middle software that needs updating to know when a new type of data arrives. Third, with a self-explanatory channel name, it will be very easy to know what to do with the new sensor that pops up.
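The whole scheme fits in a few lines. This is a hypothetical sketch of the idea, not my actual firmware; the point is that the node formats the value itself and the gateway only splits on '=':

```python
# channel_name=value payload: the node decides the formatting, the gateway
# just splits the text. Sketch of the concept, not production code.

def encode(channel: str, value) -> bytes:
    """Node side: format the value however the node sees fit."""
    return f"{channel}={value}".encode("ascii")

def decode(payload: bytes) -> tuple[str, str]:
    """Gateway side: no registry of predefined types needed."""
    name, _, value = payload.decode("ascii").partition("=")
    return name, value

pkt = encode("bedroom_temp", "21.4")
print(pkt)          # b'bedroom_temp=21.4'
print(decode(pkt))  # ('bedroom_temp', '21.4')
```

A brand-new sensor type needs zero changes on the gateway: a payload like `soil_moisture=0.42` decodes exactly the same way.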
I am obviously wasting more power, so what's the damage? A typical transmission looks like this: once a button is pressed, the node wakes up. About 2 ms (-2 to 0) are used to read the data, process other inputs and prepare the radio packet; the big current spike is the transmission itself; and the longer period (~6 ms to ~30 ms) is spent waiting for the ACK with the radio receiver on.
Below is the radio packet format that I am using. Taking into account the overhead required for the RFM69 module and the Radiohead library, 14 bytes on top of the actual payload are required.
When all is taken into account, an average-length text payload of 10 bytes requires just 16% more power than the ultra-short 2-byte one.
At the extreme end, sending 20 bytes requires 35% more than sending 2 bytes. This is great: 16% or even 35% more power is easy to manage. You will not care that your temperature sensor updates its data 16% less frequently to compensate for it. It's manageable and saves a lot of complication.
(Of course, in a distant future, a node could automatically negotiate with the gateway to send this in a more compressed form, but in the meantime it is not a deal breaker.)
The ceiling lights above my desk are halogen spotlights, which I used to take some pictures. Having moved the temperature sensor node for a picture, I later noticed an almost 4°C increase in temperature. A quick check with a shade confirmed it: the halogen lights heat the black SI7021 temperature sensor by about 4 degrees. Or could it be that the exposed die is sensitive to light?