In a previous blog post I introduced my idea of Axis Rotation Planning. Because the concept is more generally applicable, it is called “State Change Planning” in the Python code, which is now available on GitHub. The repository also contains the transformation from the target axis speeds to motor speeds (as derived here). I also added my very first Python unit tests! :)
As long as my description of axis rotation planning was, using it is just as simple:
# change the roll angle ...
# from: roll = 5, rollSpeed = 1
rollState = State(5, 1)
# to: roll = 6, rollSpeed = 1
rollTarget = State(6, 1)
# plan the change
plan = rollState.planChangeTo(rollTarget).inSeconds(1)
# usage example: target speed in the period (0s, 0.1s)
# This method's results will be used as inputs for the roll PID.
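To make the snippet above concrete, here is a minimal sketch of what such a State/plan API could look like. The class and method names follow the snippet, but the internals are an assumption: this plan simply interpolates the speed linearly over the planned duration (and the speedAt method name is mine), while the real planner on GitHub is more elaborate.

```python
class State:
    """A rotation state: angle (degrees) and angular speed (degrees/s)."""
    def __init__(self, angle, speed):
        self.angle = angle
        self.speed = speed

    def planChangeTo(self, target):
        return PlanBuilder(self, target)


class PlanBuilder:
    """Intermediate object so the call reads planChangeTo(target).inSeconds(t)."""
    def __init__(self, start, target):
        self.start = start
        self.target = target

    def inSeconds(self, duration):
        return Plan(self.start, self.target, duration)


class Plan:
    """Illustrative plan: interpolates the speed linearly over the duration."""
    def __init__(self, start, target, duration):
        self.start = start
        self.target = target
        self.duration = duration

    def speedAt(self, t):
        # fraction of the planned duration that has elapsed, clamped to [0, 1]
        f = min(max(t / self.duration, 0.0), 1.0)
        return self.start.speed + f * (self.target.speed - self.start.speed)
```

With the values from the snippet, State(5, 1).planChangeTo(State(6, 1)).inSeconds(1) yields a plan whose intermediate speeds could feed the roll PID.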
One thing I have always hated about the Raspberry Pi is the WiFi configuration, especially when multiple networks need to be configured and the Raspberry Pi should connect to whichever of them is in range. Until recently, I edited the files /etc/network/interfaces and /etc/wpa_supplicant/wpa_supplicant.conf by hand. Unfortunately, it is very easy to make errors in these configuration files, and an error means that I cannot connect to the Raspberry Pi anymore – I have to go there, connect a monitor and keyboard, find the error and retry. This is of course a waste of time.
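For reference, hand-editing usually means maintaining a file like the following – a minimal wpa_supplicant.conf sketch with two networks, where the SSIDs and passphrases are of course placeholders; the network with the higher priority value wins when both are in range:

```
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="home-network"
    psk="home-passphrase"
    priority=2
}

network={
    ssid="backup-network"
    psk="backup-passphrase"
    priority=1
}
```

A single misplaced brace or quote in this file is enough to lock yourself out.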
The tool will then show you a list of networks. You can connect to a network directly or first open the settings for the selected network (“P” or right arrow). The settings are quite comprehensive; for example, you can choose to always prefer a cable connection over WiFi. I also activated the automatic reconnect feature for WiFi. Under the extended settings you can tell it to show signal strength in dBm, which I prefer over meaningless percentages. It is also possible to scan for hidden networks, but I did not try this because our network is not hidden.
First, as I always have to look this up myself – the following image explains yaw, pitch and roll rotations:
At some point in the quadrocopter control loop, we might have calculated that we need significantly more thrust on the left side than on the right, slightly emphasizing the front rotors to increase pitch and reducing the overall thrust a little to stop elevating.
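This combination step is often implemented as a simple motor mixer. The following is a minimal sketch for a plus-shaped quadrocopter; the exact signs depend on the motor layout and propeller spin directions, so take this as an illustrative assumption, not as my project's actual code:

```python
def mix(throttle, yaw, pitch, roll):
    """Combine axis commands into per-motor thrusts (plus configuration).

    Sign conventions assumed here: positive pitch shifts thrust to the
    front rotor, positive roll shifts thrust to the left rotor.
    """
    # The front/back rotors spin opposite to the left/right rotors, so
    # shifting thrust between the two pairs changes the net yaw torque.
    front = throttle + pitch - yaw
    back  = throttle - pitch - yaw
    left  = throttle + roll + yaw
    right = throttle - roll + yaw
    return front, back, left, right
```

The situation described above would correspond to something like mix(0.4, 0.0, 0.1, 0.3): much more thrust on the left than on the right, slightly more at the front, and a reduced base throttle.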
In many quadrocopter projects, three separate PID controllers are used to control rotations around the three axes: yaw, pitch and roll. This is no easy task, because the directly controlled variable is the motor force / acceleration, which is the second derivative of the rotation angle. As many successful projects show, this gap is nothing to worry about – the PID constants just need to be tuned nicely – but to me it always seemed like an unnatural, unnecessary and imprecise setup. In my last blog post, I introduced the idea of axis rotation planning. Axis rotation planning eases the task of a PID controller so that it now only controls the rotation speed – which it has a more direct influence on.
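The resulting per-axis controller can then be an ordinary PID on the rotation speed. A textbook sketch – the interface and gains here are illustrative, not my project's actual code:

```python
class PID:
    """Textbook PID controller; the gains kp, ki, kd must be tuned per axis."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = None

    def update(self, error, dt):
        # accumulate the error over time for the integral term
        self.integral += error * dt
        # approximate the error's rate of change for the derivative term
        derivative = 0.0
        if self.last_error is not None:
            derivative = (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the planned setup, the error would be the planned rotation speed minus the measured rotation speed, and the output a thrust-difference command for the motors.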
For a comparison on problem complexity, consider yourself confronted with the task of driving at 50 km/h compared to parking your car at a given position – both should be no real problem for a human, but parking often involves additional thoughts.
During the last weeks I have been thinking about how the basic control loop of my Quadrocopter should look. That is the piece of software that takes sensor readings and determines the turning speed for each of the four motors. If it works well, the Quadrocopter stays calm in the air, even under windy conditions. Otherwise, the Quadrocopter starts to oscillate, which wastes a lot of energy (the motors are busy alternately compensating each other’s thrust) and can build up into a really unstable (and uncontrollable) flight condition, healthy for neither the environment nor the vehicle. Clearly, this is a really important component and deserves a considerable amount of thought.
The sensors and the remote control are modelled as goroutines – a kind of low-cost thread. They communicate with the main routine via channels, so no other kind of synchronisation or shared state is necessary. This concept is really cool.
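For comparison, the same pattern can be sketched in Python – the language the rest of my code uses – with a thread and a queue standing in for a goroutine and a channel. This is only an analogy (Go's channels give stronger guarantees and nicer syntax), and the sensor here is a stand-in that just emits three fixed readings:

```python
import queue
import threading

def sensor(ch):
    # stand-in sensor routine: push a few readings, then signal the end
    for value in (1.0, 2.0, 3.0):
        ch.put(value)
    ch.put(None)

ch = queue.Queue()
threading.Thread(target=sensor, args=(ch,), daemon=True).start()

readings = []
while True:
    value = ch.get()  # blocks like a channel receive
    if value is None:
        break
    readings.append(value)
```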
To learn more about the features of Go, this site was very helpful:
The Raspberry Pi Camera's videos are H.264-compressed by the GPU. As part of that process, the GPU has to estimate movement in image space. What I had missed until now is that the picamera module provides access to these motion estimates:
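A small sketch of how such motion data can be processed – the helper below is my own, but the array layout (one entry per 16×16 macroblock, with fields x, y and sad) is how picamera delivers the vectors, for example via the motion_output parameter of start_recording or a picamera.array.PiMotionAnalysis subclass:

```python
import numpy as np

def motion_magnitude(vectors):
    """Per-macroblock motion magnitude from a picamera-style motion array.

    `vectors` is a structured array with fields 'x' and 'y' holding the
    signed block offsets estimated by the H.264 encoder.
    """
    x = vectors['x'].astype(np.float64)
    y = vectors['y'].astype(np.float64)
    return np.sqrt(x * x + y * y)
```

On the Pi itself, such an array would arrive in the analyse() callback of a PiMotionAnalysis subclass while recording runs.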
At least with a fixed camera angle (like facing downwards), camera space is quite easy to translate into the quadrocopter's frame space, so this should make a great additional motion estimate. Big thanks to the nice blog entry that helped me find this information! Can’t wait to try this out 🙂