SRF-02 Ultrasonic Sensor with I2C

Yesterday I was able to install the SRF-02 ultrasonic sensor on my quadrocopter.

Range: approx. 15 cm – 6 m
Speed: up to 15.38 measurements per second (65 ms between measurements)
Connectivity: I2C or UART
Up to 16 SRF-02 sensors can be combined on one I2C bus.

[Image: ultrasonic_range_finders_text.jpg]
We see both of my range finders on the bottom side of my quadrocopter: the new SRF-02 (with a built-in timer) and the HC-SR04, where the Raspberry Pi has to do the timing itself. They cannot operate at the same time because both use bursts of eight 40 kHz pulses.

The SRF-02 is the cheapest (around 16 Euros) ultrasonic sensor I could find with its own timer, as the Raspberry Pi (especially with a non-realtime OS) is really bad at measuring the ultrasonic round-trip times accurately. I demonstrated this earlier on my blog. This new sensor takes a command to start a measurement, measures on its own and then stores the result in a register for the Raspberry Pi to read via I2C (specification). Additionally, it automatically adjusts an estimate of the current minimum range.
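
As a rough sketch of how this read-out could look from Python (assuming the default 7-bit address 0x70 and the "range in cm" command 0x51 from the datasheet, and that the python-smbus package is installed and I2C is enabled on the Pi):

import time
import smbus  # python-smbus; requires I2C to be enabled on the Raspberry Pi

# Assumed values from the SRF-02 datasheet: default 7-bit address 0x70,
# command 0x51 = "real ranging, result in centimeters".
SRF02_ADDRESS = 0x70
CMD_RANGE_CM = 0x51

bus = smbus.SMBus(1)  # I2C bus 1 on recent Raspberry Pi models

def read_distance_cm():
    # Start a measurement; the sensor times the echo on its own.
    bus.write_byte_data(SRF02_ADDRESS, 0x00, CMD_RANGE_CM)
    time.sleep(0.066)  # wait at least one measurement cycle (~65 ms)
    # The result is a 16-bit value in registers 2 (high byte) and 3 (low byte).
    high = bus.read_byte_data(SRF02_ADDRESS, 2)
    low = bus.read_byte_data(SRF02_ADDRESS, 3)
    return (high << 8) | low

print(read_distance_cm())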

Read more to see what problems I faced when trying to use it…


Raspberry Pi 3 and OpenCV installation

I just got my new Raspberry Pi 3 and want to check if it is fast enough to finally bring some computer vision to my autonomous quadrocopter.

After a fresh Raspbian installation I followed the tutorial to install OpenCV 3.0 on Debian Jessie. For everyone who has not yet used virtual environments with Python, the tutorial also covers their basic usage. Really nice! Unfortunately, the tutorial did not work for me with the newest OpenCV 3.1: CMake refused to recognize Python 3, so I explicitly added parameters for it:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D CMAKE_INSTALL_PREFIX=/usr/local \
 -D INSTALL_C_EXAMPLES=OFF \
 -D INSTALL_PYTHON_EXAMPLES=ON \
 -D OPENCV_EXTRA_MODULES_PATH=/home/pi/opencv_contrib/modules \
 -D BUILD_EXAMPLES=ON \
 -D PYTHON3_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.4m.so \
 -D PYTHON3_PACKAGES_PATH=/home/pi/.virtualenvs/cv/lib/python3.4/site-packages/ \
 -D PYTHON3_INCLUDE_DIR=/home/pi/.virtualenvs/cv/include/python3.4m \
 -D PYTHON3_EXECUTABLE=/home/pi/.virtualenvs/cv/bin/python3 \
 -D BUILD_opencv_python3=ON \
 ..

[Image: opencv_python3]

Now CMake found Python 3, and OpenCV was installed on my Raspberry Pi 3. :)

I did not record the exact timing, but the make -j4 call took at most 1 h 22 min. Really fast compared to my early attempts with the Raspberry Pi 1 and OpenCV!
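
To quickly check that the build really ended up in the virtualenv, a minimal sanity check (assuming the virtualenv is called cv, as in the tutorial):

# Run inside the "cv" virtualenv, e.g. after "workon cv".
import cv2

# If the build and install worked, this prints something like "3.1.0".
print(cv2.__version__)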

After the compilation, I found out that the additional step in the tutorial to create a symbolic link in the virtualenv from ~/.virtualenvs/cv/lib/python3.4/site-packages/ to /usr/local/lib/python3.4/site-packages/cv2.so was not needed, because the installation already ended up inside the virtualenv folder.

Now that my installation is finished, stay tuned for some experiments with the new setup…

Ultrasonic range finder reliability

I have several problems with the ultrasonic range finder HC-SR04. They are all related to manually measuring the time from the ultrasonic signal to its echo:

  1. The code often stops working while waiting for a rising or falling edge. I am unsure what is wrong here; maybe the circuit has a bad solder joint.
  2. Even if it works, the measured distances are not very precise. I expected some inaccuracy because I don’t use a realtime OS, but they are really, really bad. See the image: [Image: ultrasonic-inaccuracy]
    First look at the measured distances: they vary greatly, although the sensor was mounted rigidly facing a wall and nothing moved. A very simple filtering approach (where each new value is only weighted with 0.3, as opposed to the previously filtered value with weight 0.7; see the sketch below) doesn’t make the values usable; consecutive filtered readings (approximately 12.2 ms apart) sometimes still differ by more than 4 cm! If we want to derive a speed estimate from this, we really need a more sophisticated method like the altitude sensor fusion that I proposed earlier; otherwise we would get a speed of 4 cm in 12.2 ms, i.e. approximately 3.28 meters per second – very fast compared to the reality of no movement at all!
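
For reference, the simple filtering mentioned above is just an exponential moving average; a minimal sketch (weights as described, the variable names are mine):

def filter_reading(previous_filtered, new_value, new_weight=0.3):
    # Exponential smoothing: the new reading contributes with weight 0.3,
    # the previously filtered value keeps weight 0.7.
    return new_weight * new_value + (1.0 - new_weight) * previous_filtered

filtered = 100.0  # initialize with the first raw reading (cm)
for raw in (100.0, 104.5, 97.2, 103.8):  # example raw readings
    filtered = filter_reading(filtered, raw)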

Maybe the first point was a hardware failure – I tried another one of my HC-SR04 sensors and it worked much more reliably. But the second point really bugs me. Even if the altitude sensor fusion works perfectly, the results could still be much better with more precise sensor readings. This is why I ordered another ultrasonic range finder that has its own timer and an I2C interface: the SRF02, which measures the distances itself and then delivers the results. Hopefully I can play around with it soon.

Ego Motion Estimation from video and Google TensorFlow

Some time ago, I wrote that one could maybe use the frame motion data that the Raspberry Pi graphics chip outputs while capturing a video to estimate the motion of the attached quadrocopter in 3D space. It is definitely possible to retrieve this information from the full video (see this paper and this video), but it seems that the 2D motion vectors from video encoding are not accurate enough to carry this information, as this paper from HP Labs Bristol, written by Maurizio Pilu in 1997, demonstrates.

For me, this means that it takes more computational effort to get motion estimation from the video, possibly using Deep Learning strategies as currently hyped by all kinds of press. Some weeks ago I worked through the MNIST expert tutorial of Google TensorFlow and must say that I like the idea of defining a computation model in Python and then using a high-performance execution backend (CPU or GPU) to run it. At least from my quick look, the Python API was quite easy to use and I would like to use it in the following scenario: when the altitude sensor fusion (see my last blog entry) is working, I will try to train a convolutional recurrent network to predict the height above ground and the vertical speed (as estimated from the other sensors) from just the video stream.
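
To illustrate the define-then-execute idea, here is a minimal sketch with the graph-based TensorFlow API (nothing quadrocopter-specific, just a toy computation):

import tensorflow as tf

# Define the computation as a graph in Python...
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
result = a * b + 1.0

# ...and let the backend (CPU or GPU) execute it.
with tf.Session() as sess:
    print(sess.run(result, feed_dict={a: 3.0, b: 4.0}))  # prints 13.0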

Unfortunately, this will most likely not perform well on the Raspberry Pi 2, so I won’t be able to use it in the air (but I am interested in how easy it is to get this running in TensorFlow).

Altitude sensor fusion

For an autonomous quadrocopter, it is essential to know the height above ground. We have to distinguish between:

  1. height above ground (as measured from a downward facing distance sensor)
  2. height above starting place
  3. height above average sea level from barometer
  4. height above average sea level from GPS

If we assume that the ground is level, (1) and (2) are equal; however, this is seldom the case. The height in (3) is subject to weather conditions, temperature deviations and different air layers, while (4) suffers from an overly simple model of the earth’s geoid.

Sensors to fuse

  • Barometer (height above sea level according to some avionic standard model)
  • Ultrasonic distance sensor (height above ground)
  • Accelerometer (linear “up” acceleration)
  • GPS (height above modelled sea level)

Regarding the accuracy of barometer vs. GPS, I found a nice article on xcmag.com. There seem to be huge differences between the altitude readings of the two technologies, so we need some alignment mechanism.

Benefits of this sensor fusion

  1. Even if a single sensor fails (be it because of a bird flying under the quadrocopter, flying indoors, or a storm), the system keeps working.
  2. Noise filtering: even when used with just the ultrasonic distance sensor, the noise is reduced.
  3. We get a vertical speed estimate that we can use for a smoother height control
  4. No startup phase is needed: in the very beginning, GPS and barometer might not have reached their full accuracy. This lack of knowledge is part of our model and doesn’t hurt. A less elegant (but computationally cheaper) solution could remove the GPS and barometer biases from the Kalman state and use a startup phase on the ground to find the biases.
  5. If the ground level changes, the system adapts. By keeping track of the states, we might even combine them to get a map of the landscape.

Now that the marketing part is done, let’s get our hands dirty :).

The Kalman filter

To fuse these sensor readings to one unified state, I use a Kalman filter, which is a very common tool in this domain. Here is a great introduction to Kalman Filters that helped me some time ago and Wikipedia has a nice description with a great example.

At its core, the Kalman filter is a very powerful enhancement of a weighted sum between some previous estimate \hat{x}_{k-1} at time k-1 and a measurement z_{k} (at time k ):

\hat{x}_{k} = K_k\cdot z_k + (1-K_k)\cdot \hat{x}_{k-1} .

For an assumed K_k = 0.5 we would get a simple average, but the Kalman filter is much smarter than that and (just as important) multi-dimensional. The power of the Kalman filter lies in the probability distributions it works with and in a linear state prediction model. I will not give an introduction to Kalman filters in general, so please study the links above if you have trouble following the next paragraphs.

Kalman State and Prediction Model

When designing a Kalman Filter, the most essential part is the state vector x .

x=\begin{pmatrix}\text{heightAboveGround}\\\text{verticalSpeed}\\\text{barometerGroundHeight}\\\text{gpsGroundHeight}\end{pmatrix}

As the state, we mainly want to model the height above ground. To model its change, we also need the quadrocopter’s vertical speed. The other two state variables (barometerGroundHeight and gpsGroundHeight) are offsets that help with relating the GPS and barometer readings to the ultrasonic distances.

The Kalman Filter tries to predict the next state based on the current state estimate and user control input (if present):

\begin{aligned} x_k= \underbrace{F_k x_{k-1}}_{\text{prediction}} + \underbrace{B_k u_k}_{\substack{\text{influence of}\\\text{vertical acceleration}}} + \underbrace{\mathcal{N}(0,Q_k)}_{\text{noise}} \end{aligned}

We define F to be:

F_k = \begin{pmatrix} 1 & \Delta{t} & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}

As we see, the diagonal is one, so we take all the values from the last state as a basis. Additionally the time delta is multiplied with the vertical speed and added to the height above ground to reflect the current movement. The ground heights in contrast should not move with the quadrocopter.

\begin{aligned} u_k &= \underbrace{\text{vertical acceleration}}_{\text{accelerometer reading in world space}}\quad B_k &= \begin{pmatrix} \frac{\Delta{t}^2}{2}\\ \Delta{t}\\ 0\\ 0 \end{pmatrix} \end{aligned}

Here we apply integration rules: the vertical speed receives the integral of the vertical acceleration over \Delta{t}, and the height above ground receives the second integral. The offsets of GPS and barometer do not change with quadrocopter acceleration.

Although this is not really a prediction step if we use the accelerometer data from the same point in time, the trick seems to work for attitude sensor fusion in The Design and Implementation of a Robust AHRS for Integration into a Quadrotor Platform by Matthew Watson (2013). With this trick, we don’t need acceleration as another state dimension.
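
As a minimal numpy sketch of this prediction step (nothing of this is implemented yet; the function and variable names are mine, and x, P, Q and the accelerometer reading u are assumed to exist):

import numpy as np

def kalman_predict(x, P, u, dt, Q):
    # State transition and control matrices as defined above.
    F = np.array([[1.0,  dt, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    B = np.array([[0.5 * dt ** 2],
                  [dt],
                  [0.0],
                  [0.0]])
    x = F.dot(x) + B * u        # u = vertical acceleration in world space
    P = F.dot(P).dot(F.T) + Q   # propagate the uncertainty
    return x, P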

Kalman Measurement Model

The matrix H maps the current state to a measurement vector z_k :

z_k= H_k \cdot x_k + \underbrace{ \mathcal{N}(0,R_k)}_{\text{noise}} ,

where x_k is the current state estimation. The lines of H_k refer to the individual sensors, so we want the upper equation to be:

\begin{aligned} z_k = &\begin{pmatrix}\text{Barometer measurement}\\ \text{Ultrasonic sensor measurement}\\ \text{GPS height measurement} \end{pmatrix} \\ = & \begin{pmatrix} \text{heightAboveGround} + \text{barometerGroundHeight}\\ \text{heightAboveGround}\\ \text{heightAboveGround} + \text{gpsGroundHeight} \end{pmatrix} + noise \\ = & \underbrace{\begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 1\end{pmatrix}}_{H_k} \cdot \underbrace{\begin{pmatrix}\text{heightAboveGround}\\\text{verticalSpeed}\\\text{barometerGroundHeight}\\\text{gpsGroundHeight}\end{pmatrix}}_{x_k} + \underbrace{noise}_{\mathcal{N}(0,R_k)} \end{aligned}

As we see, the vertical speed has no mapping in H (the second column is zero). This is because we cannot directly observe it. The Kalman Filter will however deduce a probability distribution for it from the movement model.

The height above ground is observed by three sensors, which should give good precision and noise insensitivity.
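
The corresponding measurement update could then look like this (again only a sketch; z is the measurement vector with the barometer, ultrasonic and GPS readings, and H and R_k are as defined in this section):

import numpy as np

def kalman_update(x, P, z, R):
    # Measurement matrix as derived above (rows: barometer, ultrasonic, GPS).
    H = np.array([[1.0, 0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0, 1.0]])
    y = z - H.dot(x)                      # innovation
    S = H.dot(P).dot(H.T) + R             # innovation covariance
    K = P.dot(H.T).dot(np.linalg.inv(S))  # Kalman gain
    x = x + K.dot(y)
    P = (np.eye(4) - K.dot(H)).dot(P)
    return x, P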

Adapting Covariances and Noises

Because the Kalman filter always calculates with Gaussian probability distributions, the probabilities and covariances are as important as the expected values.

Q_k = \begin{pmatrix} 0.3 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \qquad \text{Reliability of state prediction}

The state prediction only affects height above ground and vertical speed, so we don’t have to introduce a lack of knowledge in the other two dimensions. Since the influence of the noisy accelerometer reading is stronger on the vertical speed, the noise is stronger there.

\begin{aligned} & R_k &= & \begin{pmatrix} 1 & 0 & 0 \\ 0 & \sigma^{\text{ultrasonic}}_k & 0 \\ 0 & 0 & \sigma^{\text{gps}}_k \end{pmatrix} \qquad \text{Reliability of height sensor data} \\ & \sigma^{\text{ultrasonic}}_k &= & \begin{cases} 10000 & \mbox{if no echo} \\ 0.25 &\mbox{otherwise} \end{cases} \\ & \sigma^{\text{gps}}_k &= & \begin{cases} 10000 & \mbox{if too few satellites} \\ 1 + (\text{num satellites})^{-\frac{1}{2}} &\mbox{otherwise} \end{cases} \end{aligned}

The matrix R_k expresses no noise covariance between the different height sensors, because only the values should be correlated, not the noise. It changes as soon as we get above the maximum range of the ultrasonic sensor, since then we don’t get a result from it; the corresponding entry in R_k should then be very high. Luckily, the Kalman filter supports changing the matrices in every step.

Similarly, we need to adapt the entries for GPS to the number of satellites available. With fewer than three satellites, the GPS has no value for us and the corresponding value in R_k needs to be extraordinarily high. On the other hand, with more satellites, the accuracy is better and the noise should be reduced.
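
A sketch of how R_k could be rebuilt in every step from the current sensor status (the values and thresholds are simply the ones from the formulas above; the function name is mine):

import numpy as np

def build_R(ultrasonic_echo_received, num_satellites):
    sigma_ultrasonic = 0.25 if ultrasonic_echo_received else 10000.0
    if num_satellites < 3:
        sigma_gps = 10000.0
    else:
        sigma_gps = 1.0 + num_satellites ** -0.5
    # Diagonal entries for barometer, ultrasonic and GPS;
    # no noise correlation between the sensors.
    return np.diag([1.0, sigma_ultrasonic, sigma_gps])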

Initial values

In the beginning, we assume that we are on the ground with zero speed, but we have no idea where we are on earth and what weather conditions we face, so the GPS and barometer offsets are unknown.

x_0 = \begin{pmatrix} 0 \\ 0 \\ 100 \\ 100 \end{pmatrix}

One very nice aspect of Kalman filtering is that the initial values can be very inaccurate, as long as the initial probability distribution reflects this lack of knowledge. So we set the initial probability distribution to be quite confident in the first two state dimensions and very unsure in the third and fourth:

P_0 = \begin{pmatrix} 0.1 & 0 & 0 & 0 \\ 0 & 0.1 & 0 & 0 \\ 0 & 0 & 10000 & 0 \\ 0 & 0 & 0 & 10000 \end{pmatrix}
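
In code, these initial values are simply (a numpy sketch; nothing is tuned yet):

import numpy as np

# Start on the ground with zero speed; barometer and GPS offsets unknown.
x = np.array([[0.0], [0.0], [100.0], [100.0]])

# Confident about the first two dimensions, very unsure about the offsets.
P = np.diag([0.1, 0.1, 10000.0, 10000.0])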

All the values in this post should be treated as just a first idea because I did not yet implement this. As soon as the implementation is done, I will post some results and tuned parameters.

PID implementation finished

My current plan is to use four PID controllers to control the height and three rotation speeds of my quadrocopter. If you don’t know PID controllers – Wikipedia has a nice description.

Today I found some time to finish my PID controller implementation – but wait! Why my own implementation? First, the code is very, very simple. Second, there is not the one PID controller, but many very different ones, and I just did not find a Python implementation that suits my needs (like resetting the target state frequently). The author of the Arduino PID library, Brett Beauregard, describes common pitfalls in his blog post series at http://brettbeauregard.com/blog/2011/04/improving-the-beginners-pid-introduction/. So I use his C implementation and refinement ideas as a basis for my own PID code in Python. The points that I picked from his improvement list are:

  • Derivative kick: When the target state (= setpoint) is changed, the derivative of the error seems to change too, because the last error was calculated with respect to the previous target state. This can be avoided by remembering the last state instead of the last error, and calculating the difference between the current and last error only with respect to the current setpoint: \frac{\delta\text{error}}{\delta{t}} =\frac{(\text{setpoint} - \text{currentState})-(\text{setpoint} - \text{lastState})}{\delta{t}} = \frac{\text{lastState} - \text{currentState} }{\delta{t}}.
    With this redefinition of the derivative, the output remains much more stable when the setpoint changes.
  • Reset windup mitigation: A PID does not assume much about the system it tries to control. By giving it at least some information about the value domain (minimum and maximum output), some errors can be avoided (see the sketch after this list).
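
A minimal sketch of these two refinements in Python (not my final code; the gains and output limits are placeholders): the derivative is computed on the measurement to avoid the kick, and both the integral term and the output are clamped to the known value domain.

class PID(object):
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.last_state = None

    def update(self, setpoint, current_state, dt):
        error = setpoint - current_state

        # Clamp the integral term so it cannot wind up beyond the output limits.
        self.integral += self.ki * error * dt
        self.integral = min(max(self.integral, self.out_min), self.out_max)

        # Derivative on measurement: unaffected by setpoint changes, so no derivative kick.
        if self.last_state is None:
            derivative = 0.0
        else:
            derivative = (self.last_state - current_state) / dt
        self.last_state = current_state

        output = self.kp * error + self.integral + self.kd * derivative
        return min(max(output, self.out_min), self.out_max)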

If I find a need to refine it further, I can do so – it’s my code! Now the plan is:

  • develop a very basic high-level planning module (just fly up 1m, stay there 5s, go down)
  • test the control loop
  • think about failsafe mechanisms
  • finish the hardware build as described in my previous blog entry
  • tune the PID settings and state change planning parameters
  • let it fly!

Solidifying the quadrocopter build

Up to now, I mostly worked on the concepts and theory of quadrocopter flight. However, I also have to build a solid quadrocopter that can survive some crashes (which are inevitable in the beginning, when no tuning has been done).

There were some things that really bugged me:

  • The motor power connections had to be removed to access the MPU board below the power distribution board.
  • I want to have four ultrasonic sensors and LEDs under the wings, but the ESCs already occupy the space.
  • With elements attached to both the bottom and the top plane and many connections in between, there is no simple way of separating the two parts; cables have to be pulled in order to access the boards for soldering or other changes.
  • Most parts were attached with cable ties for prototyping, mostly because I had no drill bit.
[Image: upper_frame_from_below]
The upper frame part seen from below (the frame’s lower plane is removed). The ESCs will all be mounted like the front left one to leave some space for LEDs and ultrasonic sensors. The Raspberry Pi will be mounted on top.

Now all the basic flight parts (wings, motors, ESCs, power distribution board, power transformer board, MPU board, PWM board) are attached to the frame’s upper plane, so that it is possible to dismount the lower plane and then access all the boards and wiring from below. Connections between the frame and the parts are now made with screws (except for a few pieces). The ESCs will be mounted on the inner sides of the wings, so that there is space on the outer and lower side. On the frame’s lower plane there is still space for a LiPo (below the power transformer board).

The picture shows the quadrocopter upside down with the frame’s lower plane removed. The front left wing already shows the new ESC positioning. The MPU board and the Raspberry Pi are not yet mounted.