Overview
The project is my diploma thesis. The goal is to achieve autonomous drone flight using object detection based on image processing. There are two applications that make up the work. The first one is a drone simulator that sends video to the second application, which controls the drone. Both are written in C++.
Drone simulator
The simulator uses my own 3D graphics library (https://marcindziedzic.wixsite.com/portfolio/proengine3d) for rendering with OpenGL. Because the drone has its own camera, the scene is rendered twice - first from the simulator perspective and then from the drone's camera perspective - to give the user both a preview of the simulator environment and what the drone actually sees.
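A minimal sketch of that double render pass, assuming plain OpenGL; the Camera type, the drawScene call and the framebuffer handle are placeholders standing in for the engine's actual API:

    #include <GL/glew.h>

    // Placeholder camera type and draw call standing in for the engine's API.
    struct Camera { /* view/projection matrices, etc. */ };
    void drawScene(const Camera& cam); // provided by the engine

    // Draw the scene twice per frame: once for the drone, once for the user.
    void renderFrame(GLuint droneViewFbo, const Camera& droneCam, const Camera& userCam)
    {
        // Pass 1: drone camera view into an offscreen framebuffer; this is
        // what later gets read back and streamed to the controller.
        glBindFramebuffer(GL_FRAMEBUFFER, droneViewFbo);
        drawScene(droneCam);

        // Pass 2: simulator preview into the default (window) framebuffer.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        drawScene(userCam);
    }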
Mathematical model
The mathematical model of the drone is very simple for this purpose. There is only one force, called thrust, always directed upward in drone space. That means that if we change the orientation of the drone, it will move, because the force vector will be transformed as well. To do this, we rotate the thrust force vector in the same way the 3D model is rotated. The first step is to build a rotation matrix from three angles: pitch, yaw and roll:
RotationMatrix = RotY(yaw) * RotX(pitch) * RotZ(roll)
As I said, the drone produces only one force, directed upward:
ThrustAcceleration = vec3(0, thrust, 0)
But it is in drone space. To move this vector into world space, we just apply the rotation transformation:
ThrustAcceleration = RotationMatrix * vec3(0, thrust, 0)
Generally, that's all. This is the basis of the mathematical model of this simple drone simulation. The next thing is simulating rotational movement. To make it easier, I rotate the drone by controlling angular velocity, not acceleration. The result is pretty good and doesn't need any 'regulators'.
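Put together, one physics step of this model could look like the sketch below. It's a minimal version assuming the GLM math library and simple Euler integration, not the simulator's exact code:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    struct DroneState {
        glm::vec3 position{0.0f};
        glm::vec3 velocity{0.0f};
        float pitch = 0.0f, yaw = 0.0f, roll = 0.0f; // radians
        float thrust = 9.81f;                        // m/s^2; 9.81 hovers against gravity
    };

    // One physics step of the simplified model described above.
    void updateDrone(DroneState& d, float dt)
    {
        // RotationMatrix = RotY(yaw) * RotX(pitch) * RotZ(roll)
        glm::mat4 rotation =
            glm::rotate(glm::mat4(1.0f), d.yaw,   glm::vec3(0, 1, 0)) *
            glm::rotate(glm::mat4(1.0f), d.pitch, glm::vec3(1, 0, 0)) *
            glm::rotate(glm::mat4(1.0f), d.roll,  glm::vec3(0, 0, 1));

        // Thrust acts upward in drone space; rotate it into world space.
        glm::vec3 thrustAcc = glm::vec3(rotation * glm::vec4(0.0f, d.thrust, 0.0f, 0.0f));
        glm::vec3 gravity(0.0f, -9.81f, 0.0f);

        // Euler integration: acceleration -> velocity -> position.
        d.velocity += (thrustAcc + gravity) * dt;
        d.position += d.velocity * dt;
    }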
Streaming video
The current version of video streaming is a 'quick implementation' that doesn't work very well, but... works. It uses TCP to send rendered frames. It doesn't have any frame checking, but since it is TCP, that isn't necessary in this version. The system also has no video compression, which causes high latency due to the large size of the frames.
Sending video is done by rendering the drone view to a texture and then reading the data from the color buffer (glReadPixels).
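The readback-and-send step might look roughly like this; a sketch assuming a POSIX socket that is already connected, with the frame sent raw and uncompressed, as described above:

    #include <GL/gl.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdint>
    #include <vector>

    // Read the rendered drone view back from the color buffer and push it
    // over an already-connected TCP socket, uncompressed.
    void sendFrame(int socketFd, int width, int height)
    {
        std::vector<uint8_t> pixels(width * height * 3);
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

        // TCP is a byte stream, so loop until the whole frame is written.
        size_t sent = 0;
        while (sent < pixels.size()) {
            ssize_t n = send(socketFd, pixels.data() + sent, pixels.size() - sent, 0);
            if (n <= 0) break; // connection lost
            sent += static_cast<size_t>(n);
        }
    }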
Flight controller
Controlling things like a drone is not so easy, because we are controlling the second derivative of position. A higher derivative means higher 'latency' and the need for more complex equations. In the case of a drone, some values affect others, which makes it even more complicated. The basic idea for stabilizing the drone relative to the marker is to use the distance of the marker from the image center as the input parameter.
Controlling the drone
The simulation is based on controlling a real drone (AR.Drone 2) to make swapping the simulator for the real robot as easy as possible.
The drone is controlled by sending TCP packets with pitch and roll angles, yaw angular velocity and gaz. The parameters associated with angles are self-explanatory. Gaz is a [-1; 1] value that specifies the power of all 'motors' without causing rotation. A value of 0 means that the thrust force is equal to gravity (hovering). Applying a positive value causes the drone to move upward, and a negative one to move downward.
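For illustration, a command could be packed like this; the struct layout and the raw-struct send are my assumptions, not the project's actual wire format:

    #include <sys/socket.h>

    // Hypothetical control packet; field meanings follow the description above.
    struct ControlCommand {
        float pitch;   // target pitch angle
        float roll;    // target roll angle
        float yawRate; // yaw angular velocity
        float gaz;     // [-1; 1]: 0 = hover, >0 up, <0 down
    };

    // Send one command over an already-connected TCP socket.
    void sendCommand(int socketFd, const ControlCommand& cmd)
    {
        send(socketFd, &cmd, sizeof(cmd), 0);
    }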
Detecting objects
The OpenCV library is used to process each frame of the video. It has a very nice module for ArUco marker recognition (it is not in the standard package and has to be compiled on your own, if I remember correctly). The markers are similar to QR codes, but instead of reading data from them, we just recognize the ID of the marker. This is much less information, but it is quite enough for projects like this.
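Detection itself is only a few OpenCV calls. A minimal sketch using the classic aruco contrib API; the dictionary choice (DICT_4X4_50) is my assumption:

    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp> // contrib module

    // Detect ArUco markers in one frame and return the center of the first
    // marker found, or (-1, -1) if none was detected.
    cv::Point2f findMarkerCenter(const cv::Mat& frame)
    {
        auto dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;

        cv::aruco::detectMarkers(frame, dictionary, corners, ids);
        if (ids.empty())
            return cv::Point2f(-1.0f, -1.0f);

        // The marker center is the average of its four corners.
        cv::Point2f center(0.0f, 0.0f);
        for (const auto& corner : corners[0])
            center += corner;
        return center * 0.25f;
    }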

Implementation of the regulator
As I said, the basic idea is to use the distance of the marker from the image center as the input parameter. If we could set the linear velocity of the drone, the equation would be easy:
Velocity = Distance * Factor
But controlling acceleration instead of velocity this way would give us never-ending oscillations. The difference is that when controlling acceleration, we need to slow the drone down by applying an opposite value when it's close to the target position. To do this, we can use velocity in the equation. The template looks as follows:
Acceleration = Distance * DistanceFactor - Velocity * VelocityFactor
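This is essentially a PD-style control law: the marker offset is the error, and the velocity term damps the oscillations. A single-axis sketch; estimating velocity by differencing the offset between frames is my assumption:

    // One axis of the regulator from the equation above.
    struct AxisRegulator {
        float distanceFactor;     // gain on the marker offset
        float velocityFactor;     // gain on the damping term
        float lastDistance = 0.0f;

        // distance: marker offset from the image center on this axis.
        // dt: time elapsed since the previous frame, in seconds.
        float update(float distance, float dt)
        {
            // Approximate velocity as the change of the offset between frames.
            float velocity = (distance - lastDistance) / dt;
            lastDistance = distance;
            return distance * distanceFactor - velocity * velocityFactor;
        }
    };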
The next step is to compensate for the impact of the input parameters on each other. Applying a pitch or yaw angle changes the position of the marker in the image, which causes the drone to 'think' that the marker is higher or lower, for example.
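One possible way to do this, assuming a pinhole camera model (not necessarily the project's actual compensation): a pitched camera shifts every image point vertically by roughly focalLength * tan(pitch), so that shift can be subtracted from the measured offset before feeding it to the regulator. The focal length in pixels would come from camera calibration.

    #include <cmath>

    // Remove the vertical marker shift caused by the drone's own pitch,
    // assuming a pinhole camera model. Hypothetical helper, not the
    // project's actual compensation.
    float compensatePitch(float measuredOffsetY, float pitchRadians,
                          float focalLengthPixels)
    {
        return measuredOffsetY - focalLengthPixels * std::tan(pitchRadians);
    }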
