Probabilistic Kinematics
The pose of the robot is described by the vector (x, y, θ)^T, where θ is the heading, i.e., the direction the robot points in. The corresponding source code can be found in the file Pose.cs in the project ProbabilisticRobot.
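Pose.cs is not reproduced here, but a minimal sketch of such a pose type might look like the following (the field and method names are illustrative, not necessarily those used in the project):

```csharp
using System;

// Illustrative pose type: position (x, y) plus heading theta in radians.
public struct Pose
{
    public readonly double X;
    public readonly double Y;
    public readonly double Theta; // heading, measured counterclockwise from the x axis

    public Pose(double x, double y, double theta)
    {
        X = x;
        Y = y;
        Theta = theta;
    }

    public override string ToString() =>
        $"({X:F3}, {Y:F3}, {Theta:F3})";
}
```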
Next we need to establish the conditional density for the robot's next pose x_t based on its previous pose x_{t-1} and the motion command u_t. Here x (bold) represents the complete pose of the robot, not just its x coordinate:

p(x_t | u_t, x_{t-1})
It is important to realize that we are not talking about one resulting end pose. Since the outcome of the motion command is corrupted by noise, we get a probability distribution of end poses. We need to model this distribution in a way that allows us to draw samples of end poses. We assume a robot with a differential drive. In this case two models are typically used:
- Odometry-based
- Velocity-based (dead reckoning)
In my application I use the velocity-based model. The associated formulas, including the sampling algorithm, are described in the document VelocityMotionModel.pdf by Çetin Meriçli. Please note the six parameters α1, ..., α6 that describe the various probabilistic motion errors; in the code they are represented as A1, ..., A6 in the VelocityModel class in \ProbabilisticRobot\MotionModel\VelocityModel.cs. Sampling of the new poses is implemented in the Sample(...) functions of the VelocityModel class. They leverage the Sampler class, which supports sampling from a normal distribution as well as from a triangular distribution. In the code the former is used.
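For reference, here is a sketch of the standard velocity-model sampling step as given in Probabilistic Robotics (Table 5.3). The class and method names, the Box-Muller normal sampler, and the division-by-zero guard are my own illustrative choices; the actual VelocityModel and Sampler classes in the project may be structured differently:

```csharp
using System;

// Illustrative velocity-model sampler following Probabilistic Robotics,
// Table 5.3. Not the project's actual VelocityModel implementation.
public class VelocityModelSketch
{
    private readonly double a1, a2, a3, a4, a5, a6; // noise parameters A1..A6
    private readonly Random random = new Random();

    public VelocityModelSketch(double a1, double a2, double a3,
                               double a4, double a5, double a6)
    {
        this.a1 = a1; this.a2 = a2; this.a3 = a3;
        this.a4 = a4; this.a5 = a5; this.a6 = a6;
    }

    // Draws one end-pose sample for the command (v, w) applied for deltaT
    // seconds, starting at pose (x, y, theta).
    public (double X, double Y, double Theta) Sample(
        double x, double y, double theta, double v, double w, double deltaT)
    {
        // Perturb the commanded velocities; the noise variance grows with
        // the squared magnitude of the command.
        double vHat = v + SampleNormal(a1 * v * v + a2 * w * w);
        double wHat = w + SampleNormal(a3 * v * v + a4 * w * w);
        double gammaHat = SampleNormal(a5 * v * v + a6 * w * w);

        // Guard against division by zero for (almost) straight-line motion.
        if (Math.Abs(wHat) < 1e-10) wHat = 1e-10;

        double r = vHat / wHat; // radius of the circular arc
        double newX = x - r * Math.Sin(theta) + r * Math.Sin(theta + wHat * deltaT);
        double newY = y + r * Math.Cos(theta) - r * Math.Cos(theta + wHat * deltaT);
        double newTheta = theta + wHat * deltaT + gammaHat * deltaT;
        return (newX, newY, newTheta);
    }

    // Samples from a zero-mean normal distribution with the given variance,
    // using the Box-Muller transform.
    private double SampleNormal(double variance)
    {
        double u1 = 1.0 - random.NextDouble(); // in (0, 1], so Log is safe
        double u2 = random.NextDouble();
        double stdNormal = Math.Sqrt(-2.0 * Math.Log(u1)) *
                           Math.Cos(2.0 * Math.PI * u2);
        return Math.Sqrt(variance) * stdNormal;
    }
}
```

The idea is that the commanded velocities v and w are perturbed by noise before the motion is integrated along a circular arc; the additional term γ perturbs the final heading, which keeps the set of reachable end poses from collapsing onto a two-dimensional manifold.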
For a detailed description of the velocity model and the underlying math, please see the book Probabilistic Robotics by Thrun, Burgard, and Fox.
In the application the velocity model parameters are displayed in the lower part of the Motion Model tab. The upper part shows a map with the robot situated on the left, pointing to the right.
Each time the Next Step button is clicked, a new drive command is simulated. Under perfect conditions the robot would follow the drive command exactly (drawn as a red line, with the expected robot position drawn as a blue circle). In reality, however, the resulting robot poses are distributed according to the probability distributions. The UI depicts this by drawing 50 sampled robot poses. With each step the error increases, soon leading to a very wide spread of robot poses. Below is an animated GIF that shows how drastically the uncertainty of the robot pose increases over time, soon spreading far beyond the map.
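Conceptually, the simulation repeatedly applies the sampling step to each of the 50 hypotheses. A rough sketch of that loop, using the hypothetical VelocityModelSketch from above with made-up parameter values and drive commands, might look like this:

```csharp
using System.Collections.Generic;

// Illustrative only: 50 pose hypotheses, all starting at a perfectly known
// pose, driven by the same command at every step. The spread of the samples
// grows with each step because the motion noise accumulates.
var model = new VelocityModelSketch(0.01, 0.01, 0.01, 0.01, 0.01, 0.01);
var poses = new List<(double X, double Y, double Theta)>();
for (int i = 0; i < 50; i++)
    poses.Add((0.0, 0.0, 0.0));

for (int step = 0; step < 10; step++)
{
    for (int i = 0; i < poses.Count; i++)
    {
        var p = poses[i];
        // drive command: 0.5 m/s forward, 0.1 rad/s turn, for 1 second
        poses[i] = model.Sample(p.X, p.Y, p.Theta, 0.5, 0.1, 1.0);
    }
    // ... redraw the 50 poses here; they fan out further at every step
}
```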
Without sensing its environment, the robot would very quickly be completely lost even if the initial pose is exactly known. The next section discusses how the robot's 'senses' are modeled.