Monte Carlo Localization Implementation

So far we have covered all tabs of the Monte Carlo Localization (MCL) application except the first. The first tab provides the view into the MCL simulation. It is initialized with a map, the 'real' robot pose drawn in blue, and a subset (1,000) of all the particles (probabilistic robot poses) drawn as small black robot icons.

The simulation is set up in the constructor of the MainWindowModel class in the ProbabilisticRobot.UI project:

Sampler.Initialize(new SamplerNormal());
InitializeMap();
InitializeRobotPath();

m_MCLTabModel = new MCLTabModel(this);
m_SensorsTabModel = new SensorsTabModel(this);

// Velocity motion model and beam model used by the particle filter
VelocityModel velocityModel = new VelocityModel(0.01, 0.01, 0.01, 0.01, 0.01, 0.01);
BeamModel beamModel = new BeamModel(3, 0.02, 1, new WeighingFactors(1, 0.1, 0, 0.1));
// The MCL algorithm is initialized with 20,000 particles
this.MCL = new MonteCarloLocalization(this.Map, new Robot(), 20000, velocityModel, beamModel);
this.MCLSimulation = new MCLSimulation(this.MCL, m_MCLTabModel.RobotPath.StartPose);

First the map is initialized from a hard-coded list of corner points; then the list of robot drive commands defining the robot path is set up. Next come the definitions of the velocity motion model and the beam model for the perception. The MCL algorithm itself is implemented in the MonteCarloLocalization class, which is initialized next with 20,000 particles. Finally, an instance of the MCLSimulation class is created; it is responsible for handling the drive commands and for simulating the sensor measurements.
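To give an idea of what the two initialization helpers might look like, here is a minimal sketch. The corner coordinates, the Map constructor, and the RobotPath/DriveCommand signatures are assumptions for illustration only; the actual values and signatures in the project differ:

private void InitializeMap()
{
    // Hypothetical corner points of the room outline (the project's actual values differ)
    this.Map = new Map(new[]
    {
        new Point(0, 0), new Point(10, 0), new Point(10, 6), new Point(0, 6)
    });
}

private void InitializeRobotPath()
{
    // Hypothetical drive commands (velocity, turn rate, duration); assumed signatures
    this.RobotPath = new RobotPath(
        new Pose(1, 1, Angle.FromDegrees(0)),
        new[] { new DriveCommand(0.5, 0.0, 2.0), new DriveCommand(0.5, 0.2, 3.0) });
}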

After the start of the application, or after clicking the Reset button, the real robot is located at the start pose (m_MCLTabModel.RobotPath.StartPose). The initial random poses for the particles representing potential robot poses are created in the MonteCarloLocalization class:

public void InitializeParticles()
{
    for (int i = 0; i < this.ParticleCount; i++)
    {
        this.Particles[i] = CreateRandomPose();
    }
}

private Pose CreateRandomPose()
{
    // Draw a random heading from the discretized set of angle buckets
    int angleBucketCount = (int)(360 / DeltaTheta.Degrees);
    Angle heading = Angle.FromDegrees(Sampler.Random.Next(angleBucketCount) * DeltaTheta.Degrees);

    // Rejection sampling: draw x/y until the location falls inside the map outline
    double x, y;
    do
    {
        x = this.Map.XMin + Sampler.Random.NextDouble() * (this.Map.XMax - this.Map.XMin);
        y = this.Map.YMin + Sampler.Random.NextDouble() * (this.Map.YMax - this.Map.YMin);
    }
    while (!this.Map.IsInside(x, y));

    return new Pose(x, y, heading);
}

To ensure sufficient coverage of the full heading range, the robot headings are randomly selected with a granularity of DeltaTheta (0.1 rad, roughly 5.7 degrees, which results in 62 discrete heading buckets).

The first time the Next Step button is clicked, sensor measurements for each robot particle are simulated and incorporated. From then on, each click of the Next Step button triggers the execution of the next move command and the handling of the sensor measurements for the new positions. Incorporating the measurements is at the very center of the algorithm, since only in this step are unlikely robot poses actively removed:

public void Update(DriveCommand driveCommand, double[] measurements)
{
    double weightsSum = 0;
    for (int i = 0; i < this.ParticleCount; i++)
    {
        Pose originalParticle = this.Particles[i];
        Pose movedParticle = this.VelocityModel.Sample(originalParticle, driveCommand);

        while(!this.Map.IsInside(movedParticle.Location))
        {
            // The particle moved out of the map. We replace it with a new random particle
            originalParticle = CreateRandomPose();
            movedParticle = this.VelocityModel.Sample(originalParticle, driveCommand);
        }

        double weight = CalculateWeight(movedParticle, measurements);

        // Build up the cumulative weight sum; m_AggregatedWeights is used later
        // by the binary search in DrawParticle
        weightsSum += weight;
        m_AggregatedWeights[i] = weightsSum;

        this.Particles[i] = movedParticle;
    }

    Debug.WriteLine(weightsSum);
    DrawParticles(weightsSum);
}

private double CalculateWeight(Pose pose, double[] measurements)
{
    // The particle's weight is the product of the probabilities of the individual sensor beams
    double weight = 1.0;
    for (int i = 0; i < this.Robot.SensorDirections.Length; i++)
    {
        Angle heading = pose.Heading + this.Robot.SensorDirections[i];
        double distanceToWall = this.Map.GetClosestWallDistance(pose.Location, heading);

        weight *= this.BeamModel.GetProbability(measurements[i], distanceToWall, DeltaR);
    }
    return weight;
}

First the move command is applied to each robot 'particle', resulting in new probabilistic robot poses. (The first time Next Step is clicked, the move command is set to null; as a result the move step is omitted.) If a robot particle falls outside of the map, the particle is removed and a new particle with a completely random pose is added instead. Next, a weight is calculated for each particle based on how probable it is that the sensors of a robot at the given pose would yield the simulated 'real' measurements. The cumulative weights are stored in the m_AggregatedWeights array, and the sum of all weights is built up in the weightsSum variable.
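The BeamModel class itself is not shown in this article. As an illustration of what GetProbability might compute, the following is a minimal sketch of the standard beam-model mixture from the probabilistic robotics literature (Thrun, Burgard, Fox: Probabilistic Robotics): a Gaussian around the expected distance, an exponential term for unexpectedly short readings, a spike at the maximum range, and a uniform noise floor. The field names and the exact use of deltaR are assumptions; the project's actual implementation may differ:

// Sketch of a beam-model mixture; m_Sigma, m_LambdaShort, m_MaxRange and the
// m_Z* mixture weights are assumed names, not the project's actual fields.
public double GetProbability(double measured, double expected, double deltaR)
{
    // Gaussian density around the expected distance (measurement noise)
    double pHit = Math.Exp(-Math.Pow(measured - expected, 2) / (2 * m_Sigma * m_Sigma))
                  / (Math.Sqrt(2 * Math.PI) * m_Sigma);

    // Exponential density for unexpectedly short readings (unmapped obstacles)
    double pShort = measured < expected
        ? m_LambdaShort * Math.Exp(-m_LambdaShort * measured)
        : 0;

    // Spike at the maximum sensor range (no echo received)
    double pMax = Math.Abs(measured - m_MaxRange) < deltaR ? 1 : 0;

    // Uniform noise floor over the full measurement range
    double pRand = 1 / m_MaxRange;

    // Weighted mixture; multiplying the densities by deltaR turns them into probabilities
    return m_ZHit * pHit * deltaR + m_ZShort * pShort * deltaR
           + m_ZMax * pMax + m_ZRand * pRand * deltaR;
}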

Up to this point the particles still reflect only the move command. In order to incorporate the sensor measurements, we sample with replacement from the available particles based on the weight of each particle; i.e., a particle with a high weight is likely to be drawn (potentially multiple times), while a particle with a much lower weight might not be drawn at all. Essentially this is a Darwinian concept: the fittest (most likely) robot poses survive while the weaker ones die out.

The drawing of the new set of particles based on the weights is implemented in the function DrawParticles:

private void DrawParticles(double sumOfWeights)
{
    for (int i = 0; i < this.ParticleCount; i++)
    {
        double random = Sampler.Random.NextDouble() * sumOfWeights;
        m_TempNewParticles[i] = DrawParticle(random);
    }

    // Swap the arrays so that the newly drawn set becomes the current particle set
    m_Swap = this.Particles;
    this.Particles = m_TempNewParticles;
    m_TempNewParticles = m_Swap;
}

private Pose DrawParticle(double random)
{
    // Binary search for the weight bracket containing the random value;
    // based on http://www.dreamincode.net/code/snippet1835.htm
    int lowIndex = 0;
    int highIndex = this.ParticleCount - 1;

    // Loop while the low index is less than or equal to the high index
    while (lowIndex <= highIndex)
    {
        // Get the middle point of the remaining search range
        int midNum = (lowIndex + highIndex) / 2;

        double rangeLower = midNum - 1 < 0 ? 0 : m_AggregatedWeights[midNum - 1];
        double rangeUpper = m_AggregatedWeights[midNum];

        if (random < rangeLower)
        {
            // The search value lies below this bracket; continue in the lower half
            highIndex = midNum - 1;
        }
        else if (random >= rangeUpper)
        {
            // The search value lies above this bracket; continue in the upper half
            lowIndex = midNum + 1;
        }
        else
        {
            // rangeLower <= random < rangeUpper: we found the matching bracket
            return this.Particles[midNum];
        }
    }

    throw new InvalidOperationException("The random value does not fall into any weight bracket.");
}

The function DrawParticles picks a random number between zero and the sum of all weights. It then searches for the particle into whose weight bracket the random number falls: if the random number is between zero and the weight of the first particle, the first particle is picked; if it falls between the weight of the first particle and the sum of the weights of the first two particles, the second particle is chosen; and so on. After drawing a complete set of new particles we end up with the same particle count, but typically with some poses represented multiple times. (Note that multiple particles with identical poses will almost certainly end up at different poses once the next drive command, with its sampled noise, is applied.) The new set correctly reflects the probability distribution based on the drive command and the sensor measurements.
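To make the bracket search concrete, here is a small self-contained example with made-up weights; it mirrors the logic of DrawParticle on a toy array of three particles with weights 0.5, 0.3, and 0.2 (aggregated weights 0.5, 0.8, 1.0):

using System;

class ResamplingDemo
{
    static void Main()
    {
        // Made-up cumulative weights for three particles
        double[] aggregatedWeights = { 0.5, 0.8, 1.0 };

        // 0.3 falls into [0, 0.5) -> particle 0; 0.6 into [0.5, 0.8) -> particle 1;
        // 0.9 into [0.8, 1.0) -> particle 2
        foreach (double random in new[] { 0.3, 0.6, 0.9 })
        {
            Console.WriteLine(random + " -> particle " + FindBracket(aggregatedWeights, random));
        }
    }

    static int FindBracket(double[] aggregated, double random)
    {
        int low = 0;
        int high = aggregated.Length - 1;
        while (low <= high)
        {
            int mid = (low + high) / 2;
            double lower = mid == 0 ? 0 : aggregated[mid - 1];
            if (random < lower) high = mid - 1;
            else if (random >= aggregated[mid]) low = mid + 1;
            else return mid;
        }
        throw new InvalidOperationException("random outside the aggregated weight range");
    }
}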

The UI displays the true robot pose (blue), the robot path (red), and a subset of the particles as small black robot icons for each step, as can be seen in the animated GIF below:

Running the actual application is slightly slower; the runtime performance is significantly impacted by the time it takes to draw the subset of the particles on the canvas. The simulation clearly shows how quickly the MCL algorithm can go from a completely random distribution of poses (at the start) to a fairly narrow distribution, provided the sensors can pick up enough distinguishing structure. This is the case in the left portion of the map and in the lower right portion. Also clearly visible is how the once narrow distribution widens again when the robot drives through the middle part of the map, where there is very little structure and some of the robot's sensors don't even reach the walls.

This concludes the description of the application. The final section provides download links for the application and the source code.
