Localized Vacuum Cleaner

For this practice, we are going to design the control system for a high-end vacuum cleaner. High-end vacuum cleaners stand out from the rest thanks to their more capable sensors, which enable more efficient home-cleaning algorithms. In this case, the algorithm we are going to use for coverage is the Backtracking Spiral Algorithm (BSA).


Backtracking Spiral Algorithm (BSA):


This algorithm involves dividing the provided simulation world map into cells. The size of these cells depends on the dimensions of the robot, and it is recommended to make them slightly smaller than the robot's diameter so that the robot does not pass over the same area of the world multiple times, which would be represented by different cells in our auxiliary map. We will also classify our cells into three types: obstacle, already visited, and dirty.

We will also inflate the obstacles on the map using erosion (eroding the free space shrinks it, which enlarges the obstacle regions), so the robot keeps its distance from them and avoids collisions. The erosion kernel we are going to use will be a little bigger than the robot's radius (in pixels).
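With OpenCV this takes only a few lines. A minimal sketch, assuming a grayscale map where free space is white (255) and obstacles are black (0), with a placeholder robot radius and file name:

```python
import cv2
import numpy as np

ROBOT_RADIUS_PX = 10                  # placeholder, depends on map resolution
kernel_size = ROBOT_RADIUS_PX + 2    # a little bigger than the robot's radius
kernel = np.ones((kernel_size, kernel_size), np.uint8)

map_img = cv2.imread("map.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Eroding the white (free) regions shrinks them, effectively inflating
# the black obstacle regions by roughly kernel_size / 2 pixels per side.
eroded_map = cv2.erode(map_img, kernel, iterations=1)
```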

You can see how the obstacle areas are inflated in the image below.

Original map - eroded map




Once we have enlarged the obstacles, we can proceed to divide the map into cells.

A first approach we could take is to draw vertical and horizontal lines on our map, discretizing it into the grid cells we are looking for. However, this may cause some information loss, since we would be modifying map values that we need for deliberation purposes just to create the grid. The approach I propose to avoid this is to use the 'grid' formed by the spaces between the pixels that make up our map: I look at which pixel my robot is in and paint a square cell around it, with the robot's diameter as the length of the cell's side.

To check whether my robot's 'neighboring cells' are occupied, clean, or dirty, I look north, east, south, and west from my robot's pixel (the central pixel of its cell) at a distance equal to the robot's diameter in pixels (reaching the next cell over), obtaining the pixels that are the centers of my neighboring cells, from which I check whether they are dirty or not. This way, I discretize the map into the cells I was looking for without losing information, creating cells starting from the center of my robot and simplifying the check of each neighboring cell's type, with no need to handle offsets.
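A minimal sketch of this neighbor-cell check, with assumed grayscale encodings for the three cell states and a placeholder cell size:

```python
DIRTY, VISITED, OBSTACLE = 255, 128, 0   # assumed cell-state encodings
CELL_SIZE = 16                           # robot diameter in pixels (placeholder)

def neighbor_centers(row, col):
    """Centers of the north, east, south and west cells, one cell away."""
    return [(row - CELL_SIZE, col),   # north
            (row, col + CELL_SIZE),   # east
            (row + CELL_SIZE, col),   # south
            (row, col - CELL_SIZE)]   # west

def dirty_neighbors(map_img, row, col):
    """Return the neighbor centers whose cell is still dirty, in priority order."""
    h, w = map_img.shape
    return [(r, c) for r, c in neighbor_centers(row, col)
            if 0 <= r < h and 0 <= c < w and map_img[r, c] == DIRTY]
```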

Knowing how we are going to build our cell map and how we will check the state of each cell, we can proceed to generate the plan (a code sketch follows the list below):
  1. Mark the current cell where the robot is located as clean (by painting it on the map with a specific value).
  2. Check whether the cells neighboring the robot's position are dirty, and if they are, add them to the list of possible return points.
  3. Remove the robot's current position from the list of possible return points if it is in there (it has already been cleaned).
  4. Check whether the cells neighboring the robot (north, east, south, or west, in that order of priority) are dirty and:
  • If any of them is dirty, update the robot's position and store the new position in the list that will contain the points forming the cleaning route.
  • If none of them is dirty and I have points saved in my list of possible return points, it means we have reached a breakpoint, and we need a search algorithm to find a way back to a possible return point. I start from the robot's current position and use breadth-first search to find a route to a potential return point. If it takes too long to find one, I switch to the A* algorithm, using the Euclidean distance to the last saved return point as the heuristic. Once I have the route to the return point, I add it to the robot's cleaning route.
  • If none of them is dirty and I have no points saved in my list of possible return points, the construction of the cleaning route is complete.
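As an illustration, a minimal sketch of this planning loop, reusing the hypothetical helpers from above (neighbor_centers, dirty_neighbors and the assumed DIRTY/VISITED/OBSTACLE values); bfs_route stands in for the BFS stage, and the A* fallback is omitted for brevity:

```python
from collections import deque

def bfs_route(map_img, start, goals):
    """Breadth-first search from start to the nearest cell in goals,
    stepping one CELL_SIZE at a time over non-obstacle pixels."""
    goals = set(goals)
    queue, parents = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell in goals:                      # reconstruct the route
            route = []
            while cell is not None:
                route.append(cell)
                cell = parents[cell]
            return route[::-1]
        for nxt in neighbor_centers(*cell):
            r, c = nxt
            if (nxt not in parents and 0 <= r < map_img.shape[0]
                    and 0 <= c < map_img.shape[1]
                    and map_img[r, c] != OBSTACLE):
                parents[nxt] = cell
                queue.append(nxt)
    return []                                  # no return point reachable

def build_plan(map_img, start):
    pos, route, return_points = start, [start], []
    while True:
        map_img[pos] = VISITED                 # step 1: mark current cell clean
        dirty = dirty_neighbors(map_img, *pos)
        return_points.extend(p for p in dirty if p not in return_points)  # step 2
        if pos in return_points:
            return_points.remove(pos)          # step 3
        if dirty:
            pos = dirty[0]                     # step 4: north > east > south > west
            route.append(pos)
        elif return_points:
            back = bfs_route(map_img, pos, return_points)  # breakpoint reached
            if not back:                       # remaining points unreachable
                return route
            route.extend(back[1:])
            pos = back[-1]
        else:
            return route                       # nothing left to clean
```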
You can see a demonstration of the BSA planning algorithm here:


Once we have the path, we need to create a function that translates it into motion commands for the robot. To do this, we first need to convert the map coordinates (in pixels) to real-world coordinates (in meters). There are a couple of ways to achieve this:

1. Using Known Scale: Knowing that each grid cell in the Gazebo world represents one meter, we can calculate the necessary scale to convert pixel coordinates to real-world coordinates using a simple proportional rule of three.



*PX_ORIGIN_WORLD_X and PY_ORIGIN_WORLD_Y represent the world coordinates origin of Gazebo in pixels.
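As an illustration, a minimal sketch of the conversion the footnote refers to, assuming a hypothetical SCALE_PX_PER_M factor read off the known one-meter grid; the signs and axis pairing are assumptions that depend on the map's orientation:

```python
# Hypothetical constants: origin of Gazebo's world frame in map pixels,
# and the number of pixels per meter measured from the known grid scale.
PX_ORIGIN_WORLD_X, PY_ORIGIN_WORLD_Y = 294, 212   # placeholder values
SCALE_PX_PER_M = 50.0                              # placeholder value

def pixel_to_world(px, py):
    """Convert map pixel coordinates to Gazebo world coordinates (meters).
    Signs and axis pairing are assumptions; they depend on map orientation."""
    world_x = (PX_ORIGIN_WORLD_X - px) / SCALE_PX_PER_M
    world_y = (py - PY_ORIGIN_WORLD_Y) / SCALE_PX_PER_M
    return world_x, world_y
```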

2. Random Sampling: Another approach is to select several points in the Gazebo world, read the robot's real-world coordinates at each one, and visually estimate which pixels correspond to those coordinates. After gathering multiple data points, you can perform a linear regression to obtain the scale factor and the offset between the origins, if any. The lines that fit our world-to-map conversion take the form y = m*x + n, with m being the scale and n being the offset of the axes. Using MATLAB's linear regression fitting, the line for the x-axis registration is y = -50.592*x + 293.68, and the line for the y-axis registration is y = 50.387*x + 212.35.
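The same fit can be reproduced outside MATLAB, for example with NumPy's polyfit; the sample points below are made up for illustration:

```python
import numpy as np

# Hypothetical calibration samples: world coordinates (m) and the map
# pixels visually matched to them. Real values come from sampling Gazebo.
world_x = np.array([-2.0, 0.0, 1.5, 3.0])
pixel_x = np.array([395.0, 294.0, 218.0, 142.0])

# Fit pixel = m * world + n; m is the scale (px/m), n the origin offset (px).
m, n = np.polyfit(world_x, pixel_x, 1)
print(f"pixel_x = {m:.3f} * world_x + {n:.3f}")
```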


Both methods gave me fairly similar results.

Finally, once we can convert the points of the path from pixels to world coordinates, all we need to do is approach them using two PID controllers: one for linear velocity (taking the linear distance to the target point as input) and another for angular velocity (taking the difference between the robot's orientation and the orientation required to move in the desired direction, i.e., north, east, south, or west). To keep the movements rectilinear, as the BSA algorithm requires, the robot first turns until it is reasonably well oriented toward the point and then moves toward it, continuously correcting its orientation if it deviates along the way.
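A minimal sketch of this motion stage, assuming a generic PID implementation, placeholder gains, and hypothetical set_linear_velocity / set_angular_velocity setters standing in for the actual robot API:

```python
import math

class PID:
    """Generic PID controller; gains are placeholder values to tune."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

linear_pid = PID(kp=0.8, ki=0.0, kd=0.1)    # assumed gains
angular_pid = PID(kp=2.0, ki=0.0, kd=0.2)   # assumed gains

def step_towards(robot, target, dt=0.05):
    """One control step: turn first, then advance while correcting heading."""
    dx, dy = target[0] - robot.x, target[1] - robot.y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - robot.yaw
    # Wrap the error to [-pi, pi] so the robot turns the short way around.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))

    w = angular_pid.step(heading_error, dt)
    # Only move forward once the robot is reasonably well oriented.
    v = linear_pid.step(distance, dt) if abs(heading_error) < 0.1 else 0.0
    robot.set_angular_velocity(w)   # hypothetical HAL-style setters
    robot.set_linear_velocity(v)
```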

Here you can see a demonstration of the final behavior of the robot.



We can observe that many areas near the walls have been left uncleaned. This is a disadvantage of the BSA algorithm, but it can be easily addressed by implementing a wall-following algorithm using the robot's laser sensor.




