Posts

3D Reconstruction Using Stereo Vision and Template Matching

Introduction

In this post, I explain how I performed a 3D reconstruction from two images captured with a stereo system, using computer vision. The main idea is to detect pixels of interest in the left image, find their correspondences in the right image, and compute the 3D position of the real-world point based on the geometry of both cameras.

1. Image Preparation

The first step is to enhance the images to facilitate the edge detection process. This is done by applying filters that remove noise without affecting important contours, allowing the detection of sharp, well-defined edges in the images. In this exercise, I used a bilateral filter, as recommended in the assignment, along with the Canny edge detector. The edges detected in the left image will serve as candidates for reconstruction.

2. Pixels of Interest

Once the edges have been detected, their coordinates are selected as interest points, but only in the left image. These points represent areas in the image with relevant i...
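A minimal sketch of these steps using OpenCV in C++ (the file names, filter parameters, and patch size are illustrative values, not necessarily the ones from my solution):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Load the stereo pair (file names are placeholders).
    cv::Mat left = cv::imread("left.png", cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Bilateral filter: removes noise while preserving the contours
    // that Canny should detect afterwards.
    cv::Mat smoothed;
    cv::bilateralFilter(left, smoothed, 9, 75, 75);

    // Canny edge detector: the surviving pixels are the interest points.
    cv::Mat edges;
    cv::Canny(smoothed, edges, 50, 150);

    // Collect the edge coordinates as reconstruction candidates.
    std::vector<cv::Point> candidates;
    cv::findNonZero(edges, candidates);

    // For one candidate, look for its correspondence in the right image
    // by matching a small patch around it (bounds checks omitted).
    cv::Point p = candidates.front();
    cv::Mat patch = left(cv::Rect(p.x - 5, p.y - 5, 11, 11));
    cv::Mat scores;
    cv::matchTemplate(right, patch, scores, cv::TM_CCOEFF_NORMED);
    cv::Point best;
    cv::minMaxLoc(scores, nullptr, nullptr, nullptr, &best);
    // 'best' is the top-left corner of the best match; with both pixel
    // positions known, the 3D point follows from the camera geometry.
    return 0;
}
```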

Robot Localization Using AprilTags

Introduction

In this blog post, I will explain how I implemented a robot localization system using AprilTags. By leveraging a series of roto-translations, we can determine the robot's position relative to the world frame. The transformation sequence used is:

world2robot = world2apriltag * apriltag2apriltagopticalframe * apriltagopticalframe2cameraopticalframe * cameraopticalframe2camera * camera2robot

This approach allows us to accurately estimate the robot's pose by detecting AprilTags in the environment and using known transformations between different coordinate frames.

Coordinate Systems

Understanding the coordinate frames is crucial:

Standard frames: X-axis points forward, Y-axis to the left, and Z-axis upwards.
Optical frames: X-axis points to the right, Y-axis downward, and Z-axis forward.

This distinction matters because libraries processing image data typically operate in an "optical frame" convention, requirin...
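As an illustration of composing this chain, here is a sketch using the Eigen library (my choice for the example; the exercise itself may use different tooling). The fixed standard-to-optical rotation is the key piece; the identity transforms are placeholders for values that, in the real system, come from the map, the tag detector, and the robot model:

```cpp
#include <Eigen/Geometry>
#include <iostream>

// Fixed rotation from a standard frame (X forward, Y left, Z up) to its
// optical frame (X right, Y down, Z forward). The columns are the optical
// axes expressed in the standard frame.
Eigen::Isometry3d standardToOptical() {
    Eigen::Matrix3d r;
    r <<  0,  0, 1,
         -1,  0, 0,
          0, -1, 0;
    Eigen::Isometry3d t = Eigen::Isometry3d::Identity();
    t.linear() = r;
    return t;
}

int main() {
    // Placeholders: world2apriltag would come from the map,
    // tagOptical2camOptical from inverting the tag pose reported by the
    // detector, and camera2robot from the robot's model.
    Eigen::Isometry3d world2apriltag = Eigen::Isometry3d::Identity();
    Eigen::Isometry3d apriltag2optical = standardToOptical();
    Eigen::Isometry3d tagOptical2camOptical = Eigen::Isometry3d::Identity();
    Eigen::Isometry3d camOptical2camera = standardToOptical().inverse();
    Eigen::Isometry3d camera2robot = Eigen::Isometry3d::Identity();

    // The chain from the post, composed left to right.
    Eigen::Isometry3d world2robot = world2apriltag * apriltag2optical *
        tagOptical2camOptical * camOptical2camera * camera2robot;

    std::cout << "robot position in world:\n"
              << world2robot.translation() << std::endl;
    return 0;
}
```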

OMPL Amazon warehouse

The objective of this exercise is to enable our robot to transfer a designated shelf from its initial location to a distinct area, using a supplied map and shelf locations as points of reference. In my scenario, I have chosen the starting point as the destination. It is also worth highlighting that we were constrained to planning movement paths solely with the OMPL library, a factor that notably streamlined the entire task.

How have I achieved it?

First version

The initial step involved configuring the robot to generate routes using OMPL. To achieve this, it was necessary to establish bidirectional mappings between map and real-world coordinates, enabling seamless transformations between the two frames. Subsequently, I approximated the robot as a point (or pixel, in this context) and applied erosion to the map using a kernel size equivalent to the robot's pixel size plus an inflated area, akin to the approach employed by ...
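A minimal sketch of this kind of 2D planning setup with OMPL in C++ (the bounds, start/goal values, planner choice, and the map lookup are all placeholders, not the exact ones from my solution):

```cpp
#include <ompl/base/spaces/RealVectorStateSpace.h>
#include <ompl/geometric/SimpleSetup.h>
#include <ompl/geometric/planners/rrt/RRTConnect.h>
#include <iostream>
#include <memory>

namespace ob = ompl::base;
namespace og = ompl::geometric;

// Stand-in for the occupancy lookup on the eroded map: the real version
// would convert (x, y) to pixel coordinates and test the map image.
bool isFree(double /*x*/, double /*y*/) { return true; }

int main() {
    // Plan in a 2D state space bounded by the map extents.
    auto space = std::make_shared<ob::RealVectorStateSpace>(2);
    ob::RealVectorBounds bounds(2);
    bounds.setLow(-10.0);
    bounds.setHigh(10.0);
    space->setBounds(bounds);

    og::SimpleSetup ss(space);
    ss.setStateValidityChecker([](const ob::State *s) {
        const auto *rs = s->as<ob::RealVectorStateSpace::StateType>();
        return isFree(rs->values[0], rs->values[1]);
    });
    ss.setPlanner(std::make_shared<og::RRTConnect>(ss.getSpaceInformation()));

    // Illustrative start (robot) and goal (shelf) positions.
    ob::ScopedState<> start(space), goal(space);
    start[0] = 0.0; start[1] = 0.0;
    goal[0] = 5.0;  goal[1] = 5.0;
    ss.setStartAndGoalStates(start, goal);

    // Give the planner one second, then shorten the resulting path.
    if (ss.solve(1.0)) {
        ss.simplifySolution();
        ss.getSolutionPath().printAsMatrix(std::cout);
    }
    return 0;
}
```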

Autoparking

Autonomous Parking

We are tasked with enabling our car to park autonomously. Key considerations include:

1) Only three Lidar sensors are used, each providing information over 180 degrees. These sensors are strategically placed at the front, right side, and rear of the car.
2) A compass is available for determining the robot's orientation.
3) The initial position of the robot may not align with the correct lane for parking.
4) The initial orientation of the car may not be parallel to the street.

Fig. 1: Car sensors

Having established the rules, we can now proceed to tackle the exercise. To address this challenge, I've chosen to break it down into manageable subproblems. Each subproblem represents a state for the robot, transitioning to the next upon resolution. The overarching goal is to construct a state machine, providing a comprehensive solution to the autoparking problem; a skeleton of this idea is sketched below.

Initial development phase

In the initial development phase, I chose...
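A skeleton of such a state machine in C++. The state names and sensor predicates here are my illustration, not necessarily the exact decomposition from the exercise; the predicates are stubbed to 'true' so the sketch runs end to end:

```cpp
#include <iostream>

// Illustrative states: the actual decomposition may use different
// (or more) subproblems.
enum class State { FindSpot, AlignWithStreet, ReverseIn, FinalAdjust, Parked };

// Hypothetical sensor predicates, standing in for the Lidar and
// compass processing.
bool spotDetected()     { return true; }
bool parallelToStreet() { return true; }
bool insideSpot()       { return true; }
bool centered()         { return true; }

// One iteration of the state machine: check the sensors and transition
// when the current subproblem is solved.
State step(State s) {
    switch (s) {
        case State::FindSpot:        // drive along; right Lidar looks for a gap
            return spotDetected() ? State::AlignWithStreet : s;
        case State::AlignWithStreet: // rotate using the compass heading
            return parallelToStreet() ? State::ReverseIn : s;
        case State::ReverseIn:       // back in, watching the rear Lidar
            return insideSpot() ? State::FinalAdjust : s;
        case State::FinalAdjust:     // nudge until centered in the spot
            return centered() ? State::Parked : s;
        case State::Parked:
            return s;
    }
    return s;
}

int main() {
    State s = State::FindSpot;
    while (s != State::Parked) s = step(s);
    std::cout << "parked\n";
    return 0;
}
```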

Autonomous drone for search and rescue mission

For this exercise, we will design the control system for a drone intended for search and rescue missions. To assess the effectiveness of our program, we will simulate a shipwreck scenario. In this hypothetical situation, we will be informed of the presence of shipwreck survivors (an unknown number of individuals) at specific coordinates. Our drone will need to navigate to the designated area, sweep the vicinity for survivors, report to the rescue team, and ultimately return to the takeoff zone.

Drone behavior

To fulfill our mission, we will implement a state machine in which the drone starts in the 'disarmed' state, needing to arm and take off before transitioning to the 'go to survivors' state. Once the drone is in the last sighting zone, it will switch to the 'search' state, where it will perform a spiral sweep (sketched below) and record the location of any spotted survivor. Finally, upon completing the se...
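One way to generate such a sweep is as waypoints along an Archimedean spiral centered on the last sighting; the spacing and number of turns below are illustrative values, not the ones from my solution:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

const double kPi = 3.14159265358979323846;

struct Waypoint { double x, y; };

// Waypoints along an Archimedean spiral centered on the last sighting.
// 'spacing' is the gap between consecutive rings, chosen so the camera
// footprint overlaps between passes.
std::vector<Waypoint> spiralSweep(double cx, double cy,
                                  double spacing, int turns) {
    std::vector<Waypoint> path;
    const double step = 0.1;  // angular resolution in radians
    for (double theta = 0.0; theta < turns * 2.0 * kPi; theta += step) {
        double r = spacing * theta / (2.0 * kPi);  // radius grows each turn
        path.push_back({cx + r * std::cos(theta), cy + r * std::sin(theta)});
    }
    return path;
}

int main() {
    // Sweep around illustrative sighting coordinates.
    for (const Waypoint &w : spiralSweep(0.0, 0.0, 2.0, 5))
        std::printf("%.2f %.2f\n", w.x, w.y);
    return 0;
}
```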

Localized Vacuum Cleaner

For this practice, we are going to design the control system for a high-end vacuum cleaner. High-end vacuum cleaners stand out from the rest thanks to their more powerful sensors, which enable more efficient home-cleaning algorithms. In this case, the algorithm we are going to use for coverage is the Backtracking Spiral Algorithm (BSA).

Backtracking Spiral Algorithm (BSA)

This algorithm divides the provided simulation world map into cells. The size of these cells depends on the dimensions of the robot; it is recommended that they be slightly smaller than the robot's diameter, so that the robot does not pass over the same area of the world multiple times, which would be represented by different cells in our auxiliary map. We classify our cells into three types: obstacle, already visited, and dirty (see the sketch below). We also expand the obstacles on the map using erosion, to avoid getting too close to them and to ...
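A sketch of the auxiliary map and one BSA step in C++ (the neighbor priority order is an assumption for illustration; the backtracking search itself is only indicated):

```cpp
#include <array>
#include <cstdio>
#include <vector>

// Cell types of the auxiliary map, as described above.
enum class Cell { Obstacle, Visited, Dirty };

using Grid = std::vector<std::vector<Cell>>;

// One BSA step: try the neighbors in a fixed priority order (north, east,
// south, west here), which makes the robot trace an inward spiral over the
// dirty cells. Returns false when no dirty neighbor remains, which is the
// signal to backtrack (e.g. a BFS to the nearest remaining dirty cell).
bool nextCell(const Grid &g, int &row, int &col) {
    const std::array<std::array<int, 2>, 4> dirs{{{-1, 0}, {0, 1}, {1, 0}, {0, -1}}};
    for (const auto &d : dirs) {
        int r = row + d[0];
        int c = col + d[1];
        if (r >= 0 && r < (int)g.size() && c >= 0 && c < (int)g[0].size() &&
            g[r][c] == Cell::Dirty) {
            row = r;
            col = c;
            return true;
        }
    }
    return false;
}

int main() {
    // Tiny 3x3 map with an obstacle in the middle, everything else dirty.
    Grid g(3, std::vector<Cell>(3, Cell::Dirty));
    g[1][1] = Cell::Obstacle;

    int row = 0, col = 0;
    g[row][col] = Cell::Visited;
    while (nextCell(g, row, col)) {
        g[row][col] = Cell::Visited;  // mark the new cell as covered
        std::printf("visit (%d, %d)\n", row, col);
    }
    return 0;
}
```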

Arduino Vending Machine

For this practice we have to create an Arduino program and use some sensors and actuators in order to simulate a real coffee vending machine.

HARDWARE COMPONENTS:

Joystick
DHT11 (humidity and temperature sensor)
Ultrasonic sensor
Button
2 LEDs
LCD
Arduino Uno

What is its behavior?

We are going to have 3 different behavior stages (a sketch follows below):

1) Loading mode: LED 1 is going to blink 3 times at 1-second intervals. While the LED blinks, we write a loading message ("CARGANDO...") on the LCD; after that, the machine enters service mode.
2) Service mode: We print a waiting-for-client message ("ESPERANDO CLIENTE") on the LCD until the ultrasonic sensor detects something at 1 meter or less, simulating client detection. When the client has been detected, we show the products on the LCD and let the client browse for the product they want using the joystick (up / dow...
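A sketch of the loading and service stages as an Arduino program. The pin numbers are illustrative, I assume a parallel LCD driven by the standard LiquidCrystal library (the real build might use a different LCD interface), and the ultrasonic reading is only indicated:

```cpp
#include <LiquidCrystal.h>

// Pin numbers are illustrative, not my actual wiring.
const int LED1_PIN = 13;
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);  // rs, en, d4, d5, d6, d7

enum Mode { LOADING, SERVICE };
Mode mode = LOADING;

void setup() {
  pinMode(LED1_PIN, OUTPUT);
  lcd.begin(16, 2);
}

void loop() {
  if (mode == LOADING) {
    lcd.print("CARGANDO...");
    // Blink LED 1 three times at 1-second intervals.
    for (int i = 0; i < 3; i++) {
      digitalWrite(LED1_PIN, HIGH);
      delay(1000);
      digitalWrite(LED1_PIN, LOW);
      delay(1000);
    }
    lcd.clear();
    mode = SERVICE;  // enter service mode after loading
  } else {
    lcd.setCursor(0, 0);
    lcd.print("ESPERANDO CLIENTE");
    // Here the ultrasonic sensor would be read; when the echo distance
    // drops to one meter or less, the product menu is shown and the
    // joystick drives the selection (omitted in this sketch).
  }
}
```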