In this project, we apply mobile robotics concepts to autonomous reconnaissance in a disaster simulation. A TurtleBot3 is placed in an unknown environment populated with AprilTags, which serve as stand-ins for simulated victims. The TurtleBot must build an accurate map of the area and compile a detailed record of the AprilTags present; for each tag, this record must include the tag's ID number and its absolute position relative to the created map. The TurtleBot is outfitted with a 360° LiDAR scanner for localization and mapping, and a Raspberry Pi Camera V2 for detecting AprilTags. Our team uses techniques including mobile-robot kinematics and sensing, feature extraction, simultaneous localization and mapping (SLAM), and motion planning to conduct a successful reconnaissance mission.
To generate a complete occupancy grid, frontier-based exploration is adopted. As shown in the figure, the robot drives toward frontier centroids until no frontiers remain, which means every reachable area of the closed environment has been searched. SLAM is required throughout the exploration to keep updating the occupancy grid map and to keep the robot localized. The core of the method, identifying the frontier cells themselves, is sketched below.
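The following is a minimal, hedged sketch of that frontier test on a nav_msgs/OccupancyGrid; explore_lite implements a full, clustered version of this internally, and the topic name and cell-value conventions here are the standard ROS defaults.

```python
#!/usr/bin/env python
"""Minimal sketch of frontier detection on a nav_msgs/OccupancyGrid.
A frontier cell is a free cell bordering at least one unknown cell;
explore_lite additionally clusters such cells and navigates to the
cluster centroids."""
import numpy as np
import rospy
from nav_msgs.msg import OccupancyGrid

FREE, UNKNOWN = 0, -1  # OccupancyGrid conventions; 100 = occupied


def find_frontier_cells(grid):
    """Return (row, col) indices of free cells with an unknown 4-neighbour."""
    cells = []
    rows, cols = grid.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if grid[r, c] != FREE:
                continue
            if UNKNOWN in (grid[r - 1, c], grid[r + 1, c],
                           grid[r, c - 1], grid[r, c + 1]):
                cells.append((r, c))
    return cells


def map_callback(msg):
    # OccupancyGrid data is row-major; reshape to (height, width)
    grid = np.asarray(msg.data, dtype=np.int8).reshape(msg.info.height,
                                                       msg.info.width)
    rospy.loginfo("%d frontier cells remain", len(find_frontier_cells(grid)))


rospy.init_node("frontier_sketch")
rospy.Subscriber("map", OccupancyGrid, map_callback)
rospy.spin()
```

Exploration terminates when this count reaches zero, which is exactly the "no frontier left" condition described above.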
In ROS, several ready-made packages implement this pipeline. One common setup combines the gmapping package for SLAM, the explore_lite package for frontier-based exploration, and the move_base package for driving the actual robot; the overall architecture is shown in the figure below.
Fig 2. ROS node architecture for Frontier-based exploration (http://wiki.ros.org/explore_lite)
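As the figure indicates, explore_lite hands frontier centroids to move_base as navigation goals over the standard action interface. Below is a hedged sketch of sending such a goal ourselves through that same interface; the goal coordinates are purely illustrative.

```python
#!/usr/bin/env python
"""Sketch: send one navigation goal to move_base via actionlib, the
same interface explore_lite uses for frontier centroids."""
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("goal_sketch")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0  # illustrative frontier centroid
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0  # identity orientation

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("goal finished with state %d", client.get_state())
```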
To identify the victims, the Pi camera on the robot is used together with the existing apriltag_ros package, which reads images from the camera and outputs tag IDs and poses whenever a tag is detected. Its overview is shown in the figure below.
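Since the task also demands each tag's absolute position relative to the created map, a hedged sketch of consuming apriltag_ros output and logging detections in the map frame follows. The tag_detections topic and message layout are the package defaults; the map frame name and the deduplication logic are our own assumptions.

```python
#!/usr/bin/env python
"""Sketch: read apriltag_ros detections and record each tag's ID and
absolute map-frame position. Assumes apriltag_ros publishes on its
default topic and that SLAM provides the map -> camera transform."""
import rospy
import tf2_ros
import tf2_geometry_msgs  # noqa: F401 -- registers PoseStamped support for tf2
from geometry_msgs.msg import PoseStamped
from apriltag_ros.msg import AprilTagDetectionArray

recorded = {}  # tag ID -> (x, y) in the map frame


def callback(msg):
    for det in msg.detections:
        tag_id = det.id[0]  # id is a list; a standalone tag has one entry
        if tag_id in recorded:
            continue  # keep the first good fix per victim
        ps = PoseStamped(header=det.pose.header, pose=det.pose.pose.pose)
        try:
            in_map = tf_buffer.transform(ps, "map", rospy.Duration(0.5))
        except tf2_ros.TransformException:
            continue  # transform not available yet; retry on next detection
        recorded[tag_id] = (in_map.pose.position.x, in_map.pose.position.y)
        rospy.loginfo("tag %d recorded at %s", tag_id, recorded[tag_id])


rospy.init_node("tag_recorder_sketch")
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)
rospy.Subscriber("tag_detections", AprilTagDetectionArray, callback)
rospy.spin()
```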
However, these two functions alone do not guarantee that the camera will detect every tag: the spinning lidar scans all around the robot, while the camera is fixed and looks only forward, so the lidar will have finished the map long before the camera has covered every corner of the environment. Moreover, the camera's tag measurements can be quite noisy, because the robot is moving and tags may be viewed from a distance, both of which degrade the measurement. Both problems must be resolved for the robot to perform its tasks thoroughly.

Several solutions are proposed to ensure all tags are detected. One approach is to rotate the robot so the camera sweeps its surroundings while the map is being built (sketched after this paragraph). Another is to limit the lidar's effective range so that the robot only moves within the range of accurate tag measurement. A third focuses on prior information about the tags: since the tags are "stand-ins" for simulated victims, they are always on walls or obstacles, never lying somewhere on the ground. Our tests showed this is effectively a necessary condition, because the camera detects tags on the ground poorly. The rough locations of the tags are therefore known in advance, which is very useful when planning the robot's motion: the robot only needs to survey the walls and obstacle surfaces carefully to detect every tag in the environment.
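For illustration, here is a hedged sketch of the first approach, a full in-place camera sweep while SLAM keeps running; the cmd_vel topic and the 0.5 rad/s rate follow the usual TurtleBot3 setup but are assumptions here.

```python
#!/usr/bin/env python
"""Sketch: spin the robot in place so the fixed camera sweeps 360
degrees while mapping continues."""
import math
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("camera_sweep_sketch")
pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect

spin = Twist()
spin.angular.z = 0.5  # rad/s, counter-clockwise
duration = 2 * math.pi / spin.angular.z  # one full revolution

rate = rospy.Rate(10)
end = rospy.Time.now() + rospy.Duration(duration)
while not rospy.is_shutdown() and rospy.Time.now() < end:
    pub.publish(spin)
    rate.sleep()
pub.publish(Twist())  # zero velocity: stop the spin
```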
Our strategy is to build the map as quickly as possible, using the full capability of the lidar and the robot's motion, to establish where the walls and obstacles are. A path along those walls and obstacles is then planned on the completed map. If the robot simply drove straight along this path, the forward-facing camera would still not point at the likely tag locations; the camera is therefore relocated to the side of the robot, as shown in the figure below, so that it faces the walls while the robot follows the path and detects tags at the same time. To address the second problem, measurement noise, the camera was tested repeatedly with different tags to determine at what distance tags are detected accurately, and this distance is taken into account when planning the tag-detection path on the completed map. We adopted this method because we found that AprilTags lying flat on the ground cannot be detected reliably at such a shallow angle, so we placed them on walls and on the sides of obstacles.
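To make the wall-survey planning concrete, below is a hedged sketch of generating viewpoints one reliable detection distance away from occupied cells; the 0.8 m threshold is a placeholder for whatever distance the camera tests actually yield, and a real planner would additionally check line of sight and order the viewpoints into a path.

```python
#!/usr/bin/env python
"""Sketch: generate camera viewpoints offset from wall/obstacle cells
by the experimentally determined reliable detection distance."""
import numpy as np

DETECT_DIST = 0.8  # metres; stand-in for the measured reliable range
OCCUPIED, FREE = 100, 0  # nav_msgs/OccupancyGrid conventions


def wall_viewpoints(grid, resolution, origin_xy):
    """Map-frame (x, y) viewpoints, one detection distance out from
    each exposed face of an occupied cell. A full implementation
    would also verify that the intervening cells are free."""
    rows, cols = grid.shape
    step = int(round(DETECT_DIST / resolution))
    points = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != OCCUPIED:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr * step, c + dc * step
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] == FREE:
                    points.append((origin_xy[0] + cc * resolution,
                                   origin_xy[1] + rr * resolution))
    return points


# Toy example: one obstacle cell in a 5x5 grid at 0.4 m resolution
g = np.zeros((5, 5), dtype=int)
g[2, 2] = OCCUPIED
print(wall_viewpoints(g, 0.4, (0.0, 0.0)))  # four viewpoints around the cell
```

With the side-mounted camera, visiting these viewpoints while keeping the wall on the camera side covers every wall and obstacle face at the tested detection range.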