This work explores the application of Gabor filters in the first convolutional layer of convolutional neural networks (CNNs) to improve feature extraction and representation for vision-based tasks.
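As a minimal sketch of the idea, the snippet below builds a small bank of oriented Gabor kernels of the kind that could initialize (or replace) the weights of a first convolutional layer. The filter parameters (size, sigma, wavelength, four orientations) are illustrative assumptions, not values from this project.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel of shape (size, size).

    Follows the standard Gabor formulation: a Gaussian envelope
    modulated by a sinusoidal carrier at orientation theta.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by the orientation angle theta
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# Hypothetical filter bank: 4 orientations spanning [0, pi),
# sized like a 7x7 first-layer convolution kernel.
bank = np.stack([
    gabor_kernel(7, sigma=2.0, theta=t, lambd=4.0)
    for t in np.linspace(0, np.pi, 4, endpoint=False)
])
print(bank.shape)  # (4, 7, 7)
```

In practice, such a bank would be copied into the first layer's weight tensor and either frozen or fine-tuned alongside the rest of the network.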
In this project, different LiDAR and visual SLAM algorithms were explored, including Hector SLAM, Gmapping, Cartographer, RTAB-Map, and ORB-SLAM3, with the goal of creating a map or trajectory of a round-trip route.
This project focuses on enhancing the navigation experience for the blind by leveraging CNN-based object detection. Using computer vision and machine learning, it provides visually impaired individuals with real-time detection and recognition of objects and signs.
An autonomous system was developed on the robot under ROS and Ubuntu environments. Its objective is to perform reconnaissance in support of emergency workers by creating a map of an unknown area and detecting all figures of interest within it.
This began as an R&D project aimed at industrial settings: whenever an object enters the arm's workspace, the arm picks it up using concepts of inverse kinematics and computer vision. ROS was used as the main framework of the system.
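To illustrate the inverse-kinematics part, here is a minimal sketch of the closed-form solution for a planar two-link arm; the link lengths and target position are illustrative assumptions, not this project's actual arm geometry.

```python
import math

def ik_2link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm.

    Returns joint angles (theta1, theta2) in radians for the
    elbow-down solution reaching the target point (x, y).
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Sanity check: run the angles back through forward kinematics
t1, t2 = ik_2link(1.2, 0.5, 1.0, 1.0)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
```

In the full system, the target (x, y) would come from the computer-vision pipeline locating the object in the arm's workspace.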
The theme was to build two arrow-throwing robots that could hurl arrows into pots and defend against opponents while completing other tasks such as pick-and-place. ROS was used to program the robots, and sensor-fusion principles ensured smooth interaction and maneuvering.