Our primary research interests and contributions lie within the areas of control systems, computational intelligence and robotic vision with applications in guidance, control and automation of unmanned ground and aerial vehicles.
In today’s world, it is not sufficient to design autonomous systems that merely repeat given tasks. We must push the boundaries of the current state of the art in autonomy towards smarter systems that learn from and interact with their environment, collaborate with people and other systems, plan their future actions, and execute given tasks accurately.
Our public code and datasets are available at https://github.com/open-airlab/.
The Artificial Intelligence in Robotics (AiR) Lab is a fascinating environment with a 15×15×4.5 m flight arena, equipped with 12 Vicon V8 tracking cameras capable of tracking multiple flying objects in 6 degrees of freedom with millimeter accuracy. The AiR Lab allows us to test next-generation ground and flying robots for a variety of applications, including but not limited to tracking control, aerial manipulation, and formation flight.
Object Detection in Aerial Images
We use camera-equipped unmanned aerial vehicles (UAVs) to detect objects in aerial images. Carrying several types of sensors, a UAV can gather multi-modal data (e.g., GPS, altitude, IMU) that can improve the performance of object detectors. To fuse visual and flight data, we have collected a multi-modal UAV dataset (the AU-AIR dataset) for detecting vehicles and pedestrians in aerial images.
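To illustrate the multi-modal idea, the sketch below concatenates pooled visual features with normalized UAV telemetry, the kind of fused vector a downstream detection head could consume. This is a simplified stand-in, not the AU-AIR pipeline; the function names, feature sizes, and telemetry values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_visual_features(image):
    """Stand-in for a CNN backbone: pool the image into a short feature vector
    (global average over the spatial dimensions, one value per channel)."""
    return image.mean(axis=(0, 1))

def fuse_features(visual, telemetry):
    """Late fusion: concatenate visual features with mean/std-normalized telemetry."""
    telemetry = np.asarray(telemetry, dtype=float)
    telemetry = (telemetry - telemetry.mean()) / (telemetry.std() + 1e-8)
    return np.concatenate([visual, telemetry])

# A fake 64x64 RGB frame plus telemetry: altitude (m), pitch and roll (rad).
frame = rng.random((64, 64, 3))
telemetry = [42.0, 0.05, -0.02]

fused = fuse_features(extract_visual_features(frame), telemetry)
print(fused.shape)  # (6,) -> 3 visual channels + 3 telemetry values
```

In a real detector the pooled features would come from a learned backbone and the fused vector would feed a classification or regression head, but the fusion step itself has the same shape.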
Anomaly Detection for Aerial Surveillance
Anomaly detection is a key goal of autonomous surveillance systems, which should be able to alert operators to unusual observations. Their aerial monitoring and flight capabilities make UAVs well suited for surveillance tasks. However, they are mainly used as flying cameras in human-in-the-loop surveillance systems, in which humans in a control room carry out the anomaly detection itself.
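One way to make "alerting on unusual observations" concrete is a minimal statistical scorer: fit per-feature statistics on normal frames, then flag large deviations. This is a deliberately simple stand-in for a learned anomaly detector, with hypothetical per-frame feature vectors; it is not the method used in our surveillance work.

```python
import numpy as np

class AnomalyScorer:
    """Flag observations whose features deviate from those seen during fitting,
    using a per-feature z-score against statistics of 'normal' frames."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, normal_features):
        feats = np.asarray(normal_features, dtype=float)
        self.mean = feats.mean(axis=0)
        self.std = feats.std(axis=0) + 1e-8
        return self

    def score(self, features):
        z = np.abs((np.asarray(features, dtype=float) - self.mean) / self.std)
        return z.max()  # worst-case deviation across features

    def is_anomalous(self, features):
        return self.score(features) > self.threshold

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(500, 4))   # e.g., per-frame summary features
scorer = AnomalyScorer().fit(normal)

print(scorer.is_anomalous([0.1, -0.2, 0.3, 0.0]))  # False: close to training data
print(scorer.is_anomalous([9.0, 0.0, 0.0, 0.0]))   # True: far outside normal range
```

A deployed system would replace the hand-picked features and z-score with a learned model (e.g., reconstruction error of an autoencoder), but the fit-then-threshold structure is the same.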
High-speed Autonomous Navigation for Aerial Robot Systems in Drone Racing and Challenging Indoor Environments
In autonomous drone racing, a very lightweight, highly maneuverable quadrotor equipped with only visual sensors and onboard computers has to fly at high speed while avoiding collisions in multiple challenging, dynamic, and unknown scenarios. Completing such a mission requires fast, reliable perception together with precise, intelligent motion planning and control. By focusing on the available local information and incorporating deep-learning-based perception into motion planning, our robot can outperform traditional methods in dynamic or unstructured environments. The technique can later be extended to other applications, such as search-and-rescue and disaster-relief missions.
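To illustrate how perception output can feed motion planning, the sketch below back-projects a detected gate center (pixel coordinates plus depth) into a 3D waypoint in the camera frame using the standard pinhole camera model. The camera intrinsics and gate detection are hypothetical values, not parameters from our racing stack.

```python
import numpy as np

def pixel_to_waypoint(u, v, depth, fx, fy, cx, cy):
    """Back-project a detected gate center (pixel (u, v) at a given depth)
    into a 3D waypoint in the camera frame via the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for a 640x480 camera.
fx = fy = 500.0
cx, cy = 320.0, 240.0

# Gate detected at the image center, 4 m ahead -> waypoint straight ahead.
wp = pixel_to_waypoint(320.0, 240.0, 4.0, fx, fy, cx, cy)
print(wp)  # [0. 0. 4.]
```

In the full pipeline a learned detector would supply (u, v) and depth, and the resulting waypoint would be handed to a trajectory planner rather than printed.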
End-to-end planning for aerial vehicles
Our ultimate goal is to demonstrate clear advantages of replacing — or complementing — the conventional building blocks of a robotic application (e.g., sense-plan-act) with end-to-end planners in challenging real-world scenarios. We tackle challenging real-world problems of vision-based control of aerial robots (e.g., autonomous drone racing) by directly mapping the information from onboard sensors (RGB-D images, IMU) to output motor control signals. We expect faster inference, improved robustness, and online adaptation using state-of-the-art methods, e.g., deep reinforcement learning.
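A minimal sketch of such a direct sensor-to-motor mapping, assuming a downsampled depth image and a 6-axis IMU reading as input and four normalized motor thrusts as output. The network below uses random weights purely to show the input/output interface; in practice the weights would be learned, e.g., with deep reinforcement learning, and all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class EndToEndPolicy:
    """Toy end-to-end planner: a one-hidden-layer network mapping the
    flattened sensor observation directly to 4 motor commands."""

    def __init__(self, obs_dim, hidden=32, n_motors=4):
        # Random weights as placeholders for learned parameters.
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_motors))

    def act(self, depth_image, imu):
        obs = np.concatenate([depth_image.ravel(), imu])
        h = np.tanh(obs @ self.w1)
        # Sigmoid squashes outputs to (0, 1): normalized motor thrusts.
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))

depth = rng.random((16, 16))   # downsampled depth frame
imu = rng.normal(0.0, 1.0, 6)  # angular rates + linear accelerations
policy = EndToEndPolicy(obs_dim=16 * 16 + 6)
motors = policy.act(depth, imu)
print(motors.shape)  # (4,)
```

The point of the end-to-end formulation is that perception, planning, and control collapse into this single learned mapping, replacing the separate sense-plan-act blocks.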