Modular self-reconfigurable robotic systems have strong self-adaptation and self-healing abilities, which make them well suited to a wide range of tasks in complex, changing environments. However, most existing modular self-reconfigurable robots impose connection constraints on the mechanical structure, and their relative positioning and motion planning capabilities are limited to structured environments, which differ from real, changing ones. To overcome these limitations, this project aims to develop the key technologies needed for modular self-reconfigurable robots to operate in unstructured environments. The research results can lay a theoretical and technical foundation for developing robotic swarms and field robots, with broad applications in search and rescue, space exploration, and related areas.
The collaboration between the University of Edinburgh (UoE) and the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) pursues fundamental and applied research in artificial intelligence and robotics. Current research focuses on three scientific pillars: Multi-Contact Planning and Control, Multi-Agent Collaborative Manipulation, and Robot Perception.
PI: Prof. Sethu Vijayakumar (UoE), Prof. Tin Lun Lam (AIRS)
More information: https://airs.cuhk.edu.cn/en/page/357
Heterogeneous Multi-robot Planning
A heterogeneous multi-robot cooperative system comprises a team of robots with different configurations that cooperate to accomplish complex tasks in dynamic environments. We study accurate modeling, real-time solving, and dynamic adaptation in collaborative planning for such systems. By constructing a constrained multi-task assignment and scheduling model, we apply operations research algorithms and machine learning techniques to solve and analyze tasks in specific scenarios. The results provide theoretical support for deploying robots in complex, changing real-world environments. Future application scenarios include unmanned mining, security, counter-terrorism, dynamic path planning, and human-machine co-adaptation.
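To make the idea of a constrained multi-task assignment model concrete, the following is a minimal sketch: robots are assigned one-to-one to tasks so as to minimize total cost, subject to feasibility constraints (e.g., a robot lacking the payload or reach for a given task). The cost matrix, constraints, and function name here are illustrative assumptions, not the project's actual model, and the brute-force search stands in for the operations research solvers the text mentions.

```python
from itertools import permutations

def assign_tasks(cost, feasible):
    """Optimal one-to-one robot-to-task assignment by brute force.

    cost[r][t]     -- cost of robot r executing task t (hypothetical values)
    feasible[r][t] -- whether robot r is capable of task t (the constraint)
    Returns (assignment, total_cost), where assignment[r] is robot r's task.
    """
    n = len(cost)
    best_cost, best_plan = float("inf"), None
    for perm in permutations(range(n)):  # perm[r] = task given to robot r
        if all(feasible[r][t] for r, t in enumerate(perm)):
            c = sum(cost[r][t] for r, t in enumerate(perm))
            if c < best_cost:
                best_cost, best_plan = c, perm
    return best_plan, best_cost

# Illustrative scenario: three robots, three tasks.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
feasible = [[True,  True, True],
            [True,  True, False],  # robot 1 lacks the payload for task 2
            [False, True, True]]   # robot 2 cannot reach task 0
plan, total = assign_tasks(cost, feasible)
print(plan, total)  # -> (1, 0, 2) 5
```

Brute force is only tractable for tiny teams; for realistic problem sizes one would swap in an assignment solver (e.g., the Hungarian algorithm) or a mixed-integer formulation, which is where the operations research machinery comes in.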
Multi-robot Environment Perception
In multi-robot systems, environment perception and data fusion from different sources are crucial for effective collaboration. This research aims to address the challenges associated with data matching and visual interference in such systems, focusing on three main issues:
- Data Matching Difficulty due to Time Differences: Differences in data collection times can lead to variations in lighting conditions, making it challenging to match and fuse data from multiple sources.
- Data Matching Difficulty due to Viewpoint Differences: The different perspectives from which robots collect data can cause inconsistencies, making it difficult to match and integrate the information for a comprehensive understanding of the environment.
- Visual Interference among Robots in the Same Environment: As multiple robots move and operate within the same environment, they may generate visual interference with each other, further complicating data matching and fusion processes.
This study aims to propose solutions to overcome these challenges and improve the overall performance of multi-robot collaborative systems in complex environments.
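A common building block for the viewpoint-difference problem above is descriptor matching with a mutual nearest-neighbour check and a Lowe-style ratio test, which rejects ambiguous correspondences between two robots' observations. The sketch below uses made-up 2-D descriptors and is an illustrative assumption, not the project's actual matching pipeline.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors from two viewpoints.

    A pair (i, j) is kept only if desc_b[j] is clearly the best match for
    desc_a[i] (ratio test) and desc_a[i] is in turn the nearest neighbour
    of desc_b[j] (mutual consistency), which suppresses ambiguous matches.
    """
    matches = []
    for i, da in enumerate(desc_a):
        order = sorted(range(len(desc_b)), key=lambda j: math.dist(da, desc_b[j]))
        best, second = order[0], order[1]
        # Ratio test: best match must be distinctly closer than the runner-up.
        if math.dist(da, desc_b[best]) < ratio * math.dist(da, desc_b[second]):
            # Mutual check: i must also be the nearest neighbour of desc_b[best].
            back = min(range(len(desc_a)),
                       key=lambda k: math.dist(desc_a[k], desc_b[best]))
            if back == i:
                matches.append((i, best))
    return matches

# Toy descriptors: the same three features seen from two viewpoints,
# observed in a different order and with small perturbations.
desc_a = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
desc_b = [(9.2, 1.1), (0.1, -0.2), (4.8, 5.3)]
print(match_descriptors(desc_a, desc_b))  # -> [(0, 1), (1, 2), (2, 0)]
```

In practice the descriptors would come from a feature extractor (e.g., ORB or a learned descriptor) and the surviving matches would feed a robust geometric verification step, but the filtering logic is the same.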