AIRobers


Current Members


Dingcheng Hu (Ph.D. Student)
Dingcheng received a B.S. in Computer Science in 2018 and an M.S. in Computer Science in 2020, both from the University of California, San Diego. His main research interests include machine learning, robotics, and reinforcement learning.

Dingyi Sun (Ph.D. Student)
Dingyi received a B.Eng. in Electrical Engineering and Automation from Huazhong University of Science and Technology in 2018 and an M.S. in Electrical and Computer Engineering (Robotics Track) from the University of Michigan in 2020. He is interested in path planning, machine learning, and robotic perception.

Danoosh Chamani (M.Sc. Student)
Danoosh received a B.Sc. in Computer Software Engineering from the University of Tehran in 2019. He is interested in reinforcement learning, machine learning, and robotics.

Baiyu Li (M.Sc. Student)
Baiyu received a B.Eng. in Computer Science from Northeastern University (Shenyang, China) in 2020. He is interested in path planning, multi-agent systems, and parallel computing.

Qiushi Lin (M.Sc. Student)
Qiushi received a B.Eng. in Computer Science and Technology from Southern University of Science and Technology in 2020. He is interested in machine learning, reinforcement learning, and multi-agent systems.

Zining Mao (M.Sc. Student)
Zining received a B.Sc. in Computer Science from New York University Shanghai in 2021. He is interested in reinforcement learning and multi-agent systems.

Ervin Samuel (M.Sc. Student)
Ervin received a B.Sc. in Computer Science from National Tsing Hua University in 2022. He is interested in artificial intelligence, particularly reinforcement learning and its application to path planning.

Alumni


Qinghong Xu (Former M.Sc. Student)
Qinghong received a B.S. in Computational Mathematics from Xiamen University in 2017, an M.S. in Computational and Applied Mathematics in 2019, and an M.Sc. in Computing Science in 2022, the latter two from Simon Fraser University. She is interested in multi-agent systems, path planning, and machine learning.
Last seen: Software Development Engineer at Amazon
  • In this work, we consider the Multi-Agent Pickup-and-Delivery (MAPD) problem, where agents constantly engage with new tasks and need to plan collision-free paths to execute them. To execute a task, an agent needs to visit a pair of goal locations, consisting of a pickup location and a delivery location. We propose two variants of an algorithm that assigns a sequence of tasks to each agent using the anytime algorithm Large Neighborhood Search (LNS) and plans paths using the Multi-Agent Path Finding (MAPF) algorithm Priority-Based Search (PBS). LNS-PBS is complete for well-formed MAPD instances, a realistic subclass of MAPD instances, and empirically more effective than the existing complete MAPD algorithm CENTRAL. LNS-wPBS provides no completeness guarantee but is empirically more efficient and stable than LNS-PBS. It scales to thousands of agents and thousands of tasks in a large warehouse and is empirically more effective than the existing scalable MAPD algorithm HBH+MLA*. LNS-PBS and LNS-wPBS also apply to a more general variant of MAPD, namely the Multi-Goal MAPD (MG-MAPD) problem, where tasks can have different numbers of goal locations.
    @inproceedings{XuIROS22,
     author = {Qinghong Xu and Jiaoyang Li and Sven Koenig and Hang Ma},
     booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
     pages = {in press},
     title = {Multi-Goal Multi-Agent Pickup and Delivery},
     year = {2022}
    }
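
    The entry above pairs an anytime Large Neighborhood Search (LNS) over task sequences with the MAPF solver PBS for path planning. Purely as an illustration of that destroy-and-repair loop, and not the authors' implementation, a minimal Python sketch might look as follows; the helpers plan_paths_with_pbs, reassign, and cost are hypothetical placeholders for a PBS path planner, a task-reinsertion heuristic, and a solution-cost measure.

    import random
    import time

    def lns_task_assignment(agents, tasks, plan_paths_with_pbs, reassign, cost,
                            time_limit=30.0, neighborhood_size=5):
        # assignment maps each agent to an ordered sequence of tasks; paths are
        # re-planned with a MAPF solver (e.g., PBS) after each repair step.
        assignment = {a: [] for a in agents}
        for i, task in enumerate(tasks):           # simple round-robin initialization
            assignment[agents[i % len(agents)]].append(task)
        paths = plan_paths_with_pbs(assignment)
        best_cost = cost(paths)

        deadline = time.time() + time_limit        # anytime: keep improving until timeout
        while time.time() < deadline:
            # Destroy: pick a small neighborhood of agents and release their tasks.
            chosen = set(random.sample(agents, min(neighborhood_size, len(agents))))
            released = [t for a in chosen for t in assignment[a]]
            candidate = {a: ([] if a in chosen else list(seq))
                         for a, seq in assignment.items()}
            # Repair: reinsert the released tasks, e.g., by cheapest insertion.
            candidate = reassign(candidate, released)
            new_paths = plan_paths_with_pbs(candidate)
            if new_paths is not None and cost(new_paths) < best_cost:
                assignment, paths, best_cost = candidate, new_paths, cost(new_paths)
        return assignment, paths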


Xinyi Zhong (Former M.Sc. Student)
Xinyi received a B.C.S. Honours in Computer Science from Carleton University in 2019 and an M.Sc. in Computing Science from Simon Fraser University in 2021. She is interested in path planning, multi-agent systems, and robotics.
Last seen: Software Development Engineer at Amazon
  • We formalize and study the multi-goal task assignment and pathfinding (MG-TAPF) problem from theoretical and algorithmic perspectives. The MG-TAPF problem is to compute an assignment of tasks to agents, where each task consists of a sequence of goal locations, and collision-free paths for the agents that visit all goal locations of their assigned tasks in sequence. Theoretically, we prove that the MG-TAPF problem is NP-hard to solve optimally. We present algorithms that build upon algorithmic techniques for the multi-agent pathfinding problem and solve the MG-TAPF problem optimally and bounded-suboptimally. We experimentally compare these algorithms on a variety of different benchmark domains.
    @inproceedings{ZhongICRA22,
     author = {Xinyi Zhong and Jiaoyang Li and Sven Koenig and Hang Ma},
     booktitle = {IEEE International Conference on Robotics and Automation},
     pages = {10731--10737},
     title = {Optimal and Bounded-Suboptimal Multi-Goal Task Assignment and Path Finding},
     year = {2022}
    }
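
    In MG-TAPF, each task is a sequence of goal locations that must be visited in order. One simple building block, shown below only as an illustration and not as the paper's algorithm, is a lower-bound cost for assigning a multi-goal task to an agent: the single-agent shortest-path distance from the agent's start through the goals in sequence, ignoring collisions. Here dist is a hypothetical placeholder for a precomputed shortest-path distance function (e.g., BFS distances on the grid).

    from itertools import pairwise  # Python 3.10+

    def multi_goal_cost(dist, start, goal_sequence):
        # Sum of single-agent shortest-path distances: start -> g1 -> g2 -> ... -> gk.
        # Ignores inter-agent collisions, so it only lower-bounds the true cost.
        locations = [start] + list(goal_sequence)
        return sum(dist(a, b) for a, b in pairwise(locations))

    def assignment_cost_matrix(dist, starts, tasks):
        # M[i][j]: lower-bound cost for the agent starting at starts[i]
        # to execute the goal sequence tasks[j].
        return [[multi_goal_cost(dist, s, t) for t in tasks] for s in starts]

    Such a matrix could feed a standard assignment routine to obtain an initial task assignment; the paper's optimal and bounded-suboptimal algorithms go further by coupling the assignment with collision-free path planning.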


Ziyuan Ma (Former Undergraduate Student)
Ziyuan received a B.Sc. in Computing Science from Simon Fraser University in 2020. He was an undergraduate research student in our lab in 2020/2021.
  • Multi-Agent Path Finding (MAPF) is essential to large-scale robotic systems. Recent methods have applied reinforcement learning (RL) to learn decentralized policies in partially observable environments. A fundamental challenge in obtaining collision-free policies is that agents need to learn cooperation to handle congested situations. This paper combines communication with deep Q-learning to provide a novel learning-based method for MAPF, where agents achieve cooperation via graph convolution. To guide the RL algorithm on long-horizon goal-oriented tasks, we embed the potential choices of shortest paths from a single source as heuristic guidance instead of using a specific path as in most existing works. Our method treats each agent independently and trains the model from a single agent’s perspective. The final trained policy is applied to each agent for decentralized execution. The whole system is distributed during training and is trained under a curriculum learning strategy. Empirical evaluation in obstacle-rich environments indicates the high success rate and low average step count of our method.
    @inproceedings{MaICRA21,
     author = {Ziyuan Ma and Yudong Luo and Hang Ma},
     booktitle = {IEEE International Conference on Robotics and Automation},
     pages = {8699--8705},
     title = {Distributed Heuristic Multi-Agent Path Finding with Communication},
     year = {2021}
    }
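
    The abstract above embeds the potential choices of shortest paths from a single source as heuristic guidance. A minimal sketch of that idea, assuming a 4-connected grid and not reproducing the paper's exact encoding, is to run a BFS from the goal and then mark, per move direction, the cells from which that move stays on some shortest path; these per-direction maps can serve as extra input channels for the learned policy.

    from collections import deque

    import numpy as np

    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def distance_map(grid, goal):
        # BFS distances to `goal` over free cells (grid: 0 = free, 1 = obstacle).
        h, w = grid.shape
        dist = np.full((h, w), np.inf)
        dist[goal] = 0
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in MOVES:
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and grid[nr, nc] == 0 \
                        and dist[nr, nc] == np.inf:
                    dist[nr, nc] = dist[r, c] + 1
                    queue.append((nr, nc))
        return dist

    def shortest_path_channels(grid, goal):
        # One binary channel per move direction: 1 where taking that move from the
        # cell stays on some shortest path to `goal` (distance decreases by 1).
        h, w = grid.shape
        dist = distance_map(grid, goal)
        channels = np.zeros((len(MOVES), h, w), dtype=np.float32)
        for k, (dr, dc) in enumerate(MOVES):
            for r in range(h):
                for c in range(w):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and np.isfinite(dist[r, c]) \
                            and dist[nr, nc] == dist[r, c] - 1:
                        channels[k, r, c] = 1.0
        return channels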


Yudong Luo (Former Visitor)
Yudong is a Ph.D. student at the University of Waterloo. He received a B.Eng. in Computer Science from Shanghai Jiao Tong University in 2018 and an M.Sc. in Computing Science from Simon Fraser University in 2020. He is interested in reinforcement learning, machine learning, and multi-agent systems. Yudong visited our lab for 12 months in 2020/2021. More information can be found on his homepage.
  • Multi-Agent Path Finding (MAPF) is essential to large-scale robotic systems. Recent methods have applied reinforcement learning (RL) to learn decentralized policies in partially observable environments. A fundamental challenge in obtaining collision-free policies is that agents need to learn cooperation to handle congested situations. This paper combines communication with deep Q-learning to provide a novel learning-based method for MAPF, where agents achieve cooperation via graph convolution. To guide the RL algorithm on long-horizon goal-oriented tasks, we embed the potential choices of shortest paths from a single source as heuristic guidance instead of using a specific path as in most existing works. Our method treats each agent independently and trains the model from a single agent’s perspective. The final trained policy is applied to each agent for decentralized execution. The whole system is distributed during training and is trained under a curriculum learning strategy. Empirical evaluation in obstacle-rich environments indicates the high success rate and low average step count of our method.
    @inproceedings{MaICRA21,
     author = {Ziyuan Ma and Yudong Luo and Hang Ma},
     booktitle = {IEEE International Conference on Robotics and Automation},
     pages = {8699--8705},
     title = {Distributed Heuristic Multi-Agent Path Finding with Communication},
     year = {2021}
    }