Crowd navigation has become an increasingly prominent problem in robotics. The main challenge comes from the lack of understanding of pedestrians' behaviors. Encouraged by the great achievements in trajectory prediction, the twin field of crowd navigation, this work focuses on integrating trajectory prediction with path planning and proposes a crowd navigation algorithm named RHC-T (Receding Horizon Control with Trajectron++). It consists of two independent modules: one for trajectory prediction and another for receding horizon control. Benefiting from the trajectory prediction module, RHC-T builds up an explicit understanding of pedestrians' behaviors in the form of predicted trajectories. Based on the formulation of receding horizon control, the proposed algorithm naturally handles the time-varying obstacle constraints imposed by pedestrians. Furthermore, extensive experiments are performed on two pedestrian trajectory datasets, ETH and UCY, to evaluate the proposed algorithm in a more realistic way than previous works. Experimental results show that RHC-T reduces the intervention to pedestrians significantly and navigates the robot along time-efficient paths. Compared with three baseline algorithms, RHC-T achieves better performance, with improvements in the intervention rate and navigation time of at least 8.00% and 3.88%, respectively.
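To make the receding-horizon idea above concrete, the sketch below samples candidate velocities, rolls them out against predicted pedestrian trajectories (treated as time-varying obstacle constraints), and keeps the feasible rollout ending closest to the goal. It is only a minimal illustration under assumed interfaces (a per-step array of predicted pedestrian positions, a sampling-based search), not the actual RHC-T implementation.

```python
import numpy as np

def receding_horizon_step(robot_pos, goal, predicted_peds, horizon=8, dt=0.4,
                          v_max=1.0, safe_dist=0.5, n_samples=200):
    """One receding-horizon step: sample candidate velocities, roll each out over
    the horizon, reject rollouts violating the time-varying pedestrian
    constraints, and keep the feasible one ending closest to the goal."""
    rng = np.random.default_rng(0)
    best_v, best_cost = np.zeros(2), np.inf
    for _ in range(n_samples):
        v = rng.uniform(-v_max, v_max, size=2)              # candidate velocity
        pos, feasible = np.asarray(robot_pos, dtype=float), True
        for k in range(horizon):
            pos = pos + v * dt                              # constant-velocity rollout
            peds_k = np.asarray(predicted_peds[k])          # (N, 2) predicted pedestrian positions at step k
            if len(peds_k) and np.min(np.linalg.norm(peds_k - pos, axis=1)) < safe_dist:
                feasible = False
                break
        if feasible:
            cost = np.linalg.norm(pos - np.asarray(goal))   # terminal distance to goal
            if cost < best_cost:
                best_cost, best_v = cost, v
    return best_v  # apply for one step, then re-plan with fresh predictions
```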
This video is the accompanying video of the paper: Jiayang Liu, Junhao Xiao, Huimin Lu, Zhiqian Zhou, Sichao Lin, Zhiqiang Zheng. Terrain Assessment Based on Dynamic Voxel Grids in Outdoor Unstructured Environments
Abstract: For ground robots working in outdoor unstructured environments, terrain assessment is a key step for path planning. In this paper, we propose a novel terrain assessment method. The raw 3D point clouds are segmented based on dynamic voxel grids; the untraversable areas are then extracted and stored in the form of 2D occupancy grid maps. Afterwards, only the traversable areas are processed and stored in the form of 2.5D digital elevation maps (DEMs). In this way, the efficiency of the terrain assessment is improved and the query space of terrain feature information is reduced. To evaluate the proposed algorithm, an approach operating directly on point clouds serves as the baseline. According to the experimental results, our method performs better in both assessment time and query efficiency.
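As a rough illustration of the two representations mentioned above, the sketch below bins a point cloud into 2D cells, marks cells with a large height spread as untraversable (2D occupancy entries), and stores the mean height of the remaining cells (2.5D DEM entries). The cell size and the height-spread criterion are illustrative assumptions, not the paper's dynamic voxel segmentation rule.

```python
import numpy as np

def assess_terrain(points, cell=0.2, max_height_spread=0.15):
    """points: (N, 3) array of x, y, z. Returns a dict keyed by 2D cell index with
    either ('occupied', None) for untraversable cells (2D occupancy entry) or
    ('free', mean_height) for traversable cells (2.5D DEM entry)."""
    grid = {}
    idx = np.floor(points[:, :2] / cell).astype(int)       # 2D grid index of every point
    for key in map(tuple, np.unique(idx, axis=0)):
        z = points[np.all(idx == key, axis=1), 2]          # heights falling in this cell
        if z.max() - z.min() > max_height_spread:          # large spread -> treat as obstacle
            grid[key] = ('occupied', None)
        else:
            grid[key] = ('free', float(z.mean()))
    return grid
```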
The team description paper can be downloaded from here.
[1] Wei Dai, Huimin Lu, Junhao Xiao and Zhiqiang Zheng. Task Allocation without Communication Based on Incomplete Information Game Theory for Multi-robot Systems. Journal of Intelligent & Robotic Systems, 2018. [PDF]
3rd place in MSL scientific challenge in RoboCup 2019, Sydney, Australia
1st place in MSL technique challenge in RoboCup 2019, Sydney, Australia
4th place in MSL of RoboCup 2019, Sydney, Australia
2018
4th place in MSL scientific challenge in RoboCup 2018, Montréal, Canada
3rd place in MSL technique challenge in RoboCup 2018, Montréal, Canada
4th place in MSL of RoboCup 2018, Montréal, Canada
2nd place in MSL of RoboCup 2018 ChinaOpen, ShaoXing, China
2nd place in MSL technique challenge of RoboCup 2018 ChinaOpen, ShaoXing, China
3rd place in MSL scientific challenge in RoboCup 2017, Nagoya, Japan
3rd place in MSL technique challenge in RoboCup 2017, Nagoya, Japan
4th place in MSL of RoboCup 2017, Nagoya, Japan
3rd place in MSL of RoboCup 2017 ChinaOpen, RiZhao, China
1st place in MSL scientific challenge of RoboCup 2017 ChinaOpen, RiZhao, China
4. Qualification video
The qualification video for RoboCup 2021 (Virtual) can be found at our youku channel (recommended for users in China) or our YouTube channel (recommended for users out of China).
5. Mechanical and Electrical Description and Software Flow Chart
NuBot Team Mechanical and Electrical Description together with a Software Flow Chart can be downloaded from here.
Zhiqian Zhou, a member of the NuBot team, has served the MSL community as a member of the OC of RoboCup 2019 Sydney, Australia, and of the TC of RoboCup 2020 Bordeaux, France.
We built a dataset for robot detection which contains fully annotated images acquired from MSL competitions. The dataset is publicly available at: https://github.com/Abbyls/robocup-MSL-dataset
7. Declaration regarding mixed team
No!
8. Declaration regarding 802.11b AP
No!
9. MAC address
The list of our team's MAC addresses can be downloaded from here.
The team description paper can be downloaded from here.
[1] Wei Dai, Huimin Lu, Junhao Xiao and Zhiqiang Zheng. Task Allocation without Communication Based on Incomplete Information Game Theory for Multi-robot Systems. Journal of Intelligent & Robotic Systems, 2018. [PDF]
3rd place in MSL scientific challenge in RoboCup 2019, Sydney, Australia
1st place in MSL technique challenge in RoboCup 2019, Sydney, Australia
4th place in MSL of RoboCup 2019, Sydney, Australia
2018
4th place in MSL scientific challenge in RoboCup 2018, Montréal, Canada
3rd place in MSL technique challenge in RoboCup 2018, Montréal, Canada
4th place in MSL of RoboCup 2018, Montréal, Canada
2nd place in MSL of RoboCup 2018 ChinaOpen, ShaoXing, China
2nd place in MSL technique challenge of RoboCup 2018 ChinaOpen, ShaoXing, China
3rd place in MSL scientific challenge in RoboCup 2017, Nagoya, Japan
3rd place in MSL technique challenge in RoboCup 2017, Nagoya, Japan
4th place in MSL of RoboCup 2017, Nagoya, Japan
3rd place in MSL of RoboCup 2017 ChinaOpen, RiZhao, China
1st place in MSL scientific challenge of RoboCup 2017 ChinaOpen, RiZhao, China
4. Qualification video
The qualification video for RoboCup 2021 Bordeaux, France can be found at our youku channel (recommended for users in China) or our YouTube channel (recommended for users out of China).
5. Mechanical and Electrical Description and Software Flow Chart
NuBot Team Mechanical and Electrical Description together with a Software Flow Chart can be downloaded from here.
Zhiqian Zhou, a member of the NuBot team, has served the MSL community as a member of the OC of RoboCup 2019 Sydney, Australia, and of the TC of RoboCup 2020 Bordeaux, France.
We built a dataset for robot detection which contains fully annotated images acquired from MSL competitions. The dataset is publicly available at: https://github.com/Abbyls/robocup-MSL-dataset
7. Declaration regarding mixed team
No!
8. Declaration regarding 802.11b AP
No!
9. MAC address
The list of our team's MAC addresses can be downloaded from here.
The team description paper can be downloaded from here.
[1] Wei Dai, Huimin Lu, Junhao Xiao and Zhiqiang Zheng. Task Allocation without Communication Based on Incomplete Information Game Theory for Multi-robot Systems. Journal of Intelligent & Robotic Systems, 2018. [PDF]
3rd place in MSL scientific challenge in RoboCup 2019, Sydney, Australia
1st place in MSL technique challenge in RoboCup 2019, Sydney, Australia
4th place in MSL of RoboCup 2019, Sydney, Australia
2018
4th place in MSL scientific challenge in RoboCup 2018, Montréal, Canada
3rd place in MSL technique challenge in RoboCup 2018, Montréal, Canada
4th place in MSL of RoboCup 2018, Montréal, Canada
2nd place in MSL of RoboCup 2018 ChinaOpen, ShaoXing, China
2nd place in MSL technique challenge of RoboCup 2018 ChinaOpen, ShaoXing, China
3rd place in MSL scientific challenge in RoboCup 2017, Nagoya, Japan
3rd place in MSL technique challenge in RoboCup 2017, Nagoya, Japan
4th place in MSL of RoboCup 2017, Nagoya, Japan
3rd place in MSL of RoboCup 2017 ChinaOpen, RiZhao, China
1st place in MSL scientific challenge of RoboCup 2017 ChinaOpen, RiZhao, China
4. Qualification video
The qualification video for RoboCup 2020 Bordeaux, France can be found at our youku channel (recommended for users in China) or our YouTube channel (recommended for users out of China).
5. Mechanical and Electrical Description and Software Flow Chart
NuBot Team Mechanical and Electrical Description together with a Software Flow Chart can be downloaded from here.
Zhiqian Zhou, a member of the NuBot team, has served the MSL community as a member of the OC of RoboCup 2019 Sydney, Australia, and of the TC of RoboCup 2020 Bordeaux, France.
We built a dataset for robot detection which contains fully annotated images acquired from MSL competitions. The dataset is publicly available at: https://github.com/Abbyls/robocup-MSL-dataset
7. Declaration regarding mixed team
No!
8. Declaration regarding 802.11b AP
No!
9. MAC address
The list of our team's MAC addresses can be downloaded from here.
This video is the accompanying video of the paper: Xiao Li, Bingxin Han, Zhiwen Zeng, Junhao Xiao, Huimin Lu. Human-Robot Interaction Based on Battle Management Language for Multi-robot System
Abstract: Commanding and controlling a multi-robot system is a challenging task. Static control commands can hardly meet the requirements of controlling different robots, and as the number of robots increases, motion-level commands alone can no longer satisfy the demands of commanding a multi-robot system. This paper uses a limited natural language to control multi-robot systems and proposes a framework based on Battle Management Language (BML) to command them. Within the framework, the capabilities and names of robots can be dynamically added to a dictionary, and the limited natural language is converted into standard BML commands according to the dictionary to control the multi-robot system. In this way, a robot can execute motion-level commands, such as movement and steering, as well as task-level commands, such as enclosing and defense. The experimental results show that a system composed of different types of robots can be commanded using the proposed interaction framework.
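The dictionary-based conversion described in the abstract might look roughly like the sketch below, where robot names and capabilities are registered at runtime and a limited natural-language sentence is mapped to a structured, BML-like command. The vocabulary and command fields are illustrative assumptions, not the actual BML grammar used in the paper.

```python
# Hypothetical dictionary-based command parser (illustrative, not the paper's BML grammar).
ROBOTS = {}          # name -> set of capabilities, filled dynamically
VERBS = {"move": "motion", "turn": "motion", "enclose": "task", "defend": "task"}

def register_robot(name, capabilities):
    """Dynamically add a robot and its capabilities to the dictionary."""
    ROBOTS[name] = set(capabilities)

def parse_command(sentence):
    """Map a limited natural-language sentence to a structured BML-like command."""
    words = sentence.lower().split()
    who = next((w for w in words if w in ROBOTS), None)
    verb = next((w for w in words if w in VERBS), None)
    if who is None or verb is None:
        raise ValueError("unknown robot or verb")
    if VERBS[verb] == "task" and verb not in ROBOTS[who]:
        raise ValueError(f"{who} cannot perform task '{verb}'")
    return {"tasker": "operator", "taskee": who, "what": verb, "level": VERBS[verb]}

register_robot("ugv1", {"move", "turn", "enclose"})
print(parse_command("ugv1 enclose the target area"))
```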
The team description paper can be downloaded from here, with the main contribution of a newly designed three-wheel robot.
[1] Wei Dai, Huimin Lu, Junhao Xiao and Zhiqiang Zheng. Task Allocation without Communication Based on Incomplete Information Game Theory for Multi-robot Systems. Journal of Intelligent & Robotic Systems, 2018. [PDF]
4th place in MSL scientific challenge in RoboCup 2018, Montréal, Canada
3rd place in MSL technique challenge in RoboCup 2018, Montréal, Canada
4th place in MSL of RoboCup 2018, Montréal, Canada
2nd place in MSL of RoboCup 2018 ChinaOpen, ShaoXing, China
2nd place in MSL technique challenge of RoboCup 2018 ChinaOpen, ShaoXing, China
3rd place in MSL scientific challenge in RoboCup 2017, Nagoya, Japan
3rd place in MSL technique challenge in RoboCup 2017, Nagoya, Japan
4th place in MSL of RoboCup 2017, Nagoya, Japan
3rd place in MSL of RoboCup 2017 ChinaOpen, RiZhao, China
1st place in MSL scientific challenge of RoboCup 2017 ChinaOpen, RiZhao, China
3rd place in MSL scientific challenge in RoboCup 2016, Leipzig, Germany
4th place in MSL of RoboCup 2016, Leipzig, Germany
3rd place in MSL of RoboCup 2016 ChinaOpen, Hefei, China
1st place in MSL scientific challenge of RoboCup 2016 ChinaOpen, Hefei, China
4. Qualification video
The qualification video for RoboCup 2019 Sydney, Australia can be found at our youku channel (recommended for users in China) or our YouTube channel (recommended for users out of China).
5. Mechanical and Electrical Description and Software Flow Chart
NuBot Team Mechanical and Electrical Description together with a Software Flow Chart can be downloaded from here.
Zhiqian Zhou, a member of the NuBot team, served the MSL community as a member of the OC of RoboCup 2019 Sydney, Australia.
7. Declaration regarding mixed team
No!
8. Declaration regarding 802.11b AP
No!
9. MAC address
The list of our team's MAC addresses can be downloaded from here.
This video is about the experimental results of the following paper: Yi Li, Chenggang Xie, Huimin Lu, Xieyuanli Chen, Junhao Xiao and Hui Zhang. Scale-aware Monocular SLAM Based on Convolutional Neural Network. Proceedings of the 15th IEEE International Conference on Information and Automation 2018 (ICIA 2018), Mount Wuyi, 2018.
Abstract: Remarkable performance has been achieved using the state-of-the-art monocular Simultaneous Localization and Mapping (SLAM) algorithms. However, due to the scale ambiguity limitation of monocular vision, existing monocular SLAM systems cannot directly recover the absolute scale in unknown environments. Given the impressive results in depth estimation from Convolutional Neural Networks (CNNs), we propose a CNN-based monocular SLAM, where the CNN-predicted depth maps are naturally combined with monocular ORB-SLAM, overcoming the scale ambiguity limitation of monocular SLAM. We test our method on the popular KITTI odometry benchmark, and the experimental results show that the average translational and rotational errors reach 2.00% and 0.0051º/m. In addition, our approach works well under pure rotational motion, which shows the robustness and high accuracy of the proposed algorithm.
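A common way to exploit CNN-predicted depth for resolving monocular scale ambiguity is to compare the up-to-scale depths of tracked map points with the network's metric depth at the same pixels and take a robust ratio; the sketch below uses a median ratio. This is a generic illustration under assumed inputs, not the exact fusion scheme of the paper.

```python
import numpy as np

def estimate_scale(slam_depths, cnn_depths):
    """slam_depths: up-to-scale depths of tracked features from monocular SLAM.
    cnn_depths:   CNN-predicted metric depths at the same pixel locations.
    Returns a robust scale factor mapping SLAM units to metres."""
    slam_depths = np.asarray(slam_depths, dtype=float)
    cnn_depths = np.asarray(cnn_depths, dtype=float)
    valid = (slam_depths > 0) & (cnn_depths > 0)          # ignore invalid depths
    ratios = cnn_depths[valid] / slam_depths[valid]
    return float(np.median(ratios))                       # median is robust to outliers

# scaled_translation = estimate_scale(d_slam, d_cnn) * translation_up_to_scale
```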
Abstract: Most robots in urban search and rescue (USAR) fulfill tasks while teleoperated by human operators. The operator has to know the location of the robot and find the position of the target (victim). This paper presents an augmented reality system using a Kinect sensor on a custom-designed rescue robot. Firstly, Simultaneous Localization and Mapping (SLAM) using RGB-D cameras is run to obtain the position and posture of the robot. Secondly, a deep learning method is adopted to obtain the location of the target. Finally, we place an AR marker of the target in the global coordinate frame and display it on the operator's screen to indicate the target even when it is out of the camera's field of view. The experimental results show that the proposed system can be applied to help humans interact with robots.
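Keeping the AR marker at a fixed global position amounts to transforming the detected target from the camera frame into the world frame using the SLAM pose. A minimal version of that transform is sketched below with an assumed 4x4 homogeneous pose matrix; the actual frames and conventions in the paper may differ.

```python
import numpy as np

def target_in_global(T_world_camera, target_camera):
    """T_world_camera: 4x4 homogeneous camera pose from RGB-D SLAM.
    target_camera:  (3,) target position detected in the camera frame.
    Returns the target position in the global (world) frame, so the AR marker
    stays fixed even when the target leaves the camera's field of view."""
    p = np.ones(4)
    p[:3] = target_camera
    return (np.asarray(T_world_camera) @ p)[:3]
```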
This video is the accompanying video of the paper: Junchong Ma, Weijia Yao, Wei Dai, Huimin Lu, Junhao Xiao, Zhiqiang Zheng. Cooperative Encirclement Control for a Group of Targets by Decentralized Robots with Collision Avoidance. Proceedings of the 37th Chinese Control Conference, 2018.
Abstract: This study focuses on the multi-target capture and encirclement control problem for multiple mobile robots. Under a distributed architecture, the problem requires a group of robots to encircle several moving targets in a coordinated circular formation. In order to allocate the targets to robots efficiently, a Hybrid Dynamic Task Allocation (HDTA) algorithm is proposed, in which a temporary "manager" robot is assigned to negotiate with the other robots. For the encirclement formation, a robust control law is introduced that allows any number of mobile robots to form a specific circular formation with arbitrary inter-robot angular spacing. For safety, an online collision avoidance algorithm combining sub-targets and Artificial Potential Fields (APF) is proposed, which ensures that the paths of the robots are collision-free. Both theoretical analysis and simulation experiments were conducted to prove the validity and robustness of the proposed scheme.
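For readers unfamiliar with Artificial Potential Fields, the collision-avoidance idea can be sketched as an attractive force toward the current (sub-)target plus repulsive forces from nearby robots or obstacles, as below. The gains and distance threshold are illustrative assumptions, and the sketch does not reproduce the paper's exact combination with sub-targets.

```python
import numpy as np

def apf_velocity(pos, target, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, v_max=1.0):
    """Classic APF step: attraction toward the current (sub-)target plus repulsion
    from any obstacle closer than d0; the result is saturated to v_max."""
    pos = np.asarray(pos, dtype=float)
    force = k_att * (np.asarray(target) - pos)                     # attractive term
    for obs in obstacles:
        diff = pos - np.asarray(obs)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                                          # repulsive term, nearby only
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 2 * (diff / d)
    speed = np.linalg.norm(force)
    return force if speed <= v_max else force / speed * v_max      # saturate the command
```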
The team description paper can be downloaded from here, with the main contribution of a newly designed three-wheel robot.
[1] Wei Dai, Huimin Lu, Junhao Xiao and Zhiqiang Zheng. Task Allocation without Communication Based on Incomplete Information Game Theory for Multi-robot Systems. Journal of Intelligent & Robotic Systems, 2018. [PDF]
3rd place in MSL scientific challenge in RoboCup 2017, Nagoya, Japan
3rd place in MSL technique challenge in RoboCup 2017, Nagoya, Japan
4th place in MSL of RoboCup 2017, Nagoya, Japan
3rd place in MSL of RoboCup 2017 ChinaOpen, RiZhao, China
1st place in MSL scientific challenge of RoboCup 2017 ChinaOpen, RiZhao, China
3rd place in MSL scientific challenge in RoboCup 2016, Leipzig, Germany
4th place in MSL of RoboCup 2016, Leipzig, Germany
3rd place in MSL of RoboCup 2016 ChinaOpen, Hefei, China
1st place in MSL scientific challenge of RoboCup 2016 ChinaOpen, Hefei, China
2nd place in MSL technique challenge in RoboCup 2015, Hefei, China
3rd place in MSL scientific challenge in RoboCup 2015, Hefei, China
6th place in MSL of RoboCup 2015, Hefei, China
4. Qualification video
The qualification video for RoboCup 2018 Montreal, Canada can be found at our youku channel (recommended for users in China) or our YouTube channel (recommended for users out of China).
5. Mechanical and Electrical Description and Software Flow Chart
NuBot Team Mechanical and Electrical Description together with a Software Flow Chart can be downloaded from here.
This video is about the experimental results of the following paper: Xieyuanli Chen, Hui Zhang, Huimin Lu, Junhao Xiao, Qihang Qiu and Yi Li. Robust SLAM system based on monocular vision and LiDAR for robotic urban search and rescue. Proceedings of the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017), Shanghai, 2017
Abstract. In this paper, we propose a monocular SLAM system for robotic urban search and rescue (USAR). Based on it, most USAR tasks (e.g. localization, mapping, exploration and object recognition) can be fulfilled by rescue robots with only a single camera. The proposed system can be a promising basis for implementing fully autonomous rescue robots. However, the feature-based map built by the monocular SLAM is difficult for the operator to understand and use. We therefore combine the monocular SLAM with a 2D LiDAR SLAM to realize a 2D mapping and 6D localization SLAM system, which not only obtains the real scale of the environment and makes the map more user-friendly, but also solves the problem that the robot pose cannot be tracked by the 2D LiDAR SLAM when the robot climbs stairs and ramps. We test our system using a real rescue robot in simulated disaster environments. The experimental results show that good performance can be achieved using the proposed system in USAR. The system has also been successfully applied in the RoboCup Rescue Robot League (RRL) competitions, where our rescue robot team entered the top 5 and won the Best in Class Small Robot Mobility award at RoboCup RRL 2016 in Leipzig, Germany, as well as the championships of the 2016 and 2017 RoboCup China Open RRL.
Supported by the National University of Defense Technology, our team has designed the NuBot rescue robot from the mechanical structure to the electronic architecture and software system. Benefiting from its strong mechanical structure, our rescue robot has good mobility and is quite durable, so it will not get trapped even when facing the highly cluttered and unstructured terrains encountered in urban search and rescue. The electronic architecture is built on industrial standards and can withstand electromagnetic interference and physical impacts from intensive tasks. The software system is developed on the Robot Operating System (ROS). Based on self-developed programs and several basic open-source packages provided in ROS, we developed a complete software system including localization, mapping, exploration, object recognition, etc. Our robot system has been successfully applied and tested in the RoboCup Rescue Robot League (RRL) competitions, where our rescue robot team entered the top 5 and won the Best in Class Small Robot Mobility award at RoboCup RRL 2016 in Leipzig, Germany, and won the championships of the 2016 and 2017 RoboCup China Open RRL competitions.
The following pictures show our rescue robot participating in the RoboCup 2016 RRL competition.
This video is about the experimental results of the following paper: Xieyuanli Chen, Huimin Lu, Junhao Xiao, Hui Zhang, Pan Wang. Robust relocalization based on active loop closure for real-time monocular SLAM. Proceedings of the 11th International Conference on Computer Vision Systems (ICVS), 2017.
Abstract. Remarkable performance has been achieved using the state-of-the-art monocular Simultaneous Localization and Mapping (SLAM) algorithms. However, tracking failure is still a challenging problem during the monocular SLAM process, and it seems almost inevitable when carrying out long-term SLAM in large-scale environments. In this paper, we propose an active loop closure based relocalization system, which enables the monocular SLAM to detect and recover from tracking failures automatically, even in previously unvisited areas where no keyframe exists. We test our system through extensive experiments, including the popular KITTI dataset and our own dataset acquired by a hand-held camera in outdoor large-scale and indoor small-scale real-world environments where man-made shakes and interruptions were added. The experimental results show that the shortest recovery time (within 5 ms) and the longest success distance (up to 46 m) were achieved compared to other relocalization systems. Furthermore, our system is more robust than others, as it can handle different kinds of situations, i.e., tracking failures caused by blur, sudden motion and occlusion. Besides robots and autonomous vehicles, our system can also be employed in other applications, such as mobile phones and drones.
This video is the accompanying video for the following paper: Weijia Yao, Zhiwen Zeng, Xiangke Wang, Huimin Lu, Zhiqiang Zheng. Distributed Encirclement Control with Arbitrary Spacing for Multiple Anonymous Mobile Robots. Proceedings of the 36th Chinese Control Conference, 2017.
Abstract: Encirclement control enables a multi-robot system to rotate around a target while preserving a circular formation, which is useful in real-world applications such as entrapping a hostile target. In this paper, a distributed control law is proposed for any number of anonymous and oblivious robots in random three-dimensional positions to form a specified circular formation with any desired inter-robot angular distances (i.e., spacing) and to encircle the target. Arbitrary spacing is useful for a system composed of heterogeneous robots which, for example, possess different kinematic capabilities, since the spacing can be designed manually for any specific purpose. The robots are modelled by single-integrator models, and they can only sense the angular positions of their two neighboring robots, so the control law is distributed. Theoretical analysis and simulation results are provided to prove the stability and effectiveness of the proposed control strategy.
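One generic way to realize such spacing control for robots already constrained to a circle is to let each robot adjust its angular speed using only the angular gaps to its two neighbors, which the sketch below simulates. The update rule and gains are illustrative assumptions, not the control law derived in the paper.

```python
import numpy as np

def encircle_step(theta, desired_gap, omega0=0.2, k=1.0, dt=0.05):
    """theta: angular positions of n robots on a circle around the target (sorted).
    desired_gap: desired angular spacing to the next robot (entries sum to 2*pi).
    Each robot senses only the gaps to its two neighbors and speeds up or slows
    down to reach the desired spacing while rotating at the nominal rate omega0."""
    gap_ahead = (np.roll(theta, -1) - theta) % (2 * np.pi)    # gap to next neighbor
    gap_behind = (theta - np.roll(theta, 1)) % (2 * np.pi)    # gap to previous neighbor
    desired_behind = np.roll(desired_gap, 1)
    omega = omega0 + k * ((gap_ahead - desired_gap) - (gap_behind - desired_behind))
    return (theta + omega * dt) % (2 * np.pi)
```

Repeatedly applying this step drives the spacing errors through a consensus-like diffusion toward zero while the whole formation keeps rotating.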
1. Team Description Paper
The team description paper can be downloaded from here, with the main contribution of a newly designed three-wheel robot.
2. 5 Papers in recent 5 years
[1] Dai, W., Yu, Q., Xiao, J., & Zheng, Z., Communication-less Cooperation between Soccer Robots. In 2016 RoboCup Symposium, Leipzig, Germany. [PDF]
[2] Xiong, D., Xiao, J., Lu, H., et al, The design of an intelligent soccer-playing robot, Industrial Robot: An International Journal, 43(1): 91-102, 2016. [PDF]
[3] Yao, W., Dai, W., Xiao, J., Lu, H., & Zheng, Z. (2015). A Simulation System Based on ROS and Gazebo for RoboCup Middle Size League, IEEE Conference on Robotics and Biomimetics, Zhuhai, China. [PDF]
[4] Lu, H., Yu, Q., Xiong, D., Xiao, J., & Zheng, Z. (2015). Object Motion Estimation Based on Hybrid Vision for Soccer Robots in 3D Space. In RoboCup 2014: Robot World Cup XVIII (pp. 454-465). Springer International Publishing. [PDF]
[5] Lu, H., Li, X., Zhang, H., Hu, M., & Zheng, Z. Robust and Real-time Self-localization Based on Omnidirectional Vision for Soccer Robots. Advanced Robotics, 27(10): 799-811, 2013. [PDF]
3. Results and awards in recent 3 years
2016
2015
2014
4. Qualification video
The qualification video for RoboCup 2017 Nagoya, Japan should be shown below. If it does not appear, it can be found at our youku channel (recommended for users in China) or our YouTube channel (recommended for users out of China).
5. Mechanical and Electrical Description and Software Flow Chart
NuBot Team Mechanical and Electrical Description together with a Software Flow Chart can be downloaded here.
6. Contributions to the RoboCup MSL community
This video is the accompanying video for the following paper: Huimin Lu, Junhao Xiao, Lilian Zhang, Shaowu Yang, Andreas Zell. Biologically Inspired Visual Odometry Based on the Computational Model of Grid Cells for Mobile Robots. Proceedings of the 2016 IEEE Conference on Robotics and Biomimetics, 2016.
Abstract: Visual odometry is a core component of many visual navigation systems, such as visual simultaneous localization and mapping (SLAM). Grid cells have been found as part of the path integration system in the rat's entorhinal cortex, and they provide inputs for place cells in the rat's hippocampus. Together with other cells, they constitute a positioning system in the brain. Several computational models of grid cells based on continuous attractor networks have been proposed in the computational biology community, and using these models, self-motion information can be integrated to realize dead-reckoning. However, so far few researchers in the robotics community have tried to use these computational models of grid cells directly in robot visual navigation. In this paper, we propose to apply the continuous attractor network model of grid cells to integrate the robot's motion information estimated from the vision system, so that a biologically inspired visual odometry can be realized. The experimental results show that good dead-reckoning can be achieved for different mobile robots with very different motion velocities using our algorithm. We also implement a full visual SLAM system by simply combining the proposed visual odometry with a fairly direct loop closure detection derived from the well-known RatSLAM, and comparable results can be achieved in comparison with RatSLAM.
Real-time Terrain Classification for Rescue Robot Based on Extreme Learning Machine
Yuhua Zhong, Junhao Xiao, Huimin Lu and Hui Zhang
Fully autonomous robots in urban search and rescue (USAR) have to deal with complex terrains. Real-time recognition of the terrain ahead can effectively improve the traversal capability of rescue robots. This paper presents a real-time terrain classification system using a 3D LiDAR on a custom-designed rescue robot. Firstly, LiDAR state estimation and point cloud registration run in parallel to extract the test lane region. Secondly, the normal aligned radial feature (NARF) is extracted and downscaled by a distance-based weighting method. Finally, an extreme learning machine (ELM) classifier is designed to recognize the terrain types. Experimental results demonstrate the effectiveness of the proposed system.
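Since the classifier here is an extreme learning machine, the training step reduces to a random hidden projection followed by a least-squares solve for the output weights, as in the generic sketch below; the hidden-layer size and activation are placeholders rather than the values used in the paper.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: random hidden layer + least-squares output."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                 # random biases
        H = np.tanh(X @ self.W + self.b)                             # hidden activations
        T = np.eye(n_classes)[y]                                     # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)            # output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```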
Video
The video can be found here if the link below does not work.
Real-time Object Segmentation for Soccer Robots Based on Depth Images
Qiu Cheng, Shuijun Yu, Qinghua Yu and Junhao Xiao
Object detection and localization is a critically important and challenging task in RoboCup MSL (Middle Size League). It has strong real-time constraints, as both the robot and obstacles (also robots) move quickly. In this paper, a real-time object segmentation approach is proposed based on an RGB-D camera, of which only the range information is used. The method has four main steps, i.e., point cloud filtering, background point removal, clustering, and object localization. Experimental results show that the proposed algorithm can effectively detect and segment objects in 3D space in real time.
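The four steps listed above can be mocked up with standard point-cloud operations, e.g., a height-based pass-through filter for background removal, Euclidean-style clustering via DBSCAN, and cluster centroids as object locations. The thresholds and the use of scikit-learn's DBSCAN are illustrative choices, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_objects(points, z_ground=0.05, z_max=1.0, eps=0.15, min_points=30):
    """points: (N, 3) array from the depth camera, z up.
    1) filter out invalid points, 2) drop near-ground (background) points,
    3) cluster the rest, 4) return one centroid per cluster as object location."""
    pts = points[np.isfinite(points).all(axis=1)]
    pts = pts[(pts[:, 2] > z_ground) & (pts[:, 2] < z_max)]              # steps 1-2
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(pts)    # step 3
    return [pts[labels == k].mean(axis=0) for k in set(labels) if k != -1]  # step 4
```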
The video can be found here if the link below does not work.
This video is the accompanying video of the paper: Yi Liu, Yuhua Zhong, Xieyuanli Chen, Pan Wang, Huimin Lu, Junhao Xiao, Hui Zhang, The Design of a Fully Autonomous Robot System for Urban Search and Rescue, Proceedings of the 2016 IEEE International Conference on Information and Automation, 2016.
Abstract: Autonomous robots in urban search and rescue (USAR) have to fulfill several tasks at the same time: localization, mapping, exploration, object recognition, etc. This paper describes the whole system and the underlying research of the NuBot rescue robot for participating in the RoboCup Rescue competition, especially for exploring the rescue environment autonomously. A novel path following strategy and a multi-sensor based controller are designed to control the robot when traversing unstructured terrain. The robot system has been successfully applied and tested in the RoboCup Rescue Robot League (RRL) competition and won the championship of the 2016 RoboCup China Open RRL competition.
This video is the accompanying video for the paper: Yuxi Huang, Ming Lv, Dan Xiong, Shaowu Yang, Huimin Lu, An Object Following Method Based on Computational Geometry and PTAM for UAV in Unknown Environments. Proceedings of the 2016 IEEE International Conference on Information and Automation, 2016.
Abstract: This paper introduces an object following method based on computational geometry and PTAM for Unmanned Aerial Vehicles (UAVs) in unknown environments. Since the object easily moves out of the field of view (FOV) of the camera, and it is difficult to bring it back into view by relative attitude control alone, we propose a novel solution to re-find the object based on the visual simultaneous localization and mapping (SLAM) results provided by PTAM. We use a pad as the object, which includes a letter H surrounded by a circle. The 3D position of the center of the circle in the camera coordinate system is obtained using computational geometry. When the object moves out of the FOV of the camera, a Kalman filter is used to predict the object velocity, so the pad can be searched for effectively. We demonstrate through experiments that the ambiguity of the pad's localization has little impact on object following. The experimental results also validate the effectiveness and efficiency of the proposed method.
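The re-finding step relies on a Kalman filter to keep predicting the pad's motion once it leaves the field of view. A generic 2D constant-velocity filter is sketched below; the state layout and noise values are placeholder assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """State [x, y, vx, vy]; predicts target motion when it is out of view."""
    def __init__(self, dt=0.05, q=0.01, r=0.05):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt     # constant-velocity model
        self.H = np.eye(2, 4)                                    # only position is measured
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                        # predicted position

    def update(self, z):                                         # called while the pad is visible
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```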
The team description paper can be downloaded from here, with the main contribution of a Gazebo-based simulation system compared to our 2015 TDP.
[1] Xiong, D., Xiao, J., Lu, H., Zeng, Z., Yu, Q., Huang, K., ... & Zheng, Z. (2015). The design of an intelligent soccer-playing robot. Industrial Robot: An International Journal, 43(1). [PDF]
[2] Yao, W., Dai, W., Xiao, J., Lu, H., & Zheng, Z. (2015). A Simulation System Based on ROS and Gazebo for RoboCup Middle Size League, IEEE Conference on Robotics and Biomimetics, Zhuhai, China. [PDF]
[3] Lu, H., Yu, Q., Xiong, D., Xiao, J., & Zheng, Z. (2015). Object Motion Estimation Based on Hybrid Vision for Soccer Robots in 3D Space. In RoboCup 2014: Robot World Cup XVIII (pp. 454-465). Springer International Publishing. [PDF]
[4] Lu, H., Li, X., Zhang, H., Hu, M., & Zheng, Z. (2013). Robust and real-time self-localization based on omnidirectional vision for soccer robots. Advanced Robotics, 27(10), 799-811. [PDF]
[5] Lu, H., Yang, S., Zhang, H., & Zheng, Z. (2011). A robust omnidirectional vision sensor for soccer robots. Mechatronics, 21(2), 373-389. [PDF]
2nd place in MSL technique challenge in RoboCup 2015, Hefei, China
3rd place in MSL scientific challenge in RoboCup 2015, Hefei, China
6th place in MSL of RoboCup 2015, Hefei, China
5th place in RoboCup 2014, João Pessoa, Brazil, July 19th~25th
3rd place in MSL of 9th RoboCup China Open, October 10th~12th, Hefei, China
3rd place in both MSL technique challenges of RoboCup ChinaOpen, October 10th~12th, Hefei, China
Entered the top 8 teams in the MSL of RoboCup 2013 Eindhoven
Champion in the MSL technical challenge 1 of 8th RoboCup China Open, Dec 18th~20th, HeFei, China
The qualification video for RoboCup 2016 Leipzig, Germany should be shown below. If it does not appear, it can be found at our youku channel (recommended for users in China) or our YouTube channel (recommended for users out of China).
NuBot Team Mechanical and Electrical Description together with a Software Flow Chart can be downloaded here.
Junhao Xiao, a member of the NuBot team, serves the MSL community as a member of the TC of RoboCup 2016 Leipzig, Germany. He also served the MSL community as a member of the TC and OC of RoboCup 2015 Hefei, China, and was appointed as the local chair of RoboCup 2015 MSL.
Huimin Lu, a member of the NuBot team, served the MSL community as a member of the TC and OC of RoboCup 2008 Suzhou, and was also appointed as the local chair of RoboCup 2008 MSL. He was a member of the TC of RoboCup 2011 Istanbul.
2016.01.15: A cloud-based object recognition architecture for robots
Presenter: Qiu Cheng. 1. Notes on the paper "A Cloud-based Object Recognition Engine for Robotics".
It mainly introduces a Cloud-based Object Recognition Engine (CORE) that performs object recognition tasks through cloud computing. CORE is a distributed, modular and extensible software architecture that can efficiently transmit and process data within a robot network.
2. Some problems encountered in the calibration of the omnidirectional vision system and their solutions.
1. The UAV attitude estimation method in the paper "An Onboard Monocular Vision System for Autonomous Takeoff, Hovering and Landing of a Micro Aerial Vehicle" and its limitations;
2. Improvements addressing these limitations based on PTAM and the PnP algorithm.
1. Real Robot Code
nubot_ws: https://github.com/nubot-nudt/nubot_ws
2. Simulation System Based on ROS and Gazebo
Single robot simulation demo: https://github.com/nubot-nudt/single_nubot_gazebo
Multi-robot simulation: https://github.com/nubot-nudt/gazebo_visual
Simatch for China Robot Competition: https://github.com/nubot-nudt/simatch
Note: The last option integrates all the components needed for a complete simulation, so it is recommended for multi-robot coordination research. English documentation and some Chinese comments are provided.
3. Coach for Simulation
coach_ws: https://github.com/nubot-nudt/coach4sim
Anyone is welcome to download and use them. :)
Massive Open Online Research (MOORE) is a new concept of innovative practice for universities proposed by Academician Xuejun Yang of our university, providing new ideas for comprehensively supporting scientific research and talent cultivation in universities.
Supported by the Graduate School's MOORE teaching environment project, the robotics innovation base releases batches of innovation topics to graduate students across the university from time to time. Master's and doctoral students interested in robot soccer and autonomous mobile robot technologies are welcome to apply; each topic is supported with experimental equipment and the necessary funding. NuBot has a self-developed 3D simulation system based on ROS and Gazebo to support algorithm research and verification, especially for multi-robot cooperative control and fully distributed, communication-based control algorithms. After being verified in the simulation environment, an algorithm can be tested on the real robot systems at the robotics innovation base, truly realizing online-offline collaborative innovation.
In addition, master's students are warmly welcome to select their thesis topics at the robotics innovation base and complete their master's theses with us, growing and progressing together with robot soccer.
If you are interested, please contact Dr. Huimin Lu (lhmnew_at_163.com) and Dr. Junhao Xiao (junhao.xiao_at_hotmail.com), Tel: 0731-84576455.
Of course! As long as you contribute to the development of the Middle Size League soccer robots at the robotics innovation base, we will invite you to join us at the robot soccer World Cup: Germany in 2016, Russia in 2017, Japan in 2018. Welcome to join our team!
In robot competitions, object detection and recognition is one of the fundamental problems for autonomous mobile robots, especially for mobile robots with visual perception that interact with the real world. Object recognition is a very important component for robots; it is the prerequisite for autonomous capabilities such as cooperation, motion planning, and decision making, and it is a hot topic in the field of autonomous mobile robots. Object recognition for mobile robots in real environments has to deal with several basic factors: the diversity of objects (textured, textureless, planar, etc.), image noise from low-precision sensors, viewpoint differences, and shadows.
This topic mainly studies RGB-D based object detection and recognition methods. In vision-based recognition, how to extract 3D features from RGB-D images is an important and very difficult problem. Object detection is in fact accomplished through a series of steps, namely keypoint detection, feature extraction, feature classification, region-of-interest generation and object localization. A mobile robot acquires point clouds containing color and depth information through a 3D sensor, and can recognize objects using shape features, visual features, or a fusion of both. However, most object recognition algorithms are computationally expensive, and the real-time performance of RGB-D based object recognition algorithms generally cannot meet practical application requirements. Studying real-time RGB-D based recognition algorithms for autonomous robots is of great significance for improving the robot's ability to identify obstacles and perceive the environment in real time during competitions.
For the RoboCup MSL competition environment, the task is to perform point cloud segmentation and feature extraction on the 3D point clouds acquired by a low-cost RGB-D camera, and to detect and recognize objects based on the extracted features and feature descriptors. The designed algorithm must be real-time and robust, with a recognition rate of no less than 90%.
The NuBot robot platform
Microsoft Kinect camera
Experimental field
NuBot robot simulation software
NuBot robot software system
| Role | Name |
| --- | --- |
| Founder and director | Prof. Dr. Zhiqiang Zheng |
| Staff | Prof. Dr. Hui Zhang, Associate Prof. Dr. Huimin Lu, Associate Prof. Dr. Junhao Xiao, Dr. Zhiwen Zeng, Dr. Ming Xu, Dr. Qinghua Yu, Dr. Kaihong Huang |
| Graduate students | Xiaoxiang Zheng, Wei Dai, Weijia Yao, Xieyuanli Chen, Bailiang Chen, Bingxin Han (female), Xiao Li, Zhiqian Zhou, Shanshan Zhu (female), Chenghao Shi, Zirui Guo, Pengming Zhu, Zhengyu Zhong, Yang Zhao, Chuang Cheng, Wenbang Deng, Che Guo, Haoran Ren, Daoxun Zhang, Junqi Zhang, Yao Li, Zhiwen Zhang |
| Alumni | Dr. Lin Liu, Dr. Fei Liu, Dr. Xiucai Ji, Prof. Dr. Wenjie Shu, Dr. Dan Hai, Associate Prof. Dr. Xiangke Wang, Dr. Shaowu Yang, Dr. Lina Geng (female), Dr. Shuai Tang, Dr. Jie Liang, Dr. Xiabin Dong, Dr. Dan Xiong, Mrs. Wei Liu, Mr. Yupeng Liu, Mr. Dachuan Wang, Mr. Baifeng Yu, Mr. Fangyi Sun, Mr. Lianhu Cui, Mr. Shengcai Lu, Mr. Peng Dong, Mr. Yubo Li, Mr. Xiaozhou Zhu, Mr. Qingzhu Cui, Mr. Xingrui Yang, Mr. Kaihong Huang, Mr. Shuai Cheng, Mr. Xiaoxiang Zheng, Mr. Yunlei Chen, Mr. Xianglin Yang, Mr. Yu Zhang, Mrs. Yaoyao Lan, Mr. Yuxi Huang, Mr. Yi Liu, Mr. Yuhua Zhong, Mr. Qiu Cheng, Mr. Junkai Ren, Mr. Peng Chen, Mrs. Minjun Xiong, Mr. Pan Wang, Mrs. Sha Luo, Mr. Runze Wang, Mr. Junchong Ma, Mrs. Ruoyi Yan, Mr. Yi Li, Mr. Qihang Qiu, Mr. Shaozun Hong |
Many thanks to Dr. Gang Yin! The trustie platform is a well-organized place for hosting courses, groups, projects, and much more.
The NuBot MSL robots.
The mechanical system of our MSL robot is subdivided into five main modules: the base frame, the ball handling mechanism, the electromagnet shooting system, the omnidirectional vision system and the front vision system, as illustrated in Fig.1a. For the goalie robot, the ball handling mechanism, the electromagnet shooting device and the front vision system are removed, and two RGB-D cameras are integrated instead, as shown in Fig.1b. Each module has its specific function. The base frame is a 4-wheel-configuration platform which can move in all directions and generate more traction force than a normal 3-wheel configuration. Each side of the ball handling mechanism contains a wheel, a DC motor, a set of transmission bevel gears, a linear displacement transducer and a support mechanism; it enables the robot to catch and dribble the ball during the game. Shooting systems can generally be subdivided into three categories: spring mechanisms, pneumatic systems and solenoids; ours is an electromagnet (solenoid) system, which enables the robot to pass the ball and shoot. The omnidirectional vision system, composed of a convex mirror and a camera pointing upward towards the mirror, not only keeps the imaging resolution of objects near the robot on the field constant and the vertical imaging distortion of objects far from the robot small, but also enables the robot to acquire a very clear image of the scene very close to it, such as the robot itself. The front vision system and the RGB-D camera are auxiliary sensors for the regular robots and the goalie robot, respectively.
Fig.1a The NuBot regular robot
Fig.1b The goalie robot
Our current electrical system uses PC-based control technology, as shown in Fig.2, so that all vision and control algorithms are processed on the industrial PC. The industrial PC communicates with the EtherCAT system via Ethernet. In addition, the system uses Elmo Motion Control drives (SOL-WHI 20/60), intelligent miniature digital servo drives for the 150W DC brushless motors; the CANopen module EL751 embedded in the EtherCAT, which realizes communication between the industrial PC and the Elmo Motion Controls; and the kicker driver, also called the shooting module, which is mainly composed of a relay and an IGBT (FGA25N120ANTD). The PC can send control signals to the kicker driver for shooting or passing via the EtherCAT. The system meets the demands of the RoboCup MSL competition and provides a good solution for the design of an intelligent robot.
Fig.2 The NuBot electrical system
Our software is developed on Ubuntu and is open source. We use ROS to build the NuBot software; the operating system is Ubuntu 14.04, and the ROS version is Indigo. As shown in Fig.3, the software framework is divided into five main parts: the Prosilica Camera node and the OmniVision node; the UVC Camera node, the FrontVision node and the Kinect node; the NuBot Control node; the NuBot HWControl node; and the RTDB and the WorldModel node. For the goalie, two Kinect nodes replace the FrontVision node and the UVC Camera node.
Fig.3 The software framework based on ROS
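As a rough illustration of how one of these nodes fits into the framework, the snippet below sketches a minimal rospy node that could sit between a vision node and the world model. The topic names and message type are hypothetical placeholders, not the actual interfaces of the NuBot software.

```python
#!/usr/bin/env python
# Hypothetical node skeleton; topic names and message types are placeholders.
import rospy
from geometry_msgs.msg import PointStamped

class OmniVisionRelay(object):
    """Forwards ball observations from a vision node to the world-model side."""
    def __init__(self):
        rospy.init_node('omnivision_relay')
        self.pub = rospy.Publisher('/worldmodel/ball_observation',
                                   PointStamped, queue_size=10)
        rospy.Subscriber('/omnivision/ball_position', PointStamped, self.callback)

    def callback(self, msg):
        # In the real system, coordinate transforms and filtering would happen here.
        self.pub.publish(msg)

if __name__ == '__main__':
    OmniVisionRelay()
    rospy.spin()
```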
Led by Prof. Zhiqiang Zheng, our NuBot team was founded in 2004. Currently we have two full professors (Prof. Zhiqiang Zheng and Prof. Hui Zhang), one associate professor (Prof. Huimin Lu), one assistant professor (Dr. Junhao Xiao), and several graduate students. So far, 8 team members have obtained their doctoral degrees with research on the RoboCup Middle Size League (MSL), and more than 20 have obtained their master's degrees. For more details on each member, please see NuBoters.
As shown in the figure below, five generations of robots have been created since 2004. We participated in the RoboCup Simulation and Small Size League (SSL) initially. Since 2006, we have been participating actively in RoboCup MSL, e.g., we have been to Bremen, Germany (2006), Atlanta, USA (2007), Suzhou, China (2008), Graz, Austria (2009), Singapore (2010), Eindhoven, Netherlands (2013), João Pessoa, Brazil (2014), Hefei, China (2015), Leipzig, Germany (2016), Nagoya, Japan (2017), Montréal, Canada (2018) and Sydney, Australia (2019). We have also been participating in RoboCup China Open since it was launched in 2006.
The NuBot robots have been employed not only for RoboCup, but also for other research, serving as an ideal test bed beyond robot soccer. As a result, we have published more than 70 journal and conference papers; for more details please see the publication list. Our current research mainly focuses on multi-robot coordination, robust robot vision and formation control.
The following items are our team description papers (TDPs), which illustrate our research progress over the past years.
Bo Sun, Yadan Zeng, Houde Dai, Junhao Xiao, Jianwei Zhang, (2017) "A novel scan registration method based on the feature-less global descriptor – spherical entropy image", Industrial Robot: An International Journal, Vol. 44 Issue: 4, pp.552-563.
Junhao Xiao, Dan Xiong, Weijia Yao, Qinghua Yu, Huimin Lu, Zhiqiang Zheng, Building Software System and Simulation Environment for RoboCup MSL Soccer Robots Based on ROS and Gazebo, Springer Book on Robot Operating System (ROS) – The Complete Reference (Volume 2), pp. 597-631, 2017.
Junhao Xiao, Huimin Lu, Lilian Zhang, Jianhua Zhang. Pallet recognition and localization using an RGB-D camera. International Journal of Advanced Robotic Systems, 2017.
Shaozun Hong, Meiping Wu, Junhao Xiao, Xiaohong Xu, Huimin Lu. Kylin: a transformable track-wheel hybrid robot. Proceedings of the 2017 International Conference on Advanced Mechatronic Systems, Xiamen, China, December 6-9, 2017.
Sha Luo, Weijia Yao, Qinghua Yu, Junhao Xiao, Huimin Lu and Zongtan Zhou. Object Detection Based on GPU Parallel Computing for RoboCup Middle Size League. Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO 2017), Macau, 2017.
Minjun Xiong, Huimin Lu, Dan Xiong, Junhao Xiao, Ming Lv. Scale-Aware Monocular Visual-Inertial Pose Estimation for Aerial Robots. Chinese Automation Congress 2017, Jinan, 2017.
Sha Luo, Huimin Lu, Junhao Xiao, Qinghua Yu, Zhiqiang Zheng. Robot Detection and Localization Based on Deep Learning. Chinese Automation Congress 2017, Jinan, 2017.
Pan Wang, Junhao Xiao, Huimin Lu, Hui Zhang, Ruoyi Yan, Shaozun Hong. A Novel Human-Robot Interaction System Based on 3D Mapping and Virtual Reality. Chinese Automation Congress 2017, Jinan, 2017.
Xieyuanli Chen, Hui Zhang, Huimin Lu, Junhao Xiao, Qihang Qiu and Yi Li. Robust SLAM system based on monocular vision and LiDAR for robotic urban search and rescue. Proceedings of the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017), Shanghai, 2017.
Xieyuanli Chen, Huimin Lu, Junhao Xiao, Hui Zhang, Pan Wang. Robust relocalization based on active loop closure for real-time monocular SLAM. Proceedings of the 11th International Conference on Computer Vision Systems (ICVS), 2017.
Weijia Yao, Zhiwen Zeng, Xiangke Wang, Huimin Lu, Zhiqiang Zheng. Distributed Encirclement Control with Arbitrary Spacing for Multiple Anonymous Mobile Robots. Proceedings of the 36th Chinese Control Conference, 2017.
Zhiwen Zeng, Xiangke Wang, Zhiqiang Zheng, et al. Edge Agreement of Second-order Multi-agent System with Dynamic Quantization via Directed Edge Laplacian. Nonlinear Analysis: Hybrid Systems, Vol. 23, pp. 1-10, 2017.
Yuhua Zhong, Junhao Xiao, Huimin Lu, Hui Zhang. Real-Time Terrain Classification for Rescue Robot Based on Extreme Learning Machine. In: Sun F., Liu H., Hu D. (eds) Cognitive Systems and Signal Processing. ICCSIP 2016. Communications in Computer and Information Science, vol 710, 2017. Springer, Singapore.
Huimin Lu, Junhao Xiao, Zhiqiang Zheng. ROS and Middle Size League Soccer Robots. National Defense Industry Press, ISBN: 978-7-118-10952-8, 317,000 characters, October 2016. (in Chinese)
Junhao Xiao, Peng Li, Lina Geng, Zhiqiang Zheng (translators). Practical Robot Design: Competition Robots. China Machine Press, ISBN: 978-7-111-53601-7, 200,000 characters, May 2016. (Chinese translation)
Junhao Xiao (translator). A Brief Analysis of the Robot Operating System. National Defense Industry Press, ISBN: 978-7-118-11056-2, 180,000 characters, September 2016. (Chinese translation)
Lilian Zhang, Huimin Lu, Xiaoping Hu, Reinhard Koch. Vanishing Point Estimation and Line Classification in a Manhattan World with a Unifying Camera Model. International Journal of Computer Vision, Vol. 117, No. 2, pp. 111-130, 2016.
Dan Xiong, Junhao Xiao, Huimin Lu, Zhiwen Zeng, Qinghua Yu, Kaihong Huang, Xiaodong Yi, Zhiqiang Zheng. The design of an intelligent soccer-playing robot. Industrial Robot: An International Journal, Vol. 43, No.1, pp. 91-102, 2016. [PDF]
Wei Dai, Qinghua Yu, Junhao Xiao, and Zhiqiang Zheng, Communication-less Cooperation between Soccer Robots. In 2016 RoboCup Symposium, Leipzig, Germany. [PDF]
Huimin Lu, Lixing Jiang, Andreas Zell. Long Range Traversable Region Detection Based on Superpixels Clustering for Mobile Robots. Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015), Hamburg, Germany, September 28~October 02, 2015, pp. 546-552. [PDF]
Shuai Cheng, Junhao Xiao, Huimin Lu. Real-time obstacle avoidance using subtargets and Cubic B-spline for mobile robots. Proceedings of 2014 IEEE International Conference on Information and Automation, China, 2014, pp. 634-639. [PDF]
Xun Li, Huimin Lu, Dan Xiong, Hui Zhang and Zhiqiang Zheng. A Survey on Visual Perception for RoboCup MSL Soccer Robots. International Journal of Advanced Robotic Systems, Vol.10, 110:2013, pp.1-10, 2013. [PDF]
Huimin Lu, Xun Li, Hui Zhang, and Zhiqiang Zheng. Robust Place Recognition Based on Omnidirectional Vision and Real-time Local Visual Features for Mobile Robots. Advanced Robotics, Vol.27, No.18, pp.1439-1453, 2013. [PDF]
Huimin Lu, Xun Li, Hui Zhang, Mei Hu and Zhiqiang Zheng. Robust and Real-time Self-Localization Based on Omnidirectional Vision for Soccer Robots. Advanced Robotics, Vol.27, No.10, pp.799-811, 2013. [PDF]
Zhiwen Zeng, Huimin Lu, Zhiqiang Zheng. High-speed Trajectory Tracking Based on Model Predictive Control for Omni-directional Mobile Robots. Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, China, May 25-27, 2013, pp. 3179-3184. [PDF]
Hui Zhang, Huimin Lu, Peng Dong, Dan Xiong, and Zhiqiang Zheng. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots. International Journal of Advanced Robotic Systems, Vol. 10, 388:2013, pp. 1-12, 2013. [PDF]
Qinghua Yu, Kaihong Huang, Huimin Lu, Hongwu Guo. Object Motion Estimation and Interception Based on Stereo Vision for Soccer Robots in 3D Space. Proceedings of the 32nd Chinese Control Conference, Xi'an, China, July 26-28, 2013, pp. 5943-5948. [PDF]
Dan Xiong, Huimin Lu, Zhiwen Zeng, Zhiqiang Zheng. Topological Localization Based on Key-frames Selection and Vocabulary Tree for Mobile Robots. Proceeding of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, December 2013, pp. 2505-2510.
Dan Xiong, Huimin Lu, Zhiqiang Zheng. A self-localization method based on omnidirectional vision and MTi for soccer robots. Proceedings of the 10th World Congress on Intelligent Control and Automation, Beijing, China, 2012, pp. 3731-3736. [PDF]
Huimin Lu, Shaowu Yang, Hui Zhang, Zhiqiang Zheng. A Robust Omnidirectional Vision Sensor for Soccer Robots. Mechatronics, Elsevier, Vol.21, No.2, pp. 373-389, 2011. [PDF]
Huimin Lu, Hui Zhang, Zhiqiang Zheng. A Novel Real-Time Local Visual Feature for Omnidirectional Vision Based on FAST and LBP. RoboCup 2010: Robot Soccer World Cup XIV, LNAI 6556, Springer, pp. 291-302, 2011. [PDF]
Huimin Lu, Zhiqiang Zheng. Two Novel Real-Time Local Visual Features for Omnidirectional Vision. Pattern Recognition, Elsevier, Vol.43, No.12, pp. 3938-3949, 2010. [PDF]
Huimin Lu, Hui Zhang, Shaowu Yang, Zhiqiang Zheng. Camera Parameters Auto-Adjusting Technique for Robust Robot Vision. Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, Alaska, USA, May 5~8, 2010, pp. 1518-1523. [PDF]
Xiangke Wang, Hui Zhang, Huimin Lu, Zhiqiang Zheng. A New Triple-based Multi-robot System Architecture and Application in Soccer Robots. ICIRA 2010, Part II, LNAI 6425, Springer, pp. 105-115, 2010. [PDF]
Huimin Lu, Hui Zhang, Shaowu Yang, Zhiqiang Zheng. A Robust Self-localization Method Based on Omnidirectional Vision for Soccer Robots. Robot, Vol.32, No.4, pp. 553-559+567, 2010. (in Chinese) [PDF]
Huimin Lu, Hui Zhang, Shaowu Yang, Zhiqiang Zheng. A Novel Camera Parameters Auto-Adjusting Method Based on Image Entropy. RoboCup 2009: Robot Soccer World Cup XIII, LNAI 5949, Springer, pp. 192-203, 2010. [PDF]
Huimin Lu, Hui Zhang, Shaowu Yang, Zhiqiang Zheng. Vision-based Ball Recognition for Soccer Robots without Color Classification. Proceedings of 2009 IEEE International Conference on Information and Automation, Zhuhai/Macau, China, 2009, pp. 916-921. [PDF]
Huimin Lu, Hui Zhang, Zhiqiang Zheng. On the Vision-based Self-localization Problem of Mobile Robots. Journal of Central South University (Science and Technology), Vol.40, Suppl.1, pp. 127-134, 2009. (in Chinese)
Huimin Lu, Hui Zhang, Junhao Xiao, Fei Liu, Zhiqiang Zheng. Arbitrary Ball Recognition Based on Omni-directional Vision for Soccer Robots. RoboCup 2008: Robot Soccer World Cup XII, LNAI 5399, Springer, pp. 133-144, 2009. [PDF]
Huimin Lu, Zhiqiang Zheng, Fei Liu, Xiangke Wang. A Robust Object Recognition Method for Soccer Robots. Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, 2008, pp. 1645-1650. [PDF]
Fei Liu, Huimin Lu, Zhiqiang Zheng. A Color Classification Method Based on a Hybrid-space Look-up Table with Linear Classifiers. Journal of Image and Graphics, Vol.13, No.1, pp. 104-108, 2008. (in Chinese)
Lin Liu, Fei Liu, Xiucai Ji, Huimin Lu, Dan Hai, Zhiqiang Zheng. Research on Distributed Formation Control of Omnidirectional Mobile Robots. Robot, Vol. 29, No.1, pp. 23-28, 2007. (in Chinese)
Fei Liu, Huimin Lu, Zhiqiang Zheng. A Robust Approach of Field Features Extraction for Robot Soccer. Proceedings of 4th IEEE LARS 07/COMRob 07, ROBOTIC FORUM Monterrey 2007, November 05-09, 2007.
Huimin Lu, Fei Liu, Zhiqiang Zheng. A Novel Omnidirectional Vision System for Soccer Robots. Journal of Image and Graphics, Vol.12, No.7, pp. 1243-1248, 2007. (in Chinese)
Fei Liu, Huimin Lu, Zhiqiang Zheng. A Modified Color Look-Up Table Segmentation Method for Robot Soccer. Proceedings of 4th IEEE LARS 07/COMRob 07, ROBOTIC FORUM Monterrey 2007, November 05-09, 2007.
Huimin Lu, Xiangke Wang, Fei Liu, Xiucai Ji, Zhiqiang Zheng. Object Recognition for Soccer Robots Based on Omnidirectional Vision and Front Vision. Journal of Image and Graphics, Vol.11, No.11, pp. 1686-1689, 2006. (in Chinese)
Xiucai Ji, Lin Liu and Zhiqiang Zheng. A modular hierarchical architecture for autonomous robots based on task-driven behaviors. International Conference on Sensing, Computing and Automation, Chongqing, China, May 8-11, 2006: 631~636.
Lin Liu, Xiucai Ji, Zhiqiang Zheng. Multi-robot Task Allocation Based on Market Methods and Capability Classification. Robot, 2006, 28(3): 337~343. (in Chinese)
LIU Lin and ZHENG Zhiqiang. Combinatorial bids based multi-robot task allocation method. Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), 2005: 1157~1162.
LIU Lin and ZHENG Zhiqiang. A novel multi-robot coordination method using capability category. Proceedings of the 16th IFAC World Congress, 2005.
LIU Lin, WANG Lei, ZHENG Zhiqiang, SUN Zengqi. A learning market based layered multi-robot architecture. Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA 2004), 2004: 3417~3422.
Lin Liu, Zhiqiang Zheng. Multi-robot Task Allocation and Its Application in Robot Soccer. Control Theory & Applications, 2004, 21(Suppl.): 46~50. (in Chinese)