Visits: 347,040 (since May 2016)
Videos
Posted: 2016-10-17 08:43
Updated: 2017-12-22 17:36

This video is the accompanying video for the following paper: Huimin Lu, Junhao Xiao, Lilian Zhang, Shaowu Yang, Andreas Zell. Biologically Inspired Visual Odometry Based on the Computational Model of Grid Cells for Mobile Robots. Proceedings of the 2016 IEEE Conference on Robotics and Biomimetics, 2016.

Abstract: Visual odometry is a core component of many visual navigation systems like visual simultaneous localization and mapping (SLAM). Grid cells have been found to be part of the path integration system in the rat's entorhinal cortex, and they provide inputs for place cells in the rat's hippocampus. Together with other cells, they constitute a positioning system in the brain. Computational models of grid cells based on continuous attractor networks have been proposed in the computational biology community, and using these models, self-motion information can be integrated to realize dead-reckoning. However, so far few researchers have tried to use these computational models of grid cells directly for robot visual navigation in the robotics community. In this paper, we propose to apply the continuous attractor network model of grid cells to integrate the robot's motion information estimated from the vision system, so that a biologically inspired visual odometry can be realized. The experimental results show that good dead-reckoning can be achieved for different mobile robots with very different motion velocities using our algorithm. We also implement a full visual SLAM system by simply combining the proposed visual odometry with a straightforward loop closure detection method derived from the well-known RatSLAM, and comparable results can be achieved in comparison with RatSLAM.
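The core mechanism, path integration on a continuous attractor network, can be illustrated with a toy 1D ring attractor. This is only a sketch with assumed parameters (the paper uses a 2D grid-cell model); it shows how a bump of neural activity, shifted in proportion to velocity, integrates motion for dead-reckoning:

```python
import numpy as np

class RingAttractor:
    """Minimal 1D continuous attractor: a bump of activity on a ring of
    N neurons is shifted in proportion to the input velocity, so the
    decoded bump phase integrates motion along one axis."""
    def __init__(self, n=100, width=5.0):
        self.n = n
        idx = np.arange(n)
        d = np.minimum(idx, n - idx)           # circular distance to neuron 0
        self.a = np.exp(-d**2 / (2 * width**2))  # Gaussian bump centered at 0

    def step(self, velocity):
        """Shift the bump by `velocity` neurons; fractional shifts are
        handled by linear interpolation between integer rolls."""
        k = int(np.floor(velocity))
        f = velocity - k
        self.a = (1 - f) * np.roll(self.a, k) + f * np.roll(self.a, k + 1)

    def decode(self):
        """Decode bump position as the circular mean of the activity."""
        phase = np.angle(np.sum(
            self.a * np.exp(2j * np.pi * np.arange(self.n) / self.n)))
        return (phase % (2 * np.pi)) * self.n / (2 * np.pi)

ring = RingAttractor()
for _ in range(30):              # integrate a constant velocity of 0.5 neurons/step
    ring.step(0.5)
print(round(ring.decode(), 1))   # decoded position ≈ 15.0
```

In the paper this integration is driven by robot motion estimated from vision, rather than the fixed velocity used here.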

Posted: 2016-07-16 16:10
Updated: 2017-12-22 17:34

Title

Real-time Terrain Classification for Rescue Robot Based on Extreme Learning Machine

Author


Yuhua Zhong, Junhao Xiao, Huimin Lu and Hui Zhang

Abstract

Fully autonomous robots in urban search and rescue (USAR) have to deal with complex terrains. Real-time recognition of the terrain ahead can effectively improve the traversability of rescue robots. This paper presents a real-time terrain classification system using a 3D LIDAR on a custom-designed rescue robot. Firstly, the LIDAR state estimation and point cloud registration run in parallel to extract the test lane region. Secondly, the normal aligned radial feature (NARF) is extracted and downscaled by a distance-based weighting method. Finally, an extreme learning machine (ELM) classifier is designed to recognize the terrain types. Experimental results demonstrate the effectiveness of the proposed system.
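The ELM classifier in the final step trains quickly because the hidden-layer weights are random and fixed, and only the output weights are solved in closed form. A minimal sketch on synthetic data (the features and labels here are stand-ins, not the NARF features used in the paper):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights solved with a pseudo-inverse."""
    def __init__(self, n_features, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_features, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)                # fixed random biases
        self.n_classes = n_classes
        self.beta = np.zeros((n_hidden, n_classes))       # learned output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)               # hidden-layer activations

    def fit(self, X, y):
        T = np.eye(self.n_classes)[y]                     # one-hot targets
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T                 # closed-form least squares

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# two well-separated synthetic "terrain" clusters as a smoke test
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(2, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

elm = ELM(n_features=4, n_hidden=30, n_classes=2)
elm.fit(X, y)
print((elm.predict(X) == y).mean())  # training accuracy, ≈ 1.0
```

The absence of iterative training is what makes ELM attractive for the real-time constraint stated in the abstract.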

Video

The video can be found here if the link below does not work.


Posted: 2016-06-23 22:05
Updated: 2017-12-22 17:33

This video is the accompanying video for the paper: Yuxi Huang, Ming Lv, Dan Xiong, Shaowu Yang, Huimin Lu, An Object Following Method Based on Computational Geometry and PTAM for UAV in Unknown Environments. Proceedings of the 2016 IEEE International Conference on Information and Automation, 2016.

Abstract: This paper introduces an object following method based on computational geometry and PTAM for Unmanned Aerial Vehicles (UAVs) in unknown environments. Since the object can easily move out of the field of view (FOV) of the camera, and it is difficult to bring it back into the FOV by relative attitude control alone, we propose a novel solution to re-find the object based on the visual simultaneous localization and mapping (SLAM) results from PTAM. We use a pad as the object, which consists of a letter H surrounded by a circle. We obtain the 3D position of the center of the circle in the camera coordinate system using computational geometry. When the object moves out of the FOV of the camera, a Kalman filter is used to predict the object's velocity, so the pad can be searched for effectively. We demonstrate through experiments that the ambiguity of the pad's localization has little impact on object following. The experimental results also validate the effectiveness and efficiency of the proposed method.
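The prediction step can be sketched with a one-axis constant-velocity Kalman filter. The noise parameters Q and R below are assumptions for illustration; the paper does not state its filter settings:

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])      # state transition for [position, velocity]
H = np.array([[1, 0]])               # only position is measured
Q = np.eye(2) * 1e-3                 # process noise (assumed)
R = np.array([[1e-2]])               # measurement noise (assumed)

x = np.zeros((2, 1))                 # initial state estimate
P = np.eye(2)                        # initial covariance

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with position measurement z
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# feed measurements of a pad moving at 1 m/s; the filter converges on it
for t in range(50):
    x, P = kf_step(x, P, z=1.0 * t * dt)

print(round(float(x[1, 0]), 2))  # estimated velocity ≈ 1.0 m/s
```

Once the pad leaves the FOV, repeating only the predict step extrapolates its position, which is where the search would be directed.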


Posted: 2017-03-01 13:40
Updated: 2017-12-22 17:30

This video is the accompanying video for the following paper: Weijia Yao, Zhiwen Zeng, Xiangke Wang, Huimin Lu, Zhiqiang Zheng. Distributed Encirclement Control with Arbitrary Spacing for Multiple Anonymous Mobile Robots. Proceedings of the 36th Chinese Control Conference, 2017.

Abstract: Encirclement control enables a multi-robot system to rotate around a target while preserving a circular formation, which is useful in real-world applications such as entrapping a hostile target. In this paper, a distributed control law is proposed for any number of anonymous and oblivious robots at random three-dimensional positions to form a specified circular formation with any desired inter-robot angular distances (i.e. spacing) and encircle the target. Arbitrary spacing is useful for a system composed of heterogeneous robots which, for example, possess different kinematic capabilities, since the spacing can be designed manually for any specific purpose. The robots are modelled as single integrators, and they can only sense the angular positions of their two neighboring robots, so the control law is distributed. Theoretical analysis and simulation results are provided to prove the stability and effectiveness of the proposed control strategy.
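A simplified planar analogue of distributed encirclement can be simulated in a few lines: each robot senses only the angular gap to its counter-clockwise neighbour and adjusts its angular speed toward the desired spacing while all rotate at a common rate. This is an illustrative sketch with assumed gains, not the control law proved in the paper (which handles 3D positions, anonymity and two-sided sensing):

```python
import numpy as np

n = 4
desired = np.array([np.pi, np.pi / 2, np.pi / 4, np.pi / 4])  # spacing, sums to 2*pi
theta = np.array([0.0, 0.1, 0.2, 0.3])                        # initial angles on the circle
omega, k, dt = 0.5, 1.0, 0.01                                 # rotation rate, gain, time step

for _ in range(5000):
    gap = (np.roll(theta, -1) - theta) % (2 * np.pi)          # gap to CCW neighbour
    # each robot speeds up when the gap ahead is too large, slows when too small
    theta = theta + dt * (omega + k * (gap - desired))

final_gap = (np.roll(theta, -1) - theta) % (2 * np.pi)
print(np.round(final_gap, 2))  # converges to the desired spacing [3.14 1.57 0.79 0.79]
```

The gaps converge because their sum is invariant (always 2π), so the spacing errors sum to zero and decay under the consensus-like dynamics, while the common term ω keeps the whole formation rotating around the target.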

Posted: 2017-05-19 08:11
Updated: 2017-12-22 17:28

This video is about the experimental results of the following paper: Xieyuanli Chen, Huimin Lu, Junhao Xiao, Hui Zhang, Pan Wang. Robust relocalization based on active loop closure for real-time monocular SLAM. Proceedings of the 11th International Conference on Computer Vision Systems (ICVS), 2017.


Abstract. Remarkable performance has been achieved by state-of-the-art monocular Simultaneous Localization and Mapping (SLAM) algorithms. However, tracking failure is still a challenging problem during the monocular SLAM process, and it seems almost inevitable when carrying out long-term SLAM in large-scale environments. In this paper, we propose an active loop closure based relocalization system, which enables monocular SLAM to detect and recover from tracking failures automatically, even in previously unvisited areas where no keyframe exists. We test our system through extensive experiments, including on the popular KITTI dataset and on our own dataset acquired by a hand-held camera in outdoor large-scale and indoor small-scale real-world environments with artificial shakes and interruptions. The experimental results show that the shortest recovery time (within 5 ms) and the longest success distance (up to 46 m) were achieved compared to other relocalization systems. Furthermore, our system is more robust than others, as it can be used in different kinds of situations, i.e., tracking failures caused by blur, sudden motion and occlusion. Besides robots or autonomous vehicles, our system can also be employed in other applications, such as mobile phones and drones.
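The appearance-matching core of a keyframe-based relocalization can be sketched as a nearest-neighbour search over frame descriptors. The random vectors below are stand-ins for real bag-of-words descriptors, and this does not reproduce the paper's active loop closure mechanism (which also recovers in unvisited areas):

```python
import numpy as np

def relocalize(query, keyframes, min_similarity=0.8):
    """Return the index of the best-matching keyframe descriptor,
    or None if nothing is similar enough to relocalize against."""
    sims = [q @ query / (np.linalg.norm(q) * np.linalg.norm(query))
            for q in keyframes]              # cosine similarity to each keyframe
    best = int(np.argmax(sims))
    return best if sims[best] >= min_similarity else None

rng = np.random.default_rng(0)
keyframes = [rng.random(128) for _ in range(10)]   # hypothetical descriptor database
query = keyframes[3] + rng.normal(0, 0.05, 128)    # noisy revisit of keyframe 3
print(relocalize(query, keyframes))                # matches keyframe 3
```

In a full system, a successful match would seed pose re-estimation and close the loop; a failed match would leave the tracker lost until a better view arrives.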

Posted: 2017-06-29 00:31
Updated: 2017-09-22 23:44

This video is about the experimental results of the following paper: Xieyuanli Chen, Hui Zhang, Huimin Lu, Junhao Xiao, Qihang Qiu and Yi Li. Robust SLAM system based on monocular vision and LiDAR for robotic urban search and rescue. Proceedings of the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017), Shanghai, 2017.


Abstract. In this paper, we propose a monocular SLAM system for robotic urban search and rescue (USAR). Based on it, most USAR tasks (e.g. localization, mapping, exploration and object recognition) can be fulfilled by rescue robots with only a single camera. The proposed system can be a promising basis for implementing fully autonomous rescue robots. However, the feature-based map built by the monocular SLAM is difficult for the operator to understand and use. We therefore combine the monocular SLAM with a 2D LiDAR SLAM to realize a 2D mapping and 6D localization SLAM system, which can not only obtain the real scale of the environment and make the map more friendly to users, but also solve the problem that the robot pose cannot be tracked by the 2D LiDAR SLAM when the robot climbs stairs and ramps. We test our system using a real rescue robot in simulated disaster environments. The experimental results show that good performance can be achieved using the proposed system in USAR. The system has also been successfully applied in the RoboCup Rescue Robot League (RRL) competitions, where our rescue robot team finished in the top 5 and won the Best in Class Small Robot Mobility award at the 2016 RoboCup RRL in Leipzig, Germany, and won the championships of the 2016 and 2017 RoboCup China Open RRL.



Posted: 2017-09-14 20:29
Updated: 2017-09-14 20:29
This video shows a track-wheel hybrid robot, named Kylin, which is designed to integrate the advantages of both wheeled and tracked locomotion. To save research and development time, the robot is built upon our tracked robot NuBot, by integrating modular components for wheeled locomotion without changing the main body of NuBot. Kylin can run at 3.7 m/s on the ground, and climb 45-degree slopes and 0.5 m steps. It competed in UGVC 2016 and finished as the runner-up.
(26.419 MB) Junhao Xiao, 2017-09-14 20:31
Posted: 2017-03-26 17:49
Updated: 2017-06-23 23:36
This video demonstrates a novel human-robot interaction system based on 3D mapping and virtual reality technology, allowing reviewers to gain a comprehensive understanding of the system and evaluate it.
Sha Luo TO NuBot Research Team | Videos
Posted: 2017-04-14 13:19
Updated: 2017-04-14 13:19
In this video, a ball, a robot and a piece of orange luggage are fixed at positions (0, 144), (0, 7) and (0, -144) respectively. Another robot carrying a Kinect sensor then rotates around these objects, following a circular trajectory centered at (0, 0) with a radius of 300 cm. The robot moves at a speed of 3 m/s, and its heading points towards the origin throughout the dynamic test. In the video, we mark the detected ball with a grey sphere and obstacles with red/white cubes; there is some stuttering due to the point cloud visualization on the TX1. From the video we can conclude that our algorithm can detect the ball and obstacles accurately.
(15.911 MB) Sha Luo, 2017-04-14 13:19
Posted: 2016-06-26 18:13
Updated: 2016-10-06 13:03

Title

Real-time Object Segmentation for Soccer Robots Based on Depth Images

Author

Qiu Cheng, Shuijun Yu, Qinghua Yu and Junhao Xiao


Abstract

Object detection and localization is a critically important and challenging task in RoboCup MSL (Middle Size League). It is subject to strong real-time constraints, as both the robot and the obstacles (other robots) move quickly. In this paper, a real-time object segmentation approach is proposed based on an RGB-D camera, of which only the range information is used. The method has four main steps, i.e., point cloud filtering, background point removal, clustering, and object localization. Experimental results show that the proposed algorithm can effectively detect and segment objects in 3D space in real time.
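The four-step pipeline can be sketched on a synthetic point cloud: range filtering, ground-plane (background) removal by a height threshold, naive Euclidean clustering, and localization as cluster centroids. Real MSL code would work on an organized depth image with a KD-tree; the thresholds and scene below are illustrative assumptions:

```python
import numpy as np

def segment(points, max_range=8.0, ground_z=0.05, cluster_dist=0.3):
    # 1) filter: drop points beyond the sensor's useful range
    points = points[np.linalg.norm(points[:, :2], axis=1) < max_range]
    # 2) background removal: drop points on (or below) the ground plane
    points = points[points[:, 2] > ground_z]
    # 3) naive Euclidean clustering (region growing on pairwise distance)
    clusters, unvisited = [], set(range(len(points)))
    while unvisited:
        seed = [unvisited.pop()]
        cluster = []
        while seed:
            i = seed.pop()
            cluster.append(i)
            near = [j for j in list(unvisited)
                    if np.linalg.norm(points[i] - points[j]) < cluster_dist]
            for j in near:
                unvisited.remove(j)
            seed.extend(near)
        clusters.append(points[cluster])
    # 4) localization: one centroid per detected object
    return [c.mean(axis=0) for c in clusters]

# synthetic scene: a flat ground patch, a ball and a robot-sized cluster
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(-4, 4, (200, 2)), np.zeros(200)])
ball = rng.normal([1.0, 2.0, 0.11], 0.02, (30, 3))
robot = rng.normal([-2.0, 1.0, 0.3], 0.05, (60, 3))
centers = segment(np.vstack([ground, ball, robot]))
print(len(centers))  # two objects detected above the ground plane
```

The quadratic clustering here is only for clarity; meeting the league's real-time constraint is exactly why the paper works on range images rather than brute-force point comparisons.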

Video:

The video can be found here if the link below does not work.


