
LATEST RESEARCH

In this work, we propose a deep reinforcement learning (DRL) algorithm together with a tailored neural network architecture that enables mobile robots to learn navigation policies autonomously. To the best of our knowledge, this is the first study to develop a DRL model capable of achieving depth-based autonomous navigation in an end-to-end manner while also outperforming the conventional method. Specifically, we introduce a new feature extractor to better capture critical spatiotemporal features from raw depth images. The extracted features are combined with the encoded destination information and mapped directly to control commands. During training, experiences are collected alternately from the proposed model and a conventional planner according to a switching criterion, which provides more comprehensive samples for learning. Meanwhile, two networks with different purposes are trained simultaneously: the auxiliary network is devoted to learning the depth feature extractor, while the primary network learns the autonomous navigation policy. Notably, although trained only in a simple virtual environment, the developed model generalizes readily to various unseen virtual and real-world scenarios without any fine-tuning. Experimental results demonstrate the remarkable performance of the proposed model.
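As a rough illustration of the idea of fusing depth features with encoded destination information and mapping them directly to control commands, the sketch below shows one possible layout. The layer sizes, the 4-frame depth stack, and the 2-D (linear, angular) output are illustrative assumptions, not the architecture described above.

```python
import torch
import torch.nn as nn

class DepthGoalPolicy(nn.Module):
    """Minimal sketch: depth features + encoded goal -> control command."""

    def __init__(self, depth_channels=4, goal_dim=2, action_dim=2):
        super().__init__()
        # Spatiotemporal feature extractor over a stack of depth frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(depth_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Encode the relative destination (e.g. distance and heading to goal).
        self.goal_encoder = nn.Sequential(nn.Linear(goal_dim, 64), nn.ReLU())
        # Map the fused representation directly to control commands.
        self.head = nn.Sequential(
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, depth, goal):
        fused = torch.cat([self.encoder(depth), self.goal_encoder(goal)], dim=1)
        return self.head(fused)

# Example: one 4-frame 64x64 depth stack and a (distance, heading) goal vector.
policy = DepthGoalPolicy()
command = policy(torch.rand(1, 4, 64, 64), torch.rand(1, 2))  # -> shape (1, 2)
```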


Real-time path planning is crucial for robots to achieve autonomous navigation. In this work, we therefore propose a novel deep neural network (DNN) based method for real-time online path planning in completely unknown cluttered environments. First, an end-to-end DNN architecture named the online three-dimensional path planning network (OTDPP-Net) is designed to learn 3D local path planning policies. It determines actions in 3D space based on multiple value iteration computations approximated by recurrent 2D convolutional neural networks. A path planning framework is then developed to realize real-time online path planning, in which near-optimal paths are generated efficiently from the agent's current location, the surrounding obstacles, and the target position. The effectiveness of the proposed planner is further improved by switching among multiple OTDPP-Nets that consider different environmental ranges, while line-of-sight checks are performed to optimize path quality. Both virtual and real-world experiments are conducted to evaluate the proposed DNN-based path planner, and the results demonstrate its remarkable performance in terms of efficiency, success rate, and path quality. Moreover, both the computational time and the effectiveness of the developed planner are independent of environmental conditions, which distinguishes it from existing methods and reveals its superiority in large-scale complex environments.
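To make "value iteration approximated by recurrent 2D convolutional neural networks" concrete, the sketch below shows a generic value-iteration-network-style module: a single 2D convolution is applied recurrently to a reward map and the current value map, followed by a max over abstract actions. The grid size, number of actions, and iteration count are illustrative assumptions and this is not the OTDPP-Net design itself.

```python
import torch
import torch.nn as nn

class ValueIteration2D(nn.Module):
    """Minimal sketch of value iteration approximated by a recurrent 2D convolution."""

    def __init__(self, n_actions=8, iterations=30):
        super().__init__()
        self.iterations = iterations
        # One 3x3 convolution stands in for the transition/reward model:
        # it maps [reward map, current value map] to per-action Q maps.
        self.q_conv = nn.Conv2d(2, n_actions, kernel_size=3, padding=1, bias=False)

    def forward(self, reward_map):
        # reward_map: (batch, 1, H, W), e.g. obstacles negative, goal positive.
        value = torch.zeros_like(reward_map)
        for _ in range(self.iterations):
            q = self.q_conv(torch.cat([reward_map, value], dim=1))
            value, _ = torch.max(q, dim=1, keepdim=True)  # Bellman backup
        return value  # approximate value map used to choose local actions

vi = ValueIteration2D()
value_map = vi(torch.rand(1, 1, 32, 32))  # -> (1, 1, 32, 32)
```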


It is vital for mobile robots to achieve safe autonomous steering in various changing environments. In this work, a novel end-to-end network architecture is proposed for mobile robots to learn steering autonomously through deep reinforcement learning. Specifically, two sets of feature representations are first extracted from the depth inputs through two different input streams. The acquired features are then merged to derive both linear and angular actions simultaneously. Moreover, a new action selection strategy is introduced to achieve motion filtering by taking the consistency of the angular velocity into account. Besides the extrinsic rewards, intrinsic bonuses are also adopted during training to improve the exploration capability. Furthermore, it is worth noting that the proposed model transfers readily from the simple virtual training environment to much more complicated real-world scenarios, so no further fine-tuning is required for real deployment. Compared with existing methods, the proposed method demonstrates significant superiority in terms of average reward, convergence speed, success rate, and generalization capability. In addition, it exhibits outstanding performance in various cluttered real-world environments containing both static and dynamic obstacles.
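One way to read the motion-filtering action selection is as a penalty on actions whose angular velocity deviates strongly from the previous command. The snippet below is a minimal sketch of that idea; the penalty form, the weight, and the discrete action set are assumptions, not the exact strategy described above.

```python
import numpy as np

def select_action(q_values, angular_speeds, prev_w, weight=0.5):
    """Pick the action with the best value after penalizing angular-velocity jumps.

    q_values:       estimated return of each discrete action
    angular_speeds: angular velocity associated with each action
    prev_w:         angular velocity executed at the previous step
    """
    consistency_penalty = weight * np.abs(np.asarray(angular_speeds) - prev_w)
    return int(np.argmax(np.asarray(q_values) - consistency_penalty))

# Example: five discrete actions with angular velocities from -1 to 1 rad/s.
action = select_action(q_values=[0.2, 0.8, 0.9, 0.3, 0.1],
                       angular_speeds=[-1.0, -0.5, 0.0, 0.5, 1.0],
                       prev_w=0.5)
```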


It is crucial for robots to steer autonomously and safely in complex environments without colliding with any obstacles. Compared with conventional methods, deep reinforcement learning-based methods are able to learn from past experiences automatically and enhance their generalization capability to cope with unseen circumstances. We therefore propose an end-to-end deep reinforcement learning algorithm in this work to improve the performance of autonomous steering in complex environments. By embedding a branching noisy dueling architecture, the proposed model is capable of deriving steering commands directly from raw depth images with high efficiency. Specifically, our learning-based approach extracts a feature representation from the depth inputs through convolutional neural networks and maps it to both linear and angular velocity commands simultaneously through different streams of the network. Moreover, the training framework is meticulously designed to improve learning efficiency and effectiveness. It is worth noting that, by utilizing depth images, the developed system is readily transferable from virtual training scenarios to real-world deployment without any fine-tuning. The proposed method is evaluated and compared with a series of baseline methods in various virtual environments, and the experimental results demonstrate its superiority in terms of average reward, learning efficiency, success rate, and computational time. Moreover, a variety of real-world experiments reveal the high adaptability of our model to environments cluttered with both static and dynamic obstacles.
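The sketch below illustrates the general shape of a branching dueling head: a shared depth feature vector feeds a common state-value stream plus one advantage stream per action branch (linear and angular velocity), combined per branch as Q = V + (A - mean(A)). Noisy layers are omitted for brevity, and the feature size and velocity bin counts are assumptions rather than the implementation described above.

```python
import torch
import torch.nn as nn

class BranchingDuelingHead(nn.Module):
    """Minimal sketch of a branching dueling Q-head over shared depth features."""

    def __init__(self, feature_dim=512, linear_bins=3, angular_bins=5):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU(),
                                   nn.Linear(256, 1))
        self.adv_linear = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU(),
                                        nn.Linear(256, linear_bins))
        self.adv_angular = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU(),
                                         nn.Linear(256, angular_bins))

    def forward(self, features):
        v = self.value(features)
        a_lin = self.adv_linear(features)
        a_ang = self.adv_angular(features)
        # Dueling combination applied independently in each branch.
        q_lin = v + a_lin - a_lin.mean(dim=1, keepdim=True)
        q_ang = v + a_ang - a_ang.mean(dim=1, keepdim=True)
        return q_lin, q_ang

head = BranchingDuelingHead()
q_linear, q_angular = head(torch.rand(1, 512))  # take argmax per branch for (v, w)
```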
