Motion planner based on CNN with LSTM through mediated perception for obstacle avoidance

Bibliographic Details
Main Authors: Satoshi Hoshino, Yu Kubota, Yusuke Yoshida
Format: Article
Language: English
Published: Taylor & Francis Group, 2024-12-01
Series: SICE Journal of Control, Measurement, and System Integration
Subjects: mobile robots; motion planning; obstacle avoidance; CNN; LSTM
Online Access: http://dx.doi.org/10.1080/18824889.2024.2307684
collection DOAJ
description For autonomous navigation, a mobile robot must move toward a destination while avoiding obstacles. In this paper, we present a motion planner based on a CNN. In terms of obstacle avoidance, since the position of a dynamic obstacle changes over time, it is important for the robot to plan avoidance motions that account for the time-series variation in the images. For this purpose, an LSTM block is added to the CNN. The policy of the motion planner, represented by the CNN with LSTM, is trained through imitation learning. Even so, it is difficult for the robot to recognize unknown objects as obstacles. For obstacle recognition, a perception process is therefore inserted between the image inputs and the CNN with LSTM in the motion planner. Moreover, the robot plans different avoidance motions depending on the velocity of the dynamic obstacle. For this purpose, an obstacle state classifier based on a CNN is placed ahead of the motion planner. A depth-difference image generated from two depth images is fed as the input to the classifier, and the classified state, which indicates the velocity of the obstacle, is fed as the input to the following motion planner. In the navigation experiments, we show that a robot using the proposed motion planner can move toward a destination autonomously while avoiding both standing and walking persons. Furthermore, we show that, with the obstacle state input, the robot can plan different avoidance motions for a person walking slowly or fast by using the obstacle state classifier.
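The depth-difference image that the abstract describes as the classifier input can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame names, the sensor range, and the normalization to an 8-bit grayscale image are all assumptions.

```python
import numpy as np

def depth_difference_image(depth_prev, depth_curr, max_range=5.0):
    """Build a depth-difference image from two consecutive depth frames.

    Pixels where the depth changed between frames (e.g. a walking person)
    produce large values, while static background cancels out. The magnitude
    of the change grows with the obstacle's speed, which is the cue an
    obstacle state classifier could pick up on.
    """
    # Clip to the sensor's usable range (assumed 5 m) and normalize to [0, 1].
    prev = np.clip(depth_prev, 0.0, max_range) / max_range
    curr = np.clip(depth_curr, 0.0, max_range) / max_range
    # Absolute per-pixel difference, scaled to an 8-bit grayscale image.
    diff = np.abs(curr - prev)
    return (diff * 255.0).astype(np.uint8)

# Example: a static scene in which one region moved 0.5 m closer between frames.
prev = np.full((4, 4), 3.0)   # everything 3 m away
curr = prev.copy()
curr[1:3, 1:3] = 2.5          # obstacle approached by 0.5 m
diff_img = depth_difference_image(prev, curr)
```

The static background yields zeros and the moved region yields a nonzero value, so a downstream CNN classifier sees motion isolated from the scene geometry.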
id doaj-art-768f5fd4658240dcabcbf8694d58cd13
issn 1884-9970