Three-Dimensional Path-Following Control of a Robotic Airship with Reinforcement Learning

Bibliographic Details
Main Authors: Chunyu Nie, Zewei Zheng, Ming Zhu
Format: Article
Language:English
Published: Wiley 2019-01-01
Series:International Journal of Aerospace Engineering
Online Access:http://dx.doi.org/10.1155/2019/7854173
Description
Summary:This paper proposes an adaptive three-dimensional (3D) path-following control design for a robotic airship based on reinforcement learning. The 3D path-following control is decomposed into altitude control and planar path-following control, and Markov decision process (MDP) models of these control problems are established, in which the scale of the state space is reduced through parameter simplification and coordinate transformation. To ensure control adaptability without dependence on an accurate airship dynamic model, a Q-learning algorithm is adopted directly to learn the action policy for actuator commands, and the controller is trained online from actual motion. A cerebellar model articulation controller (CMAC) neural network is employed for experience generalization to accelerate training. Simulation results demonstrate that the proposed controllers achieve performance comparable to well-tuned proportional-integral-derivative (PID) controllers and exhibit a more intelligent decision-making ability.
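The Q-learning step described in the summary can be illustrated with a minimal tabular sketch. Note this is not the paper's implementation: the discrete states, the actuator commands in `ACTIONS`, the reward signal, and the hyperparameters are all illustrative assumptions; the paper additionally generalizes experience with a CMAC network, which is omitted here.

```python
import random

# Hypothetical hyperparameters (not taken from the paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = [-1, 0, +1]   # placeholder discrete actuator commands

Q = {}                  # Q[(state, action)] -> estimated value

def q(state, action):
    """Look up a Q-value, defaulting unseen pairs to 0."""
    return Q.get((state, action), 0.0)

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action))
```

In an online training loop such as the one the summary describes, each simulated (or actual) motion step would call `choose_action`, apply the command, observe the reward and next state, and then call `update`.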
ISSN:1687-5966
1687-5974