Graduate Thesis Or Dissertation
 

Bayesian Reinforcement Learning Based Control For Ocean Wave Energy Conversion

Public Deposited

Downloadable Content

Download the PDF file
https://ir.library.oregonstate.edu/concern/graduate_thesis_or_dissertations/0z709409h

Descriptions

Attribute Name / Values
Creator
Abstract
  • This work, in which three peer-reviewed academic papers are presented, addresses the application of Bayesian Reinforcement Learning to the control of a class of ocean wave energy conversion systems. The first paper presents a comparison of a Reinforcement Learning (RL) based wave energy converter controller against standard Reactive Damping and Model Predictive Control (MPC) approaches in the presence of modeling errors. Wave energy converters (WECs) are affected by many non-linear hydrodynamic forces, yet for ease and expediency it is common to formulate linear WEC models and control laws. Significant modeling errors may therefore be present, which can degrade model-based control performance. Model-free RL approaches to control may offer a significant advantage in robustness to modeling errors, in that the controller learns from experience rather than relying on a prescribed model. It is shown that, for an annual average sea state, RL-based controllers can outperform model-based control (reactive control and MPC) by 19% and 16%, respectively, when significant modeling error is present. In the second paper, a wave energy converter with two PTOs is simulated in a condition where one of the PTOs fails to operate. Due to the characteristics of the ocean wave resource, wave energy converter components are required to bear peak loads (i.e., large torques, forces, and powers) that can cause component degradation and breakdown. In addition, any failure of a switching element (MOSFET or IGBT) in the power electronics drive for the Power Take-Off (PTO), or a malfunction of a pump or hose in the hydraulics, can cause a PTO failure. An RL-based control strategy is proposed to deal with PTO failures, since such algorithms are model-free and can adapt their policy to a changing environment. Results demonstrate the adaptability of the proposed control model to the condition where one PTO fails to generate power. Over a winter sea state, the WEC with two PTOs generates 255.1 kW before the fault; when the PTO fault occurs, the mean power drops to 223.4 kW. After the fault, the RL-based control retrains its policy and recovers 247.3 kW. In the third paper, a WEC controller is evaluated in a Hardware-in-the-Loop real-time testbed. As most wave energy converter technologies are located offshore, operation, maintenance, and repair costs may be significant. Therefore, any cost-effective control and operation of wave energy converters must be robust to faults, loss of information from sensors, and changes in the operational environment. In this chapter, an adaptive Bayesian Reinforcement Learning method with an online sparsification approach is presented that is responsive and adaptive to faults in controller feedback, faults in controller actuation, and changes in the plant model. The results show that the proposed control is capable of adapting its policy to the loss of information from one or two sensors and recovering full power operation. Moreover, the Bayesian RL based control model proves its adaptability by compensating for a large portion of the power loss due to PTO failures and rapid changes to the dynamic system. (An illustrative sketch of such a model-free controller is given after the metadata listing below.)
Contributor
License
Resource Type
Date Issued
Degree Level
Degree Name
Degree Field
Degree Grantor
Commencement Year
Advisor
Academic Affiliation
Rights Declaration
Publisher
Peer Reviewed
Language
Embargo reason
  • Pending Publication
Embargo date range
  • 2022-03-25 to 2023-04-26
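
Illustrative sketch: as a rough illustration of the kind of model-free controller the abstract describes (not the thesis implementation), the following Python sketch tunes reactive PTO coefficients (damping B_pto and stiffness K_pto) purely from observed absorbed power, using a simple single-state Q-learning update on a toy one-degree-of-freedom heaving-buoy surrogate. All dynamics, parameter values, and names are illustrative assumptions.

```python
# Hypothetical sketch: a model-free RL agent selects reactive PTO
# coefficients from measured absorbed power alone, with no access to
# the (possibly mismatched) hydrodynamic model.
import numpy as np

rng = np.random.default_rng(0)

def simulate_mean_power(B_pto, K_pto, T=60.0, dt=0.02,
                        m=1.0e5, B_h=2.0e4, K_h=3.0e5,
                        F_amp=1.0e5, omega=0.8):
    """Toy 1-DOF heave surrogate driven by a monochromatic wave force.
    Returns mean power absorbed by the PTO damper over one episode."""
    z, zdot, absorbed = 0.0, 0.0, 0.0
    n = int(T / dt)
    for i in range(n):
        F_exc = F_amp * np.sin(omega * i * dt)          # wave excitation
        F_pto = -B_pto * zdot - K_pto * z               # reactive PTO force
        zddot = (F_exc - B_h * zdot - K_h * z + F_pto) / m
        zdot += zddot * dt
        z += zdot * dt
        absorbed += B_pto * zdot**2 * dt                # power into the PTO
    return absorbed / T

# Discrete action set: candidate (B_pto, K_pto) pairs.
B_grid = np.linspace(1e4, 2e5, 8)
K_grid = np.linspace(-2e5, 2e5, 8)
actions = [(b, k) for b in B_grid for k in K_grid]

# Single-state (bandit-style) Q-learning: one value per action,
# updated toward the measured episode reward (mean power).
Q = np.zeros(len(actions))
alpha, eps = 0.2, 0.2

for episode in range(200):
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q))
    reward = simulate_mean_power(*actions[a])
    Q[a] += alpha * (reward - Q[a])

best = actions[int(np.argmax(Q))]
print(f"Learned PTO coefficients: B_pto={best[0]:.2e}, K_pto={best[1]:.2e}")
```

The point of the sketch is that the controller needs only the measured reward (mean absorbed power); a plant change such as a PTO fault simply shifts the rewards, and the same update rule re-adapts the policy without a model.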

Relationships

Parents:

This work has no parents.

In Collection:

Articles