Wind and Structures, Volume 36, Number 5, May 2023 (Special Issue), pages 321-331
DOI: https://doi.org/10.12989/was.2023.36.5.321

Active flutter control of long-span bridges via deep reinforcement learning: A proof of concept
Teng Wu, Jiachen He and Shaopeng Li
Abstract
Aeroelastic instability (i.e., flutter) is a critical issue that threatens the safety of flexible bridges as span lengths continue to increase. As a promising technique for flutter prevention, active aerodynamic control using auxiliary surfaces attached to the bridge deck (e.g., winglets and flaps) can be utilized to extract stabilizing forces from the surrounding wind flow. Conventional controllers for active aerodynamic control are usually designed using linear model-based schemes [e.g., the linear quadratic regulator (LQR) and H-infinity control]. In addition to suffering from model inaccuracies, the resulting linear controller may not perform well given the high complexity of the inherently nonlinear wind-bridge-control system. To this end, this study proposes a nonlinear, model-free controller based on deep reinforcement learning for active flutter control of long-span bridges. Specifically, a deep neural network (DNN), with its powerful ability to approximate nonlinear functions, is introduced to map the system state (e.g., the motion of the bridge deck) to the control command (e.g., the reference position of the actively controlled surface). The DNN weights are obtained by interacting with the wind-bridge-control environment in a trial-and-error fashion (hence an explicit model of the system dynamics is not required) using the deep deterministic policy gradient (DDPG) reinforcement learning algorithm, chosen for its ability to handle continuous actions with high training efficiency. As a proof of concept, numerical examples on active flutter control of a flat plate and a bridge deck are conducted to demonstrate the good performance of the proposed scheme.
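For illustration only, the sketch below shows a minimal, hypothetical actor-critic pair of the kind used in DDPG-style control, mapping a bridge-deck state (e.g., heave and pitch displacements and rates) to a continuous command for an actively controlled surface. It is not the authors' implementation; the state and action dimensions, layer sizes, command limit, and choice of PyTorch are all assumptions made for the example.

```python
# Hypothetical DDPG-style actor/critic for active flutter control.
# State and action dimensions, network sizes, and the command limit are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: deck state -> continuous control command."""
    def __init__(self, state_dim=4, action_dim=1, max_action=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # bounded output in [-1, 1]
        )
        self.max_action = max_action  # assumed surface-rotation limit (rad)

    def forward(self, state):
        return self.max_action * self.net(state)

class Critic(nn.Module):
    """Action-value estimate Q(s, a) used to update the actor in DDPG."""
    def __init__(self, state_dim=4, action_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# Query the (untrained) policy with a dummy deck state
# [heave, pitch, heave rate, pitch rate]; in DDPG the weights would be
# learned by trial-and-error interaction with a wind-bridge-control simulator.
actor = Actor()
state = torch.tensor([[0.01, 0.002, -0.005, 0.001]])
command = actor(state)  # reference position of the controlled surface
```

In a full DDPG loop, the critic would be trained on transitions sampled from a replay buffer of such interactions, and the actor updated to maximize the critic's value estimate, so no explicit model of the wind-bridge dynamics is required.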
Key Words
active control; deep neural networks; flutter; long-span bridges; reinforcement learning

Address
Teng Wu: University at Buffalo, Buffalo, NY 14260, USA
Jiachen He: China Railway Siyuan Survey and Design Group Co., Ltd., Wuhan, Hubei 430063, China
Shaopeng Li: University of Florida, Gainesville, FL 32611, USA