
dc.contributor.author: Zhang, Nailong
dc.contributor.author: Si, Wujun
dc.date.accessioned: 2020-07-09T16:24:06Z
dc.date.available: 2020-07-09T16:24:06Z
dc.date.issued: 2020-11
dc.identifier.citation: Nailong Zhang, Wujun Si, 2020. Deep reinforcement learning for condition-based maintenance planning of multi-component systems under dependent competing risks, Reliability Engineering & System Safety, vol. 203: art. no. 107094
dc.identifier.issn: 0951-8320
dc.identifier.uri: https://doi.org/10.1016/j.ress.2020.107094
dc.identifier.uri: https://soar.wichita.edu/handle/10057/18678
dc.description: Click on the DOI link to access the article (may not be free).
dc.description.abstract: Condition-Based Maintenance (CBM) planning for multi-component systems has been receiving increasing attention in recent years. Most existing research on CBM assumes that preventive maintenance should be conducted when the degradation of system components reaches specific threshold levels upon inspection. However, while the search for optimal maintenance thresholds is efficient for low-dimensional CBM, it becomes challenging as the number of components grows, especially when those components are subject to complex dependencies. To overcome this challenge, in this paper we propose a novel and flexible CBM model based on customized deep reinforcement learning for multi-component systems with dependent competing risks. Both stochastic and economic dependencies among the components are considered. Specifically, unlike the threshold-based decision-making paradigm used in traditional CBM, the proposed model directly maps the multi-component degradation measurements at each inspection epoch to the maintenance decision space with a cost-minimization objective, and the use of deep reinforcement learning yields high computational efficiency, making the proposed model suitable for both low- and high-dimensional CBM. Various numerical studies are conducted for model validation. (An illustrative sketch of this decision mapping follows the record below.)
dc.language.iso: en_US
dc.publisher: Elsevier
dc.relation.ispartofseries: Reliability Engineering & System Safety; v.203: art. no. 107094
dc.subject: Cost minimization
dc.subject: Deep Q network
dc.subject: Failure dependency
dc.subject: Maintenance
dc.subject: Markov decision process
dc.title: Deep reinforcement learning for condition-based maintenance planning of multi-component systems under dependent competing risks
dc.type: Article
dc.rights.holder: © 2020 Elsevier Ltd
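
The abstract and subject keywords indicate a deep Q network that maps multi-component degradation measurements at each inspection epoch to maintenance decisions under a cost-minimization objective. The following is a minimal, illustrative sketch of that idea, not the authors' implementation: it assumes PyTorch, a hypothetical 4-component system, a joint repair/no-repair action per component, and a reward defined as the negative maintenance cost. All sizes, cost figures, and names (QNetwork, select_action, td_update) are placeholders chosen for illustration.

import random
import torch
import torch.nn as nn

N_COMPONENTS = 4                      # hypothetical system size
N_ACTIONS = 2 ** N_COMPONENTS         # one maintain-or-not decision per component

class QNetwork(nn.Module):
    """Maps a degradation measurement vector to Q-values over maintenance actions."""
    def __init__(self, n_components: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_components, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over the joint maintenance action space."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def td_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One temporal-difference step: fit Q(s,a) to reward + gamma * max_a' Q_target(s',a')."""
    states, actions, rewards, next_states = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * target_net(next_states).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    q_net = QNetwork(N_COMPONENTS, N_ACTIONS)
    target_net = QNetwork(N_COMPONENTS, N_ACTIONS)
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    # Fabricated transition batch purely to exercise the update step.
    s = torch.rand(8, N_COMPONENTS)          # degradation levels, scaled to [0, 1)
    a = torch.randint(0, N_ACTIONS, (8,))    # joint maintenance actions taken
    r = -torch.rand(8)                       # reward = negative maintenance cost
    s_next = torch.rand(8, N_COMPONENTS)
    print("TD loss:", td_update(q_net, target_net, optimizer, (s, a, r, s_next)))

Because the policy is a learned mapping from the raw degradation vector to actions, no maintenance thresholds need to be searched; this is the property the abstract credits for keeping the approach tractable as the number of components grows.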


Files in this item

There are no files associated with this item.

This item appears in the following Collection(s)

  • ISME Research Publications
    Research works published by faculty and students of the Department of Industrial, Systems, and Manufacturing Engineering
