• Media type: E-Article
  • Title: Application of various machine learning architectures for crash prediction, considering different depths and processing layers
  • Contributor: Rezapour, Mahdi; Ksaibati, Khaled
  • Imprint: Wiley, 2020
  • Published in: Engineering Reports
  • Language: English
  • DOI: 10.1002/eng2.12215
  • ISSN: 2577-8196
  • Description: <jats:title>Abstract</jats:title><jats:p>Over a million people die every year as a result of road crashes. A significant proportion of all road fatalities involve vulnerable road users, including motorcyclists and pedestrians. Artificial intelligence can act proactively to identify drivers at higher risk of crashes. However, an accurate predictive model is essential for the safety of road users. High accuracy can be achieved by implementing a reliable method with an optimal architecture and structure, capable of identifying and flagging risky drivers before a crash occurs. Over the last decade, extensive research has been conducted to achieve higher accuracy by adjusting the depth of various machine learning algorithms or stacking different processing layers. The literature highlights that higher algorithm complexity, or blindly stacking layers, does not necessarily enhance model accuracy. Despite the extensive work in the literature on the impacts of depth and various processing layers, little research has been conducted on transportation problems to investigate the importance of those factors for the accuracy of crash prediction. In this context, this study implements and compares various deep learning architectures for predicting motorists' crash severity. Various processing layers and depths are considered and compared. Long short‐term memory (LSTM) models have been used extensively in the literature for different types of sequential datasets; however, a comprehensive application of this method to non‐sequential data is still missing. Different processing layers of LSTM‐ and deep neural network (DNN)‐based models with various depths and combinations are considered and compared here. The results indicate that a simple LSTM outperforms both an LSTM model with greater depth and a model with a DNN stacked on top of the LSTM. This study discusses in detail the methodological approach to stacking various layers and hyperparameter tuning.</jats:p>
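The abstract contrasts a single-layer LSTM with deeper stacked variants. As background only (this is not the authors' implementation, and all names here are illustrative), a minimal NumPy sketch of one LSTM cell step; "stacking" layers means feeding the hidden state `h` of one such cell into the input `x` of the next:

```python
import numpy as np

def sigmoid(v):
    # Logistic gate activation, maps pre-activations into (0, 1).
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x      : input vector, shape (D,)
    h_prev : previous hidden state, shape (H,)
    c_prev : previous cell state, shape (H,)
    W, U, b: stacked gate parameters, shapes (4H, D), (4H, H), (4H,)
    """
    z = W @ x + U @ h_prev + b      # stacked pre-activations for all 4 gates
    H = h_prev.size
    i = sigmoid(z[0:H])             # input gate
    f = sigmoid(z[H:2 * H])         # forget gate
    o = sigmoid(z[2 * H:3 * H])     # output gate
    g = np.tanh(z[3 * H:4 * H])     # candidate cell update
    c = f * c_prev + i * g          # new cell state (gated memory)
    h = o * np.tanh(c)              # new hidden state (the layer's output)
    return h, c
```

A deeper model in the paper's sense would apply `lstm_step` of layer 2 to the `h` produced by layer 1 (each layer with its own `W`, `U`, `b`), while the LSTM+DNN variant passes the final `h` through additional dense layers; the abstract reports that the simple single-layer form predicted crash severity best.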
  • Access State: Open Access