Explainable AI for Autonomous Ship Navigation Aims to Increase Trust
The sinking of the Titanic, which struck an iceberg 113 years ago on the night of April 14-15, serves as a stark reminder of the importance of safe navigation at sea. Human error likely led the ship into dangerous waters, but today, autonomous systems built on artificial intelligence (AI) could help ships avoid such accidents. The question remains, however: can such a system explain to the captain why it is maneuvering in a certain way?
This is where explainable AI comes into play, aiming to increase trust in autonomous systems among human operators. Researchers from Osaka Metropolitan University’s Graduate School of Engineering have developed an explainable AI model for ships that quantifies the collision risk for all vessels in a given area. This feature is particularly important as key sea-lanes have become increasingly congested.

Graduate student Hitoshi Yoshioka and Professor Hirotada Hashimoto created an AI model that not only makes decisions but also explains the basis for those decisions and the intention behind its actions using numerical values for collision risk. “By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers,” Professor Hashimoto stated. “I also believe that this research can contribute to the realization of unmanned ships.”
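The article does not describe how the model computes its numerical risk values, but a common way to quantify ship-to-ship collision risk is from the distance and time to the closest point of approach (DCPA and TCPA), assuming both vessels hold course and speed. The Python sketch below is a minimal, hypothetical illustration of that general idea; the Ship class, the safe_distance and horizon parameters, and the risk formula are assumptions for illustration and are not taken from the published model.

```python
import math
from dataclasses import dataclass


@dataclass
class Ship:
    """Position in metres (x east, y north) and velocity in m/s. Hypothetical type for this sketch."""
    x: float
    y: float
    vx: float
    vy: float


def cpa(own: Ship, target: Ship) -> tuple[float, float]:
    """Return (DCPA, TCPA): distance and time to the closest point of approach,
    assuming both ships hold their current course and speed."""
    rx, ry = target.x - own.x, target.y - own.y          # relative position
    rvx, rvy = target.vx - own.vx, target.vy - own.vy    # relative velocity
    rv2 = rvx * rvx + rvy * rvy
    if rv2 < 1e-9:                                       # nearly identical velocities
        return math.hypot(rx, ry), 0.0
    tcpa = max(0.0, -(rx * rvx + ry * rvy) / rv2)        # time of closest approach
    dcpa = math.hypot(rx + rvx * tcpa, ry + rvy * tcpa)  # separation at that time
    return dcpa, tcpa


def collision_risk(own: Ship, target: Ship,
                   safe_distance: float = 1852.0,        # assumed threshold, ~1 nautical mile
                   horizon: float = 1800.0) -> float:    # assumed 30-minute look-ahead
    """Map DCPA/TCPA to a 0-1 risk score: 1 means an imminent close-quarters situation."""
    dcpa, tcpa = cpa(own, target)
    distance_factor = max(0.0, 1.0 - dcpa / safe_distance)
    time_factor = max(0.0, 1.0 - tcpa / horizon)
    return distance_factor * time_factor


if __name__ == "__main__":
    own = Ship(x=0.0, y=0.0, vx=0.0, vy=7.7)             # own ship heading north at ~15 knots
    targets = {
        "crossing vessel": Ship(x=3000.0, y=4000.0, vx=-6.0, vy=0.0),
        "distant vessel": Ship(x=20000.0, y=15000.0, vx=-2.0, vy=-2.0),
    }
    for name, t in targets.items():
        print(f"{name}: risk = {collision_risk(own, t):.2f}")
```

In this toy example the crossing vessel, which will pass within about a hundred metres in roughly eight minutes, scores near 0.7, while the distant, diverging vessel scores 0. A per-vessel number of this kind is the sort of quantity a human operator could inspect to understand why an autonomous system chooses a particular maneuver.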
The findings of their research were published in Applied Ocean Research. Explainable AI for autonomous ship navigation represents a notable step forward for maritime safety and technology: as sea traffic becomes increasingly automated, innovations like this one will be important for building the safety, efficiency, and trust that autonomous systems require.