The advancement of AI is accompanied by great opportunities for economic development and for addressing societal challenges. In the maritime sector, the potential of AI in combination with big data and High-Performance Computing (HPC) is expected to revolutionise maritime processes by leveraging the unprecedented amount of data generated by naval vessels to produce analytical models that help domain professionals make more accurate decisions. However, AI can put pressure on the ethical values and fundamental rights that underpin our lives and our societies. As such, ethical considerations serve as guidance where the law is not entirely adapted to new phenomena, e.g., where technology enables practices that the legislator had not anticipated. In this context, the development of maritime AI systems must be in line with ethical principles and requirements so as to prevent harmful implications.
The VesselAI project aims to realise a holistic, beyond-the-state-of-the-art, AI-empowered framework for decision-support models, data analytics and visualisations, used to build digital twins and maritime applications for a diverse set of high-impact cases, including simulating and predicting vessel behaviour and manoeuvring (including the human factor), ship energy design optimisation, autonomous shipping and fleet intelligence.
The VesselAI consortium has carefully examined the solutions being developed in order to identify and address potential ethical issues. First, it was concluded early in the project that the AI models are trained only on open data (AIS, weather datasets, etc.) and pilot-specific data (in effect, a maritime company's naval data). No personal or sensitive data are used at any point of model training, which means that the project poses no risk to user privacy and is compliant with the GDPR.

The second step entailed assessing VesselAI's compliance with the rules that govern ethical AI in the EU. For that reason, the Ethics Guidelines for Trustworthy AI developed by the High-Level Expert Group on AI (AI HLEG) were at the centre of the VesselAI ethical framework during both the implementation and delivery stages of its AI solutions. To establish a common approach to trustworthy AI services, a framework was designed based on the Assessment List for Trustworthy Artificial Intelligence (ALTAI), as well as on the proposed guidelines for the identification and management of ethical risks. The ALTAI questionnaire makes ethics central to the development of AI systems. It acts as a self-evaluation tool for assessing AI systems against the seven key requirements defined in the Guidelines: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The list contains a set of questions for each requirement that guide its practical implementation. In addition, it raises awareness of the potential impact and risks of the proposed AI systems and of the measures that can be taken to mitigate those risks.
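To illustrate how such a self-assessment can be organised in practice, the following sketch models an ALTAI-style checklist as a simple data structure in Python. The seven requirement names are those of the AI HLEG Guidelines; the answer scale and the example question are hypothetical simplifications introduced here for illustration, not part of the actual ALTAI questionnaire.

```python
from dataclasses import dataclass, field
from enum import Enum


class Answer(Enum):
    """Hypothetical simplified answer scale; the real ALTAI uses richer formats."""
    YES = "yes"
    NO = "no"
    NOT_APPLICABLE = "n/a"


@dataclass
class Question:
    text: str
    answer: Answer | None = None  # filled in during the self-assessment


@dataclass
class Requirement:
    """One of the seven key requirements of the Ethics Guidelines for Trustworthy AI."""
    name: str
    questions: list[Question] = field(default_factory=list)


# The seven key requirements defined in the AI HLEG Guidelines.
checklist = [
    Requirement("Human agency and oversight"),
    Requirement("Technical robustness and safety"),
    Requirement("Privacy and data governance"),
    Requirement("Transparency"),
    Requirement("Diversity, non-discrimination and fairness"),
    Requirement("Societal and environmental well-being"),
    Requirement("Accountability"),
]

# Illustrative question (not taken from the actual ALTAI) for the privacy requirement.
checklist[2].questions.append(
    Question("Is the system trained exclusively on open, non-personal data (e.g., AIS, weather)?")
)
```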
The assessment of the VesselAI system was performed in three cycles by the pilots (with the support of the technical partners and the project's legal and ethical experts), who were asked to answer the questions based solely on their pilot and use-case scenarios and on the technical components of the VesselAI solution they would utilise. Based on the questionnaire results and an overall analysis of VesselAI from a legal and ethical standpoint, the project was assessed as relatively low-risk with respect to its AI components. The few risks that were identified were quickly discussed and resolved with the help of ALTAI's recommendations, confirming the list as a powerful and intuitive tool for ensuring that AI solutions comply with legal and ethical requirements.
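Continuing the checklist sketch above, the function below shows one way the answers gathered in each assessment cycle could be aggregated into a list of risks to discuss. The flagging rule (treating a NO answer as a potential risk to be mitigated) is an assumption made for illustration, not the project's actual scoring method.

```python
def flag_risks(checklist: list[Requirement]) -> dict[str, list[str]]:
    """Collect, per requirement, the questions whose answers indicate a risk.

    Illustrative assumption: a NO answer marks a potential risk that the
    pilots should discuss and mitigate, e.g. using ALTAI's recommendations.
    """
    risks: dict[str, list[str]] = {}
    for requirement in checklist:
        flagged = [q.text for q in requirement.questions if q.answer is Answer.NO]
        if flagged:
            risks[requirement.name] = flagged
    return risks


# Example: the pilot confirms that only open, non-personal data are used,
# so nothing is flagged under "Privacy and data governance".
checklist[2].questions[0].answer = Answer.YES
print(flag_risks(checklist))  # -> {}
```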