Generative AI applications for risk and crisis management: The good, the bad, and the ugly
Published 2024-05-20
Keywords
- risk and crisis management
- generative AI
- knowledge management
- knowledge extraction
- misinformation
This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
Generative Pre-Trained Transformers (GPT) have the potential to drastically improve our ability to extract actionable knowledge from unstructured and semi-structured data by automating its processing in a generic way. The potential for generative AI in risk and crisis management is enormous, ranging from the extraction of specific knowledge from large document sets, through real-time processing of texts, pictures, and even video material, to semi-automated timely responses to misinformation. Unfortunately, this high potential also comes with high risks, as generative AI systems often produce incomplete and misleading information, especially on topics they have not been explicitly trained on and on topics where the training data is already tainted by misinformation. In this paper, we (1) present several application classes where generative AI could help improve environmental risk management and climate change action, (2) discuss the common sources of errors as well as the legal and ethical challenges in such applications, and (3) propose a "safe" pathway towards the gradual introduction of generative AI in applications with high societal impact.