In a Forbes article (https://www.forbes.com/sites/glenngow/2023/08/06/a-simple-ai-governance-framework-in-the-age-of-chatgpt/?sh=5f703ef2...), Glenn Gow draws on Singapore's AI governance model to offer a framework for deciding how to deploy artificial intelligence.
The framework weighs the harm that could occur if AI is left to its own devices without a human in the loop, considering both how severe that harm would be and how likely it is. This yields three scenarios:
- Human in the loop: Both the probability and the severity of harm are high. Humans are essential to decision-making. AI can remove the most tedious and time-consuming work, but humans must retain ultimate responsibility.
- Human over the loop: Either the probability or the severity of harm is high, but not both. AI can operate without humans in each decision but needs human oversight.
- Human outside the loop: AI can make decisions as well as or better than humans, and the probability of harm is low, so AI can operate autonomously.
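The three scenarios above amount to a simple decision rule on two risk dimensions. As a minimal sketch (my own illustrative encoding, not part of the article, which assumes probability and severity can each be judged "high" or not):

```python
def oversight_level(high_probability: bool, high_severity: bool) -> str:
    """Map the probability and severity of harm to one of the three
    human-oversight scenarios described above."""
    if high_probability and high_severity:
        return "human in the loop"       # humans make the final decision
    if high_probability or high_severity:
        return "human over the loop"     # AI acts, humans supervise
    return "human outside the loop"      # AI may operate autonomously

# Example: high severity but low probability -> oversight, not full control
print(oversight_level(high_probability=False, high_severity=True))
```

In practice, of course, judging whether probability or severity is "high" is itself a governance decision that the framework leaves to the organization.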
This framework gives organizations practical guidance for deciding where and how to use AI in their operations.