Ethics in Artificial Intelligence/Machine Learning

April 07, 2021

With the increasing use of artificial intelligence and machine learning (AI/ML) come decisions about what data can and should be accessed, and how to ensure fairness and avoid bias and discrimination. These questions are addressed by John Hurlock in a blog post on Smarter Risk Management (https://www.smarterriskmanagement.com/ai-ml-ethics-what-will-it-take-to-trust-the-model/).

In the EU, the Assessment List for Trustworthy Artificial Intelligence (ALTAI) lists seven requirements:

  1. Human agency and oversight.
  2. Technical robustness and safety.
  3. Privacy and data governance.
  4. Transparency.
  5. Diversity, non-discrimination and fairness.           
  6. Environmental and societal well-being.
  7. Accountability.

Hurlock recommends using these attributes as guidelines in the United States.

Without human oversight, machine learning can go wrong. Cyberattacks can lead to "data poisoning," the intentional feeding of bad data into an algorithm. Algorithms can also result in informational redlining, and there is a risk of disclosing protected personal information.
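
To make the data-poisoning idea concrete, the short Python sketch below (our illustration, not from Hurlock's post) trains a simple model on synthetic data, then flips a fraction of the training labels and compares accuracy before and after. The data and the flip_labels helper are hypothetical.

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # Synthetic data only -- an illustration, not a real workload.
  X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  def flip_labels(labels, fraction, rng):
      # Hypothetical "poisoning": invert the labels of a random subset of rows.
      poisoned = labels.copy()
      idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
      poisoned[idx] = 1 - poisoned[idx]
      return poisoned

  rng = np.random.default_rng(0)
  clean = LogisticRegression().fit(X_train, y_train)
  poisoned = LogisticRegression().fit(X_train, flip_labels(y_train, 0.30, rng))

  print("accuracy with clean training data:   ", clean.score(X_test, y_test))
  print("accuracy with poisoned training data:", poisoned.score(X_test, y_test))

Even this toy example shows why the training data itself, not just the finished model, needs oversight.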

Bias is a growing issue that can and should be addressed, in part, through greater diversity among the programmers who build these systems. Impacts on the environment and society must also be considered.
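
One simple, illustrative way to start checking for biased outcomes is to compare decision rates across groups. The sketch below is a minimal example under assumed column names ("group", "approved"); it computes a demographic-parity gap and is not a method prescribed by ALTAI or Hurlock.

  import pandas as pd

  # Hypothetical decisions; "group" and "approved" are placeholder columns.
  decisions = pd.DataFrame({
      "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
      "approved": [ 1,   1,   0,   1,   0,   1,   0,   0 ],
  })

  # Approval rate per group and the gap between them (demographic parity).
  rates = decisions.groupby("group")["approved"].mean()
  gap = abs(rates["A"] - rates["B"])
  print(rates)
  print("demographic parity gap:", round(gap, 2))

A large gap does not prove discrimination on its own, but it is a signal that the model and its data deserve a closer look.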

The goal of these requirements is to make sure AI/ML is used for the right purposes.