The increasing use of artificial intelligence and machine learning (AI/ML) raises decisions about what data can and should be accessed, and about how to guarantee fairness and avoid bias and discrimination. John Hurlock addresses these questions in a blog post on Smarter Risk Management (https://www.smarterriskmanagement.com/ai-ml-ethics-what-will-it-take-to-trust-the-model/).
In the EU, the Assessment List for Trustworthy Artificial Intelligence (ALTAI) lists seven requirements:
- Human agency and oversight.
- Technical robustness and safety.
- Privacy and data governance.
- Transparency.
- Diversity, non-discrimination and fairness.
- Environmental and societal well-being.
- Accountability.
Hurlock recommends using these attributes as guidelines in the United States.
Without human oversight, machine learning can go wrong. Cyber attacks can lead to "data poisoning", in which bad data is intentionally fed into an algorithm's training set. Algorithms can also lead to informational redlining, and there is a risk of disclosing protected individual information.
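To make the data-poisoning risk concrete, here is a minimal sketch (with hypothetical data, not from Hurlock's post) showing how an attacker who injects a few deliberately mislabeled points into a training set can flip the predictions of a simple nearest-neighbour classifier:

```python
# Minimal sketch of "data poisoning": an attacker injects mislabeled
# points into the training set of a 1-nearest-neighbour classifier.
# Data and labels here are hypothetical, chosen only for illustration.

def predict(train, x):
    """1-NN: return the label of the training point closest to x."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

# Clean training data: (feature value, label) pairs
clean = [(0, "low"), (1, "low"), (2, "low"),
         (10, "high"), (11, "high"), (12, "high")]

print(predict(clean, 10.5))     # "high", as expected

# Poisoned copies of legitimate inputs, deliberately mislabeled
poisoned = clean + [(10.4, "low"), (10.6, "low")]

print(predict(poisoned, 10.5))  # now "low": the model was poisoned
```

A handful of corrupted records is enough here, which is why human oversight of training data matters.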
Bias is a growing issue, and diverse development teams can and should help identify and address it. Impacts on the environment and society must also be considered.
The goal of these requirements is to make sure AI/ML is used for the right purposes.