Karen Hao of MIT Technology Review summarises the principles as follows:
- Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
- Public participation. The public should have a chance to provide feedback at all stages of the rule-making process.
- Scientific integrity and information quality. Policy decisions should be based on science.
- Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
- Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
- Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
- Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
- Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
- Safety and security. Agencies should keep all data used by AI systems safe and secure.
- Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.
The US became the 19th country to announce a national AI initiative when it did so in February 2019, and these principles are its first major attempt to set a direction and position the US as a leader rather than an 'also-ran'. The question is: are they too little, too vague, and too late?