Privacy group calls on US government to adopt universal AI guidelines to protect safety, security and civil liberties


After months of work, a set of guidelines designed to protect humanity from a range of threats posed by artificial intelligence has been proposed.

Now, a privacy group wants the U.S. government to adopt them too.

The set of 12 universal guidelines, revealed at a meeting in Brussels last week, is designed to “inform and improve the design and use of AI” by maximizing the benefits while reducing the risks. For years, AI has been a blanket term for machine-based decision making, but as the technology improves and is more widely adopted, AI-driven outcomes are having a greater effect on human lives, from credit approvals and employment decisions to criminal sentencing.

But often those decisions are made with proprietary and closed-off algorithms, making it near-impossible to know if the decisions are fair or justified.

These guidelines, according to the Electronic Privacy Information Center (EPIC), are designed to be baked into AI systems to ensure the protection of human rights. That includes a right to know the factors, logic, and techniques used to reach the outcome of a decision; a fairness obligation that removes discriminatory decision making; and an obligation to secure systems against cybersecurity threats. The principles also include a prohibition on unitary scoring, which would prevent governments from using AI to score their citizens and residents, a subtle jab at China’s controversial social credit system.

Now, EPIC wants to bring those principles stateside, where many of the next-generation AI technologies are under development.

In a letter to the National Science Foundation, EPIC called on the little-known government agency to adopt the universal guidelines, months after it opened a request for information on a national AI policy.

“By investing in AI systems that strive to meet the [universal] principles, NSF can promote the development of systems that are accurate, transparent, and accountable from the outset,” wrote Marc Rotenberg, EPIC’s president and executive director. “Ethically developed, implemented, and maintained AI systems can and should cost more than systems that are not, and therefore merit investment and research.”

EPIC said that the 12 principles fit neatly within the seven strategies already set out by the U.S., making the case for their adoption easier.

More than 200 experts and 50 organizations have signed on to the guidelines, including the Federation of American Scientists and the Government Accountability Project.

With the government’s request for information now closed, it is likely to be many more weeks, if not months, before the government decides on its next steps, if any. The decision likely rests not with the NSF but with the White House’s Office of Science and Technology Policy.

A White House spokesperson did not respond to a request for comment.

You can read the full set of guidelines below:

  • Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.

  • Right to Human Determination. All individuals have the right to a final determination made by a person.

  • Identification Obligation. The institution responsible for an AI system must be made known to the public.

  • Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.

  • Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.

  • Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.

  • Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms.

  • Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.

  • Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.

  • Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.

  • Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.

  • Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.

Written by Zack Whittaker
This news first appeared on https://techcrunch.com/2018/10/29/us-government-universal-artificial-intelligence-guidelines/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29 under the title “Privacy group calls on US government to adopt universal AI guidelines to protect safety, security and civil liberties”. Bolchha Nepal is not responsible for, or affiliated with, the opinions expressed in this news article.