[LINK] EPIC: OECD Announces AI Principles, 42 Nations Endorse

Roger Clarke Roger.Clarke at xamax.com.au
Sat May 25 15:24:00 AEST 2019


[I'll shortly run the ruler over the OECD Principles, using as a 
yardstick the collation of 10 Themes and 50 Principles, at:
http://www.rogerclarke.com/EC/AIP.html
http://www.rogerclarke.com/EC/AIP.html#App1
http://www.rogerclarke.com/EC/GAIF-App1.pdf ]


OECD Announces AI Principles, 42 Nations Endorse
EPIC Alert 26.09
May 24, 2019
https://epic.org/alert/epic_alert_26.09.html#_1.__

The OECD this week announced the OECD Principles on Artificial 
Intelligence, the first international standard for AI, with the backing 
of 42 countries. The OECD AI principles make central "the rule of law, 
human rights and democratic values" and set out requirements for 
fairness, accountability and transparency.

OECD Secretary-General Angel Gurría said the OECD AI principles "place the 
interests of people at its heart." Gurría also quoted Alan Turing, who 
once said, "We can only see a short distance ahead, but we can see 
plenty there that needs to be done." Civil society groups, working 
through the CSISAC, played a key role in the development of the OECD AI 
Principles, as did the EPIC Public Voice project.

Earlier this year, EPIC President Marc Rotenberg commended the US 
administration for backing the OECD process, but also wrote in the New 
York Times that there is much more to be done. "The United States must 
work with other democratic countries to establish red lines for certain 
AI applications and ensure fairness, accountability, and transparency as 
AI systems are deployed," EPIC's Rotenberg wrote.

EPIC has also proposed the Universal Guidelines for Artificial 
Intelligence as the basis for AI legislation. The Guidelines aim to 
reduce bias in decision-making algorithms, to ensure that digital 
globalization is inclusive, to create human-centered evidence-based 
policy, to promote safety in AI deployment, and to rebuild trust in 
institutions. The Universal Guidelines have been endorsed by more than 
250 experts and 60 organizations in 40 countries.

_____

OECD (2019) 'Recommendation of the Council on Artificial Intelligence', 
Organisation for Economic Co-operation and Development, 22 May 2019, at
https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

Principles for responsible stewardship of trustworthy AI

1.1. Inclusive growth, sustainable development and well-being

Stakeholders should proactively engage in responsible stewardship of 
trustworthy AI in pursuit of beneficial outcomes for people and the 
planet, such as augmenting human capabilities and enhancing creativity, 
advancing inclusion of underrepresented populations, reducing economic, 
social, gender and other inequalities, and protecting natural 
environments, thus invigorating inclusive growth, sustainable 
development and well-being.

1.2. Human-centred values and fairness

a) AI actors should respect the rule of law, human rights and democratic 
values, throughout the AI system lifecycle. These include freedom, 
dignity and autonomy, privacy and data protection, non-discrimination 
and equality, diversity, fairness, social justice, and internationally 
recognised labour rights.

b) To this end, AI actors should implement mechanisms and safeguards, 
such as capacity for human determination, that are appropriate to the 
context and consistent with the state of art.


1.3. Transparency and explainability

AI Actors should commit to transparency and responsible disclosure 
regarding AI systems. To this end, they should provide meaningful 
information, appropriate to the context, and consistent with the state 
of art:

i. to foster a general understanding of AI systems,

ii. to make stakeholders aware of their interactions with AI systems, 
including in the workplace,

iii. to enable those affected by an AI system to understand the outcome, and,

iv. to enable those adversely affected by an AI system to challenge its 
outcome based on plain and easy-to-understand information on the 
factors, and the logic that served as the basis for the prediction, 
recommendation or decision.

1.4. Robustness, security and safety

a) AI systems should be robust, secure and safe throughout their entire 
lifecycle so that, in conditions of normal use, foreseeable use or 
misuse, or other adverse conditions, they function appropriately and do 
not pose unreasonable safety risk.

b) To this end, AI actors should ensure traceability, including in 
relation to datasets, processes and decisions made during the AI system 
lifecycle, to enable analysis of the AI system’s outcomes and responses 
to inquiry, appropriate to the context and consistent with the state of art.

c) AI actors should, based on their roles, the context, and their ability 
to act, apply a systematic risk management approach to each phase of the 
AI system lifecycle on a continuous basis to address risks related to AI 
systems, including privacy, digital security, safety and bias.

1.5. Accountability

AI actors should be accountable for the proper functioning of AI systems 
and for the respect of the above principles, based on their roles, the 
context, and consistent with the state of art.


-- 
Roger Clarke                            mailto:Roger.Clarke at xamax.com.au
T: +61 2 6288 6916   http://www.xamax.com.au  http://www.rogerclarke.com

Xamax Consultancy Pty Ltd      78 Sidaway St, Chapman ACT 2611 AUSTRALIA
Visiting Professor in the Faculty of Law            University of N.S.W.
Visiting Professor in Computer Science    Australian National University
