AUTOMATING SOCIETY REPORT 2020

By AlgorithmWatch

RECOMMENDATIONS

Germany

In light of the findings detailed in the 2020 edition of the Automating Society report, we recommend the following set of policy interventions to policymakers in the EU Parliament and Member States’ parliaments, the EU Commission, national governments, researchers, civil society organizations (advocacy organizations, foundations, labor unions, etc.), and the private sector (companies and business associations). The recommendations aim to better ensure that automated decision-making (ADM) systems currently being deployed, and those about to be implemented, throughout Europe are effectively consistent with human rights and democracy.


The EqualAI Framework

By EqualAI

FRAMEWORK

EqualAI helps companies reduce bias in their AI by addressing each touchpoint and the full spectrum of the issue.


AI explained – A non-technical guide for policymakers

By AI for PEACE

PRINCIPLES & GUIDELINES

The Guide offers explanations and additional resources (videos, articles, papers, and tutorials) to help policymakers prepare for current and future AI developments and impacts. It serves as an open resource, welcoming all comments and suggestions for improvement, and inviting continuing dialogue on explaining AI and keeping up with its developments.


Policy Guidance on AI for Children

By UNICEF

RECOMMENDATIONS

As part of our Artificial Intelligence for Children Policy project, UNICEF has developed this guidance to promote children's rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine these rights.


A Practical Guide to Building Ethical AI

By Harvard Business Review

PRINCIPLES & GUIDELINES



On Artificial Intelligence – A European approach to excellence and trust

By European Commission

RECOMMENDATIONS

Europe

On 19 February 2020, the European Commission published a White Paper aiming to foster a European ecosystem of excellence and trust in AI.



The Ethics of AI Ethics – An Evaluation of Guidelines

RESEARCH

Current advances in the research, development, and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the “disruptive” potentials of new AI technologies...


Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing

RESEARCH

In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the organization's internal development life-cycle...


Using AI for social good

By Google

PRINCIPLES & GUIDELINES

This guide helps nonprofits and social enterprises learn how to apply artificial intelligence and machine learning to social, humanitarian and environmental challenges...


Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development

By Future of Humanity Institute – University of Oxford

STANDARDS

UK

Standards are an institution for coordination. Standards ensure that products made around the world are interoperable. They ensure that management processes for cybersecurity, quality assurance, environmental sustainability, and more are consistent wherever they take place. Standards provide the institutional infrastructure needed to develop new technologies, and they provide safety procedures to do so in a controlled manner. Standards can do all of this, too, in the research and development of artificial intelligence (AI)...


From What to How: An initial review of publicly available AI Ethics Tools, Methods and Research to translate principles into practices

RESEARCH

The debate about the ethical implications of Artificial Intelligence dates from the 1960s. However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such debate has primarily focused on principles (the what of AI ethics) rather than on practices (the how). Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs...


Decision Point in AI Governance

By Center for Long-Term Cybersecurity – UC Berkeley

RESEARCH

USA

This paper provides an overview of efforts already under way to resolve the translational gap between principles and practice, ranging from tools and frameworks to standards and initiatives that can be applied at different stages of the AI development pipeline. The paper presents a typology and catalog of 35 recent efforts to implement AI principles, and explores three case studies in depth. Selected for their scope, scale, and novelty, these case studies can serve as a guide for other AI stakeholders — whether companies, communities, or national governments — facing decisions about how to operationalize AI principles...


Safe Face Pledge

By The Algorithmic Justice League, The Center on Technology & Privacy at Georgetown Law

PRINCIPLES & GUIDELINES

The Safe Face Pledge is an opportunity for organizations to make public commitments towards mitigating the abuse of facial analysis technology. This historic pledge prohibits lethal use of the technology and lawless police use, and requires transparency in any government use...


Guide to Data Protection

By ICO

PRINCIPLES & GUIDELINES

UK

This guide is for data protection officers and others who have day-to-day responsibility for data protection. It is aimed at small and medium-sized organisations, but it may be useful for larger organisations too...


Linking Artificial Intelligence Principles

By Institute of Automation – Chinese Academy of Sciences

PRINCIPLES & GUIDELINES

The following table presents an analysis of different AI Principles worldwide (currently 74 proposals), grouped by coarser topics to highlight the consensus across the various proposals...


Microsoft AI principles

By Microsoft

PRINCIPLES & GUIDELINES

We put our responsible AI principles into practice through the Office of Responsible AI (ORA) and the AI, Ethics, and Effects in Engineering and Research (Aether) Committee. The Aether Committee advises our leadership on the challenges and opportunities presented by AI innovations. ORA sets our rules and governance processes, working closely with teams across the company to enable the effort.


GE Healthcare AI principles

By GE Healthcare

PRINCIPLES & GUIDELINES

We are publishing the following AI principles, which we will apply to improve healthcare quality, cost, access and the patient experience...


Exploring the future of responsible AI in government

By Government of Canada

PRINCIPLES & GUIDELINES

Artificial intelligence (AI) technologies offer promise for improving how the Government of Canada serves Canadians. As we explore the use of AI in government programs and services, we are ensuring it is governed by clear values, ethics, and laws...