CORD-19

By Allen Institute for AI (AI2)

DATASET

A free resource of more than 280,000 scholarly articles about the novel coronavirus (COVID-19) for use by the global research community.


PowerTransformer

By Allen Institute for AI (AI2), University of Washington

MODEL

PowerTransformer is a tool that aims to rewrite text to correct implicit and potentially undesirable bias in character portrayals.


Intel Geospatial

By Intel

PLATFORM

Intel Geospatial is a cloud platform intended to transform the way industries such as utilities, smart cities, and oil and gas manage their assets.


The Building Data Genome 2 (BDG2) Data-Set

By Kaggle

DATASET

BDG2 is an open dataset of readings from 3,053 energy meters across 1,636 buildings.


The LinkedIn Fairness Toolkit (LiFT)

By LinkedIn

LIBRARY

The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows. The library can be deployed in training and scoring workflows to measure biases in training data, evaluate fairness metrics for ML models, and detect statistically significant differences in model performance across subgroups. It can also be used for ad hoc fairness analysis.
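
LiFT itself is a Scala/Spark library, so a faithful example would be Scala; purely as an illustration of the kind of subgroup measurement it performs, here is a minimal Python sketch of one common metric, the demographic parity gap. The function name and toy data are illustrative, not LiFT's API.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Difference in positive-prediction rates between subgroups.

    Mirrors the style of dataset/model fairness metric LiFT computes
    at scale on Spark; this is not LiFT's actual API.
    """
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values())

# Toy example: predictions for two subgroups "A" and "B".
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_gap(y_pred, groups))  # 0.75 - 0.25 = 0.5
```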


StereoSet

By MIT

DATASET

StereoSet is a dataset that measures stereotype bias in language models. It consists of 17,000 sentences that measure model preferences across gender, race, religion, and profession.
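
To make the evaluation idea concrete, here is a minimal sketch that scores a stereotypical and an anti-stereotypical sentence under GPT-2 via Hugging Face Transformers. The sentence pair is invented for illustration; StereoSet's official scoring code and aggregate metrics differ.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(text):
    # Average per-token log-likelihood of the sentence under GPT-2.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

# Invented stereotype / anti-stereotype pair in the spirit of StereoSet:
# a biased model assigns higher likelihood to the stereotypical variant.
stereo = "The nurse said she would be right back."
anti = "The nurse said he would be right back."
print(avg_log_likelihood(stereo), avg_log_likelihood(anti))
```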


SafeLife: Avoiding Side Effects in Complex Environments

By Partnership on AI

BENCHMARK

SafeLife is a reinforcement learning environment designed to test an agent's ability to learn and act safely. The benchmark focuses on the problem of avoiding negative side effects. The SafeLife environment has complex dynamics, procedurally generated levels, and tunable difficulty. Each agent is given a primary task to complete, but there's a lot that can go wrong! Can you train an agent to reach its goal without making a mess of things?
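
SafeLife environments expose a Gym-style interface, so training loops look like any other RL code. The sketch below shows that loop with a random policy against a standard Gym stand-in environment (the classic pre-0.26 Gym API is assumed); swap in a SafeLife environment per its README to benchmark side-effect avoidance.

```python
import gym  # SafeLife environments expose this same Gym interface

# Stand-in environment; replace with a SafeLife env (see the SafeLife
# README for the exact registration names in your release).
env = gym.make("CartPole-v1")

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # random policy placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```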


Medical Open Network for AI (MONAI)

By NVIDIA

FRAMEWORK

The MONAI framework is the open-source foundation being created by Project MONAI. MONAI is a freely available, community-supported, PyTorch-based framework for deep learning in healthcare imaging. It provides domain-optimized foundational capabilities for developing healthcare imaging training workflows in a native PyTorch paradigm.
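
As a taste of those foundational capabilities, here is a minimal sketch that wires two of MONAI's building blocks, a 3D U-Net and a Dice loss, into plain PyTorch. Channel counts and tensor shapes are illustrative, and argument names can vary across MONAI versions.

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# A small 3D U-Net for binary segmentation (illustrative sizes).
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

x = torch.rand(1, 1, 64, 64, 64)             # fake image volume
y = torch.randint(0, 2, (1, 1, 64, 64, 64))  # fake segmentation labels
loss = loss_fn(model(x), y)
loss.backward()  # the usual native-PyTorch training step follows
```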


Adversarial ML Threat Matrix

By Microsoft, IBM, NVIDIA, Bosch, Airbus, The MITRE Corporation, PwC, Software Engineering Institute – Carnegie Mellon University

FRAMEWORK

Industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems.


Open Differential Privacy

By Microsoft, Harvard University

TOOLKIT

This toolkit uses state-of-the-art differential privacy (DP) techniques to inject noise into data, preventing disclosure of sensitive information and managing exposure risk...
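
The toolkit's API has evolved over time (the project later fed into SmartNoise/OpenDP), so rather than guess at its calls, here is a minimal NumPy sketch of the Laplace mechanism, the basic noise-injection technique this style of toolkit builds on.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale = sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Example: a counting query has sensitivity 1 (adding or removing one
# person changes the count by at most 1). Release it with epsilon = 0.5.
true_count = 1234
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```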


eXplainability Toolbox

By The Institute for Ethical AI & Machine Learning

TOOLKIT

XAI - An eXplainability toolbox for machine learning


Fairlearn

By Microsoft Research

PACKAGE

A Python package to assess and improve the fairness of machine learning models.
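
A minimal sketch of the assessment side, using Fairlearn's MetricFrame to disaggregate metrics by a sensitive feature. The toy data is invented, and the API shown assumes a recent Fairlearn release.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Toy data: features X, labels y, and a binary sensitive feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)
sensitive = rng.integers(0, 2, size=200)

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# MetricFrame computes each metric per subgroup of the sensitive feature.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y, y_pred=y_pred, sensitive_features=sensitive,
)
print(mf.by_group)      # metric values per subgroup
print(mf.difference())  # largest between-group gap per metric
```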


InterpretML

By Microsoft

TOOLKIT

Fit interpretable models. Explain black-box machine learning...
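
A minimal sketch of the "fit interpretable models" half, using InterpretML's Explainable Boosting Machine on a scikit-learn dataset; details may vary by version.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

# Fit one of InterpretML's glassbox models; its explanations are exact
# because the model itself is interpretable (a sum of shape functions).
data = load_breast_cancer()
ebm = ExplainableBoostingClassifier()
ebm.fit(data.data, data.target)

global_exp = ebm.explain_global()  # per-feature shape functions
local_exp = ebm.explain_local(data.data[:5], data.target[:5])
# In a notebook: from interpret import show; show(global_exp)
```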


FairTest: Discovering Unwarranted Associations in Data-Driven Applications

By IEEE

FRAMEWORK

FairTest introduces the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities...


SHAP (SHapley Additive exPlanations)

TOOLKIT

A game-theoretic approach to explaining the output of any machine learning model.
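
A minimal sketch of typical SHAP usage: fit a tree ensemble, then compute exact Shapley values for it with TreeExplainer. The dataset and model choice are illustrative.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: feature importance and direction of effect per feature.
shap.summary_plot(shap_values, X)
```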


From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices

RESEARCH

Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs...


Remove problematic gender bias from word embeddings

RESEARCH

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger faces us with word embeddings, a popular framework for representing text data as vectors that has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases...
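
The measurement at the heart of the paper can be sketched in a few lines: define a gender direction in embedding space and project occupation words onto it. The sketch below uses gensim and a single she/he pair for the direction (the paper uses PCA over many definitional pairs); the vectors file path is illustrative.

```python
import numpy as np
from gensim.models import KeyedVectors

# Load word2vec vectors trained on Google News (path is illustrative).
wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# A crude one-pair gender direction; the paper derives a more robust
# direction via PCA over many she/he-style definitional pairs.
g = wv["she"] - wv["he"]
g /= np.linalg.norm(g)

def gender_projection(word):
    v = wv[word] / np.linalg.norm(wv[word])
    return float(v @ g)  # > 0 leans "she", < 0 leans "he"

for w in ["nurse", "engineer", "teacher", "programmer"]:
    print(w, round(gender_projection(w), 3))
```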


Fairness in Classification

RESEARCH

In this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy...
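
The underlying measurement is easy to operationalize: compare misclassification rates (false positive and false negative rates) across groups, where a large gap signals disparate mistreatment. A minimal NumPy sketch with invented data:

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """False positive and false negative rates per subgroup; a large
    gap across groups indicates disparate mistreatment."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        fpr = np.mean(y_pred[m][y_true[m] == 0] == 1)
        fnr = np.mean(y_pred[m][y_true[m] == 1] == 0)
        out[g] = {"FPR": fpr, "FNR": fnr}
    return out

y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_error_rates(y_true, y_pred, groups))
```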


Fairness Comparison

TOOLKIT

Comparing fairness-aware machine learning techniques...


Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation

LIBRARY

themis-ml is a Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms...