
2019 Assembly Projects


THE 2019 ASSEMBLY COHORT came together to work on the challenge of artificial intelligence and its governance. Over four months, Assemblers took part in a short course taught by Jonathan Zittrain and Joi Ito, participated in team-building and ideation activities, and developed their projects.

This year's projects tackle a range of problems. Kaleidoscope: Positionality-Aware Machine Learning interrogates the creation of classification systems. Surveillance State of the Union highlights the risks of pursuing surveillance-related work in AI. Watch Your Words examines the expansion of Natural Language Processing / Natural Language Understanding systems. Finally, AI Blindspot offers a process for preventing, detecting, and mitigating bias in AI systems.

Read more below about the four projects developed during Assembly 2019.

Kaleidoscope: Positionality-Aware Machine Learning

YouTube Presentation

"Unbiased" data and ML systems are risky fiction; there is no view from nowhere. The Kaleidoscope: Positionality - Aware Machine Learning project explores the development of positionality-aware ML/AI systems.

ML/AI systems are trained on data, and classification systems enable the creation and curation of data sets. Classification systems are, simply put, sets of boxes into which things can be put (e.g., the International Classification of Diseases, ICD). In designing such a system, one decides what can and will be visible in data sets. Context shapes these decisions (e.g., the discovery of HIV required changes to the ICD). Classification systems are informed by the perspectives, experiences, and knowledge of their creators. As such, categories are data, too, and classification systems have positionality, an inherited perspective.
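To make that idea concrete, here is a minimal, hypothetical sketch in Python (not drawn from the project itself) of how the choice of category scheme determines what a dataset can show. The case records, schema names, and labels are all invented for illustration.

```python
# Hypothetical sketch: the same records, classified under two schemas.
from collections import Counter

# Toy case records; the field names and values are invented.
cases = [
    {"finding": "kaposi_sarcoma"},
    {"finding": "pneumocystis_pneumonia"},
    {"finding": "influenza"},
]

# Schema A: a coding scheme with no category for immunodeficiency.
schema_a = {
    "kaposi_sarcoma": "cancer",
    "pneumocystis_pneumonia": "pneumonia",
    "influenza": "respiratory_infection",
}

# Schema B: the same scheme after adding an immunodeficiency category
# (analogous to the ICD revisions that followed the discovery of HIV).
schema_b = dict(schema_a,
                kaposi_sarcoma="immunodeficiency_related",
                pneumocystis_pneumonia="immunodeficiency_related")

for name, schema in [("Schema A", schema_a), ("Schema B", schema_b)]:
    counts = Counter(schema[c["finding"]] for c in cases)
    print(name, dict(counts))

# Schema A scatters one underlying phenomenon across unrelated boxes;
# Schema B makes it countable. The categories, not the raw records,
# decide what the resulting dataset can make visible.
```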


Surveillance State of the Union

YouTube Presentation

Surveillance State of the Union is a data visualization and set of short illustrative cases that seeks to raise awareness among tech workers, academics, military decision-makers, and journalists about the risks of pursuing surveillance-related work in AI. Work that a researcher may think of as theoretical has very real consequences for people subjected to state surveillance, as evidenced by the suppression of the Uyghur minority in China's Xinjiang province and of other marginalized communities around the world.

The project leveraged a variety of data sources, such as government contracts, co-authored papers, and public releases, to begin mapping the surveillance research network. The work shows, for example, overlap between universities collaborating on US state-funded surveillance research and similar research by Chinese companies implicated in Xinjiang.
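A rough sketch of that mapping approach, using the networkx library for illustration: build a graph linking institutions to surveillance-related work extracted from public records, then look for institutions connected to both US-funded and Xinjiang-linked projects. The institution and project names below are placeholders, not the project's actual data.

```python
# Hypothetical sketch of mapping a research network from public records.
import networkx as nx

# (institution, project, funding_tag) triples, as might be extracted
# from contracts, co-authored papers, and press releases. All invented.
records = [
    ("University A", "face-recognition paper 1", "us_state_funded"),
    ("University A", "crowd-analysis paper 2",   "xinjiang_linked"),
    ("University B", "face-recognition paper 1", "us_state_funded"),
    ("Company C",    "crowd-analysis paper 2",   "xinjiang_linked"),
]

G = nx.Graph()
for inst, project, tag in records:
    G.add_node(inst, kind="institution")
    G.add_node(project, kind="project", tag=tag)
    G.add_edge(inst, project)

def tags(inst):
    """Funding tags of every project an institution is linked to."""
    return {G.nodes[p]["tag"] for p in G.neighbors(inst)}

# Institutions that appear in both categories of work.
overlap = [n for n, d in G.nodes(data=True)
           if d["kind"] == "institution"
           and {"us_state_funded", "xinjiang_linked"} <= tags(n)]
print(overlap)  # -> ['University A']
```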

Watch Your Words

YouTube Presentation

Watch Your Words examines the expansion of Natural Language Processing / Natural Language Understanding systems. More and more often, people are asked to interact with these systems in order to access education, job markets, customer service, medical care, and government services. Without active attention, biases encoded in written language will be reinforced, extended, and perpetuated in these systems, resulting in multiple kinds of harm to vulnerable populations.

Because discussion of bias needs to move beyond the machine-learning community to include developers who build applications based on "off-the-shelf" models, Watch Your Words will present evidence of these biases, explore approaches to raise awareness of bias, define harms visited on vulnerable groups, and suggest approaches for bias mitigation.
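As a minimal sketch of how such bias evidence is often gathered, one can compare cosine similarities between occupation words and gendered words in a word embedding. The tiny vectors below are invented stand-ins chosen to mimic a known skew; in practice one would load real pretrained vectors (e.g., word2vec or GloVe). This is an illustration of the general technique, not the project's own method.

```python
# Hypothetical sketch: measuring a gender skew in toy word vectors.
import numpy as np

emb = {  # invented 3-d "embeddings" standing in for real ones
    "he":       np.array([1.0, 0.1, 0.0]),
    "she":      np.array([0.1, 1.0, 0.0]),
    "engineer": np.array([0.9, 0.2, 0.3]),
    "nurse":    np.array([0.2, 0.9, 0.3]),
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("engineer", "nurse"):
    skew = cos(emb[word], emb["he"]) - cos(emb[word], emb["she"])
    print(f"{word}: he-vs-she skew = {skew:+.2f}")

# A positive skew means the word sits closer to "he" than to "she";
# any application built on these vectors quietly inherits that skew.
```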

AI Blindspot

YouTube Presentation

AI Blindspot offers a process for preventing, detecting, and mitigating bias in AI systems.

Organizations lack a framework for preventing, detecting, and mitigating bias in AI systems. Audit tools often focus on specific parts of a system rather than the entire AI pipeline, which can lead to unintended consequences. AI Blindspot is a discovery process to help AI developers and teams evaluate and audit how their systems are conceptualized, built, and deployed. We produced a set of printed and digital prompt cards to help teams identify and address potential blindspots.
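One way to picture the prompt-card idea is a card per pipeline stage, so an audit walks the whole system rather than a single part. The sketch below is hypothetical; the stage names and card texts are paraphrased illustrations, not the project's actual wording.

```python
# Hypothetical sketch of prompt cards keyed to pipeline stages.
from dataclasses import dataclass

@dataclass
class Card:
    stage: str   # where in the AI pipeline the blindspot can arise
    prompt: str  # the question a team asks itself at that stage

CARDS = [
    Card("planning",   "Whose problem does this system solve, and who is absent?"),
    Card("data",       "Is the training data representative of everyone affected?"),
    Card("building",   "Which features could act as proxies for a protected attribute?"),
    Card("deployment", "Who monitors outcomes once live, and how is harm reported?"),
]

def audit(cards, answers):
    """Flag any pipeline stage whose prompt went unanswered."""
    for card in cards:
        if not answers.get(card.stage, "").strip():
            print(f"Blindspot risk at '{card.stage}': {card.prompt}")

# Example: a team that documented its data work but nothing else.
audit(CARDS, {"data": "Sampled across all affected regions and groups."})
```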
