Hana Nagel: Simple Ethics for Complex Technology

Thu, Nov 12 2020, 6PM - 7PM

Part of event series: Fall 2020 Design Lecture Series

Organized by

Graduate Interaction Design

ninaevez@cca.edu

Event description

The lecture recording is now available to the CCA community for educational purposes.

The Design Division at CCA welcomed Hana Nagel as our eighth and final speaker in the Fall 2020 Design Lecture Series. These lectures bring leading designers, strategists, curators, and educators to speak with our community. The Fall 2020 series speaks to design as a tool for empowerment.


Hana Nagel is an AI researcher with Element AI, an artificial intelligence company headquartered in Montreal. Having worked in nonprofits, startups, and large enterprises, Nagel brings an evident commitment to theorizing change in pursuit of more ethical AI. In a captivating talk, Nagel leverages a series of case studies to guide us through what she calls “simple ethics for complex technology”. In the end, what Nagel offers us is a framework for weaving critical consciousness, integration, and collaboration into the end-to-end process of building AI models.

Presenting an approachable definition of AI to ground us, Nagel turns to the work of John McCarthy, a professor and computer scientist at Stanford University. Relying in part on McCarthy’s definition of AI, Nagel describes “intelligence” as “the computational part of the ability to achieve goals in the world”. Likewise, she describes the “artificial” context as “the science and engineering of making intelligent machines, especially intelligent computer programs that have the ability to solve problems”. Offering some second-order definitions to ground us even further, Nagel details the various branches, applications, and use cases of AI.

Nagel’s ethical concerns lie at the intersection of AI’s growth and proliferation and the inherent complexity of maximizing its benefits while minimizing its harms. She makes the point that despite the adoption of AI principles by an ecosystem of stakeholders in the space, efforts to minimize the harm of AI have not been pursued with the same intent as efforts to maximize its benefits. Nagel is keenly aware not just of the gap between ‘principles’ and ‘practice’ but also of the potential harm to human lives.

Before illuminating her framework for more ethical AI, Nagel again turns to prior research in the AI space. In the AI Principles adopted by the Organisation for Economic Co-operation and Development (OECD), Nagel finds five digestible principles of AI: ‘Inclusive Growth’, ‘Human-Centered Values’, ‘Transparency and Explainability’, ‘Robustness, Security and Safety’, and ‘Accountability’. She picks up the mantle of ‘accountability’, given its high-level priority. Under ‘accountability’ we find that “AI actors should be accountable for the proper functioning of AI systems” and “for the respect of the prior four principles”.

Through a series of case studies, Nagel argues for accountability when AI systems cause harm to human beings. In the case of an Uber self-driving car crash that killed a pedestrian, Nagel refers to a paper by Madeleine Clare Elish that “explores the concept of accountability when there’s distributed agency in a complex autonomous AI system”. In the case of Babylon, Nagel discusses an AI-powered health chatbot that presented different results for a male user and a female user even though both had entered the same information about their symptoms. Through Ring, a home security system from Amazon that uses facial recognition technology, we also learn of the racial bias that AI can perpetuate. Nagel tells us that facial recognition technology has a notably “higher rate of error with people of color”.

In conclusion, Nagel offers us one way to bridge the gap between principles and practice in order to minimize the harm to human lives. Dubbing it a framework for an “outcomes-based approach”, Nagel’s “simple ethics” leverages a set of key questions that can be applied at each phase in the flow of creating AI models: concept design, model development, verification and validation, implementation and use, and ongoing monitoring. In this approach, Nagel suggests working backwards from the planned outcome to home in on the events, activities, and shifts that lead to it. So instead of asking “How Might We?”, Nagel says we should shift to asking, “What Happens If We?”.

Entry details

Online via Zoom