
10 October 2022

Over the past decade, algorithms have increasingly been employed to automate or assist public decision-making and service delivery. Indeed, many local and national administrations have turned to automation in search of technical, “unbiased” support in a variety of areas, including urban planning, social care and welfare, education, health, housing, and public surveillance for law enforcement.
As applications of AI in the public sector increase, so does evidence of the potential harm they can cause and, consequently, concern over their legitimacy, accountability, and transparency. As a result, calls for “algorithmic accountability” have grown significantly, and public administrations have become increasingly interested in finding new, innovative ways to respond to them from a policy perspective.
This blog illustrates the ways in which municipalities have attempted to operationalize algorithmic accountability and oversight in cities. It focuses in particular on specific areas of urban public administration: law enforcement, urban planning and design, and public service delivery.
To understand AI accountability and oversight, the foundations of algorithmic accountability must first be explored. Public AI use tends to boil down to a system of algorithms. An algorithm uses a series of steps to automatically turn inputs (such as facial-scan data from street cameras) into outputs (such as identifying persons of interest for police). A system of algorithms thus uses automated reasoning to organize and prioritize inputs, ultimately producing outputs on which decisions can be based.
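As a minimal sketch of this input-to-output logic, the toy Python example below compares a hypothetical facial-scan embedding against a small watchlist and flags close matches. All names, vectors, and the similarity threshold are illustrative assumptions, not a description of any real deployment.

```python
# Minimal, hypothetical sketch of an algorithm as an input -> output pipeline:
# a facial-scan embedding (input) is compared against a watchlist, and IDs
# above a similarity threshold are flagged (output). All data is illustrative.
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def flag_matches(scan: list[float], watchlist: dict[str, list[float]],
                 threshold: float = 0.9) -> list[str]:
    """Input: one facial-scan embedding. Output: watchlist IDs it resembles."""
    return [person_id for person_id, reference in watchlist.items()
            if cosine_similarity(scan, reference) >= threshold]

# The same automated steps run for every camera frame, turning raw inputs
# into a decision-ready output for a human (or another system) to act on.
watchlist = {"person-A": [0.9, 0.1, 0.3], "person-B": [0.1, 0.8, 0.5]}
print(flag_matches([0.88, 0.12, 0.31], watchlist))  # -> ['person-A']
```

Every design choice in such a pipeline, from how faces are turned into numbers to where the threshold is set, shapes the decisions made downstream, which is precisely what accountability mechanisms aim to make answerable.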
Drawing from a recently published report on “Algorithmic accountability for the public sector” by the Ada Lovelace Institute, the AI Now Institute, and the Open Government Partnership (OGP), algorithmic accountability can be defined as “the set of policies oriented towards ensuring that those that build, procure and use algorithms are eventually answerable for their impacts”.
Oversight, then, is a fundamental element for accountability practices to be fully realized. That AI-based policies and decisions are accountable does not necessarily mean they are overseen; something being verifiable does not automatically mean it is verified. Yet when it comes to responsible AI practices, accountability cannot be wholly achieved without oversight. Accountability and oversight are therefore distinct steps of the same process.
Tools to engage in algorithmic accountability and oversight practices include the transparent publication of principles and guidelines, regulatory bans and restrictions, algorithm registers, impact assessments and audits, and rights to appeal.
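To make one of these tools concrete, the sketch below shows the kind of information a single entry in a municipal algorithm register might capture. The fields and values are illustrative assumptions, not a standardized schema.

```python
# Hypothetical sketch of one entry in a municipal algorithm register.
# The fields below are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    system_name: str         # public name of the AI system
    purpose: str             # the decision or service it supports
    operator: str            # department accountable for its use
    vendor: str              # organization that built or supplied it
    data_sources: list[str]  # inputs that feed the system
    impact_assessed: bool    # whether an impact assessment was completed
    appeal_contact: str      # where affected residents can contest outputs

entry = RegisterEntry(
    system_name="Parking permit triage",
    purpose="Prioritize permit applications for manual review",
    operator="City Mobility Department",
    vendor="ExampleVendor Ltd.",
    data_sources=["permit applications", "vehicle registry"],
    impact_assessed=True,
    appeal_contact="algorithms@example.city",
)
print(f"{entry.system_name}: {entry.purpose}")
```

Publishing such entries in a machine-readable format lets journalists, researchers, and residents see which systems are in use and whom to hold answerable for them.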
Finally, the actors that can oversee AI systems and hold them accountable include citizens, tech workers, community organizers, investigative journalists, and civil society organizations. Those to be held accountable, on the other hand, mainly include the policymakers employing a given AI system for public purposes, the companies and organizations involved in designing the tool, and those chiefly responsible for generating and collecting the data that feed it.
Starting from the premise that automated systems can bring about benefits such as speed, efficiency, and sometimes even fairness, it is important to note that they can in fact be fallible and, especially when adopted for public decision-making, harmful.
AI-driven decisions are based on training data (the input) that are automatically organized and turned into other data (the output). When adopting AI to make decisions, it is important to note that training data are always contextual and generated. Indeed, as Rob Kitchin notes in his book Data Lives, data is always generated before being collected; it does not exist in itself. Instead, different things are categorized and turned into numbers, i.e. datafied, according to specific systems of knowledge. Thus, the same thing can be turned into data differently, depending on the context and on who does the datafying.
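The following small sketch illustrates that point: the same underlying fact, a resident's age, becomes different data under two hypothetical categorization schemes.

```python
# Hypothetical sketch of datafication: the same fact (a resident's age)
# becomes different data under different category schemes.

def datafy_age_welfare(age: int) -> str:
    """Categories assumed by a hypothetical welfare program."""
    return "senior" if age >= 60 else "working-age" if age >= 18 else "minor"

def datafy_age_transit(age: int) -> str:
    """Categories assumed by a hypothetical transit-fare system."""
    return "concession" if age >= 65 or age < 12 else "standard"

age = 62
print(datafy_age_welfare(age))  # -> 'senior'
print(datafy_age_transit(age))  # -> 'standard'
# Same person, same age, different data: each scheme embeds its own
# system of knowledge, and downstream decisions inherit that choice.
```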
Ultimately, this makes it essential to ensure that AI systems, and those involved in their design and deployment, are responsible, fair, and equitable, and that these qualities can be verified and accounted for.
Increasingly, local administrations have been at the forefront of operationalizing the concept of AI accountability and oversight. In particular, this blog investigates three specific areas where this is being practiced: law enforcement, urban planning and design, and public service delivery.
Ultimately, accountability and oversight practices of AI systems are essential for responsible AI localism to be practiced and for equitable and accountable public policy to flourish alongside innovation.
***
We are deeply grateful to Christophe Mondin, Researcher at CIRANO, Mona Sloane, Sociologist at New York University and University of Tübingen AI Center, and Ben Snaith, Researcher at the Open Data Institute, for reviewing this blog.
In our sixth blog, we will explore methods to increase transparency around AI and data use by public authorities for stronger legitimacy and trust among the public.
***
Disclaimer: We are not endorsing the examples reported here; they serve only to illustrate the landscape of AI Localism.