Blog 5: Accountability and Oversight of AI in the Local Public Sector

10 October 2022

Photo by DeepMind on Unsplash

Over the past decade, algorithms have been increasingly employed to automate or assist public decision-making processes and service delivery. Indeed, many local and national administrations have turned to automation in search of technical, “unbiased” support in a variety of areas, including urban planning, social care and welfare, education, health, housing, and public surveillance for law enforcement.

As applications of AI in the public sector multiply, so does evidence of the potential harms they can cause and, consequently, concern over their legitimacy, accountability, and transparency. As a result, calls for the emerging concept of “algorithmic accountability” have grown significantly, and public administrations have become increasingly interested in finding new, innovative ways to respond to such calls from a policy perspective.

This blog aims to illustrate the ways in which municipalities have attempted to operationalize the concepts of algorithmic accountability and oversight in cities. It focuses in particular on specific areas of urban public administration: law enforcement, urban planning and design, and public service delivery.

What is AI accountability and oversight?

In order to understand AI accountability and oversight, the foundations of algorithmic accountability first need to be explored. Public AI use tends to boil down to a system of algorithms. An algorithm uses a series of steps to automatically turn inputs (such as facial scan data from street cameras) into outputs (such as identifying people of interest for the police). A system of algorithms thus uses automated reasoning to organize and prioritize inputs, ultimately producing outputs upon which decisions can be made.
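
To make this input–output framing concrete, below is a minimal, purely illustrative sketch in Python. The function names, the feature encoding, and the 0.9 matching threshold are all hypothetical assumptions for illustration, not drawn from any real deployment.

    # Purely illustrative toy "algorithm": fixed steps turn inputs into outputs.
    # Feature encoding and threshold are hypothetical, not from a real system.

    def similarity(scan, reference):
        # Toy similarity score: the fraction of matching features.
        matches = sum(1 for a, b in zip(scan, reference) if a == b)
        return matches / len(reference)

    def flag_persons_of_interest(street_scans, watchlist, threshold=0.9):
        # Input: facial-scan feature vectors; output: flagged identities.
        flagged = []
        for scan in street_scans:
            for name, reference in watchlist.items():
                if similarity(scan, reference) >= threshold:
                    flagged.append(name)
        return flagged

    watchlist = {"person_a": [1, 0, 1, 1]}
    print(flag_persons_of_interest([[1, 0, 1, 1], [0, 0, 0, 0]], watchlist))
    # prints ['person_a']

The point is the structure rather than the matching logic: fixed steps turn inputs into decision-relevant outputs, and every design choice along the way (here, the threshold that separates a “match” from a “non-match”) is a human judgement for which someone can be held accountable.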

Drawing from a recently published report on “Algorithmic accountability for the public sector” by the Ada Lovelace Institute, the AI Now Institute, and the Open Government Partnership (OGP), algorithmic accountability can be defined as “the set of policies oriented towards ensuring that those that build, procure and use algorithms are eventually answerable for their impacts”.

Oversight, then, is a fundamental element for accountability practices to be fully realized. Indeed, the fact that AI-based policies and decisions are accountable does not necessarily mean that they are overseen: something being verifiable does not automatically mean it is verified. Yet when it comes to responsible AI practices, accountability cannot be wholly achieved without oversight. Accountability and oversight are therefore distinct steps of the same process.

Tools to engage in algorithmic accountability and oversight practices include the transparent publication of principles and guidelines, regulatory bans and restrictions, algorithm registers, impact assessments and audits, and rights to appeal. 

Finally, the actors that can oversee AI systems and hold them accountable include citizens, tech workers, community organizers, investigative journalists, and civil society organizations. The actors to be held accountable, in turn, mainly include the policymakers employing a given AI system for public purposes, the companies and organizations involved in designing the tool, and those chiefly involved in generating and collecting the data that feed it.

Why AI accountability and oversight?

Starting from the premise that automated systems can bring about benefits such as speed, efficiency, and sometimes even fairness, it is important to note that they can in fact be fallible and, especially when adopted for public decision-making, harmful.

AI-driven decisions are based on training data (the input) that are automatically organized and turned into other data (the output). When adopting AI to make decisions, it is important to note that training data are always contextual and generated. Indeed, as Rob Kitchin notes in his book Data Lives, data is always generated before being collected; it does not exist in itself. Rather, different things are categorized and turned into numbers (i.e. datafied) according to specific systems of knowledge. The same thing can thus be turned into data differently, depending on the context and on who is doing the datafying.
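
To make this point concrete, here is a minimal sketch, with entirely hypothetical income figures and category schemes: the same raw observation yields different data depending on the system of categories applied to it.

    # Hypothetical example: the same income figure is "datafied" differently
    # under two categorization schemes, producing different records.

    def datafy(income, scheme):
        # Return the label of the first category whose range contains the income.
        for label, (low, high) in scheme.items():
            if low <= income < high:
                return label

    # Two systems of knowledge, two encodings of the same person.
    scheme_a = {"low": (0, 30_000), "middle": (30_000, 100_000),
                "high": (100_000, float("inf"))}
    scheme_b = {"eligible": (0, 25_000), "ineligible": (25_000, float("inf"))}

    income = 28_000
    print(datafy(income, scheme_a))  # prints 'low'
    print(datafy(income, scheme_b))  # prints 'ineligible'

Under one scheme the person is simply “low income”; under the other, the very same figure makes them “ineligible” for support. Neither record is the raw reality; both are generated.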

Ultimately, this makes it essential to ensure that AI systems, and those involved in their design and deployment, are responsible, fair, and equitable, and that this can be verified and accounted for.

How are AI accountability and oversight realized in cities?

Increasingly, local administrations have been at the forefront of operationalizing the concept of AI accountability and oversight. In particular, this blog investigates three specific areas where this is being practiced: law enforcement, urban planning and design, and public service delivery. 

Law enforcement

  • Shenzhen, often known as China’s Silicon Valley, has become the first local government in China to regulate its artificial intelligence applications. The Regulations on the Promotion of Artificial Intelligence Industry of Shenzhen Special Economic Zone “seek to promote the use and development of AI in both the public and private sectors, establish a framework to govern the approval of AI products and services, and regulate AI usage ethics.” The regulation outlines incentives for public-private collaboration, data oversight systems, comprehensive data collection and monitoring methods, and government oversight of big tech.
  • Following a decision by the City Council of Portland, Maine, to ban the use of facial surveillance technology by its police force, legislators in the Maine House of Representatives voted in favor of a proposal that allows the use of facial recognition technology for the investigation of serious crimes, such as rape and murder. However, the “bill would require police to have probable cause before they use facial recognition in the investigation of a crime and would limit searches to databases maintained by the Department of Motor Vehicles or the Federal Bureau of Investigation.”
  • The Santa Cruz City Council banned the use of predictive policing (the use of crime data and algorithms to predict where offenses are likely to occur) and facial recognition technologies by law enforcement authorities. The decision was backed by civil liberties and racial justice groups in the city, who drew attention to the racially discriminatory outcomes that these technologies often produce.

Urban planning and design

  • The City of Syracuse’s Office of Accountability, Performance, and Innovation (OAPI) “develops innovative solutions to Syracuse’s most pressing problems. It leverages idea generation techniques and utilizes a structured, human-centered and data-driven approach to affect change and deliver results within the city.” Regarding the risk management of local AI implementation, “OAPI and the rest of the City government consider developing and using an evaluative framework to ensure completeness and consistency in their decision making central to this task.”
  • Starting in 2019, the Iowa state legislature approved the use of autonomous (self-driving) vehicles on public highways, provided the vehicle met “certain conditions including that the vehicle must be capable of attaining minimal risk if the automated driving system malfunctions.” Further regulations and adherence to traffic standards have since been instituted, including a requirement that manufacturers may not test self-driving cars without a valid permit, and the granting of oversight authority to the Transportation Commission to “restrict operation” of an autonomous vehicle on a road.
  • For cities, water and sanitation form the bedrock of urban development. In particular, sewer systems act as “a form of insurance against future service disruptions.” Washington, D.C.’s Water and Sewer Authority has enlisted Pipe Sleuth, an AI-powered pipe inspection tool, to provide vital information about the quality and state of its wastewater collection and treatment infrastructure.

Public service delivery

  • Through Local Law 49, New York City established an Automated Decision Systems (ADS) Task Force in 2018 to review the use of algorithms by city agencies and offices and to ensure the “fairer and more equitable” use of these tools. On the recommendation of the task force, the role of Algorithms Management and Policy Officer was created within the Mayor’s Office of Operations, with the aim of developing policies and guidelines for using AI systems in public service delivery in an equitable and accountable manner.
  • Adopting a children’s rights perspective, the New York City Council created a Special Task Force in 2017 to “investigate city agencies’ use of algorithms and deliver a report with recommendations” for improving the welfare and tracking of children in protective care.
  • The City of Amsterdam is developing an algorithm register that aims to give an overview of all the artificial intelligence systems and algorithms used by the city. This allows residents to be aware of, give feedback on, and actively participate in the employment of AI systems for urban service delivery in Amsterdam (a sketch of the kind of entry such a register might contain follows below).
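
To give a flavor of what such a register records, below is a hypothetical entry sketched in Python. The field names and values are illustrative assumptions modeled on the kinds of metadata public algorithm registers typically publish, not Amsterdam’s actual schema.

    # Hypothetical register entry; fields and values are illustrative,
    # not Amsterdam's actual schema.
    register_entry = {
        "name": "Parking permit triage",
        "purpose": "Prioritize permit applications for manual review",
        "department": "Urban Mobility",
        "data_used": ["application form fields", "permit history"],
        "human_oversight": "Final decisions are made by a case officer",
        "contact": "algorithms@example.city",
        "feedback_channel": "https://example.city/algorithm-register/feedback",
    }

Publishing entries like this one lets residents see which systems are in use, for what purpose, and on what data, which is the precondition for the feedback and participation the register is meant to enable.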

Conclusion

Ultimately, accountability and oversight of AI systems are essential for responsible AI localism to be practiced, and for equitable and accountable public policy to flourish alongside innovation.

***

We are deeply grateful to Christophe Mondin, Researcher at CIRANO, Mona Sloane, Sociologist at New York University and University of Tübingen AI Center, and Ben Snaith, Researcher at the Open Data Institute, for reviewing this blog.

In our sixth blog, we will explore methods to increase transparency around AI and data use by public authorities for stronger legitimacy and trust among the public.

***

Disclaimer: We are not endorsing the examples reported here; they serve only to paint a picture of AI Localism.