Blog 6: Realizing Local Transparency to Mitigate Risks to the Public

29 September 2021

Photo by Wilhelm Gunkel on Unsplash

A significant concern about AI use in the public sector is the opacity of AI-based tools. As described in Blog 5, an algorithm automatically turns inputs into outputs, enabling more efficient decision-making. However, a lack of information about, and traceability of, how those outputs were reached creates ‘black-box’ AI that cannot be fully audited, overseen, or explained to either citizens or policymakers.

This blog therefore aims to illustrate the ways in which municipalities have attempted to operationalize the concept of algorithmic transparency in cities. It focuses in particular on local attempts to regulate the use of black-box algorithms and on examples of informational registries as tools for implementing principles of transparency.

What is AI transparency?

In addition to being a technical system made of a series of algorithms that automatically turn data (inputs) into other data (outputs) upon which decisions can be made, AI is also a social system, made of policies, laws, social contexts and norms, and cultures. In short, AI is a sociotechnical system. As a consequence, AI transparency needs to be conceptualized and operationalized from both a technical and a social perspective.

From a technical perspective, the issue of transparency is particularly hard to tackle, mainly because of the intricacies of what is known as explainability. Especially when the system in place works through machine learning (ML), it is often hard to trace the reasons why the system reached a certain decision. To address meaningful transparency and explainability from a technical, data-focused perspective, Timnit Gebru and others advance the idea of datasheets for datasets. These are aimed at providing “standard operating characteristics, test results, recommended usage” as well as the data set’s “creation, composition, intended uses, maintenance, and other properties”.
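
To make the idea concrete, here is a minimal sketch of what a datasheet might look like if encoded as a data structure. The field names are illustrative paraphrases of the question categories proposed by Gebru and colleagues; an actual datasheet is a prose document answering those questions in detail, not a fixed schema, and the example entry below is entirely hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of a "datasheet for a dataset".
# Field names loosely paraphrase the question categories from
# Gebru et al.; real datasheets are prose documents, not schemas.
@dataclass
class Datasheet:
    motivation: str          # Why was the dataset created, and by whom?
    composition: str         # What do the instances represent? Any sensitive attributes?
    collection_process: str  # How, when, and from whom was the data collected?
    recommended_uses: List[str] = field(default_factory=list)  # Tasks the data suits
    discouraged_uses: List[str] = field(default_factory=list)  # Tasks it should not support
    maintenance: str = ""    # Who maintains the dataset, and how is it updated?

# Hypothetical entry for a benefits-fraud training set
sheet = Datasheet(
    motivation="Train a model to flag benefits claims for manual review.",
    composition="Historical claims records, including demographic attributes.",
    collection_process="Exported from the municipal case-management system, 2015-2019.",
    recommended_uses=["prioritizing manual audits"],
    discouraged_uses=["fully automated benefit denials"],
    maintenance="Reviewed annually by the city's data office.",
)
print(sheet.discouraged_uses)
```

Publishing even this much alongside a dataset gives auditors and citizens a starting point for answering the questions listed in the next section.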

From a social and policy perspective, AI transparency is tied to a series of key questions that need to be answered for a system to be deemed “transparent”, covering details such as the system’s purpose, the social context around its design and use, and its intended users and beneficiaries.

For instance, some of the questions can be:

  • Why is the system being developed?
  • Are there alternatives to the system? Why was this one selected?
  • What data is needed for the system to work?
  • What are the data sources of the system?
  • Who is designing the system?
  • Who is using the system?
  • What are the consequences of, and the necessary conditions for, using the system in contexts other than the one it was initially intended for?
  • How does the system impact the environment and the lives of the communities and individuals involved in its design, development, and implementation?

Why AI transparency?

As mentioned in our previous blog, growing concern about the negative and discriminatory impacts automated systems can have has escalated calls for accountability and transparency.

Indeed, knowing how AI-based decisions are made requires both accountability and transparency. Accountability, as previously detailed, refers to the ability and practice of holding AI systems, their designers, and those who deploy them accountable for their impacts. Transparency toward users and regulators is essential for understanding an AI system’s processes and for addressing the technology’s discriminatory and intrusive effects more effectively.

Cities have started taking significant steps to address opaque, unexplainable AI in public-sector use and to ensure that the tools deployed are transparent and traceable by key stakeholders. Indeed, as mentioned in one of our previous blogs, both national and municipal governments are pioneering efforts to institutionalize algorithmic transparency as part of their AI strategies.

How is AI transparency realized in cities?

It seems increasingly essential for public administrations to share a set of well-defined, accessible, and understandable information on the development, deployment, and evaluation of algorithmic tools used in public decision-making. This can be done in a number of ways, and this blog focuses on (a) the regulation of black-box algorithms and (b) the use of informational registries.

Regulating the Local Use of Black-Box Algorithms

To illuminate the use and inner workings of public AI, cities have taken steps to disclose and monitor where and why algorithms are applied. These measures allow citizens to monitor AI technologies and contact those in charge of them, and encourage the thoughtful use, or limitation, of AI.

  • In 2018, the municipality of Rotterdam’s Data-Driven Working Program investigated an algorithm used to detect benefits fraud. The program discovered that the algorithm “ha[d] been trained with biased data,” which was especially alarming because the model used demographic and personal characteristics to gauge whether a benefits recipient was a high fraud risk. Moreover, the ways in which decisions were made, and the reasons for certain decisions, were often “impossible to trace,” making it difficult for citizens to understand how algorithms played a role in assessing them. Due to these explainability and transparency issues, the algorithm has not been cleared for official use.
  • In 2020, Portland, Oregon, instituted sweeping facial recognition regulation by passing two ordinances that ban the use of surveillance software by private and public agencies in public spaces. This legislation prohibits the use of facial recognition tools on video from public and private surveillance because of their “non-transparent” use of information gathered from police body cameras and hotel and pharmacy cameras to identify and target people.
  • Asheville, North Carolina, created an Office of Data and Performance (ODAP) that informs Asheville residents about how local government uses data to improve its work, drives data-driven decision-making and goal measurement, and manages and governs the city’s data stores in an equitable, secure, accurate, and accessible manner.

Informational Registries

In addition to overseeing explainability levels, cities and states across the world have begun to respond to calls for greater transparency on how they use and source AI and algorithms. Many have turned to informational registries to document which algorithms are being used and how; a sketch of what such a registry record might contain follows the examples below.

  • The Washington State Senate put forth a bill that would require public agencies to provide information about automated decision systems in plain language and make the systems and their training data publicly available.
  • Amsterdam and Helsinki have created public AI registries, which provide records of the data used to train models, how algorithms are used, how the outcomes of these models are used by human decision-makers, and what potential biases or risks are associated with each model. The registries also include contact information for those in charge of deploying each algorithm, allowing residents to get in touch and give feedback on local governments’ uses of algorithms. Similarly, the Barcelona City Council’s AI and data strategy creates a public register of all algorithms used by the city for open review.
  • In 2019, the Nantes Metropolis in France opened up the algorithms it uses in public-service decision-making. Currently, two public algorithms are in use: one to determine the price of public transit, which takes into consideration an individual’s income and number of family members, and another to determine the social pricing of water for households.
  • Since 2016, the city of Antibes, France, has maintained an inventory of the algorithms used by the local government. The inventory lists which algorithms are fully or partially used to make decisions and how, and it is updated on a regular basis.
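
As a rough illustration of what a single registry record can capture, the sketch below encodes the kinds of fields the Amsterdam and Helsinki registries are described as providing above: training data, usage, human oversight, known risks, and a point of contact. The schema and the example system are hypothetical, not either city’s actual format.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the kind of record a municipal AI registry
# might publish. The field names and structure are hypothetical,
# modeled loosely on the Amsterdam and Helsinki registries.
@dataclass
class RegistryEntry:
    system_name: str
    purpose: str           # Why the system is used
    training_data: str     # What data the model was trained on
    human_oversight: str   # How people use the model's outputs
    known_risks: List[str] = field(default_factory=list)  # Potential biases or risks
    contact_email: str = ""  # Whom residents can contact with feedback

# Hypothetical example system, not a real municipal deployment
entry = RegistryEntry(
    system_name="Parking permit triage",
    purpose="Rank permit applications for manual review.",
    training_data="Anonymized permit applications, 2018-2020.",
    human_oversight="A clerk reviews every ranked application before a decision.",
    known_risks=["may rank some neighborhoods systematically lower"],
    contact_email="algorithm-register@example.city",
)
print(f"{entry.system_name}: contact {entry.contact_email}")
```

Publishing records like this in a machine-readable form would also make it easier for researchers and journalists to audit a city’s algorithm portfolio as a whole.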

Conclusion

In conclusion, although AI explainability and transparency remain difficult to achieve, local administrations around the world are experimenting with new ways to ensure that automated systems are employed transparently. Transparency-related efforts at the city and local level are an opportunity to investigate new, innovative ways of providing algorithmic transparency at the national level, and may help foster wider public awareness and accountability.

***

We are deeply grateful to Mona Sloane, Sociologist at New York University and University of Tübingen AI Center, for reviewing this blog.

Next week, we will explore how increasing literacy can improve public understanding of AI, in the process making efforts such as the ones included in this blog series more transparent and effective.