
29 September 2021

A significant concern about AI use in the public sector is the opacity of AI-based tools. As described in Blog 5, an algorithm automatically turns inputs into outputs, allowing decisions to be made more efficiently. However, a lack of information and traceability over how those outputs are reached creates ‘black-box’ AI that cannot be fully audited, overseen, or explained to either citizens or policymakers.
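To make the distinction concrete, here is a minimal Python sketch contrasting an opaque decision rule with a traceable one; the scoring rule, threshold, and figures are invented purely for illustration and do not reflect any real system.

```python
# A toy sketch of the 'black box' problem: both functions turn an input into an
# output, but only the second leaves a trace an auditor could inspect. The rule,
# threshold, and figures are invented purely for illustration.

def opaque_decision(income: float, debts: float) -> bool:
    # From the outside, only the yes/no answer is visible.
    return (income - 1.5 * debts) > 20_000

def traceable_decision(income: float, debts: float) -> tuple[bool, dict]:
    score = income - 1.5 * debts  # record the intermediate quantity
    trace = {
        "rule": "income - 1.5 * debts must exceed 20,000",
        "score": score,
        "threshold": 20_000,
    }
    return score > 20_000, trace

approved, trace = traceable_decision(income=50_000, debts=18_000)
print(approved, trace)  # True, plus the reasoning an overseer can audit
```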
Thus, this blog aims to illustrate the ways in which municipalities have attempted to operationalize the concept of algorithmic transparency in cities. It focuses in particular on local attempts to regulate the use of black-box algorithms, and illustrates examples of informational registries as tools to implement principles of transparency.
In addition to being a technical system made of a series of algorithms that automatically turn data (inputs) into other data (outputs) upon which decisions can be made, AI is also a social system, made of policies, laws, social contexts and norms, and cultures. In other words, AI is a sociotechnical system. As a consequence, AI transparency needs to be conceptualized and operationalized from both a technical and a social perspective.
From a technical perspective, the issue of transparency is particularly hard to tackle, mainly because of the intricacies of what is known as explainability. Especially when the system in place works through machine learning (ML), it is often hard to trace back the reasons why the system reached certain decisions. To address meaningful transparency and explainability from a technical, data-focused perspective, Timnit Gebru and others advance the idea of datasheets for datasets. These are aimed at providing “standard operating characteristics, test results, recommended usage” as well as the data set’s “creation, composition, intended uses, maintenance, and other properties”.
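As a rough illustration of what such a datasheet could look like in machine-readable form, the Python sketch below maps some of the categories Gebru and colleagues propose, such as motivation, composition, collection, intended uses, and maintenance, onto a simple record; the field names and the sample municipal dataset are our own assumptions, not the paper’s schema.

```python
# An illustrative, machine-readable 'datasheet for a dataset', loosely following
# the categories proposed by Gebru et al. Field names and the sample entry are
# assumptions made for this sketch, not the paper's standard schema.
from dataclasses import dataclass, field


@dataclass
class Datasheet:
    name: str
    motivation: str           # why and by whom the dataset was created
    composition: str          # what instances represent, known gaps or skews
    collection_process: str   # how and when the data was gathered
    intended_uses: str        # tasks the data is (and is not) suited for
    maintenance: str          # who maintains it and how errata are handled
    known_limitations: list[str] = field(default_factory=list)


# Hypothetical entry for a municipal dataset
sheet = Datasheet(
    name="city-parking-citations-2020",
    motivation="Created by the city transport office to study enforcement patterns.",
    composition="One record per citation; unreported violations are not covered.",
    collection_process="Exported monthly from the enforcement case-management system.",
    intended_uses="Aggregate trend analysis; not suited to evaluating individuals.",
    maintenance="Transport office data team; corrections published quarterly.",
    known_limitations=["No records for districts digitized after 2019."],
)
print(sheet.name, "-", sheet.intended_uses)
```

In practice a datasheet would be far richer, but even a minimal record like this gives auditors and citizens a starting point for the questions raised below.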
From a social and policy perspective, AI transparency raises a series of key questions that need to be answered for a system to be deemed “transparent”, including details about the system’s purpose, the social context around its design and use, and its intended users and beneficiaries.
For instance, some of the questions can be: What is the system’s purpose? In what social context was it designed, and how is it used? Who are its intended users and beneficiaries?
As mentioned in our previous blog, growing concern about the negative and discriminatory impacts automated systems can have has escalated calls for accountability and transparency.
Indeed, knowing how AI-based decisions are made requires both accountability (which, as previously detailed, refers to holding AI, its designers, and those who employ it accountable for their impacts) and transparency to users and regulators, which is essential for understanding the AI’s processes and more effectively addressing the technology’s discriminatory and intrusive effects.
Cities have started taking significant steps to address opaque, unexplainable AI in the public sector, seeking to ensure that the tools in use are transparent and traceable by key stakeholders. Indeed, as mentioned in one of our previous blogs, both national and municipal governments are pioneering efforts to institutionalize algorithmic transparency as part of their AI strategies.
It seems increasingly essential for public administrations to share a set of well-defined, accessible, and understandable information on the development, deployment, and evaluation of algorithmic tools used in public decision-making. This can be done in a number of ways, and this blog focuses on (a) the regulation of black-box algorithms and (b) the use of informational registries.
To illuminate the use and inner workings of public AI, cities have taken steps to disclose and monitor where and why algorithms are applied. These measures allow citizens to scrutinize these systems and contact those in charge of them, and encourage thoughtful use or limitation of AI.
In addition to overseeing levels of explainability, cities and states across the world have begun to respond to calls for greater transparency on how they source and use AI and algorithms. Many have turned to informational registries that document what types of algorithms are being used and how, as sketched below.
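For illustration, the sketch below shows the kind of record such a registry might publish, loosely modeled on public algorithm registers such as those launched by Amsterdam and Helsinki; the schema, the example system, and the contact address are all hypothetical.

```python
# A sketch of the kind of record a municipal algorithm register might publish,
# loosely modeled on public registers such as Amsterdam's and Helsinki's. The
# schema, the sample system, and the contact address are all hypothetical.
registry_entry = {
    "system_name": "Automated parking-permit triage",  # hypothetical system
    "department": "City Mobility Office",
    "purpose": "Rank permit applications for manual review, not final decisions.",
    "inputs": ["application form fields", "vehicle registration data"],
    "decision_role": "advisory",  # a human officer makes the final call
    "human_oversight": "All rankings are reviewed by a case worker before action.",
    "non_discrimination": "Audited yearly for disparate impact across districts.",
    "contact": "algorithm-register@example.city",  # placeholder address
}


def render_public_summary(entry: dict) -> str:
    """Produce the citizen-facing summary a register page might display."""
    return (
        f"{entry['system_name']} ({entry['department']}): {entry['purpose']} "
        f"Role in decisions: {entry['decision_role']}. Contact: {entry['contact']}."
    )


print(render_public_summary(registry_entry))
```

The point of such an entry is less the exact fields than the commitment behind them: each deployed system gets a public, plain-language description and a human point of contact.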
In conclusion, although AI explainability and transparency remain difficult to achieve, local administrations around the world are experimenting with new ways to ensure that automated systems are employed transparently. Transparency-related efforts at the city and local level are indeed an opportunity to investigate new, innovative ways to provide algorithmic transparency at the national level as well, and may help foster wider public awareness and accountability.
***
We are deeply grateful to Mona Sloane, Sociologist at New York University and University of Tübingen AI Center, for reviewing this blog.
Next week, we will explore how increasing literacy can improve public understanding of AI, in the process making efforts such as the ones included in this blog series more transparent and effective.