
In the context of AI Localism, engagement refers to public involvement, input, and awareness around the use of AI in cities and towns. Engagement ensures that non-specialists and the broader public can participate in decision-making about AI use and improve their knowledge of local research and investment in automated decision-making systems. Engagement can happen in multiple ways: through research and impact centers that lead public engagement and education activities; via dialogues with citizens to increase participation in designing and implementing local AI; and through working groups and committees that bring stakeholders and the general public together to increase public awareness of, and conversation around, public AI practices.
Research and Impact Centers
Over the past few years, research on AI governance has grown rapidly in academic, policymaking, and public circles. Increased demand to investigate AI applications and their implications has been met with innovative, locally centered research programs. Research labs, such as those discussed below, also help ensure that cutting-edge AI scholarship is effectively implemented in practice by directly engaging experts and government officials and indirectly reaching the broader non-specialist public (see more in Citizen Deliberations). Examples include:
- The Urban AI think tank, based in France, launched a global call to uphold six key principles of smart city regulation. It advocates for smart city and urban technology that stems from a social contract and is open and accessible, decentralized, frictional, meaningful, and ecological by design. Thus far, over 100 figures in technology and governance have signed the call.
- Alongside the city of Helsinki and its partners, the Berkman Klein Center ran a three-week AI Policy Research Clinic with two teams of global scholars to turn public AI principles into tangible policy measures. One group created an oversight model for stronger collaboration and interoperability that fit Helsinki’s existing government structure, a translational matrix linking ethical and regulatory requirements at the European level to use cases at the city level, a wireframe for a web portal to increase public engagement with AI tools, and an overarching policy playbook of actions and recommendations. The second group adapted an existing multi-stakeholder engagement method used in Catalonia to Helsinki’s requirements and produced a playbook outlining a four-phase participatory process for introducing and implementing public AI technologies.
- The Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) seeks to create a pan-European network of AI research that supports human-centered AI innovation. CLAIRE's main goal is to draw on various actors, stakeholders, and mechanisms for “citizens engagement, industry and public sector collaboration” to create a European knowledge hub that advances the understanding and application of AI. Partnerships are built through knowledge sharing and the integration of stakeholders in order to boost overall European competitiveness and well-being. CLAIRE also functions as a meeting place where researchers and policymakers can learn about AI and bring those lessons back to their home institutions, increasing the understanding of AI across organizations. The group undertakes many projects, such as its collaboration with the AI, Data, and Robotics Association (ADRA).
- The Canadian Institute for Advanced Research (CIFAR) conducts pan-Canadian research on AI strategy by coordinating among Amii in Edmonton, Alberta, Mila in Montreal, Quebec, and the Vector Institute in Toronto, Ontario, centralizing local AI research and priorities to make Canada a global leader in AI. Specifically, the institutes focus on provincial- and national-level research for health innovation, energy and environment initiatives, and public-private collaborations.
Citizen Deliberations
Citizens and civil society at large play an important role in bringing new perspectives to governance and policy. A cornerstone of transparent and accountable governance is citizen deliberation, a tried and tested approach, especially for new and emerging technologies, for generating critical and democratic consideration of the risks and rewards of AI use at the local level. These practices also shed light on larger gaps in public digital knowledge, highlighting which fundamental aspects of digital and data governance need to be better communicated to improve general understanding.
Thus, when it comes to employing AI in public spaces, it seems fundamental to encourage and initiate citizen deliberations and assemblies. For instance, in 2020, The GovLab hosted a Data Assembly in partnership with the Henry Luce Foundation to gather feedback from New Yorkers via three ‘mini-publics’ and discern what sorts of data residents did (and did not) feel comfortable sharing with city officials to address COVID-19. Below, we present similar examples of citizen-level engagement around local AI.
- The Laboratorio para la Ciudad de Mexico (LabCDMX), the innovation and experimentation arm of Mexico City's local government, sought to develop a strategic plan for Mexico City to leverage the opportunities of the public use of algorithms and automated learning over the short, medium, and long term. The first “exploratory” session defined the expectations, opportunities, and risks of AI for democracy. The second “co-creation” session brought together experts to advise on challenges, potential, and governance techniques for Mexico City’s AI strategy. This session informed the internal analysis used to craft Mexico City’s Strategic Roadmap for AI.
- Kowloon East in Hong Kong used a public participatory process to solicit local feedback in designing its smart city initiative, which includes improvements to urban infrastructure, walkability, resource management, and communication infrastructure. In addition to improving WiFi infrastructure and developing mobile apps that inform residents of updates and collect their data to improve services, the smart city initiative focuses on fostering a more sustainable and environmentally friendly neighborhood through data-driven initiatives for effective energy consumption monitoring, waste management, city-wide cooling, and additional green space.
- Starting in 2014, the Community Control of Police Surveillance (CCOPS) movement has allowed residents to voice their opinions on the use of surveillance and policing technologies in their neighborhoods. Eighteen towns across the United States have adopted CCOPS laws that require community approval before surveillance technology is implemented, along with regular audits and reviews.
- In the United Kingdom, the Royal Society of Arts organized a citizens’ jury to deliberate on the ethical use of AI. The jury's engagement and public deliberation raised pressing concerns about the ways in which both the public and private sectors must alter their mechanisms and practices to become more accountable and legitimate.
Local Working Groups and Committees
More broadly, forums for public and expert consultation on AI help open up conversations about the use, disclosure, and public impact of the technology. Like citizen deliberations, these consultations become spaces for education, building the foundational understanding of technology concepts that policymakers and council members need to make informed decisions.
- The Alabama State Legislature established the Alabama Council on Advanced Technology and Artificial Intelligence “to review and advise the Governor ... on the use and development of advanced technology and artificial intelligence in this state.” A council of policymakers and technologists will discuss and provide recommendations on the use of AI by local governments.
- Washington state established an Automated Decision Systems working group to recommend policy and regulatory updates on the “development, procurement, and use” of AI by public offices. The group consists of representatives from public agencies and advocacy organizations, with a specific focus on marginalized individuals, and will debate when automated decision-making and AI systems should be banned, how to audit systems and maintain transparency in their processes, and how data should be handled and stored.
Conclusion
Bringing together various stakeholders and sectors is only part of the work needed to enhance engagement in AI Localism. Developing inclusive and pluralistic AI also requires including representative voices from communities, especially those that are historically underrepresented and have had less access to digital literacy, across the design and deployment of digital technologies. Participation of the broader public throughout the design phase is valuable for shaping the datafication paradigm, the process by which social actions are transformed into quantifiable data that enable real-time tracking and predictive analysis, so that it captures as many experiences and visions as possible and thus improves the representative capacity of AI. Moreover, the examination and evaluation of automated systems by citizens and residents are fundamental both for digital technologies to gain legitimacy and for people to become aware participants in the increasingly digitized society in which they live.
***
We are deeply grateful to Christophe Mondin, Researcher at CIRANO, and Mona Sloane, Sociologist at New York University and University of Tübingen AI Center, for reviewing this blog.
Our fourth blog will investigate the laws and policies enacted around AI Localism, specifically with regard to tackling concerns over discrimination and privacy raised by AI.