Blog 4: How cities are at the forefront of developing laws and policies to guide the use of AI

10 March 2022

Laws and policies to govern AI at the national level remain lagging and tepid. However, many cities and states have stepped into the breach, taking active and future-focused steps to legislate data collection and manipulation by public agencies and corporations and to harness AI’s power in a responsible way. 

These laws and policies concentrate specifically on whom AI systems ignore or target. In this blog, we group local oversight governance into two fields: anti-discrimination legislation, and surveillance and privacy regulation.

Anti-Discrimination Legislation

Discriminatory consequences of AI can be caused by a myriad of factors, including the assumptions underpinning model design, the goals an algorithm optimizes for, and unrepresentative datasets. To address concerns about algorithmic bias and discrimination, as well as to propel informed oversight of these tools, legislators have taken steps to control where and how algorithms are used by public agencies and, to some extent, by industries in general.

  • The state of Colorado’s CO S.B. 169 bill aims to “[p]rohibit insurers from using any external consumer data and information sources, as well as any algorithms or predictive models that use external consumer data and information sources in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.” The bill requires insurers to disclose information about the external data sources used in their algorithms and predictive models, and the use of external data must be assessed under a risk management framework to mitigate discrimination. Working with the insurance industry, the Colorado Insurance Commissioner conducts sector-specific stakeholder meetings to design rules for the non-discriminatory use of big data.
  • The Stop Discrimination by Algorithms Act (SDAA), proposed in Washington, D.C., would ban companies and firms from using algorithms to deliberately marginalize vulnerable individuals, cutting them off from crucial personal and professional opportunities such as employment and housing. The legislation aims to enforce transparency and anti-discriminatory practices by mandating decision explainability for all covered algorithms.

Surveillance Regulation

Public pushback against increasing surveillance creep, or the ubiquity of algorithms and data collection being used to watch people, has led to targeted legislation to control the use and sharing of facial recognition and smart technology data.

  • In 2019, the City of Buenos Aires implemented a ‘Facial Recognition for Fugitives System’ (LFRT) that installed nearly 10,000 facial recognition cameras across the city. To regulate the use of the technology, legislators mandated that the authorities who manage the facial recognition system transfer information on the technical specifications and locations of its applications to the Committee of the Public Security System and the Ombudsman Office for oversight. Opposition to the surveillance practice, its procurement via private contracting, and the lack of disclosure on what data would be collected and how and where it would be used sparked an anti-LFRT campaign by the Observatorio de Derecho Informático Argentino (ODIA) and local watchdog organizations. In early 2022, the LFRT program was suspended.
  • The cities of San Francisco and Santa Cruz, California; King County, Washington (home to Seattle); and Worcester, Massachusetts, have passed bans on the use of facial recognition technology by police agencies. These bans stem from concern over civil liberty infringements and the discrimination embedded in these tools through data and algorithmic biases.
  • The New York City Council proposed the KEYS (Keep Entry to Your Home Surveillance-Free) Act, which would require that all tenants have traditional key entry to their homes, preventing owners from unilaterally forcing tenants to submit to facial recognition, biometric scanning, or smart key technology that potentially comes at the expense of personal and group privacy.
  • In 2021, Virginia’s governor signed the Consumer Data Protection Act (CDPA), which outlines a framework for controlling and processing personal data in the state. The CDPA regulates the handling and processing of personal data by both manual and automated decision systems, imposes security requirements and data minimization standards, and grants consumers protections around accessing, correcting, and deleting their data, protections that may be especially important for marginalized individuals.

Conclusion

The lag between local and national laws and policies around AI is reminiscent of the Red Queen hypothesis. This theory draws on Lewis Carroll’s Through the Looking-Glass, in which Alice, taking part in a race, realizes that she needs to run twice as fast as she normally would in order to move forward. 

“Well, in our country,” said Alice, still panting a little, “you'd generally get to somewhere else—if you run very fast for a long time, as we've been doing.”

“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

Managing technological innovation requires policymakers to run twice as fast as AI development to ‘get somewhere’ in its governance. Proactive efforts are needed to curtail the dangerous effects of AI use, such as profiling or other measures that may infringe on individual rights to liberty, privacy, and due process. Yet, by their very nature, governance strategies tend to lag and respond reactively, after the fact. The examples above point to the beginning of some legislative avenues undertaken to spur more responsible AI governance and to help cities and states run the AI race more successfully.

***

We are deeply grateful to Christophe Mondin, Researcher at CIRANO, Jess Reia, Assistant Professor of Data Science at the University of Virginia, and Mona Sloane, Sociologist at New York University and University of Tübingen AI Center, for reviewing this blog.

Next week, we will move from enacting regulation to examining the steps in place for public agencies to oversee and audit its success, and for third parties to audit public agencies themselves.