THE GOVLAB BLOG

LATEST POSTS

Governing AI: The Air Force’s AI Land Rush

The Air Force is quietly auctioning off slices of its bases for private AI data centers. They call it innovation; it looks like privatization. Fifty-year leases, 3,000 acres of military land, and no public say. If this is how we build the future, who’s really in command?

Global AI Watch: Brazil’s Experiment in AI-Powered Participation

When Brazil's federal government launched its 2023 Participatory Pluriannual Plan, the response was massive: 1.4 million participants submitted over 8,200 proposals through the Brasil Participativo platform. But volume creates a challenge, as manually processing thousands of contributions is slow and resource-intensive, often causing valuable insights to slip through the cracks. Now, Brazil is pioneering an open-source AI system that automatically analyzes citizen feedback, generates comprehensive reports, and tracks which suggestions made it into final policies. The result is a new model for democratic intelligence, one that transforms the flood of public input into structured, actionable knowledge without losing the nuance of individual voices.
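
The post describes the system only at a high level, so the snippet below is a purely illustrative sketch (not the Brasil Participativo pipeline itself) of one building block such systems rely on: grouping free-text proposals into themes so that analysts review aggregates instead of reading every submission one by one. The proposal texts, cluster count, and library choices are assumptions made for the example.

```python
# Illustrative only: a minimal theme-grouping pass over citizen proposals.
# Everything here (texts, cluster count, libraries) is an assumption for the example,
# not the Brasil Participativo implementation.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

proposals = [
    "Expand bus routes to rural municipalities",
    "Fund school meal programs in the Northeast",
    "Improve broadband access in remote areas",
    "More frequent buses between suburbs and city centers",
    "Universal free school lunches",
    "Public Wi-Fi in community centers",
]

# Turn free-text proposals into TF-IDF vectors and group them into broad themes.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(proposals)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Report how many proposals fall under each machine-suggested theme, with the
# top keywords per theme; analysts would then review and label these groupings.
terms = vectorizer.get_feature_names_out()
theme_sizes = Counter(kmeans.labels_)
for theme_id, count in sorted(theme_sizes.items()):
    keywords = [terms[i] for i in kmeans.cluster_centers_[theme_id].argsort()[::-1][:3]]
    print(f"Theme {theme_id}: {count} proposals; keywords: {', '.join(keywords)}")
```

In practice, report generation, policy tracking, and human validation of the suggested themes would sit on top of a grouping step like this.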

Research Radar: The Emperor's New Agents - Why AI Won't Fix Broken Government

The Agentic State is an ambitious and inspiring blueprint for rebuilding government around AI agents that can act and decide autonomously. It powerfully diagnoses real failures in how the public sector designs, delivers, and manages services. While AI is giving us ways to accelerate change, the prescribed cure may be premature: most of what’s broken in government requires organizational reform, not automation.

The AI Fish Counter: Teaching Ourselves to Use AI—Before It Uses Us

The danger isn’t that AI will make us dumber—it’s that governments, companies, and schools won’t make us smarter with it. As policymakers stall and corporations automate, the burden of using AI wisely now falls on us. Like shoppers at the fish counter, we need to learn to read the labels—to know what’s safe, what’s risky, and how to choose well.

How Governments are Using AI

Public service professionals from across the globe have come together to learn from one another how to use AI to improve governance. From St. Louis cutting hiring times from 12 months to 2, to Hamburg analyzing 11,000 public comments in days, to New Jersey reducing Spanish-language form completion from 4 hours to 25 minutes, one lesson is clear: AI works for democracy when we build the foundation first.

Governing the Undefined: Why the Debate Over Superintelligence Misses the Point

As headlines warn of “superintelligent AI” threatening human extinction, a new open letter reignites familiar fears. But beneath the apocalyptic rhetoric lies a deeper problem. The narrative around artificial superintelligence, long embraced by Big Tech, diverts attention from the real and immediate challenges of AI and how our democratic institutions can address them.

Re-thinking AI: How a Group of Civic Technologists Discovered the Power of AI to Rebuild Trust in Government

After two years of research, the RethinkAI collaborative released Making AI Work for the Public—a comprehensive field review of how U.S. governments adopt AI. Since 2019, over 1,600 AI-related bills have been introduced, but most focus on guardrails, not proactive strategy. Meanwhile, cities are piloting translation tools, engagement platforms, and predictive systems, often led by Chief Information Officers who are taking on new strategic roles. The report challenges civic tech’s efficiency-first legacy and proposes a new governance model—ALT: Adapt to anticipate needs, Listen to understand communities, and build Trust through two-way accountability.

How Hamburg is Turning Resident Comments into Actionable Insight

Officials in Hamburg had long struggled with the fact that while citizens submitted thousands of comments on planning projects, only a fraction could realistically be read and processed. Making sense of feedback from a single engagement could once occupy five full-time employees for more than a week and chill any appetite for follow-up conversations. Learn how Hamburg built its own open-source artificial intelligence tool to make sense of citizen feedback at a scale and speed that were once unimaginable.

Building Democracy’s Digital Future: Lessons from Boston’s Civic AI Experiments

Boston became a living laboratory for democratic innovation last week, as two major convenings—the Civic AI Summit at Northeastern and Harvard’s Digital Democracy showcase—brought together leaders reshaping how technology serves the public good. From new tools that open up lawmaking and procurement to partnerships that align city and state AI strategies, Boston’s approach offers a national model for how AI can strengthen democracy through human-centered design, transparency, and collaboration.

New America CEO Anne-Marie Slaughter Reflects on the National Gathering for State AI Leaders

As states take center stage in shaping how the U.S. adapts to artificial intelligence, their choices will determine not just whether America keeps pace, but whether it thrives. This summer, Princeton University’s Center for Information Technology Policy convened state AI officers, researchers, entrepreneurs, and technologists for “Shaping the Future of AI: A National Gathering for State AI Leaders.” The two-day working conference focused on building practical, responsible frameworks for public-sector AI implementation. New America CEO Anne-Marie Slaughter closed the convening with a wide-ranging keynote that called for public AI infrastructure, trust-based governance, and co-creation across sectors. What follows is a summary of her 10 core takeaways.

Vibe Coding the City: How One Developer Used Open Data to Map Every Public Space in New York City

New York City has thousands of parks, plazas, and public courtyards, but no easy way to find them. Using “vibe coding,” open data, and generative AI, one civic technologist built a map of every public space in the five boroughs. This is the story of NYC Public Space, an app that stitches together fragmented government datasets, AI-generated descriptions, and community-sourced updates to make the city’s public realm more visible and usable. It’s also a case study in how AI can help public interest technologists move faster, build smarter, and turn open data into real public value.

The Next UN: AI, Power, and What Global Governance Must Become

In late September, the UN adopted a global AI resolution backed by all 193 Member States, a diplomatic milestone, but one that risks repeating old patterns of top-down governance. The new Reboot Democracy Blog Editor, Elana Banin, argues that legitimacy doesn’t come from declarations, but from grounded, democratic practice. From California to Vietnam, she explores what real AI governance looks like and lays out three strategic tests the UN must pass to matter.

Silicon Sampling: When Communications Practitioners Should (and Shouldn’t) use AI in the Survey Pipeline

Large Language Models are becoming common tools in the communications toolkit, but not all uses are created equal. In this new post from the AIMES Lab at Northeastern University, John Wihbey and Samantha D’Alonzo offer research-backed guidance on when to use LLMs in the survey pipeline and when to steer clear. The research indicates that AI is a powerful assistant for refining survey questions and testing hypotheses, but a poor substitute for actual human respondents. Drawing on more than 30 academic studies, this piece lays out a practical, hybrid approach to “silicon sampling” that helps practitioners strengthen research integrity without falling for AI’s easy shortcuts.
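
To make that division of labor concrete, here is a minimal sketch, assuming a hypothetical ask_llm() helper in place of any real model API: the language model stress-tests question wording before fielding, while answers still come only from human respondents.

```python
# Minimal sketch of the hybrid workflow: an LLM helps refine survey wording,
# but responses come from human respondents only. ask_llm() is a hypothetical
# placeholder; swap in whichever model/provider you actually use.

def ask_llm(prompt: str) -> str:
    # Placeholder so the sketch runs without any API access; a real call goes here.
    return ("'regularly' is ambiguous (daily? weekly?), and the question is "
            "double-barreled: ask about affordability and quality separately.")

def pretest_question(question: str) -> str:
    """AI-assisted step: surface wording problems before the survey is fielded."""
    return ask_llm(f"Flag ambiguity, bias, or double-barreled wording in: {question}")

def collect_responses(question: str, respondents: list[str]) -> dict[str, str]:
    """Human step: answers are gathered from sampled people, never synthesized."""
    return {name: input(f"{name}, {question} ") for name in respondents}

draft = "Do you regularly use city services, and are they affordable and high quality?"
print(pretest_question(draft))
# collect_responses(draft, ["respondent_01", "respondent_02"])  # run with real people
```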

The Women’s Health Topic Map: A Foundation for the Questions and Innovations That Matter

Today, we release the first version of the Women’s Health Topic Map. The Topic Map is part of the 100 Questions Initiative under the Gates-funded R&I project, in which CEPS and The GovLab have teamed up to ask: what are the most important questions that could truly advance women’s health innovation?

Feeding the Beast: Powering Democratic AI with Open Data

AI’s biggest breakthroughs were built on public datasets; the next wave should be, too. If governments make data AI-ready and keep access open, we can power tools that explain laws, improve services, and widen participation. In return, companies that train on taxpayer data should give back, through open licenses, benchmarks, and capacity that strengthen public institutions.

The Judicial Protection of Algorithmic Transparency

Law professor and political commentator José Luis Martí argues that Spain’s Supreme Court ruling on Bosco is a democratic milestone—establishing algorithmic transparency as a constitutional principle and putting AI at the service of democracy.

PeaceTech at a Crossroads: Insights from the 2025 Kluz Prize for PeaceTech Award Ceremony

In today’s geopolitical environment, with 61 active conflicts around the world, the need for PeaceTech has never been more critical.

When Communities Lead, Appropriate Tech and Change Follow

Co-designed with parents, the AIEP tool is both a technical solution and a catalyst for civic power for families of children with diverse abilities. By building AI literacy through in-person training and WhatsApp-based courses, the project offers a model for community-centered technology, making education systems more navigable and equitable. AI tools can open doors to access and advocacy, but it’s human connection and support networks that give those tools meaning. That is where the most powerful change is taking shape, and where investment matters most.

AI Localism in Action: Five City-Level Approaches Shaping Responsible AI

In this post, we highlight five recent additions to the repository that exemplify the diversity and innovation of AI Localism. These examples—spanning Reykjavík to Manchester—show how local governments are using distinct mechanisms such as risk-tiered staff guidelines, bright-line safeguards in high-risk domains, procurement nudges, AI-assisted public deliberation, and community literacy roadshows to build more responsive, inclusive, and future-proof approaches to AI.

From Voice to Impact: What We’ve Learned So Far in the Reboot Democracy Workshop series: Designing Democratic Engagement for the AI Era

Over the first two sessions of Reboot Democracy: Designing Democratic Engagement for the AI Era, we’ve explored why participation too often falls short, and how to make it matter. From rebuilding the link between voice and implementation, to matching digital tools with public purpose, this early reflection captures key lessons so far and tees up what’s next: a practical, nine-step framework for smarter, AI-supported engagement.

friend.com or foe.com?

The New York City subway would never accept an ad for a hitman-for-hire service, or for a pill that claimed to cure depression but wasn’t FDA-approved, or a social network designed exclusively to groom minors. Those would be illegal, unethical—and obviously dangerous. So why is the MTA running a massive ad campaign for friend.com, a product that, if not already illegal, certainly should be?

Reimagining Data Access, Readiness, and Governance in the Age of AI

The State of Open Data Policy Summit is an annual conference hosted by the Open Data Policy Lab (a collaboration between The GovLab and Microsoft) to explore how policy and technology shape access to data for public interest re-use. 

In past years, the summit has looked at ways institutions have pursued purpose-driven data re-use and operationalized collaborative governance models. It has also examined the possible challenges represented by a “data winter”—a period marked by reduced data access.

On 4 September 2025, the Open Data Policy Lab hosted its Fourth Annual State of Open Data Policy Summit to focus on how generative AI and open data intersect.

Choosing Wisely: A New Resource for Picking the Right Participation Platform

Choosing a digital participation platform is a governance choice. Dane Gambrell's latest post explores what the new Guide to Digital Participation Platforms gets right: the idea that participation only works when tools are matched to purpose, capacity, and follow-through. Join us tomorrow, September 24 at 3pm ET, for a hands-on InnovateUS workshop with Greta Ríos and Nikhil Kumar to learn how to select the right tool for applying AI to advance democracy.

Governing with AI: Why Albania’s Chatbot Minister Makes More Sense Than You Think

Seoul showed 25 years ago that transparency and accountability built into digital systems can blunt corruption. Albania’s chatbot minister will stand or fall on the same test: whether it reduces opportunities for bribery.

Public engagement matters. But governments need to learn to listen better (and faster)

Agueda Quiroga (InnovateUS) and Sarah Hubbard (Allen Lab) reflect on insights from the Reboot Democracy workshop series with Beth Noveck and Danielle Allen, and why 21st-century democracy needs better ways to connect citizen input to real outcomes. Their takeaway is that by repairing the broken links between voice, decision-making, and implementation, participation can shift from symbolic to systematic.

The Public’s Verdict on AI and Human Capacity: What it Means for Democracy

A new national survey finds that Americans overwhelmingly believe AI will diminish key human capacities, like empathy, deep thinking, and personal agency, by 2035. Writing for the Reboot Democracy Blog, Lee Rainie, Director, Imagining the Digital Future Center, explains why this skepticism poses a threat to the foundations of democratic life and calls for AI systems that reinforce dignity, trust, and civic strength.

AI is a Power Tool, Not a Decision Maker: Essential Lessons for Law Enforcement

Law enforcement agencies are under pressure to explore AI, but the real lesson is to treat it like any other power tool. In a recent workshop, Rutgers Senior Fellow Mark Genatempo stressed that AI should support, not replace, human decision-making. From distinguishing between machine learning, predictive analytics, and generative AI, to auditing data and building community trust, the key is preparation, oversight, and transparency. With the right training and safeguards, AI can enhance public safety without undermining accountability.

From Interim to Institution: New Jersey’s Three-Pillar Strategy for Responsible AI

With its new 2025 AI policy, New Jersey has advanced from interim guidance that encouraged experimentation to a framework that enables safe, large-scale use. Building on two years of training and adoption by more than 15,000 public servants, the state is now focused on using AI not just to streamline work, but to deliver better services where it matters most.

Why Data Governance and Collaboration Are Essential for the Future of Urban Digital Twins

The concept of digital twins has quickly become the new darling of the smart city world. By 2030, more than 500 cities plan to launch some kind of digital twin platform, often wrapped in dazzling promises: immersive 3D models of entire neighborhoods, holographic maps of traffic flows, real-time dashboards of carbon emissions. These visuals capture headlines and the political imagination. But beneath the glossy graphics lies a harder question: what actually makes a digital twin useful, trustworthy, and sustainable?

Having recently worked directly on a U.S. metropolitan digital twin pilot, we know the answer is not just shiny and sophisticated imagery. A genuine twin is a living ecosystem of different stakeholders and diverse datasets — integrating maps, open government data, IoT sensors, predictive AI models, synthetic data, and mobility data into a single responsive platform. Done right, a digital twin becomes a decision-making sandbox: where planners can simulate how pedestrianizing a street shifts congestion, for example, or how a Category 3 hurricane might inundate vulnerable neighborhoods.

Work with what you have: how Vietnam is using AI as a way to encourage a learning culture among public servants

While the West races toward artificial general intelligence, Vietnam is charting a different path with “Applied AI,” using tools like ChatGPT to overcome language barriers, limited budgets, and institutional bottlenecks. At the Academy of Public Administration and Governance in Vietnam, civil servants are building AI literacy and shifting to adaptive, tech-savvy governance. With 136 AI-assisted case studies developed in one summer, a new handbook on AI for development, and plans for an AI-powered support chatbot, Vietnam shows how even modest experiments can spark a culture of curiosity, collaboration, and public sector transformation.

Research Radar: RAND on AI-Enabled Policymaking: Opportunities, Obstacles, and the Road Ahead

A report on a recent workshop cohosted by RAND, the Stimson Center, and the Tony Blair Institute for Global Change explores how AI can support more effective policymaking. AI shows promise for automating routine tasks and democratizing access to analysis tools, but there are significant structural barriers to realizing AI's potential in policymaking. To move forward, we need more real-world case studies of AI use in governance and strategic adoption focused on maintaining human oversight and building the skills to deploy these tools responsibly.

Governing with AI - Learning the How-To's of AI-Enhanced Public Engagement

Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.

Monitoring the Re-Use and Impact of Non-Traditional Data

Non-Traditional Data (NTD) — data digitally captured, mediated, or observed through instruments such as satellites, social media, mobility apps, and wastewater testing — holds immense potential when re-used responsibly for purposes beyond those for which it was originally collected. If combined with traditional sources and guided by strong governance, NTD can generate entirely new forms of public value — what we call the Third Wave of Open Data.

In this update, we have curated recent advances where researchers and practitioners are using NTD to close monitoring gaps in climate resilience, track migration flows more effectively, support health surveillance, and strengthen urban planning. Their work demonstrates how satellite imagery can provide missing data, how crowdsourced information can enhance equity and resilience, and how AI can extract insights from underused streams.

Join People Powered on September 16 for the Release of New Guidance on AI for Digital Democracy

Join People Powered on September 16 for the release of new guidance based on global case studies for AI for Digital Democracy.

Inaugural AI 50 List Recognizes The GovLab, Beth Noveck, Santiago Garces, and more

The Center for Public Sector AI has launched a new recognition initiative called The AI 50, which honors people and institutions that are playing important roles in implementing and developing artificial intelligence within government agencies.

The Data Commons Landscape: An Analysis of our Data Commons for Generative Artificial Intelligence Repository

Over the last six months, The GovLab’s Open Data Policy Lab has documented use cases of data commons—collectively governed data ecosystems—that provide critical infrastructure for responsible AI development around the world. By generating access to high quality, AI-ready datasets, these initiatives are unlocking new possibilities for solving pressing public challenges. Through our Data Commons for Generative AI Repository, we have identified 60 examples of data commons ranging from cultural and language preservation initiatives to biomedical imaging archives for cancer research. 

We conducted a quantitative analysis of the full repository (60 use cases) with the goal of understanding trends in existing efforts and where additional support is needed. Below we provide a summary of these trends. However, it is important to note that our search for data commons was conducted in English and likely excludes examples from countries where most initiatives are in non-English languages.

Data Commons for Generative Artificial Intelligence: Our Growing Repository of Use Cases – August Update

Data commons (collaboratively governed data ecosystems) are providing critical infrastructure in the age of AI. When designed responsibly, they can help provide access to high quality, AI-ready datasets for use in the public interest. Yet: What data commons currently exist? Where are they being developed? What data commons are needed most?

Over the last six months, The GovLab’s Open Data Policy Lab (ODPL) has sought to answer these questions by curating and documenting examples of data commons for AI from across the globe. Our Data Commons for Generative AI Repository now contains 60 real-world examples from over 20 countries across 5 continents. 

AI Localism in Action: Six Local Approaches to Governing AI

Global declarations on AI governance abound—but the real test lies in implementation, much of which is unfolding in cities. Yet local initiatives are rarely monitored or shared across jurisdictions. The AI Localism Repository aims to bridge that gap by spotlighting governance mechanisms developed at the city level.

In this post, we highlight six recent additions to the repository that exemplify the diversity and innovation of AI Localism, or city-level AI governance.

AI Can Revolutionize Policy Research – But Only If Implemented Responsibly

Artificial intelligence can transform evidence-based policymaking by enabling policymakers to cast a wider net for evidence, synthesize evidence more rapidly, and incorporate better and deeper engagement with communities. However, this transformation also presents significant challenges from bias and transparency concerns to the risk of over-reliance on algorithmic outputs. By understanding the promise and the pitfalls of AI-enabled research tools, while keeping human expertise at the center of the process, we can harness these powerful tools to serve the public interest while preserving the democratic values of transparency, accountability, and inclusive governance.

The Intersections of Generative AI and Open Data: Latest Additions to the Observatory – July

How are governments and researchers using generative AI to make better use of open data? In what ways can AI help make public information more accessible, interpretable, or actionable? And what new types of public services or research tools are emerging at this intersection?

These are some of the questions explored in our Observatory of Open Data and Generative AI —a growing collection of real-world use cases showing how open data from official sources is being used with generative AI technologies.

Launched last year, the Observatory builds on the findings of our report, "A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI.” 

New Jersey, Pennsylvania, and Utah Lead States in AI Readiness, Report Finds

A new Code for America assessment looks at how states are adopting artificial intelligence to support the design, delivery, and evaluation of public services. While most states remain in early development stages, the three leading states distinguished themselves by building comprehensive governance frameworks, investing in workforce training, and establishing dedicated leadership structures to support the responsible and effective use of AI.

Highway Engineer tool created by AI for Impact students highlighted by NBC Boston

HEKA, the Highway Engineer Knowledge Agent chatbot created through the AI for Impact co-op program, is empowering design engineers in the MassDOT Highway Division to efficiently query department manuals and documentation, aiding the design of quicker and safer infrastructure projects for commuters in Massachusetts. It was recently highlighted by NBC 10 Boston.

Five Takeaways from "Data for Policy 2025 Europe"

The 2025 Europe edition of the Data for Policy conference, held 12–13 June at Leiden University in The Hague, gathered researchers, policymakers, and practitioners around the theme “Twin Transitions in Data and Policy for a Sustainable and Inclusive Future.” Over two days of presentations and discussion, participants explored how data systems and digital technologies are reshaping governance and what it will take to make them serve climate goals, social equity, and democratic accountability.

Research Radar: AI Speeds Up Government Consultation Analysis Without Sacrificing Quality

New research from the UK government shows how AI could make it easier for institutions to conduct public engagement. A new process called "Consult" combined AI with human oversight to analyze public consultation responses in seconds with 76% accuracy.
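
The write-up summarizes Consult's design rather than publishing its code, so the sketch below is only a generic illustration of the underlying human-in-the-loop pattern: a model proposes a theme for each consultation response, and anything below a confidence threshold is routed to a human reviewer. The training texts, labels, and the 0.6 threshold are invented for the example.

```python
# Generic human-in-the-loop sketch, not the UK "Consult" implementation: a model
# proposes a theme for each consultation response; low-confidence cases go to a human.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: a handful of responses already labelled by human analysts.
train_texts = [
    "Please add more cycle lanes on the high street",
    "Cycling infrastructure feels unsafe near the school",
    "The new parking charges are too expensive",
    "Parking fees will hurt small businesses",
]
train_labels = ["cycling", "cycling", "parking", "parking"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

new_responses = [
    "Bike lanes should be separated from traffic",
    "I am worried about noise from late-night deliveries",
]

CONFIDENCE_THRESHOLD = 0.6  # below this, a human analyst makes the call
for text, probs in zip(new_responses, model.predict_proba(new_responses)):
    best = probs.argmax()
    label, confidence = model.classes_[best], probs[best]
    if confidence >= CONFIDENCE_THRESHOLD:
        print(f"AI-labelled '{label}' ({confidence:.0%}): {text}")
    else:
        print(f"Routed to human review (top confidence {confidence:.0%}): {text}")
```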

The debate over state-level AI bans misses the point

While Washington fights over who gets to say "no" to AI, they're missing the bigger question: how can we actually use these tools to fix our broken institutions? States like Ohio and New Jersey are already proving AI's transformative potential—cutting millions in bureaucratic waste, speeding up citizen services, and making government actually work for people. The real debate shouldn't be about regulation versus innovation, but about the AI we need to build, buy, and design to strengthen democracy.

Fourth Wave of Open Data Seminar: Data Commons for the Public Good

How can we build and sustain data commons that balance openness and trust while fueling innovation for the public good? Can they ensure that communities and other networks have a say in how their data is used? Can they ensure that public interest organizations can get access to the data they need to meet societal challenges?

In “Data Commons for the Public Good,” the Open Data Policy Lab’s fourth panel on the Fourth Wave of Open Data, Vivienne Ming (Chief Scientist, The Human Trust), Angie Raymond (Director of Data Management and Information Governance, Ostrom Workshop), Heather Coates (Data Steward & Data Librarian, Indiana University), and Alek Tarkowski (Director of Strategy, Open Future) joined The GovLab’s Stefaan Verhulst to answer these questions.

Signals of Demand: What the New Commons Challenge Tells Us About the Need and Opportunity of Data Commons

Last week, we closed applications for our New Commons Challenge—an open innovation challenge seeking to foster the use of data commons for the development of AI for local decision-making and humanitarian response.

Making Civic Trust Less Abstract: A Framework for Measuring Trust Within Cities

Civic trust is essential for strong, functional communities and effective governments. It gives institutions the license to operate. It allows people to form relationships within their communities.

However, trust is also abstract and little understood. It is difficult to define and even more difficult to quantify. For city leaders, this presents a conundrum: How can they act to improve civic trust if they don’t fully understand what it looks like?

What are the key questions that, if answered, could advance Women’s Health innovation?

The GovLab and the Centre for European Policy Studies (CEPS), with support from The Gates Foundation, are pleased to announce the launch of a new domain within The 100 Questions Initiative: Women’s Health Innovation. The 100 Questions Initiative seeks to identify and prioritize the most pressing, data-actionable questions that can advance evidence-based policymaking and research on societal issues.

Research Radar: The Agentic State: A 20-Year Wish List, Finally Within Reach?

This week’s Research Radar highlights The Agentic State, an ambitious whitepaper arguing that AI agents could reshape the core functions of government. It’s a timely vision for public sector transformation—worth reading, debating, and building on.

Civic and Democratic AI: A New Course for Community Action

We are developing "Civic and Democratic AI," an 8-part WhatsApp course that teaches people how to use generative AI to navigate government processes, understand complex documents, and organize for community action. The course aims to provide practical AI skills for civic engagement. We are seeking feedback on the course content. Share your insights and expertise as we roll out this free program to help communities use AI to understand their government, access their rights, and organize for change.

Research Radar: Dreaming Better Elections Into Reality

A new white paper from The Institutional Architecture Lab argues that combating AI-generated deepfakes and synthetic content in elections requires purpose-built institutions. The authors propose Electoral Integrity Institutions that would coordinate across government, tech platforms, and civil society to scan, assess, and respond to synthetic content threats. But the paper also provokes a fundamental question: should we design institutions defensively to react to AI threats, or offensively to build better, more participatory and representative elections?

Coming Soon: InnovateUS to Offer Training on Responsible AI for Public Sector Legal Professionals

InnovateUS is excited to announce "Responsible AI for Public Sector Legal Professionals," two free courses which equip public sector lawyers and legal support staff to safely and responsibly use AI tools and implement AI systems to improve the efficiency and effectiveness of their work while safeguarding sensitive information. Co-created with senior legal and technical leaders from state agencies, the curriculum is designed for government attorneys, legal support staff, policymakers, and compliance officers seeking to harness AI's potential while upholding professional and ethical responsibilities.

DOGE Is Using AI To Centralize Government Power. It’s Time to Flip the Script.

The Trump administration’s January 20 executive order rechristening the US Digital Service as the Department of Government Efficiency (DOGE) has effectively hijacked the civic tech movement. While the US Digital Service focused on life-saving and government improvement functions, DOGE has used AI and other advanced technologies to burrow deep into administrative datasets and monopolize control. It’s time to flip the script (again) and break the government’s stranglehold on information. Rather than centralize power, let’s use AI to distribute it.

Leading with Purpose: Social Change in the AI Age: Delivered at the Kean University Honors Convocation

In our AI revolution, we face a pivotal choice between using these unprecedented cognitive tools to amplify our worst tendencies or solve humanity's greatest challenges. As New Jersey's Chief AI Strategist, I've witnessed firsthand how AI can transform public services, but becoming true "public entrepreneurs" requires more than technology—it demands purpose, partnership, problem definition, and participation to create meaningful change in an increasingly fractured world. Read my effort to offer hope to honors graduates of Kean University facing the collapse of dignity, decency, and due process.

House Republicans Include AI Regulation Preemption in Budget Reconciliation Bill

House Republicans have introduced a provision in the Budget Reconciliation bill that would prevent states from regulating artificial intelligence systems for a decade. This move represents a striking departure from traditional Republican advocacy for states' rights, as the party now seeks to impose federal preemption over state-level AI safety and accountability measures. Even if it doesn't survive markup, the intent is clear: technological accelerationism above all else.

Research Radar: Co-Designing AI Systems 

ETH Zurich researchers introduce "Value-Sensitive Citizen Science," a systematic framework combining design principles with citizen science to foster meaningful public participation in AI development. The paper provides a structured approach to embed community values directly into technical systems, which is critical as AI increasingly shapes societal outcomes.

Reimagining Data Governance for AI: Operationalizing Social Licensing for Data Reuse

As artificial intelligence systems become more reliant on data from low- and middle-income countries (LMICs), fundamental questions arise about who controls that data and who benefits from its use and reuse. In many cases, the people and communities who generate this data have little say in how it’s used, and few mechanisms for recourse when harms occur.

To address these challenges, The GovLab and Agence Française de Développement (AFD) have partnered on a new project exploring more equitable, participatory approaches to data governance in AI ecosystems. Today, we are pleased to release the outcome of that collaboration: Reimagining Data Governance for AI: Operationalizing Social Licensing for Data Reuse.

Governing AI: Fired Over Fair Use - The Bombshell AI-Training Report

President Trump’s weekend firing of the Register of Copyrights spotlights a 113-page bombshell report that brands large-scale AI training as prima facie infringement. The Copyright Office sides with copyright owners while offering a nuanced analysis and leaving it to the market to sort out. Dive into a jargon-free breakdown, the political context, and a reality check on where the Office’s analysis still misses the mark.

Global AI Watch: Listening to Public Servants - What Dubai and New Jersey Teach Us About AI Readiness

Dubai's comprehensive 60-question AI survey yielded just 4% participation while New Jersey's streamlined, AI-assisted approach garnered 5,000 responses in three weeks—yet both revealed similar insights about public servants' AI readiness. This natural experiment demonstrates that effective government listening must evolve to be shorter, faster, and continuous, while measuring success beyond efficiency to include quality, transparency, and meaningful human augmentation.

Fourth Wave of Open Data Seminar: Making Open Data Conversational

The Fourth Wave of Open Data, built around the combination of open data and generative AI, offers significant potential. Brought together, open datasets can become more accessible and conversational, and systems themselves can be better trained to answer questions.

Research Radar: Race, Democracy, and AI - Spencer Overton Offers a Framework for a More Inclusive Digital Future

In this week's Research Radar: The future of American democracy may hinge on whether artificial intelligence supercharges racial division or helps build more inclusive participation. Law professor Spencer Overton's groundbreaking two-part analysis reveals how AI technologies simultaneously threaten to amplify racial voter suppression and deception while potentially increasing participation by communities of color—with outcomes determined not by technological inevitability but by human choices about governance, design, and accountability.

People Before Platforms: Why OMB’s AI Memos Won’t Work Without Training

Last month, the White House's Office of Management and Budget released two memoranda that shift how our government will approach artificial intelligence. There's just one problem: the people tasked with implementing these ambitious directives haven't been prepared for this moment. To take the White House's vision for AI from idea to implementation, public servants need access to training that is tailored to their specific roles, grounded in real-world context, and aligned with the day-to-day realities of public service.

Global AI and Democracy Watch: UAE's AI-Powered Legislative Office: An Experiment Worth Watching

The UAE has announced its "Legislative Intelligence Office," promising to transform lawmaking through AI by integrating legislation, judicial rulings, and public services. While the system could potentially identify contradictions in laws and model policy impacts, serious questions remain about governance transparency, data quality, and accountability. This experiment deserves attention as legislatures worldwide struggle with information overload, but we must ask whether efficiency will come at the cost of deliberation and public participation.

Research Radar: Using Machine Learning to Map State Capacity

Norwegian researchers have created detailed maps of state capacity at the local level using machine learning to combine citizen surveys with geographic data across Africa. Their approach predicts government effectiveness in areas without direct measurements, offering new ways to target democracy interventions where they're most needed. This method could help identify representation gaps in communities, though challenges with data limitations and potential misuse remain.

AI Governance Watch: The Executive Order on AI in Education: Progress or Paradox?

The White House's new executive order on AI education promises to prepare American students for an AI-driven future through challenges, partnerships, and teacher training. Yet these initiatives come alongside dismantling the Department of Education, cutting education funding, and reducing research grants – creating a fundamental contradiction. For AI education to succeed, we need consistent funding, teacher support, research investment, equity safeguards, and public AI governance – not just corporate partnerships.

The Intersections of Open Data and Generative AI: New Additions to the Observatory — April

The Open Data Policy Lab’s Observatory of Examples of How Open Data and Generative AI Intersect provides real-world use cases of how open data from official sources intersects with generative artificial intelligence (AI), building on insights from our report, A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI.

From Citizen to Senator: How Brazil is Reimagining Citizen Engagement in the Age of AI

A recent Reboot Democracy series explored how Brazil’s Federal Senate is using AI and citizen engagement to improve the lawmaking process and the opportunities and limitations of these approaches. We’ve compiled all of these posts into a collection of essays exploring Brazil's pioneering democratic innovations and how AI could be used to further improve, expand, and deepen engagement with citizens.

The New Commons Challenge: Advancing AI for Public Good through Data Commons

$200,000 in funding available for innovative data commons that enhance local decision-making and disaster response

Unlocking Public Value with Non-Traditional Data: Recent Use Cases and Emerging Trends

Non-Traditional Data (NTD)—digitally captured, mediated, or observed data such as mobile phone records, online transactions, or satellite imagery—is reshaping how we identify, understand, and respond to public interest challenges. As part of the Third Wave of Open Data, these often privately held datasets are being responsibly re-used through new governance models and cross-sector collaboration to generate public value at scale.

This update profiles recent initiatives that push the boundaries of what NTD can do. Together, they highlight the evolving domains where this type of data is helping to surface hidden inequities, improve decision-making, and build more responsive systems.

Sourcing examples on how new forms of social data can be used for health-related research innovation

The GovLab, with the support of the Wellcome Trust Discovery Research team, is beginning a new initiative called Social Data 4 Health. The initiative aims to understand how new forms of social data can transform health-related research in the humanities, social sciences, and public health domains.

Launch: A Blueprint to Unlock New Data Commons for Artificial Intelligence (AI)

In today’s rapidly evolving AI landscape, it is critical to broaden access to diverse and high-quality data to ensure that AI applications can serve all communities equitably. Yet, we are on the brink of a potential “data winter,” where valuable data assets that could drive public good are increasingly locked away or inaccessible.

Data commons — collaboratively governed ecosystems that enable responsible sharing of diverse datasets across sectors — offer a promising solution. By pooling data under clear standards and shared governance, data commons can unlock the potential of AI for public benefit while ensuring that its development reflects the diversity of experiences and needs across society.

To accelerate the creation of data commons, the Open Data Policy Lab today releases “A Blueprint to Unlock New Data Commons for AI” — a guide on how to steward data to create data commons that enable public-interest AI use cases.

Fourth Wave of Open Data Seminar: Why the Fourth Wave Matters

Generative AI tools have attracted enormous attention. Yet questions remain about how they intersect with open data. How can generative AI help people better engage with open data? How can open data be used to enable societally beneficial use cases of generative AI? How can we address the many risks and challenges facing these systems?

Aligning Urban AI and Global AI Governance: Insights from a Paris AI Action Summit Side Event

On February 11, 2025, The Governance Lab (The GovLab) and Urban AI co-hosted an official side event of the Paris AI Action Summit, titled "Aligning Urban AI and Global AI Governance." Held in collaboration with Mouvement des Entreprises de France (MEDEF), Open Data France, DemocracyNext, and UN-Habitat, the event brought together policymakers, researchers, and city representatives to discuss how urban AI initiatives can align with broader governance frameworks to ensure responsible and inclusive AI deployment. 

Why Responsible Data Access will determine the Future of AI: The Increased Importance of Data Commons

Over the last year, the Open Data Policy Lab (a collaboration between The GovLab and Microsoft) has been exploring how to harness artificial intelligence responsibly and effectively as part of its work on the Fourth Wave of Open Data, an approach to data openness that explores intersections between open data from official sources and generative AI.

The way to unlock data responsibly for this fourth wave, we believe, lies with data commons—collaboratively governed data ecosystems designed to pool and provide responsible access to diverse, high-quality datasets across sectors. 

Driving Product Model Development with the Technology Modernization Fund

The Technology Modernization Fund (TMF) currently funds multiyear technology projects to help agencies improve their service delivery. However, many agencies abdicate responsibility for project outcomes to vendors, lacking the internal leadership and project development teams necessary to apply a product model approach focused on user needs, starting small, learning what works, and making adjustments as needed. 

Data Stewardship as Environmental Stewardship

Why responsible data stewardship could help address today’s pressing environmental challenges resulting from artificial intelligence and other data-related technologies.

Introducing the Updated AI Localism Repository: A Tool for Local AI Governance

Today, we're excited to announce the launch of the newly updated AI Localism Repository—a curated resource designed to help local governments, researchers, and citizens understand how AI is being governed at the state, city, or community level.

The GovLab Launches New AI Resources for Public Problem Solvers

This week, The GovLab and the Burnes Center for Social Change published two new resources aimed at leveraging the power of artificial intelligence and collective intelligence to tackle pressing public challenges. 

 

NEW REPORT: A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI

In the Open Data Policy Lab's new report, the team provides a framework and recommendations to support open data providers and other interested parties in making open data “ready” for generative AI.

AI Localism at AI Week: Empowering Communities with Digital Self-Determination

On April 17, 2024, during AI Week, The GovLab and UrbanAI hosted a webinar on AI localism, titled "Empowering Communities through Digital Self-Determination". The session aimed to investigate how AI governance can be localized to better serve community-specific needs.

Blogcast: What will the FTC ban on Non-Compete agreements mean for innovation?

Hannah Garden-Monheit, Director of the Office of Policy Planning of the Federal Trade Commission (FTC) talks with Seth Harris on the Power at Work Blog about the FTC's new rule banning almost all non-compete agreements in employment relationships. 

Civic Trust: What’s In A Concept?

To increase civic trust, we need to know what we mean by it and how to measure it, which turns out to be a challenging exercise. Toward that end, The GovLab at New York University and the New York Civic Engagement Commission joined forces to catalogue and identify methodologies to quantify and understand the nuances of civic trust. 

Learning Package for Responsible Data for Refugee Children

From 29 to 31 January 2024, UNICEF, UNHCR, and The Governance Lab at New York University hosted three 90-minute webinars on ways to support the well-being of children through data, highlighting how development and humanitarian practitioners around the world can reinforce data responsibility principles and practices in their daily work with and for children.

EVENT ANNOUNCEMENT: Guardrails: Guiding Human Decisions in the Age of Artificial Intelligence

Join us for a book talk with Urs Gasser as he delves into his latest work, "Guardrails." In this talk, Gasser will explore the ways in which societal norms shape our decision-making processes in an era saturated with data and dominated by rapidly advancing technologies like artificial intelligence. 

REGISTER: Data Responsibility for Refugee Children (29, 30 and 31 January 2024)

From 29 to 31 January 2024, UNICEF, UNHCR, and The Governance Lab at New York University (The GovLab) will host three 90-minute webinars to inform humanitarian practitioners around the world of ways they can reinforce data responsibility principles and practices in their daily work...

The Living Library's 2023 Book Recap: Our 5X5 curation

Welcome to our end of year 5 x 5 curation of books published in 2023: five books across five domains! 

Every week, the Living Library and its newsletter, The Digest, curate the most up-to-date knowledge on governance and data innovation. As we bid farewell to 2023, we took a moment to select five books across five domains (5X5).

Combining Human and Machine Intelligence for Enhanced Democracy with Sir Geoff Mulgan

On Thursday, November 16, the Burnes Center for Social Change and the GovLab hosted Sir Geoff Mulgan, Professor of Collective Intelligence, Public Policy and Social Innovation at University College London (UCL), for a thought-provoking lecture on “Combining Human and Machine Intelligence to Enhance Democracy” as part of the “Rebooting Democracy in the Age of AI” series. Mulgan discussed the challenges facing contemporary democracies and proposed innovative solutions to steer them into a more responsive and effective future.

Exploring the Power of Collective Intelligence: A Conversation with Prateek Buch and Brendan Arnold

In the latest episode of The GovLab's Collective Intelligence Podcast, Prateek Buch and Brendan Arnold from the UK Government's Policy Lab and its innovative Collective Intelligence Lab are interviewed by Beth Simone Noveck, Director of the GovLab and the Burnes Center for Social Change. Prateek, a renowned advocate for responsible technology use, and Brendan, the government's first Creative Technologist, shed light on their work leveraging technology to engage UK residents and address policy challenges.

NJ

In an interview with Raven Santana on NJ Biz Beat, we discussed the creation of New Jersey's AI Task Force, focusing on the responsible use of generative AI to improve government, create new jobs, and advance literacy and equity, and on how to balance the benefits and the risks.

Navigating the New Frontier: Generative AI in Gov

The cautious yet optimistic adoption of these technologies by cities like Boston, and states like New Jersey and California, signals a significant shift in the public sector landscape.

The journey from skepticism to the beginnings of strategic implementation reflects a growing recognition of the transformative potential of AI for public good. From enhancing public engagement through sentiment analysis and accessibility to optimizing government operations and cybersecurity, generative AI is not just an auxiliary tool but a catalyst for a more efficient, inclusive, and responsive government.

Embracing the same responsible experimentation approach taken in Boston and New Jersey, and expanding on the examples in those interim policies, the State of California this November issued an Executive Order and a lengthy but clearly written report enumerating potential benefits from the use of generative AI.

Artificial Intelligence can help us create a more efficient government

In the rapidly evolving technological landscape of the 21st century, the role of government in serving its citizens is undergoing a profound transformation. Emerging technologies, particularly Artificial Intelligence (AI), have become integral tools that everyone, including public servants, will need to leverage in order to improve efficiencies across business and government alike. Recognizing the urgency of this moment, President Biden's Executive Order on Artificial Intelligence has underscored the opportunity ahead of us: to train public servants in the responsible use of AI so they can best serve their constituents using top-of-the-line technology.

AI for the People: A Federal Mandate for Inclusive Engagement

Artificial Intelligence (AI) is not just transforming economies and industries; it holds the key to revolutionizing public engagement with the federal government.

Are we focusing too much on the risks of AI and not the potential for good?

At almost 20,000 words, President Biden’s behemoth executive order on AI mandates a laundry list of actions from federal departments and agencies. While there’s a lot to like here, we have to ask: are we focusing so much on the risks that we are failing to invest in and maximize the potential for AI to do good?

Google.org support to train more government workers in digital skills

Google.org is providing $2 million in funding and pro bono support to InnovateUS to help them provide digital skills training to public sector employees.

New Tool to Design Data Collaboratives

The term “data collaborative” refers to a new form of collaboration, beyond the public–private partnership model, in which participants from different sectors, in particular companies in the private sector, exchange their data to create public value. In this sense, data collaboratives are emerging as a powerful tool to enable data flows and unleash the full potential of data in addressing societal challenges. However, designing and implementing effective data collaboratives can be complex and challenging.

The Good and Bad of Anticipating Migration

This blog is the first in a series that will be published weekly, dedicated to exploring innovative anticipatory methods for migration policy. Over the coming weeks, we will delve into various aspects of these methods, examining their value, challenges, taxonomy, and practical applications.