05 October 2021

On September 15, the Center for Technology Innovation at Brookings hosted a webinar on how artificial intelligence can reduce government fraud. Darrell West, Brookings vice president and director of Governance Studies, moderated a conversation with Beth Simone Noveck, director of The GovLab, and Melissa Koide, CEO and director of FinRegLab.
The occasion for the event was Brookings's new report on the opportunities and challenges of government adoption of AI and machine learning to reduce wasteful spending.
The discussion covered how government agencies have already begun to deploy AI against fraud and abuse, some with great success. The report highlights the Centers for Medicare & Medicaid Services (CMS), which has used its predictive Fraud Prevention System (FPS) since 2011 to track provider and beneficiary activity and identify patterns at high risk of fraud. Over its first two years in operation, the FPS is estimated to have saved taxpayers over $50 million that would otherwise have gone to fraudulent schemes.
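The report does not detail how the FPS models work internally. As a rough illustration of the kind of pattern-based risk scoring such a system performs, here is a minimal sketch using an isolation forest over invented provider billing features; the feature names, values, and thresholds are all hypothetical, not CMS's actual method.

```python
# Hypothetical sketch of pattern-based provider risk scoring. The real
# FPS models and features are not public; everything here is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic provider features: claims filed per day, average billed
# amount, and share of claims using high-cost procedure codes.
typical = rng.normal(loc=[12, 150, 0.05], scale=[3, 40, 0.02], size=(500, 3))
outliers = rng.normal(loc=[60, 900, 0.40], scale=[5, 100, 0.05], size=(5, 3))
X = np.vstack([typical, outliers])

# Flag the providers whose billing patterns deviate most from the bulk.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)  # lower score = more anomalous
flagged = np.argsort(scores)[:5]
print("Providers flagged for manual review:", flagged)
```

The point of a system like this is triage: it surfaces high-risk claims and providers for investigators to examine, rather than adjudicating anything on its own.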
Impediments to Adoption of AI for Reducing Fraud
Despite the promise, Koide, Noveck, and West discussed several factors that limit agencies' ability to implement artificial intelligence.
Koide emphasized the importance of appropriate legal frameworks for the ethical implementation of AI in government. As more datasets are made available for analysis, many of which contain personally identifiable information, guardrails must be set to protect citizens' privacy. Moreover, complex algorithms already make decisions that significantly affect people's lives (the use of AI to determine credit risk is nothing new), so regulations are necessary to guarantee that these decisions can be explained and challenged rather than remaining a black box.
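To make the black-box concern concrete, here is a minimal sketch, with entirely hypothetical features and data, of how an interpretable model lets each individual decision be decomposed into per-feature contributions, the kind of accounting that allows a decision to be explained and contested.

```python
# Hypothetical sketch: an interpretable credit-risk model whose individual
# decisions can be explained. Features, data, and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["debt_to_income", "missed_payments", "years_employed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Synthetic labels: risk driven mostly by debt load and missed payments.
y = (X[:, 0] + 1.5 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([1.2, 2.0, -0.3])
# Each feature's contribution to the log-odds of denial: coefficient * value.
contributions = model.coef_[0] * applicant
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f} to the denial log-odds")
print("decision:", "deny" if model.predict(applicant.reshape(1, -1))[0] else "approve")
```

A more complex model might predict better, but if its individual decisions cannot be decomposed this way, regulators and applicants have little basis on which to challenge them.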
Drawing on her new book, Solving Public Problems, Noveck also pointed out how important it is that government officials be adequately trained in problem definition. While civil servants using AI don't need to be data scientists or statisticians, they do need to know how to ask the right questions of the available data. As she noted, this remains a major gap: none of the country's top public administration schools offers data science classes as a standard part of the curriculum, and the picture is no better in federal and state agencies.
The Need for Humans in the Loop
Some current uses of AI in government illustrate how the successful implementation of innovative technologies requires appropriate guardrails. Canada's Employment Insurance Sickness Program uses optical character recognition and natural language processing to assess the authenticity of doctors' notes, cross-referencing them with previous notes in order to detect fraud and abuse in workers' sick leave. While in principle this use case could reduce fraud and save money, without appropriate safeguards the algorithm could end up unfairly depriving workers of their benefits.
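The report describes the Canadian system only at a high level. As a rough sketch of what the cross-referencing step might look like, the following compares a newly submitted note against previously seen notes using TF-IDF cosine similarity and flags near-duplicates; the notes and threshold are invented, and crucially, a match only routes the case to a human reviewer.

```python
# Hypothetical sketch of cross-referencing a new doctor's note against
# prior notes; the actual system's methods are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_notes = [
    "Patient requires two weeks of rest due to lower back strain.",
    "Recommending ten days off work following minor knee surgery.",
]
new_note = "Patient requires two weeks of rest due to lower back strain."

vectorizer = TfidfVectorizer().fit(prior_notes + [new_note])
similarities = cosine_similarity(
    vectorizer.transform([new_note]), vectorizer.transform(prior_notes)
)[0]

# A near-duplicate is flagged for a trained human reviewer; it is never,
# on its own, grounds for automatically denying the benefit.
if similarities.max() > 0.9:
    print(f"Near-duplicate of a prior note (similarity {similarities.max():.2f}); "
          "route to reviewer.")
```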
To guard against exactly that risk, Canada has implemented a Directive on Automated Decision-Making, requiring that all automated decision-making systems be checked by experts trained in statistics and the appropriate uses of these technologies. Final decisions are always made by humans who understand the underlying algorithms, minimizing outcomes driven by biases in the data.
Government use of AI for fraud detection has potential beyond reducing unnecessary spending; it can also help accelerate contracting. Brazil's Federal Court of Accounts has been using AI to audit public spending and contracting during the pandemic, both to spot corruption and to make contracting more efficient. The system cross-references government contracts and budgetary reports with "red flag" analyses of potential vendors: for example, if a contractor was created during the pandemic or is tied to a politician, the algorithm raises the vendor's risk rating, as sketched below.
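Because the red flags themselves are spelled out in the report, a minimal rule-based sketch conveys the idea; the specific weights, fields, and the shell-company indicator below are hypothetical, not the court's actual rules.

```python
# Hypothetical sketch of red-flag vendor screening in the style described
# above; the actual rules and weights used in Brazil are not public.
from dataclasses import dataclass
from datetime import date

PANDEMIC_START = date(2020, 3, 1)

@dataclass
class Vendor:
    name: str
    incorporated: date
    politically_connected: bool
    employee_count: int

def risk_score(vendor: Vendor) -> int:
    """Sum of triggered red flags; a higher score means more scrutiny."""
    score = 0
    if vendor.incorporated >= PANDEMIC_START:
        score += 2  # company created during the pandemic
    if vendor.politically_connected:
        score += 3  # tied to a politician
    if vendor.employee_count < 5:
        score += 1  # possible shell company (invented indicator)
    return score

vendor = Vendor("Acme Supplies", date(2020, 5, 10), True, 2)
print(f"{vendor.name}: risk score {risk_score(vendor)} -> flag for audit")
```

Here too the score only prioritizes which contracts auditors examine first; it does not block a contract by itself.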
Use of AI in government is still a burgeoning field, with promising potential for detecting fraud, increasing efficiency and responsiveness, and generally improving service to citizens. To reach these goals and avoid missteps, governments have a series of lessons to learn: they must adopt policies that protect privacy, train public officials to understand the technologies being implemented and their risks, and put in place guardrails to prevent biases in the data from further disenfranchising disadvantaged groups. The speakers agreed that these tools, when used well, have tremendous potential to do good and save taxpayer dollars, and that the greatest risk may be the failure to use them. For a more thorough discussion of what governments need to do to tap the full potential of these emerging technologies, you can watch the webinar here.