
A Conversation With…

The Government Finance Research Center works with researchers from a variety of backgrounds to analyze the role that public finance plays in our lives. In the interviews below, we talk with experts to dig deeper into pertinent topics and get their perspective on the past, present, and future of government finance.

Amber Ivey, Vice President, Impact Advisory, Social Finance


Q: Are the fields of state and local finance and budgeting ready to start using AI?

Ivey: Actually, finance and budgeting are fields that are well-equipped to use artificial intelligence. For example, when it comes to things like fraud detection, tools with components of AI, like predictive analytics, advanced analytics, and machine learning, have been used in the past. Organizations are able to detect fraud by analyzing large amounts of data.


Q: Let’s talk a little bit about how it’s used for fraud detection.

Ivey: I’ve seen it used in places that pay out benefits to individuals, like unemployment insurance. Normally you will have historical cases where fraud has been detected. You can ask AI, “What can we learn from our current fraud data to more easily identify fraud in the future?”

You can say, “Here is this dataset,” and then ask, “Can you overlay it into an analysis that identifies any similar issues in this other dataset?” If we have identified fraud in the past, and have good data on that, you can train a model to identify future types of fraud involving any areas where money is moved back and forth.
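The supervised approach described here can be sketched in plain Python. This is a toy nearest-centroid rule over invented claim data, not any specific tool mentioned in the interview; a real system would use a proper machine-learning library and far richer features.

```python
# Minimal sketch: learn from labeled historical claims to flag new ones.
# All data, field names, and numbers here are invented for illustration.
from statistics import mean

def centroid(rows):
    # Average each feature across the rows (here: amount, claims_per_month).
    return tuple(mean(col) for col in zip(*rows))

def train(history):
    # history: list of (features, is_fraud) pairs from past, verified cases.
    fraud = [f for f, y in history if y]
    legit = [f for f, y in history if not y]
    return centroid(fraud), centroid(legit)

def dist2(a, b):
    # Squared Euclidean distance between two feature tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def flag(model, features):
    # Nearest-centroid rule: closer to past fraud than to past legit claims.
    fraud_c, legit_c = model
    return dist2(features, fraud_c) < dist2(features, legit_c)

history = [
    ((120.0, 1), False), ((95.0, 2), False), ((110.0, 1), False),
    ((900.0, 9), True),  ((750.0, 8), True),
]
model = train(history)
print(flag(model, (820.0, 7)))   # a new claim that resembles past fraud
print(flag(model, (100.0, 1)))   # a new claim that resembles normal claims
```

The point of the sketch is only the workflow Ivey describes: historical, labeled fraud cases become training data, and new cases are scored against what the model learned.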


Q: That’s a great start. But tell me about applications other than fraud detection please.

Ivey: If you step out of state and local government for a second, think about finance in places like Wall Street. They’ve been using these tools for a very long time to understand what trades to make. So, putting it back into the state and local government world, where they have all this access to data, it’s primed for AI to come in and help improve what they’re already doing around resource allocation and other decision-making processes. AI can assist in determining where to allocate funds to create budgets and fund initiatives. Additionally, we’ve been using components of artificial intelligence for budget forecasting, an area where I know there’s been a lot of experimentation for a long time.
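Budget forecasting can be illustrated with a toy trend-line projection — an ordinary least-squares fit, standing in for the far richer models the interview alludes to. The revenue figures are invented.

```python
# Toy budget-forecast sketch: fit a linear trend to past annual figures
# and project one period ahead. Figures are invented for illustration.
def linear_forecast(history):
    # Fit y = a + b*t by least squares over t = 0..n-1, then predict t = n.
    n = len(history)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(history) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, history)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a + b * n

revenues = [48.0, 50.5, 52.0, 54.5]  # annual revenue in $ millions (invented)
print(round(linear_forecast(revenues), 2))
```

In practice, forecasting tools layer seasonality, economic indicators, and uncertainty bands on top of this kind of trend extrapolation.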


Q: Let’s talk about using AI for resource allocation. How does that work?

Ivey: There are a few ways, and it depends on the person using it in the finance world. Imagine, for example, if I had a budget forecast for one agency and their actual outcomes. Now imagine a world where you upload the annual report and compare it to the budget and ask something like ChatGPT, “Please tell me where I have the most waste. What are areas where I need to invest more?”

You can do this kind of analysis without having to go through a whole process of someone literally doing a qualitative analysis or quantitative analysis on the report or other datasets.
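The budget-versus-actuals comparison described above can also be sketched directly in code. The agency names and dollar amounts below are invented; a real analysis would pull from the published budget and annual report.

```python
# Sketch of a budget-vs-actual variance report. Agency names and dollar
# figures are invented for illustration.
budget = {"Health": 50_000_000, "Transit": 20_000_000, "Parks": 5_000_000}
actual = {"Health": 44_000_000, "Transit": 21_500_000, "Parks": 4_900_000}

def variance_report(budget, actual):
    # Positive variance = underspend (possible over-budgeting or savings);
    # negative variance = overspend (possible underfunding).
    report = {}
    for agency, planned in budget.items():
        spent = actual[agency]
        report[agency] = {
            "variance": planned - spent,
            "pct": round((planned - spent) / planned * 100, 1),
        }
    return report

for agency, row in variance_report(budget, actual).items():
    print(f"{agency}: {row['variance']:+,} ({row['pct']:+}%)")
```

An AI assistant layered on top of this kind of report would then narrate the outliers; the arithmetic underneath is exactly the variance calculation shown here.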


Q: Any other examples?

Ivey: Yes. With data about the number of staff positions, the funding for the staff, the number of offices you have, and the number of managers who are in place, you could ask, “With all the information you know about our agency, how can I make sure this budget gets disbursed in the most effective way?” This is possible where organizations have closed AI systems learning from their agencies’ data.

Imagine if you were able to go back to the annual report, and let’s say the Health and Human Services Agency serves 10,000 people. And let’s make up a number. Say, it’s costing $50 million. You can input those numbers into the AI system and tell it what you spent it on. And then you can ask, “How can we ensure that next time we do this work, we do it for $45 million without losing quality? What are some of the areas where we can cut and trim some of the excess waste without risking our service quality?” With access to agency data, the types of analyses are limitless.


Q: So, then are budget offices leading these efforts?

Ivey: I’m not sure it’s often on the budget office side. It’s been more in the agencies at the state and local level that are exploring the use of AI. Agencies are trying to figure out how to pay for these tools and bring them into their departments. But budget offices are the place where it should be happening because they’re well primed for this.

I will say one thing that is worth thinking through for the future. General budget offices have access to large amounts of data. Granted, there’s always going to be bad data, but with a lot of the data, you can use AI to say, “Hey, look at this agency’s budget request and let me know when someone is just popping in numbers and hasn’t done the analysis.” AI can try to see if people are just making up numbers or just throwing numbers in, because humans do things in patterns that it can potentially identify. Another example I’ve seen at Social Finance is that we are working with organizations, using AI, to help identify the best jobs for people based on their experience and interests.
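The interview doesn’t name a specific technique for spotting made-up numbers, but one classic, illustrative check is Benford’s law: in naturally occurring financial amounts, the leading digit is 1 about 30% of the time, while fabricated figures tend to spread their digits more evenly. The sketch below is an assumption-laden example, not the tool described.

```python
# Illustrative first-digit (Benford's law) check for fabricated figures.
# This is a stand-in example, not the specific tool from the interview.
import math
from collections import Counter

def first_digit_freqs(amounts):
    # Share of each leading digit 1-9 among the nonzero amounts.
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

def benford_deviation(amounts):
    # Sum of absolute gaps between observed first-digit shares and the
    # Benford expectation log10(1 + 1/d); larger means more suspicious.
    observed = first_digit_freqs(amounts)
    return sum(abs(observed[d] - math.log10(1 + 1 / d)) for d in range(1, 10))
```

A budget office could run such a check over line items in a request and flag submissions whose digit distribution deviates sharply from the expected pattern for a closer human look.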


Q: Of course, AI can only produce answers that are as accurate as the data that it uses. Is that an issue? Are there ways to deal with that?

Ivey: Yes, bad data will remain a problem, and we need to consider that when using AI. What data was it trained on? Where are the biases? How can we use AI to identify those biases and correct them?

You can use a closed system so you can learn from the information that you have. For example, if I read a document at my job, AI can reference the documents that are in our SharePoint. I can ask it a question about my organization, and it will have access to only the relevant data. If I asked ChatGPT, without my company’s data, it would draw on much broader data and maybe get it wrong. So, I do recommend that if you’re using any of these tools to help in decision making, it should involve a system that’s closed to your organization. You can even train AI to identify historical biases and consider them in the decision-making process.


Q: And for those organizations that are using something like ChatGPT, any advice?

Ivey: Trust, but verify. If I have a new employee in my organization who is just starting, I’m not going to take their first analysis and send it to the president of the company without reviewing it first. You would just never do that. Treat AI the same way.


Q: The dangers of making decisions on AI when it’s using bad data are pretty serious, right?

Ivey: Yes. We’ve seen things like predictive analytics being used for child welfare, for example, and have seen cases where children were taken out of a home prematurely because the data was bad or biased. There are parts of the process where we need a human being in order to make a decision. In the world of predictive analytics, we’ve seen a lot of mishaps where people made decisions by relying totally on the risk scores that predictive analytics produced and not on the decades of experience the social worker also brings to the work. I don’t want the same thing to happen when it’s applied in other parts of government, whether it’s for the budgets or the outcomes for people who are getting services and using programs.


Q: But can you trust AI to perform basic clerical tasks like writing an e-mail for you?

Ivey: Yes, you can. It will come up with a generic response, and I still check it and make sure it sounds like something that came from me. But I don’t use it exclusively for things as simple as drafting an e-mail. We should go beyond the basics. AI has the potential to change how we think about and solve problems. The AI use I am interested in goes beyond efficiency to allowing humans to evolve how we do work and change the lives of all the people that government touches for the better.


This interview was conducted by Richard Greene, senior advisor, GFRC, and principal of Barrett and Greene, Inc.

Read More Conversations