AI Is Here. How Will Government Use It — and Regulate It?

Source: Governing Magazine | Donald F. Kettl

It’s hailed as the next wave of transformative technology, but artificial intelligence’s market growth and rapid deployment raise a host of issues, from safety to privacy to equity.

The rise of artificial intelligence (AI) technology has significant implications for state and local governments. One of the main implications is the potential for AI to improve the efficiency and effectiveness of government services. For example, AI-powered chatbots can provide 24/7 customer service for citizens, while machine learning algorithms can analyze large amounts of data to identify patterns and insights that can inform decision-making.

Additionally, AI can be used to automate routine tasks, such as processing paperwork and data entry, freeing up government employees to focus on more complex and value-added tasks. However, there are also concerns about the impact of AI on jobs and privacy, and governments will need to consider these issues as they implement AI-based solutions.

Analysts hail AI as the next wave of technology. So I tried it out with the new chatbot, ChatGPT. I asked it to tell us about the implications of AI for government and federalism, and the bot wrote the first two paragraphs above.

The result: Not too bad. In fact, it might be difficult to distinguish the chatbot’s contribution from lead paragraphs written by a human. The paragraphs could use some good editing, and there are missing pieces, like just what those “more complex and value-added tasks” might be. But on the whole they’re not bad, and they pretty neatly encapsulate the promise and the possible pitfalls of this rapidly developing technology, not only in our daily lives but also for government at all levels. And they raise, at least obliquely, the question of how this technology should be regulated, and by which level of government.

AI opportunities are emerging faster than we can keep up with them. Boston Dynamics, for example, has developed an AI-powered robotic dog named Spot that can handle complicated tasks on its own, such as traversing unfamiliar terrain and figuring out how to open doors, rather than needing an operator to walk it through each step. In dangerous situations, deploying Spot instead of police officers could be a real life-saver.

To show off what its robots can do, Boston Dynamics demonstrated Spot on “60 Minutes” and produced a video of dancing robot dogs that drew millions of views. New York City’s police department was impressed enough to lease one of the robots for a test. “This dog is going to save lives, protect people and protect officers, and that’s our goal,” explained Frank Digiacomo, an inspector in the department’s technical response unit. The robot, christened Digidog, proved handy in a hostage situation, for example, carrying food in to the hostages.

But Digidog creeped out New Yorkers who saw the mutt on patrol. Critics complained that the robot aggravated racial tensions and was being used as a “robotic surveillance ground drone,” as U.S. Rep. Alexandria Ocasio-Cortez put it. Amid the complaints, New York ended the test early and sent the robot back to the manufacturer.

In Dallas, a robot (ironically, one intended for bomb disposal) carried an explosive to where a gunman was hiding and blew him up. While that robot was controlled by a human, its use nonetheless fueled debate about just how artificial intelligence could and should be used in policing. Those debates have spilled over to other detection systems: police surveillance of license plates, the linking of doorbell cameras to police investigations and the tracking of suspects through cellphone pings. The Transportation Security Administration (TSA) is testing facial recognition at selected airports.

