Boston, Seattle Issue Interim Generative AI Guidance

Source: GovTech

Does your local government need a stance on generative AI? Boston encourages staff’s “responsible experimentation,” Seattle’s interim policy outlines cautions, and King County begins considering what responsible generative AI use might be.

Is it time for local government to try generative AI? Boston thinks so.

The city is actively encouraging its staff to test out the tools, while taking precautions. It announced the move in a May 18 email and interim guidelines sent to city staff.

Boston may be one of the first to take such an approach.

Boston CIO Santiago Garces told Government Technology that he wanted the city to “embrace the spirit of responsible experimentation.”

Rather than wait, he said the city should start learning about the potential benefits and risks.

“Whenever there’s an opportunity of delivering government services better, I think that it is our obligation to also learn about it, and if there’s risks, understand those risks,” Garces said. That could only happen if the city first established a framework guiding safe exploration.

Plus, lack of guidance or an official stance doesn’t mean people will necessarily hold off on engaging with the tools.

“We started to think that there’s a number of people that were probably already using it either for work [or in personal lives] and we figured it was better for us to be ahead and provide guidelines, rather than wait and ignore the fact that there’s this kind of revolutionary technology that was at the disposal of a broad set of people,” Garces said.

Boston isn’t the only city thinking this way. Seattle interim CTO Jim Loter sent an interim generative AI policy to city staff in April.

That document takes a more cautionary tone than Boston’s; it does not recommend specific use cases and it focuses on outlining concerns and ways to reduce risks.

“[Boston’s] provides more guidelines and direction for staff when they are using these technologies and experimenting with the technologies,” Loter told GovTech. “The risks and the considerations in the Boston policy are very, very similar to ours. So, I think we each independently cogitated on this and came up with the same list of concerns.”

Seattle’s interim policy lasts through October, after which it will need to be extended or replaced. But the technology’s quick uptake made it important to provide some guidance now, rather than wait until a more permanent policymaking process could be completed. At the same time, Seattle is forming an advisory team to help develop a more formal, long-lasting policy.

“We’ve seen the generative AI technologies like ChatGPT and other tools just achieve such rapid adoption over such a short amount of time that it felt like the responsible thing to do to address it head on, very quickly assess risks, consider the opportunities and offer direction to city employees,” Loter said. “[The interim policy says,] ‘Go ahead and use this stuff, but here are the ways in which you need to use it cautiously, carefully and responsibly.’”

BOSTON SEES EFFICIENCY, EQUITY RISKS & BENEFITS


Boston’s guidelines said generative AI could be helpful for purposes like summarizing documents and writing first-draft job descriptions or translations into other languages. Such tools can help staff produce drafts with clear and simple phrasing and can be instructed to write materials tailored for different reading levels.

But the document also advises staff to use their best judgment and take responsibility for correcting the technology’s mistakes. The guidelines warn that generative AI can produce incorrect, biased or offensive results, and that it cannot be expected to keep information shared with it private. Staff should also be transparent about their AI use, disclosing when they’ve used generative AI tools, which model and version they used and who edited the output.

“Think about how racial and ethnic minorities, women, non-binary, people with disabilities or others could be portrayed or impacted by the content,” the guidelines note.

Alongside such warnings, Boston’s guide also suggests that staff could use the tool to help them think about a topic from different perspectives. Users can tell a generative AI system to respond to a prompt from a particular point of view, which might prompt them to see the issue in a new light. As an example, Garces said he tried asking for John Adams’ perspective on immigration reform.

“We think that the tool can also help people be considerate of certain groups,” Garces said. “It doesn’t replace community engagement, but it is a very low-cost, quick way of getting a different perspective when we’re trying to get an idea around reactions.”

The guidelines are an early-stage effort and should ultimately be replaced by firmer policies and standards, per the document. And while the city broadly recommends that staff experiment, it advised the public school system to hold off and wait for more tailored guidance.

One reason for caution is that the city’s guidelines expect users to vet the accuracy of generative AI output using their professional judgment, but schoolchildren are still developing such expertise, Garces said. Still, the city will need to address this context, because “the reality is that these kids are going to grow up in an environment where these tools are going to be available.”

A CODING AID?


Boston’s Garces envisions generative AI as a timesaver for IT staff. For example, he said it could help developers translate code into programming languages they are less familiar with, by recommending code snippets.

“One of the things that we struggle [with] in government technology is, usually we have a few employees that are supposed to know a lot about a number of different…


Chelsea Collier