In a unanimous vote Tuesday, the Santa Cruz County Board of Supervisors approved a policy on artificial intelligence aimed at “harnessing the potential but also recognizing some of the risks, in particular on data privacy.” Big decisions will remain in human hands, a spokesperson said, but the county wanted a policy that could allow it to use AI as a tool while acknowledging its blind spots.
When Will Mayall received an invitation over the summer to work with Santa Cruz County on crafting an internal policy on artificial intelligence, the tech industry veteran felt a pang of skepticism.
“With this kind of technology, there has to be a realistic expectation of failure. In the tech world I come from, failure is an inherent part of the process, but that is, like, the complete opposite of government,” Mayall told Lookout on Tuesday. He said he felt the expectation laid upon government to get it right the first time and set firm policies was antithetical to the rapid and dynamic evolution of artificial intelligence.
But, at the urging of Doug Erickson, head of Santa Cruz Works and preeminent local tech scenester, Mayall agreed to help in an advisory role. By the end of his first meeting with county officials, his doubts had dissolved.
“I was so skeptical, but then, so surprised by how realistically the county is thinking about it,” Mayall said. “Artificial intelligence is not easy to think about; it’s not like anything that has existed before. The county knew that the evolution would be fast, and they were very realistic and pragmatic about it.”
Since the launch of ChatGPT in November 2022, artificial intelligence has gone from sci-fi ambition to front-page issue, dominating government agendas, opinion columns, talk shows and academic debates. ChatGPT, and more broadly generative AI platforms such as Google Bard, can answer user prompts in seconds, fluently collecting, organizing and sorting through much of the internet’s catalog, and that ability has, seemingly overnight, thrust society into a new era of the technology age. Google and other early search engines organized the internet into a searchable library of original sources; generative AI can take all the knowledge in that library and use it to answer most user questions.
As Mayall told Lookout, no one fully understands the power and potential of artificial intelligence at this early stage, but its abrupt arrival has raised serious concerns around plagiarism, data privacy, misinformation and bias. That lack of understanding has left governments scrambling as they try to develop internal policies for using the tool. The New York City school system tried an outright ban, but last month decided it wanted to remain open to the potential of artificial intelligence. Earlier this month, California Gov. Gavin Newsom signed an executive order directing the state to analyze the risks and rewards of artificial intelligence.
On Tuesday, the Santa Cruz County Board of Supervisors took a major step in unanimously adopting a policy on how the county government will use, and not use, artificial intelligence.
The county will not be handing over any decision-making to AI — it won’t use it for hiring, or anything involving employee benefits. If the county plans to employ AI for any form of public engagement or interaction, it must let the public know.
And the county will require a citation if an employee uses AI to contribute a “substantial” portion of a project. For example, the bottom of the county staff’s report submitted to supervisors ahead of the vote read: [AI assistance: An artificial intelligence large language model tool contributed to the development of this report. OpenAI GPT-4, 2023, in compliance with the proposed County of Santa Cruz Appropriate Use Policy]. However, the policy does not define how to measure a “substantial” contribution from AI.
“Folks I’ve spoken to in the tech industry are actually incredibly impressed by the county’s approach to this and really optimistic,” District 1 Supervisor Manu Koenig said ahead of the vote Tuesday. “I’m happy we have an interdivisional working group and communication with industry and we’re actually measuring employees who are using this.”
District 2 Supervisor Zach Friend, who led the push to develop an AI policy for county government, said the guardrails adopted Tuesday for the “unquestionably transformative” technology walk the line of “harnessing the potential but also recognizing some of the risks, in particular on data privacy.”
The county’s guardrails around the use of generative AI platforms such as ChatGPT and Google Bard are intentionally not ironclad, an acknowledgment of the pace of the technology’s evolution. The supervisors gave the county administration the power to regularly update the policy as the abilities of AI expand. Staff will come back to the supervisors in March 2024 with a new draft.
Even before the supervisors called for a county AI policy, the government began tracking employee use of generative AI, a term most commonly associated with tools such as ChatGPT and Google Bard that generate answers in response to user prompts. From May through August, county employees used ChatGPT and Google Bard 33,000 times, and the frequency of use only grew over that period.
Although 302 county employees used it during that time, nearly half of the sessions were tied to only 30 employees. The top five departments using AI are the Health Services Agency, Information Services Department, Sheriff’s Office, Human Services Department and County Administrative Office, said David Brown, an analyst inside County Administrator Carlos Palacios’ office.
Brown said the county has not tracked how employees use AI, only how often. However, he said conversations with employees have shown the most common uses “have varied from programmers using it to inform code development, developing spreadsheet formulas, general ideation around various subjects, first drafts of communications, developing agendas, creating survey question options, etc.”
The county will include the new AI policy in its standard “acceptable use” policy that all county employees must sign before using government computers. Training on AI is also planned, Brown said, and will likely focus on crafting prompts, ethical use and familiarity with the AI tools the county adopts.
Jason Hoppin, the county’s communication director, said big decisions will remain in human hands, but that the county wanted a policy that could allow the local government to use AI as a tool while acknowledging its blind spots.
“We … don’t want to hand over big decisions to a machine,” Hoppin said via text. “Flexibility is important because some jurisdictions have already yo-yoed between outright bans and allowing the use of AI, while others have implemented restrictions that we think inhibit adaptation [to the tool]. It’s already being rolled into software platforms, so we also need to understand how those work and what the risks and benefits are.”