From the Beacon, June 2023

Much has been made of the dramatic advance of artificial intelligence technology during the past few months, spurred on by the remarkable proficiency of ChatGPT, the AI chatbot that can create sophisticated human-like essays and conversations based on simple prompts. Media coverage has been growing ever since ChatGPT was released last November, primarily focused on the worry that the technology is scaling faster than our understanding of how it will impact our lives.

The idea that AI poses a threat to humanity may seem excessive, yet it is beyond disturbing to read that the very people leading the most advanced AI technology companies have signed on to a statement organized by a new coalition, the Center for AI Safety. These executives and scientists warned publicly that AI could pose a threat to humankind on the order of nuclear war and deadly pandemics.

“Mitigating the risk of extinction from AI should be a global priority alongside [these] other societal-scale risks,” they said.

Increasingly, technology leaders are asking governments to regulate AI capabilities and uses, an admission that the marketplace will not regulate itself, and that the drive to compete and the impulse to innovate will trump caution and reason.

AI offers great promise, particularly in addressing deeply complicated and multifarious challenges. This promise will not come without great cost and disruption, of course, as AI may very well replace or reduce the human role in a wide range of specialized professions, including finance and accounting, economics, actuarial analysis, insurance underwriting, architecture and engineering design, life sciences and biotechnology, medical diagnostics and research, and so much more.

Compared to computers and sophisticated machine technology, humans have little ability to understand complex interrelated systems, especially when time delays and multiple disciplines are introduced. AI could be put to great use to inform human decision-makers and provide insights to help people understand relationships, causations and possibilities.

But machines and software programs are not ethical. They are not compassionate. They do not feel. They do not care how they achieve their purpose, or what their purpose is. Without guardrails, people and nations with bad intent will use AI to make it nearly impossible to discern truth from lies. AI could be used to manipulate social media to amplify misinformation and conspiracy theories. AI could steal personal information, create aliases, and replace identities. Our online personas on Facebook, Instagram and Zoom could be hijacked and manipulated.

For local government and communities, unregulated AI applications could be highly disruptive, primarily because municipalities are built on trust, consensus and engagement. We’ve all seen how social media and remote technologies have amplified uncivil behaviors, making it harder to serve in public roles and to recruit volunteers into the fray. Now imagine a world where AI is used to splinter public dialogue, spread lies and misinformation, undermine political foes, erode confidence in elected leaders, and push self-interested agendas. There would be no misery index or empathy threshold to stop the technology from marching on. Thus, AI has the potential to fracture our communities, scale up misinformation and distortions, and make trust-building a hugely difficult and time-consuming challenge.

Interestingly, this issue is related to the March ruling by the Massachusetts Supreme Judicial Court striking down the enforcement of civility codes during meetings of public boards and committees. The SJC was unanimous in its finding that free speech rights allow discourteous, rude and negative comments directed toward government officials, especially in public settings. The decision does not protect behavior meant to incite violence, and does not extend to personal slander outside of public meetings. The ruling, however, underscores the challenges that federal, state and local leaders will have in regulating the use of AI in shaping public discourse and engagement.

As we emerge from the pandemic, there is a great deal of speculation on how current technology platforms such as Zoom can be used to encourage and increase citizen participation and engagement in local government. But if AI can be used to penetrate Zoom and create fictitious personas or false representations, we may be forced to go fully back to in-person engagement, especially if that’s the only way we can discern fact from fiction. And that’s just one small example.

The bottom line is that government must take action to regulate AI. If Congress and federal agencies are deadlocked, then states will need to fill the void by taking some basic steps. At the very least, AI-generated information, identities and personas should carry a watermark or disclosure to help us discern the authentic from the inauthentic. Governments should be empowered to verify and opine on the validity of information. Technology that can be easily abused to distort reality should be identified and controlled.

How AI can be regulated without stymieing its benefits is no clearer than our current insight into AI’s potential to undermine our democratic institutions. Because AI has the potential to disrupt our communities and interfere with the leadership of government, this is an urgent issue for municipalities and the individuals who live and work in our cities and towns.

(Disclosure: this column was written by me, not ChatGPT – but how can you be sure?)

Written by Geoff Beckwith, MMA Executive Director & CEO