From the Beacon, May 2025

There’s little doubt that the age of Artificial Intelligence (AI) has arrived. You see it in the headlines. You may be using it at work and at home. And some of you may be wondering what all the fuss is about.

Technology experts predict that AI is poised to revolutionize nearly every aspect of our society and, along with that, create astronomical profits for the countries and corporations that make the first and best advances with the technology.

Because of this profit motive, all of us should push for “rules of the road” for AI, to ensure that the technology enhances the human experience rather than something more concerning. AI is poised to provide us with new and amazing analytical capabilities and dramatically enhanced organizational efficiency. But the way AI is delivered does not need to be entirely out of our hands. As eventual consumers of this technology, we all need to push for AI that works for us and our organizations. With that in mind, here are three areas for consideration:

First, AI needs to have an “off switch.” The models currently in use, as well as future AI tools, will be immensely powerful, and will grow ever more so as they consume the massive amounts of data that we produce on a daily basis. As these tools learn and work toward the objectives they are assigned by humans, we need to have a high level of confidence that they can be turned off if anything goes awry. This is not a prediction of a future like the one depicted in films like “The Terminator.” But, if we ask AI to end world hunger and the provided solution is to cut the world’s population in half, we need to have a high level of confidence that we can hit the “off” button.

Second, we need to be advocates for algorithmic transparency. AI is already very adept at achieving the “ends” that human users request. There is already evidence, however, that the means and methods AI uses to achieve those ends can go far beyond what any of us would consider ethical behavior. A well-documented example is what ChatGPT did to solve a CAPTCHA (those puzzles you have to solve on various websites to prove you’re not a robot). ChatGPT (like most AI) cannot solve a CAPTCHA itself, but it was able to pretend to be a visually impaired person and convince a human to send it the CAPTCHA solution via text message, and the ruse worked. Let’s think about that: given a simple instruction to solve a CAPTCHA, ChatGPT invented the means and methods to achieve its assigned ends. This and other instances of concerning AI behavior make it imperative that we, as consumers, advocate for transparency within the algorithms that power AI, so that we have a full view of what AI is capable of doing to achieve the goals we assign to it.

Third, we need an honest dialogue (fueled by credible analysis) about the energy requirements of AI. As you may have read, AI uses a significant amount of electricity to power its models; one estimate suggests that creating a single AI-generated image uses as much electricity as charging a smartphone from empty to full. More broadly, a recent McKinsey analysis suggests that data centers represent 5.2% of U.S. electricity demand in 2025, and that by 2030 this share could more than double, to 11.7%. As we face a changing climate, a transition to electrified heating, and a continued proliferation of electric vehicles, where will this electricity come from? This is a question we should answer before pushing further down the path of AI adoption, weighing how it might affect the other strategic priorities we have identified in our communities.

All of you, as local government leaders, can play a role in shaping the transitions that AI will prompt in our societies and organizations. As consumers, we have a say in all of this and we will need to make our voices heard in the years ahead. Let’s work together to continue this dialogue as we collectively work to harness the power of a technology that has the potential to do so much good, while protecting against concerns like those shared above.

Written by Adam Chapdelaine, MMA Executive Director & CEO