Politicians Use ChatGPT to Push Their AI Agendas
From wanting to regulate it to wanting to use it for good, AI is on everyone's agenda.
California Congressman Ted Lieu introduced a resolution in the U.S. House of Representatives yesterday calling on Congress to commit to regulating AI. The resolution avoids the word "regulation," but that is the intent. Lieu's office issued a statement saying the House resolution is the "first in the history of Congress to have been written by AI."
Yes, the Congressman asked ChatGPT to write the resolution based on the prompt:
“You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.”
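Lieu's office presumably entered that prompt through the ChatGPT web interface rather than through code. For readers curious about how the same prompt could be submitted programmatically, here is a minimal, purely illustrative sketch using OpenAI's Python client; the model name and settings are assumptions for the example, not details disclosed by the Congressman's office.

```python
# Illustrative sketch only: the Congressman's office reportedly used ChatGPT itself,
# not the API, and the model below is an assumption chosen for demonstration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "You are Congressman Ted Lieu. Write a comprehensive congressional resolution "
    "generally expressing support for Congress to focus on AI."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model for the example
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the generated resolution text
```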
This occurred one day after Massachusetts Congressman Jake Auchincloss claimed to have delivered the first congressional speech written by AI (again, ChatGPT) on the House floor. While a resolution is not a law, or even a bill, it is an official action taken by Congress and is recorded in the Congressional Record. So are speeches by Congressmen.
Also of note: while Lieu’s focus is on regulation, Auchincloss’ speech advocated for legislation “that would set up an AI research center operated by the U.S. and Israel,” according to Voicebot.ai.
Should AI Content Be Labeled?
ChatGPT is becoming famous. Interestingly, neither the resolution nor the speech, as entered in the Congressional Record, indicates in its text that it was written by AI. That narrative had to be told by the Representatives’ press offices. I wonder if this sets a precedent that the Congressmen did not intend.
Congressman Lieu believes AI needs a regulatory agency similar to the U.S. Food and Drug Administration because the risks of the technology are significant. He admits that it may take some time for enough Congressmen to agree with him. However, he would likely get much more support now for regulations requiring that AI-generated content be labeled as such. That would be nearly impossible to define, administer, and enforce, but there is a rousing chorus around the idea. Yet neither legislator included such a label in his official document.
Governments and AI Regulation
Elected officials and government personnel around the world are thinking about AI regulation. China has enacted new laws that will take effect next week. Several U.S. Representatives will introduce legislation this year proposing new laws or a commission to evaluate the technology and suggest an appropriate regulatory framework. This is sure to happen at both the state and federal levels.
The EU is also tackling this issue. The AI Act overview says:
The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.
You should expect activity to accelerate in 2023. Many in government had viewed AI either as a current problem rooted in how algorithms shape the consumer internet or as a future problem involving the rise of killer robots. However, few everyday citizens could relate to the technology. That all changed with ChatGPT.
A Cultural Phenomenon
This may ultimately be the more significant impact of ChatGPT. References to the product are showing up in popular culture, business discussions, and, increasingly, government. Politicians are notorious for tapping into cultural trends to raise their profiles, and one way they do this is through proposed regulations. However, the more popular a particular technology becomes, the harder it often is for elected leaders to enact too many regulations without irritating their constituents.
ChatGPT is generally being received positively in the marketplace. The sentiment seems to lean toward enthusiasm over fear. This is not AI-enabled killer robot dogs, after all. The more popular ChatGPT and similar technologies become, the more likely they are to receive light treatment from regulators. This will be an important theme of AI in 2023.
"Google is in reaction mode at the moment. However, it has proven to be the best fast-follower of technology trends in recent memory. It’s time to see if the company can still live up to that track record or if it will become the next high-profile victim of the innovator’s dilemma." I'd respectfully argue Apple, at least in terms of which company has been most successful as a "fast-follower" e.g., Apple I & II, Mac, iPod, & especially the iPhone to name a few. Of course, one could also make the case for Microsoft and Windows!