The headline jogged me out of my pre-caffeinated morning daze: "Harris to meet with CEOs about artificial intelligence risks," the Associated Press reported on May 4.
The article previewed the day’s meeting between Vice President Kamala Harris and the CEOs of corporations at the forefront of artificial intelligence research and production, including Alphabet (the parent company of Google), Anthropic, Microsoft, and OpenAI.
Harris planned to announce funding for "seven new AI research institutes" and outline the government’s next moves on this important topic, according to the story. "The government leaders’ message to the companies," wrote correspondent Josh Boak, "is that they have a role to play in reducing the risks and that they can work together with the government."
That’s what shook me awake. Since ChatGPT was released last November, there has been a great deal of debate over the potential consequences of artificial intelligence. All the talk has been speculative, and most of it catastrophic in outlook. It was perhaps inevitable that lawmakers would eventually become involved in regulating such groundbreaking technology. But does it have to be Kamala Harris? Does it have to be President Joe Biden who tackles the problems and dilemmas arising from generative AI? Haven’t Harris and Biden caused enough harm?