This week, US senators heard troubling testimony suggesting that unchecked AI can steal jobs, spread disinformation, and generally “go completely wrong,” as OpenAI CEO Sam Altman put it (whatever that means). He and several lawmakers agreed that the United States may now need a new federal agency to oversee the technology’s development. But the hearing also saw agreement that no one wanted to rein in a technology that could increase productivity and give the United States a lead in a new technological revolution.
Worried senators might consider talking to Missy Cummings, a former fighter pilot and professor of engineering and robotics at George Mason University. She studies the use of artificial intelligence and automation in safety-critical systems, including cars and aircraft, and earlier this year she returned to academia after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla’s Autopilot and self-driving cars. Cummings’ perspective may help politicians and policymakers trying to weigh the promise of much-hyped new algorithms against the risks that lie ahead.
Cummings told me this week that she left NHTSA deeply concerned about the autonomous systems being deployed by several automakers. “We’re running into a serious problem with the capabilities of these cars,” Cummings says. “They’re not even close to being as capable as people think.”
I was struck by the parallels to ChatGPT and similar chatbots, which have people both excited and worried about the power of AI. Autopilot features have been around far longer, but like large language models, they rely on machine learning algorithms that are inherently unpredictable, difficult to examine, and demand a different kind of engineering thinking than in the past.
Also like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have been buoyed by absurd amounts of hype. Wild dreams of a transportation revolution prompted automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. The regulatory environment around self-driving cars was lax in the mid-2010s, as government officials were loath to put the brakes on a technology that promised to be worth billions to American companies.
Despite the billions spent on the technology, self-driving cars are still beset by problems, and some car companies have backed away from large autonomy projects. Meanwhile, Cummings says, the public is often unclear about how capable semi-autonomous technology really is.
With that in mind, it’s good to see governments and lawmakers move quickly to suggest regulation of generative AI tools and large language models. The current panic centers on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including confidently fabricating facts.
At the Senate hearing this week, Altman of OpenAI, the company that introduced us to ChatGPT, went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. “My worst fear is that we — the field, the technology, the industry — are doing great harm to the world,” Altman said during the hearing.