In the wake of OpenAI CEO Sam Altman's recent testimony before Congress, an intense national conversation is taking place about the potential existential risks posed by AI. Although he has defended the advantages of AI and the great benefits it can provide to humanity, Altman has also expressed concern that the technology industry could "cause great harm to the world," even going so far as to endorse a new federal agency to regulate AI and a licensing requirement for artificial intelligence companies. While his concerns deserve attention, it is necessary to weigh them against what we already know about AI and existential risk, as opposed to what is mere speculation.
One of the most vocal voices warning about the dangers of AI is Eliezer Yudkowsky, who has made the extraordinary claim that "the most likely outcome of building an intelligent, superhuman AI…is that literally every person on Earth will die." Superintelligence is usually taken to mean something along the lines of an artificial intelligence system that exceeds the intelligence and capabilities of the smartest humans in almost every field.
However, in a recent podcast with economist Russell Roberts, Yudkowsky was unable to offer any coherent mechanism behind this extraordinary claim. In other words, he could not provide a plain-English account of how the world would go from chatbots answering questions on the Internet to the literal end of the human race. Indeed, digging into the arguments raised by even the most strident AI pessimists, one can actually extract reasons for optimism rather than alarm.
An illustrative example lies in the concept of "instrumental convergence," the idea that there are intermediate goals an AI may set for itself on the way to whatever ultimate goal humans program it to pursue. For example, an AI tasked with producing tools might decide that accumulating money is the most effective strategy for achieving that end, because money allows it to buy factories, hire workers, and so on. The idea suggests that superintelligent AIs may tend to converge on similar strategies even when their end goals vary widely.
Critics of AI frequently use the example of AI systems striving to accumulate great wealth or resources, and simply assume this would be harmful. That perspective, however, can be colored by a fundamental hostility toward capitalism and wealth creation, both of which have historically underpinned human progress.
If a superintelligent AI seeks to maximize its wealth but is also programmed with reasonable restrictions, such as operating within the limits of existing law and accumulating resources only by meeting the needs of consumers or investors, then it is not clear why its accumulation of resources should cause any alarm.
Capitalism has long faced criticism that it tends toward monopoly, so perhaps the real concern is monopoly. Yet, with a few exceptions, competition has largely prevailed over monopoly in capitalist economies. Already we are seeing significant competition in the field of AI, and there is little reason to think this will not continue.
Thus, even if superintelligent AI systems aspire to acquire as many resources as possible, as long as they operate within legal limits (adjustable by humans as circumstances require) and aim to meet the demands of consumers and investors, their operations would to a large extent parallel traditional business activities in a market economy.
While this scenario may not suit communists and socialists, for those of us who value the benefits of markets, production, and businesses competing to meet consumer demands, superintelligent AI systems could be a catalyst for economic growth rather than a harbinger of the apocalypse.
So the question arises: why create new bureaucratic structures and licensing regimes if what little we know so far about the development of AI provides few reasons to worry about existential threats? According to Eliezer Yudkowsky, the onus is on those who doubt that AI poses an existential danger to refute his theory. But the burden of proof runs in the opposite direction: unsupported claims of potential dangers, especially sensational ones, require evidence before they deserve credibility.
Despite the attention-grabbing headlines, a deeper dive into AI doomsday narratives suggests little cause for concern. Rather than dragging us into the abyss, the innate tendencies of superhuman AI may lead to a convergence on practices beneficial to humanity. While ensuring the ethical and safe development and deployment of AI is critical, and is already happening to a large extent, overly restrictive regulations would hinder the technology's potential to drive economic growth and societal progress.
In short, the burden of proof falls on the doomsayers. Their claims that we are all going to hell in a handbasket should be met with healthy skepticism until solid evidence proves otherwise. On balance, based on what we know about AI and existential risk, we have more reasons for optimism than pessimism.