AI Companies Are Trying to Build God – Shouldn’t We Have a Say?

AI companies are creating powerful technologies like AGI without public consent. This article explores the ethical implications and the need for democratic oversight in shaping our future with AI.
By Rose · Email: srose@horoscopesnews.com

Oct 12, 2024


Artificial Intelligence (AI) companies are working on technologies that could reshape the world in ways we can barely imagine. These companies are aiming to create machines with intelligence that surpasses human capabilities, sparking both awe and concern. Sam Altman, CEO of OpenAI (the creators of ChatGPT), openly speaks about his mission to develop "magic intelligence in the sky," or what’s formally called Artificial General Intelligence (AGI). He even admits that AGI might “break capitalism” and warns that it could be “the greatest threat to humanity's continued existence.”

However, this raises a crucial question: Did anyone ask for this? How do a handful of tech CEOs get the authority to initiate a world-changing transformation without public consent? Shouldn't society have a voice in these decisions?

The Need for Public Consent in AI Development

Altman’s grand vision of AGI is not something that sits comfortably with everyone. Building a superintelligence capable of reshaping economies and societies feels inherently undemocratic if the general public isn’t involved. Jack Clark, co-founder of AI company Anthropic, echoed this sentiment, stating, “It’s a real weird thing that this is not a government project.” He, like others, expresses unease over how much permission AI developers need to obtain from society before fundamentally altering it.

This is not a new debate. Silicon Valley has long championed the idea of "permissionless invention," where massive, society-altering innovations are released into the world without public input. This approach, which drove the rise of social media and ride-sharing platforms like Uber, has shaped our society in ways we didn’t always foresee. The question is, should the same “move fast and break things” mindset apply to AI?

Objection 1: “Our Use Equals Our Consent”

A common argument made by AI enthusiasts is that widespread usage of AI applications like ChatGPT indicates public consent. After all, ChatGPT is the fastest-growing consumer application in history, reaching 100 million users just two months after its launch. People clearly find AI useful and exciting, whether it's for making grocery lists, writing code, or editing documents.

However, using an AI tool doesn’t necessarily imply informed consent. Most users don’t fully understand the broader societal implications or the environmental costs of AI. For instance, few people realize that the energy demands of generative AI models are so significant that companies like Google and Microsoft have had to reconsider their climate commitments. Furthermore, many feel coerced into using these technologies because of professional or societal pressures, much like the way many of us feel forced to use social media even though we dislike it.

Even if the public’s use of AI implies consent, it’s important to distinguish between narrow AI (designed for specific tasks like language translation) and AGI (a general-purpose superintelligence). While narrow AI is generally welcomed, polls show that most people are wary of AGI, preferring a more cautious approach to such a potentially world-altering technology.

Objection 2: “The Public Doesn’t Understand Innovation”

Another common defense of rapid AI development is the claim that the public is too ignorant or unimaginative to guide technological innovation. The famous (and likely apocryphal) quote from Henry Ford comes to mind: “If I had asked people what they wanted, they would have said faster horses.”

Many of the world’s greatest innovations, from the printing press to electricity, were not the result of public demand but rather the vision of a few extraordinary individuals. Yet, while innovations like the telegraph or the internet revolutionized society, they didn’t carry the existential risks that AGI could bring, such as threatening humanity’s survival or subjugating us to a superior intelligence.

For technologies with such high stakes, democratic input becomes crucial. Society has successfully established oversight for other dangerous technologies, such as nuclear weapons and biological weapons. Treaties like the Nuclear Nonproliferation Treaty and the Biological Weapons Convention show that it is possible — and essential — to establish global standards and regulations for technologies that affect all of humanity.

The public doesn’t need to dictate the specifics of AI policy, but we do have a right to voice our opinions on broader questions, such as: Should governments enforce safety standards for AI before a disaster strikes? Are there certain types of AI that should never be developed?

Objection 3: “You Can’t Stop Innovation”

The final argument often made in favor of unbridled AI development is that technological progress is unstoppable. According to this view, trying to control AI development is futile — innovation will continue regardless of regulations or restrictions.

But this argument is a myth. There are plenty of examples of technologies that society has chosen not to pursue, or to tightly regulate. Human cloning, for example, has been largely rejected by the global scientific community. Similarly, in the mid-1970s, DNA researchers voluntarily imposed a moratorium on certain risky recombinant DNA experiments, and at the 1975 Asilomar Conference they agreed on safety guidelines before resuming the work — leading to a cautious, measured approach in genetic engineering.

In the case of AI, we have an opportunity to apply the same caution before AGI is unleashed. Just as the 1967 Outer Space Treaty prohibited placing nuclear weapons and other weapons of mass destruction in orbit, we can establish international agreements to govern AI development. The stakes are too high for us to adopt a “wait and see” approach.

Conclusion: A Call for Democratic Deliberation

As the old Roman legal maxim has it, “What touches all should be decided by all.” When it comes to superintelligent AI, the potential risks and rewards are so vast that it would be reckless to leave the decision-making to a few tech CEOs. The public deserves a say in how AGI is developed and regulated — before it’s too late.

In short, building a “god-like” AI without public consent is not just undemocratic; it’s a gamble with the future of humanity. And that’s a risk we can’t afford to take without broad, informed input from society.
