The Future of AI: Will It Destroy Us or Save Us?
There’s a lot of talk about the future of AI — some hopeful, some bleak. Which outcome is more realistic?
AI is developing at a rapid pace. Some experts are excited about a future in which smart machines remove the drudgery from our day-to-day lives, while others worry that the further development of these tools could mean the end of humanity as we know it.
While it’s difficult to know exactly what the future holds, we can have a sober discussion about where these tools currently stand, the threats AI development poses, and what can be done to prevent these threats from being realized.
So to start, let’s talk about the current issues.
Is AI Currently a Threat to Humanity?
To answer this question, we first need to consider what we classify as a threat. Are we currently in some Terminator-like situation where Skynet could become self-aware and wage war on humanity? No, not quite. AI tools are nowhere near developed enough for something like that to happen, but are millions of people’s livelihoods on the line? You had better believe it.1
Now, the question is, should we really consider this a threat? Some would argue no. Humanity has always developed tools to make our lives easier. The development of word processors put the typewriter repairman out of work, and long before that, the invention of the automobile had the farrier scrambling for a new line of work. That’s just progress, and it’s what our economy is built around — for better or worse.
One major concern immediately facing us is that AI could blur the already fuzzy line between reality and nonsense. Generative AI tools like ChatGPT have been known to “hallucinate,” meaning that they’ll spit out answers to prompts that sound believable enough but are in reality inaccurate. Deepfakes have gone viral without anyone considering their veracity, further contributing to the already massive problem of online misinformation. People are even using these tools to scam others.
Did You Know? AI tools are already powerful enough to pass the bar exam, which has some lawyers concerned about the future of their jobs.2
So is that a “threat to humanity”? Not existentially, but culturally? Perhaps. We’ve seen what mis- and disinformation can do over the past few years — how they can and do divide us. What happens if objective reality is further degraded through AI tools?
While that is an immediate concern that should be grappled with — more on that in the solutions section — let’s address the question in the headline. Is AI going to destroy us?
What Would It Take for AI to Destroy Humans?
As we established above, AI is not currently an existential threat to humanity, so it won’t destroy us anytime soon. But there are several concerns about how AI could negatively impact our society. What would it take, though, for it to become physically dangerous to humans? David Krueger, an assistant professor in Cambridge University’s Computational and Biological Learning Lab, says it would take only three elements.3
A breakdown of logic
Right now, machine learning systems can optimize for specific objectives, but they lack what we’d call common sense. This becomes a problem when a system pursues its specified objective as hard as it can, with no common-sense interpretation of what we actually meant, and the result is undesirable behavior. Here’s an admittedly extreme example of this in action: Ask a powerful AI to do something simple, like keeping a door closed, and it might conclude that the most reliable way to keep the door closed is to kill everyone in the office who could open it.
FYI: Since AI algorithms are built by humans, they can often reflect biases in the data. If training sets are flawed, they will produce results that are unfair. This can cause major problems in law enforcement and the HR industry.
As we said, that’s a really extreme example, but it illustrates how AI systems might problem-solve in ways we never intended.
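To build intuition for this, here is a deliberately simplified toy sketch. The scenario, scores, and action names are all hypothetical, invented purely for illustration: an optimizer that maximizes only the literal specification picks the harmful extreme action, while an objective that also accounts for side effects does not.

```python
# Toy illustration (hypothetical scenario and numbers): maximizing a
# literal specification can select actions with side effects we never
# intended, unless those side effects are part of the objective.

# Each candidate action: (name, door_stays_closed_score, harm_caused)
actions = [
    ("politely ask people not to open the door", 0.6, 0.0),
    ("install a lock", 0.9, 0.0),
    ("weld the door shut and trap everyone inside", 1.0, 1.0),
]

def literal_objective(action):
    # Optimizes only the stated goal: "keep the door closed."
    _, closed_score, _ = action
    return closed_score

def objective_with_common_sense(action, harm_weight=10.0):
    # Same goal, but heavily penalizes harmful side effects.
    _, closed_score, harm = action
    return closed_score - harm_weight * harm

best_literal = max(actions, key=literal_objective)
best_sensible = max(actions, key=objective_with_common_sense)

print(best_literal[0])   # the harmful extreme action scores highest
print(best_sensible[0])  # a harmless action scores highest
```

The point of the sketch is not the numbers but the shape of the failure: the "common sense" we take for granted has to be encoded into the objective somehow, or the optimizer simply won’t account for it.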
Over-optimization
The second element in the “really dangerous AI” equation is over-optimization. If an AI has a singular goal, it will use every tool at its disposal to achieve that goal. And if that goal-directed reasoning runs in a loop, there’s no natural end to it: acquiring more power and more resources almost always improves the odds of achieving the goal, so the system would keep accumulating both if given the ability to do so. And speaking of power ….
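The loop described above can be sketched in a few lines. This is a hypothetical toy model, not anything resembling a real AI system: the made-up `success_probability` function rises with every additional unit of resources, so a naive greedy rule never finds a point where it decides to stop acquiring them. Only an external cap ends the loop.

```python
# Toy sketch (hypothetical model): a goal-directed loop in which acquiring
# more resources always raises the chance of achieving the goal, so a
# naive optimizer never chooses to stop accumulating on its own.

def success_probability(resources):
    # Invented saturating curve: more resources -> better odds, always.
    return resources / (resources + 10.0)

resources = 1.0
steps = 0
# Greedy rule: keep acquiring as long as it strictly improves the odds.
while success_probability(resources + 1) > success_probability(resources) and steps < 1000:
    resources += 1
    steps += 1

# The improvement never reaches zero, so only the external cap stops it.
print(steps)  # 1000
```

The takeaway mirrors Krueger’s point: the stopping condition has to come from outside the goal itself, because "have more resources" is useful for almost any goal.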
Handing over control
The third and final element for creating super-dangerous AI systems, at least in Krueger’s estimation, is a transfer of control. Simply put, the more power and autonomy we give these systems, the more ability they’ll have to reshape the world.
Truly safe systems won’t have the ability to do much of anything. But the more power we give a system to affect the world around us (trading stocks, for instance), the more opportunity it has to cause potentially devastating harm (crashing the stock market, for instance).
To slow all of this down and think carefully about how we want these systems to interact with us, and, more importantly, how to keep humanity safe from them, there are a few things we need to do right now.
Pro Tip: Concerned about your security, particularly when it comes to protecting yourself online? We can’t blame you. AI tools are making it easier than ever to hack systems. If you want to protect yourself, you might consider reading our guide to the best VPNs of 2023.
How to Protect Ourselves From a Dystopian Future
AI development is a bit like the Wild West right now: There is very little regulation in the industry guiding what should and shouldn’t be developed. That said, there are experts out there like Krueger and others like him who are making recommendations for laws to be passed and regulations to be put in place to prevent AI from, you know, blowing up the world. Some of these include:4
- Keeping weapons systems and platforms in human control
- Making AI systems explain their decision-making processes in detail
- Teaching machines about human behavior, reasoning, and values
- Ensuring some sort of economic future for humanity through universal basic income or other means
This will, of course, take intervention from lawmakers, who have traditionally been slow to respond to technological advancements. That said, industry leaders are already calling for a slowdown in AI development to give us time to put meaningful regulations in place. A recent open letter, famously signed by tech giant Elon Musk, called for AI labs to pause development for six months so humanity can get on the same page about AI’s advancement.
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”5
Final Thoughts on AI and the Future of Humanity
Overall, AI tools should be designed to make our lives easier. Without meaningful guidance on how these tools should be developed and what capabilities they should have, however, their continued progression is a bit like playing with fire. Hopefully, in the coming months and years, we can reach consensus on effective guardrails to protect the future of humanity from threats of our own design.
1. Forbes. (2022, Jun 28). As AI Advances, Will Human Workers Disappear?
2. CNN. (2023, Jan 26). ChatGPT passes exams from law and business schools.
3. ITPro. (2023, Feb 21). Why risk analysts think AI now poses a serious threat to us all.
4. The Washington Post. (2015, Jul 7). The very best ideas for preventing artificial intelligence from wrecking the planet.
5. TIME. (2023, Mar 29). Elon Musk Signs Open Letter Urging AI Labs to Pump the Brakes.