
Why AI Safety Research Is Critical to Human Survival

Advancements in AI are exciting, but we must thoughtfully consider how we develop these technologies

By Brett Cruz & Gabe Turner (Chief Editor)
Last updated May 17, 2023

A robot hand and a human hand touching

For many, technology holds the promise of a better tomorrow. But the haphazard deployment of those same technologies could turn that dream into a dystopian nightmare. Experts have long argued that we need to fully understand new technologies before we can leverage them for human benefit.

That’s why the field of AI safety research is so critical for society right now. To put it simply, we don’t know what we don’t know about AI. It’s difficult to see around corners, but we already understand that these new technologies, while exciting, might also pose significant risks to humanity’s future. Let’s talk about that a little more, but first, some context.

FYI: Some folks are already leveraging AI to their benefit at the expense of others. Read our guide to avoiding AI chatbot scams.

The History of AI and Where We Stand Today

While AI’s capabilities are currently dominating the headlines, it’s far from a new technology. The first program that could actually be classified as AI was developed in 1951 by Christopher Strachey and could play checkers effectively against a human opponent. The concept itself, though, was initially developed by logician and computing pioneer Alan Turing.1 He theorized a program that could modify and refine itself, although computers in his time didn’t have such capacities.

Today’s computers do have these capabilities, though, and that’s equal parts exhilarating and troublesome. On the one hand, we could be looking at a Star Trek future, but on the other, it could look more like Blade Runner. At worst, it could look like Terminator, but we’re not going to entertain those thoughts quite yet.

The Current Threats of AI

One thing’s for certain: we’re in the growing pains of this technological revolution. IBM recently announced it plans to replace almost 8,000 jobs in the near future, and CEO Arvind Krishna says he could see 30 percent of back-office positions replaced by AI over the next five years.2

That brings us to the first and most immediate threat humanity needs to grapple with when it comes to this technology: human enfeeblement.3 If we allow AI to develop so rapidly that it makes large swaths of workers obsolete, it could do untold damage to the economy, the very economy these corporations are trying to climb to the top of by improving efficiency.

In very simple terms, if no one has a job and no one has any money, no one will be able to buy the products the robots are making and the whole economy collapses. Obviously, this isn’t an outcome anyone wants, but without consideration on the front end, it’s the result we could be stuck with.

FYI: Another threat AI poses is that it can allow everyday people to write malicious code that can be used for anything from ransomware to identity theft. Read our guide to identity theft protection for more information.

But that’s just one future we need to avoid. There are a ton of other threats AI poses — everything from the degradation of objective truth through the proliferation of disinformation, to quite literally wiping out humanity due to goal misalignment. Some of these threats are obviously more likely than others. Regardless, in order to prevent AI from destroying us all, we need to have some guardrails.

Some are already calling for caution. In an open letter addressed to AI labs, huge names in tech, including Elon Musk and Steve Wozniak, urged the industry to slow down until more research can be done.4 But what exactly is AI safety research, who does it involve, and what are these folks working on?

What Is AI Safety Research?

Simply put, AI safety research is a discipline broadly defined as the effort to ensure AI is developed and deployed in ways that won’t harm humanity. This is a sweeping definition, so it might help to refine it a little.

Essentially, AI safety researchers are the ones whose job it is to think about all of the possible unintended consequences of giving AI certain powers and permissions. As we said in the example above, one of the first things we as humans are going to have to grapple with is how AI will disrupt the workforce. It’s the job of an AI safety researcher to understand how to mitigate the worst impacts of AI deployment, and ideally to find a way for these technologies to supplement human workers rather than replace them.5

It’s certainly an important job. Who’s conducting this research, though?

Who’s Involved in AI Safety Research?

You might think the majority of AI development is being done by people with machine learning degrees in supercomputing labs, and you wouldn’t be wrong. The people with the requisite skills to build and shape these technologies are few, and many of them are more interested in making their systems bigger, faster, and stronger than their competitors’ are.

FYI: While we’re talking about digital safety, have you considered how protected you are online? Consider reading our guide to virtual private networks to understand your digital exposure and how to stay safe online.

A lot of the thinking around AI safety is being done by academics from across multiple disciplines.6 Sure, there are computer scientists involved, but there are plenty of economists and sociologists involved, too. There are linguists, lawyers, and even theologians.7 AI has the potential to disrupt our cultures and our societies in ways we haven’t seen since the advent of the internet itself, so it makes sense that efforts to understand it are coming from multiple schools of thought.

What Topics Are They Concerned With?

According to the Center for AI Safety, a nonprofit working to reduce the societal-scale risks posed by artificial intelligence, there are several topics AI safety researchers need to be concerned with. These include:8

  • Weaponization: Using AI to wage war or develop weapons.
  • Enfeeblement: As mentioned above, AI systems making human effort obsolete.
  • Misinformation: Using AI to spread propaganda, eroding our shared sense of objective truth.
  • Misalignment of Values: AI systems could over-optimize faulty objectives that don’t align with human values (see the toy example after this list).
  • Value Lock-In: Those who control AI systems could wield a tremendous amount of power over the general population.
  • Emergent Goals: Strong AI models could determine their own goals or change them unexpectedly.
  • Lack of Transparency: We need to understand what decisions powerful AI systems are making and why.
  • Power-Seeking Behavior: AI that’s incentivized to accomplish a set of goals might become harder to control as it seeks more power.
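
To make “over-optimizing a faulty objective” a little more concrete, here’s a toy Python sketch. It isn’t drawn from any real AI system or from the Center for AI Safety; the “engagement” and “wellbeing” functions are made-up stand-ins. The system maximizes the proxy it was given (engagement) and ends up in a state that scores poorly on the value we actually care about (wellbeing).

  # Toy illustration of objective misalignment (Goodhart's law), not a real system.
  # "engagement" is the hypothetical proxy objective the system optimizes;
  # "wellbeing" stands in for the true human value we actually care about.

  def engagement(sensationalism: float) -> float:
      """Proxy objective: more sensational content always scores higher."""
      return 10 * sensationalism

  def wellbeing(sensationalism: float) -> float:
      """True objective: a little spice helps, but too much erodes trust."""
      return 10 * sensationalism - 8 * sensationalism ** 2

  # The system greedily maximizes the proxy over candidate content strategies.
  candidates = [i / 10 for i in range(11)]    # sensationalism levels 0.0 to 1.0
  chosen = max(candidates, key=engagement)    # the proxy says: go to the extreme

  print(f"chosen strategy: {chosen}")                              # 1.0
  print(f"proxy score (engagement): {engagement(chosen):.1f}")     # 10.0 -- looks great
  print(f"true score (wellbeing):   {wellbeing(chosen):.1f}")      # 2.0 -- worse than moderate options
  print(f"best achievable wellbeing: {max(wellbeing(c) for c in candidates):.1f}")  # about 3.1

The numbers are beside the point; what matters is that a system judged only by its proxy will happily sacrifice the real goal, and nothing inside the optimization loop ever notices.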

Addressing these problems will take a lot more than programmers and engineers trying to make their AI smarter and faster. Do we have the resources, though, or the will?

Final Thoughts on AI Safety Research

Unfortunately, the field of AI safety research is smaller by orders of magnitude than the field of AI development. Last year, an AI expert named Calum Chace, writing in Forbes, estimated that there are hundreds of thousands of people around the world developing AI systems, but only about 300 focused on AI alignment and safety.9 Obviously, these numbers have changed since that was written, but the trend likely remains the same: tons of people asking what we can do, and only a handful asking whether we should.

If we’re going to avoid the worst outcomes AI could create, we as humans need to reflect on what our goals are. If increased efficiencies and profit are the only motives, we may not like where we end up.

Citations
  1. Britannica. Alan Turing and the beginning of AI.
    britannica.com/technology/artificial-intelligence/Alan-Turing-and-the-beginning-of-AI

  2. Ars Technica. (2023, May 2). IBM plans to replace 7,800 jobs with AI over time, pauses hiring certain positions.
    arstechnica.com/information-technology/2023/05/ibm-pauses-hiring-around-7800-roles-that-could-be-replaced-by-ai/

  3. Internet Encyclopedia of Philosophy. Ethics of Artificial Intelligence.
    iep.utm.edu/ethics-of-artificial-intelligence/

  4. Fortune. (2023, Mar 29). Elon Musk and Apple cofounder Steve Wozniak among over 1,100 who sign open letter calling for 6-month ban on creating powerful A.I.
    fortune.com/2023/03/29/elon-musk-apple-steve-wozniak-over-1100-sign-open-letter-6-month-ban-creating-powerful-ai/

  5. Stanford University. (2023). Stanford Center for AI Safety.

  6. LessWrong. (2023, Feb 8). A multi-disciplinary view on AI safety research.
    lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research

  7. Christian Scholars Review. (2023, Jan 20). ChatGPT and the Rise of AI.
    christianscholars.com/chatgpt-and-the-rise-of-ai/

  8. Center for AI Safety. (2023). 8 Examples of AI Risk.
    safe.ai/ai-risk

  9. Forbes. (2022, Oct 27). Could You Get Paid To Do AI Safety Research – And Should You?
    forbes.com/sites/calumchace/2022/10/27/could-you-get-paid-to-do-ai-safety-research–and-should-you/?sh=7ed276f14ec8