
Is Anyone Coming to Save Us From AI?

We’ve all heard about the threats AI poses, but is anyone capable or willing to do something about it?

By Brett Cruz & Gabe Turner (Chief Editor) | Last Updated May 18, 2023


The recent advancements in the capabilities of artificial intelligence have many wondering if AI will save us or doom all of humanity. And for good reason. We’re just scratching the surface of what this technology can do, and if allowed to grow without limit, many experts feel it could do irreparable harm to our society as a whole.

There are many known risks of developing AI on the rapid trajectory we’re currently on — everything from job disruption to literal sci-fi apocalypse scenarios. So many threats, in fact, that more than 10,000 tech industry leaders have signed an open letter calling for a pause in the development of these systems until more research can be conducted.1

But who, exactly, is going to conduct this research? And what will be done with their recommendations?

The Biggest Challenge of AI Safety

Gallons of ink have been spilled and thousands of hands wrung over the sweeping threats AI poses. However, the biggest challenge currently is that there is no meaningful infrastructure in place to rein any of those threats in. Our society is set up to value efficiency and profit, and if those continue to be the only motivating factors in the development of AI, humanity could be in for a bad time.

FYI: People are already using AI tools to take advantage of each other. Read our guide to identifying and preventing chatbot scams to help protect yourself from burgeoning threats.

While open letters pleading with developers to halt production until more AI safety research can be conducted are well-meaning, they will do little to actually prevent the worst possible outcomes of the AI revolution. Numerous high-profile experts are already raising the red flag — Geoffrey Hinton, often called the “Godfather of AI,” recently resigned from Google because of the threats AI poses.2 But unless people in positions of power agree to put meaningful limitations on the advancement of these technologies, they will continue to advance undeterred.

So what exactly should — or can — be done to more thoughtfully and responsibly develop artificial intelligence?

A Public Sector Solution?

The immediate thought when a person says “someone should do something about this” is that legislators need to step up and create some laws to regulate whatever it is we’re talking about. Sometimes this works, but often it doesn’t. Governments are notoriously slow to respond to societal ills, and to say they’re ineffective in regulating the fast-moving tech industry would be a charitable characterization.

What’s more, it’s difficult to legislate things you don’t understand. Remember when then-Senator Ted Stevens (R-Alaska), who was at the time arguing against an amendment to a bill that would have bolstered net neutrality, described the internet as “a series of tubes”?3 Without guidance from experts, we probably shouldn’t be looking to Washington for meaningful regulation of AI advancement. It should also be noted that big tech lobbyists pumped nearly $70 million into Washington in 2021 as Congress was trying to limit the industry’s power.4

To be fair, in early May the Biden administration met with the CEOs of four American companies on the cutting edge of AI development — Alphabet, Anthropic, Microsoft, and OpenAI — to discuss risks and how to advance these technologies responsibly,5 but the slow-moving machinery of government seems unlikely to see around corners and prevent the worst consequences of AI adoption from coming to pass. These same companies are pushing back against regulation of the industry, no bills have been proposed to curb AI’s potential dangers, and efforts to restrict facial-recognition applications have all failed.6

So, if lawmakers aren’t going to come to the rescue, where does that leave us?

Is Anyone Coming to Save Us?

While many would argue that the dangers of AI are totally overblown and that we should embrace the efficiencies these tools will create,7 others argue this thinking is too short-term. The folks in that camp have organized into groups researching the future of AI and making recommendations for sensible guardrails that should be put in place to mitigate the worst outcomes of rapid AI adoption.8 Hopefully, this research will move the needle; however, without being adopted into legal frameworks, these recommendations are only suggestions.

Pro Tip: AI tools are making it easier to code, which makes it easier for laypeople to create malicious programs that can lock down your computer or steal your identity. Learn how to protect yourself with our guide to identity theft protection.

Right now, the most immediate threat of AI is job replacement.9 The best we can hope for is that corporate leaders will do the right thing and not charge headlong into slashing workforces in favor of AI tools, but so far that looks like a mixed bag. We here at Security.org have vowed to only use human writers like me, but others — notably IBM — have said they expect AI tools to disrupt their workforces. That’s the jargony way of saying IBM plans to cut nearly 8,000 jobs and replace them with AI.10

So the short answer is no. As it stands right now, AI development and adoption are in the Wild West stages, and it’s tough to say exactly what the impacts are going to be. We know this sounds a little bleak, but there is a silver lining here. We promise.

Final Thoughts on the Risks of AI

It might seem like AI technologies are happening to us rather than for us, but that won’t always be the case. Advancements like these are scary at first, but historically, societies have always found ways to adapt. Some of us are old enough to remember when the internet was brand new, and none of us had any idea how it would change the way we did business or organized our day-to-day lives. And like all new technological advancements, we as humans adapted to it.

Pro Tip: Not all jobs are at risk of AI replacement, but those in manufacturing, media, and tech are particularly vulnerable at the moment.11

Now it’s apparent that there are positive and negative aspects to any technological advancement. The internet has made us more connected than ever, but isolation and burnout are at endemic levels. You can order all the books you want online, but your neighborhood book shop shut down five years ago. You can call a car to your door to take you anywhere you want with the push of a button, but that driver has to work four gig jobs to afford health insurance. Is human society better because of these advancements? Or is it just different? The delivery driver and the person who ordered the pad thai might have differing opinions.

The AI revolution might be a moment for us as a society — and as individuals — to reconnect with what it is we actually value and what it is we actually want. The promise is that these technologies will improve our lives, but unless we strive to understand what those improvements should look like and work to protect them, these technologies might end up taking advantage of us instead of the other way around.

Citations
  1. CNBC. (2023, Apr 6). Elon Musk wants to pause ‘dangerous’ A.I. development. Bill Gates disagrees—and he’s not the only one.
    cnbc.com/2023/04/06/bill-gates-ai-developers-push-back-against-musk-wozniak-open-letter.html

  2. The New York Times. (2023, May 1). ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead.
    nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

  3. Roll Call. (2018, Feb 16). Flashback Friday: ‘A Series of Tubes’.
    rollcall.com/2018/02/16/flashback-friday-a-series-of-tubes/

  4. The Washington Post. (2022, Jan 21). Tech companies spent almost $70 million lobbying Washington in 2021 as Congress sought to rein in their power.
    washingtonpost.com/technology/2022/01/21/tech-lobbying-in-washington/

  5. The White House. (2023, May 4). FACT SHEET: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety.
    whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/

  6. The New York Times. (2023, Mar 3). Why Lawmakers Aren’t Rushing to Police A.I.
    nytimes.com/2023/03/03/business/dealbook/lawmakers-ai-regulations.html

  7. The Seattle Times. (2022, Dec 30). Fears about artificial intelligence are overblown.
    seattletimes.com/opinion/fears-about-artificial-intelligence-are-overblown/

  8. Center for AI Safety. (2023). Reducing Societal-scale Risks from AI.
    safe.ai/

  9. Forbes. (2023, Mar 30). Which Jobs Will AI Replace? These 4 Industries Will Be Heavily Impacted.
    forbes.com/sites/ariannajohnson/2023/03/30/which-jobs-will-ai-replace-these-4-industries-will-be-heavily-impacted/?sh=2d256c925957

  10. Yahoo! Finance. (2023, May 5). IBM Plans To Replace Nearly 8,000 Jobs With AI — These Jobs Are First to Go.
    finance.yahoo.com/news/ibm-plans-replace-nearly-8-174052360.html?guccounter=1

  11. Business Insider. (2023, Apr 9). ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely to replace.
    businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02