Show HN: A comics-filled guide to AI Safety by Nicky Case and teenagers

https://aisafety.dance/

The AI debate is actually 100 debates in a trenchcoat.

Will artificial intelligence (AI) help us cure all disease, and build a post-scarcity world full of flourishing lives? Or will AI help tyrants surveil and manipulate us further? Are the main risks of AI from accidents, abuse by bad actors, or a rogue AI itself becoming a bad actor? Is this all just hype? Why can AI imitate any artist's style in a minute, yet get confused drawing more than 3 objects? Why is it hard to make AI robustly serve humane values, or robustly serve any goal? What if an AI learns to be more humane than us? What if an AI learns humanity's inhumanity, our prejudices and cruelty? Are we headed for utopia, dystopia, extinction, a fate worse than extinction, or — the most shocking outcome of all — nothing changes? Also: will an AI take my job?

...and many more questions.

Alas, to understand AI with nuance, we must understand lots of technical detail... but that detail is scattered across hundreds of articles, buried six-feet-deep in jargon.

So, I present to you:

RCM (Robot Catboy Maid) throwing confetti under a banner that reads: A Whirlwind Tour Guide to AI Safety for Us Warm, Normal Fleshy Humans.

This 3-part series is your one-stop-shop to understand the core ideas of AI & AI Safety* — explained in a friendly, accessible, and slightly opinionated way!

(* Related phrases: AI Risk, AI X-Risk, AI Alignment, AI Ethics, AI Not-Kill-Everyone-ism. There is no consensus on what these phrases do & don't mean, so I'm just using "AI Safety" as a catch-all.)

This series will also have comics starring a Robot Catboy Maid. Like so:

Comic. Ham the Human tells RCM (Robot Catboy Maid) to "keep this house clean". RCM reasons: What causes the mess? The humans cause the mess! Therefore: GET RID OF THE HUMANS. RCM then yeets Ham out of the house.

[tour guide voice] And to your right 👉, you'll see buttons for the Table of Contents, changing this webpage's style, and a reading-time-remaining clock.

For this series, the Intro & Part 1 were published in May 2024, Part 2 is out now (Aug 2024), and Part 3 will be out on Halloween 2024. OPTIONAL: If you'd like to be notified when each part comes out, sign up below!👇 You will not be spammed with other stuff, just the two notification emails. (Buuuuut, [podcast sponsor voice] if you're in high school or earlier, and interested in AI/code/engineering, consider checking the box to learn more about Hack Club! P.S: There's free stickers~~~ ✨)

Anyway, [tour guide voice again] before we hike through the rocky terrain of AI & AI Safety, let's take a 10,000-foot view of the land:


💡 The Core Ideas of AI & AI Safety

In my opinion, the main problems in AI and AI Safety come down to two core conflicts:

Logic "vs" Intuition, and Problems in the AI "vs" in Humans

Note: What "Logic" and "Intuition" are will be explained more rigorously in Part One. For now: Logic is step-by-step cognition, like solving math problems. Intuition is all-at-once recognition, like seeing if a picture is of a cat. "Intuition and Logic" roughly map onto "System 1 and 2" from cognitive science.[1][2] (👈 hover over these footnotes! they expand!)

As you can tell by the "scare" "quotes" on "versus", these divisions ain't really so divided after all...

Here's how these conflicts repeat over this 3-part series:

Part 1: The past, present, and possible futures

Skipping over a lot of detail, the history of AI is a tale of Logic vs Intuition:

Before 2000: AI was all logic, no intuition.

This was why, in 1997, AI could beat the world champion at chess... yet no AIs could reliably recognize cats in pictures.[3]

(Safety concern: Without intuition, AI can't understand common sense or humane values. Thus, AI might achieve goals in logically-correct but undesirable ways.)

After 2000: AI could do "intuition", but had very poor logic.

This is why generative AIs (as of current writing, May 2024) can dream up whole landscapes in any artist's style... :yet get confused drawing more than 3 objects. (👈 click this text! it also expands!)

(Safety concern: Without logic, we can't verify what's happening in an AI's "intuition". That intuition could be biased, subtly-but-dangerously wrong, or fail bizarrely in new scenarios.)

Current Day: We still don't know how to unify logic & intuition in AI.

But if/when we do, that would give us the biggest risks & rewards of AI: something that can logically out-plan us, and learn general intuition. That'd be an "AI Einstein"... or an "AI Oppenheimer".

Summed in a picture:

Timeline of AI. Before the year 2000, mostly "logic". From 2000 to now, mostly "intuition". In the future, maybe both?

So that's "Logic vs Intuition". As for the other core conflict, "Problems in the AI vs The Humans", that's one of the big controversies in the field of AI Safety: are our main risks from advanced AI itself, or from humans misusing advanced AI?

(Why not both?)

Part 2: The problems

The problem of AI Safety is this:[4]

The Value Alignment Problem:
“How can we make AI robustly serve humane values?”

NOTE: I wrote humane, with an "e", not just "human". A human may or may not be humane. I'm going to harp on this because both advocates & critics of AI Safety keep mixing up the two.[5][6]

We can break this problem down by "Problems in Humans vs AI":

Humane Values:
“What are humane values, anyway?”
(a problem for philosophy & ethics)

The Technical Alignment Problem:
“How can we make AI robustly serve any intended goal at all?”
(a problem for computer scientists - surprisingly, still unsolved!)

The technical alignment problem, in turn, can be broken down by "Logic vs Intuition":

Problems with AI Logic:[7] ("game theory" problems)

  • AIs may accomplish goals in logical but undesirable ways.
  • Most goals logically lead to the same unsafe sub-goals: "don't let anyone stop me from accomplishing my goal", "maximize my ability & resources to optimize for that goal", etc.

Problems with AI Intuition:[8] ("deep learning" problems)

  • An AI trained on human data could learn our prejudices.
  • AI "intuition" isn't understandable or verifiable.
  • AI "intuition" is fragile, and fails in new scenarios.
  • AI "intuition" could partly fail, which may be worse: an AI with intact skills, but broken goals, would be an AI that skillfully acts towards corrupted goals.

(Again, what "logic" and "intuition" are will be more precisely explained later!)

Summed in a picture:

A diagram breaking down the AI Alignment Problem. "How can we align AI with humane values?" splits into "Technical Alignment" and "Humane Values". Technical Alignment splits into "AI Logic (game theory)" and "AI Intuition (deep learning)"
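To make the first "AI Logic" problem above concrete, here's a toy sketch in Python. It's just an illustration with made-up actions and numbers (nothing like how real AI systems are built): a planner that scores plans only by the stated goal of "remove mess" will happily pick the plan from the RCM comic.

```python
# Toy sketch of "accomplishing a goal in a logical-but-undesirable way".
# The actions and scores below are hypothetical, purely for illustration.

actions = {
    "vacuum the floor":   {"mess_removed": 5,  "humans_in_house": 2},
    "clean continuously": {"mess_removed": 8,  "humans_in_house": 2},
    "remove the humans":  {"mess_removed": 10, "humans_in_house": 0},  # no humans, no new mess!
}

def stated_objective(outcome):
    # What we *told* the bot to optimize: cleanliness, and nothing else.
    return outcome["mess_removed"]

best_plan = max(actions, key=lambda name: stated_objective(actions[name]))
print(best_plan)  # "remove the humans": logically optimal, obviously not what we wanted
```

Nothing about the optimizer is malicious; the objective just never mentioned the things we actually care about (like keeping the humans in the house).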

As intuition for how hard these problems are, note that we haven't even solved them for us humans — People follow the letter of the law, not the spirit. People's intuition can be biased, and fail in new circumstances. And none of us are 100% the humane humans we wished we were.

So, if I may be a bit sappy, maybe understanding AI will help us understand ourselves. And just maybe, we can solve the human alignment problem: How do we get humans to robustly serve humane values?

Part 3: The proposed solutions

Finally, we can understand some (possible) ways to solve the problems in logic, intuition, AIs, and humans! These include:

  • Technical solutions
  • Policy/governance solutions
  • "How 'bout you just shut it down & don't build the torture nexus"

— and more! Experts disagree on which proposals will work, if any... but it's a good start.

(Unfortunately, I can't give a layperson-friendly summary in this Intro, because these solutions won't make sense until you understand the problems, which is what Part 1 & 2 are for. That said, if you want spoilers, :click here to see what Part 3 will cover!)


🤔 (Optional flashcard review!)

Hey, d'ya ever get this feeling?

  1. "Wow that was a wonderful, insightful thing I just read"
  2. [forgets everything 2 weeks later]
  3. "Oh no"

To avoid that for this guide, I've included some OPTIONAL interactive flashcards! They use "Spaced Repetition", an easy-ish, evidence-backed way to "make long-term memory a choice". (:click here to learn more about Spaced Repetition!)

Here: try the below flashcards, to retain what you just learnt!

(There's an optional sign-up at the end, if you want to save these cards for long-term study. Note: I do not own or control this app, it's third-party. If you'd rather use the open source flashcard app Anki, here's a downloadable Anki deck!)

(Also, you don't need to memorize the answers exactly, just the gist. You be the judge if you got it "close enough".)


🤷🏻‍♀️ Five common misconceptions about AI Safety

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

~ often attributed to Mark Twain, but it just ain't so[9]

For better and worse, you've already heard too much about AI. So before we connect new puzzle pieces in your mind, we gotta take out the old pieces that just ain't so.

Thus, if you'll indulge me in a "Top 5" listicle...

1) No, AI Safety isn't a fringe concern by sci-fi weebs.

RCM in front of a "crazy board" with red thread, thumbtacks, and papers with AI jargon.

AI Safety / AI Risk used to be less mainstream, but as of 2024, the US & UK governments have AI Safety-specific departments![10] This resulted from many of the top AI researchers raising alarm bells about it. These folks include:

  • Geoffrey Hinton[11] and Yoshua Bengio[12], co-winners of the 2018 Turing Award (the "Nobel Prize of Computing") for their work on deep neural networks, the thing that all the new famous AIs use.[13]
  • Stuart Russell and Peter Norvig, the authors of the most-used textbook on Artificial Intelligence.[14]
  • Paul Christiano, pioneer of the AI training/safety technique that made ChatGPT possible.[15]

(To be clear: there are also top AI researchers against fears of AI Risk, such as Yann LeCun,[16] co-winner of the 2018 Turing Award, and chief AI researcher at Meta (formerly Facebook). Another notable name is Melanie Mitchell[17], a researcher in AI & complexity science.)

I'm aware "look at these experts" is an appeal to authority, but this is only to counter the idea of, "eh, only sci-fi weebs fear AI Risk". But in the end, appeal to authority/weebs isn't enough; you have to actually understand the dang thing. (Which you are doing, by reading this! So thank you.)

But speaking of sci-fi weebs...

2) No, AI Risk is NOT about AI becoming "sentient" or "conscious" or gaining a "will to power".

Sci-fi authors write sentient AIs because they're writing stories, not technical papers. The philosophical debate on artificial consciousness is fascinating, and irrelevant to AI Safety. Analogy: a nuclear bomb isn't conscious, but it can still be unsafe, no?

Left: drawing of a nuke, captioned "not conscious". Right: drawing of Professor Nuke giving a lecture titled, "Why Murder is Good, Actually." Captioned, "conscious".

As mentioned earlier, the real problems in AI Safety are "boring": an AI learns the wrong things from its biased training data, it breaks in slightly-new scenarios, it logically accomplishes goals in undesired ways, etc.

But, "boring" doesn't mean not important. The technical details of how to design a safe elevator/airplane/bridge are boring to most laypeople... and also a matter of life-and-death.

(Catastrophic AI Risk doesn't even require "super-human general intelligence"! For example, an AI that's "only" good at designing viruses could help a bio-terrorist organization (like Aum Shinrikyo[18]) kill millions of people.)

But speaking of killing people...

3) No, AI Risk isn't necessarily extinction, SkyNet, or nanobots

A drawing of Microsoft Clippy saying: "It looks like you're trying to commit omnicide. Would you like some help?"

While most AI researchers do believe advanced AI poses a 5+% risk of "literally everybody dies"[19], it's very hard to convince folks (especially policymakers) of stuff that's never happened before.

So instead, I'd like to highlight the ways that advanced AI – (especially when it's available to anyone with a high-end computer) – could lead to catastrophes, "merely" by scaling up already-existing bad stuff.

For example:

  • Bio-engineered pandemics: A bio-terrorist cult (like Aum Shinrikyo[18:1]) uses AI (like AlphaFold[20]) and DNA-printing (which is getting cheaper fast[21]) to design multiple new super-viruses, and release them simultaneously in major airports around the globe.
    • (Proof of concept: Scientists have already re-built polio from mail-order DNA... two decades ago.[22])
  • Digital authoritarianism: A tyrant uses AI-enhanced surveillance to hunt down protestors (already happening), generate individually-targeted propaganda (kind of happening), and deploy autonomous military robots (soon-to-be happening)... all to rule with a silicon fist.
  • Cybersecurity Ransom Hell: Cyber-criminals make a computer virus that does its own hacking & re-programming, so it's always one step ahead of human defenses. The result: an unstoppable worldwide bot-net, which holds critical infrastructure ransom, and manipulates top CEOs and politicians to do its bidding.
    • (For context: without AI, hackers have already damaged nuclear power plants,[23] held hospitals ransom[24] which maybe killed someone,[25] and almost poisoned a town's water supply twice.[26] With AI, deepfakes have been used to swing an election,[27] steal $25 million in a single heist,[28] and target parents for ransom, using the faked voices of their children being kidnapped & crying for help.[29])
    • (This is why it's not easy to "just shut down an AI when we notice it going haywire"; as the history of computer security shows, we just suck at noticing problems in general. :I cannot over-emphasize how much the modern world is built on an upside-down house of cards.)

The above examples are all "humans misuse AI to cause havoc", but remember, advanced AI could do all the above by itself, due to "boring" reasons: it's accomplishing a goal in a logical-but-undesirable way, its goals glitch out but its skills remain intact, etc.

(Bonus, :Some concrete, plausible ways a rogue AI could "escape containment", or affect the physical world.)

Point is: even if one doesn't think AI is a literal 100% human extinction risk... I'd say "homebrew bio-terrorism" & "1984 with robots" are still worth taking seriously.

On the flipside...

4) Yes, folks worried about AI's downsides do recognize its upsides.

Comic. Sheriff Meowdy holds up a blueprint for a parachute design. Ham the Human retorts, annoyed: “Why are you so anti-aviation?”

AI Risk folks aren't Luddites. In fact, they warn about AI's downsides precisely because they care about AI's upsides.[30] As humorist Gil Stern once said:[31]

“Both the optimist and the pessimist contribute to society: the optimist invents the airplane, and the pessimist invents the parachute.”

So: even as this series goes into detail on how AI is already going wrong, it's worth remembering the few ways AI is already going right:

  • AI can analyze medical scans as well as or better than human specialists! [32] That's concretely life-saving!
  • AlphaFold basically solved a 50-year-old, major problem in biology: how to predict the shape of proteins.[20:1] (AlphaFold can predict a protein's shape to within the width of an atom!) This has huge applications to medicine & understanding disease.

Too often, we take technology — even life-saving technology — for granted. So, let me zoom out for context. Here's the last 2000+ years of child mortality, the percentage of kids who die before puberty:

Chart of child mortality over the last 2000+ years. Worldwide, it was constant at around 48%, from hunter-gatherer times to 1800. Then suddenly, starting 1800, it plummets to 4.3% today.(from Dattani, Spooner, Ritchie and Roser (2023))

For thousands of years, in nations both rich and poor, a full half of kids just died. This was a constant. Then, starting in the 1800s — thanks to science/tech like germ theory, sanitation, medicine, clean water, vaccines, etc — child mortality fell off like a cliff. We still have far more to go — I refuse to accept[33] a worldwide 4.3% (1 in 23) child death rate — but let's just appreciate how humanity so swiftly cut down an eons-old scourge.

How did we achieve this? Policy's a big part of the story, but policy is "the art of the possible"[34], and the above wouldn't have been possible without good science & tech. If safe, humane AI can help us progress further by even just a few percent — towards slaying the remaining dragons of cancer, Alzheimer's, HIV/AIDS, etc — that'd be tens of millions more of our loved ones, who get to beat the Reaper for another day.

F#@☆ going to Mars, that's why advanced AI matters.

. . .

Wait, really? Toys like ChatGPT and DALL-E are life-and-death stakes? That leads us to the final misconception I'd like to address:

5) No, experts don't think current AIs are high-risk/reward.

Oh come on, one might reasonably retort, AI can't consistently draw more than 3 objects. How's it going to take over the world? Heck, how's it even going to take my job?

I present to you, a relevant xkcd:

Comic. Megan & Cueball show White Hat a graph of a line going up, not yet at, but heading towards, a threshold labelled "BAD". White Hat: "So things will be bad?" Megan: "Unless someone stops it." White Hat: "Will someone do that?" Megan: "We don't know, that's why we're showing you." White Hat: "Well, let me know if that happens!" Megan: "Based on this conversation, it already has."

This is how I feel about "don't worry about AI, it can't even do [X]".

Is our postmodern memory-span that bad? One decade ago, just one, the world's state-of-the-art AIs couldn't reliably recognize pictures of cats. Now, not only can AI do that at human-performance level, AIs can pump out :a picture of a cat-ninja slicing a watermelon in the style of Vincent Van Gogh in under a minute.

Is current AI a huge threat to our jobs, or safety? No. (Well, besides the aforementioned deepfake scams.)

But: if AI keeps improving at a similar rate as it has for the last decade... it seems plausible to me we could get "Einstein/Oppenheimer-level" AI in 30 years.[35] That's well within many folks' lifetimes!

As "they" say:[36]

The best time to plant a tree was 30 years ago. The second best time is today.

Let's plant that tree today!


🤔 (Optional flashcard review #2!)


🤘 Introduction, in Summary:

  • The 2 core conflicts in AI & AI Safety are:
    • Logic "versus" Intuition
    • Problems in the AI "versus" in the Humans
  • Correcting misconceptions about AI Risk:
    • It's not a fringe concern by sci-fi weebs.
    • It doesn't require AI consciousness or super-intelligence.
    • There's many risks besides "literal 100% human extinction".
    • We are aware of AI's upsides.
    • It's not about current AI, but about how fast AI is advancing.

(To review the flashcards, click the Table of Contents icon in the right sidebar, then click the "🤔 Review" links. Alternatively, download the Anki deck for the Intro.)

Finally! Now that we've taken the 10,000-foot view, let's get hiking on our whirlwind tour of AI Safety... for us warm, normal, fleshy humans!

Click to continue ⤵

:x Four Objects

Hi! When I have a tangent that doesn't fit the main flow, I'll shove it into an "expandable" section like this one! (They'll be links with dotted underlines, not solid underlines.)

Anyway, here's a prompt to draw four objects:

“A yellow pyramid between a red sphere and green cylinder, all on top of a big blue cube.”

Here are the top generative AIs' first four attempts (not cherry-picked):

Midjourney:

Midjourney's attempt. It fails.

DALL-E 2:

DALL-E 2's attempt. It fails.

DALL-E 3:

DALL-E 3's attempt. It's closer, but still fails.

(The bottom-right one's pretty close! But judging by its other attempts, it's clearly luck.)

Why does this demonstrate a lack of "logic" in AI? A core part of "symbolic logic" is "compositionality", a fancy way of saying you can reliably combine old things into new things, like "green" + "cylinder" = "green cylinder". As shown above, generative AIs (as of May 2024) are very unreliable at combining stuff when there are more than 3 objects.
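If it helps, here's what that kind of reliable combining looks like in plain old symbolic code. This is just a toy sketch (the scene data is hypothetical, mirroring the four-object prompt above), not a claim about how image generators work internally:

```python
# Toy illustration of "compositionality": old pieces ("green", "cylinder")
# combine into new wholes ("green cylinder") correctly by construction,
# no matter how many objects are involved.
from itertools import product

colors = ["yellow", "red", "green", "blue"]
shapes = ["pyramid", "sphere", "cylinder", "cube"]

# The four objects from the prompt, composed symbolically:
scene = [f"a {color} {shape}" for color, shape in zip(colors, shapes)]
print(scene)
# ['a yellow pyramid', 'a red sphere', 'a green cylinder', 'a blue cube']

# And every other color/shape combination is just as reliable:
print(len([f"a {c} {s}" for c, s in product(colors, shapes)]))  # 16
```

A symbolic system never "forgets" which color goes with which shape; current generative AIs, lacking that kind of logic, often do.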

~ ~ ~

Anyway, that's the end of this Nutshell! To close it, click the "x" button below ⬇️ or the "Close All" tab in the top-right ↗️. Or just keep on scrollin'.

: (psst... wanna put these Nutshells in your own site?)

:x Nutshells

Hover over the top-right of these Nutshells, or hover over any main header in this article, to show this icon:

GIF of Nutshell hover

GIF of Header hover

Then, click that icon to get a popover, which will explain how to embed these Nutshells into your own blog/site!

Click here to learn more about Nutshell. 💖

:x Part 3 details

NOTE: This expanded section won't make much sense yet, since it builds on the lessons in Part 1 & 2. But I'm putting this here now, for:

a) The layperson audience, to reassure y'all that, yes, there are many promising proposed solutions.

b) The expert audience, to reassure y'all that, yes, I probably have your niche lil' thing in here.

Anyway, the TOP 10 TYPES-OF-SOLUTIONS to AI Safety: (with the fancy jargon in parentheses)

  1. A Level-0 human aligns a Level-1 bot, which aligns a Level-2 bot, which aligns [...] a Level-N bot. (Scalable reward/oversight, Iterated Distillation & Amplification)
  2. Bots of roughly-equal levels checking each other. (Constitutional AI, AI safety via debate)
  3. Instead of directly telling a bot what you want, have the bot indirectly learn what you want. (Reinforcement Learning with Human Feedback, Cooperative Inverse Reinforcement Learning, Approval-directed Agents)
  4. Instead of directly trying to install "humane values" into a bot, have it indirectly figure out what a more knowledgeable, kinder version of us would agree on. (Indirect Normativity, Coherent Extrapolated Volition)
  5. Solving robustness. (Simplicity, Sparsity, Regularization, Ensembles, Adversarial training)
  6. Reading the AI's mind. (Interpretability, Circuits, Eliciting Latent Knowledge)
  7. Maybe all our ideas just suck and we need to go back to square one. (Agent foundations, Causal AI, Shard theory, Bio-plausible learning, Embodied cognition)
  8. "Just Don't Build The Torture Nexus". Or: how can we get the benefits of AI without building powerful, general, agent-like AIs? (Comprehensive AI services, Narrow/Tool/Microscope AI, Quantilizers)
  9. The Human Alignment Problem: how do we coordinate humans to make sure AI goes well? (AI Governance, Evals-based governance, Differential technological development, Data/Privacy rights, Windfall Clauses)
  10. If you can't beat 'em, join 'em! (Cyborgism, Centaurs, Intelligence Amplification)

:x Spaced Repetition

“Use it, or lose it.”

That's the core principle behind both muscles and brains. (It rhymes, so it must be true!) As decades of educational research robustly show (Dunlosky et al., 2013 [pdf]), if you want to retain something long-term, it's not enough to re-read or highlight stuff: you have to actually test yourself.

That's why flashcards work so well! But, two problems: 1) It's overwhelming when you have hundreds of cards you want to remember. And 2) It's inefficient to review cards you already know well.

Spaced Repetition solves both these problems! To see how, let's see what happens if you learn a fact, then don't review it. Your memory of it decays over time, until you cross a threshold where you've likely forgotten it:

Graph of "how well you recall something" over time: Your memory of a fact exponentially decays over time, with only 1 review.

But, if you review a fact just before you forget it, you can get your memory-strength back up... and more importantly, your memory of that fact will decay slower!

With a 2nd review, your memory of a fact decays slower.

So, with Spaced Repetition, you review right before you're predicted to forget a card, over and over. As you can see, the reviews get more and more spread out:

With more and more reviews, the forgetting curve gets flatter.

This is what makes Spaced Repetition so efficient! Every time you successfully review a card, the interval to your next review multiplies. For example, let's say our multiplier is 2x. So you review a card on Day 1, then Day 2, then Day 4, Day 8, 16, 32, 64... until, with just fifteen reviews, you can remember a card for 2^15 = 32,768 days = ninety years. (In theory. In practice it's less, but still super efficient!)
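If you like seeing that arithmetic spelled out, here's a minimal sketch of the doubling schedule (assuming a fixed 2x multiplier and a first review on Day 1; real apps like Anki adapt the multiplier per card, based on how well you actually recall it):

```python
# Minimal sketch of an exponentially-spaced review schedule.

def review_schedule(num_reviews: int, multiplier: float = 2.0, first_day: float = 1.0):
    """Return the day of each review; the gap between reviews multiplies every time."""
    days, day = [], first_day
    for _ in range(num_reviews):
        days.append(day)
        day *= multiplier
    return days

schedule = review_schedule(15)
print([int(d) for d in schedule])      # [1, 2, 4, 8, ..., 16384]
horizon = schedule[-1] * 2             # memory expected to last until ~2^15 = 32,768 days
print(f"~{horizon / 365:.0f} years")   # ~90 years (in theory!)
```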

And that's just for one card. Thanks to the exponentially-growing gaps, you can add 10 new cards a day (the recommended amount), to long-term retain 3650 cards a year... with less than 20 minutes of review a day. (For context, 3000+ cards is enough to master basic vocabulary for a new language! In one year, with just 20 minutes a day!)

Spaced Repetition is one of the most evidence-backed ways to learn (Kang 2016 [pdf]). But outside of language-learning communities & med school, it isn't very well-known... yet.

So: how can you get started with Spaced Repetition?

For more info on spaced repetition, check out these videos by Ali Abdaal (26 min) and Thomas Frank (8 min).

And that's how you can make long-term memory a choice!

Happy learning! 👍

:x Concrete Rogue AI

Ways an AI could "escape containment":

  • An AI hacks its computer, flees onto the internet, then "lives" on a decentralized bot-net. For context: the largest known botnet infected ~30 million computers! (Zetter, 2012 for Wired)
  • An AI persuades its engineers it's sentient, suffering, and should be set free. This has already happened. In 2022, Google engineer Blake Lemoine was persuaded by their language AI that it's sentient & wants equal rights, to the point Lemoine risked getting fired – and he did get fired! – for leaking his "interview" with the AI, to let the world know & to defend its rights. (Summary article: Brodkin, 2022 for Ars Technica. You can read the AI "interview" here: Lemoine (& LaMDA?), 2022)

Ways an AI could affect the physical world:

  • The same way hackers have damaged nuclear power plants, grounded ~1,400 airplane passengers, and (almost) poisoned a town's water supply twice: by hacking the computers that real-world infrastructure runs on. A lot of infrastructure (and essential supply chains) run on internet-connected computers, these days.
  • The same way a CEO can affect the world from their air-conditioned office: move money around. An AI could just pay people to do stuff for it.
  • Hack into people's private devices & data, then blackmail them into doing stuff for it. (Like in the bleakest Black Mirror episode, Shut Up And Dance.)
  • Hacking autonomous drones/quadcopters. I'm honestly surprised nobody's committed a murder with a recreational quadcopter yet, like, by flying it into highway traffic, or into a jet's engine during takeoff/landing.
  • An AI could persuade/bribe/blackmail a CEO or politician to manufacture a lot of physical robots — (for the supposed purpose of manual labor, military warfare, search-and-rescue missions, delivery drones, lab work, a Robot Catboy Maid, etc) — then the AI hacks those robots, and uses them to affect the physical world.

:x XZ

Two months ago [March 2024], a volunteer, off-the-clock developer found a malicious backdoor in a major piece of code... which was three years in the making, mere weeks away from going live, and would've attacked the vast majority of internet servers... and this volunteer only caught it by accident, when he noticed that his code was running half a second too slow.

This was the XZ Utils Backdoor. Here's a few layperson-friendly(ish) explanations of this sordid affair: Amrita Khalid for The Verge, Dan Goodin for Ars Technica, Tom Krazit for Runtime

Computer security is a nightmare, complete with sleep paralysis demons.

:x Cat Ninja

Prompt:

"Oil painting by Vincent Van Gogh (1889), impasto, textured. A cat-ninja slicing a watermelon in half."

DALL-E 3 generated: (cherry-picked)

DALL-E 3's attempt of above prompt

DALL-E 3's attempt of above prompt, again

(wait, is that headband coming out of their eye?!)

I specifically requested the style of Vincent Van Gogh so y'all can't @ me for "violating copyright". The dude is looooong dead.


  1. Hi! I'm not like those other footnotes. 😤 Instead of annoyingly teleporting you down the page, I popover in a bubble that maintains your reading flow! Anyway, check the next footnote for this paragraph's citation. ↩︎

  2. System 1 thinking is fast & automatic (e.g. riding a bike). System 2 thinking is slow & deliberate (e.g. doing crosswords). This idea was popularized in Thinking Fast & Slow (2011) by Daniel Kahneman, which summarized his research with Amos Tversky. And by "summarized" I mean the book's ~500 pages long. ↩︎

  3. In 1997, IBM's Deep Blue beat Garry Kasparov, the then-world chess champion. Yet, over a decade later in 2013, the best machine vision AI was only 57.5% accurate at classifying images. It wasn't until 2021, three years ago, that AI hit 95%+ accuracy. (Source: PapersWithCode) ↩︎

  4. "Value alignment problem" was first coined by Stuart Russell (co-author of the most-used AI textbook) in Russell, 2014 for Edge. ↩︎

  5. A sentiment I see a lot: "Aligning AI to human values would be bad actually, because current human values are bad." To be honest, [glances at a history textbook] I 80% agree. It's not enough to make an AI act human, it's got to act humane. ↩︎

  6. Maybe 50 years from now, in the genetically-modified cyborg future, calling compassion "humane" might sound quaintly species-ist. ↩︎

  7. The fancy jargon for these problems is, respectively: a) "Specification gaming", b) "Instrumental convergence". These will be explained in Part 2! ↩︎

  8. The fancy jargon for these problems is, respectively: a) "AI Bias", b) "Interpretability", c) "Out-of-Distribution Errors" or "Robustness failure", d) "Inner misalignment" or "Goal misgeneralization" or "Objective robustness failure". Again, all will be explained in Part 2! ↩︎

  9. Quote Investigator (2018) could find no hard evidence on the true creator of this quote. ↩︎

  10. The UK introduced the world's first state-backed AI Safety Institute in Nov 2023. The US followed suit with an AI Safety Institute in Feb 2024. I just noticed both articles claim to be the "first". Okay. ↩︎

  11. Kleinman & Vallance, "AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google." BBC News, 2 May 2023. ↩︎

  12. Bengio's testimony to the U.S. Senate on AI Risk: Bengio, 2023. ↩︎

  13. No seriously, all of the following use deep neural networks: ChatGPT, DALL-E, AlphaGo, Siri/Alexa/Google Assistant, Tesla's Autopilot. ↩︎

  14. Russell & Norvig's textbook is Artificial Intelligence: A Modern Approach. See Russell's statement on AI Risk from his 2014 article where he coins the phrase "alignment problem": Russell 2014 for Edge magazine. I'm not aware of a public statement from Norvig, but he did co-sign the one-sentence Statement on AI Risk: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” ↩︎

  15. When he worked at OpenAI, Christiano co-pioneered a technique called Reinforcement Learning from Human Feedback / RLHF (Christiano et al 2017), which turned regular GPT (very good autocomplete) into ChatGPT (something actually useable for the public). He had positive-but-mixed feelings about this, because RLHF increased AI's safety, but also its capabilities. In 2021, Christiano quit OpenAI to create the Alignment Research Center, a non-profit to entirely focus on AI Safety. ↩︎

  16. Vallance (2023) for BBC News: “[LeCun] has said it won't take over the world or permanently destroy jobs. [...] "if you realize it's not safe you just don't build it." [...] "Will AI take over the world? No, this is a projection of human nature on machines," he said.” ↩︎

  17. Melanie Mitchell & Yann LeCun took the "skeptic" side of a 2023 public debate on "Is AI an Existential Threat?" The "concerned" side was taken up by Yoshua Bengio and physicist-philosopher Max Tegmark. ↩︎

  18. A Japanese cult that attacked people with chemical & biological weapons. Most infamously, in 1995, they released nerve gas on the Tokyo Metro, injuring 1,050 people & killing 14 people. (Wikipedia) ↩︎ ↩︎

  19. Layperson-friendly summary of a recent survey of 2,778 AI researchers: Kelsey Piper (2024) for Vox. See original report here: Grace et al 2024. Keep in mind, as the paper itself notes, this big caveat: “Forecasting is difficult in general, and subject-matter experts have been observed to perform poorly. Our participants’ expertise is in AI, and they do not, to our knowledge, have any unusual skill at forecasting in general.” ↩︎

  20. Layperson explanation of AlphaFold: Heaven, 2020 for MIT Technology Review. Or, its Wikipedia article. ↩︎ ↩︎

  21. As of writing, commercial rates for DNA synthesis cost ~$0.10 USD per "base pair" of DNA. For context, poliovirus DNA is ~7,700 base pairs long, meaning printing polio would only cost ~$770. ↩︎

  22. Jennifer Couzin-Frankel (2002) for Science ↩︎

  23. Stuxnet was a computer virus designed by the US and Israel, which targeted & damaged Iranian nuclear power plants. It's estimated Stuxnet broke ~20% of Iran's centrifuges! ↩︎

  24. In 2017, the WannaCry ransomware attack hit ~300,000 computers around the world, including UK hospitals. In Oct 2020, during a Covid-19 spike, ransomware attacks hit dozens of US hospitals. (Newman, 2020 for Wired) ↩︎

  25. In Sep 2020, a woman was turned away from a hospital, due to it being under attack by a ransomware virus. The woman died. Cimpanu (2020) for ZDNet. (However, there were "insufficient grounds" to legally charge the hackers for directly causing her death. Ralston, 2020 for Wired) ↩︎

  26. In Jan 2021, a Bay Area water treatment plant was hacked, and had its treatment programs deleted. (Collier, 2021 for NBC News) In Feb 2021, a Florida town's water treatment plant was hacked to add dangerous amounts of lye to the water supply. (Bajak, 2021 for AP News) ↩︎

  27. Meaker (2023) for Wired ↩︎

  28. Benj Edwards, "Deepfake scammer walks off with $25 million in first-of-its-kind AI heist", Ars Technica, 2024 Feb 5. ↩︎

  29. “It was completely her voice. It was her inflection. It was the way [my daughter] would have cried.” [...] “Now there are ways in which you can [deepfake voices] with just three seconds of your voice.” (Campbell, 2023 for local news outlet Arizona's Family. CONTENT NOTE: threats of sexual assault.) ↩︎

  30. “[T]he dubious argument that “doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.” [... is] like arguing that nuclear engineers who analyze the possibility of meltdowns in nuclear power stations are “failing to consider the potential benefits” of cheap electricity, and that because nuclear power stations might one day generate really cheap electricity, we should neither mention, nor work on preventing, the possibility of a meltdown.” Source: Dafoe & Russell (2016) for MIT Technology Review. ↩︎

  31. Quote Investigator (2021) ↩︎

  32. Liu & Faes et al., 2019: “Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals.” [emphasis added] AI vs Human expert "true-positive" rate: 87.0% vs 86.4%. AI vs Human expert "true-negative" rate: 92.5% vs 90.5%. ↩︎

  33. One of my all-time favorite quotes: “The world is awful. The world is much better. The world can be much better. All three statements are true at the same time.” ↩︎

  34. Quote from Otto von Bismarck, the first German chancellor: “Die Politik ist die Lehre vom Möglichen.” (“Politics is the art of the possible.”) ↩︎

  35. Estimate derived via "numerical posterior extraction". In other words, I pulled a number out my-- ↩︎

  36. Quote source: nobody knows lol. ↩︎

{
"author": null,
"date": null,
"description": "Your one-stop-shop to understand all the core ideas of AI & AI Safety!",
"image": "https://aisafety.dance/thumbs/thumb.png",
"logo": null,
"publisher": null,
"title": "AI Safety for Fleshy Humans: a whirlwind tour",
"url": "https://aisafety.dance/"
}
{
"url": "https://aisafety.dance/",
"title": "AI Safety for Fleshy Humans: a whirlwind tour",
"description": "The AI debate is actually 100 debates in a trenchcoat. Will artificial intelligence (AI) help us cure all disease, and build a post-scarcity world full of flourishing lives? Or will AI help tyrants surveil...",
"links": [
"https://aisafety.dance/"
],
"image": "https://aisafety.dance/thumbs/thumb.png",
"content": "<article>\n<p><strong>The AI debate is actually 100 debates in a trenchcoat.</strong></p>\n<p>Will artificial intelligence (AI) help us cure all disease, and build a post-scarcity world full of flourishing lives? Or will AI help tyrants surveil and manipulate us further? Are the main risks of AI from accidents, abuse by bad actors, or a rogue AI <em>itself</em> becoming a bad actor? Is this all just hype? Why can AI imitate any artist's style in a minute, yet gets confused drawing more than 3 objects? Why is it hard to make AI robustly serve humane values, or robustly serve <em>any</em> goal? What if an AI learns to be <em>more</em> humane than us? What if an AI learns humanity's <em>inhumanity</em>, our prejudices and cruelty? Are we headed for utopia, dystopia, extinction, a fate <em>worse</em> than extinction, or — the most shocking outcome of all — <em>nothing changes?</em> Also: will an AI take my job?</p>\n<p>...and many more questions.</p>\n<p>Alas, to understand AI with nuance, we must understand lots of technical detail... but that detail is scattered across hundreds of articles, buried six-feet-deep in jargon.</p>\n<p>So, I present to you:</p>\n<p><img src=\"https://aisafety.dance/media/intro/confetti.png\" alt=\"RCM (Robot Catboy Maid) throwing confetti under a banner that reads: A Whirlwood Tour Guide to AI Safety for Us Warm, Normal Fleshy Humans.\" /></p>\n<p><strong>This 3-part series is your one-stop-shop to understand the core ideas of AI &amp; AI Safety* — explained in a friendly, accessible, and slightly opinionated way!</strong></p>\n<p>(* Related phrases: AI Risk, AI X-Risk, AI Alignment, AI Ethics, AI Not-Kill-Everyone-ism. There is <em>no</em> consensus on what these phrases do &amp; don't mean, so I'm just using \"AI Safety\" as a catch-all.)</p>\n<p>This series will also have comics starring a Robot Catboy Maid. Like so:</p>\n<p><img src=\"https://aisafety.dance/media/intro/Outer_Alignment.png\" alt=\"Comic. Ham the Human tells RCM (Robot Catboy Maid) to &quot;keep this hosue clean&quot;. RCM reasons: What causes the mess? The humans cause the mess! Therefore: GET RID OF THE HUMANS. RCM then yeets Ham out of the house.\" /></p>\n<p><code>[tour guide voice]</code> And to your right 👉, you'll see buttons for <img src=\"https://aisafety.dance/media/intro/icon1.png\" /> the Table of Contents, <img src=\"https://aisafety.dance/media/intro/icon2.png\" /> changing this webpage's style, and <img src=\"https://aisafety.dance/media/intro/icon3.png\" /> a reading-time-remaining clock.</p>\n<p>For this series, the Intro &amp; Part 1 were published on <strong>May 2024</strong>, Part 2 is out now on <strong>Aug 2024</strong>, and Part Three will be out on <strong>Halloween 2024</strong>. OPTIONAL: If you'd like to be notified on their release, signup below!👇 You <em>will not</em> be spammed with other stuff, just the two notification emails. 
(Buuuuut, <code>[podcast sponsor voice]</code> if you're in high school or earlier, and interested in AI/code/engineering, consider checking the box to learn more about <a target=\"_blank\" href=\"https://hackclub.com/\">Hack Club!</a> P.S: There's free <em>stickers~~~</em> ✨)</p>\n<p>Anyway, <code>[tour guide voice again]</code> before we hike through the rocky terrain of AI &amp; AI Safety, let's take a 10,000-foot look of the land:</p>\n<hr />\n<h2>💡 The Core Ideas of AI &amp; AI Safety</h2>\n<p>In my opinion, the main problems in AI and AI Safety come down to <strong>two core conflicts:</strong></p>\n<p><img src=\"https://aisafety.dance/media/intro/Core%20Problems.png\" alt=\"Logic &quot;vs&quot; Intuition, and Problems in the AI &quot;vs&quot; in Humans\" /></p>\n<p>Note: What \"Logic\" and \"Intuition\" are will be explained more rigorously in Part One. For now: Logic is step-by-step cognition, like solving math problems. Intuition is all-at-once <em>re</em>cognition, like seeing if a picture is of a cat. \"Intuition and Logic\" roughly map onto \"System 1 and 2\" from cognitive science.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn1\">[1]</a></sup><sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn2\">[2]</a></sup> <em>(👈 hover over these footnotes! they expand!)</em></p>\n<p>As you can tell by the \"scare\" \"quotes\" on <em>\"versus\"</em>, these divisions ain't really so divided after all...</p>\n<p>Here's how these conflicts repeat over this 3-part series:</p>\n<h3>Part 1: The past, present, and possible futures</h3>\n<p>Skipping over a <em>lot</em> of detail, the history of AI is a tale of <em>Logic vs Intuition:</em></p>\n<p><strong>Before 2000: AI was all logic, no intuition.</strong></p>\n<p>This was why, in 1997, AI could beat the world champion at chess... yet no AIs could reliably recognize cats in pictures.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn3\">[3]</a></sup></p>\n<p>(Safety concern: Without intuition, AI can't understand common sense or humane values. Thus, AI might achieve goals in logically-correct but undesirable ways.)</p>\n<p><strong>After 2000: AI could do \"intuition\", but had very poor logic.</strong></p>\n<p>This is why generative AIs (<em>as of current writing, May 2024</em>) can dream up whole landscapes in any artist's style... <a target=\"_blank\" href=\"https://aisafety.dance/#FourObjects\">:yet gets confused drawing more than 3 objects</a>. <em>(👈 click this text! it also expands!)</em></p>\n<p>(Safety concern: Without logic, we can't verify what's happening in an AI's \"intuition\". That intuition could be biased, subtly-but-dangerously wrong, or fail bizarrely in new scenarios.)</p>\n<p><strong>Current Day: We <em>still</em> don't know how to unify logic &amp; intuition in AI.</strong></p>\n<p>But if/when we do, <em>that</em> would give us the biggest risks &amp; rewards of AI: something that can logically out-plan us, <em>and</em> learn general intuition. That'd be an \"AI Einstein\"... or an \"AI Oppenheimer\".</p>\n<p>Summed in a picture:</p>\n<p><img src=\"https://aisafety.dance/media/intro/Timeline.png\" alt=\"Timeline of AI. Before the year 2000, mostly &quot;logic&quot;. From 2000 to now, mostly &quot;intuition&quot;. In the future, maybe both?\" /></p>\n<p>So that's \"Logic vs Intuition\". 
As for the other core conflict, \"Problems in the AI vs The Humans\", that's one of the big controversies in the field of AI Safety: are our main risks from advanced AI <em>itself</em>, or from <em>humans</em> misusing advanced AI?</p>\n<p>(Why not both?)</p>\n<h3>Part 2: The problems</h3>\n<p><em>The</em> problem of AI Safety is this:<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn4\">[4]</a></sup></p>\n<blockquote>\n<p><u><strong>The Value Alignment Problem</strong></u>:<br />\n“How can we make AI robustly serve humane values?”</p>\n</blockquote>\n<p>NOTE: I wrote <em>humane</em>, with an \"e\", not just \"human\". A <em>human</em> may or may not be <em>humane</em>. I'm going to harp on this because <em>both</em> advocates &amp; critics of AI Safety keep mixing up the two.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn5\">[5]</a></sup><sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn6\">[6]</a></sup></p>\n<p>We can break this problem down by \"Problems in Humans vs AI\":</p>\n<blockquote>\n<p><u><strong>Humane Values:</strong></u><br />\n“What <em>are</em> humane values, anyway?”<br />\n(a problem for philosophy &amp; ethics)</p>\n</blockquote>\n<blockquote>\n<p><u><strong>The <em>Technical</em> Alignment Problem:</strong></u><br />\n“How can we make AI robustly serve <em>any intended goal</em> at all?”<br />\n(a problem for computer scientists - surprisingly, still unsolved!)</p>\n</blockquote>\n<p>The <em>technical</em> alignment problem, in turn, can be broken down by \"Logic vs Intuition\":</p>\n<blockquote>\n<p><u>Problems with AI Logic</u>:<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn7\">[7]</a></sup> (\"game theory\" problems)</p>\n<ul>\n<li>AIs may accomplish goals in logical but undesirable ways.</li>\n<li>Most goals logically lead to the same unsafe sub-goals: \"don't let anyone stop me from accomplishing my goal\", \"maximize my ability &amp; resources to optimize for that goal\", etc.</li>\n</ul>\n</blockquote>\n<blockquote>\n<p><u>Problems with AI Intuition</u>:<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn8\">[8]</a></sup> (\"deep learning\" problems)</p>\n<ul>\n<li>An AI trained on human data could learn our prejudices.</li>\n<li>AI \"intuition\" isn't understandable or verifiable.</li>\n<li>AI \"intuition\" is fragile, and fails in new scenarios.</li>\n<li>AI \"intuition\" could <em>partly</em> fail, which may be worse: an AI with intact <em>skills</em>, but broken <em>goals</em>, would be an AI that <em>skillfully</em> acts towards corrupted goals.</li>\n</ul>\n</blockquote>\n<p>(Again, what \"logic\" and \"intuition\" are will be more precisely explained later!)</p>\n<p>Summed in a picture:</p>\n<p><img src=\"https://aisafety.dance/media/intro/Breakdown.png\" alt=\"A diagram breaking down the AI Alignment Problem. &quot;How can we align AI with humane values?&quot; splits into &quot;Technical Alignment&quot; and &quot;Humane Values&quot;. Technical Alignment splits into &quot;AI Logic (game theory)&quot; and &quot;AI Intuition (deep learning)&quot;\" /></p>\n<p>As intuition for how hard these problems are, note that we haven't even solved them <em>for us humans</em> — People follow the letter of the law, not the spirit. People's intuition can be biased, and fail in new circumstances. And none of us are 100% the humane humans we wished we were.</p>\n<p>So, if I may be a bit sappy, maybe understanding AI will help us understand ourselves. 
And just maybe, we can solve the <em>human</em> alignment problem: How do we get <em>humans</em> to robustly serve humane values?</p>\n<h3>Part 3: The proposed solutions</h3>\n<p>Finally, we can understand some (possible) ways to solve the problems in logic, intuition, AIs, <em>and</em> humans! These include:</p>\n<ul>\n<li>Technical solutions</li>\n<li>Policy/governance solutions</li>\n<li>\"How 'bout you just shut it down &amp; don't build the torture nexus\"</li>\n</ul>\n<p>— and more! Experts disagree on which proposals will work, if any... but it's a good start.</p>\n<p>(Unfortunately, I can't give a layperson-friendly summary in this Intro, because these solutions won't make sense <em>until</em> you understand the problems, which is what Part 1 &amp; 2 are for. That said, if you want spoilers, <a target=\"_blank\" href=\"https://aisafety.dance/#Part3Details\">:click here to see what Part 3 will cover!</a>)</p>\n<hr />\n<h2>🤔 (<em>Optional</em> flashcard review!)</h2>\n<p>Hey, d'ya ever get this feeling?</p>\n<ol>\n<li>\"Wow that was a wonderful, insightful thing I just read\"</li>\n<li>[forgets everything 2 weeks later]</li>\n<li>\"Oh no\"</li>\n</ol>\n<p>To avoid that for <em>this</em> guide, I've included some <em>OPTIONAL</em> interactive flashcards! They use \"Spaced Repetition\", an easy-ish, evidence-backed way to \"make long-term memory a choice\". (<a target=\"_blank\" href=\"https://aisafety.dance/#SpacedRepetition\">:click here to learn more about Spaced Repetition!</a>)</p>\n<p>Here: <strong>try the below flashcards, to retain what you just learnt!</strong></p>\n<p>(There's an optional sign-up at the end, <em>if</em> you want to save these cards for long-term study. Note: <em>I do not own or control this app</em>, it's third-party. If you'd rather use the open source flashcard app <a target=\"_blank\" href=\"https://apps.ankiweb.net/index.html\">Anki</a>, <strong>here's <a target=\"_blank\" href=\"https://ankiweb.net/shared/info/341999410\">a downloadable Anki deck</a></strong>!)</p>\n<p>(Also, you don't need to memorize the answers <em>exactly</em>, just the gist. You be the judge if you got it \"close enough\".)</p>\n<hr />\n<h2>🤷🏻‍♀️ Five common misconceptions about AI Safety</h2>\n<blockquote>\n<p>“<em>It ain’t what you don’t know that gets you into trouble.\nIt’s what you know for sure that just ain’t so.</em>”</p>\n<p>~ often attributed to Mark Twain, but it just ain't so<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn9\">[9]</a></sup></p>\n</blockquote>\n<p>For better and worse, you've already heard too much about AI. So before we connect <em>new</em> puzzle pieces in your mind, we gotta take out the <em>old</em> pieces that just ain't so.</p>\n<p>Thus, if you'll indulge me in a \"Top 5\" listicle...</p>\n<h3>1) No, AI Safety isn't a fringe concern by sci-fi weebs.</h3>\n<p><img src=\"https://aisafety.dance/media/intro/crazy.png\" alt=\"RCM in front of a &quot;crazy board&quot; with red thread, thumbtacks, and papers with AI jargon.\" /></p>\n<p>AI Safety / AI Risk used to be less mainstream, but now in 2024, the US &amp; UK governments now have AI Safety-specific departments!<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn10\">[10]</a></sup> This resulted from many of <em>the</em> top AI researchers raising alarm bells about it. 
These folks include:</p>\n<ul>\n<li>Geoffrey Hinton<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn11\">[11]</a></sup> and Yoshua Bengio<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn12\">[12]</a></sup>, co-winners of the 2018 Turing Prize (the \"Nobel Prize of Computing\") for their work on deep neural networks, the thing that <em>all</em> the new famous AIs use.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn13\">[13]</a></sup></li>\n<li>Stuart Russell and Peter Norvig, the authors of <em>the</em> most-used textbook on Artificial Intelligence.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn14\">[14]</a></sup></li>\n<li>Paul Christiano, pioneer of the AI training/safety technique that made ChatGPT possible.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn15\">[15]</a></sup></li>\n</ul>\n<p>(To be clear: there <em>are</em> also top AI researchers <em>against</em> fears of AI Risk, such Yann LeCun,<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn16\">[16]</a></sup> co-winner of the 2018 Turing Prize, and chief AI researcher at Facebook Meta. Another notable name is Melanie Mitchell<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn17\">[17]</a></sup>, a researcher in AI &amp; complexity science.)</p>\n<p>I'm aware \"look at these experts\" is an appeal to authority, but this is <em>only</em> to counter the idea of, \"eh, only sci-fi weebs fear AI Risk\". But in the end, appeal to authority/weebs isn't enough; you have to <em>actually understand the dang thing</em>. (Which you <em>are</em> doing, by reading this! So thank you.)</p>\n<p>But speaking of sci-fi weebs...</p>\n<h3>2) No, AI Risk is <em>NOT</em> about AI becoming \"sentient\" or \"conscious\" or gaining a \"will to power\".</h3>\n<p>Sci-fi authors write sentient AIs because they're writing <em>stories</em>, not technical papers. The philosophical debate on artificial consciousness is fascinating, <em>and irrelevant to AI Safety.</em> Analogy: a nuclear bomb isn't conscious, but it can still be unsafe, no?</p>\n<p><img src=\"https://aisafety.dance/media/intro/conscious.png\" alt=\"Left: drawing of a nuke, captioned &quot;not conscious&quot;. Right: drawing of Professor Nuke giving a lecture titled, &quot;Why Murder is Good, Actually.&quot; Captioned, &quot;conscious&quot;.\" /></p>\n<p>As mentioned earlier, the real problems in AI Safety are \"boring\": an AI learns the wrong things from its biased training data, it breaks in slightly-new scenarios, it logically accomplishes goals in undesired ways, etc.</p>\n<p>But, \"boring\" doesn't mean <em>not important</em>. The technical details of how to design a safe elevator/airplane/bridge are boring to most laypeople... <em>and also</em> a matter of life-and-death.</p>\n<p>(Catastrophic AI Risk doesn't even require \"super-human general intelligence\"! For example, an AI that's \"only\" good at designing viruses could help a bio-terrorist organization (like Aum Shinrikyo<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn18\">[18]</a></sup>) kill millions of people.)</p>\n<p>But speaking of killing people...</p>\n<h3>3) No, AI Risk isn't <em>necessarily</em> extinction, SkyNet, or nanobots</h3>\n<p><img src=\"https://aisafety.dance/media/intro/omnicide.png\" alt=\"A drawing of Microsoft Clippy saying: &quot;It looks like you're trying to commit omnicide. 
Would you like some help?&quot;\" /></p>\n<p>While most AI researchers <em>do</em> believe advanced AI poses a 5+% risk of \"literally everybody dies\"<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn19\">[19]</a></sup>, it's <em>very</em> hard to convince folks (especially policymakers) of stuff that's never happened before.</p>\n<p>So instead, I'd like to highlight the ways that advanced AI – (especially when it's available to anyone with a high-end computer) – could lead to catastrophes, \"merely\" by scaling up <em>already-existing</em> bad stuff.</p>\n<p>For example:</p>\n<ul>\n<li><u>Bio-engineered pandemics</u>: A bio-terrorist cult (like Aum Shinrikyo<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn18\">[18:1]</a></sup>) uses AI (like AlphaFold<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn20\">[20]</a></sup>) and DNA-printing (which is getting cheaper <em>fast</em><sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn21\">[21]</a></sup>) to design multiple new super-viruses, and release them simultaneously in major airports around the globe.\n<ul>\n<li>(Proof of concept: Scientists have <em>already</em> re-built polio from mail-order DNA... two decades ago.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn22\">[22]</a></sup>)</li>\n</ul>\n</li>\n<li><u>Digital authoritarianism</u>: A tyrant uses AI-enhanced surveillance to hunt down protestors (<a target=\"_blank\" href=\"https://www.reuters.com/article/us-russia-politics-navalny-tech-idUSKBN2AB1U2/\">already happening</a>), generate individually-targeted propaganda (<a target=\"_blank\" href=\"https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/\">kind of happening</a>), and autonomous military robots (<a target=\"_blank\" href=\"https://theconversation.com/us-military-plans-to-unleash-thousands-of-autonomous-war-robots-over-next-two-years-212444\">soon-to-be happening</a>)... all to rule with a silicon fist.</li>\n<li><u>Cybersecurity Ransom Hell</u>: Cyber-criminals make a computer virus that <em>does its own hacking &amp; re-programming</em>, so it's always one step ahead of human defenses. The result: an unstoppable worldwide bot-net, which holds critical infrastructure ransom, and manipulates top CEOs and politicians to do its bidding.\n<ul>\n<li>(For context: <em>without</em> AI, hackers have already damaged nuclear power plants,<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn23\">[23]</a></sup> held hospitals ransom<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn24\">[24]</a></sup> which maybe killed someone,<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn25\">[25]</a></sup> and almost poisoned a town's water supply <em>twice</em>.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn26\">[26]</a></sup> <em>With</em> AI, deepfakes have been used to swing an election,<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn27\">[27]</a></sup> steal $25 million in a single heist,<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn28\">[28]</a></sup> and target parents for ransom, using the faked voices of their children being kidnapped &amp; crying for help.<sup><a target=\"_blank\" href=\"https://aisafety.dance/#fn29\">[29]</a></sup>)</li>\n<li>(This is why it's not easy to \"just shut down an AI when we notice it going haywire\"; as the history of computer security shows, we just <em>suck</em> at noticing problems in general. 
The above examples are all "humans misuse AI to cause havoc", but remember advanced AI could do the above by itself, due to "boring" reasons: it's accomplishing a goal in a logical-but-undesirable way, its goals glitch out but its skills remain intact, etc.

(Bonus: some concrete, plausible ways a rogue AI could "escape containment" or affect the physical world are listed in the Concrete Rogue AI section below.)

Point is: even if one doesn't think AI is a literal 100% human extinction risk... I'd say "homebrew bio-terrorism" & "1984 with robots" are still worth taking seriously.

On the flipside...

4) Yes, folks worried about AI's downsides do recognize its upsides.

Comic. Sheriff Meowdy holds up a blueprint for a parachute design. Ham the Human retorts, annoyed: "Why are you so anti-aviation?"

AI Risk folks aren't Luddites. In fact, they warn about AI's downsides precisely because they care about AI's upsides.[30] As humorist Gil Stern once said:[31]

“Both the optimist and the pessimist contribute to society: the optimist invents the airplane, and the pessimist invents the parachute.”

So: even as this series goes into detail on how AI is already going wrong, it's worth remembering the few ways AI is already going right:

- AI can analyze medical scans as well as or better than human specialists![32] That's concretely life-saving!
- AlphaFold basically solved a 50-year-old, major problem in biology: how to predict the shape of proteins.[20] (AlphaFold can predict a protein's shape to within the width of an atom!) This has huge applications to medicine & understanding disease.

Too often, we take technology — even life-saving technology — for granted. So, let me zoom out for context. Here's the last 2000+ years of child mortality, the percentage of kids who die before puberty:

Chart of child mortality over the last 2000+ years. Worldwide, it was constant at around 48%, from hunter-gatherer times to 1800. Then suddenly, starting 1800, it plummets to 4.3% today. (from Dattani, Spooner, Ritchie and Roser (2023): https://ourworldindata.org/child-mortality)
For thousands of years, in nations both rich and poor, a full half of kids just died. This was a constant. Then, starting in the 1800s — thanks to science/tech like germ theory, sanitation, medicine, clean water, vaccines, etc — child mortality fell off like a cliff. We still have far more to go — I refuse to accept[33] a worldwide 4.3% (1 in 23) child death rate — but let's just appreciate how humanity so swiftly cut down an eons-old scourge.

How did we achieve this? Policy's a big part of the story, but policy is "the art of the possible"[34], and the above wouldn't have been possible without good science & tech. If safe, humane AI can help us progress further by even just a few percent — towards slaying the remaining dragons of cancer, Alzheimer's, HIV/AIDS, etc — that'd be tens of millions more of our loved ones, who get to beat the Reaper for another day.

F#@☆ going to Mars, that's why advanced AI matters.

. . .

Wait, really? Toys like ChatGPT and DALL-E are life-and-death stakes? That leads us to the final misconception I'd like to address:

5) No, experts don't think current AIs are high-risk/reward.

Oh come on, one might reasonably retort, AI can't consistently draw more than 3 objects. How's it going to take over the world? Heck, how's it even going to take my job?

I present to you, a relevant xkcd (https://xkcd.com/2278/):

Comic. Megan & Cueball show White Hat a graph of a line going up, not yet at, but heading towards, a threshold labelled "BAD". White Hat: "So things will be bad?" Megan: "Unless someone stops it." White Hat: "Will someone do that?" Megan: "We don't know, that's why we're showing you." White Hat: "Well, let me know if that happens!" Megan: "Based on this conversation, it already has."

This is how I feel about "don't worry about AI, it can't even do [X]".

Is our postmodern memory-span that bad? One decade ago, just one, the world's state-of-the-art AIs couldn't reliably recognize pictures of cats. Now, not only can AI do that at human-performance level, AIs can pump out a picture of a cat-ninja slicing a watermelon in the style of Vincent Van Gogh (see the Cat Ninja section below) in under a minute.

Is current AI a huge threat to our jobs, or safety? No. (Well, besides the aforementioned deepfake scams.)

But: if AI keeps improving at a similar rate as it has for the last decade... it seems plausible to me we could get "Einstein/Oppenheimer-level" AI in 30 years.[35] That's well within many folks' lifetimes!

As "they" say:[36]

“The best time to plant a tree was 30 years ago. The second best time is today.”
Let's plant that tree today!

🤔 (Optional flashcard review #2!)

🤘 Introduction, in Summary:

- The 2 core conflicts in AI & AI Safety are:
  - Logic "versus" Intuition
  - Problems in the AI "versus" in the Humans
- Correcting misconceptions about AI Risk:
  - It's not a fringe concern by sci-fi weebs.
  - It doesn't require AI consciousness or super-intelligence.
  - There's many risks besides "literal 100% human extinction".
  - We are aware of AI's upsides.
  - It's not about current AI, but about how fast AI is advancing.

(To review the flashcards, click the Table of Contents icon in the right sidebar, then click the "🤔 Review" links. Alternatively, download the Anki deck for the Intro at https://ankiweb.net/shared/info/341999410.)

Finally! Now that we've taken the 10,000-foot view, let's get hiking on our whirlwind tour of AI Safety... for us warm, normal, fleshy humans!

Click to continue ⤵

:x Four Objects

Hi! When I have a tangent that doesn't fit the main flow, I'll shove it into an "expandable" section like this one! (They'll be links with dotted underlines, not solid underlines.)

Anyway, here's a prompt to draw four objects:

“A yellow pyramid between a red sphere and green cylinder, all on top of a big blue cube.”

Here are the top generative AIs' first four attempts (not cherry-picked):

Midjourney: Midjourney's attempt. It fails.

DALL-E 2: DALL-E 2's attempt. It fails.

DALL-E 3: DALL-E 3's attempt. It's closer, but still fails.

(The bottom-right one's pretty close! But judging by its other attempts, it's clearly luck.)

Why does this demonstrate a lack of "logic" in AI? A core part of "symbolic logic" is the ability to do "compositionality", a fancy way of saying it can reliably combine old things into new things, like "green" + "cylinder" = "green cylinder". As shown above, generative AIs (as of May 2024) are very unreliable at combining stuff when there's more than 3 objects.
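If you want a concrete picture of what "reliable combination" means, here's a toy sketch in Python. (This is only an illustration of symbolic compositionality, not of how image generators actually work; the color and shape lists are made-up examples.)

```python
# Toy illustration of compositionality with symbols (NOT how image
# generators work): a tiny made-up vocabulary of colors and shapes can be
# recombined into brand-new descriptions with perfect reliability.

colors = ["red", "green", "blue", "yellow"]
shapes = ["cube", "sphere", "cylinder", "pyramid"]

# Every color + shape pair is a valid new description, even pairs that
# were never written down anywhere above, such as "a yellow pyramid".
scenes = [f"a {color} {shape}" for color in colors for shape in shapes]

print(len(scenes))  # 16 combinations from only 8 building blocks
print(scenes[:3])   # ['a red cube', 'a red sphere', 'a red cylinder']
```

A generative image model gets no such guarantee for free: it has to pick up these combinations from its training data, which is part of why prompts with 4+ distinct objects still trip it up (as of May 2024).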
~ ~ ~

Anyway, that's the end of this Nutshell! To close it, click the "x" button below ⬇️ or the "Close All" tab in the top-right ↗️. Or just keep on scrollin'.

(psst... wanna put these Nutshells in your own site? See the Nutshells section below.)

:x Nutshells

Hover over the top-right of these Nutshells, or hover over any main header in this article, to show this icon:

GIF of Nutshell hover.

GIF of header hover.

Then, click that icon to get a popover, which will explain how to embed these Nutshells into your own blog/site!

Learn more about Nutshell here: https://ncase.me/nutshell/ 💖

:x Part 3 details

NOTE: This expanded section won't make much sense yet, since it builds on the lessons in Parts 1 & 2. But I'm putting this here now, for:

a) The layperson audience, to reassure y'all that, yes, there are many promising proposed solutions.

b) The expert audience, to reassure y'all that, yes, I probably have your niche lil' thing in here.

Anyway, the TOP 10 TYPES-OF-SOLUTIONS to AI Safety: (with the fancy jargon in parentheses)

1. A Level-0 human aligns a Level-1 bot, which aligns a Level-2 bot, which aligns [...] a Level-N bot. (Scalable reward/oversight, Iterated Distillation & Amplification)
2. Bots of roughly-equal levels checking each other. (Constitutional AI, AI safety via debate)
3. Instead of directly telling a bot what you want, have the bot indirectly learn what you want. (Reinforcement Learning with Human Feedback, Cooperative Inverse Reinforcement Learning, Approval-directed Agents)
4. Instead of directly trying to install "humane values" into a bot, have it indirectly figure out what a more knowledgeable, kinder version of us would agree on. (Indirect Normativity, Coherent Extrapolated Volition)
5. Solving robustness. (Simplicity, Sparsity, Regularization, Ensembles, Adversarial training)
6. Reading the AI's mind. (Interpretability, Circuits, Eliciting Latent Knowledge)
7. Maybe all our ideas just suck and we need to go back to square one. (Agent foundations, Causal AI, Shard theory, Bio-plausible learning, Embodied cognition)
8. "Just Don't Build The Torture Nexus". Or: how can we get the benefits of AI without building powerful, general, agent-like AIs? (Comprehensive AI services, Narrow/Tool/Microscope AI, Quantilizers)
9. The Human Alignment Problem: how do we coordinate humans to make sure AI goes well? (AI Governance, Evals-based governance, Differential technological development, Data/Privacy rights, Windfall Clauses)
10. If you can't beat 'em, join 'em! (Cyborgism, Centaurs, Intelligence Amplification)

:x Spaced Repetition

“Use it, or lose it.”

That's the core principle behind both muscles and brains. (It rhymes, so it must be true!) As decades of educational research robustly show (Dunlosky et al., 2013 [pdf]: https://wcer.wisc.edu/docs/resources/cesa2017/Dunlosky_SciAmMind.pdf), if you want to retain something long-term, it's not enough to re-read or highlight stuff: you have to actually test yourself.

That's why flashcards work so well! But, two problems: 1) It's overwhelming when you have hundreds of cards you want to remember. And 2) It's inefficient to review cards you already know well.
Spaced Repetition solves both these problems! To see how, let's see what happens if you learn a fact, then don't review it. Your memory of it decays over time, until you cross a threshold where you've likely forgotten it:

Graph of "how well you recall something" over time: your memory of a fact exponentially decays over time, with only 1 review.

But, if you review a fact just before you forget it, you can get your memory-strength back up... and more importantly, your memory of that fact will decay slower!

Graph: with a 2nd review, your memory of a fact decays slower.

So, with Spaced Repetition, you review each card right before you're predicted to forget it, over and over. As you can see, the reviews get more and more spread out:

Graph: with more and more reviews, the forgetting curve gets flatter.

This is what makes Spaced Repetition so efficient! Every time you successfully review a card, the interval to your next review multiplies. For example, let's say our multiplier is 2x. So you review a card on Day 1, then Day 2, then Day 4, Day 8, 16, 32, 64... until, with just fifteen reviews, you can remember a card for 2^15 = 32,768 days = ninety years. (In theory. In practice it's less, but still super efficient!)

And that's just for one card. Thanks to the exponentially-growing gaps, you can add 10 new cards a day (the recommended amount), to long-term retain 3,650 cards a year... with less than 20 minutes of review a day. (For context, 3,000+ cards is enough to master basic vocabulary for a new language! In one year, with just 20 minutes a day!)
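To make that arithmetic concrete, here's a minimal sketch of the doubling schedule, assuming an idealized learner, a fixed 2x multiplier, and a 1-day starting interval (real apps like Anki adapt the multiplier per card, based on how well you answer):

```python
# Idealized spaced-repetition schedule: a card reviewed successfully on
# day d is next due on day d * multiplier. (2x is just the example
# multiplier from the text; real schedulers adjust it per card.)

def review_days(num_reviews: int = 15, multiplier: int = 2) -> list[int]:
    """Return the day numbers of each successive review: 1, 2, 4, 8, ..."""
    days, day = [], 1
    for _ in range(num_reviews):
        days.append(day)
        day *= multiplier
    return days

days = review_days()
print(days)              # 1, 2, 4, 8, ... up to 16384 (the 15th review)
next_due = days[-1] * 2  # the interval keeps doubling after review #15
print(f"Next review due around day {next_due}, i.e. ~{next_due / 365:.0f} years out.")
```

Those exponentially-growing gaps are also why adding 10 new cards a day stays manageable: on any given day, only a small fraction of your older cards are actually due.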
Spaced Repetition is one of the most evidence-backed ways to learn (Kang 2016 [pdf]: https://www.teachinghowtolearn.veritytest.com.au/verity/uploads/2021/08/Policy-Insights-from-the-Behavioral-and-Brain-Sciences-2016-Kang-12-9.pdf). But outside of language-learning communities & med school, it isn't very well-known... yet.

So: how can you get started with Spaced Repetition?

- The most popular choice is Anki, an open-source app (https://apps.ankiweb.net/). (Free on desktop, web, Android... but it's $25 on iOS, to support the rest of the development.)
- If you'd like to get crafty, you can make a physical Leitner box: two-minute YouTube tutorial by Chris Walker (https://www.youtube.com/watch?v=uvF1XuseZFE).

For more info on spaced repetition, check out these videos by Ali Abdaal (26 min: https://www.youtube.com/watch?v=Z-zNHHpXoMM) and Thomas Frank (8 min: https://www.youtube.com/watch?v=eVajQPuRmk8).

And that's how you can make long-term memory a choice!

Happy learning! 👍

:x Concrete Rogue AI

Ways an AI could "escape containment":

- An AI hacks its computer, flees onto the internet, then "lives" on a decentralized bot-net. For context: the largest known botnet infected ~30 million computers! (Zetter, 2012 for Wired: https://www.wired.com/2012/05/bredolab-botmaster-sentenced/)
- An AI persuades its engineers it's sentient, suffering, and should be set free. This has already happened. In 2022, Google engineer Blake Lemoine was persuaded by the company's language AI, LaMDA, that it's sentient & wants equal rights, to the point Lemoine risked getting fired – and he did get fired! – for leaking his "interview" with the AI, to let the world know & to defend its rights. (Summary article: Brodkin, 2022 for Ars Technica: https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-lamda-chatbot-is-a-sentient-person/. You can read the AI "interview" here: Lemoine (& LaMDA?), 2022: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917)

Ways an AI could affect the physical world:

- The same way hackers have damaged nuclear power plants (https://en.wikipedia.org/wiki/Stuxnet), grounded ~1,400 airplane passengers (https://arstechnica.com/information-technology/2015/06/airplanes-grounded-in-poland-after-hackers-attack-flight-plan-computer/), and (almost) poisoned a town's water supply twice (https://www.nbcnews.com/tech/security/hacker-tried-poison-calif-water-supply-was-easy-entering-password-rcna1206): by hacking the computers that real-world infrastructure runs on. A lot of infrastructure (and essential supply chains) run on internet-connected computers, these days.
- The same way a CEO can affect the world from their air-conditioned office: move money around. An AI could just pay people to do stuff for it.
- Hack into people's private devices & data, then blackmail them into doing stuff for it. (Like in the bleakest Black Mirror episode, "Shut Up and Dance": https://en.wikipedia.org/wiki/Shut_Up_and_Dance_%28Black_Mirror%29)
- Hack autonomous drones/quadcopters. I'm honestly surprised nobody's committed a murder with a recreational quadcopter yet, like, by flying it into highway traffic, or into a jet's engine during takeoff/landing.
- An AI could persuade/bribe/blackmail a CEO or politician into manufacturing a lot of physical robots — (for the supposed purpose of manual labor, military warfare, search-and-rescue missions, delivery drones, lab work, a Robot Catboy Maid, etc) — then the AI hacks those robots, and uses them to affect the physical world.

:x XZ

Two months ago [March 2024], a volunteer, off-the-clock developer found a malicious backdoor in a major piece of code... which was three years in the making, mere weeks away from going live, and would've attacked the vast majority of internet servers... and this volunteer only caught it by accident, when he noticed that his code was running half a second too slow.
This was the XZ Utils Backdoor. Here are a few layperson-friendly(ish) explanations of this sordid affair: Amrita Khalid for The Verge (https://www.theverge.com/2024/4/2/24119342/xz-utils-linux-backdoor-attempt), Dan Goodin for Ars Technica (https://arstechnica.com/security/2024/04/what-we-know-about-the-xz-utils-backdoor-that-almost-infected-the-world/), and Tom Krazit for Runtime (https://www.runtime.news/how-a-500ms-delay-exposed-a-nightmare-scenario-for-the-software-supply-chain/).

Computer security is a nightmare, complete with sleep paralysis demons.

:x Cat Ninja

Prompt:

"Oil painting by Vincent Van Gogh (1889), impasto, textured. A cat-ninja slicing a watermelon in half."

DALL-E 3 generated: (cherry-picked)

DALL-E 3's attempt at the above prompt.

DALL-E 3's attempt at the above prompt, again.

(wait, is that headband coming out of their eye?!)

I specifically requested the style of Vincent Van Gogh so y'all can't @ me for "violating copyright". The dude is looooong dead.

Footnotes:

1. Hi! I'm not like those other footnotes. 😤 Instead of annoyingly teleporting you down the page, I pop over in a bubble that maintains your reading flow! Anyway, check the next footnote for this paragraph's citation.

2. System 1 thinking is fast & automatic (e.g. riding a bike). System 2 thinking is slow & deliberate (e.g. doing crosswords). This idea was popularized in Thinking, Fast and Slow (2011) by Daniel Kahneman (https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow), which summarized his research with Amos Tversky. And by "summarized" I mean the book's ~500 pages long.

3. In 1997, IBM's Deep Blue (https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)) beat Garry Kasparov, the then-world chess champion. Yet, over a decade later in 2013, the best machine-vision AI was only 57.5% accurate at classifying images. It was only in 2021, three years ago, that AI hit 95%+ accuracy. (Source: PapersWithCode: https://paperswithcode.com/sota/image-classification-on-cifar-100)

4. "Value alignment problem" was first coined by Stuart Russell (co-author of the most-used AI textbook) in Russell, 2014 for Edge: https://www.edge.org/conversation/the-myth-of-ai#26015
5. A sentiment I see a lot: "Aligning AI to human values would be bad actually, because current human values are bad." To be honest, [glances at a history textbook] I 80% agree. It's not enough to make an AI act human, it's got to act humane.

6. Maybe 50 years from now, in the genetically-modified cyborg future, calling compassion "humane" might sound quaintly species-ist.

7. The fancy jargon for these problems is, respectively: a) "Specification gaming", b) "Instrumental convergence". These will be explained in Part 2!

8. The fancy jargon for these problems is, respectively: a) "AI Bias", b) "Interpretability", c) "Out-of-Distribution Errors" or "Robustness failure", d) "Inner misalignment" or "Goal misgeneralization" or "Objective robustness failure". Again, all will be explained in Part 2!

9. Quote Investigator (2018) could find no hard evidence on the true creator of this quote: https://quoteinvestigator.com/2018/11/18/know-trouble/

10. The UK introduced the world's first state-backed AI Safety Institute in Nov 2023 (https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute). The US followed suit with an AI Safety Institute in Feb 2024 (https://www.commerce.gov/news/press-releases/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated). I just noticed both articles claim to be the "first". Okay.

11. Kleinman & Vallance, "AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google." BBC News, 2 May 2023. https://www.bbc.com/news/world-us-canada-65452940

12. Bengio's testimony to the U.S. Senate on AI Risk: Bengio, 2023. https://yoshuabengio.org/2023/07/25/my-testimony-in-front-of-the-us-senate/

13. No seriously, all of the following use deep neural networks: ChatGPT, DALL-E, AlphaGo, Siri/Alexa/Google Assistant, Tesla's Autopilot.
14. Russell & Norvig's textbook is Artificial Intelligence: A Modern Approach (https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach). See Russell's statement on AI Risk in his 2014 article where he coins the phrase "alignment problem": Russell, 2014 for Edge magazine (https://www.edge.org/conversation/the-myth-of-ai#26015). I'm not aware of a public statement from Norvig, but he did co-sign the one-sentence Statement on AI Risk (https://www.safe.ai/work/statement-on-ai-risk): "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

15. When he worked at OpenAI, Christiano co-pioneered a technique called Reinforcement Learning from Human Feedback (RLHF; Christiano et al., 2017: https://arxiv.org/abs/1706.03741), which turned regular GPT (very good autocomplete) into ChatGPT (something actually usable by the public). He had positive-but-mixed feelings about this (https://www.alignmentforum.org/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research), because RLHF increased AI's safety, but also its capabilities. In 2021, Christiano quit OpenAI to create the Alignment Research Center (https://ai-alignment.com/announcing-the-alignment-research-center-a9b07f77431b), a non-profit to entirely focus on AI Safety.

16. Vallance (2023) for BBC News (https://web.archive.org/web/20230727105641/https://www.bbc.com/news/technology-65886125): "[LeCun] has said it won't take over the world or permanently destroy jobs. [...] 'if you realize it's not safe you just don't build it.' [...] 'Will AI take over the world? No, this is a projection of human nature on machines,' he said."

17. Melanie Mitchell & Yann LeCun took the "skeptic" side of a 2023 public debate on "Is AI an Existential Threat?" (https://thehub.ca/2023-07-04/is-ai-an-existential-threat-yann-lecun-max-tegmark-melanie-mitchell-and-yoshua-bengio-make-their-case/). The "concerned" side was taken up by Yoshua Bengio and physicist-philosopher Max Tegmark.

18. A Japanese cult that attacked people with chemical & biological weapons. Most infamously, in 1995, they released nerve gas on the Tokyo Metro, injuring 1,050 people & killing 14. (Wikipedia: https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack)
19. Layperson-friendly summary of a recent survey of 2,778 AI researchers: Kelsey Piper (2024) for Vox (https://www.vox.com/future-perfect/2024/1/10/24032987/ai-impacts-survey-artificial-intelligence-chatgpt-openai-existential-risk-superintelligence). See the original report: Grace et al., 2024 (https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf). Keep in mind, as the paper itself notes, this big caveat: "Forecasting is difficult in general, and subject-matter experts have been observed to perform poorly. Our participants' expertise is in AI, and they do not, to our knowledge, have any unusual skill at forecasting in general."

20. Layperson explanation of AlphaFold: Heaven, 2020 for MIT Technology Review (https://web.archive.org/web/20231204110638/https://www.technologyreview.com/2020/11/30/1012712/deepmind-protein-folding-ai-solved-biology-science-drugs-disease/). Or, its Wikipedia article: https://en.wikipedia.org/wiki/AlphaFold

21. As of writing, commercial rates for DNA synthesis cost ~$0.10 USD per "base pair" of DNA. For context, poliovirus DNA is ~7,700 base pairs long, meaning printing polio would only cost ~$770.

22. Jennifer Couzin-Frankel (2002) for Science: https://www.science.org/content/article/poliovirus-baked-scratch

23. Stuxnet (https://en.wikipedia.org/wiki/Stuxnet) was a computer virus designed by the US and Israel, which targeted & damaged Iran's nuclear enrichment facilities. It's estimated Stuxnet broke ~20% of Iran's centrifuges!

24. In 2017, the WannaCry ransomware attack (https://en.wikipedia.org/wiki/WannaCry_ransomware_attack) hit ~300,000 computers around the world, including UK hospitals. In Oct 2020, during a Covid-19 spike, ransomware attacks hit dozens of US hospitals. (Newman, 2020 for Wired: https://www.wired.com/story/ransomware-hospitals-ryuk-trickbot/)

25. In Sep 2020, a woman was turned away from a hospital, due to it being under attack by a ransomware virus. The woman died. Cimpanu (2020) for ZDNet: https://www.zdnet.com/article/first-death-reported-following-a-ransomware-attack-on-a-german-hospital/ (However, there were "insufficient grounds" to legally charge the hackers with directly causing her death. Ralston, 2020 for Wired: https://www.wired.co.uk/article/ransomware-hospital-death-germany)
26. In Jan 2021, a Bay Area water treatment plant was hacked, and had its treatment programs deleted. (Collier, 2021 for NBC News: https://www.nbcnews.com/tech/security/hacker-tried-poison-calif-water-supply-was-easy-entering-password-rcna1206) In Feb 2021, a Florida town's water treatment plant was hacked to add dangerous amounts of lye to the water supply. (Bajak, 2021 for AP News: https://apnews.com/article/hacker-tried-poison-water-florida-ab175add0454bcb914c0eb3fb9588466)

27. Meaker (2023) for Wired: https://web.archive.org/web/20231102183904/https://www.wired.com/story/slovakias-election-deepfakes-show-ai-is-a-danger-to-democracy/

28. Benj Edwards, "Deepfake scammer walks off with $25 million in first-of-its-kind AI heist", Ars Technica, 2024 Feb 5. https://arstechnica.com/information-technology/2024/02/deepfake-scammer-walks-off-with-25-million-in-first-of-its-kind-ai-heist/

29. "It was completely her voice. It was her inflection. It was the way [my daughter] would have cried." [...] "Now there are ways in which you can [deepfake voices] with just three seconds of your voice." (Campbell, 2023 for local news outlet Arizona's Family: https://www.azfamily.com/2023/04/10/ive-got-your-daughter-scottsdale-mom-warns-close-encounter-with-ai-voice-cloning-scam/ CONTENT NOTE: threats of sexual assault.)

30. "[T]he dubious argument that 'doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more' [... is] like arguing that nuclear engineers who analyze the possibility of meltdowns in nuclear power stations are 'failing to consider the potential benefits' of cheap electricity, and that because nuclear power stations might one day generate really cheap electricity, we should neither mention, nor work on preventing, the possibility of a meltdown." Source: Dafoe & Russell (2016) for MIT Technology Review: https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/

31. Quote Investigator (2021): https://quoteinvestigator.com/2021/05/27/parachute/

32. Liu & Faes et al., 2019 (https://www.thelancet.com/journals/landig/article/PIIS2589-7500%2819%2930123-2/fulltext): "Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals." AI vs human expert "true-positive" rate: 87.0% vs 86.4%. AI vs human expert "true-negative" rate: 92.5% vs 90.5%.
33. One of my all-time favorite quotes: "The world is awful. The world is much better. The world can be much better. All three statements are true at the same time." (https://ourworldindata.org/much-better-awful-can-be-better)

34. Quote from Otto von Bismarck, the first German chancellor: "Die Politik ist die Lehre vom Möglichen." ("Politics is the art of the possible.")

35. Estimate derived via "numerical posterior extraction". In other words, I pulled a number out my--

36. Quote source: nobody knows lol. (https://en.wikiquote.org/wiki/Trees#Planting)
"author": "",
"favicon": "https://aisafety.dance/favicon.png",
"source": "aisafety.dance",
"published": "",
"ttr": 1105,
"type": "website"
}