How many world killers?

How many people wish to destroy all human life? I’d guess the answer is at least six digits, maybe seven. In other words, hundreds of thousands, if not millions.

(If 0.1% of humans wish to kill humanity, that’s 8 million people. If it’s 0.01%, then 800,000 people.)
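The back-of-envelope arithmetic above is easy to verify. A minimal sketch, assuming a world population of roughly 8 billion:

```python
# Back-of-envelope check: how many potential "world killers"?
# Assumes a world population of roughly 8 billion (an approximation).
world_population = 8_000_000_000

for share in (0.001, 0.0001):  # 0.1% and 0.01% of humanity
    print(f"{share:.2%} of humanity = {int(world_population * share):,}")
```

Run as written, this confirms the figures in the post: 0.1% works out to 8 million people, and 0.01% to 800,000.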

In the past few years, I can recall several stories of airline pilots committing suicide and taking an entire commercial airliner with them. Depressed people seem to occasionally have an urge to spread their own suffering onto the rest of humanity.

Are airline pilots typical human beings? No, they are screened in an attempt to filter out people with mental problems. When I sit on the subway and scan people’s faces, the average person seems less mentally stable than the average pilot I see as I exit a plane.

Obviously this is all guesswork, and I don’t think it matters very much whether the number of potential world killers is 800,000 or 8 million. It only takes one.

The real question is whether ordinary people will ever gain the power to destroy the world.

I must confess that I don’t understand the alignment debate. I have no opinion on whether AIs will be capable of pursuing their own goals, which might be unaligned with the best interest of society. My fear is not unaligned AIs, it’s AIs that are aligned with depressed people.

Perhaps there’s no reason for me to have this concern. But if I’m wrong to worry, it’s not because no one would want to destroy the world; it’s because technology will never give individuals the power to destroy the world, and governments have no interest in destroying it.

In any case, I don’t understand why I keep seeing one article after another on what AIs might or might not decide to do, and very little on what they would be capable of doing if used by one of those hundreds of thousands of people who wish to destroy all human life.

If you have trouble imagining what I am talking about, consider a scientist who becomes convinced that humans are destroying the animal kingdom, and then becomes severely depressed. The Unabomber had very weak technology at his disposal. But in the future? Engineer a highly contagious and deadly virus with (like HIV) a long incubation period. Perhaps it’s already happened in a virus research lab.

Lots of people hope that future AIs will be aligned with humans. I fear that future AIs will be aligned with humans. It’s not AI that I distrust, it’s humans.

PS. When you pass a certain age, you become aware that there are many questions that you’ll never see answered. At age 67, I won’t live long enough to see how the world addresses global warming. I probably won’t live long enough to see if they ever complete the high speed rail project in California. I probably won’t live long enough to see which worldwide trend follows the current wave of nationalist authoritarianism. I won’t live to see humans go to Mars. I won’t live to see if fusion energy pans out. I won’t live long enough to see if we can solve the business cycle with NGDPLT.

If you think of long sports careers like LeBron James, the stars entering the NBA today will be the last generation I’ll follow to the end. I’ll never again see a Packer as talented as Rodgers or a Buck as talented as Giannis.

When I was younger, there was a sense of time being limitless. I felt that eventually I would see answers to these sorts of questions. Now I realize that I’ll never find out the endgame for AI. I don’t even have strong views as to what’s likely to happen, other than that the future will be far different and far stranger than we can imagine.


21 Responses to “How many world killers?”

  1. Gravatar of Christian List Christian List
    23. March 2023 at 11:24

    I don’t see the danger so far.

The “AI” programs so far don’t add much real quality. They seem to be pretty good at automating fairly mundane tasks like reassembling facts and writing long texts, but so far not much real quality comes out of it; quite the contrary.

AI output depends massively on the quality of human input. In other words, a human who can direct the AI precisely enough to reach his goals does not really need AI to reach those goals.

I am more concerned that there is massive Orwellian over-regulation. A few days ago a journalist posted adorable AI-generated pictures of Trump being arrested. The lovely results went viral, and the very first thing the AI administrators did was to completely block simple word entries such as “arrested”!

  2. Gravatar of Sara Sara
    23. March 2023 at 13:14

    “The real question is whether ordinary people will ever gain the power to destroy the world.”

    This is the type of arrogance that historically has led to war and death, which shows how ignorant Sumner is.

    Notice how he uses the term “ordinary people”. He is reminding you that he is “superior” and that his decisions, like bombing the hell out of Donbas for eight years are all necessary for those “ordinary” losers, because he’s protecting you from all of those “ordinary” people who want to kill you.

    Ordinary in this context means different. Anything different must be assimilated. It must act just like me. All of those horrible Indians, and Russians, and Hungarians and Chinese, they just need to be BOMBED INTO SUBMISSION.

    ONE WORLD NATO! Chant with me. ONE WORLD NATO!

    Remember the elites are your friend. Give them more power. Give them more money. Let them control you. Submit.

  3. Gravatar of bill bill
    23. March 2023 at 14:40

    You may be positively surprised on climate change. The continued improvements in solar, wind, and batteries over the next 10 years may at least make the shape of the resolution of the matter pretty clear.

  4. Gravatar of Brent Buckner Brent Buckner
    23. March 2023 at 15:32

    @Christian List – you wrote: “I don’t see the danger so far.”
    I see danger in the trajectory.

    Consider that we’ve already seen one doomsday cult attempt to deploy bioweapons (Aum Shinrikyo – I know that there are questions as to their actual levels of funding and competence).

    To that, add the notion of increasingly widespread access to AI technologies that increasingly can be applied to bioweapon design. Already AlphaFold has far outstripped global human capability in protein folding analysis (and prion diseases are caused by nothing more complicated than proteins).

  5. Gravatar of Physecon Physecon
    23. March 2023 at 18:50

    “I probably won’t live along enough to see if they ever complete the high speed rail project in California.”

    Complete? No one alive now will see it complete.

  6. Gravatar of Jack Jack
    23. March 2023 at 21:49

This is actually something talked about by AI alignment people: the idea is to create an aligned AGI that can then stop any other AGIs from coming into existence by means of a so-called “pivotal act”. The most common example of a pivotal act would be destroying all GPU clusters capable of creating another AGI, but people presume better plans could be made.

  7. Gravatar of mbka mbka
    24. March 2023 at 00:35

    Scott,

    very good points, and if I may add, rogue biology at this point scares me much more than rogue computing. Or as you said, either one of them controlled by rogue humans.

There is another angle too, which is evolution. Existing life is hardy, and almost none of it has any “IQ” in the conventional sense. But it is nearly inextinguishable, honed by billions of years of experience. Witness Covid, a bio agent that doesn’t even have a cell. But it is capable, and evolves. So fearing AI because it is “intelligent” is, to me, barking up the wrong tree. It is not intelligence we need to fear, but replication, adaptation, self-repair, self-support, and self-interest. And of course, anything that directly interferes with our biology. Besides, any AI needs to survive as well, and humans so far are its life support system. If it has any sense at all it will use us, not destroy us.

  8. Gravatar of postkey postkey
    24. March 2023 at 04:07

    “It is this belief in a new digital revolution which gave rise to the much-derided article by Danish politician, Ida Auken – originally titled “Welcome to 2030: I own nothing, I have no privacy, and life has never been better.”  More popularly known as “you’ll own nothing and you’ll be happy.”  It is a world of digital currencies and digital IDs, vaccine passports and 15-minute cities, electrification and driverless cars.  All of it based around the “energy too cheap to meter” from wind turbines and solar panels, and all of it operated by autonomous artificial intelligence within the “singularity” of the “internet of things.”
    It is a mirage, of course… one only visible to so-called “virtuals” – people whose lives and careers are now so detached from the material world that, were there not so many of them, could otherwise be diagnosed as certifiably insane.  The real world, meanwhile, looks more akin to the second global collapse – the first being the collapse of the integrated economies of the Bronze Age Eastern Mediterranean empires sometime around 1186 BCE.  The majority of ordinary people have seen their living standards decline over the past two decades – a process compounded and accelerated by two years of lockdowns followed by a year of self-destructive sanctions on key resources.”?
    https://consciousnessofsheep.co.uk/2023/03/01/paradise-postponed

  9. Gravatar of postkey postkey
    24. March 2023 at 04:08

    ‘Most’ ‘economic thinking’ is ‘short run’ and ‘redundant’? ‘It’ ignores the ‘supply side’?
    ‘Growth’ {and ‘civilisation’} depends upon ‘cheap’ F.F. – those so called ‘halcyon days’ are ‘over’. ?
    “The crisis now unfolding, however, is entirely different to the 1970s in one crucial respect… The 1970s crisis was largely artificial. When all is said and done, the oil shock was nothing more than the emerging OPEC cartel asserting its newfound leverage following the peak of continental US oil production. There was no shortage of oil any more than the three-day-week had been caused by coal shortages. What they did, perhaps, give us a glimpse of was what might happen in the event that our economies depleted our fossil fuel reserves before we had found a more versatile and energy-dense alternative. . . . That system has been on the life-support of quantitative easing and near zero interest rates ever since. Indeed, so perilous a state has the system been in since 2008, it was essential that the people who claim to be our leaders avoid doing anything so foolish as to lockdown the economy or launch an undeclared economic war on one of the world’s biggest commodity exporters . . .
    And this is why the crisis we are beginning to experience will make the 1970s look like a golden age of peace and tranquility. . . . The sad reality though, is that our leaders – at least within the western empire – have bought into a vision of the future which cannot work without some new and yet-to-be-discovered high-density energy source (which rules out all of the so-called green technologies whose main purpose is to concentrate relatively weak and diffuse energy sources). . . . Even as we struggle to reimagine the 1970s in an attempt to understand the current situation, the only people on Earth today who can even begin to imagine the economic and social horrors that await western populations are the survivors of the 1980s famine in Ethiopia, the hyperinflation in 1990s Zimbabwe, or, ironically, the Russians who survived the collapse of the Soviet Union.”

    https://consciousnessofsheep.co.uk/2022/07/01/bigger-than-you-can-imagine/

  10. Gravatar of postkey postkey
    24. March 2023 at 04:11

    “2019 RAND Paper . . .
    As far back as 2019, US Army-commissioned studies examined different means to provoke and antagonize Russia who they acknowledged sought to avoid conflict. “
    https://www.youtube.com/watch?v=uqVPM0KSUpo&t=5s
    https://www.rand.org/pubs/research_reports/RR3063.html

  11. Gravatar of postkey postkey
    24. March 2023 at 04:12

    ‘He asked Blair his views on
    Putin. “The problem with managing Putin and Russia,” said
    Blair, “is that we have to deal with them when it comes to
    Iran.” The rest of the strategy, he said, should be to make
    Russia a “little desperate” with our activities in areas
    bordering on what Russia considers its sphere of interest and
    along its actual borders. Russia had to be shown firmness
    and sown with seeds of confusion.’
    https://wikileaks.jcvignoli.com/cable_08LONDON890

  12. Gravatar of ssumner ssumner
    24. March 2023 at 10:37

    Bill, Maybe, but we still face explosive growth in the developing world, including countries with huge populations. But yes, the last few years do seem to have “bent the curve”.

    Jack, Good luck with that!

    mbka, You said:

    “Existing life is hardy”

    Yes, life as a whole. But it’s worth noting that Neanderthals died out soon after modern humans moved into Europe.

  13. Gravatar of Christian List Christian List
    24. March 2023 at 13:17

    @Brent Buckner
    I accidentally deleted part of my text myself before it was published:

    I firmly expect that certain rules will be implemented in AI that will be very similar to Asimov’s rules.

    In fact, we already see this today on a massive scale. And far beyond Asimov’s rules.

The Internet was regulated the same way; I don’t see how AI changes anything.

    The problem will be overregulation, sometimes misregulation, and only in really rare cases not enough regulation.

    The massive (censorship) apparatus has been there for a long time and on many levels. So-called “AI” will not change that.

The same hype and the same scaremongering went on around 3D printers and the Internet. And what happened? Basically nothing.

    Where AI will be used, and is already being used on a massive scale today, is in the surveillance apparatus. This scenario is real.

  14. Gravatar of Edward Edward
    24. March 2023 at 13:45

    You distrust humans because you are anti-social.

Your entire life you’ve been rejected, again and again, and so now you passive-aggressively write out your hatred for ordinary people.

You have no idea what you’re talking about. Ordinary people just want to trade goods and services, and watch football, without some weirdo screaming anti-Russia epithets, spending billions overthrowing regimes, sending their tax money to pay off third world politicians under the brand of humanitarian aid, and genuflecting allegiance to a hypothetical one-world-NATO and a neo-marxist group calling themselves BLM.

It should be obvious by now that people distrust you! And by you, I mean people who call themselves, oddly and arrogantly, “elites”. It’s you and your hatred for humanity that will cause a third world war, not Joe Six-Pack drinking a beer and watching football.

  15. Gravatar of Michael Sandifer Michael Sandifer
    25. March 2023 at 04:38

    Scott,

I think we should be careful not to over-hype recent AI product releases, as their limitations are more important than their capabilities. However, I wonder, do you now see it as more likely that the US will have an AI-fueled productivity boom in the near future, if we’re not in one already? Also, if so, do you think that productivity growth will have begun a permanent upward trend? After all, we’re talking about automating automation here. That’s obviously what makes AI differ from previous advances in automation.

  16. Gravatar of John S John S
    25. March 2023 at 08:57

    “I’ll never again see a Packer as talented as Rodgers or a Buck as talented as Giannis.”

    Just wondering out loud — Does it really contribute that much more enjoyment to have the next great player be on your hometown team (as opposed to simply enjoying their career wherever they play)? I have no connections to Denver or LA, but that in no way diminishes my pleasure in watching the greatest passer ever (Jokic) and the most well-rounded scorer since Jordan (Kawhi) ply their trades.

    You’ll get to see Victor Wembanyama play through his rookie contract (and probably through his extension, when he’ll be able to escape hell/Detroit, or maybe Houston will be a contender by then). Heck, you’re living through the era of Shohei Ohtani, who I view at the moment as the most talented baseball player of all time (Ruth obv wins in longevity, but in terms of peak, let’s see the Babe throw 102 mph paired with the nastiest splitter in the game).

    We just had Trout vs. Ohtani to end the WBC. Messi won a World Cup. Mahomes is just getting warmed up. Both Nadal and Djokovic passed Federer. Connor McDavid might challenge Gretzky for Harts. I really can’t imagine what more a sports fan could ask for than this timeline. In terms of sports, you’ve won, the rest is gravy.

    “I won’t live long enough to see if we can solve the business cycle with NGDPLT.”

    I think you will. NGDPLT seems to have made pretty decent inroads in the rationalist community, which I think will be the spear tip of intellectual influence over the next few decades. Discourse progresses faster than it ever has.

  17. Gravatar of Anonymous Anonymous
    25. March 2023 at 13:55

    I agree. Alignment is an issue, but the bigger issue is simply power. If sufficient power is sufficiently widely available, it will be extremely dangerous. We can’t expect an AI to be aligned with human ethics because humans are not aligned with human ethics! People disagree very widely about ethics.

    I think you might be wrong about not living long enough to see the “endgame” for AI. Things are moving very rapidly.

  18. Gravatar of ssumner ssumner
    25. March 2023 at 17:46

    Michael, You asked:

    “do you now see it as more likely that the US will have an AI-fueled productivity boom in the near future,”

    No.

    John, You asked:

    “Does it really contribute that much more enjoyment to have the next great player be on your hometown team”

    Yes.

  19. Gravatar of John S John S
    26. March 2023 at 07:37

Hmm, it’s interesting that in sports fandom you took the opposite approach to what you endorsed in your roulette post on EconLog — you concentrated your “rooting chips” on Wisconsin teams instead of spreading them out among the best/most interesting players.

    Those bets paid off spectacularly over the last decade, but they benefited from two fairly improbable events (who would’ve guessed the Packers would draft a guy better than Favre, and if the Hawks had lost a few more games, Danny Ferry would’ve taken Giannis). Better to be lucky than good?

    “My fear is not unaligned AIs, it’s AIs that are aligned with depressed people.”

    My main fear is not depressed people, it’s people with good intentions who start playing with things they can’t control. There are quite a few synthetic biology companies that are using AI to develop all sorts of biological novelties (gene drives, synthetic proteins). Considering the track record of IT security and crypto, I don’t exactly feel confident in the ability of Silicon Valley hotshots to prevent/fix all potential biohazards.

    Re: not finding out the answers — I’m not even sure that in terms of lifetime expected value it’s better to know than to not know. Imagine you could live another 50 years, and you discover that there are no viable approaches to slowing global warming. Is the risk of knowing “we’re screwed” preferable to the current EV of “maybe we’ll figure it out”? To me it’s not an easy question.

    I try to focus on enjoying what I have rather than fretting about what I might not get. At least you got the answer to one of the biggest historical questions of modern times: Will the communist regimes of E. Europe & the Soviet Union ever fall? I’ll bet that many, if not most, people in 1975 thought they’d never live long enough to find out.

Even if you lived another million years, you still would never learn the answer to the biggest question (“Why is there something rather than nothing?”) But does knowing the answer even matter? Even if cosmologists told me the answer today, I’d still be left with the same issues of how best to live my life and enjoy what time I have left.

    “When I was younger, there was a sense of time being limitless. I felt that eventually I would see answers to these sorts of questions.”

    In my 20s, I actually deeply worried about not being able to learn the answers. But now — as trite as it sounds — I’ve come to feel that the most important thing to do is just to try my best each day and let the chips fall where they will. I found a pop-Stoicism book to be very helpful: “A Guide to the Good Life: The Ancient Art of Stoic Joy” by William Irvine. Perhaps you might enjoy it.

  20. Gravatar of ssumner ssumner
    26. March 2023 at 09:49

    John, You said:

    “My main fear is not depressed people, it’s people with good intentions who start playing with things they can’t control.”

    Yes, that’s another risk. But I have more fear for a man trying to shoot me with a gun, than a man who recklessly handles a gun in my presence.

“Even if cosmologists told me the answer today, I’d still be left with the same issues of how best to live my life and enjoy what time I have left.”

    You can’t know that unless you know the answer. If the answer were “The Christian God created the universe for us to be good”, that might affect the way you choose to live. (Or you might reply “Why is there a god, and not no god?”)

Thanks for the stoicism tip. As I get older, my interest has shifted from self-help philosophies to interesting philosophies. It’s too late for me to change!

  21. Gravatar of Don Geddis Don Geddis
    9. April 2023 at 13:54

    “My fear is not unaligned AIs, it’s AIs that are aligned with depressed people.”

    That’s a fine fear, and you’re not wrong to have it. It’s just a much, much, much easier problem to solve than the AI alignment problem.

    “I must confess that I don’t understand the alignment debate. I have no opinion on whether AIs will be capable of pursuing their own goals”

    You misunderstand the alignment problem. The problem is not AIs developing “their own goals”. Instead, the problem is, in essence, that we are not capable of fully specifying any human values or actual goals. The problem is that AIs will correctly optimize whatever goal we give them … but (for superintelligent future AIs) we won’t understand until too late that the goal we gave them does not actually result in a future that we value.

    This is the famous “paperclip optimizer”. You assign it to make paperclips … and it never even occurred to you that it might develop the capability and plan to put misinformation on Twitter and start a war in Africa, and thus acquire and control raw metals for cheaper than otherwise, and thus construct more paperclips. It’s doing exactly what you originally asked. But a superintelligent AI is capable of finding solutions that you can’t anticipate.

    And the problem is that the vast majority of possible futures for any optimizing superintelligent AI wind up with human extinction. It is very very clear that it will be much easier to build “just any” superintelligent AI, than it will be to build a machine much smarter than us, but which somehow remains aligned to human values. Nobody knows how to do that second thing. And thus a race to “be first” to build superintelligent AIs is almost certainly a race to human extinction.
