John Horgan interviews Eliezer Yudkowsky

When the University of Chicago polls 50 top economists on subjects like fiscal stimulus and the minimum wage, I am often appalled by the results.  In contrast, I wish Eliezer Yudkowsky were made King of the World (assuming there were a King of the World, which I’m opposed to).  This is from Scientific American:

Horgan: If you were King of the World, what would top your “To Do” list?

Yudkowsky: I once observed, “The libertarian test is whether, imagining that you’ve gained power, your first thought is of the laws you would pass, or the laws you would repeal.”  I’m not an absolute libertarian, since not everything I want would be about repealing laws and softening constraints.  But when I think of a case like this, I imagine trying to get the world to a condition where some unemployed person can offer to drive you to work for 20 minutes, be paid five dollars, and then nothing else bad happens to them.  They don’t have their unemployment insurance phased out, have to register for a business license, lose their Medicare, be audited, have their lawyer certify compliance with OSHA rules, or whatever.  They just have an added $5.

I’d try to get to the point where employing somebody was once again as easy as it was in 1900.  I think it can make sense nowadays to have some safety nets, but I’d try to construct every safety net such that it didn’t disincent or add paperwork to that simple event where a person becomes part of the economy again.

I’d try to do all the things smart economists have been yelling about for a while but that almost no country ever does.  Replace investment taxes and income taxes with consumption taxes and land value tax.  Replace minimum wages with negative wage taxes.  Institute NGDP level targeting regimes at central banks and let the too-big-to-fails go hang.  Require loser-pays in patent law and put copyright back to 28 years.  Eliminate obstacles to housing construction.  Copy and paste from Singapore’s healthcare setup.  Copy and paste from Estonia’s e-government setup.  Try to replace committees and elaborate process regulations with specific, individual decision-makers whose decisions would be publicly documented and accountable.  Run controlled trials of different government setups and actually pay attention to the results.  I could go on for literally hours.

And I also liked this, which makes the current political circus seem pretty unimportant by comparison:

There is a conceivable world where there is no intelligence explosion and no superintelligence.  Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime.  Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice.  In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.

There’s also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made).  And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep.  (Vivid detail warning!  I don’t actually know what the final hours will be like and whether nanomachines will be involved.  But if we’re happy to visualize what it’s like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it’s like for our civilization to fail AI alignment, that can also lead us astray.)

I think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of Givewell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter.  The question is just which of these two worlds is more probable as the one we should avoid.  And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful.  The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.

I think it’s the first world that’s improbable and the second one that’s probable.  I’m aware that in trying to convince people of that, I’m swimming uphill against a sense of eternal normality – the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever.  But given that I do think the first conceivable world is just a fond dream, it should be clear why I don’t think we should ignore a problem we’ll predictably have to panic about later.  The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.

Vote Eliezer Yudkowsky, King of the World

PS. Long ago I used to read the paper version of Scientific American, and their economics articles were consistently awful. Have things improved?


27 Responses to “John Horgan interviews Eliezer Yudkowsky”

  1. Christian List
    13. March 2016 at 14:37

    I like the first part.

    Give this list to The Donald after he’s elected. He’s constantly looking for policy content anyway. Hillary and Bernie for sure aren’t. They know exactly what they are going to do, and it’s for sure none of the things Yudkowsky mentioned. So if you go for Trump, at least you get the benefit of the doubt.

    The same thing could work even better with Ted Cruz. All the others? Not so much.

  2. E. Harding
    13. March 2016 at 15:06

    “In contrast, I wish Eliezer Yudkowsky were made King of the World”

    -You know, I used to think this about you, before you started on your crusade against the Mighty Donald. If you recognize his true significance, I might support you for that position again.

    I do not think Yudkowsky would be competent enough for such a role. He’s only a really smart guy, not an economist.

    “I’d try to do all the things smart economists have been yelling about for a while but that almost no country ever does.”

    -Really good idea.

    The second paragraph, though, is almost a re-statement of Pascal’s Wager, but with some probability theory attached.

  3. E. Harding
    13. March 2016 at 15:08

    @Christian List

    Kasich and Rubio might do some of these things (for example, Rubio takes a lot of expert advice and tries to be well-informed, but this often leads him down roads of pure idiocy).

  4. Major.Freedom
    13. March 2016 at 15:35

    >I wish Eliezer Yudkowsky were made King of the World

    ?

    >Institute NGDP level targeting regimes at central banks

    Oh that’s right.

    What is the point of citing all the other things Yudkowsky said? It’s not like those things are actually important.

    Blogpost could have been a lot shorter.

  5. Benoit Essiambre
    13. March 2016 at 15:51

    All hail Yudkowsky

  6. E. Harding
    13. March 2016 at 16:13

    Freedom, if Trump, the very emblem of strength, endorsed NGDP targeting, Sumner might even hate him more.

  7. Benjamin Cole
    13. March 2016 at 16:17

    Yudkowsky thinks economically. The underground cash economy comes close to his ideal of unregulated, untaxed, unobstructed transactions (the $5 cab ride). Cash in circulation in Western economies is exploding: more than $4k per resident in the US. Maybe the real economy is moving towards the Yudkowsky ideal… if Yudkowsky would embrace the decriminalization of push-cart vending he would be even more impressive….

  8. Major.Freedom
    13. March 2016 at 18:00

    E. Harding:

    Doubtful, because Presidential candidates like Trump are not potential philosopher kings who could assist in providing credibility to the counterfeiting operation. We’d likely hear something like “Trump is Hitler 2.0, but at least he has two brain cells to understand NGDPLT. Color me surprised.”

  9. Ray Lopez
    13. March 2016 at 18:12

    Sumner: “Vote Eliezer Yudkowsky, King of the World” – he’s good, his ‘loser-pays’ policy is already the law in the UK, though he’s mistaken about AI taking over the world and making it into a dictatorship of the machines. In fact, AlphaGo’s win over a human Go champion was due to a specific algorithm invented in 2006 that’s a Monte Carlo heuristic incorporating neural networks for pattern recognition. I think I understand how it works, and it’s not what most people think it is; it’s not a ‘general purpose smart learning’ machine, but more limited to Go. However, with some tinkering perhaps you can use it for other things, like an even better Google driverless car.

    “PS. Long ago I used to read the paper version of Scientific American, and their economics articles were consistently awful. Have things improved?” – not sure. I recall reading Sci. Am. saying money illusion has been validated due to a small controlled study in a university that found money illusion present with college kids using small amounts of real money (i.e., ‘double the money supply and the kids feel twice as rich, even though prices also doubled’). I thought that conclusion was pretty ludicrous, don’t you Scott?

  10. Don Geddis
    13. March 2016 at 18:42

    Ray, Yudkowsky already knows that AlphaGo’s victories offer only minor evidence about the coming technological singularity. That’s not why he’s predicting it. You’re really horrible about trying to guess what other people are thinking. Please stop trying.

  11. E. Harding
    13. March 2016 at 19:19

    BTW, Yudkowsky’s FB post on Trump is tangential to my own on the Marginal Counterrevolution, especially in style. I find my post (Trump and Gravitas) more persuasive, though. I disagree with Yudkowsky on Muslims and Mexicans.

  12. Ray Lopez
    13. March 2016 at 19:59

    @Don Geddis – quit projecting your inadequacies on me. I read the original source article and Yudkowsky is a slippery character, and it turns out he’s agnostic about AI, so his two examples were hypothetical. (“The question is just which of these two worlds is more probable as the one we should avoid.” <– OT, that’s not what others in this field think; ignorance is not bliss). So you were wrong, based on the source. Perhaps you read this guy and know about his views of AlphaGo, but the source doesn’t even mention AlphaGo. Now, back to our regularly scheduled commentary on economics…

  13. Don Geddis
    13. March 2016 at 21:00

    Ray, you’re an idiot (as usual). When you don’t understand the conversation that adults are having, you describe it as “slippery”. That really means you have no idea what is going on.

    Yudkowsky is “agnostic” about AI? LOL. Could you be a bigger moron? The truth is that Yudkowsky is SO confident (and worried!) about AI, that he’s devoted his entire professional career to working on problems that won’t even exist until AFTER AI succeeds. He’s a cofounder of MIRI, which isn’t even working on AI … it’s working on solving problems that AI will bring with it, once it succeeds.

    Yes, I know his views on AlphaGo. And his views on AI. And his views on venture capital, and quantum physics, and writing fiction. You claim that “he’s mistaken about AI taking over the world”, but as usual, you haven’t got the slightest clue what you (or he) are talking about.

  14. ChrisA
    13. March 2016 at 21:11

    Generally great stuff, the one bit I would quibble with is the following “Try to replace committees and elaborate process regulations with specific, individual decision-makers whose decisions would be publicly documented and accountable.”

    The problem with individual decision makers is that they make decisions. If you like the decisions, that is great, but if not, you have a problem. It is the Lee Kuan Yew vs Pol Pot issue. Sure, you would like LKY making decisions, but you might end up with Pol Pot instead. So better to slow down decision making until it is totally obvious what the appropriate response is, even though in most cases this leads to a sub-optimal outcome. This is why the Churchill phrase “Democracy is the worst form of Government, except all the others” is so good (yes, I know Churchill was quoting, but no one seems to know who from).

  15. BC
    13. March 2016 at 23:26

    I loved this sentence: “The libertarian test is whether, imagining that you’ve gained power, your first thought is of the laws you would pass, or the laws you would repeal.”

  16. Maurizio
    14. March 2016 at 01:06

    Off topic: a question about “the market is smarter than you”. Lately, markets seem to interpret rising oil prices as bullish for stocks. How can markets be right on that? Isn’t it a negative supply shock? Or do they think prices are rising because demand is picking up?

  17. derivs
    14. March 2016 at 01:34

    Maurizio,
    Conventional wisdom is that a lack of oil revenue would set off defaults on billions in debt and ripple through the finance industry.

  18. TravisV
    14. March 2016 at 03:06

    I’m surprised there haven’t been more posts like this one by Noah Smith:

    http://noahpinionblog.blogspot.mx/2012/12/the-omnipotent-fed-idea.html

  19. ssumner
    14. March 2016 at 04:42

    Harding, You said:

    “Freedom, if Trump, the very emblem of strength, endorsed NGDP targeting, Sumner might even hate him more.”

    Would make no difference either way.

    Ben, You said:

    “if Yudkowsky would embrace the decriminalization of push-cart vending he would be even more impressive….”

    Doesn’t he do that? Read it again, especially the part about freedom to operate your own small business.

    Ray, You said:

    “I thought that conclusion was pretty ludicrous, don’t you Scott?”

    Yup, it sounds like they haven’t changed. Your comments on AI are beyond stupid.

    ChrisA, I did not read that as opposition to democracy.

    Maurizio, I’m not sure, perhaps it relates to oil company profits, or perhaps derivs is correct.

  20. Ray Lopez
    14. March 2016 at 07:36

    Sumner (answering my rhetorical question, which was designed to get a “NO” from him, but instead he answers “YES”!?): “Ray, You said: ‘I thought that conclusion was pretty ludicrous, don’t you Scott?’ Yup, it sounds like they [SCIENTIFIC AMERICAN ON ECONOMICS] haven’t changed.”

    Bizarre. I report that Sci.Am. supports money illusion by favorably citing a college experiment involving real money, and I say this is ludicrous, yet Sumner agrees with me that there’s no money illusion based on this experiment. So question for Sumner: if this experiment was bad, then tell us what evidence you feel is persuasive to show money illusion? Since money illusion cannot be easily proved ‘in the wild’, it has to be a controlled experiment like the college experiment was, no?

    As for AI, I doubt Sumner has my level of expertise (though I’m no expert, just more expert than him).

  21. Don Geddis
    14. March 2016 at 08:33

    Ray: “my rhetorical question, designed to get a “NO” from Sumner, but instead he answers “YES”!?”

    It’s like there’s a debate between thoughtful humans on climate change due to greenhouse gasses, and Ray is the pet turtle in the terrarium on the dresser, wondering whether “greenhouse” refers to the green leaf of lettuce he’s munching on for lunch.

  22. guest
    14. March 2016 at 08:35

    Maurizio, I think it’s even simpler than what other people are saying. The market is just signaling that the price changes in oil are driven by expectations of global demand, not supply. More demand for oil correlates with higher growth, which is bullish for stocks.

  23. Larry
    14. March 2016 at 09:46

    While we’re certainly not there yet, and AlphaGo is not the system that will destroy us, the only way that humans will survive the AI singularity is to incorporate that AI into themselves.

    I see no way that we can domesticate such a superintelligence. We can’t even domesticate the non-superintelligences all around us today and we’re not even getting (much) better at it.

  24. Floccina
    14. March 2016 at 12:39

    “There’s also a conceivable world where you work hard and fight malaria”

    Maybe we could do this:

    Engineering the Extinction of 40 Species of Mosquitoes

  25. Don Geddis
    14. March 2016 at 14:04

    @Larry: “the only way that humans will survive the AI singularity is to incorporate that AI into themselves.”

    A glorious dream, but probably infeasible. I’ve always loved this quote (from Tim Tyler, talking about man/machine synergism, on comp.ai.philosophy on 12/18/2008): “It’s like building a tower from bricks and chocolate cake: it would be fun if there were synergy between the two — but once bricks are available, there really is no good structural reason to include chocolate cake in the tower at all.”

  26. derivs
    14. March 2016 at 15:02

    “The market is just signaling that the price changes in oil are driven by expectations of global demand, not supply.”

    Guest,

    global demand in Q2 ’14 was 92 million barrels a day and the price was over $100…

    global demand today is almost 97 million barrels a day and the price is under $40….

    maybe you want to rethink that one…

  27. ssumner
    15. March 2016 at 09:38

    Ray, There is lots of evidence showing money illusion, like the bunching of wage changes at zero, and then the sharp dropoff right below zero.
