Melancholia
During my recent trip to the Bay Area, I met lots of people who are involved in the field of AI. My general impression is that this region has more smart people than anywhere else, at least per capita. And not just fairly smart, I’m talking about extremely high IQ individuals. I don’t claim to have met a representative cross section of AI people, however, so take the following with a grain of salt.
If you spend a fair bit of time surrounded by people in this sector, you begin to think that San Francisco is the only city that matters; everywhere else is just a backwater. There’s a sense that the world we live in today will soon come to an end, replaced by either a better world or human extinction. It’s the Bay Area’s world, we just live in it.
In other words, I don’t know if the world is going to end, but it seems as though this world is coming to an end.
Some people I spoke with worried that AIs would soon take all of the jobs—and wondered about the impact on the economy. As for existential risk, it often seemed as if the optimists were the pessimists, and vice versa. As if humanity’s best hope is that the AI enthusiasts are overestimating the potential for AGI.
One guy asked me if I was interested in cryonics. Not whether I was interested in it as a concept, but whether I was ready to sign a contract if he drew one up. He pointed out (half joking) that due to the rapid advance of AI, I wouldn’t have to spend much time being dead before I was revived.
I’m probably giving you the idea that the Bay Area tech people are a bunch of weirdos. Nothing could be further from the truth. In general, I found them to be smarter, more rational, and even nicer than the average human being. If everyone in the world were like these people, even communism might have worked.
There’s a weird disconnect between the AI world and the normal world. If the AI people are correct, then I don’t think the public has any idea what’s about to hit them. Consider a recent survey that discussed public attitudes toward AI. The public thought it might produce benefits in some areas, and then listed a few downsides:
There’s broad concern about the loss of the human element due to emerging AI technologies, especially in settings like the workplace and health care. Many Americans who would not want to apply for a job that uses AI in the hiring process cite a lack of the “human factor” in hiring as the reason why. In health and medicine, a majority of Americans think relying more on AI would hurt patients’ relationships with their providers.
The potential for negative impacts on jobs is another common concern with AI. Among those who say they’re more concerned than excited about AI, the risk of people losing their job is cited most often as the reason why. We also see this concern with some specific examples of AI. For example, 83% of Americans think driverless cars would lead to job loss for ride-share and delivery drivers.
Surveillance and data privacy are also concerns when discussing AI. Large majorities of Americans who are aware of AI think that as companies use AI, personal information will be used in unintended ways and ways people are not comfortable with.
Note that there is no question about existential risk, despite the fact that some of the smartest people on the planet think the “P(doom)” risk is uncomfortably high. I’m not saying they are right, but does the public even know about these concerns?
Leopold Aschenbrenner recently pointed out that even the federal government seems completely oblivious to the fact that AI research might have national security implications. Again, I don’t know how seriously we should take those concerns, but it shows the disconnect between the AI world and the normal world.
Overall, I felt there was both excitement about the progress in AI and a sort of melancholy about the fact that we might not know how to control the technology. I was often reminded of Lars von Trier’s underrated 2011 film entitled Melancholia, where a few experts understood that the end of the world was near (due to a rogue planet on a collision course with Earth), but the broader public was partying like there was a tomorrow. BTW, that film nicely captured the feeling of dread when you can see the end and cannot do anything about it.
Again, I’m not saying AI will be the end of the world. But spend enough time in the Bay Area and you begin to see the public’s fears as the optimistic case. All jobs are replaced by machines and we’re living on UBI? That’s great news! It means the world hasn’t been destroyed.
Of course this is easy for me to say, as I’m retired and will soon be dead. One younger guy told me that he was having trouble deciding what field to study, as he could not think of a single white collar job that wasn’t going to be replaced by AIs. Ironically, it’s the manual jobs that will be the last to be replaced, as computers are good at what we are bad at, and vice versa.
How seriously should we take the views of all these geniuses? I’m reminded of the observation that in poker, if you cannot spot the “mark” at your table, then you are the mark. I spent a few days among geniuses at the LessOnline meetup, and wasn’t able to spot the guy who was clueless about AI.
¯\_(ツ)_/¯.
All kidding aside, do you think the average person living near Oak Ridge or Alamogordo back in 1945 had any idea what the nearby eggheads were about to cook up?
Toward the end of his long essay, Aschenbrenner has this to say about the need to take AI more seriously:
But the scariest realization is that there is no crack team coming to handle this. As a kid you have this glorified view of the world, that when things get real there are the heroic scientists, the uber-competent military men, the calm leaders who are on it, who will save the day. It is not so. The world is incredibly small; when the facade comes off, it’s usually just a few folks behind the scenes who are the live players, who are desperately trying to keep things from falling apart.
Right now, there’s perhaps a few hundred people in the world who realize what’s about to hit us, who understand just how crazy things are about to get, who have situational awareness. I probably either personally know or am one degree of separation from everyone who could plausibly run The Project. The few folks behind the scenes who are desperately trying to keep things from falling apart are you and your buddies and their buddies. That’s it. That’s all there is.
He’s still pretty young, but getting disillusioned at a fast rate. It’s worth noting that this isn’t just about him becoming less naive. The quality of our political establishment really is declining at an alarming rate. In the 1950s, we had people like Dwight Eisenhower. Now we have leaders like Trump.
For me, the wake-up call occurred much later in life, back in 2008. I suddenly realized that monetary policy at our major central banks was not being run by “Princeton School” economists, even though a Princeton economist chaired the Fed. The stock market had a similar wake-up call, and promptly crashed in the fall of 2008.
PS. If you have 4 1/2 hours to spare, this is the most interesting podcast I’ve ever listened to. Aschenbrenner graduated from Columbia at the top of his class, at age 19.
PPS. Is Dürer’s Melencolia I the greatest print ever made?
8. June 2024 at 23:54
You’re so naive.
You talk to a few tech brats in Silicon Valley, and suddenly you think they’re all geniuses.
Silicon Valley futurists are like the Origin of Life scientists — mostly chemists — who say they’re so close to finding out how to create life; just give them a little more money, Scott. Just a little more. Like 50M more in grants, because they’re so close. Lee Cronin told us 15 years ago that he was two years from finding out the origin of life. 100M later, he’s no closer.
THEY’RE NOT CLOSE. They make stupid claims, so they can get money from suckers like you.
We cannot even create a simple sugar molecule, because even that is too complex. And you think, miraculously, they’ll be able to revive you in a few years or upload your mind into a machine. You think you’ll be immortal, lol.
Go to church, if you’re getting nervous. Because your chance of salvation, slim to none, is better than your chance of being uploaded into a hive mind.
It’s called sales skills. Something you don’t have, and apparently cannot even recognize.
“Even communism might have worked” is quite the statement. Again, it reveals your true intentions.
9. June 2024 at 02:04
My thoughts have become more nuanced as I’ve thought about these issues. I think Noah Smith makes a good point about comparative advantage possibly preserving some human labor, at least for a while.
More fundamentally though, I think that white collar workers who have something unique to offer can have their output and reach enhanced by AI and become even more successful than they are today. They will have to learn how to leverage AI. Those without anything special to offer will be increasingly subject to replacement. This is the hyper-realization of Tyler Cowen’s “average is over” thesis.
For example, I think Market Monetarism is the best approach to macroeconomics, but it is not an approach that the LLMs use. This leaves a role for more specialized AI systems to offer macroeconomic analyses, such as those of Market Monetarists.
Also, I think some high-level human subject matter expertise will continue to be required simply to determine which AI systems to use (which are best for purpose), and to help develop goals and strategies and policies to achieve them. Humans will also be required as interfaces between highly specialized AI systems and non-subject-matter-expert stakeholders.
Low-level analyst jobs will be gone soon, I think, so I don’t know how new data analysts/scientists, attorneys, etc. will gain their job-specific experience in the future.
I don’t think blue collar jobs are as safe as some may assume. Advances in robotics technology are also accelerating rapidly, with jaw-dropping progress having been made over the past year. Blue collar workers may hold out longer than many lower level white collar workers, but they should not feel safe either, particularly if they’re young.
I think there will remain some demand for humans in certain roles, such as massage therapists and waiters/waitresses/bartenders, just because some people want to pay for human contact. There may always be niche human roles for this purpose.
Also, I think AI art will take over the low end of the market, since college students will be able to generate any kind of art they want to affordably decorate their dorms. However, I think higher end art could continue to be dominated by human beings, since we tend to want personal stories and people to whom we can relate. We want relatable human expression. No matter how good AI music becomes, there will be demand for live human talent.
We are indeed living in interesting times. I feel very privileged to live in this period, despite some extraordinarily troubling trends.
9. June 2024 at 02:23
I should also point out that I’ve seen the rise of the “lump of labor” and “lump of data” fallacies in views of the mass replacement of human labor by AI. People seem to neglect the fact that a lower cost of producing new software or data would likely lead to more software development and data production. Coupled with comparative advantage, this could mean human labor remains relevant longer than many assume.
9. June 2024 at 03:47
I feel like I’m ahead of 99% of the population when it comes to AI, and yet completely at sea when it comes to having a serious command of the issues. Any number of other technologies could be in the same boat, but none of them appears to be moving as fast. Radical advances in biotechnology, for example, would still require several generations to fully play out. Even if fully autonomous vehicles were leaving the factory today, they would be stuck in a traffic jam of red tape for years to come. But if AI makes another significant leap by 2027, as Aschenbrenner believes, then the impact would be immediate and lasting. On the other hand, absolutely nothing I do will make any difference, as a man or citizen.
That paper by Aschenbrenner certainly provides food for thought, but you gave the link to the PDF, which is all but impossible to read on a small screen. The following URL might be useful: https://situational-awareness.ai
9. June 2024 at 03:52
I’m not so concerned with the direction of disruption as I am with the pace. And I’m going to ignore existential risks, as there’s no point in worrying about those, just as I don’t lose sleep over asteroids. I have maybe 10 years left in the workforce, preferably more, but I could live with 10. In any case, I’ll probably be retired before you’re dead.
One thing I find that AI boosters seem to overlook is the inertia associated with anything to do with government. I mean, according to things you’ve written, the US still has a fairly antiquated means of dealing with tax returns, right? Even though it’s 2024. Not to mention the mess that is FDA approvals or property planning approvals, both of which involve political accountability and hence fallible and venal people. I’m in Australia and I work for a government agency that is headed by politician-appointed human decision-makers, and will be for the foreseeable future. Even assuming the confidentiality issues associated with involving an AI in advising government decision-makers dealing with commercial matters could be overcome (a massive ‘if’), to replace my job, an AI would need to attend meetings with these human decision-makers, understand what they’re saying (and not saying) in relation to the material before them, compare that to all relevant previous matters they have dealt with, understand and allow for their egos and agendas, and advise accordingly in a way that these humans will accept. That is well over a decade away.
In this way, AI is a bit like cryptocurrency. It works fine in decentralised environments where it is only interacting with other private agents. But the minute someone needs a government agency to consent to or adjudicate on a private agent’s actions, the AI will need to persuade politically-accountable human decision-makers. All of a sudden, that will bring things to a screeching halt. Of course, one day, government may be run by AI. But that will be long after *I* am dead.
9. June 2024 at 04:03
BTW, today I saw “A Brighter Summer Day”, in a packed cinema in 4K. Your review said: “An Edward Yang masterpiece. One beautifully directed scene after another—for 4 hours. Some scenes you’ll want to cut out, frame, and put on the wall.”
I agree with that. As much as a Murakami novel, I found it eye-opening in showing how enamoured the wider world was becoming of American pop culture in the 1960s, in a way that is now so completely internalised as to no longer warrant observation.
9. June 2024 at 07:47
Actually, SFO and NYC will soon be irrelevant. Blackrock is spearheading a Texas stock exchange to rival NYC over lawfare concerns.
And James Tour just received another grant from IBM for over 30M.
Rice University, and Texas more broadly, are now leading in nanotech innovation. Tour is a force of nature, btw. 800 publications to his name, 150+ patents. He’s also a critic of people like Dr. Cronin who talk and talk, but never produce much of anything. His colleagues hate him, mostly for his Christian views, but they can’t stop citing him. The guy has an h-index of 129. Top ten chemist in the world.
Biggest digital currency tech conference in the world was also held in Austin, Texas a few days ago. All the big companies attended, and almost all of them are planning to relocate offices to Austin. Coinbase, Gemini, FV Bank, all the big players….
As they say, go woke….go broke.
9. June 2024 at 08:02
Edward,
Much of corporate America went “woke”, because that’s where the money is and increasingly where it will be. Hollywood entertainment producers aren’t stupid. They’re responding to demand, and young people are disproportionate consumers of movies, music, etc.
With regard to ESG, it is simply the future. There is no long-term future for the fossil fuel industry, for example, though it will be important for the rest of my life, as I’m nearing 50.
This divide is largely between the young and old, the urban and rural, and the college educated and not.
In some industries now, and increasingly in the future, not going woke means going broke.
I think this is one reason many far right conservatives are happy to throw away democracy and free markets, wanting to ban lab meat, for example.
9. June 2024 at 08:15
[…] 7. Scott Sumner’s amusing take from visiting AI circles in SF. […]
9. June 2024 at 08:36
Everyone, It seems silly to speculate as to what a super-intelligence would be like, because no human being is anywhere near smart enough to have an intelligent opinion on the subject. I don’t even know if we’ll be able to create a superintelligence, but if we do then I suspect it will be a “big deal”.
Kit, Good comment, but I hate the “small screen”. I do everything on a big iMac.
Rajat, As I get older, I increasingly notice how 1960s culture shows up in period films from far flung corners of the world. I even recall a Soviet film (Kazakh?) from the 1960s where you could easily see the impact of 60s pop culture.
9. June 2024 at 09:30
+1 on the Durer print. If you ever have the chance to see a pristine copy of Melencolia I (The British Museum has one) do it. The web image doesn’t do it justice.
9. June 2024 at 10:17
We are about to have an election where the choice will be between Biden and Trump.
I say let’s give AI a chance to run things. Could we do any worse?
9. June 2024 at 10:23
Thanks Stephen. I think I saw one a long time ago.
Blackbeard, I’d vote for AI.
9. June 2024 at 10:49
“For me, the wake-up call occurred much later in life, back in 2008. I suddenly realized that monetary policy at our major central banks was not being run by “Princeton School” economists, even though a Princeton economist chaired the Fed. The stock market had a similar wake-up call, and promptly crashed in the fall of 2008.”
That we didn’t have a second Great Depression seems like a victory. That the banks didn’t collapse seems like a victory. Could things have been done better? Sure, obviously, in retrospect. But relative to the worst-case scenarios, it seems like we’ve collectively learned some useful things.
9. June 2024 at 11:33
I live in SF and I’ve yet to see AI replace any job in significant quantity. Turns out AI being 60% as good as human still means you need the human. Does that mean there’s no reason to worry? Well, no, the future is hard to predict. It’s possible we have another dramatic leap and AI gets that last 40%. But it’s also possible it doesn’t happen (at least in this century). And even if AI does reach average human capability that doesn’t at all imply it will reach superintelligence.
9. June 2024 at 12:09
Nice chatting with you last weekend! As far as intelligence goes, I guess I differentiate between intellectualism, i.e. being interested in topics like AI, and intelligence. There were definitely extremely smart people there, but more broadly the event appealed to people who like discussing intellectual topics, which can make them seem intelligent.
9. June 2024 at 12:51
Why do we think labor won’t be replaced fairly rapidly? Yeah, it can go after high-IQ jobs easily. Have you driven a Waymo or Tesla FSD? Driving is labor. It might take a little bit of time to build the robot armies but they will come not long after.
9. June 2024 at 14:23
jseliger, You said:
“That we didn’t have a second Great Depression seems like a victory.”
That’s a low bar. The Fed created the Great Recession, and did so unnecessarily. But yes, we are gradually improving, and Bernanke made some positive changes.
Andy, I’m also agnostic, but the AI people have some good arguments. You talk about “this century”, whereas they say we are just a few years away from human level intelligence, and a few more to superintelligence. I don’t know enough to comment.
Recall that 10 years after electric power was developed it had almost no impact on how people lived. But 100 years after it was developed it had utterly transformed our lives. I suspect the AI people will eventually be right, I just don’t know over what time frame.
Steven, I suppose that’s right, and given that there were 400 people at the event they presumably weren’t all like Yudkowsky or Alexander or Mowshowitz. But lots of the young people really impressed me.
Sean, I meant that it’s way easier for an AI to replace an accountant than a plumber.
9. June 2024 at 14:31
It’s easy to have a positive impression of the Bay Area if you only visit and you only hang out with the well-to-do. It’s only after living there for a while that you realize that the high openness capital is also the low conscientiousness capital and a huge amount of stuff is all talk. But occasionally something that seemed like all talk actually works! This lends a veneer of legitimacy to the rest of it, even though the stuff that works is typically much more boring/regular value creation when you go digging under the hood instead of just reading the breathless coverage.
9. June 2024 at 14:46
Romeo, Two points:
1. You misread my post. It wasn’t my overall impression of the Bay Area, it was a comment that there are lots of smart people here. Which is true! I’m sure there’s also lots of phony hype, as you say.
2. It’s odd to focus on the hype that didn’t pan out, when discussing the most successful region on the entire planet in terms of driving technological progress during the past 40 years.
I’m agnostic on AI, but I find most of the AI skepticism arguments to be really weak. Not everything panned out in the past? Yeah, so what? Those expecting big developments (I won’t call them “optimists”) have the stronger arguments, even if in the end things move slower than they expected.
It won’t take long to find out if people like Aschenbrenner are correct. We will see.
9. June 2024 at 15:21
“I’m agnostic on AI, but I find most of the AI skepticism arguments to be really weak.”
The AI crowd has been refining their views for a long time. It’s not surprising their arguments are ready. That doesn’t make them any more accurate. Most scientists said the same thing about the population bomb – that Julian Simon’s argument was weak. Maybe, but he turned out to be right.
The major rise in life expectancy in the Western world (US, Europe etc) from the Malthusian minimum of about 40 began almost immediately with the commercial development of oil. It was arguably the most important technological change humans have ever experienced. Nonetheless, it took close to a century for the changes brought by the commercial development of oil to run their course.
Meanwhile the handwringers were out almost from day one, claiming relentlessly that oil was about to run out. Even Rockefeller worried it might run out. They all had great reasons! They all turned out to be wrong.
I don’t doubt that LLMs will bring changes. My portfolio is heavily invested in companies that already are or may do well in the coming AI era. So far so good. But exactly how those changes will play out is the hard part. My dad lost big when he sprang for Lucent stock in the 1990s. He correctly surmised that billions of Chinese people would buy cell phones. He incorrectly assumed that Lucent would be the primary company to profit from that.
Spread your bets, Scott.
9. June 2024 at 15:30
Kangaroo, What’s the point of observing that weak arguments are sometimes correct, and then adding a bunch of weak arguments? Let’s just remain agnostic.
Life expectancy rose due to things like better nutrition, antibiotics and better sewerage systems. Oil may have helped, but it wasn’t decisive.
And LLMs are not the issue, superintelligence is the issue (if it happens).
9. June 2024 at 19:20
(1) Where is the energy going to come from to feed the AI computer complexes? Will AI build the wind turbines, the solar panels, or solve the nuclear fusion dilemma? Or will AI take over the oil and gas complex? (How many drilling blowouts will AI need to experience in order to know how to avoid them?) Which of the smart people in SF are dealing with this question?
(2) Already some people (who might indeed live in SF) have come to realize the problems of closure and redundancy, in that digital information is finite. AI is going to need more information and the energy to generate it, even if it only takes 1.58-bit chips. AI is going to end up in endless loops.
(3) A lot of people are going to lose a lot of money on AI.
(4) Write poetry in code. Create beauty with code. Become a plumber or electrician. There are lots of ways to survive the pancake that will be the AI apocalypse. Life will continue until the real apocalypse.
9. June 2024 at 19:48
Scott,
the hand wringing within the AI scene (which I have to observe from afar) over super-AGI to me has a bit of a tinge of “I have become death, …” kind of self-adulation. These people are very smart but more importantly they believe they are very smart. They also believe that intelligence is the be-all and end-all. I have no good reasoning to offer for my gut feeling, but here it comes. Covid killed a lot of people, and yet, it wasn’t intelligent, didn’t even have a brain, mind you it did not even have a cell. The real world is much more complex than the computational, algorithmic intelligence of the kind produced in SFO. And to give you a bit more of a “reasoning” argument: AI today consumes ridiculous amounts of resources – energy chiefly. It _may_ be able to replace humans for many tasks but I seriously doubt it will be cost-effective at that. I am in education and I still haven’t figured out how to even use LLMs to my advantage, because LLMs don’t solve the kind of problems I have on a daily basis.
The bigger problem in my eyes is human. People like Sam Altman give me clear SBF vibes: highly “intelligent” yet blind, oblivious to their own shortcomings, pitiful understanding of the real world outside, making beginner mistakes. Altman in particular is as creepy as it gets, with his manipulative behavior behind an angelic boy’s face.
9. June 2024 at 21:00
> I’m probably giving you the idea that the Bay Area tech people are a bunch of weirdos. Nothing could be further from the truth. In general, I found them to be smarter, more rational, and even nicer than the average human being.
The former doesn’t contradict the latter. If you are exceptional in some areas, like being smarter or having idiosyncratic interests, you are probably exceptional in some other areas. Which can make you look like a weirdo.
> If everyone in the world were like these people, even communism might have worked.
Going off on a slight tangent: during the Cold War some Germans ran one of the best capitalist economies in the real world, and some other Germans ran the best socialist economy ever. Even though it was pretty drab, East Germany was as good and successful as socialism ever got.
And your average German isn’t all that exceptional; especially compared to the people you’ve met.
> […] My general impression is that this region has more smart people than anywhere else, at least per capita. And not just fairly smart, I’m talking about extremely high IQ individuals. […]
> If you spend a fair bit of time surrounded by people in this sector, you begin to think that San Francisco is the only city that matters; everywhere else is just a backwater.
I haven’t traveled to San Francisco in a while. But I can believe what you say about the people you’ve met. Which makes the dysfunction in their host city all the worse to bear.
My adopted home of Singapore also has some real smart people, both local and attracted from elsewhere. We are still moving ahead by leaps and bounds, just have a look at our GDP per capita growth, despite being already so close to the global productivity frontier. But many people here still seem to have something of an inferiority complex about Singapore and see it as much more of a backwater than it really is.
> PPS. Is Dürer’s Melencolia I the greatest print ever made?
It’s pretty good, one of my favourites, too. But M.C. Escher, and Hokusai of ‘The Great Wave off Kanagawa’ fame, can give Dürer a run for his money.
9. June 2024 at 21:14
I find the conversation about jobs and AI to be utterly absurd, and I don’t know if it’s because I’m crazy or lots of other people are crazy.
I start with the idea that technology has replaced lots of jobs before. At some point in the past, pretty much 100% of people were directly involved in the extraction of resources from nature in order to make (their own) food. Now that figure is 2%. A similar shift, not quite as dramatic in size, occurred when advanced economies like the US shifted from manufacturing to services. This makes me think that technology replacing the jobs we do now does not mean that there will not be jobs in the future.
My second reference point is the continued existence of activities and jobs in music and chess. Electronic equipment can reproduce music with much higher quality than people; computers can play chess much better than people. And yet people continue to make music and to play chess; not only that, they continue to get paid for making music and playing chess. This indicates to me that the purpose of human participation in an activity is not to do it *the best*. Even if machines can do X better than people, people will continue to do X and make money off it.
My third reference is Tyler Cowen’s new service sector jobs posts, which document the amazing range of things that people are willing to pay other people to do.
Now, the implications of this are not particularly appetizing to me. I think it means that 50 years from now, almost everyone will be in the service industry, being a flunky, performer, teacher/childcarer, therapist, sex worker, or some combination thereof. But the idea that there won’t be jobs to do just sounds ridiculous. We’ll invent jobs, just as we always have.
9. June 2024 at 22:05
I was at the event as well, and I’ve been struggling to give this emotion a name. Melancholia? Maybe. But Dürer’s print captures it better than the word itself.
9. June 2024 at 22:15
Pherb, What if AI kills us all? Would you view that as important?
mbka, I agree about Altman being rather disturbing.
Matthias, Hokusai yes. Escher . . . not really.
Phil, You said:
“I find the conversation about jobs and AI to be utterly absurd, and I don’t know if it’s because I’m crazy or lots of other people are crazy.
I start with the idea that technology has replaced lots of jobs before.”
It’s not that you are crazy, it’s that you haven’t done your homework. Every single objection raised in this comment section has been addressed endlessly by people in Silicon Valley.
I think people need to find out what the AI people are actually saying before making knee jerk arguments against their claims. I actually sympathize with your conclusions—but this problem is much more difficult than you seem to assume. Previous technologies didn’t create entities that are far smarter than any human being—that would not be a nothingburger, if it happens.
9. June 2024 at 23:56
Thanks, Scott. I’m not entirely convinced – I read a fair bit, and writing from Silicon Valley very rarely seems to address the kind of historical point I was making there. They’re famously forward-looking! But I will keep on looking into it.
10. June 2024 at 04:01
My own prediction: AGI is far away. Typical machine learning approaches that gradually learn over massive training sets are not the path to AGI. They’re too slow to learn, too exploitable, a breakthrough is missing.
More importantly: Having worked in big tech, and having met some leaders in their fields (not just tech): the world is dumber than you think. Everyone has blind spots, biases, or warped values that prevent them from making good predictions in certain areas. And most of all, people are overconfident.
The success of Silicon Valley isn’t because the people are so smart that they can’t make mistakes, it’s because there’s so much churn that eventually value pops out of unexpected places.
Based on the description of your event, it seems like you were star struck and caught up by overconfident biased people. Many of them are financially motivated to hype AI. The number one change I’ve seen in big tech in the last 2 years is the need to convince investors that AI is capable and useful today.
10. June 2024 at 06:34
The fact is, there are many more questions than answers at this point. No one knows what is going to happen in much detail, no matter how smart or relatively informed. Perhaps some of those in the Valley commenting on this are familiar with some unreleased AI models with which I’m unfamiliar.
Current LLM technology is some steps of innovation away from even beginning to present the dangers some are concerned about. It is evolving rapidly, but we don’t know if some temporary plateaus are on the horizon.
I’ve been using LLMs such as ChatGPT and Gemini Advanced now for about 15 months, and here’s how I’d characterize the current state of the technology:
1. These generative AI systems are a combination of probabilistic databases and probabilistic generative engines.
2. They often cannot tell the difference between their database knowledge and their generative output.
3. Hence, this makes them very poor probabilistic databases, but decent generative engines for poems, emails, pictures, etc.
4. Also, they do not have working memory, in which logic and transitive reasoning could facilitate real-time learning and sound hypotheses about generalizing concepts. This limits not only useful learning, but also creativity.
5. While currently available generative AI systems can fetch data reliably under some circumstances, and can successfully engage in single-step processing, such as through Python environments, they often cannot follow instructions to do both, for example, in step-wise fashion. They are very poor at multi-step tasks.
So, in summary, this technology is “mostly not there yet”. It offers much more promise than utility.
That said, the hallucination problem is greatly ameliorated by restricting the LLM’s attention to a specific data set. The problem can be even more thoroughly minimized with additional training/fine-tuning on that specific data set.
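To make that concrete, here is a minimal sketch of the grounding idea in Python. The corpus, the overlap scoring, and the prompt wording are all illustrative assumptions of mine, and call_llm is a stub standing in for whatever model API one actually uses; the point is just restricting the model’s attention to retrieved passages.

    # Toy sketch of grounding: retrieve passages from a fixed data set
    # and instruct the model to answer only from them.
    def tokenize(text):
        return set(text.lower().split())

    def retrieve(query, corpus, k=2):
        # Rank passages by crude word overlap with the query (toy scoring).
        q = tokenize(query)
        return sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)[:k]

    def grounded_prompt(query, passages):
        context = "\n".join("- " + p for p in passages)
        return ("Answer using ONLY the passages below. If they do not contain "
                "the answer, say you don't know.\n\nPassages:\n" + context +
                "\n\nQuestion: " + query)

    def call_llm(prompt):
        return "(model output)"  # stub: replace with a real model call

    corpus = [
        "Durer engraved Melencolia I in 1514.",
        "The British Museum holds a fine early impression of the print.",
    ]
    query = "When was Melencolia I engraved?"
    print(call_llm(grounded_prompt(query, retrieve(query, corpus))))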
Interestingly, ChatGPT-4o suddenly got better at math. I noticed this last week when using it for a data analysis task.
10. June 2024 at 07:37
I can’t help but be confused by the AI hype, from both the optimists and the pessimists. In every field in which I am knowledgeable, they give me horrifically wrong nonsense. Conceptually, I fail to see how these issues can ever be fixed. In fact, these LLMs seem to get worse from the moment they are generally released. The idea that they can be trained on their own generated output is just hilarious. As is the idea they will grow into AGI.
The San Francisco crowd might be brilliant engineers and coders, but they have a very narrow view of the world. Many of them don’t seem to really understand how things function.
Great for Nvidia stock, however!
10. June 2024 at 07:45
One of the great things about printing/engraving is how each copy is subtly different. A few years ago, the British Museum had a Hokusai exhibition and it was thrilling to see multiple copies of the same work side-by-side.
The British Museum’s best Melencolia has been in storage for some time, sadly, as it is getting old.
10. June 2024 at 08:20
There is AI development in theory and AI development in practice and deployment. AI, in and of itself, can develop theoretically at its own pace.
Deploying AI at scale requires AI to deploy at the pace of the things that enable AI, like power plants, power transmission, permitting, and privacy laws (EU privacy laws are pretty strict). Where AI needs the physical world, it will develop at the speed the physical world develops, which probably gives us time to figure some things out.
10. June 2024 at 08:31
Phil, Trust me, they’ve thought a lot about all the points you’ve made, and I’ve seen lots of discussion of those perspectives (even before I went to the conference).
The comment section of this post vastly underestimates the AI crowd.
ee, You said:
“it seems like you were star struck”
That’s a really dumb comment, especially given that I’ve repeatedly said I’m agnostic about the whole thing. Sorry to be rude, but these AI people are much smarter than you. I never heard that sort of dumb comment when speaking with them. The debate we had occurred at a much more rational level than the comment section of this blog.
10. June 2024 at 09:36
Scott,
Was there any explicit discussion of economics as it relates to AI? How about the state of AI systems in comparison to the brain? Hallucinations in LLMs, for example, are similar to toddlers’ inability to tell whether stories they make up are true.
Was there talk of AI arms races, between criminals and the rest? Between governments? Business competitors?
Can you tell us more about the specifics?
10. June 2024 at 13:20
Interesting and sobering, with one little slip. It’s not that we have leaders like Trump, it’s that we have leaders like Biden.
10. June 2024 at 17:28
The individual person working in AI probably is really intelligent, but as a crowd they may well still be an idiot, and as an industry they almost certainly are. There’s too much hype, salesmanship, etc. People are plainly lying on a regular basis (with the joke going around that AI stands for “actually Indian” since so many AI projects have turned out to just be people). It’s like predicting bubbles in the market. Some of them are correct about some of the things AI will accomplish and even when, but there’s so much noise that it feels foolish to treat any of those people like they know anything because it’s all so random.
10. June 2024 at 17:57
Everyone. I’d encourage people to read the Aschenbrenner essay, or listen to the podcast.
Some of the comments here—SMH.
Michael, Good questions. I don’t recall all of the topics, but you can be certain that everything mentioned in this comment section has been discussed in depth in Silicon Valley, and a hundred things not mentioned here.
Some of the commenters here really underestimate these people. It’s like middle school students lecturing Einstein that he doesn’t understand physics. Okay, that’s hyperbole, but you get the idea.
10. June 2024 at 18:41
To be fair, Leopold does list power capacity as the biggest constraint. Energy production and transmission is highly regulated. Seems like the best candidate to give us time to figure things out as a society. Natural gas requires pipeline and storage infrastructure that takes many years to build, if it can be permitted at all. Nuclear takes decades. Renewables are not 24/7 reliable. Transmission takes decades to permit and build.
10. June 2024 at 22:08
Scott,
As you can probably guess, I’m not in the circles that get invited to events with AI experts, so I rely on public events. When I hear an argument between Eliezer Yudkowsky and Yann LeCun, for example, I feel like I learn some things, but don’t necessarily get any closer to having a useful opinion on the potential dangers of AI. Both of these guys are smarter and more educated than I am, particularly in AI, and I’m left with a very incomplete way to judge their arguments.
That said, the quality of discussion between public experts can vary pretty widely. Some of them seem well-informed about other fields, while others make elementary mistakes in some of their assumptions.
I have no idea how dangerous AI could be, but it’s not really a focus of mine. I have little direct control over AI development, and I have very little faith that government can usefully regulate it.
Hence, being 48, I’m focused on staying relevant career-wise for the next 15 or 20 years. I’m very interested in some insight into the economics of AI adoption, and how to model how things could play out via comparative advantage, etc. I also very much enjoy using the technology, as it’s already made my working and personal lives much easier.
11. June 2024 at 06:42
Scott, thanks for this piece. It is interesting. And I share many people’s concerns about lack of attention given to what may be coming.
I want to ask you about your comment regarding superintelligence “It seems silly to speculate as to what a super-intelligence would be like, because no human being is anywhere near smart enough to have an intelligent opinion on the subject. I don’t even know if we’ll be able to create a superintelligence, but if we do then I suspect it will be a “big deal”.”
If you reconsider what you wrote, wouldn’t you say that it is too dismissive? I think we have to debate how we might perceive ASI and relate to it, even if we are doomed not to understand it fully. A few things come to mind. First of all, we might still be able to switch it off – it would be good to have a consensus on when we all need to run and find that big switch. It would also be good to agree on how we relate to the actions suggested by ASI – it will still need us, minions, to execute. So, if it suggests dropping everything and digging a tunnel between SF and New Delhi because it is allegedly on a critical path (pun not intended) of a plan we, humans, cannot conceivably understand, do we just obey or do we have some kind of review? Especially considering that every iteration leading to ASI has hallucinated now and then.
We also allegedly had superintelligence around for most of human history. There are still people who tell us that there is a God and some kind of plan of salvation. This insistence created a number of cults, led to the devastation of nations and the deaths of those who disagreed with the interpretation, and on the positive side left a number of beautiful churches and other works of art.
11. June 2024 at 07:51
I read some, although not all, of the Aschenbrenner book/essay and found it rather disappointing. Without writing a 165-page rebuttal, some thoughts I had were:
Rhetorically, his writing style is irritating. The idea that there are only a few hundred people who understand the situation, including the author (of course!), puts a bad taste in my mouth from the beginning. It’s so incredibly gnostic and occult, setting himself up as the magus or seer who knows the true secret. This is repeated throughout.
The anthropomorphising of AI is another rhetorical trick he uses throughout. ‘The models, they just want to learn.’ Nonsense. The models have no agency and no desires. Talking about them as if they are agents – to the point of calling them agents! – is nothing more than making them sound more intelligent than they are. Same with the use of the word ‘hallucination,’ although amusingly this word only comes up once in the entire essay.
On a deeper level, there are a whole bunch of extreme claims here, without much/any evidence to support them. For example, that the machines can think and reason today, that by 2025/26 machines will outpace college graduates, and that they will be smarter than humans by the end of the decade. His evidence is that this ‘just requires one to believe in straight lines on a graph.’
He, and others like him, seem to think that superintelligence will be achieved simply by throwing more resources at it. More compute, more data, more power, more GPUs, etc. Again, to quote him, ‘the magic of deep learning is that it just works.’ And that with more ‘effective compute, models predictably, reliably get better.’
For the data limitation issue, he just asserts that it will be solved – again, through spending ‘billions of dollars.’ How? ‘I think it’s reasonable to guess that the labs will crack it.’ Oh, okay.
I more or less stopped reading after around page 50, only reading a few random lines that caught my eye whilst scrolling. For example, he clearly knows nothing about Cortes or Pizarro’s conquests in Latin America and is just repeating long out-of-date tropes. I’m not entirely sure why he felt it necessary to mention them, other than that he seems a firm believer in the Great Man of History theory.
Aschenbrenner may or may not be an extremely intelligent and gifted individual, but this is not a very impressive essay, in my opinion. There are many bold claims, with very little evidence to support them. It all feels like one giant pitch deck. /end rant.
11. June 2024 at 08:16
Bill, Energy constraints are not likely to stop us from getting to superintelligence–although they may slow its spread. Hopefully, something else will limit our ability.
Michael, You said:
“When I hear an argument between Eliezer Yudkowsky and Yann LeCun, for example, I feel like I learn some things, but don’t necessarily get any closer to having a useful opinion on the potential dangers of AI. Both of these guys are smarter and more educated than I am, particularly in AI, and I’m left with a very incomplete way to judge their arguments.”
Bingo. That’s exactly how I feel. I wish more commenters felt this way.
Alexander, People much smarter than me suggest it would be impossible to turn off a superintelligence, which could easily thwart our intentions. I’m agnostic on that question.
As for odd requests, presumably a superintelligence could explain its suggestions in a way that humans can understand. Isn’t this the P/NP issue? Is a problem solvable? Is the solution verifiable? Solutions can be hard to find yet easy to check; factoring a large number is hard, but verifying a proposed factorization takes a single multiplication. Thus it might identify a useful new protein that we would never have thought of, but where we can verify its use.
Tacitus, I recall him spending a lot of time explaining how we’d overcome the data wall. He discussed reinforcement learning, chain of thought, better algorithms, etc. I don’t know if he’s right (I’m skeptical), but he had lots of strong arguments.
As for Cortes and Pizarro, are you referring to their use of native allies? If so, that makes his analogy even stronger. Unaligned AIs could align with evil humans.
11. June 2024 at 08:44
I read Aschenbrenner’s essay and I’m struggling a bit to square it with Andrew Ng’s letter to Congress, https://aifund.ai/insights-written-statement-of-andrew-ng-before-the-u-s-senate-ai-insight-forum/. Ng, as the former head of Google Brain, is, I assume, in Aschenbrenner’s elite 100 (and if he isn’t, that would raise questions about Aschenbrenner for me), and he is much more measured in his discussion about the future of AI.
11. June 2024 at 09:14
I’ll also grab an old quote I added to one of your econlog posts, https://www.econlib.org/a-chat-with-claude-3/, to express my surprise that the top AI people are now focused on the impending arrival of super-intelligence.
11. June 2024 at 09:21
Do you mean somewhere besides that essay? He doesn’t really make any coherent arguments in the essay about the data plateau problem. He says multiple times in different ways that people are working on making models more efficient, insiders are optimistic about this, and it’s intuitive that future models can be trained with less data or with synthetic data. But it’s all hopes and dreams, nothing substantial whatsoever.
Although, again, I think the ‘hallucinations’ issue is a much more important and difficult issue to solve, which he doesn’t touch on…
Regarding the Spanish and Portuguese, this is his paragraph:
‘Whoever controls superintelligence will quite possibly have enough power to seize control from pre-superintelligence forces. Even without robots, the small civilization of superintelligences would be able to hack any undefended military, election, television, etc. system, cunningly persuade generals and electorates, economically outcompete nation-states, design new synthetic bioweapons and then pay a human in bitcoin to synthesize it, and so on. In the early 1500s, Cortes and about 500 Spaniards conquered the Aztec empire of several million; Pizarro and ~300 Spaniards conquered the Inca empire of several million; Alfonso and ~1000 Portuguese conquered the Indian Ocean. They didn’t have god-like power, but the Old World’s technological edge and an advantage in strategic and diplomatic cunning led to an utterly decisive advantage. Superintelligence might look similar.’
This is just complete nonsense, historically. The technological edge was basically non-existent; to claim they had an advantage in strategic cunning is almost offensive. To not mention the native allies is just strange, other than that they had ‘an advantage in… diplomatic cunning,’ whatever that means. Alfonso never conquered the Indian Ocean, either.
Basically, it tells us nothing at all about AI, it just tells us the author doesn’t know much about history. I honestly wonder if GPT wrote this paragraph.
11. June 2024 at 09:39
Carl, Yes, current AIs don’t have AGI. I think pretty much everyone agrees on that point.
Tacticus, Well, I am very interested in learning your theory of how a tiny number of Europeans were able to quickly take over most of the world, and rule over vast numbers of native people, without any sort of superiority over those people.
Until I hear otherwise, I’ll assume it was a mix of technological and tactical superiority.
I don’t recall where Aschenbrenner discussed the data wall in detail, as I both read the article and listened to the 4 1/2 hour podcast. But he certainly did discuss how he expected the data wall to be overcome.
11. June 2024 at 10:11
https://www.construction-physics.com/p/how-to-build-an-ai-data-center
I happily admit practical ignorance of AI specifics.
I do have decades of experience with energy supply and demand curves.
Aschenbrenner talks of a Manhattan Project level commitment. Well, it took a very big threat and Pearl Harbor to create that commitment. For the foreseeable future, we are well short of that level of commitment to building infrastructure.
I agree we could technically build the energy infrastructure needed in the USA if we had a huge bipartisan commitment to do that and junk every bit of climate change legislation. It would also require a huge bipartisan commitment to permitting reform.
Currently, I’m not convinced we can permit a single new interstate natural gas pipeline going forward. MVP (the Mountain Valley Pipeline) starts up this month, but planning for it started in 2014.
Barring a major bipartisan shift in the political winds, energy supply and transmission have been and continue to be fairly inelastic.
Personally, I think we mostly have to embrace change because it happens when it happens. Trying to stop it or retard it doesn’t usually work very well. It’s pretty hard to put the genie back into the bottle.
I just think that the pace of AI deployment is not just about the pace of the field of AI. Once you are into gigawatts you are probably pushing deep into the elasticity of energy infrastructure. And if you want to do it clean, then you have to look at nuclear and its timelines.
11. June 2024 at 10:18
https://x.com/E_R_Sepulveda/status/1757396348884627599/photo/1
Oh, this is a link to wind output versus nuclear output for Ontario.
I have a hard time seeing this type of electric gen meeting giant data center needs.
Renewables probably are not the answer without huge technological innovation in low carbon energy storage.
11. June 2024 at 10:32
But it goes to the heart of the question he is assuming the answer to: the leap from trained intelligence (both supervised and unsupervised) to general intelligence in the near future. We may be about to waste massive amounts of energy pursuing diminishing gains.
11. June 2024 at 10:43
Well, the simple answer is that a tiny number of Europeans were not able to quickly take over most of the world and rule over vast numbers of native people!
Up until the 19th century, with some exceptions like the Americas, European ‘colonies’ were mostly a few forts and towns scattered on various coasts (think Goa, Cape Town, etc). There were various reasons for this, notably that Europeans who left the coasts tended to die very quickly from diseases like malaria…
It wasn’t until the mid/late 19th century that Europeans actually ruled over vast numbers of native people, in part due to the discovery of medicines like quinine and also due to the productive capacities of the industrial revolution finally allowing weapons much more advanced than the natives’. Until then, European weapons were hardly better.
I’m not saying that Europeans never had technological or tactical superiority; I’m saying they didn’t at the time of Pizarro, Cortes, and Alfonso, the three names he mentioned. Even with fully automatic weaponry, it would be pretty difficult for 500 men to conquer an empire of millions. Not that the weapons the Conquistadores had approached anything like that. I believe Cortes, for example, had more crossbowmen than gunners. Their main weapons were still swords and other hand-to-hand items.
So his analogy of superintelligence being like the well-armed and masterful Europeans in the 16th century surprising the South Americans just doesn’t work. It’s a fake analogy.
11. June 2024 at 10:53
I think the technology of ocean going transport was a pretty big strategic advantage for the Europeans. The ability to pop up anywhere on a coast and then leave again was a form of strategic mobility the other cultures could not match.
11. June 2024 at 13:54
Bill, you’re both underestimating the capabilities of other cultures’ ships and overestimating the capabilities of European ships. You’re also looking at it from a very antagonistic viewpoint, when frequently Europeans and other cultures were collaborating.
12. June 2024 at 04:11
Is it not possible that a non-Silicon Valley person could be a correct contrarian, in the same way that Eliezer said that you are/were?
He is a hard case given his language and style. He reminds me of Austrian economists.
Thanks for the HT and interesting feedback. The one by Shane Legg is balanced and speaks to some challenges of replicating humans’ efficient learning. I find it depressing to listen to these podcasts with the doomers. Then, as pointed out, you listen to the optimists, most often executives, and it’s clear they are more keen on the positive near-term transformative impact on productivity and dismissive of the probability we will expire.
For you it’s not a biggie, as you’ll be dead soon; I’m right behind you. But importantly, there’s the next generation.
12. June 2024 at 08:14
Tacitus, A few hundred men overthrew the mighty Aztec and Inca empires, and you still haven’t told me how.
Milljas, You said:
“Is it not possible that a non Silicon Valley person could be a correct contrarian”
Obviously, and no one has ever denied this. BTW, I don’t even know what the “contrarian position” is for AI. Doomer? e/acc?
BTW, Aschenbrenner is not a doomer or an e/acc.
12. June 2024 at 10:09
AI doesn’t have to be super intelligent to have wide ranging long term effects on human society.
12. June 2024 at 11:05
Apologies, I thought I had said that I vehemently disagree that ‘A few hundred men overthrew the mighty Aztec and Inca empires.’
Re: the Aztecs:
Cortes may have had only a few hundred Spaniards, but he was allied with perhaps 100,000 natives who despised the Aztecs. It wasn’t just a few hundred Spanish versus 100,000 natives; there were maybe 100,000 fighters on two sides and the Spanish were on one of those sides.
Meanwhile, the natives (on both sides) got decimated by disease, likely smallpox, brought by earlier Spanish visitors/explorers/whatever.
It’s also very questionable how ‘mighty’ the Aztec empire was. It was basically a city state with some tributaries.
Re: the Incas:
The Incan Empire was undoubtedly much larger and more impressive than the Aztecs’. However, it also was not a situation of a few hundred Spaniards against 100,000 natives. For one thing, there was already a civil war going on when the Spanish arrived, and European disease, again likely smallpox, was already wreaking havoc.
The ultimate conquest was also not especially quick. Whilst the Aztec ’empire’ basically collapsed once the main city was destroyed, the conquest of Peru took decades.
—
In short, the Aztecs and the Incas were destroyed mostly by disease and by disaffected neighbours. Many scholars think both empires were already in decline, if not fully collapsing, by the time the Spanish arrived.
Besides bringing disease, the Spanish on the ground had little to do with the collapses. It was simply not superior European weaponry or tactics that brought down the native empires. That was mostly a narrative made up later!
13. June 2024 at 12:00
Davey, I agree.
Tacitus, You said:
“Cortes may have had only a few hundred Spaniards, but he was allied with perhaps 100,000 natives who despised the Aztecs.”
It seems to me that this is exactly Aschenbrenner’s point. If 100,000 natives required a few hundred Spaniards to get the job done, what does that say about their relative superiority?
AI will align with humans and gain immense power.
14. June 2024 at 17:48
I don’t believe in AI doom because I accept the EMH, and it’s not pointing toward that. Garett Jones has been making a similar point.
Regarding the conquistadors, this is a relevant post. The fact that Cortez got allies just indicates that people thought it worth joining the strong horse despite his tiny numbers.
14. June 2024 at 21:15
TGGP, Well, regarding the EMH it’s worth noting that some of the very same Silicon Valley types who predict doom were shorting the market in February 2020 because they saw Covid coming.
But I see your point, and I sort of lean that way myself.
15. June 2024 at 09:01
The Spanish were only superior in the sense that they had better immunity to Spanish diseases! Again, it was not ‘the Old World’s technological edge and an advantage in strategic and diplomatic cunning.’ Also, I rechecked my sources, and the Spanish actually had several thousand men, not a few hundred.
I think you are being far too generous to Aschenbrenner, both regarding his writing and reasoning and regarding his conclusions.
15. June 2024 at 09:13
Oh, and I shorted the market in February 2020 and bought back in the day before the ultimate bottom in March, so what does that say about the EMH?
(As long-time comment-readers may recall, I think the EMH is at best useless and at worst nonsense.)
16. June 2024 at 13:19
Markets are obviously not perfectly efficient, or there’d be no incentive to make them efficient, as Milton Friedman pointed out a long time ago. Markets are merely very difficult to beat consistently, rather than impossible, on average.
17. June 2024 at 15:39
Tacticus, We’ll have to agree to disagree on all of those points.
“I shorted the market in February 2020 and bought back in the day before the ultimate bottom in March, so what does that say about the EMH?”
Nothing?
18. June 2024 at 04:19
I agree. So what is the point of some of these Silicon Valley guys also shorting in February 2020?