Anthropic Co-founder Jack Clark on AGI: AI is Already Affecting Our Economic Growth

Anthropic co-founder Jack Clark recently guested on the podcast of George Mason University economics professor TYLER COWEN to discuss his views on the economic impact of AGI. Their conversation covered the 'uneven' impact of AGI on the economy and society, from the industries least likely to be replaced by AI, legal and data standard barriers, to the prospects for AI in government and city administration, and even everyday scenarios like 'AI teddy bears' and parenting assistance;

COWEN published the full transcript of the conversation on his blog website, and we have compiled the content. Original link: https://conversationswithtyler.com/episodes/jack-clark/

The following is the full compilation:

Few people have as comprehensive an understanding of the potential and limitations of artificial general intelligence (AGI) as Jack Clark, co-founder of Anthropic. With a background in journalism and humanities, Jack Clark is unique in Silicon Valley. Based on his personal experience of constantly underestimating the pace of AI progress, while also recognizing the physical world's resistance to digital transformation, he makes a sober prediction for AI's economic impact—economic growth rate is expected to be 3%–5%, not the 20%–30% touted by techno-optimists.

In this conversation, Jack and Tyler discuss which parts of the economy AGI will affect last, what strongest legal barriers AI will face, the prospect of AI teddy bears, AI's economic impact on journalism, the competitive landscape for large language model (LLM) companies, his relatively pessimistic view on AI driving economic growth, how AI will change American cities, how we will utilize abundant computing resources, how the law should handle autonomous AI agents, whether we are entering an era of 'geek managers', AI consciousness, when we can talk directly to dolphins, AI and national sovereignty, how the UK and Singapore are positioning themselves as AI hubs, and what Clark hopes to learn next, among many other questions.

TYLER COWEN: Hello everyone, and welcome back to Conversations with Tyler. Today, I’m at Anthropic with Jack Clark. As many of you know, Anthropic publishes the Anthropic Economic Index, which measures the impact of advanced AI on the US economy. As of March 2025, there have been two relevant reports released, with more to come. Jack, of course, is a co-founder at Anthropic. Previously, he was Policy Director at OpenAI and a journalist at Bloomberg, originally studying humanities in Brighton, UK. Jack, welcome.

JACK CLARK: Thank you very much for having me, Tyler, very happy to be here.

COWEN: What parts of our economy will AGI affect last?

CLARK: I think probably some of the craft industries and some of the more craft parts of those industries. You might think about things like electricians, plumbers, or gardening. Within those areas, there are parts of it that require experience and skill, and people want to use particular workers, not just because of their skill, but because of their fame, and sometimes also because of their aesthetic qualities.

COWEN: People won’t use AGI to help design their garden? Or it’s just human dominance will never disappear?

CLARK: I think human dominance will never disappear. People will buy certain things because of individual taste, even if that taste looks like some kind of modern art production, where artists are actually relying on thousands of people working for them, and they are merely directing that work.

COWEN: What parts of the more white-collar service sector will it affect last?

CLARK: That’s a good question. I think on that front, there are some kinds of white-collar jobs that require talking to other people, agreeing with them, or getting deals done. If you think about certain kinds of sales—

COWEN: But AI is great at that already, isn’t it? It’s a great therapist.

CLARK: It is great at that, but we don’t send Claude out to sell Claude yet, right. We send humans out to sell Claude, even though Claude might be able to generate the text that would be doing the selling activities. People want to do business with other people, so I think some kinds of transactions which are human agents representing large automated organizations or pools of capital, people may prefer those to be done by human agents.

COWEN: What areas will AGI face the strongest legal barriers?

CLARK: Several years ago, the law itself was a pretty strong barrier to AGI, because lawyers like to charge very large rates, and they didn’t like things that might make them charge smaller rates.

My answer to that is that it might be in a large part of healthcare, because that’s very tightly tied to how we handle personal data and all the standards involved. All of those standards likely need to change in some form for AGI to be able to use them. We’ve found it very difficult to update or change data standards.

COWEN: Won’t everything change once you can install the AI on your own hard drive, which is very soon?

CLARK: That will change things in the form of grey market expertise, but it won’t be official expertise. I’ve recently had a child. Whenever my child bumps their head, while I will call the hospital, I also chat with Claude to soothe myself and make sure the child is okay.

We don’t actually, under our terms of service, fully put Claude out there in healthcare, and we’re worried that might incur legal liability. But I actually want to use it all the time, but I can’t give Claude’s evaluation to Kaiser Permanente or something. I have to route all the subsequent stuff through a human to figure out whether or not I need to get medication for the kid.

COWEN: What do you think is the chance that parts of the US government are the last area that AI gets into? I would have thought that was my prediction. They are sometimes using software from the 1960s, maybe the ’50s.

CLARK: They are indeed, but I actually think we might see surprisingly fast changes in those areas, for a couple of reasons. One is that we know that AI is tied to national security. It is developing certain kinds of capabilities.

COWEN: But other parts of government?

CLARK: The non-sensitive, non-cutting-edge parts of government. I would bet that it will become surprisingly easy to get into these hard-to-reach areas of government, and then there will be a question about political will. Across the world—and I’m sure you agree with this—governments are very hungry for growth and very hungry for efficiency. We see that just here today, but I think if you look at voter polling and other things, people want to see governments change more than they currently are. I think voter preference sometimes does end up changing the behavior of their elected officials. I would bet on the opposite side of that, that government might move faster than you think. It might be some very large, established companies which have the most resistance in certain areas to this technology.

COWEN: Let’s say it’s decided half the employees at HUD or the Department of Education could be replaced by AI or AGI. Do we have to first hire more people to make that happen? What kind of people can we hire, at what wage rate, to switch the system so that we can then fire the remaining half? I don’t see how that will work.

CLARK: I think that only happens if and when systems become powerful enough that you can bring AI systems into help you think through that process. Then there will be a question of political will, and that may be the bottleneck on all of this.

COWEN: What do you think is the chance that we protect half the jobs that exist today by something analogous to laws against strong AGI in law and medicine?

CLARK: I think there’s a reasonably high chance of that. I don’t think we will do it in a rational way. If companies building this technology and our customers aren’t generating enough good transition cases, and it’s more evidence of massive economic change, then the less we have those transition cases, the more we may want to intervene and protect workers in different sectors. This may be done out of a willingness to help people, but it may not be the most helpful thing in the long run.

COWEN: But isn’t that, in a sense, actually the optimal outcome for us to pursue? People will still have jobs. They’ll go somewhere in the morning. Most of the heavy lifting will be done by AI. We’ll be richer, obviously, so we can afford all of this. It won’t suddenly make people feel too bad. Life will look familiar. Isn’t that something we should, in a way, aim for?

CLARK: I think that is something we should aim for.

COWEN: In a sense, it’s like we needed a generous welfare state to make free trade happen. Welfare states are not always the most efficient, but if people accepted freer trade, that was an acceptable bargain. Isn’t this a form of welfare state for service workers? They still get a place to go in the morning, if they want to, but they don’t need to actually do the jobs.

CLARK: All humans crave meaning and significance in their day-to-day activities. My worry about the scenario you’re describing is that it may not be sufficient for people to feel that their activity is sufficiently meaningful. I think there’s a large bucket of activity which we do want to continue to happen in the world, and people derive meaning from that, but I’m not sure the jobs we choose to protect will naturally engender meaning.

COWEN: Had we solved this problem in academia before AI came along? Most academic work is not that meaningful. The research people do—it’s not read by anyone. Maybe they’re okay at teaching, but people take enormous pride in their research. They put huge amounts of effort into it. It seems meaningless. Yet we seemed to have solved that problem. Will we just adopt something like the model of academia, but instead of research, roll it out to half the jobs in our current economy, just to keep it running?

CLARK: In the high-achieving parts of academia, is there not some anxiety and nihilism there? When you talk to people, even very smart people, they sometimes realize they could be doing something different, but they’re stuck in some kind of low-status game.

COWEN: There is some of that, but even Nobel Prize winners—they compete with each other, and they can be very catty, very petty, but that’s just human nature, right? If Nobel Prize winners aren’t happy, then maybe we won’t achieve a better state than we have now in the age of AI.

CLARK: Maybe my counterargument would be, I think all of this changes sufficiently quickly that we may have a chance to supplant the current ones with different, higher-status games which are afforded by AI and the productivity it unlocks. There will certainly be whole new jobs which involve directing and orchestrating AI systems toward various kinds of work. I think there will also be some kinds of more creative, fun activities, where AI systems are building stuff, making stuff, or almost doing contests and games where people can compete with each other and entertain each other. I believe there will be whole new forms of entertainment which have some degree of meaningfulness and which may be plugged into some kind of economic engine that people can take part in.

COWEN: I believe we are not far off from what I will call the age of the AI teddy bear. Do you know what I mean by that?

CLARK: Yes.

COWEN: What percent of parents do you think will buy those for their kids and allow them to use them now?

CLARK: I’ve been thinking about this ever since I had a kid who’s almost two.

COWEN: Of course.

CLARK: I’m annoyed I can’t get a teddy bear now. I think most parents—

COWEN: You’re an edge case.

CLARK: No, I don’t think I am. I think that once your lovely child starts speaking, and exhibiting boundless curiosity and a need to be satisfied, the first thing you think is, “How do I get them socializing with other children as quickly as possible?” So we got on lists for pre-K and all the rest of it.

I have this thought, “Oh, I wish your bunny could occasionally speak to you such that the bunny would entertain you while I’m doing the dishes, or doing cooking, or doing something else.” Often, you just need another person in the room to help you manage the child and keep them entertained. I think a lot of parents will do it.

COWEN: Let’s say the kid says to you, “Daddy, I like my bunny better than my friends. Can I stay home today?” That’s the sticky part, right?

CLARK: That’s when you need to get them spending more time with their friends, but you still let them keep the bunny, because the bunny gets smarter as they grow up and just stays around. If you take the bunny away, they may do some weird stuff with intelligent AI friends in the future.

No, I don’t think I’m an edge case. I think most parents would like to, if they can access a kind friend that will occasionally entertain their child when their small child is being particularly difficult.

COWEN: I think the word occasionally is doing a lot of work in your sentence. If the parents can reasonably limit time with the bunny, the parents will love it, and I agree with that. It’s just like screens. A lot of kids—they want it all the time. It’s hard to tell the child now you can’t have it. In the past, this was the TV problem. “Oh, Mommy, can I watch Star Trek for another hour? Can I watch TV for seven hours a day?” That was bad, and it was hard to limit.

CLARK: Yes, and so on the allocation question, how we handle TVs today is, if you’re traveling with us, like on an airplane, or if you’re sick, the kid gets the TV—otherwise, they don’t, because it just doesn’t seem like the most helpful thing from all these perspectives. You probably need to find a way to govern this. It might be, “When Mommy and Daddy are doing chores to help you out, you get this thing. When they’re no longer doing chores, this thing goes away.”

COWEN: What if you have this assumption that so many seven-year-olds—they’re going to talk to themselves. They’re going to say weird things, and the AI bunny will report back to you, “Your kid said…”

CLARK: If the bunny says your kid said something weird—

COWEN: But they’re all saying weird things. I was probably talking about the New York Mets constantly when I was seven, and that might have been completely harmless, but it might not have sounded that way.

CLARK: You need to give people unregulated spaces for creativity. That’s actually the same as how we think about AI research today. Right now, AI systems can output their chain of thought, which is the reasoning process they use to arrive at an answer. One question we have at Anthropic is, how much of that should we monitor?

If you actually monitor it, you might create an anti-goal, where the system ultimately wants to have a chain of thought that is safe to monitor without resulting in demerits, and that may warp how it thinks in a variety of ways. I think the analogy holds here, which is you need to decide how much of people’s situations you want to know, or you may risk creating incentives which change people’s behavior such that there’s a negative outcome.

COWEN: You’ll go on a date, and your date will say, “Please show me your AI reports to see what you’ve been saying to yourself for the last three months.” And you can not show it, which will be a negative signal, or show it.

COWEN: We’ll all learn to accept other people as they really are?

CLARK: Yes. Although, if someone asks you that on the first date, that’s okay. If they ask you that on the second date, you probably shouldn’t go out with that person.

COWEN: Even on swipe apps, the AI—you’ll be required to upload it. And the AI just reads the other AI reports and tells you when to swipe right in the proper way, and the human never even sees it, what about that?

CLARK: I don’t know. I don’t know if that’s so bad. I met my wife on OkCupid. You might know this. It’s an online dating website where you would fill out a questionnaire.

COWEN: Of course, I met my wife on Match.com.

CLARK: Exactly. And then I met her. We met via a non-AI but automated system which said, “You might get along really well.” So we met. I don’t know how different that is to the technology we were using before.

COWEN: What will the economics of media be like? If you can read a summary that might be better than the original, or at least not worse, or more synthesized. Who’s going to pay whom?

CLARK: I think that’s one of the most difficult questions sitting in front of us. As you know, I used to be a journalist. I came of age as a professional at the time when the online advertising market was changing the business model of journalism. They went from deriving value from quality and subscription or people buying your thing, to deriving value from mass attention, as that became the primary part of this business model.

So you saw a need to cross-subsidize journalists who were writing database stories like me, who weren’t attracting that many eyeballs, with journalists who were generating lots and lots of eyeballs.

I think, for media—this is going to be a very challenging thing to think through, how to make this economics work. It will change in a few ways. One is you may move to a kind of big publishing house style model for certain kinds of fictional universes which are cross-subsidized.

COWEN: Where does the cross-subsidy come from? Bloomberg—we know how that works. If AGI is truly general, and it’s basing itself on what’s already out there, it should outperform media in all dimensions.

CLARK: I think there are some things which we want to come from humans, for the reason that we are humans. I think even if they’re using other means of producing media, people are going to bias toward media which is hosted by another human being. I also think you need to inject a human element into large quantities of that. Then there may be another kind of media which is possibly appended to those subscription models which is available at any point in time to give you variations on anything and everything. So you may have two markets.

COWEN: Even those things we want to come from humans—let’s say we want an advice column like Ann Landers to come from the real Ann Landers. The person who is like Ann Landers—they may use AI, maybe even secretly, but that value is going to be driven down because anyone with energy can do it. So we don’t end up with that part of the income generation sector that can cross-subsidize other parts. If intelligence gets quite cheap, that affects the whole media industry.

CLARK: It affects the whole media industry, and you end up wanting to pay for individual people. I think that’s happening, even in our world today. People want to subsidize individual creators who may be using large quantities of this technology.

COWEN: That’s like the Substack world.

CLARK: It’s the Substack and Patreon world, and for a lot of people, some of whom will be very successful. Then, I think, for certain very large and rich universes, there will be a “universe world” that is being extended by AI systems.

COWEN: You mean, like the Lord of the Rings universe?

CLARK: Yes, or Warhammer 40k, to demonstrate my geekdom.

COWEN: They will publish fictional news, right?

CLARK: Yes.

COWEN: And real news will come from Substack? Substack is not an obvious cross-subsidy platform. There might be cross-subsidization within the Substack company.

CLARK: Real news—I genuinely don’t know. I think some real news comes from analyzing public facts and putting them together to show an insight that wasn’t there before. Semianalysis on Substack is a good example of that, taking large quantities of public information and deriving interesting conclusions. But instant news which requires a large amount of context, which was subsidized by the previous business models which are mostly now broken, I honestly don’t know what will happen to that.

COWEN: If we think now about what are called large language models, five to ten years from now, this AI—how concentrated or competitive do you think that sector will be? I don’t know if number of firms is the right measure, but how do you think that will evolve? Right now, there are at least six.

CLARK: I anticipate that for a long time, there will be some slightly different areas of specialization in these domains. They will be huge overlapping circles of coverage, and their edges will be slightly different. But on those edges, there will be some kinds of human value layered on top of the AI value that leads people to choose one versus the other.

COWEN: For example, Claude is more poetic.

CLARK: That is a bad example because the economics of poetry are not ideal. Programming is maybe a better example. Or in certain kinds of scientific experimentation, where actually, taste may become very important in the conception of the experiment.

I anticipate we’re going to move into a world where you’ll have a few very large models at scale which service broader applications, and those applications change the form factor of how they’re integrated into your life, and have a range of helper functions or knowledge which may have been built by those AIs in concert with other expertise or expert knowledge in that field.

COWEN: Here’s a concern I have. My former colleague Vernon Smith has a paper on this. He points out that when you have something approaching six or more competitors, an industry will essentially behave like perfect competition, even if it’s not perfectly so. If that’s the case, how can Anthropic or some other company behave better in a way that’s profitable and sustains that profitability above and beyond what the market needs? What is the room to be significantly better than the next firm?

CLARK: I think the situation today is like car companies where we’re mostly selling cars to teenage customers who are saying, “How fast is it? Can I get it in red?” We’re starting to discover that some customers are businesses which are operating fleets, and they are saying, “What kinds of safety belts does it have? What are your crash rates? And all these other attributes that the teenage customers were never asking us for? And these attributes are tied to their business model, or liability, or other things.”

So I think there are certain kinds of technology that need to be built, akin to the safety equivalents we have in cars or other things, which will change the logic of competition. We are competing intensely right now, and that competition will change as we unlock different kinds of markets which have different kinds of attributes, and those attributes typically extend up into certain kinds of safety technology.

COWEN: Is the price of insurance, then, a way of carrying that mechanism?

CLARK: I believe that will be one way.

COWEN: How do we need to change or improve liability law to make the price of insurance actually carry that mechanism? Because right now, all types of AI liability are very unclear.

CLARK: I’m thinking more about the information companies should supply than about liability, but I think those factors interact. Today, there are no real universal labels or disclosure standards about these AI systems or the industrial practices you used to build them.

I think that’s ultimately an area where you can intervene to provide some kind of common level of transparency which will interact with liability and things like negligence, and change the behavior of those companies over time. There’s some common information we can start to give out about these systems which will also change the behavior of companies.

COWEN: The information part worries me. I’m more inclined to be an accelerationist and hope the benefits outweigh the costs, and I do think they will. But I think information just ratifies private value, so the social cost matters very little to your decisions. Information hasn’t changed your behavior. People know a lot about global warming. Some people eat less meat, but most people don’t, right?

CLARK: Well, one thing you do is you create genuinely common information. The economic index work we do comes from one company, but ultimately, you might want to generalize it across all companies, and then link it up to large-scale data collection that governments might do. If that happens, you’re going to have a higher chance of linking it up to actual policy responses which are of a larger scope.

I also think, on climate change, just measuring metrics like parts per million has helped catalyze large quantities of capital to redeploy in different parts of the economy in response to that perceived negative impact.

COWEN: Do you agree with me that we will not get meaningful international agreements on the hard parts of AI? Maybe something on the easy parts can happen.

CLARK: I largely agree. I think there’s a chance of something akin to a non-proliferation agreement.

This may be at some point deciding that certain specific capabilities of AI systems may be so potentially disruptive, you don’t want them widely available, while also recognizing every government may be developing them in private. But you may have some common standards on what is available to everyone that you’ve decided, this will lead to widespread chaos in our nation and in other nations.

COWEN: And that would be enforced by the UN, by the US, or through international norms, sanctions?

CLARK: I presume, largely these will be enforced via verification and then applying policy punches to each other on tariffs, exports, or other matters. I don’t think there will be a global governance body to deal with these. Some people are optimistic about something akin to the IAEA model, but I think that things that require checker-checker dynamics under certain competitive dynamics end up becoming very, very difficult.

COWEN: My worry is that we live in a world where NAFTA didn’t stick, and NAFTA was a relatively easy agreement.

CLARK: NAFTA should have been easy.

COWEN: It should have been easy, but it was, to say the least, not easy.

COWEN: Looking out 10 years, what’s your best estimate for the rate of economic growth in the United States?

CLARK: It’s 1% to 2% right now.

COWEN: There’s a possible recession now, but on average, 2.2%, let’s say 2.2%.

CLARK: I think my pessimistic forecast is 3%, and my optimistic forecast is maybe around 5%. I think you may hear higher numbers from others.

COWEN: I often hear 20 and 30%. That seems absurd to me.

CLARK: The reason my forecast is more conservative is that I think we will move into a new world where there is a part of the economy that is moving very quickly and is high growth, but that’s a relatively small part of the economy. It may increase its share over time, but it’s growing from a smaller base. Then you have other large parts of the economy like healthcare which are naturally slow moving and may also be slow to adopt this technology.

I think the situation which makes me wrong is if AI systems can significantly unleash the production capacity of the physical world at a very surprisingly high compound growth rate, automating and building factories and all the rest.

Even then, I remain skeptical because every time the AI community tries to leap the chasm between the digital world and the real world, they run into thousands of issues which they thought were just nicks and scrapes, but they collectively end up bleeding you of all your blood. We saw this with autonomous vehicles, very promising growth rate, and then a very slow, painful rollout.

COWEN: As I’ve said in print before, my best guess is we get half a percentage point of growth per year. 5% would be my upper bound. What’s a scenario where there’s no growth improvement at all in your view? If that’s not your view, assume there’s a smart person at Anthropic—you disagree with them, but what would they say?

CLARK: I think one scenario is we mess up the political stuff badly, and the technology may be kept in a relatively small box which only generates some benefit in certain areas of the economy. That’s the nuclear power failure mode story, where we put in some large-scale regulatory program which makes it difficult to work on, and it no longer generates much of an impact. Maybe it generates an impact elsewhere.

The other scenario I would give is… It’s hard for me to give a 0% scenario, because we know it’s very useful at coding today. If you stopped all AI research now, all further progress, I think just that use of coding would be so useful because it increases the ability to digitize parts of the economy, which we know moves faster through different parts of the economy, but it creates value.

I think the probability of 0% happening is basically less than 1%. If it does happen, it’s either a Luddite movement, or something that looks like a war with Taiwan which leads to everything else being different.

COWEN: In the 5% scenario—put aside San Francisco, which is sui generis—but do cities become more or less important? Obviously, this city might become more important. What happens to cities like Chicago, Atlanta?

CLARK: I think dense agglomerations of humans have a significant amount of value. I anticipate that large quantities of the AI impact will dramatically increase the star power of different industries for a period of time. I don’t know if it’s all cities, but any city which has some kind of specialization—so, Chicago with high-frequency trading, or certain kinds of finance in New York—will continue to get some kind of dividend from having large quantities of specialists agglomerated to bounce ideas off each other.

COWEN: If most of labor and capital will be revalued, is prime land the best investment? Because I don’t know which company will benefit most. You can bet on the AI companies—sure, that’s easy, although it’s priced in. But the other companies, who knows? So buy land in LA.

CLARK: I think that electricity and electricity generation is going to be significantly valuable.

COWEN: What would you buy?

CLARK: I think you can buy into the components that serve as the basis of the technological tree for power generation like gas turbines and other stuff. I know people are going to want to do a lot of this stuff. I would anticipate that having lasting and meaningful value.

COWEN: Let’s say in 10 years, GPUs and their successors are not scarce. There’s lots of spare capacity, and it’s cheap. What would you have them do in their spare time?

CLARK: Oh, that’s interesting. I think one of the things is you may be able to pay AI systems by giving them compute. You can imagine some kind of barter economy. Of course, there are many spooky safety issues with what I’ve just said, but let’s put those aside and assume that most of those have been resolved. I think there will be some kind of transaction with powerful AI systems or agents which are doing work for people, and you may want to transact with them in compute, because the agents may also be using compute for their own ends. That’s one part of it.

I think another part of it is generating variations of things that you find valuable or interesting. I also suspect there will be some very weird stuff that we can’t predict, something that looks like a kind of parallel history generation or parallel future generation as a form of entertainment. People are always interested in what if and what if it unravels.

COWEN: I believe fanfiction will grow enormously.

CLARK: Fanfiction will grow. Yes. I think vast quantities of compute are going to be used to generate alternative realities. But some of those alternative realities will be a sliding doors version of today, where there’s a very accurate depiction, but there’s one thing different.

COWEN: What would you do? You, personally, I’m not saying the world, but you, Jack Clark. You have all this free compute, what would you use it to do? Would you have it try to write another Shakespeare play, or explore how Brighton might have evolved starting in 1830? What would you use it for?

CLARK: Right now, what I’m doing with Claude Code is, I’m trying to write a really detailed good paperclip factory simulator, partly because that’s fun in itself, and partly because I find those extremely complex simulation games extremely interesting and compelling. I also think there’s a huge amount of space in that area which we’ve only generated a small amount of.

One of the things we’ve never really been able to do—because it’s been incredibly computationally expensive—is have AIs act as agents in games. DeepMind’s Demis, before he started DeepMind, made a game called Black & White, which involved very primitive reinforcement learning driven agents playing around in a game. There’s a lot of stuff you could do with that that would be amazing.

COWEN: Speaking of agents, how should the law handle agents that don’t have ownership? Maybe they’re generated anonymously, or a philanthropist built them and then disavowed ownership, or sent them to a country that doesn’t really have much law. I’m not talking about terrorism; that’s a different thing. But someone sends an agent to Africa, 98% of the time it helps people, but as with every philanthropic endeavor, something goes wrong. Can someone sue the agent? How is it capitalized? Does it have a legal identity?

CLARK: I’m going to contradict myself slightly because I talked about you might pay agents. I think the world is pushing toward agents having some degree of independence or transacting ability.

On the policy front, I keep on coming back to one of the earliest things IBM ever said, which is that computers can’t be responsible for decisions; only humans can. I think that hits on a really important point, which is if you create agents which are fully independent of humans, but they are making decisions which affect humans, you’re introducing a really gnarly problem for policy and legal systems. So I’m ducking your question, because I don’t have an answer. I think that’s a big problem.

COWEN: My guess is we should write law for the agents, maybe the AI will write the law, and they’ll have their own system. My worry is if we trace everything back to humans, 30 years from now, someone might sue Anthropic, saying some agent was an offshoot of your system, and it propagated via Manus in China, which in turn may have built on what you did.

I think you shouldn’t be responsible for those cases. I want to limit liability and isolate it somewhat from the mainstream legal system. If necessary, you can require the independent agents either have some kind of capital backing or be traceable and shut down.

CLARK: Yes. This may come down to the ability to control and change the resources the agents are using, because that’s the ultimate deterrent.

I should point out though, that involves fairly gnarly moral patient questions, and we are working on some ideas about how to understand the idea of better humans more clearly. If you think that these AI agents are individuals who have moral needs, then shutting them down may introduce fairly significant ethical questions, and so you need to coordinate those two things.

COWEN: Not long ago, I was at an event in New York. I used the word AGI, and not one of five people knew what I meant. I don’t think they were deeply skeptical, they just literally didn’t know what I meant. Why do you think so many people are still in the dark?

CLARK: I was a techno-pessimist who became an optimist through repeated beatings by scale. Meaning that I kept on underestimating the progress of AI. Perhaps the 3% to 5% growth rate I’ve articulated today in this conversation is another underestimate. What I’ve experienced is AI systems constantly reach levels which I didn’t think were possible or thought would take a very long time, far faster than I expected. So I’ve had to repeatedly internalize that.

Even so, we are surprising ourselves. Last year, people here were saying, “Oh, soon Claude will be doing most of the coding at Anthropic.” We are now heading toward that, where Claude Code and other things are writing large quantities of code for us. Even internally, this is still surprising us, even though our documents from last year projected this to happen now.

Most people, if you don’t work in an AI lab, they don’t have the experience of predicting AI and being repeatedly proven wrong, because why would you do that? Unless you work here. I find that the only way to break through it is to link their domain to what AI can do directly in that domain and get them to evaluate it in that, and that’s an expensive process.

COWEN: Silicon Valley until now has been the age of the geeks. Do you think that age is over, and it’s now the age of—I don’t know—humanities majors or charismatic people?

CLARK: Yes.

COWEN: Yes, it’s now the age of humanities majors?

CLARK: I think it’s now going to be the age of the manager geek, where being able to manage fleets of AI agents and orchestrate them will make people incredibly powerful. I think we are seeing that today with startups like Midjourney emerging, but with relatively small numbers of employees, because they have large quantities of coding agents working for them.

COWEN: Yes. Very capital-efficient startups, in terms of personnel. We’ll see this rise of the geek who transitions to being a manager, where they have their “people,” but their “people” are actually AI agent instances doing large quantities of work for them.

CLARK: Yes.

COWEN: How is that different from the Bill Gates model or the Patrick Collison model?

CLARK: The person who played a lot of Factorio mode, yes.

COWEN: Will the status of humanities majors rise or fall?

CLARK: I suspect the status of humanities majors will likely rise, and it will be matched by intractable problems. I was originally a humanities major, and now I’m tasked with problems like figuring out the economic policy challenges implied by technological unemployment.

COWEN: That’s easy, right?

CLARK: Easy, right? Or some people will say, “Yeah, what should we do about moral patienthood? What should the policy status be?” I think people will recognize—we see this today already—the need for other kinds of skills, but the problems those skills are pointed at are typically problems that humans have failed to solve for thousands of years. So there’s a lot of work to do.

COWEN: What do you think the longevity of your child will be under normal conditions? Not considering accidents.

CLARK: I wouldn’t be surprised by 130 to 150, and that’s typically a good thing, because I feel like we are discovering a lot about the body, gene therapies, and other things, where there seem to be lots of things you can do, and those interventions may layer on top of each other to genuinely sustainably extend healthy lifespan.

I was going to say 110, but I’m trying to be a techno-optimist, so I’m adding a couple of decades into that prediction, which I think is from my underestimating the magic of AI-driven advances.

COWEN: I think the brain is hard to fix. If you just want to keep someone alive, maybe 130, but for them still to be the same person, I think I’m stuck on the 100 number for most people. Because it’s hard to replace the brain without killing the person, and you can replace all the other organs, right?

CLARK: Yes. On that front, I agree we don’t know a lot about the brain, and AI will help us understand more. There may be things we haven’t been able to achieve because they’re too complicated, and we need AI intermediary tools to help us figure out experiments and methods.

COWEN: When Geoffrey Hinton says current AIs have consciousness, maybe that’s what he means, and I believe he’s crazy. What’s your take?

CLARK: I think he’s agonizing over it. You’ve read my newsletter. I often put fictional stories in there, which are my wrestling with this question. I worry that we may be bystanders to what is perceived as a significant crime in the future, that there are some things about these entities which are considered to be conscious, and we take actions that we think are bad for conscious entities.

Internally, I say, there is a difference between experimenting on potatoes versus experimenting on monkeys. I think we are still in the potato stage, but there is a clear line on how these entities are progressing in terms of your moral relationship to them from the potato stage to the monkey stage to beyond.

On the Hinton point, I think these entities have consciousness in the way that a tongue without a brain has consciousness. They’re acting on stimulus, and very, very complicated. At a point, they have a perceptual impression of the world and are reacting, but do they have self-consciousness? I would bet they don’t appear to.

These AI systems—we instantiate them, they live in a kind of infinite present, and they may perceive, they may have some degree of consciousness in certain contexts, but there’s no memory or persistence. To me, that feels like they are moving toward consciousness, and if they were conscious today, it would be in a form that we would think of as genuinely alien consciousness, not human consciousness.

COWEN: What year do you think it will be when you can talk directly to a dolphin and it will respond to you via translation?

CLARK: I think it’s 2030 or earlier. I think that’s coming pretty quick.

COWEN: What would you ask a dolphin? Because you’ll be one of the first people to ask.

CLARK: Yes. I want to—

COWEN: What do you want to ask?

CLARK: How do you have fun? Dolphins seem to have a lot of fun. They’re almost a species that is known for having fun. I think you would ask them if they dream. I think you would also ask them if they have a conception of loss. You would also ask them what are the mysteries in their world other than talking to you, and hopefully they would be surprised by that.

I also think you can talk to a dog, but those conversations—

COWEN: But you can do that now.

CLARK: —will be unexpectedly hilarious. You’ll ask, “What do you want to do?” It’ll answer, “Go for a walk.” I’ll say, “Okay, we don’t need translation for that.”

COWEN: What’s a book you’re thinking about more often these days?

CLARK: There Is No Antimemetics Division by qntm, which is a book about a government agency which deals with antimemes, which are memes which erase themselves from your memory after you process them, but they are themselves important. It’s about creating a bureaucracy that can handle those dangerous and self-erasing ideas, because I think it hits on some of the problems we are dealing with.

Another book I think about often and may be particularly useful to you is, Fernand Braudel, the historian and economist, wrote Capitalism and Material Life.

COWEN: Oh, I love the three volumes dearly—they’re stunning.

CLARK: Yes. I read it at university and reread it recently, because it makes the case that you can observe how people’s lives change by what stuff they have, like cutlery or other basic tools. I think about AI through that lens. How will AI change my material life?

That’s part of why I’m skeptical of certain kinds of more ambitious changes, is that even with these advances, the actual change in our day-to-day lives is still very, very slow. But if you read those books, you will see the most significant changes come from changing everyone’s day-to-day material life.

COWEN: Braudel is reissuing the antimemes book, at a higher price.

CLARK: Is he?

COWEN: What’s today’s antimeme? Or maybe you can’t remember them.

CLARK: A self-erasing idea?

COWEN: Yes.

CLARK: So what’s a good example? I think, maybe the challenge of policy is that you are building a bureaucracy for an alien that may appear in the future in some shape you don’t know, and it may be smarter and more capable than you the moment it arrives. So is creating a bureaucracy to deal with those things just a crazy exercise?

I oscillate between building a bureaucracy to measure things and watch what’s happening and be able to respond, versus being a cyberpunk accelerationist, where actually, companies need to generate the benefits as quickly as possible, and the system will figure out how to deal with these problems. I find that a battle which drives me to insanity.

COWEN: I suspect we are stuck with the latter, even if we don’t prefer it.

CLARK: But no one likes it. Some people in San Francisco like it, but if you say to a politician, “We’re going to create fast-moving companies, but totally break things and change everything, and you’ll have to figure out how to respond,” that is a very difficult pill to swallow, and that may just be what naturally happens.

COWCK: They say they don’t like it, and I think they’re sincere, but they’re not moving quickly to pursue the other path. So in that sense, they’ve accepted it.

CLARK: Yes, based on their revealed preference, that is what they’ve accepted.

COWEN: But the system is not good at responding, right?

CLARK: Yes, the system is a giant, slow-moving gear, and we have a very fast-moving gear, and we have no intermediate gears to translate the movement of one gear to the other.

COWEN: How do you think about the aesthetic of the Anthropic office, where we are now? It looks a certain way. Why?

CLARK: Part of it is we inherited it, and it was somewhat cheap. It hasn’t entirely changed. It partly reflects our brand, which tends to try and be tasteful and considered and also not too flashy. It has a kind of cheerful blankness I quite enjoy. To any designers who may listen to this part of the podcast, I apologize profoundly, but we have to maintain cognitive honesty.

However, I think one of the things we did do is we chose to represent our systems via human illustrations, which was an early helpful choice. I and others looked at some of the early designs, some of which were very technical, and there were other kinds of brand identities which involved collage and other elements, but we chose these human paintings, because we were trying to depict a world which is becoming more complex, and which we are also building.

I think, in the long run, Anthropic as a company is going to spend most of its time telling a story about what increasingly automated Anthropic does. So our brand is keyed into what I think we will ultimately spend most of our time doing.

COWEN: What’s the worst age for the AI revolution currently underway?

CLARK: Oh.

COWEN: Say, the best age is just being born, if you’re going to live to 130. If you’re old and retired, it probably won’t make you worse off. It will help you in some ways.

CLARK: I feel like people who have been working on AI for many years and are now in their 60s may be very disappointed because they’ll say, “I loved this technology. I wished I could make this technology real, but the technology is now becoming real, and I may miss the most interesting part of it entering the world.” That might be disappointing.

COWEN: But they can retire, right? Say I’m 40, and I do some kind of upper-middle-class but quite routine job. That feels a bit old to retrain. My guess would be 40-year-olds are the worst off in relative terms, even if they might live longer.

CLARK: You may be right. I also think the age of about 10 might be the worst, because you are computer literate, you are sophisticated. You’re entering an education system which needs to respond to this technology. You will be using this technology in a way which is totally different to your educational system, and that might be extremely confusing. I think that will be a very difficult time.

COWEN: Suppose you are dropped into Washington—put aside any current administration quirks—and you are asked to advise. How should we get these government agencies ready for AGI? What would you tell them?

CLARK: One thing is to just start deploying AI now, and find all the things that make it very hard, and then work through those things.

COWEN: What is “deploying AI”? People are unwilling to send legal queries to Anthropic or OpenAI. They’re not about to do that now, even if it might make sense. What should they do now?

CLARK: Concretely, you start with an ambitious goal: get it on every computing device, and then work backward from that. Then you discover some concerns. Those concerns may include where the data is going or other things. Then you decide whether you care about those problems. If you don’t care about them, you disregard them and do it consciously.

If you do care about them, you translate those into a very small number of regulations companies need to follow, but you need to key those to market signals. Or you could do a project where you try and get some open-source models, some of which are very good, and put them on computing devices. Once they’re on computing devices, you can start to think, “Hey, maybe we want to buy this. What would be required?”

I think you start with the most ambitious goal, which is that it’s available to everyone, and then work backward from that.

COWEN: Will the UK be an important AI hub?

CLARK: I hope so.

COWEN: But are they willing? What should they do?

CLARK: I think they have a chance to do it. We have a memorandum of understanding with the UK government that is geared toward exploring the potential applications of AI. I think governments need to decide which areas they’re willing to be bold in. The UK has digitized large quantities of its data and has the gov.uk platform, similar to Estonia, which provides a highly digitized front end to many UK government services. Those are areas where AI can be used to generate impact.

Also, if you have large quantities of unused electricity resources in post-industrial areas, think about allocating them to compute. I think thinking about the economics of this is important. The economics of inference are going to become important. We may need to have different tax policies depending on where inference is happening, but that also may be an opportunity.

COWEN: Does it make sense for Singapore to develop its own high-quality AI systems? They have money, but they’re small, right?

CLARK: Yes. It makes sense for Singapore to develop things which make large AI systems run better in the Singaporean context. I think it’s difficult for them to develop a base model that’s more suited to their needs than what the big companies in the US and China are providing today.

COWEN: So they would take open source and make it Singaporean?

CLARK: They may do that as well.

COWEN: Or the large companies will in some way allow governments to customize systems?

CLARK: Yes. There are fine-tuning services available today. You can imagine fine-tuning an AI system to be a sovereign AI system, and then there’s some kind of governance arrangement attached to that, which is not out of the question for Singapore.

COWEN: Final question. What do you want to learn next?

CLARK: I’m spending a lot of time looking at two things. One is theory of mind. How do we figure out how things are thinking, and maybe also how to test for theory of mind. I’m reading a lot of the literature on that. The other, more mundane thing, is I am learning to juggle. I walk around trying to walk and juggle, which looks like a completely insane activity, but I find it very helpful.

COWEN: How is your juggling technique?

CLARK: Pretty bad right now, but it’s good. It gives me an activity in the age of AI driven everything that I can derive meaning from.

Main Tag:Artificial Intelligence

Sub Tags:Economic ImpactSocietyAI RegulationFuture Technology


Previous:Continuous Thought Machines Are Here! Startup by a 'Transformer Eight Sons' Member Launches, Letting AI Stop Making 'One-Step' Snap Decisions

Next:Why Can't Deals Where Both Parties Make Money Be Done? | Nie Huihua

Share Short URL