Cristina Caffarra in conversation with Luis Garicano on a few salient strands of the AI conversation in Europe today. Job destruction – will “most white collar jobs” really be “lost in 1-5 years” with agentic AI? How sovereign will our “sovereign industrial AI” really be? If we build on hyperscalers’ platforms, models and compute, how will Europe protect value creation? And is this not running against our sovereignty drive, pushing us further into dependency for key swathes of our future industrial base? AI adoption must accelerate at all costs in Europe, but it’s going to be quite a ride. And a reckoning for our sovereignty aspirations.
ESCAPE FORWARD Ep. 9, 30 April 2026
“AI in Europe: Jobs, Sovereignty and Value Capture,” with Luis Garicano
Professor, London School of Economics, Author “Silicon Continent” and forthcoming “Messy Jobs”
CC: Hello and welcome to Escape Forward, the space where I have the privilege of talking to friends beyond my original realm of competition policy about anything else that’s interesting in the policy debate – industrial policy, trade, the US, and particularly tech policy.
We are going to have a number of conversations around AI, the topic of the moment,
as it is absolutely clear that this massive revolution and the way in which Europe is going to position itself in it will be pretty much “make or break” for the Continent. The AI conversation globally, but in Europe in particular, has multiple strands – we won’t be able to discuss them all today, although I have a very versatile guest who’s able to talk about almost anything in this space. But we’ll take a few particularly salient themes: first, something which is of global interest, the whole discussion around “end of work”, “work apocalypse”,
is AI going to kill the labour market? – and, relatedly, the “end of software”: what does that mean, and what is the power of agentic AI? Then we’re going to have a conversation about the position Europe should take in all of this, what Europe should bet on: sovereign AI, sovereign industrial AI… And then, to conclude, a discussion around power – again very salient in this space – governance, and the role of regulation.
I’m especially delighted to have with me to discuss all of this Luis Garicano, who has been described as “not your usual academic economist”. He is a fully fledged academic – a Professor at the London School of Economics – but he was also for three years a member of the European Parliament, so he has had the opportunity to see how the policy sausage is made from the inside. And most importantly for our purposes, he is someone who has written and spoken very extensively about AI. He has a very popular Substack called Silicon Continent, which has a huge following. And he is writing a book, which is coming out in May, I believe. Luis, welcome.
LG: Thanks very much. It’s a pleasure.
CC: Just to break the suspense, your book is going to be called “Messy Jobs,” and you can tell us about it in the course of this discussion. But I want to start by framing the issue a bit.
Right now, the labour market implications of the AI revolution are very much in discussion. There has been a serious paper published at Stanford, by Erik Brynjolfsson and others, called “Canaries in the Coal Mine”, which has done the rounds – a very well-known paper, the first in-depth study of the impact of generative AI on labour markets. It does show, based on extensive data, that the younger generation of software coders is being disproportionately affected by AI. Then there have been a number of pronouncements from Dario Amodei of Anthropic and Mustafa Suleyman at Microsoft AI, again pushing this notion that we are going to see the extinction of certain types of white collar jobs in the next one to five years. All of this is also echoed in the policy dimension: there was a post by Alexandra Geese, the Green MEP, on the stats in Germany. She says, “In February, more than 61,000 IT professionals were registered as unemployed in Germany, up 23% from the previous year,” and “postings for computer science masters have collapsed by 46% since 2023”, etc. Yann LeCun disagreed: “Amodei and Suleyman, don’t listen to them”… And now I’m coming to you: you have an issue of Silicon Continent last week that responds to all this and anticipates the arguments in your book. So let me give you the floor – why do you disagree? And I know your take is also different from that of Alex Imas, who says, “Well, if everything is going to become so cheap, if the marginal cost of doing all of this is going to decline, then we’re going to see a shift towards more relational employment types – more nurses and personal trainers.” That’s not a take you agree with either. Tell us how you see it.
LG: Okay, that’s a broad question, I’ll do my best. So first, on the position of our overlords, the tech masters of the world, the Dario Amodeis of the world. I think they understand the technology much better than either of us, so I’m not going to dispute their view of the technology. But I honestly think they have no idea what people do in their jobs all day. They think a project manager is somebody who sits there staring at charts. But when you do some renovations in your home, for instance, you need somebody who comes in and tries to get the electrician to work with the carpenter – who hate each other, and who both want to come at the same time – somebody who is solving conflicts, getting stuff done, keeping the project moving. There is a lot of work that is not digital information processing. The point of my “Messy Jobs”, and the point of the piece last week on Silicon Continent, was to say: look, there are two basic arguments. Alex Imas’ idea, which he laid out in a beautiful post on his own blog, is all about the demand side. It’s the demand for humanness: human beings will always want some work done by humans. Hence there will be artisans – that’s one side. And I agree with Alex: as the price of digital services compresses, the sectors that are human-intensive expand, because people’s incomes go up and they want to spend on that kind of thing. We know it empirically: when people grow richer, they want to spend on holidays, restaurants and things that are human-effort intensive. So that’s one argument, and it’s a good argument.
The argument I’m making in Messy Jobs is a different one. It’s the supply side. It says that, regardless of whether people have a taste for humanness, there is a lot that human beings do. I talk about two aspects which I think are important. One is the idea of weak and strong bundles. Jobs are not just individual tasks, such that you take out the task and the job disappears. Jobs are bundles of tasks. There is a lot that goes into being a radiologist. We start the book with the famous Geoffrey Hinton prediction of 2017: don’t go into radiology, there won’t be radiologists in the future. Why? Because he thinks the job of a radiologist is to stand there staring at x-rays and deciding: yes, this is cancer; no, this is not. That’s not the job, right? The job involves talking to the other doctors about the diagnosis, talking to the patients. It means deciding on treatment plans. It means training juniors. There is a lot that goes into the job, and there is a lot of shared context. So when you take the diagnosis out, you don’t empty the job of content. The content is still there. A “strong bundle” – a bundle that holds together – is a set of tasks that doesn’t fall apart even when you automate one of those tasks. As a result, automating that task doesn’t make the job weaker and less useful. The job actually gets complemented and becomes stronger. So that’s the first idea: strong bundles, and how technology makes a strong bundle more productive.
The second idea has to do with authority, with the implementation of decisions in organizations. Why do we meet all day? What is the manager in the company doing? Yes, part of it is processing information, and part of that is going to get solved. But think of the morning run for a parent at eight o’clock in the morning. Suppose you have the best AI in the world and you’re a parent. What is the AI going to do? Well, maybe it’s going to schedule your day better, maybe it’s going to send some emails to the teachers about the kid being sick, fine. Is your eight o’clock morning going to be different when the kid starts a tantrum and says, I’m not going to the swimming pool? What do you do as the manager of the household? You talk to the kid, you tell her: hey, come on, yes, we’re going swimming, you need to keep swimming. Then you go running to the kitchen because the milk is spilling over, or something. There is a lot going on. That is not information processing. It is solving conflict, using authority, taking responsibility for decisions, being there. And a lot of that authority has to do with the fact that contracts are incomplete: no one can say, here are all the things that will happen, because nobody knows what the world is going to look like. It has to do with things that are hard to enforce. It has to do with relationships and trust. You cannot buy trust. I come here because I know you and you know me; you know what I’m not going to say, and I know what you’re not going to ask, whatever it is. There is a long relationship that you don’t buy and you don’t sell, and that the agents don’t have. All of those things together make the job messy, and mean that this view of Amodei’s – that the consultant is somebody who just clicks through PowerPoints – is not what the consultant does.
CC: Okay, let me challenge this. Of course I agree with much of what you said. But agentic AI is evolving extremely fast, doubling its capabilities every two to three months. I know people who are today doing things that were unthinkable six months ago. They are running an entire business with an AI agent. And it’s not just the process flows and the well-defined tasks that are being taken over by agentic AI. This is all sounding very sci-fi, but I know there is a strong view out there that goes beyond the notion that what you can automate is process flows. Agentic AI is going deeper and deeper into tasks and will soon be able to optimize across these more complex decisions. So what I’m seeing is an incipient and very unknown world in which these capabilities are developing. If you look at the last year and a half, we’ve gone incredibly fast from just using ChatGPT at work and asking a few questions, to process-flow automation, to now going into this unknown in which you have models that can create marketplaces in which conflicts are resolved. I’m not sure that is happening today, but I hear a lot of people saying this is radical, that it’s pretty much where we are going. How do you see that?
LG: I agree that what we have seen since Christmas with Claude and Codex – the move from chatbots to agents that you’re pointing to – is extraordinary. Those of us who do research and use it as a tool every day keep being blown away by what it can do. As you said, in situations of ambiguity, situations where judgment is required, if you give it enough context and ask it to make a good decision, it tends to make a good decision. But there are limitations, even if the context windows grow – meaning the amount of stuff you can cram into the current working memory of the system. Even if that grows, the truth is these things still don’t learn. You first train the LLM, then you deploy it; you don’t actually have a machine that is continuously learning. Yann LeCun has famously been very critical of exactly this feature: they don’t learn from the world. They learn from words – from our words, from text – they don’t yet have what are called world models. So there are limitations, but that’s not the part I want to emphasize most. Those limitations do mean, however, that there is going to be context, and a lot, that humans will still have to add. As long as you have a human organization, you’re going to need a human making decisions, and as long as you have unforeseen contingencies, you’re going to need somebody to decide which way we are going to turn. And the point about many of these decisions is that there is not necessarily a better or a worse decision. There is a judgment call. Are we going to go in this direction or in that direction? Think of a country deciding whether to go to war, which is the ultimate human decision. You could say, well, the AI should do it. But there are a lot of judgments to do with what our preferences are as a country, and it’s not going to be just processing information and coming up with a decision. Maybe the AI is going to give us a lot of information.
And maybe it’s going to work out how the troops should deploy in battle. But there is a decision that is irreducibly human. So to me, as long as the future is unknown, and as long as contingencies are unforeseen, there is this irreducible role for humans as the place where the buck stops, where the decision is made.
If you tell the LLM, “make a good justification for this argument”, it’s going to make you a great justification for route A. And if you ask it to make a good one for route B, it will make that too. And you’re going to have to make a call: are we going to go A or B? So unforeseen contingencies will continue to exist. Human conflict will continue to exist. You can say, well, what about an organization that is purely agentic – a top manager with lots and lots of agents? Yes, that can exist. And I think we will see a lot of one-person companies, right? You can be one person running the accounting AI, the finance AI, the marketing AI, making lots of decisions. In the end, there is going to be a human making those decisions, and that human is going to be interacting with other humans, because the economy only creates value to the extent that it’s serving humans. So in the end, I think agents are going to be fantastic, but they’re going to have limitations.
CC: So it’s not the “end of work” as we know it. But, to move forward a bit, is it the end of software? We have seen “software extinction”, or at least predictions of it, with share prices falling. Because essentially we are going from a world in which writing software, writing code, was a thing – it was demanding and clunky and bespoke, it required a lot of human hours – to a world in which the production of code comes at essentially zero cost. And that is going to mean quite a lot for this sector.
LG: There is a recent Federal Reserve article, from March 26, which says we are not seeing major drops in jobs, even in jobs you would expect to be dropping a lot – but we certainly don’t see increases. You mentioned “Canaries in the Coal Mine”; there has been a lot of discussion about whether they started counting from a peak, because during COVID there were a lot of digital jobs that were not as necessary once we went back to the real economy. But the Federal Reserve talks about 500,000 fewer coding jobs than there would otherwise have been. It doesn’t say jobs have fallen; it says the trend has broken. I think that’s fair. And I think the point you’re making is correct, which is that to the extent that companies depended on making complex software that was integrating lots of things,
that competitive advantage is going to go. The competitive advantage is going to have to come from elsewhere: do they have some lock-in with customers? Do they have some data? If they have a data layer, then they continue to exist. Bloomberg, just to give one example – is it just a set of tools on top of data that is commonly available to everybody? If that’s all it is, then Bloomberg will not be able to charge a lot for the terminals. Only the ones with lock-in will persist. To me, an example is my most hated piece of software, which is Outlook. We’re talking about agentic AI, but we’ve been waiting for them to fix Outlook for two decades, and that hasn’t happened. My Outlook is worse than it was two decades ago. So why do they have a competitive advantage? Because my employer – in this case, the London School of Economics – doesn’t allow me to pull my Outlook email and put it in Gmail, the way it allowed me to 10 years ago. So I have to be on Outlook. There is a competitive advantage, and it’s sustainable because it has to do with corporate acquisition policies. Those things change slowly. And I think that’s part of what Dario Amodei and company don’t understand: implementation is a huge barrier.
Take police departments. I did a long study of police departments: I had data from all police departments in the US in 2010 about their IT acquisition. Some police departments had used IT to build CompStat, geocode crimes, deploy policemen, solve more crimes. Many police departments hadn’t. And the reason they hadn’t was that humans make these decisions, and they were not changing the way they do things. We could have electronic administration, and we don’t. So a lot of these obstacles are not technological. It’s not just a question of the software. Can you build a better email program than Outlook? For sure, you could do it tomorrow. So why is Microsoft entrenched? This is stuff you are an expert in, as a competition specialist – for reasons that have nothing to do with the software being marvellous.
CC: Well put. Now let’s move to the next subject, a discussion that is also very, very central in Europe right now. What can Europe do? Where should we really throw our weight when it comes to AI? I think you and I completely agree that the race for LLMs is lost in Europe. We are not going to be saved by Mistral – deserving as it is as a company, and we love it and love to see it – the LLM race is over. There is a narrative that is very predominant in Europe right now, and it has percolated all the way up to the political sphere, which is: okay, we lost that one, but we are going to win the industrial AI battle. We in Europe have special capabilities, we are incredible engineers. We have machinery, mechanical capabilities, pharma and all that. If we combine those unique assets with the capabilities of AI – not an LLM but a narrow model, a next-generation model which will use our industrial data – then we can power ourselves up in the areas where industrial AI will matter, like robotics, and we will win that race. I want to hear your views on that. But before you answer, there is one additional step, which is: “we’re going to call it sovereign AI, because it’s ours”. And that’s the bit I find fuzzier, because we can combine industrial models of this nature with our capabilities, but that doesn’t mean we are setting up new platforms and new models at the foundation level. So when we talk sovereign, what sovereign are we talking about? The discussion of sovereignty – something I’m very involved in, as you know – is about creating the foundations, the infrastructure. That kind of vision is not about foundational infrastructure. It’s implementation: essentially building “sovereign applications” on top of non-sovereign infrastructure.
LG: That’s very, very important. Let me start from the end, which is where we do have an advantage. I agree with you: we have the possibility of winning the implementation race, because we have very good data on health, we have very good pharma companies, we have very good engineering and mechanical engineering companies, and we have that data. That’s a race we can participate in and can win, and we should focus on it. But let me now go back to the bottom layer. The infrastructure starts with a machine that only Europeans can make: ASML’s. We make the machine that makes every single chip in the world – the one that actually etches the silicon with extreme ultraviolet light – in the Netherlands. So you could say, okay, we have the beginnings of something. But we cannot really leverage that, because a lot of the technology that goes into this machine is American. A lot of the patents are American. The light source that goes into this machine is from Cymer, a San Diego company. We cannot really leverage it. The next layer is the chips, and the chips are Nvidia and TSMC. We are nowhere in that race, sadly…
CC: But don’t you see an opportunity in chips? Do you not think Germany will say, hell, we need to invest now in chips for cars and self-driving cars, and we’ll need to power that up? You don’t think that’s speeding up?
LG: I drove an Audi for 10 years – a beautiful piece of mechanical engineering. I had to get rid of it, and promised not to drive one again, because they were trying to use their own software. They wouldn’t just let you plug in Android. They didn’t want to be a dumb pipe. They said: oh no, no, we’re not going to be a dumb pipe for Android and for Apple, we’re going to have our own software. And to be honest, they never managed. To my knowledge, they still haven’t. I was recently in the Mercedes of a very high-ranking German official who gave me a ride to a conference, and the driver was complaining about this software that never finds the way. I was like, okay, let me use my Google Maps. So… chips. Chips are not something we are necessarily going to be able to do. We should be able to do something, but we can’t, because the level of investment required is too great. And then comes the foundation layer. The foundation is the LLM – the actual OpenAI, ChatGPT. Then there’s the data centre where this foundation layer runs. The LLM race involves training and inference, and training these models requires enormous investments in data centres. Between them, four of these companies are investing 600 billion. Remember, the Stargate investment alone was 500 billion. That’s four times the entire public and private R&D of all of Europe. It’s unfathomable. When you hear Europe talking about putting 10 billion on this, 5 billion on that – we’re not in that race. So let’s take that as a given.
Then what can we do? What we can do is ensure these models are interoperable. You were there at the start of the social media wars. Imagine if the network graph I have on Twitter (I keep calling it Twitter, that’s my little protest) was something I could just move somewhere else, like Bluesky. If this had been interoperable, the market power of this industry would be very different, because I could just send a message and people could see it from Facebook, from Twitter, from Bluesky, from WhatsApp. The whole market power situation would have been very different. We really messed up that race.
Now the incumbents are extracting enormous rents just by being incumbents. Think of Uber – which is providing a huge service to Europe, no question about it, by allowing you to find a way to move from one place to another – it’s extracting all these profits because we’re allowing it to. It’s literally monopoly rents. There’s nothing special in the platform. We have to avoid that. So interoperability is crucial. And this is easier with LLMs, because when you look at how one of these LLMs is wired into an application, there is what is called an API call. The API call says: go to this model, send this, and come back. You can literally change the API call from model A to model B and everything continues to work. So we have to ensure that interoperability. We have to ensure that competition.
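To make the point about swapping an API call concrete: many model providers today expose chat endpoints with an essentially identical request shape, so “changing the API call from model A to model B” can amount to editing one configuration entry. A minimal sketch – the provider names, URLs and model names below are invented for illustration, not real endpoints:

```python
from dataclasses import dataclass

@dataclass
class LLMEndpoint:
    base_url: str   # where the API call goes
    model: str      # which model to request there

# Two interchangeable backends: same request shape, different destination.
# Both entries are hypothetical, standing in for any compatible providers.
PROVIDERS = {
    "provider_a": LLMEndpoint("https://api.provider-a.example/v1", "model-a"),
    "provider_b": LLMEndpoint("https://api.provider-b.example/v1", "model-b"),
}

def build_chat_request(provider: str, prompt: str) -> dict:
    """Assemble the HTTP call; the payload format is identical across providers."""
    ep = PROVIDERS[provider]
    return {
        "url": f"{ep.base_url}/chat/completions",
        "json": {
            "model": ep.model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Swapping providers changes only the routing, not the application code.
req_a = build_chat_request("provider_a", "Summarise this contract.")
req_b = build_chat_request("provider_b", "Summarise this contract.")
```

Because the prompt and message format are unchanged, only the routing differs – which is the sense in which switching costs at the model layer can stay low, provided that compatibility is preserved.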
And if we ensure this competition at the LLM layer, and we ensure these data centres are encrypted and local – yes, they are Azure and Amazon Web Services, they are not European, although they are on European soil – imagine they’re encrypted, imagine they’re local. And then we put all our efforts into developing this implementation layer. It’s what I’m calling “the smart second-mover strategy for Europe”. What does the smart second mover do? It means: let’s use our advantage in manufacturing, our advantage in pharma, our data, our enormous public sectors, to be the best at applications, at implementation, using this data to preserve a competitive advantage. Success would be if our pharma companies can stay in Europe and do AI in Europe. Disaster would be if they have to move to New Jersey, because the only way to do pharma AI is by moving to New Jersey. That would be catastrophic.
CC: I’m with you that at this point we absolutely have a mission as Europe, which is to implement AI at the fastest speed we can. And it seems as if industry is going there. I was at Hannover Messe last week, and there was a gathering of the gotha of German manufacturing, from Bosch to BASF to Siemens and the car companies, very much pushing this idea of industrial AI: we’re going to be sovereign in industrial AI. But this is indeed about implementation, the application layer. I agree this is something everyone needs to push. The concern, though, remains: this is not entirely sovereign. You start with a big premise: if we can ensure interoperability, and if we can ensure that access to the foundation layer remains benign and possible, then we are off to the races. And in fact, I recommend your two Silicon Continent postings on this – one last July, I think, and one in March – they’re very thoughtful, and I agree with much of what you say. But let’s not kid ourselves: what we’re saying is that, ultimately, the overlords remain the overlords, in the sense that the platforms and the models which are going to be the foundation for all of this remain theirs. They worked incredibly hard and spent a lot of money to make them extremely attractive and fluid. How will we avoid rent being extracted? You say: IF we can ensure interoperability. But we failed miserably with every attempt at ensuring interoperability in Web2.
LG: We can’t repeat that failure. A couple of things. We have one success, which is PSD2, financial interoperability. Banks are forced by European regulation to share the customer’s data with startups (if the customer consents), so that the startups can enter. We did do it once, and we’ve had a very successful financial ecosystem. So it can be done. Now, the LLM race is very interesting, because first of all we are not seeing the lock-in. I am not saying it won’t happen – memories might start developing – but the lock-in is not happening at all. We go from “oh, incredible what Gemini did, it’s so far ahead”, and then boom, one week later, Claude; one week later, Codex; and so on. This is changing all the time. The open models – in this case, sadly, many of them Chinese – are six months behind. But what these open-weight models give you is credible threats, right? They’re not the frontier, and maybe the value of being at the frontier is so enormous that six months is huge, but they at least keep a level playing field. They keep down the ability of these guys to send prices to the sky; they keep it limited, because people have an alternative. To the extent that we have an open-weight ecosystem, and to the extent that it’s not that difficult to just change the API call that determines which LLM you’re going to use, the switching costs are much lower than with a Facebook where everybody wanted to be. Obviously we would all wish to be able to send a WhatsApp message such that it arrives on somebody else’s Telegram app, but that didn’t happen. I think it will happen here. Yes, it’s clear there is gigantic value created by these people. You’re right, they’re making a big effort to build good scaffolding to make sure it works very well. But it’s not clear the value capture stays at that layer. And if Europe can be smart about the value capture…
But I agree with you: we have failed. Let me remind you of two pieces of legislation I’ve been critical of in the past, the DMA and the DSA. Those are two pieces we haven’t used, and they give us a lot of bargaining power if we want to use them. The fines you can impose under those pieces of legislation are very significant. So a lot of these are tools Europe has and can deploy. Now, we know from Turnberry that Trump does not suffer European fools gladly. We’re going to have to find our resolve, but this is existential for us.
CC: I agree this is existential for Europe. And I hear what you say about the potential of the technology to prevent rents being sucked away the way we’ve seen happen in the previous world. But on relying on regulation, I’m very much a skeptic. I am very much a pariah in Brussels for saying this, but look, the reality is, we can all stand here and cheer and say the DMA and DSA are potentially very lovely. They do nothing. They’re still talking about Google Search. Could they possibly give access to the search data? This was a problem I addressed in 2012 for Microsoft, when they were complaining about that very same problem. We’re nowhere – even before you get to Mr Trump. So I’m very radical on this: I don’t think regulation is useful for any of this.
LG: I am of a similar view as you know.
CC: Yes, but what you’re saying is that the hope is that the value captured by the handful of people who have put all of this investment in might be lower, if we are smart enough to create the conditions for capturing the value ourselves through the applications. But doesn’t the resilience piece also worry you? How exposed are we? Because, as you know, much of the narrative behind sovereignty is that we’re exposed to weaponization of those layers. They can be weaponized against us by an administration that is not favorable to us.
LG: Yes, we remain exposed to that. Let me separate two things. First, let me just be clear on regulation. The main impact of regulation up to today has been to hamper European companies. So I’ve been very critical of the AI Act. The annex – Annex III, I think – makes high-risk every application of AI to education, to health, to many things. It’s insane.
GDPR has clearly hampered European companies without really doing much to help. So regulation has to be used to help Europe, rather than to hamper it. The Brussels Effect doesn’t exist. We’re not the regulator of the world.
CC: It’s the dumbest idea anybody ever came up with. The Brussels Effect is the idea that, oh, we are so important, we put out the rules for everybody, and all the rest of the world will have to follow. Of course it was so popular, because it was flattering to the bureaucrats: when somebody tells you that you have this sexy effect, that you make the rules for the world, you are very happy.
LG: Yes, you are a member of the European Parliament, and you’re thinking: oh, I’ll make this rule, and I’ll rule the world. No, you won’t rule the world. You’ll just mess up Europe. That’s what ended up happening. So on regulation, that’s my view. I was just saying we have tools we can use in a bargaining game. It’s a hard game. Now, to put your question in the most extreme way, to make it sharp for our listeners: what about the kill switch? Are we not exposed to Trump saying tomorrow: you know what, you guys are relying on Azure, on Amazon Web Services and all these – I think 60% of European data is on those three big American clouds. This possibility of Trump just killing us in one go exists. And I think the most obvious one is payments. We all do our payments on Mastercard and Visa, and, I mean, this is amazing, right? They just did it to a judge in The Hague, who suddenly found he couldn’t use Mastercard and Visa. The Americans could literally say: you’re no longer using payments. Can you imagine what that would do to the European economy? It’s unthinkable, right? We have built our economies on the assumption that the Americans were our allies. If we have to assume the Americans are our enemies, a lot of what we’re doing doesn’t make sense.
CC: Take Greenland. A Danish pension fund's assessment is that if American banks were weaponized against them, cutting them off from collateral and cash-management services, they would be gone in two weeks.
LG: So I've been thinking a lot about this second-mover question. Let me separate two things. Let's say we do need some sovereign cloud for certain government functions. I don't think we can run our defense on Azure. And it's true that Microsoft reaches deep into our economies, because everybody is on Outlook and Office, which means everything is to some extent on the Microsoft cloud, even if people don't realize it. All ministries are on Microsoft, whether they want it or not. But let's say that, little by little, we can move, at least for those functions. We're beginning to see it.
Now, for the rest, we have to accept that we are relying on American clouds. What can we do to make that reliance less hurtful? The clouds being built in Spain and the Nordics are not sovereign at all; they are Amazon and Microsoft. We can at least make sure all the data is encrypted; we can at least make sure the data stays in Europe; and we can at least make sure it all remains movable from one cloud to another. These are easy things to say and harder to do. But encryption, I think, is happening, and localization is a requirement that is now in place. Though it is true that the 2018 CLOUD Act in the United States allows the US government to get access to the data held in these clouds.
CC: This is a fact. It is what Microsoft had to admit before the French Parliament under oath. There is no way around it: this is federal law, and they are an American company. That's why the notion that we need sovereignty is very dear to my heart. People use emotional words like decoupling; I'm not suggesting any of that. We need to create some capabilities that are really ours, not just sovereignty-washing.
LG: I think we are very much on the same line. What I'm trying to say is this. First, let's not shoot ourselves in the foot by writing regulations as if we were the regulator of the world. On that I think we should all agree. Second, let's not shoot ourselves in the foot by trying to invest in a technology we're never going to get. Let's focus all our efforts on the implementation layer and on availability. Is this a sovereign solution? If it gives us a solid shot at capturing value, that's sovereign enough for me.
CC: Well, indeed, to me much of the discussion around sovereignty is not about the "kill switch"; it's about the productivity growth that we ultimately need to power up.
And the productivity-growth gap between the US and Europe is not even going to narrow if we don't invest in our own tech. This is the really important point. We absolutely need to power up investment, and to the extent that we can create layers or pieces of this infrastructure that are ours, that contributes.
LG: Yeah, and if the rents get extracted, that growth is going to go somewhere else. I agree with you; I'm very much on the same line. At the end of the day, it's about growing, creating prosperity for Europe, and creating prosperity to pay for our growing welfare states. The only countries in Europe that have growth as an objective right now are the ones closest to the border with Russia. They are not thinking about increased pensions; they are thinking: we need to invest because we're in trouble. The further you are from Russia, the less focused on growth you are; it's more, let's just enjoy the sun, life is nice. But yes, growth is the objective, AI is the tool that is going to deliver that productivity growth, and the way to do it is through implementation. So I very much agree.
CC: Last part of the conversation, and I think it's an important one we haven't touched on yet. You and I are on the same page: the important thing for Europe is growth, and we need to run at it. But there is a whole school of thought that finds this entire structure very worrying from the perspective of what kind of world we are growing into. We know this infrastructure is essentially funded and powered by a narrow set of oligarchs. And it's not just the "traditional platforms", which for all their misdeeds can be perceived as slightly more benign, the Microsofts and the Googles; there is a whole generation of oligarchs, Musk, Alex Karp, Peter Thiel, Andreessen and so on, perceived as quite sinister, with quite undemocratic objectives in many ways. There is a whole discussion out there about what kind of world this AI is taking us into. Instead of democratizing the use of resources, governments, in the US, are essentially pushing them into the hands of this very narrow oligarchy, which will use them to make enormous bets and enormous amounts of money in ways that are not good for the kind of world we want to live in. This is widely discussed by people like the AI Now Institute, Brian Merchant and Karen Hao, and they are saying things I relate to.
But at the same time, the realist in me says: it's over, this isn't going back. I don't know what a handful of well-meaning civil-society think-tank people are going to do to stop this, or how they would implement the governance they talk about. This is my ultimate concern. When I hear people saying we need to prevent this from going in this direction, from taking agency away from us and dragging us further into techno-feudalism, it's all very important. But what exactly are we going to do, and with what tools? Because the regulators we have in Europe are nice people who try hard, but we have shown we cannot regulate making pizza in the kitchen. So how are we going to stop, or impose governance on, this train?
LG: That's a great question. Social media was a very democratizing force: we could hear everybody, and we all thought it would bring people power, like the Arab Spring. It didn't work, and the reason was not only the oligarchs running it, but that it fragmented the public and allowed everybody to have their own truth. One thing we are noticing about LLMs is that it's very difficult to make a manipulative LLM that is also useful. If these companies want to sell compute and intelligence, they need to bet on making these machines intelligent. When Musk tried to make his LLM anti-woke and political, you remember his MechaHitler? There was a time when Grok was saying crazy Nazi stuff and he had to shut it down. He discovered that if you try to put ideology into this thing, it goes off the deep end. So LLMs might bring us back to a common, shared view of what is true, and not necessarily a manipulated one, because they get their value from solving problems.
And so in some sense, I think LLMs are a more benign technology than people believe. I also think some of the labs running them, Anthropic for example, are if anything too concerned about all of this. They have always been very concerned; they have a constitution. In fact, they shoot themselves in the foot all the time by exaggerating the risks. So it's not necessarily the case that that dark future is the one in front of us. I do share the view that there is a generation of rather scary oligarchs, as you mentioned. But I don't think the technology, at the moment, gives them the possibility of manipulating us the way other technologies did.
The solution, again, is not regulation. It's the same one we arrived at when we talked about Europe: competition and interoperability. If we manage to keep this from tipping towards one system to rule them all, if we do our best to keep the data movable and the APIs open, and preserve our ability to keep choosing, that is the best we can do in terms of governance.
CC: I don't disagree, but the question is what kind of competition we can have when they are telling us directly what they have. I was reading the interview Thomas Kurian, the head of cloud at Google, gave to Ben Thompson, and he was bragging about the massive advantages of integration these companies enjoy: they have every layer of the stack and can run it in integrated ways, and now the cyber piece is integrated too; see Wiz. So yes, competition could save us, but we still have a very, very small handful of people with the resources, the lakes of data, the forests of GPUs, the amounts of money, and they are in bed with each other; you can see they are all buying from each other. Google invests another X billion in Anthropic, and then AWS does the same. If there is an interlocking-directorate kind of structure, it's that one.
LG: Yes, I'm not completely relaxed. I see three advantages in what you mentioned, which could potentially be sources of monopoly power. First, as you said, security could be a big deal. Right now, the big advantage of the data centers is security: running, like Amazon Web Services or Microsoft, thousands and thousands of instances of everybody's programs without data leaks is out of reach for anybody else. No entrant is ever going to replicate that. So security is a problem, and that's why I think the data-center layer is very hard to contest. The second switching cost could be memory: as these things start to remember you and know you, you get stuck, and it's hard to change.
And the third, of course, is data. If you have the data, you have the power. We do have the data in Europe: the health data from our health systems, which are fantastic, our manufacturing data. If the software layer gets commoditized, a lot of the advantages you were talking about from Web 2.0 go away, and it's the data that becomes the advantage. To the extent that Azure, AWS and the other data centers are local and encrypted, the data is our advantage; we can rely on that. And to the extent that the LLM layer is competitive, we can rely on that choice. I think there is a scenario where Europe moves fast on the technology and keeps control of that final step. That is my hope. But we are nowhere near where we should be in this race, because we have spent years sabotaging ourselves. The biggest enemy of Europe is Europe, not the tech oligarchs; it is we who keep putting stones in our own path.
And we need to get our act together. If we do, I am very hopeful. We are not some little power; we talked about that at the start. We are the second-largest consumer market in the world, and we should not accept this slide into irrelevance, this slow death into the darkness, as if it were inevitable. It is not inevitable. Europe has been for 400 years the main innovator and technological power. Even in the late 70s, Europe was the main tech power in the world: in nuclear, in trains, in planes with Airbus, in cars, in chemicals, in everything. And those are formidable technologies. Even 50 years ago, we were the leaders.
CC: Of course there is a power law at work, which has meant that every year in the last 15 that we have sat on our backsides, others have gone faster and we have not. But I'd like to end on this very positive note, because we've run out of time. This point really speaks to my heart, because I personally feel very strongly that Europe is NOT a middle power. Europe is a superpower, and we need to act like one; this is something I always say. So your hopeful note at the end, that we may fail but we don't have time to fail, that we shouldn't fail and should believe we will make it, is the note I want to end on.
LG: All right. This was a wonderful conversation. Thanks, same here, it was a lot of fun. Thank you.