Redefining Society Podcast

Shall we play a game? How about End of The World 2075, and then we can talk about exploring intelligent models to preserve the future of humanity? Ok! | A Conversation with futurist Trond Arne Undheim | Redefining Society Podcast With Marco Ciappelli

Episode Summary

Joining us on this episode is Futurist Trond Undheim, founder of Yegii, and a Research Scholar at Stanford University. Trond's work delves into the complex interplay of evolving technology, geopolitical economy, and fragile ecosystems. By developing intelligent models and assessing cascading global risks, Trond aims to preserve the future of humanity and our planet.

Episode Notes

Guest: Trond Arne Undheim, Founder of Yegii [@Yegii_Insight] and Research Scholar in Global Systemic Risk, Innovation, and Policy at Stanford University [@Stanford].

On Linkedin |

On Twitter |

Website |

On Facebook|

On Instagram |

On YouTube |


Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine |

This Episode’s Sponsors

BlackCloak 👉

Bugcrowd 👉

Devo 👉


Episode Introduction


Welcome to another thought-provoking episode of "Redefining Society" with your host, Marco Ciappelli, the show where we navigate the crossroads of technology, cybersecurity, and humanity, contemplating the philosophical questions that shape our existence in a world marked by continuous transformation.

Today's episode invites us to embark on a journey where the future is not a distant concept, but an immediate reality. We find ourselves in a Hybrid Analog Digital Society, where the fabric of our lives is woven with the threads of technological advancement and human resilience. The call to stop ignoring or pretending that technology is not affecting us is resonating across our collective conscience.

Joining us is Futurist Trond Undheim, founder of Yegii, and a Research Scholar at Stanford University. Trond's work delves into the complex interplay of evolving technology, geopolitical economy, and fragile ecosystems. By developing intelligent models and assessing cascading global risks, Trond aims to preserve the future of humanity and our planet.

Through this candid conversation, you will hear not only about predictions and patterns but also about the way history informs the future, and how Trond and others like him are reading the tea leaves of humankind's past to illuminate potential paths forward.

So, pour yourself a coffee, settle in, and allow yourself to be challenged and inspired by this conversation that transcends mere observation and dives into action research. We're not just watching what's unfolding; we're exploring ways to shape it.

If this topic resonates with you, don't keep it to yourself. Share this episode with friends, family, and colleagues, and encourage them to think, discuss, and question. Make sure to subscribe to "Redefining Society" for more enlightening conversations at the intersection of technology, cybersecurity, and society. Let's face it: The future is now, and we must embrace it together.



Yegii |


To see and hear more Redefining Society stories on ITSPmagazine, visit:

Watch the webcast version on-demand on YouTube:

Are you interested in sponsoring an ITSPmagazine Channel?

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time we provide it "as is" and we hope it can be useful for our audience.


[00:00:00] Marco Ciappelli: All right, here we are on Redefining Society with me, Marco Ciappelli, and I am excited to talk about very uplifting topics today with my guest, Trond. I'll let him spell and pronounce his last name. He's joining us from some undisclosed location in Norway, I understand. Trond, how are you doing today?  

[00:00:25] Trond Undheim: Oh, I'm doing great. 

Uh, the, the location is not undisclosed in Norway. I was just saying that, uh, I've been traveling so much that I don't even remember where I am. I think I'm in, uh, actually I think I'm in Boston right now, but, uh, as I was saying, I spend my time a little bit between Europe and the US, but, but also since, you know, I work at Stanford and, uh, I also spend some time on the East coast. 

So, in between.  

[00:00:46] Marco Ciappelli: Sounds familiar to me. Uh, we were talking before we started recording. I just came back from Europe myself, and I woke up this morning and wasn't sure what kind of, uh, drink to have, like an espresso or a caffè lungo or, you know, an Americano or something like that.

[00:01:02] Trond Undheim: Exactly. Well, the important thing is to have some coffee drink, uh, to, to, to adjust for all these things. 

[00:01:08] Marco Ciappelli: That's the key. And so with that coffee drink, uh, I was mentioning, you are what, uh, you know, we define a futurist, meaning, uh, that's not somebody who travels on a time machine. Although if you have one, you're more than welcome to share it with me. Uh, mine doesn't work. I kick it all the time, but...

Doesn't go anywhere. In your case, you do a lot of research. You create a lot of scenarios. As you mentioned, you work with Stanford University. I'm familiar with, you know, what people like you do: it's not really traveling into the past or the future, but really trying to, uh, predict as much as we can what can happen, and with a very wide-angle approach.

[00:01:52] Trond Undheim: So yeah, you know, it's very gentle of you and nice of you to call it prediction. Uh, in the futurist community, of course, it's not very trendy to call it prediction. So we like to think of it as scenarios, because clearly no one really can predict futures. But I, but I also don't want to step back 100% from that, because it is important to have, uh, ideas and concepts about what forces are shaping the future. And I do think it's possible. And, like you alluded to, it all comes from the past. It comes from looking at the past. So you'd be surprised to know that as a futurist, I spend an enormous amount of time reading history.

[00:02:29] Marco Ciappelli: And I am not surprised, I'll be honest. 

[00:02:32] Trond Undheim: Yeah, because you know, you can't predict the future based on the future if you don't have a time machine or you are not some sort of deity. Um, so, you know, all we have really is patterns. So we're reading tea leaves, and those tea leaves, they, uh, transfer through time, right? And they, they, um, you know, get encoded and debated, and you just have to read those tea leaves, uh, you know, meaning the history of humankind, in a different way and try to infer what that might mean.

And to your second point, about how my topic is the future: lately I'm working a lot on risk, and also existential risk, which is a really serious topic, meaning, you know, what are some of the very, very big forces that are shaping the world in potentially negative ways. But the way you can always also look at it is as mitigation, right?

So we're not in the business of predicting how the world will end. What's interesting here is to figure out how we can avoid that eventuality from even, uh, you know, becoming a possibility.

[00:03:39] Marco Ciappelli: And that's a very philosophical approach, meaning not just to be a mere watcher, or, you know, okay, this is what is going to happen.

But what can we do to change that? So

[00:03:50] Trond Undheim: this is action, action research,  

[00:03:52] Marco Ciappelli: right? Exactly. So before we get into that, let's look a little bit into your past. Not too long, you don't need to start from when you were born or anything like that. But how, your career, your, your studies or whatever it was...

How did you decide that this was what you wanted to dedicate your life to?

[00:04:12] Trond Undheim: So I'll tell you, it's relatively recent that I realized that a lot of my work circles around risk, but I have for most of my career been interested in this relationship between technology and society. And as I said, with, with the history in mind, but always kind of looking towards changing things,

and, and with that, hopefully changing the future. So that kind of action is something I've always been interested in, and, you know, I've been interested in entrepreneurial things and have done, uh, startup and innovation related things, uh, both on my own, successfully and unsuccessfully, and also helped others.

Uh, I was working with, uh, you know, thousands of startups at MIT, the Massachusetts Institute of Technology, trying to help them launch and scale their startup companies. And most of them had, uh, you know, as a motivation, not just, you know, I'm going to earn the most money in the world, but they wanted to change the world.

So this keen interest in how technology, as one force, really has an outsized opportunity and potential to, uh, change the trajectory of humankind has been something that I'm really interested in over time, how it all kind of works out. And also, uh, lately, from the risk perspective, I've been interested now, in my work at Stanford, in how, uh, some of those things where we want to make changes too fast, they are risky.

So there's this trade-off between trying to innovate and wanting to sort of leapfrog and solve things. But of course, inadvertently, in all of this innovation, we have to make certain shortcuts, and that's one source of these problems, these risks. And now we have other anthropogenic risks,

so the risks stemming from people's actions, and, you know, climate change comes to mind, other things that aren't necessarily directly related only to technology, but they have to do with our lifestyles. So how did I get to this? I think from having a very, very broad set of interests and never being the best at any one thing, but sort of dabbling in most things, in many different topics, and coming from an academic family that always brought very engaging people back to the house.

Discussing all kinds of things. I think I was probably in a very strange friendship group when I was young. We would, you know, not go out and, you know, drink a beer on Fridays, but we would sit in like salons, essentially, in our parents houses and just discuss all night. So, I guess a nerd, like a very, very... 

nerdy background.  

[00:07:01] Marco Ciappelli: It sounds to me like the Salotti of the 1800s, right? With the musicians and scientists and literary people.

[00:07:08] Trond Undheim: It actually was quite a bit like that, when I look back on it. We were musicians and intellectuals at an early age. Uh, obviously it must have looked very strange to the rest.

[00:07:21] Marco Ciappelli: Those are the dorks right there. 

Listen, actually, I think it's a very important point, and then we'll go into something more specific about what you do. But this idea of curiosity about many different aspects of life, and, again, it comes because there's a synergy of so many things. I sometimes talk with people that have organizations focused on One Health

for the planet, where you connect the environment, you connect, uh, animal health, as well as, you know, of course, the pandemic kind of raises the thing, but, but in general, this concept that if something happens here in the world, on this planet, it has repercussions. So I kind of see this as a very important trait of any scientist, to be honest, nowadays.

Now, it may have been very hard until now to put a lot of knowledge, a lot of data in one, you know, blender and get an answer. But I'm assuming, I'm not saying it's easy, but I'm assuming it's getting a little bit easier, maybe, to get all this data that we're able to collect nowadays and make scenarios.

So how does this magic work?

[00:08:37] Trond Undheim: If you're referring to the magic of how to create, or how I try to do, scenarios? Yeah. So you're talking about how to create future scenarios? Yes. Well, look, yes, it has gotten easier. You know, it's all about combining different data sources and sort of, you know, establishing patterns of change that you think are acting on the world. But then, you know, creating scenarios for what might happen, you know, 5, 10, 20, uh, in my case, uh, right now, 50 years in the future, it does take a lot of imagination.

It's also a literary craft, which I take very seriously, you know, so it's a narrative endeavor, uh, because what you're trying to do is you're trying to model behavior, or describe, and also have people empathize with, real human beings that could exist and could execute those actions, that could feel the impact of what might be, you know, the next 50 years.

But, but I also think it's very important to keep in mind, you know, the limitations of the data. There's now a big interest in my field in the very, very far future. There are people, on, uh, podcasts and otherwise, thinking seriously about the next, uh, 1 billion years of humanity. And when you have that timeframe, you've also got to realize that you're not then working with certainties from science.

You're working with purely speculative things. They are not less important. But it is important to keep in mind that, you know, as a futurist, you have to make a choice. Are you going to take science as a departure point? Or are you actually going to sort of just paint pictures

And, uh, describe fears, which could be entirely legitimate, but it's just a very different type of endeavor. And I think a lot of the AI debate and some of the hype right now, unfortunately, falls for me into that category, uh, even though it is a super serious topic and demands regulatory attention.

And it is something where obviously it's awesome to see some of the, uh, interim results that we've seen this year with, uh, sort of, language models and other things, we can maybe go into it. But, you know, to sort of have this certainty that this particular year is the moment where we all have to pause and the world could end...

I'm not that kind of futurist, that with some certainty of that sort can make that claim. That's why I never signed that letter that people have signed to say, you know, this is the moment. I just don't see how we could get to that level of certainty, uh, and timeline. So there are so many issues we could point to.

Um, I work across areas of risk and opportunity, right? Mm-hmm. So Marco, that means I work on climate change, I work on AI risk, I work on synthetic biosecurity risks, um, and on nuclear safety, and, uh, a plethora of other things that could, uh, you know, occur to us. I mean, even, uh, space risks: risks from space exploration, uh, the potential of, uh, you know, alien, uh, contact because of reaching out into outer space.

There are so many issues today that perhaps were theoretical a long time ago, and until even fairly recently. But it feels like some of these debates are now coming together. People who would have never spoken about them as either opportunities or risks are now starting to do so. So my overall kind of focus, uh, to be honest, is something that we kind of label as cascading risks.

Which is this idea that, it's not all mixing together, don't get me wrong, but we are trying to discover the rules of the game of this system, where risks are not independent. We're not dealing with, you know, AI risk, where you put it in a box, figure out what it does, and then, you know, put it down again and everything's done.

All of these things are interacting. And that's the system, uh, that we're trying to figure out. Uh, the problem is, of course, when you have such a wide lens, even if knowledge is now more present than ever, and there's so many people and sites and sources of knowledge to draw from, it is still highly complex. 

And it is a politically fraught minefield, it is intellectually challenging, and it is risky, even as a career move, because these are such big issues that whatever opinions you have, they have real consequences, like some of the things we might talk about in terms of interventions or mitigations.

They're going to dwarf the Marshall Plan in terms of economic cost. They are going to destroy entire businesses and lines of business, perhaps the financial system that we are, uh, you know, building now. So the implications of the things that I am studying, uh, they're gargantuan.

[00:13:56] Marco Ciappelli: Yeah. Uh, now, it's a process.

I need a, I need a model to decide what to ask you next, because we could go in a lot of different directions. No, but let's take one thing at a time, and you can stick to one. Yeah, absolutely. Look, I would say the AI, it's, it's something, especially with generative AI, that now is in the news everywhere, and, and, uh, you know; there are strikes from the acting community, the writers, and so forth.

I'm, I'm one of those with you from a philosophical perspective. I, I think we should worry about it. I don't, I don't see it personally, as you don't, as something that is happening today, but it's good to do something about it. I have some knowledge about it. I guess my question here could be: with all these choices that you have, with all this interconnectivity, I think the big question that the audience may have is, how do you pick the one that you feel, and by you I mean the entire community you work with, is more relevant right now? You go, this is really where something is going to hit the fan earlier than the others? Or, yes, but this is connected with another thing that maybe it's not in the front line, but maybe affecting that?

So what's the process of, say, prioritizing? World Economic Forum: we're going to talk about this, this, and this.

[00:15:21] Trond Undheim: So to be honest, we are at very early stages of this debate. You mentioned the World Economic Forum. They have had, you know, risk reports coming out every year for a while now. But in those reports, they're looking at more short-term risks.

So they're worried about risks that might affect political developments or, uh, large business over the next year, right? So that's the framing of those reports. And while that might be interesting, you know, they typically hit a maximum of 10 different risks, and they kind of grade them every year.

And some risks, you know, cybersecurity has been high a couple of years. And now AI risk will undoubtedly show up in, you know, this year's report. And there are other risks; you know, environment was high a couple of years ago, but they can't have it on top of the list every year. That becomes a little bit of a game,

you know, uh, where, you know, it's like competing individual risks: which one is more important this year, and then what are the top 10 that seem to sort of always be there? My research community is more divided than that. So there are some people that choose to focus on individual risks.

So they're either AI risk experts, or AI opportunity, uh, you know, evangelists, or they could be climate change deniers, or they're climate change, uh, you know, evangelists, or sort of warriors. Um, and on this, uh, bio side, there are people who, you know, have invested their entire career in thinking about biosecurity, or biology as either an opportunity or, you know, the negative consequences, you know, for pandemics, in the public health field, uh, or in the infectious disease field, you know, similarly, for very good reason.

You have to specialize in something, but I guess I represent this second, third generation of researchers that say we cannot afford anymore to specialize at that level only. We have to rely on those experts, uh, but we now have to build a system science that's a little bit wider. And system science has a bit of a bad name, because the moment you say systems, people think complexity, and they think impossible and impenetrable and hard to convey and possibly not very scientific at all, because it's just all arrows, you know, and chaos. So we're trying to strike a balance here between finding and identifying the different challenges, risks, that we should worry about, and not over-extending it into thousands of different things that you just, with our current systems, cannot discuss meaningfully.

I don't think the number of risks is one to five. So, for example, I don't think right now we should worry that the world will end with an AI taking over, and I don't think we should worry that some superbug, uh, pandemic, whether synthetically created or not, is going to end the world in and of itself.

What I do worry about is that when many, many risks seem to occur with closer and closer proximity, it takes our eye off the ball. It means we might be preparing for the next pandemic by preparing for the past pandemic. And while we're doing that, we don't have enough resources to do the right things about climate change dynamics.

And while doing all of this, we might be discussing how to stop, you know, AI developments. Meanwhile, business is running as usual, and, you know, policymakers are, despite this hype, perhaps not even equipped to do what it might take to start to control AI development, which I think is very, very necessary.

It's just that it doesn't quite rise to the existential level for me. I'm not worried that civilization might end in the fall. I am, however, worried that if we don't get a grip on this technology, it will be embedded in so many processes that at some point we don't have a choice anymore; you'd have to shut down society

and, you know, famously take a break, but, you know, that's not very realistic. So the reason we have to deal with it is not that it is an existential risk right now. The reason is that it is slowly penetrating so many places, and we just don't have full control over it. So, in essence, interacting with other risks, it does become a problem.

But it's not an impending risk right now, where there's a true risk of some computer taking over the world. Um, many people will disagree with me on that. Um, there are Times articles and others... there are people that are very, very concerned. We can talk about some of those arguments. But for me, it's just trying to identify, and, like we talked about earlier, in thinking about these things, to make it somewhat more entertaining and simple, I actually have devised a game, because gamification is good, a tabletop game actually, to look at all these risks,

and to interact with them, and, you know, in more of an evening setting, reflect and come up with solutions.

[00:20:50] Marco Ciappelli: So it seems to me, and I want to talk about the game, because it makes me think about WarGames from the 80s, for somebody who is old enough to know it: a nice little game of thermonuclear war. Um, but before we go there: you talked at the beginning about maybe the risk of going to market, I'm going to talk business,

going to market too early without thinking about it. Here is ChatGPT, OpenAI. Let's just put it in the search engine. Let's put it here. Let's put it in healthcare research. And I think it's great for a lot of stuff, but, as you said, maybe we haven't thought about it too, too much. But also, in the notes that I have here, you talk about degrowth.

And I just highlighted that word, because I think it comes with this complexity, with the system, that when things get too big, as humans we go, uh, let me not think about it, let me put it under the rug and talk about it later. So maybe a simplification of the model. So can you explain that to me?

[00:21:56] Trond Undheim: Yeah, so degrowth is this perspective coming out of very, uh, interesting economists these days, who are sort of rejecting the common paradigm and starting to say that, you know, if you really look at the way things have been developing, economic growth has actually not been only positive for the world; structurally, you know, the end result is net negative.

So the solution isn't to stop growth, but it is to manage growth a little bit more carefully. So degrowth doesn't mean that there should be no growth, that's very clear. That's a tiny little anarchist idea; only a very small minority of the degrowth people believe that we should not grow the world economy at all.

But what it is, is it's saying we need to come to some sort of managed slowdown. And slowdown doesn't sound fantastic to free-market, uh, liberals, right? So I have a book coming out, actually, which is, uh, called Ecotech, where I try to explain that if we are going to aim for degrowth, it itself has to be a managed process, and it won't happen overnight.

I mean, the only overnight solution to that would be an implosion of society and massive revolutions and really big chaos. So we simply don't have a choice, unfortunately. We have put ourselves in a situation where we do need to, I think, over time, slow the economy. And with that, slow technology and slow everything a little bit. 

Not a pause, but just a little bit more control, checks and balances, regulatory oversight. A lot of stuff that sounds somewhat boring and, you know, anti-entrepreneurial. But I think over time, it creates, uh, something that I've been working on for a while, which is standards, standardization. It creates a little bit of a platform that's a safer foundation upon which to build the next version of our society.

And, uh, you know, I, I, um, talked to my students about this at Stanford, uh, about how they really are the generation that will build this society. And you have choices. Are you going to go grand? Right, so there's this idea, we need to build generation spaceships, we need to build, it's going to be big, it's going to be expensive, and we'll solve all the world's problems. 

New energy technologies where you put it all in one basket and solve everything. But then there are actually alternatives. You can say, yes, we want to solve big problems, but we want to go modular, we want to go local, we want to solve it where the problem is. And most importantly, we don't want to put all the eggs in one basket, because of the risks entailed.

So if you apply that to health, that was the book I published last year, on health technologies, it is actually insane that we're thinking about collecting all the world's health data into one place. We have some global companies now, uh, claiming to actually have access to globalized data about all, uh, health on the planet.

That's craziness. So an alternative idea there is to actually split up the internet, split up all the databases into regional, uh, and regionally controlled systems. Because if you have a systems failure, if you do indeed have advanced AI, or indeed some technology that comes into the wrong hands, some rogue state, and there are many of them these days, or some rogue terrorist group, or, God forbid, some alien group, this is not good

to have in one package. It would be like serving civilization on a platter. So, so these are big ramifications.

[00:25:49] Marco Ciappelli: I was just thinking, sorry, if I'm a villain, this is what I want. They just give me one place to go and steal everything.  

[00:25:57] Trond Undheim: You don't even need a Hollywood plot to imagine how great that is, if you are a villain.

So these are, these are big things. Uh, unfortunately, you know, in a place like the World Economic Forum, everybody there, of course, has a vested interest. And, and the challenging thing, and I'm not now going to name companies, because the problem, actually, in all of these fora, is also governments. Because we are used to, perhaps some of us, I grew up in Scandinavia, the government being a very trusted actor; in other countries, it's the exact opposite. And I am learning slowly that the truth is perhaps somewhere in between, because there are no pure actors in this game.

Governments have a clear agenda of surviving, governing, and, um, even the governments that I grew up with, their systemic interest is to sustain themselves. And there are some consequences from that, which we are going to be seeing in the next 50 years. And some of my scenarios have modeled that out. And they are surprising

to some of us pro-state, pro-nation, uh, people. Because even the nations that you might think, you know, have your back, their systemic interest is the survival of an archaic system that actually isn't productive. And I'm here not talking about global government as the only solution to things. In the degrowth scenario, a lot of the plans would call for much more regionalism and perhaps local governance, mostly to de-risk things.

And to slow things a little bit down, not so that it comes to a halt, but so that you can kind of control. Reflect. Experiment. Without the stakes being humanity's survival. Now, Marco, these are big things, but that's what I work on.  

[00:27:57] Marco Ciappelli: Absolutely. I think about this stuff every day. I'm lucky enough that I talk to people like you that think...

way more than me about it. I try to tell stories and inspire people and leave them with more questions than answers when I finish the podcast, and I think there is a lot to think about here. But I do, I do also agree with the community approach. Um, I studied sociology of communication,

and when I think about the global village from McLuhan, I think that the existence of community and diversity within this global village, it's what is really important. So in a much smaller context, it applies to what you just said. Um, I'd like to take the last maybe five minutes with you to look at what you do for fun, and I'm just being funny here myself.

So you have a podcast called Futurized, I see here in the notes, and then we mentioned this wonderful, uplifting game called End of the World 2075. So tell me, tell me a little bit about both. And I know I've been sarcastic; it's actually to save the world, not to take it over.

[00:29:14] Trond Undheim: Yeah, no, no problem. 

So Futurized started, uh, you know, during the pandemic as this, uh, I guess, really very overdue idea of gathering all of the conversations that I was having anyway with smart people, and then recording them for posterity. It's something I should have started in the 80s, all of my research interviews. I just put them online and I started recording them.

And, uh, the focus was kind of the next, uh, five to 10 years, the next decade. Technology, societies, much like, uh, what you do on your podcast. Uh, right now it has evolved to be a little bit more thematic. So this spring I had a lot of, uh, environmental and sustainability perspectives, uh, for my upcoming book. And then this fall, it'll be much more existential and long-term in perspective.

So I'm lining up a lot of thinkers that are thinking about longtermism, existential risk, uh, the long-term future of humanity, sort of like 25, 50, a thousand years, uh, and anywhere in between. But, um, the game, essentially, what I wanted to do there is, you know, I wanted to come to Stanford with something that students could relate to. Because I agree with you, a lot of these messages of existential risk, and, uh, indeed any kind of risk, they're negative messages. Because what you're asking people to do is: pay more attention to what you're doing, the world may not be as stable as you'd like it to be, you're the last generation that can fix it. You know, I mean, there are a lot of doom perspectives one could put on the shoulders of these 20-year-olds.

And in addition to doing that, I wanted to give them something playful, but with a serious element to it. So I guess it's a little bit of a cross between Monopoly and Risk, the board games. But, you know, you're sort of collaborating and competing and moving around in a world that I imagine to exist

in the future, where only a few world cities are left, for some reason, you can imagine why. And then you open up these, uh, chips in each city, each with a different risk level, and then you have to pick these risk cards. And if you don't solve them in the city, they go global; then they become everyone's problem.

So everyone has to pitch in and try to deal with it. Then you get points for solving these crises. And, you know, obviously it's a game, so it has to be fast-paced. You might resolve climate change with a mitigation card that says, you know, "I figured out geoengineering safely," or something; there might be a card that says that, and you can throw that in and the game master will accept it.

But what it does, I am told by my students and others who have played it, is that it really gives you this mindset that you can act on real issues, and that it matters. The whole premise of the game is that if you get more than seven risks that are co-occurring at a global level, meaning they have moved from the city level to the global level,

then the world indeed does end. But the joyful prospect, of course, is that you collaborate, um, and you make it from 2050 to 2075, so 25 years, with that situation being averted, and, you know, everyone's happy. And from then on, the scenario in the game is that if you can make it 25 years in such a precarious state, then, you know,

the rest of the future is bright, because you've found a way to interact even in a situation with so many calamities occurring on a basically weekly and annual basis, which I think is at least a fairly plausible scenario for around 2050, right? So we're talking about a climate system that's failing, massive global warming.

Um, at that point, I think, we're talking about real AI risk. And we are probably faced with the fallout from a set of different mixes of bacteria and viruses: natural pandemics, synthetic pandemics, accidents, lab leaks, perhaps extraterrestrial material that we have either collected, or that falls down on us within this timeframe, or that is brought to us somehow. So there are all of these external sources of risk, and that's not even thinking about the energy technologies that will have evolved because of our needs.

And we would have taken many, many risks to get there, to cater to our planet's energy needs. So I just think 2050 is this very, very interesting pivot point. Between 2050 and 2075, a lot is going to happen, and the stakes are very, very high for what we do about that. Uh, but I remain optimistic that if

a game can at least sensitize people to the fact that you have choices, that you can actually take down some of these risks, and that you can systemically reduce the chance of some of them occurring, then we are in a good situation. So that's why it's meaningful: it can still be gamified, which means we can still handle the concept of an accelerating number of what I call cascading risks.

So I'm optimistic, and I think, you know, my next year is going to be spent entirely on mitigation strategies. But as I said, they are expensive, and they entail lifestyle changes and economic adjustments. Um, because otherwise, Marco, we're moving into a world that works only for the wealthy and the bright, and that's not right.

[00:35:21] Marco Ciappelli: I kind of want to close it here. I have no further comment, except that it was a pleasure to talk to you. Again, huge topic, huge complexity. Even when you were describing the game, it comes down to this: you can do something at a local community level, and if you don't stop it there, then it explodes; it gets to a scale where it becomes harder and harder to control.

So I would love to have you back. I'm definitely going to listen to, uh, your podcast. It's right up my alley, one of those things I like to do with my, uh, free time as well. Because, uh, you know, it becomes your life, your work, and that's all you do. And then the book, the Ecotech book, seems interesting.

I would love for you to come back anytime you want and have some more chats with me, if you enjoyed it. I hope the audience did enjoy it, and they will find every link to resources to get in touch with you and to look at the book. I know the game is on Kickstarter, I believe, and so

[00:36:30] Trond Undheim: Yeah, and you can get it by getting in touch with me.

[00:36:33] Marco Ciappelli: So get in touch with Trond. Stay tuned, share it, subscribe, and yeah, don't stop those questions; questions are good. So, Trond, thank you again for spending some time with me and, uh, with the audience.

[00:36:47] Trond Undheim: Marco, un piacere. (A pleasure.)

[00:36:48] Marco Ciappelli: Okay. Next one in Napoletano (Neapolitan), then?

[00:36:51] Trond Undheim: Napoletano.

[00:36:52] Marco Ciappelli: Benissimo. (Excellent.)

[00:36:53] Trond Undheim: Certo. (Of course.)

[00:36:54] Marco Ciappelli: Ciao a tutti. (Bye, everyone.) Ciao. Goodbye.