AI: opportunity or danger? Expert clarifies (#10)

Show notes

In this episode, Tobias Bolzern talks to Karin Frick, a leading futurologist, about the impact of artificial intelligence (AI) on society, the economy and technology. Karin reflects on her early experiences with AI and asks whether we should see it as an opportunity or a threat. She emphasises that the technology itself is not the problem, but its potential misuse. Karin explains that we need to use AI responsibly and be aware of the risks, especially in a future where AI also takes over surveillance and management tasks. She concludes by discussing the role of humans in an AI-dominated world and the need to maintain critical thinking.

Show transcript

00:00:00: Swiss Cyber Security Days Talk, powered by Handelszeitung.

00:00:09: Welcome to today's episode, recorded live at the Swiss Cyber Security Days in Bern.

00:00:15: I'm Tobias Bolzern and joining me is Karin Frick, former head of research and principal researcher at

00:00:21: the Gottlieb Duttweiler Institute in Rüschlikon.

00:00:25: Karin is one of Switzerland's leading futurists specializing in trends in technology, society

00:00:31: and economy.

00:00:32: She has spent decades analyzing how innovations shape our world and has authored numerous

00:00:38: studies on topics such as AI, digital trust and the future of work.

00:00:43: Today we will explore the impact of artificial intelligence, what should we prepare for,

00:00:50: where should we set limits and how can we balance innovation with responsibility.

00:00:56: Karin, welcome to the show.

00:00:59: Thank you, welcome.

00:01:01: Hello.

00:01:02: When you look back on your childhood, were there any moments when you were interested

00:01:07: in the future, what fascinated you back then?

00:01:11: I mean, as a child I was always interested in what will be when I'm an adult, when I grow

00:01:18: up, so what will be, what will I become, what will the world look like.

00:01:23: But of course I didn't know the term futurist.

00:01:30: Let's just dive in and talk about the big topic, artificial intelligence, AI.

00:01:35: Do you remember the moment when you first realized that AI would probably fundamentally

00:01:41: change our lives and what went through your mind at the time?

00:01:44: I mean, I really started to work on this topic more than 30 years ago, when I started working,

00:01:52: and I started with AI, so really at the end of the '80s.

00:01:59: I had an interview with Hans Moravec, who is a leading robotics researcher at

00:02:07: Carnegie Mellon University, and he had this book, "Mind Children", and it was really about

00:02:12: singularity, the idea, and Hans was then about, he was in his 40s, and he told me that

00:02:20: if he manages to live till he's 90 or so, he will live forever, because then he

00:02:28: can upload his mind onto a computer and it was really the first topic when I started

00:02:34: my professional work and since then I'm always fascinated with future tech and the options

00:02:43: and yeah, you must, when you think back, I mean, in the late '80s we did not have Google,

00:02:49: we did not have Wikipedia, we did not have iPhones; I just had a really simple Mac with

00:02:56: diskettes.

00:02:57: I mean, I remember those days in the 90s as well, that's when I grew up with technology

00:03:04: but maybe on the broadest spectrum, I've read that you've said that people normally are

00:03:09: not afraid of technology itself but rather its consequences.

00:03:13: Yeah, yeah, I think this is an important point.

00:03:17: We are not afraid of the technology; we are afraid of who might use the technology against us.

00:03:23: I mean, it's quite easy to see what's in it for you, what the benefits are.

00:03:28: Actually everybody uses iPhones and things like that, okay, it's easy; we use GPS navigation

00:03:34: because you understand it. When the tablets came as well, the older people started to

00:03:39: use them, because you don't need to know how to programme or press any keys, but just use

00:03:45: your fingers and do it.

00:03:46: So actually, I think for most people, quite independent of their education, it's

00:03:51: very easy to understand the immediate benefits, and so they are not afraid of the technology,

00:03:59: but they are afraid of its misuse, and you see all these horror scenarios.

00:04:03: If technology is used as a weapon to manage you, to control you, to control your life,

00:04:10: to take all the benefits away and make you a slave.

00:04:14: I mean, it takes your job, maybe it controls you, it controls your mind, whatever you can

00:04:21: imagine and we are really afraid of the impact, of the negative impact of the technology,

00:04:29: not really of the technology itself.

00:04:32: That brings me right to the next question.

00:04:35: Do you think in general we should embrace AI, the possibilities or should we be scared

00:04:42: of it or at least aware of the risks?

00:04:46: I think, I mean, we should be aware of all the options we get, and usually this is true

00:04:52: for all kinds of technology.

00:04:54: You can use every kind of technology as a weapon, or you can use it as

00:04:59: a tool to somehow make things better, more efficient; you can use a car

00:05:05: to get somewhere faster, but you can also use it as a weapon, every kind of technology.

00:05:12: And I think as we get a very powerful technology such as AI, we should be aware of what we do

00:05:19: with it, and that's true for all kinds of technology.

00:05:22: I mean, yeah, the higher the risk when you use it, the more you have to think about what

00:05:28: you will do with it, and that's true for some technologies: with weapons we need licenses,

00:05:34: we need a license to drive a car.

00:05:36: For different kinds, if you work in a factory with big machines, robots, you of course need

00:05:42: training in how to use them and all those kinds of things.

00:05:46: And so I think AI can be a powerful tool, but we must be aware that it can be used

00:06:05: for bad, and that we can cause accidents, maybe just by accident, not because we want to.

00:06:05: If you had to pinpoint on a timescale, when do you think AI will cause the most significant

00:06:12: societal shift?

00:06:14: I think it already did.

00:06:16: We are already here because yeah, this is the famous quote of, yeah, the future is already

00:06:23: here but unevenly distributed, and yes, to some degree, actually if you look at who designs

00:06:32: the direction where we go, what we do now, what we discuss, I mean, it's driven by this

00:06:38: vision of future AI, and so the first decision is, I mean, who invests, and somebody, I mean,

00:06:46: usually if you have enough people who see possible profits that they can make out of

00:06:52: it, so actually they invest and then you have everything and then you can make money.

00:06:57: I mean, we are a market society and there we need opportunities to make money for every

00:07:05: kind.

00:07:06: The people will follow and do their part, maybe not aware of all the spillover, and that was true

00:07:12: before for the whole idea of sustainability.

00:07:16: I mean, you exploit the resources you get and you do some good, you profit because you can

00:07:23: feed more people, and you see the negative spillover maybe only years later, and I think

00:07:31: this is true as well for AI, and we are already there.

00:07:35: We will not go back.

00:07:36: I mean, you cannot really turn off ChatGPT and the other agents that are yet to come.

00:07:47: You just mentioned agents. 2025 will presumably be the year of AI agents.

00:07:55: As AI shifts from say task execution to more oversight, how do you think the role of us

00:08:03: humans might evolve?

00:08:06: I mean, in different direction.

00:08:09: I mean, I still think humans are as well able to learn; we learn, not everybody

00:08:16: at the same speed, but we can learn, we can use it, we can use tools.

00:08:20: I mean, we could use all the other tools we invented before.

00:08:24: So we will be kind of a boss.

00:08:28: Because we all will get kind of slave or servants or assistants and then you have to think if

00:08:35: you don't do the job yourself, you have to think about how.

00:08:39: If you give somebody the job, you have to think more from above, more on the meta level,

00:08:47: because then you are in control of the system.

00:08:50: I suppose maybe not.

00:08:53: But maybe there are two ways.

00:08:55: You can say, "Okay, it's mega convenient.

00:08:57: Let the agent do.

00:08:59: I don't want to think.

00:09:00: I just want to relax and the agent is smart and it knows what I need.

00:09:06: It knows my budget and it will spend it maybe more rational than I do."

00:09:11: And actually, I outsource my life to the agent.

00:09:14: Would be one option.

00:09:16: The other option, as always: some people are lazy, they just sit down and use all the

00:09:22: entertainment that you can get on the internet and think they have a good time doing sweet

00:09:28: nothing, dolce far niente, kind of.

00:09:32: And the other way is, "Oh, I got such powerful tools.

00:09:36: Where can I get with that?

00:09:37: What can I make better?

00:09:39: What can I make different?

00:09:40: How could I get ahead?

00:09:42: What could I learn if I really have this power to go to the next kind of, if you take the

00:09:50: terms of the gaming.

00:09:52: Let me go to the next level."

00:09:53: And some people, and I think these are many,

00:09:57: they aspire to the next level.

00:09:59: I say, "Okay, I have this powerful tool.

00:10:01: I mean, we are here with these tools.

00:10:04: What can I do with it?"

00:10:07: Maybe on a more personal level, I see people rely more and more on generative AI tools in

00:10:14: my circle.

00:10:16: How do you foresee those tools affecting our critical thinking skills over time?

00:10:23: If you're not ...

00:10:24: Yes, there is one concept I think it's interesting or inspiring.

00:10:31: I mean, everybody knows the System 1 and System 2 thinking of Kahneman, which means System 1

00:10:37: is when you are just emotional and intuitive.

00:10:43: System 2 is the one where you have to work.

00:10:46: It takes some power and effort for really rational thinking.

00:10:51: And now there is a concept that maybe we will as well get into a world with

00:10:56: System 0.

00:10:57: System 0 means actually you outsource.

00:11:00: I don't want to bother.

00:11:02: I don't want trouble.

00:11:04: I don't have now the energy to really bring my mind to this problem, to solve this problem.

00:11:12: But I will outsource it and it's very convenient.

00:11:14: And convenience is always, as I once said, convenience always wins.

00:11:20: Of course, if you don't have to move yourself and you can use ... I mean, if you don't have

00:11:27: to make a fire and just have a switch, you will use the switch for heating or cooling

00:11:32: the room and so on.

00:11:34: And if you have a washing machine, of course, you don't wash by hand, because washing

00:11:41: by hand has no real sex appeal.

00:11:44: Maybe a candlelight dinner might be interesting from time to time, but otherwise not.

00:11:52: So actually maybe we will often switch to System 0, which is we don't think at

00:11:58: all and we let the system think for us.

00:12:03: This could be a different society where we have ... maybe then we run only on

00:12:11: emotion, because it's okay.

00:12:14: We will become much more emotional because all the rational thinking has been outsourced.

00:12:19: And then we think, okay, maybe it's like a child, who hopes that maybe mom or

00:12:27: dad will call out, stop, if it gets dangerous or so.

00:12:32: But this is just one scenario.

00:12:35: I still think ... On the other hand, I mean, today we ...

00:12:41: We don't need our body to work, most of us not.

00:12:45: Actually we just could sit here and do our work, our jobs.

00:12:50: And still we all go ... We do our sports, we go running, we go to the gym,

00:12:56: we do all these kinds of sports, and we do it actively, because we know if I don't

00:13:01: use my body, I will lose my muscles and I will get sick to some degree.

00:13:08: And maybe the same thing is going to happen: we get more and more aware, or at least

00:13:14: some first people get aware, that when you don't use your mind,

00:13:19: you will lose it, as with muscles.

00:13:22: And then I think it will be even ... I mean, intelligence is cool and natural intelligence

00:13:29: is even cooler.

00:13:30: Actually, if you lose it, it's like a body, which is not fit.

00:13:36: Your research also shows that technological progress unfolds differently or more slowly

00:13:42: than initially expected.

00:13:45: What are some current AI hypes that might not materialize as expected?

00:13:51: I think it will take much, much longer, because as soon as you get to more critical

00:13:59: problems, let's say really health decisions, the AI is not there yet.

00:14:07: You already see, when you ask ChatGPT or Perplexity or whatever today, you can do it for

00:14:13: some research and you see the mistakes.

00:14:16: If you know the discipline you are asking about, you realize all the mistakes it's

00:14:23: making, and you say, "Okay, if it were a decision about my health, I really would

00:14:28: prefer somebody who makes the decision who is more sure about what he or she says or decides."

00:14:38: And so, I mean, as I said, we talked about these ideas, and we really find them back in this text by

00:14:46: Hans Moravec from more than 30 years ago; all these ideas were there.

00:14:52: And even if the technology was far away, we are still not yet there.

00:14:57: And I mean, you can see, you can be amazed what's possible today, but it's still, I mean,

00:15:05: really difficult tasks.

00:15:09: It takes time, it takes energy, we don't know if we have enough data and so on.

00:15:15: I mean, we are amazed, but it will take longer.

00:15:21: And then you have all these problems.

00:15:23: The question is liability, I mean, when it makes a mess, who is responsible?

00:15:32: Let's close the circle.

00:15:34: If you think about the year 2035 and imagine how you will explain the world to a young person,

00:15:43: what will you tell them about the development of AI?

00:15:46: I mean, they know more than me.

00:15:49: Actually, today, my grandchild was just born.

00:15:54: I mean, she grows up in a world with just AI.

00:15:57: She does not know a world without, I mean, Google for her, that's really, like for me,

00:16:03: a kind of very old phone or whatever you can imagine.

00:16:07: I mean, she will not know a world without machines that answer her or maybe, yeah, mirror

00:16:16: us, know what she thinks, what she wants.

00:16:19: I mean, you don't have to explain it to them, because, that's kind of, they understand the world

00:16:26: they are in.

00:16:27: And I think the youngsters then, they are even more critical as we see it.

00:16:33: Yeah, it's kind of the next generation, Generation AI, maybe we need to call them.

00:16:40: Thank you, Karin Frick, for your insights.

00:16:42: Thank you.

00:16:43: And thank you to you, our audience, for tuning in to this episode from the Swiss Cyber Security

00:16:48: Days in Bern.

00:16:49: If you enjoyed the discussion, don't forget to subscribe.

00:16:52: And until next time.

00:16:53: (chimes)
