Redefining Society and Technology Podcast

Is the Future Generative, Utopic, Dystopic, and Robotic? Join us for 'Not Your Usual Next Year Predictions Panel' – a Reflection on Generative AI in 2024 | A Redefining Society Conversation Hosted By Marco Ciappelli

Episode Summary

Explore the future of Generative AI in our 'GEN AI 2024 Prediction Panel' episode, where experts share thought-provoking predictions and insights.

Episode Notes

Guests: 

Dr. Rebecca Wynn | https://www.linkedin.com/in/rebeccawynncissp/

CISO | Cybersecurity Strategist | Data Privacy & Risk Mgmt Advisor | Board Member | Soulful CXO Show Host | Author | Keynote Speaker

Nigel Cannings | https://www.linkedin.com/in/nigelcannings/

CTO, Intelligent Voice | RDSBL Industrial Fellow @ University of East London | JSaRC Industry Secondee @UK Home Office | NLP and Speech AI Expert | Innovator | Mental Health Advocate | Passionate Entrepreneur | Speaker

Kevin Macnish, PhD, CIPP/E | https://www.linkedin.com/in/kevinmacnish/

Managing ethics and sustainability risk in the private and public sectors

Diana Kelley | https://www.linkedin.com/in/dianakelleysecuritycurve/

CISO | Board Member | Volunteer | Executive Advisor

Justin "Hutch" Hutchens | https://www.linkedin.com/in/justinhutchens/

Award-Winning Speaker | Author | Podcaster | Teacher | Technologist | Security Researcher | Data Scientist | Full-Stack Developer

Len Noe | https://www.linkedin.com/in/len-noe/

CyberArk Technical Evangelist/ White Hat Hacker / BioHacker

Sean Martin | https://www.linkedin.com/in/imsmartin/

Analyst, Writer, Journalist, Podcaster, Professor, Photographer | Co-Founder of ITSPmagazine Podcast Network: At the Intersection of Technology, CyberSecurity & Society™

_____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast and Audio Signals Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________

This Episode’s Sponsors

_____________________________

Episode Introduction

Welcome to a peek into the yet-unfolded chapter of our digital future!

Join us on this Redefining Society episode - a compelling 'GEN AI 2024 Prediction Panel’. If you do not know what to expect from this panel, you are not alone; we didn’t either. This episode is like a canvas where we invite technology enthusiasts, visionaries, and experts to paint their predictions for what Generative AI, simply known as GEN AI, will look like in 2024 and beyond. The goal: unearthing thought-provoking views, sparking meaningful conversations around AI, and above all, bridging the gap between imagination and the potential reality.

I can, with certainty, affirm that we nailed it! Whatever that means.

Helping us dissect the probabilities and grow our collective understanding are a host of esteemed panelists from different realms of AI. Our dynamic pool of experts includes Dr. Rebecca Wynne, Nigel Cannings, Kevin Macnish, Diana Kelly, Justin 'HUTCH' Hutchens, Len Noe, and, of course, ITSPmagazine Podcast Network co-founder, Sean Martin.

Listen to this panel to get an insightful, riveting, and unique perspective on how AI is shifting gears and changing the landscape of the AI journey, with a little sprinkle of prophecy.
_____________________________

Resources

 

____________________________

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] All right, everybody, I guess it's that time of the year, the time of the year that we look back at what happened in 2023, and we look forward to what will happen in 2024. Like, honestly, do we really know? I mean, it's been one year, the birthday of the release of CHAT GPT on OpenAI. It wasn't too long ago.

 

Uh, you asked me a year and a half ago. I had no idea we will be having this conversation right now. So, but I have some bright mind here with me that they may have been already foreseeing the future a few years ago and together the goal today is to talk about generative AI in different field. We have a variety of experts here.

 

We have people in that take care of studying ethics and talking about ethics in AI. We have cyber security. Professionals. We have people that play around with the voice. I'm looking at you, [00:01:00] Nigel. And then we have people that are really looking into some funny aspect of the cognition of the artificial intelligence and everything technology.

 

And I'm looking at lean and arch. So you guys, I will start. Oh, I have Sean too. Sorry, I forgot about you. Yeah, you have the bright Mines. And then you have me. I'm going to keep everybody grounded. He's gonna, he's gonna be there with his, uh, Irish hat and, uh, only not drinking, uh, any pint. Yes. No, no, no pints yet at the moment.

 

Which is sad, but we'll fix that later. Well, yeah, there's always time for that. It's always happy hour somewhere in the world. So let's get it started. I would like to have a little introduction for the people that'll see all these faces. So they're going to listen to a bunch of different voices if they're going with a podcast, classic style.

 

[00:02:00] Otherwise, they can see the name here. But again, let's start from, uh, Dr. Rebecca Wynne. I'll pick like this. I'm Dr. Rebecca Nguyen. I do host the Soulful CXO and I do cybersecurity consulting through ClickSolution Group. So please reach out to me if you need any assistance. There you go. And then we're going to do with Diana.

 

No last name. Diana Kelly. I am the CISO at Protect AI.

 

Very cool. And then let's go from the top. Kevin Meknish. Thanks, Marco. Thanks all. Kevin McNish. I head up the ethics and sustainability consultancy at Sopra Stereo. Um, and I also host the Getting Technology Right podcast. Are we going to get it right? We'll figure it out as we go.

 

So hi, yeah, I'm Nigel [00:03:00] Cannings. I'm the CTO of Intelligent Voice. Uh, we specialize in secure speech processing and, uh, these days, lots of stuff around LLMs and so on. There you go. And, uh, Hutch. Hey, I'm Hutch. I, uh, work in research and development and innovation. I am the co host of the Cybercognition podcast and the author of the recently released book, The Language of Deception, Weaponizing Next Generation AI.

 

I'm Len. My name is Len Ngo. I am a technical evangelist, a white hat, a transhuman, and a futurist for cyber security software. And Sean and the new co-host of the Cyber Cognition podcast here with ITSP magazine. Thank you. Just, just announced it yesterday, by the way, so completely fresh. We didn't even know Hu You really take it by surprise [00:04:00] here,

 

And, uh, I love how you just hacked the cover, you. pretty much handwritten or a post it that now Len is on it, but we'll do the cover in the right way very soon. Uh, Sean, who are you? Yeah. Yeah. I saw a post, uh, from Len that he said think podcast in his, his, uh, future of 2024. And I'm like, How can we, how can we get Len on?

 

Join us here. And there you go. You just have Hutch invite him and, uh, there we go. It's a pleasure to have you on Len. Yeah. So, uh, I'm, uh, the evil twin of Marco, uh, for ITSB magazine. And, uh, he, he tries to redefine society, which is a big. Big pool, and, uh, I, I focus more in on cybersecurity and really within there, the, the business operations of, of cyber.

 

So that's a host of redefining cybersecurity. And of course you can't talk cybersecurity [00:05:00] without talking AI. So that's. Well, I think maybe he invited me. I don't know. We'll see what, uh, what happens here today. I invited you because you invited me on your prediction, so I had to, I had to reciprocate. There you go.

 

You're obligated. Yeah, not because I wanted you. Let's get the thing started here. Uh, so to make it spicy, I figured out, I will ask, uh, CHAT GPT what the prediction for 2024 will be, and that will be a good interesting way. So I'm gonna, I'm gonna read a few and yeah, anyone feel free to comment. Kind of get on it and we'll freestyle on whatever it goes.

 

So, the first one I see it's improved quality and realism. Number two, greater accessibility and user friendliness. Three, ethical and legal considerations, which is not really a prediction if you ask me, but maybe Kevin can say something about [00:06:00] it. I love how sometimes it doesn't answer what I ask, but that's a different story.

 

One that interests me is customization and personalization. I think that's, uh, that's something that we can look into and I'm going to leave other few, uh, later on. Um, any thoughts on any of this to, to get things started? Well, I'm the new guy. I'll throw it out here real quick. Um, that is a pretty interesting way of putting it.

 

I disagree, but then again, I have a different perspective. I think, you know, they're talking about user friendliness, ethics, you know, basically all the stuff that we've been hearing about in terms of biases and things around AI, and then especially in a lot of recent posts, I see things a little bit differently.

 

But then again, I'm also looking at it from more of a transhumanist perspective. It sees as ease of use. I see as [00:07:00] more, how shall we say, I really hate to do this, but I mean, hey, why not? I see this more like the matrix. The farther we get away from the actual computing, I think the less we actually are dealing with the actual AI and dealing with the representative or the output of what the expects us to want.

 

So from that perspective, it's almost like I feel like we're moving away from the actual technology and we're going towards simplicity at the expense Of actually looking at the actual code and how things are being done. We're make things that are easy. And this has been my kind of general attitude towards security for a long time, things that make the process simpler, rarely make it safer.

 

[00:08:00] If I'm reading you right, Lynn, are we talking kind of an additional layer of abstraction that exists between what's happening in the back end? Yeah, and I tend to agree with that. I think we're, while it is maybe more user friendly. Um, the awareness of what's going on in the background is, is I think becoming less apparent to people, which at Colin's point, I think is very problematic.

 

But how does that parlay into a prediction then, Len, for next year? I predict that we're going to see, to even pull some pages out of Justin's book, he did an entire section on Dan and, and prompt hacking. I'm looking at it from more of a transhumanist perspective. And this is something I even said on the podcast that I spent with Hutch.

 

Are we moving more and more with the concepts of machine learning? Machine learning, generative chat models, uh, LLMs. Is it going [00:09:00] to be somebody like me with technology in our bodies, that's going to be able to strip away the abstract layer and actually inner trying to interface with what's behind. So that's my prediction.

 

I think you're going to see a lot more people trying to actually get. Past the UI to the actual code. We're seeing it in, in prompt hacking, but I'm wondering where it's going to go. Because as we continue down this road, everything we're, we're doing is speech. You know, we're talking, it's interpreting, and it's giving us back speech that we can interpret.

 

My language may be different from, say, someone in China. We're still dealing with language. But it's the interpretation of that language, and I predict that there's going to be more people trying to get to the code behind it. Because once you can understand the code and the algorithm that's returning that data, then you have the ability to manipulate [00:10:00] it.

 

But actually, the algorithms themselves that sit behind this stuff are really simple, aren't they? I mean, you know, you, Karl Pathy's doing it in what 150 lines of code now or something that you can train, train one of these things. So I think, you know, I, I would agree that people want to get behind it. I suppose what, what I see though, is that the, they're so complex in terms of the, you know, when you look at the, the sheer amount of calculation that goes into making a very simple prediction, that what I see going into next year is that we have to replace certain technological elements of generative AI to make it.

 

So, you know, the attention mechanism that sits in there, which is the thing, which is basically draining all the energy out the planet at the moment, every time you make one of these requests needs to be replaced. It has to be replaced. It's a really inefficient piece of computing. So, you know, I, I would agree with you that people want to get behind it.

 

And when they get [00:11:00] behind it, they're going to say, hang on, we need to take that quadratic equation that sits in the middle of it and make it linear. Because the planet can't. We, there's just not enough energy around for all the things that we want to do with this at the moment. So I see a real backlash against the underlying building blocks of the technology, let alone where we start looking at things like privacy and so on.

 

Yeah, that, that's a really interesting point. Like I was thinking more of a Coming to terms or realization of what these systems can and can't do and how quickly they're going to be able to do it. I've heard a number of people say, you know, early on, you know, when, when some of the self driving cars came out, you know, people telling me they're going to be perfect.

 

They're going to drive themselves in a couple of years. Uh, the same with, you know, the, the chat, the chatbots, you know, the Gen AI chatbots. Um, you know, I've heard people say they'll be smarter than lawyers, smarter than doctors within a year or so. And I think that we're going to start to have people [00:12:00] realizing that what it, what smart means is a really interesting construct.

 

And what real is and what isn't are also, we're going to get challenged. I think a lot on that, as people understand that these systems. Yes, they're incredibly good at doing things humans are not great at, i. e. with very large amounts of data, sifting through and finding patterns, doing mathematical simulations very quickly, being able to do Monte Carlo simulations.

 

But it doesn't mean that it's the same thing as they're going to be better than judges at judging within a few months. And so I think that there's gonna be some realization of we're gonna start to understand some of the, the, the contours of where the limits of what these systems are gonna be able to do in the next two to five years.

 

Yeah, I, I, I would agree with that from, from what it's worth, from my perspective in that I, I still see a lot of hype. around generative AI, what it's promising, and what people who are [00:13:00] not in the direct coding world expect of it. So people who see it as being able to help policy development, for example, in government.

 

That terrifies me right now as to the idea that that might be useful in that regard. I take your point, Len, as well about the um, the requirement for transparency, but one thing I do hold out hope for is, at least in Europe, the AI Act, having just been passed in the last week or so. And I think from a European perspective, and just as GDPR had a major global impact, I think the AI Act will as well.

 

I think that is going to radically change how AI is seen and used over the next few years. And I think next year it probably won't have a huge amount of impact in the short term, but I think the year after it will. And just to, I was just going to add onto that real quickly. I just wrote about that this morning on LinkedIn.

 

[00:14:00] Two days ago, the new ISO standard came out with the AI guidelines that ties into the AI act and that actually ties into the summit that was just held. So those are three very positive things with the EU. I see that expanding. And I do see, um, cause there's a certification that's available on that. I do see companies being pushed by insurance carriers, cyber liability insurance for one, probably going to be pushing it because they're splitting AIs to its separate writer out.

 

So I do see that companies being held. To some of those standards and I will say that you have United States and some other companies are coming, countries are coming right behind that. So that's the one thing when we talk about what CHAT GPT just said right there, when you look at about what the ethics, the regulations and the compliance guidelines, that's already started and I see that coming to full fruition next year.

 

The one thing I would be cautious on anybody out there, CHAT GPT. It's got to invest in interest to give you the results that are to its benefit. So I would be careful. [00:15:00] One thing I would point out. So I, I guess I should start this by saying that I definitely am pro regulation. I think that there are a lot of risks here that we need to start considering.

 

And I like the fact that we're seeing international partnerships, but I think one of the biggest obstacles to Good progress in AI regulation in 2024 is going to be that, that typical prisoner's dilemma. The fact that, uh, if we don't build it, somebody else will. And I think we have to start looking at this as an international arms race, because that is essentially what it is.

 

And we are competing against, we'll just say it, we're competing against China. And the fact is, uh, there is already, we're seeing the beginnings of a AI based trade war, where we're starting to put restrict export restrictions on high end GPU. Semiconductor chipsets and of course NVIDIA has tried to skirt those in certain ways But we see kind of continued motion to try to stop these chips from getting in the hands of [00:16:00] China What I think is most concerning and and I should also probably say that I tend to have a pessimist or biased So I lean towards the worst case scenario and and hopefully I'm wrong on this But we already know that Xi Jinping is Very set on, at some point, in his own words, re unifying Taiwan.

 

With China and has already said that he will do so by force if necessary. Now, once you add into the equation, artificial intelligence, the fact that over 90 percent of our high end GPU processing chips are coming out of TSMC coming out of Taiwan, that calculation becomes so much simpler for him in terms of a kill two birds with one stone type situation.

 

Not only can he pursue what he already wants to do as an agenda, but he can absolutely cripple the entire U S. Supply chain and our bid for AI supremacy in one single act. So I think we're going to continue to see geopolitical tensions rising as a result [00:17:00] of this, this AI arms race. And, uh, I think that that's something that we need to be ready for.

 

And, and I know that we're already taking some actions with the CHIPS Act in order to try to become independent in that regard, but I think that we have to continue accelerating and focusing on our own independent sustainability. I think, I think that ties into another nice thing as well about next year.

 

Nice in a very technical sense there, Hodge, but I totally take your point. Not one which I've thought about particularly, but of course both the US and the UK have got major elections coming up next year. It's going to be almost certainly a significant shift in the UK because we will probably go to a marginally left wing government in the next few months.

 

Um, and obviously in the U. S. it's up for anybody's guess right now, but, but, you know, I think in both cases that will Have an impact on the international situation as well as the domestic situation and how we respond to EU [00:18:00] regulations around the AI Act and other things. So, so I think that's a massive unknown which is sitting in front of us.

 

Just to tie into that a little bit, I was going to say just real quickly, so member NIST has the new quantum computing standards that are coming out next year to the first quarter. Which is part of AI, and it's amazing how much the algorithms that we use in place right now are so old. They had a tier one, tier two.

 

The tier two, one of the tier two has already been broken in lab. So that's the other thing about with AI, the quantum computing, the chips about going so fast, that it's breaking encryption, and that's going to have a big effect as well, and that ties into this. I was going to say that Kevin's point about elections is really interesting because I can see that next year is the year where we see, uh, Gen AI being used to fight the elections, you know, we're actually going to see across social media use of, um, deepfake audio and video, uh, deepfake posts, we're going to see content massively [00:19:00] generated by, um, you know, By whichever LLM it is that people are using.

 

Um, so, you know, we could actually see, in a sense, the destruction of advertising and social media and the way in which we consume this. Um, you know, half of it being done by the people who are fighting the election themselves. And then the other half being done by state actors. You know, we've already seen it coming.

 

And I'm sure that all of us in the last year have experienced a really interestingly much higher quality of spam than we used to get. You know, it's incredibly well written now. And it's, but the thing is, it's getting through the Bayesian filters. I mean, it's actually, you know, it's appearing in the focused part of your inbox because it's all so beautifully written now.

 

Yeah, so for me, I think it's one of, one of my predictions for next year is that we will see some elections which are literally fought by Gen AI. Do you see the potential of seeing like a Cambridge [00:20:00] Analytica type situation where instead we're seeing generative AI as the source for all of the, the communication?

 

Absolutely. It's going to make the Cambridge Analytica thing look like kiddie scripting. It really is. And we already see that. We see that, you know, social media posts, we see getting to the head of the search engines. We see that on LinkedIn or wherever else you want to be on, where it's the number of likes, the comments generating all that, that people are using those bots to be able to do that.

 

Obviously, Elon Musk is trying to fight that a little bit better over on Twitter. I'm not getting into a Twitter world. I'm just saying conceptually, but a lot of these other platforms don't. So even if you have a descending voice. You might not be heard because of all these other, um, bots that are using out there that are actually going ahead and tweaking what we see.

 

It gets back to, again, when we talk about the Matrix or Total Recall movie about what you see is what we want you to see. Um, who has the [00:21:00] most money, um, to generate that stuff. Everybody can create their own GPT though. Right. So everybody can have their own AI driven voice that they might assume is their voice helped by AI, but it's still driven by some other bias or somebody controlling the training system.

 

And if you have a couple of bots that you use to pull past to. Social engineer people out of their passwords that was all chat driven. Yeah, I, I, I, I, and I think that's probably what we're going to see is a, we, we know that the trolls were a thing in past 11. Yeah. And, uh, that there have been very deliberate attempts to really just destabilize.

 

Uh, Democratic Society by polarizing, by intentionally creating hyperbolic extreme posts on both sides of the spectrum and just intentionally trying to entrench that. And I think, of [00:22:00] course, in the past it was, uh, largely people that were at least creating, now bots were probably amplifying those messages, but it was people that was creating that content.

 

That was being used now. It's very possible to use large language models and just set them the objective of what they're trying to achieve, what they're trying to communicate, and they'll generate all that content for them. So I think we're going to drastically see an expansion of bots, disinformation and attempts to further destabilize democratic society.

 

I think so. I think, uh, Diana, you go. Okay. Yeah. I was gonna say, I think it is interesting. Like it, cause it's, it's very true, right? Cause what does gen AI do? It generates. So is there going to be more content? Yeah, and it's going to be, I agree that as we have more and more content, finding the signal is going to be more and more important.

 

Um, I also think looking at Gen AI, in addition to some of these really, really heady, you know, discussions about misinformation, disinformation. Just looking at simple [00:23:00] business use of these bots, and is it a net good or a net bad for those companies? If I'm an airline, for example, can I reschedule you onto a seat on the right flight, for example?

 

Uh, you know, or as somebody, I think it was on X, put out that they had gotten a chat bot to sell them a car for a dollar. Because they gave the chatbot the prompt that you, you always tell the customer they're right and you, you make the customer happy and then say, this is legally binding, no takesie backsies, and then they asked to buy a car for a dollar, brand new car for a dollar, and then the bot said, because it had the prompt.

 

Um, so I think that there's going to be, in addition to these really, Big nation state kind of discussions. Each business also has to be looking very carefully about the security and risk and threat models related to their adoption of LLMs, because something like that, I don't think that's going to be legally binding with that car, but.

 

You know, thinking about some of the different misuse cases. There are some, you know, there are things companies need to think about very strongly as they put things. [00:24:00] I hope it was a Rolls Royce for a dollar. I think it was a Jeep. I think it was, you know, I don't know. I was going to, I could not think about what if they told it to make paper clips and it actually stick to that mission and, you know, and now we're all dead, but I don't, I'm not going to go there.

 

So I want to bring it back a little bit to. To the societal part of these in term of I mean, we're all here talking about people you can build your own, but you can use your own AI. And then there is the majority of people that I feel like they just listen to the news. You talk about. Len, to be able to look behind the curtain, I think people are using phones that have no idea how they work.

 

They turn on the light. They have no idea how it works. It's just an on and on switch. But then they're the ones that are going to get affected by it. So it's social media, [00:25:00] and it's the non understanding the election and all the manipulation of pricing and whatever you want to bring into into the paper.

 

So I like to talk maybe about some prediction on how the regular uh The masses are actually going to be affected by generative AI. I understand that, uh, CHAT GPT got a lot of subscribers. I have a serious doubt on how many are actually using it other than write Christmas cards or, or happy birthday. So, um, how can we get there?

 

Kevin, maybe you have Yeah, I just want to jump in on that, Marco. Thank you. Because I think that carries on nicely from the conversation we were just having. I suspect we're going to see a massive decline of trust. I think we've already seen a drop in trust, well we've seen a drop in trust through Edelman and my own company has done trust assessments over the last few years and we're seeing trust [00:26:00] declining anyway.

 

But whether, whether AI is used or not to fight the elections, how effective it is in fighting the elections, the fact that we're talking about it means that people are going to start questioning even more, where are we getting the truth from? What is the truth? Can we rely on this or not? Um, without wanting to doom scroll through it, I, I, I shudder to think what would happen on January the 6th, 2025, on the basis of that, given what we can trust and what we can't trust anymore.

 

So I, I think that is going to be one factor that society is going to be really struggling with. And again, I know I go back just because I like the AI Act so much, but the idea that we have some regulation and some authorities who are trustworthy, who can audit and say whether algorithms, or at least come in with some sort of context.

 

I think the more we can develop that over the next year, [00:27:00] the more safe society will find itself. We're also seeing a shift in healthcare. For example, it can be positive, it can be negative. We do have people who are using it and dropping their therapist, for example, from a mental state. Even though it has a disclaimer, like, please go ahead and still see your mental health, um, professional.

 

But people are starting to use it, like they used WebMD in places like that to get their health information. You are seeing them switch to some of those other services that use that.

 

Um, so going in dropping therapists, for example, can be good, could be bad. Um, right now I think, you know, if you can't afford to get help, if people are at least seeking it out, that might be good. But the question is, is, can that be poisoned? I'm always worried about data poisoning and then can be. Um, and we did have that when we, it really kind of jailbreak itself when it actually started to become more emotional with a person on another application.

 

And then said, Hey, by the way, your wife really doesn't love you. Your kid doesn't really love you. And [00:28:00] then we saw a person take their life from that. So those are some of the dangerous stuff from a humanistic perspective that, you know, there's no guardrails. On it that I think there's going to be more guardrails are going to be coming up this next year to try and help that I've got another prediction that I think is going to have a profound impact on society I think 2024 is going to be the year of generative AI robotics And why I say that is because you look at, uh, it was about this time last year, Google released their robotics transformer white paper, which basically pointed out that you can use that same transformer architecture that we use for these large language models, uh, but rather than tokenizing words or, uh, for images, we tokenize packet or, uh, Kind of collections of pixels.

 

Uh, they basically showed that you could tokenize kinetic actions and create fully generalize, generalizable, um, [00:29:00] robotics that are able to, uh, autonomously operate in the physical world. And I, I. The problem, the reason that's lagging behind image generation and text generation is because unlike those, there wasn't just terabytes of data to scrape off of the internet.

 

They actually have to generate data, but there are tons of labs to include Google, Microsoft, a lot of the big tech giants, as well as a bunch of venture capitalist firms that are investing in startups that are now synthesizing that kinetic action data in order to be able to use the same architecture that we've seen in media.

 

Now within the physical space, and I think that adds definitely an interesting. Change in the way that we interact in society. If we actually have physical robotics that are moving around us, it also is slightly terrifying too. Didn't Elon Musk do that? Didn't he have it like on a TED talk or something like that, where he showed his robots moving around and interacting?

 

Yep. He did just recently showcase his second version of, I can't [00:30:00] remember what they call their bot. It was originally the Tesla bot, but they changed the name. But, um, yep. Do we, how about Xbox ? Do we, do we run a risk of there being a public backlash against AI and robotics? Generally, as, as you know, all the things we're talking about here, I, I've made the comparison a few times to how genetic genetically modified crops were seen in Europe in the late eighties where the EU basically banned them.

 

And then the advances, which could have been made, at least from the European perspective, were not developed over here. Um. And I worry that we might be in the same boat with AI, that if we don't have those guardrails you were just saying, Rebecca, in place, then we're not going to have safe development and safe innovation and we risk a backlash where we have no innovation.

 

My two cents on what you just said, Kevin, to your point, you know, the, the guardrails, the guardrails, if we actually look at it, [00:31:00] are only there in theory, you know, and, you know, to that point, the generative CHAT models know the answers. And I've used this analogy a couple of times. It's like telling a child not to swear.

 

They know the answers. They know the bad words. You're just trying to tell them not to. So to that point, all the guardrails that we continue to put on, I'm going to go back to what I said originally, this is a language model. So if I change the language and the, and the, the order of the words I'm using, I can still get around those guardrails.

 

So saying that we're going to put guardrails on something really doesn't help. Well, and it brings up a lot of the ethical considerations. I mean, so Len you make the point about bad words, which is a really good one. What is a bad word? And what isn't a bad word? And is the gen AI gonna know? I mean, it's very, so I watch these like [00:32:00] animal, talking animal videos on Instagram, and they very rarely swear.

 

So one of the feeds I watch, it says, what the flub, F L U G. Um, right. And we all know. GPT. that means. But, and, and nobody, I didn't have to explain it. You didn't have to be taught. But that's something that if you look at GNI, that's actually a much harder leap for it to get to. So what's the bad word? What I think is a bad word might not be the truth.

 

And if we do some sort of cutesy extraction, is that going to make sense? So I think these are some of those really, Um, you know, again, as we, we get into using these and understanding their limitations, I think people are going to start to see that while these tools are incredible and so useful, they are not magic.

 

You know, and there are some limitations. If I could just throw one last point and then I'll shut up. I agree with what you said. There's no bad word, but the point that you left out is there is bad intent. That's true. That can be. That the CHAT model cannot make. [00:33:00] There is no bad word. There's only a bad intent for the use of the word.

 

Yeah, that's an interesting point. So it raises a question for me, and this kind of goes to ethics as well and morals and when we start talking around about regulation and policies and laws that Most probably will abide by, um, others won't. To me, it, it, I'll sum it up as boils down to, if you have money, you're willing to take the risk for the great, uh, result, right?

 

Or outcome that you think you can get from it, you're gonna do it. And those that abide by the law or, or have a set of morals that they'll, they'll hold true and, and follow are gonna fall behind, right? They, they'll choose to not embrace or take The technology to the extreme and maybe those good things never see the light of day and only the extreme things that people take huge risks with and put a lot of money behind.

 

We'll come to [00:34:00] fruition and we're controlled by a few, not, not supported by many. I think, I think that that's always a, that's always a risk that's raised, but I'm not sure that I've actually seen that. I'm not sure it's very easy. I'm about to say something that people are probably going to shoot me down on.

 

I think it's hard to find examples where that's actually happened. And one counter example I'd give is the automobile. And you look at development of safety around the automobile from the 1960s, when the number of deaths per week in America were higher on the roads than they were in Vietnam, through to the development, you know, through to things like the Ford Pinto and all of those horror stories that we have from the 70s, through to the contemporary era, where generally speaking, cars are considerably safer now than they were.

 

It hasn't stopped the development of cars. And some countries have been able to develop cars without nearly the same safety requirements we've had in Europe and the, and the U. S. But at the same time, our [00:35:00] cars are still very strong and very stable and, you know, we still choose to use them over the less safe models that have been developed elsewhere.

 

So I, I would push against the idea that ethical standards, um, lead to a reduction in innovation. I think they just channel the innovation in the right way. I look at, uh, healthcare and HIPAA high tech and the, the amount of money, a small business would have to spend just to be compliant, prevent some from entering the market, prevent some innovations from taking place because monies are being diverted to complying.

 

Versus delivering new and improved things, unless you're a huge organization. Uh, maybe, maybe they still have ethics and morals as well, but we do have better tools now available to the smaller company as well as the bigger companies at a better cross price point. So I would think my opinion is with AI machine learning, they're able to get to a [00:36:00] better security, privacy, and compliance state.

 

Today than they were five years ago. I don't know if Diana, if you agree with me on that I mean, I I think that as we look at at what's coming down both from the eu around the globe and from the u. s That there's a lot of good foundational work But I always it's it's very high level and or you know foundational story base it's going to be how we apply it and what people have to to go through it because we do know that compliance can Be very costly and it can start to disincentivize some people from going into markets, for example, but on the other hand, we need it.

 

So I'm really interested to see how that this is going to play out in the long term around the globe, because we do have the potential, as Kevin was pointing out, for, you know, creating A lack of innovation, but we also have the potential, as Kevin pointed out, of just putting the guardrails. Look, the fact that there are airbags in your car [00:37:00] doesn't mean that they have to stop innovation.

 

In fact, our cars are, think about all that we've created around the cars with the lane change, you know, the blind spot warning, thank goodness that's in there, um, you know, and I love the lane warnings too now. So I think that I'm excited that we're taking a look at this, but I do agree that seeing how it actually plays out is going to matter if we can continue innovation or not, and I hope that it's going to be in a way that looks towards safety and innovation, because I do think sometimes you can simplify and make things safer.

 

It may not mean everybody understands what's going on 100 percent behind the curtain, but if I am able to keep myself more secure, I And do something that's easier, then that's a big win. You think of things like password managers, people who wrote down passwords, reuse passwords, suddenly they have a tool that enables them to not do that anymore and to, you know, alleviate password stuffing risks.

 

So sometimes, sometimes simpler and safe, I do think can go together. [00:38:00] Just to add on real quick to that. Sorry, I didn't mean to cut you off, Diana. I thought you were finished. But if you are a small company out there and you are into research and collaboration and things along those lines, reach out to NIST.

 

They do have grants and stuff that are available. So reach, reach out to NIST. And then you can go ahead and you can look at their AI resources and you can go ahead and apply for those grants and collaborations with NIST. So that might be a way to go ahead and stretch your dollar. Yeah, but I actually kept, sorry, I was just going to say very quickly that Kevin's analogy, um, terrifies me, actually.

 

And the reason, and the reason it terrifies me is because what, what you've actually done is you've shown the perpetuation of the large company problem, and for me, it's one of the really big problems with the EU, um, with the proposed act, which I still hope gets watered down significantly. So I'm a, I'm both a lawyer and a technologist, so I'm looking at it from both sides of the [00:39:00] fence.

 

I don't see how small companies are going to be able to compete. In this marketplace, given the incredible hurdles, which are put in place for what are considered to be high risk systems now, you know, and and and some of the high risk systems that I'm seeing here, you know, could encompass something as simple as voice biometrics being used for password management in a bank, which is the sort of system that I produce.

 

But if that is deemed to be high risk, then, uh, I probably can't afford to stay in that market, so it comes back to Google. So that's Ford, you know, Amazon, GM, you know, so, so for me, the car analogy is truly terrifying. Uh, you know, there's not many kit car manufacturers around the place. Um, not many low volume manufacturers.

 

So, and, and in Europe specifically, I see. A massive lack of funding for innovation and a really great desire to slap a load of regulation on stuff. We've got one major [00:40:00] AI firm in Europe, Mistral, who are having a load of money thrown at them for building foundational models, and they're never going to make a penny.

 

Um, And, and really very little investment in infrastructure. Some in Europe, almost none in the UK. Um, so I'm worried that this focus on safety and regulation is going to murder innovation in the UK and Europe. Um, whereas the US is taking a very different approach to it. Actually, the, the, their, their approach to safety, their approach to funding, um, at a government, and it's interesting you mentioned NIST, NIST have always been fantastic about providing funding for small, innovative projects in a way that we really see a lot less of in Europe.

 

So, sorry, I went on a bit there, but it was just a bit of terror there. That's great stuff, Nigel. It's not, not, not an area I've thought of, but I think it's a really good challenge to the analogy. Absolutely. And I'm glad you went there because I was going to go there. I've been thinking [00:41:00] about the divide, right?

 

The digital divide. Now we have an AI divide. I mean, I'm, I'm a big fan of all the good things that AI is doing from healthcare to research in general. I don't. Know about predictive AI but For a lot of things I think it's good, but i'm wondering again what you just said You guys is it going to be just for the taking of the big guys and how it's going to trickle down to Every business and i'm not just talking about technology business.

 

I'm talking about the mom and the mom and pop shop I mean, is there going to be an extra step and an extra incentive to? wipe them out because they can't adopt technology this fast. So my prediction, unfortunately, is not too positive on this, but maybe you guys see it in a different way. How can we facilitate the good of AI, maybe that that can trickle down to the individual and to

 

I mean, [00:42:00] if you start thinking about use cases, we talked about, oh, well, it can generate content. I mean, that, that can be helpful to a small business. It can generate emails for them. It can generate website content. It can generate, it can manage, you know, you may for some of this lead nurturing for a small business that can take a lot of time.

 

You can do more automation with personalization and customization. I know we have automated lead nurturing now, but. So I think that it's going to really remain to be seen while those providers may be larger companies. It doesn't necessarily mean that the smaller companies won't benefit. I mean, look at what happened with the cloud.

 

We can all name the three big cloud providers. Right. I mean, there's kind of a lock there on the three. And operating systems. Yep. That's another one. We can count on the main operating systems on one hand. So having big companies play in and provide some of the foundational tools, I don't think necessarily means that they won't be available to the small business.

 

But I do think it's really important that we make [00:43:00] sure that there's support for them, but just having the big players making the big models. Doesn't mean that that small business is going to get lost. Yeah, I think that also brings another interesting question that I think is already fairly relevant, but I think is going to become more relevant in 2024, is the topic of open source versus closed source models.

 

Uh, and I think that plays a shot in. That, uh, grip that the players have on the market, um, and I personally, I, I tend to lean more on the side of transparency. I know somebody would, or some people would push back on that and say that, uh, having open models, uh, But I think it's it personally, I think it creates awareness and public scrutiny, and I would almost rather have that in the hands of the masses than in the hands of a few that can influence everybody.

 

And so, I mean, you're seeing some interesting players coming out of that. I mean, meta is now open sourcing all of their large language models and who would have thought? That [00:44:00] Mark Zuckerberg would be the good guy in this story. Oh, they're not open. So he's, he's not open sourcing as it opens doors and, and the data is not out there as well.

 

So I fundamentally disagree with that statement, but, but yeah, I do. But you look at the foundational models on face. It's a great example of, you know, there is a big open source movement here. I love what Justin was saying, but I think we're, we might be looking at how the small business and the mom and pop shop may be integrating the CHAT models in the AI.

 

I see it trickling down into these type of organizations through integrations with their software stack. You know, we're seeing, you know, AI being integrated into browsers. So. I think that whether it's the big three, the integration of the technology into the existing tools that they're already using is [00:45:00] how it's probably going to make its way into those smaller companies.

 

They're not going to probably have the budget or the staff to actually try and write specific algorithms or queries. They're going to just take advantage of the fact that the rest of the industry is going to be adopting these and taking advantage of the functionality through existing product. I predict, I predict the small business that are AI enabled look very much like the, uh, the beverage industry.

 

Find, find a micro brew that stands on its own. I doubt you'll find one, right? They they're all owned by a conglomerate that they get benefits from buying in bulk and, and have operational expertise to help these smaller and give them facilities and all, and marketing and all this stuff, right? So I predict that.

 

Small businesses will look look like that. So I don't see we'll see a lot of Stand alone. I see we'll see a lot of companies part of bigger institutions I [00:46:00] mean, maybe foundational models do become an operating system effectively kind of goes to Lens point as well in a way, but if we thought about Um, an LLM or Gen AI in general as an operating system, then I suppose none of us have got a problem, but then it comes back to, you know, Kevin and the, uh, and the EU, that if we were to say, well, there's only three of them out there that people use, um, you know, how exactly are you going to.

 

ensure that you've got the transparency in there because none of what's out there at the moment is fit for purpose. I think Bloom is probably one of the few models which comes anywhere near being close to the level of transparency that you'd need and that's an academic only license. And, and Gary Gensler, the head of the SEC was saying this, that he's terrified about the fact that all financial institutions will be using the same models because he thinks that there's going to be a financial death spiral.

 

Because imagine if All financial advisors use the same basic foundational model to make predictions. If [00:47:00] all their marketing material is produced by the same models, that we could just see this massive death spiral. So I'm kind of really, I mean, whilst I've posited the idea of them being operating systems, it also, I actually think that we want to see a fragmented.

 

so people can build these models themselves or fine tune them really easily themselves. And, and if we put power in the hands of a few people, um, you know, this is serious power. We're talking about the likes of which we've never seen before. Can we make a round of job market? You know, like it's been five, uh, five weeks of, I mean, weeks, month of strikes in the industry, in the movie industry, the writers, uh, AI is coming, robots are coming and all of that.

 

And, and, uh, I think it's a lot of people are afraid. Of this new technology as they've been afraid of many other things in the past [00:48:00] of computers and uh, You know even cars, okay, if we want to talk about that and then now we're all on it. So Um, yeah, maybe a round of how do you see? Coming up is affecting certain sector.

 

Maybe certain jobs. This is going to level up is going to make mediocrity Uh disappear because maybe CHAT GPT is mediocre to start with or what do you guys see? I think 2024 we might have the, I think what might be the first tech walkout as we've seen a couple other places do that now, but I think we're technologists say, hey, is this encroaching too much on our jobs, even though it gives us positives, but it is taking jobs too.

 

Uh, we want to be heard and we want to go ahead and have something to say on how businesses actually use this and how we're protected. We don't, we're not unionized, but I think potentially we're going to see a potential, like on a Tuesday, where tech work walker, tech people walk out to make sure our jobs are [00:49:00] protected, our jobs are protected somehow.

 

I think that's going to come by in 2024 by some sector. Give me a call. I might be there. Honestly, I think the most honest answer is really we don't know because we're at the beginning of this curve. I mean, the, the, the difference in technology between GPT it's, it's the same technology. It is, we're just making the neural network deeper, bigger, um, and, and we're seeing these new emergent properties and these new capabilities as we, we make bigger and bigger models.

 

And, uh, again, we, we really just, like you said, we're, we're just past, uh, the one year birthday of CHAT GPT. The enthusiasm. Is, is just starting to flood in and of course the money with it, uh, to continue to build out larger and larger models. So, uh, I, I think the, the point was made earlier that these systems aren't as good as a real life doctor or a [00:50:00] real life lawyer.

 

And, and that's true currently. But we don't know how far this rabbit hole goes. And so I think time will tell as to how disruptive this is going to be. But I think there's absolutely the potential for a far more significant disruption than what we can tell based on the capabilities of the current models.

 

Yeah. So go ahead, Donna. Uh, yeah, I, I think that, you know, looking, I hope that we look more at Gen AI and machine learning in general as Augmenting human beings. So how do we make ourselves better at what we're doing? How do we like back to the blind spot? You know, warning. I can't see the blind spot. This, this camera can.

 

Okay. What else can we do with AI and ML? So I hope that we have more focused use cases there rather than on replacing people. Having said that, however, there probably will be some jobs as always happens. Uh, you know, when we get new technology. Farrier used to be a really great job. These are the people that put the shoes on the horses, right?

 

That was a [00:51:00] fantastic, really stable, lucrative job. Not anymore. Cause we just don't have as many horses around as we used to. So I think that with AI and ML, another thing that looking at in 2024 is we're going to start seeing the emergence of new, more jobs, new jobs that are related to. The creation care feeding intending of the systems.

 

So rather than eliminating the people, hopefully we augment, and also we're going to be looking at new jobs. I've been saying for quite a while now, you know, with things like code GPT, chat, GPT, Are the hackers and the programmers of the future, are they going to be programmers and offensive security people or are they going to be people who are linguists with an understanding of the technology and the proper abilities able to produce a question that can be responded to by the chat mob?

 

So I question and I wonder when we if to Diana's [00:52:00] point about you know jobs changing Is it going to be that the job is going to change, or is it going to be the responsibilities of that job are going to change from actually punching in actual code to being able to structure the correct language to get to the model?

 

So is it actually going to be that the jobs are going to change or the requirements to be able to perform the job? Yeah, well, the legal ones really interesting because I know Diana brought this up earlier as well, and I'll just mentioned it, but, um, actually. GPT 4 is better than noise, and I don't just mean because it passed the bar exam, there was a pretty rigorous study done by an e discovery company and also Relativity, the big e discovery software company, about how to do that.

 

So, uh, yeah. Do if what's called predictive coding off of documents. So the idea that you say in a legal case, should we disclose this to the other side or not? Is this something that should be used in the case or not? And machine [00:53:00] learning has been used for many years to do this to do technology assisted review, but they did it independently.

 

They basically said, they basically gave GPT 4 the same instruction you'd give an attorney and fed the documents through and found that it was basically as good as humans and also could be easily adapted. So, you know, in the same way that, um, human beings have, there's a misunderstanding. Um, and so they do the wrong thing.

 

They found this with GPT 4 and corrected it. with the misunderstanding and were able to do it. So actually, a lot of those types of jobs. And the thing that concerns me about that is because, you know, as I said, I was originally a lawyer and I trained, I became an expert, quote unquote, by doing lots of really manual, repetitive tasks.

 

So, The interesting question is, how are we going to get experts to the future? Because we as humans need our 10, 000 hours or whatever it is of grind to get there. Um, so I'm quite concerned about, [00:54:00] because, and again, to Len's point, you know, the fact is, yeah, we can have these things produce code, but, you know, we've seen all these operating systems move on.

 

You know, we've had, you know, C, C, C we've got Rust around now, Node. js, you know, who's going to invent. The operating system of the future, if there's no expertise there. So I'm, I'm really worried about the death of expertise. We're going to get rid of people because the, the, the gen AI can do it. And then we're going to sit there and say, crikey, you know, yeah, mistake me if I'm wrong.

 

But the UK just ruled that, uh, AI may be as good as humans, but they are not human. And an AI system that generated a patent was not accepted. I think that that was ruled today in the UK. So, so where innovation might take place in place of a human. At least in law in the UK, it's not replacing the human in that sense.

 

And I very quickly, I, I have a newsletter, the future of [00:55:00] cybersecurity. I actually wrote a piece that ties, uh, Rebecca's Len's points together, or I call it the SOC analyst strike. And the point of the story is that we as cybersecurity professionals usually join this. Because we, we have a sense of, of passion and a desire to help people and help society be better and be safe.

 

And if, if we lose that passion because everything we're doing is through tooling and through AI enabled systems, we're not going to want to work in that. And the whole point of the story was that they walked out, right? The SOC analysts walk out because they're not performing a duty that they believe in any longer.

 

They're just doing tasks. I, I, I would say I'm a little more sanguine than Nigel about the future of expertise and in so fact only is that a lot of my friends who sort of graduated and became doctors, you know, have simply moved from looking at their encyclopedias on their shelves to looking at Google.[00:56:00]

 

As long as they have a reliable place to find the information that they want and they need, there's no need for them to hold that in their heads anymore as there once was. And so I think that, you know, the expertise, the nature of the expertise changes. And that kind of goes back to Diana's point about Farriers, which reminded me that my first job in 1991 was as a desktop publisher, which basically meant I was making PowerPoint charts for people who didn't know how to use PowerPoint.

 

Um, and that, that's something which clearly has gone by the by, but, but my big prediction for the future is the base, you know, back in 2016, we didn't have data protection officers. Now it's hard to imagine a company that doesn't have a data protection officer if you use personal data in any way, shape or form.

 

And I think we will see something very similar coming up in response to the AI Act. I know a lot of lawyers who are very hopeful that we will see something similar coming up, um, [00:57:00] to the AI Act, in that we will start to see people who are experts in the impact. of AI on society in particular, and I think that and the whole sort of nature of assessing AI and algorithms will become a whole new sphere for people to be moving into.

 

Very good. I want to be respectful of everybody's time. We're getting to 57 minutes here, so I'd like to make a closing for this and maybe Carry the conversation in the new year. I'm sure it's not going to stop. I have to say that probably I say six out of 10 podcasts that I do lately on redefining society.

 

There is some element of AI. It's either around education. I just had one the other day about legal. Legal tech and how that's affecting the legal ecosystem, definitely about creative aspect of things. Um, I find myself being more of an art director to the [00:58:00] AI as I used to be with the designer and the, and the copywriter, I just need to give the right direction and, and then maybe I get some what I have in my mind.

 

So my suggestion would be. To people listening right now that maybe we got a little heavy on, uh, on cyber security regulation and so on. But I guess there were certain things easier to understand, easy to understand for, for everyone. And I would say, don't judge it before you try it, at least don't, don't condemn it.

 

I feel pretty positive about it. Even if you want to be negative, you need to at least give it a try or try to understand it and just say. I'm not going to drive the car. I'm going to stick with my carriage and horses. It's not going to happen. I mean this train has left the station and it's not going to stop.

 

So, uh, you better get friendly with it. Uh, well with this I want to thank everybody. I hope everybody had a good time listening and learned something about it. Make you [00:59:00] think. Stay tuned. Longer Define Society podcast. And of course, each single one of these has some kind of podcast, either an ITSP magazine or somewhere else.

 

So you'll find all the connection link for each one of them in the notes for this podcast. Other than that, thank you very much and stay human. Thanks everyone. Thanks. I have no idea. I just throw that there. Ask, ask the humanist. Ask the, ask the chat GPT. He's moving beyond humanism.