ChatGPT? Yes, we have to talk about it! Dive into an exciting new episode of Redefining Society, where we explore the cutting edge of artificial intelligence and its implications for privacy. Join us for a stimulating conversation with AI enthusiast and privacy advocate Arjun Bhatnagar.
Guest: Arjun Bhatnagar, CEO of Cloaked - a consumer-first privacy startup dedicated to bringing humanity back to the internet.
On LinkedIn | https://www.linkedin.com/in/arjunbhatnagar/
On Twitter | https://twitter.com/acenario
Guest: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________
This Episode’s Sponsors
BlackCloak 👉 https://itspm.ag/itspbcweb
Bugcrowd 👉 https://itspm.ag/itspbgcweb
Devo 👉 https://itspm.ag/itspdvweb
Episode Introduction
"ChatGPT? Yes, we have to talk about it! Dive into an exciting new episode of Redefining Society, where we explore the cutting edge of artificial intelligence and its implications for privacy. Join us for a stimulating conversation with AI enthusiast and privacy advocate Arjun Bhatnagar."
Welcome to Redefining Society, the podcast that challenges conventional thinking and paves the way for a new era of innovation, progress, and limitless possibilities. In today's episode, we're taking a deep dive into the world of artificial intelligence, exploring its potential, its ethical implications, and how it's reshaping the very fabric of our Society.
If you've ever been intrigued by AI or wondered how it might impact your privacy, you won't want to miss this episode. Our special guest is Arjun Bhatnagar, CEO of Cloaked, a consumer privacy company dedicated to putting individuals in control of their own data and identity. Arjun is a passionate technologist with over 16 years of experience and a unique perspective on how AI is changing our world.
In this fascinating conversation, Arjun shares his journey into the realm of AI, including his experiment to create an AI version of himself. You'll hear about the successes, failures, and unexpected discoveries that led to a deeper understanding of privacy and its role in advancing technology.
As we delve into the implications of AI for privacy, Arjun offers thought-provoking insights into the potential "privacy limit" that technology may face in the future. He suggests that the only way to push past this barrier is to address the privacy challenges head-on, ensuring that consumer control and consent are at the heart of innovation.
But what about the darker side of AI? In this episode, we also touch upon the potential dangers and ethical questions surrounding the proliferation of AI. For example, how do we balance the benefits of advanced technology with the potential risks to our privacy and security? Arjun shares his thoughts on these complex issues, providing valuable food for thought as we navigate the rapidly evolving AI landscape.
Join us for this riveting discussion on the future of AI, the importance of privacy, and the impact these developments will have on our society. This episode of Redefining Society will give you a deeper understanding of the complex relationship between artificial intelligence, privacy, and our ever-changing world.
So, buckle up and prepare to have your mind blown as we embark on this thought-provoking journey into AI and privacy. Remember to comment, share, and subscribe to Redefining Society to catch all of the episodes. Together, let's challenge the status quo and redefine our world.
And now, without further ado, let's dive into our conversation with Arjun Bhatnagar and explore the thrilling world of artificial intelligence, its impact on privacy, and the exciting future ahead. Welcome to Redefining Society!
_____________________________
Resources
____________________________
To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast
Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9
Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time we provide it "as is" and we hope it can be useful for our audience.
_________________________________________
SPEAKERS
Marco Ciappelli, Sean Martin, and Arjun Bhatnagar
Marco Ciappelli 00:02
Sean, welcome! You're gonna start with your acronyms again. E-P-G… P-T…
Sean Martin 00:09
G. E R PT.
Marco Ciappelli 00:15
X1? You missed. Are we playing Battleship now? Yeah?
Sean Martin 00:23
Did you sink… did you sink my A-I?
Marco Ciappelli 00:26
A-I, that's my thing, but it's not going to be one of the coordinates. Okay, we have started, as usual. But I want to welcome everybody to Redefining Society. This is an episode that really intersects with a technology that's starting to get into everybody's computer, and it's not just the computer, not just technology. This one, AI, is really starting to interact with us, and everybody's talking about it. Is it TGP? GTP? PETG? Sean, what is it?
Sean Martin 01:03
We'll go with ChatGPT. All right, let's stick with that one, for the most part. We'll see if we stick with it. But yeah, it's something I've been playing around with, and I'm excited to have this conversation, because there's the graphical interface that many have probably seen, but there's so much more behind the scenes: systems and algorithms and workflows and processes and all kinds of stuff that I'm sure we're gonna get into here. Hours of conversation will condense down into a little soundbite for folks to jump in on and listen to. I am not AI today; this is the real me talking. So it's gonna be full of garbage, I'm sure.
Marco Ciappelli 01:53
Well, and that's okay, because we're going to have an open conversation; we're not going to get too technical. We're going to talk mostly, I think, about privacy, and also give an introduction to this new technology. And is it really new? Let's pose that question to our guest as well. I'm planning to have many conversations around this artificial intelligence coming into our life, and this is the first one. It's gonna be with Arjun. Welcome to the show. How are you doing today?
Arjun Bhatnagar 02:29
Good. Thank you. Thanks for having me.
Marco Ciappelli 02:30
And is it really you? Are you passing the Turing Test already? Or?
Arjun Bhatnagar 02:35
This is the real me today, I promise. But I love that story, right? I did try to have an AI of myself. Yeah. Yeah.
Marco Ciappelli 02:44
That'd be a good story; we can start with that. Well, let's start with the real you. Introduce yourself to the audience, and then feel free to go into the story. I love stories.
Sean Martin 02:57
Just quickly, Marco, you're making my point that sometimes reality is better than the artificial one.
Arjun Bhatnagar 03:07
Yeah, I enjoy the conversation. Well, my name is Arjun Bhatnagar. I'm the CEO of Cloaked, where I run a consumer privacy company focused on putting you in control of your data and your identity. And I just generally love technology. I've been in the industry for 16, 17 years, and I've been passionate about a lot of different areas, from physical to electrical to programming; engineering as a whole has been really exciting.
Marco Ciappelli 03:31
And is it getting more exciting, or is it getting scarier, in your opinion, for the user?
Arjun Bhatnagar 03:38
I think it's definitely getting both more exciting and scarier, exactly. And I actually have a thought around that. I think all technology right now is approaching the idea of a privacy limit: technology is getting smarter and smarter and smarter, but it's eventually going to hit a wall where privacy becomes the main limiting factor. Something I think about quite a bit is that we won't actually be able to get smarter until we solve the privacy challenge. And that privacy challenge can't be "well, let me just trust Google." The thing to think about at the end of the day is: how is privacy rethought with consumer control and consumer consent at the core? That's just one way to think about it. I'm also happy to talk about my old background in AI, because I had a lot of fun exploring that same idea about an AI of myself and how that works. But I can go in any direction.
Marco Ciappelli 04:36
No, no, let's start from that one. Let's start with your AI version. Tell me about that one.
Arjun Bhatnagar 04:42
Yeah. So I've always thought a lot about AI. I have a little notebook called the Book of AI where I write pages every day with my thoughts on how to construct and think about it. Back in 2020, I finally got an idea of how to start something with AI, and it started from the data side: I wanted to figure out what all of my data looks like, where it exists, what it means. So what I actually ended up doing was, I bought a Mac Mini, put it in my apartment, and decided to write integrations into everything about me. My basic data: my Google Calendar, my Facebook data. I hacked iMessage. I connected all my bank accounts and credit cards, my workout data, eating data, movement data, GPS, whether I'm sitting or standing. Everything about me, I put in this box. And with a background in machine learning, I wrote some basic models to start understanding my life, trying to emulate me and figure out the trends, the patterns, the history. At first I pointed it towards my workouts and my social life, and it figured out: okay, you missed your workout these two days, do 15 push-ups between these two meetings, do pull-ups between these other meetings. It actually started figuring those things out just by looking at wide data. It looked at my spending habits and started understanding that, oh, a repeat behavior is buying alcohol and Chinese food, and then saying, well, we should cut that down a bit. And it was really…
Sean Martin 06:19
There was a product release coming up, one of the two.
Arjun Bhatnagar 06:23
It figured it out because I spent $10.57 every day at this Chinese restaurant. And it was like, hey, there's this trend, and I think maybe you should nip it in the bud. And then the biggest AI feeling came when I connected it to my iMessage. What really happened was a few different things. One, and this is how the idea of Cloaked came about: I was at lunch with somebody, and I'd put my phone down to have lunch. I picked up my phone and realized my really crude AI had had a full conversation with my then girlfriend. It said "I love you" to her. It sent her memes. It went back and forth talking with her. It wasn't too smart; the joke is that the conversation was just right enough, and its capabilities were just right enough, for the conversation to match. But it had a full conversation with her, and it was over by the time I picked up the phone. That kind of hit me: what just happened? And similarly, I haven't shared this part with a lot of people, but my friends also texted it all the time. They'd say "I'm sad" or "tell me a joke," and it would try to use my previous responses to figure out a new response. This was back in 2020, and ChatGPT has now gone way beyond what I had originally done. But it was an interesting approach, where one friend said, oh, for a solid minute I thought I was talking to you, and the conversation was going back and forth. For me it was exciting to see the reactions, what people think of it when they talk to "me." But then it's also scary. Like Sean said earlier, I remember that moment when I saw it had texted my girlfriend, and I said, wow, what just happened? And two things hit me. One, I realized I had built all this stuff on this box, but I don't own any of my own data. And two, I love technology, I'm a technologist,

and I wouldn't trust Facebook, Amazon, or Google to make something like this. And if I didn't trust it, well, that's a problem. I love to be on the cutting edge, trying different things, and it made me nervous that I had built this on my own, in a little box, without doing too much work. So that got my brain going: somebody has to fix and solve this problem, otherwise the next wave of innovation won't ever come; it'll be stuck. We've already seen it today: people are concerned, people won't want to engage, and that makes the technology dumber, unable to actually proceed.
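[Editor's note: the spending pattern Arjun describes, a $10.57 charge showing up day after day, is the kind of trend a very simple script can surface. This is a minimal sketch with made-up data and a hypothetical detector, not Arjun's actual code.]

```python
from collections import defaultdict

def recurring_charges(transactions, min_occurrences=3):
    """Group transactions by (merchant, amount) and flag repeats.

    `transactions` is a list of (date, merchant, amount) tuples;
    any pair seen `min_occurrences` or more times counts as a habit.
    """
    counts = defaultdict(int)
    for _date, merchant, amount in transactions:
        counts[(merchant, amount)] += 1
    return {key: n for key, n in counts.items() if n >= min_occurrences}

# Toy data echoing the story: the same Chinese-food charge every day.
txns = [
    ("2020-03-01", "Chinese restaurant", 10.57),
    ("2020-03-01", "Coffee shop", 4.25),
    ("2020-03-02", "Chinese restaurant", 10.57),
    ("2020-03-03", "Chinese restaurant", 10.57),
    ("2020-03-03", "Liquor store", 18.99),
]

print(recurring_charges(txns))
# → {('Chinese restaurant', 10.57): 3}
```

Real systems add date windows and fuzzy amount matching, but the core idea, counting repeats in your own data, is this small.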
Sean Martin 08:59
Boy, so many places to go with this. I'm wondering… I don't want to get too technical; that's not this audience, necessarily. But I used to code way back when, I won't date myself, and I don't know that I could build an AI system. The fact that you did leads me to believe that certainly others can. And I don't want to diminish your effort here, but how easy was it to build that system? Was it a collection of open source tools and things, or how much did you have to build yourself?
Arjun Bhatnagar 09:37
I'm pretty proud of it: I built a nice little architecture, designed to be modular. But at the end of the day, not to diminish my own efforts, I think absolutely somebody else could do it. Maybe mine is a little more elegantly built, but anybody else could build exactly the same approach. We used open source tooling; I wrote a little bit of code, and frankly I'd built maybe half of the core. Then I hired two people off Upwork to keep building more of the integrations and connections. They were able to plug into the system I made and quickly add more integrations, rapidly, because I had built a little skeleton and architecture, so they were just plugging and playing. The fact that I could do that means there are people much smarter than me who could go way further; it was a pretty simple system on my end.
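[Editor's note: the "skeleton and architecture" Arjun describes, where contractors could plug in new integrations without touching the core, is essentially a plugin registry. A minimal sketch of that pattern follows; all names are hypothetical, not Arjun's actual design.]

```python
# Minimal plugin-registry pattern: each data source registers a fetcher,
# and the core loops over whatever has been plugged in.
INTEGRATIONS = {}

def integration(name):
    """Decorator that registers a data-source fetcher under `name`."""
    def register(fetch_fn):
        INTEGRATIONS[name] = fetch_fn
        return fetch_fn
    return register

@integration("calendar")
def fetch_calendar():
    return [{"event": "standup", "start": "09:00"}]  # stub data

@integration("workouts")
def fetch_workouts():
    return [{"type": "pushups", "count": 15}]  # stub data

def collect_all():
    """The core: pull from every registered integration."""
    return {name: fetch() for name, fetch in INTEGRATIONS.items()}

print(sorted(collect_all()))
# → ['calendar', 'workouts']
```

Adding a new source is just one more decorated function; the core never changes, which is why hired help could "quickly add more integrations rapidly."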
Marco Ciappelli 10:28
So let's connect this experience of yours with privacy, but bring it back to ChatGPT. What makes ChatGPT so special, that everybody's talking about it, is that, number one, it's pretty good at what it does. It also fails at what it cannot do, though people try to make it do it anyway. It's not sentient; it's not understanding the language. But it's pretty good at putting one word after another, and it's kind of a probability calculation, right? I'm bringing this to privacy because, in order to do this and be so good at it, if I'm understanding correctly, and maybe you can explain, it's the sheer amount of data they fed this thing. People say the whole internet, the whole Library of Alexandria, is in there. So while it may be relatively simple to get started, if you want to reach a certain level, you need a powerful machine and a large amount of data. Am I understanding this correctly? And I'm pretty sure the jump to privacy here, and to copyright, is pretty simple. So what's your perspective on this?
Arjun Bhatnagar 11:53
Yes. Just addressing both of the things you're alluding to. First, I think it's exciting that it works well and you can have interesting conversations. The topic you're hinting at, the big one, without even being technical, is a simple distinction people can connect with: what is ChatGPT versus the actual thing, AGI, artificial general intelligence. This was the first time that, engaging with a piece of software, you got a feeling, a taste, of artificial general intelligence. Now, to demystify things: the source of the frustration is that ChatGPT is not artificial general intelligence. It actually is not intelligence at all. It is a conversational model based, exactly as you said, on a lot of data. And just to give people context: GPT-3 is the model, which is trained generally on a lot of datasets. This is the Library of Alexandria; it's also trained on a lot of publicly posted data and information, whatever it can find on the internet, a lot of information brought together. ChatGPT is then GPT-3 specifically tuned to feel like a chatbot, like a conversation, using conversations from chat transcripts, dialogues, messages, all these sources, to make it a little more tuned to feel like a conversation. Because it's so well tuned, it feels like a natural conversation, but it doesn't know anything, because it's not actually pulling from a database. It's, as you said, probability: it's trying to figure out, based on what you've said, "here's my model that I'm trained on; when I get x input, very dumbed down, y output," based on the parameters it's tuned on. Now, part of the problem to think about as well: I think this is a great first step, but you can't suddenly say that this is now intelligence. You have to take a step back.

But seeing where intelligence heads from here: I think ChatGPT is pushing the boundaries of where you can use it. People are using it in their work, trying to be more productive; even in their dating life; in general conversations, support, etc. It's an interesting application. But I think the thing people forget is where all this data came from. That's why I'm telling you about the privacy of publicly posted data. I haven't experimented with this part myself quite yet, and they're working on this actively, but it probably knows things about you, because it's gathered information from a lot of public data sources. And your information is, sadly, in the privacy world, brokered in many places. It's sold, stolen, shared everywhere, and tidbits of information about you are part of this training model. They might be adding layers to make sure it doesn't expose or connect it, but it's there. The information is part of these systems. So what you have to think about is: what's my say, what's my control, over how this technology is evolving, especially when it relates to me, to information about me? Copyright is another big aspect, because that, as well, is a dimension of privacy: ownership of information about me, and then ownership of material that I've created. I have some mixed opinions on the copyright side, where I do think there's some transformative work happening, but the law and the legal world will figure out whether it is or not. I know DALL·E has some interesting court cases happening. But I do think there's some transformative work happening; it just depends on whether this transformative effect of AI really means it's a brand-new piece, knowing it came from that source.

Similarly, when ChatGPT learns from conversations about you, or things you've said, or public posts made on Facebook or wherever, well, if it knows some things, or future models are based on these things, how do you have control over it? That becomes the big question we need to answer as a society: what's my role in this? Right now our role is nothing, because the big tech companies say: we'll set the rules and everything, and then you will agree to them or delete us. Those are the only two binary options you have. And I really think we have to change that; "delete it or use it" can't be the only two options.
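[Editor's note: Arjun's "x input, very dumbed down, y output" description can be made concrete with a toy example. The crudest possible language model just counts which word tends to follow which and picks the most likely continuation. This deliberately tiny sketch is nothing like a real transformer, but it shows why fluent output does not imply knowledge.]

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    followers = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def most_likely_next(followers, word):
    """Return the highest-probability next word, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

corpus = ("privacy is a right . privacy is a habit . "
          "privacy matters to people .")
model = train_bigrams(corpus)

print(most_likely_next(model, "privacy"))  # → is
print(most_likely_next(model, "is"))       # → a
```

The model produces plausible continuations purely from co-occurrence counts; it never consults a database of facts, which is exactly Arjun's point about fluency versus intelligence.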
Sean Martin 16:23
So I tend to go down the technical route, but I'm gonna shift gears a little bit here. This kind of goes back to some of my early thoughts on artificial reality, and to Marco's point earlier that he wanted to hear about the real origin first, then the fake one. What I'm thinking here is: just look back at the past few years, with information, misinformation, non-information, erroneous information, whatever you want to call it. As we're feeding these machines, are we then feeding ourselves back what we want to hear? My biggest fear is that we end up with a space we've created that's so well defined that it's hard to break out of the definition, the box, that we've painted for ourselves. So it's kind of a big question, but how do you see that? Have we gone too far down this path, to the right or to the left? Can we break it and start over, or is it too late to switch gears?
Arjun Bhatnagar 17:36
Well, that's a really interesting point. I've actually talked about this before under the term bias, right? The interesting thing is that it seems really great, the conversations it's having, the topics ChatGPT and future models are able to create or generate. The problem is it reflects our own biases; it's been trained on us. And you also have to think about what it's pulling from. Often it's pulling from the majority, or the loudest voice, right? If there are a billion data points, and a lot of them come from a few sources speaking much louder than others, then it can be trained towards those biases. I think that's a very interesting challenge for us to think about, especially if you have competing models, where people say, okay, I'm going to train mine on these types of people versus those types of people. We saw this live; I'm guessing both of you might have seen the Seinfeld AI on Twitch that was banned because it brought up slurs, brought up some gender issues. The whole idea is that it's parroting the information it's being trained on, across different sources; this is bound to come up from these efforts. And I wouldn't even say it's because society as a whole reflects these issues; often it's a minority with that set of issues that can speak very loudly and bias the information. My vision is a little different about the future of AI, and it's actually where this conversation began. The idea of solving AGI as one general-purpose model, and maybe I'm not technically smart enough, so maybe I'm the wrong person, but I just don't think that will be practically useful for society. Because, as I always say, consensus kills greatness, right?

So if you take everything around you, average it all out, and bring that into one data system, you're going to kill the greatness that makes society unique, all the different personalities, and you'll get these biases, these pointy edges around topics of conversation, et cetera. The vision I think about is AI coming back to the people. My goal for my company one day is to work towards: what if I had an AI that's only trained on me, my information, things about me? It's my reflection of how I interact with the world, extending how I interact. So someone else's AI is completely different from mine. The idea of having one AGI, one general system, I think that model… now, I can be proven wrong; maybe we crack it and we build this brilliant thing that works. But I think we'd be plagued by those problems you brought up, Sean. And the way you start to break it down is to make the problem smaller: different people, different AIs, that then interact with each other, as opposed to one general system doing everything.
Marco Ciappelli 20:41
Wow, you just went so philosophical that I almost want to jump out of my chair, because I love this kind of conversation. And actually, you made me think. I'm thinking of the movie Her, if you remember, from years ago, where the AI really interacts with you. Now I'm thinking of digital twins, and how this would be strictly your own. In that case, you would somehow resolve the privacy problem, although who really owns your other you, unless you can really do it yourself and have it in a sandbox, right? But then, how much… I don't know, it's blowing my mind, because I'm thinking of a parallel digital society where our individual avatars are interacting. Are they going to replicate our society, or are they going to bring it to a completely different level? I mean…
Arjun Bhatnagar 21:39
I'm curious; I want to experiment. That's what I imagine; that's the magic.
Marco Ciappelli 21:44
I love it. It's just blowing my mind right now.
Arjun Bhatnagar 21:48
Our own podcast, right? You'd have three of our individual AIs talking to each other, and learning from each other, because they don't actually know each other. Their data sources, their information, are locked from one another, so they're actually learning for themselves.
Marco Ciappelli 22:03
So I think we're kind of learning right now how to interact with an AI, with a chat assistant. It could seem stupid sometimes to say, I don't know, "Hey Siri," which is now probably going to answer, but it feels like I'm talking to a machine. Maybe this is training for the next level, when you talk to the chat. Sean, I know you want to comment on something.
Sean Martin 22:31
Totally itching to jump in, because you were talking about technology and how it works, and specifically privacy being kind of a blocker. I have so many thoughts here, but just this idea that we're using these systems, and we're players in them, and we contribute to them, and we pull stuff out of them, perhaps. But are we actually learning, as humans? Are we able to keep up with what's being given back to us, or are we simply feeding something else, and we're just pawns in the bigger game? I don't know; it's kind of the earlier point of: what's the purpose of GPT-3 for society, if society can't really absorb and benefit from it?
Arjun Bhatnagar 23:17
You hit the nail on the head, because when I started my little AI box, the idea I called the Data Box, that was the name, and I had to change it because I realized, oh shoot, it was taken, my thought was: I did it because I wanted technology, AI, to help me understand me, because I can't make heads or tails of it myself. Sometimes, I remember, I felt sad for a while, so I took a journal out and started writing: I'm sad. And I tried to dissect why am I sad. There's work stress, personal-life stress, family stress, things going on, etc. But I was also thinking: there's data. Maybe there's a financial reason; I ate a burger yesterday; I got a call from my mom early in the morning and maybe something notable happened; I saw an Instagram post from an ex, or whatever. I was thinking, wait, there are probably clear signals about why my life feels different today versus tomorrow, why I feel better on certain days than others. I knew I couldn't figure it out myself, but I knew there's data that can actually map it out, because there are good days and bad days, great days and horrible days. I was trying to dissect myself: what makes one day better than another day? That's part of my thinking. With that idea of AI helping and working with you, it's totally possible for it to be beneficial to society; it just takes time. What we're trying to do as a business is change the fact that right now the whole world of technology and data is predatory: it's based on extracting from you, doing some transformation work, and selling it back to you. I think if you shorten the feedback loop between you and technology, so the loop can help you and work with you, people will pay for that. They would love a VIP-like experience in the way they operate their lives every day.

And technology can get smarter with actually smaller data. Because if you look at so much data, it kind of gets averaged out. But when the data is looking directly at you, it can say: oh, I've got blinders on, I understand Arjun, he is very simple; there's actually not that much data going on. There are credit cards, finances, life, movement, etc., but it's all connected in one small bucket. I totally think that's the case. It just takes time, and somebody's gonna solve it. My hope is that I'm part of that equation, or maybe someone else will be, but I know somebody will make it beneficial for society, less about advertising to society and trying to be a better ad engine.
Marco Ciappelli 26:05
So what you're saying is, the next product is going to be ShrinkGPT instead of ChatGPT. Something that just says, hey, alert, stress level coming up, right? It's almost like your Apple Watch saying, hey, your heartbeat is going up and your breathing is not right, is everything okay? That's gonna be the next one: take a pill, meditate. Now, I want to stay on the privacy thing here, because I know that's your expertise, but you're actually bringing it to a very philosophical level, which I agree with. All this interaction that we're learning to do is really a way to look inside ourselves. I made a point not too long ago, in another conversation where we were talking about ethics in advanced technology in general: we've never talked about ethics as much as we're talking about ethics now. Philosophy is coming back, psychology too; it's really a way to know yourself, kind of like the old "know thyself." Thinking about AI is thinking about what's human, and then we discover there are biases and all of that, and we go back and realize we're the ones in kind of a wrong society, in a way; that's why our AI is acting like that. But because of that, tell me, when you think about the perspective of the user, do you have any opinion on how we are using the chat right now? There's an incredible number of users right now that are feeding it even more information, because we feed it more information every time we interact with it. Where's the privacy risk there? Of course, don't give it your credit card and personal information, but other than that, it's hard not to express yourself. Even just by asking a question, say, I don't know, "write me a lyric for a song," which I've tried, and it's pretty damn good, you're giving away a piece of information about you. How is that handled,

considering it's free at the moment and you don't really own it?
Arjun Bhatnagar 28:25
Yes. ChatGPT basically keeps all the queries you make; if you go back, you'll see all your previous queries are there. Now, you can delete them, and they'll wipe the information; that's giving them credit that, okay, we trust that they do delete. I won't speak specifically to whether they do or do not, but I know that companies at large, when they do "delete," often still retain a copy, or it's still there forever. So I don't want to comment on whether they comply, because I can't validate it, but I do know that a lot of companies do maintain information. In that sense, for privacy: they would be foolish, given how the world currently works, not to use it to help improve their model and train on new data. Absolutely, when you write a song lyric, that's data they're using to understand their own product. They're now testing pricing, and they've used some of the ways people use ChatGPT to figure out why people are using it. I think that's also tied to the biggest realization that came from the usage of ChatGPT: people are using it like search, which is why Google is very afraid. TikTok was the first inkling of a little bit of fear in search: people are searching for things on TikTok to get answers, and they're not going on Google. And now with ChatGPT people said, wait, I can get pretty good, and sometimes completely wrong, fake answers.
Marco Ciappelli 29:53
Right? But people are gonna stick with it anyway. Yeah, right. They're gonna...
Arjun Bhatnagar 29:57
They're going to proudly treat it like search. And that came straight from the data they analyzed and studied: wait, people are willing to use this instead of search. That's why Microsoft is super excited, and Google is kind of scrambling to pick up the pace, saying, well, let's bring this into search itself, let's improve it, let's make it part of that experience. So your data is being used to train it and to understand a lot of different things. I always say this when I talk to people: be careful what you put out there. For me personally, I'm just a little data conscious; I've never been, say, a privacy aficionado. I'm not the best at using a VPN everywhere; my team is upset with me that I don't have three-factor auth and a special hardware key here. I consider myself still a layman in the sense that I'm very aware of security things, but I also like convenience. That's why, even in business, I always think you can't compromise convenience to have privacy; you have to find a way to have both. And with ChatGPT usage, you are giving it data, and you'll eventually have to figure out: do I keep engaging with it or not? Amazon, I know, and a few other companies banned ChatGPT, because people were having it write code and sharing source code from their code bases into ChatGPT to help resolve bugs and so on. And the companies said, no, you can't, it's proprietary. Just like personal data that's proprietary to you. If you wrote a nice little opening line for Tinder, they're studying that, looking at it, and figuring out how it could be used for dating or other things.
Marco Ciappelli 31:42
Sean, how much privacy have you given up? Yeah, let's see. How many beans have you spilled on ChatGPT?
Sean Martin 31:48
Well, loads, loads and loads of beans there. What I'd like to know, and I don't know if you have any insight here, Arjun, is this: you can have multiple chats in ChatGPT. Do you have any sense of whether those are separate? Because one thing I noticed when you're having a conversation... let me step back. I'll refer to, I don't know if you saw it, the Ryan Reynolds commercial, where he had ChatGPT create a script in his voice. Using that script, ChatGPT clearly mined a bunch of stuff about him, the way he presents himself and the way he creates advertising. So there's that information about him, presumably. Now, I have multiple chats in my view, and I'm separating them by different things: scripts, transcripts for podcasts we do, or analysis of some information that I'm reviewing, whatever. I'm kind of mixing multiple things here, but there's my profile in there, where I'm presumably feeding it stuff about me collectively. And then in a conversation I could presumably leverage Ryan Reynolds' information to create a video for ITSPmagazine in the voice of Ryan Reynolds. So how does that all come together? Where are the lines drawn? And specifically, because I'm on the tech side, I was just wondering: are the individual chats separated?
Arjun Bhatnagar 33:38
Yeah. So I actually discovered this by just accidentally messing around with ChatGPT. I went into the inspector tool in Chrome, where you can look at the Network tab. Whenever a piece of software makes API calls or talks to a back end, you can see, in the browser, what the call and the response are. So, just on the tech side, I saw that each chat has a unique conversation, and basically threads of messages underneath it. Or maybe flip it: it's a thread with multiple messages, but they group them as individual conversations. Each chat is different, with its own back and forth; they do separate that out, and you can clearly see in the response from their API that they're grouping these as individual chats and tracking which chat is which. As for whether they use it so that Ryan Reynolds' data is affecting mine: as someone in privacy, I could say most likely, but on the legal side I can't definitively say whether they are or not, because I can't validate it. Companies by and large will lean toward it, though, and I'd say it's the right idea, coming from my background: they want to make a better experience. So yes, they want Ryan Reynolds' data so that your experience is better. However, my take is always that it's great to build a great consumer experience, and that should be the mindset, but what we need to change society is consumer control, so people have a voice in all these things: how it's interacting, what's being shown, what's being shared. Don't share that I talked to my girlfriend last night and asked for help sending a text message. I've even heard of people using ChatGPT for help when having a rough patch in a relationship, how to talk through it, or for explaining difficult concepts.

And I think you should be able to say, "Hey, please don't put that in your dataset," or "Don't share anything," or, when you do want to share: "This conversation I had was great. I was using you to study things about AI, and we had an interesting back-and-forth in ChatGPT. Yes, I'm willing to share that." Really, from my perspective, it comes back to this: consumer control plus the pursuit of a better experience is the right combo. I think the world of privacy is in a weird state, where people say, let me go delete, let me hide, let me stay anonymous. I think the word should really be control, and making it simple. People make it very complicated. You can give someone 35 dials and toggles; that's not control. Control is having an easy say in how these things work.
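As an aside, the kind of inspection Arjun describes, watching the Network tab and seeing messages grouped under per-conversation identifiers, can be sketched in a few lines of Python. The JSON shape below is purely hypothetical (the field names `conversation_id`, `role`, and `text` are invented for illustration); it is not OpenAI's actual API schema.

```python
import json
from collections import defaultdict

# Hypothetical capture of chat API responses, like what one might copy
# out of the browser's Network tab. Field names are invented.
captured = json.loads("""
[
  {"conversation_id": "conv-a", "role": "user",      "text": "Write me a song lyric"},
  {"conversation_id": "conv-a", "role": "assistant", "text": "Here is a verse..."},
  {"conversation_id": "conv-b", "role": "user",      "text": "Summarize this transcript"}
]
""")

# Group messages into per-conversation threads, mirroring how the UI
# presents each chat as its own separate back-and-forth.
threads = defaultdict(list)
for message in captured:
    threads[message["conversation_id"]].append((message["role"], message["text"]))

for conv_id, messages in threads.items():
    print(conv_id, len(messages))
```

The point of the sketch is only the structure: each chat is keyed by its own identifier, so the client can render them as separate threads even though they all travel over the same API.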
Sean Martin 36:19
Well, I know we're getting close to time here, and we've barely scratched the surface, but let's go to the use of the API. I have to go there because it definitely connects to control and experience. It's not just me, I presume, putting this stuff in and getting some information out to help them build their algorithm. As you noted, they're selling this stuff; there are APIs into this system and into the datasets that we're continuously feeding. So for the general population, how does that look from a privacy perspective? It's not just feeding this OpenAI company with information; presumably there are many companies accessing it to create experiences off the data, whether you're benefiting from it or not.
Arjun Bhatnagar 37:12
Yeah, that's actually a really good point. I'll step beyond just ChatGPT and talk about companies generally, because they all operate similarly: these businesses are building on top of each other, and data is being shared in different places. I always say there's no absolute privacy in the real world. Just because you trust a company doesn't mean your data stays there; even a company doing all the right things might use other software, like sales CRM software or customer support software, and those tools then start to share data throughout, and those companies in turn share data in different places. So, bringing it back to the concept of the API: there will be data sharing; it's bound to happen. Facebook did this originally. They opened up a developer portal where you could access customer data; you could let somebody sign in, and then you could access a lot of data. The whole Cambridge Analytica scandal around 2018 was about that exact concept: a business was effectively able to influence the entire world, an entire election, and it was all done with data. They could target people specifically, dissect them, see the way they think, because they were interfacing with Facebook's data, and Facebook was giving access to information that flowed all the way down to this company, which then aggregated it, targeted with it, and used it to manipulate people. From this perspective, there will be data sharing here too. I believe OpenAI started as a nonprofit, or some mix of nonprofit and for-profit, and they took investment, and there will be businesses wanting to build on top of it. Data sharing will happen. So it inevitably comes back to: be careful what you share. I think that's the safest assumption.

And even in the wider world of privacy: be careful. Businesses that are good, that are doing great work, might still use other businesses that then spread your data everywhere.
Marco Ciappelli 39:09
Yep. I've got to wrap up this conversation, and I hate it, because I want to keep going. But that's exactly what I said at the beginning: there are going to be a lot of conversations, and we've barely scratched the surface. We tried to focus mostly on privacy, but we came up with a lot of philosophical thoughts, sociological thoughts. And again, I go back to this: it's a way to learn about ourselves and also to figure out how our economy works. I'll just touch on the last piece of the conversation, because it's fresh in my head, to finish: it's about finding that balance between being a good company and making money, because you're a company. But obviously, you have to start looking at how powerful this tool is. As I mentioned before we started this conversation, I saw a presentation by a professor of computer science who called it something like a very interactive parrot, because in reality it's just having a good conversation with you; it's not really creating knowledge or anything like that. But it's there, and some people are going to make money from it. So I think it goes back to control. Control from a cybersecurity perspective is one thing; from a human perspective, I think control is knowledge. I have to say that if you don't know how to play with this, maybe try to figure it out a little bit before you go all in. You know, touch the water to see if it's really, really cold before you actually jump in. And be smart about it, because, yeah, somebody's making money out of it. It's helpful, but it's a tool. That's the bottom line: it's a tool. So, Sean, we're going to keep this conversation going, and maybe have some panels.
Sean Martin 41:14
I always say "one more question, Marco." But I think now it's going to be a constant: one more episode.
Marco Ciappelli 41:19
One more episode. Oh, there are going to be a ton. And we haven't even talked about DALL·E, music, ML, and all of that. So it's going to be a never-ending story. I almost asked you about Web 3.0, and then I said no, I'm not going to open that can. So maybe for the next conversation.
Arjun Bhatnagar 41:36
Sure. We can have a Web3 session.
Marco Ciappelli 41:39
We'll do an episode about that. So that's it: 41 minutes and 45 seconds. I want to thank Sean for joining me on Redefining Society, because I need his cybersecurity brain in this kind of conversation. And Arjun, this was great. I really enjoyed your AI story and the personal experience you've had with it. I hope the audience enjoyed it as much as we did, and that they have even more questions than answers than before they started listening. Now they're probably going to dig into what this interaction with artificial intelligence really means. With that in mind, again, thank you very much. There will be notes with the episode, and stay tuned to ITSPmagazine for many, many more of these awfully engaging conversations. Thank you very much.
Arjun Bhatnagar 42:39
Thank you for having me. I appreciate it.
Sean Martin 42:43
ChatGPT... peace out.