Vancord CyberSound Podcast
Episode 99

Transparent Tech: Steering Consent in the AI Age

Companies must proactively consider the ethical and legal ramifications of AI usage, balancing technological advancement with safeguarding individual privacy rights and societal well-being. It becomes evident that privacy in the age of AI is a multifaceted and evolving realm, rife with complexities but also ripe with opportunities for thoughtful regulation and ethical implementation.

In this episode of CyberSound, co-hosts Jason and Michael welcome Rob McWilliams of Vancord, and William Roberts of Day Pitney LLP to delve into the intricate intersection of AI and privacy law. With a focus on demystifying AI and its implications, the conversation navigates through the evolving landscape of privacy policies, consent, and the burgeoning challenges posed by generative AI.


Episode Transcript

00:02
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.

Jason Pufahl 00:12
Welcome to CyberSound. I’m your host, Jason Pufahl, in studio today with Michael Grande,

Michael Grande 00:17
Great to be here.

Jason Pufahl 00:18
And we have some repeat guests. I appreciate Rob McWilliams, who has actually worked with Vancord for a long time as our internal privacy expert. So Rob, thanks for joining. And William Roberts from Day Pitney, who did a recent podcast with us on data privacy. I appreciate both of you joining today.

Michael Grande 00:36
Welcome.

Bill Roberts 00:37
Great. Thanks, Jason.

Jason Pufahl 00:38
So I'm going to start by saying my background is probably a little bit more in the information security space, perhaps a little less in the privacy space, and certainly less in privacy considerations relative to AI. So I think I'm going to lean a lot on both Rob and William today. In the conversation leading up to this, I think we agreed it would make a lot of sense to spend a minute on what AI is; is what we're talking about aligned with what the general population thinks of as AI? So, Bill, maybe I can just kick it over to you and say, spend a minute on what AI is, if you don't mind?

Bill Roberts 01:13
Yeah, certainly, I'm happy to. And thanks again for having me back. I want to start off with the idea that there is no consensus, straightforward definition of AI out there. Going back in time, AI was seen as, like, the smart robot, and I think now some people, probably overcorrecting, see it as something brand new. But AI has been with us for a long time, and most of us have been using AI in some capacity for years. I think it's best to describe it as a technology that uses a mathematical algorithm to perform a certain task. What's making AI buzzy now is that there's been a shift from the machine learning model of AI that we've been seeing for a while, which we use in legal research and filing and all sorts of things. The buzz now is really about generative AI, which is AI that can create something new: it creates a response, it generates something, whether that's audio, or pictures, or text. So those are the two pieces. AI is not new; sort of, you know, what is old is now new again. We're looking at AI in a more unique context here, and in a much more powerful way as well. I don't know, Rob, do you have anything to add?

Rob McWilliams 03:07
Not really. I know there have been some efforts made to define what AI is, but I agree, I don't think there is one that everyone now agrees on. And I certainly couldn't quote one of the definitions; I would get very tongue-tied if I tried. So I'm afraid I tend to go with the old, you know, if-it-quacks-like-a-duck definition. That's perhaps not very sophisticated for this discussion, but I'm going to go with it. I will not go so far as to say that we all know what AI is just by looking at it, but that tends to be where we are right now. And the interesting thing, going to Bill's point, is that from the privacy point of view, a lot of this stuff is not new. We've already had laws on the books for decades that deal with automated decisions, shall we say. And in the AI world, from the privacy point of view, that's going to be a lot of the discussion: automated decisions being made by machines. So it's not new, for sure. It is and it isn't.

Jason Pufahl 04:40
Yeah. So, you know, I'm going to jump a little bit outside of what we put directly in our outline, because I had an idea as we were speaking. I had an interesting conversation with a client where we suggested using some AI tools to do meeting transcription; so basically record the meeting and get a transcript when you're done. And honestly, the response wasn't that positive to begin with. But then, interestingly, they sort of segued into, well, we would be more comfortable with one vendor, and perhaps the way we think their privacy policies or privacy statements are constructed, versus another vendor. And I'm wondering, before we segue too deeply into privacy law, how are things like ChatGPT, Microsoft Copilot, and Zoom with its AI Companion constructing their privacy policies? Are they completely company-centric, or are they actually trying to protect their consumers in any way?

Bill Roberts 05:41
Well, that's a very loaded question, but there has been a lot of movement. So, you know, in the United States, the basic premise for data privacy rights when dealing with a particular company, or school, or agency is notice and consent, and that consent could be an opt-in or an opt-out. We could certainly have a big conversation, even more important in the context of AI, about the value of the consent, how meaningful consent is. But as part of this model, you provide notice, and then there's either consent, opt-out, or opt-in. Let's talk about that in terms of Zoom. So Zoom provides you a privacy notice, and your opt-out right is basically to simply not use Zoom. If you decide to use Zoom, or if you just decide to visit a particular website, you're inherently opting in, even if that's not necessarily transparent to the user. Those notices and privacy policies are living documents, as we like to say; they're updated quite often. And we did see, in 2023, quite a few companies make changes to their privacy policies to better allow the company to utilize the data being collected from a consumer for gen AI purposes. One of those was actually Zoom; they got a lot of pushback, and they may have curtailed or pulled back on some of that, don't quote me, I forget exactly how that all played out, but Google, too, updated their privacy policies. So privacy policies aren't static. That's something your client would need to be aware of; things are changing. Unless the privacy policy can be made part of, like, a services agreement, which is exceptionally unusual, it could change the day after you sign the agreement. And we are seeing that occur a lot as companies reevaluate: what is the data we're receiving, and how could that then be used for gen AI purposes?

Rob McWilliams 08:10
That makes sense. And just to add to that, very recently the FTC, I believe, has sort of warned companies that they shouldn't go too far in changing their privacy notices and privacy policies to do what Bill described, and they particularly raised the issue of retroactivity. I feel bad singling out Zoom, since we're here using that technology, but we'll use it as an example. If I provided my data to Zoom under a privacy notice that did not give certain broad rights to use the data for artificial intelligence, can they retroactively claim those rights by changing their privacy policy? I think the answer is probably not. So that certainly is an issue for organizations: ensuring that they have the legal right to use personal information, or any other data, but I'm talking privacy here, that they use to train and feed AI systems. And I'm not sure that ChatGPT and other, you know, generative AI has set a very good example with that.

Michael Grande 09:42
Bill, perhaps you can spend a moment on this. Rob spent a second earlier talking about existing privacy laws and how some things are sort of in place to deal with some of the unique aspects of AI, but are there challenges with privacy laws moving forward that need some consideration?

Bill Roberts 10:01
Yes, there are. Well, there are always challenges with privacy laws, but I think AI presents a few new ones, and it also amplifies some current ones, some of which Rob was getting at in terms of us being Zoom users at this moment. So let's take a step back. The fundamental way gen AI works is data goes in, there's an input, and there is an output. That's what privacy law has largely been designed to regulate: it regulates the collection of data, and then it regulates the processing, use, and redisclosure of the data. So on a fundamental basis, I think it's important to recognize that AI does fall within the general world of current privacy laws. There are some folks who argue we need separate privacy laws for AI. That's a very open question. Personally, I don't think that is the direction in which the law is going to go; I think we're going to see a shift in privacy law more holistically to new models, which will include AI, but I don't think we're going to have AI-specific privacy laws. So right now, a company that is collecting data for AI, using it, redisclosing it, processing it, should be looking at the current laws on the books that they're subject to, whether that's the GDPR in Europe, California, Virginia, et cetera. That's where we're at now.
But I mentioned that AI amplifies some of the problems with current law. Rob mentioned that we're using Zoom, and we talked about notice and consent, so let's just continue to talk about me now. I am a privacy lawyer. Did I read Zoom's privacy policy before I logged on to your invitation? The answer is no. Have I read it? Yes, but for client purposes, not my own. And if I had, say, an hour of free time and I decided to read the current policy right now, would I have any idea what it actually means for Zoom to use my data for AI purposes? No, I have no idea. I don't have the slightest clue what that means, and that's for a couple of reasons. First, AI is hard. It's a complicated topic. Do I know how it works? No, gosh, no. Is there anything Zoom can tell me so that I know how it works? I don't think so. I don't think there's anything that privacy policy could say that would educate me without me going to get my PhD in data science, and then maybe I'd have a chance.
And the last point, which I'd love Rob to pick up on, is that a lot of the challenge for AI privacy isn't necessarily in the data that's collected; it's the data that's generated. That's why I think gen AI is so important, so challenging, for privacy laws. Okay, maybe Zoom is collecting my name and the sound of my voice and my location in Hartford. That's fine; I can understand that, I think, to a degree. But I don't have any insight into what will be created about me. What data is created about William Roberts at Day Pitney? I don't think it's even possible for me to understand that, because AI is always changing. It's always coming up with new things. It's always learning. There are always new use cases. So some of those points, the transparency, the consent, that's where I think a lot of the challenge is going to lie, because you do need to comply with current privacy laws, but when you actually think about how these laws work in the context of AI, it's sort of like the round-peg, square-hole problem at times.
So that's the high level. But if you're looking to develop or deploy an AI system at your company, don't just assume that because there's no special AI privacy law you're okay; you need to be looking at the laws on the books, whether that's the CCPA, HIPAA, the Fair Credit Reporting Act, and so forth.

Rob McWilliams 15:30
Yes, absolutely. And I'd just throw in there as well what the key regulators are thinking and doing: what your state attorney general is doing in this area, what the FTC is doing in this area. Just to elaborate on the Fair Credit Reporting Act, for people who may not be super familiar with it, that's been around for decades now. It basically regulates anything that's considered to be a credit report, and it doesn't have to be for use by someone giving credit; it could be for someone employing you, or whatever, anything that gives an assessment of you. And it's very focused on adverse actions. So if an adverse action is taken against you because of a credit reporting agency's report, you have rights to be notified and rights to dispute that action. Now, this doesn't apply across the board; it applies to credit reporting agencies and their customers. But it's an example of a law that most definitely will apply to some of the things that the financial sector, particularly, will do with AI, and indeed is already doing with AI. Another one that's kind of interesting to bring up is the Rite Aid case, where the FTC actually banned Rite Aid from using facial recognition AI, I think for five years, because they had deployed it without reasonable safeguards. What it meant in practice, if I understand the story correctly, is that people would be tagged in the system as shoplifters, thieves, based on the technology, and that tagging was not accurate, and in fact it was discriminatory against certain groups of people. So as well as there being laws to consider, the FTC can invoke its unfair-and-deceptive mandate and take action, and it's certainly making noises that that's what it intends to do. So, yeah, it's going to be very hard to see how all of this will play out, but I do think those are the important points: existing privacy laws do cover AI, so if you're talking AI, you're not suddenly on a clean legal sheet; and the regulators, whether attorneys general or federal regulators, are going to be active in this area. It'll be interesting to see how effective that will be, whether fines and orders are going to change the behavior of companies deploying AI. Maybe, but with technology we always seem to be in a position where the use of the technology is moving much faster than the regulators are able to keep up. But we'll see.

Jason Pufahl 18:58
I'd say, when we spoke with the senators, that was basically their message: we're trying to catch up, you know, the technology usage versus the laws surrounding it.

Michael Grande 19:08
You always feel like you're playing from behind.

Jason Pufahl 19:12
Rob, if you could, and maybe briefly: I think you mentioned that the EU has been working on the EU AI Act, and I tend to think of the EU as being maybe more progressive on the privacy side than the US. Can you give a little bit of an overview of what some of those distinctions might be, and is that relevant to us here yet?

Rob McWilliams 19:39
Yeah, absolutely. Maybe I'll just start with a quick word on the GDPR, which in many ways was the first big comprehensive privacy regulation. It's definitely got its faults, and I don't think it's necessarily the model that the rest of the world should be following, but it does have the advantage that it's technology agnostic, meaning it's not trying to regulate a specific technology like AI or whatever, so it can avoid some of the pitfalls of that. For example, here in the US, we're trying to deal with the privacy implications of the internet with laws that relate to telephone tapping, which we're still dealing with. But the AI Act, at just a very, very high level, is intended to ensure that AI is safe and is respectful of fundamental rights and values. Who can argue with that? The EU wants to make sure that the Act applies across the ecosystem of AI. So if you are a provider of AI or a developer of AI, it will apply to you; if you are a deployer, an importer, or a distributor, it'll cover everybody. And perhaps most interestingly, it divides AI systems into risk categories, and the highest risk is just prohibited, end of story. An example of that would be real-time remote biometric identification in public spaces. So if you've got systems set up that are looking out and saying, oh, look, based on his eyes, that's Jason there at the ATM on the other side of the mall, that will not be allowed in the EU. Perhaps for most businesses, it's the next level of risk down that will be most interesting, and that's the high-risk systems. These are systems that might be used in the hiring process, or the employee evaluation process, credit scoring systems, that kind of thing. And I think the big obligation on a business there is going to be to come up with some kind of conformity assessment. I don't know exactly what that is, but I'm assuming it is a formal assessment of the AI system to ensure it complies with all of the details of the AI Act, to make sure that it is safe to use and legal to use. So that's the EU's approach. I would comment that I very much doubt the US will go completely in that direction, but I would hope that if there is AI regulation in the United States, it's done at the federal level, not at the state level, and maybe that's a vain hope, and that it would be fairly comprehensive and technology agnostic. But that's the EU AI Act in, hopefully, a nutshell.

Bill Roberts 23:11
Yeah, just to pick up on that, the AI Act in many ways is similar to the Rite Aid case, where the emphasis is less on notice and consent or one's individual control of their data, for all the reasons that I mentioned: I don't have the time to educate myself on it, I don't have the experience, I don't have the knowledge, I don't have the PhD, and I can't keep up with a fluid AI system. So it sort of takes the burden off of the individual and puts it on the company, sort of like your GDPR DPIAs, for example. I think that's largely what we saw with Rite Aid: the FTC saying, you know what, it doesn't actually matter if someone consented or was given notice, because this was just harmful. And I don't love the US approach, where whether something is harmful depends upon who's looking at it. I think the AI Act approach of giving some structure to this is very helpful for businesses, but it does lead to a whole host of questions like those Rob touched upon. What harm? Is it a harm to society? Is it a harm to me personally? Is it a benefit to Jason but a harm to Rob? How do you sort that out, and who decides the harm? For example, you talked about the senators. Is it really the people in the agencies deciding harm? Do they have the background or experience? Do you want to trust that to the companies, for example? So there are a couple of things I think still need to be sorted out. But I do think, to Rob's point, because AI is so complicated, for the reasons I gave in that Zoom example, we are going to be seeing more of a shift to a harm-based approach, which brings, like I said, a whole host of its own problems. So that's something to be thinking about when companies are saying, you know what, here in the US it's okay: we put someone on notice that we're going to be collecting their data, and we put them on notice that it's going to be used to train our gen AI and make inferences about them. Instead of that approach, which I know is often the basis for how the government seeks to protect privacy rights here, I think we're going to see a shift away from it, because it's so fictitious in many ways, toward what Rob was talking about with this Rite Aid slash European approach to harm. So I think what's important for companies to be thinking about now is: okay, we do have consent, we did give notice and they decided to use our product anyway, so we're all good, check that box. I think there needs to be a second step now to say, what are the harms? Even if there's no law telling you to be doing this, you do need to be aware of how this is going to be used, and think about how it's going to affect people as a whole or as individuals. The AI Act is maybe the future of law in the US, but companies should really be thinking about that now.

Rob McWilliams 26:53
Oh, I completely agree with that. And even outside of the context of AI, just in privacy in general, I think there is a big issue with the average person's ability to understand what a company does with their data, and therefore with the validity of a consent, even if it is an explicit consent, where you tick a box or press a button, never mind a consent that's just implied by your use of a website. You know, as Bill was alluding to, I make a living from privacy, but sometimes if I have a question about how it is that a particular platform seems to know something about me, and I try to work out where they've got it from, I find myself looking at Google's privacy notice, or Facebook's, and I can't work out what the data flows are behind the scenes. So I figure we need to find some way out of that, and into creating an environment where citizens can use AI, be subject to AI, but feel that their well-being is being considered by the system, so that they don't have to do the impossible and look out for it themselves. And this is a slightly humorous, although serious as well, aside: we were talking about what Zoom might collect and what it might do, and I was reading something recently about a new avenue for phishing, which was basically deepfake video calls, with fake individuals participating in a social engineering activity for a phishing purpose. And this is not something that might come down the road; it is apparently happening. So if video of us here, for example, were to get into the wrong hands, it could be used to create, say, a fake of me that would be used to ask one of Vancord's employees to move money from here to there. So as well as individuals being concerned about AI, obviously organizations have to be concerned about new avenues of mischief, shall we say.

Jason Pufahl 29:36
What proof do you have that Michael and I aren't being constructed in real time here already? You don't know.

Bill Roberts 29:46
That's very deep. But no, Rob picks up on a great point. We've been talking a lot about privacy and how a company that's looking to utilize a gen AI product or tool in its business should be thinking about privacy laws, but there is the flip side of this, which is the security side: any tool that can be used for good can be used for bad. And we are seeing a tremendous amount on the cybersecurity risk side, where, for example, phishing is becoming much more widespread. You can pump out much higher quality phishing attempts at much greater volume, social engineering is so much simpler with AI, and AI-generated phishing has been shown to be better able to get past your spam filters, for example. So there are some risks on the cybersecurity side too. But for privacy, I think we're really at the beginning stages. These challenges with privacy law in terms of transparency, consent, and notice aren't new; people have been talking about them for years, like the famous observation that if you tried to read every single privacy notice, you'd be reading them 24/7. But AI is now highlighting the insufficiencies of certain privacy laws: the challenge of not only knowing what someone is collecting about you, or knowing how it's going to be used, but knowing what they're going to be generating about you. And I actually was thinking, Rob, as you shared that, of a famous story, well, famous in the privacy law world, so I guess that's a huge asterisk, about Target. Target told consumers, I'm going to collect this data about you, and I'm going to use it for these purposes. Okay, great, simple enough. And one of the purposes was to figure out whether a particular consumer, based upon their buying preferences, may have a condition, and one of these was: maybe, based upon what this person is buying, they're actually pregnant, so then we can start marketing pregnancy supplies and baby supplies to them. For someone like me, I personally love when Amazon is like, hey, you have a dog, you should look at this. I'm like, oh, thanks, Amazon, this is super helpful. I am that person they're trying to target, because I just love when companies tell me things. So Target in this story started telling the consumer, who was a teenager, okay, you know, you might need baby supplies, and her father got it. The problem for Target wasn't collecting data about her or not providing notice; they were very, very clear under all the privacy laws. It's what they generated: they generated a pregnancy status, they told the dad, and he was like, holy cow, I'm going to be a grandpa, I had no idea. So this is where some of these risk-of-harm ideas that we've been talking about are going to be the driving force in US privacy law; I think we're going to see a shift. As Rob mentioned, the GDPR started the risk-based approach with the DPIAs, and we do have a risk-based approach now in the US too, of course, with breaches and things, but the AI Act is going to take that to a new level. I think that's the way US law is going to go too, and it's not going to be easy, because there are great benefits in telling someone, hey, you know, you may have diabetes, maybe you should look at this new tool.
There's great value there; people benefit greatly from the advice of experts, the advice of your doctor, and AI in healthcare, for example, has tremendous possibilities: to catch things that doctors are missing, to create knowledge, to create an inference and create data about someone wholly separate from what you've provided notice for and what you've collected. So that's going to be the future challenge: trying to make sure that government doesn't stifle the technology and all the good that it can do, while at the same time, as Rob mentioned, looking at some of these high-risk potential harms and trying to mitigate those ahead of time. So that's what we're going to be following, but companies should be thinking about it now.

Jason Pufahl 34:58
Yeah, I think that statement is probably a great way to close this: the technology is incredibly useful, and we simply need to make sure that we're using it in an appropriate and ethical way. I mean, that's a ten-word summary of a 35- or 40-minute conversation here. The fact that we just spent 35-plus minutes talking about privacy illustrates just how complex a topic it is. It's clearly evolving, and it's not necessarily unique to AI, but I think the changes to the laws are going to be largely informed by what AI does. Companies are writing privacy policies that make it easy for them to share data and use it creatively in the future.

Michael Grande 35:45
And I think a lot of companies are trying to figure out how to become more efficient and productive and utilize the tools available to them, and it's going to be hard to implement some of those controls on their employees as well, right? So there's a lot here; we could spend hours on this.

Jason Pufahl 36:00
I think we’ll welcome you guys back to talk more about this, too. I appreciate you joining. Oh, go ahead, Bill.

Bill Roberts 36:08
Oh, no, I'm always happy to join. Maybe someday we'll actually find something we disagree about.

Michael Grande 36:17
Yeah, not sure.

Jason Pufahl 36:18
We'll look for that. Alright, guys, thanks for joining. I really appreciate it. I'm pretty confident that we'll want to explore this even further. You know, I started the conversation by saying I was not a privacy expert; I feel like we just spent a lot of time on this, and there are still so many outstanding questions. So thanks very much. If anybody has questions or comments, feel free to reach out in your podcast platform of choice or on YouTube. Liking the episode is huge for us, so if you would, like and share, and we're really happy to answer your specific questions or come back with some more specific topics. Okay, guys, thanks for joining. I appreciate it. Thank you.

Rob McWilliams 37:01
Bye for now.

37:01
We’d love to hear your feedback. Feel free to get in touch at Vancord on LinkedIn. And remember, stay vigilant, stay resilient. This has been CyberSound.
