Erica Irvin (Lowe's): the new boundaries of the discipline
TRANSCRIPT
SM
Erica, thanks for joining me.
EI
Thank you for having me.
SM
I need to ask you: how do you see the evolving boundaries of the discipline? You've got people from marketing using personal data. You've got data that will no longer be personal because, in aggregate, you'll be able to use it in many ways. Then someone from privacy is going to step into that data, into dealing with that data governance. Then someone from AI is going to do it as well. And then someone from marketing compliance is going to step into it too. You're going to have three people fighting for the same ball. What do you think?
EI
So I really think that rethinking what we call the function is probably the first step, right? And limiting that function to the privacy office or the chief privacy officer really doesn't work anymore because we are not just dealing with privacy compliance, executing on privacy rights requests, right? We're involved in the entire customer experience. We're involved in data governance.
We're involved in conversations around ethics and the ethical use of data. We're helping design the AI experience. So it's so much more than just privacy compliance, right? We're helping our marketing teams understand how they can collect, use, and share that data that also impacts other pieces of the business and our AI use cases as well.
So I think we need to rethink why we're still calling it the Chief Privacy Officer, or why it's still the Privacy Office. And that's important because branding is important, and how you position yourself, or how you are positioned, is important. The title Chief Privacy Officer really makes it seem like it's simply a compliance role.
It really doesn't capture the fact that we're an enabler. We are facilitating the business. We can inject velocity into the business. And we're really critical to the business, because the business is reliant on data, and we've got to help them figure out how to navigate that. So I don't know that "Chief Privacy Officer" does the job anymore.
SM
But then there is the element of technology neutrality. I don't know how you see this; it's pretty controversial. In Europe it has been very controversial as well: are we going to regulate every time something new comes up? Now we have Agentic AI. Will we have an "Agentic AI Act" in Europe, in California, wherever? How do you approach this idea of being technology neutral? Do you think it's a good idea? And if it is, how can you cut through that, given that in the end some of these laws will not be technology neutral?
EI
Well, I think about that at least in terms of implementing in an in-house environment, right? If you have good processes and good policies and you're asking the right questions, then whether you're talking about managing email or managing AI, it shouldn't matter, right? Security is security. So we're going to work with our security team to make sure that they're securing the data in accordance with whatever the encryption standards are, based on the classification of the data, and they should be doing that regardless of the system. So I think that's just true across the board.
It may be difficult to do because the regulations may not reflect that. They're going to create other standards or other hurdles for us to get through, or at least make us feel that way when we read them, because the regulations can be so nuanced. But really you should just be taking really good practices and carrying them through.
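(To make that "security is security" idea concrete, here is a minimal sketch in Python of a technology-neutral control: encryption requirements keyed to the classification of the data rather than to the system that holds it. The classification names and standards are illustrative assumptions, not Lowe's actual policy.)

# A technology-neutral control: the required encryption depends on the
# data classification, never on whether the system is email, loyalty, or AI.
ENCRYPTION_BY_CLASSIFICATION = {
    "public":       {"at_rest": None,      "in_transit": "TLS 1.2+"},
    "internal":     {"at_rest": "AES-128", "in_transit": "TLS 1.2+"},
    "confidential": {"at_rest": "AES-256", "in_transit": "TLS 1.3"},
    "restricted":   {"at_rest": "AES-256", "in_transit": "TLS 1.3"},
}

def required_controls(classification: str) -> dict:
    # Same lookup whether we are securing an email archive or an AI training store.
    return ENCRYPTION_BY_CLASSIFICATION[classification]

print(required_controls("confidential"))  # {'at_rest': 'AES-256', 'in_transit': 'TLS 1.3'}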
SM
Now, even more difficult: vendors. If you have your own standards, how do you propagate them through third parties and vendors when you cannot really see that much about them? How do you approach that?
EI
Third-party due diligence, third-party management, having really strong processes there, I think, is important. But it's also critical for us to educate our clients, who are kind of the first line of defense. They're selecting who we're partnering with, they're developing the relationships with these vendors, they're having all of these conversations. So it's about making sure there's a cultural fit, making sure they really understand whether that partner thinks about the customer experience, or about the ethical use of data, the same way we do. It has to start at that level. So part of the job is making sure that we're educating our clients, educating the business, as they're selecting their third parties. That's going to make it a little easier, right?
But then you also need a really robust third-party vendor management or due diligence process that cuts through all of the layers. So it's not just the technology team and not just security, but also corporate compliance. You need to think about OFAC controls and, I mean, all of those things. And we need to be making sure that we're aligned on how we're building that technology, because we're builders too, right? Some companies are builders of technology as well, and we certainly are. You want to make sure you're aligned not just from an SDK or API perspective (can we integrate from a technology perspective?) but really culturally and philosophically. I think you have to be.
SM
When you are a company with a strong IT team developing their own things, there's more and more reliance on open source and reusable components. How far can you go in terms of auditing, for example, AI modules? Or when people are bringing their own models, how do you approach that?
EI
It's important to have that technology background, or at least be comfortable asking the questions, because part of our role is to make sure that we can demonstrate compliance with some of these responsible, ethical AI principles, right? So when you think about demonstrating transparency, accountability, explainability, there are certain ways that you need to do that.
You know, it's interesting. I was actually having a conversation about the data minimization rule out of Maryland, and talking about how those rules are going to require us to document the purpose for collecting the data. So we've got to say: we're collecting this data, and we're going to use it in this way as we engage with our customers.
That is very distinct from an AI explainability requirement. I might be able to do the data minimization piece in an Excel spreadsheet: I collected this data, we're using it for this specific purpose, we're using it in this market, from a loyalty perspective, a marketing perspective, an advertising perspective. That's very clear. That's easy to capture.
I realized, as I was saying that so close to discussing explainability in AI, that it's not the same. So, going back to having a technology background: when you're helping the product teams develop these products or services that are based on AI or algorithms, the documentation is different. There, we're looking at prediction logs, we're looking at model cards, we're asking for the logic, and they've got to be able to document that. And that's very, very different from the other. So I think the privacy team, or the lawyers supporting it, really need to build up their technological understanding.
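(A minimal sketch, with hypothetical names and fields, of the contrast described here: a data minimization purpose register can be flat, spreadsheet-like rows, while AI explainability documentation pairs a model card with per-decision prediction logs that capture the logic applied.)

# Data minimization documentation: flat rows, easy to keep in a spreadsheet.
purpose_register = [
    {"data_element": "email", "purpose": "loyalty program enrollment",
     "market": "US", "shared_with": ["email service provider"]},
]

# AI explainability documentation is structurally different: a model card...
model_card = {
    "model": "product_recommender_v2",  # hypothetical model name
    "intended_use": "suggest products to logged-in customers",
    "training_data": "aggregated purchase history, 2022-2024",
    "known_limitations": ["cold start for brand-new customers"],
}

# ...plus prediction logs recording the logic behind individual outputs.
prediction_log_entry = {
    "model": "product_recommender_v2",
    "input_features": ["recent_purchases", "region"],
    "output": "recommend: garden hose",
    "top_factors": ["recent_purchases"],  # the 'logic' a reviewer asks to see
    "timestamp": "2025-05-01T14:03:00Z",
}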
SM
Very good. Now, not specific to Lowe's: how far removed do you think you, or compliance and privacy teams in general, should be from the product? We've had an interview on the idea of the product counsel with Linsey Krolik from Santa Clara University, and we were talking about how you want to be really agile and fast. You don't want to go back and forth with the privacy team asking whether something is okay. So I've seen some teams, in the case of Europe, having a DPO, I call it a DPO emissary, an envoy who is embedded within one of the product teams so that they can really move fast. There's no need to go back and forth, but they're looking at everything. And Linsey was explaining how the product counsel is already there within every product. But in some companies that's very hard; you're not going to have a product counsel for every product, because it's not a startup.
So how far removed do you think is optimal, if you want to make sure that you're not slowing people down and that you're not perceived as an enemy?
EI
We definitely do not want to be the department of no. A particular mantra of mine, a principle for my team, is that we inject velocity into the business. We want to be moving quickly and we want to help them move quickly. We absolutely do not want to get in the way. And we have also come up with this idea of product counsel and embedding them in the technology team. I think it's really helping us see issues more quickly, and also identify for the teams more quickly the things they need to navigate, think about, and design around. If they've already designed it, already figured out their architecture, already done those things, and then we see it, they've got to go backwards. So they're not moving quickly; they've got to start those sprints over. But if we're with them at the beginning of the sprint, helping them understand what they need to navigate and what the end user's experience needs to look like so that it mirrors what the regulators want to see, that helps, I think. So: as closely as possible.
SM
So now you're talking to these teams, and to make it even faster, we need to make sure we speak the same language. How do you see internal training for people who are not lawyers and not privacy professionals, so that you can speak the same language?
EI
So the training or the conversations… Sometimes I have to take on the burden of figuring out their language. But it has also helped to have people on my team who do speak their language. So: bringing in someone who is, say, an engineer, but with a privacy bent, someone who has had privacy training, to be on our team. They're the translator.
And I've had that happen in past lives, actually, where I kept asking for a data map. I kept asking the data governance team for a data map. I was like, I need a data map, where's the data map? And they kept saying they had a data map, and they would come to me and show me this very complex thing, but it was for hygiene, it was for lineage, so it looked very different from what I'm looking for, as a privacy lawyer, in a data map. Well, when I got the person in the role who was an engineer, he came to me and said: we have a data map, I can make it look the way you need it to look. But I had wasted three months feeling like I didn't have a data map, right? That's a simple example, but having someone on the team who can speak the language is very important. And then making sure that we're doing our part to learn their world so that we can have those conversations, I think, is important. And being able to simplify it as much as possible.
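(A rough sketch of the translation that engineer performed, assuming a purely hypothetical schema: the lineage records a data governance team keeps can be reshaped into the per-data-element view a privacy lawyer expects from a data map.)

# Lineage as a governance team might keep it: flow edges between systems.
lineage = [
    {"source": "pos_checkout", "target": "crm",
     "fields": ["email", "purchase_history"]},
    {"source": "crm", "target": "marketing_platform", "fields": ["email"]},
]

# Privacy-relevant facts per field; these annotations are invented for illustration.
field_annotations = {
    "email": {"category": "contact data", "purpose": "marketing"},
    "purchase_history": {"category": "transaction data", "purpose": "loyalty"},
}

def privacy_data_map(lineage, annotations):
    # Reshape flow edges into a per-field view: where each data element
    # lives, what category it is, and why it is processed.
    data_map = {}
    for edge in lineage:
        for field in edge["fields"]:
            entry = data_map.setdefault(
                field, {"systems": set(), **annotations.get(field, {})})
            entry["systems"].update([edge["source"], edge["target"]])
    return data_map

for field, entry in privacy_data_map(lineage, field_annotations).items():
    print(field, "->", sorted(entry["systems"]), "|", entry.get("purpose"))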
SM
OK, very good. So now we're all aligned: you've got the product teams aligned, the technical teams aligned, you've got the suppliers and vendors aligned, because we've been through that. And now we get to the last part, which is the consumers, right? How do you see communicating to the consumer? Privacy notices, there are plenty of different approaches… How do you see that?
EI
Simple is better. Simple is better for everyone. Simple is better so the regulators understand what we're doing and feel comfortable, and it's better for the consumers as well. I really like this idea of digital trust, of building digital trust with our consumers, with our customers.
And with the regulators as well. We can do that in a few ways. We can do it with our disclaimers being in very plain English: being simple, being approachable, so that consumers understand what we're saying and don't feel like we're trying to hide anything. And by making the privacy policy very contextual and very simplified: saying, you know what, this is what happens with your data when you're in the parking lot; this is the data we're collecting here, how we're using it, and with whom we share it; this is what happens when you're in the store. Really making it contextual and easy to receive, I think, helps build that trust.
And we can really help effect that in our policy, in our online terms and conditions, in our marketing and our advertising. Really being transparent helps build that trust, and it gets people comfortable engaging with companies in the way that they'd like to engage with them.
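(One way to picture those contextual notices as data, with purely illustrative contents rather than Lowe's actual practices: a mapping from where the customer is to what is collected, why, and with whom it is shared.)

# Hypothetical contextual disclosures, keyed to where the customer actually is.
CONTEXTUAL_NOTICES = {
    "parking_lot": {"collected": ["license plate (camera)"],
                    "purpose": "lot and vehicle security",
                    "shared_with": []},
    "in_store":    {"collected": ["purchase items", "loyalty ID"],
                    "purpose": "checkout and loyalty rewards",
                    "shared_with": ["payment processor"]},
    "online":      {"collected": ["browsing activity", "email"],
                    "purpose": "personalization and marketing",
                    "shared_with": ["advertising partners"]},
}

def notice_for(context: str) -> str:
    # Render the plain-English, in-context disclosure for one location.
    n = CONTEXTUAL_NOTICES[context]
    shared = ", ".join(n["shared_with"]) or "no one"
    return (f"Right now we collect {', '.join(n['collected'])} "
            f"for {n['purpose']}, and we share it with {shared}.")

print(notice_for("parking_lot"))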
SM
OK, just to finish, I'm going to throw something hard at you. Something futuristic, just to speculate about the future. Imagine this Agentic AI future, where things always take much longer than we think, but let's assume that some builders, for example professional buyers from your stores, do not want to go through a manual process.
You already have this, really, in many ways, as a way to handle procurement and supply chain management. But picture a small gardening company, for example, that wants to have an agent that goes around, and it's not just buying tools; it's doing other things, buying stuff for their cars elsewhere. And these agents want to knock on your door and start talking to your systems. At that point, there's no consumer who is going to read any privacy notice, and in the end some of these people may have to expose some personal data as part of the interaction; the agent will be contracting on their behalf with your system.
EI
And we have no way to communicate in advance. You know, that's really interesting, this issue of consent. It's really around consent, which is a core principle across both the privacy and the AI regulatory landscape that we have now. Wow, it really is the consent piece. You need that first.
SM
Maybe the agent already has my permission, if I'm the consumer, to agree to certain things that respect my own boundaries, because I define my red lines. So it can move within them; it's got some wiggle room to negotiate, to agree to certain things, but it never goes beyond, for example, sharing sensitive data or a certain retention period. And then it meets your policy. There's a match, as we used to do in the world of P3P and those sorts of XML contracts: it would meet a system, and system to system, they'd be able to contract. But in the end there's got to be something internally that makes it happen, so I'm still trying to get my head around it.
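(A toy sketch of that P3P-style match, with invented fields: the consumer's red lines on one side, a machine-readable merchant policy on the other, and the agent consenting only where the two overlap.)

# The consumer defines red lines; the agent negotiates only inside them.
user_boundaries = {
    "share_sensitive_data": False,
    "max_retention_days": 90,
    "allowed_purposes": {"order_fulfillment", "warranty"},
}

# A machine-readable merchant policy, in the spirit of P3P's XML policies.
merchant_policy = {
    "requires_sensitive_data": False,
    "retention_days": 30,
    "purposes": {"order_fulfillment", "marketing"},
}

def policies_compatible(boundaries, policy):
    # Return the purposes the agent may consent to, or None if the
    # policy crosses a red line outright.
    if policy["requires_sensitive_data"] and not boundaries["share_sensitive_data"]:
        return None
    if policy["retention_days"] > boundaries["max_retention_days"]:
        return None
    # The agent's wiggle room: consent only to the overlapping purposes.
    return policy["purposes"] & boundaries["allowed_purposes"]

print(policies_compatible(user_boundaries, merchant_policy))
# -> {'order_fulfillment'}: transact, but withhold consent for marketing.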
EI
Yeah, it's interesting. I was talking to one of our outside counsel about neurotech, right? And it's a little similar, in that I don't fully know what I'm consenting to, because I don't know what that technology is going to capture.
So how can you feel really comfortable that you have gained the right scope of consent, or that we have drafted the consent in the right way? Even if we've collected the data, however that would work, the value of that data is in using it, sharing it, leveraging it in some way. And if we haven't really captured the consent even to collect it, then we can't do the things that really bring value.
We're going to be unable to extract the value, even if you could get comfortable collecting it. So you'd have to set up this two-step process, I guess, which just injects friction: you go back and say, okay, here's all the stuff I collected, and the person says, "you can't use anything," and then you come back. So that's really interesting.
The other thing I think is interesting about that: AI agents, or Agentic AI, on the web, where you ask them to do something for you and they go out and execute. They go shopping for you or whatever, but it's all online. Are there instances where there is a physical manifestation of that, something knocking on the door and physically interacting with, let's say in this instance, a consumer, a customer?
What are the parameters you need to put around that agent to make sure it's operating in a way that is consistent with how you want your employees to interact with consumers?
SM
Since you mention consent: what are the limits of consent in a world where everything is automated? How do you see consent? And will this maybe become the US equivalent of the legitimate interest debate?
EI
Yeah. Well, the thing is, your disclaimer has to be fulsome enough that you can say it was reasonable for the consumer to expect we would use the data in this way, because we made such a broad-based disclosure at the outset. But being able to articulate a legitimate interest also helps frame, for the product teams as they're developing, what the extent or the boundaries are, because we have established a legitimate interest, as opposed to trying to build something and guessing what "the amorphous they" would deem reasonable.
SM
Thank you.
EI
Yes, thank you for having me. This was fun.