The following is a transcript of our recent interview with Professor Daniel J. Solove on Masters of Privacy. The original recording can be found on the podcast’s website, as well as on your favorite podcasting feed.
Sergio Maldonado (01:42.25)
Okay, today we have a chat with Daniel Solove about his new book on privacy and technology, but also touching on other things that he has been writing about for a while, like finding protection for privacy beyond secrecy, the need for a private right of action, or how much of a fiction user consent can be. In case you do not know Professor Solove, he is the Professor of Intellectual Property and Technology Law at the George Washington University Law School.
One of the world’s leading experts in privacy law, Daniel is the author of more than 10 books and 100 articles about privacy. He’s also written a children’s fiction book about privacy. He’s one of the most cited law professors in the law and technology field. Daniel has been interviewed and quoted in hundreds of media articles and broadcasts and has been a consultant for many Fortune 500 companies. It is to him that we owe the famous taxonomy of privacy harms, as well as very recent papers on privacy and AI or privacy and data scraping. Let’s go for it.
Sergio Maldonado (02:50.68)
Daniel, thanks for joining me.
Thanks so much for having me.
Congrats on your new book, yet another accomplishment. Congrats on On Privacy and Technology.
Thank you so much.
There are a few things that you’ve reviewed in there, and I understand you’ve aimed to address a wider audience, not just the legal community and privacy law professionals. But there’s one thing that you say that I’d like to start with, if it’s not too dense, which is that privacy is complicated, but the law is too simple. What do you refer to, as a warm-up?
Daniel Solove (03:34.582)
Yeah, I think that privacy itself is a very complicated thing because it means so many different things. And I think that the law often fixates on particular notions of privacy, notions that are antiquated and inadequate for the times that we’re in. And by doing so, it excludes from its protection
a lot of the most important dimensions of privacy that need protecting today. And I’ll give you just an example. The law is very fixated on the notion that privacy must be something secret, so that if something’s private, it’s got to be hidden away and no one else can see it or know it, or it can’t be exposed to the public in any way. And this notion
leads to cases that fail to protect a lot of data because it happens to be online or it happens to be shared with some other people. And it’s an excuse to give this information no protection whatsoever. And I think privacy means a lot more than just secrecy. I mean, in some cases people want to hide information, but in a lot of cases people want to
be protected from certain uses of information, uses of information to potentially defraud them or harm them. An example would be, you know, my voice is available on the internet. In many ways, it’s available to the public. Does that mean that I have no protections in my voice, that anyone can take my voice and use AI to replicate it and clone it and use it however they want? And I think that that’s
just one example of how the law, I think, unfortunately, is too simplistic with privacy, viewing it as binary, where it’s either totally private or totally public. I think that we really see so much in between. So recognizing the complexity of privacy and getting the law to catch up is essential if we’re going to address these issues in modern times.
Sergio Maldonado (05:55.352)
Thank you. So putting aside the simplicity of the law for now, you did write a paper on data scraping, which exemplifies this, right? Public data that some company could grab from anywhere. It’s public. It’s not secret. And yet it calls for protection. But something that you’ve written about, and that I really find very interesting: in the context of the U.S. and of the Fourth Amendment,
you did explain how the Supreme Court has been escaping this trap of what you call the secrecy paradigm. Can you elaborate on that, please?
Yeah, so for a very long time, the Supreme Court, and other courts as well in other contexts, but specifically the Supreme Court with the Fourth Amendment, which is the amendment that protects against law enforcement surveillance and gathering of data, held that the Fourth Amendment doesn’t apply to any police gathering of information in public.
Which means that the police can do it without any oversight whatsoever. There is no protection if the Fourth Amendment doesn’t apply. And the court uses a test to see if the Fourth Amendment applies. And that’s the reasonable expectation of privacy test. So the conception of privacy is built into the applicability of the Fourth Amendment. So the Supreme Court had, in a series of cases, said that if something is exposed to the public, it’s not private.
And for a very long time it pushed this very simplistic binary. And it would say, hey, if you share information with a company, so your phone company has your information, no expectation of privacy. What you put out in the trash, no expectation of privacy because you’ve left it for the trash collector to get to it. No expectation of privacy if someone took a beeper, a tracking device, and stuck it to your car.
Daniel Solove (08:07.03)
But in the last 10 to 15 years, the Supreme Court has finally started to rethink some of these old cases. And in a few cases decided in this past decade and a half, the Supreme Court started to backtrack. And it said that, in fact, you know, sticking a tracking device to a car was a violation of the Fourth Amendment. And then it went further.
In a case called Carpenter, it went further to say that cell site location data, which is the location tracking data of where someone went via GPS, that data was private, even though it would be information that could be available publicly and that the police could follow someone around and see where they drove.
And ordinarily, the court, under its older conception, would have said, well, you know, the police could, in fact, you know, follow someone around all the time. It’s exposed to the public, therefore it’s not private. But here the court said, well, there’s a difference because there’s so much data being gathered and it’s a pervasive amount of information and it can be aggregated and used to make a lot of inferences about people’s lives. We need to
draw a line and say, you know, there has to be some privacy protection here. The court kind of left this in a rather unfinished state, which is typical for the United States Supreme Court. It decides pretty narrowly, and it lacks a lot of courage to kind of take bolder positions. It kind of picks and plucks
at areas of law and leaves just a lot of kind of unresolved, vague questions in the aftermath. And that’s what happened with Carpenter. It did this, but it didn’t really overturn the doctrines before; it just said, hey, OK, there’s a line somewhere, but we really don’t know where that line is. And that’s kind of where we are now with the law, where lower courts are trying to figure all this out. And we’ll see just how
Daniel Solove (10:32.536)
far the Supreme Court’s start in Carpenter really goes. But we’ll have to wait and see how that unfolds. I think it’s unfolding more slowly than it should. I’ve long thought that the court’s doctrine on privacy and its conception of privacy has been way outdated and long overdue for a complete overhaul. But that’s where we are.
Yeah, thank you. From a European perspective, the US approach to privacy, it’s a combination of this, so the Fourth Amendment and the privacy torts. And the privacy torts in the US, the departure from common law, it’s a very American thing, right? The privacy torts. Do you think that both these torts
and the Fourth Amendment have shaped, or basically that common law and case law and the whole logic of the US legal system have had a very big impact on, your taxonomy and your philosophy of privacy, in the sense that it is bottom up, based on real cases and real challenges that people have faced in the real world. And what do you think about the privacy torts?
The privacy torts emerged in 1890, and they were actually a common law development rather than a statute. So we have going on in the United States, in different domains, you know, the growth of different areas of law. Fourth Amendment law is its own track. And then we have the law regulating what various private sector entities, non-government entities, can do.
And for a long time, it was the privacy torts, which originated in an article written in 1890 but didn’t really start to become adopted by the states in the common law until the next century, until the 20th century. And then we see, over the course of about a hundred years, more and more states start to embrace the privacy torts.
Daniel Solove (12:54.594)
The problem with the privacy torts, and the reason why I don’t think they’re relevant to a lot of modern privacy problems, is that the privacy torts were born to respond to the problems that existed in 1890, which is that we had a highly sensationalistic media and there were new technologies such as photography that Warren and Brandeis were worried about. And the torts grew up really based on trying to combat
that set of problems, not big data. The torts didn’t necessarily have to be constrained to the media, but they grew up in response to that. And unfortunately, the common law stopped developing. So in the common law, once the torts got mature enough to be recognized, they got into a document called the Restatement, which is a very influential
document that they put out to kind of help courts see where the law is going. And courts, instead of continuing to generate new torts to address the problems that we were facing, largely settled on the elements of the torts as they existed circa 1960. And so the torts got ossified and they don’t fit
the problems that we’re facing today, which is big platforms taking massive amounts of data and doing all sorts of things and algorithms manipulating what gets displayed and various trade and sale and sharing of data between different entities. All this just doesn’t quite fit the model of the torts and the torts stopped evolving. I think if you woke
the authors of the article, Samuel Warren and Louis Brandeis, up from the grave, and you read their article, they would say, our article is about the idea that the law should constantly be evolving to address these problems. So they would be appalled at the fact that the torts kind of froze in 1960. So the torts, they apply here and there, but they’re mostly out of the picture because they really haven’t evolved to address the problems that we’re facing.
Daniel Solove (15:15.458)
So what we saw in the United States is a bunch of waves of privacy legislation. And we saw the first happen in the seventies; one of the early laws was a response to Watergate, the Privacy Act, which regulates government agencies. It’s become very relevant today with what’s going on with Musk starting to try to go and gather all this data about everybody from government records. So that law applied there.
And then we had a series of federal privacy statutes that address privacy in different contexts, privacy of video records, privacy of children’s data, and so on. We had what’s called the sectoral approach in the United States. We didn’t have this omnibus law like the EU has with the GDPR, but there were laws addressing privacy throughout the 70s, 80s, and 90s. Those federal laws largely dried up around 2000 when
Congress became so partisan and, I think, you know, almost a dysfunctional body at this point. It’s not functional. We don’t really have a functioning legislature in the United States anymore at the federal level. But then the story takes an interesting turn because the states have started to jump into action. In 2018, we see the first one. And it’s interesting, it was within like a month of the GDPR becoming active.
California passes a very robust law, though it’s actually, I think, a weak law. But it’s still more robust than most of the existing laws in the United States at the state level. And then we see the states start to pass one law after the next. There’s about 20 of them. Unfortunately, I think these laws are not good. I love the fact that the states are getting into the action that they’re trying to do something.
But I think that most of them aren’t that great. But then we also see topic-specific laws. So there are laws dealing with biometric information, just biometric information, laws dealing with health privacy, laws dealing with children’s information. And some of these laws are actually pretty strong. In certain ways, they’re even stronger than the GDPR. They’re just much more narrow, more focused, they’re specific to a state.
Daniel Solove (17:38.798)
But it’s an interesting landscape where we’re really seeing everything all over the place in this gigantic jumble and patchwork of different laws that are not all consistent with each other, that overlap and have gaps. It’s a very messy approach in the United States. And we still have the common law, too. So it lacks the kind of coherence that the
EU approach has where there’s the GDPR and it sets a baseline. It has consistent terminology. It has a consistent approach. The US is just a total mess. And it’s very hard to generalize about the United States approach because it kind of depends on what law we’re talking about. And we’re talking about hundreds of laws.
and thousands of cases that go every which way. So even at the common law level, it’s very hard to make generalizations. I can, but for every generalization I can make, I can find a case that goes the other way.
Okay, so before I go back, since you’re bringing up the state comprehensive privacy laws and specific laws like BIPA in Illinois for biometric data or My Health My Data in Washington, we’ve had episodes on those. Something that you’ve advocated for in the past is the private right of action, in the sense that we need to make up
for the lack of effective enforcement. And that this could be a way, if I understood properly. So these laws do something that has taken a long time to arrive in Europe. It’s only slowly arriving now in Europe. So how do you see this taking shape? And do you think, again, my fascination with your taxonomy and the logic that you’ve built,
Sergio Maldonado (19:48.994)
and that we had to study and then teach, right? For example, in the CIPT courses at the IAPP. My fascination is that you, again, you built it bottom up. So you looked at real world scenarios. And I’ve read what you said about the logic, how, you know, Wittgenstein defined this, again, this wider approach, avoiding the closed notions or concepts that sort of, you know, constrain
the possibilities of something like privacy, right? I know I’m oversimplifying, but since you’ve been looking at these real world scenarios, and to me again, that’s so fascinating because I’m looking at case law, which is so alien to us. And now that you’re going to have all of these private rights of action across 20 states and more, do you see that the taxonomy could explode? Is that a good thing?
Yes, I think it is a very good thing, because I think that generally we’re in the early days of privacy law. You know, the GDPR might seem in Europe to be like the culmination, the final word, perhaps. But I think basically, in terms of privacy law, we are in the early days. This is a law that we’re going to look back on, the GDPR, which I think is the best privacy law in the world. Nevertheless,
it needs a lot of reform. It’s an early step. So one of the things I like about the common law is that it can evolve if it doesn’t get ossified, which unfortunately happened to the privacy torts. There’s a lot of room in the common law for growth and development of the law to address new problems, and it develops case by case. So it develops in concrete situations from a bottom up style approach.
Courts must be up to that task and make sure they don’t hold the law back too narrowly, like they did with the privacy torts. But I like the fact that it can evolve. I think it’s a great feature. I think that the challenges that we face with privacy law and the enforcement challenges that you spoke about
Daniel Solove (22:12.334)
are ones where, you know, I think the statutory laws play a very important role. So I wouldn’t advocate, let’s just go to common law and leave it at that. I think statutory approaches are very important. I think that when drafting statutes, though, the statute should be drafted broadly at a principle-based level with some wiggle room for the application of those laws to new situations. So some laws are so tethered to
particular technologies at the time that they wind up becoming obsolete, and then they still tend to hang around because legislatures never like to revisit something. They often just leave it. So an example would be the electronic surveillance law in the United States, the Electronic Communications Privacy Act or ECPA, passed in 1986. And this regulates surveillance, and it’s a law built before
the internet, before email. I mean, there was email and the internet then, but it worked in a very different way. And the law is built, very specifically, around how things looked circa 1986. And as a result, it doesn’t apply very well. It doesn’t make a lot of sense. And I think that we should focus instead on the goals. What do we want to do? We want to limit,
you know, government power in what it can gather. We want good oversight. We look at the aims that the law must achieve, and then we reinvent the specifics as the technology evolves, instead of trying to bake the specifics into the law, which builds a law around technology that’s going to be different tomorrow. And we see technology is changing too fast for laws like that. So I think that the statutory law can work.
And then on the enforcement side, the enforcement picture is complicated because everywhere enforcement agencies are underfunded and under-resourced. This is the case throughout the EU with the GDPR. I think they’re trying the best they can, but there’s limits to how much they can enforce. And we see
Daniel Solove (24:30.956)
the pains of that when Ireland is not enforcing enough or fast enough and no one seems to even want that job. Their commissioner left a long time ago, and they’re still trying to fill that job. And it’s like, what, do I really want to work 24–7 with a staff that’s like one-tenth of what I need to enforce it? And the companies know this, that look, there’s just not enough cops on the beat. There’s not enough enforcement. The penalties
don’t have enough sting to deter the risk taking that companies want to do. They say, look, okay, I can do this and make trillions of dollars on AI and just scrape everything on the internet. And then, yeah, they’re not going to make me, you know, delete the algorithm. They’ll just come and slap me with some fine that I’ll pay, but it’s a small price to pay for, you know, the potential upside of trillions of dollars.
And then the odds that they’re actually going to be enforced against are low. So it’s not a good proposition. And the US is even worse. Right now we’re in a state where the main regulator of privacy, the FTC, is being dramatically challenged. And it’s not clear that the federal agencies are going to survive the wrecking ball that the Trump administration is taking to them.
How do we get stable enforcement where there always are cops on the beat and there’s always this pressure? And I think the private right of action is a great tool for that. It creates incentives for lawyers to bring these cases, to do enforcement, enforcement from the ground up. Now it has problems. There are problems with the system, but on the whole,
Despite the problems, it does address some of the shortcomings of the enforcement by agencies. It addresses agency capture or agencies being weakened or not having enough resources. It just helps a lot in ensuring that laws are vigorously enforced. So despite all the problems with
Daniel Solove (26:57.198)
private right of action, and I think there are some, I still think on the whole it’s a great thing. I think it’s really essential if we want laws to be enforced vigorously. And that’s also why, if you look at where the companies are fighting the hardest, where they are most vigorously opposing, it is there. And generally, you look at where they most
fight their battles, and that’s probably where the law is going to be most effective.
That’s where the lobby money goes. Good. Thank you. Great. Because that really paves the way for this, which is: if we’re expecting people to take action, right? For a long time, we had the hope that privacy would be perceived as a value, right? As a valuable asset, or the respect for privacy, the ethical approach to handling data.
Exactly.
Sergio Maldonado (27:59.544)
So there was this hope when the whole thing started, at least when the GDPR started, that companies that respected individual rights and choices and so on would have stronger brands and people would shun those that did not. It has not been the case. We’ve had some telco companies in Europe being fined every single week and the brand doesn’t seem to be stained, because everyone in the industry is fighting the same dirty battle. So it hasn’t had an impact.
So if we take it the other way, if it’s not the carrot, we take the stick, and we go after the class action or private right of action. For that to happen, besides what the court would expect in terms of harm,
individuals need to have a perception as well. Not that they can make some money, or that a lawyer is chasing them in a very American style because they tell them they can make some money, as happened with the Apple case in California. But instead they need to perceive that. And so if we look at the spectrum of possibilities, the things that have been sort of deprecated, talking about the common law, the torts in common law and so on,
were those that were easier to perceive, because they were closer to confidentiality and to exposure and to the fact that other people see you. In fact, maybe I’ll remove this later, but something that you said, something that Ryan Calo had said, was that the act of disclosing personal information about someone would not be a violation because it doesn’t involve observation, if I remember correctly.
It proves that people only appreciate the human violation in the sense that if eyes are on you and you’re exposing your confidential, you know, your home, your body, et cetera, that is a privacy violation. We are terrible at perceiving the future impact of today’s data collection practices and so on. So the more we push it to individuals, isn’t it possible that it becomes more abstract?
Sergio Maldonado (30:10.806)
and really hard for them to understand and therefore chase real enforcement.
Yeah, well, I think that the individual struggles to perceive the risk of what could befall them when they disclose or share data. And the average individual is not going to understand how AI works and all the stuff going on behind the scenes. I think when you see polls of people, they will say
that they value privacy a lot and they do perceive that their privacy is greatly under threat in the modern world. So they kind of have a general sense of this, but in the specifics, they don’t really fully understand all of what’s going on in this kind of backroom, clandestine world of the data economy. That said, I think
that understanding what’s happening and what the potential harms are is key. I think lawyers can drive that home. What they really need to do is convince courts of that. A lot of the class action lawsuits don’t really function well to compensate individuals, but I don’t think that’s the primary value of them. The primary value is to enlist
a private enforcer. It’s kind of like a private bounty hunter that’s going after that. And so I think that that can happen. There’s a lot in what you had asked, a lot of different things to respond to, really great points and things I’d love to respond to, in terms of how to understand privacy, the trust of companies. When I look at the trust of companies,
Daniel Solove (32:11.81)
you know, I’ve seen polls asking, how much do you trust this company with privacy? And that turns on just generally overall good feelings about a company. It turns on what they read in the media, which is not always very informed. It turns on how much a company advertises how much it loves privacy. But it doesn’t actually turn on how good the company is with privacy. And I’m not sure individuals can
really measure that. Companies will say stuff, but that’s not the reality. And it’s hard to fully measure that because we just don’t know all the details of what goes on behind the scenes. I can’t fully evaluate how good a company is on privacy, because I would need to talk to the privacy officer. I would need to know, what kind of training do they have?
How many personnel do they have? What resources do they have? I’d like to see their vendor agreements. I’d like to see their privacy impact assessments. How good are they? A lot of them aren’t very good because they have a very narrow conception of privacy. What are your policies and procedures? How do you integrate with the engineers and the designers of the product? How good are you on data security? And I need to know more than just
You have reasonable security. I need to know what kind of encryption you’re using, and are you using two factor authentication for access, and how well are your employees trained against phishing, and do you do penetration testing? I could just go on and on and on with a thousand different questions, and I would really need to study it in a very deep, long way
to really understand what’s going on. What are the algorithms doing? Not just the logic of the algorithms, which the GDPR says you can get, but I need to know the data the algorithms are trained on, which is constantly changing and being updated. Data in, data out: the data you put in affects the output. So I need to know the data that’s going in. There’s just so much to know that I just don’t think it can really be readily knowable, even by experts.
Daniel Solove (34:32.654)
It’s very hard to make the assessment like, is this company good with privacy or not? And I think ultimately the law, that’s the job of the law, needs to make sure that people are protected. I make an analogy in my book about going into the supermarket and buying milk. We can go into the supermarket and buy milk and we know it’s not going to kill us.
though in the US that might change with the current administration. But just generally in modern civilized society, right, you can buy milk and you don’t have to worry about its safety. I can look at the price and taste and just decide which milk I buy, and I know it’s going to be okay. And I don’t have to become an expert on milk production. I don’t have to learn about how it works and read treatises on it.
I don’t have to research each farm and its safety. I can just buy this milk and know that it’s going to be okay. And you know what? If it’s not okay, if it does harm me, I can sue. So there’s kind of a front end and a back end, ex ante and ex post type of protection that the law gives me. Now this wasn’t always the case. It used to be the case that with milk, there wasn’t regulation
of food, and manufacturers could just stick in anything they wanted. And they did, in fact. They could mislabel the food. With milk, they put in formaldehyde. And then people were wondering why all these babies were dying. Well, because of the formaldehyde, you’re poisoning the milk just to make it taste sweeter when it went rotten. So in modern times, we have
reform. We have laws that protect us. We know that someone has our back. The experts who know how to inspect the farms and make sure the milk is safe are watching. And then they know that they have accountability. If they do harm people, they’re going to get sued and they’re going to get in big trouble and pay a big amount of damages in a lot of court cases. And it’s going to hurt. And so the incentives are right.
Daniel Solove (36:49.644)
And we have a system that works. With technology, it’s buyer beware. I’m supposed to figure out if this is safe. I have to figure out the risk and I have to somehow become an expert computer scientist. I have to, you know, get a PhD in AI. I need a team of expert engineers to help me figure out, like, what is this algorithm going to do? And how is my data going to be used? And is it going to hurt me or not?
It’s just not feasible. And where that puts us with technology is weird. We forget everything we’ve learned about other areas of law, about car safety, about food safety, and we make it the law of the wild.
Yeah, I guess the pushback against that would be that there was cause and consequence, right? In this physical world, we could wait for three days and maybe someone would die because of the milk, and wait for the car to crash, and there is this causality. And that permeates, so it has a spillover effect on something that you include in your book, which is the notion of consent. I think you’ve been critical of this, but I’ll wait for you to tell me,
which is that we put too much emphasis on personal agency, individual agency, and the fact that people take control, elaborating on what you were saying earlier. So it’s sort of a consequence of this. There’s so much complexity, right? So in the world, again, of milk and seat belts, anyone can see. I may not be technical, I do not understand much about cars, but I can perceive, again,
all of the elements of, for example, again, a tort. But, and I can really understand the logic, in the context of an unforeseen future, where do we place the controls? So again, to go to the specifics, what do you think of consent, to start there? So what is the problem that you see with consent? Because consent is all the rage in the US, and I am quite frustrated, having seen the damage it’s done in Europe,
Sergio Maldonado (39:04.78)
right, with the privacy directive, having to ask people about something they do not understand. It’s like, see, we really care about privacy, because we are annoying you with a question, again, that you do not understand. And it’s frustrating to see it spreading throughout US websites, making them less accessible, making them again more annoying, and making no difference. But I need to hear you. What do you think about consent?
So you said something earlier, before you got into consent, about harm. And I think it was an interesting point that I’d love to respond to very, very quickly, which is that the harms with milk and cars are very visceral. People die and people get injured. And privacy harms are often not that way. We don’t see blood and death in a lot of the cases. And I do think it’s important to have a reckoning with privacy harm,
to really understand why it’s harmful, to really get it. And I think that it’s been work that I’ve been doing for a very long time. I think that harm is really key to protecting privacy and courts need to get much better at recognizing harm. Legislatures need to get much better at recognizing privacy harm. And so I’ve done a lot of work to try to…
better articulate and conceptualize what the harms are. It’s not just one kind of harm with privacy, it’s many kinds of harm. And so we need to understand all the different types. And I did work with Professor Danielle Citron to try to really explicate privacy harms and explain the harms and also show how they have analogs in things that the law already protects in other contexts. So the interesting thing with privacy harms is that it’s not that
they’re so radical that they have no analogies to existing protections. In fact, there are a lot of times where the law does protect very similar types of harms in other areas. But when it comes to privacy, courts just don’t fully understand it, seem to struggle to see how it’s analogous to other things,
Daniel Solove (41:21.098)
seem to be, I think, oftentimes caught in the thrall of technology, so intimidated by technology that they almost kind of freeze like a deer in headlights. I see this with legislatures. I see this with courts. I see this generally with technology. It’s like, you know, the technology makes them cowardly or think more narrowly or just be unable to
vigorously apply the law. It really impedes them. And so I say in the book, you know, we must avoid letting fear or worship of technology get in the way. You know, yes, technology is amazing, but it’s not manna from the heavens. It’s something that needs to be held accountable, that needs to…
The law needs to apply to it just like it does everything else. And we’ve developed over centuries the legal concepts and tools to do it. We just have to have the will to apply it to technology. And there’s so many myths that get in the way that prevent the policymakers from
applying the law to technology, which ultimately is just holding the makers and users of technology accountable for the harms that it creates. It’s that simple. But so many times we find some excuse to either try to deny the harm or give the makers and users of technology a pass because it’s technology. And then it’ll be, you know, myths like, well, it’ll stifle innovation.
Well, you know, with cars, innovation is not just a car that goes faster. Innovation is a car that gets good fuel mileage, a car with seat belts and airbags; those are innovations too. So you’re asking technology companies, why don’t you innovate to make safer technologies, technologies that are less invasive of privacy, that are better for people? Why not innovate in that direction? Why just encourage innovation to make a big profit?
Daniel Solove (43:37.414)
But that’s how things go wrong. On the consent issue, I think the problem with consent is that it’s over-relied upon. The law often will allow consent to give companies a free pass to do what they want to with data, to collect data, to use data. It essentially has so much power, it lends legitimacy to almost
anything. So in the GDPR, consent is a lawful basis, and a company then can start doing all sorts of things it wants with the data. I think that consent has too much power. We can’t get rid of it in the law, because we wouldn’t want the government saying, okay, we’re going to tell you every use of data that is allowable. And even if you want to share your data for a purpose, you want to share your location
information so you can get a ride share and have a ride share app. No, we don’t like that. Or we don’t want you to have a doorbell camera. We forbid you from having a home assistant device. We’re going to ban cell phones. I think that’s too extreme. On the other hand, we need to recognize that consent is fictional and we’re never going to make it factual. It’s always going to be a fiction.
But what we can do is lean into the fiction and say: if we recognize that people don’t fully understand what they’re consenting to, why don’t we then complete the fictional story and say that it should end happily ever after for people? So I might not fully know what I’m consenting to, but the law can be a backstop to make sure that it’s not gotcha, that I don’t consent to something that ultimately winds up harming me or that is used in ways that I don’t expect.
So just that I say, okay, the law can kind of backstop it and say, okay, well, you know, is what I’m consenting to in my interest? Is what I’m consenting to going to harm me? Is what I’m consenting to what I expect? And if it upholds these duties and makes sure like, yeah, okay, if it’s what I expect, it’s not harming me, and it’s in my interest, then yes, you know, go for it. You can use it.
Daniel Solove (46:00.526)
But if it’s not, you can’t. And that’s where the law can have our backs, knowing that, I really don’t fully understand what I’m consenting to or not. I can’t. I can’t figure out, is it good to have smart lights? Is it good to have a doorbell camera? It depends on what data is gathered and how it’s used, what algorithms are going to do with that data, which I
have no idea and wouldn’t be able to know unless I actually worked in the companies and saw what they’re doing and did all that work to fully understand their privacy programs, privacy impact assessments, how their algorithms work. I can’t do that. So I can’t make the risk calculation. I have certain expectations. I have certain, you know, I basically want to use a product and use technology
where the benefits outweigh the costs or the risks, right? Ultimately, that’s the general thing. People make the calculation. Is this technology going to help me more than it hurts me? What are the risks? Is it worth it for the benefit I’m getting? That’s the deal that people are trying to make. They’re not able to really make that given the information that they have, the knowledge that they have. So these consents are meaningless.
But that said, I think the law can step in to say, OK, we’re going to prevent the gotcha, where you say yes, but then the information is used in ways that we know people didn’t expect, and that we know are not in people’s interests and are going to harm them. But hey, we got their consent. They clicked accept. So we’ll just let them do it. That’s the intervention I would make: no, they can’t do it if it’s not good for the consumer.
Thank you, Daniel. There’s much more in your book, On Privacy and Technology, so we’ll add a link to that. Thanks again.
Daniel Solove (48:02.872)
Thank you.