One of the most forward-looking articles in the GDPR is Article 22. By identifying automated decision-making as a specific issue with data processing, the GDPR is also stepping into the forefront of machine learning and artificial intelligence.

The Ward brothers discuss AI and machine learning and the gap in decision-making explainability that many of these systems have. This same gap in understanding will run headlong into the protections of Article 22 of the GDPR.

TRANSCRIPT

Jay: “Are you DataSmart?” A weekly podcast on data security, information management, and all things related to the data you have, how to protect it and maximize its value. I’m Jay Ward.

Christian: And I’m Christian Ward. And today we’re gonna tackle probably one of the most interesting parts, from my perspective, of the GDPR, which is Article 22, automated individual decision making, including profiling. Jay, no one wants to be read to while listening to a podcast, but I’m gonna read out bullet point number one from Article 22, because we’ve had some very good dialogues on this, and on just how far-reaching it is, but almost how forward-looking. Dare I say legislators being forward-looking.

Jay: Dare, dare.

Christian: I’m very scared. So this is Article 22 of the GDPR, and bullet one states, “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Now, it goes on to explain a little bit more about that, particularly as it relates to a data controller and whether or not the processing is authorized by Union or member state law. It gives all the carve-outs for the government to do, obviously, anything it wants with data. But the real question is, how do you view this as forward-looking? Because in light of, let’s say, Google’s recent Duplex platform, which we did a whole podcast on, and the new Google video that’s making a lot of noise this morning, called “The Selfish Ledger”… Everyone, if you haven’t Googled “The Selfish Ledger” and watched this nine-minute video, it’s also terrifying. It’s right up there with Eggers’ “The Circle” in terms of us and our user data. But let’s talk a little bit first about Article 22, Jay, and this whole concept. What do you think they’re getting at with automated individual decision making?

Jay: It’s interesting because we’ve, in the United States, been exposed to automated decision-making for a very long time in the form of credit decisions. It’s very common for you when you’re filling out forms online to give your social security number or to give other, you know, discrete pieces of information and a credit decision is returned to you within, I don’t know, 20 seconds that reflects the decision of whether or not you can have credit, whether you need to make a deposit, all that kind of stuff. And it’s been fairly well-regulated by the Federal Trade Commission. That’s the Fair Credit Reporting Act.

There are a lot of rules in place. But what’s not in place is a statement of whether or not you have the right to object to it, whether or not you have the right to have a person come in and make that decision. And that’s what Article 22 is all about. It presupposes instances where you have an automated decision made, and it does not just have to be about credit, because I think it’s easy to zero in on the things we know and ignore the things that we haven’t yet seen. Article 22 is not restricted to financial transactions. It’s about anything. And so it allows data subjects to object to the processing of their data to reach an automated decision, to ask a human to make the decision, and it requires those who use automated decision-making services to let the data subject know that an automated decision is gonna be made. So it is, you know, it is a very forward-looking policy, primarily because few decisions, right now, are made in an automated manner. But that’s not going to be the case for long.
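The flow Jay describes, notice of an automated decision followed by a right to object and have a natural person re-decide, can be sketched in code. Everything below is a hypothetical illustration: the class names, the score threshold, and the field layout are invented for this sketch, not taken from any real compliance library or from the GDPR text itself.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str       # e.g. "approved" / "denied"
    automated: bool    # was this made solely by automated processing?
    explanation: str   # reasons communicated to the data subject


class DecisionService:
    """Hypothetical Article 22-style flow: decide automatically, allow objection."""

    def automated_decision(self, score: int) -> Decision:
        # The data subject is told up front that this decision is automated.
        outcome = "approved" if score >= 600 else "denied"
        return Decision(outcome, automated=True,
                        explanation="score threshold 600 (automated)")

    def object_and_escalate(self, decision: Decision,
                            human_outcome: str, human_reason: str) -> Decision:
        # On objection, a natural person re-makes the decision and
        # supplies their own concrete reasons.
        return Decision(human_outcome, automated=False,
                        explanation=human_reason)


svc = DecisionService()
auto = svc.automated_decision(score=580)   # denied solely by the machine
final = svc.object_and_escalate(auto, "approved",
                                "thin credit file but stable income")
```

The point of the sketch is the second method: the automated outcome is never the end of the road, because a human path with its own stated reasons must exist.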

Christian: Yeah, and, you know, you bring up the credit rating, and I’ve worked with almost every one of those platforms. But that sort of concept of automated decision-making has boomed in recent years because it’s gone beyond the question of what is my credit score and how is it calculated. Which, obviously, yes, you can get a copy of the report, you can contest various elements of the report. There are plenty of those services. But now, it’s things like mortgage companies, or the social score construct that’s happening in China, where people’s interactions with each other are scored. It’s a very “Black Mirror” episode unto itself. The concern is that when I think about automated decisions and I’m looking at the Google Duplex video, where it says, you know, “Do you have something between 10 and 12,” or however the odd-sounding teenage voice will say it. You know, the concern is, as we start opening up more and more to AI and machine learning capabilities, I think we’re really just scratching the surface in terms of how many automated decisions are actually going to happen on a daily basis.

In some ways, quite frankly, even if I look for an airline ticket to come visit, you know, you and the family, the reality is I’m setting filters. But at some point, there is enough data, particularly with one airline that I travel on all the time, for them to make some pretty interesting decisions that I won’t even know about. And the concern that I have is there are a lot of new articles out there about the dreaming or the subconscious of AI. And what that is, is at a certain point with AI and most of the machine learning principles out there, the five tribes of machine learning, they admit that we don’t even know what the machines are doing. Meaning the machines are taking data from so many sources and mashing them together in so many different algorithms, sometimes stacked, as they’re called. So maybe one is a biological algorithm, and then the next one is, you know, an NLP or, let’s say, a semantic algorithm. When it does that, we sort of lose touch with why the decision was made to show me these two flights and not this flight. And, in a fascinating way, I think most scientists would tell you some of the results are amazing. They’re wonderful. But if Article 22 is really saying that we have the right in some way to ask for a human to get involved, what I’m saying is, I can tell you right now, definitively, the machine learning and the AI can’t even tell you. A human can’t step through the code to know why it arrived at that decision. How are we gonna solve that?
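Christian’s point about stacked algorithms can be illustrated with a deliberately toy pipeline: one stage turns a query into an opaque numeric representation, and a second stage ranks flights against it. Every name and formula here is invented for illustration; no real recommender works this way, but the structural problem is the same: the final ranking is a function of intermediate values that map to no human-readable reason.

```python
# Toy "stacked" pipeline. Stage one produces an opaque feature vector;
# stage two consumes it. Nothing in between corresponds to a reason a
# human could articulate ("why flight A and not flight B?").

def semantic_stage(query: str) -> list:
    # Stand-in for an NLP embedding: characters mapped to arbitrary numbers.
    return [ord(c) % 7 / 7.0 for c in query]


def ranking_stage(features: list, flights: list) -> list:
    # Scores each flight against the opaque feature vector.
    bias = sum(features) / max(len(features), 1)
    return sorted(flights, key=lambda f: abs(f["price"] * bias - f["duration"]))


flights = [
    {"id": "A", "price": 320, "duration": 95},
    {"id": "B", "price": 280, "duration": 140},
]
shown = ranking_stage(semantic_stage("newark to tampa"), flights)
# shown[0] is whichever flight the stacked score favored; the intermediate
# vector explains nothing about *why* in human terms.
```

Even in this two-stage toy, answering “why this ordering?” requires re-deriving the arithmetic; in a real stacked model with millions of learned parameters, that step-through is what Christian is saying no human can perform.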

Jay: Right. And, you know, you need a Rosetta Stone to be able to meaningfully understand what’s going on, and it’s unlikely that one’s gonna exist. What this reminds me of, actually, and, you know, we’re talking a lot about administrative agencies here, so it makes sense: when an administrative agency in the United States makes a decision, for the most part courts are gonna defer to them. They’re gonna give them the opportunity to make their decisions and they’ll say, “All right. Well, the agency said X, and they’re subject matter experts, that’s fine.” And there’s a lot of debate over that. But the part that’s not debatable is that the standard that courts often use to review agency action, which is to say what was done and why, is called the arbitrary and capricious standard. Which is…I just love that, because it’s basically describing every decision that, you know, your children under five take, right? It’s arbitrary and capricious and often involves [crosstalk 00:07:27] and, I mean, sometimes I make those decisions. You know, those decisions are reviewed to see if there was a basis that can be understood in the record for what was done. Well, that’s a great way to formulate your approach to decision-making by anyone, because if it’s not a justifiable decision, certainly under the GDPR, you’re going to have the ability to object to the way that it was done. And because the GDPR requires controllers and processors to explain how they are going to process your data, the purpose of that is to give you the ability to understand how decisions were reached and why.

Well, if the AI’s decision-making process can’t be stepped through, if we can’t get a glimpse into what was done, then we are facing a substantial problem. Both from a GDPR compliance standpoint, but also just from a general understanding of how decisions are reached. So there’s sort of a…there’s an epistemological, there’s a philosophical, but most importantly, there’s a practical and economic aspect to Article 22 that says, “Look, if we can’t get an idea of what’s going on, a human is going to need to come in and explain the decision.” And even if the human’s reasons are different from the AI’s reasons, we still need a human to come out and provide a concrete explanation for why the decision was taken, so that if we need to take an action against the decision-making entity, we can.
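One practical consequence of Jay’s point: if a human may later need to explain or override a decision, each automated decision needs an auditable record of its inputs, model, and outcome. Below is a minimal sketch of such a record; all field names (`subject_id`, `model_version`, and so on) are assumptions for illustration, not a prescribed GDPR schema.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    subject_id: str            # the natural person the decision concerns
    model_version: str         # which automated system decided
    inputs: dict               # data the decision was based on
    outcome: str
    reviewed_by_human: bool = False
    human_rationale: str = ""  # filled in if a person re-decides


def log_decision(record: DecisionRecord) -> str:
    # Serialize so a reviewer (or regulator) can later reconstruct what
    # was decided, from which inputs, and by which model version.
    entry = asdict(record)
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    return json.dumps(entry)


rec = DecisionRecord("subj-42", "ranker-v3", {"score": 580}, "denied")
line = log_decision(rec)  # one JSON line per decision, ready for an audit log
```

The record does not make the model interpretable, but it gives the human who steps in under Article 22 something concrete to review and to attach their own rationale to.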

Christian: Yeah. I gotta tell you, I think, once again, I give the legislative approach, the regulatory bodies, a lot of credit. And, you know me, Jay, I wouldn’t give them credit if it was on a credit card.

Jay: No, you don’t give credit really to anybody.

Christian: No, I don’t. But to think that they were forward-thinking enough to say, “Look, we really do need to understand and give people the chance to not be subject to an automated process, or at least to ask why the automated process returned what it did.” I gotta point, you know, the whole audience here: go look up articles about not understanding why AI makes the decisions it does, or go Google “Google’s AI dreaming.” And you’re gonna see some really crazy things. Like, when we feed Google’s AI a ton of photographs of dumbbells, it can actually show what it’s dreaming of as it’s analyzing them, and since the dumbbells in so many photos are attached to human arms, it starts to draw these really creepy images of, like, half-disembodied arms and dumbbells. It’s freaky, it’s definitely the stuff of nightmares. But maybe not like those…one of those robotic dogs that you have in the blog post.

Jay: Oh, the Boston Dynamics.

Christian: I don’t know why you keep bringing those up. Those things just absolutely terrify me. I mean, the one holding the door open for the other… But my point is, generally, if you think about the crazy concept of a dream of a computer, it’s not that crazy. And what most scientists are saying…there’s a quote in Quartz magazine from one of the Uber AI scientists saying, you know, “If we can’t explain to people why the machine is making the decisions it is, we’re gonna have a problem getting the world to accept AI as a valid solution.” And I think GDPR’s Article 22 is a great opportunity to build in some protection for the general consumer, or a general individual, to be able to say, “If I am the subject of my data and you are utilizing data to make a decision about me, I wanna be able to, sort of, disembody that decision, if you will, from the AI. I don’t wanna be part and parcel to that.” Do you think that’s something that people are grasping yet? Do they understand that’s what this is meant to protect?

Jay: I don’t think so. And I’ll sort of flip what you just said on its head, it’s not to disembody from the AI, it’s to re-embody it into a human. It’s essentially, to re-incarnate the decision-making authority into a person who can actually make the decision. And that actually reflects, I believe, an important aspect of the GDPR broadly. Because it repeatedly refers to natural persons, natural persons. Which is another way of saying not corporations, not business entities, not AI, nothing. It has to be a living, breathing human person who is entitled to the protections of the GDPR who must make the decisions under Article 22 if another natural person objects. And so in a real way a lot of the GDPR’s requirements reflect humanism in its most basic form. Which is to say we are here to protect the rights of humans. And so for me Article 22, in some ways, and this is, you know, fairly dystopian and, you know, we keep coming back to Philip K. Dick, you know, with “Tears in the Rain,” and now, we’re talking about, you know, whether…I guess we do know that they dream of electric sheep now that we’ve seen their terrifying dream pictures. But, you know, the idea here is that we are trying to make sure that humans stay relevant, that humans still have a role. And that’s a, I mean, that’s a frightening concept, but it is at least comforting to know that someone has been thinking about it.

Christian: Yeah. Look, as I said, I think it’s forward-looking, it’s a good idea. I do think that, you know, we literally could have a series of issues as we develop AI and machine learning. As a society and a culture, we continue to put more and more faith into the decision-making process that they can assist us with. Then we’re going to need another type of AI or machine learning that can translate the steps that were taken by the AI to come up with a solution in a manner that humans can understand. In many ways, it’s similar to how we go before a judge, who will then try to explain how all the prior laws lead up to a particular decision, or their thinking on a decision. And, I mean, that sort of logical step-through is fascinating.

I will also point out I’m reading more and more books around, sort of, Jungian archetypes, Jungian dream discussions. Not the Freudian versions, but the Jungian ones, where we’re really starting to apply some of these psychological concepts to the machines themselves as they’re thinking. Look, this is what happens. We’re emulating our thought process, the ability of our subconscious, in our dreams, to try and add structure to our daily process of survival, and the machines are doing something similar. They’re just doing it at a massive scale. But since it is designed to follow our similar brain paths, then we have to also be ready for the fact that some of the decisions are going to actually be irrational. Things where we can’t understand why it made the decision it did. And we may actually, unfortunately, need another machine to help us break down why it made that decision.

Jay: Is it gonna tell us that, you know, the AI is mad because the motherboard didn’t pay enough attention to it when it was still being developed?

Christian: That’s really terrible. That’s…

Jay: I had to try. I mean, it was right there. I mean, there are plenty of Freud jokes to make. So…

Christian: I think we have to leave it there, folks, but we’ll continue to dive more and more into machine learning. If you haven’t checked out Article 22 of the GDPR, please do so. I think it has a lot of ramifications for any business out there that is utilizing AI, machine learning, or automated decision making in its interactions with natural persons. Meaning all of us, listening, hopefully. Please take a look at it. It’s gonna have some impact for you, and you need to think about how you’re gonna have a plan in place to explain that decision-making process, because people are gonna ask.

So, thank you again for listening to this episode of “Are you DataSmart?” And we’ll see you next time. Thanks again.

About the Author: Christian Ward

Christian Ward has been building data companies and partnerships since he launched his first financial data company 20 years ago. He has developed and executed hundreds of data partnerships around the world, from the small entrepreneurial firm to the world’s largest data companies. His focus is on the evolving use of data, privacy, and the opportunities created by the right data partnership strategy. Christian has held executive roles at Yext, Thomson Reuters, Infogroup, and the Bank of New York. He resides with his wife and three children at the Jersey Shore.
