
Sandra Wachter: Exploring fairness, privacy and advertising in an algorithmic world

Thank you so much for the introduction. The topic of my talk is when AI disrupts the law, and I want to look at questions of fairness and privacy when it comes to advertising. The talk is centered around a piece of work that I recently released, a paper called "Affinity Profiling and Discrimination by Association in Online Behavioral Advertising". The paper is freely available if somebody wants to have a look, but the basic story is that I was interested in the question of how online targeting mechanisms are influencing us, and what their effects are in an ethical and legal sense.

I got inspired by that story because I realized, and I think most of us realize, that every time we interact with some kind of digital technology, there is some advertiser attached. It's almost impossible to access any digital service without being served advertisements. Some advertisements are relatively harmless: we all know Netflix's recommendation system, where based on your viewing habits they might offer you new suggestions for movies or series, and similarly with Amazon, where based on your shopping behavior they might offer you products that could be very interesting. Ethically speaking, both of those examples are not terribly interesting. They might be a bit annoying, for example where one of the users gets regularly offered new, exciting deals on toilet seats because she recently purchased one. That might be annoying, but it's not particularly ethically challenging.

But of course this is only one part of the truth. There is actually some troubling evidence where we have to think about maybe new safeguards. One example has to do with price discrimination: algorithms can be used to offer different prices for the same product to different groups of people, and this can be done based on geolocation, for example, or based on whether you're using a PC or a Mac, so you get the same product at a
different price. That could have ethical consequences. Even more troubling is the example of Facebook, which has recently been in the news for a lot of things, one of them being that they have allegedly been inferring very sensitive attributes about their users: inferring ethnicity, political affiliation, sexual orientation, gender, and using that information either to specifically target those people or to exclude them from the market completely. That has happened in the past. They have now announced changes to their policies in certain ways, but Facebook and other advertising companies still use very troubling tools to target people. One of the things they use is all the data that we leave behind: the webpages that we visit, the videos that we watch, the articles that we read, our geolocation, everything we post on social media, languages, age, and all of that.

The basic argument that all of those companies make is that they're not inferring something sensitive about you as a person; they're just assuming an affinity. The argument is: I'm not inferring your ethnicity, I'm just assuming that you have an affinity with a certain ethnicity, and therefore assumed affinities and personal traits are different things. If we take that argument seriously, it could actually disrupt a lot of the legal safeguards that we currently have, a couple of which I've listed here. If we accept the argument that affinities and interests are different from personal traits, what does it mean for data protection, which is built around the idea that your personal data, the data about you, should be protected? What does it mean for non-discrimination law, which operates on the assumption that you should not be treated adversely because of your protected, sensitive traits? And what does it mean for non-traditional groups, where algorithms are creating groups that we don't understand, that we
don't have a concept for? How can we protect those people? I'm going to try to talk very briefly about all three of those things.

The first, as I said, has to do with data protection and the GDPR, the European data protection regulation that came into force last year. The GDPR actually has a lot of interesting safeguards with regard to sensitive personal data. It says that data that directly reveals something sensitive about you, for example ethnicity or sexual orientation, or that indirectly reveals something sensitive, has higher standards of protection. So there is personal data, and there is data that is sensitive by inference, and once this seemingly benign data is transformed into sensitive data, you get more protections.

Unfortunately, the court sees it a bit differently. The European Court of Justice has recently issued a judgment that could be troubling in that regard. The background of that case was that somebody wanted access to the names of the personal assistants of some members of the European Parliament, and those personal assistants said: I don't want this information to be disclosed, because you could infer what kind of political views I have. The argument being: if I'm working for somebody from a green party in the European Parliament, you could assume that I'm in favor of green-party politics. And the court said no, that's not good enough to turn something that is not sensitive into something sensitive, because two conditions have to apply: first, you have to have the intention to infer something sensitive, and second, the data that you're using has to be a reliable basis for that inference. The court said that merely working for somebody is not a reliable basis to infer that you hold the same political opinions as your boss, and therefore protection wasn't granted.

Again, this is problematic, and this is where legal opinion might be in need of further development, because AI might disrupt that concept. From an AI
perspective, intention is not necessary to infer something sensitive. If you use strong proxy data, for example geolocation, which strongly correlates with ethnicity or sexual orientation or gender, it doesn't matter whether you have the intention to infer that information; it will be baked into the algorithm nonetheless. And the question of reliability is also a bit of a red herring, because is that actually the important question? Is it really important whether you can, with confidence, infer something sensitive, or is the question whether you assume something sensitive about a person and then treat them differently? Take the example where Facebook inferred sexual orientation and used it to target people: is it really important whether they accurately inferred your sexual orientation, or is it the adverse treatment that follows from it? So I think in terms of data protection we need to think a bit differently.

The second point has to do, as I said, with non-discrimination and how that might be disrupted by AI technologies. Very briefly, and without going into detail about how non-discrimination law generally works: the law protects us from two types of discrimination, direct discrimination and indirect discrimination. Direct discrimination means I treat you adversely because of one protected attribute; for example, I'm telling you I'm not going to hire you because you're a woman. That's direct discrimination, which is prohibited. More complicated is the notion of indirect discrimination. Here you use a neutral provision, criterion or practice, applied to everybody equally, but it just so happens that a particular protected group suffers a disadvantage. For example, if I say I'm only going to hire people taller than 1 meter 80, I'm not directly discriminating based on gender, but it will have an effect on women, because on average they are shorter than men. So those are the types of discrimination, and those are the protections that we have.

Going back to the
argument that your affinities and interests are not the same as your personal traits: what would that mean for discrimination law? If it's not about you, I cannot claim direct discrimination, because you're not treating me differently because of something about me; it's just my affinity. With indirect discrimination, yes, of course you could still bring a claim, but there are certain hurdles attached to it. If you are classified with a certain group, you might not want to out yourself: for example, if you're being classified as Christian but you're actually Buddhist, you might not want to disclose your religious beliefs to other people. The second hurdle has to do with misclassified users: maybe the algorithm assumes that you're Christian when you're in fact Buddhist, but you experience the adverse treatment nonetheless. Because algorithms are not necessarily 100% accurate, that could mean that you are not part of the targeted group and might not get any protections.

I have tried to come up with an idea to close the kind of gap that we currently face, an idea that is called discrimination by association, which is, as I said, the subject of the paper I just published. Discrimination by association goes back to an interesting case a couple of years back, where Ms Coleman sued her employer. She sued her employer because she felt discriminated against based on disability. The background was that she wanted more flexible working hours to take care of her disabled kid; she was not granted that privilege, and eventually she was actually dismissed. She went to court and said: I feel discriminated against based on disability, because other people in the same firm were granted those privileges for their non-disabled children. And the argument came up: well, you cannot claim discrimination based on disability, because you are not disabled; it's your kid. And the court said no, this argument doesn't hold, because this is discrimination by association. You don't have to be a
member of the protected group; you don't even have to possess the protected attribute. If you suffer negative consequences because of your association, your affiliation, your closeness to a protected group, then you should be granted the same protections as well. The court said that's true for direct discrimination, as in this case, but it's also true for indirect discrimination, which might be harder to spot. So that could help us close the current gap that we have in the framework.

There's still some work to do, and I think the only way we can address it is actually to open up the black box of algorithms, so that you can bring a successful claim if you think there are discrimination issues. Obviously, if you want to bring a claim, you need to show that you have suffered a particular disadvantage. As I said, price discrimination could be one of those examples. In the offline world it's relatively easy: you go to Tesco's or Sainsbury's or Waitrose, you compare prices, you choose the product that you think is the most valuable, that has the best offer for you, and you go out. If Tesco's all of a sudden decides not to let you into the store anymore, you know that you have been expelled from the market. The same cannot be said of the online world. I don't know what other prices are out there; I don't know whether I'm being shown the best deal; and I'm not sure if I'm actually being shown all the advertisements that are out there. I don't know if they may have excluded me from the market. That is something where we actually need more opening up, and we need businesses to be more transparent about their practices.

The second thing is that I need to show that the treatment disproportionately affects a protected group, and here again algorithmic opacity poses a problem, because I don't know who I am being grouped with. I don't know what group I am a member of, I don't know who else is in that group, and I don't know how those people are
being affected. So bringing a claim could be problematic because of algorithmic transparency problems. Nonetheless, I think starting from the more creative idea of discrimination by association could, at least in theory, help us a bit, because it gives us the practical advantage of not needing to out ourselves. We don't need to disclose our sensitive traits, religious beliefs or sexual orientation in order to get protection, because I don't have to be a member of that group or possess the attribute; I just have to suffer negative consequences. And it would also protect misclassified people, those people who have been grouped with people they're actually not a part of. So that would be an interesting way forward.

The last thing I want to talk about is how AI disrupts the law in the sense of the new groups it creates, and the problems that poses for what is called group privacy. From a legal perspective, we obviously have a lot of non-discrimination protections, which are usually based on historical experiences. That means we have protection against discrimination based on gender, sexual orientation, political beliefs and religious beliefs because we had negative experiences in the past and we don't want those things to reoccur; that makes sense. But what if I told you that your credit score is going to drop because you're a video gamer? It sounds very counterintuitive, but it's not completely dystopian or utopian, because something similar is happening in China at the moment. In China, the social credit scoring system is being rolled out, which means that the government is using private and public data to assess whether somebody is a good citizen. If you're a good citizen, you get benefits, for example better rates at supermarkets; if you're a bad citizen, it could mean that you are on a no-fly list, or that your kids are no longer allowed to go to certain schools. Being a video gamer in China is something that drops your credit score; it's
seen as a sign that you're a bad citizen. But obviously this new group, video gamers, doesn't find protection in non-discrimination law, because they have not been seen as a traditional protected and vulnerable group. So it's very important that in the future we think about group privacy, about new group protections that could emerge, because algorithms are grouping us in novel ways that we don't really anticipate.

Let me close by saying that I think it's interesting that technology is challenging us in many ways, but we have to think about new, creative ways to close the loopholes that we currently have, because new challenges are on the horizon. I think discrimination by association is an interesting step towards closing some of the challenges that we currently face, but we need more transparency about what businesses are actually doing. We need more information about the interest groups we are being placed into; I need to know what an algorithm is actually thinking about me, what it is inferring about me; and we need more transparent business practices, because otherwise I'm not going to be able to prove my case in court when my rights have been violated. And we need to think collectively about new group privacy protections, protections for the non-traditional groups that might emerge. Again, this all sounds negative, but I'm a very big tech enthusiast, and I think there are many, many exciting opportunities for us in the future. But it's very important that we all stick our heads together and think about new, creative ways to guard against the risks of AI. Thank you.
