Catch up on the most important news from today in two minutes or less.
Feed: All Latest
Twitter has disclosed more bugs related to how it uses personal data for ad targeting, which mean it may have shared users’ data with advertising partners even when a user had expressly told it not to.
In a blog post on its Help Center about the latest “issues” it says it “recently” found, Twitter admits to two problems with users’ ad settings choices that mean they “may not have worked as intended”.
It claims both problems were fixed on August 5, though it does not specify when it realized it was processing user data without their consent.
The first bug relates to tracking ad conversions. This meant that if a Twitter user clicked or viewed an ad for a mobile application on the platform and subsequently interacted with the mobile app, Twitter says it “may have shared certain data (e.g., country code; if you engaged with the ad and when; information about the ad, etc)” with its ad measurement and advertising partners, regardless of whether the user had agreed their personal data could be shared in this way.
It suggests this leak of data has been happening since May 2018, which is also the month when Europe’s updated privacy framework, the GDPR, came into force. The regulation mandates disclosure of data breaches (which explains why you’re hearing about all these issues from Twitter) and means that quite a lot is riding on how “recently” Twitter found these latest bugs, because the GDPR also includes a supersized regime of fines for confirmed data protection violations.
Though it remains to be seen whether Twitter’s now repeatedly leaky adtech will attract regulatory attention…
Twitter may have /accidentally/ shared data on users to ads partners even for those who opted out from personalised ads. That would be a violation of user settings and expectations, which #GDPR makes a quasi-contract. https://t.co/s0acfllEhG
— Lukasz Olejnik (@lukOlejnik) August 7, 2019
Twitter specifies that it does not share users’ names, Twitter handles, email addresses or phone numbers with ad partners. However, it does share a user’s mobile device identifier, which the GDPR treats as personal data because it acts as a unique identifier. Using this identifier, Twitter and its ad partners can link a device to other pieces of identity-linked personal data they collectively hold on the same user, tracking their use of the wider Internet and thereby allowing user profiling and creepy ad targeting to take place in the background.
The second issue Twitter discloses in the blog post also relates to tracking users’ wider web browsing to serve them targeted ads.
Here Twitter admits that, since September 2018, it may have served targeted ads that used inferences made about the user’s interests based on tracking their wider use of the Internet — even when the user had not given permission to be tracked.
This sounds like another breach of the GDPR, given that in cases where the user did not consent to being tracked for ad targeting Twitter would lack a legal basis for processing their personal data. But it says it processed the data anyway, albeit, it claims, accidentally.
This type of creepy ad targeting — based on so-called ‘inferences’ — is made possible because Twitter associates the devices you use (including mobile and browsers) when you’re logged in to its service with your Twitter account, and then receives information linked to these same device identifiers (IP addresses and potentially browser fingerprinting) back from its ad partners, likely gathered via tracking cookies (including Twitter’s own social plug-ins) which are larded all over the mainstream Internet for the purpose of tracking what you look at online.
These third party ad cookies link individuals’ browsing data (which gets turned into inferred interests) with unique device/browser identifiers (linked to individuals) to enable the adtech industry (platforms, data brokers, ad exchanges and so on) to track web users across the web and serve them “relevant” (aka creepy) ads.
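The linking step described above can be illustrated with a minimal, hypothetical sketch. Every identifier, field name and data value here is invented for illustration; real adtech pipelines involve many intermediaries and are far more complex than this join:

```python
# Hypothetical illustration of identifier-based profile linking.
# All names and data are invented; this is not Twitter's actual system.

# Platform side: device identifiers seen while a user was logged in,
# mapped to their account.
platform_devices = {
    "idfa-1234": "user_42",   # mobile advertising ID -> account
    "cookie-ab": "user_42",   # browser cookie ID -> same account
}

# Partner side: browsing-derived interest inferences keyed by the
# same identifiers, gathered via tracking cookies across the web.
partner_inferences = {
    "idfa-1234": ["fitness apps"],
    "cookie-ab": ["travel", "credit cards"],
}

def build_profile(account_id: str) -> list[str]:
    """Join the two datasets on the shared device identifiers."""
    interests: list[str] = []
    for device_id, owner in platform_devices.items():
        if owner == account_id:
            interests.extend(partner_inferences.get(device_id, []))
    return interests

print(build_profile("user_42"))
```

The point of the sketch is that a single shared identifier is enough to merge otherwise separate datasets into one cross-site profile, which is why regulators treat device identifiers as personal data.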
“As part of a process we use to try and serve more relevant advertising on Twitter and other services since September 2018, we may have shown you ads based on inferences we made about the devices you use, even if you did not give us permission to do so,” is how Twitter explains this second ‘issue’.
“The data involved stayed within Twitter and did not contain things like passwords, email accounts, etc.,” it adds. Although the key point here is one of a lack of consent, not where the data ended up.
(Also, the users’ wider Internet browsing activity linked to their devices via cookie tracking did not originate with Twitter — even if it’s claiming the surveillance files it received from its “trusted” partners stayed on its servers. Bits and pieces of that tracked data would, in any case, exist all over the place.)
In an explainer on its website on “personalization based on your inferred identity” Twitter seeks to reassure users that it will not track them without their consent, writing:
We are committed to providing you meaningful privacy choices. You can control whether we operate and personalize your experience based on browsers or devices other than the ones you use to log in to Twitter (or if you’re logged out, browsers or devices other than the one you’re currently using), or email addresses and phone numbers similar to those linked to your Twitter account. You can do this by visiting your Personalization and data settings and adjusting the Personalize based on your inferred identity setting.
The problem in this case is that users’ privacy choices were simply overridden. Twitter says it did not do so intentionally. But either way it’s not consent. Ergo, a breach.
“We know you will want to know if you were personally affected, and how many people in total were involved. We are still conducting our investigation to determine who may have been impacted and if we discover more information that is useful we will share it,” Twitter goes on. “What is there for you to do? Aside from checking your settings, we don’t believe there is anything for you to do.
“You trust us to follow your choices and we failed here. We’re sorry this happened, and are taking steps to make sure we don’t make a mistake like this again. If you have any questions, you may contact Twitter’s Office of Data Protection through this form.”
While the company may “believe” there is nothing Twitter users can do — aside from accept its apology for screwing up — European Twitter users who believe it processed their data without their consent do have a course of action they can take: They can complain to their local data protection watchdog.
Zooming out, there are also major legal question marks hanging over behaviourally targeted ads in Europe.
The UK’s privacy regulator warned in June that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of pan-EU privacy laws, following multiple complaints filed in the region that argue real-time bidding (RTB) is in breach of the GDPR.
Meanwhile, back in May, Google’s lead regulator in Europe, the Irish Data Protection Commission, confirmed it had opened a formal investigation into the company’s use of personal data in the context of its online Ad Exchange.
So the wider point here is that the whole leaky business of creepy ads looks to be operating on borrowed time.
Senate Democrats want to remind everyone that US elections are still at risk, and Congress could do more to protect them.
We’ve done the work to bring you the best, most actionable content in the paid media industry, and we’re excited to debut new sessions we know you’ll love at Hero Conf.
After a series of tweets that made it seem as if YouTube was contradicting its own anti-harassment policies, the video platform published a blog post in an attempt to clarify its stance. But even though the post is supposed to “provide more details and context than is possible in any one string of tweets” and promises that YouTube will reexamine its harassment policy, it raises yet more questions about how serious YouTube is about combating harassment and hate speech on its platform—especially if the abuse comes from a high-profile channel with millions of subscribers.
YouTube is currently under fire for not taking earlier, more decisive actions against conservative commentator Steven Crowder after he made homophobic and racist comments about Vox reporter Carlos Maza in multiple videos. The platform eventually demonetized Crowder’s channel, which currently has more than 3.8 million subscribers, but then stated it would allow Crowder to start making ad revenue again if he fixed “all of the issues” with his channel and stopped linking to an online shop that sold shirts saying “Socialism is for f*gs.”
Before demonetizing Crowder’s channels, YouTube responded to Maza in a series of tweets that created confusion about how it enforces its policies. The platform said after an “in-depth review” of flagged videos by Crowder, it decided that even though the language they contained was “clearly hurtful,” the videos did not violate its policies because “as an open platform, it’s crucial for us to allow everyone-from creators to journalists to late-night TV hosts-to express their opinions w/in the scope of our policies.” This was in spite of the fact that Crowder’s derogatory references to Maza’s ethnicity and sexual orientation violate several of YouTube’s policies against harassment and cyberbullying, including the ban on “content that makes hurtful and negative personal comments/videos about another person.”
I’ve been called an anchor baby, a lispy queer, a Mexican, etc. These videos get millions of views on YouTube. Every time one gets posted, I wake up to a wall of homophobic/racist abuse on Instagram and Twitter.
— Carlos Maza (@gaywonk) May 31, 2019
In the new blog post, posted by YouTube head of communications Chris Dale, the platform gives a lengthy explanation of how it attempts to draw the line between things like “edgy stand-up comedy routines” and harassment. But in the case of Crowder’s persistent attacks on Maza, YouTube repeated its stance that the videos flagged by users “did not violate our Community Guidelines.”
As an open platform, we sometimes host opinions and views that many, ourselves included, may find offensive. These could include edgy stand-up comedy routines, a chart-topping song, or a charged political rant — and more. Short moments from these videos spliced together paint a troubling picture. But, individually, they don’t always cross the line.
There are two key policies at play here: harassment and hate speech. For harassment, we look at whether the purpose of the video is to incite harassment, threaten or humiliate an individual; or whether personal information is revealed. We consider the entire video: For example, is it a two-minute video dedicated to going after an individual? A 30-minute video of political speech where different individuals are called out a handful of times? Is it focused on a public or private figure? For hate speech, we look at whether the primary purpose of the video is to incite hatred toward or promote supremacism over a protected group; or whether it seeks to incite violence. To be clear, using racial, homophobic, or sexist epithets on their own would not necessarily violate either of these policies. For example, as noted above, lewd or offensive language is often used in songs and comedic routines. It’s when the primary purpose of the video is hate or harassment. And when videos violate these policies, we remove them.
The decision to demonetize Crowder’s channel was ultimately made because “we saw the widespread harm to the YouTube community resulting from the ongoing pattern of egregious behavior, took a deeper look, and made the decision to suspend monetization,” Dale wrote. In order to start earning ad revenue again, “all relevant issues with the channel need to be addressed, including any videos that violate our policies, as well as things like offensive merchandise,” he added.
The latest YouTube controversy is both upsetting and exhausting, because it is yet another reminder of the company’s lack of action against hate speech and harassment, despite constantly insisting that it will do better (just yesterday, for example, YouTube announced that it will ban videos that support views like white supremacy or Nazi ideology, or promote conspiracy theories that deny events like the Holocaust or Sandy Hook).
The passivity of social media companies when it comes to stemming the spread of hate through their platforms has real-life consequences (for example, when Maza was doxxed and harassed by fans of Crowder last year), and no amount of prevarication or distancing can stop the damage once it’s been done.
Amazon and Walmart’s problems in India look set to continue after Narendra Modi, the biggest force in the country’s politics in decades, led his Hindu nationalist Bharatiya Janata Party to a historic landslide re-election on Thursday, reaffirming his popularity in the eyes of the world’s largest democracy.
The re-election, which gives Modi’s government another five years in power, will in many ways chart the path of India’s burgeoning startup ecosystem, as well as the local play of Silicon Valley companies that have grown increasingly wary of recent policy changes.
At stake, too, is the future of India’s internet, the second largest user base in the world. With more than 550 million internet users, the nation has emerged as one of the last great growth markets for Silicon Valley companies. Google, Facebook, and Amazon count India as one of their largest and fastest-growing markets. And until late 2016, they enjoyed a largely cordial relationship with the Indian government.
But in recent years, New Delhi has ordered more internet shutdowns than ever before and puzzled many with crackdowns on sometimes legitimate websites. On top of that, the government recently proposed a law that would require any intermediary (telecom operators, messaging apps, and social media services, among others) with more than 5 million users to introduce a number of changes to how they operate in the nation. More on this shortly.
Las Vegas goes into the digging biz with Musk, GM unveils a new electric nervous system for its cars—plus the Rolls-Royce Champagne Chest of the Week.
An algorithm that rates the quality of embryos better than specialists do is a first step toward making IVF easier for women.
Plus, we ride the Jeep Gladiator and ponder the future of electric vehicles, courtesy of battery-swapping rickshaws.