There’s a reason why our team thinks we are a great place to work and no, it’s not because we have a ping pong table set up. See more about Hanapin’s latest certification + we’ll let you in on a little secret!
Read more at PPCHero.com
At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.
“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.
Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.
But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.
You could say Facebook has ‘hostility to privacy’ as a core value.
Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.
But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.
On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.
Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.
All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.
Facebook shielded its founder from one much-sought grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)
The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”
As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.
But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.
He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.
What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.
Simply put, Facebook is weaponizing security to shield its erosion of privacy.
Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.
Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.
In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.
Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
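To make the scale concrete, here is a back-of-the-envelope calculation. The revenue figure is not from this article; it is Facebook’s publicly reported 2017 revenue of roughly $40.65 billion, used here purely as a ballpark:

```python
# GDPR caps fines for serious violations at 4% of global annual turnover.
# Facebook's reported 2017 revenue was roughly $40.65 billion, so the
# theoretical maximum fine would be on the order of $1.6 billion.
revenue_2017 = 40.65e9               # USD, approximate reported figure
max_gdpr_fine = 0.04 * revenue_2017  # 4% cap under GDPR Article 83
print(f"max fine: ${max_gdpr_fine / 1e9:.2f}B")  # → max fine: $1.63B
```

Compare that with the £500,000 maximum the UK’s pre-GDPR regime allowed, and the “pinprick” description above makes sense.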
Though fines aren’t the real point; if Facebook is forced to change its processes, i.e. how it harvests and mines people’s data, that could knock a major, major hole right through its profit-center.
Hence the existential nature of the threat.
The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.
Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.
One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.
It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.
US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.
Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.
So a joint lobbying front to try to water down any US privacy clampdown is in full effect. (Though when also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws, none of the tech giants said they would.)
The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.
Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with, forcing an expensive and time-consuming legal fight.
While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.
In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.
Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.
So the game Facebook is playing here is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.
Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:
It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.
At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.
“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.
He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.
“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”
He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).
These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.
But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.
No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.
Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. It must, presumably, hold shadow profiles on non-users there too, yet it was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…
So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.
What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.
Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.
Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.
“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.
So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.
Safe to say, that’s not going to play at all well in Europe.
Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!
Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seem very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)
In other contexts this would be called spying — or, well, ‘mass surveillance’.
It’s also how Facebook makes money.
And yet when called in front of lawmakers to answer questions about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.
It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.
Except Facebook is a commercial company, not the NSA.
So it’s only fighting to keep being able to carpet-bomb the planet with ads.
Profiting from shadow profiles
Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found that the company takes phone numbers provided by users for the specific (security) purpose of enabling two-factor authentication, a technique intended to make it harder for a hacker to take over an account, and uses them to target those same users with ads.
In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.
Any security expert worth their salt will have spent long years encouraging web users to turn on two factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so risks eroding users’ security as well as trampling all over their privacy.
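For context on what users gain by turning 2FA on in the first place: authenticator-app codes are typically generated with the TOTP algorithm (RFC 6238). A minimal sketch, checked against the RFC’s published test vector (this is the general standard, not anything specific to Facebook’s implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password, the scheme behind most
    authenticator-app 2FA codes."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(rfc_secret, at=59, digits=8) == "94287082"
```

Note that the phone number only enters the picture for SMS-based 2FA; the grievance here is that a number handed over for that purpose was then reused for ad targeting.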
It’s just a double whammy of awful, awful behavior.
I spend a lot of time trying to convince people to lock down their social media accounts with 2FA. Boy does this undermine my efforts. https://t.co/tPo4keQkT7
— Eva (@evacide) September 28, 2018
And of course, there’s more.
A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.
In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.
Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.
Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)
So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)
In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either yes switch it on, or no leave it off.
One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…
Sure #Facebook, I'll take a milisecond to consider whether you want me to enable #facialrecognition for my own protection or your #data #tracking business model. #Disingenuous pricks! pic.twitter.com/s7nngaHVSq
— Jennifer Baker (@BrusselsGeek) April 20, 2018
That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.
There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.
These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
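The mechanics of that phone-number matching are easy to sketch. This is an illustration of the general technique only, not Facebook’s actual implementation; the normalization rules and the choice of hash are assumptions:

```python
import hashlib
import re

def normalize_msisdn(raw, default_cc="1"):
    """Reduce a phone number to a canonical digits-only form; a rough
    stand-in for real E.164 normalization (the country-code handling
    here is deliberately naive)."""
    digits = re.sub(r"\D", "", raw)
    if raw.strip().startswith("+"):
        return digits                # already carries a country code
    if len(digits) == 10:            # assume a US-style national number
        return default_cc + digits
    return digits

def match_key(raw):
    """Hash the normalized number; two services can then join accounts
    on the hash without exchanging raw numbers in the clear."""
    return hashlib.sha256(normalize_msisdn(raw).encode()).hexdigest()

# the same number entered differently on two services still matches
assert match_key("+1 (415) 555-0100") == match_key("415-555-0100")
```

Once both sides of the join key on the same normalized number, two “separate” accounts collapse into one profile.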
Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll out e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.
WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.
On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.
So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.
This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other Facebook Group profiles they have ever had and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek their target.
No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.
This is a very dangerous strategy, though.
Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.
What’s the best security practice of all? That’s super simple: Not holding data in the first place.
Spotify has ended a test that required its family plan subscribers to verify their location, or risk losing access to its music streaming service. According to recent reports, the company sent out emails to its “Premium for Family” customers that asked them to confirm their locations using GPS. The idea here is that some customers may have been sharing Family Plans, even though they’re not related, as a means of paying less for Spotify by splitting the plan’s support for multiple users. And Spotify wanted to bust them.
Of course, as these reports pointed out, asking users to confirm a GPS location is a poor means of verification. Families often have members who live or work outside the home — they may live abroad, have divorced or separated parents, have kids in college, travel for work or any other number of reasons.
But technically, these sorts of situations are prohibited by Spotify’s family plan terms — the rules require all members to share a physical address. That rule hadn’t really been as strictly enforced before, so many didn’t realize they had broken it when they added members who don’t live at home.
Customers were also uncomfortable with how Spotify wanted to verify their location — instead of entering a mailing address for the main account, for instance, they were asked for their exact (GPS) location.
The emails also threatened that failure to verify the account this way could cause them to lose access to the service.
Family plans are often abused by those who use them as a loophole to avoid paying full price. For example, a few years ago, Amazon decided to cut down on Prime members sharing their benefits, because it found these were being broadly shared outside immediate families. In its case, it limited sharing to two adults who could both authorize and use the payment cards on file, and allowed them to create other, more limited profiles for the kids.
Spotify could have done something similar. It could have asked Family Plan adult subscribers to re-enter their payment card information to confirm their account, or it could have designated select slots for child members with a different set of privileges to make sharing less appealing.
Maybe it will now reconsider how verification works, given the customer backlash.
We understand the verification emails were only a small-scale test of a new system, not something Spotify is rolling out to all users. The emails were sent out in only four of Spotify’s markets, including the U.S.
And the test only ran for a short time before Spotify shut it down.
Reached for comment, a Spotify spokesperson confirmed this, saying:
“Spotify is currently testing improvements to the user experience of Premium for Family with small user groups in select markets. We are always testing new products and experiences at Spotify, but have no further news to share regarding this particular feature test at this time.”
VirusTotal Enterprise offers significantly faster and more customizable malware search, as well as a new feature called Private Graph, which allows enterprises to create their own private visualizations of their infrastructure and malware that affects their machines.
The Private Graph makes it easier for enterprises to create an inventory of their internal infrastructure and users to help security teams investigate incidents (and where they started). In the process of building this graph, VirusTotal also looks at commonalities between different nodes to be able to detect changes that could signal potential issues.
The company stresses that these graphs are obviously kept private. That’s worth noting because VirusTotal already offered a similar tool for its premium users — the VirusTotal Graph. All of the information there, however, was public.
As for the faster and more advanced search tools, VirusTotal notes that its service benefits from Alphabet’s massive infrastructure and search expertise. This allows VirusTotal Enterprise to offer a 100x speed increase, as well as better search accuracy. Using the advanced search, the company notes, a security team could now extract the icon from a fake application, for example, and then return all malware samples that share that same icon.
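VirusTotal hasn’t published exactly how its icon search works; a common technique for this kind of “same icon” matching is a perceptual difference hash (dhash), sketched here over an already-resized grayscale image. The resize step and real image I/O are omitted, and nothing here is claimed to be VirusTotal’s actual algorithm:

```python
def dhash_bits(gray, hash_size=8):
    """Difference hash over a grayscale image supplied as a 2-D list of
    pixel rows, pre-resized to hash_size rows x (hash_size + 1) columns.
    Comparing adjacent pixels makes the hash robust to re-encoding and
    small brightness shifts, so visually identical icons hash alike."""
    bits = 0
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left < right)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means similar icons."""
    return bin(a ^ b).count("1")

# two renderings of the "same" tiny icon, one slightly brightened
icon_a = [[10, 200, 50], [30, 30, 90]]
icon_b = [[12, 202, 52], [32, 32, 92]]
assert hamming(dhash_bits(icon_a, 2), dhash_bits(icon_b, 2)) == 0
```

Indexing samples by such a hash turns “find everything with this icon” into a cheap exact or near-neighbor lookup rather than a scan of raw files.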
VirusTotal says that it plans to “continue to leverage the power of Google infrastructure” and expand this enterprise service over time.
Google acquired VirusTotal back in 2012. For the longest time, the service didn’t see too many changes, but earlier this year, Google’s parent company Alphabet moved VirusTotal under the Chronicle brand and the development pace seems to have picked up since.
Whether you are pro-change or anti-change with Google, there is one certainty – you have to change. Learn how to create automated rules in the new interface.
Read more at PPCHero.com
The FCC is pushing for speedy deployment of 5G networks nationwide with an order adopted today that streamlines what it perceives as a patchwork of obstacles, needless costs and contradictory regulations at the state level. But local governments say the federal agency is taking things too far.
5G networks will consist of thousands of wireless installations, smaller and more numerous than cell towers. This means that wireless companies can’t rely on existing facilities, at least not for all of it, and will have to apply for access to lots of new buildings, utility poles and so on. It’s a lot of red tape, which of course impedes deployment.
To address this, the agency this morning voted 3 to 1 along party lines to adopt the order (PDF) entitled “Accelerating Wireless Broadband Deployment by Removing Barriers to Infrastructure Investment.” What it essentially does is exert FCC authority over state wireless regulators and subject them to a set of new rules superseding their own.
First the order aims to literally speed up deployment by standardizing new, shorter “shot clocks” for local governments to respond to applications. They’ll have 90 days for new locations and 60 days for existing ones — consistent with many existing municipal time frames but now to be enforced as a wider standard. This could be good, as the longer time limits were designed for consideration of larger, more expensive equipment.
On the other hand, some cities argue, it’s just not enough time — especially considering the increased volume they’ll be expected to process.
Cathy Murillo, mayor of Santa Barbara, writes in a submitted comment:
The proposed ‘shot clocks’ would unfairly and unreasonably reduce the time needed for proper application review in regard to safety, aesthetics, and other considerations. By cutting short the necessary review period, the proposals effectively shift oversight authority from the community and our elected officials to for-profit corporations for wireless equipment installations that can have significant health, safety, and aesthetic impacts when those companies have little, if any, interest to respect these concerns.
Next, and even less popular, is the FCC’s take on fees for applications and right-of-way paperwork. These fees currently vary widely, because as you might guess it is far more complicated and expensive — often by an order of magnitude or more — to approve and process an application for (not to mention install and maintain) an antenna on 5th Avenue in Manhattan than it is in outer Queens. These are, to a certain extent anyway, natural cost differences.
The order limits these fees to “a reasonable approximation of their costs for processing,” which the FCC estimated at about $500 for one application for up to five installations or facilities, $100 for additional facilities, and $270 per facility per year, all-inclusive.
For some places, to be sure, that may be perfectly reasonable. But as Catherine Pugh, mayor of Baltimore, put it in a letter (PDF) to the FCC protesting the proposed rules, it sure isn’t for her city:
An annual fee of $270 per attachment, as established in the above document, is unconscionable when the facility may yield profits, in some cases, many times that much in a given month. The public has invested and installed these assets [i.e. utility poles and other public infrastructure], not the industry. The industry does not own these assets; the public does. Under these circumstances, it is entirely reasonable that the public should be able to charge what it believes to be a fair price.
There’s no doubt that excessive fees can curtail deployment and it would be praiseworthy of the FCC to tackle that. But the governments it is hemming in don’t seem to appreciate being told what is reasonable and what isn’t.
“It comes down to this: three unelected officials on this dais are telling state and local leaders all across the country what they can and cannot do in their own backyards,” said FCC Commissioner Jessica Rosenworcel in a statement presented at the vote. “This is extraordinary federal overreach.”
New York City’s commissioner of information technology told Bloomberg that his office is “shocked” by the order, calling it “an unnecessary and unauthorized gift to the telecommunications industry and its lobbyists.”
The new rules may undermine deployment deals that already exist or are under development. After all, if you were a wireless company, would you still commit to paying $2,000 per facility when the feds just gave you a coupon for 80 percent off? And if you were a city looking at a budget shortfall of millions because of this, wouldn’t you look for a way around it?
Chairman Ajit Pai argued in a statement that “When you raise the cost of deploying wireless infrastructure, it is those who live in areas where the investment case is the most marginal—rural areas or lower-income urban areas—who are most at risk of losing out.”
But the basic market economics of this don’t seem to work out. Big cities cost more and are more profitable; rural areas cost less and are less profitable. Under the new rules, big cities and rural areas will cost the same, but the former will be even more profitable. Where would you focus your investments?
The FCC also unwisely attempts to take on the aesthetic considerations of installations. Cities have their own requirements for wireless infrastructure, such as how it’s painted, where it can be located and what size it can be when in this or that location. But the FCC seems (as it does so often these days) to want to accommodate the needs of wireless providers rather than the public.
Wireless companies complain that the rules are overly restrictive or subjective, and differ too greatly from one place to another. Municipalities contend that the restrictions are justified and, at any rate, their prerogative to design and enforce.
“Given these differing perspectives and the significant impact of aesthetic requirements on the ability to deploy infrastructure and provide service, we provide guidance on whether and in what circumstances aesthetic requirements violate the [Communications] Act,” the FCC’s order reads. In other words, wireless industry gripes about having to paint their antennas or not hang giant microwave arrays in parks are being federally codified.
“We conclude that aesthetics requirements are not preempted if they are (1) reasonable, (2) no more burdensome than those applied to other types of infrastructure deployments, and (3) published in advance,” the order continues. Does that sound kind of vague to you? Whether a city’s aesthetic requirement is “reasonable” is hardly the jurisdiction of a communications regulator.
For instance, Hudson, Ohio city manager Jane Howington writes in a comment on the order that the city has 40-foot limits on pole heights, to which the industry has already agreed, but which would be increased to 50 under the revisions proposed in the rule. Why should a federal authority be involved in something so clearly under local jurisdiction and expertise?
This isn’t just an annoyance. As with the net neutrality ruling, legal threats from states can present serious delays and costs.
“Every major state and municipal organization has expressed concern about how Washington is seeking to assert national control over local infrastructure choices and stripping local elected officials and the citizens they represent of a voice in the process,” said Rosenworcel. “I do not believe the law permits Washington to run roughshod over state and local authority like this and I worry the litigation that follows will only slow our 5G future.”
She also points out that the predicted cost savings of $2 billion — by telecoms, not the public — may be theorized to spur further wireless deployment, but there is no requirement for companies to use it for that, and in fact no company has said it will.
In other words, there’s every reason to believe that this order will sow discord among state and federal regulators, letting wireless companies save money and sticking cities with the bill. There’s certainly a need to harmonize regulations and incentivize wireless investment (especially outside city centers), but this doesn’t appear to be the way to go about it.
Something, just not anything too specifically quantifiable.
According to the Commission, Facebook, Google, Twitter, Mozilla, some additional members of the EDIMA trade association, plus unnamed advertising groups are among those that have signed up to the self-regulatory code, which will apply in a month’s time.
Signatories have committed to taking not exactly prescribed actions in the following five areas:
- Disrupting advertising revenues of certain accounts and websites that spread disinformation;
- Making political advertising and issue based advertising more transparent;
- Addressing the issue of fake accounts and online bots;
- Empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;
- Empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.
Mariya Gabriel, the European commissioner for digital economy and society, described the Code as a first “important” step in tackling disinformation. And one she said will be reviewed by the end of the year to see how (or, well, whether) it’s functioning, with the door left open for additional steps to be taken if not. So in theory legislation remains a future possibility.
“This is the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis,” she said in a statement. “The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation, and we welcome this.
“These actions should contribute to a fast and measurable reduction of online disinformation. To this end, the Commission will pay particular attention to its effective implementation.”
“I urge online platforms and the advertising industry to immediately start implementing the actions agreed in the Code of Practice to achieve significant progress and measurable results in the coming months,” she added. “I also expect more and more online platforms, advertising companies and advertisers to adhere to the Code of Practice, and I encourage everyone to make their utmost to put their commitments into practice to fight disinformation.”
Earlier this year a report by an expert group established by the Commission to help shape its response to the so-called ‘fake news’ crisis called for more transparency from online platforms, as well as urgent investment in media and information literacy education to empower journalists and foster a diverse and sustainable news media ecosystem.
Safe to say, no one has suggested there’s any kind of quick fix for the Internet enabling the accelerated spread of nonsense and lies.
Including the Commission’s own expert group, which offered an assorted pick’n’mix of ideas — set over various and some not-at-all-instant-fix timeframes.
Though the group was called out for failing to interrogate evidence around the role of behavioral advertising in the dissemination of fake news — which has arguably been piling up. (Certainly its potential to act as a disinformation nexus has been amply illustrated by the Facebook-Cambridge Analytica data misuse scandal, to name one recent example.)
The Commission is not doing any better on that front, either.
The executive has been working on formulating its response to what its expert group suggested should be referred to as ‘disinformation’ (i.e. rather than the politicized ‘fake news’ moniker) for more than a year now — after the European parliament adopted a Resolution, in June 2017, calling on it to examine the issue and look at existing laws and possible legislative interventions.
Elections for the European parliament are due next spring and MEPs are clearly concerned about the risk of interference. So the unelected Commission is feeling the elected parliament’s push here.
Disinformation — aka “verifiably false or misleading information” created and spread for economic gain and/or to deceive the public, and which “may cause public harm” such as “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”, as the Commission’s new Code of Practice defines it — is clearly a slippery policy target.
And online multiple players are implicated and involved in its spread.
But so too are multiple, powerful, well resourced adtech players incentivized to push to avoid any political disruption to their lucrative people-targeting business models.
In the Commission’s voluntary Code of Practice signatories merely commit to recognizing their role in “contributing to solutions to the challenge posed by disinformation”.
“The Signatories recognise and agree with the Commission’s conclusions that “the exposure of citizens to large scale Disinformation, including misleading or outright false information, is a major challenge for Europe. Our open democratic societies depend on public debates that allow well-informed citizens to express their will through free and fair political processes,” runs the preamble.
“[T]he Signatories are mindful of the fundamental right to freedom of expression and to an open Internet, and the delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike.
“In recognition that the dissemination of Disinformation has many facets and is facilitated by and impacts a very broad segment of actors in the ecosystem, all stakeholders have roles to play in countering the spread of Disinformation.”
“Misleading advertising” is explicitly excluded from the scope of the code — which also presumably helped the Commission convince the ad industry to sign up to it.
Though that further risks muddying the waters of the effort, given that social media advertising has been the high-powered vehicle of choice for malicious misinformation muck-spreaders (such as Kremlin-backed agents of societal division).
The Commission is presumably trying to split the hairs of maliciously misleading fake ads (still bad because they’re not actually ads but malicious pretenders) and good old fashioned ‘misleading advertising’, though — which will continue to be dealt with under existing ad codes and standards.
Also excluded from the Code: “Clearly identified partisan news and commentary”. So purveyors of hyper biased political commentary are not intended to get scooped up here, either.
Though again, plenty of Kremlin-generated disinformation agents have masqueraded as partisan news and commentary pundits, and from all sides of the political spectrum.
Hence, we must again assume, the Commission including the requirement to exclude this type of content where it’s “clearly identified”. Whatever that means.
Among the various ‘commitments’ tech giants and ad firms are agreeing to here are plenty of firmly fudgey sounding statements that call for a degree of effort from the undersigned. But without ever setting out explicitly how such effort will be measured or quantified.
- The Signatories recognise that all parties involved in the buying and selling of online advertising and the provision of advertising-related services need to work together to improve transparency across the online advertising ecosystem and thereby to effectively scrutinise, control and limit the placement of advertising on accounts and websites belonging to purveyors of Disinformation.
- Relevant Signatories commit to use reasonable efforts towards devising approaches to publicly disclose “issue-based advertising”. Such efforts will include the development of a working definition of “issue-based advertising” which does not limit reporting on political discussion and the publishing of political opinion and excludes commercial advertising;
- Relevant Signatories commit to invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest.
Nor does the code exactly nail down the terms it’s using to set goals — raising tricky and even existential questions like who defines what’s “relevant, authentic, and authoritative” where information is concerned?
Which is really the core of the disinformation problem.
And also not an easy question for tech giants — which have sold their vast content distribution farms as neutral ‘platforms’ — to start to approach, let alone tackle. Hence their leaning so heavily on third party fact-checkers to try to outsource their lack of any editorial values. Because without editorial values there’s no compass; and without a compass how can you judge the direction of tonal travel?
And so we end up with very vague suggestions in the code like:
- Relevant Signatories should invest in technological means to prioritize relevant, authentic, and authoritative information where appropriate in search, feeds, or other automatically ranked distribution channels
Only slightly less vague and woolly is a commitment that signatories will “put in place clear policies regarding identity and the misuse of automated bots” on the signatories’ services, and “enforce these policies within the EU”. (So presumably not globally, despite disinformation being able to wreak havoc everywhere.)
Though here the code only points to some suggestive measures that could be used to do that — and which are set out in a separate annex. This boils down to a list of some very, very broad-brush “best practice principles” (such as “follow the money”; develop “solutions to increase transparency”; and “encourage research into disinformation”… ).
And set alongside that uninspiringly obvious list is another — of some current policy steps being undertaken by the undersigned to combat fake accounts and content — as if they’re already meeting the code’s expectations… so, er…
Unsurprisingly, the Commission’s first bite at ‘fake news’ has attracted some biting criticism for being unmeasurably weak sauce.
A group of media advisors — including the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics — are among the first critics.
Reuters reports them complaining that signatories have not offered measurable objectives to monitor the implementation. “The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” it quotes the group as saying.
Disinformation may be a tough, multi-pronged, multi-dimensional problem but few would try to argue that an overly dilute solution will deliver anything at all — well, unless it’s kicking the can down the road that you’re really after.
The Commission doesn’t even seem to know exactly what the undersigned have agreed to do as a first step, with the commissioner saying she’ll meet signatories “in the coming weeks to discuss the specific procedures and policies that they are adopting to make the Code a reality”. So double er… !
The code also only envisages signatories meeting annually to discuss how things are going. So no pressure for regular collaborative moots vis-a-vis tackling things like botnets spreading malicious disinformation then. Not unless the undersigned really, really want to.
Which seems unlikely, given how their business models tend to benefit from engagement — and disinformation-fuelled outrage has shown itself to be a very potent fuel on that front.
As part of the code, these adtech giants have at least technically agreed to make information available to the Commission on request — and generally to co-operate with its efforts to assess how/whether the code is working.
So, if public pressure on the issue continues to ramp up, the Commission does at least have a route to ask for relevant data from platforms that could, in theory, be used to feed a regulation that’s worth the paper it’s written on.
Until then, there’s nothing much to see here.
There’s a secret Facebook app called Blink. Built for employees only, it’s how the company tests new video formats it’s hoping will become the next Boomerang or SuperZoom. They range from artsy Blur effects to a way even old Android phones can use Slo-Mo. One exciting format in development offers audio beat detection that syncs visual embellishments to songs playing in the background, or to licensed soundtracks added via the Music feature that is coming to Facebook Stories after debuting on Instagram.
“When we first formed the team . . . we brought in film makers and cinematographers to help the broader team understand the tropes around storytelling and film making,” says Dantley Davis, Facebook Stories’ director of design. He knows those tropes himself, having spent seven years at Netflix leading the design of its apps and absorbing creative tricks from countless movies. He wants to democratize those effects once trapped inside expensive desktop editing software. “We’re working on formats to enable people to take the video they have and turn it into something special.”
For all the jabs about Facebook stealing Stories from Snapchat, it’s working hard to differentiate. That’s in part because there’s not much left to copy, and because it’s largely succeeded in conquering the prodigal startup that refused to be acquired. Snapchat’s user count shrank last quarter to 188 million daily users.
Meanwhile, Facebook’s versions continue to grow. The Messenger Day brand was retired a year ago and now Stories posts to either the chat app or Facebook sync to both. After announcing in May that Facebook Stories had 150 million users, with Messenger citing 70 million last September, today the company revealed they have a combined 300 million daily users. The Middle East, Central Latin America and Southeast Asia, where people already use Facebook and Messenger most, are driving that rapid growth.
With the success of any product comes the mandate to monetize it. That push ended up pushing out the founders of Facebook acquisition WhatsApp, and encroachment on product decision-making did the same to Instagram’s founders who this week announced they were resigning.
Now the mandate has reached Facebook Stories, which today opened up to advertisers globally, and also started syndicating those ads into Stories within Messenger. Facebook is even running “Stories School” programs to teach ad execs the visual language of ephemerality now that all four of its family of apps, including Instagram and WhatsApp, monetize with Stories ads. As sharing to Stories is predicted to surpass feed sharing in 2019, Facebook is counting on the ephemeral slideshows to sustain its ad revenue. Fears they wouldn’t lopped $120 billion off Facebook’s market cap this summer.
But to run ads you need viewers, and that will require responses to questions that have dogged Facebook Stories since its debut in early 2017: “Why do I need Stories here too when I already have Instagram Stories and WhatsApp Status?” Many find it annoying that Stories have infected every one of Facebook’s products.
The answer may be creativity. However, Facebook is taking a scientific approach to determining which creative tools to build. Liz Keneski is a user experience research manager at Facebook. She leads the investigative trips, internal testing and focus groups that shape Facebook’s products. Keneski laid out the different types of research Facebook employs to go from vague idea to polished launch:
- Foundational Research – “This is the really future-looking research. It’s not necessarily about any specific products but trying to understand people’s needs.”
- Contextual Inquiry – “People are kind enough to invite us into their homes and talk with us about how they use technology.” Sometimes Facebook does “street intercepts” where they find people in public and spend five minutes watching and discussing how they use their phone. It also conducts “diary studies” where people journal about how they spend their time with tech.
- Descriptive Research – “When we’re exploring a defined product space,” this lets Facebook get feedback on exactly what users would want a new feature to do.
- Participatory Design – “It’s kind of like research arts and crafts. We give people different artifacts and design elements and actually ask them to design what an experience that would be ideal for them might look like.”
- Product Research – “Seeing how people interact with a specific product, the things they like or don’t like, the things they might want to change” lets Facebook figure out how to tweak features it’s built so they’re ready to launch.
Last year Facebook went on a foundational research expedition to India. Devanshi Bhandari, who works on globalization, discovered that even in emerging markets where Snapchat never got popular, people already knew how to use Stories. “We’ve been kind of surprised to learn . . . Ephemeral sharing wasn’t as new to some people as we expected,” she tells me. It turns out there are regional Stories copycats around the globe.
As Bhandari dug deeper, she found that people wanted more creative tools, but not at the cost of speed. So Facebook began caching the Stories tray from your last visit so it’d still appear when you open Facebook Lite without having to wait for it to load. This week, Facebook will start offering creative tools like filters inside Facebook Lite Stories by enabling them server-side so users can do more than just upload unedited videos.
That trip to India ended up spawning whole new products. Bhandari noticed some users, especially women, weren’t comfortable showing their face in Stories. “People would sometimes put their thumb over the video camera but share the audio content,” she tells me. That led Facebook to build Audio Stories.
Back at Facebook headquarters in California, the design team runs exercises to distill their own visions of creativity. “We have a phase of our design cycle where we ask the designers . . . to bring in their inspiration,” says Davis. That means everything from apps to movie clips to physical objects. Facebook determined that users needed better ways to express emotion through text. While it offers different fonts, from billboard to typewriter motifs, they couldn’t convey if someone is happy or sad. So now Davis reveals Facebook is building “kinetic text.” Users can select if they want to convey if text is supposed to be funny or happy or sad, and their words will appear stylized with movement to get that concept across.
But to make Stories truly Facebook-y, the team had to build them into all its products while solving problems rather than creating them. For example, birthday wall posts are one of the longest running emerging behaviors on the social network. But most people just post a thin, generic “happy birthday!” or “HBD” post, which can feel impersonal, even dystopic. So after announcing the idea in May, Facebook is now running Birthday Stories that encourage friends to submit a short video clip of well wishes instead of bland text.
Facebook recently launched Group and Event Stories, where members can collaborate by all contributing clips that show up in the Stories tray atop the News Feed. Next, Facebook is building its own version of Snapchat’s Our Stories: it’s testing holiday-based collaborative Stories, starting with the Mid-Autumn Festival in Vietnam. Users can opt to post to this themed Story, and friends (but not the public) will see those clips combined.
This is the final step of Facebook’s three-part plan to get people hooked on Stories, according to Facebook’s head of Stories, Rushabh Doshi. The idea is that first, Facebook has to give people a taste of Stories by spotlighting them atop the app as well as amidst the feed. Then it makes it easy for people to post their own Stories by offering simple creative tools. And finally, it wants to “Build Stories for what people expect out of Facebook.” That encompasses all the integrations of Stories across the product.
Still, the toughest nut to crack won’t be helping users figure out what to share but who to share to. Facebook Stories’ biggest disadvantage is that it’s built around an extremely broad social graph that includes not only friends but family, work colleagues and distant acquaintances. That can apply a chilling effect to sharing as people don’t feel comfortable posting silly, off-the-cuff or vulnerable Stories to such a wide audience.
Facebook has struggled with this problem in News Feed for over a decade. It ended up killing off its Friend List Feeds that let people select a subset of their friends and view a feed of just their posts because so few people were using them. Yet the problem remains rampant, and the invasion of parents and bosses has pushed users to Instagram, Snapchat and other younger apps. Unfortunately for now, Doshi says there are no Friend Lists or specific ways to keep Facebook Stories more private amongst friends. “To help people keep up with smaller groups, we’re focused on ways people are already connecting on Facebook, such as Group Stories and Event Stories,” Doshi tells me. At least he says, “We’re also looking at new ways people could share their stories with select groups of people.”
At 300 million daily users, Facebook Stories doesn’t deserve the “ghost town” label any more. People who were already accustomed to Stories elsewhere still see the feature as intrusive, interruptive and somewhat desperate. But with 2.2 billion total Facebookers, the company is often forced into one-size-fits-all solutions. Yet if Facebook’s Blink testing app can produce must-use filters and effects, and collaborative Stories can unlock new forms of sharing, Facebook Stories could find its purpose.