Facebook hopes detailing concrete examples of fake news it’s caught — or missed — could improve news literacy, or at least prove it’s attacking the misinformation problem. Today Facebook launched “The Hunt for False News,” in which it examines viral B.S., relays the decisions of its third-party fact-checkers and explains how the story was tracked down. The first edition reveals cases where false captions were put on old videos, people were wrongfully identified as perpetrators of crimes or real facts were massively exaggerated.
The blog’s launch comes after three recent studies showed the volume of misinformation on Facebook has dropped by half since the 2016 election, while Twitter’s volume hasn’t declined as drastically. Unfortunately, the remaining 50 percent still threatens elections, civil discourse, dissident safety and political unity across the globe.
In one of The Hunt’s first examples, it debunks the claim that a man who posed for a photo with one of Brazil’s senators had stabbed the presidential candidate. Facebook explains that its machine learning models identified the photo, it was proven false by Brazilian fact-checker Aos Fatos, and Facebook now automatically detects and demotes uploads of the image. In a case where it missed the mark, a false story claiming NASA would pay you $100,000 to stay in bed for 60 days “racked up millions of views on Facebook” before fact-checkers found NASA had paid out $10,000 to $17,000 in limited instances for such studies in the past.
While the educational “Hunt” series is useful, it merely cherry-picks random false news stories from across a wide time period. What’s more urgent, and would be more useful, would be for Facebook to apply this method to currently circulating misinformation about the most important news stories. The New York Times’ Kevin Roose recently began using Facebook’s CrowdTangle tool to highlight the top 10 recent stories by engagement about topics like the Brett Kavanaugh hearings.
Top performing Kavanaugh-related posts on Facebook over the last 24 hours (per @crowdtangle) come from:
2. Fox News
3. Franklin Graham
4. Fox News
6. NRA Institute for Legislative Action
8. Ben Shapiro
9. The Sage Page
— Kevin Roose (@kevinroose) October 7, 2018
If Facebook wanted to be more transparent about its successes and failures around fake news, it’d publish lists of the false stories with the highest circulation each month and then apply the Hunt’s format explaining how they were debunked. This could help dispel myths in society’s understanding that may be propagated by the mere abundance of fake news headlines, even if users don’t click through to read them.
But at least all of Facebook’s efforts around information security — including doubling its security staff from 10,000 to 20,000 workers, fact checks and using News Feed algorithm changes to demote suspicious content — are paying off:
- A Stanford and NYU study found that Facebook likes, comments, shares and reactions to links to 570 fake news sites dropped by more than half since the 2016 election, while engagements through Twitter continued to rise, “with the ratio of Facebook engagements to Twitter shares falling by approximately 60 percent.”
- A University of Michigan study coined the metric “Iffy Quotient” to assess how much content from certain fake news sites was distributed on Facebook and Twitter. When engagement was factored in, it found Facebook’s levels had dropped back to nearly their 2016 volume; that’s now 50 percent less than Twitter’s.
- French newspaper Le Monde looked at engagement with 630 French websites across Facebook, Twitter, Pinterest and Reddit. Facebook engagement with sites dubbed “unreliable or dubious” has dropped by half since 2015.
Of course, given Twitter’s seeming paralysis on addressing misinformation and trolling, it’s not a great benchmark for Facebook to judge itself by. While it’s useful that Facebook is outlining ways to spot fake news, the public will have to internalize these strategies for society to make progress. That may be difficult when the truth has become incompatible with many people’s and politicians’ staunchly held beliefs.
In the past, Facebook has surfaced fake news-spotting tips atop the News Feed and bought full-page newspaper ads trying to disseminate them. The Hunt for False News would surely benefit from being embedded where the social network’s users look every day instead of buried in its corporate blog.
Americans looking to reduce their reliance on products from tech’s most alarmingly megalithic companies might be surprised to learn just how far their reach extends.
Privacy-minded browser company DuckDuckGo conducted a small study to look into that phenomenon and the results were pretty striking.
“… As Facebook usage wanes, messaging apps like WhatsApp are growing in popularity as a ‘more private (and less confrontational) space to communicate,’” DuckDuckGo wrote in the post. “That shift didn’t make much sense to us because both services are owned by the same company, so we tried to find an explanation.”
DuckDuckGo gathered a random sample of 1,297 adult Americans who are “collectively demographically similar to the general population of U.S. adults” (i.e. not just DuckDuckGo diehards) using SurveyMonkey’s audience tools. The survey found that 50.4 percent of those surveyed who had used WhatsApp in the prior six months (247 participants) did not know the company is owned by Facebook.
Similarly, DuckDuckGo found that 56.4 percent of those surveyed who had used Waze in the past six months (291 participants) had no idea that the navigation app is owned by Google. A similar study conducted back in April found the same phenomenon when it came to Facebook/Instagram and Google/YouTube, though for Instagram the effect was even stronger (wow).
If you’re reading TechCrunch it’s probably almost impossible to imagine that average people aren’t tracing the lines between tech’s biggest companies and the products scooped up or built under their wings. And yet, it is so.
Even as companies like Google and Facebook suffer blowback from privacy crises, it’s clear that they can lean on the products they’ve picked up along the way to chart a path forward. If this survey is any indication, half of U.S. consumers will have no idea that they’ve jumped ship from a big tech product into a lifeboat captained by the very same company they sought to escape.
And for the biggest tech companies, it’s at least one reason that keeping satellite products at arm’s length from their respective motherships is advantageous for maintaining trust — especially while aggressive data sharing happens behind the scenes.
Facebook is demoting trashy news publishers and other websites that illicitly scrape and republish content from other sources with little or no modification. Today it exclusively told TechCrunch that it will show links less prominently in the News Feed if they have a combination of this new signal about content authenticity along with either clickbait headlines or landing pages overflowing with low-quality ads. The move comes after Facebook’s surveys and in-person interviews discovered that users hate scraped content.
If ill-gotten intellectual property gets less News Feed distribution, it will receive less referral traffic, earn less ad revenue and there’ll be less incentive for crooks to steal articles, photos and videos in the first place. That could create an umbrella effect that improves content authenticity across the web.
And just in case the scraped profile data stolen from 29 million users in Facebook’s recent massive security breach ended up published online, Facebook would already have a policy in place to make links to it effectively disappear from the feed.
Here’s an example of the type of site that might be demoted by Facebook’s latest News Feed change. “Latest Nigerian News” scraped one of my recent TechCrunch articles, and surrounded it by tons of ads.
“Starting today, we’re rolling out an update so people see fewer posts that link out to low quality sites that predominantly copy and republish content from other sites without providing unique value. We are adjusting our Publisher Guidelines accordingly,” Facebook wrote in an addendum to its May 2017 post about demoting sites stuffed with crappy ads. Facebook tells me the new publisher guidelines will warn news outlets to add original content or value to reposted content or invoke the social network’s wrath.
Personally, I think the importance of transparency around these topics warrants a new blog post from Facebook as well as an update to the original post linking forward to it.
So how does Facebook determine whether content is stolen? Its systems compare the main text content of a page with all other text content to find potential matches, and the degree of matching is used to predict whether a site stole its content. Facebook then uses a combined classifier merging this prediction with how clickbaity a site’s headlines are, plus the quality and quantity of ads on the site.
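A minimal sketch of a classifier like the one described above, combining a text-matching signal with clickbait and ad-quality signals. Everything here is an assumption for illustration: the function names, the use of `difflib` for matching, and the way the signals are combined are not Facebook's actual system.

```python
from difflib import SequenceMatcher

def text_overlap(page_text: str, source_text: str) -> float:
    """Ratio (0.0 to 1.0) of how closely page_text matches source_text."""
    return SequenceMatcher(None, page_text, source_text).ratio()

def demotion_score(page_text: str, known_sources: list[str],
                   clickbait_score: float, ad_quality_score: float) -> float:
    """Combine the three signals the post describes: text matching,
    clickbait-iness of headlines, and ad quality/quantity.
    The combination rule here is illustrative, not Facebook's."""
    # Best match against any known original article.
    scraped = max((text_overlap(page_text, s) for s in known_sources),
                  default=0.0)
    # Per the article, scraping alone isn't enough; it must co-occur
    # with clickbait headlines or low-quality ads.
    return scraped * max(clickbait_score, 1.0 - ad_quality_score)
```

A real system would compare pages against a web-scale index rather than a short list, but the shape (a similarity score gated by other quality signals) is the idea the post describes.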
TechCrunch: Hey Portal, dial Mark
Portal: Do you mean Mark Zuckerberg?
Portal: Dialing Mark…
TC: Hi Mark! Nice choice of grey t-shirt.
MZ: Uh, new phone who dis? — oh, hi, er, TechCrunch…
TC: Thanks for agreeing to this entirely fictional interview, Mark!
MZ: Sure — anytime. But you don’t mind if I tape over the camera do you? You see I’m a bit concerned about my privacy here at, like, home
TC: We feel you, go ahead.
As you can see, we already took the precaution of wearing this large rubber face mask of, well, of yourself Mark. And covering the contents of our bedroom with these paint-splattered decorator sheets.
MZ: Yeah, I saw that. It’s a bit creepy tbh
TC: Go on and get all taped up. We’ll wait.
[sound of Mark calling Priscilla to bring the tape dispenser]
[Portal’s camera jumps out to assimilate Priscilla Chan into the domestic scene, showing a generous vista of the Zuckerbergs’ living room, complete with kids playing in the corner. Priscilla, clad in an oversized dressing gown and with her hair wrapped in a big fluffy towel, can be seen gesticulating at the camera. She is also coughing]
Priscilla to Mark: I already told you — there’s a camera cover built into Portal. You don’t need to use tape now
MZ: Oh, right, right!
Okay, going dark! Wow, that feels better already
[sound of knuckles cracking]
TC: So, Mark, let’s talk hardware! What’s your favorite Amazon Echo?
MZ: Uh, well…
TC: We’d guess one with all the bells & whistles, right? There’s definitely something more than a little Echo Show-y about Portal
MZ: Sure, I mean. We think Alexa is a great product
TC: Mhmm. Do you remember when digital photo frames first came out? They were this shiny new thing about, like, a decade ago? One of those gadgets your parents buy you around Thanksgiving, which ends up stuck in a drawer forever?
MZ: Yeah! I think someone gave me one once with a photo of me playing beer pong on it. We had it hanging in the downstairs rest room for the longest time. But then we got an Android tablet with a Wi-Fi connection for in there, so…
TC: Now here we are a decade or so later with Portal advancing the vision of what digital photo frames can be!
MZ: Yeah! I mean, you don’t even have to pick the pictures! It’s pretty awesome. This one here — oh, right you can’t see me but let me describe it for you — this one here is of a Halloween party I went to one year. Someone was dressed as SpongeBob. I think they might have been called Bob, actually… And this is, like, some other Facebook friends doing some other fun stuff. Pretty amazing.
You can also look at album art
TC: But not YouTube, right? But let’s talk about video calling
MZ: It’s an amazing technology
TC: It sure is. Skype, FaceTime… live filters, effects, animoji…
MZ: We’re building on a truly great technology foundation. Portal autozooming means you don’t even have to think about watching the person you’re talking to! You can just be doing stuff in your room and the camera will always be adjusting to capture everything you’re doing! Pretty amazing.
TC: Doing what Mark? Actually, let’s not go there
MZ: Portal will even suggest people for you to call! We think this will be a huge help for our mission to promote Being Well — uh, I mean Time Well Spent because our expert machine learning algorithms will be nudging you to talk to people you should really be talking to
TC: Like my therapist?
MZ: Uh, well, it depends. But our AI can suggest personalized meaningful interactions by suggesting Messenger contacts to call up
TC: It’s not going to suggest I videochat my ex, is it?
MZ: Haha! Hopefully not. But maybe your mom? Or your grandma?
TC: Sounds incredibly useful. Well, assuming they didn’t already #deletefacebook.
But let’s talk about kids
MZ: Kids! Yeah we love them. Portal is going to be amazing for kids
TC: You have this storybook thing going on, right? Absent grandparents using Portal to read kids bedtime stories and what not…
MZ: Right! We think kids are going to love it. And grandparents! We’ve got these animal masks if you get bored of looking at your actual family members. It’s good, clean, innovative fun for all the family!
TC: Yeah, although, I mean, nothing beats reading from an actual kid’s book, right?
TC: If you do want to involve a device in your kid’s bedtime there are quite a lot of digital ebook apps for that already. Apple has a whole iBooks library of the things with read-aloud narration, for example.
And, maybe you missed this — but quite a few years ago there was a big bunch of indie apps and services all having a good go at selling the same sort of idea of ‘interactive remote reading experiences’ for families with kids. Though not many appear to have gone the distance. Which does sort of suggest there isn’t a huge unmet need for extra stuff beyond, well, actual children’s books and videochat apps like Skype and FaceTime.
Also, I mean, children’s story reading apps and interactive kids’ e-books are pretty much as old as the hills in Internet terms at this point. So, er, you’re not really moving fast and breaking things are you!?
MZ: Actually we’re more focused on stable infrastructure these days
TC: And hardware too, apparently. Which is a pretty radical departure for Facebook. All those years everyone thought you were going to do a Facebook phone but you left it to Amazon to flop into that pit… Who needs hardware when you can put apps and tracker pixels on everything, right?!
But here you are now, kinda working with Amazon for Portal — while also competing with Alexa hardware by selling your own countertop device… Aren’t you at all nervous about screwing this up? Hardware IS hard. And homes have curtains for a reason…
MZ: We’re definitely confident kids aren’t going to try swiveling around on the Portal Plus like it’s a climbing frame, if that’s what you mean. Well, hopefully not anyway
TC: But about you, Facebook Inc, putting an all-seeing-eye-cum-Internet-connected-listening-post into people’s living rooms and kids’ bedrooms…
MZ: What about it?
[MZ speaking to someone else in the room] Does the speaker have an off switch? How do I mute this thing?
TC: Hello? Mark?
[sound comes back on briefly and a snatch of conversation can be heard between Mark and Priscilla about the need to buy more diapers. Mark is then heard shouting across the room that his Shake Shack order of a triple cheeseburger and fries plus butterscotch malt is late again]
[crackle and a congested throat clearing sound. A child is heard in the background asking for Legos]
MZ: Not now okay honey. Okay hon-, uh, hello — what were you saying?
TC: Will you be putting a Portal in Max’s room?
MZ: Haha! She’d probably prefer Legos
MZ: She’s only just turned one
TC: Okay, let’s try a more direct question. Do you at all think that you, Facebook Inc, might have a problem selling a $200+ piece of Internet-connected hardware when your company is known for creeping on people to sell ads?
MZ: Oh no, no! — we’ve, like, totally thought of that!
Let me read you what marketing came up with. Hang on, it’s around here somewhere…
[sound of paper rustling]
Here we go [reading]:
Facebook doesn’t listen to, view, or keep the contents of your Portal video calls. Your Portal conversations stay between you and the people you’re calling. In addition, video calls on Portal are encrypted, so your calls are always secure.
For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition and doesn’t identify who you are.
Like other voice-enabled devices, Portal only sends voice commands to Facebook servers after you say, ‘Hey Portal.’ You can delete your Portal’s voice history in your Facebook Activity Log at any time.
Pretty cool, huh!
TC: Just to return to your stable infrastructure point for a second, Mark — did you mean Facebook is focused on security too? Because, well, your company keeps leaking personal data like a sieve holds water…
MZ: We think of infrastructure as a more holistic concept. And, uh, as a word that sounds reassuring
TC: Okay, so of course you can’t 100% guarantee Portal against hacking risks, though you’re taking precautions by encrypting calls. But Portal might also ‘accidentally’ record stuff adults and kids say in the home — i.e. if its ‘Hey Portal’ local listening function gets triggered when it shouldn’t. And it will then be 100% up to a responsible adult to find their way through Facebook’s labyrinthine settings and delete those wiretaps, won’t it?
MZ: You can control all your information, yes
TC: The marketing bumpf also doesn’t spell out what Facebook does with ‘Hey Portal’ voice recordings, or the personal insights your company is able to glean from them, but Facebook is in the business of profiling people for ad targeting purposes so we must assume that any and all voice commands and interactions, with the sole exception of the contents of videocalls, will go into feeding that beast.
So the metadata of who you talk to via Portal, what you listen to and look at (minus any Alexa-related interactions that you’ve agreed to hand off to Amazon for its own product targeting purposes), and potentially much more besides is all there for Facebook’s taking — given the kinds of things that an always-on listening device located in a domestic setting could be accidentally privy to.
Then, as more services get added to Portal, more personal behavioral data will be generated and can be processed by Facebook for selling ads.
MZ: Well, I mean, like I told that Senator we do sell ads
TC: And smart home hardware too now, apparently.
One more thing, Mark: in Europe, Facebook used to have face recognition technology switched off, didn’t it?
MZ: We had it on pause for a while
TC: But you switched it back on earlier this year right?
MZ: Facebook users in Europe can choose to use it, yes
TC: And who’s in charge of framing that choice?
MZ: Uh, well we are obviously
TC: We’d like you to tap on the Portal screen now, Mark. Tap on the face you can see to make the camera zoom right in on this mask of your own visage. Can you do that for us?
MZ: Uh, sure
[sound of a finger thudding against glass]
MZ: Are you seeing this? It really is pretty creepy!
Or — I mean — it would be if it wasn’t so, like, familiar…
[sound of a child crying]
Priscilla to Mark: Eeeew! Turn that thing off!
TC: Thanks Mark. We’ll leave you guys to it.
Enjoy your Shake Shack. Again.
Portal: Thanks for calling Mark, TechCrunch! Did you enjoy your Time Well Spent?
Workplace, the version of Facebook tailored to enterprises that has over 30,000 organizations as paying customers, is ramping up the service today with a rush of new features to help it compete with the likes of Slack and Microsoft’s Teams.
The additions are being announced at a new, standalone conference called Flow — the first time Facebook has built what’s likely to become a recurring event for a specific product, Workplace’s head Julien Codorniou told me in an interview. He described Workplace as “Facebook’s first SaaS startup.” He tells us that for existing clients, the goal of Flow is to show off new features that deepen employee engagement with Workplace so they can’t imagine switching away. And for enterprise software partners Facebook integrates with, it’s to foster an ecosystem surrounding Workplace so it can adapt to any business.
In a big upgrade to the “chat” features of Workplace (conversations that happen outside the news feed, first launched last year), users will now be able to start chats, calls and video conversations either one-to-one or in groups, in the style of WhatsApp or Messenger. Facebook is also making it easier to navigate through high volumes of messages in your channels by adding in replies, do not disturb and pinning features — Facebook’s first move to bring in algorithmic sorting to Workplace. And Facebook is also bringing its Safety Check feature from the main app to Workplace, delivered via Workchat, as a tool that can be controlled by admins to check on the status of employees during a critical incident.
Workplace has picked up 30,000 businesses as customers in the two years since it launched (including some biggies like Walmart, the world’s largest employer); and today it also added several notable large enterprises to the mix: GSK, AstraZeneca, Chevron, Kantar, Telefonica, Securitas, Clarins UK, Jumia and Grab.
But Facebook has never revealed how many users (or “seats”, in enterprise parlance) it has on Workplace. As a point of comparison, Slack today has 8 million users across 70,000 organizations, and Facebook hasn’t updated its 30,000 figure in a year.
The range of features Facebook is introducing today are notable both for their breadth and for what they are aiming to do. Some help put Workplace more on par with the core Facebook experience in terms of functionality, but ultimately they are all squarely aimed at making Workplace into something that fits more closely with how enterprises already use IT.
The chat features that are being incorporated build on the minimal chat features that were already present in Workplace and essentially create something like WhatsApp or Messenger that sits within the same secure framework as Workplace itself. It’s effectively Facebook’s first step into unified communications — a branch of enterprise IT that used to be centered around PBXs and other expensive physical equipment, but has more recently become virtualized with the rise of voice over IP and cloud-based systems that can be used over any internet connection.
Workplace had already had a feature in place for up to 50 companies to converse in multi-organizational conversations on the platform, and now if some members of those groups want to take the conversation to a more direct channel potentially with voice or video calling, they can do that directly from within the app without having to open a separate messaging client (which may or may not be under the control of IT). Up to 50 people can join a video call in Workplace.
The three features that help you better organize your conversations — do not disturb, replies and pinning important items — will be especially welcome to people with particularly “noisy” channels on Workplace.
Replies, Codorniou said, will work “like on WhatsApp” — where you can select a message and reply to it and it will appear with its mini thread later in the feed.
But they are perhaps most notable of all because they will be the first time that Facebook is introducing “algorithmic” sorting to Workplace. For those who already use normal Facebook, or Twitter, or other social media services, algorithmic sorting is something that is well-known, as it plays with the sequence of posts to show you what is deemed to be more important, versus what’s most recent.
In the case of pinning, Facebook is letting the IT admins, and users, effectively play a part in the algorithmic sorting: admins can pin “important” posts to the top of a feed, and that will affect what users see and can respond to first. “If the CEO posts a message, this might be more important than something posted by an intern,” he said.
Do not disturb, meanwhile, will let users set times when they do not get pinged with messages, but when you “return” again to Workplace, Facebook decides what gets sorted to the top of what you view.
Codorniou notes that Facebook uses machine learning and AI “to make sure that if you don’t use Workplace for two weeks [as an example] you have the most relevant information on top of the news feed.” Signals that it uses to sort include who you work with, and which groups you are most active in. “It’s algorithmic by default,” he noted, and added that this was something that was requested by Workplace users. “People don’t believe in the chronological feed anymore,” he said. “It’s important to guarantee reach to communications teams.”
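As a rough illustration of the ranking Codorniou describes, here is a toy scoring function mixing coworker affinity, group activity and recency, with admin pins overriding everything. The signal names and weights are invented for this sketch; Workplace's real model is not public.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    group: str
    timestamp: float  # seconds since epoch
    pinned: bool = False

def relevance(post: Post, coworker_affinity: dict[str, float],
              group_activity: dict[str, float], now: float) -> float:
    """Score a post with the kinds of signals Codorniou mentions:
    who you work with and which groups you're most active in.
    The weights are illustrative assumptions."""
    affinity = coworker_affinity.get(post.author, 0.0)
    activity = group_activity.get(post.group, 0.0)
    recency = 1.0 / (1.0 + (now - post.timestamp) / 86400)  # decays per day
    score = affinity + activity + recency
    # Admin-pinned posts jump above any organically ranked post.
    return score + 100.0 if post.pinned else score

def sorted_feed(posts, coworker_affinity, group_activity, now):
    """Return posts algorithmically sorted, most relevant first."""
    return sorted(posts,
                  key=lambda p: relevance(p, coworker_affinity,
                                          group_activity, now),
                  reverse=True)
```

With this shape, a week-old pinned announcement from a CEO would still surface above a fresh post from a group you never read, which is the "guarantee reach to communications teams" behavior described above.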
The Safety Check also fits into this concept. Here, Facebook will be putting IT managers/Workplace admins into the driver’s seat, “giving them the keys to the feature”, said Codorniou, and letting them control the use and distribution of a feature that in regular Facebook is controlled by Facebook itself.
Frederic takes a deeper dive into Safety Check here, but the main idea, as Codorniou described it to me, is that it allows companies “to track and clear who is safe and who is not” when a particular location has been through an emergency or critical incident. There are apps that companies can use to run safety checks, or sometimes they might use SMS, but these tend to work more manually and are harder to execute quickly, he said. Facebook doesn’t reveal how widely its apps have penetrated organizations like Walmart and Starbucks, but this potentially becomes one lever for getting Workplace distributed more widely.
“Employees are a company’s number-one asset, and this helps make sure you are safe,” he added. “People don’t want to play Candy Crush, but things like Live” — which Workplace launched last year — “and Safety Check are relevant. They help turn companies into communities.”
(Community, of course, is the big theme for Facebook these days.)
All these updates are happening at a time when many people have been scrutinizing Facebook for its approach to user privacy and personal data.
The issue was notably highlighted over the Cambridge Analytica scandal many months ago, specifically over how third parties were able to access users’ information; and then more recently Facebook faced criticism two weeks ago, when it emerged that a bug in one of its features exposed user information to malicious hackers. Both of these problems were squarely about Facebook’s core consumer app, but I couldn’t help but wonder what kind of an impact it has had on the company’s enterprise business — given that levels of security in workplace networks typically tend to be higher as they are connected to corporate information.
“We had a few questions of course, but we have no reason to believe that Workplace was affected,” Codorniou said. He noted that there had once been a feature to log in to Workplace using a user’s Facebook ID, but that was disabled some time ago. “We have been investigating, but most customers are on single sign-on,” he noted, which uses services like Okta, OneLogin and Ping to connect and sign in employees to their Workplace spaces.
Facebook’s scale brings it huge advantages in the enterprise. The consumerization of the office stack means Facebook can easily port over its familiar features. It’s big enough to extensively dogfood Workplace within the company. And it already has advertising relationships with many of the world’s top brands. But being a tech giant comes with the associated scandals and constant criticism. Facebook will have to convince business leaders that its social troubles won’t muddy their suits.
Facebook’s first hardware product combines Alexa (and eventually Google Assistant) with a countertop video chat screen that zooms to always keep you in frame. Yet the fancy gadget’s success depends not on functionality, but whether people are willing to put a Facebook camera and microphone in their home even with a physical clip-on privacy shield.
Today Facebook launches pre-sales of the $199 10-inch screen Portal, and the $349 15.6-inch swiveling-screen Portal+ with hi-fi audio, minus $100 if you buy any two. They’ve got “Hey Portal” voice navigation, Facebook Messenger for video calls with family, Spotify and Pandora for Bluetooth and voice-activated music, Facebook Watch and soon more video content providers, augmented reality Story Time for kids, a third-party app platform, and each becomes a smart photo/video frame when idle.
Knowing buyers might be creeped out, Facebook’s VP of Portal Rafa Camargo tells me “We had to build all the stacks — hardware, software, and AI from scratch — and it allowed us to build privacy into each one of these layers”. There’s no facial recognition and instead just a technology called 2D pose that runs locally on the device to track your position so the camera can follow you if you move around. A separate chip for local detection only activates Portal when it hears its wake word, it doesn’t save recordings, and the data connection is encrypted. And with a tap you can electronically disable the camera and mic, or slide the plastic privacy shield over the lens to blind it while keeping voice controls active.
As you can see from our hands-on video demo here, Facebook packs features into high-quality hardware, especially in the beautiful Portal+, which has a screen you can pull from landscape to portrait orientation and an impressive-sounding 4-inch woofer. The standard Portal looks and sounds a bit stumpy by comparison. The Smart Camera smoothly zooms in and out for hands-free use, though there are plenty of times when video chatting from your mobile phone will be easier. The lack of YouTube and Netflix is annoying, but Facebook promises there are more video partners to come.
The $199 Portal comes in $20 cheaper than the less functional Amazon Echo Show (read our gadget reviewer Brian Heater’s take on Portal below), and will also have to compete with Lenovo and Google’s upcoming version that might have the benefit of YouTube. Portal and the $349 Portal+ go on sale today in the US on Portal.Facebook.com, Amazon and Best Buy in both black and white base colors. They ship in November, when they’ll also appear in physical Amazon Books and Best Buy stores.
Hands-On With Portal
Deep inside Facebook’s Menlo Park headquarters, the secretive Building 8 lab began work on Portal 18 months ago. The goal was to reimagine video chat not as a utilitarian communication tool, but as “the feeling of being in the same room even if you’re thousands of miles apart,” Facebook Portal’s marketing lead Dave Kaufman tells me. Clearly drinking the social network’s Kool-Aid, he says that “it’s clear that Facebook has done a good job when you’re talking about the breadth of human connection, but we’re focusing on the depth of connection.”
The saddening motive? 93% of the face-to-face time we spend with our parents is done by the time we finish high school, writes Wait But Why’s Tim Urban. “It felt like a punch in the gut to people working at Facebook,” says Kaufman. So the team built Portal to be simple enough for young children and grandparents to use, even if they’re too young or old to spend much time on smartphones.
Before you even wake up Portal, it runs a slideshow of your favorite Facebook photos and videos, plus shows birthday reminders and notifications. From the homescreen you’ll get suggested and favorite Messenger contacts you can tap to call, or you can just say “Hey Portal, call Josh.” Built atop the Android Open Source framework, Facebook designed a whole new UI for Portal for both touch and voice. Alexa is integrated already. “We definitely have been talking to Google as well,” Camargo tells me. “We view the future of these home devices . . . as where you will have multiple assistants and you will use them for whatever they do best . . . We’d like to expand and integrate with them.”
Portal uses your existing social graph instead of needing to import phone numbers or re-establish connections with friends. You can group video chat with up to seven friends, use augmented reality effects to hide your face or keep children entertained, and transfer calls to and from your phone. Some 400 million Facebookers use Messenger video chat monthly, racking up 17 billion calls in 2017, which inspired Facebook to build Portal around the feature. Kaufman says the ability to call phone numbers is on the roadmap, which could make Portal more useful for people who don’t live on Messenger.
Once a video call starts, the 140-degree, 12-megapixel Smart Lens snaps into action, automatically zooming and recentering so your face stays on camera even if you’re bustling around the kitchen or playing with the kids. A four-microphone array follows you too, keeping the audio crisp from a distance. If a second person comes into view, Portal widens the frame so you’re both visible. Tap on a person’s face, and Portal’s Spotlight feature crops in tight around just them. Facebook worked with an Oscar-winning cinematographer to make Smart Lens feel natural. Unfortunately it can’t track pets, but the feature got so many requests from testers that Facebook wants to add it. I suggested Portal should let you call businesses so you could move around or be entertained while on hold, though the team says it hasn’t discussed that.
Portal’s most adorable feature is called Story Time. It turns public domain children’s books into augmented reality experiences that illustrate the action and turn you into the characters. You’ll see the three little pigs pop up on your screen, and an AR mask lets you become the big bad wolf while you impersonate his voice. Kids and grandparents won’t always have much to talk about, and toddlers aren’t great conversation partners, so this could extend Portal calls beyond a quick hello.
Beyond chat, Facebook has built a grip of third-party experiences into Portal. You can use Alexa to summon Spotify, Pandora or iHeartRadio, and even opt to have songs play simultaneously on your Portal and someone else’s for a decentralized dance party. Portal also acts as a Bluetooth speaker, and Spotify Connect lets it power multi-room audio. Portal+ in portrait mode makes a great playlist display, with artwork and easy song skipping. The Food Network and Newsy apps let you watch short videos so you can follow recipes or catch up on the world as you do your housework. And while you can’t actually browse the News Feed, Facebook Watch pulls in original premium video as well as some viral pap to keep you occupied.
My biggest gripe with Portal is that there’s no voice-controlled text messaging feature. Perhaps we’ll see that down the line, though, as Facebook Messenger is now internally testing speech transcription and voice navigation. You can’t use WhatsApp or Instagram Direct, or pop open a web browser, either. Even with Smart Lens’ subject tracking, Portal is stuck on a table and lacks the convenience of video chatting from a phone held in that portable, stabilized gimbal commonly known as your hand. Other shortcomings could be shored up by the gadget’s app platform, which is currently invite-only, but Facebook will have to prove there are enough Portal buyers out there to lure developers.
So how will Facebook make money on Portal? “We definitely don’t have ads on the devices, and we don’t see that coming,” says Camargo. Facebook wouldn’t reveal the margin it will earn selling the device, but when asked if it’s a loss leader for driving ad views on its social network, Camargo tells me “I wouldn’t say that’s the case,” though boosting engagement is surely an incentive. Portal could earn money from enterprise clients, though, as Facebook is already internally testing a version of its Workplace team collaboration suite’s video chat feature on Portal. The team laughs that Facebook employees are starting to prefer Portal to their offices’ expensive and complex video conferencing hardware.
Privacy vs Utility
After Cambridge Analytica and Facebook’s recent 50 million user breach, it’s understandable that some people would be scared to own Portal’s all-seeing eye. Privacy makes Portal a non-starter for many, even as they seem comfortable with Google or Amazon having access to their dwellings. In hopes of assuaging fears, Facebook put a dedicated button atop Portal that electronically disconnects the camera and microphone so they can’t record, let alone transmit. Portal isn’t allowed to save video, and Facebook says it won’t store your voice commands (though Alexa does). Oh, and just to kill a pervasive rumor, Camargo definitively confirmed that Facebook’s smartphone apps don’t secretly record you either.
For added protection, snap on the plastic privacy shield and you’ll blind the lens while still being able to voice-activate music and other features. Use these controls, especially when you’re not video chatting, and the privacy threat drops significantly. The fact that the shield isn’t attached on a hinge to swing into place makes it feel like a last-minute scramble after a year of privacy scandals, even though Camargo claims all hardware decisions were locked in before this year.
Doing his part on the PR offensive to combat the privacy narrative, Facebook CEO Mark Zuckerberg shared a photo of his young daughters playing with Portal, and wrote “Our girls don’t use a lot of screens yet, but we’re happy for them to do video calls to see their grandparents or so I can see them when I’m traveling.”
You could see Zuckerberg’s willingness to ship Portal amid a storm of negative press as either infuriatingly negligent while Facebook’s privacy troubles remain, or a show of impressive conviction that the smart home is a future people want and that the company must be part of. Maybe Portal is an improbable Hail Mary, or maybe it’s a calculated bet that the cynical and vocal minority doesn’t represent the average person, who cares more about convenience than privacy. Camargo admits that “If no one wants it ever, we will reassess. But we also don’t think we’ll come and get it all right, so we will continue to evolve. We’re already investing in expanding the product line with more products we want to launch next year.”
Overall, Portal could replace your favorite Alexa device and add seamless video chatting through Messenger, if you’re willing to pay the price. That’s both the higher cost and the ‘brand tax’ of welcoming a data-gobbler with a history of privacy stumbles into your home. But Facebook also benefits from being a neutral party between Amazon’s Alexa and Google’s Assistant. If it can integrate both assistants into one device alongside Portal’s own voice controls, it could offer the best of all worlds.
For a first-time hardware maker, Facebook did a remarkable job of building polished devices that add new value instead of reinventing the smart home wheel. Teaming up with Amazon and eventually Google instead of directly competing with their voice assistants shows a measure of humility most tech giants eschew. Yet a history of “move fast and break things” in search of growth has come back to haunt Facebook. Video chat is about spending time with people you love and trust, and Facebook hasn’t earned those feelings from us.
Less than 10 percent of the 50 million users attacked in Facebook’s recent breach lived in the European Union, tweeted the Irish Data Protection Commission, which oversees privacy in the region. However, Facebook could still be liable for up to $1.63 billion in fines, or 4 percent of its $40.7 billion in annual global revenue for the prior financial year, if the EU determines it didn’t do enough to protect the security of its users.
Facebook wrote in response to the IDPC’s tweet that “We’re working with regulators including the Irish Data Protection Commission to share preliminary data about Friday’s security issue. As we work to confirm the location of those potentially affected, we plan to release further info soon.”
Facebook alerted regulators and the public to the breach Friday morning after discovering it Tuesday afternoon. That’s important because it came in under the GDPR’s 72-hour deadline for announcing hacks; missing that deadline can trigger an additional fine of up to 2 percent of a company’s global revenue.
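To put those numbers in context, here’s a minimal sketch of the two GDPR fine ceilings applied to the revenue figure above. The function name and structure are illustrative, not part of any official tooling; the 2% and 4% tiers come from the regulation, and the actual cap is the higher of the percentage or a fixed euro amount, which is omitted here for simplicity.

```python
# Sketch: GDPR administrative fine ceilings (Articles 83(4) and 83(5))
# applied to Facebook's prior-year global revenue, per the figures above.

ANNUAL_GLOBAL_REVENUE = 40.7e9  # USD, Facebook's prior financial year


def gdpr_fine_cap(revenue: float, tier: int) -> float:
    """Return the maximum fine for a GDPR fine tier.

    tier 1 (e.g. failure to notify a breach in time): up to 2% of turnover
    tier 2 (e.g. inadequate data security):           up to 4% of turnover
    """
    rates = {1: 0.02, 2: 0.04}
    return revenue * rates[tier]


security_cap = gdpr_fine_cap(ANNUAL_GLOBAL_REVENUE, tier=2)
notification_cap = gdpr_fine_cap(ANNUAL_GLOBAL_REVENUE, tier=1)

print(f"Security-failure cap:  ${security_cap / 1e9:.2f}B")   # $1.63B
print(f"Late-notification cap: ${notification_cap / 1e9:.2f}B")  # $0.81B
```

That 4% tier is where the article’s $1.63 billion figure comes from; a missed 72-hour notification window would add exposure under the 2% tier on top.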
UPDATE Facebook data breach – @DPCIreland understands that the number of potentially affected EU accounts is less than 10% of the 50 million accounts in total potentially affected by the security breach. DPC Ireland statement beneath. #dataprotection #GDPR #EUdataP pic.twitter.com/oSfGy6DP2S
— Data Protection Commission Ireland (@DPCIreland) October 1, 2018
That hack saw sophisticated attackers combine three bugs in Facebook’s profile, privacy, and video uploading features to steal the access token of 50 million users. These access tokens could allow the attackers to take over user accounts and act as them on Facebook, Instagram, Oculus, and other sites that rely on Facebook’s login system. The EU’s GDPR laws threaten heavy fines for improper security practices and are seen as stricter than those in the US, so its findings during this investigation carry weight.
The big question remains what data was stolen and how it could potentially be misused. Unless investigators or journalists discover a nefarious application for that data, such as how Cambridge Analytica’s ill-gotten data was used to inform Donald Trump’s campaign strategy, the public is unlikely to see this as more than just another of Facebook’s constant privacy scandals. It could still trigger regulation, or push partners away from using Facebook’s login system, but the world seems to be growing numb to the daily cybersecurity breaches that plague the internet.
At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.
“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.
Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.
But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.
You could say Facebook has ‘hostility to privacy‘ as a core value.
Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.
But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.
On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third-party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.
Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.
All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.
Facebook shielded its founder from one sought-after grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)
The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”
As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.
But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.
He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.
What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.
Simply put, Facebook is weaponizing security to shield its erosion of privacy.
Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.
Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.
In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.
Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
Though fines aren’t the real point; if Facebook is forced to change its processes, meaning how it harvests and mines people’s data, that could knock a major, major hole right through its profit center.
Hence the existential nature of the threat.
The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.
Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.
One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.
It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.
US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.
Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.
So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws none of the tech giants said they would.)
The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework that would override more stringent state laws.
Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — to force an expensive and time-consuming legal fight.
While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.
In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.
Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.
So the game Facebook is here playing is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.
Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:
It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.
At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.
“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.
He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.
“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”
He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).
These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.
But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.
No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.
Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. It must, presumably, hold shadow profiles on non-users there too, yet it was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…
So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.
What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.
Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.
Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.
“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.
So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.
Safe to say, that’s not going to play at all well in Europe.
Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!
Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seem very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population, then.)
In other contexts this would be called spying — or, well, ‘mass surveillance’.
It’s also how Facebook makes money.
And yet when called in front of lawmakers to answer questions about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.
It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.
Except Facebook is a commercial company, not the NSA.
So it’s only fighting to keep being able to carpet-bomb the planet with ads.
Profiting from shadow profiles
Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found the company takes phone numbers provided to it by users for the specific (security) purpose of enabling two-factor authentication, a technique intended to make it harder for a hacker to take over an account, and also uses those numbers to target them with ads.
In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.
Any security expert worth their salt will have spent long years encouraging web users to turn on two-factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. It works against those valiant infosec efforts — so it risks eroding users’ security as well as trampling all over their privacy.
It’s just a double whammy of awful, awful behavior.
I spend a lot of time trying to convince people to lock down their social media accounts with 2FA. Boy does this undermine my efforts. https://t.co/tPo4keQkT7
— Eva (@evacide) September 28, 2018
And of course, there’s more.
A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.
In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy-conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used the opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.
Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.
Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)
So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)
In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.
One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Agree to do what Facebook wanted, on the other hand, and Facebook said it would help “protect you from a stranger using your photo to impersonate you”…
Sure #Facebook, I'll take a milisecond to consider whether you want me to enable #facialrecognition for my own protection or your #data #tracking business model. #Disingenuous pricks! pic.twitter.com/s7nngaHVSq
— Jennifer Baker (@BrusselsGeek) April 20, 2018
That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.
There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.
These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption out right across the platform — even after selling their app to the adtech giant in 2014.
WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.
On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.
So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.
This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any profiles they have on other Facebook-owned services and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads seeking their target.
No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.
It’s a very dangerous strategy, though.
Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.
What’s the best security practice of all? That’s super simple: Not holding data in the first place.