Egnyte announced today it was combining its two main products — Egnyte Protect and Egnyte Connect — into a single platform to help customers manage, govern and secure their data with a single set of tools.
Egnyte co-founder and CEO Vineet Jain says that this new single platform approach is being driven chiefly by the sheer volume of data they are seeing from customers, especially as they shift from on-prem to the cloud.
“The underlying pervasive theme is that there’s a rapid acceleration of data going to the cloud, and we’ve seen that in our customers,” Jain told TechCrunch. He says that long-time customers have been shifting from terabytes to petabytes of data, while new customers are starting out with a few hundred terabytes instead of five or ten.
As this has happened, he says customers are asking for a way to deal with this data glut with a single platform because the volume of data makes it too much to handle with separate tools. “Instead of looking at this as separate problems, customers are saying they want a solution that helps address the productivity part at the same time as the security part. That’s because there is more data in the cloud, and concerns around data security and privacy, along with increasing compliance requirements, are driving the need to have it in one unified platform,” he explained.
The company is doing this because managing the data needs to be tied to security and governance policies. “They are not ultimately separate ideas,” Jain says.
Jain says, up until recently, the company saw the data management piece as the way into a customer, and after they had that locked down, they would move to layer on security and compliance as a value-add. Today, partly due to the data glut and partly due to compliance regulations, Jain says, these are no longer separate ideas, and his company has evolved its approach to meet the changing requirements of customers.
Egnyte was founded in 2007 and has raised over $138 million at a $460 million post-money valuation, according to PitchBook data. Its most recent round was $75 million led by Goldman Sachs in September 2018. Egnyte passed the $100 million ARR mark in November.
TikTok may be the fastest-growing social network in the history of the internet, but it is also quickly becoming the fastest-growing security threat and thorn in the side of U.S. China hawks.
The latest, according to a notice published by the U.S. Navy this past week and reported on by Reuters and the South China Morning Post, is that TikTok will no longer be allowed on service members’ devices; those who fail to remove it may be blocked from the military service’s intranet.
It’s just the latest example of the challenges facing the extremely popular app. Recently, Congress, led by Missouri Senator Josh Hawley, demanded a national security review of TikTok and its Sequoia-backed parent company ByteDance, along with other tech companies that may share data with foreign governments like China. Concerns over the leaking of confidential communications recently led the U.S. government to demand that Chinese owner Beijing Kunlun unwind its acquisition of gay social network app Grindr.
The intensity of criticism on both sides of the Pacific has made it increasingly challenging to manage tech companies across the divide. As I recently discussed here on TechCrunch, Shutterstock has actively made it harder and harder to find photos deemed controversial by the Chinese government on its stock photography platform, a play to avoid losing a critical source of revenue.
We saw similar challenges with Google and its Project Dragonfly China-focused search engine as well as with the NBA.
What’s interesting here though is that companies on both sides are struggling with policy on both sides. Chinese companies like ByteDance are increasingly being targeted and shut out of the U.S. market, while American companies have long struggled to get a foothold in the Middle Kingdom. That might be a more equal playing field than it has been in the past, but it is certainly a less free market than it could be.
While the trade fight between China and the U.S. continues, the damage will continue to fall on companies that fail to draw within the lines set by policymakers in both countries. Whether any tech company can bridge that divide in the future unfortunately remains to be seen.
What the networking company gets with a shiny red ribbon is a security product that helps stop automated attacks like credential stuffing. In an article earlier this year, Shape CTO Shuman Ghosemajumder explained what the company does:
We’re an enterprise-focused company that protects the majority of large U.S. banks, the majority of the largest airlines, similar kinds of profiles with major retailers, hotel chains, government agencies and so on. We specifically protect them against automated fraud and abuse on their consumer-facing applications — their websites and their mobile apps.
F5 president and CEO François Locoh-Donou sees a way to protect his customers in a comprehensive way. “With Shape, we will deliver end-to-end application protection, which means revenue generating, brand-anchoring applications are protected from the point at which they are created through to the point where consumers interact with them—from code to customer,” Locoh-Donou said in a statement.
As for Shape, CEO Derek Smith said that it wasn’t a huge coincidence that F5 was the buyer, given his company was seeing F5 consistently in its customers. Now they can work together as a single platform.
Shape launched in 2011 and raised $183 million, according to Crunchbase data. Investors included Kleiner Perkins, Tomorrow Partners, Norwest Venture Partners, Baseline Ventures and C5 Capital. In its most recent round in September, the company raised $51 million on a valuation of $1 billion.
F5 has been in a spending mood this year. It also acquired NGINX in March for $ 670 million. NGINX is the commercial company behind the open-source web server of the same name. It’s worth noting that prior to that, F5 had not made an acquisition since 2014.
It was a big year in security M&A. Consider that in June, four security companies sold in one three-day period. That included Insight Partners buying Recorded Future for $780 million and FireEye buying Verodin for $250 million. Palo Alto Networks bought two companies in the period: Twistlock for $400 million and PureSec for between $60 million and $70 million.
This deal is expected to close in mid-2020 and is, of course, subject to standard regulatory approval. Upon closing, Shape’s Smith will join the F5 management team and Shape employees will be folded into F5. The company will remain in its Santa Clara headquarters.
A few years ago I started a website, and to my delight, the SEO efforts I was making to grow it were yielding results. However, one day I checked my rankings and got the shock of my life. They had fallen, and badly.
I was doing my SEO right and I felt that was enough, but I didn’t know there was more. I hadn’t paid attention to my website security, and I didn’t even know that it mattered when it comes to Google and its ranking factors. Also, there were other security concerns I wasn’t paying attention to. As far as I was concerned back then, it didn’t matter since I had good content.
Obviously I was wrong, and I now know that if you really want to rank higher and increase your site’s search traffic, you need to understand that there is more to it than just building links and churning out more content. Understanding Google’s algorithm and its ranking factors is crucial.
Currently, Google considers over 200 ranking factors when determining where to rank a site. And as expected, one of them is how well protected your site is. According to Google, website security is a top priority, and it invests heavily in ensuring that all of its services, including Gmail and Google Drive, use top-notch security and other privacy tools by default, all in a bid to make the internet a safer place generally.
Unfortunately, I was uninformed about these factors until my rankings started dropping. Below are four things you can do to protect your site.
Four steps to get started on website security
1. Get security plug-ins installed
On average, a typical small business website gets attacked 44 times each day, and software “bots” attack these sites more than 150 million times every week. And this applies to WordPress and non-WordPress websites alike.
Malware and security breaches can lead to hackers stealing your data, to data loss, or even to losing access to your website altogether. In some cases, an attack can deface your website, which will not just damage your brand’s reputation, it will also hurt your SEO rankings.
To prevent that from happening, enhance your website security with WordPress plugins. These plugins will not just block brute force and malware attacks, they will harden your site’s WordPress security, addressing platform-specific vulnerabilities and countering other hack attempts that could pose a threat to your website.
2. Use very strong passwords
As tempting as it is to use a password you can easily remember, don’t. Surprisingly, the most common password is still 123456. You can’t afford to take such risks.
Make the effort to generate a secure password. The rule is to mix up letters, numbers, and special characters, and to make it long. And this is not just for you. Ensure that all those who have access to your website are held to the same high standard that you hold yourself.
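If generating such a password by hand feels error-prone, a few lines of Python will do it for you. This is a minimal sketch using the standard library’s `secrets` module; the `generate_password` helper name and the 16-character default are illustrative choices, not a prescribed standard:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, numbers, and special characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from a cryptographically secure random source,
    # unlike the random module, whose output is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # a fresh random password each run
print(generate_password(32)) # longer is stronger
```

The same rule applies here as in the advice above: length matters more than cleverness, so prefer a longer password (or a password manager) over a short memorable one.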
3. Ensure your website is constantly updated
As much as using a content management system (CMS) comes with a lot of benefits, it also has attendant risks. According to this Sucuri report, vulnerabilities in a CMS’s extensible components are the leading cause of website infections. That is because the code used in these tools is easily accessible, since they are usually created as open-source software. That means hackers can access it too.
To protect your website, make sure your plugins, CMS, and apps are all updated regularly.
4. Install an SSL certificate
If you pay attention, you will notice that some URLs begin with “https://” while others start with “http://”. You have likely noticed this when you needed to make an online payment. The big question is: what does the “s” mean, and where did it come from?
To explain it in very simple terms, that extra “s” is a way of showing that the connection you have with that website is encrypted and secure. That means that any data you input on that website is safe. That little “s” represents a technology known as SSL.
But why is website security important for SEO ranking?
Following Google’s Chrome update in 2017, sites that have forms but no SSL certificate are marked as insecure. SSL, or “Secure Sockets Layer,” is the technology that encrypts the link between a browser and a web server, protects the site from hackers, and ensures that all the data passed between a browser and a web server remains private.
A secure website shows a padlock in the URL bar, while sites without SSL certificates are tagged “Not Secure”. This applies to any website that has a form.
According to research carried out by HubSpot, 82% of consumer survey respondents stated that they would leave a website that is not secure. And since Google Chrome already holds about 67% of the browser market, that is a lot of traffic to lose.
Technically, the major benefit of having Hypertext Transfer Protocol Secure (HTTPS) instead of Hypertext Transfer Protocol (HTTP) is that it gives users a more secure connection they can use to share personal data with you. This adds an additional layer of security, which becomes especially important if you accept any form of payment on your site.
To move from HTTP to HTTPS you have to get an SSL certificate (Secure Socket Layer certificate) installed on your website.
Once your SSL certificate is successfully installed and configured on your web server, Google Chrome will show a padlock, indicating an encrypted connection between the browser and the web server. For you, what this means is that even if a hacker is able to intercept your data, it will be impossible for them to decrypt it.
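One practical worry after installation is whether the certificate is valid and when it expires. Python’s standard `ssl` module can check this from the command line; `get_certificate_expiry` is an illustrative helper name, and this is a minimal sketch rather than a full monitoring tool:

```python
import socket
import ssl

def get_certificate_expiry(hostname: str, port: int = 443) -> str:
    """Open a TLS connection and return the server certificate's expiry date string."""
    # create_default_context() verifies the certificate chain and the
    # hostname by default, so an invalid certificate raises an SSLError.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return cert["notAfter"]

# Example (requires network access):
# print(get_certificate_expiry("example.com"))
```

If the connection succeeds, the certificate chain was trusted; if it raises `ssl.SSLError` (or `ssl.SSLCertVerificationError`), browsers will flag the site too.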
Security may have only a minor direct effect on your website ranking, but it affects your website in many indirect ways. It may mean paying a small price, but in the end the effort is worth it.
The post Why website security affects SEO rankings (and what you can do about it) appeared first on Search Engine Watch.
Rhino was founded in 2017 with the goal of getting back to renters the billions of dollars that are locked up in cash security deposits, all while protecting landlords and their property. As it stands now, landlords usually take one month’s rent to cover any damage that might be done to the apartment during the lease. This is piled on top of first and sometimes last month’s rent, and even at times a broker’s fee of one month’s rent, which adds up to an incredibly steep cost of moving.
Because of certain regulations, this money is held in an individual escrow account and can’t really generate interest, which results in billions of dollars sitting dead in accounts instead of circulating in the economy.
Rhino is looking to give renters the option to pay a small monthly fee (as low as $3) to cover an insurance policy for the landlord. Rhino is itself a managing general agent, allowing the company to both sell and create policy plans for landlords through partnerships with carriers.
Thus far the startup has saved renters upwards of $60 million in 2019, with users in more than 300,000 rental units across the country.
“The greatest challenge is working against legacy and industry norms,” said Rhino CEO and co-founder Paraag Sarva. “That start has begun, but there is a huge amount of inertia behind the status quo and that is far and away what we are most challenged by day in and day out.”
To help speed up the process, Rhino is working alongside policymakers to enact change on a federal level.
Alongside the funding announcement, the company is announcing its new policy proposal that was created in collaboration with federal, state and local government officials. The policy essentially allows for renters to be given a choice when it comes to cash deposits, including allowing residents to cover security deposits in installments or use insurtech products like Rhino to cover deposits.
Rhino says it will be sharing the policy proposal with 2020 presidential candidates on both sides of the aisle.
Rhino is one of a handful of companies that has been incubated by Kairos, a startup studio led by Ankur Jain with the goal of solving the biggest problems faced by everyday Americans. The studio focuses on housing and healthcare, with companies such as Rhino, June Homes, Little Spoon, Cera and a couple of startups still in stealth.
Senate Democrats want to remind everyone that US elections are still at risk, and Congress could do more to protect them.
Feed: All Latest
The company believes that by combining its cloud backup service with Webroot’s endpoint security tools, it will give customers a more complete solution. Webroot’s history actually predates the cloud; the company launched in 1997. The private company reported $250 million in revenue for fiscal 2018, according to data provided by Carbonite. That will combine with Carbonite’s $296.4 million in revenue for the same period.
Carbonite CEO and president Mohamad Ali saw the deal as a way to expand the Carbonite offering. “With threats like ransomware evolving daily, our customers and partners are increasingly seeking a more comprehensive solution that is both powerful and easy to use. Backup and recovery, combined with endpoint security and threat intelligence, is a differentiated solution that provides one, comprehensive data protection platform,” Ali explained in a statement.
The deal not only enhances Carbonite’s backup offering, it gives the company access to a new set of customers. While Carbonite sells mainly through Value Added Resellers (VARs), Webroot’s customers are mainly 14,000 Managed Service Providers (MSPs). That lack of overlap could extend Carbonite’s market reach into the MSP channel. Webroot has 300,000 customers, according to Carbonite.
This is not the first Carbonite acquisition. It has acquired several other companies over the last several years, including buying Mozy from Dell a year ago for $145 million. The acquisition strategy is about using its checkbook to expand the capabilities of the platform to offer a more comprehensive set of tools beyond core backup and recovery.
The company announced it is using cash on hand and a $550 million loan from Barclays, Citizens Bank and RBC Capital Markets to finance the deal. As usual, the acquisition will be subject to regulatory approval, but is expected to close this quarter.
WIRED’s Gadget Lab team kicks off the new year with a wrap-up of the year’s biggest electronics show. Plus, an interview with Reddit’s Jen Wong.
A grey hat hacking hero, bad boat news, and more security news this week.
At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.
“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.
Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.
But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.
You could say Facebook has ‘hostility to privacy‘ as a core value.
Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.
But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.
On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops, we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.
Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.
All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.
Facebook shielded its founder from one sought-for grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)
The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”
As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.
But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.
He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.
What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.
Simply put, Facebook is weaponizing security to shield its erosion of privacy.
Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.
Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.
In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.
Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
Though fines aren’t the real point; if Facebook is forced to change its processes, that is, how it harvests and mines people’s data, that could knock a major, major hole right through its profit center.
Hence the existential nature of the threat.
The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.
Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.
One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.
It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.
US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.
Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.
So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though when also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws, none of the tech giants said they would.)
The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.
Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with, forcing an expensive and time-consuming legal fight.
While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.
In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.
Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.
So the game Facebook is playing here is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.
Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:
It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.
At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.
“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.
He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.
“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”
He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).
These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.
But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.
No matter this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.
Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. Yet it must, presumably, hold shadow profiles on non-users there too. Yet was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…
So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.
What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.
Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.
Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.
“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.
So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.
Safe to say, that’s not going to play at all well in Europe.
Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!
Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seem very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)
In other contexts this would be called spying — or, well, ‘mass surveillance’.
It’s also how Facebook makes money.
And yet when called in front of lawmakers to answer for the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.
It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.
Except Facebook is a commercial company, not the NSA.
So it’s only fighting to keep being able to carpet-bomb the planet with ads.
Profiting from shadow profiles
Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reporting. The same academics found the company takes phone numbers provided to it by users for the specific (security) purpose of enabling two-factor authentication — a technique intended to make it harder for a hacker to take over an account — and uses them to target those users with ads as well.
In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.
Any security expert worth their salt will have spent long years encouraging web users to turn on two-factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful, because it works against those valiant infosec efforts — and so risks eroding users’ security as well as trampling all over their privacy.
It’s just a double whammy of awful, awful behavior.
I spend a lot of time trying to convince people to lock down their social media accounts with 2FA. Boy does this undermine my efforts. https://t.co/tPo4keQkT7
— Eva (@evacide) September 28, 2018
And of course, there’s more.
A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.
In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.
Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.
Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)
So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)
In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.
One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…
Sure #Facebook, I'll take a milisecond to consider whether you want me to enable #facialrecognition for my own protection or your #data #tracking business model. #Disingenuous pricks! pic.twitter.com/s7nngaHVSq
— Jennifer Baker (@BrusselsGeek) April 20, 2018
That example shows the company is not above actively jerking on the chain of people’s security fears, just as it passively exploits similar security worries when it repurposes 2FA digits for ad targeting.
There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.
These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.
WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.
On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.
So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.
This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other Facebook Group profiles they have ever had and any shadow profiles it maintains in parallel. WhatsApp’s 1.5BN+ users will soon become bullseyes for yet more creepily intrusive Facebook ads seeking their target.
No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.
Yet this is a very dangerous strategy.
Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.
What’s the best security practice of all? That’s super simple: Not holding data in the first place.