The company believes that by combining its cloud backup service with Webroot’s endpoint security tools, it will give customers a more complete solution. Webroot’s history actually predates the cloud, having launched in 1997. The private company reported $250 million in revenue for fiscal 2018, according to data provided by Carbonite. That will combine with Carbonite’s $296.4 million in revenue for the same time period.
Carbonite CEO and president Mohamad Ali saw the deal as a way to expand the Carbonite offering. “With threats like ransomware evolving daily, our customers and partners are increasingly seeking a more comprehensive solution that is both powerful and easy to use. Backup and recovery, combined with endpoint security and threat intelligence, is a differentiated solution that provides one, comprehensive data protection platform,” Ali explained in a statement.
The deal not only enhances Carbonite’s backup offering, it gives the company access to a new set of customers. While Carbonite sells mainly through Value Added Resellers (VARs), Webroot’s customers are mainly its 14,000 Managed Service Providers (MSPs). That lack of overlap could extend Carbonite’s market reach into the MSP channel. Webroot has 300,000 customers, according to Carbonite.
This is not the first Carbonite acquisition. It has acquired several other companies over the last several years, including buying Mozy from Dell a year ago for $145 million. The acquisition strategy is about using its checkbook to expand the capabilities of the platform to offer a more comprehensive set of tools beyond core backup and recovery.
The company announced it is using cash on hand and a $550 million loan from Barclays, Citizens Bank and RBC Capital Markets to finance the deal. Per usual, the acquisition will be subject to regulatory approval, but is expected to close this quarter.
At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.
“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.
Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.
But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.
You could say Facebook has ‘hostility to privacy‘ as a core value.
Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.
But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.
On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.
Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.
All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.
Facebook shielded its founder from one much sought-after grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)
The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”
As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.
But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.
He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.
What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.
Simply put, Facebook is weaponizing security to shield its erosion of privacy.
Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.
Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.
In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.
Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.
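For a rough sense of scale: Facebook booked a little over $40 billion in revenue in 2017, so a maximum fine calculated at 4% of global turnover would work out to something on the order of $1.6 billion for a single infringement.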
Though fines aren’t the real point; if Facebook is forced to change its processes, meaning how it harvests and mines people’s data, that could knock a major, major hole right through its profit center.
Hence the existential nature of the threat.
The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.
Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.
One target for GDPR complainants is so-called ‘forced consent‘ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.
It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.
US lawmakers are now directly asking tech firms whether they should implement GDPR-style legislation at home.
Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.
So a joint lobbying front to try to water down any US privacy clampdown is in full effect. (Though also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws, none of the tech giants said they would.)
The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.
Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — to force an expensive and time-consuming legal fight.
While ‘innovation’ is one oft-trotted-out angle tech firms, Facebook included, use to argue against consumer privacy protections, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.
In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.
Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.
So the game Facebook is here playing is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.
Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:
It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.
At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.
“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.
He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.
“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”
He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).
These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.
But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.
No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.
Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. It must, presumably, hold shadow profiles on non-users there too. Yet it was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…
So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.
What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.
Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.
Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.
“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.
So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.
Safe to say, that’s not going to play at all well in Europe.
Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!
Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seems very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)
In other contexts this would be called spying — or, well, ‘mass surveillance’.
It’s also how Facebook makes money.
And yet when called in front of lawmakers and asked about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.
It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.
Except Facebook is a commercial company, not the NSA.
So it’s only fighting to keep being able to carpet-bomb the planet with ads.
Profiting from shadow profiles
Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found that the company takes phone numbers handed over by users for the specific (security) purpose of enabling two-factor authentication, a technique intended to make it harder for a hacker to take over an account, and uses those numbers to target the same users with ads.
In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.
Any security expert worth their salt will have spent long years encouraging web users to turn on two factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so risks eroding users’ security as well as trampling all over their privacy.
It’s just a double whammy of awful, awful behavior.
I spend a lot of time trying to convince people to lock down their social media accounts with 2FA. Boy does this undermine my efforts. https://t.co/tPo4keQkT7
— Eva (@evacide) September 28, 2018
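For a sense of what app-based 2FA actually involves, here is a minimal sketch (standard library only, and nothing Facebook-specific about it) of how an authenticator app derives its codes under RFC 6238. The only shared input is a secret exchanged once at setup, which is why a phone number is never required for this flavour of 2FA, and why repurposing numbers collected for SMS-based 2FA feels like such a betrayal of the security pitch.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238 time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The secret is exchanged once, usually via a QR code; no phone number is involved.
secret = base64.b32encode(b"a-shared-secret-for-demo").decode()
print(totp(secret))   # the six-digit code an authenticator app would display right now
```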
And of course, there’s more.
A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.
In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy-conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.
Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.
Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)
So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)
In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.
One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…
Sure #Facebook, I'll take a milisecond to consider whether you want me to enable #facialrecognition for my own protection or your #data #tracking business model. #Disingenuous pricks! pic.twitter.com/s7nngaHVSq
— Jennifer Baker (@BrusselsGeek) April 20, 2018
That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.
There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.
These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
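Phone number matching of this kind is conceptually simple. The sketch below is purely illustrative (the numbers and account names are invented, and Facebook’s actual matching pipeline is not public): it just shows how normalizing numbers into a common form turns them into a join key that collapses two otherwise separate accounts into one linked identity.

```python
import hashlib
import re

def normalize(phone: str, default_cc: str = "1") -> str:
    """Rough normalization to a digits-only, country-code-prefixed form.
    (Real systems do full E.164 parsing; this is just for illustration.)"""
    digits = re.sub(r"\D", "", phone)
    return digits if len(digits) > 10 else default_cc + digits

def join_key(phone: str) -> str:
    """Hash the normalized number so the join key isn't the raw number itself."""
    return hashlib.sha256(normalize(phone).encode()).hexdigest()

# Invented sample records from two nominally separate services.
facebook_accounts = {join_key("+1 415-555-0133"): "fb_user_42"}
whatsapp_accounts = {join_key("(415) 555-0133"): "wa_user_99"}

# Accounts sharing a phone number collapse into a single linked identity.
linked = {k: (facebook_accounts[k], whatsapp_accounts[k])
          for k in facebook_accounts.keys() & whatsapp_accounts.keys()}
print(linked)
```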
Thing is, WhatsApp got fat on its founders’ promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll out e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.
WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.
On the contrary; The company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.
So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.
This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other profiles they have ever had across the Facebook group of services and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek their target.
No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.
This is a very dangerous strategy, though.
Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.
What’s the best security practice of all? That’s super simple: Not holding data in the first place.
The growth of cloud services — with on-demand access to IT services over the Internet — has become one of the biggest evolutions in enterprise technology, but with it, so has the threat of security breaches and other cybercriminal activity. Now it appears that one of the leading companies in cloud services is looking for more ways to double down and fight the latter. Amazon’s AWS has been working on a range of new cryptographic and AI-based tools to help manage the security around cloud-based enterprise services, and it currently has over 130 vacancies for engineers with cryptography skills to help build and run it all.
One significant part of the work has been within a division of AWS called the Automated Reasoning Group, which focuses on identifying security issues and developing new tools to fix them for AWS and its customers. It draws on automated reasoning, a branch of artificial intelligence spanning computer science and mathematical logic that aims to have computers carry out reasoning fully, or nearly fully, automatically.
Classified in its patent application as “computer software for cryptographic protocol specification and verification,” Quivela also has a GitHub repository under the AWS Labs profile that describes it as a “prototype tool for proving the security of cryptographic protocols,” developed by the AWS Automated Reasoning Group. (Sharing code and ideas with the community is part of the ARG’s mission.)
SideTrail is not on Github, but Byron Cook, an academic who is the founder and director of the AWS Automated Reasoning Group, has co-authored a research paper called “SideTrail: Verifying the Time Balancing of Cryptosystems.” However, the link to the paper, describing what this is about, is no longer working.
The trademark application for SideTrail includes a long list of potential applications (as trademark applications often do). The general idea is cryptography-based security services. Among them: “Computer software, namely, software for monitoring, identifying, tracking, logging, analyzing, verifying, and profiling the health and security of cryptosystems; network encryption software; computer network security software,” “Providing access to hosted operating systems and computer applications through the Internet,” and a smattering of consulting potential: “Consultation in the field of cloud computing; research and development in the field of security and encryption for cryptosystems; research and development in the field of software; research and development in the field of information technology; computer systems analysis.”
Added to this, in July a customer of AWS began testing two other new cryptographic tools developed by the ARG, also aimed at improving an organization’s cybersecurity; the tools were originally released the previous August (2017). Tiros and Zelkova, as the two tools are called, are math-based techniques that variously evaluate access control schemes, security configurations and feedback based on different setups to help troubleshoot and prove the effectiveness of security systems across storage (S3) buckets.
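Neither tool is public, but the underlying idea of reasoning about access policies with a constraint solver can be illustrated with a toy example. The sketch below is my own encoding, using the open-source Z3 solver rather than anything Amazon ships: it models a hypothetical two-statement bucket policy as boolean constraints and asks whether any request from an anonymous caller can ever be granted.

```python
# pip install z3-solver
from z3 import And, Bool, Not, Solver, sat

# Hypothetical two-statement bucket policy, encoded as boolean constraints:
#   Statement 1: allow GetObject to any principal (the dreaded "Principal": "*")
#   Statement 2: deny every request that does not arrive over TLS
anonymous  = Bool("requester_is_anonymous")
get_object = Bool("action_is_get_object")
over_tls   = Bool("request_uses_tls")

allowed = get_object      # statement 1 applies to all principals, so no principal test
denied  = Not(over_tls)   # statement 2
access_granted = And(allowed, Not(denied))

# Question for the solver: is there any request where an anonymous caller succeeds?
solver = Solver()
solver.add(access_granted, anonymous)
if solver.check() == sat:
    print("Anonymous access is reachable, for example:", solver.model())
else:
    print("No anonymous access is possible under this policy.")
```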
Amazon has not trademarked Tiros and Zelkova. A Zelkova trademark, for financial services, appears to be registered as an LLC called “Zelkova Acquisition” in Las Vegas, while there is no active trademark listed for Tiros.
Amazon declined to respond to our questions about the trademarks. A selection of people we contacted associated with the projects did not respond to requests for comment.
More generally, cryptography is a central part of how IT services are secured: Amazon’s Automated Reasoning Group has been around since 2014 working in this area. But Amazon appears to be doing more now, both to ramp up the tools it produces and to consider how they can be applied across the wider business. A quick look at open vacancies at the company shows that there are currently 132 openings at Amazon for people with cryptography skills.
“Cloud is the new computer, the Earth is the motherboard and data centers are the cards,” Cook said in a lecture he delivered recently describing AWS and the work that the ARG is doing to help AWS grow. “The challenge is that as [AWS] scales it needs to be ever more secure… How does AWS continue to scale quickly and securely?
“AWS has made a big bet on our community,” he continued, as one answer to that question. That’s led to an expansion of the group’s activities in areas like formal verification and beyond, as a way of working with customers and encouraging them to move more data to the cloud.
Amazon is also making some key acquisitions to build up its cloud security footprint, such as Sqrrl and Harvest.ai, two AI-based security startups whose founding teams both happen to have worked at the NSA.
Amazon’s AWS division pulled in over $6 billion in revenues last quarter with $1.6 billion in operating income, a healthy margin that underscores the shift that businesses and other organizations are making to cloud-based services.
Security is an essential component of how that business will continue to grow for Amazon and the wider industry: more trust in the infrastructure, and more proofs that cloud architectures can work better than using and scaling the legacy systems that businesses use today, will bolster the business. And it’s also essential, given the rise of breaches and ever more sophisticated cyber crimes. Gartner estimates that cloud-based security services will be a $6.9 billion market this year, rising to nearly $9 billion by 2020.
Automated tools that help human security specialists do their jobs better are an area that others, like Microsoft, are also eyeing up. Last year, it acquired Israeli security firm Hexadite, which offers remediation services to complement and bolster the work done by enterprise security specialists.
“You can’t hack what isn’t there,” Very Good Security co-founder Mahmoud Abdelkader tells me. His startup assumes the liability of storing sensitive data for other companies, substituting dummy credit card or Social Security numbers for the real ones. Then when the data needs to be moved or operated on, VGS injects the original info without clients having to change their code.
It’s essentially a data bank that allows businesses to stop storing confidential info under their unsecured mattress. Or you could think of it as Amazon Web Services for data instead of servers. Given all the high-profile breaches of late, it’s clear that many companies can’t be trusted to house sensitive data. Andreessen Horowitz is betting that they’d rather leave it to an expert.
That’s why the famous venture firm is leading an $8.5 million Series A for VGS, and its partner Alex Rampell is joining the board. The round also includes NYCA, Vertex Ventures, Slow Ventures and PayPal mafioso Max Levchin. The cash builds on VGS’ $1.4 million seed round, and will pay for its first big marketing initiative and more salespeople.
“Hey! Stop doing this yourself!” Abdelkader asserts. “Put it on VGS and we’ll let you operate on your data as if you possess it with none of the liability.” While no data is ever 100 percent unhackable, putting it in VGS’ meticulously secured vaults means clients don’t have to become security geniuses themselves and instead can focus on what’s unique to their business.
“Privacy is a part of the UN Declaration of Human Rights. We should be able to build innovative applications without sacrificing our privacy and security,” says Abdelkader. He got his start in the industry by reverse-engineering games like StarCraft to build cheats and trainer software. But after studying discrete mathematics, cryptology and number theory, he craved a headier challenge.
Abdelkader co-founded Y Combinator-backed payment system Balanced in 2010, which also raised cash from Andreessen. But out-muscled by Stripe, Balanced shut down in 2015. While transitioning customers over to fellow YC alumni Stripe, Balanced received interest from other companies wanting it to store their data so they could be PCI-compliant.
Now Abdelkader and his VP from Balanced, Marshall Jones, have returned with VGS to sell that as a service. It’s targeting startups that handle data like payment card information, Social Security numbers and medical info, though eventually it could invade the larger enterprise market. It can quickly help these clients achieve compliance certifications for PCI, SOC2, EI3PA, HIPAA and other standards.
VGS’ innovation comes in replacing this data with “format preserving aliases” that are privacy safe. “Your app code doesn’t know the difference between this and actually sensitive data,” Abdelkader explains. In 30 minutes of integration, apps can be reworked to route traffic through VGS without ever talking to a salesperson. VGS locks up the real strings and sends the aliases to you instead, then intercepts those aliases and swaps them with the originals when necessary.
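The general pattern being described here (store the real value in a vault, hand back a format-preserving stand-in, and swap the real value back in on the egress path) can be sketched in a few lines. The toy version below is not VGS’s service and skips everything that matters in production, such as encryption at rest, collision handling and access control, but it shows why app code genuinely can’t tell the difference.

```python
import random
import re

class ToyVault:
    """Illustrative in-memory vault: stores the real value and hands back an alias
    that keeps the original format, so downstream code never notices the swap."""

    def __init__(self):
        self._alias_to_real = {}

    def redact(self, value: str) -> str:
        # Preserve the format by replacing each digit with a random digit.
        # (A real service would also preserve checksums, BIN ranges, and so on.)
        alias = re.sub(r"\d", lambda _: str(random.randint(0, 9)), value)
        self._alias_to_real[alias] = value
        return alias

    def reveal(self, alias: str) -> str:
        # Called only on the way out, e.g. just before hitting a payment processor.
        return self._alias_to_real[alias]

vault = ToyVault()
alias = vault.redact("4111-1111-1111-1111")   # what the client's database stores
print(alias)                                   # e.g. "7302-9481-5516-0274"
print(vault.reveal(alias))                     # the original, only at the last step
```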
“We don’t actually see your data that you vault on VGS,” Abdelkader tells me. “It’s basically modeled after prison. The valuables are stored in isolation.” That means a business’ differentiator is their business logic, not the way they store data.
For example, fintech startup LendUp works with VGS to issue virtual credit card numbers that are replaced with fake numbers in LendUp’s databases. That way if it’s hacked, users don’t get their cards stolen. But when those card numbers are sent to a processor to actually make a payment, the real card numbers are subbed in last-minute.
VGS charges per data record and operation, with the first 500 records and 100,000 sensitive API calls free; $20 a month gets clients double that, and then they pay 4 cents per record and 2 cents per operation. VGS provides access to insurance too, working with a variety of underwriters. It starts with $1 million policies that can be much larger for Fortune 500s and other big companies, which might want $20 million per incident.
Obviously, VGS has to be obsessive about its own security. A breach of its vaults could kill its brand. “I don’t sleep. I worry I’ll miss something. Are we a giant honey pot?” Abdelkader wonders. “We’ve invested a significant amount of our money into 24/7 monitoring for intrusions.”
Beyond the threat of hackers, VGS also has to battle with others picking away at part of its stack or trying to compete with the whole, like TokenEx, HP’s Voltage, Thales’ Vormetric, Oracle and more. But it’s do-it-yourself security that’s the status quo and what VGS is really trying to disrupt.
But VGS has a big accruing advantage. Each time it works with a client’s partners, like Experian or TransUnion for a company doing credit checks, it already has a relationship with them the next time another client needs to connect with those partners. Abdelkader hopes that, “Effectively, we become a standard of data security and privacy. All the institutions will just say ‘why don’t you use VGS?’”
That standard only works if it’s constantly evolving to win the cat-and-mouse game versus attackers. While a company is worrying about the particular value it adds to the world, these intelligent human adversaries can find a weak link in their security — costing them a fortune and ruining their relationships. “I’m selling trust,” Abdelkader concludes. That peace of mind is often worth the price.
So much for summer Fridays. Yesterday, BuzzFeed reported that a dozen tech companies, including Facebook, Google, Microsoft and Snapchat, would meet at Twitter headquarters on Friday to discuss election security. For two of them, that wasn’t the only meeting in the books.
In what appears to be a separate event on Friday, Facebook and Microsoft also met with the Department of Homeland Security, the FBI and two bodies of state election officials, the National Association of State Election Directors (NASED) and the National Association of Secretaries of State (NASS), about their election security efforts.
The discussion was the second of its kind connecting DHS, Facebook and state election officials on “actions being taken to combat malicious interference operations.” The meetings offer two very different perspectives on threats to election security. States are largely concerned with securing voter databases and election systems, while private tech companies are waging a very public war against coordinated disinformation campaigns run on their platforms by foreign adversaries of the U.S. Social media platforms and election systems themselves are two important yet usually disconnected fronts in the ongoing war against Russian election interference.
“Effectively combatting coordinated information operations requires many parts of society working together, which is why Facebook believes so strongly in the need for collaboration between law enforcement, government agencies, security experts and other companies to confront these growing threats,” Facebook VP of Public Policy Kevin Martin said of the meeting.
“We are grateful for the opportunity to brief state election officials on a recent call convened by DHS and again today as part of our continued effort to develop collaborative relationships between government and private industry.”
Curiously, while Microsoft and Facebook attended the DHS-hosted meeting, it doesn’t look like Twitter did. To date, Twitter and Facebook have faced the most fallout for foreign interference on their platforms meant to influence American politics, though Google was also called to Congress to testify on the issue last fall. When reached, Twitter declined to comment on its absence, though the company was reportedly playing host to the other major tech election security meeting today.
The meeting with state officials sounds like it was largely informative in nature, with Facebook and Microsoft providing insight on their respective efforts to contain foreign threats to election integrity. On Tuesday, Microsoft revealed that its Digital Crimes Unit secured a court order to take down six domains created by Russia’s GRU designed to phish user credentials. Half of the phishing domains were fake versions of U.S. Senate websites.
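Spotting domains registered to impersonate a real login page is, at its simplest, a string-similarity problem. The sketch below is only illustrative; it is not how Microsoft’s Digital Crimes Unit works, and the candidate domains are invented. It simply compares newly seen domains against a list of protected ones and flags near-misses.

```python
from difflib import SequenceMatcher

PROTECTED = ["senate.gov", "microsoft.com"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalikes(candidate: str, threshold: float = 0.8):
    """Return protected domains that a candidate closely resembles without matching."""
    return [legit for legit in PROTECTED
            if candidate != legit and similarity(candidate, legit) >= threshold]

# Invented candidates in the spirit of spoofed login pages.
for domain in ["senatte.gov", "rnicrosoft.com", "example.org"]:
    hits = flag_lookalikes(domain)
    if hits:
        print(f"{domain} looks suspiciously like {hits}")
```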
“No one organization, department or individual can solve this issue alone, that’s why information sharing is so important,” said Microsoft VP of Customer Security and Trust Tom Burt. “To really be successful in defending democracy, technology companies, government, civil society, the academic community and researchers need to come together and partner in new and meaningful ways.”
Xage (pronounced Zage), a blockchain security startup based in Silicon Valley, announced a $12 million Series A investment today led by March Capital Partners. GE Ventures, City Light Capital and NexStar Partners also participated.
The company emerged from stealth in December with a novel idea to secure the myriad of devices in the industrial internet of things on the blockchain. Here’s how I described it in a December 2017 story:
Xage is building a security fabric for IoT, which takes blockchain and synthesizes it with other capabilities to create a secure environment for devices to operate. If the blockchain is at its core a trust mechanism, then it can give companies confidence that their IoT devices can’t be compromised. Xage thinks that the blockchain is the perfect solution to this problem.
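That description is abstract, so here is a deliberately tiny sketch of the trust-mechanism idea; it is my own toy, not Xage’s fabric. It keeps an append-only, hash-chained registry of device identity fingerprints, where tampering with any earlier entry invalidates everything recorded after it.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class DeviceLedger:
    """Append-only, hash-chained log of device identity fingerprints."""

    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64,
                       "device": None, "fingerprint": None, "ts": 0}]

    def register(self, device_id: str, fingerprint: str) -> dict:
        block = {"index": len(self.chain), "prev": block_hash(self.chain[-1]),
                 "device": device_id, "fingerprint": fingerprint, "ts": time.time()}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Every block must still commit to the exact contents of its predecessor.
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = DeviceLedger()
ledger.register("turbine-7", "fp-ab12cd34")
ledger.register("plc-gateway-3", "fp-ef56ab78")
print(ledger.verify())                          # True
ledger.chain[1]["fingerprint"] = "fp-tampered"  # rewrite history...
print(ledger.verify())                          # ...and the chain no longer validates
```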
It’s an interesting approach, one that attracted Duncan Greatwood to the company. As he told me in December his previous successful exits — Topsy to Apple in 2013 and PostPath to Cisco in 2008 — gave him the freedom to choose a company that really excited him for his next challenge.
When he saw what Xage was doing, he wanted to be a part of it, and given the unorthodox security approach the company has taken, and Greatwood’s pedigree, it couldn’t have been hard to secure today’s funding.
The Industrial Internet of Things is not like its consumer cousin in that it involves getting data from big industrial devices like manufacturing machinery, oil and gas turbines and jet engines. While the entire Internet of Things could surely benefit from a company that concentrates specifically on keeping these devices secure, it’s a particularly acute requirement in industry where these devices are often helping track data from key infrastructure.
GE Ventures is the investment arm of GE, but their involvement is particularly interesting because GE has made a big bet on the Industrial Internet of Things. Abhishek Shukla of GE Ventures certainly saw the connection. “For industries to benefit from the IoT revolution, organizations need to fully connect and protect their operation. Xage is enabling the adoption of these cutting edge technologies across energy, transportation, telecom, and other global industries,” Shukla said in a statement.
The company was founded just last year and is based in Palo Alto, California.