
Inside a Ferrari Hypercar, Lyft’s IPO, and More Car News

March 31, 2019

Plus, we ride the Jeep Gladiator and ponder the future of electric vehicles, courtesy of battery-swapping rickshaws.
Feed: All Latest


Boeing’s 737 Crash, Tesla’s Model Y, and More News This Week

March 18, 2019

This week’s transportation news focused on two major stories: the investigation into the fatal crash of Ethiopian Flight 302 and Elon Musk’s reveal of Tesla’s new baby SUV.
Feed: All Latest


Fabula AI is using social spread to spot ‘fake news’

February 7, 2019

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a "new class" of machine learning algorithms to detect "fake news" in the emergent field of "Geometric Deep Learning", where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogeneous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
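To make the idea of learning on network-structured data concrete, here is a minimal sketch of a graph neural network that classifies a whole propagation cascade, written with PyTorch Geometric. The architecture, layer choices and feature names below are illustrative assumptions only, not Fabula’s patented model.

```python
# Illustrative sketch only: a graph classifier over a news propagation cascade,
# in the spirit of geometric deep learning. The architecture and features are
# assumptions for demonstration, not Fabula's actual (patented) method.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class CascadeClassifier(torch.nn.Module):
    def __init__(self, num_node_features: int, hidden: int = 64):
        super().__init__()
        # Each node is a user in the cascade; node features might include
        # follower count, account age, past sharing behaviour, and so on.
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        # Message passing over the retweet/follow graph of the cascade
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        # Pool node embeddings into one vector per cascade (i.e. per story)
        g = global_mean_pool(h, batch)
        # Single score per story: closer to 1 means "looks like fake news"
        return torch.sigmoid(self.out(g)).squeeze(-1)
```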

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on sub-sets of Twitter data. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of this year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is that it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
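For readers unfamiliar with ROC AUC, it summarises how well a classifier’s confidence scores rank positives above negatives across all possible decision thresholds, and is simple to compute once you have ground-truth labels and scores. A toy example with scikit-learn, using invented numbers:

```python
# Toy illustration of the ROC AUC metric mentioned above; the labels and
# confidence scores are invented purely for demonstration.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                           # 1 = fake, 0 = real
y_score = [0.91, 0.12, 0.78, 0.66, 0.34, 0.05, 0.88, 0.41]  # model confidence

print(f"ROC AUC: {roc_auc_score(y_true, y_score):.3f}")     # 1.0 = perfect ranking
```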

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third party fact checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of "many months", according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks, as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps ingesting new stories and social media content.
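The kind of time-based check Bronstein describes amounts to splitting the data chronologically rather than at random: train on older stories, hold out the newest ones, and see whether performance decays. A rough sketch of such a split, with hypothetical file and column names, might look like this:

```python
# A hedged sketch of a chronological train/test split (not Fabula's code).
# "labelled_stories.csv" and the "first_seen" column are hypothetical.
import pandas as pd

stories = pd.read_csv("labelled_stories.csv", parse_dates=["first_seen"])
stories = stories.sort_values("first_seen").reset_index(drop=True)

split = int(len(stories) * 0.8)          # oldest 80% of stories for training
train, test = stories.iloc[:split], stories.iloc[split:]

print(f"train: {len(train)} stories, up to {train['first_seen'].max().date()}")
print(f"test:  {len(test)} later stories, from {test['first_seen'].min().date()}")
```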

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much about the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests it could be used in conjunction with other approaches, such as content analysis, and thus function as another string to a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used, he says, it could do away with the need for independent third-party fact-checking organizations altogether, because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic, propagation- and user-focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human reviewers who lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesize that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total of European Research Council grants, plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any potential discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests they do exist — albeit, just not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to be able to deal with multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access maybe with some commercial partners to test the API but eventually we would like to make it usable by multiple people from different businesses,” he says. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”


Social – TechCrunch


Search industry news and trends: Best of 2018

January 1, 2019

It’s that time of the year again: reflecting on the year that’s passed as we prepare for 2019, lurking just around the corner. In this article, we have a roundup of some of our fan favorite pieces from 2018 on news and trends from the search industry.

From alternative search engines to future trends, best online courses to algorithm updates, these were some of our highlights from the past year.

We also have a roundup of our top articles on SEO tips and tricks here.

1. No need for Google: 12 alternative search engines in 2018

While many of us use “googling” synonymously with “searching,” there are indeed a number of viable alternatives out there. In this article, we try to give some love to 12 alternative search engines.

Most of us can name the next few: Bing, Yandex, Baidu, DuckDuckGo.

But some on the list may surprise you — how about Ecosia, a CO2-neutral search engine? With every search made, the social business uses the revenue generated to plant trees. On average, 45 searches plant one more tree for our little planet.

2019 might be a year for a little more time spent with some G alternatives.

2. Which is the best search engine for finding images?

Human beings process visuals faster than they do text. So it makes sense that in the last decade, the number of images on the internet has ballooned.

In this post, we compare the best search engines for conducting three categories of image search on the web.

First, general / traditional image search, looking at Google, Bing, and Yahoo.

Then, reverse image search, looking at TinEye, Google, and Pinterest.

Third, free-to-use image search, looking at EveryPixel, Librestock, and the Creative Commons.

3. The 2018 guide to free SEO training courses online

As all good SEOs know, learning the trade is a never-ending process. The SEO world seems to be constantly evolving, and nearly everyone in the field has learned their stuff largely through online material.

For anyone who’s new to the scene, this can be an encouraging thought. We all started mostly just poking around on the interwebs to see what to do next. And happily, a lot of the best SEO material is freely available for all.

In this article, we look at the best online, free SEO training courses. From Google to Moz to QuickSprout and more, these are fundamentals that anyone can start with.

We also highlight a number of individuals and businesses to follow in the industry.

4. Video and search: YouTube, Google, the alternatives and the future

One third of all time spent online is accounted for by watching video. And, it’s predicted that 80% of all internet traffic will come from video in 2019.

This year was further proof that videos engage growing numbers of users and consequently have an impact on the SERPs. In fact, video has been seen to boost traffic from organic listings by as much as 157%.

In this article, we explore how the ways in which we search for video are changing. From YouTube to Google Search, Facebook to Vimeo, video — and how we interact with video content online — has seen some interesting changes.

5. Are keywords still relevant to SEO in 2018?

Sneak peek: this one starts out with, "What a useless article! Anyone worth their salt in the SEO industry knows that a blinkered focus on keywords in 2018 is a recipe for disaster."

We go on to explore why focusing on just keywords is outdated, how various algorithm updates have changed the game, and what we should do now instead.

PS: the snarky tone sticks around throughout the read, along with a quality overview.

6. Google’s core algorithm update: Who benefited, who lost out, and what can we learn?

This was an interesting piece following an algorithm update from back in March. There were suspicions, Google SearchLiaison tweeted a confirmation, and everyone had to reassess.

Via a simple query, “What’s the best toothpaste?” and the results Google outputted over the course of half a dozen weeks, we can trace certain changes.

Which pages benefited, what can those insights tell us about the update, and how do we handle it when our content’s visibility nosedives?

7. A cheat sheet to Google algorithm updates from 2011 to 2018

Who couldn’t use one of these hanging around?

Google makes changes to its ranking algorithm almost every day. Sometimes (most times) we don’t know about them, sometimes they turn the SERPs upside down.

This cheat sheet gives the most important algorithm updates of recent years, along with some handy tips for how to optimize for each of the updates.

Well, that’s it for SEW in 2018. See you next year!


Search Engine Watch


Pew: Social media for the first time tops newspapers as a news source for US adults

December 10, 2018

It’s not true that everyone gets their news from Facebook and Twitter. But it is now true that more U.S. adults get their news from social media than from print newspapers. According to a new report from Pew Research Center out today, social media has for the first time surpassed newspapers as a preferred source of news for American adults. However, social media is still far behind other traditional news sources, like TV and radio, for example.

Last year, the portion of those who got their news from social media was around equal to those who got their news from print newspapers, Pew says. But in its more recent survey conducted from July 30 through August 12, 2018, that had changed.

Now, one-in-five U.S. adults (20 percent) are getting news from social media, compared with just 16 percent of those who get news from newspapers, the report found. (Pew had asked respondents if they got their news “often” from the various platforms.)

The change comes at a time when newspaper circulation is on the decline, and its popularity as a news medium is being phased out — particularly with younger generations. In fact, the report noted that print only remains popular today with the 65 and up crowd, where 39 percent get their news from newspapers. By comparison, no more than 18 percent of any other age group does.

While the decline of print has now given social media a slight edge, it’s nowhere near dominating other formats.

Instead, TV is still the most popular destination for getting the news, even though that’s been dropping over the past couple of years. TV is then followed by news websites, radio and then social media and newspapers.

But “TV news” doesn’t necessarily mean cable news networks, Pew clarifies.

In reality, local news is the most popular, with 37 percent getting their news there often. Meanwhile, 30 percent get cable TV news often and 25 percent watch the national evening news shows often.

However, if you look at the combination of news websites and social media together, a trend toward increasing news consumption from the web is apparent. Together, 43 percent of U.S. adults get their news from the web in some way, compared to 49 percent from TV.

There’s a growing age gap between TV and the web, too.

A huge majority (81 percent) of those 65 and older get news from TV, and so does 65 percent of those ages 50 to 64. Meanwhile, only 16 percent of the youngest consumers — those ages 18 to 29 — get their news from TV. This is the group pushing forward the cord cutting trend, too — or more specifically, many of them are the “cord-nevers,” as they’re never signing up for pay TV subscriptions in the first place. So it’s not surprising they’re not watching TV news.

Plus, a meager 2 percent get their news from newspapers in this group.

This young demographic greatly prefers digital consumption, with 27 percent getting news from news websites and 36 percent from social media. That is to say, they’re four times as likely as those 65 and up to get news from social media.

Meanwhile, online news websites are the most popular with the 30 to 49-year-old crowd, with 42 percent saying they get their news often from this source.

Despite their preference for digital, younger Americans’ news consumption is better spread out across mediums, Pew points out.

“Younger Americans are also unique in that they don’t rely on one platform in the way that the majority of their elders rely on TV,” Pew researcher Elisa Shearer writes. “No more than half of those ages 18 to 29 and 30 to 49 get news often from any one news platform,” she says.


Social – TechCrunch


Facebook policy VP, Richard Allan, to face the international ‘fake news’ grilling that Zuckerberg won’t

November 23, 2018

An unprecedented international grand committee comprised of 22 representatives from seven parliaments will meet in London next week to put questions to Facebook about the online fake news crisis and the social network’s own string of data misuse scandals.

But Facebook founder Mark Zuckerberg won’t be providing any answers. The company has repeatedly refused requests for him to answer parliamentarians’ questions.

Instead it’s sending a veteran EMEA policy guy, Richard Allan, now its London-based VP of policy solutions, to face a roomful of irate MPs.

Allan will give evidence next week to elected members from the parliaments of Argentina, Brazil, Canada, Ireland, Latvia, Singapore, along with members of the UK’s Digital, Culture, Media and Sport (DCMS) parliamentary committee.

At the last call the international initiative had a full eight parliaments behind it but it’s down to seven — with Australia being unable to attend on account of the travel involved in getting to London.

A spokeswoman for the DCMS committee confirmed Facebook declined its last request for Zuckerberg to give evidence, telling TechCrunch: “The Committee offered the opportunity for him to give evidence over video link, which was also refused. Facebook has offered Richard Allan, vice president of policy solutions, which the Committee has accepted.”

“The Committee still believes that Mark Zuckerberg is the appropriate person to answer important questions about data privacy, safety, security and sharing,” she added. “The recent New York Times investigation raises further questions about how recent data breaches were allegedly dealt with within Facebook, and when the senior leadership team became aware of the breaches and the spread of Russian disinformation.”

The DCMS committee has spearheaded the international effort to hold Facebook to account for its role in a string of major data scandals, joining forces with similarly concerned committees across the world, as part of an already wide-ranging enquiry into the democratic impacts of online disinformation that’s been keeping it busy for the best part of this year.

And especially busy since the Cambridge Analytica story blew up into a major global scandal this April, although Facebook’s 2018 run of bad news hasn’t stopped there…

The evidence session with Allan is scheduled to take place at 11.30am (GMT) on November 27 in Westminster. (It will also be streamed live on the UK’s parliament.tv website.)

Afterwards a press conference has been scheduled — during which DCMS says a representative from each of the seven parliaments will sign a set of ‘International Principles for the Law Governing the Internet’.

It bills this as “a declaration on future action from the parliaments involved” — suggesting the intent is to generate international momentum and consensus for regulating social media.

The DCMS’ preliminary report on the fake news crisis, which it put out this summer, called for urgent action from government on a number of fronts — including floating the idea of a levy on social media to defend democracy.

However UK ministers failed to leap into action, merely putting out a tepid ‘wait and see’ response. Marshalling international action appears to be DCMS’ alternative action plan.

At next week’s press conference, grand committee members will take questions following Allan’s evidence — so expect swift condemnation of any fresh equivocation, misdirection or question-dodging from Facebook (which has already been accused by DCMS members of a pattern of evasive behavior).

Last week’s NYT report also characterized the company’s strategy since 2016, vis-a-vis the fake news crisis, as ‘delay, deny, deflect’.

The grand committee will hear from other witnesses too, including the UK’s information commissioner Elizabeth Denham who was before the DCMS committee recently to report on a wide-ranging ecosystem investigation it instigated in the wake of the Cambridge Analytica scandal.

She told it then that Facebook needs to take "much greater responsibility" for how its platform is being used, warning that unless the company overhauls its privacy-hostile business model it risks burning user trust for good.

Also giving evidence next week: Deputy information commissioner Steve Wood; the former Prime Minister of St Kitts and Nevis, Rt Hon Dr Denzil L Douglas (on account of Cambridge Analytica/SCL Elections having done work in the region); and the co-founder of PersonalData.IO, Paul-Olivier Dehaye.

Dehaye has also given evidence to the committee before — detailing his experience of making Subject Access Requests to Facebook — and trying and failing to obtain all the data it holds on him.


Social – TechCrunch


Stoop aims to improve your news diet with an easy way to find and read newsletters

November 17, 2018

Stoop is looking to provide readers with what CEO Tim Raybould described as “a healthier information diet.”

To do that, it’s launched an iOS and Android app where you can browse through different newsletters based on category, and when you find one you like, it will direct you to the standard subscription page. If you provide your Stoop email address, you’ll then be able to read all your favorite newsletters in the app.

“The easiest way to describe it is: It’s like a podcast app but for newsletters,” Raybould said. “It’s a big directory of newsletters, and then there’s the side where you can consume them.”

Why newsletters? Well, he argued that they’re one of the key ways for publishers to develop a direct relationship with their audience. Podcasts are another, but he said newsletters are “an order of magnitude more important” because you can convey more information with the written word and there are lower production costs.

That direct relationship is obviously an important one for publishers, particularly as Facebook’s shifting priorities have made it clear that they need to “establish the right relationship [with] readers, as opposed to renting someone else’s audience.” But Raybould said it’s better for readers too, because you’ll spend your time on journalism that’s designed to provide value, not just attract clicks: “You will find you use the newsfeed less and consume more of your content directly from the source.”

“Most content [currently] is distributed through a third party, and that software is choosing what to surface next — not based on the quality of the content, but based on what’s going to keep people scrolling,” he added. “Trusting an algorithm with what you’re going to read next is like trusting a nutritionist who’s incentivized based on how many chips you eat.”

Stoop Discover

So Raybould is a fan of newsletters, but he said the current system is pretty cumbersome. There’s no one place where you can find new newsletters to read, and you may also hesitate to subscribe to another one because it “crowds out your personal inbox.” So Stoop is designed to reduce the friction, making it easy to subscribe to and read as many newsletters as your heart desires.

Raybould said the team has already curated a directory of around 650 newsletters (including TechCrunch’s own Daily Crunch) and the list continues to grow. Additional features include a “shuffle” option to discover new newsletters, plus the ability to share a newsletter with other Stoop users, or to forward it to your personal address.

The Stoop app is free, with Raybould hoping to eventually add a premium plan for features like full newsletter archives. He’s also hoping to collaborate with publishers — initially, most publishers will probably treat Stoop readers as just another set of subscribers, but Raybould said the company could provide access to additional analytics and also make signing up easier with the app’s instant subscribe option.

And the company’s ambitions go beyond newsletters. Raybould said Stoop is the first consumer product from a team with a larger mission to help publishers — they’re also working on OpenBundle, a bundled subscription initiative with a planned launch in 2019 or 2020.

“The overarching thing that is the same is the OpenBundle thesis and the Stoop thesis,” he said. “Getting publishers back in the role of delivering content directly to the audience is the antidote to the newsfeed.”

Mobile – TechCrunch


Facebook launches ‘Hunt for False News’ debunk blog as fakery drops 50%

October 20, 2018

Facebook hopes detailing concrete examples of fake news it’s caught — or missed — could improve news literacy, or at least prove it’s attacking the misinformation problem. Today Facebook launched “The Hunt for False News,” in which it examines viral B.S., relays the decisions of its third-party fact-checkers and explains how the story was tracked down. The first edition reveals cases where false captions were put on old videos, people were wrongfully identified as perpetrators of crimes or real facts were massively exaggerated.

The blog’s launch comes after three recent studies showed the volume of misinformation on Facebook has dropped by half since the 2016 election, while Twitter’s volume hasn’t declined as drastically. Unfortunately, the remaining 50 percent still threatens elections, civil discourse, dissident safety and political unity across the globe.

In one of The Hunt’s first examples, it debunks the claim that a man who posed for a photo with one of Brazil’s senators had stabbed the presidential candidate. Facebook explains that its machine learning models identified the photo, it was proven false by Brazilian fact-checker Aos Fatos, and Facebook now automatically detects and demotes uploads of the image. In a case where it missed the mark, a false story touting that NASA would pay you $100,000 to stay in bed for 60 days for a study "racked up millions of views on Facebook" before fact-checkers found NASA had paid out $10,000 to $17,000 in limited instances for such studies in the past.

While the educational “Hunt” series is useful, it merely cherry-picks random false news stories from over a wide time period. What’s more urgent, and would be more useful, would be for Facebook to apply this method to currently circulating misinformation about the most important news stories. The New York Times’ Kevin Roose recently began using Facebook’s CrowdTangle tool to highlight the top 10 recent stories by engagement about topics like the Brett Kavanaugh hearings.

If Facebook wanted to be more transparent about its successes and failures around fake news, it’d publish lists of the false stories with the highest circulation each month and then apply the Hunt’s format explaining how they were debunked. This could help dispel myths in society’s understanding that may be propagated by the mere abundance of fake news headlines, even if users don’t click through to read them.

The red line represents the decline of Facebook engagement with “unreliable or dubious” sites

But at least all of Facebook’s efforts around information security — including doubling its security staff from 10,000 to 20,000 workers, fact checks and using News Feed algorithm changes to demote suspicious content — are paying off:

  • A Stanford and NYU study found that Facebook likes, comments, shares and reactions to links to 570 fake news sites dropped by more than half since the 2016 election, while engagements through Twitter continued to rise, “with the ratio of Facebook engagements to Twitter shares falling by approximately 60 percent.”
  • A University of Michigan study coined the metric "Iffy Quotient" to assess how much content from certain fake news sites was distributed on Facebook and Twitter. When engagement was factored in, it found Facebook’s levels had dropped to nearly its 2016 volume; that’s now 50 percent less than Twitter’s.
  • French newspaper Le Monde looked at engagement with 630 French websites across Facebook, Twitter, Pinterest and Reddit. Facebook engagement with sites dubbed “unreliable or dubious” has dropped by half since 2015.

Of course, given Twitter’s seeming paralysis on addressing misinformation and trolling, they’re not a great benchmark for Facebook to judge by. While it’s useful that Facebook is outlining ways to spot fake news, the public will have to internalize these strategies for society to make progress. That may be difficult when the truth has become incompatible with many peoples’ and politicians’ staunchly held beliefs.

In the past, Facebook has surfaced fake news-spotting tips atop the News Feed and bought full-page newspaper ads trying to disseminate them. The Hunt for False News would surely benefit from being embedded where the social network’s users look every day, instead of being buried in its corporate blog.


Social – TechCrunch


Facebook News Feed now downranks sites with stolen content

October 17, 2018

Facebook is demoting trashy news publishers and other websites that illicitly scrape and republish content from other sources with little or no modification. Today it exclusively told TechCrunch that it will show links less prominently in the News Feed if they have a combination of this new signal about content authenticity along with either clickbait headlines or landing pages overflowing with low-quality ads. The move comes after Facebook’s surveys and in-person interviews discovered that users hate scraped content.

If ill-gotten intellectual property gets less News Feed distribution, it will receive less referral traffic, earn less ad revenue and there’ll be less incentive for crooks to steal articles, photos and videos in the first place. That could create an umbrella effect that improves content authenticity across the web.

And just in case the scraped profile data stolen from 29 million users in Facebook’s recent massive security breach ends up published online, Facebook already has a policy in place to make links to it effectively disappear from the feed.

Here’s an example of the type of site that might be demoted by Facebook’s latest News Feed change. "Latest Nigerian News" scraped one of my recent TechCrunch articles and surrounded it with tons of ads.

An ad-filled site that scraped my recent TechCrunch article. This site might be hit by a News Feed demotion

“Starting today, we’re rolling out an update so people see fewer posts that link out to low-quality sites that predominantly copy and republish content from other sites without providing unique value. We are adjusting our Publisher Guidelines accordingly,” Facebook wrote in an addendum to its May 2017 post about demoting sites stuffed with crappy ads. Facebook tells me the new publisher guidelines will warn news outlets to add original content or value to reposted content or invoke the social network’s wrath.

Personally, I think the importance of transparency around these topics warrants a new blog post from Facebook as well as an update to the original post linking forward to it.

So how does Facebook determine if content is stolen? Its systems compare the main text content of a page with all other text content to find potential matches. The degree of matching is used to predict whether a site stole its content. It then uses a combined classifier merging this prediction with how clickbaity a site’s headlines are, plus the quality and quantity of ads on the site.
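Facebook hasn’t published its implementation, but the general idea of scoring how closely a page’s main text matches already-indexed articles can be sketched with off-the-shelf tools. The snippet below uses TF-IDF vectors and cosine similarity, with invented text and an arbitrary threshold, purely as an illustration of the matching step:

```python
# Not Facebook's system: a simple near-duplicate check using TF-IDF + cosine
# similarity. The texts and the 0.9 threshold are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_articles = [
    "Original reporting on a product launch, written and published by the outlet.",
    "An investigative feature on data privacy, published by another newsroom.",
]
candidate_page = "Original reporting on a product launch, written and published by the outlet."

vectorizer = TfidfVectorizer().fit(known_articles + [candidate_page])
similarities = cosine_similarity(
    vectorizer.transform([candidate_page]),
    vectorizer.transform(known_articles),
)[0]

if similarities.max() > 0.9:  # near-identical text suggests scraped content
    print(f"Likely scraped (best match similarity {similarities.max():.2f})")
```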


Social – TechCrunch