CBPO


Facebook News Feed now downranks sites with stolen content

October 17, 2018

Facebook is demoting trashy news publishers and other websites that illicitly scrape and republish content from other sources with little or no modification. Today it exclusively told TechCrunch that it will show links less prominently in the News Feed if they have a combination of this new signal about content authenticity along with either clickbait headlines or landing pages overflowing with low-quality ads. The move comes after Facebook’s surveys and in-person interviews discovered that users hate scraped content.

If ill-gotten intellectual property gets less News Feed distribution, it will receive less referral traffic, earn less ad revenue and there’ll be less incentive for crooks to steal articles, photos and videos in the first place. That could create an umbrella effect that improves content authenticity across the web.

And if the profile data scraped from 29 million users in Facebook’s recent massive security breach ends up published online, Facebook will already have a policy in place to make links to it effectively disappear from the feed.

Here’s an example of the type of site that might be demoted by Facebook’s latest News Feed change. “Latest Nigerian News” scraped one of my recent TechCrunch articles and surrounded it with tons of ads.

An ad-filled site that scraped my recent TechCrunch article. This site might be hit by a News Feed demotion.

“Starting today, we’re rolling out an update so people see fewer posts that link out to low quality sites that predominantly copy and republish content from other sites without providing unique value. We are adjusting our Publisher Guidelines accordingly,” Facebook wrote in an addendum to its May 2017 post about demoting sites stuffed with crappy ads. Facebook tells me the new Publisher Guidelines will warn news outlets to add original content or value to reposted content, or risk invoking the social network’s wrath.

Personally, I think the importance of transparency around these topics warrants a new blog post from Facebook as well as an update to the original post linking forward to it.

So how does Facebook determine whether content is stolen? Its systems compare the main text content of a page against all other text content to find potential matches, and the degree of matching is used to predict whether a site stole its content. Facebook then runs a combined classifier that merges this prediction with how clickbaity a site’s headlines are, plus the quality and quantity of ads on the site.
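To make the mechanics concrete, here is a minimal sketch of how such a pipeline might look, assuming a shingle-based text match feeding a simple combined score. Every function name and threshold below is a hypothetical illustration of the approach described above, not Facebook’s actual system:

```python
# Hypothetical sketch: flag a page whose text closely matches known articles,
# but only demote when the authenticity problem co-occurs with clickbait or
# low-quality ads, as the article describes. Names and thresholds are
# illustrative assumptions.

def shingles(text: str, n: int = 5) -> set:
    """Break text into overlapping n-word shingles for near-duplicate matching."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def copy_similarity(page_text: str, source_text: str) -> float:
    """Jaccard similarity of shingle sets: 1.0 means a verbatim copy."""
    a, b = shingles(page_text), shingles(source_text)
    return len(a & b) / len(a | b) if a and b else 0.0

def demotion_score(page_text: str, known_articles: list,
                   clickbait_score: float, ad_spam_score: float) -> float:
    """Combine the strongest copy match with clickbait and ad-quality signals."""
    copy_score = max((copy_similarity(page_text, src) for src in known_articles),
                     default=0.0)
    # Scraped text alone is not enough: per the article, demotion requires the
    # authenticity signal plus either clickbait headlines or low-quality ads.
    worst_spam_signal = max(clickbait_score, ad_spam_score)
    if copy_score > 0.8 and worst_spam_signal > 0.5:
        return copy_score * worst_spam_signal
    return 0.0
```

In practice, a system at Facebook’s scale would presumably use locality-sensitive hashing rather than pairwise comparison, and a trained classifier rather than hand-set thresholds, but the gating logic (authenticity plus at least one spam signal) mirrors what the post describes.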


Social – TechCrunch


Kanye’s Password, a WhatsApp Bug, and More Security News This Week

October 13, 2018

A grey hat hacking hero, bad boat news, and more security news this week.
Feed: All Latest


Tech and ad giants sign up to Europe’s first weak bite at ‘fake news’

September 26, 2018

The European Union’s executive body has signed up tech platforms and ad industry players to a voluntary  Code of Practice aimed at trying to do something about the spread of disinformation online.

Something, just not anything too specifically quantifiable.

According to the Commission, Facebook, Google, Twitter, Mozilla, some additional members of the EDIMA trade association, plus unnamed advertising groups are among those that have signed up to the self-regulatory code, which will apply in a month’s time.

Signatories have committed to taking not exactly prescribed actions in the following five areas:

  • Disrupting advertising revenues of certain accounts and websites that spread disinformation;
  • Making political advertising and issue based advertising more transparent;
  • Addressing the issue of fake accounts and online bots;
  • Empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;
  • Empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.

Mariya Gabriel, the European commissioner for digital economy and society, described the Code as a first “important” step in tackling disinformation. And one she said will be reviewed by the end of the year to see how (or, well, whether) it’s functioning, with the door left open for additional steps to be taken if not. So in theory legislation remains a future possibility.

“This is the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis,” she said in a statement. “The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation, and we welcome this.

“These actions should contribute to a fast and measurable reduction of online disinformation. To this end, the Commission will pay particular attention to its effective implementation.”

“I urge online platforms and the advertising industry to immediately start implementing the actions agreed in the Code of Practice to achieve significant progress and measurable results in the coming months,” she added. “I also expect more and more online platforms, advertising companies and advertisers to adhere to the Code of Practice, and I encourage everyone to make their utmost to put their commitments into practice to fight disinformation.”

Earlier this year, a report by an expert group established by the Commission to help shape its response to the so-called ‘fake news’ crisis called for more transparency from online platforms, as well as urgent investment in media and information literacy education to empower journalists and foster a diverse and sustainable news media ecosystem.

Safe to say, no one has suggested there’s any kind of quick fix for the Internet enabling the accelerated spread of nonsense and lies.

Including the Commission’s own expert group, which offered an assorted pick’n’mix of ideas — set over various and some not-at-all-instant-fix timeframes.

Though the group was called out for failing to interrogate evidence around the role of behavioral advertising in the dissemination of fake news — which has arguably been piling up. (Certainly its potential to act as a disinformation nexus has been amply illustrated by the Facebook-Cambridge Analytica data misuse scandal, to name one recent example.)

The Commission is not doing any better on that front, either.

The executive has been working on formulating its response to what its expert group suggested should be referred to as ‘disinformation’ (i.e. rather than the politicized ‘fake news’ moniker) for more than a year now — after the European parliament adopted a Resolution, in June 2017, calling on it to examine the issue and look at existing laws and possible legislative interventions.

Elections for the European parliament are due next spring and MEPs are clearly concerned about the risk of interference. So the unelected Commission is feeling the elected parliament’s push here.

Disinformation — aka “verifiably false or misleading information” created and spread for economic gain and/or to deceive the public, and which “may cause public harm” such as “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”, as the Commission’s new Code of Practice defines it — is clearly a slippery policy target.

And online, multiple players are implicated and involved in its spread.

But so too are multiple, powerful, well resourced adtech players incentivized to push to avoid any political disruption to their lucrative people-targeting business models.

In the Commission’s voluntary Code of Practice, signatories merely commit to recognizing their role in “contributing to solutions to the challenge posed by disinformation”.

“The Signatories recognise and agree with the Commission’s conclusions that ‘the exposure of citizens to large scale Disinformation, including misleading or outright false information, is a major challenge for Europe. Our open democratic societies depend on public debates that allow well-informed citizens to express their will through free and fair political processes’,” runs the preamble.

“[T]he Signatories are mindful of the fundamental right to freedom of expression and to an open Internet, and the delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike.

“In recognition that the dissemination of Disinformation has many facets and is facilitated by and impacts a very broad segment of actors in the ecosystem, all stakeholders have roles to play in countering the spread of Disinformation.”

“Misleading advertising” is explicitly excluded from the scope of the code — which also presumably helped the Commission convince the ad industry to sign up to it.

Though that further risks muddying the waters of the effort, given that social media advertising has been the high-powered vehicle of choice for malicious misinformation muck-spreaders (such as Kremlin-backed agents of societal division).

The Commission is presumably trying to split the hairs of maliciously misleading fake ads (still bad because they’re not actually ads but malicious pretenders) and good old fashioned ‘misleading advertising’, though — which will continue to be dealt with under existing ad codes and standards.

Also excluded from the Code: “Clearly identified partisan news and commentary”. So purveyors of hyper biased political commentary are not intended to get scooped up here, either. 

Though again, plenty of Kremlin-generated disinformation agents have masqueraded as partisan news and commentary pundits, and from all sides of the political spectrum.

Hence, we must again assume, the Commission including the requirement to exclude this type of content where it’s “clearly identified”. Whatever that means.

Among the various ‘commitments’ tech giants and ad firms are agreeing to here are plenty of firmly fudgey sounding statements that call for a degree of effort from the undersigned. But without ever setting out explicitly how such effort will be measured or quantified.

For example:

  • The Signatories recognise that all parties involved in the buying and selling of online advertising and the provision of advertising-related services need to work together to improve transparency across the online advertising ecosystem and thereby to effectively scrutinise, control and limit the placement of advertising on accounts and websites belonging to purveyors of Disinformation.

Or

  • Relevant Signatories commit to use reasonable efforts towards devising approaches to publicly disclose “issue-based advertising”. Such efforts will include the development of a working definition of “issue-based advertising” which does not limit reporting on political discussion and the publishing of political opinion and excludes commercial advertising.

And

  • Relevant Signatories commit to invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest.

Nor does the code exactly nail down the terms it’s using to set goals — raising tricky and even existential questions like who defines what’s “relevant, authentic, and authoritative” where information is concerned?

Which is really the core of the disinformation problem.

And also not an easy question for tech giants — which have sold their vast content distribution farms as neutral ‘platforms’ — to start to approach, let alone tackle. Hence their leaning so heavily on third party fact-checkers to try to outsource their lack of any editorial values. Because without editorial values there’s no compass; and without a compass how can you judge the direction of tonal travel?

And so we end up with very vague suggestions in the code like:

  • Relevant Signatories should invest in technological means to prioritize relevant, authentic, and authoritative information where appropriate in search, feeds, or other automatically ranked distribution channels

Only slightly less vague and woolly is a commitment that signatories will “put in place clear policies regarding identity and the misuse of automated bots” on the signatories’ services, and “enforce these policies within the EU”. (So presumably not globally, despite disinformation being able to wreak havoc everywhere.)

Though here the code only points to some suggestive measures that could be used to do that — and which are set out in a separate annex. This boils down to a list of some very, very broad-brush “best practice principles” (such as “follow the money”; develop “solutions to increase transparency”; and “encourage research into disinformation”… ).

And set alongside that uninspiringly obvious list is another — of some current policy steps being undertaken by the undersigned to combat fake accounts and content — as if they’re already meeting the code’s expectations… so, er…

Unsurprisingly, the Commission’s first bite at ‘fake news’ has attracted some biting criticism for being unmeasurably weak sauce.

A group of media advisors — including the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics — are among the first critics.

Reuters reports them complaining that signatories have not offered measurable objectives to monitor the implementation. “The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” it quotes the group as saying.

Disinformation may be a tough, multi-pronged, multi-dimensional problem but few would try to argue that an overly dilute solution will deliver anything at all — well, unless it’s kicking the can down the road that you’re really after.

The Commission doesn’t even seem to know exactly what the undersigned have agreed to do as a first step, with the commissioner saying she’ll meet signatories “in the coming weeks to discuss the specific procedures and policies that they are adopting to make the Code a reality”. So double er… !

The code also only envisages signatories meeting annually to discuss how things are going. So no pressure for regular collaborative moots vis-a-vis tackling things like botnets spreading malicious disinformation then. Not unless the undersigned really, really want to.

Which seems unlikely, given how their business models tend to benefit from engagement — and disinformation-fuelled outrage has shown itself to be a very potent fuel on that front.

As part of the code, these adtech giants have at least technically agreed to make information available to the Commission on request — and generally to co-operate with its efforts to assess how/whether the code is working.

So, if public pressure on the issue continues to ramp up, the Commission does at least have a route to ask for relevant data from platforms that could, in theory, be used to feed a regulation that’s worth the paper it’s written on.

Until then, there’s nothing much to see here.


Social – TechCrunch


More Breaking News: Amazon Rebrand + Exact Match Update

September 9, 2018

Google announced yet another high-impact update to exact match keyword targeting just last night, and we just have to talk about it.

Read more at PPCHero.com
PPC Hero


Tesla’s Legal Woes, Bugatti’s Insane Supercar, and More Car News

August 27, 2018

Plus, an advanced Porsche 911, and why San Francisco’s new $2 billion transit terminal doesn’t tick all the boxes.
Feed: All Latest


Jack Dorsey admits Twitter hasn’t ‘figured out’ approach to fake news

August 20, 2018

Jack Dorsey is hedging his bets. In an interview with CNN’s Brian Stelter, the beard-rocking CEO said Twitter is reluctant to commit to a timetable for enacting policies aimed at curbing heated political rhetoric on the site.

The executive’s lukewarm comments reflect an embattled social network that has borne the brunt of criticism from both sides of the political divide. The left has taken Twitter to task for relative inaction over incendiary comments from far right pundits like Alex Jones. The site was slow to act compared with the likes of YouTube, Facebook and even YouPorn (yep).

When it ultimately did ban Jones’ Infowars, it was a seven-day “timeout.” That move, predictably, has drawn scrutiny from the other side of the aisle. Yesterday, Trump tweeted a critique of social media that is generally being regarded as a thinly-veiled allusion to his embattled supporter, Jones.

Social Media is totally discriminating against Republican/Conservative voices. Speaking loudly and clearly for the Trump Administration, we won’t let that happen. They are closing down the opinions of many people on the RIGHT, while at the same time doing nothing to others

Trump also recently called for an end to what the right has deemed the “shadow banning” of conservative voices on social media.

“How do we earn peoples’ trust?” the CEO asked rhetorically during the conversation. “How do we guide people back to healthy conversation?”

Dorsey suggested that his company is “more left-leaning,” a notion that has made him extra cautious of blowback from the right. He also held to his position of refusing to make the company accountable for fact-checking, a policy that runs counter to the proclamations of other social media companies like Facebook.

“We have not figured this out,” Dorsey said, “but I do think it would be dangerous for a company like ours… to be arbiters of truth.”

For now, Dorsey and co. appear to be in a holding pattern, an indecisiveness that has drawn fire from all sides. The exec pines for a less polarized dialogue, citing NBA and K-Pop accounts as examples of Twitter subcultures that have been more measured in their approach.

Of course, anyone who’s spent time reading replies to LeBron or The Warriors can tell you that that’s a pretty low bar for discourse.

The fact of the matter is that this is the state of politics in 2018. Things are vicious and rhetoric can be incendiary. All of that is amplified by social media, as political pundits lean into troubling comments, conspiracy theory and outright lies to drive clicks. 

Dorsey says he’s pushing for policies “that encourage people to talk and to have healthy conversation.” Whatever Twitter’s “small staff” might have in the works, it certainly feels a long way off.


Social – TechCrunch


Airport Surveillance, FBI Brain Drain, and More Security News This Week

August 4, 2018

A Chipotle scam, FBI brain drain, and more of the week’s top security news.
Feed: All Latest


Venmo Privacy, Ransomware Attacks, and More Security News This Week

July 22, 2018

Russian meddling, Venmo privacy, and more of the week’s top security news.
Feed: All Latest


Tesla Hits Its Goals, Lyft Buys Into Bikes, and More Car News This Week

July 8, 2018

Plus: GM’s self-driving cars get into a scrape, China considers rolling back incentives for going electric, and more.
Feed: All Latest


Star Wars News: Life After ‘Solo’ May or May Not Include a Lando Movie

May 21, 2018

Will Donald Glover be getting his own ‘Star Wars Story’? It’s not impossible!
Feed: All Latest