Search industry news and trends: Best of 2018

January 1, 2019

It’s that time of the year again: reflecting on the year that’s passed as we prepare for 2019, lurking just around the corner. In this article, we round up some of our fan-favorite pieces from 2018 on news and trends from the search industry.

From alternative search engines to future trends, best online courses to algorithm updates, these were some of our highlights from the past year.

We also have a roundup of our top articles on SEO tips and tricks here.

1. No need for Google: 12 alternative search engines in 2018

While many of us use “googling” synonymously with “searching,” there are indeed a number of viable alternatives out there. In this article, we try to give some love to 12 alternative search engines.

Most of us can name the next few: Bing, Yandex, Baidu, DuckDuckGo.

But some on the list may surprise you — how about Ecosia, a CO2-neutral search engine? With every search made, the social business uses the revenue generated to plant trees. On average, 45 searches fund one more tree for our little planet.

2019 might be the year to spend a little more time with some Google alternatives.

2. Which is the best search engine for finding images?

Human beings process visuals faster than they do text. So it makes sense that in the last decade, the number of images on the internet has ballooned.

In this post, we compare the best search engines for conducting three categories of image search on the web.

First, general / traditional image search, looking at Google, Bing, and Yahoo.

Then, reverse image search, looking at TinEye, Google, and Pinterest.

Third, free-to-use image search, looking at EveryPixel, Librestock, and Creative Commons.

3. The 2018 guide to free SEO training courses online

As all good SEOs know, learning this trade is a never-ending process. The SEO world is constantly evolving, and nearly everyone in the field has learned their stuff largely through online material.

For anyone who’s new to the scene, this can be an encouraging thought. We all started mostly just poking around on the interwebs to see what to do next. And happily, a lot of the best SEO material is freely available for all.

In this article, we look at the best online, free SEO training courses. From Google to Moz to QuickSprout and more, these are fundamentals that anyone can start with.

We also highlight a number of individuals and businesses to follow in the industry.

4. Video and search: YouTube, Google, the alternatives and the future

Watching video accounts for one third of all time spent online. And it’s predicted that 80% of all internet traffic will come from video in 2019.

This year was further proof that videos engage growing numbers of users and consequently have an impact on the SERPs. In fact, video has been seen to boost traffic from organic listings by as much as 157%.

In this article, we explore how the ways in which we search for video are changing. From YouTube to Google Search, Facebook to Vimeo, video — and how we interact with video content online — has seen some interesting changes.

5. Are keywords still relevant to SEO in 2018?

Sneak peek: this one starts out with, “What a useless article! Anyone worth their salt in the SEO industry knows that a blinkered focus on keywords in 2018 is a recipe for disaster.”

We go on to explore why focusing on just keywords is outdated, how various algorithm updates have changed the game, and what we should do now instead.

PS: the snarky tone sticks around throughout the read, alongside a quality overview.

6. Google’s core algorithm update: Who benefited, who lost out, and what can we learn?

This was an interesting piece following an algorithm update from back in March. There were suspicions, Google SearchLiaison tweeted a confirmation, and everyone had to reassess.

Via a simple query, “What’s the best toothpaste?”, and the results Google returned over the course of half a dozen weeks, we can trace certain changes.

Which pages benefited, what can those insights tell us about the update, and how do we respond when our content’s visibility nosedives?

7. A cheat sheet to Google algorithm updates from 2011 to 2018

Who couldn’t use one of these hanging around?

Google makes changes to its ranking algorithm almost every day. Sometimes (most times) we don’t know about them; sometimes they turn the SERPs upside down.

This cheat sheet covers the most important algorithm updates of recent years, along with handy tips on how to optimize for each of them.

Well, that’s it for SEW in 2018. See you next year!

The post Search industry news and trends: Best of 2018 appeared first on Search Engine Watch.



Pew: Social media for the first time tops newspapers as a news source for US adults

December 10, 2018

It’s not true that everyone gets their news from Facebook and Twitter. But it is now true that more U.S. adults get their news from social media than from print newspapers. According to a new report from Pew Research Center out today, social media has for the first time surpassed newspapers as a preferred source of news for American adults. However, social media is still far behind other traditional news sources, such as TV and radio.

Last year, the portion of those who got their news from social media was around equal to those who got their news from print newspapers, Pew says. But in its more recent survey conducted from July 30 through August 12, 2018, that had changed.

Now, one in five U.S. adults (20 percent) gets news from social media, compared with just 16 percent who get news from newspapers, the report found. (Pew had asked respondents whether they got their news “often” from the various platforms.)

The change comes at a time when newspaper circulation is on the decline and its popularity as a news medium is fading, particularly with younger generations. In fact, the report noted that print remains popular today only with the 65-and-up crowd, 39 percent of whom get their news from newspapers. By comparison, no more than 18 percent of any other age group does.

While the decline of print has now given social media a slight edge, it’s nowhere near dominating other formats.

Instead, TV is still the most popular destination for getting the news, even though its share has been dropping over the past couple of years. TV is followed by news websites, radio, and then social media and newspapers.

But “TV news” doesn’t necessarily mean cable news networks, Pew clarifies.

In reality, local news is the most popular, with 37 percent getting their news there often. Meanwhile, 30 percent get cable TV news often and 25 percent watch the national evening news shows often.

However, if you look at the combination of news websites and social media together, a trend toward increasing news consumption from the web is apparent. Together, 43 percent of U.S. adults get their news from the web in some way, compared to 49 percent from TV.

There’s a growing age gap between TV and the web, too.

A huge majority (81 percent) of those 65 and older get news from TV, as do 65 percent of those ages 50 to 64. Meanwhile, only 16 percent of the youngest consumers — those ages 18 to 29 — get their news from TV. This is the group pushing forward the cord-cutting trend, too — or more specifically, many of them are the “cord-nevers,” never signing up for pay TV subscriptions in the first place. So it’s not surprising they’re not watching TV news.

Plus, a meager 2 percent get their news from newspapers in this group.

This young demographic greatly prefers digital consumption, with 27 percent getting news from news websites and 36 percent from social media. That is to say, they’re four times as likely as those 65 and up to get news from social media.

Meanwhile, online news websites are the most popular with the 30 to 49-year-old crowd, with 42 percent saying they get their news often from this source.

Despite their preference for digital, younger Americans’ news consumption is more evenly spread across mediums, Pew points out.

“Younger Americans are also unique in that they don’t rely on one platform in the way that the majority of their elders rely on TV,” Pew researcher Elisa Shearer writes. “No more than half of those ages 18 to 29 and 30 to 49 get news often from any one news platform,” she says.


Social – TechCrunch


Facebook policy VP, Richard Allan, to face the international ‘fake news’ grilling that Zuckerberg won’t

November 23, 2018

An unprecedented international grand committee made up of 22 representatives from seven parliaments will meet in London next week to put questions to Facebook about the online fake news crisis and the social network’s own string of data misuse scandals.

But Facebook founder Mark Zuckerberg won’t be providing any answers. The company has repeatedly refused requests for him to answer parliamentarians’ questions.

Instead it’s sending a veteran EMEA policy guy, Richard Allan, now its London-based VP of policy solutions, to face a roomful of irate MPs.

Allan will give evidence next week to elected members from the parliaments of Argentina, Brazil, Canada, Ireland, Latvia, Singapore, along with members of the UK’s Digital, Culture, Media and Sport (DCMS) parliamentary committee.

At the last count, the international initiative had a full eight parliaments behind it, but it’s down to seven — with Australia unable to attend on account of the travel involved in getting to London.

A spokeswoman for the DCMS committee confirmed Facebook declined its last request for Zuckerberg to give evidence, telling TechCrunch: “The Committee offered the opportunity for him to give evidence over video link, which was also refused. Facebook has offered Richard Allan, vice president of policy solutions, which the Committee has accepted.”

“The Committee still believes that Mark Zuckerberg is the appropriate person to answer important questions about data privacy, safety, security and sharing,” she added. “The recent New York Times investigation raises further questions about how recent data breaches were allegedly dealt with within Facebook, and when the senior leadership team became aware of the breaches and the spread of Russian disinformation.”

The DCMS committee has spearheaded the international effort to hold Facebook to account for its role in a string of major data scandals, joining forces with similarly concerned committees across the world, as part of an already wide-ranging enquiry into the democratic impacts of online disinformation that’s been keeping it busy for the best part of this year.

And especially busy since the Cambridge Analytica story blew up into a major global scandal this April, although Facebook’s 2018 run of bad news hasn’t stopped there…

The evidence session with Allan is scheduled to take place at 11.30am (GMT) on November 27 in Westminster. (It will also be streamed live on the UK’s parliament.tv website.)

Afterwards a press conference has been scheduled — during which DCMS says a representative from each of the seven parliaments will sign a set of ‘International Principles for the Law Governing the Internet’.

It bills this as “a declaration on future action from the parliaments involved” — suggesting the intent is to generate international momentum and consensus for regulating social media.

The DCMS’ preliminary report on the fake news crisis, which it put out this summer, called for urgent action from government on a number of fronts — including floating the idea of a levy on social media to defend democracy.

However, UK ministers failed to leap into action, merely putting out a tepid ‘wait and see’ response. Marshalling international action appears to be the DCMS’ alternative plan.

At next week’s press conference, grand committee members will take questions following Allan’s evidence — so expect swift condemnation of any fresh equivocation, misdirection or question-dodging from Facebook (which has already been accused by DCMS members of a pattern of evasive behavior).

Last week’s NYT report also characterized the company’s strategy since 2016, vis-a-vis the fake news crisis, as ‘delay, deny, deflect’.

The grand committee will hear from other witnesses too, including the UK’s information commissioner Elizabeth Denham who was before the DCMS committee recently to report on a wide-ranging ecosystem investigation it instigated in the wake of the Cambridge Analytica scandal.

She told it then that Facebook needs to take “much greater responsibility” for how its platform is being used, warning that unless the company overhauls its privacy-hostile business model it risks burning user trust for good.

Also giving evidence next week: Deputy information commissioner Steve Wood; the former Prime Minister of St Kitts and Nevis, Rt Hon Dr Denzil L Douglas (on account of Cambridge Analytica/SCL Elections having done work in the region); and the co-founder of PersonalData.IO, Paul-Olivier Dehaye.

Dehaye has also given evidence to the committee before — detailing his experience of making Subject Access Requests to Facebook — and trying and failing to obtain all the data it holds on him.




Stoop aims to improve your news diet with an easy way to find and read newsletters

November 17, 2018

Stoop is looking to provide readers with what CEO Tim Raybould described as “a healthier information diet.”

To do that, it’s launched an iOS and Android app where you can browse through different newsletters based on category, and when you find one you like, it will direct you to the standard subscription page. If you provide your Stoop email address, you’ll then be able to read all your favorite newsletters in the app.

“The easiest way to describe it is: It’s like a podcast app but for newsletters,” Raybould said. “It’s a big directory of newsletters, and then there’s the side where you can consume them.”

Why newsletters? Well, he argued that they’re one of the key ways for publishers to develop a direct relationship with their audience. Podcasts are another, but he said newsletters are “an order of magnitude more important” because you can convey more information with the written word and there are lower production costs.

That direct relationship is obviously an important one for publishers, particularly as Facebook’s shifting priorities have made it clear that they need to “establish the right relationship [with] readers, as opposed to renting someone else’s audience.” But Raybould said it’s better for readers too, because you’ll spend your time on journalism that’s designed to provide value, not just attract clicks: “You will find you use the newsfeed less and consume more of your content directly from the source.”

“Most content [currently] is distributed through a third party, and that software is choosing what to surface next — not based on the quality of the content, but based on what’s going to keep people scrolling,” he added. “Trusting an algorithm with what you’re going to read next is like trusting a nutritionist who’s incentivized based on how many chips you eat.”

Stoop Discover

So Raybould is a fan of newsletters, but he said the current system is pretty cumbersome. There’s no one place where you can find new newsletters to read, and you may also hesitate to subscribe to another one because it “crowds out your personal inbox.” So Stoop is designed to reduce the friction, making it easy to subscribe to and read as many newsletters as your heart desires.

Raybould said the team has already curated a directory of around 650 newsletters (including TechCrunch’s own Daily Crunch) and the list continues to grow. Additional features include a “shuffle” option to discover new newsletters, plus the ability to share a newsletter with other Stoop users, or to forward it to your personal address.

The Stoop app is free, with Raybould hoping to eventually add a premium plan for features like full newsletter archives. He’s also hoping to collaborate with publishers — initially, most publishers will probably treat Stoop readers as just another set of subscribers, but Raybould said the company could provide access to additional analytics and also make signing up easier with the app’s instant subscribe option.

And the company’s ambitions go beyond newsletters. Raybould said Stoop is the first consumer product from a team with a larger mission to help publishers — they’re also working on OpenBundle, a bundled subscription initiative with a planned launch in 2019 or 2020.

“The overarching thing that is the same is the OpenBundle thesis and the Stoop thesis,” he said. “Getting publishers back in the role of delivering content directly to the audience is the antidote to the newsfeed.”

Mobile – TechCrunch


Facebook launches ‘Hunt for False News’ debunk blog as fakery drops 50%

October 20, 2018

Facebook hopes detailing concrete examples of fake news it’s caught — or missed — could improve news literacy, or at least prove it’s attacking the misinformation problem. Today Facebook launched “The Hunt for False News,” in which it examines viral B.S., relays the decisions of its third-party fact-checkers and explains how the story was tracked down. The first edition reveals cases where false captions were put on old videos, people were wrongfully identified as perpetrators of crimes or real facts were massively exaggerated.

The blog’s launch comes after three recent studies showed the volume of misinformation on Facebook has dropped by half since the 2016 election, while Twitter’s volume hasn’t declined as drastically. Unfortunately, the remaining 50 percent still threatens elections, civil discourse, dissident safety and political unity across the globe.

In one of The Hunt’s first examples, it debunks the claim that a man who posed for a photo with one of Brazil’s senators had stabbed the presidential candidate. Facebook explains that its machine learning models identified the photo, Brazilian fact-checker Aos Fatos proved it false, and Facebook now automatically detects and demotes uploads of the image. In a case where it missed the mark, a false story claiming NASA would pay you $100,000 to stay in bed for 60 days “racked up millions of views on Facebook” before fact-checkers found NASA had only paid out $10,000 to $17,000 in limited instances for such studies in the past.

While the educational “Hunt” series is useful, it merely cherry-picks random false news stories from a wide time period. More urgent, and more useful, would be for Facebook to apply this method to currently circulating misinformation about the most important news stories. The New York Times’ Kevin Roose recently began using Facebook’s CrowdTangle tool to highlight the top 10 recent stories by engagement about topics like the Brett Kavanaugh hearings.

If Facebook wanted to be more transparent about its successes and failures around fake news, it’d publish lists of the false stories with the highest circulation each month and then apply the Hunt’s format explaining how they were debunked. This could help dispel myths in society’s understanding that may be propagated by the mere abundance of fake news headlines, even if users don’t click through to read them.

The red line represents the decline of Facebook engagement with “unreliable or dubious” sites

But at least all of Facebook’s efforts around information security — including doubling its security staff from 10,000 to 20,000 workers, fact checks and using News Feed algorithm changes to demote suspicious content — are paying off:

  • A Stanford and NYU study found that Facebook likes, comments, shares and reactions to links to 570 fake news sites dropped by more than half since the 2016 election, while engagements through Twitter continued to rise, “with the ratio of Facebook engagements to Twitter shares falling by approximately 60 percent.”
  • A University of Michigan study coined the metric “Iffy Quotient” to assess how much content from certain fake news sites was distributed on Facebook and Twitter. When engagement was factored in, it found Facebook’s level had dropped back to nearly its 2016 volume — now 50 percent less than Twitter’s.
  • French newspaper Le Monde looked at engagement with 630 French websites across Facebook, Twitter, Pinterest and Reddit. Facebook engagement with sites dubbed “unreliable or dubious” has dropped by half since 2015.
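The University of Michigan’s actual Iffy Quotient methodology is more involved, but the core idea (the share of a platform’s engagement attributable to flagged “iffy” sites) can be sketched roughly. The domains and engagement counts below are invented purely for illustration:

```python
# Illustrative sketch only: the real "Iffy Quotient" (University of
# Michigan) uses curated site lists and a more elaborate methodology.
# Domains and counts here are made up.

def iffy_quotient(engagements, iffy_sites):
    """Share of total engagement that comes from flagged sites.

    engagements: dict mapping site domain -> engagement count
    iffy_sites: set of domains flagged as unreliable
    """
    total = sum(engagements.values())
    if total == 0:
        return 0.0
    iffy = sum(n for site, n in engagements.items() if site in iffy_sites)
    return iffy / total

# Hypothetical daily engagement counts per domain
sample = {"reliable-news.example": 800,
          "iffy-site.example": 150,
          "another-iffy.example": 50}
flagged = {"iffy-site.example", "another-iffy.example"}

print(iffy_quotient(sample, flagged))  # → 0.2
```

Tracking that ratio over time, as the study did, is what lets you say engagement with dubious sites has "dropped by half" on one platform but not another.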

Of course, given Twitter’s seeming paralysis on addressing misinformation and trolling, it’s not a great benchmark for Facebook to judge by. While it’s useful that Facebook is outlining ways to spot fake news, the public will have to internalize these strategies for society to make progress. That may be difficult when the truth has become incompatible with many people’s and politicians’ staunchly held beliefs.

In the past, Facebook has surfaced fake news-spotting tips atop the News Feed and bought full-page newspaper ads to disseminate them. The Hunt for False News would surely benefit from being embedded where the social network’s users look every day, instead of buried in its corporate blog.




Facebook News Feed now downranks sites with stolen content

October 17, 2018

Facebook is demoting trashy news publishers and other websites that illicitly scrape and republish content from other sources with little or no modification. Today it exclusively told TechCrunch that it will show links less prominently in the News Feed if they have a combination of this new signal about content authenticity along with either clickbait headlines or landing pages overflowing with low-quality ads. The move comes after Facebook’s surveys and in-person interviews discovered that users hate scraped content.

If ill-gotten intellectual property gets less News Feed distribution, it will receive less referral traffic, earn less ad revenue and there’ll be less incentive for crooks to steal articles, photos and videos in the first place. That could create an umbrella effect that improves content authenticity across the web.

And just in case the scraped profile data stolen from 29 million users in Facebook’s recent massive security breach ended up published online, Facebook would already have a policy in place to make links to it effectively disappear from the feed.

Here’s an example of the type of site that might be demoted by Facebook’s latest News Feed change. “Latest Nigerian News” scraped one of my recent TechCrunch articles and surrounded it with tons of ads.

An ad-filled site that scraped my recent TechCrunch article. This site might be hit by a News Feed demotion

“Starting today, we’re rolling out an update so people see fewer posts that link out to low-quality sites that predominantly copy and republish content from other sites without providing unique value. We are adjusting our Publisher Guidelines accordingly,” Facebook wrote in an addendum to its May 2017 post about demoting sites stuffed with crappy ads. Facebook tells me the new publisher guidelines will warn news outlets to add original content or value to reposted content, or invoke the social network’s wrath.

Personally, I think the importance of transparency around these topics warrants a new blog post from Facebook as well as an update to the original post linking forward to it.

So how does Facebook determine whether content is stolen? Its systems compare the main text content of a page with all other text content to find potential matches, and the degree of matching is used to predict that a site stole its content. That prediction is then merged, in a combined classifier, with how clickbaity a site’s headlines are, plus the quality and quantity of ads on the site.
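Facebook’s actual models are proprietary, but the pipeline described above (text matching to predict copying, then a combined classifier folding in clickbait and ad-quality signals) might be sketched along these lines. The shingle-based similarity, feature names, and thresholds below are illustrative assumptions, not the real system:

```python
# Hedged sketch of the described pipeline. Facebook's real models are
# proprietary; the similarity measure, signal names, and thresholds
# here are illustrative assumptions only.

def shingles(text, k=5):
    """Set of overlapping k-word shingles, a common near-duplicate signal."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def copy_score(page_text, corpus_texts, k=5):
    """Highest Jaccard similarity between the page and any known page."""
    page = shingles(page_text, k)
    best = 0.0
    for other in corpus_texts:
        o = shingles(other, k)
        union = page | o
        if union:
            best = max(best, len(page & o) / len(union))
    return best

def should_demote(page_text, corpus_texts, clickbait_score, ad_density,
                  threshold=0.8):
    """Combined decision per the article: a likely-scraped page is
    demoted only if it ALSO shows clickbait headlines or ad overload."""
    likely_scraped = copy_score(page_text, corpus_texts) >= threshold
    return likely_scraped and (clickbait_score > 0.5 or ad_density > 0.5)
```

A verbatim scrape of a known article scores 1.0 on `copy_score`, so it would be demoted if its headlines or ad load also look bad; original text, or a clean repost without clickbait and ad spam, would not.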




Kanye’s Password, a WhatsApp Bug, and More Security News This Week

October 13, 2018

A grey hat hacking hero, bad boat news, and more security news this week.


Tech and ad giants sign up to Europe’s first weak bite at ‘fake news’

September 26, 2018

The European Union’s executive body has signed up tech platforms and ad industry players to a voluntary  Code of Practice aimed at trying to do something about the spread of disinformation online.

Something, just not anything too specifically quantifiable.

According to the Commission, Facebook, Google, Twitter, Mozilla, some additional members of the EDIMA trade association, plus unnamed advertising groups are among those that have signed up to the self-regulatory code, which will apply in a month’s time.

Signatories have committed to taking actions (not exactly prescribed ones) in the following five areas:

  • Disrupting advertising revenues of certain accounts and websites that spread disinformation;
  • Making political advertising and issue based advertising more transparent;
  • Addressing the issue of fake accounts and online bots;
  • Empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;
  • Empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.

Mariya Gabriel, the European commissioner for digital economy and society, described the Code as a first “important” step in tackling disinformation. And one she said will be reviewed by the end of the year to see how (or, well, whether) it’s functioning, with the door left open for additional steps to be taken if not. So in theory legislation remains a future possibility.

“This is the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis,” she said in a statement. “The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation, and we welcome this.

“These actions should contribute to a fast and measurable reduction of online disinformation. To this end, the Commission will pay particular attention to its effective implementation.”

“I urge online platforms and the advertising industry to immediately start implementing the actions agreed in the Code of Practice to achieve significant progress and measurable results in the coming months,” she added. “I also expect more and more online platforms, advertising companies and advertisers to adhere to the Code of Practice, and I encourage everyone to make their utmost to put their commitments into practice to fight disinformation.”

Earlier this year, a report by an expert group established by the Commission to help shape its response to the so-called ‘fake news’ crisis called for more transparency from online platforms, as well as urgent investment in media and information literacy education to empower journalists and foster a diverse and sustainable news media ecosystem.

Safe to say, no one has suggested there’s any kind of quick fix for the Internet enabling the accelerated spread of nonsense and lies.

Including the Commission’s own expert group, which offered an assorted pick’n’mix of ideas — set over various timeframes, some of them not at all instant fixes.

Though the group was called out for failing to interrogate evidence around the role of behavioral advertising in the dissemination of fake news — which has arguably been piling up. (Certainly its potential to act as a disinformation nexus has been amply illustrated by the Facebook-Cambridge Analytica data misuse scandal, to name one recent example.)

The Commission is not doing any better on that front, either.

The executive has been working on formulating its response to what its expert group suggested should be referred to as ‘disinformation’ (i.e. rather than the politicized ‘fake news’ moniker) for more than a year now — after the European parliament adopted a Resolution, in June 2017, calling on it to examine the issue and look at existing laws and possible legislative interventions.

Elections for the European parliament are due next spring and MEPs are clearly concerned about the risk of interference. So the unelected Commission is feeling the elected parliament’s push here.

Disinformation — aka “verifiably false or misleading information” created and spread for economic gain and/or to deceive the public, and which “may cause public harm” such as “threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”, as the Commission’s new Code of Practice defines it — is clearly a slippery policy target.

And online, multiple players are implicated and involved in its spread.

But so too are multiple, powerful, well resourced adtech players incentivized to push to avoid any political disruption to their lucrative people-targeting business models.

In the Commission’s voluntary Code of Practice signatories merely commit to recognizing their role in “contributing to solutions to the challenge posed by disinformation”. 

“The Signatories recognise and agree with the Commission’s conclusions that ‘the exposure of citizens to large scale Disinformation, including misleading or outright false information, is a major challenge for Europe. Our open democratic societies depend on public debates that allow well-informed citizens to express their will through free and fair political processes’,” runs the preamble.

“[T]he Signatories are mindful of the fundamental right to freedom of expression and to an open Internet, and the delicate balance which any efforts to limit the spread and impact of otherwise lawful content must strike.

“In recognition that the dissemination of Disinformation has many facets and is facilitated by and impacts a very broad segment of actors in the ecosystem, all stakeholders have roles to play in countering the spread of Disinformation.”

“Misleading advertising” is explicitly excluded from the scope of the code — which also presumably helped the Commission convince the ad industry to sign up to it.

Though that further risks muddying the waters of the effort, given that social media advertising has been the high-powered vehicle of choice for malicious misinformation muck-spreaders (such as Kremlin-backed agents of societal division).

The Commission is presumably trying to split the hairs of maliciously misleading fake ads (still bad because they’re not actually ads but malicious pretenders) and good old fashioned ‘misleading advertising’, though — which will continue to be dealt with under existing ad codes and standards.

Also excluded from the Code: “Clearly identified partisan news and commentary”. So purveyors of hyper biased political commentary are not intended to get scooped up here, either. 

Though again, plenty of Kremlin-generated disinformation agents have masqueraded as partisan news and commentary pundits, and from all sides of the political spectrum.

Hence, we must again assume, the Commission including the requirement to exclude this type of content where it’s “clearly identified”. Whatever that means.

Among the various ‘commitments’ tech giants and ad firms are agreeing to here are plenty of firmly fudgy-sounding statements that call for a degree of effort from the undersigned. But they never set out explicitly how such effort will be measured or quantified.

For example:

  • The Signatories recognise that all parties involved in the buying and selling of online advertising and the provision of advertising-related services need to work together to improve transparency across the online advertising ecosystem and thereby to effectively scrutinise, control and limit the placement of advertising on accounts and websites belonging to purveyors of Disinformation.

Or

  • Relevant Signatories commit to use reasonable efforts towards devising approaches to publicly disclose “issue-based advertising”. Such efforts will include the development of a working definition of “issue-based advertising” which does not limit reporting on political discussion and the publishing of political opinion and excludes commercial advertising.

And

  • Relevant Signatories commit to invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest.

Nor does the code exactly nail down the terms it’s using to set goals — raising tricky and even existential questions like who defines what’s “relevant, authentic, and authoritative” where information is concerned?

Which is really the core of the disinformation problem.

And also not an easy question for tech giants — which have sold their vast content distribution farms as neutral ‘platforms’ — to start to approach, let alone tackle. Hence their leaning so heavily on third party fact-checkers to try to outsource their lack of any editorial values. Because without editorial values there’s no compass; and without a compass how can you judge the direction of tonal travel?

And so we end up with very vague suggestions in the code like:

  • Relevant Signatories should invest in technological means to prioritize relevant, authentic, and authoritative information where appropriate in search, feeds, or other automatically ranked distribution channels

Only slightly less vague and woolly is a commitment that signatories will “put in place clear policies regarding identity and the misuse of automated bots” on the signatories’ services, and “enforce these policies within the EU”. (So presumably not globally, despite disinformation being able to wreak havoc everywhere.)

Though here the code only points to some suggestive measures that could be used to do that — and which are set out in a separate annex. This boils down to a list of some very, very broad-brush “best practice principles” (such as “follow the money”; develop “solutions to increase transparency”; and “encourage research into disinformation”… ).

And set alongside that uninspiringly obvious list is another — of some current policy steps being undertaken by the undersigned to combat fake accounts and content — as if they’re already meeting the code’s expectations… so, er…

Unsurprisingly, the Commission’s first bite at ‘fake news’ has attracted some biting criticism for being unmeasurably weak sauce.

A group of media advisors — including the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics — are among the first critics.

Reuters reports them complaining that signatories have not offered measurable objectives to monitor the implementation. “The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” it quotes the group as saying.

Disinformation may be a tough, multi-pronged, multi-dimensional problem but few would try to argue that an overly dilute solution will deliver anything at all — well, unless it’s kicking the can down the road that you’re really after.

The Commission doesn’t even seem to know exactly what the undersigned have agreed to do as a first step, with the commissioner saying she’ll meet signatories “in the coming weeks to discuss the specific procedures and policies that they are adopting to make the Code a reality”. So double er… !

The code also only envisages signatories meeting annually to discuss how things are going. So no pressure for regular collaborative moots vis-a-vis tackling things like botnets spreading malicious disinformation then. Not unless the undersigned really, really want to.

Which seems unlikely, given how their business models tend to benefit from engagement — and disinformation-fuelled outrage has shown itself to be a very potent fuel on that front.

As part of the code, these adtech giants have at least technically agreed to make information available to the Commission on request — and generally to co-operate with its efforts to assess how/whether the code is working.

So, if public pressure on the issue continues to ramp up, the Commission does at least have a route to ask for relevant data from platforms that could, in theory, be used to feed a regulation that’s worth the paper it’s written on.

Until then, there’s nothing much to see here.


Social – TechCrunch


More Breaking News: Amazon Rebrand + Exact Match Update

September 9, 2018

Google announced yet another high-impact update to exact match keyword targeting just last night, and we just have to talk about it.

Read more at PPCHero.com
PPC Hero


Tesla’s Legal Woes, Bugatti’s Insane Supercar, and More Car News

August 27, 2018

Plus, an advanced Porsche 911, and why San Francisco’s new $2 billion transit terminal doesn’t tick all the boxes.
Feed: All Latest