
Monthly Archives: February 2019

Client Presentations: A Guide For Sophisticated Digital Marketers

February 7, 2019

This guide aims to boost your confidence before you face clients, outlining how to present data in a meaningful way.

Read more at PPCHero.com
PPC Hero


Fabula AI is using social spread to spot ‘fake news’

February 7, 2019

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user-generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “geometric deep learning”, where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned-upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogeneous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
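Fabula hasn’t published its code, and the sketch below is not its model; it is a minimal illustration of the general idea of a graph convolutional classifier that scores a news cascade from its spread structure, with all feature names and sizes invented:

```python
# A minimal sketch, NOT Fabula's model: a two-layer graph convolutional
# network that scores a news cascade from how it spreads.
import torch
import torch.nn as nn

class CascadeClassifier(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gc1 = nn.Linear(n_features, hidden)
        self.gc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, a_norm):
        # x: one row of (hypothetical) user/interaction features per node
        # a_norm: normalized adjacency matrix of the spread graph
        h = torch.relu(self.gc1(a_norm @ x))   # mix each node with its neighbours
        h = torch.relu(self.gc2(a_norm @ h))   # widen the receptive field one hop
        return torch.sigmoid(self.out(h.mean(dim=0)))  # pool nodes into one fake-vs-real score
```

Because the same weights are shared across every node, a model of this shape can score cascades of any size or shape, which is what makes learning directly on spread graphs tractable.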

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far only been tested internally on sub-sets of Twitter data. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there soon, though, and intends to offer an API for platforms and publishers towards the end of this year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency, just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is that it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
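For readers unfamiliar with the metric, ROC AUC measures how well a classifier ranks positives above negatives: 1.0 is a perfect ranking, 0.5 is chance. A self-contained example of the standard scikit-learn call, on made-up labels and scores:

```python
# Illustrative only: computing ROC AUC with scikit-learn on made-up data.
from sklearn.metrics import roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                   # 1 = fake, 0 = real
y_score = [0.9, 0.2, 0.8, 0.7, 0.65, 0.1, 0.6, 0.3]  # model's confidence each story is fake

print(roc_auc_score(y_true, y_score))  # 0.9375: nearly every fake ranks above every real story
```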

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third party fact checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.
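The ageing test he describes amounts to a temporal holdout: train on older cascades, evaluate on strictly newer ones, so the model never sees the future. A minimal sketch, with a hypothetical data layout:

```python
# Illustrative temporal split: train on older cascades, test on newer ones.
from datetime import datetime

# (first_seen, label) pairs; features omitted for brevity. Purely invented data.
cascades = [
    (datetime(2016, 11, 3), 1),
    (datetime(2017, 4, 18), 0),
    (datetime(2017, 9, 2), 1),
    (datetime(2018, 5, 27), 0),
]
cascades.sort(key=lambda c: c[0])                    # oldest first
cutoff = int(0.75 * len(cascades))
train, test = cascades[:cutoff], cascades[cutoff:]   # never train on the future
```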

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much or the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.
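That scale argument is easy to make concrete. Treating the ~93 per cent figure loosely as a catch rate (strictly, it is a ROC AUC, a ranking measure) and plugging in invented volumes:

```python
# Back-of-envelope only: daily volume and fake share are invented numbers.
daily_items   = 1_000_000_000   # hypothetical pieces of content per day
fake_fraction = 0.01            # hypothetical share of content that is fake
detection     = 0.93            # Fabula's reported figure, read as a catch rate

missed = daily_items * fake_fraction * (1 - detection)
print(f"{missed:,.0f} fakes would slip through per day")  # 700,000
```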

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests, it could be used in conjunction with other approaches such as content analysis, and thus function as another string to a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used he says it could do away with the need for independent third party fact-checking organizations altogether because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); a content credibility ranking system that can down-weight dubious stories or even block them entirely; or intermediate content screening that flags potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesis that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with high probability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total of European Research Council grants plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any talks it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”
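Those “important nodes” are straightforward to surface once a cascade is modelled as a graph. A toy illustration using networkx and degree centrality, one simple proxy for the well-connected accounts Bronstein describes (account names invented):

```python
# Toy example: ranking potential super-spreaders in a retweet graph
# by degree centrality.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("hub", "a"), ("hub", "b"), ("hub", "c"), ("hub", "d"),
    ("a", "e"), ("b", "f"),
])

centrality = nx.degree_centrality(g)
top = sorted(centrality, key=centrality.get, reverse=True)[:3]
print(top)  # 'hub' first: the best-connected node in the cascade
```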

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remain a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests they do exist — albeit, just not encasing every Internet user.

Bronstein says the next steps for the startup are to scale its prototype to handle multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access maybe with some commercial partners to test the API but eventually we would like to make it useable by multiple people from different businesses,” says Bronstein. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”


Social – TechCrunch


SpaceX’s Starship, Meant for Mars, Prepares for a First Hop

February 7, 2019

A new, extra-beefy SpaceX rocket is undergoing testing in Texas. Formerly called the BFR, Starship is designed to ferry up to 100 humans to Mars.
Feed: All Latest


Why Storytelling is Essential in Online Marketing

February 6, 2019

As marketers, our job is not only to interpret analytics data, but also to provide a summary of performance and apply recommendations for future strategies, forecasting and ongoing testing. However, this standard method of decoding is not enough, and we need to find a better way to communicate successes and failures that the client can understand. That is why storytelling is just as important now as it was back in kindergarten, when the teacher read us a story in a circle.

In this post, I will highlight the importance of storytelling with the client, which not only helps the client understand, but also reinforces the client-agency relationship.

Storytelling is also a Science

As marketers, we are classically trained early on to become proficient in Excel, PowerPoint and (my personal favorite) writing on whiteboards, so that we can be perceived as the smartest one in the room. These elements of communication comprise bullet points, summarizations, goals and objectives, sales vs. cost projections, and so on. On the contrary, we are most likely doing it all wrong. Many studies and published articles debunk this MBA/classroom method and reinforce one of the oldest and most fundamental communication methods.

In a very “eye-opening” article by Lifehacker.com published back in 2012, entitled “The Science of Storytelling: Why Telling a Story is the Most Powerful Way to Activate Our Brains”, author Leo Widrich states: “It’s in fact quite simple. If we listen to a powerpoint presentation with boring bullet points, a certain part in the brain gets activated. Scientists call this Broca’s area and Wernicke’s area. Overall, it hits our language processing parts in the brain, where we decode words into meaning. And that’s it, nothing else happens. When we are being told a story, things change dramatically. Not only are the language processing parts in our brain activated, but any other area in our brain that we would use when experiencing the events of the story are too. So in essence, telling stories not only puts our entire brain to work, it also allows the storyteller to put ideas and thoughts into the listeners’ brain as well.”

Complexities of Storytelling

Most clients do not care too much about CTR, average positions, bounce rates and the like; they want to know what is causing their cash register to ring. Below are some of the common questions they are most concerned about:

  • What’s working and why?
  • What’s not working and why?
  • Why are sales down this month as compared to last month?
  • How can we generate more sales without increasing the budget, etc…

Because of this difference in understanding success metrics, marketers need to take all of the analytics data (which clients consider very complex) and transform it into a story, in language the client can understand. For example, let’s suppose the client saw a 50% increase in sales coming from their “Brand Terms” in AdWords as compared to the previous month. Instead of just providing them with improved performance metrics such as CTR, conversion rates and so on, marketers need to do a little digging around and form a story the client can understand.

A story would be something like:

“Well, since we added more generalized ‘non-branded’ terms, as well as your interview on the local TV station, a larger audience of people who were not familiar with your brand before typed your brand into Google and clicked on the PPC text ads.” It is this type of success story that can switch on a “light bulb” in the head of the client and reassure them that their investment in you or your agency is paying off.

Leveraging Web Analytics Data to Feed the Story

Just looking at common performance data is simply not enough to tell a story. Marketers need to look at various layers of data to compose a story that makes sense to the client: identifying interesting and important metrics such as hour of day, day of the week, geography by state, metro area and city, direct/bookmark traffic, conversion funnels, and so on. These metrics, combined with overall performance data, make up the holistic story the client needs to hear. Moreover, these stories often lead to future optimization strategies and testing, which is great for the client-agency relationship.

In Conclusion:

Trying to explain all of the intricate metrics and what they mean to a client is hard enough. But simplifying the data and creating a story around it, even as an “ice-breaker” at the beginning of the conversation, helps the client feel like they made the right choice in hiring you. The one thing we need to remember is that a story, if broken down into the simplest form, is a connection of cause and effect and that is what clients need to understand.


PPC Marketing Agency | Search Marketing Firm | Adwords Certified Consultant


Backed by Benchmark, Blue Hexagon just raised $31 million for its deep learning cybersecurity software

February 5, 2019

Nayeem Islam spent nearly 11 years with chipmaker Qualcomm, where he founded its Silicon Valley-based R&D facility, recruited its entire team and oversaw research on all aspects of security, including applying machine learning on mobile devices and in the network to detect threats early.

Islam was nothing if not prolific, developing a system for on-device machine learning for malware detection, libraries for optimizing deep learning algorithms on mobile devices, and systems for parallel compute on mobile devices, among other things.

In fact, because of his work, he also saw a big opportunity in better protecting enterprises from cyberthreats through deep neural networks that are capable of processing every raw byte within a file and that can uncover complex relations within datasets. So two years ago, Islam and Saumitra Das, a former Qualcomm engineer with 330 patents to his name and another 450 pending, struck out on their own to create Blue Hexagon, a now 30-person Sunnyvale, Calif.-based company that is today disclosing that it has raised $31 million in funding from Benchmark and Altimeter.
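Blue Hexagon hasn’t detailed its architecture, but “processing every raw byte within a file” points at byte-level models along the lines of published academic work such as MalConv. The sketch below illustrates that general idea only, not the company’s model; every size and layer choice is an assumption:

```python
# A rough sketch, NOT Blue Hexagon's model: a byte-level CNN that embeds
# each raw byte of a file and convolves over the whole sequence.
import torch
import torch.nn as nn

class ByteClassifier(nn.Module):
    def __init__(self, emb=8, channels=64):
        super().__init__()
        self.embed = nn.Embedding(257, emb, padding_idx=256)  # 256 byte values + 1 pad id
        self.conv  = nn.Conv1d(emb, channels, kernel_size=16, stride=8)
        self.head  = nn.Linear(channels, 1)

    def forward(self, byte_ids):                  # (batch, file_len) ints in [0, 256]
        h = self.embed(byte_ids).transpose(1, 2)  # -> (batch, emb, file_len)
        h = torch.relu(self.conv(h))              # slide filters across the raw bytes
        h = h.max(dim=2).values                   # strongest response anywhere in the file
        return torch.sigmoid(self.head(h))        # malicious-vs-benign score
```

The appeal of this style of model is that it needs no hand-engineered, file-format-specific features; it learns directly from labelled binaries.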

The funding comes roughly one year after Benchmark quietly led a $6 million Series A round for the firm.

So what has investors so bullish on the company’s prospects, aside from its credentialed founders? In a word, speed, seemingly. According to Islam, Blue Hexagon has created a real-time, cybersecurity platform that he says can detect known and unknown threats at first encounter, then block them in “sub seconds” so the malware doesn’t have time to spread.

The industry has to move to real-time detection, he says, explaining that four new and unique malware samples are released every second, and arguing that traditional security methods can’t keep pace. He says that sandboxes, for example, meaning restricted environments that quarantine cyber threats and keep them from breaching sensitive files, are no longer state of the art. The same is true of signatures, which are mathematical techniques used to validate the authenticity and integrity of a message, software or digital document but are being bypassed by rapidly evolving new malware.

Only time will tell if Blue Hexagon is far more capable of identifying and stopping attackers, as Islam insists is the case. It is not the only startup to apply deep learning to cybersecurity, though it’s certainly one of the first. Critics, some of whom are protecting their own corporate interests, also worry that hackers can foil security algorithms by targeting the warning flags they look for.

Still, with its technology, its team, and its pitch, Blue Hexagon is starting to persuade not only top investors of its merits, but a growing —  and broad — base of customers, says Islam. “Everyone has this issue, from large banks, insurance companies, state and local governments. Nowhere do you find someone who doesn’t need to be protected.”

Blue Hexagon can even help customers that are already under attack, Islam says, even if it isn’t ideal. “Our goal is to catch an attack as early in the kill chain as possible. But if someone is already being attacked, we’ll see that activity and pinpoint it and be able to turn it off.”

Some damage may already be done, of course. It’s another reason to plan ahead, he says. “With automated attacks, you need automated techniques.” Deep learning, he insists, “is one way of leveling the playing field against attackers.”


Enterprise – TechCrunch


7 social media monitoring tools to check out in 2019

February 5, 2019

The beginning of a new year is as good a reason as any other to try something new: a different lifestyle, a new hobby, a brand new marketing strategy.

And, of course, a new tool, since it’s both exciting and rewarding to discover awesome software that helps you deal with work and, sometimes, with life as well.

This is a list of social media monitoring/listening tools you should check out next year.

Some of them have existed for a while, some are new and fresh on the market.

But all of them are worth trying out and using in your marketing tool set (if social monitoring is in your marketing strategy at all, as it should be). So let’s start:

Which social media monitoring tools should you check out this year?

1. Awario


Awario collects mentions of your keywords from a large range of sources (that keeps getting larger).

It monitors all major social media platforms, Reddit and other forums, news sites and blogs, and the Web.

It works in real-time: whenever your keyword is mentioned, it will immediately appear in your mention feed, and you’ll be able to check it at any point and reply to the mention straight from the dashboard. All reviews, complaints, questions and comments can be dealt with as quickly as you like.

Awario also does its fair share of analysis. It analyses the growth of mentions, their reach (how many people the mentions reach), their sentiment (the percentage of positive, negative, and neutral mentions), and the mentions’ locations, languages, and sources.

You can also generate reports on mentions’ statistics, compare analytics with your competitors, and see your industry influencers.

One of the features that makes Awario stand out is Awario Leads – a recent addition made specifically for finding hot leads online. It brings surprisingly good results and can properly transform the way you sell!

Price: Starts at $29/mo. You can also sign up for a free 14-day trial.

2. Mention


Mention is one of the oldest and most well-tried social media monitoring tools. The French company has had time to mature, discover what users need, and make sure it delivers the best possible results.

Its main goal is real-time search: you get the results from the past 24 hours after setting up an alert. Historical data is only available on request.

Mention is a good choice for large companies: it monitors all main sources, lets you tag and organize mentions, build your own custom reports and export them in PDF and CSV.

There’s even an automated reports feature: they update you on what’s happening with your alerts on a regular basis. The tool also finds influencers in your industry, reveals their interests, locations, and follower count, and makes it easy to jump into influencer marketing.

Mention is integrated with Slack and Zapier which makes marketing workflow smooth and simple.

Price: Starts at $29/mo. You can sign up for a free 14-day trial.

3. Brand24


Brand24 is a solid social media monitoring tool for small and medium-sized businesses.

It has existed for a while on the Polish market, but turned its focus to the English-speaking world fairly recently. It monitors Facebook, Twitter, Google+, Reddit, YouTube, Instagram, and the web, and saves historical data for up to 12 months.

Similarly to Awario and Mention, Brand24 filters by countries and languages, provides mentions’ statistics, and Influencer reports. However, reports aren’t white-label and data export is only allowed in the Premium plan.

Brand24 allows multi-user access for up to 99 users, which is great for large social media marketing teams.

Moreover, they have Slack integration and a mobile app, making social media monitoring a process that anyone from the team can do at any point of their day.

Price: Starts at $49/mo. You can sign up for a free 14-day trial.

4. TweetDeck


TweetDeck isn’t quite on the same level as the tools mentioned before, as it only monitors Twitter.

But that’s fair, since it’s a tool by Twitter for Twitter. The tool doesn’t stop at monitoring: you can also schedule posts for Twitter and look at Twitter analytics.

Basically, it’s a handy tool if Twitter is your preferred marketing channel and the one you want to keep an eye on. After all, it often makes sense: most social media crises happen there, and most brands interact with customers on exactly this social media platform.

Price: Free. Sign up with your Twitter account.

5. Keyhole


Keyhole is a tool that combines a real TweetDeck and a hypothetical TweetDeck for Instagram.

Keyhole does social listening for these two platforms and analyzes the mentions found. It also does automated posting on Twitter and Instagram. Besides these two platforms, Keyhole monitors blogs and news sites.

What Keyhole does especially well is reports. Clear, customizable, and embeddable reports include growth rates, engagement metrics, historical data, social media influencers, and sentiment analysis. You’ve got access to all kinds of presentation forms: clouds, graphs, charts, maps, and so on. It’s a delight for both users and clients.

Price: Starts at $199/mo. Sign up for a free 7-day trial.

6. Brandwatch


Brandwatch is a tool made for marketing departments and social media marketing teams that you might find in big corporations. It does social listening across all platforms, blogs, forums, and the web.

It mostly stands out due to its social media analytics and reporting features. Brandwatch collects not only its own social listening data, but also other kinds of data from Hootsuite, Buzzsumo, and Google Analytics.

As a result, you get demographic and psychographic data about your audience, their location and languages, trending topics in your niche, robust sentiment analysis, and all other information you might need to do comprehensive market research.

You also get dashboards that can be exported into customizable PowerPoint presentations.

Price: Starts at $500/mo. There’s no free trial, but you can book a demo here.

7. Talkwalker


Talkwalker is another social media monitoring tool that’s made mostly for agencies.

It monitors even niche social media platforms, such as Flickr, Pinterest, Foursquare, SoundCloud, Twitch, and so on, in addition, of course, to all the main ones, such as blogs, forums, news sites, Twitter, Facebook, Instagram, and YouTube. It stands out for the ability to analyze images (e.g., find all photos with your logo).

Talkwalker is integrated with Google Analytics and analyses age, gender, occupation, location, languages; identifies main content themes and mentions’ sentiment.

Like Brandwatch, Talkwalker is meant to work with large amounts of data rather than with specific mentions, and its main uses are market research and reputation management.

Price: Freemium; paid plans start at $700/mo. Start for free here.

Conclusion

So here you go: a sample of the best social media monitoring tools for every kind of budget and every kind of social listening goal, from customer service to lead generation and extensive market research. Go ahead and start the year with the tool that is best for you and your business!



Search Engine Watch



This light-powered 3D printer materializes objects all at once

February 5, 2019

3D printing has changed the way people approach hardware design, but most printers share a basic limitation: they essentially build objects layer by layer, generally from the bottom up. This new system from UC Berkeley, however, builds them all at once, more or less, by projecting a video through a jar of light-sensitive resin.

The device, which its creators call the replicator (but shouldn’t, because that’s a MakerBot trademark), is mechanically quite simple. It’s hard to explain it better than Berkeley’s Hayden Taylor, who led the research:

Basically, you’ve got an off-the-shelf video projector, which I literally brought in from home, and then you plug it into a laptop and use it to project a series of computed images, while a motor turns a cylinder that has a 3D-printing resin in it.

Obviously there are a lot of subtleties to it — how you formulate the resin, and, above all, how you compute the images that are going to be projected, but the barrier to creating a very simple version of this tool is not that high.

Using light to print isn’t new — many devices out there use lasers or other forms of emitted light to cause material to harden in desired patterns. But they still do things one thin layer at a time. Researchers did demonstrate a “holographic” printing method a bit like this using intersecting beams of light, but it’s much more complex. (In fact, Berkeley worked with Lawrence Livermore on this project.)

In Taylor’s device, the object to be recreated is scanned first in such a way that it can be divided into slices, a bit like a CT scanner — which is in fact the technology that sparked the team’s imagination in the first place.

By projecting light into the resin as it revolves, the material for the entire object is resolved more or less at once, or at least over a series of brief revolutions rather than hundreds or thousands of individual drawing movements.
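In essence the system runs a CT scan in reverse: for each rotation angle of the vial, the projector shows a computed projection of the target shape. A minimal sketch of generating those per-angle profiles for a single horizontal slice, using scikit-image’s Radon transform (the real system must also correct for how light doses accumulate and how the resin responds, as Taylor notes):

```python
# Illustrative only: computing per-angle projection profiles for one
# slice of a target shape via the Radon transform.
import numpy as np
from skimage.transform import radon

slice_2d = np.zeros((128, 128))
slice_2d[40:90, 50:80] = 1.0                 # toy target: a solid rectangle

angles = np.linspace(0.0, 360.0, 720, endpoint=False)
sinogram = radon(slice_2d, theta=angles)     # one projection profile per angle

# sinogram[:, k] is the 1-D brightness profile to project at rotation
# angles[k]; stacking slices vertically gives the full video frame.
```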

This has a number of benefits besides speed. Objects come out smooth — if a bit crude in this prototype stage — and they can have features and cavities that other 3D printers struggle to create. The resin can even cure around an existing object, as they demonstrate by manifesting a handle around a screwdriver shaft.

Naturally, different materials and colors can be swapped in, and the uncured resin is totally reusable. It’ll be some time before it can be used at scale or at the level of precision traditional printers now achieve, but the advantages are compelling enough that it will almost certainly be pursued in parallel with other techniques.

The paper describing the new technique was published this week in the journal Science.

Gadgets – TechCrunch


60-Minute Masterclass: Audience Targeting Strategies for Google Ads

February 4, 2019

It’s time for you to thrive in a digital marketing world that is all about audiences.

Read more at PPCHero.com
PPC Hero


Facebook warned over privacy risks of merging messaging platforms

February 3, 2019

Facebook’s lead data protection regulator in Europe has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms.

In a statement posted to its website late last week the Irish Data Protection Commission writes: “Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal.”

Last week the New York Times broke the news that Facebook intends to unify the backend infrastructure of its three separate products, couching it as Facebook founder Mark Zuckerberg asserting control over acquisitions whose founders have since left the building.

Instagram founders, Kevin Systrom and Mike Krieger, left Facebook last year, as a result of rising tensions over reduced independence, according to our sources.

WhatsApp’s founders left Facebook earlier: Brian Acton departed in late 2017, while Jan Koum stuck it out until spring 2018. The pair reportedly clashed with Facebook execs over user privacy and differences over how to monetize the end-to-end encrypted platform.

Acton later said Facebook had coached him to tell European regulators assessing whether to approve the 2014 merger that it would be “really difficult” for the company to combine WhatsApp and Facebook user data.

In the event, Facebook went on to link accounts across the two platforms just two years after the acquisition closed. It was later hit with a $122M penalty from the European Commission for providing “incorrect or misleading” information at the time of the merger. Though Facebook claimed it had made unintentional “errors” in the 2014 filing.

A further couple of years on and Facebook has now graduated to seeking full platform unification of separate messaging products.

“We want to build the best messaging experiences we can; and people want messaging to be fast, simple, reliable and private,” a spokesperson told us when we asked for a response to the NYT report. “We’re working on making more of our messaging products end-to-end encrypted and considering ways to make it easier to reach friends and family across networks.”

“As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” the spokesperson added, confirming the substance of the NYT report.

There certainly would be a lot of detail to be worked out. Not least the feasibility of legally merging user data across distinct products in Europe, where a controversial 2016 privacy u-turn by WhatsApp — when it suddenly announced it would after all share user data with parent company Facebook (despite previously saying it would never do so), including sharing data for marketing purposes — triggered swift regulatory intervention.

Facebook was forced to suspend marketing-related data flows in Europe. Though it has continued sharing data between WhatsApp and Facebook for security and business intelligence purposes, leading the French data watchdog to issue a formal notice at the end of 2017 warning that the latter transfers also lack a legal basis.

A court in Hamburg, Germany, also officially banned Facebook from using WhatsApp user data for its own purposes.

Early last year, following an investigation into the data-sharing u-turn, the UK’s data watchdog obtained an undertaking from WhatsApp that it would not share personal data with Facebook until the two services could do so in a way that’s compliant with the region’s strict privacy framework, the General Data Protection Regulation (GDPR).

Facebook only avoided a fine from the UK regulator because it froze data flows after the regulatory intervention. But the company clearly remains on watch — and any fresh moves to further integrate the platforms would trigger instant scrutiny, evidenced by the shot across the bows from the DPC in Ireland (Facebook’s international HQ is based in the country).

The 2016 WhatsApp-Facebook privacy u-turn also occurred prior to Europe’s GDPR coming into force. And the updated privacy framework includes a regime of substantially larger maximum fines for any violations.

Under the regulation watchdogs also have the power to ban companies from processing data. Which, in the case of a revenue-rich data-mining giant like Facebook, could be a far more potent disincentive than even a billion dollar fine.

We’ve reached out to Facebook for comment on the Irish DPC’s statement and will update this report with any response.

Here’s the full statement from the Irish watchdog:

While we understand that Facebook’s proposal to integrate the Facebook, WhatsApp and Instagram platforms is at a very early conceptual stage of development, the Irish DPC has asked Facebook Ireland for an urgent briefing on what is being proposed. The Irish DPC will be very closely scrutinising Facebook’s plans as they develop, particularly insofar as they involve the sharing and merging of personal data between different Facebook companies. Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal. It must be emphasised that ultimately the proposed integration can only occur in the EU if it is capable of meeting all of the requirements of the GDPR.

Facebook may be hoping that extending end-to-end encryption to Instagram as part of its planned integration effort, per the NYT report, could offer a technical route to stop any privacy regulators’ hammers from falling.

Though use of e2e encryption still does not shield metadata from being harvested. And metadata offers a rich source of inferences about individuals which, under EU law, would certainly constitute personal data. So even with robust encryption across the board on Instagram, Facebook and WhatsApp, the unified messaging platforms could still collectively leak plenty of personal data to their data-mining parent.

Facebook’s apps are also not open source. So even WhatsApp, which uses the respected Signal Protocol for its e2e encryption, remains under its control — with no ability for external audits to verify exactly what happens to data inside the app (such as checking what data gets sent back to Facebook). Users still have to trust Facebook’s implementation but regulators might demand actual proof of bona fide messaging privacy.

Nonetheless, the push by Facebook to integrate separate messaging products onto a single unified platform could be a defensive strategy — intended to throw dust in the face of antitrust regulators as political scrutiny of its market position and power continues to crank up. Though it would certainly be an aggressive defence to knit separate platforms more tightly together.

But if the risk Facebook is trying to shrink is being forced, by competition regulators, to sell off one or two of its messaging platforms it may feel it has nothing to lose by making it technically harder to break its business apart.

At the time of the acquisitions of Instagram and WhatsApp Facebook promised autonomy to their founders. Zuckerberg has since changed his view, according to the NYT — believing integrating all three will increase the utility of each and thus provide a disincentive for users to abandon each service.

It may also be a hedge against any one of the three messaging platforms decreasing in popularity by furnishing the business with internal levers it can throw to try to artificially juice activity across a less popular app by encouraging cross-platform usage.

And given the staggering size of the Facebook messaging empire, which globally sprawls to 2.5BN+ humans, user resistance to centralized manipulation via having their buttons pushed to increase cross-platform engagement across Facebook’s business may be futile without regulatory intervention.


Social – TechCrunch