CBPO

Tag: Social

It’s Time for Facebook to Become the Google Grants of Social Media

February 28, 2019 No Comments

Whenever I speak to Nonprofits (which is something I love to do), I always evangelize the importance of leveraging all of the online technology companies which offer “in-kind” services, especially Google Grants. However, for marketers in today’s world, Google Grants is simply not enough. Connecting with potential donors and volunteers, and building simple awareness, has evolved well beyond the search engines and into our Facebook and Twitter feeds as we all crave instant news, gossip and basic information. In this post, I will discuss not only the steps that have already been taken by Facebook, but also how much more it needs to do to fulfill its obligation to assist those organizations in need.

What Facebook needs to Learn from Google

In the early months of 2002, Google relaunched its AdWords platform with a new cost-per-click (CPC) pricing model that made it increasingly popular and successful with both large and small companies. It was this achievement that opened the eyes of the Google founders and other executives to the opportunity to provide the same benefit to Nonprofits by giving them free ads on Google.com. In essence, they believed that the AdWords platform would enable nonprofits to reach a much larger audience and connect with the people who were searching for information about their specific cause or programs. As you will see below, it has grown by leaps and bounds…

Recent screenshot from the new Google Grants Blog:

Why Facebook Doesn’t Understand the Opportunity

After seeing the success of Google Grants over the past 13 years, you would think Facebook would already have a plan in place to offer free advertising to Nonprofits. However, even though it has made attempts to do this, they were simply not enough. In the AdWeek article “Nonprofits Rely Heavily on Social Media to Raise Awareness”, author Kimberlee Morrison notes that social media presence is growing significantly for nonprofits. She goes on to say: “The report shows an increase of 29 percent in Facebook fans across all verticals and a 25 percent increase in Twitter followers. What’s more, there are big increases in sharing and likes from sources outside the follower base, so it would be wise for nonprofits to play to that strength on social sites if their aim is attracting a wider user base.”

How Facebook Failed in its First Attempt

Back on November 15, 2015, The Nonprofit Times published an interesting article entitled “$2 Million In Facebook Ads Going To Nonprofits”, in which Facebook announced, in partnership with ActionSprout, that it would distribute $2 million in Facebook Ads credits during the holiday season. These credits (up to $1,500 each) would be given out to roughly two thousand nonprofits. Author Andy Segedin writes that, according to Drew Bernard, ActionSprout CEO and co-founder, organizations will receive credit allotments of $600, $900, $1,200 or $1,500, granted from December through February, and all applicants will be set up with a free ActionSprout account.

The article goes on to say: “Bernard hopes that the credit giveaway will help organizations post more and better content on Facebook. The company plans to publish key findings based off of the distribution and use of the credits, but will not move forward with any follow-up efforts until information is gathered.” Bernard added: “This is a test to see what we can learn, and with what we learn we’ll all go back to the drawing board and see if there’s something we should do next with this.”

If you are interested in hearing more about the “key findings” of this test, you’re going to have to wait a little while and also give them your email address. (Not very philanthropic.)


In Conclusion:

As you can probably tell by my tone, I am somewhat disappointed by Facebook’s lack of initiative in its efforts to help Nonprofits. In my opinion, it offers a much stronger platform than Google AdWords, based on its “intense” targeting as well as its “ripe and persuasive audience”. I am also quite shocked that it could not follow in the footsteps of Google’s 13 years of supporting Nonprofits with the Google Grants program. To add insult to injury, I am dumbfounded that Facebook not only had to partner with another company, but also labeled its efforts a test for a limited number of Nonprofits over just a couple of months. What’s the point of a test, when you know Nonprofits could only benefit from the free advertising?

You almost get the sense that this was for the benefit of everyone except the Nonprofits which need it the most.


PPC Marketing Agency | Search Marketing Firm | Adwords Certified Consultant


How Social Advertising is Reinventing Our Display Marketing Strategies

February 25, 2019 No Comments

In today’s world, there is rarely a PPC Marketing Strategy that does not include, or at least toy with, the notion of creating a Facebook Ads or Twitter Ads campaign at some point in the strategy life-cycle. Because of this, marketers are developing and testing different audience segments based on interests, household income, marital status, exercise habits, etc. Frankly, it has changed the landscape of online marketing as we know it. In this post, I will talk about the importance of leveraging the targeting abilities within Facebook Ads and how they can benefit your next Google display campaign.

Facebook Ad’s Demographic Targeting Abilities

The targeting abilities in Google Display and Facebook Ads are similar with regard to Demographics and Topics/Placements. However, truth be told, Facebook is far superior for marketers, thanks to its deeper targeting options and more precise segmentation abilities. So without further ado, let’s talk about the similarities and how marketers can harness what they have learned from Facebook and apply it to Google.

As you can see from the screenshot below ↓, Google Display provides demographic targeting options similar to Facebook’s. They allow marketers to choose Genders, Ages and even Parental Status. However, there is one major “elephant in the room” here that skews all of this, and that is the dreadful UNKNOWN that we see in all of our data reports. These unknowns are people that Google cannot associate with any of the targeting options selected. (Facebook has the same problem.) The common issue is that not all people want to disclose their information to the platforms, making targeting more of a “ballpark” than a “hole in one”.

Demographic targeting in PPC Marketing

The Fuzziness with Google Topics Targeting

In Google Display, we have the ability to select specific topics and/or placements where we want to advertise our display banners and text ads. In the screenshot below ↓, I have provided a small example of how we can target the topic(s) of Coffee & Tea. But here’s the catch. In Google, we have an INTENT problem with our ability to choose specific audiences based on these very generalized topics. Meaning, the Coffee and Tea audience found in Google could be anything from Coffee Market Financials to the Health Benefits of Green Tea, but NOT specifically the Coffee and Tea drinkers. It is this little dilemma that forces marketers to add another layer of targeting to try and “hone in” on their preferred audience. That extra layer is called Placement targeting, but there are some extra steps that are needed to get the most out of it.

Extra Effort needed with Google Placement Targeting

Placement Targeting is the closest thing to Facebook Ads in terms of reaching specific brands or interests that carry a higher level of purchase intent. However, there are some common issues with placement targeting that marketers need to know before they start spending their ad dollars.

  • The partnering websites in this network are common AdSense customers. They can vary from very authoritative and prominent (CNN, NYTimes, etc.) all the way to suspicious arbitrage sites that do nothing but drive up impressions and cost (yes, they still exist).
  • Marketers often miss out on potential site partners because Google’s own directory is not up to date on listing all of them (meaning, there are great sites that are part of the AdSense network but are not listed). This hiccup forces marketers to do their own research to find those sites and add them manually.

In Conclusion:

The targeting abilities within Facebook Ads have become an absolute “game changer” in the PPC marketing world. They have made such an impact that marketers are starting to question Google’s own targeting abilities within the display network. The Facebook Ads platform allows advertisers to reach those avid Coffee and Tea drinkers by targeting everything from specific Brands, Flavors, Keurig Cups, Brewing types, etc. However, simply eliminating Display from your strategy is not a wise choice, considering the missed opportunities in reaching that additional audience. If there is one take-away from this article, it is to take what you have learned from Facebook Ads and apply it to your display campaigns.




Fabula AI is using social spread to spot ‘fake news’

February 7, 2019 No Comments

UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”; where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogenous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
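For readers unfamiliar with the technique, geometric deep learning generalizes neural-network convolutions from grids (like images) to arbitrary graphs, so each user's representation gets mixed with its neighbours'. Below is a minimal sketch of a single graph-convolution step in the common symmetric-normalization style; the graph, features and weights are invented for illustration and are in no way Fabula's actual model:

```python
import numpy as np

# Toy social graph of 4 users: adjacency matrix (1 = interaction)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Per-user features, e.g. [account age, follower count] (normalized)
X = np.array([[0.2, 0.9],
              [0.8, 0.1],
              [0.5, 0.5],
              [0.1, 0.3]])

# One graph-convolution step: add self-loops, symmetrically normalize
# the adjacency, then mix each user's features with its neighbours'.
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

W = np.random.default_rng(0).random((2, 2))  # learnable weights (random here)
H = np.maximum(A_norm @ X @ W, 0)            # ReLU(Â X W)
print(H.shape)  # (4, 2): new per-user representations
```

Stacking a few such layers, and feeding the result into a classifier, is the general shape of graph-based learning on network-structured data.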

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”
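The epidemic analogy can be made concrete with a toy independent-cascade simulation, a standard textbook spread model rather than Fabula's method; the follower graph and share probability below are invented for illustration:

```python
import random

def simulate_cascade(graph, seeds, p=0.3, rng=random.Random(42)):
    """Independent-cascade spread: each newly 'infected' user gets one
    chance to pass the story to each follower with probability p."""
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for user in frontier:
            for follower in graph.get(user, []):
                if follower not in infected and rng.random() < p:
                    infected.add(follower)
                    nxt.append(follower)
        frontier = nxt
    return infected

# Hypothetical follower graph: user -> list of followers
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": ["f"]}
reached = simulate_cascade(graph, seeds=["a"])
print(len(reached))  # number of users the story reached
```

Lowering `p` (more skeptical sharers) usually stops the cascade early, which is exactly the "vaccination" intuition Bronstein invokes later in the piece.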

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on sub-sets of Twitter data. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of the year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
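ROC AUC has a handy rank interpretation: it is the probability that a randomly chosen positive example (here, a fake story) is scored above a randomly chosen negative one. A small self-contained sketch; the labels and scores are made up:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank interpretation: the probability that a
    random positive (fake) item is scored above a random negative one,
    counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier output: 1 = fake, 0 = genuine
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]
print(roc_auc(labels, scores))  # ≈ 0.889
```

Note that, as a ranking measure, ROC AUC is insensitive to the score threshold chosen, which is one reason it is a standard aggregate figure for classifier quality.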

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third-party fact-checking NGOs, including Snopes and PolitiFact. Overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t simply been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much or the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.
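That back-of-the-envelope point is easy to check. Assuming, purely for illustration, two billion content items a day of which one percent are fake (both figures invented, not Facebook's or Fabula's numbers):

```python
# Back-of-the-envelope: even a high-accuracy filter leaves a large
# absolute number of fakes at Facebook-like scale.
daily_items = 2_000_000_000   # hypothetical daily content items
fake_share = 0.01             # assume 1% of items are fake
detection_rate = 0.93         # Fabula's reported figure

fakes = daily_items * fake_share
missed = fakes * (1 - detection_rate)
print(f"{missed:,.0f} fakes slip through per day")  # 1,400,000
```

The absolute miss count scales linearly with volume, which is why a percentage that sounds impressive in a benchmark can still mean millions of undetected items in production.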

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests, it could be used in conjunction with other approaches, such as content analysis, and thus function as another string to a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used he says it could do away with the need for independent third party fact-checking organizations altogether because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers able to understand the (many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesis that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total of European Research Council grants plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competition awards.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any potential discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”
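The "important nodes" Bronstein mentions can be approximated with something as simple as out-degree (how many followers a sharer reaches). This is a crude illustrative proxy, not the startup's actual feature set, and the edge list is invented:

```python
from collections import Counter

# Hypothetical share edges: (sharing_user, follower_reached)
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("e", "a")]

# Out-degree as a rough proxy for the key "bottleneck" users whose
# shares reach many followers and can trigger a cascade.
out_degree = Counter(src for src, _ in edges)
key_nodes = [u for u, d in out_degree.most_common() if d >= 2]
print(key_nodes)  # ['a'] — the high-fanout user most likely to trigger spread
```

Real virality prediction would use richer structural features, but degree alone already separates the one high-fanout account from the rest of this toy graph.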

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests they do exist — albeit just not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to handle multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access, maybe with some commercial partners, to test the API, but eventually we would like to make it usable by multiple people from different businesses,” he says. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”


Social – TechCrunch


7 social media monitoring tools to check out in 2019

February 5, 2019 No Comments

The beginning of a new year is as good a reason as any other to try something new: a different lifestyle, a new hobby, a brand new marketing strategy.

And, of course, a new tool, since it’s both exciting and rewarding to discover awesome software that helps you deal with work and, sometimes, with life as well.

This is a list of social media monitoring/listening tools you should check out next year.

Some of them have existed for a while, some are new and fresh on the market.

But all of them are worth trying out and using in your marketing tool set (if social monitoring is in your marketing strategy at all, as it should be). So let’s start:

Which social media monitoring tools should you check out this year?

1. Awario

awario dashboard, a social media monitoring tool

Awario collects mentions of your keywords from a large range of sources (that keeps getting larger).

It monitors all major social media platforms, Reddit and other forums, news sites and blogs, and the Web.

It works in real-time: whenever your keyword is mentioned, it will immediately appear in your mention feed, and you’ll be able to check it at any point and reply to the mention straight from the dashboard. All reviews, complaints, questions and comments can be dealt with as quickly as you like.

Awario also does its fair share of analysis. It analyses the growth of mentions, their Reach (how many people the mentions reach), their sentiment (the percentage of positive, negative, and neutral mentions), and the mentions’ locations, languages, and sources.

You can also generate reports on mentions’ statistics, compare analytics with your competitors, and see your industry influencers.

One of the features that makes Awario stand out is Awario Leads – a recent addition made specifically for finding hot leads online. It brings surprisingly good results and can properly transform the way you sell!

Price: Starts at $29/mo. You can also sign up for a free 14-day trial.

2. Mention

mention dashboard, a social media monitoring tool

Mention is one of the oldest and most established social media monitoring tools. The French company has had time to mature, discover what users need, and make sure it delivers the best possible results.

Its main goal is real-time search: you get the results from the past 24 hours after setting up an alert. Historical data is only available on request.

Mention is a good choice for large companies: it monitors all main sources, lets you tag and organize mentions, build your own custom reports and export them in PDF and CSV.

There’s even an automated reports feature: they update you on what’s happening with your alerts on a regular basis. The tool also finds influencers in your industry, reveals their interests, locations, and follower count, and makes it easy to jump into influencer marketing.

Mention is integrated with Slack and Zapier which makes marketing workflow smooth and simple.

Price: Starts at $29/mo. You can sign up for a free 14-day trial.

3. Brand24

brand24, a social media monitoring tool

Brand24 is a solid social media monitoring tool for small and medium-sized businesses.

It has existed for a while on the Polish market but only fairly recently turned its focus to the English-speaking world. It monitors Facebook, Twitter, Google+, Reddit, YouTube, Instagram, and the web, and saves historical data for up to 12 months.

Like Awario and Mention, Brand24 filters by country and language and provides mention statistics and influencer reports. However, reports aren’t white-label, and data export is only available in the Premium plan.

Brand24 allows multi-user access for up to 99 users, which is great for large social media marketing teams.

Moreover, they have Slack integration and a mobile app, making social media monitoring a process that anyone from the team can do at any point of their day.

Price: Starts at $49/mo. You can sign up for a free 14-day trial.

4. TweetDeck

tweetdeck dashboard, a social media monitoring tool

TweetDeck isn’t quite on the same level as the tools mentioned above, as it monitors only Twitter.

But that’s fair, since it’s a tool by Twitter, for Twitter. And it doesn’t stop at monitoring: you can also schedule posts for Twitter and view Twitter analytics.

Basically, it’s a handy tool if Twitter is your preferred marketing channel and the one you want to keep an eye on. That often makes sense: most social media crises happen there, and it’s the platform where most brands interact with customers.

Price: Free. Sign up with your Twitter account.

5. Keyhole

keyhole dashboard, a social media monitoring tool

Keyhole is a tool that combines a real TweetDeck and a hypothetical TweetDeck for Instagram.

Keyhole does social listening for these two platforms and analyzes the mentions found. It also does automated posting on Twitter and Instagram. Besides these two platforms, Keyhole monitors blogs and news sites.

What Keyhole does especially well is reports. Clear, customizable, and embeddable reports include growth rates, engagement metrics, historical data, social media influencers, and sentiment analysis. You’ve got access to all kinds of presentation forms: clouds, graphs, charts, maps, and so on. It’s a delight for both users and clients.

Price: Starts at $199/mo. Sign up for a free 7-day trial.

6. Brandwatch

brandwatch dashboard, a social media monitoring tool

Brandwatch is a tool made for marketing departments and social media marketing teams that you might find in big corporations. It does social listening across all platforms, blogs, forums, and the web.

It mostly stands out due to its social media analytics and reporting features. Brandwatch collects not only its own social listening data, but also other kinds of data from Hootsuite, Buzzsumo, and Google Analytics.

As a result, you get demographic and psychographic data about your audience, their location and languages, trending topics in your niche, robust sentiment analysis, and all other information you might need to do comprehensive market research.

You also get dashboards that can be exported into customizable PowerPoint presentations.

Price: Starts at $500/mo. There’s no free trial, but you can book a demo here.

7. Talkwalker

talkwalker dashboard, a social media monitoring tool

Talkwalker is another social media monitoring tool that’s made mostly for agencies.

It monitors even niche social media platforms, such as Flickr, Pinterest, Foursquare, SoundCloud, Twitch, and so on, in addition, of course, to all the main ones, such as blogs, forums, news sites, Twitter, Facebook, Instagram, and YouTube. It stands out for the ability to analyze images (e.g., find all photos with your logo).

Talkwalker integrates with Google Analytics and analyzes age, gender, occupation, location, and language; it also identifies main content themes and mention sentiment.

Like Brandwatch, Talkwalker is meant to work with large amounts of data rather than individual mentions, and its main uses are market research and reputation management.

Price: Freemium; paid plans start at $700/mo. Start for free here.

Conclusion

So here you go: a sample of the best social media monitoring tools for every budget and every social listening goal, from customer service to lead generation and extensive market research. Go ahead and start the year with the tool that best fits you and your business!

The post 7 social media monitoring tools to check out in 2019 appeared first on Search Engine Watch.



Tips to maximize ROI on paid social: Facebook + Instagram

January 22, 2019 No Comments

Available ad impressions on social media are hitting a wall as user growth slows, driving up CPC and CPM prices. As demand increases, it becomes even more important for advertisers to properly optimize campaigns to maximize their return on investment for paid social.

According to Merkle’s Q2 2018 Digital Marketing Report, advertiser spend increased 40% year-over-year in Q2, while impressions fell 17%.

The influx of advertising dollars to social media platforms with a steady number of available impressions means that the average cost-per-click (CPC) is rising.

Many paid social media campaigns do not maximize their return on investment because of poor or incomplete optimization, limited distribution, incomplete tracking, and undefined goals.

Here’s what you need to do to squeeze more out of your paid social media campaigns.

Advertising for the funnel

Each advertisement you run must have a clear goal in mind, and that goal must fit into a larger piece of your paid social media strategy. Moving prospects from the top of the funnel to the bottom—as efficiently as possible—is necessary for a successful ad campaign.

Keep in mind that it may take multiple interactions with your advertisements and content before someone works their way through the funnel. Your ad campaigns should never take on a one-and-done approach.

An ad targeting a past purchaser will be very different from an ad targeting someone who is completely unfamiliar with your brand and products.

This makes it important to segment your customers into the correct phase of the buying process. Run different ads with different messages and calls to action for each segment.

Advertise smarter, not harder.

Simple process improvements

A number of small improvements can greatly impact the success of a paid social media campaign. Not implementing these is basically leaving money on the table. Remember, we are trying to squeeze every last drop of ROI out of these campaigns, even if getting the maximum return takes time.

While the examples I cite relate to Facebook and Instagram, we can see equivalents on Twitter, YouTube, Pinterest, LinkedIn, and Snapchat to some degree.

Whichever social media platform you are advertising through, follow platform best practices and make sure everything is set up properly—through tracking pixels and UTM codes. Everything should be properly attributed across platforms.
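To make UTM tagging concrete, here is a minimal Python sketch of appending the standard UTM parameters to a landing-page URL so that clicks from a paid social ad are attributed correctly in your analytics. The URL and campaign values are hypothetical examples; substitute your own.

```python
from urllib.parse import urlencode, urlparse, urlunparse

def add_utm(url, source, medium, campaign):
    """Append utm_source, utm_medium, and utm_campaign to a URL,
    preserving any query parameters the URL already has."""
    parts = urlparse(url)
    utm = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    new_query = f"{parts.query}&{utm}" if parts.query else utm
    return urlunparse(parts._replace(query=new_query))

print(add_utm("https://example.com/landing", "facebook", "paid-social", "spring_sale"))
# -> https://example.com/landing?utm_source=facebook&utm_medium=paid-social&utm_campaign=spring_sale
```

Generating tagged URLs in code rather than by hand keeps naming consistent across campaigns, which is what makes cross-platform attribution reports trustworthy.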

Facebook Pixel

First, make sure that Facebook’s tracking pixel is properly implemented on your website.

Facebook Pixel Helper, a free Chrome browser extension from Facebook, can help you troubleshoot any issues. You can find information on how to set up Facebook Pixel from scratch on Facebook’s website.

You also need to set up Facebook Pixel with standard events like newsletter sign-ups and successful e-commerce actions (add to cart, purchase, etc.) to help with creating higher quality custom and lookalike audiences.

Facebook and Instagram have powerful tracking and conversion optimization abilities in their ad technology, so use them.

Custom audiences

Using Facebook’s custom audiences feature is a must if you want your paid social media campaigns to really perform.

It is foolish to not capture and harness information about your website’s visitors, especially when it is free and requires only minutes to set up. Facebook offers a number of ways to create a custom audience in the Facebook Ads Manager.

Facebook’s Create a Custom Audience Tool

Website traffic

If your Facebook Pixel is properly set up, it can record every action taken by visitors on your website in the past 180 days. The actions include page views, button clicks, abandoned carts, and purchases.

You can create audiences to build lookalike audiences or use them for remarketing.

Advertising to someone who has already been to your website and possibly even completed on-site actions has a much higher chance of converting than advertising to a first-time visitor.

Offline conversions

With proper implementation, you can track offline events, like sales at physical retail locations, after someone has interacted with your Facebook advertisements.

There are two ways to set up offline activity: either upload the offline data CSV file manually to Facebook or sync your CRM directly with Facebook. The customer information will then be matched to the correct user IDs on Facebook.
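Whichever route you take, matching works on hashed customer identifiers, so raw emails should be normalized and SHA-256 hashed before upload. Below is a hedged illustrative sketch of preparing such a file; the column names, event values, and exact layout are assumptions for the example, not Facebook’s precise upload specification.

```python
import csv
import hashlib
import io

def hash_email(email):
    """Normalize an email (trim whitespace, lowercase) and SHA-256 hash it,
    since match keys in offline uploads are expected to be hashed."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def build_offline_events_csv(events):
    """Build CSV text from a list of dicts with raw 'email',
    'event_time', and 'value' keys, hashing the email column."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["email", "event_time", "value"])
    writer.writeheader()
    for event in events:
        writer.writerow({
            "email": hash_email(event["email"]),
            "event_time": event["event_time"],
            "value": event["value"],
        })
    return out.getvalue()

csv_text = build_offline_events_csv([
    {"email": " Jane.Doe@Example.com ", "event_time": "2019-02-01T12:00:00", "value": "49.99"},
])
print(csv_text)
```

Hashing before upload means no plain-text customer data ever leaves your systems, which also simplifies privacy compliance.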

This approach will show you if someone took a specific action, like purchasing after viewing.

Creating offline events

Lookalike audiences

You can create lookalike audiences in the Facebook Ads Manager to find audiences that have similar traits and characteristics to your ideal user.

The lookalike audience is created based on a custom audience, which acts as a seed audience. This allows you to greatly expand the number of potential customers you can target based on a higher-quality custom audience.

Lookalike audiences let you target people similar to your existing, custom audiences

Conversion tracking

Conversions are of paramount importance for e-commerce stores. Website traffic is useless unless it results in sales. Luckily, Facebook and Instagram can help optimize your campaign’s delivery for successful conversions.

Conversion tracking depends on the proper implementation of the tracking pixel and properly set up ad campaigns. You also need to set up standard events or custom conversions on Facebook to accurately measure and optimize for conversions. Google Analytics offers conversion tracking as well, but it’s based on a last-click attribution model.

There is no reason not to track and optimize for conversions. Even media companies that generate revenue through on-site ad units can benefit from optimizing toward conversions, focusing on pages per session to find higher-quality users as opposed to general website visitors.

Remarketing

Remarketing with social media ad managers requires proper implementation of each platform’s tracking pixel.

For example, Facebook’s audience and lookalike audience features are powerful tools that can track users and specific website actions up to 180 days in the past.

Remarketing with these audiences in mind is a strategic approach, and entire campaigns can be built around them. In fact, these types of campaigns often yield the highest returns.  

Facebook remarketing illustration

 

Image: http://marketingjumpleads.com/facebook-remarketing

Sequential advertising

Sequential advertising is when you show different ads to the same person over a period of time. Large television campaigns sometimes use this tactic, but there is no reason why it cannot be successfully applied to paid social media campaigns.

For example, you may show an audience an ad focusing on one benefit of your product. The next ad, shown once the majority of the audience has seen the first one, would highlight another benefit of the product. The third ad would highlight a customer testimonial. You are showing your audience the same product but with different messaging.

If you are running video ads, you could also share a related story via that format. Think the Budweiser Frogs television campaign or some of BMW’s mini-movies. A sequential advertising campaign does not have to go to such lengths to be successful, but fresh, on-brand, eye-catching creative in any form is generally a good thing.

Maximizing distribution

Besides improvements to the advertising process, you can further optimize paid social campaigns by maximizing their distribution, which helps ensure the campaign succeeds against your set goals. Not maximizing the distribution of your ads leaves money on the table.

Regularly refresh your creative

Using the same creative—images, video, and text—over and over can quickly cause fatigue. This means your audience will start to ignore your ads—or worse, start leaving mean comments on them. You’ll also start experiencing an increase in CPM and CPC as you lose more in Facebook’s ad auction.

Regularly refresh your creative to avoid this. It makes a difference, even if you’re just refreshing your images or copy every month.
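One simple way to spot fatigue before CPMs climb is to watch frequency, i.e. impressions divided by reach, for each ad set. The sketch below is an illustrative heuristic, not a Facebook feature; the 4.0 threshold is an arbitrary example, and the metric values are assumed to come from your Ads Manager exports.

```python
def frequency(impressions, reach):
    """Average number of times each reached person has seen the ad."""
    if reach == 0:
        return 0.0
    return impressions / reach

def needs_refresh(impressions, reach, threshold=4.0):
    """Flag an ad set for new creative once average frequency
    passes a chosen threshold (4.0 here is an example value)."""
    return frequency(impressions, reach) >= threshold

print(needs_refresh(48_000, 10_000))  # frequency 4.8 -> True
print(needs_refresh(12_000, 10_000))  # frequency 1.2 -> False
```

Running a check like this across all active ad sets turns creative refreshes from guesswork into a scheduled, data-driven task.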

Standing out in the newsfeed is a big part of successful paid campaigns. If you are using photographs or videos, they need to be high quality and relatable to make the user stop scrolling through their newsfeed.

Pay special attention to resolution, aspect ratio, and how the ad units look on a mobile device. The majority of users will see your ad on their phone, so make sure it’s thumb-stopping.   

Use all available placements

Facebook is always optimizing for the lowest event cost possible. The vast majority of your results will come from ads run on the Facebook or Instagram newsfeed. But don’t forget about other placements, like the sidebar, messenger, and marketplace.

Automatic placements are the best option to maximize results beyond the newsfeeds.

All placements selected.

Image: https://www.facebook.com/business/help/965529646866485

Limited placements selected.

Image: https://www.facebook.com/business/help/965529646866485

Optimize for mobile

Unless you specifically target only desktop device users, the majority of the impressions or clicks you receive will be from mobile devices. This means you better make sure your creative is mobile-friendly.

Make sure all of your images and videos are formatted to maximize the viewable space on mobile for the type of advertisement you are running. Your headlines and accompanying text also need to be optimized to fit.

If you’re using videos, make sure they’re formatted to a 1:1 aspect ratio (square videos) to take up the most room on the Facebook mobile newsfeed and outperform horizontal aspect videos.

Benefits of mobile optimized video

Image: https://blog.bufferapp.com/square-video-vs-landscape-video

Minimize restrictions for the Facebook algorithm

Don’t try to control Facebook too much. Instead, give Facebook room to show your ads to the correct users at the correct time with the least necessary targeting restrictions. The more freedom the algorithm has to use your pixel data, the better able it is to encourage conversions.  

Josh Thompson is Senior Social Media Strategist at Portent—a Clearlink Digital Agency. Josh is Facebook Blueprint Certified and has worked in social media advertising for seven years.

The post Tips to maximize ROI on paid social: Facebook + Instagram appeared first on Search Engine Watch.



Pew: Social media for the first time tops newspapers as a news source for US adults

December 10, 2018 No Comments

It’s not true that everyone gets their news from Facebook and Twitter. But it is now true that more U.S. adults get their news from social media than from print newspapers. According to a new report from Pew Research Center out today, social media has for the first time surpassed newspapers as a preferred source of news for American adults. However, social media is still far behind other traditional news sources, like TV and radio, for example.

Last year, the portion of those who got their news from social media was around equal to those who got their news from print newspapers, Pew says. But in its more recent survey conducted from July 30 through August 12, 2018, that had changed.

Now, one-in-five U.S. adults (20 percent) are getting news from social media, compared with just 16 percent of those who get news from newspapers, the report found. (Pew had asked respondents if they got their news “often” from the various platforms.)

The change comes at a time when newspaper circulation is on the decline and print’s popularity as a news medium is fading, particularly with younger generations. In fact, the report noted that print only remains popular today with the 65 and up crowd, where 39 percent get their news from newspapers. By comparison, no more than 18 percent of any other age group does.

While the decline of print has now given social media a slight edge, it’s nowhere near dominating other formats.

Instead, TV is still the most popular destination for getting the news, even though that’s been dropping over the past couple of years. TV is then followed by news websites, radio and then social media and newspapers.

But “TV news” doesn’t necessarily mean cable news networks, Pew clarifies.

In reality, local news is the most popular, with 37 percent getting their news there often. Meanwhile, 30 percent get cable TV news often and 25 percent watch the national evening news shows often.

However, if you look at the combination of news websites and social media together, a trend toward increasing news consumption from the web is apparent. Together, 43 percent of U.S. adults get their news from the web in some way, compared to 49 percent from TV.

There’s a growing age gap between TV and the web, too.

A huge majority (81 percent) of those 65 and older get news from TV, and so does 65 percent of those ages 50 to 64. Meanwhile, only 16 percent of the youngest consumers — those ages 18 to 29 — get their news from TV. This is the group pushing forward the cord cutting trend, too — or more specifically, many of them are the “cord-nevers,” as they’re never signing up for pay TV subscriptions in the first place. So it’s not surprising they’re not watching TV news.

Plus, a meager 2 percent get their news from newspapers in this group.

This young demographic greatly prefers digital consumption, with 27 percent getting news from news websites and 36 percent from social media. That is to say, they’re four times as likely as those 65 and up to get news from social media.

Meanwhile, online news websites are the most popular with the 30 to 49-year-old crowd, with 42 percent saying they get their news often from this source.

Despite their preference for digital, younger Americans’ news consumption is better spread out across mediums, Pew points out.

“Younger Americans are also unique in that they don’t rely on one platform in the way that the majority of their elders rely on TV,” Pew researcher Elisa Shearer writes. “No more than half of those ages 18 to 29 and 30 to 49 get news often from any one news platform,” she says.


Social – TechCrunch


New Jobs Open! Social Campaign Strategist, Product Evangelist, Search Engine Marketing Manager + More!

December 8, 2018 No Comments

In the past couple of weeks, we’ve had 7 new jobs posted to the PPC Hero Job Board! Take a look at what’s available. Here’s a brief look at some of the newly posted positions: eLama New York, NY Role: Product Evangelist eLama is a leading digital marketing automation service in Russia and CIS. Due to […]

Read more at PPCHero.com


Paid Social Predictions for Marketers in 2020

December 3, 2018 No Comments

Now that we’re all aware that paid social is crucially important for your business…it’s time to catch up with the always-changing ad formats, algorithms, and audiences.

Read more at PPCHero.com


Limiting social media use reduced loneliness and depression in new experiment

November 10, 2018 No Comments

The idea that social media can be harmful to our mental and emotional well-being is not a new one, but little has been done by researchers to directly measure the effect; surveys and correlative studies are at best suggestive. A new experimental study out of Penn State, however, directly links more social media use to worse emotional states, and less use to better.

To be clear on the terminology here, a simple survey might ask people to self-report that using Instagram makes them feel bad. A correlative study would, for example, find that people who report more social media use are more likely to also experience depression. An experimental study compares the results from an experimental group with their behavior systematically modified, and a control group that’s allowed to do whatever they want.

This study, led by Melissa Hunt at Penn State’s psychology department, is the latter — which despite intense interest in this field and phenomenon is quite rare. The researchers only identified two other experimental studies, both of which only addressed Facebook use.

One hundred and forty-three students from the school were monitored for three weeks after being assigned to either limit their social media use to about 10 minutes per app (Facebook, Snapchat and Instagram) per day or continue using it as they normally would. They were monitored for a baseline before the experimental period and assessed weekly on a variety of standard tests for depression, social support and so on. Social media usage was monitored via the iOS battery use screen, which shows app use.

The results are clear. As the paper, published in the latest Journal of Social and Clinical Psychology, puts it:

The limited use group showed significant reductions in loneliness and depression over three weeks compared to the control group. Both groups showed significant decreases in anxiety and fear of missing out over baseline, suggesting a benefit of increased self-monitoring.

Our findings strongly suggest that limiting social media use to approximately 30 minutes per day may lead to significant improvement in well-being.

It’s not the final word in this, however. Some scores did not see improvement, such as self-esteem and social support. And later follow-ups to see if feelings reverted or habit changes were less than temporary were limited because most of the subjects couldn’t be compelled to return. (Psychology, often summarized as “the study of undergraduates,” relies on student volunteers who have no reason to take part except for course credit, and once that’s given, they’re out.)

That said, it’s a straightforward causal link between limiting social media use and improving some aspects of emotional and social health. The exact nature of the link, however, is something at which Hunt could only speculate:

Some of the existing literature on social media suggests there’s an enormous amount of social comparison that happens. When you look at other people’s lives, particularly on Instagram, it’s easy to conclude that everyone else’s life is cooler or better than yours.

When you’re not busy getting sucked into clickbait social media, you’re actually spending more time on things that are more likely to make you feel better about your life.

The researchers acknowledge the limited nature of their study and suggest numerous directions for colleagues in the field to take it from here. A more diverse population, for instance, or including more social media platforms. Longer experimental times and comprehensive follow-ups well after the experiment would help, as well.

The 30-minute limit was chosen as a conveniently measurable one, but the team does not intend to say that it is by any means the “correct” amount. Perhaps half or twice as much time would yield similar or even better results, they suggest: “It may be that there is an optimal level of use (similar to a dose response curve) that could be determined.”

Until then, we can use common sense, Hunt suggested: “In general, I would say, put your phone down and be with the people in your life.”


Social – TechCrunch

