CBPO

Author: CBPO

New Jobs Open! Social Campaign Strategist, Product Evangelist, Search Engine Marketing Manager + More!

December 8, 2018

In the past couple of weeks, we’ve had 7 new jobs posted to the PPC Hero Job Board! Take a look at what’s available. Here’s a brief look at some of the newly posted positions: eLama (New York, NY), Role: Product Evangelist. eLama is a leading digital marketing automation service in Russia and the CIS. Due to […]

Read more at PPCHero.com
PPC Hero


How to increase page speed to improve SEO results

December 8, 2018

Page speed has been a part of Google’s search ranking algorithms for quite some time, but it’s been entirely focused on desktop searches until recently when Google began using page speed as a ranking factor for mobile searches as well.

Have you checked your page speed scores lately?

How do your speeds match up against your competition?

If your pages are loading slower than competitors, there’s a chance you’re taking a hit in the SERPs. While relevance of a page carries much more weight than page speed, it’s still important to ensure your pages are loading fast for users and search engines.

Here are 5 ways to increase page speed and improve SEO results.

Compress images

Large image files can have a significant negative impact on page speed performance. Images often represent the largest portion of bytes when downloading a page. This is why optimizing images generally returns the biggest improvement in speed performance. Compressing your images using an image compression tool will reduce their file size leading to faster loading pages for both users and search engines, which in turn will have a positive impact on your organic search rankings.
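For those who want to script this step, here is a minimal Python sketch using the Pillow library; the file names are placeholders, and a real workflow would typically rely on a dedicated compression tool or a CDN rather than a one-off script:

```python
# Minimal sketch: resize oversized JPEGs and re-encode them at a lower quality.
# Assumes Pillow is installed (pip install Pillow); the paths are placeholders.
from pathlib import Path
from PIL import Image

def compress_jpeg(src: str, dest: str, quality: int = 75, max_width: int = 1600) -> None:
    """Shrink overly wide images and re-save them at a lower JPEG quality."""
    img = Image.open(src).convert("RGB")
    if img.width > max_width:
        ratio = max_width / img.width
        img = img.resize((max_width, int(img.height * ratio)))
    # optimize=True lets Pillow spend extra time finding a smaller encoding.
    img.save(dest, "JPEG", quality=quality, optimize=True)

if __name__ == "__main__":
    compress_jpeg("hero-banner.jpg", "hero-banner-compressed.jpg")
    print(Path("hero-banner-compressed.jpg").stat().st_size, "bytes after compression")
```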

Leverage browser caching

Web browsers cache quite a bit of information, including images, JavaScript files and stylesheets. The benefit is that when visitors revisit your site, the browser doesn’t have to reload the whole page. If your server does not include caching headers or if resources are only cached for a short period of time, then pages on your site will load slower because browsers must reload all of this information.

Google recommends setting a minimum cache time of one week (and preferably up to one year) for static assets, or assets that change infrequently. So, work with your web developer to ensure caching is set up for optimal page speed performance.
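You can spot-check the caching headers your server sends with a few lines of Python. This is a rough sketch using the requests library; the asset URLs are placeholders:

```python
# Spot-check caching headers on static assets (pip install requests).
# Substitute your own asset URLs for the placeholders below.
import requests

ASSETS = [
    "https://www.example.com/assets/logo.png",
    "https://www.example.com/assets/styles.css",
]

for url in ASSETS:
    resp = requests.head(url, timeout=10)
    cache_control = resp.headers.get("Cache-Control", "<missing>")
    expires = resp.headers.get("Expires", "<missing>")
    print(f"{url}\n  Cache-Control: {cache_control}\n  Expires: {expires}")
```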

Decrease server response time

There are numerous potential factors that may slow down the response of your server: slow database queries, slow routing, frameworks, libraries, slow application logic, or insufficient memory. All these factors should be taken into consideration when trying to improve your server’s response time.

Aim for a server response time under 200ms. SEO marketers should work with their website hosting provider to reduce server response time and increase page speed performance.
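As a rough way to gauge server response time before and after changes, you can time a handful of requests. A simple sketch, using the requests library and a placeholder URL, might look like this:

```python
# Rough server response time check (pip install requests).
# resp.elapsed measures the time from sending the request until the response
# headers arrive, which is a reasonable proxy for server response time.
import requests

url = "https://www.example.com/"  # placeholder
timings = []
for _ in range(5):
    resp = requests.get(url, timeout=10)
    timings.append(resp.elapsed.total_seconds() * 1000)

print(f"Average response time over {len(timings)} requests: {sum(timings) / len(timings):.0f} ms")
```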

Enable Gzip compression

Your pages will load slower if your site has compressible resources that are served without Gzip compression. Gzip, a software application for file compression, should be utilized to reduce the size of files on your site such as CSS, HTML, and JavaScript (but not images).

You will need to determine which type of server your site runs on before enabling Gzip compression, as each server requires its own configuration; Apache, Nginx, and IIS, for example, each enable compression differently.

Again, your hosting provider can help you enable Gzip compression accordingly. You’d be surprised how much faster your pages load by having Gzip implemented.
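To confirm compression is actually being served, check the Content-Encoding header on a few key resources. Here is a minimal sketch using Python's requests library with a placeholder URL:

```python
# Check whether a resource is served compressed (pip install requests).
# The URL is a placeholder; try it against your own HTML, CSS, and JS files.
import requests

url = "https://www.example.com/assets/styles.css"
resp = requests.get(url, headers={"Accept-Encoding": "gzip, deflate, br"}, timeout=10)

encoding = resp.headers.get("Content-Encoding", "none")
print(f"Content-Encoding: {encoding}")
if encoding == "none":
    print("This resource appears to be served uncompressed; ask your host about Gzip.")
```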

Avoid multiple landing page redirects

Having more than one redirect from a given URL to the final landing page can slow page load time. Redirects prompt an additional HTTP request-response which can delay page rendering. SEO Marketers should minimize the number of redirects to improve page speed. Check your redirects and make sure you don’t have redundant redirects that could be slowing load time.
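A quick way to audit this is to trace the redirect chain for your most important entry URLs. The sketch below uses the requests library and a placeholder URL:

```python
# Trace the redirect chain for a URL (pip install requests).
# Anything longer than one hop is worth investigating.
import requests

url = "http://example.com/old-page"  # placeholder
resp = requests.get(url, allow_redirects=True, timeout=10)

print(f"{len(resp.history)} redirect(s) before the final page:")
for hop in resp.history:
    print(f"  {hop.status_code}  {hop.url}  ->  {hop.headers.get('Location')}")
print(f"Final URL: {resp.url} ({resp.status_code})")
```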

Conclusion

SEO marketers must continually analyze and improve page speed. A great place to start is compressing images, leveraging caching, reducing server response time, enabling file compression, and removing multiple or redundant redirects.

I urge marketers to periodically use Google’s PageSpeed Insights tool to check their load times and compare their websites to competitors’ sites. The tool also provides specific, recommended optimizations to increase a site’s page speed performance.
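The tool can also be queried programmatically. The sketch below calls the PageSpeed Insights API (v5) from Python with a placeholder URL; the response field names reflect the v5 Lighthouse payload and may change over time, so treat this as illustrative:

```python
# Query the PageSpeed Insights API (v5) for a mobile performance score.
# pip install requests. An API key is optional for light usage.
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {"url": "https://www.example.com/", "strategy": "mobile"}  # placeholder URL

data = requests.get(API, params=params, timeout=60).json()
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print(f"Mobile performance score: {score * 100:.0f}/100")
```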

As Google continues to favor fast-loading websites, it’s crucial that SEO experts take the necessary steps to ensure their sites’ pages are meeting (and beating) Google’s expectations. Today, improving page speed is an essential aspect of any successful SEO program.

The post How to increase page speed to improve SEO results appeared first on Search Engine Watch.

Search Engine Watch


FB QVC? Facebook tries Live video shopping

December 7, 2018

Want to run your own home shopping network? Facebook is now testing a Live video feature for merchants that lets them demo and describe their items for viewers. Customers can screenshot something they want to buy and use Messenger to send it to the seller, who can then request payment right through the chat app.

Facebook confirms the new shopping feature is currently in testing with a limited set of Pages in Thailand, which has been a testbed for shopping features. The option was first spotted by social media and reputation manager Jeff Higgins, and re-shared by Matt Navarra and Social Media Today. But now Facebook is confirming the test’s existence and providing additional details.

The company tells me it had heard feedback from the community in Thailand that Live video helped sellers demonstrate how items could be used or worn, and provided richer understanding than just using photos. Users also told Facebook that Live’s interactivity let customers instantly ask questions and get answers about product specifications and details. Facebook has looked to Thailand to test new commerce experiences like home rentals in Marketplace, as the country’s citizens were quick to prove how Facebook Groups could be used for peer-to-peer shopping. “Thailand is one of our most active Marketplace communities” says Mayank Yadav, Facebook product manager for Marketplace.

Now it’s running the Live shopping test, which allows Pages to notify fans that they’re broadcasting to “showcase products and connect with your customers.” Merchants can take reservations and request payments through Messenger. Facebook tells me it doesn’t currently have plans to add new partners or expand the feature. But some sellers without access are being invited to join a waitlist for the feature. It also says it’s working closely with its test partners to gather feedback and iterate on the live video shopping experience, which would seem to indicate it’s interested in opening the feature more widely if it performs well.

Facebook doesn’t take a cut of payments through Messenger, but the feature could still help earn the company money at a time when it’s seeking revenue streams beyond News Feed ads as it runs out of space there, Stories take over as the top media form and user growth plateaus. Hooking people on video viewing helps Facebook show lucrative video ads. The more that Facebook can train users to buy and sell things on its app, the better the conversion rates will be for businesses, and the more they’ll be willing to spend on ads. Facebook could also convince sellers who broadcast Live to buy its new Marketplace ad units to promote their wares. And Facebook is happy to snatch any use case from the rest of the internet, whether it’s long-form video viewing or job applications or shopping to boost time on site and subsequent ad views.

Increasingly, Facebook is setting its sights on Craigslist, Etsy and eBay. Those commerce platforms have failed to keep up with new technologies like video and lack the trust generated by Facebook’s real-name policy and social graph. A few years ago, selling something online meant typing up a generic description and maybe uploading a photo. Soon it could mean starring in your own infomercial.

[PostScript: And a Facebook home shopping network could work perfectly on its new countertop smart display Portal.]


Social – TechCrunch


Canada, France Plan Global Panel to Study the Effects of AI

December 7, 2018

The International Panel on Artificial Intelligence will be modeled on a group formed in 1988 to study climate change and recommend government policies.
Feed: All Latest


Seized cache of Facebook docs raise competition and consent questions

December 5, 2018

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained through a legal discovery process by a startup that is suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

  1. White Lists: Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data: It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity: Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android: Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo: Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor apps: The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Though the timing of Facebook’s policy shift announcement hardly looks coincidental, given Collins said last week that the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp, which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent’, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

One email dated November 15, 2013, from Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeat claim — including under questions from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.


Social – TechCrunch


UK Documents Suggest Facebook Traded User Privacy For Growth

December 5, 2018

The newly public documents provide a rare window into Facebook CEO Mark Zuckerberg’s thoughts on how to expand his social media juggernaut.
Feed: All Latest


Could Another Digital Agency Easily Steal Your Clients?

December 5, 2018

Explore a few ways to be proactive in bringing new insights and ideas to your PPC accounts.

Read more at PPCHero.com
PPC Hero


Microsoft and Docker team up to make packaging and running cloud-native applications easier

December 4, 2018

Microsoft and Docker today announced a new joint open-source project, the Cloud Native Application Bundle (CNAB), that aims to make the lifecycle management of cloud-native applications easier. At its core, the CNAB is nothing but a specification that allows developers to declare how an application should be packaged and run. With this, developers can define their resources and then deploy the application to anything from their local workstation to public clouds.

The specification was born inside Microsoft, but as the team talked to Docker, it turned out that the engineers there were working on a similar project. The two decided to combine forces and launch the result as a single open-source project. “About a year ago, we realized we’re both working on the same thing,” Microsoft’s Gabe Monroy told me. “We decided to combine forces and bring it together as an industry standard.”

As part of this launch, Microsoft is launching its own reference implementation of a CNAB client today. Duffle, as it’s called, allows users to perform all the usual lifecycle steps (install, upgrade, uninstall), create new CNAB bundles and sign them cryptographically. Docker is working on integrating CNAB into its own tools, too.

Microsoft also launched a Visual Studio extension today for building and hosting these bundles, as well as an example implementation of a bundle repository server and an Electron installer that lets you install a bundle with the help of a GUI.

Now it’s worth noting that we’re talking about a specification and reference implementations here. There is obviously a huge ecosystem of lifecycle management tools on the market today that all have their own strengths and weaknesses. “We’re not going to be able to unify that tooling,” said Monroy. “I don’t think that’s a feasible goal. But what we can do is we can unify the model around it, specifically the lifecycle management experience as well as the packaging and distribution experience. That’s effectively what Docker has been able to do with the single-workload case.”

Over time, Microsoft and Docker would like the specification to end up in a vendor-neutral foundation. Which one remains to be seen, though the Open Container Initiative seems like the natural home for a project like this.


Enterprise – TechCrunch


TechSEO Boost: Machine Learning for SEOs

December 4, 2018

This year’s TechSEO Boost, an event dedicated to technical SEO and hosted by Catalyst, took place on November 29 in Boston.

Billed as the conference “for developers and advanced SEO specialists,” TechSEO Boost built on the success of the inaugural event in 2017 with a day of enlightening, challenging talks from the sharpest minds in the industry.

Some topics permeated the discourse throughout the day; machine learning, in particular, was a recurring theme.

As is the nature of the TechSEO Boost conference, the sessions aimed to go beyond the hype to define what precisely machine learning means for SEO, both today and in future.

The below is a recap of the excellent talk from Britney Muller, Senior SEO Scientist at Moz, entitled (fittingly enough) “Machine Learning for SEOs.”

What is machine learning? A quick recap.

The session opened with a brief primer on the key terms and concepts that fit under the umbrella of “machine learning.”

Muller used the definition in the image below to capture the sense of machine learning as “a subset of AI (Artificial Intelligence) that combines statistics and programming to give computers the ability to ‘learn’ without explicitly being programmed.”

[Image: definition of machine learning]

That core idea of “learning” from new stimuli is an important one to grasp as we consider how machine learning can be applied to daily SEO tasks.

Machine learning excels at identifying patterns in huge quantities of data. As such, some of the common examples of machine learning applications today include:

  • Recommender systems (Netflix, Spotify)
  • Ridesharing apps (Uber, Lyft)
  • Digital Assistants (Amazon Alexa, Apple Siri, Google Assistant)

This very ubiquity can make it a challenging concept to grasp, however. In fact, Eric Schmidt at Google has gone so far as to say, “The core thing Google is working on is basically machine learning.”

It is helpful to break this down into the steps that comprise a typical machine learning project, in order to see how we might apply this to everyday SEO tasks.

The machine learning process

The image below represents the machine learning process Muller shared at TechSEO Boost:

[Image: the machine learning process]

It is important to bear in mind that some of the training data should be reserved for testing at a later point in the process.

Where possible, this data should also be labelled clearly to help the machine learning algorithm identify classifications and categories within a noisy data set.

It is for precisely this reason that Google asks us to label images to verify our identity:

[Image: street signs, Google image verification]

This demonstrates our human ability to pick out objects in cluttered contexts, but it has the added benefit of providing Google with higher quality image data.

The pitfalls of an unsupervised approach to machine learning, and a training data set that is open to interpretation, were laid bare just last week.

Google’s ‘Smart Compose’ feature within Gmail has demonstrated gender bias by preferring certain pronouns when predicting what a user might want to say.

As reported in Reuters, Gmail product manager Paul Lambert said a company research scientist discovered the problem in January when he typed “I am meeting an investor next week,” and Smart Compose suggested a possible follow-up question: “Do you want to meet him?” instead of “her.”

[Image: Gmail Smart Compose]

The challenge here is not restricted to projects on such a scale. Marketers who want to get their hands dirty must be aware of the limitations of machine learning, as well as its exciting possibilities.

Muller added that people tend to overfit their data, which reduces the accuracy and flexibility of the model they are using. This (very common) phenomenon occurs when a model corresponds very closely with one specific data set, reducing its applicability to new scenarios.

The ability to scale effectively is what gives machine learning its appeal, so overfitting is something to be avoided with care. There is a good primer on this topic here, and it is also explained very well through this image:

[Image: the best way to explain overfitting]
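To make overfitting concrete, the toy sketch below (assuming scikit-learn and NumPy are installed) fits a simple and a very flexible model to the same noisy data; the flexible model scores better on the training set but worse on held-out data:

```python
# Toy illustration of overfitting: a high-degree polynomial fits the training
# sample almost perfectly but generalizes worse than a simpler model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, size=(40, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=40)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:>2}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```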

So, how exactly can this subset of AI be used to improve SEO performance?

How you can use machine learning for SEO

As is the case with all hype-friendly technologies, businesses are keen to get involved with machine learning. However, the point is not to “use machine learning” through fear of being left behind, but rather to find the best uses of machine learning for each business.

Britney Muller shared some examples from her role at Moz during her session at TechSEO Boost.

The first was an approach to automated meta description generation using the Algorithmia Advanced Content Summarizer, which was then compared to Google’s approach to automated descriptions pulled directly from the landing page.

Meta descriptions remain an important asset when trying to encourage a positive click-through rate, but a lot of time is spent crafting these snippets. An automated alternative that can interpret the meaning of landing pages and create clickable summaries for display in the SERPs would be very useful.

[Image: SERP Q&A results comparison]

Muller shared some examples, such as the image above, to demonstrate the comparison between the two approaches. The machine learning approach is not perfect and may require some tweaking, but it does an excellent job of conveying the page’s intent when compared to Google’s selection.

The team at Moz has since built this into Google Sheets:

[Image: Google Sheets meta descriptions]

Although this is not a product other businesses can access right now, an alternative way of achieving automated meta descriptions has been shared by Paul Shapiro (the TechSEO Boost host) via Github here.
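For illustration only, here is a naive extractive approach in Python. It is not the Moz workflow or Shapiro's script; the URL is a placeholder, and it assumes the requests and beautifulsoup4 packages are installed:

```python
# Naive sketch of automated meta descriptions: fetch the page text, score
# sentences by word frequency, and keep the best ones under ~155 characters.
import re
from collections import Counter

import requests
from bs4 import BeautifulSoup

def meta_description(url: str, max_len: int = 155) -> str:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    summary = ""
    for sentence in sorted(sentences, key=score, reverse=True):
        if len(summary) + len(sentence) + 1 > max_len:
            continue
        summary = f"{summary} {sentence}".strip()
    return summary

print(meta_description("https://www.example.com/some-landing-page"))  # placeholder URL
```

A production system would use a proper summarization model rather than raw word frequencies, but the pipeline shape (fetch, summarize, truncate to SERP length) is the same.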

Automated image optimization

Another fascinating use of machine learning for SEO is the automation of image optimization. Britney Muller showed how, in under 20 minutes, it is possible to train an algorithm to distinguish between cats and ducks, then use this model on a new data set with a high level of accuracy.

[Image: recognize ducks vs snakes]

For large retailers, the application of this method could be very beneficial. With so many new images added to the inventory every day, and with visual search on the rise, a scalable image labeling system would prove very profitable. As demonstrated at TechSEO Boost, this is now a very realistic possibility for businesses willing to build their own model.
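As a rough idea of what such a quick classifier might look like, here is a hedged transfer-learning sketch in tf.keras. The folder layout (data/train/cats, data/train/ducks, and a matching data/val) is hypothetical, and this is not the exact model Muller demonstrated:

```python
# Transfer-learning sketch: reuse a pretrained MobileNetV2 and train only a
# small binary head to separate two image classes.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                         input_shape=(224, 224, 3), weights="imagenet")
base.trainable = False  # keep the pretrained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. cat vs duck
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input)
train = datagen.flow_from_directory("data/train", target_size=(224, 224),
                                    batch_size=16, class_mode="binary")
val = datagen.flow_from_directory("data/val", target_size=(224, 224),
                                  batch_size=16, class_mode="binary")

model.fit(train, validation_data=val, epochs=3)
```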

A further use of machine learning described by Britney Muller was the transcription of podcasts. An automated approach to this task can turn audio files into something much more legible for a search engine, thereby helping with indexation and ranking for relevant topics.

Muller detailed an approach using the Amazon Transcribe product through Amazon Web Services to achieve this aim.

[Image: Amazon Transcribe tool]

The audio is broken down and delivered as a JSON file in a lot of detail, with the different speakers on the podcast labelled separately.

[Image: transcription JSON string]
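A hedged sketch of kicking off such a job with boto3 is shown below; the job name and S3 URI are placeholders, and AWS credentials are assumed to be configured:

```python
# Start an Amazon Transcribe job for a podcast episode (pip install boto3).
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="podcast-episode-42",                # hypothetical job name
    Media={"MediaFileUri": "s3://my-bucket/episode-42.mp3"},  # placeholder S3 URI
    MediaFormat="mp3",
    LanguageCode="en-US",
    Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2},
)

# Check the job status; once it reports COMPLETED, the response includes a
# URI for the detailed JSON transcript with per-speaker labels.
job = transcribe.get_transcription_job(TranscriptionJobName="podcast-episode-42")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```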

There was not enough time in the session to work through every potential use of machine learning for SEO, but Muller’s core message was that everyone in the industry should be working towards at least a working knowledge of these concepts.

Some further opportunities for experimentation were listed as follows:

[Image: opportunities with machine learning]

As we can see, machine learning truly excels when working with large data sets to identify patterns.

Tools and resources

The best way to get engaged is to combine theory with practice. This is almost always the case, but it is a particularly valid piece of advice in relation to programming.

Muller’s was not the first or last talk to reference Google Codelabs throughout the day.

[Image: Google Codelabs]

There are more resources out there than ever before and the likes of Amazon and Google want machine learning to be approachable. Amazon has launched a machine learning course and Google’s crash course is a fantastic way to learn the components of a successful project.

[Image: Google Codelabs machine learning crash course]

The Google-owned Kaggle is always a great place to trial new data sets and review the innovative work performed by data scientists around the world, once a basic grasp has been attained.

Furthermore, Google’s Colaboratory makes it easy to get started on a project and work with a remote team.

Key takeaways: machine learning for SEOs

What became particularly clear through Muller’s talk is how approachable machine learning applications can be for SEOs. Moreover, the room for experimentation is unprecedented, for those willing to invest some time in the discipline.

[Image: key takeaways]

The post TechSEO Boost: Machine Learning for SEOs appeared first on Search Engine Watch.

Search Engine Watch


Tumblr will delete all porn from its platform

December 4, 2018

Tumblr, a microblogging service whose impact on internet culture has been massive and unique, is preparing for a major change that’s sure to upset many of its millions of users.

On December 17, Tumblr will be banning porn, errr “adult content,” from its site and encouraging users to flag that content for removal. Existing adult content will be set to a “private mode” viewable only to the original poster.

What does “adult content” even mean? Well, according to Tumblr, the ban means the removal of any media that depicts “real-life human genitals or female-presenting nipples, and any content—including photos, videos, GIFs and illustrations—that depicts sex acts.”

This is a lot more complicated than just deleting some hardcore porn from the site; over the past several years Tumblr has become a hub for communities and artists with more adult themes. This has largely been born out of the fact that adult content has been disallowed from other multimedia-focused social platforms. There are bans on nudity and sexual content on Instagram and Facebook, though Twitter has more relaxed standards.

Why now? The Tumblr app was removed from the iOS app store several weeks ago due to an issue with its content filtering that led the company to issue a statement. “We’re committed to helping build a safe online environment for all users, and we have a zero tolerance policy when it comes to media featuring child sexual exploitation and abuse,” the company had detailed. “We’re continuously assessing further steps we can take to improve and there is no higher priority for our team.”

We’ve reached out to Tumblr for further comment.

Update: In a blog post titled “A better, more positive Tumblr,” the company’s CEO Jeff D’Onofrio downplayed claims that the content ban was related to recent issues surrounding child porn, saying it is instead intended to make the platform one “where more people feel comfortable expressing themselves.”

“As Tumblr continues to grow and evolve, and our understanding of our impact on our world becomes clearer, we have a responsibility to consider that impact across different age groups, demographics, cultures, and mindsets,” the post reads. “Bottom line: There are no shortage of sites on the internet that feature adult content. We will leave it to them and focus our efforts on creating the most welcoming environment possible for our community.”

The imminent “adult content” ban will not apply to media connected with breastfeeding, birth or more general “health-related situations” like surgery, according to the company.

Tumblr is attempting to minimize the impact on the site’s artistic community as well, but this level of nuance is going to be incredibly difficult to enforce uniformly and will more than likely lead to a lot of frustrated users being told that their content does not qualify as “art.”

Tumblr is also looking to minimize the impact on more artistic storytelling, noting that written and artistic content “such as erotica, nudity related to political or newsworthy speech, and nudity found in art, such as sculptures and illustrations, are also stuff that can be freely posted on Tumblr.”

I don’t know how much it needs to be reiterated that child porn is a major issue plaguing the web, but a blanket ban on adult content on a platform that has gathered so many creatives working with NSFW themes is undoubtedly going to be a pretty controversial decision for the company.


Social – TechCrunch