
Delete your pages and rank higher in search – Index bloat and technical optimization 2019

July 16, 2019

If you’re looking for a way to optimize your site for technical SEO and rank better, consider deleting your pages.

I know, crazy, right? But hear me out.

We all know Google can be slow to index content, especially on new websites. But occasionally, it can aggressively index anything and everything it can get its robot hands on, whether you want it to or not. This can cause terrible headaches, hours of cleanup, and ongoing maintenance, especially on large sites and/or ecommerce sites.

Our job as search engine optimization experts is to make sure Google and other search engines can first find our content so that they can then understand it, index it, and rank it appropriately. When we have an excess of indexed pages, we are not being clear with how we want search engines to treat our pages. As a result, they take whatever action they deem best which sometimes translates to indexing more pages than needed.

Before you know it, you’re dealing with index bloat.

What is index bloat?

Put simply, index bloat is when you have too many low-quality pages on your site indexed in search engines. Similar to bloating in the human digestive system (disclaimer: I’m not a doctor), the result of processing this excess content shows up in search engine indices when the information retrieval process becomes less efficient.

Index bloat can even make your life difficult without you knowing it. In this puffy and uncomfortable situation, Google has to go through much more content than necessary (most of the time low-quality and internally duplicated content) before it can get to the pages you want it to index.

Think of it this way: Google visits your XML sitemap to find 5,000 pages, then crawls all your pages and finds even more of them via internal linking, and ultimately decides to index 30,000 URLs. That’s 25,000 more pages than you intended – an indexation excess of approximately 500%, or even more.

But don’t worry, diagnosing your indexation rate to check for index bloat can be a very simple and straightforward exercise. You simply need to cross-reference which pages you want to get indexed versus the ones that Google is indexing (more on this later).

The objective is to find that disparity and take the most appropriate action. We have two options:

  1. Content is of good quality = Keep indexability
  2. Content is of low quality (thin, duplicate, or paginated) = noindex

You will find that, most of the time, fixing index bloat means removing a relatively large number of pages from the index by adding a “noindex” meta tag. However, this indexation analysis can also surface pages that were missed during the creation of your XML sitemap(s); those can then be added to your sitemap(s) for better indexing.
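
For reference, a page-level noindex is simply a robots meta tag in the page’s <head>. A minimal illustrative snippet (the “follow” value keeps search engines crawling the links on the page even though the page itself won’t be indexed):

  <head>
    <!-- Tell search engines not to index this page, but still follow its links -->
    <meta name="robots" content="noindex, follow">
  </head>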

Why index bloat is detrimental to SEO

Index bloat can slow processing time, consume more resources, and open up avenues outside of your control in which search engines can get stuck. One of the objectives of SEO is to remove the roadblocks that hinder great content from ranking in search engines, and these are very often technical in nature: slow load speeds, noindex or nofollow meta tags used where they shouldn’t be, missing internal linking strategies, and other such implementation issues.

Ideally, you would have a 100% indexation rate, meaning every quality page on your site is indexed – no pollution, no unwanted material, no bloating. For the sake of this analysis, consider anything above 100% to be bloat. Index bloat forces search engines to spend more of their limited resources than needed processing the pages they have in their database.

At best, index bloat causes inefficient crawling and indexing, hindering your ranking capability. At worst, it can lead to keyword cannibalization across many pages on your site, limiting your ability to rank in top positions and potentially hurting the user experience by sending searchers to low-quality pages.

To summarize, index bloat causes the following issues:

  1. Exhausts the limited resources Google allocates for a given site
  2. Creates orphaned content (sending Googlebot to dead-ends)
  3. Negatively impacts the website’s ranking capability
  4. Decreases the quality evaluation of the domain in the eyes of search engines

Sources of index bloat

1. Internal duplicate content

Unintentional duplicate content is one of the most common sources of index bloat. Most internal duplicate content stems from technical errors that generate large numbers of URL combinations which end up indexed – for example, using URL parameters to control the content on your site without proper canonicalization.
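
As an illustration (the URLs below are hypothetical), a parameterized URL can point search engines back to the clean version of the page with a canonical link element, which keeps the parameter variations out of the index:

  <!-- On https://www.example.com/shoes/?color=red&sort=price -->
  <head>
    <link rel="canonical" href="https://www.example.com/shoes/">
  </head>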

Faceted navigation has also been one of the “thorniest SEO challenges” for large ecommerce sites, as Portent describes, and it can generate billions of duplicate content pages if a simple feature is overlooked.

2. Thin content

It’s important to mention an issue introduced by version 7.0 of the Yoast SEO plugin around attachment pages. This WordPress plugin bug led to “Panda-like problems” in March of 2018, causing heavy ranking drops for affected sites as Google deemed them lower in the overall quality they provided to searchers. In summary, the Yoast plugin has a setting to remove attachment pages in WordPress – pages created for each image in your media library, with minimal content – the epitome of thin content for most sites. For some users, updating to the then-newest version (7.0) overwrote the previous selection to remove these pages and defaulted to indexing all attachment pages.

This meant that a blog post with five images would generate five additional indexed attachment URLs, leaving only about one in six indexed pages (roughly 16%) with actual quality content and causing a massive drop in domain value.

3. Pagination

Pagination refers to the concept of splitting up content into a series of pages to make content more accessible and improve user experience. This means that if you have 30 blog posts on your site, you may have ten blog posts per page that go three pages deep. Like so:

  • https://www.example.com/blog/
  • https://www.example.com/blog/page/2/
  • https://www.example.com/blog/page/3/

You’ll see this often on shopping pages, press releases, and news sites, among others.

Within the purview of SEO, the pages beyond the first in the series will very often contain the same page title and meta description, along with very similar (near duplicate) body content, introducing keyword cannibalization to the mix. Additionally, since the purpose of these pages is to give users already on your site a better browsing experience, it rarely makes sense to send search engine visitors to the third page of your blog.
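
One simple mitigation (illustrative markup with hypothetical titles) is to at least give each page in the series a distinct title and meta description, so the paginated pages stop competing on identical metadata:

  <!-- On https://www.example.com/blog/page/2/ -->
  <title>Blog – Page 2 of 3 | Example.com</title>
  <meta name="description" content="Archive of older posts from the Example.com blog (page 2 of 3).">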

4. Under-performing content

If you have content on your site that is not generating traffic, has not resulted in any conversions, and does not have any backlinks, you may want to consider changing your strategy. Repurposing content is a great way to maximize any value that can be salvaged from under-performing pages to create stronger and more authoritative pages.

Remember, as SEO experts our job is to help increase the overall quality and value that a domain provides, and improving content is one of the best ways to do so. For this, you will need a content audit to evaluate your own individual situation and what the best course of action would be.

Even a 404 page that returns a 200 (OK) HTTP status code – a so-called “soft 404” – is a thin, low-quality page that should not be indexed.
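
A quick way to test for this is to request a URL that should not exist and check the status code. Here is a minimal sketch using the Python requests library (the URL is hypothetical):

  import requests

  # Request a URL that should not exist on your site (hypothetical example URL)
  response = requests.get(
      "https://www.example.com/this-page-should-not-exist-12345",
      allow_redirects=False,
  )

  # A well-configured site returns 404 (or 410) here; a 200 means the server is
  # serving a "soft 404" page that search engines may index.
  print(response.status_code)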

Common index bloat issues

One of the first things I do when auditing a site is to pull up their XML sitemap. If they’re on a WordPress site using a plugin like Yoast SEO or All in One SEO, you can very quickly find page types that do not need to be indexed. Check for the following:

  • Custom post types
  • Testimonial pages
  • Case study pages
  • Team pages
  • Author pages
  • Blog category pages
  • Blog tag pages
  • Thank you pages
  • Test pages

Whether the pages in your XML sitemap are low quality and need to be removed from search really depends on the purpose they serve on your site. For instance, some sites do not use author pages in their blog but still leave those pages live, which is not necessary. “Thank you” pages should not be indexed at all, as indexing them can cause conversion tracking anomalies. Test pages usually mean there’s a duplicate somewhere else. Similarly, some plugins or developers build custom features into web builds and create lots of pages that do not need to be indexed. For example, if you find an XML sitemap like the one below, its contents probably don’t need to be indexed:

  • https://www.example.com/tcb_symbols_tax-sitemap.xml

Different methods to diagnose index bloat

Remember that our objective here is to find the biggest contributors of low-quality pages that are bloating the index. Most of the time it’s easy to find these pages at scale, since thin content pages tend to follow a pattern.

This is a quantitative analysis of your content, looking for volume discrepancies based on the number of pages you have, the number of pages you are linking to, and the number of pages Google is indexing. Any disparity between these numbers means there’s room for technical optimization, which often results in an increase in organic rankings once solved. You want to make these sets of numbers as similar as possible.

As you go through the various methods to diagnose index bloat below, look out for patterns in URLs by reviewing the following:

  • URLs that have /dev/
  • URLs that have “test”
  • Subdomains that should not be indexed
  • Subdirectories that should not be indexed
  • A large number of PDF files that should not be indexed

Next, I will walk you through a few simple steps you can take on your own using some of the most basic tools available for SEO. Here are the tools you will need:

  • Paid Screaming Frog
  • Verified Google Search Console
  • Your website’s XML sitemap
  • Editor access to your Content Management System (CMS)
  • Google.com

As you start finding anomalies, start adding them to a spreadsheet so they can be manually reviewed for quality.

1. Screaming Frog crawl

Under Configuration > Spider > Basics, configure Screaming Frog for a thorough scan of your site: check “crawl all subdomains” and “crawl outside of start folder”, and manually add your XML sitemap(s) if you have them. Once the crawl has completed, take note of all the indexable pages it has listed. You can find this in the “Self-Referencing” report under the Canonicals tab.

screenshot example of using Screaming Frog to scan through XML sitemaps

Take a look at the number you see. Are you surprised? Do you have more or fewer pages than you thought? Make a note of the number. We’ll come back to this.

2. Google’s Search Console

Open up your Google Search Console (GSC) property and go to the Index > Coverage report. Take a look at the valid pages. In this report, Google tells you how many total URLs it has found on your site. Review the other reports as well; GSC is a great tool for evaluating what Googlebot finds when it visits your site.

screenshot example of Google Search Console's coverage report

How many pages does Google say it’s indexing? Make a note of the number.

3. Your XML sitemaps

This one is a simple check. Visit your XML sitemap and count the number of URLs included. Is the number off? Are there unnecessary pages? Are there not enough pages?
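
If the sitemap is large, counting by hand gets tedious. Here is a minimal sketch in Python that fetches a sitemap and counts its URL entries (the sitemap location is hypothetical, and a sitemap index with nested sitemaps would need an extra loop):

  import requests
  import xml.etree.ElementTree as ET

  SITEMAP_URL = "https://www.example.com/sitemap.xml"  # hypothetical - replace with your own

  root = ET.fromstring(requests.get(SITEMAP_URL).content)

  # Count every <loc> entry, ignoring the sitemap XML namespace prefix
  urls = [element.text for element in root.iter() if element.tag.endswith("loc")]
  print(f"URLs listed in the sitemap: {len(urls)}")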

Conduct a crawl with Screaming Frog, add your XML sitemap to the configuration and run a crawl analysis. Once it’s done, you can visit the Sitemaps tab to see which specific pages are included in your XML sitemap and which ones aren’t.

example of using Screaming Frog to run a crawl analysis of an XML sitemap

Make a note of the number of indexable pages.

4. Your own Content Management System (CMS)

This one is a simple check too, don’t overthink it. How many pages do you have on your site? How many blog posts do you have? Add them up. We’re looking for quality content that provides value, but here in a purely quantitative fashion. The count doesn’t have to be exact, since the actual quality of each piece of content can be measured later via a content audit.

Make a note of the number you see.

5. Google

At last, we come to the final check in our series. Sometimes Google throws a number at you and you have no idea where it comes from, but try to be as objective as possible. Do a “site:domain.com” search on Google and check how many results Google serves you from its index. Remember, this is purely a numeric value and does not truly reflect the quality of your pages.
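
You can also combine the site: operator with other search operators to zero in on the URL patterns listed earlier. A few illustrative queries (the domain and paths are hypothetical, and the result counts Google shows are approximate):

  • site:example.com – everything Google has indexed for the domain
  • site:example.com inurl:tag – indexed blog tag pages
  • site:example.com filetype:pdf – indexed PDF files
  • site:dev.example.com – indexed pages on a subdomain that shouldn’t be public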

screenshot example of using Google search results to spot inefficient indexation

Make a note of the number you see and compare it to the other numbers you found. Any discrepancy you find is a symptom of inefficient indexation. Completing this simple quantitative analysis will help direct you to areas that may not meet minimum qualitative criteria. In other words, comparing numeric values from multiple sources will help you find pages on your site that provide low value.

The quality criteria we evaluate against can be found in Google’s Webmaster guidelines.

How to resolve index bloat

Resolving index bloat is a slow and tedious process, but you have to trust the optimizations you’re performing on the site and have patience during the process, as the results may be slow to become noticeable.

1. Deleting pages (Ideal)

In an ideal scenario, low-quality pages would not exist on your site, and thus, not consume any limited resources from search engines. If you have a large number of outdated pages that you no longer use, cleaning them up (deleting) can often lead to other benefits like fewer redirects and 404s, fewer thin-content pages, less room for error and misinterpretation from search engines, to name a few.

The fewer decisions you leave to search engines by limiting their options on what action to take, the more control you will have over your site and your SEO.

Of course, this isn’t always realistic. So here are a few alternatives.

2. Using Noindex (Alternative)

Using noindex at the page level (please don’t add a site-wide noindex – it happens more often than we’d like) or across a set of pages is probably the most efficient alternative, as it can be completed very quickly on most platforms.

  • Do you use all those testimonial pages on your site?
  • Do you have a proper blog tag/category in place, or are they just bloating the index?
  • Does it make sense for your business to have all those blog author pages indexed?

All of the above can be noindexed and removed from your XML sitemap(s) with a few clicks on WordPress if you use Yoast SEO or All in One SEO.

3. Using Robots.txt (Alternative)

Using the robots.txt file to disallow sections or pages of your site is not recommended for most websites unless it has been explicitly recommended by an SEO Expert after auditing your website. It’s incredibly important to look at the specific environment your site is in and how a disallow of certain pages would affect the indexation of the rest of the site. Making a careless change here may result in unintended consequences.

Now that we’ve got that disclaimer out of the way, disallowing certain areas of your site means that you’re blocking search engines from even reading those pages. This means that if you add a noindex tag and also disallow the page, Google will never get to read the noindex tag or follow that directive, because you’ve blocked it from accessing the page. The order of operations here is absolutely crucial if you want Google to follow your directives.
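
For context, a disallow rule is just a line in the robots.txt file at the root of your domain. An illustrative example (the paths are hypothetical, and remember: a page you want Google to de-index via noindex must remain crawlable until it drops out of the index):

  User-agent: *
  # Block crawling of a development area and internal search results
  Disallow: /dev/
  Disallow: /search/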

4. Using Google Search Console’s manual removal tool (Temporary)

As a last resort, an action item that does not require developer resources is the manual removal tool within the old Google Search Console. Using this method to remove pages, whole subdirectories, and entire subdomains from Google Search is only temporary. It can be done very quickly – all it takes is a few clicks – but be careful about what you’re asking Google to deindex.

A successful removal request lasts only about 90 days, but it can be revoked manually. This option can also be done in conjunction with a noindex meta tag to get URLs out of the index as soon as possible.

Conclusion

Search engines despise thin content and try very hard to filter out all the spam on the web, hence the never-ending search quality updates that happen almost daily. In order to appease search engines and show them all the amazing content we spent so much time creating, webmasters must make sure their technical SEO is buttoned up as early in the site’s lifespan as possible before index bloat becomes a nightmare.

Using the different methods described above can help you diagnose any index bloat affecting your site so you can figure out which pages need to be deleted. Doing this will help you optimize your site’s overall quality evaluation in search engines, rank better, and get a cleaner index, allowing Google to find the pages you’re trying to rank quickly and efficiently.

Pablo Villalpando is a Bilingual SEO Strategist for Victorious. He can be found on Twitter @pablo_vi.



Paid Search Basics for Events

May 23, 2019

This blog provides tips and insights into event-based paid search strategies surrounding audiences types and different ads to garner success.

Read more at PPCHero.com


Five ways blockchain will impact search marketing

May 21, 2019

Few technologies promise to have as tremendous an impact on the marketplace as blockchain, though many professionals in the search marketing industry are still entirely unfamiliar with it. Blockchain’s disruptive nature is changing digital advertising whether those professionals hear about it or not, which means it’s imperative to catch up on how this technology is changing the industry if you want to remain competitive.

Here are five of the major ways that blockchain will impact search marketing, and how advertising professionals are already beginning to master this interesting technology as it takes over.

1. Blockchain will make ads trustworthy

Consumers hate advertisements for a number of reasons, but by and large the most common is that they simply think advertising technology is untrustworthy. Nobody likes feeling as if they are being surveilled 24/7, and few people trust the digital advertisements that appear on their screens enough to click on them, even if their content is interesting. Blockchain technology promises to help with this problem by securing the ad supply chain and making the marketing process more trustworthy to consumers everywhere.

Soon, thanks to blockchain services, ad tech vendors, buyers, and publishers will be more connected than ever before. The transparency that is sorely needed in the ad supply chain can be brought about by blockchain services, which, by their nature as ledgers, are accessible to every party involved in a financial transaction. Website owners and ad vendors of the future will thus be able to work with one another much more securely when making marketing arrangements.

2. Blockchain is delivering ad transparency

Elsewhere, blockchain services will be applied to make ads more transparent in an effort to win over the trust of skeptical consumers. Companies like Unilever are now teaming up with the likes of IBM on blockchain projects that they hope will disclose information about their business footprint and the way they collect and utilize information on customers. As these endeavors become more successful, others will be convinced to enlist the help of blockchain technology when it comes to ensuring a transparent advertising industry.

3. Blockchain is changing ad payments

Blockchain technology will also impact search marketing by disrupting the way that advertisement payments are facilitated. Companies like Amino Payments will soon be springing up left and right as the market for blockchain services grows larger and larger. These businesses will help mainstream blockchain-powered ad buys that make use of interesting smart contracts. While smart contracts are only just beginning to become an accepted part of the business world, they’ll be a mainstream facet of doing business sooner than we think, all thanks to the wonderful power of blockchain.

4. New advertising ecosystems are springing up

Some of the ways that blockchain is impacting search marketing are truly monumental. Blockchain technology is helping new advertising ecosystems get on their feet, for instance, with nascent companies like Adshares that are working hard to create a blockchain-based advertising ecosystem. As cryptocurrencies and other blockchain-powered technologies become more mainstream, we’ll see an increased need for blockchain-friendly payment systems.

Search marketing professionals may soon have to rely on specialized expertise when navigating these new blockchain-powered advertising ecosystems built around standard bitcoin wallets – a space likely to be dominated by the IT-savvy. Programmatic advertising has already been upended time and again in recent years as the digital revolution brought about better computers, and the rise of blockchain could very well be the next stage in that cycle of disruption.

5. New blockchain browsers will reshape user experiences

Finally, the digital experience of the average consumer will be fundamentally changed by the introduction of blockchain browsers. Browser options like Brave are becoming more popular and grabbing headlines as they promise a privacy-respecting internet experience that features more honest and safer ad tech. Our current understanding of the marketing world may be entirely useless a few years from now, when blockchain-powered browsers offer secure, personalized search options to users who are sick and tired of modern advertising.

Search marketing is in for more than its fair share of disruptive changes in the forthcoming years, largely because of the advent of blockchain technology. Like any other technological innovation, blockchain will take time and investment to grow into its full potential, but it’s already quite clear that its development is jarring advertising professionals.



SEO case study: How Venngage turned search into their primary lead source

April 27, 2019

Venngage is a free infographic maker that has catered to more than 21,000 businesses. In this article, we explore how they grew their organic traffic from about 275,000 visitors per month in November 2017 to about 900,000 today — more than tripling in 17 months.

I spoke with Nadya Khoja, Chief Growth Officer at Venngage, about their process.

Venngage gets most of their leads from content and organic search. The percentage varies from month to month in the range of 58% to 65%.

In Nov 2017, Venngage enjoyed 275,000 visitors a month from organic search traffic. Today (16 months later) it’s 900,000. Nadya Khoja (their Chief Growth Officer) extrapolated from their current trend that by December of 2019 (in nine months) they will enjoy three million organic search visitors per month.

Screenshot of Venngage's statistics

In 2015, when Nadya started with Venngage, they saw 300 to 400 registrations a week. By March of 2018, this was up to 25,000 a week. Today it’s 45,000.

While Nadya had the advantage of not starting from zero, that is impressive growth by any reasonable metric. How did they do it?

Recipe

There are a lot of pieces to this puzzle. I’ll do my best to explain them, and how they tie together. There is no correct order to things per se, so what is below is my perspective on how best to tell this story.

The single most important ingredient: Hypothesize, test, analyze, adjust

This critical ingredient is surprisingly not an ingredient, but rather a methodology. I’m tempted to call it “the scientific method”, as that’s an accurate description, but perhaps it’s more accurate to call it the methodology written up in the books “The Lean Startup” (which Nadya has read) and “Running Lean” (which Nadya has not read).

This single most important ingredient is the methodology of hypothesize, test, analyze, and adjust.

What got them to this methodology was a desire to de-risk SEO.

The growth in traffic and leads was managed through a series of small and quick iterations, each one of which either passed or failed. Ones that passed were done more. Ones that failed were abandoned.

This concept of hypothesizing, testing, analyzing, and adjusting is used both for SEO changes and for changes to their products.

The second most important ingredient

This ingredient is shared knowledge. Venngage marketing developed “The Playbook”, which everyone in marketing contributes to. “The Playbook” was created both as a reference with which to bring new team members up to speed quickly, as well as a running history of what has been tested and how it went.

The importance of these first two ingredients cannot be overstated. From here on, I am revealing things they learned through trial and error. You have the advantage to learn from their successes and failures. They figured this stuff out the hard way. One hypothesis and one test at a time.

Their north star metrics

They have two north star metrics. The first one seems fairly obvious. “How many infographics are completed within a given time period?” The second one occurred to them later and is as important, if not more so. It is “how long does it take to complete an infographic?”

The first metric, of course, tells them how attractive their product is. The second tells them how easy (or hard) their product is to use.

Together these are the primary metrics that drive everything Venngage does.

The 50/50 focus split

As a result of both the company and the marketing department focusing on customer acquisition and customer retention, every person in marketing spends half their time working on improving the first north star metric and the other half working on improving the second.

Marketing driving product design

Those north star metrics have led to Venngage developing what I call marketing driven product design. Everywhere I ever worked has claimed they did this. The way Venngage does this exceeds anything ever done at a company I’ve worked for.

“How do I be good?”

This part of Nadya’s story reminds me of the start of a promo video I once saw for MasterClass.com. It’s such a good segue to this part of the story that I cropped out all but the good part to include in this article.

When Steve Martin shed light on an important marketing question

I’ve encountered a number of companies through the years who thought of marketing as “generating leads” and “selling it”, rather than “how do we learn what our customers want?”, or “how do we make our product easier to use?”

Squads

The company is structured into cross-functional squads, a cross-functional squad being people from various departments within Venngage, all working to improve a company-wide metric.

For example, one of the aspects of their infographic product is templates. A template is a starting point for building an infographic.

As templates are their largest customer acquisition channel, they created a “Template Squad”, whose job is to work on their two north star metrics for their templates.

The squad consists of developers, designers, UI/UX people, and the squad leader, who is someone in marketing. Personally, I love this approach, as it takes marketing out of its silo and makes it something that permeates everything the company does.

There is another squad devoted to internationalization, which, as you can infer, is responsible for improving their two north star metrics for users in countries around the world.

Iterative development

Each template squad member is tasked with improving their two north star metrics.

Ideas on how to do this come from squad members with various backgrounds and ideas.

Each idea is translated into a testable hypothesis. Modifications are made weekly. As you can imagine, Venngage is heavy into analytics, because without detailed and sophisticated analytics they wouldn’t know which experiments worked and which didn’t.

Examples of ideas that worked are:

  • Break up the templates page into a series of pages, which contain either category of templates or single templates.
  • Ensure each template page contains SEO keywords specific for the appropriate industry or audience segment. This is described in more detail further in this document.
  • Undo the forced backlink each of the embedded templates used to contain.
    • This allowed them to get initial traction, but it later resulted in a Google penalty.
    • This is a prime example of an SEO tactic that worked until it didn’t.
  • Create an SEO checklist for all template pages with a focus on technical SEO.
    • This eliminated human error from the process.
  • Eliminate “React headers” Google was not indexing.
  • Determine what infographic templates and features people don’t use and eliminate them.

Measuring inputs

I personally think this is really important. To obtain outputs, they measured inputs. When the goal was to increase registrations, they identified the things they had to do to increase registrations, then measured how much of that they did every week.

Everyone does SEO

In the same way that marketing is something that does not stand alone, but rather permeates everything Venngage does, SEO does not stand alone. It permeates everything marketing does. Since organic search traffic is the number one source of leads, they ensure everyone in marketing knows the basics of technical SEO and understands the importance of this never being neglected.

Beliefs and values

While I understand the importance of beliefs and values in human psychology, it was refreshing to see this being proactively addressed within an organization in the context of improving their north star metrics.

They win and lose together

Winning and losing together is a core belief at Venngage. Nadya states it minimizes blame and finger-pointing. When they win, they all win. When they lose, they all lose. It doesn’t matter who played what part. To use a sports analogy, a good assist helps to score a goal. A bad assist, well, that’s an opportunity to learn.

SEO is a team effort

While it is technically possible for a single person to do SEO, the volume of tasks required these days makes it impractical. SEO requires quality content, technical SEO, and the building of backlinks through content promotion, guest posting, and other tactics. Venngage is a great example of effectively distributing SEO responsibilities throughout the marketing department.

To illustrate the importance of the various pieces fitting together, consider that while content is king, technical SEO is what gets content found, but when people find crappy content, it doesn’t convert.

You can’t manage what you don’t measure

This requires no elaboration.

But what you measure matters

This probably does justify some elaboration. We’ve all been in organizations that measured stupid stuff. By narrowing down to their two north star metrics, then focusing their efforts to improving those metrics, they’ve aligned everyone’s activity towards things that matter.

The magic of incremental improvements

This is the Japanese concept of Kaizen put into play for the development and marketing of a software product.

Done slightly differently, this concept helped Britain dominate competitive cycling at the 2008 Olympics in Beijing.

Customer acquisition is not enough

Venngage developed their second north star metric after deciding that acquiring new customers was not, in and of itself, any form of the Holy Grail. They realized that if their product was hard to use, fewer people would use it.

They decided a good general metric of how easy the product is to use was to measure how long people take to build an infographic. If people took “too long”, they spoke to them about why.

This led them to change the product in ways to make it easier to use.

Link building is relationship building

As a reader of Search Engine Watch, you know link building is critical and central to SEO. In the same way that everyone in Venngage marketing must know the basics of technical SEO, everyone in Venngage marketing must build links.

They do so via outreach to promote their content. As people earn links from the content promotion outreach, they record those links in a shared spreadsheet.

While this next bit is related to link building, everyone in Venngage marketing has traffic goals as well.

This too is tracked in a simple and reasonable way. Various marketers own different “areas” or “channels”. These channels are broken down into specific traffic acquisition metrics.

As new hires get more familiar with how things work at Venngage, they are guided into traffic acquisition channels which they want to work on.

Learning experience, over time

My attempt here is to provide a chronology of what they learned in what order. It may help you avoid some of the mistakes they made.

Cheating works until it doesn’t

Understanding the importance of links to search ranking, they thought it would be a good idea to implement their infographics with embedded backlinks. Each implemented infographic contained a forced backlink to the Venngage website.

They identified a set of anchor text they thought would be beneficial to them and rotated through them for these forced backlinks.

And it worked, for a while. Until they realized they had invited a Google penalty. This took a bit to clean up.

The lessons learned:

  • The quality of your backlinks matter.
  • To attract quality backlinks, publish quality content.

Blog posts brought in users who activated

At some point, their analytics helped them realize that users who activated from blog posts were ideal users for them. So they set a goal to increase activations from blog posts, which led to the decision to test whether breaking up templates into categories and individual pages with only one template made sense. It did.

Website design matters

Changing the website from one big template page to thousands of smaller ones helped, and not just because it greatly increased the number of URLs indexed by Google. It also greatly improved the user experience. It made it easier for their audience to find templates relevant to them, without having to look at templates that weren’t.

Lesson learned: UI/UX matters for both users and SEO.

Hybrid content attracts

Hybrid content is where an article talks about two main things – for example, sorting Hogwarts houses within the context of an infographic. This type of content brings in some number of Harry Potter fans, some of whom have an interest in creating infographics. The key to success is tying the two topics together well.

Content is tuneable

By converting one huge templates page into thousands of small template pages, they realized that a template or set of templates that appeal to one audience segment would not necessarily appeal to others. This caused them to start to tune templates towards audience segments in pursuit of more long tail organic search traffic.

How did they figure out what users wanted in terms of better content? They used a combination of keyword research and talking with users and prospects.

Some content doesn’t make the cut

After they caught onto the benefits of tuning content to attract different audience segments, they looked for content on their site that no one seemed to care about. They deleted it. While it decreased the amount of content on their site, it increased their overall content quality.

Traffic spikes are not always good news

When they initially started creating forced backlinks in their infographics, they could see their traffic increase. They saw some spikes. Their general thought was more traffic is good.

When they experienced the Google penalty, they realized how wrong they were. Some traffic spikes are bad news. Others are good news.

When your website traffic shows a sudden change, even if you’re experiencing a spike in organic search traffic, you must dig into the details and find out the root cause.

Lesson learned: There is such a thing as bad traffic. Some traffic warns you of a problem.

Links from product embeds aren’t all bad

They just needed to make the embedded links optional, allowing the customer to decide whether or not to give a backlink. While this did not cause any change in their levels of organic search traffic, it was necessary to resolve the Google penalty.

Boring works

Incremental continuous improvement seems repetitive and boring. A one percent tweak here, a two percent tweak there, but over time, you’ve tripled your organic search traffic and your lead flow.

It’s not necessarily fun, but it delivers results.

Lesson learned: What I’ll call “infrastructure” is boring, and it matters. Both for your product and your SEO.

Figure out what to measure

The idea of measuring the amount of time required to complete an infographic did not occur to them on day one. This idea came up when they were looking for a metric to indicate to them how easy (or difficult) their product was to use.

Once they decided this metric possibly made sense, they determined their baseline and then, through an iterative process, made improvements to the product to make this a little faster.

As they did so, the feedback from the users was positive, so they doubled down on this effort.

Lesson learned: What you measure matters.

Teach your coworkers well

They created “The Playbook”, which is a compendium of the combined knowledge they’ve accumulated over time. The playbook is written by them, for them.

Marketing employees are required to add chapters to the playbook as they learn new skills and methods.

Its primary purpose is to bring new team members up to speed quickly, and it also serves as a historical record of what did and did not work.

One important aspect of continuous improvement is for new people to avoid suggesting experiments that previously failed.

Additionally (and I love this), every month everyone in marketing gives Nadya an outline of what they’re learning and what they’re improving on.

Their marketing stack

While their marketing stack is not essential to understanding their processes, I find it useful to understand what software tools a marketing organization uses, and for what. So here is theirs. This is not a list of what they’ve used and abandoned over time, but rather a list of what they use now.

  • Analytics: Google Analytics and Mixpanel
  • Customer communications: Intercom
  • Link analysis and building: Ahrefs
  • Link building outreach: Mailshake
  • Project management: Trello
  • General purpose: G Suite

In closing

To me, what Nadya has done at Venngage is a case study in how to do SEO right, and most of doing it right is not technical SEO work.

  • Help senior management understand that some things that are not typically thought of as SEO (website design for example) can have serious SEO implications.
  • Get senior management buy-in to include these non-SEO functions in your SEO efforts.
  • Understand what very few basic metrics matter for your company, and how you measure them.
  • Distribute required SEO work through as many people as reasonably possible. Include people whose job functions are not necessarily SEO related (writers, designers, UI/UX, and more).
  • Test and measure everything.
  • Win big through a continuous stream of small incremental improvements.

Venngage has led by example, and the guidelines and pointers shared above can help your organization improve its search performance and increase sales.

Kevin Carney is the Founder and CEO of the boutique link building agency Organic Growth. 



How to optimize paid search ads for phone calls

April 16, 2019

There has been an abundance of hand-wringing articles published wondering whether the era of the phone call is over, not to mention speculation that millennials would give up the option to make a phone call altogether if it meant unlimited data.

But actually, the rise of direct dialing through voice assistants and click to call buttons for mobile search means that calls are now totally intertwined with online activity.

Calling versus buying online is no longer an either/or proposition. When it comes to complicated purchases like insurance, healthcare, and mortgages, the need for human help is even more pronounced. Over half of consumers prefer to talk to an agent on the phone in these high-stakes situations.

In fact, 70% of consumers have used a click to call button. And three times as many people prefer speaking with a live human over a tedious web form. And calls aren’t just great for consumers either. A recent study by Invoca found that calls actually convert at ten times the rate of clicks.

However, if you’re finding that your business line isn’t ringing quite as often as you’d like it to, here are some surefire ways to optimize your search ads to drive more high-value phone calls.  

Content produced in collaboration with Invoca.

Four ways to optimize your paid search ads for more phone calls

  1. Let your audience know you’re ready to take their call — and that a real person will answer

If you’re waiting for the phone to ring, make sure your audiences know that you’re ready to take their call. In the days of landlines, if customers wanted a service, they simply took out the yellow pages and thumbed through the business listings until they found the service they were looking for. These days, your audience is much more likely to find you online, either through search engines or social media. But that doesn’t mean they aren’t looking for a human to answer their questions.

If you’re hoping to drive more calls, make sure your ads are getting that idea across clearly and directly. For example, if your business offers free estimates, make sure that message is prominent in the ad with impossible-to-miss text reading, “For a free estimate, call now,” with easy access to your number.

And to make sure customers stay on the line, let them know their call will be answered by a human rather than a robot reciting an endless list of options.

  2. Cater to the more than half of users that will likely be on mobile

If your customer found your landing page via search, there’s a better-than-even chance they’re on a mobile device.

While mobile accounted for just 27% of organic search engine visits in Q3 of 2013, its share increased to 57% as of Q4 2018.

Chart: Mobile share of organic search engine visits in the United States, Q3 2013 to Q4 2018 (Statista)

That’s great news for businesses looking to boost calls, since mobile users obviously already have their phone in hand. However, forcing users to dig up a pen in order to write down your business number only to put it back into their phone adds an unnecessary extra step that could make some users think twice about calling.  

Instead, make sure mobile landing pages offer a click to call button that lists your number in big, bold text. Usually, the best place for a click to call button is in the header of the page, near your form, but it’s best practice to A/B test button location and page layouts a few different ways in order to make sure your click to call button can’t be overlooked.
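
On the markup side, a click to call button is simply a link that uses the tel: scheme; tapping it on a mobile device opens the dialer. An illustrative snippet (the number and CSS class are placeholders):

  <a class="call-button" href="tel:+15551234567">Call now for a free estimate</a>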

  3. Use location-specific targeting

Since 2014, local search queries from mobile have skyrocketed in volume as compared to desktop.

Chart: Local search query volume in the United States from 2014 to 2019, by platform, in billions (Statista)

In 2014, there were 66.5 billion search queries from mobile and 65.6 billion search queries from desktop.

Now in 2019, desktop has decreased slightly to 62.3 billion — while mobile has shot up to 141.9 billion — more than doubling in five years.

Mobile search is by nature local, and vice versa. If your customer is searching for businesses hoping to make a call and speak to a representative, chances are, they need some sort of local services. For example, if your car breaks down, you’ll probably search for local auto shops, click a few ads, and make a couple of calls. It would be incredibly frustrating if each of those calls ended up being to a business in another state.

Targeting your audience by region can ensure that you offer customers the most relevant information possible.

If your business only serves customers in Kansas, you definitely don’t want to waste perfectly good ad spend drumming up calls from California.

If you’re using Google Ads, make sure you set the location you want to target. That way, you can then modify your bids to make sure your call-focused ads appear in those regions.  

  4. Track calls made from ads and landing pages

Keeping up with where your calls are coming from in the physical world is important, but tracking where they’re coming from on the web is just as critical. Understanding which of your calls are coming from ads as well as which are coming from landing pages is an important part of optimizing paid search. Using a call tracking and analytics solution alongside Google Ads can help give a more complete picture of your call data.

And the more information you can track, the better. At a minimum, you should make sure your analytics solution captures data around the keyword, campaign/ad group, and the landing page that led to the call. But solutions like Invoca also allow you to capture demographic details, previous engagement history, and the call outcome to offer a total picture of not just your audience, but your ad performance.

For more information on how to use paid search to drive calls, check out Invoca’s white paper, “11 Paid Search Tactics That Drive Quality Inbound Calls.”



Complete guide to Google Search Console

April 6, 2019

At the frontlines in the battle for SEO is Google Search Console (GSC), an amazing tool that makes you visible in search engine results pages (SERPs) and provides an in-depth analysis of the web traffic being routed to your doorstep. And it does all this for free.

If your website marks your presence in cyberspace, GSC boosts viewership and increases traffic, conversions, and sales. In this guide, SEO strategists at Miromind explain how you benefit from GSC, how you integrate it with your website, and how to use its reports to build your brand’s domain dominance.

What is Google Search Console (GSC)?

Google Webmaster Tools (GWT), created by Google, initially targeted webmasters. Offered as a free service, GWT metamorphosed into its present form, Google Search Console (GSC). It’s a cutting-edge tool widely used by an ever more diverse group of digital marketing professionals, web designers, app developers, SEO specialists, and business entrepreneurs.

For the uninitiated, GSC tells you everything that you wish to know about your website and the people who visit it daily – for example, how much web traffic you’re attracting, what people are searching for to reach your site, the kind of platform (mobile, app, desktop) people are using to find you, and, more importantly, what makes your site popular.

Then GSC takes you on a subterranean dive to find and fix errors, design sitemaps, and check file integrity.

Precisely what does Google Search Console do for you? These are the benefits.

1. Search engine visibility improves

Ever experienced the sinking sensation of having done everything demanded of you for creating a great website, but people who matter can’t locate you in a simple search? Search Console makes Google aware that you’re online.

2. The virtual image remains current and updated

When you’ve fixed broken links and coding issues, Search Console helps you update the changes in such a manner that Google’s search carries an accurate snapshot of your site minus its flaws.

3. Keywords are better optimized to attract traffic

Wouldn’t you agree that knowing what draws people to your website can help you shape a better user experience? Search Console opens a window to the keywords and key phrases that people frequently use to access your site. Armed with this knowledge, you can optimize the site to respond better to specific keywords.

4. Safety from cyber threats

Can you expect to grow business without adequate protection against external threats? Search Console helps you build efficient defenses against malware and spam, securing your growing business against cyber threats.

5. Content figures prominently in rich results

It’s not enough to merely figure in a search result. How effectively are your pages making it into Google’s rich results? These are the cards and snippets that carry tons of information like ratings, reviews, and just about any detail that makes for a better user experience for people searching for you. Search Console gives you a status report on how your content is appearing in rich results so you can remedy any deficit it detects.

6. Site becomes better equipped for AMP compliance

You’re probably aware that mobile friendliness has become a search engine ranking parameter. This means that the faster your pages load, the more user-friendly you’re deemed to be. The solution is to adopt accelerated mobile pages (AMP), and Search Console helpfully flags you out in the areas where you’re not compliant.

7. Backlink analysis

Backlinks – the websites that link back to your website – give Google an indication of your site’s popularity and how worthy it is of citation. With Search Console, you get an overview of all the websites linking to you, and you gain deeper insight into what motivates and sustains your popularity.

8. The site becomes faster and more responsive to mobile users

If searchers are abandoning your website because of slow loading speeds or any other glitch, Search Console alerts you so you can take remedial steps and become mobile-friendly.

9. Google indexing keeps pace with real-time website changes

Significant changes that you make on the website could take weeks or months to show up in the Google Search index if you sit tight and do nothing. With Search Console, you can edit, change, and modify your website endlessly, and prompt Google to index those changes much faster.

By now you have a pretty good idea why Google Search Console has become the must-have tool for optimizing your website pages for improved search results. It also helps ensure that your business grows in tandem with the traffic that you’re attracting and converting.

Your eight-step guide on how to use Google Search Console

1. How to set up your unique Google Search Console account

Assuming that you’re entirely new to GSC, your immediate priority is to add the tool and get your site verified by Google. By doing this, you’ll be ensuring that Google classifies you unambiguously as the owner of the site, whether you’re the webmaster or merely an authorized user.

This simple precaution is necessary because you’ll be privy to an incredibly rich source of information that Google wouldn’t like unauthorized users to have access to.

You can use your existing Google account (or create a new one) to access Google Search Console. It helps if you’re already using Google Analytics, because the same details can be used to log in to GSC. Your next step is to open the console and click on “Add property”.

Screenshot of adding a property in Google Search Console

By adding your website URL into the adjacent box, you get an umbilical connection to the console so you can start using its incredible array of features. Take care to enter the exact version of your URL, including the “https://” or “www” prefix, so Google loads the right data.

2. How to enable Google to verify your site ownership

Screenshot of Google verifying site ownership

Option one

How to add an HTML tag to help Google verify ownership

Once you have established your presence, Google will want to verify your site. At this stage, it helps to have some experience working with HTML: it’ll be easier to handle the files you’re uploading, you’ll have a better appreciation of how the website’s size influences the Google crawl rate, and you’ll gain a clearer understanding of the Google programs already running on your website.

Screenshot of adding an HTML tag to help Google verify ownership

If all this sounds like rocket science, don’t fret because we’ll be hand-holding you through the process.

Your next step is to open your homepage code and paste the Search Console-provided HTML tag within the <Head> section of your site’s HTML code.
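
The verification tag is a simple meta element. An illustrative placement (the content value is a placeholder for the unique token Search Console gives you):

  <head>
    <!-- Google Search Console verification tag (placeholder token) -->
    <meta name="google-site-verification" content="your-unique-verification-token" />
    ...
  </head>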

The newly pasted code can coexist with any other code in the <Head> section; it’s of no consequence.

An issue arises if you don’t see the <Head> section, in which case you’ll need to create the section to embed the Search Console generated code so that Google can verify your site.

Save your work and come back to the homepage to view the source code; the console verification code should be clearly visible in the <Head> section confirming that you have done the embedding correctly.

Your next step is to navigate back to the console dashboard and click “Verify”.

At this stage, you’ll see one of two messages: a screen confirming that Google has verified the site, or a pop-up listing onsite errors that need to be rectified before verification can be completed. By following these steps, you allow Google to confirm your ownership of the site. It’s important to remember that once the Google Search Console code has been embedded on the site and verified, any attempt to tamper with or remove the code will undo all the good work, leaving your site in limbo.

Getting Google Search Console to verify a WordPress website using an HTML tag

Even if you have a WordPress site, there’s no escape from the verification protocol if you want to link the site to reap the benefits of GSC.

Assuming that you’ve come through the stage of adding your site to GSC as a new property, this is what you do.

The WordPress SEO plugin by Yoast is widely acknowledged to be an awesome SEO solution tailor-made for WordPress websites. Installing and activating the plugin gives you a conduit to the Google Search Console.

Once Yoast is activated, open the Google Search Console verification page, and click the “Alternate methods” tab to get to the HTML tag.

You’ll see a central box highlighting a meta tag, with certain instructions appearing above the box. Ignore these instructions; select and copy only the code at the end of the tag (not the whole tag).

Screenshot of verifying a WordPress website using Yoast

Now go back to your WordPress dashboard and click through SEO > Dashboard. In the new screen, clicking “Webmaster tools” opens the “Webmaster tools verification” window. The window displays three boxes; make sure to paste the previously copied HTML code into the Google Search Console box, and save the changes.

Now, all you have to do is return to Google Search Console and click “Verify”, upon which the console will confirm that verification was successful. You are now ready to use GSC on your WordPress site.

Option two

How to upload an HTML file to help Google verify ownership

This is your second verification option. Once you’re in Google Search Console, proceed from “Manage site” to “Verify this site” to locate the “HTML file upload” option. If you don’t find the option under the recommended method, try the “Other verification methods”.

Screenshot of uploading an HTML file to help Google verify ownership

Once you’re there, you’ll be prompted to download an HTML file, which must be uploaded to its specified location. If you change the file in any manner, Search Console won’t be able to verify the site, so take care to maintain the integrity of the download.

Once the HTML file has been uploaded, go back to the Search Console panel and click “Verify”. If everything has been uploaded correctly, you will see a page letting you know that the site has been verified.

Once again, as with the first option, don’t modify or delete the HTML file, as doing so will return the site to unverified status.

Option three

Using the Google Tag Manager route for site verification

Before you venture into Google Search Console, you might find it useful to get the hang of Google Tag Manager (GTM). It’s a free tool that helps you manage and deploy marketing and analytics tags on your website or app.

You’ll find that GTM doubles as a useful tool for simplifying site verification for Google Search Console. If you intend to use GTM for site verification, there are two precautions to take. First, open your GTM account and make sure you have “View, Edit, and Manage” permissions.

Second, ensure that the GTM container snippet sits immediately after the opening <body> tag in your HTML code.
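For context, a standard GTM installation places a script snippet high in the <head> and a <noscript> fallback immediately after the opening <body> tag, roughly as sketched below; GTM-XXXXXXX stands in for your own container ID, and the actual snippets should be copied from your GTM account:

```html
<head>
  <!-- Google Tag Manager script snippet copied from your GTM account goes here -->
</head>
<body>
  <!-- Google Tag Manager (noscript); the container ID below is a placeholder -->
  <noscript>
    <iframe src="https://www.googletagmanager.com/ns.html?id=GTM-XXXXXXX"
            height="0" width="0" style="display:none;visibility:hidden"></iframe>
  </noscript>
  <!-- rest of the page -->
</body>
```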

Once you’re done with these simple steps, go back to GSC and follow this route: Manage site > Verify this site > Google Tag Manager. Click “Verify” under the Google Tag Manager option, and you should get a message indicating that the site has been verified.

Pop up screenshot of site verification using Google Tag Manager

Once again, as with the previous options, never modify or remove the GTM code on your site, as that may return the site to unverified status.

Option four

Verifying ownership through your domain name provider

Beyond HTML tagging and file uploads, Google also lets you verify through the provider you purchased your domain from, or the server where your domain is hosted, to prove that you are the owner of the domain and all of its subdomains and directories.

Open the Search Console dashboard and zero in on the “Verify this site” option under “Manage site”.

You should be able to locate the “Domain name provider” option under either the “Recommended method” or the “Alternate methods” tab. Once you select “Domain name provider”, you’ll see a list of domain hosts that Google provides for easy reference.

Screenshot of securing yourself as the domain name provider

At this stage, you have two options. If your host appears in the list, select it and follow the instructions Google displays.

If your host doesn’t show up in the list, click the “Other” tab to receive guidelines on creating a DNS TXT record for your domain provider. In some instances, a DNS TXT record may not work with your provider; if that’s your situation, create a CNAME record customized for your provider instead.
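As a rough sketch of what that record looks like at a typical DNS provider (the token is a placeholder; Search Console generates the exact value for you):

```
Type:  TXT
Host:  @   (the root of yourwebsite.com)
Value: google-site-verification=abc123placeholdertoken
TTL:   3600
```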

3. Integrating the Google Analytics code on your site

If you’re new to Google Analytics (GA), this is a good time to get to know this free tool. It gives you detailed feedback that adds teeth to your digital marketing campaigns.

At a glance, GA helps you gather and analyze key website parameters that affect your business. It tracks the number of visitors converging on your domain, the time they spend browsing your pages, and the specific keywords in your site that are most popular with incoming traffic.

Most of all, GA gives you a fairly comprehensive idea of how efficiently your sales funnel is attracting leads and converting customers. The first thing you need to do is verify that the website has the GA tracking code inserted in the <head> section of the homepage’s HTML. For the GA code to carry out its tracking functions correctly, make sure it is placed in the <head> section and not elsewhere, such as in the <body>.
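As a minimal sketch, the standard gtag.js snippet sits in the <head> and looks something like this; GA_MEASUREMENT_ID is a placeholder for the ID Analytics generates for your property:

```html
<head>
  <!-- Google Analytics (gtag.js); GA_MEASUREMENT_ID is a placeholder -->
  <script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
  <script>
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());
    gtag('config', 'GA_MEASUREMENT_ID');
  </script>
</head>
```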

Back in Google Search Console, follow the path Manage site > Verify this site until you come to “Google Analytics tracking code”, and follow the guidelines displayed. Once you get an acknowledgment that the GA code is verified, refrain from making any changes to the code to prevent the site from reverting to unverified status.

Google Analytics vs. Google Search Console – Knowing the difference and appreciating the benefits

For a newbie, both Google Analytics and Google Search Console appear like they’re focused on the same tasks and selling the same pitch, but nothing could be further from the truth.

Read also: An SEO’s guide to Google Analytics

GA’s unrelenting focus is on the traffic that your site is attracting. GA tells you how many people visit your site, the kind of platform or app they’re using to reach you, the geographical source of the incoming traffic, how much time each visitor spends browsing what you offer, and which are the most searched keywords on your site.

While GA gives you an in-depth analysis of the efficiency (or otherwise) of your marketing campaigns and customer conversion pitch, Google Search Console peeps under the hood of your website to show you how technically sound it is in meeting the challenges of the internet.

GSC provides insider information on questions like these:

  • Are there issues blocking the Google search bot from crawling?
  • Are website modifications being instantly indexed?
  • Who links to you and which are your top-linked pages?
  • Is there malware or some other cyber threat that needs to be quarantined and neutralized?
  • Is your keyword strategy optimized to fulfill searcher intent?

GSC also opens a window onto manual actions, if any, issued against your site by Google for perceived non-compliance with the Webmaster guidelines.

If you open the manual actions report in the Search Console message center and see a green check mark, consider yourself safe. But if there’s a listing of non-compliances, you’ll need to fix either the individual pages or sometimes the whole website and place the matter before Google for a review.

Screenshot of a complying site on Google Search Console

Manual actions must be looked into, because failure to respond places your pages in danger of being omitted from Google’s search results. Sometimes your site may attract a manual action through no fault of your own, such as a spammy backlink that violates the Webmaster quality guidelines and which you can’t remove.

Screenshot of a site non-compliance on Google Search Console

In such instances, you can use the disavow links tool page in the console to upload a text file listing the affected URLs.

If approved, Google will recrawl the site and reprocess the search results pages to reflect the change. In short, GA is more invested in the kind of traffic you’re attracting and converting, while GSC shows you how technically accomplished your site is in responding to searches and delivering a quality user experience.

Packing power and performance by combining Google Analytics and Google Search Console

You could treat GA and GSC as two distinct sources of information and analyze their reports separately, and the world would keep on turning.

But it’s worth remembering that the two tools present information in very different formats, even in areas where they overlap. It follows that integrating them gives you additional analytical reports that you’d otherwise miss; reports that go the extra mile in providing the design and marketing inputs that lay the foundation for great marketing strategies.

Assuming you’re convinced of the need for combining GA and GSC, this is what you do.

Open Google Search Console, navigate to the gear icon, and click the “Google Analytics Property” tab.

Screenshot of how to combine Google Analytics and Google Search Console

This shows you a list of all the GA accounts associated with your Google account.

Hit the save button on all the accounts that you’ll be focusing on, and with that small step, you’re primed to extract maximum juice from the excellent analytical reporting of the GA-GSC combo.

Just remember to carry out this step only after the website has been verified by Google, following the steps we outlined earlier.

What should you do with Google Search Console?

1. How to create and submit a sitemap to Google Search Console

Is it practical to hand over the keys to your home (website) to Google and expect Google to navigate the rooms (webpages) without assistance?

You can help Google bots do a better job of crawling the site by submitting the site’s navigational blueprint or sitemap.

The sitemap is your way of showing Google how information is organized across your webpages. You can also include valuable metadata: information about textual content, images, videos, and podcasts, and even the frequency with which each page is updated.

We’re not implying that a sitemap is mandatory for Google Search Console, and you’re not going to be penalized if you don’t submit the sitemap.

But it is in your interest to ensure that Google has access to all the information it needs to do its job and improve your visibility in search, and a sitemap makes that job easier. It works especially in your favor when you’re submitting a sitemap for an extensive website with many pages and subcategories.

For starters, decide which web pages you want Google’s bots to crawl, and then specify the canonical version of each page.

What this means is that you’re telling Google to crawl the original version of any page to the exclusion of all other versions.
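In practice, the canonical version is usually declared with a link element in each page’s <head>; a minimal sketch, with a placeholder URL:

```html
<head>
  <!-- Points duplicate or parameterized variants of this page at the preferred URL -->
  <link rel="canonical" href="https://www.yourwebsite.com/preferred-page/" />
</head>
```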

Then create a sitemap either manually or using a third-party tool.
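A minimal XML sitemap looks something like the sketch below (URLs and dates are placeholders); most CMS plugins and third-party generators produce this format for you:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.yourwebsite.com/</loc>
    <lastmod>2019-07-01</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>https://www.yourwebsite.com/category/sample-page/</loc>
    <lastmod>2019-06-15</lastmod>
  </url>
</urlset>
```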

At this stage, you have the option of referencing the sitemap in your robots.txt file or submitting it directly in Search Console.

Read also: Robots.txt best practice guide + examples

Assuming you’ve taken the trouble to get the site verified in GSC, go back to the Search Console and navigate to “Crawl” and its subcategory “Sitemaps.”

On clicking “Sitemaps”, you will see an “Add a new sitemap” field. Enter the URL of your sitemap (in .xml format) and then click “Submit”.

Screenshot of adding a site map

With these simple steps, you’ve effectively submitted your sitemap to Google Search Console.

2. How to modify your robots.txt file so search engine bots can crawl efficiently

There’s a file on your website that doesn’t come up too often in SEO circles, yet minor tweaking of this file has major SEO-boosting potential. It’s virtually a can of high-potency SEO juice that a lot of people ignore and very few open.

It’s called the robots exclusion protocol or standard. If that freaks you out, we’ll keep it simple and call it the robots.txt file.

Even without technical expertise, you can find this file by visiting www.yourwebsite.com/robots.txt in your browser.

The robots.txt is your website’s point of contact with search engine bots.

Before turning to your webpages, a search bot will peep into this text file to see whether there are any instructions about which pages should be crawled and which can be ignored (that’s why it helps to reference your sitemap here).

The bot will follow the robots exclusion protocol set out in your file, respecting which pages are allowed for crawling and which are disallowed. This is your site’s way of guiding search engines to pages you wish to highlight and of excluding content you don’t want crawled.

There’s no guarantee that robots.txt instructions will be followed by bots, because bots designed for specific jobs may react differently to the same set of instructions. Also, the system doesn’t block other websites from linking to your content even if you wouldn’t want the content indexed.
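As a rough sketch (the paths are placeholders to adapt to your own site), a simple robots.txt might look like this, with the Sitemap line serving as the reference mentioned above:

```
# Placeholder example; adapt the paths to your own site
User-agent: *
Disallow: /admin/
Disallow: /cart/
Allow: /

Sitemap: https://www.yourwebsite.com/sitemap.xml
```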

Before proceeding further, please ensure that you’ve already verified the site; then open the GSC dashboard and click the “Crawl” tab to proceed to “robots.txt Tester.”

Screenshot of robots.txt Tester

This tool enables you to do three things:

  • Peep into the robots.txt file to see which actions are currently allowed or disallowed
  • Check if there are any crawl errors in the past 90 days
  • Make changes to suit your desired mode of interacting with search bots

Once you’ve made the necessary changes, it’s vital that the robots.txt file on your server reflects those changes immediately.

To do that, shortly after making changes, click the “Submit” button below the editing box in the Search Console and upload the changed file to your site’s root directory. The live file should then be reachable at www.yourwebsite.com/robots.txt.

To confirm that you’ve completed the mission, go back to the Search Console’s robots.txt testing tool and click “Verify live version”, after which you should get a message confirming the modification.

3. How to use the “Fetch as Google” option to update regular website changes

On-page content and title tags undergo regular changes over a website’s life cycle, and it can take a while for Google to record and reflect those changes on its own. Fortunately, GSC offers a solution.

Once you’ve located the page that needs a change or update, open the Search Console, go to “Crawl” and zero in on the “Fetch as Google” option. You’ll see a blank URL box in the center.

Screenshot of how to use the “Fetch as Google” in Google Search Console

Enter the URL of the modified page in the box (for example, http://yourwebsite.com/specificcategory), then click “Fetch and Render.”

After completing this step, click the “Request indexing” button and consider the options before you.

What you have just done is ask Googlebot to recrawl and index the changes you’ve made; within a couple of days, the changes should become visible in Google’s search results.

4. How to use Google Search Console to identify and locate site errors

A site error is a technical malfunction which prevents Google search bots from indexing your site correctly.

Naturally, when your site is wrongly configured or slowing down, you are creating a barrier between the site and search engines, which keeps content from appearing in top search results.

Even if you only suspect that something is wrong with your site, you can’t afford to wait for the error to surface on its own; by then, the damage will already be done.

So you turn to Google Search Console for instant troubleshooting. With GSC, you get a tool that keeps you notified of errors that creep into your website.

When you’ve opened the Google Search Console, you’ll see the “Crawl” tab appearing on the left side of the screen. Click the tab and open “Crawl errors”.

Screenshot of crawling and identifying site errors in Google Search Console

What you see now is a listing of all the page errors that Google’s bots encountered while indexing the site. The pop-up will tell you when the page was last crawled and when the first error was detected, followed by a brief description of the error.

Once the error is identified, you can hand the problem over to your in-house webmaster for rectification.

When you click the “Crawl” tab, you’ll also find “Crawl stats.” This is your gateway to graphs showing all the pages crawled in the previous 90 days, the kilobytes downloaded during that period, and how much time it took Google to access and download a page. These stats give a fair indication of your website’s speed and user-friendliness.

Conclusion

A virtual galaxy of webmasters, SEO specialists, and digital marketing honchos would give their right arm and left leg for tools that empower SEO and bring in more customers that’ll ring the cash registers. Tools with all the bells and whistles are within reach, but many of them will make you pay hefty fees to access benefits.

But here’s a tool that’s within easy reach, one that promises a lot and delivers on it without costing you a dollar, and yet, paradoxically, very few people use it.

GSC is every designer’s dream come true, every SEO expert’s plan B, every digital marketer’s Holy Grail when it comes to SEO and the art of website maintenance.

What you gain using GSC are invaluable insights useful in propelling effective organic SEO strategies, and a tool that packs a punch when used in conjunction with Google Analytics.

When you fire the double-barreled gun of Google Analytics and Google Search Console, you can aim for higher search engine rankings and boost traffic to your site, traffic that converts to paying customers.

Dmitriy Shelepin is an SEO expert and co-founder of Miromind.

The post Complete guide to Google Search Console appeared first on Search Engine Watch.

Search Engine Watch


Google Dataset Search: How you can use it for SEO

April 2, 2019 No Comments

Back in September 2018, Google launched its Dataset Search tool, an engine focused on delivering results from hard data sources (research, reports, graphs, tables, and others) more efficiently than standard Google Search currently does.

The service promises to enable easy access to the internet’s treasure trove of data. As Google’s Natasha Noy says,

“Scientists, data journalists, data geeks, or anyone else can find the data required for their work and their stories, or simply to satisfy their intellectual curiosity.”

For SEOs, it certainly has potential as a new research tool for creating our own informative, trustworthy, and useful content. But what of its prospects as a place to be visible, or as a ranking signal itself?

Google Dataset Search: As a research tool

As a writer who has been using Google to search for data for about a decade, I’d agree that finding hard statistics through search engines is not always straightforward.

Often, data which isn’t the most recent ranks better than newer research. This makes sense from an SEO perspective: content published months or years prior has had a long time to earn authority and traffic. But I usually need the freshest stats, and even when a search result points to a recently published page, that doesn’t necessarily mean the data contained in that page is from that date.

Additionally, big publications (think news sites like the BBC) frequently rank better than the domain where the data was originally published. Again, this is unsurprising in the context of search engines. The BBC et al. have far more traffic, authority, inbound links, and changing content than most research websites, even .gov sites. But that doesn’t mean the user looking for hard data wants to see the BBC’s representation of that data.

Another key issue we find when researching hard data on Google concerns access to content. All too regularly, after a bit of browsing in the SERPs I find myself clicking through only to find that the report with the data I need is behind a paywall. How annoying.

On the surface, Google Dataset Search sets out to solve these issues.

Example of Google Dataset Search result

A quick search for “daily weather” (Google seems keen to use this kind of .gov data to exemplify the usefulness of the tool) shows how the service differs from a typical search at Google.com.

Results rank down the left-hand side of the page, with the rest of the SERP real estate given over to more information about whichever result you have highlighted (position one is the default). This description portion of the page includes:

  • Key URL links to landing pages
  • Key dates such as the time period the data covers, when the dataset was last updated and/or when it was first published
  • Who provides the data
  • The license for the data
  • Its relevant geolocation
  • A description of what the data is

By comparison, a search for the same keyphrase on Google in incognito mode prioritizes results for weather forecasts from Accuweather, the BBC, and the Met Office. So to have a search engine which focuses on pure, recorded data, is immediately useful.

Most results (though not all) make it clear to the user as to when the data is from and what the original source is. And by virtue of the source being included in the Dataset Search SERPs, we can be quite sure that a click through to the site will provide us access to the data we need.

Google Dataset Search: As a place to increase your visibility

As detailed on Google’s launch post for the service, Dataset Search is dependent on webmasters marking up their datasets with the Schema.org vocabulary.

Broadly speaking, Schema.org is a standardized vocabulary that lets developers make information on their websites easy to crawl and understandable by search engines. SEOs might be familiar with it if they have marked up video content or other non-text objects on their sites, or if they have sought to optimize their business for local search.

There are ample guidelines and sources to assist you with dataset markup (Schema.org homepage, Schema.org dataset markup list, Google’s reference on dataset markup, and Google’s webmaster forum are all very useful). I would argue that if you are lucky enough to produce original data, it is absolutely worth considering making it crawlable and accessible for Google.
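To give a flavor of what dataset markup involves, here’s a minimal JSON-LD sketch using the Schema.org Dataset type; every name, URL, and date is a placeholder, and Google’s dataset documentation lists the full set of recommended properties:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Example daily weather observations",
  "description": "Placeholder description of what the dataset contains and how it was collected.",
  "url": "https://www.example.com/datasets/daily-weather",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "temporalCoverage": "2018-01-01/2018-12-31",
  "creator": {
    "@type": "Organization",
    "name": "Example Research Institute"
  },
  "distribution": {
    "@type": "DataDownload",
    "encodingFormat": "CSV",
    "contentUrl": "https://www.example.com/datasets/daily-weather.csv"
  }
}
</script>
```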

If you are thinking about it, I’d also argue that it is important to start ranking in Google Dataset Search now. Traffic to the service might not be massive currently, but the competition to start ranking well is only going to get more difficult. The more webmasters and developers see potential in the service, the more it will be used.

Additionally, dataset markup will not only benefit your ranking in Dataset Search; it will also increase your visibility for relevant data-centric queries in Google itself. That’s an important point as we see tables and stats incorporated more frequently and more intuitively into elements of the SERPs such as the Knowledge Graph.

In short:

  • Getting the most out of your data is straightforward.
  • The sooner you do, the more likely you are to have a head-start on visibility in Dataset Search before your competitors.
  • And it is good practice for visibility in increasingly data-intuitive everyday search.

Google Dataset Search: As a ranking signal

There is a good reason to believe that being indexed in Dataset Search will be a ranking signal in its own right.

Google Scholar, which indexes scholarly literature such as journals and books, has been noted by Google to provide a valuable signal about the importance and prominence of a dataset.

With that in mind, it makes sense to think that a well-optimized dataset with clear markup, appearing in Dataset Search, would send a strong signal to Google that the site is a trusted authority as a source of that type of data.

Thoughts for the future

It is early days for Google Dataset Search. But for SEO, the service is certainly already showing its potential.

As a research tool, its usefulness really depends on the community of research houses marking up their data for the benefit of the ecosystem. I expect the number of contributors to the service will grow quickly, making for a diverse and comprehensive data tool.

I also expect that the SERPs may change considerably. They certainly work better for these kinds of queries than Google’s normal search pages. But I had some bugbears. For example, which URL am I expected to click on if a search result has more than one? Can’t all results have publication dates and the time period the data covers? Could we see images of graphs/tables in the SERPs?

But when it comes to potential as a place for visibility and a ranking signal, if you are a business that collects data and research (or you are thinking about producing this type of content), now is the time to ensure your datasets are marked up with Schema.org to beat your competitors in ranking on Google Dataset Search. This dataset best practice will also stand you in good stead as Google’s main search engine gets increasingly savvy with how it presents the world’s data.

Luke Richards is a writer for Search Engine Watch and ClickZ. You can follow Luke on Twitter at @myyada.

The post Google Dataset Search: How you can use it for SEO appeared first on Search Engine Watch.

Search Engine Watch


Scroll, Scale, Save: Using SEO Data for Smarter Search Query Analysis

March 29, 2019 No Comments

There is a ton of information on search pages that can tell us why terms aren’t converting. Here are features to use for smarter Search Query Analyses using User Intent.

Read more at PPCHero.com
PPC Hero


Diagnose High CPL In PPC Search

March 17, 2019 No Comments

There are multiple ways of diagnosing a high CPL, however, these are a few quick and relatively easy ways to figure out why your costs are rising.

Read more at PPCHero.com
PPC Hero


Context Clusters in Search Query Suggestions

February 19, 2019 No Comments

Saketh Garuda

Context Clusters and Query Suggestions at Google

A new patent application from Google tells us about how the search engine may use context to find query suggestions before a searcher has completed typing in a full query. After seeing this patent, I’ve been thinking about previous patents I’ve seen from Google that have similarities.

It’s not the first time I’ve written about a Google patent involving query suggestions; I’ve written about a couple of other very informative patents in the past.

In both of those, the inclusion of entities in a query impacted the suggestions that were returned. This patent takes a slightly different approach, by also looking at context.

Context Clusters in Query Suggestions

We’ve been seeing the word “context” spring up in Google patents recently: context terms from knowledge bases appearing on pages that focus on the same query term with different meanings, and pages about specific people using a disambiguation approach. While those are recent, I did blog about a 2007 paper that talks about query context, with an author from Yahoo. The paper was Using Query Contexts in Information Retrieval, and its abstract provides a good glimpse into what it covers:

User query is an element that specifies an information need, but it is not the only one. Studies in literature have found many contextual factors that strongly influence the interpretation of a query. Recent studies have tried to consider the user’s interests by creating a user profile. However, a single profile for a user may not be sufficient for a variety of queries of the user. In this study, we propose to use query-specific contexts instead of user-centric ones, including context around query and context within query. The former specifies the environment of a query such as the domain of interest, while the latter refers to context words within the query, which is particularly useful for the selection of relevant term relations. In this paper, both types of context are integrated in an IR model based on language modeling. Our experiments on several TREC collections show that each of the context factors brings significant improvements in retrieval effectiveness.

The Google patent doesn’t take a user-based approach either, but it does look at some user contexts and interests. It sounds like searchers might be offered a chance to select a context cluster before query suggestions are shown:

In some implementations, a set of queries (e.g., movie times, movie trailers) related to a particular topic (e.g., movies) may be grouped into context clusters. Given a context of a user device for a user, one or more context clusters may be presented to the user when the user is initiating a search operation, but prior to the user inputting one or more characters of the search query. For example, based on a user’s context (e.g., location, date and time, indicated user preferences and interests), when a user event occurs indicating the user is initiating a process of providing a search query (e.g., opening a web page associated with a search engine), one or more context clusters (e.g., “movies”) may be presented to the user for selection input prior to the user entering any query input. The user may select one of the context clusters that are presented and then a list of queries grouped into the context cluster may be presented as options for a query input selection.

I often look up the inventors of patents to get a sense of what else they may have written and worked on. I looked up Jakob D. Uszkoreit on LinkedIn, and his profile doesn’t surprise me. He tells us there of his experience at Google:

Previously I started and led a research team in Google Machine Intelligence, working on large-scale deep learning for natural language understanding, with applications in the Google Assistant and other products.

This passage reminded me of the search results shown to me by the Google Assistant, which are based on interests I have shared with Google over time and that Google allows me to update from time to time. If the inventor of this patent worked on Google Assistant, that doesn’t surprise me. I haven’t been offered context clusters yet, and I wouldn’t know what they might look like if Google did offer them; I suspect that if Google does start offering them, I’ll recognize them when they appear.

Like many patents do, this one tells us what is “innovative” about it. It looks at:

…query data indicating query inputs received from user devices of a plurality of users, the query data also indicating an input context that describes, for each query input, an input context of the query input that is different from content described by the query input; grouping, by the data processing apparatus, the query inputs into context clusters based, in part, on the input context for each of the query inputs and the content described by each query input; determining, by the data processing apparatus, for each of the context clusters, a context cluster probability based on respective probabilities of entry of the query inputs that belong to the context cluster, the context cluster probability being indicative of a probability that at least one query input that belongs to the context cluster and provided for an input context of the context cluster will be selected by the user; and storing, in a data storage system accessible by the data processing apparatus, data describing the context clusters and the context cluster probabilities.

It also tells us that it will calculate probabilities that certain context clusters might be requested by a searcher. So how does Google know what to suggest as context clusters?

Each context cluster includes a group of one or more queries, the grouping being based on the input context (e.g., location, date and time, indicated user preferences and interests) for each of the query inputs, when the query input was provided, and the content described by each query input. One or more context clusters may be presented to the user for input selection based on a context cluster probability, which is based on the context of the user device and respective probabilities of entry of the query inputs that belong to the context cluster. The context cluster probability is indicative of a probability that at least one query input that belongs to the context cluster will be selected by the user. Upon selection of one of the context clusters that is presented to the user, a list of queries grouped into the context cluster may be presented as options for a query input selection. This advantageously results in individual query suggestions for query inputs that belong to the context cluster but that alone would not otherwise be provided due to their respectively low individual selection probabilities. Accordingly, users’ informational needs are more likely to be satisfied.

The patent application is:

(US20190050450) Query Composition System
Publication Number: 20190050450
Publication Date: February 14, 2019
Applicants: Google LLC
Inventors: Jakob D. Uszkoreit
Abstract:

Methods, systems, and apparatus for generating data describing context clusters and context cluster probabilities, wherein each context cluster includes query inputs based on the input context for each of the query inputs and the content described by each query input, and each context cluster probability indicates a probability that at a query input that belongs to the context cluster will be selected by the user, receiving, from a user device, an indication of a user event that includes data indicating a context of the user device, selecting as a selected context cluster, based on the context cluster probabilities for each of the context clusters and the context of the user device, a context cluster for selection input by the user device, and providing, to the user device, data that causes the user device to display a context cluster selection input that indicates the selected context cluster for user selection.

What are Context Clusters as Query Suggestions?

The patent tells us that context clusters might be triggered when someone is starting a query on a web browser. I tried it out, starting a search for “movies” and got a number of suggestions that were combinations of queries, or what seem to be context clusters:

The patent says that context clusters would appear before someone began typing, based upon topics and user information such as location. So, if I were at a shopping mall that had a movie theatre, I might see Search suggestions for movies like the ones shown here:

Context Clusters

One of those clusters involved “Movies about Business”, which I selected, and it showed me a carousel, and buttons with subcategories to also choose from. This seems to be a context cluster:

Movies about Business

This seems to be a pretty new idea, and may be something Google would announce as an available option if and when it launches, much as they did with the Google Assistant. I usually check through the news from my Google Assistant at least once a day; if it starts offering search suggestions based on things like my location, that could be very interesting.

User Query Histories

The patent tells us that context clusters selected to be shown to a searcher might be based upon previous queries from a searcher, and provides the following example:

Further, a user query history may be provided by the user device (or stored in the log data) that includes queries and contexts previously provided by the user, and this information may also factor into the probability that a user may provide a particular query or a query within a particular context cluster. For example, if the user that initiates the user event provides a query for “movie show times” many Friday afternoons between 4 PM-6 PM, then when the user initiates the user event on a Friday afternoon in the future between these times, the probability associated with the user inputting “movie show times” may be boosted for that user. Consequentially, based on this example, the corresponding context cluster probability of the context cluster to which the query belongs may likewise be boosted with respect to that user.

It’s not easy to tell whether the examples I provided about movies above are related to this patent, or whether it is tied more closely to the results that appear in Google Assistant. It’s worth reading through and thinking about experimental searches to see whether they might influence the results you see. It is interesting that Google may attempt to anticipate what it suggests to us as query suggestions, after showing us search results based on what it believes our interests are, drawn from searches we have performed or interests we have identified for Google Assistant.

The context cluster may be related to the location and time at which someone accesses the search engine. The patent provides an example of what a searcher might see:

In the current example, the user may be in the location of MegaPlex, which includes a department store, restaurants, and a movie theater. Additionally, the user context may indicate that the user event was initiated on a Friday evening at 6 PM. Upon the user initiating the user event, the search system and/or context cluster system may access the content cluster data 214 to determine whether one or more context clusters is to be provided to the user device as an input selection based at least in part on the context of the user. Based on the context of the user, the context cluster system and/or search system may determine, for each query in each context cluster, a probability that the user will provide that query and aggregate the probability for the context cluster to obtain a context cluster probability.

In the current example, there may be four queries grouped into the “Movies” cluster, four queries grouped into the “Restaurants” cluster, and three queries grouped into the “Dept. Store” cluster. Based on the analysis of the content cluster data, the context cluster system may determine that the aggregate probability of the queries in each of the “Movies” cluster, “Restaurant” cluster, and “Dept. Store” cluster have a high enough likelihood (e.g., meet a threshold probability) to be input by the user, based on the user context, that the context clusters are to be presented to the user for selection input in the search engine web site.

I could see running such a search at a shopping mall, to learn more about the location I was at, and what I could find there, from dining places to movies being shown. That sounds like it could be the start of an interesting adventure.



The post Context Clusters in Search Query Suggestions appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓