How changing domains challenge SEO

September 21, 2019

An insight into the relationship between your domain and SEO, and how a domain change impacts your digital footprint.

When you are running a website, the digital footprint you build plays an imperative role in how your website fares among search engines. Be it a blog, online store, or video stream, the goals are more or less the same – get traffic, create awareness, and generate conversions. To achieve this, webmasters spend vast amounts of their time increasing their organic web traffic, improving conversion volumes, and practicing different SEO techniques to enhance the visibility of their brand.

At some point on this online journey, there may come a time when you feel your website needs to achieve certain milestones in order to grow and establish a stronger position in the market. During this phase, you may come across the decision of changing your domain name to re-brand for expansion, enter a more popular TLD, or increase your digital marketing potential.

What happens when you switch domains?

Unfortunately, moving domains can do a serious number on your search engine rankings. Not just because your brand virtually disappears for a short while, but because top search engines such as Google determine rankings through metrics based on both the domain level and the page level. When you switch to a new domain, you essentially reset these domain metrics back to zero. Fortunately, there are ways to minimize the damage, and if you are careful, you can effectively negate the effects of moving to a new domain.

In this article, we take the opportunity to explain why website owners change their domains, and the immediate and long-term effects a domain change can have on their SEO. We have also added a mini-guide that walks you through migrating your old domain to the new one.

Why do webmasters consider changing their domain?

In the vast majority of situations, website owners refrain from changing their domain name. However, some conditions may require webmasters to transition to a new domain name in order to gain certain benefits that come with it. Here are some of the reasons why.

1. They don’t like the domain name

Often, website owners want a change in look or positioning to increase relevance to their business and distinguish their position in the marketplace.

2. The domain does not perform well

Perhaps the domain name has failed to accumulate the organic traffic volume or positive feedback that was expected, preventing webmasters from achieving their business goals.

3. Change in business

Many online businesses experience a transition in their business model, go through an acquisition, change business activity, or switch industries. This can make the current domain irrelevant to, or inconsistent with, the current status of the business.

4. You want a better top-level domain (TLD)

Webmasters sometimes register with a lesser-known domain extension because their desired one is unavailable at the time. Once the opportunity presents itself, they may want to move to a more mainstream TLD that suits their business and brand presence.

What challenges does changing domains bring on SEO?

SEO is a key determinant in this entire process and counts as one of the central pieces in the digital marketing toolbox. So, while we know that moving to a new domain directly affects your SEO efforts, the question is how far it impacts your website’s SEO performance, and in what ways it deters its progress.

For new website owners, it may take time to properly understand the implications of changing a domain name. For instance, if you are moving to a new domain name and you have sold your old one, you will lose all the link equity you had built over the life of the old domain. This means your organic traffic takes a nosedive and your domain authority begins to diminish.

Moreover, failing to implement proper redirects during the migration can result in an almost immediate loss of traffic. Once your live pages go away, the only thing greeting your visitors is the dreaded 404 page. It is also important to transfer your website’s search engine rankings to the new domain in a timely manner, since Google’s metrics will start the new domain off with zero visibility. Another major challenge is content duplication: your site may already have canonicalization issues, and these can resurface at the domain level during a migration, exacerbating duplicate content problems. Implementing canonical URLs helps you resolve them.
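
For reference, a canonical URL is declared with a link element in the page’s head section – for example, <link rel="canonical" href="https://www.example.com/page/"> – which tells search engines which version of a duplicated page should be indexed.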

Website owners who care about their online presence will never risk losing the link value and quality scores they have earned so tirelessly. Therefore, the best course of action during a domain migration is to adhere strictly to Google’s migration guidelines and track every change with proper tools so you remain informed about your website’s progress. Sometimes even the slightest change in direction can lead to several undesired outcomes further down the line.

SEO guide: How to properly move domains

Here, your main objective is to effectively redirect all of your pages to an entirely different domain. This guide, based on advice from Moz, takes you through a step-by-step action plan for doing it professionally while preserving your rankings. Moz itself underwent a re-branding phase when it changed its domain name from SEOmoz to Moz, so there is a lot here that you can learn from the SEO behemoth.

  • Your old domain will need a sitemap, so your first step is to create one
  • Develop content for the new domain, such as a description of your company, an “About us” page, contact information, a mission statement, and other basics that can be easily linked
  • Once you have set up the new domain, it’s time to make it live
  • Go to Google Webmaster Tools and register and verify both your new domain and your old domain
  • To inform your visitors about the transition, set up a custom 404 page on the old domain that tells them about the move and points them to the new domain
  • Test the redirects from the old domain to the new one in a development environment (these should be 1:1 redirects)
  • Now implement a 301 redirect from your old domain to the new domain (a minimal example follows this list)
  • To allow search engines to crawl your old URLs, submit the old sitemap to Google and Bing so they can update their indexes (the submission pages are within Bing Webmaster Center and Google Webmaster Tools)
  • In Google Webmaster Tools, complete the “Change Address” form
  • To make sure all URLs get verified by Google and Bing, create a new sitemap and submit it to the search engines
  • Google Webmaster Tools and Bing Webmaster Center will run diagnostics on the sitemap; fix any errors they report
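
To make the 1:1 redirect step concrete, here is a minimal sketch of a catch-all 301 redirect written in Python with Flask. The domain name is a placeholder, and in practice this rule more often lives in your web server or CDN configuration – treat this as an illustration of the pattern rather than the only way to do it.

    from flask import Flask, redirect, request

    app = Flask(__name__)

    NEW_DOMAIN = "https://www.new-domain.example"  # placeholder for your new domain

    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>")
    def redirect_to_new_domain(path):
        # Preserve the path (and query string) so every old URL maps
        # 1:1 to its counterpart on the new domain.
        target = f"{NEW_DOMAIN}/{path}"
        if request.query_string:
            target += "?" + request.query_string.decode("utf-8")
        return redirect(target, code=301)  # 301 = permanent redirect

    if __name__ == "__main__":
        app.run()

The two details that matter are the 301 status code, which tells search engines the move is permanent so link equity is passed along, and the 1:1 mapping, which sends each old URL to its direct equivalent rather than dumping everything on the new homepage.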

Now you are good to go. To make sure your new domain is stable and properly indexed, you must monitor the search engine results.

Don’t forget to drop a comment with your queries about changing domains.

Zeeshan Khalid is a web entrepreneur and an eCommerce specialist, and the CEO of FME extensions.



Interview with Tony Uphoff: Digital Marketing for B2B Manufacturing Industry

September 19, 2019

Most of my clients are from B2B industrial manufacturing. This industry presents many challenges because my clients’ products and services are very specific, and their websites serve narrow niches.

I have developed new B2B SEO and PPC strategies through everyday hands-on experience managing multiple projects. In addition, there is another challenge the industry is facing: adapting to digital transformation.

I decided to talk about B2B with Tony Uphoff, the CEO of Thomas. Thomas is a leading resource for product sourcing and supplier selection. Tony is the video host of the popular “Thomas Index Report” on industrial sourcing trends, and he is a regular Forbes.com contributor who writes about the industrial marketplace.

I was curious to know what Tony thinks about the challenges that B2B manufacturing companies are facing when adapting to digital transformation and data-driven culture. I know that my fellow B2B marketers who are dealing with the same challenges will find a lot of value for themselves as well as B2B manufacturer business owners. We also spoke about SEO, KPIs, and lead generation in B2B. Here is my interview with Tony.

Karina: How do manufacturing and B2B advertising differ from wholesale and B2C advertising?

Tony: There are some key differences, but also some similarities that many people overlook. One difference is that B2B purchasing often involves a longer sales cycle. Buying a piece of capital equipment or choosing a new supplier is not something to be taken lightly. A lot of research and vetting goes into the process, as there is a material risk for the buyer, both personally and professionally.

Another key difference is that B2B buyers aren’t typically completing a one-off purchase. They’re looking to find a supplier they can partner with for the long term. As for the similarities, a B2B purchase is more personal for the buyer than many people understand. B2C purchases are often very personal because consumers identify with certain brands that they want to be associated with. B2B purchases are often similarly personal, because the buyer who makes the decision has a lot on the line.

Karina: Manufacturers have long relied on trade shows and other physical events for marketing and sales. Do you see this trend changing?

Tony: Yes. Many businesses understand that the digital transformation of industrial marketing and sales is here to stay, and they’re trending away from traditional methods such as trade shows and word-of-mouth exposure. There are still several well-attended mega-trade shows, as well as smaller ones, hosted every year, but we’re seeing that those types of events are typically taking up a smaller percentage of the marketing and sales budgets of the customers we work with.

Karina: Are U.S. manufacturers finding a greater need to make their marketing more data-driven?

Tony: Yes, because the buyer is in control of the sales process in today’s industrial world. Today’s buyers have unprecedented levels of information at their fingertips. Buyers are as much as 70 percent of the way through their buying process before they engage with a sales rep. This is a massive shift in the way businesses need to reach, engage and sell to industrial buyers thanks to the digital transformation of marketing and sales. Companies that still rely on old-school marketing tactics to try to drive growth and retain customers are going to find it increasingly difficult to stay relevant in today’s market.

Karina: How long do you think it will take for B2B to fully adapt to data-driven analytics and digital marketing?

Tony: The industry is still in the early stages of the digital transformation of marketing and sales. While we’re seeing a good number of businesses that are aggressively and enthusiastically embracing the transformation, we are also seeing a significant number of businesses that have yet to make a real commitment to a digital strategy. It may be a generational challenge as these incredibly successful industrial and manufacturing businesses were built and grown by Baby Boomers whose expertise is in engineering, product design, and manufacturing. Nearly half of the users of Thomasnet.com are millennial buyers who are helping to accelerate the digital transformation.

Karina: The industrial manufacturer’s market is very niche and faces big challenges in content marketing due to specialization and sometimes very low search volume results. How can content marketers take advantage of this?

Tony: For our customers with niche markets, the niche works to their advantage simply because there is less competition in their corner of the industry. From an SEO perspective, this makes it easier for them to stand out on result pages. There are, however, a huge number of categories in manufacturing that are not at all niche, and there’s massive competition in areas such as “CNC machining” and “metal stamping”. Whether in a highly competitive category or a niche category, we’ve learned from our customers that the pillar page strategy works well for overarching terms. Then we drive users to niche terms.

Karina: What is the right approach for digital marketers to run successful digital campaigns for the B2B Industrial manufacturer sector?

Tony: Getting their website in order is the foundation for everything else. Is it responsive? Is it secure? Is it easy to use, comprehensive, and informative? It’s also important to implement a program that reaches buyers at every phase of the industrial buying process. Understand that building brand awareness is often just as important as generating leads. In terms of strategy, it’s easy to get caught up in all the tactics and solutions, but while the vehicle is important, the most important thing industrial marketers need to keep in mind is that whatever they’re putting out there needs to resonate with a specific persona that has a specific job to do. Marketing and advertising content should be focused on helping your ideal customer(s) solve problems and accomplish important tasks, specific to where that buyer may be in their buying journey.

Karina: What does the future of publishing look like?

Tony: While it’s obvious that much of the publishing world is moving to digital platforms — if they haven’t already — a more relevant question is “What does the future of advertising look like?”. For years advertisers have relied on display networks, buying data on users and employing programmatic advertising. Not only has this proved to be quite costly and relatively ineffective, but privacy laws such as GDPR are making this approach obsolete. The trend today has publishers moving away from those broad ad-serving networks to the “walled garden” approach. A “walled garden” approach is one in which they’re creating their own ad networks and selling advertising directly on their online assets. Interestingly, this approach mirrors the ad sales approach that publishers in the print world have used for over a century.

Karina: What KPIs should B2B businesses focus on in marketing?

Tony: Obviously, lead generation in the form of marketing qualified leads and sales qualified leads are a key KPI for digital marketing. But as I mentioned earlier, it’s important to build brand awareness as well. The reason is simple – when your sales team calls a lead that has never heard of your company, just getting that lead to continue the conversation is a challenge. When the lead is aware of your brand before the salesperson calls, that person is more likely to be receptive to the call. Other important KPIs are the cost of acquisition and average order value – and internally, businesses should also focus on RFI/RFQ response time. We’ve surveyed tens of thousands of industrial buyers, and invariably one of their pet peeves is the lack of responsiveness from suppliers to which they’ve requested information. Today, all the great marketing in the world will have little value if you aren’t following up on incoming RFIs and RFQs within a day – and preferably the same day you receive them.

Karina: How is Thomasnet.com using data and analytics to add services that bring new elements of value to their advertisers?

Tony: The first-party data generated by users on Thomasnet.com®, as well as data that is captured by buyers interacting with customer product information generated by our Thomas Product Data Solutions and our Thomas Marketing Services, gives us incredible insights into in-market buyers of products and services. We’re approaching three petabytes of buyer behavior data that helps us understand what buyers are interested in, how their purchase process works and when, where and how they’re engaging with content as a part of their buying journey.

Using our free Thomas Webtrax™ platform, our customers (as well as other qualifying industrial companies) can see and use that data to turn anonymous web traffic into leads, and create more targeted, meaningful messaging when targeting those leads. We’re also introducing a weekly data feed that businesses can use to determine exactly which buyers are actively in-market within a certain segment or vertical of industry. Our Thomas marketing services team also leverages the buying and sourcing trends from our data to help their customers enhance their organic and paid marketing.

Key takeaways from the interview

  • Clarify the differences and similarities between B2B and B2C
  • Discuss the reasons why B2B is trending away from traditional to digital marketing
  • Understand how B2B marketing is adapting to digital transformation
  • The importance for B2B manufacturing companies of adapting to a data-driven culture
  • The challenges of content marketing in niche B2B businesses
  • The steps to run successful digital marketing campaigns for B2B businesses
  • The KPIs that B2B businesses should focus on

I had a great conversation with Tony that helped me better understand the transformation of the B2B manufacturing industry. The industry has evolved from hard-copy directories like the yellow pages to an entirely data-driven culture, and that shift is happening now. This is a huge opportunity for marketers to generate leads. The key is to fully understand and overcome the challenges.

Note: This interview has been condensed for publishing purposes.

Karina Tama is a contributor for Forbes, Thrive Global and the El Distrito Newspaper. She can be found on Twitter @KarinaTama2.



How Google Enforces Category Diversity for Some Local Search Results

September 17, 2019

More Diversity in Search Results

Earlier this year, we were told that Google was making an effort to make the search results we see more diverse, by showing us fewer results from the same domains in response to a query. Search Engine Land covered that news with the post: Google search update aims to show more diverse results from different domain names.

Shortly before that news about more diverse results in organic search came out, Google was granted a patent in May which told us how they might enforce category diversity when showing different points of interest in local search results. This post is about that effort to make local search results more diverse.

More Diversity at Google in 2013

Back in 2013, Google’s former head of web spam, Matt Cutts, published a video about more diverse search results in response to the question, “Why does Google show multiple results from the same domain?”

So this isn’t the first time we have heard about efforts from Google to give us more diverse results, and they came out with a patent around that time as well.

I remember getting a phone call around six years ago from a co-worker who asked me why a client’s high-ranking organic result had disappeared from search results. I asked for the query and the client’s name and ran the search. The top-ranking result was a local result for the client. I told my co-worker that I was seeing that, and she told me that our client also used to have an organic result showing for that query, and a local result that wasn’t quite as high. It appeared that the organic result had been removed, and the local result had been boosted.

Coincidentally, I had written the following blog post the day before: How Google May Create Diverse Search Results by Merging Local and Web Search Results. I told my co-worker about the patent I had written about, and sent her a link to that blog post. We were able to explain to our client what appears to have happened to their organic result for that query: Google’s desire for more diverse search results seems to have caused the page that was ranking organically to be “merged” with the local result.

Category Diversity in a Patent Granted in 2019

I hadn’t seen anything quite like that merger between an organic result and a local result happen again after that, and it is impossible to tell whether Google has been using that kind of merging since then. But that patent was all about providing more diverse search results to searchers. So when I see a patent like this new one, which tells us it exists to provide more diverse search results, I find myself wondering what, if anything, could have been removed to make search results more diverse. If someone searches for “things to do in Carlsbad, California” and is given a list of restaurants to eat at, that would be disappointing, because while there are some nice restaurants here, there are plenty of other things to do.

By expanding from diversity based upon pages from the same domain to category diversity, Google is giving us more diverse search results.

This new patent tells us about this category diversity in the following way:

When a searcher asks for points of interest information at a certain location, the local search system may generate a collection of candidate POIs and receives information relating to each candidate POI’s respective category and a score and rank within the category for each, and, for categories a searcher may select, promotes or demotes the score of each ranked candidate POI within its respective category through a scaling process.

It really is impossible to tell whether Google has already implemented this patent, which was granted in May. I tried some searches for different places to see if they showed diverse results, and the results I was shown were indeed diverse:

When I search for [points of interest Raleigh, NC], I get results that start out with a carousel of top things to do in Raleigh:

[Screenshot: points of interest carousel for Raleigh, NC]

When I search for [points of interest Carlsbad, CA], I get results that start with a carousel of top things to do in Carlsbad:

[Screenshot: points of interest carousel for Carlsbad, CA]

I wasn’t surprised to see carousels for those particular queries, and I tried a few more, worded a little differently, which didn’t trigger carousels. The patent doesn’t mention carousels, though. But those results do show some category diversity.

The patent does provide a lot of details on how Google might demote some listings that are in categories that are over-represented, and promote some listings that are associated with categories that are under-represented.

[Image: category diversity]

The summary of the patent gives us the process behind it in a nutshell, telling us that the method includes:

  1. Receiving a request to identify points of interest (POIs)
  2. Obtaining data identifying (i) candidate POIs that satisfy the request, (ii) a respective category associated with each candidate POI, and (iii) a non-scaled score associated with each candidate POI
  3. Ranking, for each of one or more of the categories, the candidate POIs associated with the category, based on the respective non-scaled scores
  4. Scaling, for each of the one or more categories, the non-scaled scores of the ranked candidate POIs associated with the category
  5. Ranking the candidate POIs using the scaled scores (for the candidate POIs associated with the one or more categories) and the non-scaled scores (for the candidate POIs not associated with those categories)
  6. Providing data that identifies two or more of the candidate POIs, as ranked according to the scaled scores and the non-scaled scores

It goes on to provide much more depth about how category diversity might be achieved. Reading through it, it makes sense: in an area with 30-50 places that someone might want to visit, where five of those are Italian restaurants and the rest include other kinds of restaurants, museums, parks, beaches, theatres, stores, playgrounds, stadiums, and nightclubs, you wouldn’t want to tell a potential visitor only that there are five Italian restaurants there, and nothing about the diversity of other kinds of places.

Here is a little richer description of how Google may go about enforcing category diversity in response to requests for information about points of interest at different locations:

  1. Selecting, as the one or more categories, one or more categories that are each associated with more than a predetermined number of candidate POIs, where the predetermined number is two
  2. The method includes selecting, as the one or more categories, one or more categories that are each associated with one or more candidate POIs
  3. Scaling, for each of the one or more categories that are associated with only one candidate POI, the non-scaled score of the ranked candidate POI associated with the category comprises multiplying the non-scaled score of the ranked candidate POI associated with the category by a factor of one
  4. Scaling the non-scaled scores of the ranked, candidate POIs includes increasing the respective non-scaled scores of the top n ranked candidate POIs
  5. Scaling the non-scaled scores of the ranked, candidate POIs includes leaving unchanged the non-scaled scores of one or more of the top n ranked candidate POIs
  6. Scaling the non-scaled scores of the ranked, candidate POIs includes decreasing the non-scaled scores of one or more of the top n ranked candidate POIs
  7. Dynamically determining a scaling factor to use to scale one or more non-scaled scores of the ranked, candidate POIs of a particular category based on a non-scaled score associated with a top ranked candidate POI of a different category; and/or the method includes dynamically determining a scaling factor to use to scale one or more non-scaled scores of the ranked, candidate POIs of a particular category based on a quantity of the candidate POIs of the particular category identified in the data.
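
To make the mechanics more tangible, here is a rough Python sketch of the core idea as I read it: rank candidates within each category by score, then demote the scores of entries beyond the top few of an over-represented category before the final ranking. The threshold and demotion factor are illustrative assumptions, not values taken from the patent.

    from collections import defaultdict

    def diversify(pois, top_n=2, demotion=0.5):
        """Re-rank POIs, scaling down the scores of candidates ranked
        below the top `top_n` entries within their own category.
        `pois` is a list of (name, category, non_scaled_score) tuples."""
        by_category = defaultdict(list)
        for poi in pois:
            by_category[poi[1]].append(poi)

        rescored = []
        for category, items in by_category.items():
            # Rank candidates within the category by non-scaled score.
            items.sort(key=lambda p: p[2], reverse=True)
            for rank, (name, cat, score) in enumerate(items):
                # Leave the top-ranked entries unchanged; demote the rest.
                factor = 1.0 if rank < top_n else demotion
                rescored.append((name, cat, score * factor))

        # The final ranking mixes scaled and non-scaled scores.
        return sorted(rescored, key=lambda p: p[2], reverse=True)

    pois = [
        ("Trattoria A", "Italian restaurant", 0.95),
        ("Trattoria B", "Italian restaurant", 0.93),
        ("Trattoria C", "Italian restaurant", 0.91),
        ("City Museum", "museum", 0.90),
        ("State Beach", "beach", 0.88),
    ]
    print(diversify(pois))  # Trattoria C drops below the museum and the beach

Run on the sample data, the third Italian restaurant falls from third place to last, which is exactly the kind of category diversity the patent describes.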

That is a fairly complex approach to achieve diversity of results, but it seems to be one that will provide results that are truly diverse.

The patent on category diversity for local results can be found at:

Enforcing category diversity
Inventors: Neha Arora, Ke Yang, Zuguang Yang
Assignee: Google LLC
US Patent: 10,289,648
Granted: May 14, 2019
Filed: November 14, 2016

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for enforcing the category diversity or sub-category diversity of POIs that are identified in response to a local search. According to one implementation, a method includes receiving a request to identify points of interest (POIs), obtaining data identifying (i) candidate points of interest (POIs) that satisfy the request, (ii) a respective category associated with each candidate POI, and (iii) a non-scaled score associated with each candidate POI, and ranking, for each of one or more of the categories, the candidate POIs associated with the category, based on the respective non-scaled scores. The method also includes scaling, for each of the one or more categories, the non-scaled scores of the ranked candidate POIs associated with the category, ranking the candidate POIs using the scaled scores, for the candidate POIs that are associated with the one or more categories, and the non-scaled scores, for the candidate POIs that are not associated with the one or more categories, and providing data that identifies two or more of the candidate POIs, as ranked according to the scaled scores and the non-scaled scores.

Takeaways

If I didn’t mention this patent, you may not have noticed a need for it. If it didn’t exist, and every time someone searched for something like [things to do in Carlsbad], and the same 5 Italian Restaurants showed up as things to do in town, you would notice that there isn’t much diversity.

I do find myself wondering what isn’t being included in these local results that enforce category diversity, but I do like seeing that diversity.

And if I want to see all of the local Italian restaurants in the area, I can always try another search just for [Italian restaurants].




Google Ads announce more changes to match types – Challenges and opportunities

September 5, 2019

Google Ads has recently announced that it will now serve ads for queries that it understands to share the same meaning as broad match modifier and phrase match keywords.

For bigger advertisers, this is probably not a huge concern, as they are not limited by budget. Being visible for a wider range of search terms without having to add thousands of keyword variations can only be a good thing.

But what about those with limited budgets, and those in niche industries that need to target very specific keywords?

While there will undoubtedly be challenges to overcome in light of these changes, there are also likely to be opportunities.

Challenges

1. Spend may increase

An increase in impressions is likely to equate to more clicks, which is fine if these clicks go on to convert. But with Google determining how relevant a search term is to the keywords in your campaigns, just how much could spend skyrocket if left unchecked?

Neil Andrew, from AdTech startup PPC Protect, says:

“These changes are definitely going to result in a massive increase in irrelevant and even invalid traffic on Google Ads accounts that aren’t actively managed/monitored. Our internal analysis on this shows up to 20% increases in budget usage from the change in broad/phrase match keywords, the vast majority of which isn’t relevant to a conversion action. As a SaaS platform provider, we are in a unique position to analyse this.

We have over 35,000 Google Ads accounts connected to our system currently, and we have had a number of users notice an uptick in both wasted spend and irrelevant traffic. We’ve also seen a large share of this traffic be invalid – mostly from bot activity and competitor clicking activity. It seems like narrow niche targeting is getting tougher to achieve by the day.”

2. Impressions may be wasted on irrelevant search terms

If you’re using a target impression share bid strategy, now might be the time to review it, as this change might impact impression share metrics.

Impressions may now include ads triggered by keywords that Google determines to have the same meaning (unless they are added as negatives). Just how much impression share is Google going to give to variants, rather than the keywords actually in the campaign?

3. Irrelevant terms/keywords would need to be revisited and reviewed

Ads showing for keywords that are already in the account but were tested earlier and paused due to poor performance are a major bugbear of mine.

I’ve noticed that keywords that were tested previously and paused can still trigger ads as close matches. So if you have keywords that you paused because they historically haven’t worked well, you’ll now need to check whether Google is still serving ads for them and exclude them.

This means you’ll end up with keywords listed as both added and excluded.

4. More time will need to be spent on analyzing search term reports and building negative keyword lists

Yes, analyzing search term reports is absolutely something that all PPC managers should be doing on a regular basis. However, having to check search term reports daily to exclude the keywords an advertiser doesn’t want to serve ads for is going to be time-consuming, especially on large accounts, taking time away from managing and optimizing other aspects of a campaign.

Sam Kessenich, Chief Digital Officer at RyTech, is already noticing impressions ramping up.

“Regarding the most recent changes to keyword targeting, without a doubt, these changes will increase impressions and clicks across almost every campaign. We’re noticing an increase across all search campaigns due to this change, and are being forced to do daily or weekly negative keyword additions when keywords don’t match goals. Proper negative keyword research and search term monitoring is the most effective strategy we can do before accounts launch and as accounts are running.”

5. Building ad groups with single keywords just got a lot more difficult

A great way to have control over a campaign at a very granular level is to build single keyword ad groups (SKAG). This strategy allows for highly focused ad copy and landing pages, and as a result, quality scores for this type of campaign are high.

Carolina Jaramillo, Paid Media Manager at POLARIS explains why this strategy will no longer be as effective.

“I’m a big fan of creating SKAG-structured campaigns, and this new change might make it more difficult to protect the single keyword ad group structure. How will we be able to optimise ad copy for a single keyword when this keyword is liable to match a wide range of different queries? I am interested to see how Google will ‘look for opportunities to expand our reach to serve ads for relevant queries’, as they say in their update – they state that 15% of the searches seen every day are new – but we will have to wait and see how this change affects our clients’ Google Ads campaigns.”

So, can any good come of these changes?

Opportunities

1. May reveal new keywords that were not previously targeted but actually convert

Not everyone searches the same. So coming up with a comprehensive keyword list that captures every single potential search term a user might enter to find your products and services is nigh-on impossible. Keyword research can only take you so far.

With this in mind, showing ads for searches that share the same intent may provide a great opportunity to track down some high converting keywords, which may have otherwise been overlooked.

Haley Anhut, PPC Manager at Clean Origin, thinks there are benefits to Google matching not only close variants but also conceptually related keywords.

“I have already seen some very smart close variants triggering existing keywords. Whether these keywords are left alone, included within an existing ad group, or given a new ad group created around them for highly targeted ad copy – all offer a great way to expand your campaign reach and performance. Greater awareness of a consumer’s journey to conversion, and of how that journey functions within the search funnel, allows for a highly tactical approach when reaching consumers. With more data at our fingertips, we can enhance campaign optimization strategy and expand reach through relevant searches.”

2. Will save time creating granular ad groups

As Google is capable of understanding when search terms mean the same thing, and will serve ads as a result, you no longer need to worry about including every keyword from that ad group in the ad copy. While it’s not yet clear how showing ads for close match and intent-based variations of your keywords will impact metrics like ad relevancy, this catch-all approach could save time when it comes to creating granular ad groups containing just a couple of keywords for every campaign.

Coupled with a feature like keyword insertion, this could be a powerful way of increasing reach on low impression campaigns while making the ads more relevant to the user’s search term with minimal effort.
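
For reference, keyword insertion works through a placeholder in the ad text of the form {KeyWord:Default Text}: Google swaps in the keyword that triggered the ad when it fits the character limits, and falls back to the default text when it doesn’t. Combined with same-meaning matching, a single placeholder headline can stay relevant across many query variants.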

3. Top tips and advice from PPC managers

Rather than panic, you should be proactive in preparing for this change and keep a very close eye on your accounts as it begins to roll out.

“Broad and phrase match CPCs are increasing because there are more campaigns competing for the same keywords now. A good tactic is to allocate a portion of the daily budget to the new phrase match and broad match parameters and see which keywords are resulting in low CPCs and high CTRs. Those keywords can then be optimized into ‘exact matches.’ Overall, this change makes keyword research much more important now because a higher value will lie in ‘exact match’ keywords.”

Haris Karim, Lead Digital Strategist at MAB.

“To avoid the negative effects of unwanted reach, skew towards more specific match types like exact match – although exact match already allows same-meaning close variant targeting, so even that is not as specific as it once was. In addition to this, make sure you are using a robust negative keyword strategy to avoid showing for unwanted queries. Lastly, review your search term reports regularly to ensure your impressions are relevant to your ad group keywords, ads, and landing pages.”

Timothy Johnson, SMB Solutions and PPC Lead at Portent Digital Agency.

“I would say that if you still have some ad groups built around different match types, you should consolidate those ad groups into one. For instance, if you have an ad group dedicated to exact match keywords, and another ad group dedicated to phrase match, the phrase match keywords (which now are showing for more phrases) will cannibalize all of that exact-match traffic unless the exact-match keywords have higher bids and ad rank.”

Adam Gingery, Digital Strategy and Paid Search Manager at Majux Marketing.

“I feel like Google is trying to make our lives easier with this latest change, but it’s actually just making them harder. Yes, there will be opportunities for the big spenders to get more exposure from the lower volume terms that they may not have thought of or come across yet, but for the smaller players that need to spend their limited budget very wisely, it means more time needs to be spent constantly monitoring search term reports and adding more and more negatives. So my tip for those smaller advertisers would be to focus on negative keywords. Regularly check search term reports and add negative phrases straight from there, but also take the single terms within the longer phrases that are wrong, and add those as broad match negatives to stop Google showing ads for another phrase containing that term, if it will always be wrong.”

Ashleigh Davison, Head of Biddable Media, Browser Media.

“The obvious suggestion here to minimize impact is to focus on negative keywords, especially if you can do this preemptively before they start costing you money. So instead of just thinking of all the most obvious negatives that a business would want to avoid, you will now need to start thinking about close variations of your products or services that you may want to add.”

Ryan Scollon, PPC freelance consultant.

What do you think the impact will be? We’d love to know your thoughts.

Victoria is Account Director at Browser Media. She can be found on Twitter @VikingWagon.



How to check Google search results for different locations

August 30, 2019

One of the fundamental truths about SEO is that no two Google searches are the same.

The logic behind it is simple: things you’ve Googled, read, and watched are stored for at least three months before your Web & App Activity is deleted, if it is deleted at all.

This, together with data on devices you use as well as places you go – both in terms of location history and the current IP – lets Google deliver personalized results. While this is convenient, you end up in the infamous “filter bubble”.  

In a world of highly customized SERPs on the one hand, and a host of ranking signals for local search Google uses in its algorithms on the other, pulling relevant ranking data is as challenging as it gets.

Luckily, there are a bunch of ways to pop the filter bubble, targeting the one thing that seems to be dominating personalized search – location.

Not only does it determine what users see in search results, but it also helps business owners address the issue of inconsistent SERP performance across their service areas.

The thing is, doing your local SEO homework doesn’t stop at continuous content improvement and link building, targeted specifically for local search. Poor performance can still be an issue – one that is oftentimes attributed to not having enough of a customer base in a certain location. Therefore, the problem can only be diagnosed by checking SERPs for the entirety of the geographical area covered.

Without further ado, let’s look at how you can fetch rankings for different locations manually and using designated tools – all from the comfort of your home.

Country-level search

First off, decide on the level of localization.

For brands working in multiple countries, pulling nationwide results is more than enough. For local businesses operating within a city, ranking data will differ district by district and street by street.

Check manually

So, say you want to see how well a website performs in country-level search. For that, you’ll need to adjust Google’s search settings and specify the region you’d like to run a search for. And yes, you heard it right: simply checking that you are on the right country-specific TLD is no longer enough, since Google stopped serving results on separate country domains a while back.

Now, in order to run a country-specific search manually, locate Search settings in your browser and pick a region from the list available under Region Settings.

[Screenshot: Google’s region settings]

Alternatively, use a proxy or VPN service – both work for doing a country-wide search.

Use rank tracking software

To automate the job, turn to the rank tracking software of your choice, for example, Rank Tracker. The results will pretty much reflect the SERPs you fetched after manually adjusting the search settings in your browser.

There you have it – performance tracking for non-geo-sensitive queries and multilingual websites is all taken care of.

City-level search

Doing SEO for small or medium-sized businesses comes with many challenges, not the least of which is making sure your website shows up in local search.

Whether you have a physical store or simply provide services within a specific area, tracking ranking coverage on the city level will ultimately improve findability, and drive leads and customers.

Check manually

To manually run a search limited to a specific city, use the ‘&near=cityname’ search parameter in your Google URL:
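
For example, a hypothetical query for coffee shops near Chicago would look like this (the parameter is undocumented, so treat it as a convenience rather than a guarantee):

    https://www.google.com/search?q=coffee+shops&near=chicago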

As the name suggests, “&near=cityname” lets you pull SERPs near a certain location. While this method is easy to master, many claim that it’s unreliable, with results often delivered for a larger city nearby.

[Screenshot: city-level search in Google]

Still, the trick is nice to have up your sleeve as a quick and sound way of checking city-specific rankings manually.

Another silver bullet of local search that is sure to hit the city target is Google Ads’ Ad Preview and Diagnosis tool.

The Ad Preview and Diagnosis tool lets you pick a location, specify a language as well as user device – and fetch local SERPs regardless of your current whereabouts.

Use rank tracking software

Pretty much every rank tracking tool out there is able to run a city-specific ranking check.

Rank Tracker, Ahrefs, SEMrush, Whitespark, AccuRanker, BrightLocal – you name it – all boast the functionality and deliver local search results. That said, picking the right software for you and your business is a two-fold process.

First, take the time to look into the supported locations for search, since some of the tools, like Whitespark or SEMrush, have a somewhat limited location catalog. Second, double-check that the software you’re most interested in uses its own database, with results relying on a well-designed and trusted crawler.

Doing this type of research helps ensure that you can easily see accurate SERPs for the location of your choosing.

In case you’re new to city-level ranking checks and/or baffled by the variety of options on the market, go for a single-dashboard tool: BrightLocal would be a perfect example of clean design and intuitive navigation.

Better yet, all data lives on BrightLocal’s website, which adds to the overall user-friendliness and lets you easily automate the monitoring of top search engines for multiple locations.

Street-level search

Google’s Local Pack is the place to be when running any kind of business. With over half of searches run from mobile devices, a single Local Pack may take up as much as an entire results page on a smartphone.

Both Maps and Local Pack results are extremely location-sensitive – always keep that in mind while you’re doing your research. To verify that your business shows up for the right locations within a city, the search needs to be narrowed down to a specific street address.

Check manually

Not to say that you cannot configure an address-specific search by yourself. Even manually, this is still perfectly doable.

However, unlike relying on a toolkit that would basically do the whole process for you, setting up a highly localized search in a browser involves multiple steps and also requires some groundwork.

  1. To start off, you need to get the exact geo-coordinates of the location you’d like to run the search from. When in doubt, use a designated tool.
  2. In your Google Chrome browser, open DevTools: navigate to the top right corner of your browser window and click Tools > Developer Tools. You can also press Control+Shift+C (on Windows) or Command+Option+C (on Mac).

[Screenshot: checking search results manually using Developer Tools]

3. Navigate to the three-dot menu icon in the top right corner: from there, click More Tools > Sensors. This step is also the appropriate time to give yourself some credit for getting that far in Google search configuration.

4. In the Geolocation dropdown, select “Other” and paste your target longitude and latitude coordinates.

5. Run a search and retrieve the SERPs for the exact location you specified.
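
If you would rather script this than click through DevTools, the same override is exposed through the Chrome DevTools Protocol. Here is a minimal sketch using Python with Selenium 4 and Chrome; the coordinates are placeholders (they point at Raleigh, NC), and keep in mind that Google also weighs other signals, such as your IP address, so results may still differ from a true on-location search.

    from selenium import webdriver

    driver = webdriver.Chrome()

    # The same override DevTools applies under Sensors, set here via the
    # Chrome DevTools Protocol command Emulation.setGeolocationOverride.
    driver.execute_cdp_cmd("Emulation.setGeolocationOverride", {
        "latitude": 35.7796,    # placeholder coordinates: Raleigh, NC
        "longitude": -78.6382,
        "accuracy": 100,
    })

    driver.get("https://www.google.com/search?q=coffee+shops+near+me")
    print(driver.title)
    driver.quit()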

In case you aren’t particularly excited about a multistep search setup, try the Valentin app; it lets you check search results for any location with no DevTools involved.

Use rank tracking software

If anything, rank tracking for multiple precise locations is the one job you want automated and done for you by a tool that was specifically developed for local search.

That is the idea behind SEO PowerSuite’s Rank Tracker, designed to, among other things, pull hyper-localized SERPs for unlimited locations. Configure as many custom search engines as you wish. On top of that, set up scheduled tasks and have local search results checked automatically.

I rely on Rank Tracker not only because it was built by my team, but also because it’s the only toolkit out there that automates what both Chrome and the Valentin app help you configure manually. And of course, the ranking data retrieved by the software is precise and easily exportable.

Another tool that lets you visualize – quite literally – any business’ search performance across a service area is Local Falcon. Created for Google Maps, the platform runs a search for up to 225 locations within any area specified.

With an overview of your search performance at hand, you can make better targeting choices while expanding outreach and winning new customers.

Final thoughts

Given that there are as many SERP variations as there are searches, rank tracking may feel utterly discouraging: if no two users get to see quite the exact same results, why bother? Well, the sentiment is totally understandable.

But in fact, it all boils down to understanding the reasons behind tracking rankings in the first place.

Is it to see how quickly your SEO efforts transform into higher positions in SERPs? That’d be one. Is it to make sense of the changes in traffic and sales at every point and in every location? Sure.

Big and small, businesses today simply have to keep tabs on their rankings not just country-wide but even on a street-by-street basis. There is hardly any excuse to ignore a single metric here.

Not just that, in business as well as SEO there is no such thing as an unexplainable dynamic. And more often than not, you have to take a closer look to see the root of any problem.

We all understand that rankings in themselves aren’t the only metric of success. It’s not as straightforward as having more traffic; getting more business is the main goal.

But that shouldn’t in any way undermine the overall importance of tracking rankings as a tried and tested way of checking that your website is being served among relevant search results.

Local search is all about making sure your customers see you and get to you. So use it to your best advantage – whether you go for checking manually or using rank tracking software.

Aleh is the Founder and CMO at SEO PowerSuite and Awario. He can be found on Twitter at @ab80.



Your step-by-step guide to content marketing keyword research

August 28, 2019

Keyword research for creating content can make a tangible difference in your Google rankings. Anyone who works in content marketing knows that keyword research is crucial to ranking on Google and improving content engagement. But it can also be stressful, particularly when you look at how many results appear on Google for the keywords you want to rank for.

What is the process for keyword research and how do you get it right? This is a challenge that most content marketers and creators face. This guide will explain the process of researching keywords and help you begin and improve your content marketing.

Why do keyword research?

Let us get this critical question out of the way – why should you be doing keyword research at all?

Keywords help people find your content on the internet. When users have a specific query they need answered, they head to a search engine and input a set of words. Google then searches its index for content that matches those words, assesses how well each piece answers the query, and delivers the content to the user.

The more closely the content relates to the search input, the higher it appears. The content that best answers a search query appears at the very top of Google’s first page – under the ads, of course.

Those sets of words are keywords, and they need to appear in strategic areas of your content for Google to deem it worthy of appearing on the first page.

Content that has good SEO and is relevant will have a better chance of ranking high. As a result, the content will generate more leads, increase sales, and improve ROI.

Without keywords, your content will languish on Google’s later pages, where it won’t get seen even if it is good quality.

Business-related keyword research

What does your company sell or produce? Look at the products you have in your store and decide which ones need to be sold through content marketing strategies.

Make a list of these items and what you think are the most relevant search terms, such as in the example below where we look at “fashion” as a search term.

[Image: example of creating a mind map for keyword research. Source: Venngage]

Create a mind map where you can include all the terms relevant to your industry and business, and then search for related terms on Google. This is also a great way to generate ideas for your content.

Search on Google

We have established the importance of researching keywords and why you should undergo the process. With that out of the way, you should head straight to Google.

Though there are numerous tools online that will show you keyword rankings and associated keywords, Google is still the best place to find the answers you are looking for. After all, Google is the most popular search engine that content marketers want to rank on.

Whether or not you have decided on the kind of content you are going to create, you can still search Google for keywords to use.

For instance, if you were a clothing brand working on new blogs about the clothes you sell, you could start off by typing in “jeans” and seeing the results, like in the below screenshot.

[Screenshot: Google search results for “jeans”. Source: Google search]

But “jeans” is far too broad a category to write about. We have to narrow it down so you have a better chance of ranking and being found by your audience.

Look at what happens when you search for “jeans for men”.

[Screenshot: Google search suggestions for “jeans for men”. Source: Google search]

The terms become more specific the deeper you go in your search. Instead of writing an article about jeans in general, you can write something specifically for men over 40.

And you can go even further in your keyword research.

[Screenshot: long-tail, niche keyword suggestions on Google. Source: Google search]

When you search for “jeans for men over 40”, you get even more search suggestions for your content, alongside related keywords that you can use.

You could target your content towards “how to dress in your 40s male” instead of just “jeans” for a better chance of reaching your target audience.
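
If you want to collect these suggestions in bulk rather than typing queries one by one, Google exposes the same autocomplete data through an unofficial suggest endpoint. The sketch below queries it from Python; since the endpoint is undocumented, its format and availability can change at any time.

    import requests

    def google_suggestions(query):
        # Unofficial autocomplete endpoint; client=firefox returns plain JSON.
        resp = requests.get(
            "https://suggestqueries.google.com/complete/search",
            params={"client": "firefox", "q": query},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()[1]  # response is [query, [suggestion, ...]]

    print(google_suggestions("jeans for men over 40"))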

Long-tail keywords

The search term “how to dress in your 40s male” is a long-tail keyword, as opposed to a seed or head keyword like “jeans”.

Long-tail keywords are easier to rank for than head keywords, which have extremely high competition; with head keywords, there is little chance of Google ranking you over your competitors.

Instead, you should aim for long-tail keywords that are more niche to your business. Don’t look for your product, as that will generally only show you your competitors.

Search for ways that people use, or will use, your product, and choose your keywords accordingly.

Look at competitors’ keywords

As we have noticed, Google will show you the best results for the search terms you enter. Some of those results will likely be your competitors. Why not study them?

Look at the top three most relevant posts that appear in Google’s search for the terms you have entered. Avoid review sites, as these are not relevant for this exercise.

Once you have chosen the competitor content for research, look at the main headings of the article – these are the h1, h2, and h3 tags within a piece of content.

If a piece of content has great SEO, the keywords it is ranking for have to appear in the headings – most often in the title and the first heading – as well as across the body copy.

List out what you see in your competitor’s content. Knowing the keywords that your competitor is using will help you tailor and structure your content. 
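
To speed up this part of the exercise, the heading extraction is easy to script. Here is a minimal sketch using Python with requests and BeautifulSoup; the URL is a placeholder for whichever competitor page you are studying, and you should respect that site’s robots.txt and terms when fetching pages.

    import requests
    from bs4 import BeautifulSoup

    url = "https://www.example.com/competitor-article"  # placeholder URL

    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    # Pull the h1, h2, and h3 headings, where a well-optimized page
    # tends to place its target keywords.
    for tag in soup.find_all(["h1", "h2", "h3"]):
        print(tag.name, "|", tag.get_text(strip=True))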

In fact, using competitor names as keywords in Google Ads for your content has become a popular exercise for businesses. 

However, this is a tricky area that you should study before implementing, even if the results can be positive.

Creating your content

Having chosen your long-tail keyword, you can incorporate it into your content. 

An important thing to remember in content marketing is that your material should, first and foremost, answer your customers’ query. 

Your goal may be to rank on Google and improve visits to your site, but if your content is solely SEO-focused with little regard to the needs of the reader, you will see higher bounce rates, which will negatively impact your ranking.

Additionally, keywords aren’t the only reason your content will rank higher on Google. A number of other factors influence rankings, such as link building, incorporating visuals in your content, and bounce rates.

But using relevant keywords that draw in your audience will show results over time. You can also find out whether your keyword research is producing positive results by using tools to study keyword rankings.

Key takeaways

Keywords affect your Google rankings, and that is where you should go to find the keywords best suited for you and your content.

Use long-tail keywords instead of head keywords that will have a lot of competition. Also, look at top competitors for your keywords to decide whether or not those keywords will work for you.

Finally, create your content with your consumers in mind, and not purely for SEO, as that will improve the chances of your content being read.

With these steps completed in your keyword research, you are well placed to begin creating content that will help you move towards the top of the Google rankings.

Ronita Mohan is a content marketer at the online infographic and design platform, Venngage.

The post Your step-by-step guide to content marketing keyword research appeared first on Search Engine Watch.



Augmented Search Queries Using Knowledge Graph Information

August 24, 2019

What are Augmented Search Queries?

Last year, I wrote a post called Quality Scores for Queries: Structured Data, Synthetic Queries and Augmentation Queries. It told us that Google may look at query logs and structured data (table data and schema data) related to a site to create augmentation queries. Google may then evaluate how searches for those augmentation queries perform compared to the original queries for pages from that site, and if the augmentation query results do well in those evaluations, searchers may see search results that combine results from the original queries and the augmentation queries.

Around the time that patent was granted to Google, another patent about augmented search queries was also granted, and it is worth discussing alongside the patent I wrote about last year. It shares the concept of adding results from augmented search queries to original search results, but it has a different way of coming up with the augmented search queries. This newer patent starts off by telling us what it is about:

This disclosure relates generally to providing search results in response to a search query containing an entity reference. Search engines receive search queries containing a reference to a person, such as a person’s name. Results to these queries are often times not sufficiently organized, not comprehensive enough, or otherwise not presented in a useful way.

Augmentation from the first patent means possibly providing additional information in search results based upon additional query information from query logs or structured data from a site. Under this new patent, augmentation comes from recognizing that an entity exists in a query, and providing some additional information in search results based upon that entity.

This patent is interesting to me because it combines an older type of search – where a query returns pages in response to the keywords typed into a search box – with a newer type of search, where an entity is identified in a query and knowledge information about that entity is reviewed to create possible augmentation queries, whose results can be combined with the results of the original query.

The process behind this patent can be described in this way:

In some implementations, a system receives a search query containing an entity reference, such as a person’s name, that corresponds to one or more distinct entities. The system provides a set of results, where each result is associated with at least one of the distinct entities. The system uses the set of results to identify attributes of the entity and uses the identified attributes to generate additional, augmented search queries associated with the entity. The system updates the set of results based on one or more of these augmented search queries.

A summary of that process can be described as:

  1. Receiving a search query associated with an entity reference, wherein the entity reference corresponds to one or more distinct entities.
  2. Providing a set of results for the search query where the set of results distinguishes between distinct entities.
  3. Identifying one or more attributes of at least one entity of the one or more distinct entities based at least in part on the set of results.
  4. Generating one or more additional search queries based on the search query, the at least one entity, and the one or more attributes.
  5. Receiving an input selecting at least one of the one or more additional search queries and providing an updated set of results based on the selected one or more additional search queries, where the updated set of results comprises at least one result not in the set of results.

The step of generating one or more additional search queries means ranking the identified one or more attributes and generating one or more additional search queries based on the search query, the at least one entity, the one or more attributes, and the ranking.

That ranking can be based on the frequency of occurrence.
The ranking can also be based on a location of each of the one or more attributes with respect to at least one entity in the set of results.
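To make those two ranking signals concrete, here is a minimal sketch of how attributes found in a set of result snippets might be scored by frequency of occurrence plus a proximity bonus, and then turned into augmented queries. The weighting scheme and data shapes are illustrative assumptions, not details taken from the patent.

```python
# A minimal sketch of ranking entity attributes by (a) how often they occur
# in result snippets and (b) how close they appear to the entity mention,
# then building augmented queries from the top-ranked attributes.
from collections import Counter

def rank_attributes(entity, snippets, attributes):
    scores = Counter()
    for text in snippets:
        lower = text.lower()
        pos_entity = lower.find(entity.lower())
        for attr in attributes:
            pos_attr = lower.find(attr.lower())
            if pos_attr == -1:
                continue
            scores[attr] += 1.0  # frequency of occurrence
            if pos_entity != -1:
                # location signal: nearer the entity counts more (assumed weighting)
                scores[attr] += 1.0 / (1 + abs(pos_attr - pos_entity) / 100)
    return scores.most_common()

def augmented_queries(entity, ranked, top_n=3):
    return [f"{entity} {attr}" for attr, _ in ranked[:top_n]]

snippets = [
    "John Adams, the second president, was married to Abigail Adams.",
    "The Alien and Sedition Acts were signed by John Adams.",
]
ranked = rank_attributes("John Adams", snippets,
                         ["second president", "Abigail Adams", "Alien and Sedition Acts"])
print(augmented_queries("John Adams", ranked))
```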

[Image: augmented search queries – Planet of the Apes example]

This process can identify two different entities in a query. For instance, there were two versions of the movie Planet of the Apes: one was released in 1968, and the other in 2001. They had different actors in them, and the second was considered a reboot of the first.

When results are generated in instances where more than one entity may be involved, the results provided may distinguish between the distinct entities. The system may identify one or more attributes of at least one entity of the one or more distinct entities, based at least in part on the set of results. It may then generate augmented search queries – “one or more additional search queries based on the search query, the at least one entity, and the one or more attributes.”

This patent can be found at:

Providing search results using augmented search queries
Inventors: Emily Moxley and Sean Liu
Assignee: Google LLC
US Patent: 10,055,462
Granted: August 21, 2018
Filed: March 15, 2013

Abstract

Methods and systems are provided for updating a set of results. In some implementations, a search query associated with an entity reference is received. The entity reference corresponds to one or more distinct entities. A set of results for the search query is provided, and the set of results distinguishes between distinct entities. One or more attributes for at least one entity of the one or more distinct entities are identified based at least in part on the set of results. One or more additional search queries are identified based on the search query, the at least one entity, and the one or more attributes. An input selecting at least one of the additional search queries is received. An updated set of results is provided based on the selected additional search queries. The updated set of results comprises at least one result not in the set of results.

Some Additional Information About How Augmented Search Queries are Found and Used

A couple of quick definitions from the patent:

Entity Reference – refers to an identifier that corresponds to one or more distinct entities.

Entity – refers to a thing or concept that is singular, unique, well defined, and distinguishable.

This patent is all about augmenting a set of query results by providing more information about entities that may appear in a query:

An entity reference may correspond to more than one distinct entity. An entity reference may be a person’s name, and corresponding entities may include distinct people who share the referenced name.

This process is broader than queries involving people. The patent gives us a list of what an entity can cover: “a person, place, item, idea, topic, abstract concept, concrete element, other suitable thing, or any combination thereof.”

And when an entity reference appears in a query, it may cover a number of entities, for example, a query that refers to John Adams could be referring to:

  • John Adams the Second President
  • John Quincy Adams the Sixth President
  • John Adams the artist

Entity attributes

In addition to having an entity in an entity reference in a query, we may see a mention of an attribute for that entity, which is “any feature or characteristic associated with an entity that the system may identify based on the set of results.” For the John Adams entity reference, we may also see attributes included in search results, such as [second president], [Abigail Adams], and [Alien and Sedition Acts].

[Image: entity selection box]

It sounds like an entity selection box could be shown that allows a searcher to identify which entity they would like to see results about. When a query contains an entity reference such as John Adams, and there are at least three different John Adams that could be included in augmented search results, clickable hyperlinks may let the searcher select or deselect the entities they are interested in seeing more about.
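Here is a minimal sketch of how that selection step might work: the searcher picks a candidate entity, and the original result set is updated with results from that entity’s augmented queries. The candidate list and the search stub are hypothetical stand-ins, not structures from the patent.

```python
# A minimal sketch of an entity selection step: choosing a candidate entity
# pulls in results from that entity's augmented queries. The candidates and
# the search stub below are placeholders for illustration.
CANDIDATES = {
    "John Adams (2nd President)": ["john adams second president", "john adams abigail adams"],
    "John Quincy Adams (6th President)": ["john quincy adams sixth president"],
    "John Adams (artist)": ["john adams artist works"],
}

def updated_results(original_results, selected, search=lambda q: [f"result for '{q}'"]):
    extra = [r for q in CANDIDATES[selected] for r in search(q)]
    # The patent notes the updated set includes at least one result not in the original set.
    return original_results + [r for r in extra if r not in original_results]

print(updated_results(["generic John Adams page"], "John Adams (artist)"))
```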

Augmented Search Queries with Entities Process Takeaways

When an original query includes an entity reference, Google may allow searchers to identify which entity they are interested in, and possibly attributes associated with that entity. This really brings the knowledge graph to search, using it to augment queries. A flowchart from the patent illustrates this process in a way that was worth including in this post:

[Image: augmented search queries flowchart]

The patent provides a very detailed example of how a search that includes entity information about a royal wedding in England might be surfaced using this augmented search query approach. That may not be a query that I would perform, but I can imagine some that I would like to try out, involving sports, movies, and business. If you own a business and it is not in Google’s knowledge graph, you may end up missing out on being included in results from augmented search queries.



The post Augmented Search Queries Using Knowledge Graph Information appeared first on SEO by the Sea ⚓.




How AMP technology can upgrade your email campaigns

August 22, 2019

Accelerated Mobile Pages (AMP) technology is set to revolutionize bulk email marketing as we know it today.

It enables senders to add dynamic content to previously static, flat email pages, and lets recipients react to it right in the message. To view extra photos or scroll through price offers, customers no longer need to load the site page, open a new tab, or click on a link – now they can do it without leaving the email body. Supported by Gmail and Mail.ru and adopted by major online platforms, it will soon extend to other email clients and brands.

How it works

AMP technology is a set of HTML tags backed up by CSS and JavaScript. It aims to speed up the mobile web and optimize page performance, creating new ways for more versatile customer engagement. To send AMP-powered bulk email campaigns, you have to register with Google as a dynamic content sender and make sure your email automation service provider supports the technology. As of today, the following companies have announced AMP support:

  • eSputnik
  • Stripo
  • Litmus
  • Amazon SES and Amazon Pinpoint
  • SparkPost
  • Twilio Sendgrid

This list will definitely grow, as gearing emails with app functionality is a great opportunity to increase the ROI of your email marketing campaigns.

Benefits of AMP technology

  • Interactive elements increase recipients’ engagement and, as a result, the time spent on the email. The more time a subscriber spends on the email, the more likely they are to respond to the offer or take some other action.
  • Email recipients can interact directly with the content without needing to load separate pages. This saves time and makes the shopping experience easier and more satisfying. And satisfied buyers are more likely to turn into repeat customers.
  • Easy to use, AMP-powered messages improve usability, which again leads to greater responsiveness and engagement.
  • AMP messages do not involve third parties, and the conversation goes only between a sender and a recipient.

Where to apply AMP technology

1. Online shopping

Though a regular flat email can also contain interactive elements like carousels, countdown timers, or rollovers, customers still have to land on a webpage to browse a catalog or check current product availability. An AMP-powered campaign allows a complete checkout process directly in the email. You can decide upon size, color, and material and complete the order without leaving the email. The same approach can be integrated with cart abandonment campaigns, allowing people to revisit their abandoned carts and make changes if needed.

2. Booking

AMP email can benefit travel industry brands by enabling people to check available tickets, rooms, car trips, or tables at their favorite restaurant. Apart from simply seeing how many offers are left, recipients can also choose a seat number or specify a location – for example, preferring a back row when buying a movie ticket or a window seat when reserving a flight.

3. Delivery

Companies providing delivery services can send AMP emails that allow real-time tracking of the courier carrying the order, rather than just notifying the recipient of a status change.

4. Event invitations

Backed by AMP technology, invitation emails can now let recipients RSVP to an event and add any necessary comments – for example, confirming participation in a webinar or choosing the time for a Skype call.

5. Surveys and polls

AMP technology also creates good conditions for richer survey emails, making it easy to participate in polls and fill out questionnaires. It also makes it possible to leave feedback or a review in real time and to see updates on existing comments.

6. Financial sector

Adopting AMP emails can also be transformative for the financial industry. An online calculator form built within the email will help clarify the loan details, perform an estate appraisal or make other basic calculations straight in the email.

7. Subscription

With the help of AMP technology, subscriptions can be managed in a more convenient way. Recipients can not only subscribe to newsletters but also choose the time and frequency of those messages.

How to start sending AMP emails

Before you dive into the creation of AMP campaigns, make sure both the recipient’s email client and your ESP support the AMP technology. The next step is to contact Google as a dynamic content sender and ask them to add your email address to the whitelist. Here is how to do it:

To register with Google, create two similar emails: an HTML email and an email with an AMP part.

[Image: an HTML email and its AMP HTML counterpart]

  1. Add dynamic content and make sure AMP elements get validated.
  2. Test whether the AMP campaign has the appropriate appearance and behavior.
  3. Verify your sender domain with SPF, DKIM, and DMARC.
  4. Send both emails from your corporate email address to ampforemail.whitelisting@gmail.com.
  5. Fill in the Sender Registration Form.
  6. Wait till Google sends you an email notifying that you have been approved for sending AMP email to Gmail accounts.

Keep in mind that your authorization may take several days, after which you will be able to send AMP-powered emails.
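To give a sense of what the two-part message looks like in practice, here is a minimal sketch in Python of assembling an email that carries a plain-text part, an AMP part, and an HTML part. Gmail reads the AMP version from a text/x-amp-html MIME part; the addresses and markup below are placeholders.

```python
# A minimal sketch of building an email with plain-text, AMP, and HTML parts
# using Python's standard library. Addresses and markup are placeholders.
from email.message import EmailMessage

AMP_BODY = """\
<!doctype html>
<html amp4email>
<head>
  <meta charset="utf-8">
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <style amp4email-boilerplate>body{visibility:hidden}</style>
</head>
<body>Hello from an AMP email!</body>
</html>
"""

msg = EmailMessage()
msg["Subject"] = "AMP email test"
msg["From"] = "sender@yourdomain.example"
msg["To"] = "recipient@example.com"
msg.set_content("Hello from the plain-text fallback.")       # text/plain
msg.add_alternative(AMP_BODY, subtype="x-amp-html")          # text/x-amp-html (the AMP part)
msg.add_alternative("<p>Hello from the HTML version.</p>",
                    subtype="html")                          # text/html, kept last

# Sending works like any other message, e.g. via smtplib:
# import smtplib
# with smtplib.SMTP("smtp.yourdomain.example") as s:
#     s.send_message(msg)
```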

Though the technology of accelerated mobile pages is still under development, its potential is great. Billions of emails are sent on a daily basis, and almost 70% of them are read on mobile devices. This means that a large part of the interaction between a brand and its customers happens via email and SMS campaigns.

AMP technology, when smartly integrated into the overall marketing strategy, will definitely make this interaction more beneficial for each party. Customers will get a more convenient and satisfying experience, and companies will be able to grow email responsiveness and encourage more action.

Zhanna Tarakanova is PR Manager at eSputnik.

The post How AMP technology can upgrade your email campaigns appeared first on Search Engine Watch.



Google Knowledge Graph Reconciliation

August 15, 2019

Exploring how Google’s knowledge graph works can provide some insights into how it is growing and improving, and how it may influence what we see on the web. A newly granted Google patent from the end of last month tells us about one way that Google may increase the amount of data its knowledge graph contains.

The process involved in that patent doesn’t work quite the same way as the patent I wrote about in the post How the Google Knowledge Graph Updates Itself by Answering Questions, but taken together, they tell us how the knowledge graph is growing and improving. Part of the process involves the entity extraction that I wrote about in Google Shows Us How It Uses Entity Extractions for Knowledge Graphs.

This patent tells us that information that may make its way into Google’s knowledge graph isn’t limited to content on the Web, but may also “originate from another document corpus, such as internal documents not available over the Internet or another private corpus, from a library, from books, from a corpus of scientific data, or from some other large corpus.”

What is Knowledge Graph Reconciliation?

The patent tells us about how a knowledge graph is constructed and processes that it follows to update and improve itself.

The site Wordlift includes some definitions related to entities and the Semantic Web. The definition they provide for reconciling entities is “providing computers with unambiguous identifications of the entities we talk about.” This patent from Google focuses on a broader use of the word “reconciliation” as it applies to knowledge graphs: making sure a knowledge graph takes advantage of all the entity information from web sources that may be entered into it.

This process involves finding missing entities and missing facts about entities from a knowledge graph by using web-based sources to add information to a knowledge graph.

Problems with knowledge graphs

Large data graphs like Google’s Knowledge Graph store data and rules that describe knowledge about the data in a way that allows the information they provide to be built upon. A patent granted to Google describes how Google may build upon data within a knowledge graph so that it contains more information. The patent doesn’t just cover information from within the knowledge graph itself; it can also look to sources such as online news.

Tuples as Units of Knowledge Graphs

The patent presents some definitions that are worth learning. One of those is about facts involving entities:

A fact for an entity is an object related to the entity by a predicate. A fact for a particular entity may thus be described or represented as a predicate/object pair.

The relationship between the Entity (a subject) and a fact about the entity (a predicate/object pair) is known as a tuple.

In a knowledge graph, entities, such as people, places, things, concepts, etc., may be stored as nodes and the edges between those nodes may indicate the relationship between the nodes.

For example, the nodes “Maryland” and “United States” may be linked by the edges of “in country” and/or “has state.”

A basic unit of such a data graph can be a tuple that includes two entities, a subject entity and an object entity, and a relationship between the entities.

Tuples often represent real-world facts, such as “Maryland is a state in the United States.” (A Subject, A Verb, and an Object.)

A tuple may also include information, such as:

  • Context information
  • Statistical information
  • Audit information
  • Metadata about the edges
  • etc.

When a knowledge graph contains information about a tuple, it may also know about the source of that tuple and a score for the originating source of the tuple.
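As a minimal sketch, a tuple and its bookkeeping might be modeled like this; the field names and the score are assumptions for illustration, not structures from the patent.

```python
# A minimal sketch of a tuple - subject entity, predicate, object entity -
# plus the source bookkeeping the patent mentions. Field names are assumed.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FactTuple:
    subject: str    # e.g. "Maryland"
    predicate: str  # e.g. "is a state in"
    obj: str        # e.g. "United States"

@dataclass
class TupleRecord:
    fact: FactTuple
    sources: list[str] = field(default_factory=list)  # where the fact was seen
    source_score: float = 0.0                         # reliability of the originating source

maryland = TupleRecord(
    fact=FactTuple("Maryland", "is a state in", "United States"),
    sources=["https://example.com/us-states"],
    source_score=0.9,
)
print(maryland.fact)
```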

A knowledge graph may lack information about some entities. Those entities may be described in document sources, such as web pages, but manual addition of that entity information can be slow and does not scale.

This is a problem facing knowledge graphs – missing entities and their relationships to other entities can reduce the usefulness of querying the data graph. Knowledge graph reconciliation provides a way to make a knowledge graph richer and stronger.

The patent tells us about inverse tuples, which reverse the subject and object entities.

For example, if the potential tuples include a tuple such as <Maryland, is a state in, United States>, the system may generate an inverse tuple of <United States, has state, Maryland>.

Inverse tuples may be generated for some predicates but not for others. Tuples with a date or measurement as the object are not good candidates for inversion and may not have many inverse occurrences.

For example, the tuple <Planet of the Apes, released in, 2001> is not likely to have an inverse occurrence of <2001, is the year of release, Planet of the Apes> in the target data graph.
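A minimal sketch of that inverse-tuple step might look like this; the predicate inversions and the date/measurement check are illustrative assumptions.

```python
# A minimal sketch of generating inverse tuples while skipping predicates
# whose objects look like dates or measurements. The inversion table and
# the date check are assumptions for illustration.
import re

INVERSE_PREDICATES = {
    "is a state in": "has state",
    "acted in": "had actor",
}

def looks_like_date_or_measurement(obj):
    return bool(re.fullmatch(r"\d{4}|\d+(\.\d+)?\s*\w*", obj))

def inverse_tuple(subject, predicate, obj):
    """Return the inverse tuple, or None when inversion is not a good candidate."""
    if looks_like_date_or_measurement(obj):
        return None  # e.g. <Planet of the Apes, released in, 2001> is not inverted
    inv = INVERSE_PREDICATES.get(predicate)
    return (obj, inv, subject) if inv else None

print(inverse_tuple("Maryland", "is a state in", "United States"))
print(inverse_tuple("Planet of the Apes", "released in", "2001"))
```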

Clustering of Tuples is also discussed in the patent. We are told that the system may then cluster the potential tuples by:

  • source
  • provenance
  • subject entity type
  • subject entity name

This kind of clustering takes place in order to generate source data graphs.

The process behind the knowledge graph reconciliation patent:

  1. Potential entities may be identified from facts generated from web-based sources
  2. Facts from those sources are analyzed and cleaned, generating a small source data graph that includes entities and facts from those sources
  3. The source graph may be generated for a potential source entity that does not have a matching entity in the target data graph
  4. The system may repeat the analysis and generation of source data graphs for many source documents, generating many source graphs, each for a particular source document
  5. The system may cluster the source data graphs together by type of source entity and source entity name
  6. The entity name may be a string extracted from the text of the source
  7. Thus, the system generates clusters of source data graphs of the same source entity name and type
  8. The system may split a cluster of source graphs into buckets based on the object entity of one of the relationships, or predicates
  9. The system may use a predicate that is determinative for splitting the cluster
  10. A determinative predicate generally has a unique value, e.g., object entity, for a particular entity
  11. The system may repeat the dividing a predetermined number of times, for example using two or three different determinative predicates, splitting the buckets into smaller buckets. When the iteration is complete, graphs in the same bucket share two or three common facts
  12. The system may discard buckets without sufficient reliability and discard any conflicting facts from graphs in the same bucket
  13. The system may merge the graphs in the remaining buckets, and use the merged graphs to suggest new entities and new facts for the entities for inclusion in a target data graph
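Here is a minimal sketch of steps 5 through 12 of that process – clustering source graphs by entity type and name, splitting buckets on determinative predicates, and discarding buckets without enough corroboration. The data shapes and the two-graph cutoff are assumptions for illustration.

```python
# A minimal sketch of clustering source graphs and splitting them into
# buckets on determinative predicates, so graphs left in a bucket share
# the same determinative facts. Data shapes and cutoffs are assumed.
from collections import defaultdict

def group_by(items, key):
    out = defaultdict(list)
    for item in items:
        out[key(item)].append(item)
    return out

def bucket_source_graphs(source_graphs, determinative_predicates, min_graphs=2):
    # Step 5: cluster by source entity type and name.
    clusters = group_by(source_graphs, key=lambda g: (g["type"], g["name"]))
    buckets = list(clusters.values())
    # Steps 8-11: iteratively split each bucket on a determinative predicate.
    for pred in determinative_predicates:
        buckets = [
            list(group)
            for bucket in buckets
            for group in group_by(bucket, key=lambda g, p=pred: g["facts"].get(p)).values()
        ]
    # Step 12: discard buckets without sufficient corroboration.
    return [b for b in buckets if len(b) >= min_graphs]

graphs = [
    {"type": "movie", "name": "Planet of the Apes",
     "facts": {"release year": "1968", "director": "Franklin J. Schaffner"}},
    {"type": "movie", "name": "Planet of the Apes",
     "facts": {"release year": "1968", "director": "Franklin J. Schaffner"}},
    {"type": "movie", "name": "Planet of the Apes",
     "facts": {"release year": "2001", "director": "Tim Burton"}},
]
# Only the corroborated 1968 bucket survives; the lone 2001 graph is discarded.
print(bucket_source_graphs(graphs, ["release year", "director"]))
```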

How Googlebot may be Crawling Facts to Build a Knowledge Graph

This is where some clustering comes into play. Imagine that the web sources are about science fiction movies, and they contain information about movies in the “Planet of the Apes” series, which has been remade at least once; there are a number of related movies in the series, and movies with the same names. The information about those movies may be found from sources on the Web, clustered together, and put through a reconciliation process because of the similarities. Relationships between the many entities involved may be determined and captured. We are told about the following steps:

  1. Each source data graph is associated with a source document, includes a source entity with an entity type that exists in the target data graph, and includes fact tuples
  2. The fact tuples identify a subject entity, a relationship connecting the subject entity to an object entity, and the object entity
  3. The relationship is associated with the entity type of the subject entity in the target data graph
  4. The computer system also includes instructions that, when executed by the at least one processor, cause the computer system to perform operations that include generating a cluster of source data graphs, the cluster including source data graphs associated with a first source entity of a first source entity type that share at least two fact tuples that have the first source entity as the subject entity and a determinative relationship as the relationship connecting the subject entity to the object entity
  5. The operations also include generating a reconciled graph by merging the source data graphs in the cluster when the source data graphs meet a similarity threshold and generating a suggested new entity and entity relationships for the target data graph based on the reconciled graph
More Features of Knowledge Graph Reconciliation

There appear to be 9 movies across the Planet of the Apes series and the rebooted series. The first “Planet of the Apes” was released in 1968, and the second “Planet of the Apes” was released in 2001. Since they have the same name, things could get confusing if they weren’t separated from each other. Facts about those movies can be used to break the “Planet of the Apes” cluster down into buckets – facts that tell us there was an original series and a rebooted series.

[Image: entity graph reconciliation – Planet of the Apes]

I’ve provided details of an example that Google pointed out, but here is how they describe breaking a cluster down into buckets based on facts:

For example, generating the cluster can include generating a first bucket for source data graphs associated with the first source entities and the first source entity type, splitting the first bucket into second buckets based on a first fact tuple, the first fact tuple having the first source entity as the subject entity and a first determinative relationship, so that source data graphs sharing the first fact tuple are in a same second bucket; and generating final buckets by repeating the splitting a quantity of times, each iteration using another fact tuple for the first source entity that represents a distinct determinative relationship, so that source data graphs sharing the first fact tuple and the other fact tuples are in the same final bucket, wherein the cluster is one of the final buckets.

So this aspect of knowledge graph reconciliation involves understanding related entities, including some that may share the same name, and removing ambiguity from how they might be presented within a knowledge graph.

Another aspect of knowledge graph reconciliation may involve merging data, such as seeing that one of the versions of the movie “Planet of the Apes” has more than one actor in it and merging that information together to make the knowledge graph more complete. The image below from the patent shows how that can be done:

[Image: knowledge graph reconciliation – actors from Planet of the Apes]

The patent also tells us that fact tuples representing conflicting facts from a particular data source may be discarded. Some types of facts about entities have only one answer, such as the birthdate of a person or the release date of a movie. If more than one appears, they will be checked to see whether one of them is wrong and should be removed. The same may happen with inverse tuples, which the patent also tells us about.

Inverse Tuples Generated and Discarded

[Image: knowledge graph reconciliation – inverse tuples]

When a tuple is a subject-verb-object, what are known as inverse tuples may be generated. If we have fact tuples such as “Maryland is a state in the United States of America” and “California is a state in the United States of America,” we may generate inverse tuples such as “The United States of America has a state named Maryland” and “The United States of America has a state named California.”

Tuples generated from one source may also conflict with tuples from another source when they are clustered by topic. An example comes from the recent trade deadline in Major League Baseball, where the right fielder Yasiel Puig was traded from the Cincinnati Reds to the Cleveland Indians. The tuple “Yasiel Puig plays for the Cincinnati Reds” conflicts with the tuple “The Cleveland Indians have a player named Yasiel Puig.” One of those tuples may be discarded during knowledge graph reconciliation.

There is a reliability threshold for tuples, and tuples that don’t meet it may be discarded as having insufficient evidence. For instance, a tuple that comes from only one source may not be considered reliable and may be discarded. If there are three sources for a tuple that are all from the same domain, that may also be considered insufficient evidence, and the tuple may be discarded.
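A minimal sketch of that evidence test might look like this; the exact thresholds are assumptions, since the patent describes the idea rather than specific numbers.

```python
# A minimal sketch of a tuple reliability check: require more than one
# source, spanning more than one domain. Thresholds are assumed.
from urllib.parse import urlparse

def is_reliable(sources, min_sources=2, min_domains=2):
    domains = {urlparse(u).netloc for u in sources}
    return len(sources) >= min_sources and len(domains) >= min_domains

print(is_reliable(["https://a.example/p1"]))                          # False: one source
print(is_reliable(["https://a.example/p1", "https://a.example/p2",
                   "https://a.example/p3"]))                          # False: one domain
print(is_reliable(["https://a.example/p1", "https://b.example/p2"]))  # True
```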

    Advantages of the Knowledge Graph Reconciliation Patent Process

  1. A data graph may be extended more quickly by identifying entities in documents and facts concerning the entities
  2. The entities and facts may be of high quality due to the corroborative nature of the graph reconciliation process
  3. The identified entities may be identified from news sources, to more quickly identify new entities to be added to the data graph
  4. Potential new entities and their facts may be identified from thousands or hundreds of thousands of sources, providing potential entities on a scale that is not possible with manual evaluation of documents
  5. Entities and facts added to the data graph can be used to provide more complete or accurate search results

The Knowledge Graph Reconciliation Patent can be found here:

Automatic discovery of new entities using graph reconciliation
Inventors: Oksana Yakhnenko and Norases Vesdapunt
Assignee: GOOGLE LLC
US Patent: 10,331,706
Granted: June 25, 2019
Filed: October 4, 2017

Abstract

Systems and methods can identify potential entities from facts generated from web-based sources. For example, a method may include generating a source data graph for a potential entity from a text document in which the potential entity is identified. The source data graph represents the potential entity and facts about the potential entity from the text document. The method may also include clustering a plurality of source data graphs, each for a different text document, by entity name and type, wherein at least one cluster includes the potential entity. The method may also include verifying the potential entity using the cluster by corroborating at least a quantity of determinative facts about the potential entity and storing the potential entity and the facts about the potential entity, wherein each stored fact has at least one associated text document.

Takeaways

The patent points out in one place that human evaluators may review additions to a knowledge graph. It is interesting to see how the process can use sources such as news stories to add new entities and facts about those entities. Being able to use web-based news to grow the knowledge graph means that it isn’t relying on human-edited sources such as Wikipedia, and the knowledge graph reconciliation process was interesting to learn about as well.



The post Google Knowledge Graph Reconciliation appeared first on SEO by the Sea ⚓.




How would Google Answer Vague Questions in Queries?

July 18, 2019

“How long is Harry Potter?” is asked in a diagram from a Google patent. The answer is unlikely to have anything to do with a dimension of the fictional character, but may have something to do with the length of one of the best-selling books featuring Harry Potter as a main character.

When questions are asked as queries at Google, sometimes they aren’t asked clearly, or with enough precision to make an answer easy to provide. How do vague questions get answered?

Question answering seems to be a common topic in Google patents recently. I wrote about one not long ago in the post How Google May Handle Question Answering when Facts are Missing.

This post is also about question answering, but it involves issues with the questions rather than the answers – particularly vague questions.

Early in the description for a recently granted Google Patent, we see this line, which is the focus of the patent:

Some queries may indicate that the user is searching for a particular fact to answer a question reflected in the query.

I’ve written a few posts about Google working on answering questions, and it is good to see more information about that topic being published in a new patent. As I have noted, this one focuses on what happens when questions asking for facts may be vague:

When a question-and-answer (Q&A) system receives a query, such as in the search context, the system must interpret the query, determine whether to respond, and if so, select one or more answers with which to respond. Not all queries may be received in the form of a question, and some queries might be vague or ambiguous.

The patent provides an example query for “Washington’s age.”

Washington’s Age could be referring to:

  • President George Washington
  • Actor Denzel Washington
  • The state of Washington
  • Washington D.C.

For the Q&A system to work correctly, it would have to decide which of these the searcher who typed that query into a search box was most likely interested in finding the age of. Trying that query, Google decided that I was interested in George Washington:

[Image: answering vague questions]

The problem that this patent is intended to resolve is captured in this line from the summary of the patent:

The techniques described in this paper describe systems and methods for determining whether to respond to a query with one or more factual answers, including how to rank multiple candidate topics and answers in a way that indicates the most likely interpretation(s) of a query.

How would Google potentially resolve this problem?

It would likely start by trying to identify one or more candidate topics from a query. It may try to generate, for each candidate topic, a candidate topic-answer pair that includes both the candidate topic and an answer to the query for the candidate topic.

It would obtain search results based on the query, where one or more of those results references an annotated resource – a resource that, based on automated evaluation of its content, is associated with an annotation identifying one or more likely topics for that resource.

For each candidate topic-answer pair, a score would then be determined based on:

(i) The candidate topic appearing in the annotations of the resources referenced by one or more of the search results
(ii) The query answer appearing in annotations of the resources referenced by the search results, or in the resources referenced by the search results.

A decision would also be made on whether to respond to the query, with one or more answers from the candidate topic-answer pairs, based on the scores for each.
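Here is a minimal sketch of that scoring and response decision, following signals (i) and (ii) above; the weights, relevance values, and response threshold are illustrative assumptions, not numbers from the patent.

```python
# A minimal sketch of scoring candidate topic-answer pairs against
# annotated search results, then answering only when the best candidate
# clears a threshold. Data shapes and thresholds are assumed.
def score_pair(topic, answer, results):
    score = 0.0
    for r in results:
        if topic in r["annotations"]:                           # (i) topic in annotations
            score += r.get("relevance", 1.0)
        if answer in r["annotations"] or answer in r["text"]:   # (ii) answer in annotations or text
            score += r.get("relevance", 1.0)
    return score

def respond(pair_scores, threshold=2.0):
    best = max(pair_scores, key=pair_scores.get, default=None)
    return best if best and pair_scores[best] >= threshold else None

results = [
    {"annotations": {"George Washington"}, "text": "George Washington died at age 67.", "relevance": 1.5},
    {"annotations": {"Washington (state)"}, "text": "Washington became a state in 1889.", "relevance": 0.8},
]
pair_scores = {
    ("George Washington", "67"): score_pair("George Washington", "67", results),
    ("Washington (state)", "130"): score_pair("Washington (state)", "130", results),
}
print(respond(pair_scores))  # ('George Washington', '67')
```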

Topic-Answer Scores

The patent tells us about some optional features as well.

  1. The scores for the candidate topic-answer pairs would have to meet a predetermined threshold
  2. This process may decide to not respond to the query with any of the candidate topic answer pairs
  3. One or more of the highest-scoring topic-answer pairs might be shown
  4. A topic-answer might be selected from one of a number of interconnected nodes of a graph
  5. The Score for the topic-answer pair may also be based upon a respective query relevance score of the search results that include annotations in which the candidate topic occurs
  6. The score to the topic-answer pair may also be based upon a confidence measure associated with each of one or more annotations in which the candidate topic in a respective candidate topic-answer pair occurs, which could indicate the likelihood that the answer is correct for that question

Knowledge Graph Connection to Vague Questions?

[Image: vague questions answered with a knowledge base]

This question-answering system can include a knowledge repository which includes a number of topics, each of which includes attributes and associated values for those attributes.

It may use a mapping module to identify one or more candidate topics from the topics in the knowledge repository, which may be determined to relate to a possible subject of the query.

An answer generator may generate for each candidate topic, a candidate topic-answer pair that includes:

(i) the candidate topic, and
(ii) an answer to the query for the candidate topic, wherein the answer for each candidate topic is identified from information in the knowledge repository.

A search engine may return search results based on the query, which can reference an annotated resource – a resource that, based on an automated evaluation of its content, is associated with an annotation that identifies one or more likely topics associated with the resource.

A score may be generated for each candidate topic-answer pair based on:

(i) an occurrence of the candidate topic in the annotations of the resources referenced by one or more of the search results
(ii) an occurrence of the answer in annotations of the resources referenced by the one or more search results, or in the resources referenced by the one or more search results. A front-end system at the one or more computing devices can determine whether to respond to the query with one or more answers from the candidate topic-answer pairs, based on the scores.

The additional features above for topic-answers appear to be repeated in this knowledge repository approach:

  1. The front end system can determine whether to respond to the query based on a comparison of one or more of the scores to a predetermined threshold
  2. Each of the topics in the knowledge repository can be represented by a node in a graph of interconnected nodes
  3. The returned search results can be associated with a respective query relevance score and the score can be determined by the scoring module for each candidate topic-answer pair based on the query relevance scores of one or more of the search results that reference an annotated resource in which the candidate topic occurs
  4. For one or more of the candidate topic-answer pairs, the score can be further based on a confidence measure associated with each of one or more annotations in which the candidate topic in a respective candidate topic-answer pair occurs, or each of one or more annotations in which the answer in a respective candidate topic-answer pair occurs

Advantages of this Vague Questions Approach

  1. Candidate responses to the query can be scored so that a Q&A system or method can determine whether to provide a response to the query.
  2. If the query is not asking a question or none of the candidate answers are sufficiently relevant to the query, then no response may be provided
  3. The techniques described herein can interpret a vague or ambiguous query and provide a response that is most likely to be relevant to what a user desired in submitting the query.

This patent about answering vague questions is:

Determining question and answer alternatives
Inventors: David Smith, Engin Cinar Sahin and George Andrei Mihaila
Assignee: Google Inc.
US Patent: 10,346,415
Granted: July 9, 2019
Filed: April 1, 2016

Abstract

A computer-implemented method can include identifying one or more candidate topics from a query. The method can generate, for each candidate topic, a candidate topic-answer pair that includes both the candidate topic and an answer to the query for the candidate topic. The method can obtain search results based on the query, wherein one or more of the search results references an annotated resource. For each candidate topic-answer pair, the method can determine a score for the candidate topic-answer pair for use in determining a response to the query, based on (i) an occurrence of the candidate topic in the annotations of the resources referenced by one or more of the search results, and (ii) an occurrence of the answer in annotations of the resources referenced by the one or more search results, or in the resources referenced by the one or more search results.

Vague Questions Takeaways

I am reminded of a 2005 Google Blog post called Just the Facts, Fast when this patent tells us that sometimes it is “most helpful to a user to respond directly with one or more facts that answer a question determined to be relevant to a query.”

The different factors that might be used to determine which answer to show, if an answer is shown, include a confidence level – confidence that an answer to a question is correct. That reminds me of the association scores for attributes related to entities that I wrote about in Google Shows Us How It Uses Entity Extractions for Knowledge Graphs. That patent told us that those association scores for entity attributes might be generated over the corpus of web documents as Googlebot crawled pages extracting entity information, so those confidence levels might be built into the knowledge graph for attributes that may be topic-answers for a question-answering query.

A webpage that is relevant for such a query, and that an answer might be taken from may be used as an annotation for a displayed answer in search results.



The post How would Google Answer Vague Questions in Queries? appeared first on SEO by the Sea ⚓.

