
How Google’s Knowledge Graph Updates Itself by Answering Questions

October 31, 2018


Those of us who are used to doing Search Engine Optimization have spent years looking at URLs filled with content, at the links between that content, and at how algorithms such as PageRank (based upon links pointing between pages) and information retrieval scores (based upon the relevance of that content) determine how well pages rank in search results in response to the queries that searchers enter into search boxes. Web pages have been seen as nodes of information connected by links. This was the first generation of SEO.

Search has been going through a transformation. Back in 2012, Google introduced what it refers to as the knowledge graph, telling us that it would begin focusing upon indexing things instead of strings. By “strings,” they were referring to the words that appear in queries and in documents on the Web. By “things,” they were referring to named entities, or real and specific people, places, and things. When people searched at Google, the search engine would show Search Engine Results Pages (SERPs) filled with URLs to pages that contained the strings of letters we were searching for. Google still does that, but it is slowly shifting toward showing search results that are about people, places, and things.

Google started showing us in patents how they were introducing entity recognition to search, as I described in this post:
How Google May Perform Entity Recognition

They now show us knowledge panels in search results that tell us about the people, places, and things they recognize in the queries we perform. In addition to crawling webpages and indexing the words on those pages, Google is collecting facts about the people, places, and things it finds on those pages.

A Google patent granted in the past week tells us how Google’s knowledge graph updates itself as it collects information about entities, their properties and attributes, and the relationships involving them. This is part of the evolution of SEO taking place today – learning how Search is changing from being based upon strings to being based upon knowledge.

What does the patent tell us about knowledge? This is one of the sections that details the kind of knowledge graph Google might build when it indexes pages these days:

Knowledge graph portion includes information related to the entity [George Washington], represented by [George Washington] node. [George Washington] node is connected to [U.S. President] entity type node by [Is A] edge with the semantic content [Is A], such that the 3-tuple defined by nodes and the edge contains the information “George Washington is a U.S. President.” Similarly, “Thomas Jefferson Is A U.S. President” is represented by the tuple of [Thomas Jefferson] node, [Is A] edge, and [U.S. President] node. Knowledge graph portion includes entity type nodes [Person], and [U.S. President] node. The person type is defined in part by the connections from [Person] node. For example, the type [Person] is defined as having the property [Date Of Birth] by node and edge, and is defined as having the property [Gender] by node and edge. These relationships define in part a schema associated with the entity type [Person].
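
To make those 3-tuples a little more concrete, here is a minimal sketch in Python of facts stored as nodes joined by labeled edges. It is purely illustrative and in no way Google’s actual data structures:

```python
from collections import defaultdict

class KnowledgeGraph:
    """A toy graph: subject nodes joined to object nodes by labeled edges."""

    def __init__(self):
        # maps a subject node to a list of (edge_label, object_node) pairs
        self.edges = defaultdict(list)

    def add_fact(self, subject, predicate, obj):
        # store one 3-tuple, e.g. ("George Washington", "Is A", "U.S. President")
        self.edges[subject].append((predicate, obj))

    def facts_about(self, subject):
        return [(subject, predicate, obj) for predicate, obj in self.edges[subject]]

graph = KnowledgeGraph()
graph.add_fact("George Washington", "Is A", "U.S. President")
graph.add_fact("Thomas Jefferson", "Is A", "U.S. President")
# The [Person] entity type carries a schema: properties any person should have.
graph.add_fact("Person", "Has Property", "Date Of Birth")
graph.add_fact("Person", "Has Property", "Gender")

print(graph.facts_about("George Washington"))
# [('George Washington', 'Is A', 'U.S. President')]
```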

Note that SEO is no longer just about how often certain words appear on pages of the Web, what words appear in links to those pages, in page titles, in headings, and in alt text for images, or how often certain words are repeated and related words are used. Google is also looking at the facts that are mentioned about entities, such as entity types like “person,” and properties, such as “Date of Birth” or “Gender.”

Note that the quote also mentions the word “schema,” as in “These relationships define in part a schema associated with the entity type [Person].” As part of the transformation of SEO from strings to things, the major search engines joined forces to offer us Schema.org structured data, a machine-readable way of sharing information with search engines about the entities that we write about, their properties, and the relationships between them.

I’m writing about this patent because I am participating in a webinar about knowledge graphs and how they are being used and updated. The webinar is tomorrow:
#SEOisAEO: How Google Uses The Knowledge Graph in its AE algorithm. I haven’t been referring to SEO as Answer Engine Optimization, or AEO, and it’s unlikely that I will start, but I see it as an evolution of SEO.

I’m also writing about this Google patent because it starts out with the following line under the heading “Background”:

This disclosure generally relates to updating information in a database. Data has previously been updated by, for example, user input.

This line points to the fact that, under this approach, the knowledge graph no longer needs to be updated by users; instead, it updates itself.

Updating Knowledge Graphs

I attended a Semantic Technology and Business conference a couple of years ago, where the head of Yahoo’s knowledge base presented and took questions in a question-and-answer session after he spoke. Someone asked him what happens when information in a knowledge graph changes and needs to be updated.

His answer was that a knowledge graph would have to be updated manually to have new information placed within it.

That wasn’t a satisfactory answer because it would have been good to hear that the information from such a source could be easily updated. I’ve been waiting for Google to answer a question like this, which made seeing a line like this one from this patent a good experience:

In some implementations, a system identifies information that is missing from a collection of data. The system generates a question to provide to a question answering service based on the missing information, and uses the response from the question answering service to update the collection of data.

This would be a knowledge graph update, and the patent provides details using language that reflects exactly that:

In some implementations, a computer-implemented method is provided. The method includes identifying an entity reference in a knowledge graph, wherein the entity reference corresponds to an entity type. The method further includes identifying a missing data element associated with the entity reference. The method further includes generating a query based at least in part on the missing data element and the type of the entity reference. The method further includes providing the query to a query processing engine. The method further includes receiving information from the query processing engine in response to the query. The method further includes updating the knowledge graph based at least in part on the received information.
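
Read literally, that claim describes a loop: compare an entity’s properties to a schema table for its type, notice what is missing, ask a question-answering service about it, and write the answer back into the graph. The sketch below is only one reading of that loop – the schema table, question generator, and canned answer service are hypothetical stand-ins for the real query processing engine:

```python
SCHEMA_TABLE = {
    # properties the schema expects for each entity type (illustrative only)
    "U.S. President": ["Date Of Birth", "Date Of Death", "Spouse"],
}

def find_missing_properties(entity_type, known_properties):
    # compare the properties we already have to the schema for the entity type
    expected = SCHEMA_TABLE.get(entity_type, [])
    return [prop for prop in expected if prop not in known_properties]

def generate_question(entity_name, prop):
    # turn a missing property into a natural-language question
    return f"What is the {prop.lower()} of {entity_name}?"

def update_entity(entity_name, entity_type, known_properties, answer_service):
    updated = dict(known_properties)
    for prop in find_missing_properties(entity_type, known_properties):
        answer = answer_service(generate_question(entity_name, prop))
        if answer is not None:
            updated[prop] = answer  # write the new fact back into the graph
    return updated

# A canned answer service standing in for the real question answering system.
def toy_answer_service(question):
    canned = {"What is the date of birth of George Washington?": "February 22, 1732"}
    return canned.get(question)

print(update_entity("George Washington", "U.S. President",
                    {"Spouse": "Martha Washington"}, toy_answer_service))
# {'Spouse': 'Martha Washington', 'Date Of Birth': 'February 22, 1732'}
```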

How does the search engine do this? The patent provides more information that fills in such details.

The approaches to achieve this include:

…Identifying a missing data element comprises comparing properties associated with the entity reference to a schema table associated with the entity type.

…Generating the query comprises generating a natural language query. This can involve selecting, from the knowledge graph, disambiguation query terms associated with the entity reference, wherein the terms comprise property values associated with the entity reference, or updating the knowledge graph by updating the data graph to include information in place of the missing data element.

…Identifying an element in a knowledge graph to be updated based at least in part on a query record. Operations further include generating a query based at least in part on the identified element. Operations further include providing the query to a query processing engine. Operations further include receiving information from the query processing engine in response to the query. Operations further include updating the knowledge graph based at least in part on the received information.

A knowledge graph updates itself in these ways:

(1) The knowledge graph may be updated based upon one or more previously performed searches.
(2) The knowledge graph may be updated with a natural language query, using disambiguation query terms associated with the entity reference, wherein the terms comprise property values associated with the entity reference.
(3) The knowledge graph may use properties associated with the entity reference to fill in missing data elements with new information.

The patent that describes how Google’s knowledge graph updates itself is:

Question answering to populate knowledge base
Inventors: Rahul Gupta, Shaohua Sun, John Blitzer, Dekang Lin, Evgeniy Gabrilovich
Assignee: Google
US Patent: 10,108,700
Granted: October 23, 2018
Filed: March 15, 2013

Abstract

Methods and systems are provided for a question answering. In some implementations, a data element to be updated is identified in a knowledge graph and a query is generated based at least in part on the data element. The query is provided to a query processing engine. Information is received from the query processing engine in response to the query. The knowledge graph is updated based at least in part on the received information.




SEO by the Sea ⚓


What Google’s E-A-T score means for ecommerce

September 8, 2018

Google updated their search quality rating guidelines in July. These rating guidelines, which you can view here, are used by humans to rate the quality of web pages as search results for specific queries. These ratings are used to guide how Google’s search engineers improve their search engine.

Soon after the update to the guidelines, Google introduced a broad core algorithm update circa August 1st, most likely to ensure that the search engine was returning results that reflected the changes to its guidelines.

One of the most important changes to the guidelines was a greater focus on Expertise, Authority, and Trustworthiness (E-A-T), as well as a focus on applying this to individual authors—not just brands or web pages.

E-A-T is important for the ecommerce industry because shopping pages are considered by the rater guidelines to be “Your Money or Your Life” (YMYL) pages, and these types of pages are held to the highest quality standards. For that reason they are also expected to have the highest E-A-T.

If you want your shopping pages to show up in the search results, you will need to identify how to maximize your E-A-T score for Google’s hypothetical human quality raters, which Google’s algorithms are designed to emulate.

Let’s talk about how to do that.

Which content is Google taking into consideration?

The expertise, authority, and trustworthiness of a page are determined primarily by looking at the main content on the page. What counts as main content is obvious when we are talking about a content site like a blog, but which content are Google’s quality raters taking into consideration on your category and product pages?

The first important thing to recognize is that “content” is not limited to text. The rater guidelines explicitly state that “webpage content includes … functionality (such as online shopping features, email, calculator functionality, online games, etc.).”

So raters aren’t just being asked to evaluate text. They’re being asked to evaluate your site’s functionality. It isn’t just the text on your page that needs to demonstrate high E-A-T; it’s the design, interface, interactivity, usability, and other features.

For example, raters are explicitly asked to “put at least one product in the cart to make sure the shopping cart is functioning.” They are reminded that “high quality shopping content should allow users to find the products they want and to purchase the products easily.” I highly recommend providing these basic functions expected of a modern ecommerce site in service of that goal:

  • A persistent shopping cart that stores the products you are planning to buy
  • The ability to create a wishlist
  • The ability to sort category pages and search results by price, weighted relevance, review score, best sellers, and similar criteria
  • The ability to filter category and search results by product features and tags
  • A responsive design that looks good and functions well on mobile devices
  • Modern search capable of interpreting queries and dealing with misspellings rather than simply matching text exactly to what is found on the page

Google provides quality raters with some examples of main content. In an example featuring a product page, they consider the content behind the reviews, shipping, and safety information tabs to be main content:

The rest of the content on the page is considered “supplementary content.” This is because the purpose of the shopping page is to sell or give information about a product. Everything directly involved in serving that purpose is considered main content. Everything peripheral to it, such as suggested products and navigation, is considered to be supplementary.

For a page to receive a good quality score, raters are asked to look for a “satisfying amount of high quality content.” They give an example of a shopping page that includes “the manufacturer’s product specs, …original product information, over 90 user reviews, shipping and returns information, [and] multiple images of the product.” High E-A-T isn’t going to get you far enough if the amount of content isn’t satisfactory for the purpose of the shopping page, so this is where you need to start.

Prerequisites

For quality raters to determine the E-A-T of your shopping pages, there are a few things they need to be able to find to give you a positive score at all.

When raters are evaluating shopping pages, the guidelines ask them to “do some special checks” for “contact information,” including “policies on payment, exchanges, and returns,” suggesting that this information will most likely be found under “customer service.” Make sure this information is present and easy to find.

What is expertise in the ecommerce industry?

The rater guidelines offer an example of a shopping page that earns a high quality score because of its high E-A-T:

They say that the page has “high E-A-T for the purpose of the page” because they have “expertise in these specific types of goods.” They mention that many of the products sold on the site are unique to this company, presumably as evidence of this. They also mention that they have “a positive reputation.”

This suggests that what counts as expertise for a shopping page, according to Google, is the expertise of the manufacturer and the brand regarding the products being sold. Their good reputation and exclusive products are cited as evidence of this. Needless to say, this means you should only work with manufacturers that have recognized expertise in the industry.

The expertise of those who don’t work for your brand is actually relevant as well. The guidelines ask raters to look for “recommendations by experts, news articles, and other credible information…about the website” while they are doing reputation research for your brand or your content creators.

This emphasizes the importance of outreach in earning a high E-A-T score. Obviously, your products, your site functionality, and your brand integrity must be inherently high in order to earn positive press and recommendations from experts in the appropriate industries, but there are limits to how much your site and products are capable of promoting themselves.

To earn a positive reputation, you will also need to reach out directly to industry influencers and experts, send products to product reviewers, and make headlines by taking newsworthy actions. Failing to do so means that, even if your products, brand, and site are stellar, you won’t have a negative reputation, but you will have less of a reputation than those who have made the effort to promote themselves effectively.

Crucially, reputation requires high editorial freedom. Placing sponsored content on sites or promoting your site with ads will not earn you a positive reputation, at least not directly, because content created by your own brand isn’t considered during this research phase.

What makes an ecommerce brand authoritative?

The rater guidelines consider this shopping page to deserve the “highest quality” rating:

As part of the reasoning behind this, they mention that “since the store produces this backpack, they are experts on the product, making the page on their own website authoritative.”

This reveals an interesting insight into how Google decides product content is authoritative. An industry expert or the manufacturer of the product needs to be providing the information, or it isn’t authoritative.

In contrast, a blog post written by somebody who doesn’t work in this industry, isn’t an outdoors enthusiast, and otherwise doesn’t know very much about backpacks wouldn’t be considered an authority on this product.

Google provides this page as an example of one that should receive the “lowest” quality rating:

They name “no evidence of E-A-T” as one reason for this. They note that the “Contact Us” page doesn’t give a company name or physical address, and that the “Shipping and Returns” page lists a different company that doesn’t seem related.

Perhaps most notably for authority considerations, however, they note that the page includes official-looking logos for the Better Business Bureau and Google Checkout, even though these organizations don’t seem to actually be affiliated with the website. While the guidelines don’t explicitly mention it, the inclusion of the “Nike” logo in the header also seems to be deceptive.

When it comes to authority, Google seems to be most concerned with how it can be misrepresented. Presumably, a small company with limited reach could still be considered to have good authority so long as it only claims to be the authority over its own products. Likewise, a marketplace selling products produced by other manufacturers would presumably be considered authoritative if it were easy to verify that those manufacturers were indeed affiliated with the seller, and that the ecommerce site was an authorized merchant.

For this specific example, had the Nike, BBB, and Google Checkout logos linked to some sort of verification of affiliation, the page likely would have been considered to have high, or at least satisfactory authority.

What is trustworthiness for ecommerce sites?

To be considered high quality, raters are asked to look for “satisfying customer service information” when evaluating shopping pages. This means that any potential questions or concerns that shoppers might have about the product and the buying process should be addressed.

It’s best to be as extensive and comprehensive as possible. The purpose of the product, how to use it, what it looks like, and what results shoppers should expect all need to be covered in as much detail as possible.

Information about shipping charges should be transparent and revealed up front.

Return policies, guarantees, and similar information should be easily accessible. The checkout process shouldn’t surprise users by completing before they thought they were making a purchase or introducing fees they were not expecting or warned about.

Contact information, live chat, and customer support should be easy to find.

Remember that Google is considering all of this information main content. This should be reflected in your site design as well. Do not hide this information away or make it difficult to find. Put it where shoppers and human quality raters alike would expect to find it and where it will alleviate any concerns about the buying process.

The guidelines explicitly mention that stores “frequently have user ratings,” and that they “consider a large number of positive user reviews as evidence of positive reputation.”

Needless to say, it’s strongly recommended to introduce user review functionality to your site. User reviews have a well-measured positive impact on search engine traffic. Various studies have found that 63% of users are more likely to buy from a site that features user reviews, that users who interact with user reviews are 105% more likely to make a purchase, that they can produce an 18% lift in sales, and that having 50 or more reviews can result in an additional 4.6% boost in conversion rates.

In addition to allowing users to leave reviews, it’s important to encourage them to do so. Build automated emails asking users to leave a review into your checkout process, with the emails arriving in users’ inboxes shortly after their product ships successfully, or even include a printed note with the product telling them how to leave a review.

If you’re concerned that asking users to leave reviews, or allowing them to in the first place, will result in negative reviews, this fear is largely unfounded. A study published in Psychological Science found that buyers were actually more influenced by the number of reviews than by the overall score, even to the extent that this was considered irrational behavior on their part.

Another study found that users are actually more likely to purchase a product with a rating between 4.2 and 4.5 stars, since excessively high star ratings are considered suspicious.

Finally, if you leave users to their own devices, the ones who are most likely to leave a review are the ones who are either extremely surprised by how well things went, or extremely disappointed. Additionally, they will review your products on another site if they can’t do so on yours, and Google’s guidelines ask quality raters to look at other sites for reviews.

For these reasons and more, try asking your users to leave reviews.

One crucial piece of the puzzle for trustworthiness is security. The guidelines specifically call out an “insecure connection” on a checkout page as a reason to consider a shopping page untrustworthy, and a reason to give it a “low” quality rating. While they are specifically talking about the checkout page, it’s best to deploy HTTPS on every page of your site in order to eliminate any source of doubt.

Another example receiving the “lowest” score is considered malicious because it asks for the user’s government ID number and ATM PIN. While this is an obvious piece of deception that no legitimate checkout page would engage in, consider less clearly malicious features that could lead to distrust. For example, requiring an email address for checkout without explanation, and automatically adding users to an email list rather than giving them the option to opt in, is likely to reduce your trust score.

Conclusion

Google’s search quality evaluator guidelines indicate that expertise, authority, and trustworthiness are central considerations for Google’s engineers. To perform well in the search results for the foreseeable future, your pages should be developed as though humans were evaluating them for these factors.

When it comes to ecommerce, shopping pages are of primary concern, and E-A-T functions differently for them than it would for a blog post. A high quality ecommerce site doesn’t just feature authoritative text; its features and functionality are built with E-A-T in mind.

Earn expertise by working with manufacturers at the top of their industry, and by getting your brand and products in front of industry experts. Be authoritative by partnering with authoritative brands and ensuring that everything is easily verifiable. Build trust with user reviews, extensive contact and customer service information, a secure site, and a transparent checkout process.

Invest in these features to ensure that your shopping pages continue to perform well and remain competitive in the long run.

Manish Dudharejia is the president and founder of E2M Solutions Inc, a San Diego based digital agency that specializes in website design & development and ecommerce SEO. Follow him on Twitter.

Search Engine Watch


Google’s Cloud Functions serverless platform is now generally available

July 24, 2018

Cloud Functions, Google’s serverless platform that competes directly with tools like AWS Lambda and Azure Functions from Microsoft, is now generally available, the company announced at its Cloud Next conference in San Francisco today.

Google first announced Cloud Functions back in 2016, so this has been a long beta. Overall, it always seemed as if Google wasn’t putting quite the same resources behind its serverless play as its major competitors. AWS, for example, is placing a major bet on serverless, as is Microsoft. And there are plenty of startups in this space, too.

Like all Google products that come out of beta, Cloud Functions is now backed by an SLA and the company also today announced that the service now runs in more regions in the U.S. and Europe.

In addition to these hosted options, Google also today announced its new Cloud Services platform for enterprises that want to run hybrid clouds. While this doesn’t include a self-hosted Cloud Functions option, Google is betting on Kubernetes as the foundation for businesses that want to run serverless applications (and yes, I hate the term ‘serverless,’ too) in their own data centers.


Enterprise – TechCrunch


Does Google’s Duplex violate two-party consent laws?

May 19, 2018

Google’s Duplex, which calls businesses on your behalf and imitates a real human, ums and ahs included, has sparked a bit of controversy among privacy advocates. Doesn’t Google recording a person’s voice and sending it to a data center for analysis violate two-party consent law, which requires everyone in a conversation to agree to being recorded? The answer isn’t immediately clear, and Google’s silence isn’t helping.

Let’s take California’s law as the example, since that’s the state where Google is based and where it used the system. Penal Code section 632 forbids recording any “confidential communication” (defined more or less as any non-public conversation) without the consent of all parties. (The Reporters Committee for the Freedom of the Press has a good state-by-state guide to these laws.)

Google has provided very little in the way of details about how Duplex actually works, so attempting to answer this question involves a certain amount of informed speculation.

To begin with, I’m going to consider all phone calls as “confidential” for the purposes of the law. What constitutes a reasonable expectation of privacy is far from settled, and some will have it that there isn’t such an expectation when making an appointment with a salon. But what about a doctor’s office, or if you need to give personal details over the phone? Though some edge cases may qualify as public, it’s simpler and safer (for us and for Google) to treat all phone conversations as confidential.

As a second assumption, it seems clear that, like most Google services, Duplex’s work takes place in a data center somewhere, not locally on your device. So fundamentally there is a requirement in the system that the other party’s audio will be recorded and sent in some form to that data center for processing, at which point a response is formulated and spoken.

On its face it sounds bad for Google. There’s no way the system is getting consent from whoever picks up the phone. That would spoil the whole interaction — “This call is being conducted by a Google system using speech recognition and synthesis; your voice will be analyzed at Google data centers. Press 1 or say ‘I consent’ to consent.” I would have hung up after about two words. The whole idea is to mask the fact that it’s an AI system at all, so getting consent that way won’t work.

But there’s wiggle room as far as the consent requirement goes, in how the audio is recorded, transmitted and stored. After all, there are systems out there that may have to temporarily store a recording of a person’s voice without their consent — think of a VoIP call that caches audio for a fraction of a second in case of packet loss. There’s even a specific cutout in the law for hearing aids, which, if you think about it, do in fact “record” private conversations. Temporary copies produced as part of a legal, beneficial service aren’t the target of this law.

This is partly because the law is about preventing eavesdropping and wiretapping, not preventing any recorded representation of conversation whatsoever that isn’t explicitly authorized. Legislative intent is important.

“There’s a little legal uncertainty there, in the sense of what degree of permanence is required to constitute eavesdropping,” said Mason Kortz, of Harvard’s Berkman Klein Center for Internet & Society. “The big question is what is being sent to the data center and how is it being retained. If it’s retained in the condition that the original conversation is understandable, that’s a violation.”

For instance, Google could conceivably keep a recording of the call, perhaps for AI training purposes, perhaps for quality assurance, perhaps for users’ own records (in case of time slot dispute at the salon, for example). They do retain other data along these lines.

But it would be foolish. Google has an army of lawyers, and consent would have been one of the first things they tackled in the deployment of Duplex. For the onstage demos it would be simple enough to collect proactive consent from the businesses they were going to contact. But for actual use by consumers, the system needs to be engineered with the law in mind.

What would a functioning but legal Duplex look like? The conversation would likely have to be deconstructed and permanently discarded immediately after intake, the way audio is cached in a device like a hearing aid or a service like digital voice transmission.

A closer example of this is Amazon, which might have found itself in violation of COPPA, a law protecting children’s data, whenever a kid asked an Echo to play a Raffi song or do long division. The FTC decided that as long as Amazon and companies in that position immediately turn the data into text and then delete it afterwards, no harm and, therefore, no violation. That’s not an exact analogue to Google’s system, but it is nonetheless instructive.

“It may be possible with careful design to extract the features you need without keeping the original, in a way where it’s mathematically impossible to recreate the recording,” Kortz said.

If that process is verifiable and there’s no possibility of eavesdropping — no chance any Google employee, law enforcement officer or hacker could get into the system and intercept or collect that data — then potentially Duplex could be deemed benign, transitory recording in the eye of the law.

That assumes a lot, though. Frustratingly, Google could clear this up with a sentence or two. It’s suspicious that the company didn’t address this obvious question with even a single phrase, like Sundar Pichai adding during the presentation that “yes, we are compliant with recording consent laws.” Instead of people wondering if, they’d be wondering how. And of course we’d all still be wondering why.

We’ve reached out to Google multiple times on various aspects of this story, but for a company with such talkative products, they sure clammed up fast.

Mobile – TechCrunch


Responsive Search Ads: Google’s Latest Text Ad Format

May 9, 2018

Responsive Search Ads allow Google to dynamically serve varying combinations of Headlines and Descriptions and to optimize ad delivery based on the top-performing Headline and Description combinations. These ads will appear in the same locations and will look like Expanded Text Ads; however, Responsive Search Ads will include up to three Headlines.

Read more at PPCHero.com
PPC Hero


Google’s mobile-first index: six actions to identify risks and maximize ranking opportunities

April 28, 2018

Google’s mobile-first index is here, causing fresh uncertainty about potential SEO impacts – but there are a number of proactive steps to take to manage risk and maximize ranking opportunities.

Rather than passively wait to feel the impact of the shift to mobile-first indexation, we advise companies to take six specific actions to prepare for opportunities and protect site performance as the mobile-first index is rolled out throughout 2018.

Brands that have been prioritizing mobile performance shouldn’t experience a negative impact from the mobile-first index, but an honest and systematic re-evaluation is required. Companies who have allowed the mobile and desktop experience to diverge over the years will likely experience change – rankings could be lost (or gained) as a result of the switch.

Before diving in and making changes to prepare for the mobile-first index, we recommend running a full audit of current desktop and mobile rankings in all the regions your company does business in, along with top performing pages.

By tracking this performance over time, any losses or gains in keyword visibility should be clear to see – along with potential causes. Across the six actions below, the common thread is Google’s determination to provide accurate answers to users in the channel that is used most frequently – mobile.

Keep that at the heart of your SEO strategy and things should be fine – but having a plan certainly helps.

Identify risks

  • Action one – go mobile-responsive

Even today, too few marketers and SEO professionals meaningfully differentiate between responsive, mobile-friendly and standalone mobile sites – but that difference will become especially important in 2018.

A responsive website adjusts (or responds) based on user activity and the device used. Typical features of a responsive site include minimal navigation, images optimized for mobile and content that shifts seamlessly according to the size of the display.

In comparison, a mobile-friendly design is often anything but mobile-friendly, attempting to show content on a mobile device as it appears on a desktop, and so giving users the frustrating experience of having to manually zoom in, or squint at small fonts.

Finally, some brands still operate standalone mobile sites, completely separate from the desktop experience. With responsive and mobile-friendly sites, there shouldn’t be any difference in content from a desktop version of a site.

However, a mobile-friendly site may be disproportionately skewed towards the desktop experience with an impact on factors like mobile site speed, navigation and general usability – and these are all areas of concern when considering how Google evaluates quality in 2018.

With a separate mobile site, marketers need to make sure that the mobile version contains everything (useful) that the desktop site does, which could be a lot of work depending on your mobile strategy so far.

For some brands still lingering with standalone mobile sites, the shift to the mobile-first index may be the nudge needed to move to a fully responsive approach to the site.

Whether you operate responsive, mobile-friendly or a standalone mobile site, the first action we recommend is to identify any differences and either add to or completely overhaul the mobile sites you manage.

While the desktop site will continue to factor into rankings as a secondary consideration (and it is vanishingly unlikely that longstanding sites with many well-earned rankings will be wiped off the SERPs), making sure the mobile experience contains all the relevant content of the desktop experience – including all structured data, meta descriptions, alt text and schema – is an important protective step. A minimal sketch of how such a parity check might be automated follows below.
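
As a rough illustration only – the URL is a placeholder, the user-agent string is just one possibility, and a real audit would compare far more signals – a short script can fetch a page as desktop and as mobile and flag differences in the meta description, structured data blocks, and image alt text:

```python
import requests
from bs4 import BeautifulSoup

MOBILE_UA = ("Mozilla/5.0 (Linux; Android 9; Pixel 2) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/70.0.3538.77 Mobile Safari/537.36")

def extract_signals(url, user_agent=None):
    headers = {"User-Agent": user_agent} if user_agent else {}
    html = requests.get(url, headers=headers, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    meta = soup.find("meta", attrs={"name": "description"})
    return {
        "meta_description": meta.get("content") if meta else None,
        "ld_json_blocks": len(soup.find_all("script", type="application/ld+json")),
        "images_missing_alt": sum(1 for img in soup.find_all("img") if not img.get("alt")),
    }

# Placeholder URL – substitute a real page from the site being audited.
desktop = extract_signals("https://www.example.com/some-product-page")
mobile = extract_signals("https://www.example.com/some-product-page", MOBILE_UA)

for key in desktop:
    if desktop[key] != mobile[key]:
        print(f"Mismatch on {key}: desktop={desktop[key]!r} vs mobile={mobile[key]!r}")
```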

  • Action two – optimize site speed versus competitors

The mobile-first index flips previous logic – when 80% of evaluations about rankings were based on desktop crawling and indexing, site speed considerations were less of a concern.

However, as Google crawls mobile sites while mimicking a (not-very-good) mobile connection, slow performance, elements that struggle to load and broken links will quickly use up crawl equity and indicate that your site is less efficient at delivering the answers that users want relative to your competitors.

In addition to Google tools, we regularly use platforms like GTMetrix, Pingdom, DareBoost and WebPageTest.org to get a complete view of speed issues.

Particularly for international sites, testing mobile speed from different locations and comparing these measurements to those of your key competitors will help establish practical targets to aim for. Although Google frequently mentions a target page speed of under three seconds as being ideal, in practical reality and SEO terms, aiming to be better than your competitors should be enough.

As with SEO in general, speed optimization is similar to an old joke – ‘you don’t have to run faster than the bear to get away. You just have to run faster than the guy next to you.’

As ever, the quickest wins in terms of speed are usually to be found in reducing image and video size, managing JavaScript and other moving elements, minimizing tracking codes and scripts and doing what you can to reduce any slowdown caused by bolt-ons like booking and payment platforms.

The challenge for SEO professionals is to identify elements like these that can be improved without too much damage to the brand experience or taking away content useful for users.

  • Action three – optimize the customer journey

Understanding the intent of site visitors and reducing barriers from their first click in the SERPs to the information they are looking for should result in positive user experiences – and minimize the risk that comes from a site experience that causes confusion and fruitless clicking around, and pushes customers away.

Although there’s some fuzziness about quite how Google interprets the quality of a user’s visit – and how it rewards that quality in terms of rankings – we advise researching the different types of mobile journeys your customers take in a systematic way and making them more efficient.

Though much ‘best practice’ SEO advice has in the past been based around engagement and keeping visitors on the site, we all know that site visitors often stick around because they’re being frustrated by unclear navigation and a poor approach to customer journey planning.

Users are more impatient of poor customer journeys on mobile – and we must anticipate that Google will feel the same too. Though helping visitors to get the answers they seek more quickly may actually decrease dwell time, we’re confident that Google and other search engines will differentiate between a short visit and a swift return to the SERPS, and a short visit that successfully ends the user’s search.

Evaluating bounce rates and the success of the mobile user journey using heat-mapping tools like Hotjar or user research panels like Peek User Testing will bring in objective data to answer whether your visitors are engaged and loving your content, or hitting barriers and getting increasingly annoyed.

In the mobile-index era, we predict that this annoyance will have a greater impact on rankings – and so is a risk to be managed carefully.

Maximizing opportunities

While taking steps to understand your assets and protect your rankings is important, the shift to the mobile-first index is also a big opportunity to get ahead of competitors who are less prepared. Knowing that others will be slow to react really gives an extra incentive to put real effort into SEO strategies that will positively differentiate your brand from competitors.

  • Action four – prioritize content formatting that excels on mobile

A lot of the content marketing (such as infographics, interactive microsites, mega pages and even video, depending on the platform) produced by brands still displays poorly on mobile devices.

Taking a mobile-first mindset and prioritizing everyday content and content marketing assets that work particularly well on mobile devices will resonate best with both customers and search engines. Fortunately, there are a lot of methodologies that can be used to provide depth of content that is engaging and easily navigable on mobile.

One of the biggest changes is the resurgence of expandable content areas like tabs, accordions and other filters. Filters that hide content not relevant to a visitor’s specific query, tabs that reveal further information when clicked, and accordions that expand the page are all familiar to site visitors – and allow a single web page to be seamlessly used in multiple ways by multiple audiences.

While these have been seen by Google and other search engines as a potentially sneaky way to cram in content to a page, Google is on record as stating that content that is hidden to make a mobile site more efficient and speedier to explore will be taken into full consideration.

While competitors may have a responsive or mobile-friendly site and feel that this is enough preparation, many will likely still take a desktop-first mindset, creating overloaded pages that are tedious to wade through on mobile devices.

Thinking with a customer-first and mobile-first mindset to arrange content that can be skimmed easily – through logical headings, bolding of main points and pull-out quotes, numbered lists, bullet points and more – will support mobile visitors and differentiate you from competitors, while allowing search engine bots to crawl effectively.

  • Action five – evaluate AMP and progressive web apps

Again capitalizing on the slowness of competitors, the move to the mobile-first index means a re-evaluation of progressive web apps and accelerated mobile pages could bring up big opportunities.

As a recap, Accelerated Mobile Pages allow web pages to load especially quickly by loading page elements asynchronously and removing elements of JavaScript that cause delays.

AMP templates are easily applied in the code, with well-established procedures for providing the speedy AMP version to search engines while the slower (but perhaps more visual) non-AMP version is still recognized for ranking purposes via a canonical tag.

Progressive Web Apps use browser feature detection to give a fast, app-like experience that can be loaded from a mobile home screen or simply visited with a direct link. Websites that have a lot of moving parts and a lot of returning traffic, for example in e-commerce or other transactional sites, are the most well suited for Progressive Web Apps as they can massively streamline the user experience.

In both cases, although implementation is comparatively straightforward, you can bet that a minority of companies in your industry will have a systematic approach to using these technologies.

Being fast, being relevant and being right are key watchwords for future mobile-first SEO and using technologies that help speed, indexation and the user experience is a positive and proactive step.

  • Action six – identify competitors to beat

As discussed, not every competitor will be thinking systematically about the mobile-first index, or the changing nature of SEO in general. That opens up the possibility that by being faster and more focused, some previously difficult to rank for keywords will become more obtainable.

Using your business and industry knowledge, we advise clients to identify competitors who have rankings ahead of your own that may be less responsive to change, and underprepared for the mobile-first index.

Building these target keywords into your mobile strategy and wider SEO strategy – including off-site SEO and link earning – should result in some strong opportunities.

Conclusion – manage risk, capitalize on opportunities

For some, the mobile-first index won’t result in anything transformational – if you’ve been following best practice for years and your main competitors have been doing likewise there probably won’t be any game-changing shifts.

However, in any period of uncertainty there are opportunities to take advantage of and risks to manage – and in competitive SEO niches, taking every chance to get ahead is important.

Whatever your starting point – the mobile-first index is the new normal in SEO, and now is the time to get to grips with the challenge – and potential.


Search Engine Watch


Google’s Mobile Location History

January 30, 2018

Google Location History

If you use Google Maps to navigate from place to place, or if you have agreed to be a local guide for Google Maps, there is a chance that you have seen Google Mobile Location history information. There is a Google Account Help page about how to Manage or delete your Location History. The location history page starts off by telling us:

Your Location History helps you get better results and recommendations on Google products. For example, you can see recommendations based on places you’ve visited with signed-in devices or traffic predictions for your daily commute.

You may see this history as your timeline, and there is a Google Help page to View or edit your timeline. This page starts out by telling us:

Your timeline in Google Maps helps you find the places you’ve been and the routes you’ve traveled. Your timeline is private, so only you can see it.

Mobile Location history has been around for a while, and I’ve seen it mentioned in a few Google patents. It may be referred to as a “Mobile location history” because it appears to contain information collected by your mobile device. Here are three posts I’ve written about patents that mention location history and describe processes that depend upon Mobile Location history.

An interesting article that hints at some possible aspects of location history just came out on January 24th, in the post, If you’re using an Android phone, Google may be tracking every move you make.

The timing of the article about location history is interesting given that Google was granted a patent on user location histories the day before that article was published. It focuses upon telling us how location history works:

The present disclosure relates generally to systems and methods for generating a user location history. In particular, the present disclosure is directed to systems and methods for analyzing raw location reports received from one or more devices associated with a user to identify one or more real-world location entities visited by the user.

Techniques that could be used to attempt to determine a location associated with a device can include GPS, IP addresses, cell-phone triangulation, proximity to Wi-Fi access points, and maybe even power line mapping using device magnetometers.

The patent has an interesting way of looking at location history, which sounds reasonable. I don’t know the latitudes and longitudes of places I visit:

Thus, human perceptions of location history are generally based on time spent at particular locations associated with human experiences and a sense of place, rather than a stream of latitudes and longitudes collected periodically. Therefore, one challenge in creating and maintaining a user location history that is accessible for enhancing one or more services (e.g. search, social, or an API) is to correctly identify particular location entities visited by a user based on raw location reports.

The location history process looks like it involves collecting data from mobile devices in a way that allows it to gather information about places visited, with scores for each of those locations. I have had Google Maps ask me to verify some of the places that I have visited, as if the score it had for those places may not have been sufficient (not high enough of a level of confidence) for it to believe that I had actually been at those places.

The location history patent is:

Systems and methods for generating a user location history
Inventors: Daniel Mark Wyatt, Renaud Bourassa-Denis, Alexander Fabrikant, Tanmay Sanjay Khirwadkar, Prathab Murugesan, Galen Pickard, Jesse Rosenstock, Rob Schonberger, and Anna Teytelman
Assignee: Google LLC
US Patent: 9,877,162
Granted: January 23, 2018
Filed: October 11, 2016

Abstract

Systems and methods for generating a user location history are provided. One example method includes obtaining a plurality of location reports from one or more devices associated with the user. The method includes clustering the plurality of location reports to form a plurality of segments. The method includes identifying a plurality of location entities for each of the plurality of segments. The method includes determining, for each of the plurality of segments, one or more feature values associated with each of the location entities identified for such segment. The method includes determining, for each of the plurality of segments, a score for each of the plurality of location entities based at least in part on a scoring formula. The method includes selecting one of the plurality of location entities for each of the plurality of segments.
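
Here is a simplified sketch of the flow the abstract describes: cluster raw (latitude, longitude) reports into segments, find candidate location entities within a threshold distance of each segment, and keep the best-scoring candidate. The thresholds, the plain Euclidean distance, the place names and the stand-in scoring function are my own illustrative guesses rather than anything taken from the patent:

```python
from math import dist  # plain Euclidean distance; a real system would use geodesic distance

def cluster_reports(reports, max_gap=0.001):
    """Group consecutive (lat, lon) reports that stay close together into segments."""
    segments, current = [], [reports[0]]
    for point in reports[1:]:
        if dist(point, current[-1]) <= max_gap:
            current.append(point)
        else:
            segments.append(current)
            current = [point]
    segments.append(current)
    return segments

def segment_center(segment):
    lats, lons = zip(*segment)
    return (sum(lats) / len(lats), sum(lons) / len(lons))

def candidate_entities(center, places, radius=0.003):
    """All known location entities within a threshold distance of the segment."""
    return [name for name, coords in places.items() if dist(center, coords) <= radius]

def label_segments(reports, places, score_fn):
    labels = []
    for segment in cluster_reports(reports):
        candidates = candidate_entities(segment_center(segment), places)
        labels.append(max(candidates, key=score_fn) if candidates else None)
    return labels

places = {"Thai Dee Restaurant": (32.8400, -117.2700), "Fletcher Cove Park": (32.9900, -117.2700)}
reports = [(32.8401, -117.2702), (32.8402, -117.2701), (32.9901, -117.2699)]
# score_fn is a stand-in; a later sketch shows how popularity and personalization features might feed it
print(label_segments(reports, places, score_fn=len))
# ['Thai Dee Restaurant', 'Fletcher Cove Park']
```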

Why generate a location history?

A couple of reasons stand out in the patent’s extended description.

1) The generated user location history can be stored and then later accessed to provide personalized location-influenced search results.
2) As another example, a system implementing the present disclosure can provide the location history to the user via an interactive user interface that allows the user to view, edit, and otherwise interact with a graphical representation of her mobile location history.

I like the interactive user interface that shows times and distances traveled.

This statement from the patent was interesting, too:

According to another aspect of the present disclosure, a plurality of location entities can be identified for each of the plurality of segments. As an example, map data can be analyzed to identify all location entities that are within a threshold distance from a segment location associated with the segment. Thus, for example, all businesses or other points of interest within 1000 feet of the mean location of all location reports included in a segment can be identified.

Google may track information about locations that appear in that history, such as popularity features, which may include “a number of social media mentions associated with the location entity being valued; a number of check-ins associated with the location entity being valued; a number of requests for directions to the location entity being valued; and/or a global popularity rank associated with the location entity being valued.”

Personalization features may also be collected, which describe previous interactions between the user and the location entity, such as the following (a sketch of how these signals might be combined into a score appears after the list):

1) a number of instances in which the user performed a map click with respect to the location entity being valued;
2) a number of instances in which the user requested directions to the location entity being valued;
3) a number of instances in which the user has checked-in to the location entity being valued;
4) a number of instances in which the user has transacted with the location entity as evidenced by data obtained from a mobile payment system or virtual wallet;
5) a number of instances in which the user has performed a web search query with respect to the location entity being valued.
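
The patent does not spell out the scoring formula, so the sketch below simply shows how the popularity and personalization features listed above could be combined into a single weighted score. The feature names and weights are invented for illustration:

```python
# Made-up weights purely for illustration; the patent only says a scoring formula
# combines signals like these, not what the formula or the weights actually are.

POPULARITY_WEIGHTS = {
    "social_media_mentions": 0.2,
    "check_ins": 0.3,
    "direction_requests": 0.3,
    "global_popularity": 0.2,
}

PERSONALIZATION_WEIGHTS = {
    "map_clicks": 0.2,
    "direction_requests": 0.2,
    "check_ins": 0.3,
    "transactions": 0.2,
    "web_searches": 0.1,
}

def score_location_entity(popularity, personalization, personal_boost=2.0):
    """Weighted sum of popularity features plus boosted personal-history features."""
    pop_score = sum(weight * popularity.get(feature, 0)
                    for feature, weight in POPULARITY_WEIGHTS.items())
    personal_score = sum(weight * personalization.get(feature, 0)
                         for feature, weight in PERSONALIZATION_WEIGHTS.items())
    return pop_score + personal_boost * personal_score

print(score_location_entity(
    popularity={"check_ins": 120, "direction_requests": 45},
    personalization={"map_clicks": 3, "web_searches": 1},
))
# 49.5 + 2.0 * 0.7 = 50.9
```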

Other benefits of location history

This next potential feature, querying your own location history, was one that I tested to see if it was working. It didn’t seem to be active at this point:

For example, a user may enter a search query that references the user’s historical location (e.g. “Thai restaurant I ate at last Thursday”). When it is recognized that the search query references the user’s location history, then the user’s location history can be analyzed in light of the search query. Thus, for example, the user location history can be analyzed to identify any Thai restaurants visited on a certain date and then provide such restaurants as results in response to the search query.
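
A toy version of that history-referencing lookup might work along the lines below, assuming the query has already been parsed into a category and a date. The history records and category tags here are invented:

```python
from datetime import date

location_history = [
    {"name": "Bangkok Garden", "category": "thai restaurant", "visited": date(2018, 10, 25)},
    {"name": "Fletcher Cove Park", "category": "beach", "visited": date(2018, 10, 26)},
]

def places_from_history(history, category, visited_on):
    """Return places in the history matching a category on a given date."""
    return [entry["name"] for entry in history
            if category in entry["category"] and entry["visited"] == visited_on]

# "Thai restaurant I ate at last Thursday" – assuming "last Thursday" resolves to Oct 25, 2018
print(places_from_history(location_history, "thai restaurant", date(2018, 10, 25)))
# ['Bangkok Garden']
```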

The patent refers to a graphical representation of mobile location history, which is available:

As an example, in some implementations, a user reviewing a graphical representation of her location history can indicate that one of the location entities included in her location history is erroneous (e.g. that she did not visit such location). In response, the user can be presented with one or more of the location entities that were identified for the segment for which the incorrect location entity was selected and can be given an opportunity to select a replacement location.

A Location History Timeline Interface

In addition to the timeline interface, you can also see a map of places you may have visited:

Timeline with Map Interface

You can see in the screenshot of my timeline that I took a photo of a kumquat tree I bought yesterday. The timeline gives me a chance to see the photos I took, so that I can edit them if I would like. The patent tells us this about the user interface:

In other implementations, opportunities to perform other edits, such as deleting, annotating, uploading photographs, providing reviews, etc., can be provided in the interactive user interface. In such fashion, the user can be provided with an interactive tool to explore, control, share, and contribute to her location history.

The patent tells us that it tracks activities that you may have engaged in at specific locations:

In further embodiments of the present disclosure, a location entity can be associated with a user action within the context of a location history. For example, the user action can be making a purchase (e.g. with a digital wallet) or taking a photograph. In particular, in some embodiments, the user action or an item of content generated by the user action (e.g. the photograph or receipt) can be analyzed to assist in identifying the location entity associated with such user action. For example, the analysis of the user action or item of content can contribute to the score determined for each location entity identified for a segment.

I have had the Google Maps application ask me if I would like to contribute photos that I have taken at specific locations, such as at the sunset at Solana Beach. I haven’t used a digital wallet, so I don’t know if that is potentially part of my location history.

The patent describes the timeline feature and the Map feature that I included screenshots from above.

The patent also tells us, interestingly, that location entities may be referred to by the common names those places are known by, and it refers to these as “semantic identifiers”:

Each location entity can be designated by a semantic identifier (e.g. the common “name” of restaurant, store, monument, etc.), as distinguished from a coordinate-based or location-based identifier. However, in addition to a name, the data associated with a particular location entity can further include the location of the location entity, such as longitude, latitude, and altitude coordinates associated with the location entity.

It’s looking like location history could get smarter:

As an example, an interaction evidenced by search data can include a search query inputted by a user that references a particular location entity. As another example, an interaction evidenced by map data 218 can include a request for directions to a particular location entity or a selection of an icon representing the particular location entity within a mapping application. As yet another example, an interaction evidenced by email data 220 can include flight or hotel reservations to a particular city or lodging or reservations for dinner at a particular restaurant. As another example, an interaction evidenced by social media data 222 can include a check-in, a like, a comment, a follow, a review, or other social media action performed by the user with respect to a particular location entity.

Tracking these interactions is being done under the name “user/location entity interaction extraction,” and it may calculate statistics about such interactions:

Thus, user/location entity interaction extraction module 212 can analyze available data to extract interactions between a user and a location entity. Further, interaction extraction module 212 can maintain statistics regarding aggregate interactions for a location entity with respect to all users for which data is available.

It appears that to get the benefit of being able to access information such as this, you would need to give Google permission to collect that data.

The patent provides more details about location history, popularity, and other features, and even a little more about personalization. Many aspects of location history have already been implemented, while others look like they have yet to be developed. As can be seen from the three posts I have written describing patents that use information from location history, it is possible that location history may be used in other processes at Google.

How do you feel about mobile location history from Google?



The post Google’s Mobile Location History appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


Seven insights hiding in Google’s new Christmas shopping research

December 17, 2017 No Comments

In December 2017, Google released a set of statistics about the Christmas shopping season. Use these insights in AdWords to make this your best December ever.

Read more at PPCHero.com
PPC Hero


The last word on Fred from Google’s Gary Illyes

September 27, 2017 No Comments

This month’s Brighton SEO delegates all hoped for Google’s Gary Illyes to enlighten them on the major talking points in search this year. They weren’t disappointed. 

Google algorithm updates are frequently on the minds of SEOs and webmasters, and have been a hot topic for years. We are always on tenterhooks, waiting for the next change that could damage our site’s rankings.

We are never able to rest, always at risk of being penalized by the next animal to enter Google’s zoo of updates.

Past assumptions about Google Fred

Back on March 7, 2017, many webmasters reported unexpected fluctuations in rankings. The name Google Fred then began to circulate, following an exchange on Twitter between Barry Schwartz and Google's Gary Illyes in which Gary joked about future updates being named Fred.

We safely assumed there had been an adjustment to the algorithm, as Google confirmed that updates happen every day. As usual, Google did not confirm any details about this particular update, but analysis of affected sites suggested it focused on poor-quality content sites that were benefiting from monetization tactics.

As this update felt larger than the normal day-to-day algorithm changes, it seemed only natural it should be worthy of a name. As a result, the name “Google Fred” officially stuck, despite Gary Illyes intending his tongue-in-cheek comment to refer to all future updates.

So how can we tell the difference between the Fred update in March and other updates?

What is Google Fred, really?

In a Q&A session at September’s Brighton SEO, Google Fred was brought up once again, and we got the final word on Fred from Gary Illyes himself. Here’s what Fred’s creator had to say:

Interviewer: Let’s talk about Fred.

Gary Illyes: Who?

Interviewer: You are the person that created Fred. So Fred is basically an algo that…

Gary Illyes: It’s not one algo, it’s all the algos.

Interviewer: So you can confirm it’s not a single algo – it’s a whole umbrella of a bunch of different changes and updates that everyone has just kind of put under this umbrella of “Fred”.

Gary Illyes: Right, so the story behind Fred is that basically I’m an asshole on Twitter. And I’m also very sarcastic which is usually a very bad combination. And Barry Schwartz, because who else, was asking me about some update that we did to the search algorithm.

And I don't know if you know, but on average we do two to three updates to the search algorithm, the ranking algorithm, every single day. So usually our response to Barry is that sure, it's very likely there was an update. But that day I felt even more sarcastic than I actually am, and I had to tell him that.

Oh, he was begging me practically for a name for the algorithm or update, because he likes Panda or Penguin and what’s the new one. Pork, owl, shit like that. And I just told him that, you know what, from now on every single update that we make – unless we say otherwise – will be called Fred; every single one of them.

Interviewer: So now we’re in a perpetual state of Freds?

Gary Illyes: Correct. Basically every single update that we make is a Fred. I don’t like, or I was sarcastic because I don’t like that people are focusing on this.

Every single update that we make is around quality of the site or general quality, perceived quality of the site, content and the links or whatever. All these are in the Webmaster Guidelines. When there’s something that is not in line with our Webmaster Guidelines, or we change an algorithm that modifies the Webmaster Guidelines, then we update the Webmaster Guidelines as well.

Or we publish something like a Penguin algorithm, or work with journalists like you to publish, throw them something like they did with Panda.

Interviewer: So for all these one to two updates a day, when webmasters go on and see their rankings go up or down, how many of those changes are actually actionable? Can webmasters actually take something away from that, or does it just fall under generic, overall site quality?

Gary Illyes: I would say that for the vast majority, and I’m talking about probably over 95%, 98% of the launches are not actionable for webmasters. And that’s because we may change, for example, which keywords from the page we pick up because we see, let’s say, that people in a certain region put up the content differently and we want to adapt to that.

[…]

Basically, if you publish high quality content that is highly cited on the internet – and I’m not talking about just links, but also mentions on social networks and people talking about your branding, crap like that.

[audience laughter]

Then, I shouldn’t have said that right? Then you are doing great. And fluctuations will always happen to your traffic. We can’t help that; it would be really weird if there wasn’t fluctuation, because that would mean we don’t change, we don’t improve our search results anymore.

(Transcript has been lightly edited for clarity)

So there we have it: every update is a Fred unless otherwise stated. The ranking drops in March may well have been triggered by the "original" Fred update, but so is every other fluctuation, for they are all Fred.

How can we optimize for Fred?

Gary says that 95-98% of updates are not actionable for webmasters. With two or three updates a day, that accounts for a lot of updates each year! So what do we do?

The answer is simple – do what you were doing before. Build great websites, build your brand and produce high quality content aimed to satisfy the needs of searchers whilst adhering to the Webmaster Guidelines.

As Simon Ensor wrote in his recent article on the SEO industry and its sweeping statements, SEOs shouldn’t fear algorithm updates from Google:

“Many may complain that Google moves the goalposts but in actual fact, the fundamentals remain the same. Avoiding manipulative behavior, staying relevant, developing authority and thinking about your users are four simple factors that will go a long way to keeping you on the straight and narrow.

The Google updates are inevitable. Techniques will evolve, and results will require some hard graft. Every campaign is different, but if you stick to the core principles of white-hat SEO, you need not take notice of the sweeping statements that abound in our corner of the marketing world. Nor should you have to fear future Google updates.”

What does it mean for SEOs?

Sage advice aside, this explanation from Gary Illyes may still leave SEOs feeling slightly frustrated. We appreciate that not every small update warrants a name or set of webmaster guidelines, but we still have a job to do and a changeable industry to make sense of.

We have stakeholders and clients to answer to, and ranking fluctuations to explain to them. It doesn't help us to sweep every update under the carpet of Fred.

Of course, we would find it really useful if each major update came with clear guidelines immediately, rather than leaving us in the dark for days, trying to figure things out and stabilize our rankings.

But maybe, as Gary may have been hinting, where would the fun be if it were that simple?

To read the full transcript of the Q&A with Gary Illyes or watch a recording of the interview, check out this blog post by iThinkMedia.

Search Engine Watch


A typical day for researchers on Google’s Brain Team

September 17, 2017 No Comments

 What do you and researchers on Google’s Brain Team have most in common? You both probably spend a lot of time triaging email. In a Reddit AMA, 11 Google AI researchers took time to share the activities that consume the greatest chunks of their days. Email was a frequent topic of conversation, in addition to less banal activities like skimming academic papers and brainstorming with… Read More
Enterprise – TechCrunch