
Tag: Google’s

Google’s How News Works, aimed at clarifying news transparency

June 11, 2019

In May, Google announced the launch of a new website aimed at explaining how they serve and address news across Google properties and platforms.

The site, How News Works, states Google’s mission as it relates to disseminating news in an unbiased manner. It aggregates a variety of information about how Google crawls, indexes, and ranks news stories, as well as how news can be personalized for the end user.

How News Works provides links to various resources within the Google news ecosystem all in one place and is part of The Google News Initiative.

What is The Google News Initiative?

The Google News Initiative (GNI) is Google’s effort to work with news industry professionals to “help journalism thrive in the digital age.” The GNI is driven and summarized by the GNI website, which provides information about a variety of initiatives and approaches within Google, including:

  • How to work with Google (e.g., partnership opportunities, training tools, funding opportunities)
  • A list of current partnerships and case studies
  • A collection of programs and funding opportunities for journalists and news organizations
  • A catalog of Google products relevant to journalists

Google attempts to work with the news industry in a variety of ways. For example, it provides funding opportunities to help journalists from around the world.

Google is now accepting applications (through mid-July) from North American and Latin American applicants to help fund projects that “drive digital innovation and develop new business models.” Applicants who meet Google’s specified criteria (and are selected) will be awarded up to $300,000 in funding (for U.S. applicants) or $250,000 (for Latin American applicants), with awards covering up to 70% of the total project cost.

The GNI website also provides users with a variety of training resources and tools. Journalists can learn how to partner with Google to test and deploy new technologies, as the Washington Post did through its participation in Google’s AMP (Accelerated Mobile Pages) program.

AMP is an open source initiative that Google launched in February 2016 with the goal of making mobile web pages faster.

AMP mirrors content on traditional web pages, but uses AMP HTML, an open source format architected in an ultra-light way to reduce latency for readers.
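For illustration, here is a rough sketch of what the skeleton of an AMP page looks like, held in a TypeScript string. The tag names follow AMP’s documented format, but the URLs and content are placeholders, and the required amp-boilerplate CSS is omitted for brevity.

```typescript
// A minimal AMP page skeleton (illustrative only): the AMP runtime script,
// a canonical link back to the traditional page, and AMP components such as
// <amp-img> in place of standard tags.
const ampPage = `<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <link rel="canonical" href="https://example.com/article.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
    <title>Example article</title>
    <!-- required amp-boilerplate <style> omitted for brevity -->
  </head>
  <body>
    <h1>Example article</h1>
    <amp-img src="hero.jpg" width="600" height="400" layout="responsive"></amp-img>
  </body>
</html>`;
```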

News transparency and accountability

The GNI’s How News Works website reinforces Google’s mission to “elevate trustworthy information.” The site explains how the news algorithm works and links to Google’s news content policies.

The content policy covers Google’s approach to accountability and transparency, its requirements for paid or promotional material, copyright, restricted content, privacy/personalization and more.

This new GNI resource, a subsection of the main GNI website, acts as a starting point for journalists and news organizations to delve into Google’s vast news infrastructure including video news on YouTube.

Since it can be difficult to ascertain if news is trustworthy and accurate, this latest initiative by Google is one way that journalists (and the general public) can gain an understanding of how news is elevated and indexed on Google properties.


Search Engine Watch


Highlights From Google’s Keyword Planner Update

June 4, 2019

The Google Keyword Planner is a useful tool for every Google Advertiser. Learn more about the most recent updates!

Read more at PPCHero.com


Utilizing Google’s Test My Site Tool to Improve Mobile Performance

April 23, 2019

Google updated their Test My Site tool to include custom recommendations for mobile sites. Read more to find out how this tool can improve your mobile performance.

Read more at PPCHero.com


Google’s new voice recognition system works instantly and offline (if you have a Pixel)

March 13, 2019

Voice recognition is a standard part of the smartphone package these days, and a corresponding part is the delay while you wait for Siri, Alexa or Google to return your query, either correctly interpreted or horribly mangled. Google’s latest speech recognition works entirely offline, eliminating that delay altogether — though of course mangling is still an option.

The delay occurs because your voice, or some data derived from it anyway, has to travel from your phone to the servers of whoever operates the service, where it is analyzed and sent back a short time later. This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether.

Why not just do the voice recognition on the device? There’s nothing these companies would like more, but turning voice into text on the order of milliseconds takes quite a bit of computing power. It’s not just about hearing a sound and writing a word — understanding what someone is saying word by word involves a whole lot of context about language and intention.

Your phone could do it, for sure, but it wouldn’t be much faster than sending it off to the cloud, and it would eat up your battery. But steady advancements in the field have made it plausible to do so, and Google’s latest product makes it available to anyone with a Pixel.

Google’s work on the topic, documented in a paper here, built on previous advances to create a model small and efficient enough to fit on a phone (it’s 80 megabytes, if you’re curious), but capable of hearing and transcribing speech as you say it. No need to wait until you’ve finished a sentence to think whether you meant “their” or “there” — it figures it out on the fly.

So what’s the catch? Well, it only works in Gboard, Google’s keyboard app, and it only works on Pixels, and it only works in American English. So in a way this is just kind of a stress test for the real thing.

“Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application,” writes Google, as if it is the trends that need to do the hard work of localization.

Making speech recognition more responsive, and having it work offline, is a nice development. But it’s sort of funny considering hardly any of Google’s other products work offline. Are you going to dictate into a shared document while you’re offline? Write an email? Ask for a conversion between liters and cups? You’re going to need a connection for that! Of course this will also be better on slow and spotty connections, but you have to admit it’s a little ironic.

Gadgets – TechCrunch


Google’s Cloud Firestore NoSQL database hits general availability

February 2, 2019

Google today announced that Cloud Firestore, its serverless NoSQL document database for mobile, web and IoT apps, is now generally available. In addition, Google is also introducing a few new features and bringing the service to 10 new regions.

With this launch, Google is giving developers the option to run their databases in a single region. During the beta, developers had to use multi-region instances, and, while that obviously has some advantages with regard to resilience, it’s also more expensive and not every app needs to run in multiple regions.

“Some people don’t need the added reliability and durability of a multi-region application,” Google product manager Dan McGrath told me. “So for them, having a more cost-effective regional instance is very attractive, as well as data locality and being able to place a Cloud Firestore database as close as possible to their user base.”

The new regional instance pricing is up to 50 percent cheaper than the current multi-region instance prices. Which solution you pick does influence the SLA Google gives you, though. While regional instances are still replicated across multiple zones inside the region, all of the data stays within a limited geographic area. Hence, Google promises 99.999 percent availability for multi-region instances and 99.99 percent availability for regional instances.

And talking about regions, Cloud Firestore is now available in 10 new regions around the world. Firestore launched with a single location and added two more during the beta. With this, Firestore is now available in 13 locations (including the North America and Europe multi-region offerings). McGrath tells me Google is still in the planning stage for deciding the next phase of locations, but he stressed that the current set provides pretty good coverage across the globe.

Also new in this release is deeper integration with Stackdriver, the Google Cloud monitoring service, which can now monitor read, write and delete operations in near-real time. McGrath also noted that Google plans to add the ability to query documents across collections and increment database values without needing a transaction.
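To make that last point concrete, here is a minimal sketch of how a counter increment is done today, with a read-modify-write transaction using the Node.js server client rather than the planned atomic increment. The document path and field name are hypothetical.

```typescript
import { Firestore } from '@google-cloud/firestore';

const firestore = new Firestore();

// Increment a numeric field by reading the current value inside a transaction
// and writing the new value back; merge:true creates the document if needed.
async function incrementCounter(docPath: string, field: string, by = 1): Promise<void> {
  const ref = firestore.doc(docPath);
  await firestore.runTransaction(async (tx) => {
    const snap = await tx.get(ref);
    const current = (snap.get(field) as number | undefined) ?? 0;
    tx.set(ref, { [field]: current + by }, { merge: true });
  });
}

// Example: incrementCounter('pages/home', 'viewCount');
```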

It’s worth noting that while Cloud Firestore falls under the Google Firebase brand, which typically focuses on mobile developers, Firestore offers all of the usual client-side libraries for Compute Engine or Kubernetes Engine applications, too.

“If you’re looking for a more traditional NoSQL document database, then Cloud Firestore gives you a great solution that has all the benefits of not needing to manage the database at all,” McGrath said. “And then, through the Firebase SDK, you can use it as a more comprehensive back-end as a service that takes care of things like authentication for you.”

One of the advantages of Firestore is that it has extensive offline support, which makes it ideal for mobile developers but also IoT solutions. Maybe it’s no surprise, then, that Google is positioning it as a tool for both Google Cloud and Firebase users.
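As a rough illustration of one flavor of that offline support, here is a sketch using the Firebase web SDK of that era (v7/v8 style imports); the project config and collection names are placeholders.

```typescript
import firebase from 'firebase/app';
import 'firebase/firestore';

// Hypothetical Firebase project configuration.
firebase.initializeApp({ projectId: 'my-demo-project', apiKey: 'placeholder', appId: 'placeholder' });

const db = firebase.firestore();

// Enable the local cache so reads and writes keep working while offline;
// queued writes are synced once connectivity returns.
db.enablePersistence().catch((err) => {
  // 'failed-precondition': multiple tabs open; 'unimplemented': browser not supported.
  console.warn('Offline persistence unavailable:', err.code);
});

// Reads are served from the cache when offline, and writes are queued locally.
db.collection('sensors').doc('device-42').set({ lastReading: 21.5 });
```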


Enterprise – TechCrunch


What were Google’s biggest search algorithm updates of 2018?

January 8, 2019

Search came a long way this past year. We saw the appearance of the zero-result SERP, featuring knowledge cards for answers such as conversions and times.

We welcomed the mobile-first index and the mobile speed update. With the focus on mobile, we saw meta description lengths shorten from 300+ to 150 or so.

We saw minor changes to image search and a renewed emphasis on “compelling and shareable content.” After testing video carousels on desktop SERPs for a while, Google decided to roll the feature out by replacing video thumbnails with video carousels across the board. Understandably, we’ve since seen more focus on producing video.

Some algorithm updates occurred overnight, some happened incrementally. Some caused only ripples, and some turned the SERPs upside down.

As we say hello to 2019, we want to take a moment to reflect on this past year. The algorithm changes we saw last year can be indicators of changes or trends to come. Search engines often make incremental adjustments to their filters.

So, our friends over at E2M have created a visual and entertaining overview of what went down in Google Search over 2018, one that might help give us an idea of where we’re going next.

Google’s biggest Search algorithm updates of 2018 – A visual representation by E2M



Search Engine Watch


Google’s Cloud Spanner database adds new features and regions

January 1, 2019

Cloud Spanner, Google’s globally distributed relational database service, is getting a bit more distributed today with the launch of a new region and new ways to set up multi-region configurations. The service is also getting a new feature that gives developers deeper insights into their most resource-consuming queries.

With this update, Google is adding Hong Kong (asia-east2), its newest data center location, to the Cloud Spanner lineup. Cloud Spanner is now available in 14 out of 18 Google Cloud Platform (GCP) regions, including seven the company added this year alone. The plan is to bring Cloud Spanner to every new GCP region as it comes online.

The other new region-related news is the launch of two new configurations for multi-region coverage. One, called eur3, focuses on the European Union and is obviously meant for users there who mostly serve a local customer base. The other, called nam6, focuses on North America, with coverage across both coasts and the middle of the country, using data centers in Oregon, Los Angeles, South Carolina and Iowa. Previously, the service only offered a North American configuration with three regions and a global configuration with three data centers spread across North America, Europe and Asia.

While Cloud Spanner is obviously meant for global deployments, these new configurations are great for users who only need to serve certain markets.

As far as the new query features are concerned, Cloud Spanner is now making it easier for developers to view, inspect and debug queries. The idea here is to give developers better visibility into their most frequent and expensive queries (and maybe make them less expensive in the process).
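As a sketch of how a developer might pull those query statistics with the Node.js client: the built-in statistics table and column names used here (SPANNER_SYS.QUERY_STATS_TOP_10MINUTE and its columns) are my assumption about how the query insights described above are exposed, and the project, instance, and database names are placeholders.

```typescript
import { Spanner } from '@google-cloud/spanner';

const spanner = new Spanner({ projectId: 'my-project' }); // hypothetical project
const database = spanner.instance('my-instance').database('my-database');

// List the slowest recent queries; adjust the SPANNER_SYS table/column names
// to whatever your client version documents.
async function topQueries(): Promise<void> {
  const [rows] = await database.run({
    sql: `SELECT text, execution_count, avg_latency_seconds
          FROM SPANNER_SYS.QUERY_STATS_TOP_10MINUTE
          ORDER BY avg_latency_seconds DESC
          LIMIT 10`,
  });
  rows.forEach((row) => console.log(JSON.stringify(row.toJSON())));
}
```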

In addition to the Cloud Spanner news, Google Cloud today announced that its Cloud Dataproc Hadoop and Spark service now supports the R language, in addition to Python 3.7 support on App Engine.


Enterprise – TechCrunch


How Google’s Knowledge Graph Updates Itself by Answering Questions

October 31, 2018

How A Knowledge Graph Updates Itself


Those of us who practice Search Engine Optimization have long looked at URLs filled with content, at links between that content, and at how algorithms such as PageRank (based upon links between pages) and information retrieval scores (based upon the relevance of that content) determine how well pages rank in search results for the queries searchers type into search boxes. Web pages have been treated as nodes in a graph, connected by links. This was the first generation of SEO.

Search has been going through a transformation. Back in 2012, Google introduced what it refers to as the knowledge graph, telling us it would begin focusing upon indexing things instead of strings. By “strings,” it meant the words that appear in queries and in documents on the Web. By “things,” it meant named entities, or real and specific people, places, and things. When people searched, Google would show Search Engine Results Pages (SERPs) filled with URLs to pages containing the strings of letters being searched for. Google still does that, but it is slowly shifting toward search results that are about people, places, and things.

Google started showing us in patents how they were introducing entity recognition to search, as I described in this post:
How Google May Perform Entity Recognition

They now show us knowledge panels in search results that tell us about the people, places, and things they recognize in the queries we perform. In addition to crawling webpages and indexing the words on those pages, Google is collecting facts about the people, places, and things it finds on those pages.

A Google patent granted just this past week tells us how Google’s knowledge graph updates itself as it collects information about entities, their properties and attributes, and the relationships involving them. This is part of the evolution of SEO taking place today: learning how search is changing from being based upon strings to being based upon knowledge.

What does the patent tell us about knowledge? This is one of the sections that details what such a knowledge graph looks like, built from the information Google might collect when it indexes pages these days:

Knowledge graph portion includes information related to the entity [George Washington], represented by [George Washington] node. [George Washington] node is connected to [U.S. President] entity type node by [Is A] edge with the semantic content [Is A], such that the 3-tuple defined by nodes and the edge contains the information “George Washington is a U.S. President.” Similarly, “Thomas Jefferson Is A U.S. President” is represented by the tuple of [Thomas Jefferson] node 310, [Is A] edge, and [U.S. President] node. Knowledge graph portion includes entity type nodes [Person], and [U.S. President] node. The person type is defined in part by the connections from [Person] node. For example, the type [Person] is defined as having the property [Date Of Birth] by node and edge, and is defined as having the property [Gender] by node 334 and edge 336. These relationships define in part a schema associated with the entity type [Person].
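To make the structure in that passage concrete, here is a small sketch of entity nodes, typed edges, and a schema for the [Person] entity type expressed as plain data. This is my own illustration, not code from the patent.

```typescript
// Nodes represent entities and entity types; edges are 3-tuples (subject, predicate, object).
type EntityNode = { id: string; name: string };
type Edge = { subject: string; predicate: string; object: string };

const nodes: EntityNode[] = [
  { id: 'george_washington', name: 'George Washington' },
  { id: 'thomas_jefferson', name: 'Thomas Jefferson' },
  { id: 'us_president', name: 'U.S. President' }, // entity type node
  { id: 'person', name: 'Person' },               // entity type node
];

// Each tuple encodes one fact, e.g. "George Washington Is A U.S. President".
const edges: Edge[] = [
  { subject: 'george_washington', predicate: 'Is A', object: 'us_president' },
  { subject: 'thomas_jefferson', predicate: 'Is A', object: 'us_president' },
  { subject: 'us_president', predicate: 'Is A', object: 'person' },
];

// The schema for an entity type lists the properties its instances are expected to have.
const personSchema = { type: 'person', properties: ['Date Of Birth', 'Gender'] };
```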

Note that SEO is no longer just about how often certain words appear on pages of the Web, what words appear in links to those pages, in page titles, in headings, and in alt text for images, and how often certain words are repeated or related words are used. Google is looking at the facts mentioned about entities, such as entity types like “person,” and properties such as “Date of Birth” or “Gender.”

Note that the quote also mentions the word “schema,” as in “These relationships define in part a schema associated with the entity type [Person].” As part of the transformation of SEO from strings to things, the major search engines joined forces to offer us information on how to use Schema for structured data on the Web, providing a machine-readable way of sharing information with search engines about the entities we write about, their properties, and their relationships.
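For example, a sketch of the kind of schema.org structured data a publisher might embed for an entity, serialized as JSON-LD, could look like this. The values are purely illustrative.

```typescript
// Hypothetical JSON-LD payload describing a person, using standard schema.org properties.
const personJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Person',
  name: 'George Washington',
  birthDate: '1732-02-22',
  gender: 'Male',
  jobTitle: 'President of the United States',
};

// Typically serialized into a <script type="application/ld+json"> tag in the page head.
const jsonLdTag = `<script type="application/ld+json">${JSON.stringify(personJsonLd)}</script>`;
```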

I’m writing about this patent because I am participating in an online webinar about knowledge graphs and how they are being used and updated. The webinar is tomorrow:
#SEOisAEO: How Google Uses The Knowledge Graph in its AE algorithm. I haven’t been referring to SEO as Answer Engine Optimization, or AEO, and it’s unlikely that I will start, but I see it as an evolution of SEO.

I’m also writing about this Google patent because it starts out with the following line under the heading “Background”:

This disclosure generally relates to updating information in a database. Data has previously been updated by, for example, user input.

This line points to the fact that this information no longer needs to be updated by users; instead, Google’s knowledge graph updates itself.

Updating Knowledge Graphs

I attended a Semantic Technology and Business conference a couple of years ago, where the head of Yahoo’s knowledge base presented and took questions afterward. Someone asked him what happens when information in a knowledge graph changes and needs to be updated.

His answer was that a knowledge graph would have to be updated manually to have new information placed within it.

That wasn’t a satisfactory answer because it would have been good to hear that the information from such a source could be easily updated. I’ve been waiting for Google to answer a question like this, which made seeing a line like this one from this patent a good experience:

In some implementations, a system identifies information that is missing from a collection of data. The system generates a question to provide to a question answering service based on the missing information, and uses the response from the question answering service to update the collection of data.

This would be a knowledge graph update, and the patent provides details using language that reflects exactly that:

In some implementations, a computer-implemented method is provided. The method includes identifying an entity reference in a knowledge graph, wherein the entity reference corresponds to an entity type. The method further includes identifying a missing data element associated with the entity reference. The method further includes generating a query based at least in part on the missing data element and the type of the entity reference. The method further includes providing the query to a query processing engine. The method further includes receiving information from the query processing engine in response to the query. The method further includes updating the knowledge graph based at least in part on the received information.

How does the search engine do this? The patent provides more information that fills in such details.

The approaches to achieve this include:

…Identifying a missing data element comprises comparing properties associated with the entity reference to a schema table associated with the entity type.

…Generating the query comprises generating a natural language query. This can involve selecting, from the knowledge graph, disambiguation query terms associated with the entity reference, wherein the terms comprise property values associated with the entity reference, or updating the knowledge graph by updating the data graph to include information in place of the missing data element.

…Identifying an element in a knowledge graph to be updated based at least in part on a query record. Operations further include generating a query based at least in part on the identified element. Operations further include providing the query to a query processing engine. Operations further include receiving information from the query processing engine in response to the query. Operations further include updating the knowledge graph based at least in part on the received information.

A knowledge graph updates itself in these ways (a schematic sketch follows the list):

(1) The knowledge graph may be updated with one or more previously performed searches.
(2) The knowledge graph may be updated with a natural language query, using disambiguation query terms associated with the entity reference, where those terms comprise property values associated with the entity reference.
(3) The knowledge graph may use properties already associated with the entity reference to fill in missing data elements.
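Here is the schematic sketch promised above: a toy version of that update loop, entirely my own illustration, with a stubbed question answering service and a hypothetical schema table. It shows how missing properties could be found, turned into natural-language questions, and written back into the graph.

```typescript
type Entity = { id: string; type: string; properties: Record<string, string> };

// Hypothetical schema table: the properties each entity type is expected to have.
const schemaTable: Record<string, string[]> = {
  person: ['date_of_birth', 'gender'],
};

// Hypothetical question answering service (stubbed here).
async function askQuestionService(question: string): Promise<string | undefined> {
  return undefined;
}

async function fillMissingProperties(entity: Entity): Promise<void> {
  const expected = schemaTable[entity.type] ?? [];
  for (const property of expected) {
    if (entity.properties[property] !== undefined) continue; // nothing missing

    // Build a natural-language query, using known property values to disambiguate
    // (e.g. "What is the date of birth of George Washington (title: U.S. President)?").
    const known = Object.entries(entity.properties)
      .map(([k, v]) => `${k.replace(/_/g, ' ')}: ${v}`)
      .join(', ');
    const question =
      `What is the ${property.replace(/_/g, ' ')} of ${entity.id}` + (known ? ` (${known})?` : '?');

    const answer = await askQuestionService(question);
    if (answer !== undefined) {
      entity.properties[property] = answer; // update the graph with the received information
    }
  }
}
```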

The patent that describes how Google’s knowledge graph updates itself is:

Question answering to populate knowledge base
Inventors: Rahul Gupta, Shaohua Sun, John Blitzer, Dekang Lin, Evgeniy Gabrilovich
Assignee: Google
US Patent: 10,108,700
Granted: October 23, 2018
Filed: March 15, 2013

Abstract

Methods and systems are provided for a question answering. In some implementations, a data element to be updated is identified in a knowledge graph and a query is generated based at least in part on the data element. The query is provided to a query processing engine. Information is received from the query processing engine in response to the query. The knowledge graph is updated based at least in part on the received information.




SEO by the Sea ⚓


What Google’s E-A-T score means for ecommerce

September 8, 2018

Google updated their search quality rating guidelines in July. These rating guidelines, which you can view here, are used by humans to rate the quality of web pages as search results for specific queries. These ratings are used to guide how Google’s search engineers improve their search engine.

Soon after the update to the guidelines, Google introduced a broad core algorithm update circa August 1st, most likely to ensure that the search engine was returning results that reflected the changes to its guidelines.

One of the most important changes to the guidelines was a greater focus on Expertise, Authority, and Trustworthiness (E-A-T), as well as a focus on applying this to individual authors—not just brands or web pages.

E-A-T is important for the ecommerce industry because shopping pages are considered by the rater guidelines to be “Your Money or Your Life” (YMYL) pages, and these types of pages are held to the highest quality standards. For that reason they are also expected to have the highest E-A-T.

If you want your shopping pages to show up in the search results, you will need to identify how to maximize your E-A-T score for Google’s hypothetical human quality raters, which Google’s algorithms are designed to emulate.

Let’s talk about how to do that.

Which content is Google taking into consideration?

The expertise, authority, and trustworthiness of a page are determined primarily by looking at the main content on the page. What counts as main content is obvious when we are talking about a content site like a blog, but which content are Google’s quality raters taking into consideration on your category and product pages?

The first important thing to recognize is that “content” is not limited to text. The rater guidelines explicitly state that “webpage content includes … functionality (such as online shopping features, email, calculator functionality, online games, etc.).”

So raters aren’t just being asked to evaluate text. They’re being asked to evaluate your site’s functionality. It isn’t just the text on your page that needs to demonstrate high E-A-T; it’s the design, interface, interactivity, usability, and other features.

For example, raters are explicitly asked to “put at least one product in the cart to make sure the shopping cart is functioning.” They are reminded that “high quality shopping content should allow users to find the products they want and to purchase the products easily.” I highly recommend meeting these basic functions expected of the modern ecommerce site in service of that goal:

  • A persistent shopping cart that stores the products you are planning to buy
  • The ability to create a wishlist
  • The ability to sort category pages and search results by price, weighted relevance, review score, best sellers, and similar criteria
  • The ability to filter category and search results by product features and tags
  • A responsive design that looks good and functions well on mobile devices
  • Modern search capable of interpreting queries and dealing with misspellings rather than simply matching text exactly to what is found on the page (a toy sketch follows this list)
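As promised in the list above, here is a toy sketch of misspelling-tolerant search over a product catalog using edit distance. The product shape and matching threshold are illustrative only, not a production search engine.

```typescript
type Product = { name: string; tags: string[] };

// Classic Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

// Match on substring or on a small edit distance, so minor typos still find products.
function search(products: Product[], query: string): Product[] {
  const q = query.toLowerCase();
  return products.filter((p) =>
    [p.name, ...p.tags].some((term) => {
      const t = term.toLowerCase();
      return t.includes(q) || editDistance(t, q) <= Math.floor(t.length / 4);
    }),
  );
}

// Example: search(catalog, 'bakcpack') still matches products named "Backpack".
```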

Google provides quality raters with some examples of main content. In an example featuring a product page, the content behind the reviews, shipping, and safety information tabs is considered main content.

The rest of the content on the page is considered “supplementary content.” This is because the purpose of the shopping page is to sell or give information about a product. Everything directly involved in serving that purpose is considered main content. Everything peripheral to it, such as suggested products and navigation, is considered to be supplementary.

For a page to receive a good quality score, raters are asked to look for a “satisfying amount of high quality content.” They give an example of a shopping page that includes “the manufacturer’s product specs, …original product information, over 90 user reviews, shipping and returns information, [and] multiple images of the product.” High E-A-T isn’t going to get you far enough if the amount of content isn’t satisfactory for the purpose of the shopping page, so this is where you need to start.

Prerequisites

For quality raters to determine the E-A-T of your shopping pages, there are a few things they need to be able to find to give you a positive score at all.

When raters are evaluating shopping pages, the guidelines ask them to “do some special checks” for “contact information,” including “policies on payment, exchanges, and returns,” suggesting that this information will most likely be found under “customer service.” Make sure this information is present and easy to find.

What is expertise in the ecommerce industry?

The rater guidelines offer an example of a shopping page that earns a high quality score because of its high E-A-T.

They say that the page has “high E-A-T for the purpose of the page” because they have “expertise in these specific types of goods.” They mention that many of the products sold on the site are unique to this company, presumably as evidence of this. They also mention that they have “a positive reputation.”

This suggests that what counts as expertise for a shopping page, according to Google, is the expertise of the manufacturer and the brand regarding the products being sold. The company’s good reputation and exclusive products are used as evidence of this. Needless to say, this means you should only work with manufacturers that have recognized expertise in the industry.

The expertise of those who don’t work for your brand is relevant as well. The guidelines ask raters to look for “recommendations by experts, news articles, and other credible information…about the website” while they are doing reputation research for your brand or your content creators.

This emphasizes the importance of outreach in earning a high E-A-T score. Obviously, your products, your site functionality, and your brand integrity must be inherently high in order to earn positive press and recommendations from experts in the appropriate industries, but there are limits to how much your site and products are capable of promoting themselves.

To earn a positive reputation, you will also need to reach out directly to industry influencers and experts, send products to reviewers, and make headlines by taking newsworthy actions. Failing to do so means that even if your products, brand, and site are stellar, you won’t have a negative reputation, but you will have less of a reputation than competitors who have made the effort to promote themselves effectively.

Crucially, reputation must be earned from editorially independent sources. Placing sponsored content on other sites or promoting your site with ads will not earn you a positive reputation, at least not directly, because content created by your own brand isn’t considered during this research phase.

What makes an ecommerce brand authoritative?

The rater guidelines give an example of a shopping page that deserves the “highest quality” rating.

As part of the reasoning behind this, they mention that “since the store produces this backpack, they are experts on the product, making the page on their own website authoritative.”

This reveals an interesting insight into how Google decides product content is authoritative. An industry expert or the manufacturer of the product needs to be providing the information, or it isn’t authoritative.

In contrast, a blog post written by somebody who doesn’t work in this industry, isn’t an outdoors enthusiast, and otherwise doesn’t know very much about backpacks wouldn’t be considered an authority on this product.

Google also provides an example of a page that should receive the “lowest” quality rating.

They name “no evidence of E-A-T” as one reason for this. They note that the “Contact Us” page doesn’t give a company name or physical address, and that the “Shipping and Returns” page lists a different company that doesn’t seem related.

Perhaps most notably for authority considerations, however, they note that the page includes official-looking logos for the Better Business Bureau and Google Checkout, but these organizations don’t seem to actually be affiliated with the website. While the guidelines don’t explicitly mention it, the inclusion of the “Nike” logo in the header also seems to be deceptive.

When it comes to authority, Google seems to be most concerned with how it can be misrepresented. Presumably, a small company with limited reach could still be considered to have good authority so long as it only claims to be the authority over its own products. Likewise, a marketplace selling products produced by other manufacturers would presumably be considered authoritative if it were easy to verify that those manufacturers were indeed affiliated with the seller, and that the ecommerce site was an authorized merchant.

For this specific example, had the Nike, BBB, and Google Checkout logos linked to some sort of verification of affiliation, the page likely would have been considered to have high, or at least satisfactory authority.

What is trustworthiness for ecommerce sites?

To be considered high quality, raters are asked to look for “satisfying customer service information” when evaluating shopping pages. This means that any potential questions or concerns that shoppers might have about the product and the buying process should be addressed.

It’s best to be as extensive and comprehensive as possible. The purpose of the product, how to use it, what it looks like, and what results they should expect need to be covered in as much detail as possible.

Information about shipping charges should be transparent and revealed up front.

Return policies, guarantees, and similar information should be easily accessible. The checkout process shouldn’t surprise users, whether by completing before they realized they were making a purchase or by introducing fees they were not expecting or warned about.

Contact information, live chat, and customer support should be easy to find.

Remember that Google is considering all of this information main content. This should be reflected in your site design as well. Do not hide this information away or make it difficult to find. Put it where shoppers and human quality raters alike would expect to find it and where it will alleviate any concerns about the buying process.

The guidelines explicitly mention that stores “frequently have user ratings,” and that they “consider a large number of positive user reviews as evidence of positive reputation.”

Needless to say, it’s strongly recommended to introduce user review functionality to your site. User reviews have a well-measured positive impact on search engine traffic. Various studies have found that 63% of users are more likely to buy from a site that features user reviews, that users who interact with user reviews are 105% more likely to make a purchase, that they can produce an 18% lift in sales, and that having 50 or more reviews can result in an additional 4.6% boost in conversion rates.

In addition to allowing users to leave reviews, it’s important to encourage them to do so. Build automated review-request emails into your checkout process, so that they arrive in users’ inboxes shortly after the product ships, or even include printed instructions for leaving a review with the product itself.

If you’re concerned that asking users to leave reviews, or allowing them to in the first place, will result in negative reviews, this fear is largely unfounded. A study published in Psychological Science found that buyers were actually more influenced by the number of reviews than by the overall score, even to the extent that this was considered irrational behavior on their part.

Another study found that users are actually more likely to purchase a product with a rating between 4.2 and 4.5 stars, since excessively high star ratings are considered suspicious.

Finally, if you leave users to their own devices, the ones who are most likely to leave a review are the ones who are either extremely surprised by how well things went, or extremely disappointed. Additionally, they will review your products on another site if they can’t do so on yours, and Google’s guidelines ask quality raters to look at other sites for reviews.

For these reasons and more, try asking your users to leave reviews.

One crucial piece of the puzzle for trustworthiness is security. The guidelines specifically call out an “insecure connection” on a checkout page as a reason to consider a shopping page untrustworthy, and a reason to give it a “low” quality rating. While they are specifically talking about the checkout page, it’s best to deploy HTTPS on every page of your site in order to eliminate any source of doubt.
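If your store runs on a Node/Express stack (an assumption made purely for illustration), a minimal sketch of forcing HTTPS site-wide and sending an HSTS header might look like this.

```typescript
import express from 'express';

const app = express();

// Assuming the app sits behind a proxy or load balancer that terminates TLS:
// trust its X-Forwarded-Proto header, redirect plain HTTP to HTTPS, and send
// HSTS so browsers keep using the secure origin.
app.enable('trust proxy');
app.use((req, res, next) => {
  if (req.secure) {
    res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
    return next();
  }
  res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
});

app.get('/', (_req, res) => res.send('Secure checkout page'));
app.listen(8080);
```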

Another example that receives the “lowest” score is considered malicious because it asks for the user’s government ID number and ATM PIN. While this is an obvious piece of deception that no legitimate checkout page would ask for, consider less clearly malicious features that could erode trust. For example, requiring an email address at checkout without explanation, and automatically adding users to a mailing list rather than letting them opt in, is likely to reduce your trust score.

Conclusion

Google’s search quality evaluator guidelines indicate that expertise, authority, and trustworthiness are central considerations for Google’s engineers. To perform well in the search results for the foreseeable future, your pages should be developed as though humans were evaluating them for these factors.

When it comes to ecommerce, shopping pages are of primary concern, and E-A-T functions differently for them than it would for a blog post. A high quality ecommerce site doesn’t just feature authoritative text; its features and functionality are built with E-A-T in mind.

Earn expertise by working with manufacturers at the top of their industry, and by getting your brand and products in front of industry experts. Be authoritative by partnering with authoritative brands and ensuring that everything is easily verifiable. Build trust with user reviews, extensive contact and customer service information, a secure site, and a transparent checkout process.

Invest in these features to ensure that your shopping pages continue to perform well and remain competitive in the long run.

Manish Dudharejia is the president and founder of E2M Solutions Inc, a San Diego based digital agency that specializes in website design & development and ecommerce SEO. Follow him on Twitter.

Search Engine Watch


Google’s Cloud Functions serverless platform is now generally available

July 24, 2018

Cloud Functions, Google’s serverless platform that competes directly with tools like AWS Lambda and Azure Functions from Microsoft, is now generally available, the company announced at its Cloud Next conference in San Francisco today.

Google first announced Cloud Functions back in 2016, so this has been a long beta. It also often seemed as if Google wasn’t putting quite the same resources behind its serverless play as its major competitors. AWS, for example, is placing a major bet on serverless, as is Microsoft. There are plenty of startups in this space, too.

Like all Google products that come out of beta, Cloud Functions is now backed by an SLA and the company also today announced that the service now runs in more regions in the U.S. and Europe.
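For readers who haven’t used it, a minimal HTTP-triggered Cloud Function written in TypeScript looks roughly like this; the function name and message are placeholders. Cloud Functions hands the handler Express-style request and response objects.

```typescript
import type { Request, Response } from 'express';

// Exported HTTP handler; Cloud Functions invokes it once per incoming request.
export function helloWorld(req: Request, res: Response): void {
  const name = typeof req.query.name === 'string' ? req.query.name : 'world';
  res.status(200).send(`Hello, ${name}!`);
}
```

Deploying it is typically a single command, e.g. `gcloud functions deploy helloWorld --trigger-http --runtime nodejs8`.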

In addition to these hosted options, Google also today announced its new Cloud Services platform for enterprises that want to run hybrid clouds. While this doesn’t include a self-hosted Cloud Functions option, Google is betting on Kubernetes as the foundation for businesses that want to run serverless applications (and yes, I hate the term ‘serverless,’ too) in their own data centers.


Enterprise – TechCrunch