
Google’s average position sunset: Are you set up for the transition?

November 23, 2019

On September 30th, Google turned off average position as a metric for search campaigns and now requires advertisers to transition to new impression share and impression rate tools.

The news was first announced in February as an effort to establish more accurate and transparent forms of measurement. Advertisers can now see how often their ads appear for eligible searches (share) and how often their ads show at the top of the search results page (rate). While these new tools will ultimately be beneficial, the forced change will undoubtedly disrupt routines for many advertisers.

Here are a few ways advertisers can get set up with the rollout of new metrics.

Understanding the basics

To understand the impact of this change, let’s first define impression share and impression rate. Impression share is the percentage of impressions an ad receives compared to the total number it was eligible for on the search engine results page (SERP). Impression share is a novel way to discover room for ad performance improvements: it reveals missed opportunities by showing how often an ad actually appeared compared with how often it could have.

In contrast, average position did not measure whether ads showed above the organic results or not; it only showed their order relative to other ads. Advertisers were left with a guessing game.

Impression rate shows advertisers how often their ads show up at the top of the SERP based on their total impressions—in other words, what percent of the time an ad is in the very top spot (absolute top) or shown anywhere above the organic search results (top). These details address another shortcoming of average position since even an ad in position two might be at the bottom of the page.

Measuring impression share and impression rate

There are three versions of impression share, all of which measure ad impressions divided by the total eligible impressions for that ad, but based on different locations on the SERP:

  • Search (abs.) top IS: The impressions an ad has received in the absolute top location (the very first ad above the organic search results) divided by the estimated number of impressions the ad was eligible to receive in the absolute top location. This metric is new.
  • Search top IS: The impressions an ad has received anywhere above the organic search results compared to the estimated number of impressions the ad was eligible to receive in the top location. This metric is also new.
  • Search impression share: This already-existing metric measures impressions anywhere on the page.

For the impression rate, there are two metrics, which are based only on ad impressions, not the total number of eligible impressions (a short sketch after this list shows how all five metrics relate).

  • Impr. (absolute top) %: The percent of ad impressions that are shown as the very first ad above the organic search results.
  • Impr. (top) %: The percent of ad impressions that are shown anywhere above the organic search results.
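
To make the arithmetic concrete, here is a minimal sketch with made-up numbers. Note that Google estimates the eligible impressions for you, and in reality eligibility is estimated separately per location; a single figure is used here for simplicity:

```python
# Illustrative numbers only; eligible-impression estimates come from Google.
total_impressions = 800        # impressions the ad actually received
eligible_impressions = 2_000   # estimated impressions the ad was eligible for
top_impressions = 500          # shown anywhere above the organic results
abs_top_impressions = 200      # shown as the very first ad

# Impression share: impressions divided by estimated eligible impressions.
search_is = total_impressions / eligible_impressions        # 40%
top_is = top_impressions / eligible_impressions             # 25%
abs_top_is = abs_top_impressions / eligible_impressions     # 10%

# Impression rate: based only on the ad's own impressions.
top_rate = top_impressions / total_impressions              # 62.5%
abs_top_rate = abs_top_impressions / total_impressions      # 25%

print(f"Search impr. share {search_is:.0%}, top IS {top_is:.0%}, abs. top IS {abs_top_is:.0%}")
print(f"Impr. (top) % {top_rate:.1%}, Impr. (absolute top) % {abs_top_rate:.1%}")
```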

Optimizing for awareness and performance

If an advertiser is more focused on driving awareness than ROI, impression share and impression rate are both highly valuable, as they confirm that ads are meeting a visibility threshold and can boost awareness.

On the other hand, advertisers using Google’s new impression share options in Smart Bidding should be cautious. The impression share data is not accessible on the same day, so it’s hard to track performance – and setting a high target may significantly boost spending by making an ad eligible for additional, unwanted auctions. A better strategy for Smart Bidding is to bid to impression rate, which has data available intraday. This approach allows advertisers to optimize their impressions showing at the top of the SERP.

As a general starting point, the easiest way for advertisers to set targets is to look at recent performance for campaigns across the three impression % (rate) metrics. This should ensure the smoothest transition from targeting a position to targeting impression share.

[Image: Impression share metrics table]

Setting up for the transition

Advertisers using Google have been encouraged to focus on the impression metrics for some time. Still, many advertisers will feel the impact of the shift, particularly because of the new obstacles it presents for bidding strategies. Advertisers should therefore set bids that achieve their impression share goals.

With this switch to the new metrics, advertisers should check any rules that rely on average position, and update reports and saved columns that include it (a short script after the list below shows one way to audit exported reports). The following may include average position:

  • Bidding settings and AdWords rules
  • Custom columns
  • Saved reports (especially any with filters)
  • AdWords scripts
  • Saved column sets
  • Scorecards that use average position in dashboards
  • URLs using the {adposition} parameter
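
As a rough illustration of that audit, the following hypothetical Python sketch scans a folder of exported report CSVs for the deprecated column (the folder path and column labels are assumptions, since export labels vary by account language):

```python
import csv
import glob

# Common English labels for the deprecated metric; adjust for your exports.
DEPRECATED_COLUMNS = {"Avg. position", "Average position"}

for path in glob.glob("exported_reports/*.csv"):
    with open(path, newline="", encoding="utf-8") as f:
        header = next(csv.reader(f), [])  # first row holds the column names
    found = DEPRECATED_COLUMNS.intersection(header)
    if found:
        print(f"{path}: still references {', '.join(sorted(found))}")
```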

Google announced it will automatically migrate “Target search page location” bid strategies, but there’s no certainty on the timeline or details of the migration. Advertisers should therefore watch any campaign targeting average position from now on to ensure it delivers the expected results.

Wes MacLaggan is SVP of Marketing at Marin Software.


Search Engine Watch


Evolution of Google’s News Ranking Algorithm

November 1, 2019

Image: Photo by Nathan Dumlao on Unsplash

Did the Algorithm Behind How News Articles Rank at Google Just Change?

A Google patent about how news articles are ranked was updated this week, and the new version suggests that entities in those documents can have an impact on ranking.

How Have News Articles Been Ranked at Google?

This patent was originally filed in 2003.

The beta version of Google News was first launched by Google in 2002, so this was one of the early patents that described how Google ranked news articles.

One of the inventors of the original patent was Krishna A. Bharat, known as a founder of Google News.

The newest version (a continuation patent) was just granted and is the sixth version of the patent. It can be found at:

Systems and methods for improving the ranking of news articles
Inventors: Michael Curtiss, Krishna A. Bharat, and Michael Schmitt
Assignee: Google LLC
US Patent: 10,459,926
Granted: October 29, 2019
Filed: April 27, 2015

This version of the patent provides a history of the previous versions: when they were filed, and the patent numbers of the earlier five versions:

This application is a

(1) continuation of U.S. patent application Ser. No. 14/140,108, filed on Dec. 24, 2013, which is a

(2) continuation of U.S. patent Ser. No. 13/616,659, filed on Sep. 14, 2012 (now U.S. Pat. No. 8,645,368), which is a

(3) continuation of U.S. patent application Ser. No. 13/404,827, filed Feb. 24, 2012, (now U.S. Pat. No. 8,332,382), which is a

(4) continuation of U.S. patent application Ser. No. 12/501,256, filed on Jul. 10, 2009, (now U.S. Pat. No. 8,126,876), which is a

(5) continuation of U.S. patent application Ser. No. 10/662,931, filed Sep. 16, 2003, (now U.S. Pat. No. 7,577,655),

the disclosures of which are hereby incorporated by reference herein.

What A Continuation Patent is

Continuation patents take the filing date of the patent they continue (or of the ones those patents continue) and are intended to show how the process described by the patent has changed. The processes are set out in the claims sections of the patents, which are the parts a patent examiner reviews when deciding whether or not to grant the new patent.

Often, looking at the very first claim of each patent can help identify important aspects that have changed from one version to another. It is somewhat rare (in my experience) to see a patent with six versions, as this one has. I recently wrote about Google’s Universal Search Interface patent, which was recently updated a fourth time – Google’s New Universal Search Results.

What Caused A Recent Rankings Change at the New York Times?

A post on Twitter this week suggested that The New York Times may have been negatively impacted by BERT, a new algorithm Google just released and announced in Understanding searches better than ever before.

That tweet tells us it is possible that BERT, or a move to mobile-first indexing, caused a loss of rankings at the newspaper’s site. But seeing that tweet, and seeing that there was a new version of this patent, made me curious about what it contained and what changes it may have brought about.

The Changing Claims from the Ranking of News Articles Patents

But it’s possible that other changes at Google could also have an impact on rankings at news sites. One way to tell how Google changed how it ranks articles is to look at how the patent covering the ranking of news articles has changed over time.

Compare how the first four claims from this patent have changed over time.

The latest first claim in this patent introduces some new things to look at:

What is claimed is:

1. A method for ranking results, comprising: receiving a list of objects; identifying a first object in the list and a first source with which the first object is associated; identifying a second object in the list and a second source with which the second object is associated; determining a quantity of named entities that (i) occur in the first object that is associated with the first source, and (ii) do not occur in objects that are identified as sharing a same cluster with the first object but that are associated with one or more sources other than the first source; computing, based at least on the quantity of named entities that (i) occur in the first object that is associated with the first source, and (ii) do not occur in objects that are identified as sharing a same cluster with the first object but that are associated with one or more sources other than the first source, a first quality value of the first source using a first metric, wherein a named entity corresponds to a person, place, or organization; computing a second quality value of the second source using a second metric that is different from the first metric; and ranking the list of objects based on the first quality value and the second quality value.

2. The method of claim 1 wherein the identifying the first source with which the first object is associated includes: identifying the first source based on a uniform resource locator (URL) associated with the first object.

3. The method of claim 1 wherein the first source is a news source.

4. The method of claim 1 wherein computing the first quality value of the first source is further based on: one or more of a number of articles produced by the first source during a first time period, an average length of an article produced by the first source, an amount of important coverage that the first source produces in a second time period, a breaking news score, network traffic to the first source, a human opinion of the first source, circulation statistics of the first source, a size of a staff associated with the first source, a number of bureaus associated with the first source, a breadth of coverage by the first source, a number of different countries from which traffic to the first source originates, and a writing style used by the first source.

From the version of the patent that was filed on Sep. 14, 2012 (now U.S. Pat. No. 8,645,368):

What is claimed is:

1. A method comprising: determining, using one or more processors and based on receiving a search query, articles and respective scores; identifying, using one or more processors, for an article of the articles, a source with which the article is associated; determining, using one or more processors, a score for the source, the score for the source being based on: a metric that represents an evaluation, by one or more users, of the source, and an amount of traffic associated with the source; and adjusting, using one or more processors, the score of the article based on the score for the source.

2. The method of claim 1, where identifying the source includes identifying the source based on an address associated with the article.

3. The method of claim 1, where determining the score includes accessing a memory to determine the score for the source.

4. The method of claim 1, where the score for the source is further based on a length of time between an occurrence of an event and publication, by the source, of an article associated with the event.

From the Version of the patent filed on Feb. 24, 2012, (now U.S. Pat. No. 8,332,382):

What is claimed is:

1. A computer-implemented method comprising: obtaining, in response to receiving a search query, articles and respective scores; identifying, using one or more processors, for an article of the articles, a source with which the article is associated; determining, using one or more processors, a score for the source, based on polling one or more users to request the one or more users to provide a metric that represents an evaluation of a source and based on a length of time between an occurrence of an event and publication, by the source, of another article associated with the event; and adjusting, using one or more processors, the score of the article based on the score for the source.

2. The method of claim 1, where identifying the source includes identifying the source based on an address associated with the article.

3. The method of claim 1, where adjusting the score of the article includes: determining, using the score for the source, a new score for the article associated with the source; and adjusting the score of the article based on the determined new score.

4. The method of claim 1, where the score for the source is further based on a usage pattern indicating traffic associated with the source.

From the version of the patent that was filed on July 10, 2009 (now U.S. Pat. No. 8,126,876):

What is claimed is:

1. A method, performed by one or more server devices, the method comprising: receiving, at one or more processors of the one or more server devices, a search query, from a client device; generating, by one or more processors of the one or more server devices and in response to receiving the search query, a list of references to news articles; identifying, by one or more processors of the one or more server devices and for each reference in the list of references, a news source with which each reference is associated; determining, by one or more processors of the one or more server devices and for each identified news source, whether a news source rank exists; determining, by one or more processors of the one or more server devices and for each reference with an existing corresponding news source rank, a new score by combining the news source rank and a score corresponding to a previous ranking of the reference; and ranking, by one or more processors of the one or more server devices, the references in the list of references based, at least in part, on the new scores.

2. The method of claim 1, where determining whether each news source rank exists includes accessing a database to locate the news source rank.

3. The method of claim 1, further comprising: providing the ranked list of references to the client device.

4. The method of claim 1, where determining the new score comprises: determining, for each reference with an existing corresponding news source rank, a weighted sum of the news source rank and the score corresponding to the previous ranking of the reference.

And the very first version of the patent, filed on September 16, 2003 (now U.S. Pat. No. 7,577,655):

What is claimed is:

1. A method comprising: determining, by a processor, one or more metric values for a news source based at least in part on at least one of a number of articles produced by the news source during a first time period, an average length of an article produced by the news source, an amount of coverage that the news source produces in a second time period, a breaking news score, an amount of network traffic to the news source, a human opinion of the news source, circulation statistics of the news source, a size of a staff associated with the news source, a number of bureaus associated with the news source, a number of original named entities in a group of articles associated with the news source, a breadth of coverage by the news source, a number of different countries from which network traffic to the news source originates, or a writing style used by the news source determining, by the processor, an importance metric value representing the amount of coverage that the news source produces in a second time period, where the determining an importance metric includes: determining, by the processor, for each article produced by the news source during the second time period, a number of other non-duplicate articles on a same subject produced by other news sources to produce an importance value for the article, and adding, by the processor, the importance values to obtain the importance metric value; generating, by the processor, a quality value for the news source based at least in part on the determined one or more metric values; and using, by the processor, the quality value to rank an object associated with the news source.

2. The method of claim 1 where the determining includes: determining, by the processor, a plurality of metric values for the news source.

3. The method of claim 2 where the generating includes: multiplying, by the processor, each metric value in the plurality of metric values by a factor to create a plurality of adjusted metric values, and adding, by the processor, the plurality of adjusted metric values to obtain the quality value.

4. The method of claim 3 where the plurality of metric values includes a predetermined number of highest metric values for the news source.

How the News Ranking Claims Differ

An analysis of the changes over time to the patent for “Systems and methods for improving the ranking of news articles” should reflect how Google has changed the way it implements that patent.

We can see in the claims of the very first patent (filed in 2003) that Google was looking at metric values for different news sources to rank the content those sources were creating. The very long first claim from that version lists a number of metrics used to rank news sources, and that ranking influenced the ranking of news articles. So a story from a very well-known news agency would tend to rank higher than a story from a lesser-known agency.
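
Claim 3 of that 2003 version spells the mechanism out: multiply each metric value by a factor and add the adjusted values to obtain the quality value. Here is a minimal sketch of that weighted sum; the metric names, weights, and values are illustrative, not Google's:

```python
# Hypothetical weights for a few of the metrics listed in the 2003 claims.
METRIC_WEIGHTS = {
    "articles_produced": 0.2,
    "breaking_news_score": 0.3,
    "network_traffic": 0.2,
    "human_opinion": 0.3,
}

def quality_value(metrics):
    """Multiply each metric value by its factor, then sum the adjusted values."""
    return sum(METRIC_WEIGHTS[name] * value for name, value in metrics.items())

well_known = {"articles_produced": 0.9, "breaking_news_score": 0.8,
              "network_traffic": 0.9, "human_opinion": 0.85}
lesser_known = {"articles_produced": 0.3, "breaking_news_score": 0.2,
                "network_traffic": 0.1, "human_opinion": 0.5}

# The higher-quality source's articles would tend to rank higher.
print(quality_value(well_known), quality_value(lesser_known))  # 0.855 vs 0.29
```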

The version of the patent filed in 2009 still focuses upon news sources (and a “news source rank”), along with references to the news articles generated by those news sources.

The version of the patent filed in February 2012 again tells us about a score for a news article that is influenced by a score for a news source, but it doesn’t include the many metrics that the 2003 version of the patent does.

The version of the patent filed in September 2012 holds on to the score for the source, but tells us that score is based on a metric representing an evaluation of the source by one or more users and the amount of traffic associated with the source, with the article’s score adjusted based on the source’s score.

The most recent published version of this patent, filed in April 2015 and granted in October 2019, introduces some changes in how news articles may be ranked by Google. It tells us that articles covering different topics are placed in clusters (which isn’t new in itself), and that an article may rank higher than others by covering more entities that aren’t covered by articles in the same cluster.
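
Here is a rough sketch of that entity-originality signal as the latest first claim describes it: count the named entities that occur in an article but in no same-cluster article from a different source. A real system would extract entities with NLP; pre-extracted sets stand in for that here:

```python
def original_entity_count(article_entities, other_source_articles):
    """Entities in the article that no same-cluster article from another source covers.

    other_source_articles: list of (source, entity_set) pairs for cluster-mates
    associated with sources other than the article's own source.
    """
    covered_elsewhere = set()
    for _source, entities in other_source_articles:
        covered_elsewhere |= entities
    return len(article_entities - covered_elsewhere)

# Hypothetical cluster about a Federal Reserve story.
article = {"Jerome Powell", "Federal Reserve", "Washington", "Christine Lagarde"}
cluster_mates = [("wire_service", {"Jerome Powell", "Federal Reserve"}),
                 ("local_paper", {"Federal Reserve", "Washington"})]

print(original_entity_count(article, cluster_mates))  # 1 original entity
```

Per the claim, that quantity feeds into the source’s quality value, so sources that consistently add original reporting (new people, places, organizations) would be rewarded.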





SEO by the Sea ⚓


The evolution of Google’s rel=“nofollow”

October 29, 2019

Google updated the nofollow attribute on Tuesday, September 10, 2019, a change it says aims to help fight comment spam. The nofollow attribute had remained unchanged for 15 years, but Google made this change as the web evolves.

Google also announced two new link attributes to help website owners and webmasters clearly call out what type of link is being used:

rel=”sponsored”: Use the sponsored attribute to identify links on your site that were created as part of advertisements, sponsorships or other compensation agreements.

rel=”ugc”: UGC stands for User Generated Content, and the ugc attribute value is recommended for links within user-generated content, such as comments and forum posts.

rel=”nofollow”: Use this attribute for cases where you want to link to a page but don’t want to imply any type of endorsement, including passing along ranking credit to another page.
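
As a practical illustration, here is a hedged sketch that tags outbound affiliate links with the new attributes using Python and BeautifulSoup; the affiliate domain list and the HTML are made up for the example:

```python
from bs4 import BeautifulSoup

# Hypothetical affiliate networks whose links should be marked up.
AFFILIATE_DOMAINS = ("affiliate-network.example", "partner.example")

html = '<p><a href="https://affiliate-network.example/deal">Buy here</a></p>'
soup = BeautifulSoup(html, "html.parser")

for link in soup.find_all("a", href=True):
    if any(domain in link["href"] for domain in AFFILIATE_DOMAINS):
        # Google treats sponsored like nofollow; combining both is allowed.
        link["rel"] = ["sponsored", "nofollow"]

print(soup)  # <a href="..." rel="sponsored nofollow">Buy here</a>
```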

March 1st, 2020 changes

All of these link attributes already serve as hints for ranking purposes, and from the 1st of March 2020 nofollow will become a hint for crawling and indexing as well. Anyone relying on rel=nofollow to try to block a page from being indexed should look at using other methods to block pages from being crawled or indexed.

John Mueller mentioned the use of rel=sponsored in one of the recent Google hangouts.

Source: YouTube

The question he was asked

“Our website has a growing commerce strategy, and some members of our team believe that affiliate links are detrimental to our website ranking for other terms. Do we need to nofollow all affiliate links? If we don’t, will this hurt our organic traffic?”

John Mueller’s answer

“So this is something that, I think, comes up every now and then. From our point of view, affiliate links are links that are placed with a kind of commercial background, in that you are obviously trying to earn some money by having these affiliate links and pointing to a distributor that you trust and have some kind of arrangement with.

From our point of view that is perfectly fine; that’s a way of monetizing your website and you’re welcome to do that.

We do kind of expect that these types of links are marked appropriately so that we understand these are affiliate links. One way to do that is to use just a nofollow.

A newer way to let us know about this kind of situation is to use the sponsored rel link attribute. That link attribute specifically tells us this is something to do with an advertising relationship, and we treat it the same as a nofollow.

A lot of the affiliate links out there follow really clear patterns and we can recognize those, so we try to take care of those on our side when we can. But to be safe, we recommend just using a nofollow or rel sponsored link attribute. In general, this isn’t something that would really harm your website if you don’t do it; it’s something that makes it a little clearer for us what these links are for. If we see, for example, a website engaging in large-scale link selling, then that’s something where we might take manual action. But for the most part, if our algorithms just recognize these are links we don’t want to count, then we just won’t count them.”

How quickly are website owners acting on this?

This was only announced by Google in September, and website owners have until March to make the required changes, but data from SEMrush shows that website owners are already starting to adopt the new rel link attributes.

The data shows that out of one million domains, only 27,763 have at least one UGC link. Interestingly, each domain on that list has, on average, 20,904,603 follow backlinks, 6,373,970 nofollow, 22.8 UGC, and 55.5 sponsored.

Source: Semrush.com

These are still very early days, but we can see change happening, and I would expect adoption to grow significantly next year.

Conclusion

I believe that Google is going to use the data from these link attributes to catch out website owners who continue to sell links and mark them up incorrectly in order to pass SEO value to another website under any sort of agreement, paid or otherwise.

Paul Lovell is an SEO Consultant And Founder at Always Evolving SEO. He can be found on Twitter @_PaulLovell.


Search Engine Watch


Google’s How News Works, aimed at clarifying news transparency

June 11, 2019

In May, Google announced the launch of a new website aimed at explaining how they serve and address news across Google properties and platforms.

The site, How News Works, states Google’s mission as it relates to disseminating news in a non-biased manner. The site aggregates a variety of information about how Google crawls, indexes, and ranks news stories as well as how news can be personalized for the end user.

How News Works provides links to various resources within the Google news ecosystem all in one place and is part of The Google News Initiative.

What is The Google News Initiative?

The Google News Initiative (GNI) is Google’s effort to work with news industry professionals to “help journalism thrive in the digital age.” The GNI is driven and summarized by the GNI website which provides information about a variety of initiatives and approaches within Google including:

  • How to work with Google (e.g., partnership opportunities, training tools, funding opportunities)
  • A list of current partnerships and case studies
  • A collection of programs and funding opportunities for journalists and news organizations
  • A catalog of Google products relevant to journalists

Google attempts to work with the news industry in a variety of ways. For example, it provides funding opportunities to help journalists from around the world.

Google is now accepting applications (through mid-July) from North American and Latin American applicants to help fund projects that “drive digital innovation and develop new business models.” Applicants who meet Google’s specified criteria (and are selected) will be awarded up to $300,000 in funding (for U.S. applicants) or $250,000 (for Latin American applicants), with the award covering up to 70% of the total project cost.

The GNI website also provides users with a variety of training resources and tools. Journalists can learn how to partner with Google to test and deploy new technologies such as the Washington Post’s participation in Google’s AMP Program (accelerated mobile pages).

AMP is an open source initiative that Google launched in February 2016 with the goal of making mobile web pages faster.

AMP mirrors content on traditional web pages, but uses AMP HTML, an open source format architected in an ultra-light way to reduce latency for readers.

News transparency and accountability

The GNI’s How News Works website reinforces Google’s mission to “elevate trustworthy information.” The site explains how the news algorithm works and links to Google’s news content policies.

The content policy covers Google’s approach to accountability and transparency, its requirements for paid or promotional material, copyright, restricted content, privacy/personalization and more.

This new GNI resource, a subsection of the main GNI website, acts as a starting point for journalists and news organizations to delve into Google’s vast news infrastructure including video news on YouTube.

Since it can be difficult to ascertain if news is trustworthy and accurate, this latest initiative by Google is one way that journalists (and the general public) can gain an understanding of how news is elevated and indexed on Google properties.


Search Engine Watch


Highlights From Google’s Keyword Planner Update

June 4, 2019

The Google Keyword Planner is a useful tool for every Google Advertiser. Learn more about the most recent updates!

Read more at PPCHero.com
PPC Hero


Utilizing Google’s Test My Site Tool to Improve Mobile Performance

April 23, 2019

Google updated their Test My Site tool to include custom recommendations for mobile sites. Read more to find how this tool can improve your mobile performance.

Read more at PPCHero.com
PPC Hero


Google’s new voice recognition system works instantly and offline (if you have a Pixel)

March 13, 2019

Voice recognition is a standard part of the smartphone package these days, and a corresponding part is the delay while you wait for Siri, Alexa or Google to return your query, either correctly interpreted or horribly mangled. Google’s latest speech recognition works entirely offline, eliminating that delay altogether — though of course mangling is still an option.

The delay occurs because your voice, or some data derived from it anyway, has to travel from your phone to the servers of whoever operates the service, where it is analyzed and sent back a short time later. This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether.

Why not just do the voice recognition on the device? There’s nothing these companies would like more, but turning voice into text on the order of milliseconds takes quite a bit of computing power. It’s not just about hearing a sound and writing a word — understanding what someone is saying word by word involves a whole lot of context about language and intention.

Your phone could do it, for sure, but it wouldn’t be much faster than sending it off to the cloud, and it would eat up your battery. But steady advancements in the field have made it plausible to do so, and Google’s latest product makes it available to anyone with a Pixel.

Google’s work on the topic, documented in a paper here, built on previous advances to create a model small and efficient enough to fit on a phone (it’s 80 megabytes, if you’re curious), but capable of hearing and transcribing speech as you say it. No need to wait until you’ve finished a sentence to think whether you meant “their” or “there” — it figures it out on the fly.

So what’s the catch? Well, it only works in Gboard, Google’s keyboard app, and it only works on Pixels, and it only works in American English. So in a way this is just kind of a stress test for the real thing.

“Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application,” writes Google, as if it is the trends that need to do the hard work of localization.

Making speech recognition more responsive, and to have it work offline, is a nice development. But it’s sort of funny considering hardly any of Google’s other products work offline. Are you going to dictate into a shared document while you’re offline? Write an email? Ask for a conversion between liters and cups? You’re going to need a connection for that! Of course this will also be better on slow and spotty connections, but you have to admit it’s a little ironic.

Gadgets – TechCrunch


Google’s Cloud Firestore NoSQL database hits general availability

February 2, 2019

Google today announced that Cloud Firestore, its serverless NoSQL document database for mobile, web and IoT apps, is now generally available. In addition, Google is also introducing a few new features and bringing the service to 10 new regions.

With this launch, Google is giving developers the option to run their databases in a single region. During the beta, developers had to use multi-region instances, and, while that obviously has some advantages with regard to resilience, it’s also more expensive and not every app needs to run in multiple regions.

“Some people don’t need the added reliability and durability of a multi-region application,” Google product manager Dan McGrath told me. “So for them, having a more cost-effective regional instance is very attractive, as well as data locality and being able to place a Cloud Firestore database as close as possible to their user base.”

The new regional instance pricing is up to 50 percent cheaper than the current multi-region instance prices. Which solution you pick does influence the SLA guarantee Google gives you, though. While the regional instances are still replicated within multiple zones inside the region, all of the data is still within a limited geographic area. Hence, Google promises 99.999 percent availability for multi-region instances and 99.99 percent availability for regional instances.

And talking about regions, Cloud Firestore is now available in 10 new regions around the world. Firestore launched with a single location and added two more during the beta. With this, Firestore is now available in 13 locations (including the North America and Europe multi-region offerings). McGrath tells me Google is still in the planning stage for deciding the next phase of locations, but he stressed that the current set provides pretty good coverage across the globe.

Also new in this release is deeper integration with Stackdriver, the Google Cloud monitoring service, which can now monitor read, write and delete operations in near-real time. McGrath also noted that Google plans to add the ability to query documents across collections and increment database values without needing a transaction.

It’s worth noting that while Cloud Firestore falls under the Google Firebase brand, which typically focuses on mobile developers, Firestore offers all of the usual client-side libraries for Compute Engine or Kubernetes Engine applications, too.
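
For a sense of what that looks like server-side, here is a minimal sketch using the google-cloud-firestore Python client; it assumes a project with Firestore enabled and default credentials, and the collection and field names are purely illustrative:

```python
from google.cloud import firestore

db = firestore.Client()  # picks up application default credentials

# Write a document.
db.collection("cities").document("LA").set(
    {"name": "Los Angeles", "population": 4_000_000}
)

# Query documents.
for doc in db.collection("cities").where("population", ">", 1_000_000).stream():
    print(doc.id, doc.to_dict())
```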

“If you’re looking for a more traditional NoSQL document database, then Cloud Firestore gives you a great solution that has all the benefits of not needing to manage the database at all,” McGrath said. “And then, through the Firebase SDK, you can use it as a more comprehensive back-end as a service that takes care of things like authentication for you.”

One of the advantages of Firestore is that it has extensive offline support, which makes it ideal for mobile developers but also IoT solutions. Maybe it’s no surprise, then, that Google is positioning it as a tool for both Google Cloud and Firebase users.


Enterprise – TechCrunch


What were Google’s biggest search algorithm updates of 2018?

January 8, 2019

Search came a long way this past year. We saw the appearance of the zero-result SERP, featuring knowledge cards for answers such as conversions and times.

We welcomed the mobile-first index and the mobile speed update. With the focus on mobile, we saw meta description lengths shorten from 300+ to 150 or so.

We saw minor changes to image search and a renewed emphasis on “compelling and shareable content.” After testing video carousels on desktop SERPs for a while, Google decided to roll the feature out by replacing video thumbnails with video carousels across the board. Understandably, we’ve since seen more focus on producing video.

Some algorithm updates occurred overnight, some happened incrementally. Some caused only ripples, and some turned the SERPs upside down.

As we say hello to 2019, we want to take a moment to reflect on this past year. The algorithm changes we saw last year can be indicators of changes or trends to come. Search engines often make incremental adjustments to their filters.

So, our friends over at E2M have created a visual and entertaining overview of what went down in Google Search over 2018 — and which might help give us an idea of where we’re going next.

Google’s biggest Search algorithm updates of 2018 – A visual representation by E2M



Search Engine Watch


Google’s Cloud Spanner database adds new features and regions

January 1, 2019

Cloud Spanner, Google’s globally distributed relational database service, is getting a bit more distributed today with the launch of a new region and new ways to set up multi-region configurations. The service is also getting a new feature that gives developers deeper insights into their most resource-consuming queries.

With this update, Google is adding Hong Kong (asia-east2), its newest data center location, to the Cloud Spanner lineup. Cloud Spanner is now available in 14 of the 18 Google Cloud Platform (GCP) regions, including seven the company added this year alone. The plan is to bring Cloud Spanner to every new GCP region as they come online.

The other new region-related news is the launch of two new configurations for multi-region coverage. One, called eur3, focuses on the European Union, and is obviously meant for users there who mostly serve a local customer base. The other is called nam6 and focuses on North America, with coverage across both coasts and the middle of the country, using data centers in Oregon, Los Angeles, South Carolina and Iowa. Previously, the service only offered a North American configuration with three regions and a global configuration with three data centers spread across North America, Europe and Asia.

While Cloud Spanner is obviously meant for global deployments, these new configurations are great for users who only need to serve certain markets.

As far as the new query features are concerned, Cloud Spanner is now making it easier for developers to view, inspect and debug queries. The idea here is to give developers better visibility into their most frequent and expensive queries (and maybe make them less expensive in the process).
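
The query insights themselves live in the console, but as a rough sketch of using the new nam6 configuration from the google-cloud-spanner Python client (instance and database names are hypothetical):

```python
from google.cloud import spanner

client = spanner.Client()
config = "projects/{}/instanceConfigs/nam6".format(client.project)

# Create an instance in the new North American multi-region configuration.
instance = client.instance("demo-instance",
                           configuration_name=config,
                           display_name="North America demo",
                           node_count=1)
instance.create().result(timeout=300)   # long-running operation

database = instance.database("demo-db")
database.create().result(timeout=300)

with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT 1"):
        print(row)
```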

In addition to the Cloud Spanner news, Google Cloud today announced that its Cloud Dataproc Hadoop and Spark service now supports the R language, in addition to Python 3.7 support on App Engine.


Enterprise – TechCrunch