CBPO


Google advanced search: Six powerful tips for better SEO

June 30, 2020

30-second summary:

  • Google advanced search helps you get granular with your searches and deliver hyper-focused searches with the help of search operators (or a combination of them).
  • For example, you can search for articles published in the last week by your competitors or discover internal linking opportunities you might’ve missed.
  • In this how-to guide, Venngage’s Aditya Sheth outlines six Google advanced search hacks you need to know to master Google search and become a better SEO.

I have to come clean on something: I’m lazy.

While being lazy may not be a virtue, it does come with an unseen advantage: It allows you to look for creative ways to get things done without necessarily spending more time.

And as an SEO, I’m always looking for ways to get more done without working longer hours. Essentially: aiming to accomplish more with less.

One way to do more with less is to look for tools, tactics or even hacks that help you cut down time wasted and get more done, faster. 

One of my favorite hacks ever? Google advanced search.

But what is it? In simple terms, the Google advanced search helps you fine-tune your searches to find exactly what you’re looking for. 

This is an especially useful skill if you want to quickly pull up small bits of information without always relying on tools like Ahrefs, Moz, or SEMrush to do it for you.

In this how-to SEO guide, you'll learn six ways to put advanced search operators to work.

Before we dive into the meat of this guide, first things first:

A mini-crash course on advanced search operators

To keep things simple, we’re going to cover four operators I, as an SEO, use most often.

The first operator is the site search operator. What this allows you to do is retrieve results from a single website. All you have to do is type site:[any website] into Google.

For example, If I enter site:semrush.com, I will only see results pertaining to SEMrush:

You don’t need the http://, https://, or www prefixes when using the site operator.

That's not all: you can also add a keyword to the site operator to find out whether that site has written any content around that keyword.

Let’s say I want to find whether we’ve covered the keyword “infographic” on the site. I’ll enter “site:semrush.com infographic” and this is what comes up:

I personally use the site operator very frequently as it limits my search results to a single domain. Keep this operator in mind as we’re going to be relying on it later.

The next operator you’ll find useful is the quotes or exact-match (“”) operator. What the exact-match operator does is limit your searches to exact-match phrases only.

For example, here is a normal Google search (notice the number of results):

And now the same phrase wrapped in quotation marks: 

 

Notice something different? 

Compared to a normal Google search, exact-match queries will only show you results where your keyphrase has been mentioned exactly as it is (and not a variation). 

This operator is especially powerful to identify if your site has any duplicate content that could be sabotaging your rankings (more on this later).

Last but not least, we're going to learn the dash (-) and plus (+) operators to perform laser-targeted searches.

What the dash (-) operator does is exclude certain keywords from the search results. So if I wanted to read about search engines but not search engine optimization, I'd use the following query:

 

By adding -optimization to my search, I'll only see results about search engines and not search engine optimization.

The plus (+) operator, you guessed it, does the exact opposite. You can use the plus operator to add words to your original search and get a different set of results.

For example, here’s a query I entered in Google search:

What did I do here? I used the site:, dash and plus operators in conjunction to show me articles that closely relate to search engine marketing but not SEO on the Search Engine Watch blog.


There are many search operators out there (too many to list in fact). You can find a much more comprehensive list of search operators on the Moz blog.

But for simplicity’s sake, we’re going to stick to the site, exact match, dash, and plus operators in this guide.
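If you find yourself assembling these operators often, a tiny script can compose them for you. A minimal sketch: the helper names (`build_query`, `search_url`) are my own, not part of any Google tooling, and the plus operator is included simply because the guide covers it.

```python
from urllib.parse import quote_plus

def build_query(site=None, exact=None, include=None, exclude=None):
    """Compose a Google advanced search query from the four operators
    covered above: site:, exact match (""), plus (+), and dash (-)."""
    parts = []
    if site:
        parts.append(f"site:{site}")
    if exact:
        parts.append(f'"{exact}"')
    if include:
        parts.extend(f"+{word}" for word in include)
    if exclude:
        parts.extend(f"-{word}" for word in exclude)
    return " ".join(parts)

def search_url(query):
    # Turn the query into a Google search URL you can open in a browser.
    return "https://www.google.com/search?q=" + quote_plus(query)

q = build_query(site="searchenginewatch.com",
                include=["marketing"], exclude=["optimization"])
print(q)           # site:searchenginewatch.com +marketing -optimization
print(search_url(q))
```

Paste the printed query into Google, or open the generated URL directly.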

Six Google advanced search tips for better SEO

Using the Google advanced search operators above, you can access exactly what you’re looking for and spend less time searching for it.

Advanced search can come in really handy, especially when you're just starting out and don't have the budget for expensive SEO tools.

Imagine the endless possibilities that await you as an SEO if only you got better at googling. Well, it's easier than you think. Let me show you:

1. Conduct basic but insightful competitor research

Conducting competitor research on Google is really easy. All you have to do is use the “related:” search operator followed by a website URL. 

“Related:” allows you to find sites that are closely related to a specific URL. You can use related: to identify not only direct competitors but also indirect, peripheral competitors you might've missed in your competitor research.

Not only that, the related: operator also helps you understand how Google is categorizing your competitors and your website.

Let's look at what Google returns if we search for competitors related to Venngage:

I already know the first three results are our direct competitors, but the last two are surprising because they seem to be indirectly competing with us (and I wasn’t even aware of them).

We’re an online infographic maker tool while both Column Five Media and InfoNewt appear to be done-for-you agencies. Google has identified and categorized them as sites related to Venngage which is an insightful find.

Don’t dismiss this advanced search hack because of its simplicity. Try it for yourself and see what Google comes up with. You might just come away with a better understanding of the competition as it pertains to SEO.

2. Stalk your competitor’s content strategy

Sticking to the topic of competitor research, here’s a cool way you can spy on your competitor’s content strategy: combining the site operator and Google’s date-range filter.

Let’s try this on one of our direct competitors: Piktochart.

To limit my search to blog-related results only, I'll use Piktochart's /blog subdirectory instead of their root domain. And by the looks of it, they have 790 pages on their blog.

I can use the date-range filter (click on tools and filter by date) to further drill down these results to identify what content they published in the last month only. Here’s what comes up: 

This not only tells me Piktochart published four new articles last month but also gives me insight into Piktochart's content strategy and the keywords they're targeting.

You can find even more data by filtering the results by days, months, or custom time periods. 

I can even include exact-match (“your keyword” in quotes) keywords to find out how much content Piktochart has published on any given topic, which is a clever way to uncover their topic cluster strategy. 

Let's take content marketing as a topic, for example:

Using the site operator in conjunction with the date filters on Google search gives you information on: 

  • How much content your competition has published to date
  • How often they publish new content in a given time period
  • What kind of content they publish at a certain point in time
  • How often your competitor has written about a given topic

Pretty cool right? 
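If you'd rather build the date-filtered search as a URL you can bookmark or script, Google's `tbs=cdr` parameter reproduces what clicking Tools > Any time > Custom range does in the UI. Note this parameter is unofficial and undocumented, so treat it as an assumption that Google may change at any time.

```python
from urllib.parse import quote_plus

def dated_search_url(query, start, end):
    """Build a Google search URL with a custom date range applied.
    The tbs=cdr parameter is unofficial and undocumented -- it mirrors
    the Tools > Custom range filter and may break without notice.
    Dates are in M/D/YYYY format."""
    tbs = f"cdr:1,cd_min:{start},cd_max:{end}"
    return (f"https://www.google.com/search?q={quote_plus(query)}"
            f"&tbs={quote_plus(tbs)}")

url = dated_search_url('site:piktochart.com/blog "content marketing"',
                       "5/1/2020", "5/31/2020")
print(url)
```

Opening the printed URL shows only pages from that blog, on that topic, indexed in the chosen month.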

3. Unearth a gold mine of guest posting opportunities 

If your goal is to drive quality traffic back to your website, pick up high-quality backlinks, boost your website’s domain authority and even rank higher on Google — guest blogging will help you do all of the above.

Anybody who tells you guest blogging is dead is either lying or in on it. Guest blogging still works, even in 2020.

Now that we’ve briefly covered how important guest blogging really is, how do you uncover guest blogging opportunities in your niche or industry?

Here are a few advanced search queries you can copy and paste into Google:

  • Your Keyword “guest post opportunities”
  • Your Keyword “guest post”
  • Your Keyword “submit guest post”
  • Your Keyword “submit blog post”
  • Your Keyword intitle:“write for us”
  • Your Keyword intitle:“guest post guidelines”

If I’m looking to guest post for sites in the design space, for example, I’d use the following query:

Sites bookmarked. Guest post pitches sent. Fingers crossed. 

Try out these search queries for yourself and you'll be able to build a respectable list of sites to contribute to.
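If you want to run these footprints against a whole list of keywords, a few lines of Python will generate every query for you. The footprint list simply mirrors the queries above.

```python
# Search footprints from the guest-posting list above.
GUEST_POST_FOOTPRINTS = [
    '"guest post opportunities"',
    '"guest post"',
    '"submit guest post"',
    '"submit blog post"',
    'intitle:"write for us"',
    'intitle:"guest post guidelines"',
]

def guest_post_queries(keyword):
    # One ready-to-paste Google query per footprint.
    return [f"{keyword} {fp}" for fp in GUEST_POST_FOOTPRINTS]

for q in guest_post_queries("graphic design"):
    print(q)
```

Loop over your whole keyword list to build a prospecting sheet in seconds.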

Brian Dean has the most exhaustive guide on guest blogging I’ve read (it includes a huge list of search operators that will help you find even more guest posting opportunities).

4. Discover hidden opportunities for internal linking

Internal linking plays a small but important role in how well you rank on Google.

No matter how well-designed and easy to navigate your site is, a strong internal linking structure can make all the difference in driving traffic from one post to another across your entire blog.

Internal linking also creates topical relevance by creating supporting content for the main topics of your website.

A few weeks ago, I published a mammoth webinar guide on the Venngage blog. I wanted it to start driving traffic to the post and rank for high-volume keywords immediately.

I got to work by finding out where I could link to our guide internally from as many relevant posts on our blog as possible. All I did was use the site operator and the keyword “webinar”: 

Boom! Barring the first result, I found 47 internal linking opportunities with a simple search. And all it took was a few seconds.

You can even use this search query: site:www.yourwebsite.com/blog intext:”your keyword” to pretty much do the same thing.

This advanced search hack won’t be as useful if you’ve recently started blogging, but it will come in handy if you manage a huge blog that already has a lot of existing content.
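If you have a crawl or export of your blog's pages, the same check can be run locally: find pages that mention the keyword but don't yet link to the new guide. A sketch only; the page data and URLs below are invented for illustration.

```python
def internal_link_opportunities(pages, keyword, target_url):
    """Return URLs that mention the keyword but don't yet link to the
    target post. `pages` maps URL -> page text. This is a local
    stand-in for the site:/intext: search described above."""
    keyword = keyword.lower()
    return [url for url, text in pages.items()
            if keyword in text.lower()      # page mentions the topic
            and target_url not in text      # but doesn't link to the guide
            and url != target_url]          # and isn't the guide itself

# Hypothetical crawl data, not real Venngage content.
pages = {
    "/blog/presentations": "Slides beat a webinar for short updates...",
    "/blog/webinar-guide": "Our mammoth webinar guide...",
    "/blog/colors": "Color theory basics...",
}
print(internal_link_opportunities(pages, "webinar", "/blog/webinar-guide"))
# ['/blog/presentations']
```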

5. Find duplicate content on your website

Duplicate content is content that appears in more than one location on your website, and it can confuse search engines when they decide which page to rank higher.

In short: Duplicate content can hurt your website rankings and it’s a technical SEO issue you cannot afford to ignore.

To show you an example of duplicate content, I'll use this small piece of copy from the Apple AirPods product description on Walmart:

Google advanced search tips: Duplicate Content

I'll paste the copy into Google wrapped in the exact-match operator and combined with the site operator. Here's what I come up with:

The same piece of copy shows up on six other pages on Walmart. Things could be a lot worse but still, not ideal.

But if I were to search for the same piece of copy across the web (not just Walmart) using the dash operator, this is what comes up:

The same piece of copy appears on ~19,000 other websites (excluding Walmart). That’s a lot of duplicate content. 

Duplicate content is an especially big issue for blogs with thousands of pages or for ecommerce sites that reuse the same product descriptions.
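On your own site, you can catch reused copy before Google does by hashing blocks of text across pages from a crawl. A small Python sketch; the page data below is made up for illustration.

```python
import hashlib

def duplicate_copy(pages, min_len=40):
    """Flag identical blocks of copy appearing on more than one page.
    `pages` maps URL -> list of text blocks (e.g. product descriptions).
    Returns {hash: [urls...]} for every block seen on 2+ pages."""
    seen = {}
    for url, blocks in pages.items():
        for block in blocks:
            if len(block) < min_len:
                continue  # skip short boilerplate like button labels
            key = hashlib.sha1(block.strip().lower().encode()).hexdigest()
            seen.setdefault(key, []).append(url)
    return {k: urls for k, urls in seen.items() if len(urls) > 1}

# Hypothetical crawl data: two pages share one description.
pages = {
    "/p/airpods": ["Wireless charging case included. More than 24 hours of listening time."],
    "/p/airpods-refurb": ["Wireless charging case included. More than 24 hours of listening time."],
    "/p/usb-cable": ["A one-metre USB-C cable."],
}
for urls in duplicate_copy(pages).values():
    print(urls)  # ['/p/airpods', '/p/airpods-refurb']
```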

6. Find missed content opportunities

One of the last search operators I’ll cover is the “filetype” operator. 

Filetype can help you find non-HTML content on your site, such as Word Documents or PDF files. This content is often valuable, but not search optimized. And traffic to it doesn’t show up in your Analytics.

To use this search operator, simply type in “site:yourwebsite.com filetype:pdf” like so:

Then look at that content. Have you published it as HTML content? Is it search optimized? Is there an opportunity to make it a valuable, rank-worthy and trackable webpage?

PDF files are often the rust of the internet, added to sites because the content manager doesn’t have an easy way to publish actual web pages.

They should always be an alternate (print-friendly, download-friendly) version of HTML content. They should almost never be the only version of a piece of content.  
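To audit several file types at once, you can generate one query per type. A small sketch; the list of extensions below is my own starting point, not an exhaustive one.

```python
# Common non-HTML file types worth auditing; extend as needed.
NON_HTML_TYPES = ["pdf", "doc", "docx", "ppt", "pptx", "xls"]

def filetype_queries(domain):
    # One site:/filetype: query per extension, ready to paste into Google.
    return [f"site:{domain} filetype:{ft}" for ft in NON_HTML_TYPES]

for q in filetype_queries("yourwebsite.com"):
    print(q)
```

Run each query and ask the questions above of whatever content turns up.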

Your turn to master Google search

Congratulations! You’ve officially made it to the end of this mammoth guide. 

Google is far more powerful and robust than we realize or give it credit for. 

Knowing what to search for and how to search for it with the help of Google advanced search operators will help you harness Google’s true power and in turn, grow your site.

As SEOs, our job comprises running SEO tests, keeping up with Google's algorithm changes, and staying on top of the latest search trends.

Google advanced search is not only a fun skill that you can learn over the weekend. It can help you uncover opportunities hiding in plain sight and help you be more effective at your job.

The real kicker

Google is and always will be free. The know-how to fine-tune your searches will help you become a better SEO and pay dividends over the long term.

Has using Google advanced search in your day-to-day made you a better SEO? Which search operators do you use most frequently? Did I miss any advanced search tips? Drop them in the comments below.

Aditya Sheth does Content & SEO at Venngage. You can connect with him on Linkedin or find him on Twitter @iamadityashth.

The post Google advanced search: Six powerful tips for better SEO appeared first on Search Engine Watch.



Tracking Your Free Google Shopping Ads

June 25, 2020

Tracking for Surfaces Across Google clicks in Google Merchant Center leaves something to be desired. The step-by-step process in this post is an alternative.

Read more at PPCHero.com


How Google May Annotate Images to Improve Search Results

June 25, 2020

How might Google improve on information from sources such as knowledge bases to help them answer search queries?

That information may be learned from or inferred from sources outside of those knowledge bases when Google may:

  • Analyze and annotate images
  • Consider other data sources

A recent Google patent on this topic defines knowledge bases for us, explains why they are important, and points out examples of how Google looks at entities when it annotates images:

A knowledge base is an important repository of structured and unstructured data. The data stored in a knowledge base may include information such as entities, facts about entities, and relationships between entities. This information can be used to assist with or satisfy user search queries processed by a search engine.

Examples of knowledge bases include Google Knowledge Graph and Knowledge Vault, Microsoft Satori Knowledge Base, DBpedia, Yahoo! Knowledge Base, and Wolfram Knowledgebase.

The focus of this patent is upon improving upon information that can be found in knowledge bases:

The data stored in a knowledge base may be enriched or expanded by harvesting information from a wide variety of sources. For example, entities and facts may be obtained by crawling text included in Internet web pages. As another example, entities and facts may be collected using machine learning algorithms, while it may annotate images.

All gathered information may be stored in a knowledge base to enrich the information that is available for processing search queries.

Analyzing Images to Enrich Knowledge Base Information

This approach may annotate images and select object entities contained in those images. It reminded me of a post I recently wrote about Google annotating images: How Google May Map Image Queries.

This is an effort to better understand and annotate images, and explore related entities in images, so Google can focus on “relationships between the object entities and attribute entities, and store the relationships in a knowledge base.”

Google can learn from images of real-world objects (a phrase they used for entities when they started the Knowledge Graph in 2012).

I wrote another post about image search becoming more semantic, in the labels they added to categories in Google image search results. I wrote about those in Google Image Search Labels Becoming More Semantic?

When writing about mapping image queries, I couldn’t help but think about labels helping to organize information in a useful way. I’ve suggested using those labels to better learn about entities when creating content or doing keyword research. Doing image searches and looking at those semantic labels can be worth the effort.

This new patent tells us how Google may annotate images to identify entities contained in those images. While labeling, they may select an object entity from the entities pictured and then choose at least one attribute entity from the annotated images that contain the object entity. They could also infer a relationship between the object entity and the attribute entity or entities and include that relationship in a knowledge base.

In accordance with one exemplary embodiment, a computer-implemented method is provided for enriching a knowledge base for search queries. The method includes assigning annotations to images stored in a database. The annotations may identify entities contained in the images. An object entity among the entities may be selected based on the annotations. At least one attribute entity may be determined using the annotated images containing the object entity. A relationship between the object entity and the at least one attribute entity may be inferred and stored in a knowledge base.

For example, when I search for my hometown, Carlsbad, in Google image search, one of the category labels is Legoland, an amusement park located in Carlsbad, California. Showing that label tells us that Legoland is located in Carlsbad (the captions for the pictures of Legoland confirm it).

Carlsbad-Legoland-Attribute Entity

This patent can be found at:

Computerized systems and methods for enriching a knowledge base for search queries
Inventors: Ran El Manor and Yaniv Leviathan
Assignee: Google LLC
US Patent: 10,534,810
Granted: January 14, 2020
Filed: February 29, 2016

Abstract

Systems and methods are disclosed for enriching a knowledge base for search queries. According to certain embodiments, images are assigned annotations that identify entities contained in the images. An object entity is selected among the entities based on the annotations and at least one attribute entity is determined using annotated images containing the object entity. A relationship between the object entity and the at least one attribute entity is inferred and stored in the knowledge base. In some embodiments, confidence may be calculated for the entities. The confidence scores may be aggregated across a plurality of images to identify an object entity.

Confidence Scores While Labeling Entities in Images

One of the first phrases to jump out at me when I scanned this patent was "confidence scores." It reminded me of the association scores I discussed when writing about Google extracting information about entities, their relationships with other entities, and their attributes, along with scores reflecting confidence in those relationships. I mentioned association scores in the post Entity Extractions for Knowledge Graphs at Google, because those scores were described in the patent Computerized systems and methods for extracting and storing information regarding entities.

I also referred to these confidence scores when I wrote about Answering Questions Using Knowledge Graphs, because association or confidence scores can lead to better answers to questions about entities in search results. That is an aim of this patent as well, as it attempts to analyze and label images and understand the relationships between the entities shown in them.

The patent lays out the purpose it serves when it may analyze and annotate images like this:

Embodiments of the present disclosure provide improved systems and methods for enriching a knowledge base for search queries. The information used to enrich a knowledge base may be learned or inferred from analyzing images and other data sources.

Per some embodiments, object recognition technology is used to annotate images stored in databases or harvested from Internet web pages. The annotations may identify who and/or what is contained in the images.

The disclosed embodiments can learn which annotations are good indicators for facts by aggregating annotations over object entities and facts that are already known to be true. Grouping annotated images by object entity helps identify the top annotations for that object entity.

Top annotations can be selected as attributes for the object entities and relationships can be inferred between the object entities and the attributes.

As used herein, the term “inferring” refers to operations where an entity relationship is inferred from or determined using indirect factors such as image context, known entity relationships, and data stored in a knowledge base to draw an entity relationship conclusion instead of learning the entity-relationship from an explicit statement of the relationship such as in text on an Internet web page.

The inferred relationships may be stored in a knowledge base and subsequently used to assist with or respond to user search queries processed by a search engine.

The patent then tells us about how confidence scores are used, that they calculate confidence scores for annotations assigned to images. Those “confidence scores may reflect the likelihood that an entity identified by an annotation is contained in an image.”

If you look back at the pictures of Legoland above, Legoland may be considered an attribute entity of the object entity Carlsbad, because Legoland is located in Carlsbad. The label annotations indicate what the images portray and allow a relationship between the entities to be inferred.

Similarly, an image search for Milan, Italy shows a category label for the Duomo, a cathedral located in the city. The Duomo is an attribute entity of the object entity Milan because it is located in Milan.

In those examples, we are inferring from Legoland being included under pictures of Carlsbad that it is an attribute entity of Carlsbad and that the Duomo is an attribute entity of Milan because it is included in the results of a search for Milan.

Milan Duomo Attribute Entity

A search engine may learn from label annotations and confidence scores because the search engine (or its indexing engine) may index:

  • Image annotations
  • Object entities
  • Attribute entities
  • Relationships between object entities and attribute entities
  • Facts learned about object entities

The illustrations from the patent show us images of a bear eating a fish, to tell us that the Bear is an Object Entity, the Fish is an Attribute Entity, and that bears eat fish.

Annotate images with Bear (Object Entity) and Fish (Attribute Entity) entities

We are also shown that bears, as object entities, have other attribute entities associated with them, since they will go into the water to hunt fish and roam around on the grass.

Bears and attribute Entities

Annotations may be detailed and cover objects within photos or images, like the bear eating the fish above. The patent points out a range of entities that might appear in a single image by telling us about a photo from a baseball game:

An annotation may identify an entity contained in an image. An entity may be a person, place, thing, or concept. For example, an image taken at a baseball game may contain entities such as “baseball fan”, “grass”, “baseball player”, “baseball stadium”, etc.

An entity may also be a specific person, place, thing, or concept. For example, the image taken at the baseball game may contain entities such as “Nationals Park” and “Ryan Zimmerman”.

Defining an Object Entity When Google May Annotate Images

The patent provides more insights into what object entities are and how they might be selected:

An object entity may be an entity selected among the entities contained in a plurality of annotated images. Object entities may be used to group images to learn facts about those object entities. In some embodiments, a server may select a plurality of images and assign annotations to those images.

A server may select an object entity based on the entity contained in the greatest number of annotated images as identified by the annotations.

For example, a group of 50 images may be assigned annotations that identify George Washington in 30 of those images. Accordingly, a server may select George Washington as the object entity if 30 out of 50 annotated images is the greatest number for any identified entity.
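The selection step in that example can be sketched in a few lines of Python. This is only my reading of the patent's description (pick the entity identified in the greatest number of annotated images), not Google's actual implementation.

```python
from collections import Counter

def select_object_entity(annotated_images):
    """Select the object entity: the entity identified in the greatest
    number of annotated images. Input is one set of annotations per
    image. A sketch of the patent's description, not Google's code."""
    counts = Counter()
    for annotations in annotated_images:
        counts.update(set(annotations))  # count each entity once per image
    entity, _ = counts.most_common(1)[0]
    return entity

# Hypothetical data mirroring the example: 30 of 50 images
# are annotated with George Washington.
images = ([{"George Washington", "Martha Washington"}] * 20
          + [{"George Washington"}] * 10
          + [{"grass"}] * 20)
print(select_object_entity(images))  # George Washington
```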

Confidence scores may also be determined for annotations. A confidence score indicates the likelihood that the entity identified by an annotation is actually contained in an image; it "quantifies a level of confidence in an annotation being accurate." That score could be calculated using a template matching algorithm, in which the annotated image is compared with a template image.

Defining Attribute Entities When Google May Annotate Images

An attribute entity may be an entity that is among the entities contained in images that contain the object entity. They are entities other than the object entity.

Annotated images that contain the object entity may be grouped and an attribute entity may be selected based on what entity might be contained in the greatest number of grouped images as identified by the annotations.

So, a group of 30 annotated images containing object entity “George Washington” may also include 20 images that contain “Martha Washington.”

In that case, "Martha Washington" may be considered an attribute entity.

(Of course, "Martha Washington" could be the object entity, and "George Washington," appearing in a number of the "Martha Washington" labeled images, could be considered the attribute entity.)
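The grouping step can be sketched the same way: collect the images containing the object entity, then pick the entities that co-occur most often. The 50% cutoff below is my own placeholder, since the patent only says "top annotations" are selected; this is a sketch, not Google's implementation.

```python
from collections import Counter

def attribute_entities(annotated_images, object_entity, threshold=0.5):
    """Group images containing the object entity, then return entities
    co-occurring in at least `threshold` of them. The threshold is a
    placeholder; the patent speaks only of selecting top annotations."""
    group = [a for a in annotated_images if object_entity in a]
    counts = Counter()
    for annotations in group:
        counts.update(set(annotations) - {object_entity})
    cutoff = threshold * len(group)
    return [e for e, n in counts.items() if n >= cutoff]

# Hypothetical data mirroring the example: 20 of the 30
# George Washington images also contain Martha Washington.
images = ([{"George Washington", "Martha Washington"}] * 20
          + [{"George Washington"}] * 10)
print(attribute_entities(images, "George Washington"))
# ['Martha Washington']
```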

Inferring Relationships Between Entities by Analyzing Images

If more than a threshold number of images of "Michael Jordan" contain a basketball in his hand, a relationship between "Michael Jordan" and basketball might be inferred (that Michael Jordan is a basketball player).

From analyzing images of bears hunting for fish in water, and roaming around on grassy fields, some relationships between bears and fish and water and grass can be made also:

inferences between entities

By analyzing images of Michael Jordan with a basketball in his hand wearing a Chicago Bulls jersey, a search query asking a question such as “What basketball team does Michael Jordan play for?” may be satisfied with the answer “Chicago Bulls”.

To answer a query such as "What team did Michael Jordan play basketball for?", Google could perform an image search for "Michael Jordan playing basketball". Having images that contain the object entity of interest allows them to be analyzed and an answer provided. See the picture at the top of this post, showing Michael Jordan in a Bulls jersey.

Takeaways

This process to collect and annotate images can be done using any images found on the Web, and isn’t limited to images that might be found in places like Wikipedia.

Google can analyze images online in a way that scales web-wide, and that analysis may provide insights a knowledge graph might not. To answer the question "where do grizzly bears hunt?", for example, an analysis of photos reveals that they like to hunt near water so that they can eat fish.

The confidence scores in this patent aren’t like the association scores in the other patents about entities that I wrote about, because they are trying to gauge how likely it is that what is in a photo or image is indeed the entity that it might then be labeled with.

The association scores that I wrote about were trying to gauge how likely relationships between entities and attributes might be more likely to be true based upon things such as the reliability and popularity of the sources of that information.

So, Google is trying to learn about real-world objects (entities) by analyzing pictures of those entities when it may annotate images (ones that it has confidence in), as an alternative way of learning about the world and the things within it.


Copyright © 2020 SEO by the Sea ⚓.

The post How Google May Annotate Images to Improve Search Results appeared first on SEO by the Sea ⚓.




Adjust Your Small Business Strategy With Google Analytics

June 24, 2020

Do you work with/for an SMB that’s struggling with low-funnel ads? Use Google Analytics to redefine your goals to make data-driven, informed decisions.

Read more at PPCHero.com


Google Podcasts App and Making Podcasts Easier to Find

June 21, 2020

Podcasts Can Be Hard to Find

I've been listening to a lot of podcasts lately. They can be fun to listen to while doing chores around the house, like watering plants, washing dishes, cooking meals, and cleaning up. There are podcasts on many different subjects that I am interested in, including a good number about search engine optimization.

Someone asked me on Twitter recently if I had seen any patents about podcasts. I hadn't at the time, and I told them so. Then a patent application appeared, published on January 9, 2020. I returned to the tweet, replied that I had found a new one, and said I would be writing about it. This is that post.

I am not the only one listening to more podcasts. TechCrunch ran an article last year about the growth of podcast audiences: After a Breakout Year: looking ahead to the future of Podcasting.

It seems Google noticed this trend and has worked on making podcasts easier to find in search results and by releasing a Google Podcasts app.

Google Tries to Make Podcasts Easier to Find

At Google's blog, The Keyword, Zack Reneau-Wedeen, Product Manager at Google Podcasts, published a post last August called: Press play: Find and listen to podcast episodes on Search

If you produce a podcast or are looking for one to listen to, you may find this article from last autumn helpful: Google will start surfacing individual podcast episodes in search results.

It tells us that:

Google is taking the next step in making podcasts easier to find. The company will now surface individual podcast episodes in search results, so if someone searches for a show about a niche topic or an interview with a specific person, Google will show them potential podcast episodes that fit their query.

Google Search Help has a page about finding podcasts titled Listen to podcasts with Google Podcasts

There are also Google Developer pages about how to submit your podcasts so they can be found using Google: the Google Podcasts page offers guidelines, podcast information management, and troubleshooting for Google Podcasts.

The Google Play Music Help pages offer information about using that service to subscribe and listen to podcasts.

There are also Google Podcast Publisher Tools, which let you submit your podcast to be found in the Google Podcasts app and preview it as it would appear there.

The Google Podcasts App is at: Google Podcasts: Discover free & trending podcasts

How the New Podcast Patent Application Ranks Shows and Episodes

The new Google patent application covers “identifying, curating, and presenting audio content.” That includes audio such as radio stations and podcasts.

The application starts with this statement:

Many people enjoy listening to audio content, such as by tuning to a radio show or subscribing to a podcast and playing a podcast episode. For example, people may enjoy listening to such audio content during a commute between home and work, while exercising, etc. In some cases, people may have difficulty identifying specific content that they would enjoy listening to, such as specific shows or episodes that align with their interests. Additionally, in some cases, people may have difficulty finding shows or episodes that are of a duration that is convenient for them to listen to, such as a duration that aligns with a duration of a commute.

It focuses on solving a specific problem: people having trouble identifying audio content they would enjoy listening to.

The method this patent describes for presenting audio content includes:

  • Seeing categories of audio content
  • Being able to select one of those categories
  • Seeing shows based upon that selected category
  • Being able to select from the shows in that category
  • Seeing episodes from those shows
  • Being able to select an episode, and seeing the duration of playing time for each one
  • Ranking the episodes
  • Seeing the episodes in order of ranking.

Rankings are based on a likelihood that a searcher might enjoy the episodes being ranked.

The episodes can also be shown based upon a measure of popularity.

The episodes may also be shown based upon how relevant they might be to a searcher.
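To make the interplay of these signals concrete, here is a rough Python sketch of such a ranking. The field names, weights, and scores are my own illustrative assumptions, not values from the patent:

```python
def rank_episodes(episodes, w_pop=0.4, w_enjoy=0.4, w_rel=0.2):
    """Order candidate episodes best-first by a weighted blend of
    popularity, likelihood of enjoyment, and relevance to the searcher."""
    def score(ep):
        return (w_pop * ep["popularity"]
                + w_enjoy * ep["enjoyment"]
                + w_rel * ep["relevance"])
    return sorted(episodes, key=score, reverse=True)

episodes = [
    {"title": "Top-chart show", "popularity": 0.9, "enjoyment": 0.3, "relevance": 0.1},
    {"title": "Niche interview", "popularity": 0.2, "enjoyment": 0.9, "relevance": 0.8},
]
ranked = rank_episodes(episodes)
```

Changing the weights changes which episodes surface first, which matches the patent's point that ranking can blend general popularity with per-user signals.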

The identification of a group of candidate episodes is based on an RSS feed associated with shows in the subset of shows.

The patent application about podcasts at Google is:

Methods, Systems, and Media for Identifying, Curating, and Presenting Audio Content
Inventors Jeannette Gatlin, Manish Gaudi
Applicants Google LLC
Publication Number 20200012476
Filed: July 3, 2019
Publication Date January 9, 2020

The methods described in the patent cover podcasts and can apply to other types of audio content, such as:

  • Music
  • Radio shows
  • Any other suitable type of audio content
  • Television shows
  • Videos
  • Movies
  • Any other suitable type of video content

The patent describes several techniques for finding podcasts.

A group of candidate shows, such as podcasts, is selected using factors like:

  • Popularity
  • Inclusion of evergreen content relevant to a listener
  • Related to categories or topics that are of interest to a particular user

Recommendations of shows look at whether a show:

  1. Is associated with episodic content or serial content.
  2. Typically includes evergreen content (e.g., content that is generally relevant at a future time) or whether the show will become irrelevant at a predetermined future time
  3. Is likely to include news-related content based on whether a tag or keyword associated with the show includes “news.”
  4. Has tags indicating categories or topics associated with the show.
  5. Has tags indicating controversial content, such as mature language, related to particular topics, and/or any other suitable type of controversial content
  6. Has previously assigned categories or topics associated with a show that are accurate.
  7. Has episodes likely to include advertisements (e.g., pre-roll advertisements, interstitial advertisements, and/or any other suitable types of advertisements).
  8. Has episodes that are likely to include standalone segments that can be viewed or listened to individually without viewing the rest of an episode of the show.
  9. Has episodes often with an opening monologue.
  10. Has episodes featuring an interview in the middle part of an episode.
  11. Features episodic content instead of serial content, so it does not require viewing or listening to one episode before another.
  12. Is limited in relevance based on a date (after the fact).
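A few of the tag-based checks above (items 2, 3, and 11) could be sketched as simple predicates over a show's tags. The tag names here are hypothetical, chosen only for illustration:

```python
def classify_show(tags):
    """Derive recommendation signals from a show's tags (hypothetical tag names)."""
    tag_set = {t.lower() for t in tags}
    return {
        "news_related": "news" in tag_set,       # item 3: a "news" tag or keyword
        "evergreen": "evergreen" in tag_set,     # item 2: content relevant at a future time
        "episodic": "episodic" in tag_set and "serial" not in tag_set,  # item 11
    }

signals = classify_show(["News", "Episodic", "Politics"])
```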

Human evaluators can identify episodes based upon features such as:

  • General popularity
  • Good audio quality
  • Associated with particularly accurate keywords or categories
  • Any other suitable manner

Some podcasts may have a standalone segment within an episode that may feature:

  • A monologue
  • An interview
  • Any other suitable standalone segment

That standalone segment could be trimmed as a new episode and included to be selected with the other episodes.

Blacklisted Content

Episodes that are deemed too long in duration could be blacklisted, or deemed not suitable for selection as a candidate episode.

An episode that contains adult-oriented content may be blacklisted from being presented to a user during daytime hours, based on parental controls.

An episode containing a particular type of content may be blacklisted from being presented to a user during weekdays, based on user preferences (e.g., particular topics for presentation on the weekdays as opposed to particular topics for presentation on the weekends).
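Those three blacklisting rules could look something like this in code. The duration cutoff, daytime window, and field names are assumptions for illustration:

```python
MAX_DURATION_MIN = 180  # assumed cutoff for "too long in duration"

def is_blacklisted(episode, hour, is_weekday, weekday_blocked_topics):
    """Apply the three blacklisting rules described in the patent sketch."""
    if episode["duration_min"] > MAX_DURATION_MIN:
        return True                      # too long in duration
    if episode.get("adult") and 6 <= hour < 21:
        return True                      # adult content during daytime hours
    if is_weekday and episode.get("topic") in weekday_blocked_topics:
        return True                      # topic restricted on weekdays
    return False

ep = {"duration_min": 45, "adult": False, "topic": "true crime"}
```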

Ranking of Candidate Episodes

Ranking can be based upon:

  • Popularity
  • Likelihood of enjoyment
  • Previous listening history
  • Relevance to previously listened-to content
  • Audio quality
  • Review by human evaluators

The patent tells us that this process can rank the subset of the candidate episodes in any suitable manner and based on any suitable information.

It can be based on a popularity metric associated with a show corresponding to each episode and/or based on a popularity metric associated with the episode.

That popularity metric may also be based on any suitable information or combination of information, such as:

  • A number of subscriptions to the show
  • A number of times a show and/or an episode has been downloaded to a user device
  • A number of times links to a show have been shared (e.g., on a social networking service, and/or in any other suitable manner)
  • Any other suitable information indicating popularity.
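Here is one hedged guess at how a popularity metric might combine those signals. The log scaling and the weights are my assumptions, not the patent's:

```python
import math

def popularity_metric(subscriptions, downloads, shares,
                      w_sub=0.5, w_dl=0.3, w_share=0.2):
    """Blend subscription, download, and share counts into one score.
    log1p damps very large counts so one signal cannot dominate."""
    return (w_sub * math.log1p(subscriptions)
            + w_dl * math.log1p(downloads)
            + w_share * math.log1p(shares))

score = popularity_metric(subscriptions=10_000, downloads=250_000, shares=1_200)
```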

This process can also rank the subset of the candidate episodes based on a likelihood that a particular user of a user device will enjoy the episode.

That likelihood can be based on previous listening history, such as:

  • How relevant a category or topic of the episode is to categories/topics of previously listened to episodes (Is it associated with a show the user has previously listened to?)
  • How many times the user has previously listened to other episodes associated with the show
  • Any other suitable information related to listening history

This process can also rank candidate episodes based on the audio quality of each episode.

Alternatively, this process may also rank candidate episodes based on whether each episode has been identified by a human evaluator, and episodes that have been identified by human evaluators are ranked higher than other episodes.

A combined episode score might be based upon a score from:

  • A trusted listener
  • The audio quality
  • The content quality
  • The popularity of the show from which the episode originates
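A combined score like that might be a simple weighted sum of the four components. The equal weights below are an assumption; the patent does not specify them:

```python
DEFAULT_WEIGHTS = {"trusted": 0.25, "audio": 0.25, "content": 0.25, "show_pop": 0.25}

def combined_episode_score(trusted, audio, content, show_pop, weights=DEFAULT_WEIGHTS):
    """Weighted combination of the four component scores (each in 0..1)."""
    return (weights["trusted"] * trusted
            + weights["audio"] * audio
            + weights["content"] * content
            + weights["show_pop"] * show_pop)
```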

Takeaways

This patent appears to focus primarily upon how podcasts might be ranked on the Google Podcasts App, rather than in Google search results.

The podcasts app isn’t as well known as some of the other places to get podcasts such as iTunes.

I am curious about how many podcasts are being found in search results. I’ve been linking to ones that I’ve been a guest in from the about page on this site, and that helps many of them show up in Google SERPs on a search for my name.

I guess making podcasts easier to find in search results can be similar to making images easier to find, by the text on the page that they are hosted upon, and the links to that page as well.

SEO Industry Podcasts


I thought it might be appropriate if I ended this post with several SEO Podcasts.

I’ve been a guest on many podcasts and have been involved in a couple over the past few years. I’ve also been listening to more of them with some frequency, both about SEO and other topics. I decided to list some of the ones that I have either been a guest on or have listened to a few times. They are in no particular order.

Experts On The Wire Podcast

Hosted by Dan Shure. Dan interviews different guests every week about different aspects of SEO and Digital Marketing. I’ve been on a couple of podcasts with Dan and enjoyed answering the questions he asked, and I have listened to him interview others on the show as well. There are some great takeaways to learn from in the interviews I have listened to.

Search News You Can Use – SEO Podcast with Marie Haynes

A weekly podcast about Google algorithm updates, plus news and articles from the digital marketing industry. This is a good way to keep informed about what is happening in SEO. She provides some insights into how to deal with updates and changes at Google.

Webcology

Jim Hedger and Dave Davies have been running this podcast for a few years, and I’ve been a guest on it about 4-5 times. They discuss a lot of current industry news and invite guests to the show to talk about those. My last guest appearance was with David Harry, where we talked about what we thought were the most interesting search-related patents of the last year.

The Search Engine Journal Show!

Danny Goodwin, Brent Csutoras, Greg Finn, and Loren Baker take turns hosting and talking with guests from the world of SEO. No two SEOs do things the same way, and learning about the differences in what they do can be interesting.

Edge of the Web Podcast

Erin Sparks hosts a weekly show about Internet Marketing, and he takes an investigative approach, asking some in-depth and interesting questions.

Search Talk Live Digital Marketing Podcast

Hosted by Robert O’Haver, Matt Weber, and Michelle Stinson Ross. They offer “Expert Advice on SEO and SEM.” I had fun talking with these guys; I just listened to half of my last appearance on the show.

The Recipe For SEO Success Show

Kate Toon is the host of this show, and she focuses on actionable tips and suggestions from guests on doing digital marketing.

Last Week in Local

Hosted by Mike Blumenthal, Carrie Hill, and Mary Bowling. They often discuss news and articles that focus on local search, but also discuss topics that have a broader impact on sites such as image optimization.

#AEO is SEO Podcast

This is hosted by Jason Barnard. The “AEO” in the title is “Answer Engine Optimization” and Jason has been attending conferences to give him a chance to interview people for his podcast. The last time we did a show it was in a bakery across the street from my hotel in a suburb of Paris, talking about Entities at Google.

Connecting the Digital Dots

Martha van Berkel is the host of this show and is one of the people behind Schemaapp. She and I talked about featured snippets.

Search Engine Roundtable Vlog

Barry Schwartz runs Search Engine Roundtable, named after the Round Table that knights sat at in the tales of King Arthur. In this vlog, he visits people where they work and asks them questions about what they do. It’s fun seeing where people are from and learning more about them.

Bill and Ammon’s Bogus Hangout

This is a weekly conversation between several SEOs, often about marketing and SEO, but sometimes veering off into different topics. It takes inspiration from the early days of SEO, when conferences such as Pubcon often included meetups in bars, with people sharing stories about what they had been doing. I am one of the hosts, and recently I’ve been joined by Doc Sheldon, Terry van Horne, Zara Altair, and Steve Gerencser.

Page 2 Podcast

Hosted by Jacob Stoops and Jeff Louella. They have guests join them from the world of SEO and ask them about their origin stories as SEOs. They have added a news section to the show as well.

Deep Crawl’s Open Dialogue

These shows feature interviews with some sharp and interesting SEOs and provide details on tips and techniques involving digital marketing and technical SEO.

SEO Training Dojo

With David Harry, and Terry van Horne. The Dojo is a center for training and learning SEO. It often includes guests who have been sharing ideas and approaches about SEO for years.


Copyright © 2020 SEO by the Sea ⚓. This Feed is for personal non-commercial use only. If you are not reading this material in your news aggregator, the site you are looking at may be guilty of copyright infringement. Please contact SEO by the Sea, so we can take appropriate action immediately.
Plugin by Taragana

The post Google Podcasts App and Making Podcasts Easier to Find appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


Google Product Search and Learning about New Product Lines

June 17, 2020 No Comments

It’s interesting seeing patents from Google that focus on eCommerce topics. The last one I recall had Google distinguishing between products and accessories for those products in search results. I wrote about it in Ranking Search Results and Product Queries.

New Product Lines in Product Search

A new patent from Google is about when new products appear in existing product lines, like a laptop that comes with more RAM or a bigger hard drive, or a camera with a zoom lens that it didn’t have before.

This patent is about determining in product search whether a query is looking for a particular product line, from within a specific brand.

Searchers frequently search for products offered for sale. Google is trying to understand the intent behind shopping-related search queries.

For Google to be able to do that well, it has to understand different aspects of product categories. This can include such things as:

  • Whether a product has an association with a brand
  • Whether a product is in a specific product line

The patent tells us it is essential to detect terms designating product lines from within product queries from searchers.

That includes associating detected product line terms with their corresponding brands, letting Google keep up with new product lines and retire old ones soon after changes occur.

Under the new Google patent is a process aimed at determining product lines from product search queries:

  • A product query might be classified to identify a product category
  • A brand may be identified for the product query
  • The brand may be chosen from a list of known brands for the product category

Unknown Product Lines

The patent tells us that unknown product line terms may be identified within a product query.

A metric may indicate how well the unknown product line terms correspond to an actual product line within the brand.

The metric may be compared to a specified threshold. The unknown product line terms may be designated as a new product line of the brand if the metric compares favorably to the specified threshold.

A product search may be performed using the product query. Product search results may be returned according to the product search.
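A minimal sketch of that threshold test, assuming (my simplification, not the patent's formula) that the metric is simply the share of logged queries containing the unknown terms that also contain the brand:

```python
def designate_product_line(terms, brand, query_log, threshold=0.8):
    """Designate `terms` as a new product line of `brand` when the share
    of queries containing `terms` that also contain the brand clears
    the threshold. Metric and threshold are illustrative assumptions."""
    with_terms = [q for q in query_log if terms in q]
    if not with_terms:
        return False
    metric = sum(1 for q in with_terms if brand in q) / len(with_terms)
    return metric >= threshold

log = ["acme zoomline camera", "acme zoomline 200", "acme zoomline lens",
       "brandx ultrapix 4k"]
```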

This product lines patent can be found at:

Detecting product lines within product search queries
Inventors: Ritendra Datta
Assignee: GOOGLE LLC
US Patent: 10,394,816
Granted: August 27, 2019
Filed: December 27, 2012

Abstract

Systems and methods can determine product lines from product searches.

One or more computing devices can receive a product query of search terms. The product query may be classified to identify a product category. A brand may be identified for the product query. The brand may be selected from a list of known brands for the product category.

One or more unknown product line terms may be identified within the product query. A metric may be computed to indicate how well the unknown product line terms correspond to an actual product line within the brand. The metric may be compared to a specified threshold. The unknown product line terms may be designated as a new product line of the brand if the metric favorably compares to the specified threshold. A product search may be performed on the product query. Product search results may be returned according to the product search.

High Precision Query Classifiers

This patent shows Google trying to identify new products and product lines, so it can distinguish them from older product lines.

Interestingly, Google is looking at search queries to identify products and product lines. As the patent tells us:

Product lines associated with product brands may be determined from analyzing the received product search queries.

The patent refers to a “high-precision query classifier,” which is the first time I have seen that mentioned anywhere at all.

How does a “high precision query classifier” work?

As described in this patent:

  • A search query may be automatically mapped to a product category
  • A list of known brands within the product category may be used to identify terms within the product query specifying the product brand
  • Similarly, a list of known category attributes may be used to identify terms within the product query specifying attributes of the product being searched

Attributes of Products


The patent provides some examples of attributes for products:

  • A number of megapixels for digital cameras
  • An amount of RAM memory for laptop computers
  • A number of cylinders for a motor vehicle

Product Query Forms

We are told that the forms that a product query may take may vary a bit, but we are provided with some examples.

A product query could take the form “[B] [PL] [A].”

In such a query form, one or more terms [B] may indicate a brand that is a known brand within a list of known product brands, and one or more terms [A] may indicate attributes that are known attributes of the category. One or more unknown terms [PL] may then be identified as a potential new product line. Such an identification may be strengthened where [PL] is in a form associated with product lines. The identification may also be strengthened where [PL] is found with brand [B] frequently over time within various product queries. The identification may be further strengthened where the terms [PL] are infrequently, or never, found with brands other than the brand [B] throughout many product queries over time.

A metric is calculated by comparing what might be the attributes of products from a new product line, with attributes of an actual product line associated with a brand.

This metric may consider the number of unique product queries containing the terms [PL] having the correct structure and/or category along with the extent to which [B] dominates among every query that has a brand preceding [PL].
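Here is a hypothetical parse of a “[B] [PL] [A]” query using the known-brand and known-attribute lists the patent mentions. The dictionaries and the token-by-token split are simplified assumptions:

```python
KNOWN_BRANDS = {"acme"}                       # stand-in for the product brand dictionary
KNOWN_ATTRIBUTES = {"16gb", "zoom", "red"}    # stand-in for the category attribute dictionary

def parse_product_query(query):
    """Split a '[B] [PL] [A]' query into a brand, candidate product-line
    terms, and known attributes."""
    tokens = query.lower().split()
    brand = tokens[0] if tokens and tokens[0] in KNOWN_BRANDS else None
    rest = tokens[1:] if brand else tokens
    attributes = [t for t in rest if t in KNOWN_ATTRIBUTES]
    product_line = [t for t in rest if t not in KNOWN_ATTRIBUTES]
    return brand, " ".join(product_line), attributes

brand, product_line, attributes = parse_product_query("acme zoomline 16gb")
```

The leftover middle tokens become the candidate product line, which would then be scored against the query log as described above.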

Why would Google be looking at queries to learn about new product lines from brands, instead of from product pages that describe the attributes of products?

Identifying Product Lines

How this identification process may work:

  • Software for product line resolution may identify product lines associated with brands for product categories determined by the query classifier
  • Product line resolution may use a category attribute dictionary and a product brand dictionary to establish pairings between brands and product lines
  • The product query and the determined brands and product lines may then be provided to a product search engine
  • The product search engine may then provide search results to the searcher
  • The query classifier may map the product query to a product category
  • Product line resolution can use product category information with the category attribute dictionary and the product brand dictionary to identify terms from the product query that relate to product lines
  • The unknown terms identified by the product line resolution module for a category may be fed back into the category attribute dictionary as attributes for that category
  • Each identified product line may also be related to a particular brand listed in the product brand dictionary
  • The product brand dictionary can provide a list of known brands within various product categories
  • The known brands may be used to determine and resolve terms associated with product lines within each brand
  • The product line terms may then be used to identify a potential new product line

The identification of a new product line may be strengthened:

  • When unknown terms information is in a form associated with product lines
  • Where the unknown terms are found with a brand frequently over time within various product queries
  • Where the unknown terms are infrequently, or never, found with brands other than the identified brand throughout many product queries over time

Identifying When Unknown Terms May Be in a Form Associated with Product Lines

Here are some observations about the form of product lines:

  • Product line terms generally start with a letter
  • Product lines generally contain few or no numbers (differentiating product line terms from model numbers or serial numbers)
  • Product lines may be related to a category or a brand (one brand may generally have single-word product lines, while a second brand may use two-word product lines where the first word relates to performance and the second word is a three-digit number)

These kinds of patterns or forms about product lines could be used to associate unknown terms within a product query as product line terms.
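Those form heuristics reduce to a couple of simple checks. The digit threshold here is an assumption:

```python
import re

def looks_like_product_line(term, max_digits=1):
    """Heuristic form check: product-line terms start with a letter and
    contain few or no digits (model and serial numbers are digit-heavy)."""
    if not term or not term[0].isalpha():
        return False
    return len(re.findall(r"\d", term)) <= max_digits
```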

Using a Category Attribute Dictionary to Resolve Product Line Terms within Product Queries

The category attribute dictionary can provide a dictionary of attributes associated with various product categories and brands.

Terms from the category attribute dictionary may be used to resolve product line terms within the product query.

When unknown terms are often found within product queries along with brand information, those unknown terms could be seen as product line terms associated with a specific brand. That identification is strengthened when known attribute terms from the category attribute dictionary are consistent with brand [B] or with the category associated with the product query by the query classifier.

Product Query Processing

The patent includes a flowchart describing the process behind product query processing.

Where Does Google Learn About Product Lines?

The patent doesn’t mention product schema, or merchant product feeds. It does tell us that it is getting a lot of information about product lines from searchers’ queries.

Google also collects information about products and product attributes from web sites that sell those products, in addition to looking at product queries, as described in this patent.

Collecting such information from site owners may be the starting source of much information found in the product and category dictionaries and product attribute categories that are mentioned in this patent.

The process of updating information about products and product lines from product queries from searchers is a way to crowdsource information about products from searchers and get an idea of how much interest there might be in specific products.

Google can learn a lot about products from product data feeds that merchants submit to Google. Google is trying to get merchants to submit product feeds even if they don’t use paid product search, to make those products visible in more places through Surfaces across Google, as described on this Google Support page: Show your products on Surfaces Across Google.

We saw that Google is using product feed information to help it distinguish between product pages and accessory pages for those products as I wrote about in the blog post I linked to at the start of this post.

Google also describes product markup on their developers page Product. Google tells site owners that they should include that markup for their products because:

Product markup enables a badge on the image in mobile image search results, which can encourage more users to click your content.

By collecting information about products from product feeds, Product Schema, product web pages, and product queries from searchers, Google is gathering a lot of data about products, which could enable it to be pretty good at providing answers to product queries and to understand when new product lines are launched.



The post Google Product Search and Learning about New Product Lines appeared first on SEO by the Sea ⚓.




Google Cloud launches Filestore High Scale, a new storage tier for high-performance computing workloads

June 16, 2020 No Comments

Google Cloud today announced the launch of Filestore High Scale, a new storage option — and tier of Google’s existing Filestore service — for workloads that can benefit from access to a distributed high-performance storage option.

With Filestore High Scale, which is based on technology Google acquired when it bought Elastifile in 2019, users can deploy shared file systems with hundreds of thousands of IOPS, 10s of GB/s of throughput and at a scale of 100s of TBs.

“Virtual screening allows us to computationally screen billions of small molecules against a target protein in order to discover potential treatments and therapies much faster than traditional experimental testing methods,” says Christoph Gorgulla, a postdoctoral research fellow at Harvard Medical School’s Wagner Lab., which already put the new service through its paces. “As researchers, we hardly have the time to invest in learning how to set up and manage a needlessly complicated file system cluster, or to constantly monitor the health of our storage system. We needed a file system that could handle the load generated concurrently by thousands of clients, which have hundreds of thousands of vCPUs.”

The standard Google Cloud Filestore service already supports some of these use cases, but the company notes that it specifically built Filestore High Scale for high-performance computing (HPC) workloads. In today’s announcement, the company specifically focuses on biotech use cases around COVID-19. Filestore High Scale is meant to support tens of thousands of concurrent clients, which isn’t necessarily a standard use case, but developers who need this kind of power can now get it in Google Cloud.

In addition to High Scale, Google also today announced that all Filestore tiers now offer beta support for NFS IP-based access controls, an important new feature for those companies that have advanced security requirements on top of their need for a high-performance, fully managed file storage service.


Enterprise – TechCrunch


When To Test Google Ads Portfolio Bidding

June 9, 2020 No Comments

In this post, I’ll break down portfolio bidding and 5 different campaign structure scenarios when I tested it.

Read more at PPCHero.com
PPC Hero


Disambiguating Image Queries at Google

June 7, 2020 No Comments

Better Understanding Image Queries

Years ago, I wouldn’t have expected a search engine to tell a searcher about objects in a photograph or video, but search engines have been evolving and getting better at what they do.

In February, Google was granted a patent that helps answer image queries by identifying objects in photographs and videos. A search engine may have trouble understanding what a human is asking in a natural language query, and this patent focuses on disambiguating image queries.

The patent provides the following example:

For example, a user may ask a question about a photograph that the user is viewing on the computing device, such as “What is this?”

The patent tells us that the process in it may be used for image queries, text queries, video queries, or any combination of those.

In response to a searcher asking to identify image queries, a computing device may:

  • Capture a respective image that the user is viewing
  • Transcribe the question
  • Transmit that transcription and the image to a server

The server may receive the transcription and the image from the computing device, and:

  • Identify visual and textual content in the image
  • Generate labels for images in the content of the image, such as locations, entities, names, types of animals, etc.
  • Identify a particular sub-image in the image, which may be a photograph or drawing

The Server may:

  • Identify part of a particular sub-image that may be of primary interest to a searcher, such as a historical landmark in the image
  • It may perform image recognition on the particular sub-image to generate labels for that sub-image
  • It may also generate labels for text in the image, such as comments about the sub-image, by performing text recognition on a part of the image other than the particular sub-image
  • It may then generate a search query based on the transcription and the generated labels
  • That query may then be provided to a search engine

The Process Behind Disambiguating a Visual Query

The process described in this patent includes:

  • Receiving an image presented on, or corresponding to, at least a part of a display of a computing device
  • Receiving a transcription of an utterance spoken by a searcher, when the image is being presented
  • Identifying a particular sub-image included in the image, and based on performing image recognition on the particular sub-image
  • Determining one or more first labels that show a context of the particular sub-image
  • Performing text recognition on a part of the image other than the particular sub-image
  • Determining one or more second labels showing the context of the particular sub-image, based on the transcription, the first labels, and the second labels
  • Generating a search query
  • Providing, for output, the search query
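The query-generation step at the end of that process might look roughly like this, with the recognizers stubbed out and the labels assumed to arrive most-confident first. This is a sketch under those assumptions, not Google's implementation:

```python
import re

def generate_search_query(transcription, labels):
    """Rewrite a vague spoken question using the top recognition label.
    `labels` is assumed to be sorted most-confident first."""
    if not labels:
        return transcription
    best = labels[0]
    # Replace the first vague demonstrative ("this", "that", "it") with the label.
    rewritten, n = re.subn(r"\b(this|that|it)\b", best, transcription,
                           count=1, flags=re.IGNORECASE)
    return rewritten if n else f"{transcription} {best}"

query = generate_search_query("What is this?", ["eiffel tower", "paris"])
```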


Other aspects of performing such image query searches may involve:

  • Weighting the first label differently than a second label: the search query may substitute one or more of the first labels or the second labels based upon terms in the transcription
  • Generating, for each of the first labels and the second labels, a label confidence score that indicates a likelihood that the label corresponds to a part of the particular sub-image that is of primary interest to the user
  • Selecting one or more of the first labels and second labels based on the respective label confidence scores, wherein the search query is based on the one or more selected first labels and second labels
  • Accessing historical query data including previous search queries provided by other users
  • Generating, based on the transcription, the first labels, and the second labels, one or more candidate search queries
  • Comparing the historical query data to the one or more candidate search queries
  • Selecting a search query from among the one or more candidate search queries, based on comparing the historical query data to the one or more candidate search queries

The method may also include:

  • Generating, based on the transcription, the first labels, and the second labels, one or more candidate search queries
  • Determining, for each of the one or more candidate search queries, a query confidence score that indicates a likelihood that the candidate search query is an accurate rewrite of the transcription
  • Selecting, based on the query confidence scores, a particular candidate search query as the search query
  • Identifying one or more images included in the image
  • Generating for each of the one or more images included in the image, an image confidence score that indicates a likelihood that an image is an image of primary interest to the user
  • Selecting the particular sub-image, based on the image confidence scores for the one or more images
  • Receiving data indicating a selection of a control event at the computing device, wherein the control event identifies the particular sub-image. (The computing device may capture the image and capture audio data that corresponds to the utterance in response to detecting a predefined hotword.)
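The comparison against historical query data could be sketched as scoring each candidate rewrite by word overlap with past queries. Scoring by shared words is a simplifying assumption on my part:

```python
def pick_search_query(candidates, historical_queries):
    """Return the candidate rewrite with the highest confidence, where
    confidence is the best word overlap with any historical query."""
    def confidence(candidate):
        words = set(candidate.lower().split())
        return max((len(words & set(h.lower().split()))
                    for h in historical_queries), default=0)
    return max(candidates, key=confidence)

best = pick_search_query(
    ["what is eiffel tower", "what is photo"],
    ["eiffel tower height", "eiffel tower tickets"],
)
```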

Further, the method may also include:

  • Receiving an additional image of the computing device and an additional transcription of an additional utterance spoken by a user of the computing device
  • Identifying an additional particular sub-image that is included in the additional image, based on performing image recognition on the additional particular sub-image
  • Determining one or more additional first labels that indicate a context of the additional particular sub-image, based on performing text recognition on a portion of the additional image other than the additional particular sub-image
  • Determining one or more additional second labels that indicate the context of the additional particular sub-image
  • Generating a command based on the additional transcription, the additional first labels, and the additional second labels, and performing the command

Performing the command can include:

  • Storing the additional image in memory
  • Storing the particular sub-image in the memory
  • Uploading the additional image to a server
  • Uploading the particular sub-image to the server
  • Importing the additional image to an application of the computing device
  • Importing the particular sub-image to the application of the computing device
  • Identifying metadata associated with the particular sub-image, wherein determining the one or more first labels that indicate the context of the particular sub-image is based further on the metadata associated with the particular sub-image
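The command-handling branch enumerated above amounts to routing one of six actions over either the full image or the sub-image. A minimal dispatch-table sketch (the command names and callback parameters are my own illustrative choices, not terms from the patent):

```python
def perform_command(command, image, sub_image, store, upload, app_import):
    """Dispatch a recognized command to one of the actions enumerated in the
    patent: storing, uploading, or importing the full image or the sub-image."""
    actions = {
        "store_image": lambda: store(image),
        "store_sub_image": lambda: store(sub_image),
        "upload_image": lambda: upload(image),
        "upload_sub_image": lambda: upload(sub_image),
        "import_image": lambda: app_import(image),
        "import_sub_image": lambda: app_import(sub_image),
    }
    return actions[command]()

# Toy walkthrough: "storing" the sub-image just appends it to a list here.
stored = []
perform_command("store_sub_image", "screen.png", "photo_crop.png",
                stored.append, print, print)
```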

Advantages of following the image query process described in the patent can include:

  • The methods can determine the context of an image corresponding to a portion of a display of a computing device to aid in the processing of natural language queries
  • The context of the image may be determined through image and/or text recognition
  • The context of the image may be used to rewrite a transcription of an utterance of a user
  • The methods may generate labels that refer to the context of the image, and substitute the labels for portions of the transcription, such as “Where was this taken?”
  • The methods may determine that the user is referring to the photo on the screen of the computing device
  • The methods can extract information about the photo to determine the context of the photo, as well as a context of other portions of the image that do not include the photo, such as a location that the photo was taken
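To make the rewriting idea concrete, here is a hedged sketch that substitutes the highest-confidence label for an ambiguous referent such as “this”. The referent list and the `(text, score)` label format are assumptions for illustration; the patent does not prescribe a substitution mechanism:

```python
def rewrite_transcription(transcription, labels):
    """Replace an ambiguous referent like "this" with the highest-confidence label."""
    if not labels:
        return transcription
    best_label = max(labels, key=lambda item: item[1])[0]
    rewritten = []
    for word in transcription.split():
        core = word.rstrip("?.,!").lower()
        if core in {"this", "it", "that"}:
            rewritten.append(best_label + word[len(core):])  # keep trailing punctuation
        else:
            rewritten.append(word)
    return " ".join(rewritten)

# "Where was this taken?" becomes a query the search engine can answer directly.
rewritten = rewrite_transcription(
    "Where was this taken?",
    [("the Eiffel Tower", 0.9), ("a photo", 0.4)],
)
```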

This patent can be found at:

Contextually disambiguating queries
Inventors: Ibrahim Badr, Nils Grimsmo, Gokhan H. Bakir, Kamil Anikiej, Aayush Kumar, and Viacheslav Kuznetsov
Assignee: Google LLC
US Patent: 10,565,256
Granted: February 18, 2020
Filed: March 20, 2017

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextually disambiguating queries are disclosed. In an aspect, a method includes receiving an image being presented on a display of a computing device and a transcription of an utterance spoken by a user of the computing device, identifying a particular sub-image that is included in the image, and based on performing image recognition on the particular sub-image, determining one or more first labels that indicate a context of the particular sub-image. The method also includes, based on performing text recognition on a portion of the image other than the particular sub-image, determining one or more second labels that indicate the context of the particular sub-image, based on the transcription, the first labels, and the second labels, generating a search query, and providing, for output, the search query.


Copyright © 2020 SEO by the Sea ⚓.

The post Disambiguating Image Queries at Google appeared first on SEO by the Sea ⚓.




And that’s really it for Google+

June 6, 2020 No Comments

Last year, Google launched the beta of Currents, which was essentially a rebrand of Google+ for G Suite users, since Google+ for consumers went to meet its maker in April 2019. While Google+ was meant to be an all-purpose social network, the idea behind Currents is more akin to what Microsoft is doing with Yammer or Facebook with Workplace. It’s meant to give employees a forum for internal discussions and announcements.

To complicate matters, Google kept Google+ around, even after the launch of Currents, but in an email to G Suite admins, it has now announced that Google+ for G Suite will close its doors on July 6, after which there will be no way to opt out of Currents or revert back to Google+.

And with that, Google has driven the final nail into Google+’s coffin. The Google+ mobile apps will be automatically updated to Currents. All existing links to Google+ will redirect to Currents.

Going forward, Google+ will only live on as a hazy memory, filled with circles of friends, all of whom were forced to use their real names (at least at the beginning), +1 buttons everywhere, sparks and the promise of fun games, ripples and more.

Currents is all business — and while I’m not aware of a lot of companies that use it, it looks to be a solid option for companies that would otherwise use the Yammer/Teams combination in the Microsoft ecosystem. Now, I guess, we can start the countdown before Google launches another social network.

If you want to take a stroll down memory lane, check out our history of Google+ below:


Enterprise – TechCrunch