
What five news-SEO experts make of Google’s new “Full Coverage” feature in mobile search results

March 24, 2021

30-second summary:

  • Google recently rolled out the “Full Coverage” feature for mobile SERPs
  • Will this impact SEO traffic for news sites, SEO best practices, and content strategies?
  • Here’s what in-house SEOs from the LA Times, The New York Times, and Conde Nast, plus prominent agency-side SEOs, foresee

Google’s “Full Coverage” update rolled out earlier this month – but what does it really mean for news SEOs? In-house SEOs from the LA Times, The New York Times, and Conde Nast, along with prominent agency-side SEOs, weigh in.

As a news-SEO person myself, I was eager to get my peers’ opinions on:

  • Whether this feature will result in greater SEO traffic for news sites
  • Whether editorial SEO best practices and content strategies will evolve because of it
  • Whether it will result in closer working relationships between SEO and editorial teams
  • Or whether everything will remain “business as usual”

ICYMI: Google’s new “Full Coverage” feature in mobile search

Google added the “Full Coverage” feature to its mobile search functionality earlier this month – with the aim of making it easier for users to explore content related to developing news stories from a diverse set of publishers, perspectives, and media slants.

Just below the “Top Stories” carousel, users will now begin seeing the option to tap into “Full Coverage”/“More news on…” for developing news stories. The news stories on this page are organized into a variety of subtopics (versus the one running list of stories we’re used to seeing), such as:

  • Top news
  • Local news
  • Beyond the headlines, and more

Take a look at the feature in action here:

Google's "Full Coverage" feature

Source: Google

While the concept of Google “Full Coverage” was developed back in 2018, it pertained strictly to the Google News site and app. The underlying technology, temporal co-locality, works by mapping the relationships between entities – understanding the people, places, and things in a story as it evolves – and then organizing articles around storylines, all in real time, to provide “full coverage” of the topic searched for.

The launch of Google’s new “Full Coverage” feature in mobile search, specifically, is exciting because it takes that technology a step further: it can detect long-running news stories that span anywhere from many days, like the Super Bowl, to many weeks or months, like the pandemic. The feature is currently available to English speakers in the U.S. and will be rolled out to additional languages and locations over the next few months.

What five news-SEO experts think about “Full Coverage” in mobile search

Lily Ray, Senior Director, SEO & Head of Organic Research at Path Interactive on Google's "Full Coverage" feature
Source: LinkedIn

1. Lily Ray, Senior Director, SEO & Head of Organic Research at Path Interactive

Lily Ray is a Senior SEO Director at Path Interactive in New York. She’s a prominent voice within the SEO community (with 15K+ followers on Twitter) and has been nominated for multiple search marketing awards throughout her career. She is well known for her E-A-T expertise. Here’s what she had to say:

“Full Coverage appears to be another new tool in Google’s arsenal for displaying a diversity of perspectives and viewpoints on recent news and events. It’s a good thing for publisher sites because it represents another opportunity to have news content surfaced organically. It may also serve as a way for niche or local publishers to gain more visibility in organic search, since Google is specifically aiming to show a broader range of viewpoints that may not always come across with the major publications.

Hopefully, Google will allow us to be able to monitor the performance of Full Coverage via either Search Console or Google Analytics, so we can segment out how our articles do in this area compared to in other areas of search.”

Louisa Frahm, SEO Editor at The LA Times on Google's "Full Coverage" feature
Source: LinkedIn

2. Louisa Frahm, SEO Editor at The LA Times

Louisa Frahm currently serves as the SEO Editor at the Los Angeles Times and is also pursuing a master’s degree in communication management at the University of Southern California. Prior to the LA Times, Frahm was an SEO strategist at other high-profile digital publications including Entertainment Weekly, People Magazine, TMZ, Yahoo!, and E! Online. Here’s her take:

“I’ve always liked that element of Google News. It taps into readers (like me!) who are consistently hungry for more information. 

Working in the journalism field, I’m always in favor of readers utilizing a diverse array of news sources. I’m glad that this new update will tap into that. I’m interested to see which stories will fall into the “develop over a period of time” criteria. I could see it working well for extended themes like COVID-19, but big breakout themes like Harry and Meghan could also potentially fit that bill. 

A wide variety of story topics have resulted from that Oprah interview, and fresh angles keep flowing in! As we’re in the thick of 2021 awards season, I could also see the Golden Globes, Grammys, and Oscars playing into this with their respective news cycles before, during, and after the events. 

The long-term aspect of this update inspires me to request more updates from writers on recurring themes, so we can connect with the types of topics this particular feature likes. Though pure breaking news stories with short traffic life cycles will always be important for news SEO, this feature reinforces the additional importance of more evergreen long-term content within a publisher’s content strategy. 

I could see this update providing a traffic boost, since it provides one more way for stories to get in front of readers. We always want as many eyeballs as possible on our content. Happy to add one more element to my news SEO tool kit. Google always keeps us on our toes!”

Barry Adams, Founder of Polemic Digital on Google's "Full Coverage" feature
Source: LinkedIn

3. Barry Adams, Founder of Polemic Digital

Barry Adams is the founder of SEO consultancy Polemic Digital. He has earned numerous search marketing awards throughout his career and has spoken at several industry conferences. His company has helped news and publishing companies such as The Guardian, The Sun, FOX News, and TechRadar, to name a few. This is his opinion:

“The introduction of Full Coverage directly into search results will theoretically mean there’s one less click for users to make when trying to find the full breadth of reporting on a news topic. 

Whether this actually results in significantly more traffic for publishers is doubtful. The users who are interested in reading a broad range of sources on a news story will already have adopted such click behaviour via the news tab or directly through Google News. 

This removal of one layer of friction between the SERP and a larger number of news stories seems more intended as a way for Google to emphasize its commitment to showing news from all kinds of publishers – the fact remains that the initial Top Stories box is where the vast majority of clicks happen. This Full Coverage option won’t change that.”

John Shehata, Global VP of Audience Development Strategy at Conde Nast on Google's "Full Coverage" feature
Source: LinkedIn

4. John Shehata, Global VP of Audience Development Strategy at Conde Nast, Founder of NewzDash News SEO

John Shehata is the Global VP of Audience Development Strategy at Conde Nast, the media company known for brands such as Architectural Digest, Allure, Vanity Fair, and Vogue. He’s also the founder of NewzDash News SEO – a news and editorial SEO tool that helps publishers and news sites boost their visibility and traffic in Google Search. This is his opinion:

“Google has been surfacing more news stories on their SERPs over the past few years: first, Top Stories was two to three links, then it became a 10-link carousel. Google then started grouping related stories together, expanding the Top Stories carousel from one to three, featuring up to 30 news stories. They also introduced local news carousels for some local queries, [and now, this new feature]. It is obvious that Google keeps testing different formats when it comes to news. One of our top news trends and predictions for 2021 is that Google will continue to introduce multiple different formats in the SERPs beyond Top Stories article formats.

As for the impact on traffic back to publishers, it is a bit early to predict, but I do not expect much boost in traffic. Do not get me wrong, this feature provides more chances for more publishers to be seen; the question is how many search users will click. And if users click, Google surfaces over 50 news links plus tweets, which makes it even more competitive for publishers to get clicks back to their stories.

I did some quick analysis back in July of last year when Google Search Console started providing News tab data. I found that News impressions are less than five percent of total web impressions. Not quite sure what the new “Full Coverage” feature’s CTR will be and how many users will click! The “Full Coverage” link placement is better than the tabs, though, so we might see higher CTR.”

Claudio Cabrera, Deputy Audience Director, News SEO at The New York Times on Google's "Full Coverage" feature
Source: LinkedIn

5. Claudio Cabrera, Deputy Audience Director, News SEO at The New York Times

Claudio Cabrera serves as the Deputy Audience Director of News SEO at the New York Times. He is an award-winning audience development expert, journalist, and educator. Prior to working at The New York Times, he was Director of Social and Search strategy at CBS Local. Here are his thoughts:

“It can be looked at in so many ways. Some brands will look at it as an opportunity to gain more visibility while some will feel their strong foothold may be lost. I think it just encourages better journalism and even better SEO because it forces us to think outside of our playbooks and adjust on some level to what we’re seeing Google provide users. 

From a site traffic perspective, I can’t really comment on whether this has affected us or not, but I do know there are so many other areas, like Discover, where sites have done serious research and testing, and where audiences can grow and be picked up if you do see a drop-off. I don’t think the best practices of SEO change too much, but I think the relationship between search experts and editors deepens and becomes even closer due to the changes in the algo.”

Conclusion

Google’s new “Full Coverage” feature in mobile search rolled out earlier this month and is an extension of the full coverage function developed for Google News back in 2018. The aim of the new feature is to help users gain a holistic understanding of complex news stories as they develop – by organizing editorial content in a way that goes beyond the top headlines and major media outlets, in essence giving users “full coverage” of an event.

News-SEO experts seem to be in agreement that this new feature will make it simpler for users to explore – and gain a holistic understanding of – trending news stories. As far as what this new feature means for SEO traffic and strategy, experts can only speculate until more developing news stories emerge and we can analyze impact. 

Elizabeth Lefelstein is an SEO consultant based in Los Angeles, California. She’s worked with a variety of high-profile brands throughout her career and is passionate about technical SEO, editorial SEO, and blogging. She can be found on LinkedIn and Twitter @lefelstein.

The post What five news-SEO experts make of Google’s new “Full Coverage” feature in mobile search results appeared first on Search Engine Watch.

Search Engine Watch


Running Smart Shopping Alongside Regular Shopping [Test Results]

October 7, 2020

One Paid Media Strategist discusses a Google Smart Shopping test and the results of running smart shopping campaigns alongside regular shopping campaigns.

Read more at PPCHero.com
PPC Hero


IPO mistakes, fintech results, and the Zenefits ‘mafia’

August 10, 2020

Welcome back to The TechCrunch Exchange, a weekly startups-and-markets newsletter for your weekend enjoyment. It’s broadly based on the weekday column that appears on Extra Crunch, but free. And it’s made just for you. You can sign up for the newsletter here.

With that out of the way, let’s talk money, upstart companies and the latest spicy IPO rumors. 

(In time the top bit of the newsletter won’t get posted to the website, so do make sure to sign up if you want the whole thing!)

BigCommerce isn’t worried about its IPO pricing

One of the most interesting disconnects in the market today is how VC Twitter discusses successful IPOs and how the CEOs of those companies view their own public market debuts.

If you read Twitter on an IPO day, you’ll often see VCs stomping around, shouting that IPOs are a racket and that they must be taken down now. But if you dial up the CEO or CFO of the company that actually went public to strong market reception, they’ll spend five minutes telling you why all that chatter is flat wrong.

Case in point from this week: BigCommerce. Well-known VC Bill Gurley was incensed that shares of BigCommerce opened sharply higher after they started trading, compared to their IPO price. He has a point, with the Texas-based e-commerce company pricing at $24 per share (above a raised range, it should be said) but opening at $68 and trading around $88 on Friday as I write to you.

So, when I got BigCommerce CEO Brent Bellm on Zoom after its debut, I had some questions. 

First, some background. BigCommerce filed confidentially back in 2019, planned on going public in April, and wound up delaying its offering due to the pandemic, according to Bellm. Then in the wake of COVID-19, sales from existing customers went up, and new customers arrived. So, the IPO was back on.

BigCommerce, as a reminder, is seeing growth acceleration in recent quarters, making its somewhat modest growth rate more enticing than you’d otherwise imagine.

Anyhoo, the company was worth more than 10x its annual run-rate at its IPO price if I recall the math, so it wasn’t cheap even at $24 per share. And in response to my question about pricing, Bellm said that he was content with his company’s final IPO price.

He had a few reasons, including that the IPO price sets the base point for future return calculations, that he measures success based on how well investors do in his stock over a ten-year horizon, and that the more long-term investors you successfully lock in during your roadshow, the smaller your first-day float becomes; the more investors that hold their shares after the debut, the more the supply/demand curve can skew, meaning that your stock opens higher than it otherwise might due to only scarce equity being up for purchase.

All that seems incredibly reasonable. Still, VCs are livid. 

Market Notes

The Exchange spent a lot of time on the phone this week, leading to a host of notes for your consumption. And there was a deluge of interesting data. So, here’s a digest of what we heard and saw that you should know:

  • Fintech mega-rounds are heating up, with 28 in the second quarter of 2020. Total fintech rounds dipped, but it appears that the sky is still pretty much afloat for financial technology startups.
  • Tech stocks set new records this week, something that has become so common that the new all-time highs for the Nasdaq didn’t really create a ripple. Hell, it’s Nasdaq 11,000, where’s our gosh darn party?
  • Axios’ Dan Primack noted this week that SPACs may be raising more money than private equity at the moment, and that there were “over $1 billion in new [SPAC] filings over past 24 hours” on Wednesday. I’ve given up keeping tabs on the number of SPACs taking place, frankly.
  • But we did dig into two of the more out-there SPACs, in case you wanted a taste of today’s market.
  • The Exchange also spoke with the chief solutions officer of Rackspace, Matt Stoyka, before its shares had started to trade. The chat stressed post-COVID-19 momentum and the continuing cloud transition of lots of IT spend. Rackspace intends on lowering its debt load with a chunk of its IPO proceeds. It priced at $21, the lower end of its range, so it didn’t get an extra debut check. And as the company’s shares are sharply under its IPO price today, there was notably no VC chatter about mispricing. (That stuff only tends to crop up when the results bend in a particular direction.)
  • I also chatted with Joshua Bixby, the CEO of Fastly, this week. The cloud services company wound up giving back some of its recent gains after earnings, which goes to show how the market is perhaps overpricing some public tech shares. After all, Fastly beat on Q2 profit, Q2 revenue, and raised its full-year guidance — and its shares fell? That’s wild. Perhaps the income it generates from TikTok was concerning? Or perhaps, after racing from a 52-week low of $10.63 to a 52-week high of over $117, the market realized that Fastly could only accelerate so much.

Whatever the case, during our chat Fastly CEO Joshua Bixby taught me something new: Usage-based software companies are like SaaS firms, but more so.

In the old days, you’d buy a piece of software, and then own it forever. Now, it’s common to buy one-year SaaS licenses. With usage-based pricing, you make the buying choice day-to-day, which is the next step in the evolution of buying, it feels. I asked if the model isn’t, you know, harder than SaaS? He said maybe, but that you wind up super aligned with your customers. 

Various and Sundry

To wrap up, as always, here’s a final whack of data, news and other miscellanea that are worth your time from the week:

  • TechCrunch chatted with Intercom, which recently hired a CFO and is therefore prepping to go public. But then it said the debut is at least two years away, which was a bummer. The company wrapped its January 31, 2020 fiscal year with $150 million ARR. It’s now much larger. Go public!
  • The Zenefits “mafia” raised a lot, and a little this week. “Mafia” is a terrible term, by the way. We should come up with a new one.
  • Danny Crichton wrote about SaaS revenue securitization, which was cool.
  • Natasha Mascarenhas wrote about learning pods, which aren’t super germane to The Exchange but struck me as incredibly topical to our current lives, so I am including the piece all the same.
  • I spoke with the CEO of Wrike this week, noodling on his company’s size (over $100 million ARR) and his competitors Asana and Monday.com. The whole cohort is over $100 million ARR each, so I might turn them into a post next week entitled “Go public, you cowards,” or something. But probably with a different title, as I don’t want to argue with 17 internal and external PR teams about why I’m right.
  • The Exchange also chatted with VC firms M13 (big on services, various domestic office locations, focus on consumer spend over time) and Coefficient Capital (D2C brand focused, super interesting thesis) this week. Our takeaway is that there is more juice, and more focus, on the consumer-facing side of VC than you’d probably expect given recent data.

We’ve blown past our 1,000-word target, so, briefly: stay tuned to TechCrunch for a super-cool funding round on Monday (it has the fastest growth I can recall hearing about), make sure to listen to the latest Equity ep, and parse through the latest TechCrunch List updates.

Hugs, fistbumps, and good vibes, 

Alex


Startups – TechCrunch


How Google Might Rank Image Search Results

August 5, 2020

Changes to How Google Might Rank Image Search Results

We are seeing more references to machine learning in how Google is ranking pages and other documents in search results.

That seems to be a direction that will leave behind what we know as traditional, or old-school, ranking signals.

It’s still worth considering some of those older ranking signals because they may play a role in how things are ranked.

As I was going through a new patent application from Google on ranking image search results, I decided that it was worth including what I used to look at when trying to rank images.

Images can rank highly in image search, and they can also help the pages they appear on rank higher in organic web results, because they can help make a page more relevant for the query terms that page may be optimized for.

Here are signals that I would include when I rank image search results:

  • Use meaningful images that reflect what the page those images appear on is about – make them relevant to that query
  • Use a file name for your image that is relevant to what the image is about (I like to separate words in file names for images with hyphens, too)
  • Use alt text in your alt attribute that describes the image well and is relevant to the query terms that the page is optimized for, and avoid keyword stuffing
  • Use a caption that is helpful to viewers and relevant to what the page is about and the query term that the page is optimized for
  • Use a title and associated text on the page the image appears upon that are relevant to what the page is about and what the image shows
  • Use a decent-sized image at a decent resolution that isn’t mistaken for a thumbnail

Those are signals that I would consider when I rank image search results and include images on a page to help that page rank as well.

A patent application that was published this week tells us about how machine learning might be used in ranking image search results. It doesn’t itemize features that might help an image in those rankings, such as alt text, captions, or file names, but it does refer to “features” that likely include those as well as other signals. It makes sense to start looking at these patents that cover machine learning approaches to ranking because they may end up becoming more common.

Machine Learning Models to Rank Image Search Results

We are told that the system can use many different types of machine learning models, giving Google a chance to try out different approaches.

The machine learning model can be a:

  • Deep machine learning model (e.g., a neural network that includes multiple layers of non-linear operations.)
  • Different type of machine learning model (e.g., a generalized linear model, a random forest, a decision tree model, and so on.)

We are told more about this machine learning model. It is “used to accurately generate relevance scores for image-landing page pairs in the index database.”

We are told about an image search system, which includes a training engine.

The training engine trains the machine learning model on training data generated using image-landing page pairs that are already associated with ground truth or known values of the relevance score.

The patent shows an example of the machine learning model generating a relevance score for a particular image search result from image, landing page, and query features. In that example, a searcher submits an image search query, and the system generates image query features based on the user-submitted query.

Ranking Image Search Results Includes Image Query Features

That system also learns about landing page features for the landing page that has been identified by the particular image search result as well as image features for the image identified by that image search result.

The image search system would then provide the query features, the landing page features, and the image features as input to the machine learning model.
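To make that concrete, here is a minimal Python sketch of the scoring step, assuming pre-extracted numeric feature vectors and using a random forest, one of the model families the patent names. All names, dimensions, and data below are illustrative, not taken from the patent.

```python
# A minimal sketch of the scoring step, assuming pre-extracted numeric
# feature vectors. Names, dimensions, and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor  # one model family the patent names

def make_input(query_feats, image_feats, page_feats):
    """Concatenate the three feature groups into the single model input."""
    return np.concatenate([query_feats, image_feats, page_feats])

# Hypothetical pre-extracted features for one image-landing page pair.
query_feats = np.random.rand(8)    # e.g., query language, terms, time, location
image_feats = np.random.rand(32)   # e.g., an embedding of the image content
page_feats = np.random.rand(16)    # e.g., page language, freshness, keywords

# In practice the model would be trained on ground-truth relevance scores;
# here we fit on random data purely so the sketch runs end to end.
model = RandomForestRegressor(n_estimators=100)
model.fit(np.random.rand(200, 56), np.random.rand(200))

x = make_input(query_feats, image_feats, page_feats).reshape(1, -1)
print(f"relevance score: {model.predict(x)[0]:.3f}")
```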

Google may rank image search results based on various factors

Those may be separate signals from:

  1. Features of the image
  2. Features of the landing page

…with the separate signals then combined following a fixed weighting scheme that is the same for each received search query.

This patent describes how it would rank image search results in this manner (a minimal sketch of the flow follows the list):

  1. Obtaining many candidate image search results for the image search query
  2. Each candidate image search result identifies a respective image and a respective landing page for that image
  3. For each of the candidate image search results, processing:
    • Features of the image search query
    • Features of the respective image identified by the candidate image search result
    • Features of the respective landing page identified by the candidate image search result
    …using an image search result ranking machine learning model that has been trained to generate a relevance score measuring the relevance of the candidate image search result to the image search query
  4. Ranking the candidate image search results based on the relevance scores generated by the model
  5. Generating an image search results presentation that displays the candidate image search results ordered according to the ranking
  6. Providing the image search results for presentation by a user device
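Here is the promised sketch of that flow, reusing the hypothetical make_input helper and trained model from the snippet above; the candidate structure is an assumption, and the “presentation” is reduced to an ordered list of URLs.

```python
# A minimal sketch of the ranking flow, reusing make_input and model from
# the previous snippet. The candidate dict layout is invented for illustration.
def rank_image_results(query_feats, candidates, model):
    """candidates: list of dicts with 'image_feats', 'page_feats', and 'url'."""
    scored = []
    for cand in candidates:
        x = make_input(query_feats, cand["image_feats"], cand["page_feats"])
        relevance = model.predict(x.reshape(1, -1))[0]
        scored.append((relevance, cand["url"]))
    # Rank by relevance score, highest first, and return the ordered "presentation".
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [url for _, url in scored]
```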

Advantages to Using a Machine Learning Model to Rank Image Search Results

If Google can score image-landing page pairs based on relevance using a machine learning model, it can improve the relevance of the image search results returned in response to the image search query.

This differs from conventional methods of ranking resources because the machine learning model receives a single input that includes features of the image search query, the landing page, and the image identified by a given image search result to predict the relevance of that image search result to the received query.

This process allows the machine learning model to be more dynamic and give more weight to landing page features or image features in a query-specific manner, improving the quality of the image search results that are returned to the user.

By using a machine learning model, the image search engine does not apply the same fixed weighting scheme for landing page features and image features for each received query. Instead, it combines the landing page and image features in a query-dependent manner.
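As a toy contrast between the two approaches, here is a sketch; the fixed weights below are invented for illustration, and make_input comes from the earlier snippet.

```python
# Conventional approach: the same weights for every query.
def fixed_score(image_score, page_score):
    return 0.6 * image_score + 0.4 * page_score  # invented, query-independent weights

# Patented approach: the model also sees the query, so the effective weighting
# of image vs. landing page evidence can differ from one query to the next.
def learned_score(query_feats, image_feats, page_feats, model):
    x = make_input(query_feats, image_feats, page_feats)
    return model.predict(x.reshape(1, -1))[0]
```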

The patent also tells us that a trained machine learning model can easily and optimally adjust weights assigned to various features based on changes to the initial signal distribution or additional features.

In a conventional image search, we are told that significant engineering effort is required to adjust the weights of a traditional manually tuned model based on changes to the initial signal distribution.

But under this patented process, adjusting the weights of a trained machine learning model based on changes to the signal distribution is significantly easier, thus improving the ease of maintenance of the image search engine.

Also, if a new feature is added, a manually tuned model must fit a function for the new feature independently against an objective (i.e., a loss function) while holding existing feature functions constant.

A trained machine learning model, by contrast, can automatically incorporate a new feature and rebalance all of its existing weights appropriately to optimize for the final objective.

Thus, the accuracy, efficiency, and maintenance of the image search engine can be improved.

The Rank Image Search Results patent application can be found at:

Ranking Image Search Results Using Machine Learning Models
US Patent Application Number: 16263398
File Date: January 31, 2019
Publication Number: US20200201915
Publication Date: June 25, 2020
Applicants: Google LLC
Inventors: Manas Ashok Pathak, Sundeep Tirumalareddy, Wenyuan Yin, Suddha Kalyan Basu, Shubhang Verma, Sushrut Karanjkar, and Thomas Richard Strohmann

Abstract

Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for ranking image search results using machine learning models. In one aspect, a method includes receiving an image search query from a user device; obtaining a plurality of candidate image search results; for each of the candidate image search results: processing (i) features of the image search query and (ii) features of the respective image identified by the candidate image search result using an image search result ranking machine learning model to generate a relevance score that measures a relevance of the candidate image search result to the image search query; ranking the candidate image search results based on the relevance scores; generating an image search results presentation; and providing the image search results for presentation by a user device.

The Indexing Engine

The search engine may include an indexing engine and a ranking engine.

The indexing engine indexes image-landing page pairs, and adds the indexed image-landing page pairs to an index database.

That is, the index database includes data identifying images and, for each image, a corresponding landing page.

The index database also associates the image-landing page pairs with:

  • Features of the image search query
  • Features of the images, i.e., features that characterize the images
  • Features of the landing pages, i.e., features that characterize the landing page

Optionally, the index database also associates the indexed image-landing page pairs with values of image search engine ranking signals for those pairs.
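A minimal sketch of what one indexed entry might hold, based on the description above; the field names and the dictionary-backed “database” are stand-ins, not anything the patent specifies.

```python
# A toy index entry for an image-landing page pair; field names are invented.
from dataclasses import dataclass, field

@dataclass
class IndexedPair:
    image_url: str
    landing_page_url: str
    image_feats: list                  # features characterizing the image
    page_feats: list                   # features characterizing the landing page
    ranking_signals: dict = field(default_factory=dict)  # optional precomputed signals

index_db = {}  # stand-in for the index database, keyed by image URL

def index_pair(pair: IndexedPair):
    """The indexing engine adds the pair and its features to the index."""
    index_db[pair.image_url] = pair
```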

Each image search engine ranking signal is used by the ranking engine in ranking the image-landing page pair in response to a received search query.

The ranking engine generates respective ranking scores for image-landing page pairs indexed in the index database based on the values of image search engine ranking signals for the image-landing page pair, e.g., signals accessed from the index database or computed at query time, and ranks the image-landing page pair based on the respective ranking scores. The ranking score for a given image-landing page pair reflects the relevance of the image-landing page pair to the received search query, the quality of the given image-landing page pair, or both.

The image search engine can use a machine learning model to rank image-landing page pairs in response to received search queries.

The machine learning model is configured to receive an input that includes:

(i) features of the image search query
(ii) features of an image, and
(iii) features of the landing page of the image

…and to generate a relevance score that measures the relevance of the candidate image search result to the image search query.

Once the machine learning model generates the relevance score for the image-landing page pair, the ranking engine can then use the relevance score to generate ranking scores for the image-landing page pair in response to the received search query.

The Ranking Engine behind the Process to Rank Image Search Results

In some implementations, the ranking engine generates an initial ranking score for each of multiple image-landing page pairs using the signals in the index database.

The ranking engine can then select a certain number of the highest-scoring image-landing page pairs for processing by the machine learning model.

The ranking engine can then rank the candidate image-landing page pairs based on relevance scores from the machine learning model, or use those relevance scores as additional signals to adjust the initial ranking scores for the candidate image-landing page pairs.
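Here is a minimal sketch of that two-stage flow, assuming the IndexedPair records and trained model from the earlier snippets; the blending weight alpha is an assumption, since the patent leaves open exactly how the two scores are combined.

```python
# Two-stage ranking sketch: cheap initial scoring over many pairs, then
# ML re-scoring of only the top candidates. alpha is an invented blend knob.
def two_stage_rank(query_feats, pairs, model, initial_score, top_n=100, alpha=0.5):
    # Stage 1: initial ranking scores from precomputed index signals.
    prelim = sorted(pairs, key=initial_score, reverse=True)[:top_n]
    # Stage 2: model relevance scores for the survivors, blended with stage 1.
    rescored = []
    for pair in prelim:
        x = make_input(query_feats, pair.image_feats, pair.page_feats)
        relevance = model.predict(x.reshape(1, -1))[0]
        rescored.append((alpha * initial_score(pair) + (1 - alpha) * relevance, pair))
    rescored.sort(key=lambda t: t[0], reverse=True)
    return [pair for _, pair in rescored]
```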

The machine learning model would receive a single input that includes features of the image search query, the landing page, and the image to predict the relevance (i.e., the relevance score) of the particular image search result to the user’s image query.

We are told that this allows the machine learning model to give more weight to landing page features, image features, or image search query features in a query-specific manner, which can improve the quality of the image search results returned to the user.

Features That May Be Used from Images and Landing Pages to Rank Image Search Results

The first step is to receive the image search query.

Once that happens, the image search system may identify initial image-landing page pairs that satisfy the image search query.

It would do that using pairs indexed in the search engine index database, drawing on signals that measure the quality of the pairs, the relevance of the pairs to the search query, or both.

For those pairs, the search system identifies:

  • Features of the image search query
  • Features of the image
  • Features of the landing page

Features Extracted From the Image

These features can include vectors that represent the content of the image.

Vectors to represent the image may be derived by processing the image through an embedding neural network.

Or those vectors may be generated through other image processing techniques for feature extraction. Examples of feature extraction techniques include edge, corner, ridge, and blob detection. Feature vectors can also include vectors generated using shape extraction techniques (e.g., thresholding, template matching, and so on). Instead of, or in addition to, the feature vectors, when the machine learning model is a neural network the features can include the pixel data of the image.
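As one hedged illustration of the embedding route, here is how an image feature vector could be pulled from a generic pretrained network; the patent does not name a specific model, so the ResNet below is purely a stand-in.

```python
# Deriving an image feature vector from an embedding network; the specific
# backbone (ResNet-18) is a stand-in, not named by the patent.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep the 512-d embedding
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_embedding(path):
    """Return a 512-dimensional feature vector for the image at `path`."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy()
```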

Features Extracted From the Landing Page

These aren’t the kinds of features I have usually thought about when optimizing images. These features can include:

  • The date the page was first crawled or updated
  • Data characterizing the author of the landing page
  • The language of the landing page
  • Features of the domain that the landing page belongs to
  • Keywords representing the content of the landing page
  • Features of the links to the image and landing page such as the anchor text or source page for the links
  • Features that describe the context of the image in the landing page
  • And so on

Features Extracted From the Landing Page That Describe the Context of the Image

The patent interestingly separated these features out:

  • Data characterizing the location of the image within the landing page
  • Prominence of the image on the landing page
  • Textual descriptions of the image on the landing page
  • Etc.

More Details on the Context of the Image on the Landing Page

The patent points out some alternative ways that the location of the image within the landing page might be found:

  • Using pixel-based geometric location in horizontal and vertical dimensions
  • User-device based length (e.g., in inches) in horizontal and vertical dimensions
  • An HTML/XML DOM-based XPATH-like identifier
  • A CSS-based selector
  • Etc.

The prominence of the image on the landing page can be measured using the relative size of the image as displayed on a generic device and a specific user device.

The textual descriptions of the image on the landing page can include alt-text labels for the image, text surrounding the image, and so on.
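A minimal sketch of two of those context features (prominence as relative displayed size, and alt text as a textual description); the element fields are hypothetical stand-ins for parsed page data.

```python
# Toy context features for an image on a landing page; the img_element dict
# is a hypothetical stand-in for data parsed out of the page.
def prominence(img_w, img_h, viewport_w, viewport_h):
    """Fraction of the rendered viewport the image occupies on a given device."""
    return (img_w * img_h) / (viewport_w * viewport_h)

def context_features(img_element, viewport=(1280, 800)):
    return {
        "prominence": prominence(img_element["width"], img_element["height"], *viewport),
        "alt_text": img_element.get("alt", ""),   # textual description of the image
        "xpath": img_element.get("xpath", ""),    # DOM-based location identifier
    }
```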

Features Extracted from the Image Search Query

The features from the image search query can include:

  • Language of the search query
  • Some or all of the terms in the search query
  • Time that the search query was submitted
  • Location from which the search query was submitted
  • Data characterizing the user device from which the query was received
  • And so on

How the Features from the Query, the Image, and the Landing Page Work Together

  • The features may be represented categorically or discretely
  • Additional relevant features can be created from pre-existing features (relationships may be created between one or more features through a combination of addition, multiplication, or other mathematical operations)
  • For each image-landing page pair, the system processes the features using an image search result ranking machine learning model to generate a relevance score output
  • The relevance score measures the relevance of the candidate image search result to the image search query (i.e., it measures the likelihood that a user submitting the search query would click on or otherwise interact with the search result; a higher relevance score indicates the user would find the candidate image search result more relevant and click on it)
  • The relevance score of the candidate image search result can also be a prediction of a score generated by a human rater to measure the quality of the result for the image search query

Adjusting Initial Ranking Scores

The system may adjust initial ranking scores for the image search results based on the relevance scores (a minimal sketch follows this list) to:

  • Promote search results having higher relevance scores
  • Demote search results having lower relevance scores
  • Or both
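A minimal sketch of that adjustment, assuming relevance scores in [0, 1]; the additive blend, its weight, and the 0.5 pivot are assumptions, since the patent only says results are promoted or demoted.

```python
# Promote results with high relevance, demote those with low relevance.
# The 0.5 pivot and the additive blend are invented for illustration.
def adjust(initial_scores, relevance_scores, weight=1.0):
    return [s + weight * (r - 0.5) for s, r in zip(initial_scores, relevance_scores)]
```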

Training a Ranking Machine Learning Model to Rank Image Search Results

The system receives a set of training image search queries and, for each training query, training image search results that are each associated with a ground truth relevance score.

A ground truth relevance score is the relevance score that the machine learning model should generate for the image search result (i.e., when relevance scores measure the likelihood that a user would select a search result in response to a given search query, each ground truth relevance score can capture whether a user submitting the given search query selected the image search result, or the proportion of times that users submitting the given search query selected it).
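When the ground truth is click-based, it reduces to a simple proportion, as in this sketch:

```python
# Ground truth as the proportion of times users issuing the query selected
# this result, e.g., 37 selections over 500 impressions -> 0.074.
def ground_truth_relevance(selections, impressions):
    return selections / impressions if impressions else 0.0
```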

The patent provides another example of how ground-truth relevance scores might be generated:

When the relevance scores generated by the model are a prediction of a score assigned to an image search result by a human, the ground truth relevance scores are actual scores assigned to the search results by human raters.

For each of the training image search queries, the system may generate features for each associated image-landing page pair.

For each of those pairs, the system may identify:

(i) features of the image search query
(ii) features of the image and
(iii) features of the landing page.

We are told that extracting, generating, and selecting features may take place before training or using the machine learning model. Examples of features are the ones I listed above related to the images, landing pages, and queries.

The ranking engine trains the machine learning model by processing, for each training image search query (a minimal sketch follows this list):

  • Features of the image search query
  • Features of the respective image identified by the candidate image search result
  • Features of the respective landing page identified by the candidate image search result
  • The respective ground truth relevance score that measures the relevance of the candidate image search result to the image search query
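Here is the promised sketch of that training step, one row per (query, result) pair with the ground truth relevance as the regression target; it reuses the hypothetical make_input helper from the first snippet.

```python
# Training sketch: fit the ranking model on (query, image, page) features
# against ground-truth relevance scores.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_ranking_model(training_examples):
    """training_examples: iterable of (query_feats, image_feats, page_feats, gt)."""
    X = np.stack([make_input(q, i, p) for q, i, p, _ in training_examples])
    y = np.array([gt for _, _, _, gt in training_examples])
    model = RandomForestRegressor(n_estimators=100)
    model.fit(X, y)
    return model
```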

The patent provides some specific implementation processes that might differ based upon the machine learning system used.

Takeaways to Rank Image Search Results

I’ve provided some information about the kinds of features Google may have used in the past in ranking image search results.

Under a machine learning approach, Google may be paying more attention to features from an image query, features from images, and features from the landing pages those images are found upon. The patent lists many of those features, and if you spend time comparing the older features with the ones under the machine learning model approach, you can see there is overlap, but the machine learning approach covers considerably more options.



The post How Google Might Rank Image Search Results appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓



The post How Google Might Rank Image Search Results appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


How Google Might Rank Image Search Results

July 9, 2020 No Comments

Changes to How Google Might Rank Image Search Results

We are seeing more references to machine learning in how Google is ranking pages and other documents in search results.

That direction may eventually leave behind the traditional, old-school signals we know as ranking signals.

It’s still worth considering some of those older ranking signals because they may play a role in how things are ranked.

As I was going through a new patent application from Google on ranking image search results, I decided that it was worth including what I used to look at when trying to rank images.

Images can rank highly in image search, and they can also help pages that they appear upon rank higher in organic web results, because they can help make a page more relevant for the query terms that page may be optimized for.

Here are signals that I would include when I rank image search results:

  • Use meaningful images that reflect what the page those images appear on is about – make them relevant to that query
  • Use a file name for your image that is relevant to what the image is about (I like to separate words in file names for images with hyphens, too)
  • Use alt text for your alt attribute that describes the image well, uses text that is relevant to the query terms the page is optimized for, and avoids keyword stuffing
  • Use a caption that is helpful to viewers and relevant to what the page is about and the query term the page is optimized for
  • Use a title and associated text on the page the image appears upon that is relevant to what the page is about and what the image shows
  • Use a decent-sized image at a decent resolution that isn't mistaken for a thumbnail

Those are signals that I would consider when I rank image search results and include images on a page to help that page rank as well.
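To make that checklist actionable, here is a rough audit script for those on-page signals. It is only a sketch: the audit_images function, the target_terms parameter, and the individual checks are my own illustrative choices, not anything Google publishes.

from bs4 import BeautifulSoup

def audit_images(html, target_terms):
    """Flag images on a page that miss the on-page signals listed above."""
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    for img in soup.find_all("img"):
        src = img.get("src", "")
        filename = src.rsplit("/", 1)[-1]
        alt = (img.get("alt") or "").strip()
        findings.append({
            "src": src,
            "has_alt": bool(alt),  # descriptive alt text is on the checklist
            "hyphenated_filename": "-" in filename,  # hyphen-separated words in file names
            # does the alt text mention a term the page is optimized for?
            "alt_mentions_target": any(t.lower() in alt.lower() for t in target_terms),
        })
    return findings

# Example: audit_images(open("page.html").read(), ["grizzly bears"])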

A patent application that was published this week tells us about how machine learning might be used in ranking image search results. It doesn’t itemize features that might help an image in those rankings, such as alt text, captions, or file names, but it does refer to “features” that likely include those as well as other signals. It makes sense to start looking at these patents that cover machine learning approaches to ranking because they may end up becoming more common.

Machine Learning Models to Rank Image Search Results

Giving Google a chance to try out different approaches, the patent tells us that the system can use many different types of machine learning models.

The machine learning model can be a:

  • Deep machine learning model (e.g., a neural network that includes multiple layers of non-linear operations.)
  • Different type of machine learning model (e.g., a generalized linear model, a random forest, a decision tree model, and so on.)

We are told more about this machine learning model. It is “used to accurately generate relevance scores for image-landing page pairs in the index database.”

We are told about an image search system, which includes a training engine.

The training engine trains the machine learning model on training data generated using image-landing page pairs that are already associated with ground truth or known values of the relevance score.

The patent shows an example of the machine learning model generating a relevance score for a particular image search result from an image, landing page, and query features. In this image, a searcher submits an image search query. The system generates image query features based on the user-submitted image search query.

Rank Image Search Results includes Image Query Features

That system also learns about landing page features for the landing page that has been identified by the particular image search result as well as image features for the image identified by that image search result.

The image search system would then provide the query features, the landing page features, and the image features as input to the machine learning model.

Google may rank image search results based on various factors

Conventionally, those may be separate signals from:

  1. Features of the image
  2. Features of the landing page
  3. A combination of the separate signals, following a fixed weighting scheme that is the same for each received search query

This patent describes how it would rank image search results in this manner:

  1. Obtaining many candidate image search results for the image search query
  2. Each candidate image search result identifies a respective image and a respective landing page for the respective image
  3. For each of the candidate image search results, processing the following with an image search result ranking machine learning model that has been trained to generate a relevance score measuring the relevance of the candidate image search result to the image search query:
    • Features of the image search query
    • Features of the respective image identified by the candidate image search result
    • Features of the respective landing page identified by the candidate image search result
  4. Ranking the candidate image search results based on the relevance scores generated by the image search result ranking machine learning model
  5. Generating an image search results presentation that displays the candidate image search results ordered according to the ranking
  6. Providing the image search results for presentation by a user device
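Pulling those claimed steps together, here is a minimal sketch of the flow in Python. The extract_* helpers and the trained model are hypothetical stand-ins; the patent describes the steps but not an implementation.

import numpy as np

def rank_image_results(query, candidates, model, extract_query, extract_image, extract_page):
    """candidates: iterable of (image, landing_page) pairs for the query."""
    query_feats = extract_query(query)
    scored = []
    for image, landing_page in candidates:
        # one combined input: query, image, and landing page features
        x = np.concatenate([
            query_feats,
            extract_image(image),
            extract_page(landing_page),
        ]).reshape(1, -1)
        relevance = float(model.predict(x)[0])
        scored.append((relevance, image, landing_page))
    # order results by relevance score for presentation
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(image, page) for _, image, page in scored]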

Advantages to Using a Machine Learning Model to Rank Image Search Results

If Google can rank image-landing page pairs based on relevance scores from a machine learning model, it can improve the relevance of the image search results returned for an image search query.

This differs from conventional methods of ranking resources because the machine learning model receives a single input that includes features of the image search query, the landing page, and the image identified by a given image search result to predict the relevance of the image search result to the received query.

This process allows the machine learning model to be more dynamic and give more weight to landing page features or image features in a query-specific manner, improving the quality of the image search results that are returned to the user.

By using a machine learning model, the image search engine does not apply the same fixed weighting scheme for landing page features and image features for each received query. Instead, it combines the landing page and image features in a query-dependent manner.
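The difference is easy to see in miniature. In the sketch below, the 0.7/0.3 weights in the conventional version are invented for illustration; the learned version hands all three feature groups to the model and lets it decide, per query, what matters.

import numpy as np

def fixed_weight_score(image_score, page_score):
    # conventional approach: the same weights for every query
    return 0.7 * image_score + 0.3 * page_score

def learned_score(model, query_feats, image_feats, page_feats):
    # patented approach: a single input with all three feature groups,
    # combined in a query-dependent way by the trained model
    x = np.concatenate([query_feats, image_feats, page_feats]).reshape(1, -1)
    return float(model.predict(x)[0])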

The patent also tells us that a trained machine learning model can easily and optimally adjust weights assigned to various features based on changes to the initial signal distribution or additional features.

In a conventional image search, we are told that significant engineering effort is required to adjust the weights of a traditional manually tuned model based on changes to the initial signal distribution.

But under this patented process, adjusting the weights of a trained machine learning model based on changes to the signal distribution is significantly easier, thus improving the ease of maintenance of the image search engine.

Also, if a new feature is added, the manually tuned functions must be adjusted for the new feature independently on an objective (i.e., a loss function) while holding the existing feature functions constant.

But, a trained machine learning model can automatically adjust feature weights if a new feature is added.

Instead, the machine learning model can include the new feature and rebalance all its existing weights appropriately to optimize for the final objective.

Thus, the accuracy, efficiency, and maintenance of the image search engine can be improved.

The Rank Image Search Results patent application can be found at:

Ranking Image Search Results Using Machine Learning Models
US Patent Application Number: 16263398
Filed: January 31, 2019
Publication Number: US20200201915
Publication Date: June 25, 2020
Applicants: Google LLC
Inventors: Manas Ashok Pathak, Sundeep Tirumalareddy, Wenyuan Yin, Suddha Kalyan Basu, Shubhang Verma, Sushrut Karanjkar, and Thomas Richard Strohmann

Abstract

Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for ranking image search results using machine learning models. In one aspect, a method includes receiving an image search query from a user device; obtaining a plurality of candidate image search results; for each of the candidate image search results: processing (i) features of the image search query and (ii) features of the respective image identified by the candidate image search result using an image search result ranking machine learning model to generate a relevance score that measures a relevance of the candidate image search result to the image search query; ranking the candidate image search results based on the relevance scores; generating an image search results presentation; and providing the image search results for presentation by a user device.

The Indexing Engine

The search engine may include an indexing engine and a ranking engine.

The indexing engine indexes image-landing page pairs, and adds the indexed image-landing page pairs to an index database.

That is, the index database includes data identifying images and, for each image, a corresponding landing page.

The index database also associates the image-landing page pairs with:

  • Features of the image search query
  • Features of the images, i.e., features that characterize the images
  • Features of the landing pages, i.e., features that characterize the landing page

Optionally, the index database also associates the indexed image-landing page pairs in the collections of image-landing page pairs with values of image search engine ranking signals for those pairs.

Each image search engine ranking signal is used by the ranking engine in ranking the image-landing page pair in response to a received search query.

The ranking engine generates respective ranking scores for image-landing page pairs indexed in the index database based on the values of image search engine ranking signals for the image-landing page pair, e.g., signals accessed from the index database or computed at query time, and ranks the image-landing page pair based on the respective ranking scores. The ranking score for a given image-landing page pair reflects the relevance of the image-landing page pair to the received search query, the quality of the given image-landing page pair, or both.
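As a toy illustration of the records just described, an indexed image-landing page pair and an initial ranking score might look like the sketch below. The field names and the averaging in initial_ranking_score are my assumptions; the patent only says the engine combines stored and query-time signals.

from dataclasses import dataclass, field

@dataclass
class IndexedPair:
    image_id: str
    landing_page_url: str
    image_features: list   # features that characterize the image
    page_features: list    # features that characterize the landing page
    stored_signals: dict = field(default_factory=dict)  # signal values kept in the index

def initial_ranking_score(pair, query_time_signals):
    # combine signals accessed from the index with signals computed at query time
    signals = {**pair.stored_signals, **query_time_signals}
    return sum(signals.values()) / max(len(signals), 1)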

The image search engine can use a machine learning model to rank image-landing page pairs in response to received search queries.

The machine learning model is configured to receive an input that includes:

(i) features of the image search query,
(ii) features of an image, and
(iii) features of the landing page of the image,

and to generate a relevance score that measures the relevance of the candidate image search result to the image search query.

Once the machine learning model generates the relevance score for the image-landing page pair, the ranking engine can then use the relevance score to generate ranking scores for the image-landing page pair in response to the received search query.

The Ranking Engine behind the Process to Rank Image Search Results

In some implementations, the ranking engine generates an initial ranking score for each of multiple image-landing page pairs using the signals in the index database.

The ranking engine can then select a certain number of the highest-scoring image-landing page pairs for processing by the machine learning model.

The ranking engine can then rank candidate image-landing page pairs based on relevance scores from the machine learning model or use those relevance scores as additional signals to adjust the initial ranking scores for the candidate image-landing page pairs.
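That two-stage flow (cheap initial scoring over many pairs, then machine-learned rescoring of only the top candidates) might look like the sketch below. The cutoff k and the blending weight are illustrative assumptions, not values from the patent.

def two_stage_rank(pairs, initial_score, model_score, k=100, blend=0.5):
    # stage 1: initial ranking scores from index signals; keep the top k
    shortlist = sorted(pairs, key=initial_score, reverse=True)[:k]
    # stage 2: relevance scores from the machine learning model, used here
    # as an additional signal to adjust the initial ranking scores
    rescored = [
        (blend * initial_score(p) + (1 - blend) * model_score(p), p)
        for p in shortlist
    ]
    rescored.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in rescored]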

The machine learning model would receive a single input that includes features of the image search query, the landing page, and the image to predict the relevance (i.e., a relevance score) of the particular image search result to the user's image query.

We are told that this allows the machine learning model to give more weight to landing page features, image features, or image search query features in a query-specific manner, which can improve the quality of the image search results returned to the user.

Features That May Be Used from Images and Landing Pages to Rank Image Search Results

The first step is to receive the image search query.

Once that happens, the image search system may identify initial image-landing page pairs that satisfy the image search query.

It would select those pairs, from the ones indexed in the search engine's index database, using signals measuring the quality of the pairs, the relevance of the pairs to the search query, or both.

For those pairs, the search system identifies:

  • Features of the image search query
  • Features of the image
  • Features of the landing page

Features Extracted From the Image

These features can include vectors that represent the content of the image.

Vectors to represent the image may be derived by processing the image through an embedding neural network.

Or those vectors may be generated through other image processing techniques for feature extraction. Examples of feature extraction techniques include edge, corner, ridge, and blob detection. Feature vectors can also include vectors generated using shape extraction techniques (e.g., thresholding, template matching, and so on). When the machine learning model is a neural network, the features can include the pixel data of the image instead of, or in addition to, the feature vectors.
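For the embedding route, one concrete (non-Google) way to derive such a vector is to run the image through an off-the-shelf network with its classification head removed. This sketch assumes torchvision 0.13 or later; the choice of ResNet-18 is mine, since the patent names no particular network.

import torch
import torchvision
from PIL import Image

weights = torchvision.models.ResNet18_Weights.DEFAULT
# drop the final classification layer, keeping the pooled embedding
backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet18(weights=weights).children())[:-1]
)
backbone.eval()
preprocess = weights.transforms()

def embed_image(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).flatten()  # a 512-dimensional image vector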

Features Extracted From the Landing Page

These aren't the kinds of features I have historically thought about when optimizing images. These features can include:

  • The date the page was first crawled or updated
  • Data characterizing the author of the landing page
  • The language of the landing page
  • Features of the domain that the landing page belongs to
  • Keywords representing the content of the landing page
  • Features of the links to the image and landing page, such as the anchor text or source page for the links
  • Features that describe the context of the image in the landing page
  • And so on

Features Extracted From the Landing Page That Describe the Context of the Image in the Landing Page

Interestingly, the patent separates these features out:

  • Data characterizing the location of the image within the landing page
  • Prominence of the image on the landing page
  • Textual descriptions of the image on the landing page
  • Etc.

More Details on the Context of the Image on the Landing Page

The patent points out some alternative ways that the location of the image within the landing page might be found:

  • Using pixel-based geometric location in horizontal and vertical dimensions
  • User-device based length (e.g., in inches) in horizontal and vertical dimensions
  • An HTML/XML DOM-based XPATH-like identifier
  • A CSS-based selector
  • Etc.

The prominence of the image on the landing page can be measured using the relative size of the image as displayed on a generic device and a specific user device.

The textual descriptions of the image on the landing page can include alt-text labels for the image, text surrounding the image, and so on.
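Here is a sketch of pulling those three context features out of a landing page's HTML with lxml: an XPath-like location, a crude prominence proxy, and the alt text. The declared-area heuristic for prominence is my own stand-in; the patent measures prominence by rendered size on generic and specific devices.

from lxml import html as lxml_html

def _px(value):
    try:
        return int(value)
    except (TypeError, ValueError):
        return 0  # missing or non-pixel width/height attributes

def image_context_features(page_html):
    tree = lxml_html.fromstring(page_html)
    features = []
    for img in tree.findall(".//img"):
        features.append({
            # DOM-based, XPATH-like identifier for the image's location
            "xpath": tree.getroottree().getpath(img),
            # declared pixel area as a rough proxy for on-page prominence
            "declared_area": _px(img.get("width")) * _px(img.get("height")),
            # textual description of the image
            "alt_text": img.get("alt") or "",
        })
    return features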

Features Extracted from the Image Search Query

The features from the image search query can include:

  • Language of the search query
  • Some or all of the terms in the search query
  • Time that the search query was submitted
  • Location from which the search query was submitted
  • Data characterizing the user device from which the query was received
  • And so on

How the Features from the Query, the Image, and the Landing Page Work Together

  • The features may be represented categorically or discretely
  • Additional relevant features can be created from pre-existing features (relationships may be created between one or more features through a combination of addition, multiplication, or other mathematical operations; see the sketch after this list)
  • For each image-landing page pair, the system processes the features using an image search result ranking machine learning model to generate a relevance score output
  • The relevance score measures the relevance of the candidate image search result to the image search query (i.e., it measures the likelihood that a user submitting the search query would click on or otherwise interact with the search result; a higher relevance score indicates the user would find the candidate image search result more relevant and click on it)
  • The relevance score of the candidate image search result can be a prediction of a score generated by a human rater to measure the quality of the result for the image search query
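The feature-cross idea in the second bullet is simple in code. A sketch, with invented feature values:

import numpy as np

query_feats = np.array([0.2, 0.9])
image_feats = np.array([0.5, 0.1])

derived = {
    # multiplicative cross: large only when both inputs are large
    "q0_times_i0": query_feats[0] * image_feats[0],
    # additive combination of two pre-existing features
    "q1_plus_i1": query_feats[1] + image_feats[1],
}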

Adjusting Initial Ranking Scores

The system may adjust initial ranking scores for the image search results based on the relevance scores to:

  • Promote search results having higher relevance scores
  • Demote search results having lower relevance scores
  • Or both

Training a Ranking Machine Learning Model to Rank Image Search Results

The system receives a set of training image search queries and, for each training image search query, training image search results that are each associated with a ground truth relevance score.

A ground truth relevance score is the relevance score that should be generated for the image search result by the machine learning model (i.e., when the relevance scores measure a likelihood that a user would select a search result in response to a given search query, each ground truth relevance score can identify whether a user submitting the given search query selected the image search result or a proportion of times that users submitting the given search query select the image search result.)

The patent provides another example of how ground-truth relevance scores might be generated:

When the relevance scores generated by the model are a prediction of a score assigned to an image search result by a human, the ground truth relevance scores are actual scores assigned to the search results by human raters.

For each of the training image search queries, the system may generate features for each associated image-landing page pair.

For each of those pairs, the system may identify:

(i) features of the image search query
(ii) features of the image and
(iii) features of the landing page.

We are told that extracting, generating, and selecting features may take place before training or using the machine learning model. Examples of features are the ones I listed above related to the images, landing pages, and queries.

The ranking engine trains the machine learning model by processing, for each image search query:

  • Features of the image search query
  • Features of the respective image identified by the candidate image search result
  • Features of the respective landing page identified by the candidate image search result, along with the respective ground truth relevance score that measures the relevance of the candidate image search result to the image search query

The patent provides some specific implementation processes that might differ based upon the machine learning system used.
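A minimal training sketch under that description might look like the following, with observed click proportions as the ground truth relevance scores. The gradient-boosted model is my stand-in; the patent leaves the model family open.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_ranker(training_rows):
    """training_rows: (query_feats, image_feats, page_feats, clicks, impressions) tuples."""
    X, y = [], []
    for qf, imf, pf, clicks, impressions in training_rows:
        # one input per image-landing page pair: all three feature groups
        X.append(np.concatenate([qf, imf, pf]))
        # ground truth relevance: the proportion of times users selected the result
        y.append(clicks / impressions)
    model = GradientBoostingRegressor()
    model.fit(np.array(X), np.array(y))
    return model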

Takeaways on Ranking Image Search Results

I've provided some information about what kinds of features Google may have used in the past when ranking image search results.

Under a machine learning approach, Google may be paying more attention to features from an image query, features from images, and features from the landing pages those images are found upon. The patent lists many of those features, and if you spend time comparing the older features with the ones under the machine learning model approach, you can see there is overlap, but the machine learning approach covers considerably more options.



The post How Google Might Rank Image Search Results appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


How Google May Annotate Images to Improve Search Results

June 25, 2020 No Comments

How might Google improve on information from sources such as knowledge bases to help them answer search queries?

That information may be learned or inferred from sources outside of those knowledge bases when Google may:

  • Analyze and annotate images
  • Consider other data sources

A recent Google patent on this topic defines knowledge bases for us, explains why they are important, and points out examples of how Google looks at entities while annotating images:

A knowledge base is an important repository of structured and unstructured data. The data stored in a knowledge base may include information such as entities, facts about entities, and relationships between entities. This information can be used to assist with or satisfy user search queries processed by a search engine.

Examples of knowledge bases include Google Knowledge Graph and Knowledge Vault, Microsoft Satori Knowledge Base, DBpedia, Yahoo! Knowledge Base, and Wolfram Knowledgebase.

The focus of this patent is upon improving upon information that can be found in knowledge bases:

The data stored in a knowledge base may be enriched or expanded by harvesting information from a wide variety of sources. For example, entities and facts may be obtained by crawling text included in Internet web pages. As another example, entities and facts may be collected using machine learning algorithms.

All gathered information may be stored in a knowledge base to enrich the information that is available for processing search queries.

Analyzing Images to Enrich Knowledge Base Information

This approach may annotate images and select object entities contained in those images. It reminded me of a post I recently wrote about Google annotating images, How Google May Map Image Queries.

This is an effort to better understand and annotate images, and explore related entities in images, so Google can focus on “relationships between the object entities and attribute entities, and store the relationships in a knowledge base.”

Google can learn from images of real-world objects (a phrase they used for entities when they started the Knowledge Graph in 2012.)

I wrote another post about image search becoming more semantic, in the labels they added to categories in Google image search results. I wrote about those in Google Image Search Labels Becoming More Semantic?

When writing about mapping image queries, I couldn’t help but think about labels helping to organize information in a useful way. I’ve suggested using those labels to better learn about entities when creating content or doing keyword research. Doing image searches and looking at those semantic labels can be worth the effort.

This new patent tells us how Google may annotate images to identify entities contained in those images. While labeling, they may select an object entity from the entities pictured and then choose at least one attribute entity from the annotated images that contain the object entity. They could also infer a relationship between the object entity and the attribute entity or entities and include that relationship in a knowledge base.

In accordance with one exemplary embodiment, a computer-implemented method is provided for enriching a knowledge base for search queries. The method includes assigning annotations to images stored in a database. The annotations may identify entities contained in the images. An object entity among the entities may be selected based on the annotations. At least one attribute entity may be determined using the annotated images containing the object entity. A relationship between the object entity and the at least one attribute entity may be inferred and stored in a knowledge base.

For example, when I search for my hometown, Carlsbad in Google image search, one of the category labels is for Legoland, which is an amusement park located in Carlsbad, California. Showing that as a label tells us that Legoland is located in Carlsbad (the captions for the pictures of Legoland tell us that it is located in Carlsbad.)

Carlsbad-Legoland-Attribute Entity

This patent can be found at:

Computerized systems and methods for enriching a knowledge base for search queries
Inventors: Ran El Manor and Yaniv Leviathan
Assignee: Google LLC
US Patent: 10,534,810
Granted: January 14, 2020
Filed: February 29, 2016

Abstract

Systems and methods are disclosed for enriching a knowledge base for search queries. According to certain embodiments, images are assigned annotations that identify entities contained in the images. An object entity is selected among the entities based on the annotations and at least one attribute entity is determined using annotated images containing the object entity. A relationship between the object entity and the at least one attribute entity is inferred and stored in the knowledge base. In some embodiments, confidence may be calculated for the entities. The confidence scores may be aggregated across a plurality of images to identify an object entity.

Confidence Scores While Labeling of Entities in Images

One of the first phrases to jump out at me when I scanned this patent was "confidence scores." It reminded me of the association scores I wrote about when discussing Google extracting information about entities, their relationships with other entities, confidence scores for those relationships, and attributes involving the entities. I mentioned association scores in the post Entity Extractions for Knowledge Graphs at Google, because those scores were described in the patent Computerized systems and methods for extracting and storing information regarding entities.

I also referred to these confidence scores when I wrote about Answering Questions Using Knowledge Graphs because association scores or confidence scores can lead to better answers to questions about entities in search results, which is an aim of this patent, and how it attempts to analyze and label images and understand the relationships between entities shown in those images.

The patent lays out the purpose it serves when it may analyze and annotate images like this:

Embodiments of the present disclosure provide improved systems and methods for enriching a knowledge base for search queries. The information used to enrich a knowledge base may be learned or inferred from analyzing images and other data sources.

Per some embodiments, object recognition technology is used to annotate images stored in databases or harvested from Internet web pages. The annotations may identify who and/or what is contained in the images.

The disclosed embodiments can learn which annotations are good indicators for facts by aggregating annotations over object entities and facts that are already known to be true. Grouping annotated images by the object entity helps identify the top annotations for the object entity.

Top annotations can be selected as attributes for the object entities and relationships can be inferred between the object entities and the attributes.

As used herein, the term “inferring” refers to operations where an entity relationship is inferred from or determined using indirect factors such as image context, known entity relationships, and data stored in a knowledge base to draw an entity relationship conclusion instead of learning the entity-relationship from an explicit statement of the relationship such as in text on an Internet web page.

The inferred relationships may be stored in a knowledge base and subsequently used to assist with or respond to user search queries processed by a search engine.

The patent then tells us about how confidence scores are used, that they calculate confidence scores for annotations assigned to images. Those “confidence scores may reflect the likelihood that an entity identified by an annotation is contained in an image.”

If you look back up at the pictures for Legoland above, it may be considered an attribute entity of the Object Entity Carlsbad, because Legoland is located in Carlsbad. The label annotations indicate what the images portray, and infer a relationship between the entities.

Similarly, an image search for Milan, Italy shows a category label for the Duomo, a cathedral located in the city. The Duomo is an attribute entity of the object entity Milan because it is located in Milan.

In both examples, the relationship is inferred from co-occurrence: Legoland is included under pictures of Carlsbad, so it is an attribute entity of Carlsbad, and the Duomo appears in the results of a search for Milan, so it is an attribute entity of Milan.

Milan Duomo Attribute Entity

A search engine may learn from label annotations and confidence scores about images, because the search engine (or its indexing engine) may index:

  • Image annotations
  • Object entities
  • Attribute entities
  • Relationships between object entities and attribute entities
  • Facts learned about object entities

The illustrations from the patent show us images of a bear eating a fish, to tell us that the bear is an object entity, the fish is an attribute entity, and that bears eat fish.

Annotate images with Bear (Object Entity) and Fish (Attribute Entity) entities

We are also shown that bears, as object entities, have other attribute entities associated with them, since they will go into the water to hunt fish and roam around on the grass.

Bears and attribute Entities

Annotations may be detailed and cover objects within photos or images, like the bear eating the fish above. The patent points out a range of entities that might appear in a single image by telling us about a photo from a baseball game:

An annotation may identify an entity contained in an image. An entity may be a person, place, thing, or concept. For example, an image taken at a baseball game may contain entities such as “baseball fan”, “grass”, “baseball player”, “baseball stadium”, etc.

An entity may also be a specific person, place, thing, or concept. For example, the image taken at the baseball game may contain entities such as “Nationals Park” and “Ryan Zimmerman”.

Defining an Object Entity When Google May Annotate Images

The patent provides more insights into what object entities are and how they might be selected:

An object entity may be an entity selected among the entities contained in a plurality of annotated images. Object entities may be used to group images to learn facts about those object entities. In some embodiments, a server may select a plurality of images and assign annotations to those images.

A server may select an object entity based on the entity contained in the greatest number of annotated images as identified by the annotations.

For example, a group of 50 images may be assigned annotations that identify George Washington in 30 of those images. Accordingly, a server may select George Washington as the object entity if 30 out of 50 annotated images is the greatest number for any identified entity.

Confidence scores may also be determined for annotations. Confidence scores are an indication that an entity identified by an annotation is contained in an image. It “quantifies a level of confidence in an annotation being accurate.” That confidence score could be calculated by using a template matching algorithm. The annotated image may be compared with a template image.
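In code, the George Washington example above reduces to counting confident annotations across images. A sketch; the 0.5 confidence threshold is an invented value, and the annotation dicts stand in for whatever object-recognition output Google actually uses.

from collections import Counter

def select_object_entity(annotated_images, min_confidence=0.5):
    """annotated_images: one {entity: confidence} dict per image."""
    counts = Counter()
    for annotations in annotated_images:
        for entity, confidence in annotations.items():
            if confidence >= min_confidence:  # keep only confident annotations
                counts[entity] += 1
    if not counts:
        return None
    # the entity contained in the greatest number of annotated images
    return counts.most_common(1)[0][0]

# e.g., "George Washington" wins if it appears in 30 of 50 annotated images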

Defining Attribute Entities When Google May Annotate Images

An attribute entity may be an entity that is among the entities contained in images that contain the object entity. They are entities other than the object entity.

Annotated images that contain the object entity may be grouped and an attribute entity may be selected based on what entity might be contained in the greatest number of grouped images as identified by the annotations.

So, a group of 30 annotated images containing object entity “George Washington” may also include 20 images that contain “Martha Washington.”

In that case, "Martha Washington" may be considered an attribute entity.

(Of course, "Martha Washington" could be an object entity, and "George Washington," appearing in a number of the "Martha Washington"-labeled images, could be considered the attribute entity.)

Inferring Relationships Between Entities by Analyzing Images

If more than a threshold number of images of "Michael Jordan" contain a basketball in his hand, a relationship between "Michael Jordan" and basketball might be made (that Michael Jordan is a basketball player).
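A sketch of that threshold test, continuing the same annotation format as above; the 0.6 threshold is an invented value:

def infer_relationships(images_with_object, object_entity, threshold=0.6):
    """images_with_object: annotation dicts for images containing the object entity."""
    counts = {}
    for annotations in images_with_object:
        for entity in annotations:
            if entity != object_entity:
                counts[entity] = counts.get(entity, 0) + 1
    total = len(images_with_object)
    # infer a relationship for any co-occurring entity above the threshold,
    # e.g., ("Michael Jordan", "basketball")
    return {
        (object_entity, attribute)
        for attribute, seen in counts.items()
        if seen / total >= threshold
    }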

From analyzing images of bears hunting for fish in water, and roaming around on grassy fields, some relationships between bears and fish and water and grass can be made also:

inferences between entities

By analyzing images of Michael Jordan with a basketball in his hand wearing a Chicago Bulls jersey, a search query asking a question such as “What basketball team does Michael Jordan play for?” may be satisfied with the answer “Chicago Bulls”.

To answer a query such as "What team did Michael Jordan play basketball for?", Google could perform an image search for "Michael Jordan playing basketball". Having images that contain the object entity of interest allows the images to be analyzed and an answer provided. See the picture at the top of this post, showing Michael Jordan in a Bulls jersey.

Takeaways

This process to collect and annotate images can be done using any images found on the Web, and isn’t limited to images that might be found in places like Wikipedia.

Google can analyze images online in a way that scales web-wide, and by analyzing images, it may provide insights that a knowledge graph might not. For example, to answer the question, "Where do grizzly bears hunt?", an analysis of photos reveals that they like to hunt near water so that they can eat fish.

The confidence scores in this patent aren’t like the association scores in the other patents about entities that I wrote about, because they are trying to gauge how likely it is that what is in a photo or image is indeed the entity that it might then be labeled with.

The association scores that I wrote about were trying to gauge how likely relationships between entities and attributes might be more likely to be true based upon things such as the reliability and popularity of the sources of that information.

So, Google is trying to learn about real-world objects (entities) by analyzing pictures of those entities when it may annotate images (ones that it has confidence in), as an alternative way of learning about the world and the things within it.



The post How Google May Annotate Images to Improve Search Results appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


Boost Facebook E-commerce Results In 2 Steps

April 29, 2020 No Comments

Maximize your ecom spend in Facebook Ads with two simple steps.

Read more at PPCHero.com
PPC Hero


Five ways a CRM system improves SEO results

April 21, 2020 No Comments

30-second summary:

  • When you look at what all a CRM system does for you, it lines up perfectly with the goals of an effective SEO strategy. The two should go hand-in-hand, working together to make each other more successful.
  • Right from initiating surveys to getting insights from customer data for hyper-targeted content creation, there’s a lot that a CRM can offer that will add value to your SEO strategy.
  • Customer experience too is intricately linked to the heart of how SEO works. When people search for keywords or phrases, they’re trying to find answers to questions and other information to satisfy a need.
  • Don’t underestimate the impact a CRM system can have on your SEO. Five ways your CRM can help you get better results with SEO.

The beauty and bane of SEO is that it’s a richly diverse topic. There are so many avenues to explore and angles to consider. In a previous article, we looked at how different inter-organizational collaborations can boost your SEO implementation and improve your results. Part of this article was dedicated to collaborations with customer relationship management (CRM) systems.

Although we touched on some of the benefits of CRM systems for SEO teams, it’s worth diving a bit deeper. When you look at what all a CRM system does for you, it lines up perfectly with the goals of an effective SEO strategy. The two should go hand-in-hand, working together to make each other more successful.

Here are five ways your CRM system can help you get better results with SEO.

1. Getting on the same page

When you’re using a CRM system, you have the opportunity to continuously learn from your customers. You can use this opportunity to find out more about their pain points, what they’re searching for, and what they’re purchasing. This information can then be passed over to the relevant part of your company to deal with it.

Through a CRM system, you can issue surveys, ask questions, and record information given by your existing customers and leads. With a more complete picture of customer data you can create localized content, adjust the timing of certain SEO campaigns to match buying habits, and highlight features of your products or services with keywords that address customer pain points.

Here is what Daniel Liddle, SEO Growth Strategist at Green Park Content, says:

“Getting that crucial customer behaviour information from CRM systems is the best way to refine the intent and behaviour of your leads. With more and more CRMs incorporating machine learning into their software such as Microsoft’s Dynamic365 and Salesforce, sentiment analysis and forecasting is becoming a lot easier to report on and also making that data actionable in order to drive sales. And, that’s the key point, there’s elements of information that’s more ‘nice to know’ but you really want to be looking and building on actionable data which a lot of software companies are driving towards doing this autonomously.”

2. Nurturing SEO leads

SEO gets people to your content, but what happens next? If you don’t get enough engagement on your pages or response from your calls to action (CTAs), even your SEO will suffer over time. You have to follow up with leads generated by your SEO efforts so that you can turn them from page visitors to conversions.

CRM can help you guide your overall SEO strategy. If you integrate SEO efforts into your CRM system, you’ll get an idea of whether your SEO is bringing the right types of people to your pages, how many conversions you’re generating, and what brought people there in the first place. All of this means better SEO with measurable results that matter to your bottom line. Traffic and page views are great, but it’s better to get less general traffic if that means more conversions.

3. Providing consistency

Using a CRM system, you can ensure that your whole organization is on the same page. No matter how small or large your operation is, consistency in marketing and customer-facing strategies matters. CRM helps you stay organized to present the same central message across all platforms.

The more you talk about a certain subject, the more your authority on the subject increases. Building authority on a topic is great for SEO, as your content will have an advantage in the rankings if your focus stays consistent. You’ll be able to build a stronger link profile, get more social media mentions, and post more relevant content that your audience loves.

Growing that authority requires company-wide efforts to produce a consistently good experience that’s relevant to your audience and fulfils their needs. CRM helps you stay on track and get everyone on board from different parts of the organization.

For instance, Mario Peshev, CEO of DevriX, relies on their in-house CRM tool for gathering case studies in a consistent manner:

“Our retainer contracts are long-term and we revise our accomplishments two to four times per year. Having our customer portfolio in one place enables us to review the progress to date, leverage a case study template, and prepare drafts for new PR opportunities or updating existing success stories delivered for our clients.”

4. SEO and customer experience

The entire point of a CRM system is to improve the customer experience your company offers, from the beginning of the sales funnel to after-sales services and everything in between. By improving your customer experience, you can also boost your SEO.

Customer experience is intricately linked to the heart of how SEO works. When people search for keywords or phrases, they’re trying to find answers to questions and other information to satisfy a need. If you know what your customers want and need, you can better tailor your content and keywords to address those needs.

To give a good customer experience you first have to know your customers. Take that information and apply it to SEO. Using the data about your existing customers, you can target your SEO efforts on a more realistic market of people that share the needs and wants of your existing customer base. Since you know these types of people already use your products or services, it’s a good idea to market to similar people as well.

Better, more targeted content means lower bounce rates, more organic traffic, and higher engagement with your CTA. Your SEO efforts can only benefit from relevant content that addresses what people really want to know.

5. Improving third-party reviews

Speaking of customer experience, SEO is also partially impacted by outside mentions of your company. Specifically, third-party review sites like Google My Business, Yelp, and TripAdvisor hold a lot of weight. If you are listed on these sites with a solid number of reviews (most of which are positive), you'll do better in SEO than if you're unlisted or have fake reviews, no reviews, or poor reviews.

If you’re doing your CRM work right, you should end up with more satisfied customers. These are the guys who are going to be leaving the reviews for you. It’s a long-term strategy, but by focusing more on a great customer experience that leaves more people satisfied with what you provide, you will naturally end up with more positive reviews. You can even prompt people to leave reviews if you’re confident that more people will be happy rather than disappointed.

CRMs do a lot for your business if you use them right. If you're using one already or considering adopting one, think about the bigger picture of everything you can do with it. Don't underestimate the impact a CRM system can have on your SEO. Get ahead of your competition in the rankings by using every tool at your disposal.

The post Five ways a CRM system improves SEO results appeared first on Search Engine Watch.

Search Engine Watch


How to make the most of Google’s “People also ask” results

February 21, 2020 No Comments

Google’s “People also ask” boxes are widely discussed within the SEO industry as they take a lot of SERP real estate while providing little to no organic visibility to the publishers’ sites.

That said, “People also ask” listings are probably helpful for Google’s users allowing them to get a better understanding of a topic they are researching. Yet, whether they do send actual clicks to publishers’ pages remains a huge question.

While we have no power over Google’s search engine page elements, our job as digital marketers is to find ways to take any opportunity to boost our clients’ organic visibility.

Is there any way for marketers to utilize this search feature better? Let’s see.

1. Understand your target query intent better

One of the cooler aspects of “People also ask” boxes is that they are dynamic.

When you click one question, it will take you in a new direction by generating more follow-up questions underneath. Each time you choose, you get more to choose from.

The coolest thing though is that the further questions are different (in topic, direction or intent) based on which question you choose.

Let me explain this by showing you an example. Let’s search for something like – “Is wine good for your blood?”

Now try clicking one of those questions in the box, for example, “What are the benefits of drinking red wine?” and watch more follow-up questions show up. Next, click a different question “Is red wine good for your heart and blood pressure?”. Do you see the difference?

Understanding search intent through Google's people also ask

 

Source: Screenshot made by the author, as of Feb 2020

Now, while this exercise may seem rather insignificant to some people, to me, it is pretty mind-blowing as it shows us what Google may know of their users’ research patterns and what may interest them further, depending on their next step.

To give you a bit of a context, Google seems to rely on semantic analysis when figuring out which questions fit every searcher’s needs better. Bill Slawski did a solid job covering a related patent called “Generating related questions for search queries” which also states that those related questions rely on search intent:

Providing related questions to users can help users who are using uncommon keywords or terminology in their search query to identify keywords or terms that are more commonly used to describe their intent.

Google patent on generating related questions for search queries

Source: Google patent

For a deeper insight into the variety of questions and the types of intent they may signal, try Text Optimizer. The tool uses a question-extraction process similar to Google's. For example, here are intent-based questions that refer to the topic of bitcoin.

Finding intent based questions for people also ask using Text Optimizer

 

Source: TextOptimizer’s search screenshot, as of Jan 2020

2. Identify important searching patterns

This one somewhat relates to the previous one but it serves a more practical goal, beyond understanding your audience and topic better. If you search Google for your target query enough, you will soon start seeing certain searching patterns.

For example, lots of city-related "People also ask" boxes will contain questions concerning the city's safety, whether it is a good place to live, and what it is famous for:

Finding important search patterns through Google's people also ask

Identifying these searching patterns is crucial when you want to:

  • Identify your cornerstone content
  • Re-structure your site or an individual landing page
  • Re-think your site navigation (both desktop and mobile)
  • Create a logical breadcrumb navigation (more on this here)
  • Consolidate your multiple pages into categories and taxonomies

3. Create on-page FAQs

Knowing your target users’ struggles can help in creating a really helpful FAQ section that can diversify your rankings and help bring steady traffic.

All you need to do is collect your relevant "People also ask" results, organize them into sections (based on your identified intent/searching patterns), and answer all those questions on your dedicated FAQ page.

When working on the FAQ page, don’t forget to:

  • Use FAQPage schema to generate rich snippets in Google search (WordPress users can take advantage of this plugin); a markup sketch follows this section. If you have a lot of questions in your niche, it is a good idea to build a standalone knowledge base to address them. Here are all the plugins for the job.
  • Set up engagement funnels to keep those readers interacting with your site and ultimately turn them into customers. Finteza is a solid option to use here, as it lets you serve custom CTAs based on the users’ referral source and landing page that brought them to your site:

Screenshot on Finteza

 

Source: Screenshot by Finteza, as of July 2019
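For the FAQPage markup mentioned in the list above, the structure is small enough to build by hand. This sketch uses placeholder questions and answers; the @type, mainEntity, and acceptedAnswer field names are the actual schema.org FAQPage vocabulary.

import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is red wine good for your heart and blood pressure?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Your researched, well-sourced answer goes here.",
            },
        },
    ],
}

# paste the output into a <script type="application/ld+json"> tag on the FAQ page
print(json.dumps(faq, indent=2))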

4. Identify your competitor’s struggles

If you have an established competitor with a strong brand, their branded queries and consequent “People also ask” results will give you lots of insight into what kinds of struggles their customers are facing (and how to serve them better).

When it comes to branded “People also ask” results, you may want to organize them based on possible search intent:

  • ROPO questions: These customers are researching a product before making a purchasing decision.
  • High-intent questions: Customers are closest to a sale. These are usually price-related queries, for example, those that contain the word “reviews”.
  • Navigational questions: Customers are lost on your competitor’s site and need some help navigating. These queries can highlight usability issues for you to avoid when building your site.
  • Competitive questions: These queries compare two of your competitors.
  • Reputation questions: Those customers want to know more about your competitor’s company.

Identifying competitor challenges through people also ask

Source: A screenshot made by the author in January 2020

This information helps you develop a better product and a better site than those of your competitors.

Conclusion

With the changes in search algorithms over the years, the dropping and adding of key search elements, the evolution of Google’s SERPs, navigating digital marketing trends seems almost treacherous.

Yet, at the core of things, not much has really shifted and much of what we do remains the same. In fact, some of those changes have made it even easier to make an impact on the web than ever before. While we may welcome or frown upon each new change, there’s still some competitive advantage in each of them.

Our job, as digital marketers, is to distinguish that competitive advantage and make the most of it.

I hope the above ideas will help you use “People also ask” results to your advantage.

Ann Smarty is the Brand and Community manager at InternetMarketingNinjas.com.

The post How to make the most of Google’s “People also ask” results appeared first on Search Engine Watch.

Search Engine Watch