- The biggest Google update of the year is called the Page Experience update.
- Core Web Vitals are part of that update, and they are definitely ranking factors to keep in mind, especially when optimizing images.
- AMP is no longer the only way to get a “Top Stories” feature on mobile. Starting in 2021, any news webpage can become a “Top Story”.
- AMP’s privacy concerns, combined with its cost of operation, might mean that AMP disappears within a couple of years.
- E-A-T is not a ranking factor right now, and we don’t know if it will become one in the future.
2020. What a year. History is happening around us, and Google? Well, Google keeps on revamping its search algorithms. Over the years there have been many major algorithm updates, as Google worked to keep us on our toes. 2020 was no different: in one fell swoop, we got news of the Page Experience update and of changes to AMP, all while the debate over whether you need E-A-T to rank rages on. How will Core Web Vitals change the search game in 2021?
Let’s go over each of these innovations and see which will change the way we do SEO, and which will fade into obscurity sooner rather than later.
1. Importance of Core Web Vitals for SEO
Core Web Vitals were part of the Page Experience update and, by far, caused the biggest ruckus.
There’s a lot to learn about Core Web Vitals, but they boil down to the three biggest issues on our webpages:
- LCP — Largest Contentful Paint, which deals with the loading speed of the largest single object on the page.
- FID — First Input Delay, which measures how quickly the page reacts to the user’s first input (whether they click, tap, or press a key).
- CLS — Cumulative Layout Shift, which measures how much the page’s content, mostly visual content, jumps around while the page loads.
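For orientation, Google has published “good” and “poor” thresholds for each of these metrics. As a rough sketch (thresholds as publicly documented at the time of writing, not an official API), mapping field values to ratings looks like this:

```python
# Rough sketch: classify Core Web Vitals field values against the
# "good" / "needs improvement" / "poor" thresholds Google has published.
# (good_ceiling, poor_floor) per metric; note the mixed units.

THRESHOLDS = {
    "LCP": (2.5, 4.0),   # seconds
    "FID": (100, 300),   # milliseconds
    "CLS": (0.1, 0.25),  # unitless layout-shift score
}

def rate(metric: str, value: float) -> str:
    """Return Google's published rating band for a single metric value."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(rate("LCP", 2.1))  # good
print(rate("CLS", 0.3))  # poor
```

A page needs to clear the “good” bar on all three metrics to be considered as passing the Core Web Vitals assessment.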
How Core Web Vitals influence rankings
Of course, some SEO experts think the entire Page Experience update is nothing special, and could even “[…] distract […] from the core mission of communication and storytelling.”
And, sure, most of the Page Experience update is simply an assembly of things we’ve known for a while: use HTTPS, be mobile-friendly, control your page speed, and so on.
But Core Web Vitals are a bit different and can influence SEO practice in unexpected ways. A key factor that’s already changing rankings is Cumulative Layout Shift.
As most SEO experts know, for a while an important part of image optimization was adding the decoding="async" attribute to the <img> tag so that image decoding wouldn’t hold up rendering of the rest of the page.
But decoding="async" could lead to some seriously janky pages if developers didn’t specify the height and width of every single image to be rendered. Some websites handled this properly anyway: Wikipedia, for example, reserves a predefined space for images on most of its pages.
But as SEO experts, we didn’t have to worry too much about jumpy pages, as that didn’t influence rankings. Now, with CLS formally announced as a ranking factor, things will change for a whole slew of websites and SEO experts.
We’ll need to make sure that every webpage is coded with CLS in mind, with the needed space for every image defined ahead of time, to avoid the layout shifts.
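As a minimal sketch (the file name and dimensions below are hypothetical), reserving that space just means giving every image explicit width and height attributes, so the browser can compute the aspect ratio and hold the slot open before the file arrives:

```html
<!-- Explicit width/height let the browser reserve the image's slot
     before the file downloads, so nothing shifts when it renders.
     The file name and dimensions are made up for illustration. -->
<img src="/images/example-chart.png"
     alt="Example chart"
     width="800"
     height="450"
     decoding="async">
```

Modern browsers use the declared width/height to compute the aspect ratio even when CSS later scales the image, which is what keeps the layout stable.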
Overall, of course, it’s too early to tell, and more testing by SEOs around the web is needed here. It seems that if you aren’t used to focusing on technical SEO, Core Web Vitals becoming ranking signals might not influence your day-to-day work at all.
If you do conduct complicated technical SEO, however, then Core Web Vitals will definitely change the way you work, in ways we can’t yet fully predict.
2. Importance of AMP for SEO
AMP’s relevance today is something of an open question. While it has always been great as a quick-and-easy way to increase page speed, privacy concerns have been voiced over and over again since the technology’s inception.
But in 2020, significant changes are afoot: within the same Page Experience update, Google announced that AMP pages are finally no longer required to occupy the “Top Stories” SERP feature.
That’s a pretty huge step for anybody trying to accrue as many SERP features as they can, and, in particular, for news websites.
How AMP influences rankings
If we believe John Mueller’s words, then AMP is not a ranking factor. Seems plain and simple enough. But of course, things aren’t so simple, because AMP comes with pretty significant gains in page speed, and speed is an important ranking factor.
Thanks to AMP’s pre-rendering combined with some severe design limitations, AMP webpages often really do win on page speed, even if not in rankings as such.
The “Top Stories” SERP feature, however, has been a huge incentive to use AMP for any news agency with a website, and it’s easy to understand why. Just look at how much of the page is occupied by the “Top Stories” results.
Not only do “Top Stories” appear at the very top of the SERP, they also sport the logo of the website posting them, standing out even more from the boring old blue-link SERP.
This means that for a few years now, news websites have essentially been forced into using AMP to get into the “Top Stories” SERP feature on mobile, since that feature absorbs a whole lot of clicks.
On the other hand, it takes quite a lot of resources to support AMP versions of the webpages, because you are basically maintaining a whole additional version of your website.
On top of that, a page that’s been properly optimized for speed might not need AMP for those speed gains at all.
While it’s tough to imagine that AMP will fade away completely within the next couple of years, AMP’s privacy issues combined with the cost of maintaining it might spell the end of it being a widely used practice.
Now, with “Top Stories” becoming available to non-AMP pages, there’s virtually no reason to jeopardize your users’ privacy for speed gains you could achieve through proper optimization.
3. Importance of E-A-T for SEO
Expertise. Authority. Trust. All perfectly positive words and something we should all strive for in our professional lives. But what about search optimization?
Coming straight from Google’s Quality Rater Guidelines, E-A-T has been the talk of the town for a while now. Let’s dive in and see how it might change the way we optimize for search.
How E-A-T influences rankings
For most of us, it doesn’t, really.
Sure, the Quality Rater Guidelines provide valuable insights into Google’s ranking process. However, E-A-T is one of the less important factors to focus on, partly because these are nebulous, abstract concepts, and partly because Google doesn’t exactly want us to.
As Google’s official representatives informed us, E-A-T is not in itself a ranking factor.
When asked follow-up questions, Google’s John Mueller reiterated that point, and Ben Gomes, Google’s VP of search engineering, confirmed that quality raters don’t influence any page’s rankings directly.
In practice, however, we often see that so-called YMYL websites already can’t rank without some established expertise and authority. A popular example: it’s virtually impossible to rank a website providing medical advice without an actual doctor writing the articles.
The problem here is that expertise, authority, and trustworthiness are not easily interpreted by the search algorithms, which only understand code.
And, at the moment, there seems to be no surefire way for Google to transform these signals into rankings, except to read the feedback of their quality raters before each algorithm update.
While using E-A-T to rank websites might sound like an inarguable benefit for the searcher, there are a couple of concerns that aren’t easily solved, namely:
- Who exactly will be determining the E-A-T signals, and according to which standard?
- The introduction of such factors creates a system where smaller and newer websites are punished in rankings for lacking a level of trustworthiness they couldn’t realistically have acquired yet.
Responding to both of these concerns requires time and effort on the search engine’s side.
As things stand right now, E-A-T is not something to keep in mind while doing day-to-day SEO operations.
Let’s imagine a fantastical scenario where a webmaster/SEO expert has some free time. Then they might want to work on E-A-T, to try and stay ahead of the curve.
On the other hand, there simply isn’t any proof that Google will actually use E-A-T. Or that, even if used, these signals will become major ranking factors. For this reason, E-A-T shouldn’t be your priority ahead of traditional SEO tasks like link building and technical optimization.
Additionally, consider this: the entire Quality Rater Guidelines runs 168 pages. A comprehensive explanation of what E-A-T is and why it might be calculated a certain way would take many more pages than that.
As of this writing, Core Web Vitals seem to be the most important ranking news of 2020 in practical terms. However, search is an extremely volatile field: what worked two weeks ago may not work today, and what works today may not work tomorrow.
Matters are further complicated because we’re fighting an uneven battle: it’s simply not in the search engines’ best interest to give us a full and detailed picture of how ranking works, lest we abuse it.
This is why it’s crucial to keep our hand on the pulse of optimization news and changes occurring every single day. With constant efforts from our SEO community to work out the best way to top rankings, it’s possible for us to close that gap and know for sure which trends are paramount, and which we can allow ourselves to overlook.
Aleh Barysevich is Founder and CMO at SEO PowerSuite and Awario.
The post Google ranking factors to change search in 2021: Core Web Vitals, E-A-T, or AMP? appeared first on Search Engine Watch.
Why Google Ads and Google Analytics Conversions Don’t Align (And Why You Should Try the Attribution Beta)
In this blog post, one PPC expert walks through Google Ads and Analytics conversion discrepancy scenarios and how the Google Attribution beta (within Google Analytics) can help.
Read more at PPCHero.com
- Google’s technological edge has always come from its computational power.
- This edge is no longer special. AWS, Microsoft, and other cloud services now give us access to essentially unlimited computing power on demand.
- Generative Pre-trained Transformer 3 (GPT-3) is the largest, most advanced text predictor ever. It will eventually be available as a commercial product.
- We may see big players like Apple enter the search engine market.
- Founder and CTO of LinkGraph gives you foresight into the sea of opportunities ahead.
The tech world recently geeked out after its first glimpse of OpenAI’s GPT-3 technology. Despite some kinks, the text predictor is already really good. From generating code to writing Google Ads copy to UI/UX design, the applications of GPT-3 have sparked the imaginations of web developers everywhere.
“This changes everything. With GPT-3, I built a Figma plugin to design for you. I call it ‘Designer’.” pic.twitter.com/OzW1sKNLEC — Jordan Singer (@jsngr), July 18, 2020
But they should also be sparking the imaginations of search engine optimizers. In its debut week, we already witnessed a developer build a search engine on top of GPT-3. It can hardly rival Google’s product, but the potential is clearly there. OpenAI plans to turn GPT-3 into a commercial product next year, meaning any brand could use the technology to create their own search platform.
The implications would be significant for the SEO landscape. More brands innovating their own search engines would create new opportunities for digital marketers and the brands we help build. For Google, though, the potential is far more nerve-wracking. GPT-3 has shown that Google’s core technological advantages – natural language processing (NLP) and massive computing – are no longer unique and are essentially being commoditized.
GPT-3 challenges Google’s technological edge from all directions
Few have monetized NLP and machine learning as well as Google, but its technological edge has always been computational power. Google’s ability to crawl billions of pages a day, its massive data centers, and its extensive computing across that data have cemented its status as the dominant search engine and digital advertising market leader.
But AWS, Microsoft Azure, and other cloud services now give us access to essentially unlimited computing power on demand. A decade of Moore’s Law has also reduced the cost of this computing power by one to three orders of magnitude.
Additionally, open-source software and advances in research have made it easier for developers to access the latest breakthroughs in NLP and machine learning technology. Python, Natural Language Toolkit (NLTK), Pytorch, and Tensorflow are just a few that have granted developers access to their programming and software innovations.
Yes, Google still has BERT, which shares a similar architecture with GPT-3. But GPT-3 is far larger, at about 175 billion parameters. GPT-3 can also perform new tasks with far less task-specific training data than a fine-tuned model like BERT requires.
Not only will GPT-3 add value to businesses and applications already using AI and machine learning, thanks to a newer, larger, and significantly improved NLP model; it will also equip Google’s biggest cloud competitors to pair that technology with their own computing power.
Other players will soon be able to build massive search engines like Google did
In order to build a search engine, you need to be able to retrieve many different types of information: Web results for one, but also maps, data, images, and videos. Google’s indexing power is what catapulted them to be the primary retriever of all web knowledge.
In addition to building that massive information retrieval system, Google monetized economically valuable searches through advertising. But Google doesn’t actually earn any money on the majority of searches.
OpenAI built its own information retrieval system, GPT-3, as a step toward creating superintelligence. If OpenAI wanted to, it could build a competitor to Google. But the hardest part would be bringing it to a massive audience: Bing’s market share is only 6.5%, while Yahoo’s is 3.5%.
It’s been a long time since the search engine market was a realistic place to compete. But what if a GPT-3 commercial product equipped a new competitor with an equally-matched technological edge, market share, cloud service, and devoted customer-base to enter the search market?
Take a competitor like Apple. It launched a newly redesigned Apple Maps earlier this year, and it has already announced that it is indexing the web through Applebot. When it comes to launching the next great search engine, Apple is well-positioned.
How could Apple change the SEO landscape with its own search engine?
Most likely, an Apple search engine would use ranking factors similar to Google. The app store ecosystem would equip Apple with greater use of in-app engagement data. We could also see a greater reliance on social signals from Facebook and Twitter.
All Android devices currently ship with Chrome and Google Search as the defaults. Apple’s devices ship with Safari, where you can select your own preferred search engine. Apple could easily do what Google has done and default to its own search engine. With just one iPhone model launch, Apple could transition its massive customer base away from Google through its technological edge and dominance in devices.
But what would be most troublesome for Google is how Apple could disrupt Google’s massive ads ecosystem. 71% of Google’s revenue comes from advertising. With Google Ads now being the most expensive (and competitive) place to advertise on the internet, many advertisers would welcome a disruption. It’s possible we could see billions of dollars of advertising revenue shift to Apple.
For SEOs and digital marketers, it’s fun to imagine. We could see entirely new markets for search, creating more need for our expertise and additional platforms our customers can use to grow. We’re not quite there yet, but SEOs and digital marketers should be prepared for what advancements like GPT-3 could potentially mean for our industry.
Manick Bhan is the founder and CTO of LinkGraph, an award-winning digital marketing and SEO agency that provides SEO, paid media, and content marketing services. He is also the founder and CEO of SearchAtlas, a software suite of free SEO tools. He is the former CEO of the ticket reselling app Rukkus.
The post What the commoditization of search engine technology with GPT-3 means for Google and SEO appeared first on Search Engine Watch.
- Google search trends show that there have been big increases in Google searches for certain phrases over the last few months.
- Some were expected, such as “virtual meetings”, others less so – one of the more troubling Google search trends was “get away with murder”!
- A lot of people were looking for physical comfort – everything from “chocolate” to “hot tubs”.
- Others sought an emotional or spiritual connection, including “virtual dating”, “prayer” and “virtual churches”.
- Russell Welch, Head of Digital Marketing at Clear, takes us through the latest Google search trends, with some takeaways for businesses.
As digital marketing nerds, we love data and analyzing trends. The COVID-19 pandemic has been a global tragedy – one of the most significant events of our lifetime. So how has it affected our behavior? And how can businesses cater to people’s needs?
With more people working from home and lockdowns in many countries stopping people from getting out and about, our habits and priorities have changed significantly.
To gauge what’s happened, we decided to analyze Google search trends data and see which searches had seen an increase in interest over the past few months. We originally analyzed UK COVID-19 searches but decided to look at the USA for Search Engine Watch.
The big move online
It should be fairly obvious to most of us that the world’s gone online. Zoom, Google Meet, Microsoft Teams – these are things many of us had barely used before. Now they’re mainstays of our daily life.
In the world of work, virtual meetings have become the norm. No surprises there. The initial peak has passed, but search volume is still high.
Dating isn’t just online anymore, it’s virtual – the date itself happens in cyberspace. Searches have declined since an initial peak in April as lockdowns have eased, but the trend was still popular in July and going into August.
Note – We also looked at “virtual mosque” and “virtual synagogue” as well, but the data was less clear.
What’s interesting here is that the graph is so similar to ‘virtual dating’. People here are also looking to make a connection – just in a different way.
Something else we’re having to do at home is teach our kids. Good to see people doing some research. It’s interesting that the peak in searches is actually quite recent – in July.
As online services and meetings have become the norm, consider if there’s anything your business could move online. ‘Virtual church’ shows there’s plenty of scope for this. Say you run a coffee shop – why not set up virtual coffee mornings for your customers?
Hobbies and pastimes
Not going out can seriously put a strain on your relationships (see “get away with murder” below). But some good old-fashioned family activities can help – assuming that Monopoly game doesn’t get too competitive.
Puzzles usually see a small peak over Christmas, but March and April this year completely blew that out of the water. Things seem to have gone back to normal now though – maybe people have got bored with this search trend?
Also popular over the holiday period, Monopoly saw a resurgence in March and April, but it too has died down a bit.
Strangely, KerPlunk didn’t see the same trend. That also has a winter peak but hasn’t seen much interest during the pandemic.
I have to admit, I thought this was more of a British thing, but people in the States are loving a good pub quiz. Particularly in Wisconsin. And there was a spike in searches in July, so this one isn’t going away.
Businesses can jump on this too. Why not run a pub quiz to raise money for a local cause or charity?
Not necessarily a family activity, but exercise is still important for health and wellbeing. There was a big peak in people searching for ‘home workouts’ in March and April, and it’s still a little up against previous months. Have people got the workout advice they needed, or lost motivation? It’s hard to say.
Where comfort comes into the picture
Is it any surprise that people are looking for a bit of comfort right now?
Home baking has been high on the list, with sourdough a particular favorite.
This is one of those where it’s hard to tell what the intention behind the search is. There’s a clear peak in searches for ‘puppies’ in April, but are people looking for videos or wanting a puppy of their very own?
If you are looking to get a puppy, make sure you’ll be able to look after it long term, e.g. once you’re no longer working from home.
FYI, the search trend for “kittens” is similar, but ‘puppies’ are more popular (at least according to Google).
DIY and luxuries for the home have also been popular – and it’s no wonder since so many of us have been unable to get out much. “Hot tubs” are a good example.
People doing things around the home provide some obvious opportunities for businesses. Companies selling DIY equipment, hot tubs, plants, and other things online have already seen an uplift in sales.
Wait, what? – Get away with murder
All I can say about this is that I really hope people are using their free time to write crime novels or catch up on the TV show.
If you run a bookshop, now’s the time to promote your crime section with some tongue in cheek messaging about devouring a good book instead of murdering your spouse. Or maybe do a display of ‘how to’ and writing books?
The pandemic is a major global crisis and things are changing all the time. What does the future hold?
The “picnics” graph shows perfectly that habits are still evolving, with searches high in June, July and into August. People are looking for ways to get outside safely where this is allowed. It doesn’t appear to be a purely seasonal trend.
Black Lives Matter
Of course, the pandemic isn’t the only global issue affecting us right now. The Black Lives Matter graph shows how influential the movement has been this year.
One final graph to share – ‘have hope’. This peaked in May. I hope that’s because people found it. We’re going to need it.
Russell Welch is Head of Digital Marketing at Clear, a creative digital agency based in Shrewsbury and Birmingham, UK. When not poring over data, you can find him writing fiction, playing some geeky game, or wielding a sword.
The post Google search trends: People are in search for connection during the lockdown appeared first on Search Engine Watch.
Affinity audiences are now available for Google search campaigns. This post discusses the reasons to add them and best practices when doing so.
Read more at PPCHero.com
Changes to How Google Might Rank Image Search Results
We are seeing more references to machine learning in how Google is ranking pages and other documents in search results.
That direction seems likely to leave behind what we know as traditional, old-school ranking signals.
It’s still worth considering some of those older ranking signals because they may play a role in how things are ranked.
As I was going through a new patent application from Google on ranking image search results, I decided that it was worth including what I used to look at when trying to rank images.
Images can rank highly in image search, and they can also help pages that they appear upon rank higher in organic web results, because they can help make a page more relevant for the query terms that page may be optimized for.
Here are signals that I would include when I rank image search results:
- Use meaningful images that reflect what the page those images appear on is about – make them relevant to that query
- Use a file name for your image that is relevant to what the image is about (I like to separate words in file names for images with hyphens, too)
- Use alt text that describes the image well, is relevant to the query terms the page is optimized for, and avoids keyword stuffing
- Use a caption that is helpful to viewers and relevant to what the page is about and the query term the page is optimized for
- Use a title and associated text on the page the image appears upon that is relevant for what the page is about, and what the image shows
- Use a decent-sized image at a decent resolution, so that it isn’t mistaken for a thumbnail
Those are signals that I would consider when I rank image search results and include images on a page to help that page rank as well.
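Pulled together, those on-page signals might look something like this (the file name, alt text, and caption below are hypothetical examples, not a template to copy verbatim):

```html
<!-- Hypothetical example combining the signals above: a descriptive,
     hyphen-separated file name, alt text relevant to the page's query,
     explicit dimensions, and a helpful caption. -->
<figure>
  <img src="/images/great-blue-heron-fishing.jpg"
       alt="Great blue heron fishing in a shallow creek at dawn"
       width="1200"
       height="800">
  <figcaption>A great blue heron fishing at dawn.</figcaption>
</figure>
```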
A patent application that was published this week tells us about how machine learning might be used in ranking image search results. It doesn’t itemize features that might help an image in those rankings, such as alt text, captions, or file names, but it does refer to “features” that likely include those as well as other signals. It makes sense to start looking at these patents that cover machine learning approaches to ranking because they may end up becoming more common.
Machine Learning Models to Rank Image Search Results
Giving itself room to try out different approaches, Google tells us that the system can use many different types of machine learning models.
The machine learning model can be a:
- Deep machine learning model (e.g., a neural network that includes multiple layers of non-linear operations.)
- Different type of machine learning model (e.g., a generalized linear model, a random forest, a decision tree model, and so on.)
We are told more about this machine learning model. It is “used to accurately generate relevance scores for image-landing page pairs in the index database.”
We are told about an image search system, which includes a training engine.
The training engine trains the machine learning model on training data generated using image-landing page pairs that are already associated with ground truth or known values of the relevance score.
The patent shows an example of the machine learning model generating a relevance score for a particular image search result from an image, landing page, and query features. In this image, a searcher submits an image search query. The system generates image query features based on the user-submitted image search query.
That system also learns about landing page features for the landing page that has been identified by the particular image search result as well as image features for the image identified by that image search result.
The image search system would then provide the query features, the landing page features, and the image features as input to the machine learning model.
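The patent doesn’t publish the model itself, but the input/output shape it describes can be sketched with a toy example: concatenate the query, image, and landing-page features into one vector, train against known relevance scores, then score and rank candidates. Everything below (feature counts, weights, labels) is invented purely for illustration; the patent’s actual model could be a deep neural network, random forest, and so on.

```python
# Illustrative sketch only: a stand-in for the patent's relevance model,
# using the simplest possible learner (a linear model fit by SGD).
import random

random.seed(0)

N_FEATURES = 6  # pretend: 2 query + 2 image + 2 landing-page features

# Synthetic "ground truth": a hidden linear rule maps features to relevance.
true_w = [0.9, -0.2, 0.5, 0.1, 0.7, -0.4]

def label(x):
    return sum(wi * xi for wi, xi in zip(true_w, x))

train = [[random.random() for _ in range(N_FEATURES)] for _ in range(200)]
targets = [label(x) for x in train]

# Train by plain stochastic gradient descent on squared error.
w = [0.0] * N_FEATURES
for _ in range(500):
    for x, t in zip(train, targets):
        err = sum(wi * xi for wi, xi in zip(w, x)) - t
        w = [wi - 0.05 * err * xi for wi, xi in zip(w, x)]

def relevance(x):
    """Predicted relevance score for one (query, image, landing page) input."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Query time: score candidate image-landing page pairs, best first.
candidates = [[random.random() for _ in range(N_FEATURES)] for _ in range(5)]
ranked = sorted(candidates, key=relevance, reverse=True)
```

The point of the sketch is the shape of the pipeline, not the model class: one combined feature vector in, one relevance score out, and a ranking produced by sorting candidates on that score.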
Google may rank image search results based on various factors
Those may be separate signals from:
- Features of the image
- Features of the landing page
- A combination of those separate signals under a fixed weighting scheme that is the same for each received search query
This patent describes how Google would rank image search results in this manner:
- Obtaining many candidate image search results for the image search query
- Each candidate image search result identifies a respective image and a respective landing page for that image
- For each of the candidate image search results, processing:
- Features of the image search query
- Features of the respective image identified by the candidate image search result
- Using a machine learning model to generate a relevance score for each candidate, and ranking the candidates based on those scores
- Generating an image search results presentation that displays the candidate image search results ordered according to the ranking
- Providing the image search results for presentation by a user device
Advantages to Using a Machine Learning Model to Rank Image Search Results
If Google can score image-landing page pairs for relevance using a machine learning model, it can improve the relevance of the image search results returned in response to the image search query.
This differs from conventional methods of ranking resources because the machine learning model receives a single input that includes features of the image search query, the landing page, and the image identified by a given image search result, and predicts the relevance of that search result to the received query.
This process allows the machine learning model to be more dynamic and give more weight to landing page features or image features in a query-specific manner, improving the quality of the image search results that are returned to the user.
By using a machine learning model, the image search engine does not apply the same fixed weighting scheme for landing page features and image features for each received query. Instead, it combines the landing page and image features in a query-dependent manner.
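The contrast between the two approaches can be shown with a toy example. The numbers and the “visual intent” flag are invented; in the real system the query-dependent behavior is learned by the model rather than hard-coded like this.

```python
# Toy contrast between the two weighting approaches described above.
# All weights are made up; a crude if/else stands in for the learned model.

def fixed_score(page_score: float, image_score: float) -> float:
    """Conventional ranking: the same weights for every query."""
    return 0.6 * page_score + 0.4 * image_score

def query_dependent_score(visual_intent: bool,
                          page_score: float,
                          image_score: float) -> float:
    """Machine-learned ranking (stand-in): the effective weights shift
    per query -- a query judged visual leans on image features."""
    w_image = 0.8 if visual_intent else 0.3
    return (1 - w_image) * page_score + w_image * image_score

# The same candidate can score very differently under the two schemes:
print(round(fixed_score(0.9, 0.2), 2))                  # 0.62
print(round(query_dependent_score(True, 0.9, 0.2), 2))  # 0.34
```

A page with a strong landing page but a weak image wins under the fixed scheme, yet drops sharply for a query the model treats as visual, which is exactly the query-dependent behavior the patent claims as an advantage.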
The patent also tells us that a trained machine learning model can easily and optimally adjust weights assigned to various features based on changes to the initial signal distribution or additional features.
In a conventional image search, we are told that significant engineering effort is required to adjust the weights of a traditional manually tuned model based on changes to the initial signal distribution.
But under this patented process, adjusting the weights of a trained machine learning model based on changes to the signal distribution is significantly easier, thus improving the ease of maintenance of the image search engine.
Also, if a new feature is added, a manually tuned system must adjust the function for that feature independently against an objective (i.e., a loss function) while holding the existing feature functions constant.
A trained machine learning model, by contrast, can automatically adjust its feature weights when a new feature is added: it can include the new feature and rebalance all of its existing weights appropriately to optimize for the final objective.
Thus, the accuracy, efficiency, and maintenance of the image search engine can be improved.
The Ranking Image Search Results patent application can be found at:
Ranking Image Search Results Using Machine Learning Models
US Patent Application Number: 16263398
Filed: January 31, 2019
Publication Number: US20200201915
Publication Date: June 25, 2020
Applicants: Google LLC
Inventors Manas Ashok Pathak, Sundeep Tirumalareddy, Wenyuan Yin, Suddha Kalyan Basu, Shubhang Verma, Sushrut Karanjkar, and Thomas Richard Strohmann
Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for ranking image search results using machine learning models. In one aspect, a method includes receiving an image search query from a user device; obtaining a plurality of candidate image search results; for each of the candidate image search results: processing (i) features of the image search query and (ii) features of the respective image identified by the candidate image search result using an image search result ranking machine learning model to generate a relevance score that measures a relevance of the candidate image search result to the image search query; ranking the candidate image search results based on the relevance scores; generating an image search results presentation; and providing the image search results for presentation by a user device.
The Indexing Engine
The search engine may include an indexing engine and a ranking engine.
The indexing engine indexes image-landing page pairs, and adds the indexed image-landing page pairs to an index database.
That is, the index database includes data identifying images and, for each image, a corresponding landing page.
The index database also associates the image-landing page pairs with:
- Features of the image search query
- Features of the images, i.e., features that characterize the images
- Features of the landing pages, i.e., features that characterize the landing page
Optionally, the index database also associates the indexed image-landing page pairs in the collections of image-landing page pairs with values of image search engine ranking signals for the indexed image-landing page pairs.
Each image search engine ranking signal is used by the ranking engine in ranking the image-landing page pair in response to a received search query.
The ranking engine generates respective ranking scores for image-landing page pairs indexed in the index database based on the values of image search engine ranking signals for the image-landing page pair, e.g., signals accessed from the index database or computed at query time, and ranks the image-landing page pair based on the respective ranking scores. The ranking score for a given image-landing page pair reflects the relevance of the image-landing page pair to the received search query, the quality of the given image-landing page pair, or both.
The image search engine can use a machine learning model to rank image-landing page pairs in response to received search queries.
The machine learning model is configured to receive an input that includes:
(i) features of the image search query,
(ii) features of an image, and
(iii) features of the landing page of the image,
and to generate a relevance score that measures the relevance of the candidate image search result to the image search query.
Once the machine learning model generates the relevance score for the image-landing page pair, the ranking engine can then use the relevance score to generate ranking scores for the image-landing page pair in response to the received search query.
The Ranking Engine behind the Process to Rank Image Search Results
In some implementations, the ranking engine generates an initial ranking score for each of multiple image-landing page pairs using the signals in the index database.
The ranking engine can then select a certain number of the highest-scoring image-landing page pairs for processing by the machine learning model.
The ranking engine can then rank candidate image-landing page pairs based on relevance scores from the machine learning model, or use those relevance scores as additional signals to adjust the initial ranking scores for the candidate image-landing page pairs.
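This two-stage flow (cheap initial scoring over many candidates, expensive model scoring over only the top few) can be sketched in a few lines. The signal names and the cutoff below are my own illustrative stand-ins, not the patent's implementation.

```python
# Hypothetical sketch of two-stage ranking: stage 1 scores all candidates
# with cheap index signals; stage 2 re-scores only the top-k with a more
# expensive learned model. Signal names are invented for illustration.

def initial_score(pair):
    # Stand-in for cheap signals precomputed in the index.
    return pair["index_signal"]

def model_relevance(pair):
    # Stand-in for the machine learning model's relevance score.
    return pair["model_signal"]

def rank(pairs, k=2):
    # Stage 1: rank everything by the initial score.
    by_initial = sorted(pairs, key=initial_score, reverse=True)
    head, tail = by_initial[:k], by_initial[k:]
    # Stage 2: re-rank only the top-k candidates with the learned model.
    head = sorted(head, key=model_relevance, reverse=True)
    return head + tail

candidates = [
    {"id": "a", "index_signal": 0.9, "model_signal": 0.2},
    {"id": "b", "index_signal": 0.8, "model_signal": 0.7},
    {"id": "c", "index_signal": 0.1, "model_signal": 0.9},
]
print([p["id"] for p in rank(candidates)])  # -> ['b', 'a', 'c']
```

Note that "c" stays last even though the model likes it best: it never survived stage 1, which is the efficiency trade-off this design makes.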
The machine learning model would receive a single input that includes features of the image search query, the landing page, and the image to predict the relevance (i.e., the relevance score) of the particular image search result to the user’s image query.
We are told that this allows the machine learning model to give more weight to landing page features, image features, or image search query features in a query-specific manner, which can improve the quality of the image search results returned to the user.
Features That May Be Used from Images and Landing Pages to Rank Image Search Results
The first step is to receive the image search query.
Once that happens, the image search system may identify initial image-landing page pairs that satisfy the image search query.
It would do that by selecting from pairs that are indexed in a search engine index database, based on signals measuring the quality of the pairs, the relevance of the pairs to the search query, or both.
For those pairs, the search system identifies:
- Features of the image search query
- Features of the image
- Features of the landing page
Features Extracted From the Image
These features can include vectors that represent the content of the image.
Vectors to represent the image may be derived by processing the image through an embedding neural network.
Or those vectors may be generated through other image-processing techniques for feature extraction. Examples of feature-extraction techniques include edge, corner, ridge, and blob detection. Feature vectors can also include vectors generated using shape-extraction techniques (e.g., thresholding, template matching, and so on). Instead of, or in addition to, the feature vectors, when the machine learning model is a neural network, the features can include the pixel data of the image.
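To make the idea of an image feature vector concrete, here is a toy stand-in. A real system would use an embedding neural network; a simple intensity histogram (my own example, not from the patent) just illustrates turning pixels into a fixed-length vector.

```python
# Toy stand-in for an image feature vector: a normalized grayscale
# intensity histogram turns any number of pixels into a fixed-length
# vector, the same role an embedding network's output plays.

def intensity_histogram(pixels, bins=4):
    """pixels: iterable of grayscale values in [0, 255]."""
    counts = [0] * bins
    vals = list(pixels)
    for p in vals:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(vals)
    return [c / total for c in counts]

print(intensity_histogram([0, 10, 130, 250]))  # -> [0.5, 0.0, 0.25, 0.25]
```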
Features Extracted From the Landing Page
These aren’t the kinds of features that I have historically thought about when optimizing images. These features can include:
- The date the page was first crawled or updated
- Data characterizing the author of the landing page
- The language of the landing page
- Features of the domain that the landing page belongs to
- Keywords representing the content of the landing page
- Features of the links to the image and landing page such as the anchor text or source page for the links
- Features that describe the context of the image in the landing page
- And so on
Features Extracted From The Landing Page That Describe The Context of the Image in the Landing Page
The patent interestingly separated these features out:
- Data characterizing the location of the image within the landing page
- Prominence of the image on the landing page
- Textual descriptions of the image on the landing page
More Details on the Context of the Image on the Landing Page
The patent points out some alternative ways that the location of the image within the Landing Page might be found:
- Using pixel-based geometric location in horizontal and vertical dimensions
- User-device based length (e.g., in inches) in horizontal and vertical dimensions
- An HTML/XML DOM-based XPATH-like identifier
- A CSS-based selector
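As a rough sketch of the DOM-based option, the following stdlib-only code records an XPath-like path for each image on a page. This is my own simplification, not the patent's method.

```python
# Illustrative only: deriving a DOM-path identifier for each <img> on a
# landing page, one of the location encodings the patent mentions.
from html.parser import HTMLParser

class ImagePathFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []        # open-tag path from the document root
        self.image_paths = []  # XPath-like path for each <img> found

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # <img> is a void element, so record its path without pushing it.
            self.image_paths.append("/".join(self.stack + [tag]))
        else:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

page = "<html><body><div><p><img src='cat.jpg'></p></div></body></html>"
finder = ImagePathFinder()
finder.feed(page)
print(finder.image_paths)  # -> ['html/body/div/p/img']
```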
The prominence of the image on the landing page can be measured using the relative size of the image as displayed on a generic device and a specific user device.
The textual descriptions of the image on the landing page can include alt-text labels for the image, text surrounding the image, and so on.
Features Extracted from the Image Search Query
The features from the image search query can include:
- Language of the search query
- Some or all of the terms in the search query
- Time that the search query was submitted
- Location from which the search query was submitted
- Data characterizing the user device from which the query was received
- And so on
How the Features from the Query, the Image, and the Landing Page Work Together
- The features may be represented categorically or discretely
- Additional relevant features can be created through pre-existing features (Relationships may be created between one or more features through a combination of addition, multiplication, or other mathematical operations.)
- For each image-landing page pair, the system processes the features using an image search result ranking machine learning model to generate a relevance score output
- The relevance score measures the relevance of the candidate image search result to the image search query (i.e., the relevance score measures the likelihood that a user submitting the search query would click on or otherwise interact with the search result; a higher relevance score indicates that the user submitting the search query would find the candidate image search result more relevant and click on it)
- The relevance score of the candidate image search result can be a prediction of a score generated by a human rater to measure the quality of the result for the image search query
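The idea of deriving new features from existing ones through addition or multiplication can be made concrete with a tiny example. The feature names below are invented for illustration; the patent does not list specific crosses.

```python
# Sketch of a feature cross: a new feature built by multiplying existing
# ones. Feature names here are hypothetical, not from the patent.

def cross_features(feats):
    crossed = dict(feats)
    # Multiplicative cross: "does the page's language match the query's
    # language?" scaled by how prominent the image is on the page.
    crossed["lang_match_x_prominence"] = (
        float(feats["query_lang"] == feats["page_lang"])
        * feats["image_prominence"]
    )
    return crossed

example = {"query_lang": "en", "page_lang": "en", "image_prominence": 0.8}
print(cross_features(example)["lang_match_x_prominence"])  # -> 0.8
```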
Adjusting Initial Ranking Scores
The system may adjust initial ranking scores for the image search results based on the relevance scores to:
- Promote search results having higher relevance scores
- Demote search results having lower relevance scores
- Or both
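A minimal sketch of what that adjustment could look like: the blend formula and the 0.5 weight are my own assumptions for illustration, since the patent does not specify how the adjustment is computed.

```python
# Hedged sketch: the model's relevance score nudges an initial ranking
# score up or down. The blend weight and the 0.5 midpoint are arbitrary
# illustrative choices, not values from the patent.

def adjust(initial_score, relevance_score, weight=0.5):
    # Scores above the midpoint promote the result; below it, demote.
    return initial_score + weight * (relevance_score - 0.5)

print(adjust(0.6, 0.9))  # promoted: 0.6 + 0.5 * 0.4
print(adjust(0.6, 0.1))  # demoted:  0.6 - 0.5 * 0.4
```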
Training a Ranking Machine Learning Model to Rank Image Search Results
The system receives a set of training image search queries.
For each training image search query, it receives training image search results that are each associated with a ground truth relevance score.
A ground truth relevance score is the relevance score that should be generated for the image search result by the machine learning model (i.e., when the relevance scores measure a likelihood that a user would select a search result in response to a given search query, each ground truth relevance score can identify whether a user submitting the given search query selected the image search result or a proportion of times that users submitting the given search query select the image search result.)
The patent provides another example of how ground-truth relevance scores might be generated:
When the relevance scores generated by the model are a prediction of a score assigned to an image search result by a human, the ground truth relevance scores are actual scores assigned to the search results by human raters.
For each of the training image search queries, the system may generate features for each associated image-landing page pair.
For each of those pairs, the system may identify:
(i) features of the image search query
(ii) features of the image and
(iii) features of the landing page.
We are told that extracting, generating, and selecting features may take place before training or using the machine learning model. Examples of features are the ones I listed above related to the images, landing pages, and queries.
The ranking engine trains the machine learning model by processing, for each image search query:
- Features of the image search query
- Features of the respective image identified by the candidate image search result
- Features of the respective landing page identified by the candidate image search result, along with the respective ground truth relevance score that measures the relevance of the candidate image search result to the image search query
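The training setup described above resembles standard pointwise learning-to-rank. Here is a simplified sketch: a linear model fit with squared loss against ground-truth relevance scores. The patent covers many possible model types; this is only the simplest illustrative case, and the feature values are invented.

```python
# Illustrative pointwise training loop (not the patent's implementation):
# the model sees a vector concatenating query, image, and landing-page
# features, and is fit to a ground-truth relevance score such as an
# observed click-through proportion.

def train_pointwise(examples, lr=0.1, steps=500):
    """examples: list of (feature_vector, ground_truth_relevance)."""
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(steps):
        for x, y in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y  # gradient of squared loss w.r.t. pred
            for i in range(n):
                w[i] -= lr * err * x[i]
    return w

# Each vector concatenates (query, image, landing-page) features.
examples = [([1.0, 0.2, 0.9], 0.8), ([0.1, 0.9, 0.2], 0.1)]
w = train_pointwise(examples)
pred = sum(wi * xi for wi, xi in zip(w, examples[0][0]))
print(round(pred, 2))  # close to the 0.8 ground-truth score
```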
The patent provides some specific implementation processes that might differ based upon the machine learning system used.
Takeaways on Ranking Image Search Results
I’ve provided some information about the kinds of features Google may have used in the past to rank image search results.
Under a machine learning approach, Google may be paying more attention to features from an image query, features from Images, and features from the landing page those images are found upon. The patent lists many of those features, and if you spend time comparing the older features with the ones under the machine learning model approach, you can see there is overlap, but the machine learning approach covers considerably more options.
Copyright © 2020 SEO by the Sea ⚓.
Google today announced a new autofill experience for Chrome on mobile that will use biometric authentication for credit card transactions, as well as an updated built-in password manager that will make signing in to a site a bit more straightforward.
If you’ve ever bought something through the browser on your Android phone, you know that Chrome always asks you to enter the CVC code from your credit card to ensure that it’s really you — even if you have the credit card number stored on your phone. That was always a bit of a hassle, especially when your credit card wasn’t close to you.
Now, you can use your phone’s biometric authentication to buy those new sneakers with just your fingerprint — no CVC needed. Or you can opt out, too, as you’re not required to enroll in this new system.
As for the password manager, the update here is the new touch-to-fill feature that shows you your saved accounts for a given site through a standard Android dialog. That’s something you’re probably used to from your desktop-based password manager already, but it’s definitely a major new built-in convenience feature for Chrome — and the more people opt to use password managers, the safer the web will be. This new feature is coming to Chrome on Android in the next few weeks, but Google says this “is only the start.”
Don’t get me wrong, Google Grants is an amazing “in-kind” gift for qualified 501(c)(3) Nonprofits (especially for those who are utilizing it efficiently). However, times have changed since its inception in 2003, and considering the multi-device environment that we live in, Google should consider adapting their Mobile Network as a viable option for Google Grantees. Maybe call it “GrantsMobile”?
In this post, I will discuss the reasons why Google should revamp their Grants program to be more mobile app friendly.
Nonprofits have been “Going Mobile” for a while
The idea that Nonprofits have become “less savvy” as compared to “For-Profit” organizations is simply not true. Even though nonprofits may not have the big advertising budgets as do for-profit companies, they are savvy enough to “fish where the fish are” in trying to increase awareness, volunteerism and most importantly fundraising. In a Capterra Nonprofit Technology Blog article published back in 2014 entitled “The Essential Guide to Going Mobile for Nonprofits“, author Leah Readings talks about the importance for Nonprofits to be more mobile because it creates a wider range of communication between the organization and its members. Readings also states “Allowing for online donation pages or portals, or donation apps, makes it much easier for your members to donate—when all they have to do is click a few buttons in order to make a donation, giving becomes easier, and in turn will encourage more people to give.“
Need more convincing? In a 2013 article from InternetRetailer.com entitled “Mobile donations triple in 2012” (which was also mentioned in the Capterra article) the author goes on to quote from a fundraising technology and services provider Frontstream (formerly Artez Interactive) which states “nonprofits that offer mobile web sites, apps or both for taking donations generate up to 123% more individual donations per campaign than organizations that don’t.“
Why Google Mobile is Ripe for Nonprofits:
If you have ever done any mobile advertising within Google Adwords (formerly AdMob), you know that the system is pretty robust and is considered one of the best platforms to promote apps on both Google Play and the iTunes store. Moreover, advertisers can easily track engagements and downloads back to the specific audience they are targeting. The costs are also much more affordable than the traditional $1-2 CPC offered to Google Grants accounts, which can only run on Google.com.
Here are the Mobile App Promotion Campaigns by Google Adwords:
Universal App Campaigns:
AdWords creates ads for your Android app in a variety of auto-generated formats to show across the Search, Display and YouTube Networks.
- Ads are generated for you based on creative text you enter, as well as your app details in the Play Store (e.g. your icon and images). These ads can appear on all available networks
- Add an optional YouTube video link for your ads to show on YouTube as well.
Mobile app installs
Increase app downloads with ads sending people directly to app stores to download your app.
- Available for Search Network, Display Network, and YouTube
- Ad formats include standard, image and video app install ads
Mobile app engagement
Re-engage your existing users with ads that deep link to specific screens within your mobile app. Mobile app engagement campaigns are a great choice if you’re focused on finding folks interested in your app content, getting people who have installed your app to try your app again, or to open your app and take a specific action. These types of ads allow flexibility for counting conversions, bidding and targeting.
- Available for Search Network and Display Network campaigns
- Ad formats include standard and image app engagement ads
A lot has changed since 2003 with the birth of Google Grants, and Google needs to continue to be socially responsible and catch up to its own standards of the online world it helped create. Nonprofits are now, more than ever, relying on the internet to drive awareness, volunteerism and fundraising. Nonprofits, as well as everyone else for that matter, are getting their information from Facebook, Twitter, TV, radio and (still) Google using laptops, tablets and mobile devices, and it’s time for Google Grants to adapt to this new world.