
Monthly Archives: November 2020

Five great ways responsive web design benefits your SEO

November 29, 2020

30-second summary:

  • The COVID-19 pandemic has emphasized the practicality of mobile internet for everyday tasks.
  • Mobile web browsing is skyrocketing, fuelled by consumer shopping intent.
  • Half the world’s population now browses the internet on a mobile device, and the UK is expected to reach 61 million mobile internet users by 2021.
  • Google has announced that mobile-first indexing launches in March 2021, with a knock-on effect on desktop-only and m-dot websites.
  • Jos Davies, an SEO expert at UENI, reveals the top five ways to complement your SEO efforts with responsive web design and improve your website’s performance in Google’s SERP.

With mobile internet accessibility and usability growing year-on-year, it is undeniable that websites must meet users’ expectations for a smooth and relevant experience. Moreover, 2020 generated valuable insights into the relevance of mobile devices to people’s lives. Still questioning the importance of responsive web design for your website’s SEO? To dispel that skepticism, Jos Davies, an SEO expert at UENI, reveals the top five ways to complement your SEO efforts with responsive web design and improve your website’s performance in Google’s SERP.

1. Site usability

Mobile visitors are usually impatient, longing for on-the-spot solutions to their needs – which is not to say that desktop visitors like wasting their time! Google backs this up: 53% of mobile users will bounce off a page that takes longer than three seconds to load.

Responsive web design benefits your SEO - Website usability

Source: ThinkwithGoogle

Within a highly competitive market, fast-loading websites stay competitive, while the rest suffer traffic fluctuations and inconsistency as their rankings drop.

Responsive web design helps optimize websites for mobile search, improving your site’s functionality and design by scaling the content to the user’s device, thus providing a consistent user experience across all devices.

Since Google is all for serving users the most relevant results, it will then favor and promote the websites that are providing a good user experience by all means: content, design, and functionality across all devices.

It goes without saying that a drop in traffic harms sales. On top of that, a website that is unresponsive on mobile devices misses out on valuable opportunities to attract customers and convert them.

2. Faster web page loading

Since the Google Speed Update back in 2018, Google has used mobile site speed as a ranking factor in mobile search.

Google’s updates are aligned with user behavior: an increase in mobile device use means a paradigm shift in how Google bots crawl, index, and display results in the SERP to meet people’s needs and expectations, promoting customer satisfaction.

Responsive web design benefits your SEO - Google ranking factors that consider fast page speed load

Source: Uptimiser

In a mobile-centric world, having a mobile responsive design is a fundamental part of a successful SEO strategy. Responsive web design will help you rethink both the layout and the content of your website to offer a smooth user experience across desktop, laptop, tablet, and smartphone without any inconsistencies.

Fail to do so, and your SEO efforts will be compromised; a drop in traffic is foreseeable beyond question once your desktop-only and m-dot website versions are removed from Google’s index, no later than March 2021.

3. Lower bounce rate

The bounce rate reflects the percentage of users who land on a page and leave before continuing their journey through the website. Google takes it into account when weighing the relevancy of a webpage for a given search query.

A high bounce rate will thus cause a drop in ranking, since it can signal thin or irrelevant content, or a poorly designed website, purely from the way users interact with the page.

Responsive web design benefits your SEO - Reduced bounce rate

Source: ThinkwithGoogle

It is still safe to say that content is king, but keeping up to speed with the latest tech insights, content will only remain king if it is properly optimized for all devices.

Good content can only do so much if it is not supported by an appealing design. Responsive web design does just that, adjusting the layout of the page to any device while displaying the same content.

4. Boosted social sharing

Social media is not a ranking factor, but that doesn’t make it less important to your overall marketing strategy. It plays an instrumental role alongside an SEO campaign – the two complement each other and help you grow website traffic.

Responsive web design makes content sharing accessible across all social platforms, expanding your audience.

How? By making it easier for site visitors to access the same content on desktop and mobile devices, share it with their peers, and on their social media profiles. This uncovers great opportunities to reach a wider audience.

Responsive web design benefits your SEO - Boosts social sharing

Source: GlobalWebIndex

More traffic means more chances for your visitors to convert. Now, more than ever, responsive web design is the foundation that makes it possible for you to boost your sales. What if a desktop user shares a link to a mobile user and the website is unresponsive? Or, imagine your visitors struggling to find the share button, and simply giving up. This robs you of the opportunity to expand your potential consumer market and get more traffic.

5. No duplicate content

With the rise of mobile device use, most websites built a separate mobile version, but this approach often raises duplicate content issues. Why? Because highly similar content ends up appearing on more than one URL.

Less duplicate content

Source: Statcounter

Due to the duplicated nature of the content, Google bots cannot tell which version should be indexed, whether one version should absorb all link metrics or they should be kept separate, or which version should rank for a given search query. Although the chances of receiving a Google penalty are low, this doesn’t mean that your rankings will not be affected.

Implementing a responsive web design helps solve your duplicate content issues by using one URL across devices and adapting the layout and content to fit any screen size, all while offering a consistent and pleasant user experience.
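As a quick illustration of the difference, here is a minimal Python sketch (my own, not from the article) that checks whether a page advertises a separate m-dot mobile version via the rel="alternate" annotation that separate-URL configurations typically carry on the desktop page; a responsive site serves one URL and needs no such annotation. The URL is a placeholder.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkTagParser(HTMLParser):
    """Collects the attributes of every <link> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            self.links.append(dict(attrs))

def detect_mobile_setup(url):
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkTagParser()
    parser.feed(html)
    for link in parser.links:
        rel = (link.get("rel") or "").lower()
        # Separate-URL setups annotate the desktop page with something like
        # <link rel="alternate" media="only screen and (max-width: 640px)" ...>
        if rel == "alternate" and "max-width" in (link.get("media") or ""):
            return f"separate mobile URL found: {link.get('href')}"
    return "no m-dot annotation found; likely responsive (one URL)"

print(detect_mobile_setup("https://example.com/"))  # placeholder URL
```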

What is the takeaway?

Google says, webmasters do! If Google announces an algorithm change, we follow. And Google recommends responsive web design for a smooth transition to the mobile-first index, with a series of benefits reflected in your website’s overall SEO performance.

Jos Davies is an SEO expert at UENI.

The post Five great ways responsive web design benefits your SEO appeared first on Search Engine Watch.



The Absolute Best Black Friday Deals Online This Weekend

November 29, 2020

Stay at home! Here are the very best discounts we’ve found in every category and at all the major retailers.


Adjusting Featured Snippet Answers by Context

November 29, 2020

How Are Featured Snippet Answers Decided Upon?

I recently wrote about Featured Snippet Answer Scores Ranking Signals. In that post, I described how Google was likely using query-dependent and query-independent ranking signals to create answer scores for queries that appear to be seeking answers.

One of the inventors of that patent was Steven Baker. I looked at other patents that he had written, and noticed that one of those was about context as part of query-independent ranking signals for answers.

Remembering that patent about question-answering and context, I felt it was worth reviewing that patent and writing about it.

This patent is about processing question queries that want textual answers and how those answers may be decided upon.

It is a complicated patent, and at one point the description behind it gets a bit murky, but I point out where that happens below, and I think the other details provide a lot of insight into how Google scores featured snippet answers. There is an additional related patent that I will be following up with after this post, and I will link to it from here as well.

This patent starts by telling us that a search system can identify resources in response to queries submitted by users and provide information about the resources in a manner that is useful to the users.

How Context Scoring Adjustments for Featured Snippet Answers Works

Users of search systems are often searching for an answer to a specific question, rather than a listing of resources, like in this drawing from the patent, showing featured snippet answers:

featured snippet answers

For example, users may want to know what the weather is in a particular location, a current quote for a stock, the capital of a state, etc.

When queries that are in the form of a question are received, some search engines may perform specialized search operations in response to the question format of the query.

For example, some search engines may provide information responsive to such queries in the form of an “answer,” such as information provided in the form of a “one box” to a question, which is often a featured snippet answer.

Some question queries are better served by explanatory answers, which are also referred to as “long answers” or “answer passages.”

For example, for the question query [why is the sky blue], an answer explaining light as waves is helpful.

featured snippet answers - why is the sky blue

Such answer passages can be selected from resources that include text, such as paragraphs, that are relevant to the question and the answer.

Sections of the text are scored, and the section with the best score is selected as an answer.

In general, the patent tells us about one aspect of what it covers in the following process:

  • Receiving a query that is a question query seeking an answer response
  • Receiving candidate answer passages, each passage made of text selected from a text section subordinate to a heading on a resource, with a corresponding answer score
  • Determining a hierarchy of headings on a page, with two or more heading levels hierarchically arranged in parent-child relationships, where each heading level has one or more headings, a subheading of a respective heading is the child heading in that relationship, and the heading hierarchy includes a root level corresponding to a root heading – done for each candidate answer passage
  • Determining a heading vector describing a path in the hierarchy of headings from the root heading to the respective heading to which the candidate answer passage is subordinate
  • Determining a context score based, at least in part, on the heading vector
  • Adjusting the answer score of the candidate answer passage, at least in part by the context score, to form an adjusted answer score
  • Selecting an answer passage from the candidate answer passages based on the adjusted answer scores
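To make those moving parts concrete, here is a minimal Python sketch of the data involved. The names and the multiplicative adjustment are my own assumptions – the patent only says the answer score is adjusted "at least in part" by the context score:

```python
from dataclasses import dataclass

@dataclass
class CandidateAnswerPassage:
    text: str
    heading_vector: list  # path of headings, root heading first
    answer_score: float   # from query-dependent/independent signals

def adjusted_score(passage: CandidateAnswerPassage, context_score: float) -> float:
    # A multiplicative boost is one plausible reading of "adjusted
    # at least in part by the context score".
    return passage.answer_score * context_score

def select_answer(passages, context_scores):
    # Pick the candidate with the highest adjusted answer score.
    best = max(zip(passages, context_scores),
               key=lambda pc: adjusted_score(pc[0], pc[1]))
    return best[0]
```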

Advantages of the process in the patent

  1. Long query answers can be selected, based partially on context signals indicating answers relevant to a question
  2. The context signals may be, in part, query-independent (i.e., scored independently of their relatedness to terms of the query)
  3. This part of the scoring process considers the context of the document (“resource”) in which the answer text is located, accounting for relevancy signals that may not otherwise be accounted for during query-dependent scoring
  4. Following this approach, long answers that are more likely to satisfy a searcher’s informational need are more likely to appear as answers

This patent can be found at:

Context scoring adjustments for answer passages
Inventors: Nitin Gupta, Srinivasan Venkatachary, Lingkun Chu, and Steven D. Baker
US Patent: 9,959,315
Granted: May 1, 2018
Appl. No.: 14/169,960
Filed: January 31, 2014

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for context scoring adjustments for candidate answer passages.

In one aspect, a method includes scoring candidate answer passages. For each candidate answer passage, the system determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading to which the candidate answer passage is subordinate; determines a context score based, at least in part, on the heading vector; and adjusts answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.

The system then selects an answer passage from the candidate answer passages based on the adjusted answer scores.

Using Context Scores to Adjust Answer Scores for Featured Snippets

A drawing from the patent shows different hierarchical headings that may be used to determine the context of answer passages that may be used to adjust answer scores for featured snippets:

Hierarchical headings for featured snippets

I discuss these headings and their hierarchy below. Note that the headings include the page title as a heading (About the Moon) as well as the headings within heading elements on the page. Those headings give the answers context.

This context scoring process starts with receiving candidate answer passages and a score for each of the passages.

Those candidate answer passages and their respective scores are provided to a search engine that receives a query determined to be a question.

Each of those candidate answer passages is text selected from a text section under a particular heading from a specific resource (page) that has a certain answer score.

For each resource where a candidate answer passage has been selected, a context scoring process determines a heading hierarchy in the resource.

A heading is text or other data corresponding to a particular passage in the resource.

As an example, a heading can be text summarizing a section of text that immediately follows the heading (the heading describes what the text is about that follows it, or is contained within it.)

Headings may be indicated, for example, by specific formatting data, such as heading elements using HTML.

This next section from the patent reminded me of an observation that Cindy Krum of Mobile Moxie has about named anchors on a page, and how Google might index those to answer a question, to lead to an answer or a featured snippet. She wrote about those in What the Heck are Fraggles?

A heading could also be anchor text for an internal link (within the same page) that links to an anchor and corresponding text at some other position on the page.

A heading hierarchy could have two or more heading levels that are hierarchically arranged in parent-child relationships.

The first level, or the root heading, could be the title of the resource.

Each of the heading levels may have one or more headings, and a subheading of a respective heading is a child heading and the respective heading is a parent heading in the parent-child relationship.

For each candidate passage, a context scoring process may determine a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.

The context scoring process could be used to determine the context score and determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.

The context score could be based, at least in part, on the heading vector.

The context scoring process can then adjust the answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.

The context scoring process can then select an answer passage from the candidate answer passages based on adjusted answer scores.

This flowchart from the patent shows the context scoring adjustment process:

context scoring adjustment flowchart

Identifying Question Queries And Answer Passages

I’ve written about understanding the context of answer passages. The patent tells us more about question queries and answer passages, and it is worth going over in more detail.

Some queries are in the form of a question or an implicit question.

For example, the query [distance of the earth from the moon] is in the form of an implicit question “What is the distance of the earth from the moon?”

An implicit question - the distance from the earth to the moon

Likewise, a question may be specific, as in the query [How far away is the moon].

The search engine includes a query question processor that uses processes that determine if a query is a query question (implicit or specific) and if it is, whether there are answers that are responsive to the question.

The query question processor can use several different algorithms to determine whether a query is a question and whether there are particular answers responsive to the question.

For example, to determine question queries and answers, it may use (a toy stand-in is sketched after this list):

  • Language models
  • Machine learned processes
  • Knowledge graphs
  • Grammars
  • Combinations of those
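The patent does not publish those classifiers, so this Python sketch is only a naive pattern-based stand-in to show the shape of the task; real systems are far more sophisticated than two regular expressions:

```python
import re

# Toy stand-in for the patent's classifiers (language models,
# machine-learned processes, knowledge graphs, grammars).
EXPLICIT = re.compile(r"^(how|what|why|when|where|who|which)\b", re.I)
IMPLICIT = re.compile(r"\b(distance|height|age|capital|population)\s+of\b", re.I)

def looks_like_question_query(query: str) -> bool:
    return bool(EXPLICIT.search(query) or IMPLICIT.search(query))

print(looks_like_question_query("how far away is the moon"))             # True
print(looks_like_question_query("distance of the earth from the moon"))  # True
```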

The query question processor may choose candidate answer passages in addition to, or instead of, answer facts. For example, for the query [how far away is the moon], an answer fact is 238,900 miles – the average distance of the Earth from the moon – and the search engine may just show that factual information.

But the query question processor may also choose to identify passages that appear to be very relevant to the question query.

These passages are called candidate answer passages.

The answer passages are scored, and one passage is selected based on these scores and provided in response to the query.

An answer passage may be scored, and that score may be adjusted based on a context, which is the point behind this patent.

Often Google will identify several candidate answer passages that could be used as featured snippet answers.

Google may look at the information on the pages where those answers come from to better understand the context of the answers such as the title of the page, and the headings about the content that the answer was found within.

Contextual Scoring Adjustments for Featured Snippet Answers

The query question processor sends the context scoring processor some candidate answer passages, information about the resource each answer passage came from, and a score for each of the featured snippet answers.

The scores of the candidate answer passages could be based on the following considerations:

  • Matching a query term to the text of the candidate answer passage
  • Matching answer terms to the text of the candidate answer passages
  • The quality of the underlying resource from which the candidate answer passage was selected

I recently wrote about featured snippet answer scores, and how a combination of query dependent and query independent scoring signals might be used to generate answer scores for answer passages.

The patent tells us that the query question processor may also take into account other factors when scoring candidate answer passages.

Candidate answer passages can be selected from the text of a particular section of the resource. And the query question processor could choose more than one candidate answer passage from a text section.

We are given the following examples of different answer passages from the same page

(These example answer passages are referred to in a few places in the remainder of the post.)

  • (1) It takes about 27 days (27 days, 7 hours, 43 minutes, and 11.6 seconds) for the Moon to orbit the Earth at its orbital distance
  • (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
  • (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles

Each of those could be a good answer for Google to use. We are told that:

More than three candidate answers can be selected from the resource, and more than one resource can be processed for candidate answers.

How would Google choose between those three possible answers?

Google might decide based on the number of sentences and a selection of up to a maximum number of characters.

The patent tells us this about choosing between those answers:

Each candidate answer has a corresponding score. For this example, assume that candidate answer passage (2) has the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1). Thus, without the context scoring processor, candidate answer passage (2) would have been provided in the answer box of FIG. 2. However, the context scoring processor takes into account the context of the answer passages and adjusts the scores provided by the query question processor.

So, we see that what might be chosen based on featured snippet answer scores could be adjusted based on the context of that answer from the page that it appears on.

Contextually Scoring Featured Snippet Answers

This process begins with a query determined to be a question query seeking an answer response.

This process next receives candidate answer passages, each candidate answer passage chosen from the text of a resource.

Each of the candidate answer passages is text chosen from a text section that is subordinate to a respective heading (under a heading) in the resource and has a corresponding answer score.

For example, the query question processor provides the candidate answer passages, and their corresponding scores, to the context scoring processor.

A Heading Hierarchy to Determine Context

This process then determines a heading hierarchy from the resource.

The heading hierarchy would have two or more heading levels hierarchically arranged in parent-child relationships (such as a page title and an HTML heading element).

Each heading level has one or more headings.

A subheading of a respective heading is a child heading (an (h2) heading might be a subheading of a (title)) in the parent-child relationship and the respective heading is a parent heading in the relationship.

The heading hierarchy includes a root level corresponding to a root heading.

The context scoring processor can process heading tags in a DOM tree to determine a heading hierarchy.

hierarchical headings for featured snippets

For example, concerning the drawing about the distance to the moon just above, the heading hierarchy for the resource may be:

  • ROOT (title): About The Moon (310)
      • H1: The Moon’s Orbit (330)
          • H2: How long does it take for the Moon to orbit Earth? (334)
          • H2: The distance from the Earth to the Moon (338)
      • H1: The Moon (360)
          • H2: Age of the Moon (364)
          • H2: Life on the Moon (368)

Here is how the patent describes this heading hierarchy:

In this heading hierarchy, The title is the root heading at the root level; headings 330 and 360 are child headings of the heading, and are at a first level below the root level; headings 334 and 338 are child headings of the heading 330, and are at a second level that is one level below the first level, and two levels below the root level; and headings 364 and 368 are child headings of the heading 360 and are at a second level that is one level below the first level, and two levels below the root level.

The process from the patent determines a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.

This score may be based on a heading vector.

The patent says that the process, for each of the candidate answer passages, determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.

The heading vector would include the text of the headings for the candidate answer passage.

For the example candidate answer passages (1)-(3) above, the corresponding heading vectors V1, V2, and V3 are:

  • V1=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: How long does it take for the Moon to orbit the Earth?]>
  • V2=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>
  • V3=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>

We are also told that because candidate answer passages (2) and (3) are selected from the same text section 340, their respective heading vectors V2 and V3 are the same (they are both in the content under the same (H2) heading.)
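Here is a minimal Python sketch of how heading vectors might be extracted from HTML, in the spirit of the patent's DOM-tree processing. This is my own illustration, not code from the patent; it assumes well-formed <title>/<h1>/<h2> markup and treats each text block as subordinate to the most recent heading:

```python
from html.parser import HTMLParser

LEVELS = {"title": 0, "h1": 1, "h2": 2, "h3": 3}

class HeadingVectorBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.path = []       # current heading path, root first
        self.current = None  # heading tag whose text we are reading
        self.vectors = []    # (heading_vector, passage_text) pairs

    def handle_starttag(self, tag, attrs):
        if tag in LEVELS:
            self.current = tag

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self.current:
            # Truncate the path to this heading's level, then append it.
            level = LEVELS[self.current]
            self.path = self.path[:level] + [f"{self.current}: {text}"]
            self.current = None
        elif self.path:
            self.vectors.append((list(self.path), text))

html = """<title>About The Moon</title>
<h1>The Moon's Orbit</h1>
<h2>How long does it take for the Moon to orbit Earth?</h2>
<p>It takes about 27 days for the Moon to orbit the Earth...</p>"""

builder = HeadingVectorBuilder()
builder.feed(html)
for vector, passage in builder.vectors:
    print(vector, "->", passage[:40])
```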

The process of adjusting a score, for each answer passage, uses a context score based, at least in part, on the heading vector (410).

That context score can be a single score used to scale the candidate answer passage score or can be a series of discrete scores/boosts that can be used to adjust the score of the candidate answer passage.

Where Things Get Murky in This Patent

There do seem to be several related patents involving featured snippet answers, and this one which targets learning more about answers from their context based on where they fit in a heading hierarchy makes sense.

But, I’m confused by how the patent tells us that one answer based on the context would be adjusted over another one.

The first issue I have is that the answers they are comparing in the same contextual area have some overlap. Here those two are:

  • (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
  • (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles

Note that the second answer and the third answer both include the same line: “Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles.” I find myself a little surprised that the second answer includes a couple of sentences that aren’t in the third answer, and skips a couple of lines from the third answer, and then includes the last sentence, which answers the question.

Since they both appear in the same heading and subheading section of the page they are from, it is difficult to imagine that there is a different adjustment based on context. But, the patent tells us differently:

The candidate answer passage with the highest adjusted answer score (based on context from the headings) is selected and provided as the answer passage.

Recall that in the example above, the candidate answer passage (2) had the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1).

However, after adjustments, candidate answer passage (3) has the highest score, followed by candidate answer passage (2), and then candidate answer passage (1).

Accordingly, candidate answer passage (3) is selected and provided as the answer passage of FIG. 2.

Boosting Scores Based on Passage Coverage Ratio

A query question processor may limit the candidate answers to a maximum length.

The context scoring processor determines a coverage ratio: a measure of how much of the text from which the candidate answer passage was selected is covered by that passage.

The patent describes alternatives for the text block used in this calculation:

Alternatively, the text block may include text sections subordinate to respective headings that include a first heading for which the text section from which the candidate answer passage was selected is subordinate, and sibling headings that have an immediate parent heading in common with the first heading. For example, for the candidate answer passage, the text block may include all the text in the portion 380 of the hierarchy; or may include only the text of the sections, of some other portion of text within the portion of the hierarchy. A similar block may be used for the portion of the hierarchy for candidate answer passages selected from that portion.

A small coverage ratio may indicate a candidate answer passage is incomplete. A high coverage ratio may indicate the candidate answer passage captures more of the content of the text passage from which it was selected. A candidate answer passage may receive a context adjustment, depending on this coverage ratio.

A passage coverage ratio is the ratio of the total number of characters in the candidate answer passage to the total number of characters in the passage from which the candidate answer passage was selected.

The passage coverage ratio could also be the ratio of the total number of sentences (or words) in the candidate answer passage to the total number of sentences (or words) in the passage from which the candidate answer passage was selected.

We are told that other ratios can also be used.

Of the three example candidate answer passages (1)-(3) about the distance to the moon above, passage (1) has the highest ratio, passage (2) the second-highest, and passage (3) the lowest.

This process determines whether the coverage ratio is less than a threshold value. That threshold value can be, for example, 0.3, 0.35, or 0.4, or some other fraction. In our “distance to the moon” example, each passage coverage ratio meets or exceeds the threshold value.

If the coverage ratio is less than the threshold value, then the process selects a first answer boost factor. The first answer boost factor might be proportional to the coverage ratio according to a first relation, or may be a fixed value, or may be a non-boosting value (e.g., 1.0).

But if the coverage ratio is not less than the threshold value, the process selects a second answer boost factor. The second answer boost factor may be proportional to the coverage ratio according to a second relation, or may be a fixed value, or may be a value greater than the non-boosting value (e.g., 1.1).
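A minimal sketch of that logic in Python, using the patent's example values (a 0.4 threshold, a 1.0 non-boosting value, and a 1.1 boost); the fixed-value choice is just one of the options the patent allows:

```python
def passage_coverage_ratio(candidate: str, source_passage: str) -> float:
    # Character-based ratio; the patent also allows sentence or word counts.
    return len(candidate) / len(source_passage)

def coverage_boost(ratio: float, threshold: float = 0.4) -> float:
    if ratio < threshold:
        return 1.0  # non-boosting value for seemingly incomplete passages
    return 1.1      # boost for passages covering more of the source text
```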

Scoring Based on Other Features

The context scoring process can also check for the presence of features in addition to those described above.

Three example features for contextually scoring an answer passage are distinctive text, a preceding question, and a list format.

Distinctive text

Distinctive text is text that stands out because it is formatted differently from the surrounding text, for example through bolding.

A Preceding Question

A preceding question is a question in the text that precedes the candidate answer passage.

The search engine may process various amounts of text to detect the question. It may check only the passage from which the candidate answer passage was extracted, or a wider text window that includes header text and other text from other sections.

A boost score that is inversely proportional to the text distance from a question to the candidate answer passage is calculated, and the check is terminated at the occurrence of a first question.

That text distance may be measured in characters, words, or sentences, or by some other metric.

If the question is anchor text for a section of text and there is intervening text, such as in the case of a navigation list, then the question is determined to only precede the text passage to which it links, not precede intervening text.

In the drawing above about the moon, there are two questions in the resource: “How long does it take for the Moon to orbit Earth?” and “Why is the distance changing?”

The first question–“How long does it take for the Moon to orbit Earth?”– precedes the first candidate answer passage by a text distance of zero sentences, and it precedes the second candidate answer passage by a text distance of five sentences.

And the second question–“Why is the distance changing?”– precedes the third candidate answer by zero sentences.

If a preceding question is detected, then the process selects a question boost factor.

This boost factor may depend on the text distance, on whether the question appears in a text passage subordinate to a header or is itself a header, and, if the question is in a header, on whether the candidate answer passage is subordinate to that header.

Considering these factors, the third candidate answer passage receives the highest boost factor, the first candidate answer receives the second-highest boost factor, and the second candidate answer receives the smallest boost factor.
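As a sketch of the "inversely proportional to text distance" idea, here is one possible shape in Python; the 1/(1+d) form and the 0.2 base are my assumptions, not values from the patent:

```python
def question_boost(distance_in_sentences: int, base: float = 0.2) -> float:
    # Boost shrinks as the question gets farther from the passage.
    return 1.0 + base / (1.0 + distance_in_sentences)

# Distances from the patent's moon example:
print(question_boost(0))  # question 0 sentences away (first/third passages) -> 1.2
print(question_boost(5))  # question 5 sentences away (second passage) -> ~1.03
```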

Conversely, if a preceding question is not detected, or after the question boost factor is selected, the process then checks for the presence of a list.

The Presence of a List

A list is an indication of several steps, usually instructive or informative. The detection of a list may be subject to the question query being a step modal query.

A step modal query is a query where a list-based answer is likely to be a good answer. Examples of step modal queries are queries like:

  • [How to . . . ]
  • [How do I . . . ]
  • [How to install a door knob]
  • [How do I change a tire]

The context scoring process may detect lists formed with:

  • HTML tags
  • Micro formats
  • Semantic meaning
  • Consecutive headings at the same level with the same or similar phrases (e.g., Step 1, Step 2; or First; Second; Third; etc.)

The context scoring process may also score a list for quality.

It would look at where a list sits on the page and how link-heavy it is. A list in the center of a page that does not include multiple links to other pages, and whose HREF link text does not occupy a large portion of the list’s text, will be of higher quality than a list at the side of a page that does include multiple links to other pages (indicative of reference lists) and whose HREF link text does occupy a large portion of the list’s text.

If a list is detected, then the process selects a list boost factor.

That list boost factor may be fixed or may be proportional to the quality score of the list.

If a list is not detected, or after the list boost factor is selected, the process ends.

In some implementations, the list boost factor may also be dependent on other feature scores.

If other features, such as coverage ratio, distinctive text, etc., have relatively high scores, then the list boost factor may be increased.

The patent tells us that this is because “the combination of these scores in the presence of a list is a strong signal of a high-quality answer passage.”
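The following Python sketch pulls these list-related pieces together; the step-query patterns, markup checks, and boost numbers are illustrative assumptions rather than anything the patent specifies:

```python
import re

STEP_QUERY = re.compile(r"^how\s+(to|do\s+i|can\s+i)\b", re.I)
LIST_MARKUP = re.compile(r"<(ol|ul)\b", re.I)
STEP_HEADINGS = re.compile(r"\bstep\s+\d+\b", re.I)

def list_boost(query: str, html: str, list_quality: float) -> float:
    if not STEP_QUERY.search(query):
        return 1.0  # only step modal queries qualify for the list boost
    if not (LIST_MARKUP.search(html) or STEP_HEADINGS.search(html)):
        return 1.0  # no list detected
    return 1.0 + 0.3 * list_quality  # proportional to the quality score (0..1)

print(list_boost("how to install a door knob",
                 "<ol><li>Step 1: remove the old knob...</li></ol>", 0.8))
```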

Adjustment of Featured Snippet Answers Scores

Answer scores for candidate answer passages are adjusted by scoring components based on heading vectors, passage coverage ratio, and other features described above.

The scoring process can select the largest boost value from those determined above or can select a combination of the boost values.

Once the answer scores are adjusted, the candidate answer passage with the highest adjusted answer score is selected as the featured snippet answer and is displayed to a searcher.
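The patent leaves the combination step open: the largest boost value, or some combination of the boost values. Both readings, sketched in Python (the multiplicative combination is my own choice):

```python
def combine_boosts(boosts, mode="max"):
    if mode == "max":
        return max(boosts)
    product = 1.0
    for b in boosts:  # simple multiplicative combination
        product *= b
    return product

boosts = [1.1, 1.2, 1.0]  # e.g. coverage, question, and list boosts
answer_score = 0.8        # hypothetical pre-adjustment answer score
print(answer_score * combine_boosts(boosts))             # largest boost
print(answer_score * combine_boosts(boosts, "product"))  # combined boosts
```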

More to Come

I will be reviewing the first patent in this series about candidate answer scores, because it has some additional elements that haven’t been covered in this post or in the post about query-dependent/independent ranking signals for answer scores. If you have been paying attention to how Google answers queries that appear to seek answers, you have likely seen those answers improving in many cases, though some have been really bad. It will be nice to have as complete an idea as we can of how Google decides what might be a good answer to a query, based on the information available to it on the web.

Added October 14, 2020 – I have written about another Google patent on Answer Scores, and it’s worth reading about all of the patents on this topic. The new post is at Weighted Answer Terms for Scoring Answer Passages, and is about the patent Weighted answer terms for scoring answer passages.

It is about identifying questions in resources, and answers for those questions, and describes using term weights as a way to score answer passages (along with the scoring approaches identified in the other related patents, including this one.)

Added October 15, 2020 – I have written a few other posts about answer passages that are worth reading if you are interested in how Google finds questions on pages and answers to those, and scores answer passages to determine which ones to show as featured snippets. I’ve linked to some of those in the body of this post, but here is another one of those posts:

Added October 22, 2020 – I have written up a description of how structured and unstructured data is selected for answer passages, based on specific criteria in the patent on scoring answer passages, in the post Selecting Candidate Answer Passages.



The post Adjusting Featured Snippet Answers by Context appeared first on SEO by the Sea ⚓.




Wall Street needs to relax, as startups show remote work is here to stay

November 28, 2020

We are hearing that a COVID-19 vaccine could be on the way sooner rather than later, and that means we could be returning to normal life some time in 2021. That’s the good news. The perplexing news, however, is that each time some positive news emerges about a vaccine (and believe me, I’m not complaining) Wall Street punishes stocks it thinks benefit from us being stuck at home. That would be companies like Zoom and Peloton.

While I’m not here to give investment advice, I’m confident that these companies are going to be fine even after we return to the office. While we surely pine for human contact, office brainstorming, going out to lunch with colleagues and just meeting and collaborating in the same space, it doesn’t mean we will simply return to life as it was before the pandemic and spend five days a week in the office.

One thing is clear in my discussions with startups born or growing up during the pandemic: They have learned to operate, hire and sell remotely, and many say they will continue to be remote-first when the pandemic is over. Established larger public companies like Dropbox, Facebook, Twitter, Shopify and others have announced they will continue to offer a remote-work option going forward. There are many other such examples.

It’s fair to say that we learned many lessons about working from home over this year, and we will carry them with us whenever we return to school and the office — and some percentage of us will continue to work from home at least some of the time, while a fair number of businesses could become remote-first.

Wall Street reactions

On November 9, news that the Pfizer vaccine was at least 90% effective threw the markets for a loop. The summer trade, in which investors moved capital from traditional, non-tech industries and pushed it into software shares, flipped; suddenly the stocks that had been riding a pandemic wave were losing ground while old-fashioned, even stodgy, companies shot higher.


Enterprise – TechCrunch


Simple guide to creating an expert roundup post that drives website traffic

November 28, 2020

30-second summary:

  • Roundup posts are pieces of content in which a list of selected experts give their insights on the same topic, in short descriptions that include their opinions, predictions, or reviews.
  • Creating an expert roundup post for your website or blog can take some preparation and organizing efforts, but it brings undeniable long-term benefits in terms of traffic, authoritativeness, and peer recognition.
  • In the following guide, I will take you through every step of creating an enticing expert roundup post for your website.

Publishing valuable content is a constant challenge when it comes to the formats and topics to cover. As a blogger, digital marketer, or content creator, you already know how much thought goes into offering your audience fresh, engaging content on a regular basis. Readers appreciate formats they are familiar with and can consume easily. A roundup post is an example of a successful approach to topics of interest in your industry.

Roundup posts are pieces of content in which a list of selected experts give their insights on the same topic, in short descriptions that include their opinions, predictions, or reviews.

Creating an expert roundup post for your website or blog can take some preparation and organizing efforts, but it brings undeniable long-term benefits in terms of traffic, authoritativeness, meaningful relationships, and peer recognition. By gathering a group of experts to answer the same question, you will not only generate relevant content for your website but build a strong relationship basis with experts in your industry.

Having a list of selected experts answer a well-placed question gives you a valuable piece of content that is highly shareable, so let’s see what it takes to do it right. In the following guide, I will take you through every step of creating an enticing expert roundup post for your website.

1. Brainstorm potential questions

The first step you need to take after deciding to publish an expert roundup post is to find the perfect question to ask the experts. This will be the key element of your post, and it will dictate whether it will be successful or not.

The perfect question might not be easy to find, but take enough time to find it. Brainstorm as much as you need before you decide who to invite in. All the further efforts of finding influencers and experts could be in vain if the topic you choose does not fit the roundup format, or doesn’t spark interest in your readers’ minds. So I’d recommend you find a question that resonates well with both your readers and experts.

Things to consider when brainstorming

To better understand what kind of questions are fit for a roundup post, you should picture the end result. You want to have your experts give your readers a piece of their own judgment, advice, or insight on a subject that your readers are familiar with. It won’t be a 101, a critical debate, or brain-picking for ultra-specialized information.

Your question needs to be:

  • Easy enough to give your respondents room to elaborate and to get ample answers from them without the need for follow-up questions.
  • General enough to give you a reasonably long list of influencers, experts, and peers. Go too niche and you might only be able to talk to a handful of people about it.
  • Original enough to get your readers curious about the topic, and what experts have to say about it.

What topic should you ask about? Naturally, it has to be specific to your website or niche and what you usually write about. The key is to find a subject your audience is curious about or interested in. Perhaps a trend, a subject that usually sparks debate, or the behind-the-scenes type of information that regular posts don’t really get into.

You have the chance to get insights about the latest trends everyone is wondering about, or tips and tricks, good practices that expert peers have discovered through their experience and expertise.

Examples of questions:

  • What’s one piece of advice you’d give to beginner bloggers?
  • What’s one thing you would’ve done differently when starting your blog?
  • What do you think the future of blogging is?

How to get ideas

Easier said than done? If you don’t already have a topic you’ve been pondering, compile a list of possible questions for your roundup with a little research.

Use tools like the Ahrefs Content Explorer to find trends in your industry and subjects that seem to attract a lot of readers. You can filter results by social shares and by the number and quality of referring domains, so this tool and similar ones can give you a good idea of which subjects are worth pursuing.

With these things in mind, fuel that creative engine and start putting ideas on paper, whether they seem like perfect candidates or just possibilities. It’s best to have a long list to start from when drafting the winning question for your roundup post.

2. Find talented experts

After finding your question, you should have a good idea about the expertise of your respondents. Assuming that you have been active in your industry for a reasonable time, you should already know who the experts in your niche are. You want to compile an extensive list of at least 50 experts, because not all of them will reply to your inquiry.

Let’s make a profile for the ideal respondent in your expert roundup post:

  • They are directly related to the industry you are writing about
  • They have a good follower base and an audience that regards them as influencers
  • They have contributed to roundup posts in the past
  • They are continually sharing thought-provoking, original ideas on their social media and on personal or business blogs
  • They have authority in the field: company owners and founders, people in top positions at companies in the industry, public speakers, successful bloggers, and more

A practical, fast way of identifying possible candidates for your roundup post is to check other roundup posts in the industry. Does this approach seem lazy at first glance? The redundancy of a roundup article doesn’t come from the list of people contributing to it, but from the topic you choose for it.

As long as you are able to provoke your respondents to bring something original to the table, selecting them from other roundup articles is absolutely fine.

Depending on your topic, you might find tens of experts already showing potential for accepting your invitation. But don’t put all your eggs in one basket: there are other ways of finding strong, authoritative voices in your niche.

A simple search on social media can give you a good idea of who is interested in the topic you have selected and fits the ideal profile described above. Twitter and Facebook are also great platforms where you can find experts in your industry.

For our roundup post about blogging tips for beginners, we have gathered content from CEOs and founders of content marketing websites, authors, bloggers, and podcasters in digital marketing. They were all able to give us valuable insight into what blogging is like for beginners, and what they should do to thrive.

Web search is another simple solution for putting together your expert list. We were able to find several experts by simply typing our keyword or phrase into Google. Find bloggers who have been covering your subject, or similar ones, and dig a bit deeper into their previous posts to get an idea of who you’re going to contact.

Ahrefs, BuzzSumo, and Hootsuite are other awesome tools for researching hot topics and authoritative blogs, as they display real-time data on referring domains, traffic value, and the number of shares posts get.

3. Find their contact information

Once you have a list of experts, bloggers, and influencers who can give you valuable insight into the subject you want to cover, it’s time to start gathering their contact information.

It’s best to keep a database of their information in a simple Google Sheet or Excel spreadsheet with their names, email addresses, website URLs, the date you contacted them, and a column to track whether they submitted content. You can get a little more advanced by using a CRM or an email outreach tool like Mailshake.
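For a minimal version of that tracker, a few lines of Python can generate the sheet; the column names below are my own suggestion, not a prescribed format:

```python
import csv

FIELDS = ["name", "email", "website", "date_contacted", "submitted"]

# Creates a CSV you can open directly in Google Sheets or Excel.
with open("roundup_contacts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "name": "Jane Expert",            # placeholder contact
        "email": "jane@example.com",
        "website": "https://example.com",
        "date_contacted": "2020-11-02",
        "submitted": "no",
    })
```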

Keeping your contact information organized will speed up the preparation process and will help you avoid awkward situations like sending an invitation twice, or forgetting to do a follow-up with them.

Some of the experts you’re trying to reach won’t have a visible email address but you can use tools like Hunter.io to find them by simply entering their first name, last name, and their domain name. It will give you a list of the results it found. Ideally, you would launch your invitation privately, but if you still can’t find their email address, don’t hesitate to send a tweet that mentions your plan or a simple message via the other social media platforms.

Here are more tips on how to find contact information for people who you want to reach, and what good practices you should follow.

4. Reach out to your experts

If it’s the first time you are contacting someone, it’s a good idea to look into the good practices of cold emailing. Roundup posts are great for getting quality backlinks, and the people you contact are aware of the positive influence their contribution can have on your traffic and domain. But they won’t endorse a website or blog that doesn’t consistently prove valuable.

We can talk about cold email outreach best practices for days on end but that would take too long. What I highly recommend is that you be genuine, polite, and kind when reaching out to your experts. This goes a long way and they’ll be able to tell when someone is being genuine since they receive hundreds of spam emails every day.

I also recommend you personalize each and every one of your emails. Yes, this will take time but you will have a higher conversion rate than if you were to send the same bulk email to everyone.

I don’t have a template to help you get started, but below I have provided a screenshot of an email I sent out to one of the experts that we included in our roundup. Feel free to use it for some inspiration and to help garner some ideas.

Another fantastic way of reaching out to experts is by joining and engaging with them via their live streams. We used this tactic to reach one of our experts who had not replied via email. It worked out, and he gave us some awesome advice while on his live stream.

5. Put it all together

Getting enough contributions from the experts you have contacted is a great achievement in the process of creating a stellar roundup post. But your job is not finished yet. Putting together the content you have just received from your guests is very important, as it will have to be a high-quality presentation that they will gladly share on their channels, therefore getting you some exposure to new audiences.

Things you want to include

  • A headshot
  • Their reply
  • Short bio
  • Social media and website handles
  • And your own comments

As you can see in our roundup post example, each contributor’s section starts with a professional picture of the contributor, followed by the content they submitted, and finishes with our own thoughts on their commentary. We have added easy-to-follow social media icons that take you to their Twitter, Instagram, Facebook, or LinkedIn profiles, the titles they hold, and links to their main projects: companies they work for, blogs, YouTube channels, and others.

We wanted to make it easy for our readers to follow and engage with our experts via their blogs, businesses, and social media accounts.

While you want to emphasize the value of each contributor, you must also have your reader’s interest in mind, and clearly answer a need or a question your audience has.

6. Promote your post

Promoting your posts should be done in the interest of all your contributors, yourself, and your audience. The effort you have invested in compiling these pieces of content will ideally be rewarded with significant organic traffic, a good number of quality backlinks, and most importantly the start of new and meaningful relationships.

You should also edit the content carefully so that each contributor gets the same level of attention and appreciation. Take your time to thank each one of them for their contribution and don’t hesitate to personalize your message with your personal impression of their content.

Backlinks are certainly welcome, but asking for them explicitly might not be appreciated by everyone you contact. The best way of getting your contributors to share the roundup post is to simply ask them if they could share it with their audience.

Infographics, tweets, quotes included in an image, newsletters, and paid ads are all great ideas for getting your content everywhere and promoting it like crazy.

Conclusion: Creating an expert roundup post is totally worth it

Publishing an expert roundup post might not be everyone’s style of content, but for certain industries and domains, it can be a long-term valuable resource, both for your audience and your peers. And, of course, for you.

Keeping your focus on the value your post should bring to your readers will help you choose an enticing topic, ask the right question, and select the right people to answer it. While no one can ignore the advantages a roundup post has for contributors and creators alike, backlinks and traffic should not be your singular concern.

Ultimately, the success of such a compilation is measured by the shares, referrals, and traffic you get from your audience. Create a fantastic expert roundup post by asking a question your readers are interested in and your contributors can easily answer.

Give this type of content the time and effort it needs and it will prove to be a fruitful initiative, both amongst your peers and as a relevant post for your website.

The post Simple guide to creating an expert roundup post that drives website traffic appeared first on Search Engine Watch.



The anatomy of a negative SEO attack

November 27, 2020

30-second summary:

  • SEO can just as easily destroy websites’ rankings as it can build them up.
  • Newer websites or startups with smaller backlink profiles are the most vulnerable to negative SEO attacks.
  • Webmasters need to regularly monitor their backlink profile to make sure their site is not keeping company with any questionable web properties.
  • Negative SEO can be remediated through manual outreach or Google’s disavow tool, but high-quality link building campaigns are the best way to minimize the impact of low-quality links.

In the early days of search engine optimization, a variety of black-hat techniques allowed SEOs to dominate the first page of search. Cloaking, keyword stuffing, backlink spam, and other strategies could catapult websites to the first page. But those days are long gone. Google’s algorithms are extremely powerful, and those same black-hat techniques are now the weapons of a negative SEO attack. Not only will black-hat strategies no longer work – they will destroy your site’s rankings and can even prevent your domain from ranking permanently.

So for those out there on the internet who are not interested in seeing your domain move up the first page, black-hat SEO is an easy way to harm your website. Many new site owners are so eager to get any backlinks they can that they allow low-quality links to populate their profile without ever thinking about where those links come from, or why those other site owners linked to them in the first place.

Negative SEO attacks are real. I’ve helped many clients recover from them. They can come from competitors, hackers, or seemingly out of nowhere, but without a quick response, a website’s reputation with search engines can be permanently harmed. 

Although Google algorithm updates or technical issues with your website can impact your keyword rankings, an unexpected drop could be a sign of negative SEO. The good news is, the anatomy of a negative SEO attack is clearly recognizable. If you take quick action, you can protect your website and minimize the damage.

The websites that are most vulnerable to negative SEO

The reality is, every time your site moves up a spot in the SERPs, you knock another site down. It’s not fun to imagine that other people would use negative SEO to harm your efforts, but if you offer a great service or product that could take business or traffic away from someone else, then your site is at risk.

Any website can experience a negative SEO attack, but local businesses and startups with fewer than 300 referring domains are the most vulnerable. The smaller your backlink profile, the more impact any low-quality or unnatural links will have. If 50% of your links are spammy and you’re a brand-new site, Google crawlers are going to look at your backlink profile and assume you’re trying to cheat your way to the top.

For new webmasters, in particular, it’s critical to pay close attention to every backlink you acquire. This is also true when you pay for the services of a link building company. Some site owners are hesitant to pursue link building because they have had negative experiences with SEOs in the past who engaged in these spammy techniques that ended up tearing their site down rather than building it up. 

As your backlink profile grows, spammy links will not have as much of an impact on your domain authority or rankings. Still, it’s good to keep an eye on the referring domains and anchor text diversity of your backlink profile.
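
To make that monitoring concrete, here is a minimal sketch of the kind of check you could run on a backlink export. It assumes a hypothetical CSV with referring_domain and anchor_text columns; the column names, and the export itself, will vary by tool:

```python
import csv
from collections import Counter

def backlink_profile_summary(csv_path: str, target_keyword: str) -> None:
    """Summarize referring domains and anchor-text diversity from a
    hypothetical backlink export (column names are assumptions)."""
    domains = set()
    anchors = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            domains.add(row["referring_domain"].lower())
            anchors[row["anchor_text"].strip().lower()] += 1

    total = sum(anchors.values())
    print(f"Referring domains: {len(domains)}")
    print(f"Distinct anchors:  {len(anchors)}")
    if total:
        # A high share of exact-match anchors is the pattern described
        # above that can read as manipulation to Google.
        share = anchors[target_keyword.lower()] / total
        print(f"Exact-match anchor share: {share:.0%}")
```

A sudden jump in either number between two exports is exactly the kind of change worth investigating.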

How to identify a negative SEO attack

There are a variety of common negative SEO techniques that people may use to harm your website. After handling negative SEO attacks with my own clients, these are the most common types I’ve come across and that I encourage webmasters to be on the lookout for.

1. Toxic backlinks

Backlinks from low-quality sites that have low domain authority, little relevance to your industry, or very little site traffic should always be suspect. If you receive a large influx of these low-quality links, they may be coming from a link farm that has the infrastructure to build a massive amount of links quickly. If you’re a new site with a large percentage of toxic links, Google will likely assume you’ve been participating in black hat manipulation.

A negative SEO attack can be caused by toxic, poor-quality backlinks

2. Comment spam links

One way SEOs used to manipulate their site authority was by leaving backlinks in the comment section of blogs or forum sites. If you suddenly receive backlinks in the comment section of older blogs with no relevance or traffic, someone might have placed them there maliciously. If it’s an SEO agency that placed the link and you paid for it, fire them immediately. Google indexes those links in the comment section, and it will not look favorably upon your site if you have a lot of these unnatural backlinks.

Spam comments could cause a negative SEO attack

3. Exact match or unnatural anchor text

Natural anchor text will most often include your brand name, the services or products your business offers, or more generic wording like, “Click Here.” If all of your anchor text has the exact keyword you’re trying to rank for, that will come across as manipulation to Google. If the anchor text is irrelevant, it will confuse Google bots about the content of your site. It’s important to pay attention to the most common ways that other sites link to yours so if new links don’t share at least some similarity, you can investigate them accordingly.

4. Fake negative reviews

Although negative reviews don’t have as drastic of an impact on your site authority as your backlink profile, Google does crawl and render those sites when considering whether to rank web pages. Local and small businesses with bad reviews, in particular, will not rank, so in addition to reviewing your backlink profile on a regular basis, site owners should also be monitoring the important review sites in their industry. Most major review sites allow you to report reviews if you have reason to believe they are fake.

There are other types of negative SEO that I haven’t listed here, such as content scraping, links hidden in images, and more, but the types above are easily identified using Google Search Console or any backlink analyzer. Familiarizing yourself with the many ways that others may try to link to your site in a harmful way will help you identify problematic links as soon as they show up in your backlink profile.

How to perform negative SEO remediation

Digging yourself out of a negative SEO attack is never fun, but it can be done. If you’re being a responsible webmaster and monitoring your backlink profile regularly, you should have a solid understanding of what a healthy backlink profile for your website looks like, and will therefore be able to recognize the moment that something appears off.

If you believe that the influx of links is indeed the result of nefarious intentions, you have a few options to repair the damage, hopefully before Google penalizes your site. Some of these options are more expensive than others, but if you’re not an experienced webmaster, it is probably best to get the guidance of an SEO expert. If you remove the wrong links, you can end up performing negative SEO on your own website by mistake.

1. Request removal

The first step with any link is to reach out to the webmaster to ask for the link to be removed. Admittedly, this is not always successful. However, before you move on to option two, you want to make sure you have exhausted every effort to have the link removed before requesting Google to get involved. If the link was the result of comment spam, the owner of the blog may be willing to moderate or delete the comment. There have been webmasters who have charged my clients a fee to have links removed. Depending on the price you’re willing to pay, you can choose to do so or move on to other options.

2. Disavow file

In 2012, Google added the disavow tool in Google Search Console to give webmasters more agency in their off-site SEO. The reality is, no one can fully control the websites that choose to link to theirs in a harmful way, so it’s not really fair for search engines to penalize your site as a result. Google recognized this and created the disavow tool; however, it still advises site owners to use it sparingly.

A disavow file is essentially a list of links that you want invalidated on your domain, or that you don’t want Google to consider when evaluating the quality of your website. There are detailed instructions on how to submit a disavow file in the Google Search Console help center. Take note, though, that these links aren’t actually removed; Google simply stops taking them into consideration the next time they are crawled and indexed. If you’re using SEO software that measures the quality of your backlink profile, you will likely have to submit the disavow file there as well if you want its metrics to accurately reflect how Google understands your site.
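
For illustration, here is what a minimal disavow file looks like. It is a plain text file with one entry per line: lines beginning with “#” are comments, a bare URL disavows a single page, and a “domain:” entry disavows every link from that domain. The domains below are placeholders:

```
# Webmasters were contacted and did not remove the links.

# Disavow individual spammy pages:
http://spam.example.com/stuff/comments.html
http://spam.example.com/stuff/paid-links.html

# Disavow all links from entire domains:
domain:shadyseo.example.com
domain:linkfarm.example.net
```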

3. Link building campaigns

High-quality, contextual link building is different from black-hat SEO in that it uses original content to earn links on relevant, industry-specific publications. The best SEO agencies will increase site authority the right way, through techniques that are Google compliant and don’t harm your rankings in the long term. If you are not actively trying to earn high-quality links for your website, not only are you missing out on the opportunity to improve your overall keyword rankings, you place your site in a more vulnerable position. If you pursue consistent link acquisition and build up a healthy backlink profile before a negative SEO attack occurs, you are better positioned to avoid a Google penalty.

It is certainly frustrating and unfair when negative SEO occurs, but there is really nothing that a webmaster can do to prevent it. So in the case of negative SEO, preparation is the best medicine. Knowing what to look for will help you be more prepared to take immediate action and minimize the damage. 

Manick Bhan is the founder and CTO of LinkGraph, an award-winning digital marketing and SEO agency that provides SEO, paid media, and content marketing services. He is also the founder and CEO of SearchAtlas, a software suite of free SEO tools. You can find Manick on Twitter @madmanick.

The post The anatomy of a negative SEO attack appeared first on Search Engine Watch.

Search Engine Watch


Proxyclick visitor management system adapts to COVID as employee check-in platform

November 27, 2020 No Comments

Proxyclick began life by providing an easy way to manage visitors in your building with an iPad-based check-in system. As the pandemic has taken hold, however, customer requirements have changed, and Proxyclick is changing with them. Today the company announced Proxyclick Flow, a new system designed to check in employees during the time of COVID.

“Basically when COVID hit, our customers told us that actually our employees are the new visitors. So what you used to ask your visitors, you are now asking your employees — the usual probing questions, but also when are you coming and so forth. So we evolved the offering into a wider platform,” Proxyclick co-founder and CEO Gregory Blondeau explained.

That means instead of managing a steady flow of visitors — although it can still do that — the company is focusing on the needs of customers who want to open their offices on a limited basis during the pandemic, based on local regulations. To help adapt the platform for this purpose, the company developed the Proovr smartphone app, which employees can use to check in prior to going to the office, complete a health checklist, see who else will be in the office and make sure the building isn’t over capacity.

When the employee arrives at the office, they get a temperature check, and then can use the QR code issued by the Proovr app to enter the building via Proxyclick’s check-in system or whatever system they have in place. Beyond the mobile app, the company has designed the system to work with a number of adjacent building management and security systems so that customers can use it in conjunction with existing tooling.

They also beefed up the workflow engine that companies can adapt based on their own unique entrance and exit requirements. The COVID workflow is simply one of those workflows, but Blondeau recognizes not everyone will want to use the exact one they have provided out of the box, so they designed a flexible system.

“So the challenge was technical on one side to integrate all the systems, and afterwards to group workflows on the employee’s smartphone, so that each organization can define its own workflow and present it on the smartphone,” Blondeau said.

Once in the building, the system registers your presence, and the information remains on the system for two weeks for contact tracing purposes should there be an exposure to COVID. You check out when you leave the building, but if you forget, it automatically checks you out at midnight.

The company was founded in 2010 and has raised $18.5 million. The most recent raise was a $15 million Series B in January.

Mobile – TechCrunch


Adjusting Featured Snippet Answers by Context

November 27, 2020 No Comments

How Are Featured Snippet Answers Decided Upon?

I recently wrote about Featured Snippet Answer Scores Ranking Signals. In that post, I described how Google was likely using query dependent and query independent ranking signals to create answer scores for queries that were looking like they wanted answers.

One of the inventors of that patent from that post was Steven Baker. I looked at other patents that he had written, and noticed that one of those was about context as part of query independent ranking signals for answers.

Remembering that patent about question-answering and context, I felt it was worth reviewing that patent and writing about it.

This patent is about processing question queries that want textual answers and how those answers may be decided upon.

It is a complicated patent, and at one point the description behind it gets a bit murky, but I point out below where that happens, and I think the other details provide a lot of insight into how Google is scoring featured snippet answers. There is an additional related patent that I will be following up with after this post, and I will link to it from here as well.

This patent starts by telling us that a search system can identify resources in response to queries submitted by users and provide information about the resources in a manner that is useful to the users.

How Context Scoring Adjustments for Featured Snippet Answers Works

Users of search systems are often searching for an answer to a specific question, rather than a listing of resources, like in this drawing from the patent, showing featured snippet answers:

featured snippet answers

For example, users may want to know what the weather is in a particular location, a current quote for a stock, the capital of a state, etc.

When queries that are in the form of a question are received, some search engines may perform specialized search operations in response to the question format of the query.

For example, some search engines may provide information responsive to such queries in the form of an “answer,” such as information provided in the form of a “one box” to a question, which is often a featured snippet answer.

Some question queries are better served by explanatory answers, which are also referred to as “long answers” or “answer passages.”

For example, for the question query [why is the sky blue], an answer explaining light as waves is helpful.

featured snippet answers - why is the sky blue

Such answer passages can be selected from resources that include text, such as paragraphs, that are relevant to the question and the answer.

Sections of the text are scored, and the section with the best score is selected as an answer.

In general, the patent tells us about one aspect of what it covers in the following process:

  • Receiving a query that is a question query seeking an answer response
  • Receiving candidate answer passages, each passage made of text selected from a text section subordinate to a heading on a resource, with a corresponding answer score
  • Determining a hierarchy of headings on a page, with two or more heading levels hierarchically arranged in parent-child relationships, where each heading level has one or more headings, a subheading of a respective heading is a child heading in a parent-child relationship and the respective heading is a parent heading in that relationship, and the heading hierarchy includes a root level corresponding to a root heading (for each candidate answer passage)
  • Determining a heading vector describing a path in the hierarchy of headings from the root heading to the respective heading to which the candidate answer passage is subordinate, determining a context score based, at least in part, on the heading vector, adjusting the answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score
  • Selecting an answer passage from the candidate answer passages based on the adjusted answer scores

Advantages of the process in the patent

  1. Long query answers can be selected, based partially on context signals indicating answers relevant to a question
  2. The context signals may be, in part, query-independent (i.e., scored independently of their relatedness to terms of the query)
  3. This part of the scoring process considers the context of the document (“resource”) in which the answer text is located, accounting for relevancy signals that may not otherwise be accounted for during query-dependent scoring
  4. Following this approach, long answers that are more likely to satisfy a searcher’s informational need are more likely to appear as answers

This patent can be found at:

Context scoring adjustments for answer passages
Inventors: Nitin Gupta, Srinivasan Venkatachary, Lingkun Chu, and Steven D. Baker
US Patent: 9,959,315
Granted: May 1, 2018
Appl. No.: 14/169,960
Filed: January 31, 2014

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for context scoring adjustments for candidate answer passages.

In one aspect, a method includes scoring candidate answer passages. For each candidate answer passage, the system determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading to which the candidate answer passage is subordinate; determines a context score based, at least in part, on the heading vector; and adjusts answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.

The system then selects an answer passage from the candidate answer passages based on the adjusted answer scores.

Using Context Scores to Adjust Answer Scores for Featured Snippets

A drawing from the patent shows different hierarchical headings that may be used to determine the context of answer passages that may be used to adjust answer scores for featured snippets:

Hierarchical headings for featured snippets

I discuss these headings and their hierarchy below. Note that the headings include the page title as a heading (About the Moon), as well as the headings within heading elements on the page. Those headings give the answers context.

This context scoring process starts with receiving candidate answer passages and a score for each of the passages.

Those candidate answer passages and their respective scores are provided to a search engine that receives a query determined to be a question.

Each of those candidate answer passages is text selected from a text section under a particular heading from a specific resource (page) that has a certain answer score.

For each resource where a candidate answer passage has been selected, a context scoring process determines a heading hierarchy in the resource.

A heading is text or other data corresponding to a particular passage in the resource.

As an example, a heading can be text summarizing a section of text that immediately follows the heading (the heading describes what the text is about that follows it, or is contained within it.)

Headings may be indicated, for example, by specific formatting data, such as heading elements using HTML.

This next section from the patent reminded me of an observation that Cindy Krum of Mobile Moxie has about named anchors on a page, and how Google might index those to answer a question, to lead to an answer or a featured snippet. She wrote about those in What the Heck are Fraggles?

A heading could also be anchor text for an internal link (within the same page) that links to an anchor and corresponding text at some other position on the page.

A heading hierarchy could have two or more heading levels that are hierarchically arranged in parent-child relationships.

The first level, or the root heading, could be the title of the resource.

Each of the heading levels may have one or more headings, and a subheading of a respective heading is a child heading and the respective heading is a parent heading in the parent-child relationship.

For each candidate passage, a context scoring process may determine a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.

To determine the context score, the context scoring process determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.

The context score could be based, at least in part, on the heading vector.

The context scoring process can then adjust the answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.

The context scoring process can then select an answer passage from the candidate answer passages based on adjusted answer scores.

This flowchart from the patent shows the context scoring adjustment process:

context scoring adjustment flowchart

Identifying Question Queries And Answer Passages

I’ve written about understanding the context of answer passages. The patent tells us more about question queries and answer passages worth going over in more detail.

Some queries are in the form of a question or an implicit question.

For example, the query [distance of the earth from the moon] is in the form of an implicit question “What is the distance of the earth from the moon?”

An implicit question - the distance from the earth to the moon

Likewise, a question may be specific, as in the query [How far away is the moon].

The search engine includes a query question processor that uses processes that determine if a query is a query question (implicit or specific) and if it is, whether there are answers that are responsive to the question.

The query question processor can use several different algorithms to determine whether a query is a question and whether there are particular answers responsive to the question.

For example, to determine question queries and answers, it may use:

  • Language models
  • Machine learned processes
  • Knowledge graphs
  • Grammars
  • Combinations of those
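
As a toy illustration only (the approaches above are far more sophisticated than pattern matching), detecting explicit and implicit question queries might be sketched like this:

```python
import re

# Toy heuristic only: the patent names language models, machine-learned
# processes, knowledge graphs, and grammars. This regex simply makes
# the distinction between explicit and implicit questions concrete.
QUESTION_PATTERNS = re.compile(
    r"^(how|why|what|when|where|who)\b"         # explicit questions
    r"|^(distance|height|age|capital)\s+of\b",  # some implicit ones
    re.IGNORECASE,
)

def looks_like_question_query(query: str) -> bool:
    return bool(QUESTION_PATTERNS.search(query.strip()))

# looks_like_question_query("how far away is the moon")            -> True
# looks_like_question_query("distance of the earth from the moon") -> True
```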

The query question processor may choose candidate answer passages in addition to or instead of answer facts. For example, for the query [how far away is the moon], an answer fact is 238,900 miles. And the search engine may just show that factual information since that is the average distance of the Earth from the moon.

But the query question processor may also choose to identify passages that appear to be very relevant to the question query.

These passages are called candidate answer passages.

The answer passages are scored, and one passage is selected based on these scores and provided in response to the query.

An answer passage may be scored, and that score may be adjusted based on a context, which is the point behind this patent.

Often Google will identify several candidate answer passages that could be used as featured snippet answers.

Google may look at the information on the pages where those answers come from to better understand the context of the answers such as the title of the page, and the headings about the content that the answer was found within.

Contextual Scoring Adjustments for Featured Snippet Answers

The query question processor sends the context scoring processor some candidate answer passages, information about the resource each answer passage came from, and a score for each of the featured snippet answers.

The scores of the candidate answer passages could be based on the following considerations:

  • Matching a query term to the text of the candidate answer passage
  • Matching answer terms to the text of the candidate answer passages
  • The quality of the underlying resource from which the candidate answer passage was selected

I recently wrote about featured snippet answer scores, and how a combination of query dependent and query independent scoring signals might be used to generate answer scores for answer passages.

The patent tells us that the query question processor may also take into account other factors when scoring candidate answer passages.

Candidate answer passages can be selected from the text of a particular section of the resource. And the query question processor could choose more than one candidate answer passage from a text section.

We are given the following examples of different answer passages from the same page:

(These example answer passages are referred to in a few places in the remainder of the post.)

  • (1) It takes about 27 days (27 days, 7 hours, 43 minutes, and 11.6 seconds) for the Moon to orbit the Earth at its orbital distance
  • (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
  • (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles

Each of those answers could be good ones for Google to use. We are told that:

More than three candidate answers can be selected from the resource, and more than one resource can be processed for candidate answers.

How would Google choose between those three possible answers?

Google might decide based on the number of sentences in each passage and a maximum number of characters.

The patent tells us this about choosing between those answers:

Each candidate answer has a corresponding score. For this example, assume that candidate answer passage (2) has the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1). Thus, without the context scoring processor, candidate answer passage (2) would have been provided in the answer box of FIG. 2. However, the context scoring processor takes into account the context of the answer passages and adjusts the scores provided by the query question processor.

So, we see that what might be chosen based on featured snippet answer scores could be adjusted based on the context of that answer from the page that it appears on.

Contextually Scoring Featured Snippet Answers

This process begins with a query determined to be a question query seeking an answer response.

This process next receives candidate answer passages, each candidate answer passage chosen from the text of a resource.

Each of the candidate answer passages is text chosen from a text section that is subordinate to a respective heading (under a heading) in the resource, and each has a corresponding answer score.

For example, the query question processor provides the candidate answer passages, and their corresponding scores, to the context scoring processor.

A Heading Hierarchy to Determine Context

This process then determines a heading hierarchy from the resource.

The heading hierarchy would have two or more heading levels hierarchically arranged in parent-child relationships (Such as a page title, and an HTML heading element.)

Each heading level has one or more headings.

A subheading of a respective heading is a child heading (an (h2) heading might be a subheading of a (title)) in the parent-child relationship and the respective heading is a parent heading in the relationship.

The heading hierarchy includes a root level corresponding to a root heading.

The context scoring processor can process heading tags in a DOM tree to determine a heading hierarchy.

hierarchical headings for featured snippets

For example, concerning the drawing about the distance to the moon just above, the heading hierarchy for the resource may be:

  • ROOT heading (title): About The Moon (310)
    • H1: The Moon’s Orbit (330)
      • H2: How long does it take for the Moon to orbit Earth? (334)
      • H2: The distance from the Earth to the Moon (338)
    • H1: The Moon (360)
      • H2: Age of the Moon (364)
      • H2: Life on the Moon (368)

Here is how the patent describes this heading hierarchy:

In this heading hierarchy, The title is the root heading at the root level; headings 330 and 360 are child headings of the heading, and are at a first level below the root level; headings 334 and 338 are child headings of the heading 330, and are at a second level that is one level below the first level, and two levels below the root level; and headings 364 and 368 are child headings of the heading 360 and are at a second level that is one level below the first level, and two levels below the root level.
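
As a minimal sketch, assuming the hierarchy has already been extracted from the DOM into (level, heading) pairs with level 0 as the root title, a heading vector for any passage's heading could be derived like this:

```python
# Flat representation of the hierarchy in the drawing above.
# Level 0 is the root (page title), level 1 is H1, level 2 is H2.
HEADINGS = [
    (0, "About The Moon"),                                      # 310
    (1, "The Moon's Orbit"),                                    # 330
    (2, "How long does it take for the Moon to orbit Earth?"),  # 334
    (2, "The distance from the Earth to the Moon"),             # 338
    (1, "The Moon"),                                            # 360
    (2, "Age of the Moon"),                                     # 364
    (2, "Life on the Moon"),                                    # 368
]

def heading_vector(headings, index):
    """Path from the root heading to headings[index], found by walking
    backwards and keeping the nearest ancestor at each higher level."""
    level, text = headings[index]
    path = [text]
    for lvl, txt in reversed(headings[:index]):
        if lvl < level:
            path.append(txt)
            level = lvl
    return list(reversed(path))

# heading_vector(HEADINGS, 3) ->
# ['About The Moon', "The Moon's Orbit",
#  'The distance from the Earth to the Moon']
```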

The process from the patent determines a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.

This score may be based on a heading vector.

The patent says that the process, for each of the candidate answer passages, determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.

The heading vector would include the text of the headings for the candidate answer passage.

For the example candidate answer passages (1)-(3) above about how long it takes the moon to orbit the earth, the respectively corresponding heading vectors V1, V2 and V3 are:

  • V1=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: How long does it take for the Moon to orbit the Earth?]>
  • V2=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>
  • V3=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>

We are also told that because candidate answer passages (2) and (3) are selected from the same text section 340, their respective heading vectors V2 and V3 are the same (they are both in the content under the same (H2) heading.)

The process of adjusting a score, for each answer passage, uses a context score based, at least in part, on the heading vector (410).

That context score can be a single score used to scale the candidate answer passage score or can be a series of discrete scores/boosts that can be used to adjust the score of the candidate answer passage.
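
A small sketch of those two styles, with illustrative numbers (the boost values are assumptions, not values from the patent):

```python
def adjust_with_single_score(answer_score: float,
                             context_score: float) -> float:
    """Scale the answer score by one combined context score."""
    return answer_score * context_score

def adjust_with_discrete_boosts(answer_score: float,
                                boosts: list) -> float:
    """Apply a series of discrete boosts (coverage ratio, preceding
    question, list presence) one after another."""
    for boost in boosts:
        answer_score *= boost
    return answer_score

# adjust_with_single_score(0.72, 1.1)            -> 0.792
# adjust_with_discrete_boosts(0.72, [1.1, 1.05]) -> 0.8316 (approx.)
```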

Where things Get Murky in This Patent

There do seem to be several related patents involving featured snippet answers, and this one, which targets learning more about answers from their context based on where they fit in a heading hierarchy, makes sense.

But, I’m confused by how the patent tells us that one answer based on the context would be adjusted over another one.

The first issue I have is that the answers they are comparing in the same contextual area have some overlap. Here those two are:

  • (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
  • (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles

Note that the second answer and the third answer both include the same line: “Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles.” I find myself a little surprised that the second answer includes a couple of sentences that aren’t in the third answer, and skips a couple of lines from the third answer, and then includes the last sentence, which answers the question.

Since they both appear in the same heading and subheading section of the page they are from, it is difficult to imagine that there is a different adjustment based on context. But, the patent tells us differently:

The candidate answer passage with the highest adjusted answer score (based on context from the headings) is selected and provided as the answer passage.

Recall that in the example above, the candidate answer passage (2) had the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1).

However, after adjustments, candidate answer passage (3) has the highest score, followed by candidate answer passage (2), and then candidate answer passage (1).

Accordingly, candidate answer passage (3) is selected and provided as the answer passage of FIG. 2.

Boosting Scores Based on Passage Coverage Ratio

A query question processor may limit the candidate answers to a maximum length.

The context scoring processor determines a coverage ratio, which is a measure of how much of the text passage from which a candidate answer was selected is covered by that candidate answer passage.

The patent describes alternative ways of defining the text block used for this measure:

Alternatively, the text block may include text sections subordinate to respective headings that include a first heading for which the text section from which the candidate answer passage was selected is subordinate, and sibling headings that have an immediate parent heading in common with the first heading. For example, for the candidate answer passage, the text block may include all the text in the portion 380 of the hierarchy; or may include only the text of the sections, of some other portion of text within the portion of the hierarchy. A similar block may be used for the portion of the hierarchy for candidate answer passages selected from that portion.

A small coverage ratio may indicate a candidate answer passage is incomplete. A high coverage ratio may indicate the candidate answer passage captures more of the content of the text passage from which it was selected. A candidate answer passage may receive a context adjustment, depending on this coverage ratio.

A passage coverage ratio is the ratio of the total number of characters in the candidate answer passage to the total number of characters in the passage from which the candidate answer passage was selected.

The passage coverage ratio could also be the ratio of the total number of sentences (or words) in the candidate answer passage to the total number of sentences (or words) in the passage from which the candidate answer passage was selected.

We are told that other ratios can also be used.

From the three example candidate answer passages (1)-(3) above about the distance to the moon, passage (1) has the highest ratio, passage (2) has the second-highest, and passage (3) has the lowest.

This process determines whether the coverage ratio is less than a threshold value. That threshold value can be, for example, 0.3, 0.35 or 0.4, or some other fraction. In our “distance to the moon” example, each coverage passage ratio meets or exceeds the threshold value.

If the coverage ratio is less than a threshold value, then the process selects a first answer boost factor. The first answer boost factor might be proportional to the coverage ratio according to a first relation, or may be a fixed value, or may be a non-boosting value (e.g., 1.0).

But if the coverage ratio is not less than the threshold value, the process selects a second answer boost factor. The second answer boost factor may be proportional to the coverage ratio according to a second relation, or may be a fixed value, or may be a value greater than the non-boosting value (e.g., 1.1).
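
A sketch of that logic, using the character-based ratio and one of the example thresholds from the patent (the boost values themselves are illustrative assumptions):

```python
def coverage_ratio(candidate: str, source_passage: str) -> float:
    """Characters in the candidate answer passage divided by characters
    in the passage it was selected from."""
    return len(candidate) / len(source_passage)

def coverage_boost(ratio: float, threshold: float = 0.35) -> float:
    if ratio < threshold:
        return 1.0   # first boost factor: e.g., a non-boosting value
    return 1.1       # second boost factor: greater than non-boosting

# ratio = coverage_ratio(candidate_text, section_text)
# adjusted_score = answer_score * coverage_boost(ratio)
```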

Scoring Based on Other Features

The context scoring process can also check for the presence of features in addition to those described above.

Three example features for contextually scoring an answer passage can be based on the additional features of the distinctive text, a preceding question, and a list format.

Distinctive text

Distinctive text is text that stands out because it is formatted differently than the surrounding text, for example through bolding.

A Preceding Question

A preceding question is a question in the text that precedes the candidate answer passage.

The search engine may examine varying amounts of text when detecting such a question. It may check only the passage from which the candidate answer passage was extracted, or a wider text window that can include header text and text from other sections.

A boost score that is inversely proportional to the text distance from a question to the candidate answer passage is calculated, and the check is terminated at the occurrence of a first question.

That text distance may be measured in characters, words, or sentences, or by some other metric.

If the question is anchor text for a section of text and there is intervening text, such as in the case of a navigation list, then the question is determined to only precede the text passage to which it links, not precede intervening text.

In the drawing above about the moon, there are two questions in the resource: “How long does it take for the Moon to orbit Earth?” and “Why is the distance changing?”

The first question–“How long does it take for the Moon to orbit Earth?”– precedes the first candidate answer passage by a text distance of zero sentences, and it precedes the second candidate answer passage by a text distance of five sentences.

And the second question–“Why is the distance changing?”– precedes the third candidate answer by zero sentences.

If a preceding question is detected, then the process selects a question boost factor.

This boost factor may be proportional to the text distance, whether the text is in a text passage subordinate to a header or whether the question is a header, and, if the question is in a header, whether the candidate answer passage is subordinate to the header.

Considering these factors, the third candidate answer passage receives the highest boost factor, the first candidate answer receives the second-highest boost factor, and the second candidate answer receives the smallest boost factor.
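
A sketch of a question boost that decays with text distance, consistent with the inverse-proportion idea above (the constants are assumptions):

```python
def question_boost(distance_in_sentences=None, max_boost: float = 1.2) -> float:
    """distance_in_sentences: sentences between the preceding question
    and the candidate passage, or None if no question was detected."""
    if distance_in_sentences is None:
        return 1.0  # no preceding question, no boost
    # Zero sentences away earns the full boost; the boost decays
    # toward 1.0 as the question gets farther from the passage.
    return 1.0 + (max_boost - 1.0) / (1 + distance_in_sentences)

# question_boost(0) -> 1.2    (question immediately precedes the passage)
# question_boost(5) -> ~1.03  (question is five sentences away)
```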

If a preceding question is not detected, or after the question boost factor is selected, the process then checks for the presence of a list.

The Presence of a List

A list is an indication of several steps, usually instructive or informative. The detection of a list may be subject to the query question being a step modal query.

A step modal query is a query where a list-based answer is likely to be a good answer. Examples of step modal queries are queries like:

  • [How to . . . ]
  • [How do I . . . ]
  • [How to install a door knob]
  • [How do I change a tire]

The context scoring process may detect lists formed with:

  • HTML tags
  • Micro formats
  • Semantic meaning
  • Consecutive headings at the same level with the same or similar phrases (e.g., Step 1, Step 2; or First; Second; Third; etc.)

The context scoring process may also score a list for quality.

It would look at signals such as:

  • A higher-quality list sits in the center of a page, does not include multiple links to other pages (which are indicative of reference lists), and does not have HREF link text occupying a large portion of the list’s text
  • A lower-quality list sits at the side of a page, includes multiple links to other pages, and/or has HREF link text that occupies a large portion of the list’s text

If a list is detected, then the process selects a list boost factor.

That list boost factor may be fixed or may be proportional to the quality score of the list.

If a list is not detected, or after the list boost factor is selected, the process ends.

In some implementations, the list boost factor may also be dependent on other feature scores.

If other features, such as coverage ratio, distinctive text, etc., have relatively high scores, then the list boost factor may be increased.

The patent tells us that this is because “the combination of these scores in the presence of a list is a strong signal of a high-quality answer passage.”
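
A sketch of step-list detection and a quality-dependent list boost (the heading patterns and boost values are illustrative assumptions):

```python
import re

# One of the signals the patent names: consecutive same-level headings
# with step-like phrases ("Step 1", "Step 2", "First", "Second", ...).
STEP_RE = re.compile(r"^\s*(step\s*\d+|first|second|third)\b", re.IGNORECASE)

def detect_step_list(same_level_headings: list) -> bool:
    hits = sum(bool(STEP_RE.match(h)) for h in same_level_headings)
    return hits >= 2  # at least two step-like headings in a row

def list_boost(detected: bool, quality: float,
               other_features_high: bool) -> float:
    """Boost proportional to list quality (0..1), nudged upward when
    other features such as coverage ratio also score well."""
    if not detected:
        return 1.0
    boost = 1.0 + 0.2 * quality
    if other_features_high:
        boost += 0.05  # the patent calls this combination a strong signal
    return boost

# detect_step_list(["Step 1: Remove the knob", "Step 2: ..."]) -> True
```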

Adjustment of Featured Snippet Answers Scores

Answer scores for candidate answer passages are adjusted by scoring components based on heading vectors, passage coverage ratio, and other features described above.

The scoring process can select the largest boost value from those determined above or can select a combination of the boost values.

Once the answer scores are adjusted, the candidate answer passage with the highest adjusted answer score is selected as the featured snippet answer and is displayed to a searcher.
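
Putting the pieces together, a sketch of the final adjustment and selection might look like this:

```python
def adjusted_score(answer_score: float, boosts: list,
                   combine: bool = False) -> float:
    """Scale the answer score by the largest boost, or by the product
    of all boosts when combine=True, per the two options above."""
    if not boosts:
        return answer_score
    if combine:
        factor = 1.0
        for boost in boosts:
            factor *= boost
    else:
        factor = max(boosts)
    return answer_score * factor

def select_featured_snippet(candidates):
    """candidates: iterable of (passage_text, answer_score, boosts).
    Returns the passage text with the highest adjusted score."""
    return max(candidates,
               key=lambda c: adjusted_score(c[1], c[2]))[0]
```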

More to Come

I will be reviewing the first patent in this series of patents about candidate answer scores because it has some additional elements that haven’t been covered in this post or in the post about query dependent/independent ranking signals for answer scores. If you have been paying attention to how Google has been answering queries that appear to be seeking answers, you have likely seen those answers improving in many cases. Some answers have been really bad, though. It will be nice to have as complete an idea as we can of how Google decides what might be a good answer to a query, based on the information available to it on the Web.

Added October 14, 2020 – I have written about another Google patent on Answer Scores, and it’s worth reading about all of the patents on this topic. The new post is at Weighted Answer Terms for Scoring Answer Passages, and is about the patent Weighted answer terms for scoring answer passages.

It is about identifying questions in resources, and answers for those questions, and describes using term weights as a way to score answer passages (along with the scoring approaches identified in the other related patents, including this one.)

Added October 15, 2020 – I have written a few other posts about answer passages that are worth reading if you are interested in how Google finds questions on pages and answers to those, and scores answer passages to determine which ones to show as featured snippets. I’ve linked to some of those in the body of this post, but here is another one of those posts:

Added October 22, 2020 – I have written up a description of how structured and unstructured data is selected for answer passages, based on specific criteria in the patent on scoring answer passages, in the post Selecting Candidate Answer Passages.



The post Adjusting Featured Snippet Answers by Context appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


Pinterest tests online events with dedicated ‘class communities’

November 25, 2020 No Comments

Pinterest is getting into online events. The company has been spotted testing a new feature that allows users to sign up for Zoom classes through Pinterest, while creators use Pinterest’s class boards to organize class materials, notes and other resources, or even connect with attendees through a group chat option. The company confirmed the test of online classes is an experiment now in development, but wouldn’t offer further details about its plans.

The feature itself was discovered on Tuesday by reverse engineer Jane Manchun Wong, who found details about the online classes by looking into the app’s code.

Currently, you can visit some of these “demo” profiles directly — like “@pinsmeditation” or “@pinzoom123,” for example — and view their listed Class Communities. However, these communities are empty when you click through. That’s because the feature is still unreleased, Wong says.

When and if the feature is later launched to the public, the communities would include dedicated sections where creators will be able to organize their class materials — like lists of what to bring to class, notes, photos and more. They could also use these communities to offer a class overview and description, connect users to a related shop, group chat feature and more.

Creators are also able to use the communities — which are basically enhanced Pinterest boards — to respond to questions from attendees, share photos from the class and otherwise interact with the participants.

When a user wants to join a class, they can click a “book” button to sign up, and are then emailed a confirmation with the meeting details. Other buttons direct attendees to download Zoom or copy the link to join the class.

It’s not surprising that Pinterest would expand into the online events space, given its platform has become a popular tool for organizing remote learning resources during the coronavirus pandemic. Teachers have turned to Pinterest to keep track of lesson plans, get inspiration, share educational activities and more. In the early days of the pandemic, Pinterest reported record usage when the company saw more searches and saves globally in a single March weekend than ever before in its history, as a result of its usefulness as an online organizational tool.

This growth has continued throughout the year. In October, Pinterest’s stock jumped on strong earnings after the company beat on revenue and user growth metrics. The company brought in $443 million in revenue, versus $383.5 million expected, and grew its monthly active users to 442 million, versus the 436.4 million expected. Outside of the coronavirus impacts, much of this growth was due to strong international adoption, increased ad spend from advertisers boycotting Facebook and a surge of interest from users looking for iOS 14 home screen personalization ideas.

Given that the U.S. has failed to get the COVID-19 pandemic under control, many classes, events and other activities will remain virtual even as we head into 2021. The online events market may continue to grow in the years that follow, too, thanks to the kickstart the pandemic provided the industry as a whole.

“We are experimenting with ways to help creators interact more closely with their audience,” a Pinterest spokesperson said, when asked for more information.

Pinterest wouldn’t confirm additional details about its plans for online events, but did say the feature was in development and the test would help to inform the product’s direction.

Pinterest often tries out new features before launching them to a wider audience. Earlier this summer, TechCrunch reported on a Story Pins feature the company had in the works. Pinterest then launched the feature in September. If the same time frame holds up for online events, we could potentially see the feature become more widely available sometime early next year.


Social – TechCrunch


Join us for a live Q&A with Sapphire’s Jai Das on Tuesday at 2 pm ET/11 am PT

November 25, 2020 No Comments

Sure, we’re heading into a holiday weekend here in America, but that doesn’t mean that the good ship TechCrunch is going to slow down. We’re diving right back in next week with another installment in season two of Extra Crunch Live, our regular interview series with startup founders, venture capitalists, and other leaders from the technology community.

This series is for Extra Crunch members, so if you haven’t signed up you can hop on that train right here.

Next week I’m virtually sitting down with Jai Das, a well-known managing director at Sapphire Ventures.

Das has invested in companies like MuleSoft (sold for $6.5 billion), Alteryx (now public), Square (also public) and Sumo Logic (yep, public) while at Sapphire, having previously worked corporate venture jobs at Intel Capital and Agilent Ventures. (Sapphire was itself originally SAP’s corporate venture capital arm, but it split off from its parent in 2011, rebranded, and kept on raising funds.)

Here are notes from the last episode of Extra Crunch Live with Bessemer’s Byron Deeter.

It’s going to be fun, as there’s so much to talk about. I’m still drawing up my question list, so to avoid giving the Sapphire PR team too much pre-discussion ammo, let’s just say that corporate venture capital’s place in the 2020 boom is an interesting topic for founders and investors alike.

And I’ll want to press Das on the current market for software startups, where we are in the historical arc of SaaS multiples, the importance of API-led tech upstarts, where founders might look to build the next great enterprise startup, and if there are any new platforms bubbling up that could be a foundation for future founders to later leverage.

As this is an Extra Crunch Live, I’ll also work in a few questions from the audience (that means you; make sure your Extra Crunch subscription is live) to augment my own clipboard of notes.

This is going to be a good one. I’ll see you next Tuesday for the show.

Details

Below are links to add the event to your calendar and to save the Zoom link. We’ll share the YouTube link shortly before the discussion:


Startups – TechCrunch