Monthly Archives: November 2020
Is it fair that people can pay to get to the top of search results? This blog discusses the ethical implications of PPC from one advertiser’s perspective.
Read more at PPCHero.com
Boston-based marketing automation firm Klaviyo wants to change the way marketers interact with data, giving them direct access to their data and their customers. It believes that makes it easier to customize messages and produce better results. Investors apparently agree, awarding the company a $200 million Series C on a hefty $4.15 billion valuation today.
The round was led by Accel with help from Summit Partners. It comes on the heels of last year’s $150 million Series B and brings the total raised to $385.5 million, according to the company. Accel’s Ping Li will also be joining the company’s board under the terms of today’s announcement.
Marketing automation and communication takes on a special significance as we find ourselves in the midst of this pandemic and companies need to find ways to communicate in meaningful ways with customers who can’t come into brick and mortar establishments. Company CEO and co-founder Andrew Bialecki says that his company’s unique use of data helps in this regard.
“I think our success is because we are a hybrid customer data and marketing platform. We think about what it takes to create these owned experiences. They’re very contextual and you need all of that customer data, not some of it, all of it, and you need that to be tightly coupled with how you’re building customer experiences,” Bialecki explained.
He believes that by providing a platform of this scope that combines the data, the ability to customize messages and the use of machine learning to keep improving that, it will help them compete with the largest platforms. In fact his goal is to help companies understand that they don’t have to give up their customer data to Amazon, Google and Facebook.
“The flip side of that is growing through Amazon where you give up all your customer data, or Facebook or Google where you kind of are delegated to wherever their algorithms decide where you get to show up,” he said. With Klaviyo, the company retains its own data, and Ping Li, who is leading the investment at Accel, says that is where the e-commerce market is going.
“So the question is, is there a tool that allows you to do that as easily as going on Facebook and Google, and I think that’s the vision and the promise that Klaviyo is delivering on,” Li said. He believes that this will allow their customers to actually build that kind of fidelity with their customers by going directly to them, instead of through a third-party intermediary.
The company has seen some significant success with 50,000 customers in 125 countries along with that lofty valuation. The customer number has doubled year over year, even during the economic malaise brought on by the pandemic.
Today, the company has 500 employees with plans to double that in the next year. As he grows his company, Bialecki believes diversity is not just the right thing to do, it’s also smart business. “I think the competitive advantages that tech companies are going to have going forward, especially for the tech companies that are not the leaders today, but [could be] leaders in the coming decades, it’s because they have the most diverse teams and inclusive culture and those are both big focuses for us,” he said.
As they move forward flush with this cash, the company wants to continue to build out the platform, giving customers access to a set of tools that allow them to know their own customers on an increasingly granular level, while delivering more meaningful interactions. “It’s all about accelerating product development and getting into new markets,” Bialecki said. They certainly have plenty of runway to do that now.
- Visual content offers a ton of value for your website.
- It can boost critical statistics such as time on page.
- Visuals guide the reader through your content more smoothly.
- They make your content more consumable and increase sharing.
- Google loves images that are optimized for search.
- Read on to learn more about how adding optimized images to your site can boost your search visits from your target audience.
- See examples of how companies are successfully incorporating visual content into their marketing and websites.
It’s no secret that visual content is hot right now (cue the Zoolander references). You know content formats like video, infographics, GIFs, memes, and more should be a part of your content strategy, but did you know these also impact your site’s SEO?
How does visual content impact SEO?
There is so much value in adding visual content to your website. While your written content serves the purpose of enabling you to naturally incorporate keywords and create more content to rank in search engines, the visual content you add to your site and elsewhere can help give that content a further boost.
1. Video content keeps visitors on the page
One stat Google loves is “time on page”. If visitors are checking out your site and leaving after an average of 10 seconds, that signals to Google that your content is bad or isn’t relevant. By placing a video in the middle of your written content, you can keep people on the page longer.
Think about it. Let’s say it takes someone 10 seconds to read the first two paragraphs of your article. Then, directly on your site is a video your visitors can easily click on that adds more value to the piece.
They click to view and end up watching the full two-minute video. This intrigues your visitors to dig deeper. Before they know it, they’ve been on your site for five minutes. This can give a huge boost to your time on page stats.
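To put rough numbers on the effect described above, here’s a minimal sketch in Python. The session durations are invented for illustration, not real analytics data:

```python
# Hypothetical session durations (in seconds) for one article page.
# These numbers are illustrative only, not real analytics data.
sessions_without_video = [10, 12, 8, 15, 10]
sessions_with_video = [10, 130, 8, 300, 140]  # some visitors watch the embedded video

def average_time_on_page(durations):
    """Average session duration in seconds."""
    return sum(durations) / len(durations)

print(average_time_on_page(sessions_without_video))  # 11.0
print(average_time_on_page(sessions_with_video))     # 117.6
```

Even when only a couple of visitors stick around for the video, the average climbs sharply, which is exactly the shift in time-on-page stats the article describes.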
Video also impacts stats like your bounce rate, another signal Google considers. The last thing you want is people visiting your site and bouncing away after reading just a few lines on one page. Video can help reduce your bounce rate and convince people to stick around.
While we’re on the subject, here’s a video from Neil Patel that explains this concept a bit more. In the video, Patel highlights a few ways (including video) that you can use to reduce your bounce rate.
2. Visuals help guide the reader through your content
Reading straight through a 1000-word article, no matter how well-written, can become tedious quickly. To keep site visitors flowing through the content, you can add things like infographics, screenshots, and more to help visualize the concepts you’re presenting and push your visitors further down the page.
Breaking up your content with related visuals allows readers to take a break from soaking up the copy and instead check out a few related graphics, videos, or other visual content. It also provides an opportunity for the reader to pause and look at a graphic that might more easily explain a complex subject you’re presenting or highlight some related stats visually to really drive home the impact, so they don’t get lost in the text.
Here’s a great example of an infographic that grabs readers’ attention and gives them something more to soak up in addition to just text. These are a few screenshots from a larger infographic that appeared in an article highlighting the state of SEO in 2019.
3. Google’s machine learning is learning to read visuals
While the exact mechanics aren’t fully public, it’s well established that Google is actively learning how to read images on pages. With billions of images online, Google’s machine learning is becoming adept at using shapes and other elements to compare and comprehend what the images on your site represent.
I mean, is there really much more I need to say here? If Google is focusing on learning how to crawl something and factor it into the value your site brings to the internet, you need to pay attention. That’s why it’s so important to ensure your images are relevant and formatted in a way Google can read.
How can you use images to boost SEO?
So, now that you know the “why” part, let’s dig into the “how” part. It’s important to dig a bit deeper and explore some of the ways you can apply visual marketing to your efforts to boost your SEO.
1. Make sure your images add to the story
There is a ton of value in adding things like graphs, screenshots, and other content that actually relates to your article and adds value. There is decidedly less value in adding generic images that simply represent the concepts and don’t really add anything. Since we’re on the topic, why don’t I use some visuals to show you what I mean?
For example number one, you can see instructive screenshots dropped into this piece of content. These are screenshots from an article I recently wrote that details how to use HARO for SEO and backlink building. I used screenshots to walk readers through each step and provide them with actionable guides like the image of the email template and the walkthrough of how to set up an email.
On the opposite side, you have the images below that show an example of using images that relate to the topic but don’t really add value. This is another article on my site. I decided to test out generic images on this piece, as you can see in the screenshots below. The images relate to the content, but they really don’t add much extra.
As you can see, both add a certain level of appeal to their respective articles. That said, example one, the HARO article, has 12 times the number of page views, 11 more comments, and double the time on page. The value is clear: relevant images that add to the story give your SEO a boost.
2. Optimize your images
It’s not enough to just add images to your pages and posts. You also need to ensure they are optimized. If you ignore this step, you can run into issues with the performance of your site. For example, images that aren’t optimized can lead to slow load times on your site, and site speed is a critical ranking factor for Google.
To ensure you aren’t bogging down your site with heavy images, try using appropriate image types. The best formats to use are JPEG, PNG, and GIF. And as for videos, host the videos elsewhere (YouTube, for example) and then embed them on your site rather than uploading them directly.
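One practical way to catch heavy images before they slow your site down is a quick audit script. Here’s a rough sketch in Python using only the standard library; the 300 KB budget is an arbitrary example I picked for illustration, not an official guideline:

```python
import os

# Flag image files that exceed a size budget before uploading them.
# The 300 KB budget is an arbitrary example, not an official threshold.
SIZE_BUDGET_BYTES = 300 * 1024
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def oversized_images(directory):
    """Return sorted paths of images in `directory` larger than the budget."""
    flagged = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        ext = os.path.splitext(name)[1].lower()
        if ext in IMAGE_EXTENSIONS and os.path.getsize(path) > SIZE_BUDGET_BYTES:
            flagged.append(path)
    return sorted(flagged)
```

Running this over your uploads folder before publishing tells you which files to compress or convert before they drag down your load times.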
Another important factor in optimizing your images is the tags you add. Just like you need to add meta tags to your posts, you need to add alt tags to your images as well. This serves as a way to tell Google (and let’s not forget other search engines, of course) what your images are about.
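If you want to check your existing pages for images missing alt text, here’s a minimal sketch using Python’s built-in `html.parser`. The page snippet and file names are invented for illustration:

```python
from html.parser import HTMLParser

class AltTagAudit(HTMLParser):
    """Collect the src of <img> tags that are missing alt text."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):  # missing or empty alt attribute
                self.missing_alt.append(attrs.get("src", "(no src)"))

# Hypothetical page fragment; file names are made up for this example.
page = """
<img src="haro-walkthrough.png" alt="Step-by-step HARO email setup">
<img src="generic-stock-photo.jpg">
"""

audit = AltTagAudit()
audit.feed(page)
print(audit.missing_alt)  # ['generic-stock-photo.jpg']
```

Any src that turns up in the list is an image search engines can’t interpret from its markup, so it’s a quick way to catch missed tagging opportunities.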
3. Take advantage of off-site search
You’ve likely heard this before, but it deserves being restated. YouTube is the second largest search engine. Second only to…drumroll please…Google!
So, why not take advantage of posting videos to YouTube and optimizing those videos to give you more content to rank in search?
While this is obviously an off-site strategy, if you create excellent video content and then optimize it properly to appear in search, your videos can grab some SEO value.
You can then add links back to your website in your video descriptions and on your YouTube channel, and as your videos become more popular, clicks from the links on your YouTube channel will give a boost to your site traffic.
Wrapping it up
So, you get it now, right? Images are good for the health of your website and the impact of your SEO strategy. They not only add some life to your website and grab readers’ attention, they also help you improve critical stats that can help give your SEO a boost.
If you’ve been using visuals in your content, your first step should be to review those visuals to ensure they are optimized. Make sure they add to the story and then check to catch any missed opportunities to enhance your files with the right file types along with proper tagging.
Using images and video content on and off your website is a no-brainer. In today’s visual world, it’s important to stay on top of the continuing trend toward a preference for visual content. Make sure to work visuals into your content to give your SEO a serious boost.
Anthony is the Founder of AnthonyGaenzle.com, a marketing and business blog. He also serves as the Head of Marketing and Business Development at Granite Creative Group, a full-service marketing firm. He is a storyteller, strategist, and eternal student of marketing and business strategy.
The post How visual content can give a boost to your SEO and how to take advantage appeared first on Search Engine Watch.
Just over a week after the U.S. elections, Twitter has offered a breakdown of some of its efforts to label misleading tweets. The site says that from October 27 to November 11, it labeled some 300,000 tweets as part of its Civic Integrity Policy. That amounts to around 0.2% of the total number of election-related tweets sent during that two-week period.
Of course, not all Twitter warnings are created equal. Only 456 of those included a warning that covered the text and limited user engagement, disabling retweets, replies and likes. That specific warning did go a ways toward limiting engagement: around three-fourths of those who encountered the tweets saw the obscured text by clicking through the warning. Quote tweets of those so labeled decreased by around 29%, according to Twitter’s figures.
The president of the United States received a disproportionate number of those labels, as The New York Times notes that just over a third of Trump’s tweets between November 3 and 6 were hit with such a warning. The end of the election (insofar as the election has actually ended, I suppose) appears to have slowed the site’s response time somewhat, though Trump continues to get flagged, as he continues to devote a majority of his feed to disputing the election results confirmed by nearly every major news outlet.
His latest tweet as of this writing has been labeled disputed, but not hidden, as Trump repeats claims against voting machine maker Dominion. “We also want to be very clear that we do not see our job as done,” Legal, Policy and Trust & Safety Lead Vijaya Gadde and Product Lead Kayvon Beykpour wrote. “Our work here continues and our teams are learning and improving how we address these challenges.”
Twitter and other social media sites were subject to intense scrutiny following the 2016 election for the roles the platforms played in the spread of misinformation. Twitter sought to address the issue by tweaking recommendations and retweets, as well as individually labeling tweets that violate its policies.
Earlier today, YouTube defended its decision to keep controversial election-related videos, noting, “Like other companies, we’re allowing these videos because discussion of election results & the process of counting votes is allowed on YT. These videos are not being surfaced or recommended in any prominent way.”
The next generation of gaming is here with the PlayStation 5 and Xbox Series X — except it isn’t, because there are almost no next-generation games to play on them. Demon’s Souls is the first title that can truly be called next-gen, and it shows — even though it’s a remake of a PS3 game… which also shows.
The original Demon’s Souls was an incredibly influential game. Its sequel, Dark Souls, was more popular and improved on the first quite a bit, but much of what made the now major series good had already been established. “Souls-like” is practically a genre now, though the originals are unsurprisingly still the nonpareil.
The comparative few who played Demon’s Souls were elated to hear that it was being remade, and by Bluepoint at that (who also remade the legendary Shadow of the Colossus), but worried that the game might not stand up by modern standards.
Can an old game, the essentials of which are a decade behind its descendants, be given a really, really, really ridiculously good-looking coat of paint and still act as a blockbuster next-gen debut? Well, it kind of has to — there’s no other option! Fortunately the game really does hold up, and in fact makes for a harrowing, cinematic experience despite a few significant creaks.
I don’t want to give a full review of the game itself; let it suffice to say that, although it looks and runs much better, the core of the game is almost entirely unchanged. Any review from the last decade is still completely relevant, down to the “magic is overpowered” and “inventory burden is annoying.”
As a next-gen gaming experience, however, Demon’s Souls is as yet without comparison. It serves as a showcase not only for the PS5’s graphical prowess, but its sound design, haptics, speed and OS.
First, the graphics. It’s clear that Sony and Bluepoint intended this to be a truly lavish remake, and the game’s structure — essentially five long, mostly linear levels — provides an excellent platform for breathtaking visuals carefully tuned to the user’s experience.
The environments themselves are incredibly detailed, and the various enemies you fight very well realized, but what I kept being impressed by was the lighting. Realistic lighting is something that has proven difficult even for top-tier developers, and it’s only now that the hardware has enough headroom to start doing it properly.
Demon’s Souls doesn’t use ray-tracing, the computation-heavy lighting technique perennially on the cusp of being implemented, but the real-time lighting effects are nevertheless dramatic and extremely engaging. This is a dark, dark world and the player is very limited as far as personal light sources, meaning the way you experience the environment is carefully designed.
Although the detailed armor, props and monsters are all very nice, it’s the realistic lighting that really sets them off in a way that seems truly new and beautiful. Dynamic range is used properly, to have actually dark areas illuminated dramatically, such as the still-terrifying Tower of Latria.
The game isn’t a huge leap over the best the PC has to offer right now, but it does make me excited for game designers who really want to use light and shadow as gameplay elements.
(Incidentally, don’t bother with the “cinematic” option versus “performance.” The latter keeps the game silky smooth, which for Souls games is a luxury, and the other setting didn’t improve the look much if at all, while severely affecting the framerate. Skip it unless you’re taking glamour shots.)
Similarly sound is extremely well done in the game, though I’m cautious about hyping Sony’s “3D audio” — really, games have had this sort of thing for years on many platforms. Having a decent pair of headphones is the important bit. But perhaps the PS5 offers improved workflows for spatializing sound; at all events in Demon’s Souls it was very good, with great separation, location and clarity. I have reliably dodged an enemy attack from offscreen after recognizing the characteristic grunt of an attacking foe, and the screeches and roars of dragons and boss monsters (as well as the general milieu of Latria) were suitably chilling.
This combined well with the improved haptics of the DualSense controller, which seemed to have a different “sensation” for every event. A dragon flying overhead, a demon stomping the ground, a blocked attack, an elevator ride. Mostly these were good and only aided immersion, but some, like the elevators, felt to me more like an annoying buzz than a rumble, like holding a power tool. I hope that developers will be sensible about these things and identify vibration patterns that are irritating. Fortunately the intensity can be adjusted universally in the PS5’s controls.
Likewise the adaptive triggers were nice but not game-changing. It was helpful when using the bow to know when the arrow was ready to release, for instance, but beyond a few things like that it was not used to great advantage.
Something that had a more immediate effect on how I played was the incredibly short load times. The Souls series has always been plagued by long load times when traveling and dying, the latter of which you can expect to do a lot. But now it’s rare that I can count to three before I’m materializing at the bonfire again.
This significantly reduces (but far from eliminates) frustration in this infamously unforgiving game, and actually makes me play it differently. Where once I could not be bothered to briefly travel to another area or the hub in order to accomplish some small task, now I know I can return to the Nexus, fuss around a bit with my loadout and be back in Boletaria in 30 seconds flat. If I die, I’m back in action in five seconds rather than 20, and believe me, that adds up real fast. (Load times are improved across the board in PS4 games running on the PS5 as well.)
Aiding this, kind of, is the new fancy pause screen Sony has implemented on its new console. When hitting the (annoyingly PS-shaped) PS button, a set of “cards” appears showing recent achievements and screenshots, but also ongoing missions or game progress. Pausing in Latria to take a breath, the menu offered up the ability to instantly warp to one of the other worlds, losing my souls but skipping the ordinarily requisite Nexus stop. This will certainly change how speedruns are accomplished, and provides a useful, if somewhat immersion-breaking option for the scatterbrained player.
The pause menu also provides a venue for tips and hints, in both text and video form. Again, this is a funny game to debut these in (I don’t count Astro’s Playroom, the included game/tech demo, which is fun but slight), because one of the Souls series’s distinctive features is player-generated notes and ghosts that alternately warn and deceive new players. In another game I might have relied on the PS5’s hints more, but for this specific title they seem somewhat redundant.
As arguably the only “real” PS5 launch title, Demon’s Souls is a curious but impressive creature. It definitely shows the new console to advantage in some ways, but the game itself (while still amazing) is dated in many ways, limiting the possibilities of what can be shown off in the first place.
Certainly the remake is the best (and for many, only) way to play a classic, and for that alone it is recommended — though the $70 price (more in Europe and elsewhere) is definitely a bit of a squinter. One would hope that for the new higher asking price, we could expect next-generation gameplay as well as next-generation trimmings. Well, for now we have to take what we can get.
One specialist shares her top 5 tips for taking a PPC holiday without disaster striking.
Read more at PPCHero.com
- With remote work and the peak season coming in, creating an internal communication plan has become a massive priority for upper management to facilitate great teamwork.
- How can companies create internal communication plans that not only work in the short term but can also be utilized for years to come to achieve steady team growth?
- Eight great steps to improve your SMB digital marketing team’s internal communication plan and optimize successful outcomes.
Creating an internal communication plan has become a massive priority this year.
With more companies working remotely, documenting processes and making them accessible to teams has taken precedence over many other tasks.
Since team members cannot walk over to each other’s desks for a quick chat, it has become key for upper management to determine how best to facilitate teamwork.
How can companies create internal communication plans that not only work in the short term but can also be utilized for years to come to achieve steady team growth?
We share a step-by-step guide that will help marketing teams communicate better in the long run.
1. Audit established internal communication plans
Before you go steaming ahead to create a new internal communication plan, you need to look at what you already have—or whether you have something at all.
Analyze your documentation to see whether you have the most up-to-date information. Examine your process flow to determine whether obstacles are being overlooked.
Study how effective your former communication plans have been—did you reach the goals set out for them? If not, what areas require improvement?
It’s also worth creating a company-wide survey to get a better understanding of where your employees think the company is.
Are they comfortable with the way communication is handled? Would they prefer changes in a particular area?
A thorough audit, like in the example below, will make it much easier to start creating an internal communication plan that helps your company grow.
2. Internal communication plan metrics
Now that you’ve determined what worked and didn’t in your previous internal communication plans—or that you do need one—you need to determine metrics for it.
Some metrics to aim for include:
- Employee reach
- Employee engagement rates
- Employee retention rates
- Message open rates
- Message click rates
- Level of productivity
- Employee net promoter score
- Revenue increase
Your SMB may not be focused on all of these metrics—but you should aim for at least a few of these in your initial plans.
You don’t want a narrow focus, such as increasing productivity, that doesn’t improve engagement or revenue.
Instead, choose a set of metrics that you can calculate and that are attainable, like the open rate example used by Contact Monkey.
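Two of the metrics above are easy to compute once you have the raw counts. Here’s a small Python sketch of open rate and employee net promoter score (promoters scoring 9–10 minus detractors scoring 0–6); the numbers plugged in are illustrative, not benchmarks:

```python
def open_rate(opened, delivered):
    """Internal email open rate as a percentage."""
    return 100 * opened / delivered

def employee_nps(scores):
    """eNPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 survey."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative numbers, not real benchmarks.
print(open_rate(412, 500))                       # 82.4
print(employee_nps([10, 9, 8, 7, 6, 9, 10, 3]))  # 25.0
```

Tracking a handful of figures like these over time is what makes the “measurable” part of your plan concrete.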
3. Setting goals for internal communication plans
Miscommunication can cost SMBs an average of $420K in losses, according to SHRM, which is why internal communication plans are so necessary.
When you determine your metrics, you also need to set goals for your company and how the internal communication plan will tie into those goals.
How do you choose a goal for your plan that will help your marketing team grow? By using the SMART goal-setting method:
- Be specific about what you want to accomplish, and use simple language to convey it
- Choose a measurable goal that you can analyze
- Make the goal attainable so your communications don’t aim for something you can’t get
- Your goal should be relevant to the team and your overarching company goal
- The goal should also be time-based so that your plan doesn’t miss deadlines
Here’s a visual overview of SMART goals that your team can refer to in the future.
4. Identify internal communication plan stakeholders
No matter the size of your company or marketing team, your internal communication plan needs to be targeted towards specific stakeholders.
This is because not all messages are relevant to every member of the team.
The way a website design team functions will differ from the PR team, or social media team, like in this example.
You also don’t want to bombard other teams with marketing information that won’t be important to them.
Creating a one-size-fits-all plan for your company will only cause more misunderstandings—that is why you need to outline your target audience before creating a plan.
Use behavioral targeting methods to understand how, when, and why your team members need to be reached. This will make communication more effective.
5. Brand your internal communication plan
Branding is an essential part of external communications, but it should also be acknowledged for internal communication plans.
While brand symbols like logos, colors, and fonts are meant to evoke a connection between companies and consumers, it’s also necessary to maintain consistency within the company.
Branding ensures that your team is seeing the same message design internally and on your external channels.
So, make sure your internal newsletters and your business letter template, like this example, are updated regularly to reflect your branding styles.
But remember that branding isn’t just about the visual appearance of your company—your tone and messaging should also be consistent in your communications.
Don’t adopt a serious tone internally, only to be jovial with customers—your team is in many ways a customer, as well.
Your messaging should also be consistent—don’t tell external parties something that your marketing team doesn’t already know.
6. Design the internal communication process
One of the key aspects of an internal communication plan is workflow—your team should know whom to send content and strategies to for approval.
You should include a flow chart in your plan—these demystify approval processes so team members send their material to the right people in the right order.
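An approval flow like the one a flow chart captures can also live in code, which makes it easy to look up. Here’s a rough Python sketch; the role names are hypothetical examples, so adapt them to your own team structure:

```python
# A hypothetical approval chain, encoded as a simple mapping.
# Role names are invented examples; adapt them to your own team.
NEXT_APPROVER = {
    "content writer": "content lead",
    "content lead": "marketing manager",
    "marketing manager": "cmo",
}

def approval_chain(role):
    """Return the ordered list of approvers for content starting at `role`."""
    chain = []
    while role in NEXT_APPROVER:
        role = NEXT_APPROVER[role]
        chain.append(role)
    return chain

print(approval_chain("content writer"))  # ['content lead', 'marketing manager', 'cmo']
```

A lookup like this answers the same question as the flow chart—who sees my work next, and in what order—without anyone having to dig up the diagram.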
7. Channels to implement internal communication
There are numerous channels where you can implement your internal communication plan:
- Internal newsletter
- Closed social media groups
- Project management tools
- Slack, or similar alternatives
- Video conference calls
Most of these channels are completely free to create and maintain, but some of them can be time-consuming, as you can see from this chart.
Emails aren’t as instantaneous as they used to be and many people have found themselves using them solely for external purposes.
On the other hand, instant messaging tools have allowed remote workers to keep in touch with each other and respond immediately.
Your SMB may not have the ability to institute an intranet, but you may be able to design newsletters to send to your team.
Include which channels you will be using in your plan and for what kind of communication so that team members are aligned and know where to reach out for responses.
8. Regularly evaluate your internal communication plan
If you think your job is done because you’ve created your internal communication plan, I’m afraid that isn’t the end of it.
Your plan will need to change to reflect your marketing team’s structure, your company’s new goals, and even the external environment.
Be prepared to assess the following on an annual basis:
- Email open rates
- Click-through rates
- Channels used
- Feedback from your team
- Common barriers
- Areas for improvement
Once you determine how these areas have performed you can redesign your communication plan.
Conclusion: Create an internal communication plan that keeps your team aligned
Creating an internal communication plan that works takes time and energy. You will also need to A/B test your tools and processes to define the ones that work best for your company.
To recap, here’s how you can create an internal communication plan that works:
- Audit your current plans
- Choose your metrics
- Set plan goals
- Identify stakeholders
- Brand your documents
- Design the process
- Choose your channels
- Evaluate your plan
By following these steps, you can create an internal communication plan that will help your marketing team become aligned with your company.
Ronita Mohan is a content marketer at Venngage, the online infographic maker and design platform. Ronita regularly writes about marketing, sales, and small businesses.
The post Internal communication plan: How SMB marketing teams can achieve growth appeared first on Search Engine Watch.
DoorDash filed to go public on Friday, meaning we’ll have at least one more unicorn IPO before 2020 comes to a close. For a high-level look at its numbers, I wrote this, Danny covered who will profit from the deal, and I noodled on the impact of COVID-19 on its business.
I bring all that up because there is another COVID-19 impacted unicorn that we are expecting to see go public in very short order: Airbnb.
When Airbnb filed to go public in August, it seemed like a solid plan. The company was widely reported to be on an upswing from its COVID-doldrums, the public markets were hot for growth and tech shares, and the pandemic’s caseload in the United States was coming down from its summer highs. It looked great for Airbnb to wrap its Q3, drop its public S-1 with the new numbers, and laugh all the way to the bank after showing investors that even a global pandemic and travel industry depression couldn’t stop it.
And yet. The United States and world at large are now in the midst of the worst COVID-19 spike yet, and consumer spend is going down right before we get the company’s S-1. November feels less winsome for an Airbnb recovery than August or September did. Still, when Airbnb files — next week, the scuttlebutt indicates, so get ready — we’ll only have a look at its numbers through the third quarter.
That’s effectively the same timeframe for a dataset that the folks at Cardify sent over and I dug through. Per the company, which tracks real-time consumer spend data, here’s a look at how well Airbnb recovered ahead of its larger industry after the initial recession in pandemic lodging spend:
Impressive, right? Sadly for Airbnb, the initial boom of demand through late June into July tapered as time continued.
Zooming in somewhat, here’s Airbnb spend data from July 2020 through the end of October, the first month of Q4, compared to the same period of 2019:
Declines, then, but still an encouraging set of data for the company regardless. I would not have expected Airbnb spend — via third-party, admittedly — to be this strong.
The trend of folks renting a house for a month seems to have diminished somewhat, in case you are factoring that into your mental math concerning Airbnb revenues from the above charts. Cardify told TechCrunch that after peaking at around +70% in the March-April timeframe, “average booking sizes have now normalized and are approximately 30% higher on a YTD basis.”
There is weakness in October, the charts show, but that appears to be at least partially seasonal given the 2019 line, so I don’t want to over-ascribe rising COVID cases as the cause. The drooping line, however, was echoed in data from SimilarWeb that was also shared with The Exchange. That dataset concerned accommodation booking volume around the world for a number of travel services, including Airbnb. Its data tracking the US market showed that a bookings recovery through September, which made up some ground on March lows, was undercut by October declines. Europe’s bookings recovery peaked in July and has been falling ever since. Asian volume is creeping higher, but remains down sharply from prior levels.
It was a mixed picture, but as Airbnb is doing better than its broader industry per Cardify, the aggregated data could be leading us to be more pessimistic than we otherwise need to be. We’ll see shortly what the real numbers are, but I couldn’t help but share what I was reading with you. On to the S-1!
Before DoorDash filed, we were going to talk about Brex today in this space after Airbnb. But, since we got extra busy, expect those notes early next week on The Exchange.
The week was super busy with earnings, so I’ve collected a few notes from calls with select companies after they reported. Apologies to everyone’s favorite reporting firm, but we’re space-limited.
Appian crushed earnings expectations. What drove the low-code application development service’s growth? According to CEO Matt Calkins, it wasn’t any single thing. Instead, the company’s performance was driven by a long ramp, he said, though he also noted that the concept of low-code has reached the public consciousness at a new, higher level during the last few quarters.
Why? The year’s chaos pushed companies into new patterns faster than they had anticipated. Chalk this result up to the accelerating digital transformation being real, which is good news for startups. (For more on Appian and the low-code space, head here.)
Alteryx gave The Exchange an earnings first, making both its outgoing CEO Dean Stoecker and its incoming CEO Mark Anderson available to chat about results. The company crushed Q3 expectations, but its Q4 projections did not excite investors. What was up? Anderson argued that ARR growth, not forward GAAP revenue projections, is the most transparent and clear view of an expanding software company, to paraphrase his thinking. You can’t ignore revenue, he said, but given the nuances in how revenue is counted, pay attention to ARR.
Alteryx has a solid ARR target for 2021. We’ll see how investors view its Q4 results and if they align their thinking to that of the new CEO. Alteryx’s former CEO is bullish, saying that in time the market will realize that analytics is at the epicenter of digital transformation. And his company will be there with code to sell.
Moving along: earlier this week I asked a number of VCs about the software venture capital market in the wake of Monday’s sharp selloff, wondering what might happen to public and private software companies if other stocks suddenly became more attractive. (Strong vaccine news on Monday was later overwhelmed by surging cases as the week went along, but on Monday Zoom lost billions in value as investors fled.)
One set of responses came in late, but I wanted to share them all the same, as they were more bullish than I anticipated. In the view of Laela Sturdy, a general partner at CapitalG, Alphabet’s growth fund, “private software investors are unlikely to change their investing patterns much as a result of fluctuations in the public market,” adding later that “public market changes would have to be very extreme — as in 30 percent or more — in order to impact growth stage valuations.”
The connection between public valuations and trading patterns and private capital deployment exists, but how closely the two are linked depends on what’s happening at any given moment, and it appears that at the moment private investor excitement about software is durable.
Sturdy explained why that may be: “Long-term secular trends around cloud adoption, automation and AI, data, security, fintech infrastructure, and the ongoing rapid acceleration of digital transformation will help tech companies maintain their status as the darlings of growth investors in both the private and public markets.”
- Hopin raised $125 million at a $2.125 billion valuation after scaling to $20 million in ARR in under a year. Wow.
- Square and PayPal earnings augur well for fintech startups overall, though it appears that most fintech money is going to only the latest stages of that niche. (Truebill just raised $17 million, notably.)
- Udemy wants $100 million more.
- What’s ahead for edtech startups now that edtech stocks are taking hits?
- Menlo Security landed $100 million more at an $800 million valuation. Not bad!
Various and Sundry
And finally, the rest of the stuff that I couldn’t get to this week. Here we go:
- Chatted with Cambridge Innovation Capital, a neat venture capital firm from Cambridge in the U.K. — not the Cambridge on the American East Coast. More to say here, but the good news is that hubs of innovation really are maturing into startup factories the world around.
- I got my hands on an early copy of a survey of LPs put together by Allocate. It comes out Monday I think, but it said that “only 20% of [LP] respondents said COVID had slowed their investment activities,” which helps explain all the funds we’ve seen in the past few months.
Closing with something fun, remember that look we did of the performance of various startups in Q3? That was fun. Anyhoo, no-code “online form builder” JotForm told The Exchange that its revenue is up 50% from its 2019 results, that its enterprise customer base is up 620%, and that it expects to reach “100,000 total paid users by end of year.” Neat!
How Are Featured Snippet Answers Decided Upon?
I recently wrote about Featured Snippet Answer Scores Ranking Signals. In that post, I described how Google was likely using query-dependent and query-independent ranking signals to create answer scores for queries that appeared to be seeking answers.
One of the inventors of that patent from that post was Steven Baker. I looked at other patents that he had written, and noticed that one of those was about context as part of query independent ranking signals for answers.
Remembering that patent about question-answering and context, I felt it was worth reviewing that patent and writing about it.
This patent is about processing question queries that want textual answers and how those answers may be decided upon.
It is a complicated patent, and at one point the description behind it gets a bit murky, but I point out where that happens, and I think the other details provide a lot of insight into how Google is scoring featured snippet answers. There is an additional related patent that I will be following up with after this post, and I will link to it from here as well.
This patent starts by telling us that a search system can identify resources in response to queries submitted by users and provide information about the resources in a manner that is useful to the users.
How Context Scoring Adjustments for Featured Snippet Answers Works
Users of search systems are often searching for an answer to a specific question, rather than a listing of resources, like in this drawing from the patent, showing featured snippet answers:
For example, users may want to know what the weather is in a particular location, a current quote for a stock, the capital of a state, etc.
When queries that are in the form of a question are received, some search engines may perform specialized search operations in response to the question format of the query.
For example, some search engines may provide information responsive to such queries in the form of an “answer,” such as information provided in the form of a “one box” to a question, which is often a featured snippet answer.
Some question queries are better served by explanatory answers, which are also referred to as “long answers” or “answer passages.”
For example, for the question query [why is the sky blue], an answer explaining light as waves is helpful.
Such answer passages can be selected from resources that include text, such as paragraphs, that are relevant to the question and the answer.
Sections of the text are scored, and the section with the best score is selected as an answer.
In general, the patent tells us about one aspect of what it covers in the following process:
- Receiving a query that is a question query seeking an answer response
- Receiving candidate answer passages, each passage made of text selected from a text section subordinate to a heading on a resource, with a corresponding answer score
- Determining a hierarchy of headings on a page, with two or more heading levels hierarchically arranged in parent-child relationships, where each heading level has one or more headings, a subheading of a respective heading is a child heading in a parent-child relationship and the respective heading is a parent heading in that relationship, and the heading hierarchy includes a root level corresponding to a root heading (for each candidate answer passage)
- Determining a heading vector describing a path in the hierarchy of headings from the root heading to the respective heading to which the candidate answer passage is subordinate, determining a context score based, at least in part, on the heading vector, adjusting the answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score
- Selecting an answer passage from the candidate answer passages based on the adjusted answer scores
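The steps above can be sketched as a small adjustment loop. This is my own illustrative Python, not Google's code: the patent only says the context score is based at least in part on the heading vector, so the particular heuristic here (a small multiplicative boost for overlap between the question's terms and the heading vector's text) is an assumption.

```python
def context_score(heading_vector, question_terms):
    """Toy context score: reward heading paths that share terms with the question."""
    heading_text = " ".join(heading_vector).lower()
    overlap = sum(1 for term in question_terms if term in heading_text)
    return 1.0 + 0.1 * overlap  # multiplicative boost, always >= 1.0

def select_answer(candidates, question_terms):
    """candidates: list of (passage_text, answer_score, heading_vector)."""
    best_passage, best_score = None, float("-inf")
    for passage, score, vector in candidates:
        adjusted = score * context_score(vector, question_terms)
        if adjusted > best_score:
            best_passage, best_score = passage, adjusted
    return best_passage

# The first passage starts with a higher base answer score, but the second
# passage's headings match the question's context better, flipping the ranking.
candidates = [
    ("It takes about 27 days for the Moon to orbit the Earth ...", 0.8,
     ["About The Moon", "The Moon's Orbit",
      "How long does it take for the Moon to orbit Earth?"]),
    ("The moon's distance from Earth varies because ...", 0.75,
     ["About The Moon", "The Moon's Orbit",
      "The distance from the Earth to the Moon"]),
]
print(select_answer(candidates, ["distance", "moon", "earth"]))
```

The point of the sketch is the shape of the process, not the numbers: a query-dependent answer score arrives first, and the heading context only adjusts it.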
Advantages of the process in the patent
- Long query answers can be selected, based partially on context signals indicating answers relevant to a question
- The context signals may be, in part, query-independent (i.e., scored independently of their relatedness to terms of the query)
- This part of the scoring process considers the context of the document (“resource”) in which the answer text is located, accounting for relevancy signals that may not otherwise be accounted for during query-dependent scoring
- Following this approach, long answers that are more likely to satisfy a searcher’s informational need are more likely to appear as answers
This patent can be found at:
Context scoring adjustments for answer passages
Inventors: Nitin Gupta, Srinivasan Venkatachary, Lingkun Chu, and Steven D. Baker
US Patent: 9,959,315
Granted: May 1, 2018
Appl. No.: 14/169,960
Filed: January 31, 2014
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for context scoring adjustments for candidate answer passages.
In one aspect, a method includes scoring candidate answer passages. For each candidate answer passage, the system determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading to which the candidate answer passage is subordinate; determines a context score based, at least in part, on the heading vector; and adjusts answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.
The system then selects an answer passage from the candidate answer passages based on the adjusted answer scores.
Using Context Scores to Adjust Answer Scores for Featured Snippets
A drawing from the patent shows different hierarchical headings that may be used to determine the context of answer passages that may be used to adjust answer scores for featured snippets:
I discuss these headings and their hierarchy below. Note that the headings include the Page title as a heading (About the Moon), and the headings within heading elements on the page as well. And those headings give those answers context.
This context scoring process starts with receiving candidate answer passages and a score for each of the passages.
Those candidate answer passages and their respective scores are provided to a search engine that receives a query determined to be a question.
Each of those candidate answer passages is text selected from a text section under a particular heading from a specific resource (page) that has a certain answer score.
For each resource where a candidate answer passage has been selected, a context scoring process determines a heading hierarchy in the resource.
A heading is text or other data corresponding to a particular passage in the resource.
As an example, a heading can be text summarizing a section of text that immediately follows the heading (the heading describes what the text is about that follows it, or is contained within it.)
Headings may be indicated, for example, by specific formatting data, such as heading elements using HTML.
This next section from the patent reminded me of an observation that Cindy Krum of Mobile Moxie has about named anchors on a page, and how Google might index those to answer a question, to lead to an answer or a featured snippet. She wrote about those in What the Heck are Fraggles?
A heading could also be anchor text for an internal link (within the same page) that links to an anchor and corresponding text at some other position on the page.
A heading hierarchy could have two or more heading levels that are hierarchically arranged in parent-child relationships.
The first level, or the root heading, could be the title of the resource.
Each of the heading levels may have one or more headings, and a subheading of a respective heading is a child heading and the respective heading is a parent heading in the parent-child relationship.
For each candidate passage, a context scoring process may determine a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.
To determine the context score, the context scoring process determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.
The context score could be based, at least in part, on the heading vector.
The context scoring process can then adjust the answer score of the candidate answer passage at least in part by the context score to form an adjusted answer score.
The context scoring process can then select an answer passage from the candidate answer passages based on adjusted answer scores.
This flowchart from the patent shows the context scoring adjustment process:
Identifying Question Queries And Answer Passages
I’ve written about understanding the context of answer passages. The patent tells us more about question queries and answer passages worth going over in more detail.
Some queries are in the form of a question or an implicit question.
For example, the query [distance of the earth from the moon] is in the form of an implicit question “What is the distance of the earth from the moon?”
Likewise, a question may be specific, as in the query [How far away is the moon].
The search engine includes a query question processor that uses processes that determine if a query is a query question (implicit or specific) and if it is, whether there are answers that are responsive to the question.
The query question processor can use several different algorithms to determine whether a query is a question and whether there are particular answers responsive to the question.
For example, to determine question queries and answers, it may use:
- Language models
- Machine learned processes
- Knowledge graphs
- Combinations of those
The query question processor may choose candidate answer passages in addition to or instead of answer facts. For example, for the query [how far away is the moon], an answer fact is 238,900 miles. And the search engine may just show that factual information since that is the average distance of the Earth from the moon.
But, the query question processor may instead choose to identify passages that appear to be very relevant to the question query.
These passages are called candidate answer passages.
The answer passages are scored, and one passage is selected based on these scores and provided in response to the query.
An answer passage may be scored, and that score may be adjusted based on a context, which is the point behind this patent.
Often Google will identify several candidate answer passages that could be used as featured snippet answers.
Google may look at the information on the pages where those answers come from to better understand the context of the answers such as the title of the page, and the headings about the content that the answer was found within.
Contextual Scoring Adjustments for Featured Snippet Answers
The query question processor sends the context scoring processor some candidate answer passages, information about the resource each answer passage came from, and a score for each of the featured snippet answers.
The scores of the candidate answer passages could be based on the following considerations:
- Matching a query term to the text of the candidate answer passage
- Matching answer terms to the text of the candidate answer passages
- The quality of the underlying resource from which the candidate answer passage was selected
I recently wrote about featured snippet answer scores, and how a combination of query dependent and query independent scoring signals might be used to generate answer scores for answer passages.
The patent tells us that the query question processor may also take into account other factors when scoring candidate answer passages.
Candidate answer passages can be selected from the text of a particular section of the resource. And the query question processor could choose more than one candidate answer passage from a text section.
We are given the following examples of different answer passages from the same page:
(These example answer passages are referred to in a few places in the remainder of the post.)
- (1) It takes about 27 days (27 days, 7 hours, 43 minutes, and 11.6 seconds) for the Moon to orbit the Earth at its orbital distance
- (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
- (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
Each of those answers could be good ones for Google to use. We are told that:
More than three candidate answers can be selected from the resource, and more than one resource can be processed for candidate answers.
How would Google choose between those three possible answers?
Google might decide based on the number of sentences and a selection of up to a maximum number of characters.
The patent tells us this about choosing between those answers:
Each candidate answer has a corresponding score. For this example, assume that candidate answer passage (2) has the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1). Thus, without the context scoring processor, candidate answer passage (2) would have been provided in the answer box of FIG. 2. However, the context scoring processor takes into account the context of the answer passages and adjusts the scores provided by the query question processor.
So, we see that what might be chosen based on featured snippet answer scores could be adjusted based on the context of that answer from the page that it appears on.
Contextually Scoring Featured Snippet Answers
This process begins with a query determined to be a question query seeking an answer response.
This process next receives candidate answer passages, each candidate answer passage chosen from the text of a resource.
Each of the candidate answer passages are text chosen from a text section that is subordinate to a respective heading (under a heading) in the resource and has a corresponding answer score.
For example, the query question processor provides the candidate answer passages, and their corresponding scores, to the context scoring processor.
A Heading Hierarchy to Determine Context
This process then determines a heading hierarchy from the resource.
The heading hierarchy would have two or more heading levels hierarchically arranged in parent-child relationships (Such as a page title, and an HTML heading element.)
Each heading level has one or more headings.
A subheading of a respective heading is a child heading (an (h2) heading might be a subheading of a (title)) in the parent-child relationship and the respective heading is a parent heading in the relationship.
The heading hierarchy includes a root level corresponding to a root heading.
The context scoring processor can process heading tags in a DOM tree to determine a heading hierarchy.
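As a sketch of that idea, the snippet below derives a heading hierarchy from HTML heading tags using Python's standard-library parser, treating the page title as the root and h1–h6 as nested levels. The parsing details are my own assumptions about one way this could be done, not Google's implementation.

```python
from html.parser import HTMLParser

class HeadingHierarchy(HTMLParser):
    """Collect, for each heading, its path from the root (title) heading."""

    def __init__(self):
        super().__init__()
        self.stack = []       # current path as (level, text) pairs
        self.paths = {}       # heading text -> full path from the root
        self._capture = None  # level of the heading tag currently open

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._capture = 0  # treat the title as the root (level 0)
        elif len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self._capture = int(tag[1])

    def handle_data(self, data):
        if self._capture is None or not data.strip():
            return
        level, text = self._capture, data.strip()
        # Pop siblings and deeper headings, then push this heading.
        while self.stack and self.stack[-1][0] >= level:
            self.stack.pop()
        self.stack.append((level, text))
        self.paths[text] = [t for _, t in self.stack]
        self._capture = None

# Headings from the patent's moon example page:
html = """<title>About The Moon</title>
<h1>The Moon's Orbit</h1>
<h2>How long does it take for the Moon to orbit Earth?</h2>
<h2>The distance from the Earth to the Moon</h2>
<h1>The Moon</h1>
<h2>Age of the Moon</h2>"""

parser = HeadingHierarchy()
parser.feed(html)
print(parser.paths["Age of the Moon"])
# ['About The Moon', 'The Moon', 'Age of the Moon']
```

Each stored path is essentially the heading vector the patent describes: root heading down to the heading the passage sits under.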
For example, concerning the drawing about the distance to the moon just above, the heading hierarchy for the resource may be:
The ROOT Heading (title) is: About The Moon (310)
The main heading (H1) on the page
H1: The Moon’s Orbit (330)
A secondary heading (h2) on the page:
H2: How long does it take for the Moon to orbit Earth? (334)
Another secondary heading (h2) on the page is:
H2: The distance from the Earth to the Moon (338)
Another Main heading (h1) on the page
H1: The Moon (360)
Another secondary Heading (h2) on the page:
H2: Age of the Moon (364)
Another secondary heading (h2) on the page:
H2: Life on the Moon (368)
Here is how the patent describes this heading hierarchy:
In this heading hierarchy, The title is the root heading at the root level; headings 330 and 360 are child headings of the heading, and are at a first level below the root level; headings 334 and 338 are child headings of the heading 330, and are at a second level that is one level below the first level, and two levels below the root level; and headings 364 and 368 are child headings of the heading 360 and are at a second level that is one level below the first level, and two levels below the root level.
The process from the patent determines a context score based, at least in part, on the relationship between the root heading and the respective heading to which the candidate answer passage is subordinate.
This score may be based on a heading vector.
The patent says that the process, for each of the candidate answer passages, determines a heading vector that describes a path in the heading hierarchy from the root heading to the respective heading.
The heading vector would include the text of the headings for the candidate answer passage.
For the example candidate answer passages (1)-(3) above about how long it takes the moon to orbit the earth, the respectively corresponding heading vectors V1, V2 and V3 are:
- V1=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: How long does it take for the Moon to orbit the Earth?]>
- V2=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>
- V3=<[Root: About The Moon], [H1: The Moon's Orbit], [H2: The distance from the Earth to the Moon]>
We are also told that because candidate answer passages (2) and (3) are selected from the same text section 340, their respective heading vectors V2 and V3 are the same (they are both in the content under the same (H2) heading.)
The process of adjusting a score, for each answer passage, uses a context score based, at least in part, on the heading vector (410).
That context score can be a single score used to scale the candidate answer passage score or can be a series of discrete scores/boosts that can be used to adjust the score of the candidate answer passage.
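Here is a hedged sketch of the second form, a series of discrete boosts applied to the base answer score; the boost names and values are invented for illustration:

```python
def adjust(answer_score, boosts):
    """Apply each discrete boost multiplicatively to the base answer score."""
    for factor in boosts.values():
        answer_score *= factor
    return round(answer_score, 4)

# Invented boost names and values, purely for illustration:
boosts = {"coverage": 1.1, "preceding_question": 1.2, "list": 1.0}
print(adjust(0.75, boosts))  # 0.75 * 1.1 * 1.2 * 1.0 = 0.99
```

Whether the boosts are combined multiplicatively, additively, or as a single scale factor is left open by the patent; the multiplication above is just one plausible reading.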
Where things Get Murky in This Patent
There do seem to be several related patents involving featured snippet answers, and this one which targets learning more about answers from their context based on where they fit in a heading hierarchy makes sense.
But, I’m confused by how the patent tells us that one answer based on the context would be adjusted over another one.
The first issue I have is that the answers they are comparing in the same contextual area have some overlap. Here those two are:
- (2) Why is the distance changing? The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
- (3) The moon’s distance from Earth varies because the moon travels in a slightly elliptical orbit. Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles
Note that the second answer and the third answer both include the same line: “Thus, the moon’s distance from the Earth varies from 225,700 miles to 252,000 miles.” I find myself a little surprised that the second answer includes a couple of sentences that aren’t in the third answer, and skips a couple of lines from the third answer, and then includes the last sentence, which answers the question.
Since they both appear in the same heading and subheading section of the page they are from, it is difficult to imagine that there is a different adjustment based on context. But, the patent tells us differently:
The candidate answer passage with the highest adjusted answer score (based on context from the headings) is selected and provided as the answer passage.
Recall that in the example above, the candidate answer passage (2) had the highest score, followed by candidate answer passage (3), and then by candidate answer passage (1).
However, after adjustments, candidate answer passage (3) has the highest score, followed by candidate answer passage (2), and then candidate answer passage (1).
Accordingly, candidate answer passage (3) is selected and provided as the answer passage of FIG. 2.
Boosting Scores Based on Passage Coverage Ratio
A query question processor may limit the candidate answers to a maximum length.
The context scoring processor determines a coverage ratio, a measure of how much of the text from which a candidate answer passage was selected is covered by that passage.
The patent describes alternative question answers:
Alternatively, the text block may include text sections subordinate to respective headings that include a first heading for which the text section from which the candidate answer passage was selected is subordinate, and sibling headings that have an immediate parent heading in common with the first heading. For example, for the candidate answer passage, the text block may include all the text in the portion 380 of the hierarchy; or may include only the text of the sections, of some other portion of text within the portion of the hierarchy. A similar block may be used for the portion of the hierarchy for candidate answer passages selected from that portion.
A small coverage ratio may indicate a candidate answer passage is incomplete. A high coverage ratio may indicate the candidate answer passage captures more of the content of the text passage from which it was selected. A candidate answer passage may receive a context adjustment, depending on this coverage ratio.
A passage coverage ratio is the ratio of the total number of characters in the candidate answer passage to the total number of characters in the passage from which the candidate answer passage was selected.
The passage coverage ratio could also be the ratio of the total number of sentences (or words) in the candidate answer passage to the total number of sentences (or words) in the passage from which the candidate answer passage was selected.
We are told that other ratios can also be used.
From the three example candidate answer passages about the distance to the moon above (1)-(3) above, passage (1) has the highest ratio, passage (2) has the second-highest, and passage (3) has the lowest.
This process determines whether the coverage ratio is less than a threshold value. That threshold value can be, for example, 0.3, 0.35 or 0.4, or some other fraction. In our “distance to the moon” example, each coverage passage ratio meets or exceeds the threshold value.
If the coverage ratio is less than the threshold value, then the process selects a first answer boost factor. The first answer boost factor might be proportional to the coverage ratio according to a first relation, or may be a fixed value, or a non-boosting value (e.g., 1.0).
But if the coverage ratio is not less than the threshold value, the process may select a second answer boost factor. The second answer boost factor may be proportional to the coverage ratio according to a second relation, or may be a fixed value, or a value greater than the non-boosting value (e.g., 1.1).
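Those two branches can be sketched as a small function. The 0.35 threshold and the 1.0/1.1 boost values below are examples the patent itself offers as possibilities, not known production values:

```python
def coverage_boost(passage, source_text, threshold=0.35):
    """Return a boost factor from the character-based passage coverage ratio."""
    ratio = len(passage) / len(source_text)
    if ratio < threshold:
        return 1.0  # non-boosting value for incomplete-looking passages
    return 1.1      # boost passages that cover more of their source section

print(coverage_boost("a" * 20, "a" * 100))  # ratio 0.2 -> 1.0
print(coverage_boost("a" * 60, "a" * 100))  # ratio 0.6 -> 1.1
```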
Scoring Based on Other Features
The context scoring process can also check for the presence of features in addition to those described above.
Three example features for contextually scoring an answer passage are distinctive text, a preceding question, and a list format.
Distinctive text is text that stands out because it is formatted differently from other text, such as by bolding.
A Preceding Question
A preceding question is a question in the text that precedes the candidate answer passage.
The search engine may process various amounts of text to detect the question.
It might check only the passage from which the candidate answer passage was extracted.
Or it might check a text window that can include header text and text from other sections.
A boost score that is inversely proportional to the text distance from a question to the candidate answer passage is calculated, and the check is terminated at the occurrence of a first question.
That text distance may be measured in characters, words, or sentences, or by some other metric.
If the question is anchor text for a section of text and there is intervening text, such as in the case of a navigation list, then the question is determined to only precede the text passage to which it links, not precede intervening text.
In the drawing above about the moon, there are two questions in the resource: “How long does it take for the Moon to orbit Earth?” and “Why is the distance changing?”
The first question–“How long does it take for the Moon to orbit Earth?”– precedes the first candidate answer passage by a text distance of zero sentences, and it precedes the second candidate answer passage by a text distance of five sentences.
And the second question–“Why is the distance changing?”– precedes the third candidate answer by zero sentences.
If a preceding question is detected, then the process selects a question boost factor.
This boost factor may depend on the text distance, on whether the question is in a text passage subordinate to a header or is itself a header, and, if the question is in a header, on whether the candidate answer passage is subordinate to that header.
Considering these factors, the third candidate answer passage receives the highest boost factor, the first candidate answer receives the second-highest boost factor, and the second candidate answer receives the smallest boost factor.
If no preceding question is detected, or after the question boost factor is selected, the process then checks for the presence of a list.
The Presence of a List
A list is an indication of several steps, usually instructive or informative. The detection of a list may be conditioned on the query being a step modal query.
A step modal query is a query for which a list-based answer is likely to be a good answer. Examples of step modal queries are queries like:
- [How to . . . ]
- [How do I . . . ]
- [How to install a door knob]
- [How do I change a tire]
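A minimal check for step modal queries could be pattern-based, along these lines. The pattern set and function name are illustrative assumptions; the patent does not enumerate an exact list of patterns.

```python
import re

# Hypothetical detector for "step modal" queries, i.e. queries where a
# list-based answer is likely to be a good answer.
STEP_MODAL_PATTERNS = [
    re.compile(r"^how to\b", re.IGNORECASE),
    re.compile(r"^how do i\b", re.IGNORECASE),
]

def is_step_modal_query(query: str) -> bool:
    """Return True if the query matches a known step-modal pattern."""
    q = query.strip()
    return any(p.search(q) for p in STEP_MODAL_PATTERNS)

print(is_step_modal_query("How to install a door knob"))  # True
print(is_step_modal_query("capital of France"))           # False
```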
The context scoring process may detect lists formed with:
- HTML tags
- Micro formats
- Semantic meaning
- Consecutive headings at the same level with the same or similar phrases (e.g., Step 1, Step 2; or First; Second; Third; etc.)
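Two of those signals, explicit HTML list tags and consecutive same-level "Step N" headings, could be detected with something as simple as the following sketch. These regex-based checks are my own illustration, not the patent's method.

```python
import re

def has_html_list(html: str) -> bool:
    """Detect an explicit HTML list via <ol> or <ul> tags."""
    return bool(re.search(r"<(ol|ul)\b", html, re.IGNORECASE))

def has_step_headings(html: str) -> bool:
    """Detect consecutive headings at the same level with similar
    step phrases, e.g. <h3>Step 1</h3> ... <h3>Step 2</h3>."""
    levels = re.findall(r"<h(\d)[^>]*>\s*Step\s+\d+", html, re.IGNORECASE)
    # require at least two step headings, all at the same heading level
    return len(levels) >= 2 and len(set(levels)) == 1
```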
The context scoring process may also score a list for quality.
It would look at signals such as:
- Position on the page: a list in the center of a page is of higher quality than a list at the side of the page
- Outbound links: a list that does not include multiple links to other pages is of higher quality than one that does (multiple links are indicative of reference lists)
- Link text: a list whose HREF link text does not occupy a large portion of the text of the list is of higher quality than one where it does
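A toy quality score reflecting those signals might look like this. The weights, the clamping, and the centered bonus are invented for illustration; the patent does not give a formula.

```python
# Hypothetical list quality score: penalize lists whose anchor text makes
# up a large share of the list's text (typical of reference/navigation
# lists), and favor lists in the center of the page over sidebar lists.

def list_quality(total_chars: int, link_chars: int, centered: bool) -> float:
    link_ratio = link_chars / total_chars if total_chars else 1.0
    score = 1.0 - link_ratio      # mostly-link lists score low
    if centered:
        score += 0.25             # centered lists favored over sidebars
    return max(0.0, min(score, 1.0))

# A centered how-to list with little link text scores near the top;
# a sidebar list that is mostly links scores near the bottom.
good = list_quality(total_chars=400, link_chars=40, centered=True)
poor = list_quality(total_chars=400, link_chars=320, centered=False)
```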
If a list is detected, then the process selects a list boost factor.
That list boost factor may be fixed or may be proportional to the quality score of the list.
If a list is not detected, or after the list boost factor is selected, the process ends.
In some implementations, the list boost factor may also be dependent on other feature scores.
If other features, such as coverage ratio, distinctive text, etc., have relatively high scores, then the list boost factor may be increased.
The patent tells us that this is because “the combination of these scores in the presence of a list is a strong signal of a high-quality answer passage.”
Adjustment of Featured Snippet Answers Scores
Answer scores for candidate answer passages are adjusted by scoring components based on heading vectors, passage coverage ratio, and other features described above.
The scoring process can select the largest boost value from those determined above or can select a combination of the boost values.
Once the answer scores are adjusted, the candidate answer passage with the highest adjusted answer score is selected as the featured snippet answer and is displayed to a searcher.
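The final adjustment step described above can be sketched as follows. The multiplicative adjustment, the "max vs. mean" combination choice, and all values are my own assumptions for illustration.

```python
# Hypothetical final step: adjust each candidate's answer score by either
# the largest of its boost factors or a combination of them, then pick
# the candidate with the highest adjusted score as the featured snippet.

def adjust(answer_score: float, boosts: list, combine: str = "max") -> float:
    factor = max(boosts) if combine == "max" else sum(boosts) / len(boosts)
    return answer_score * factor

candidates = {
    "passage_1": adjust(0.8, [1.5, 1.0]),  # strong question boost
    "passage_2": adjust(0.9, [1.0, 1.0]),  # no boosts apply
}
best = max(candidates, key=candidates.get)  # "passage_1" wins here
```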
More to Come
I will be reviewing the first patent in this series about candidate answer scores, because it contains some additional elements that haven't been covered in this post or in the post about query dependent/independent ranking signals for answer scores. If you have been paying attention to how Google answers queries that appear to be seeking answers, you have likely seen those answers improving in many cases. Some answers have been really bad, though. It will be nice to have as complete an idea as we can of how Google decides what might be a good answer to a query, based on the information available to it on the Web.
Added October 14, 2020 – I have written about another Google patent on Answer Scores, and it’s worth reading about all of the patents on this topic. The new post is at Weighted Answer Terms for Scoring Answer Passages, and is about the patent Weighted answer terms for scoring answer passages.
It is about identifying questions in resources, and answers for those questions, and describes using term weights as a way to score answer passages (along with the scoring approaches identified in the other related patents, including this one.)
Added October 15, 2020 – I have written a few other posts about answer passages that are worth reading if you are interested in how Google finds questions on pages and answers to those, and scores answer passages to determine which ones to show as featured snippets. I’ve linked to some of those in the body of this post, but here is another one of those posts:
- January 24, 2019 – Does Google Use Schema to Write Answer Passages for Featured Snippets?
Added October 22, 2020 – I have written up a description of how structured and unstructured data is selected for answer passages, based on specific criteria in the patent on scoring answer passages, in the post Selecting Candidate Answer Passages.