Monthly Archives: January 2020
Searching for Quotes has shifted at Google with an Updated Continuation Patent
In August of 2017, I wrote the post Google Searching Quotes of Entities. The patent that post was about was called Systems and methods for searching quotes of entities using a database.
I noticed that this patent was updated last year (February 2019) with a continuation patent. I like comparing the claims in older patents with the claims from newer continuation patents – it is a message saying, “We used to do something one way, but we have changed how we do it now, and want to protect our intellectual property by updating the claims in this patent with a newer version of it.”
Reviewing the patents on quote searching
This updated patent appears to show that Google is paying more attention to indexing audio.
Here is a comparison of the claims from the patents.
The first claim from the 2017 version – Systems and methods for searching quotes of entities using a database:
1. A computerized system for searching and identifying quotes, the system comprising: a memory device that stores a set of instructions; and at least one processor that executes the set of instructions to: receive a search query for a quote from a user; parse the query to identify one or more key words; match the one or more key words to knowledge graph items associated with candidate subject entities in a knowledge graph stored in one or more databases, wherein the knowledge graph includes a plurality of items associated with a plurality of subject entities and a plurality of relationships between the plurality of items; determine, based on the matching knowledge graph items, a relevance score for each of the candidate subject entities; identify, from the candidate subject entities, one or more subject entities for the query based on the relevance scores associated with the candidate subject entities; identify a set of quotes corresponding to the one or more subject entities; determine quote scores for the identified quotes based on at least one of the relationship of each quote to the one or more subject entities, the recency of each quote, or the popularity of each quote; select quotes from the identified quotes based on the quote scores; and transmit information to a display device to display the selected quotes to the user.
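As a rough illustration, the flow described in this claim (matching query keywords to knowledge graph items, scoring candidate entities for relevance, then scoring quotes by relationship, recency, and popularity) might be sketched like this. All names, sample data, and weights here are hypothetical; the patent does not specify an implementation.

```python
# Hypothetical sketch of the 2017 claim's flow. The knowledge graph, quotes,
# and the 50/50 recency/popularity blend are illustrative placeholders only.
from dataclasses import dataclass


@dataclass
class Quote:
    text: str
    entity: str
    recency: float     # 0..1, newer = higher
    popularity: float  # 0..1

# Toy "knowledge graph": each subject entity maps to associated items.
KNOWLEDGE_GRAPH = {
    "einstein": {"physicist", "relativity", "einstein"},
    "twain": {"writer", "humor", "twain"},
}

QUOTES = [
    Quote("Imagination is more important than knowledge.", "einstein", 0.4, 0.9),
    Quote("The secret of getting ahead is getting started.", "twain", 0.7, 0.8),
]


def search_quotes(query: str, top_n: int = 1) -> list[Quote]:
    keywords = set(query.lower().split())
    # Relevance score: overlap between query keywords and each entity's items.
    relevance = {e: len(keywords & items) for e, items in KNOWLEDGE_GRAPH.items()}
    top = max(relevance.values())
    best_entities = [e for e, score in relevance.items() if score == top and score > 0]
    # Quote score: blend recency and popularity for quotes tied to matched entities.
    candidates = [q for q in QUOTES if q.entity in best_entities]
    candidates.sort(key=lambda q: 0.5 * q.recency + 0.5 * q.popularity, reverse=True)
    return candidates[:top_n]
```

A query like `search_quotes("quotes by physicist einstein")` would match the "einstein" entity on two graph items and return its quote.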
The first claim from the 2019 version – Systems and methods for searching quotes of entities using a database:
1. A method comprising the following operations performed by one or more processors: receiving audio content from a client device of a user; performing audio analysis on the audio content to identify a quote in the audio content; determining the user as an author of the audio content based on recognizing the user as the speaker of the audio content; identifying, based on words or phrases extracted from the quote, one or more subject entities associated with the quote; storing, in a database, the quote, and an association of the quote to the subject entities and to the user being the author; subsequent to storing the quote and the association: receiving, from the user, a search query; parsing the search query to identify that the search query requests one or more quotes by the user about one or more of the subject entities; identifying, from the database and responsive to the search query, a set of quotes by the user corresponding to the one or more of the subject entities, the set of quotes including the quote; selecting the quote from the quotes of the set based at least in part on the recency of each quote; and transmitting, in response to the search query, information for presenting the selected quote to the user via the client device or an additional client device of the user.
If you want to read about how this patent was originally intended to work, I detailed that process when I wrote about the original patent, which was granted in 2017. The continuation patent was filed in 2017 and was granted last spring. The first version tells us about finding quotes by looking at knowledge graph entries. The phrase “knowledge graph” was left out of the newer claim, which instead tells us that Google is specifically looking for audio content, and performing analysis on that audio to collect quotes from entities.
What this update tells me is that Google is going to rely less upon finding quote information from knowledge base sources, and more upon collecting quote information through audio analysis. This seems to indicate a desire to build an infrastructure that doesn’t rely upon humans to update a knowledge graph, but instead upon automated programs that can crawl content on the web, analyze that information, and index it. This looks like an attempt to move towards an approach that can scale at a web level without relying upon people to record quotes from others.
I am seeing videos at the top of results when I search for quotes from movies and for quotes that have been reported on in the news, like President Trump referring to a phone call he had with the leader of Ukraine as a “perfect phone call.”
Note that Google is showing videos as search results for that quote.
I tried a number of quotes that I am familiar with from history and from movies, and I am seeing videos containing those quotes at or near the top of search results. That isn’t proof that Google is using audio from videos to identify the sources of those quotes, but it isn’t a surprise after seeing how this patent has changed.
Has Google gotten that much better at understanding what is said in videos and indexing such content? It may be telling us that they have more confidence in how they have indexed video content. I would still recommend making transcripts of any videos that you publish to the web, to be safe in making sure content from a video gets indexed correctly. But it is possible that Google has gotten better at understanding audio in videos.
Of course, this change may be one triggered by an understanding of the intent behind quote searches. It’s possible that when someone searches for a quote, they may be less interested in learning who said something, and more interested in watching or hearing them say it. This would be a motivation for making sure that a video appears ranking highly in search results.
The post Google Has Updated Quote Searching to Focus on Videos appeared first on SEO by the Sea ⚓.
India’s ruling party accused of running deceptive Twitter campaign to gain support for a controversial law
Bharatiya Janata Party, the ruling party in India, has been accused of running a highly deceptive Twitter campaign to trick citizens into supporting a controversial law.
First, some background: The Indian government passed the Citizenship Amendment Act (CAA) last month that eases the path of non-Muslim minorities from the neighboring Muslim-majority nations of Afghanistan, Bangladesh and Pakistan to gain Indian citizenship.
But, combined with a proposed national register of citizens, critics have cautioned that it discriminates against minority Muslims in India and chips away at India’s secular traditions.
Over the past few weeks, tens of thousands of people — if not more — have participated in peaceful protests across the nation against the law. The Indian government, which has temporarily cut off internet access and mobile communications in many parts of the country to contain the protests, has so far shown no signs of withdrawing the law.
On Saturday, however, it may have found a new way to gain support for the law.
India’s Home Minister Amit Shah on Thursday tweeted a phone number, urging citizens to place a call to that number in “support of the CAA law.”
Thousands of people in India today, many affiliated with the BJP party, began circulating that phone number on Twitter with the promise that anyone who places a call would be offered job opportunities, free mobile data, Netflix credentials, and even company with “lonely women.”
The story of CAA support, in four pictures… pic.twitter.com/ueLNmqDRr8
— Meghnad (@Memeghnad) January 4, 2020
Huffington Post India called the move the latest “BJP ploy” to win support for the controversial law. BoomLive, a fact-checking organization based in India, reported on the affiliation of many of these accounts with the ruling party.
We have reached out to a BJP spokesperson and Twitter spokespeople for comment.
First time in 70 years that a legislation passed by Parliament needs huge rallies, promises of sex, jobs and Netflix accounts to drum up support.
— Rohini Singh (@rohini_sgh) January 4, 2020
If the allegations are true, this won’t be the first time the BJP has used Twitter to aggressively promote its views. In 2017, BuzzFeed News reported that a number of political hashtags that appeared in Twitter’s top 10 trends column in India were the result of organized campaigns.
Pratik Sinha, co-founder of the fact-checking website Alt News, last year demonstrated how easy it was to manipulate many politicians in the country into tweeting certain things after he gained access to a Google document of prepared statements and tinkered with the content.
Last month, snowfall in Kashmir, a highly sensitive region that hasn’t had an internet connection for more than four months, began trending on Twitter in the U.S. It mysteriously disappeared after many journalists questioned how it made it to the list.
When we reached out, a Twitter spokesperson in India pointed TechCrunch to an FAQ article that explained how Trending Topics work. Nothing in the FAQ article addressed the question.
The Saturday night before CES seems like a less than ideal time to drop some big smartphone news — but it appears Samsung’s hand was forced on this one. Granted, the smartphone giant has never been great about keeping big news under wraps, but this morning’s early release of a promo video through its official Vimeo channel was no doubt all the motivation it needed.
The company has just made the February 11 date officially official for the launch of its upcoming flagship. As for what the flagship will be called, well, that (among other things) leaves some room for speculation. Rumors have pointed to both the more traditional S11, along with the more fascinating jump to the S20.
I’ve collated a bunch of the rumors into an earlier post. The TLDR is even larger screens across the board, coupled with a bunch of camera upgrades and a healthy battery increase. The invite art, which matches the earlier video, appears to confirm the existence of two separate devices with different dimensions. That could well point to the reported follow-up to the Galaxy Fold. In addition to better-reinforced folding (a response to last year’s issues), the device reportedly adopts a clamshell form factor, more akin to the newly announced Motorola Razr.
More info (and rumors) to come. As ever, we’ll be there (San Francisco) as the news breaks.
After acquiring Ukraine startup Looksery in 2015 to supercharge animated selfie lenses in Snapchat — arguably changing the filters game for all social video and photo apps — Snap has made another acquisition with roots in the country, co-founded by one of Looksery’s founders, to give a big boost to its video capabilities.
The company has acquired AI Factory, a computer vision startup that Snap had worked with to create Snapchat’s new Cameos animated selfie-based video feature, for a price believed to be in the region of $166 million.
The news was first reported by AIN, a Ukrainian publication. While I was still waiting for a direct reply from Snap about the acquisition, I had the news confirmed by another source close to the deal, and Snap has now also confirmed the news to TechCrunch, with no further comment on the financial terms or any other details.
Victor Shaburov, the founder of Looksery who then went on to become Snap’s director of engineering — leaving in May 2018 to found and lead AI Factory — declined to provide a comment for this story. (The other founders of AI Factory are Greg Tkachenko and Eugene Krokhalev.)
Cameos, launched last month, lets you take a selfie, which is then automatically “animated” and inserted into a short video. The selection of videos, currently around 150, is created by Snap, with the whole concept not unlike the one underpinning “deepfakes” — AI-based videos that look “real” but are actually things that never really happened.
Deepfake videos have been around for a while. But if your experience of that word has strong dystopian undertones, we now appear to be in a moment where consumer apps are tapping into the technology in a race for new — fun, lighthearted — features to attract and keep users. Just today, Josh reported that TikTok has secretly built a deepfake tool, too. I expect we’ll be hearing about Facebook’s newest deepfake tool in 3, 2, 1…
From what I understand, while AI Factory has offices in San Francisco, the majority of the team of around 70 is based out of Ukraine. Part of the team will relocate with the deal, and part will stay there.
Snap had also been an investor in AI Factory. Part of its early interest would have been because of the track record of the talent associated with the startup: lenses have been a huge success for Snap — 70% of its daily active users play with them, and they not only bring in new users, but increase retention and bring in revenues by way of sponsorships or users buying them — so creating new features to give users more ways to play around with their selfies is a good bet.
It’s not clear whether AI Factory will be developing a way to insert selfies into any video, or if the feature will be tied just to specific videos offered by Snap itself, or whether the videos will extend beyond the timing of a GIF. It’s also not clear what else AI Factory was working on: the company’s site is offline and there is very little information about the company beyond its mission to bring more AI-based imaging tools into mainstream apps and usage.
The company’s LinkedIn profile says that AI Factory “provide[s] multiple AI business solutions based on image and video recognition, analysis and processing,” so while the company will come under Snap’s wing, there may be scope for the team to build some of its technology into more innovative ways for businesses to use the Snap platform in the future, too.
We’ll update this post as we learn more.
Updated with Snap’s confirmation of the acquisition.
In addition to securing physical structures, the Diplomatic Security Service runs simulations of protests in a model city in Virginia.
Back in 2013, Dropbox was scaling fast.
The company had grown quickly by taking advantage of cloud infrastructure from Amazon Web Services (AWS), but when you grow rapidly, infrastructure costs can skyrocket, especially when approaching the scale Dropbox was at the time. The company decided to build its own storage system and network — a move that turned out to be a wise decision.
In a time when going from on-prem to cloud and closing private data centers was typical, Dropbox took a big chance by going the other way. The company still uses AWS for certain services, regional requirements and bursting workloads, but ultimately when it came to the company’s core storage business, it wanted to control its own destiny.
Storage is at the heart of Dropbox’s service, leaving it with scale issues like few other companies, even in an age of massive data storage. With 600 million users and 400,000 teams currently storing more than 3 exabytes of data (and growing), the company might have been squeezed by its growing cloud bills if it hadn’t taken this step.
Controlling infrastructure helped control costs, which improved the company’s key business metrics. A look at historical performance data tells a story about the impact that taking control of storage costs had on Dropbox.
In March of 2016, Dropbox announced that it was “storing and serving” more than 90% of user data on its own infrastructure for the first time, completing a 3-year journey to get to this point. To understand what impact the decision had on the company’s financial performance, you have to examine the numbers from 2016 forward.
There is good financial data from Dropbox going back to the first quarter of 2016 thanks to its IPO filing, but not before. So, the view into the impact of bringing storage in-house begins after the project was initially mostly completed. By examining the company’s 2016 and 2017 financial results, it’s clear that Dropbox’s revenue quality increased dramatically. Even better for the company, its revenue quality improved as its aggregate revenue grew.
With every year seeing new technological developments that shift the boundaries of business, working to take advantage of the new opportunities can be a challenge in digital marketing. One of these transformations in the market has been caused by the widespread adoption of voice search technology and its effects on internet usage.
As a consequence, this has had an impact on search engine optimization, where following SEO best practices is essential for most businesses in the current era. Internet voice search could be set to disrupt SEO conventions, so businesses would be well-advised to stay informed of the changes and plan accordingly.
The rise of voice technology
The introduction of IBM’s Watson in 2010 paved the way for intelligent assistant devices. Watson is a powerful question-answering computer system that stunned the world as a super-intelligent, thinking, and speaking machine when it beat trivia grandmasters on the TV quiz show ‘Jeopardy!’. In the following year, Google launched its Voice Search and Apple released Siri for the iPhone 4S, the first digital personal assistant.
This was followed in 2014 by Cortana from Microsoft and Amazon Echo, a voice speaker powered by the personal assistant, Alexa. Google Assistant was launched in 2016, as well as the smart speaker Google Home. Initial figures showed Amazon Alexa to be leading the market, though Google Home is forecast to take the lead by 2020. Other prominent digital assistants on the global stage include Alice from Yandex, and AliGenie from Alibaba.
Voice recognition technology has significantly improved since its inception. Google claims 95 percent accuracy, while the Chinese iFlytek speech recognition system claims an accuracy of 98 percent.
Voice technology has also spread to devices that fall under the umbrella term, the Internet of Things (IoT), such as smart TVs, smart thermostats, and home kits. However, internet voice search doesn’t yet have direct applications for most of these devices, and by far the greatest share of searches are currently made on either a smartphone or a smart speaker.
Twenty percent of queries on Google’s mobile app and on Android devices are made with voice, while 31% of smartphone users use voice at least once a week, according to Statista.
Media analytics firm Comscore predicts that half of all online searches will be made through voice by 2020, while Gartner predicts that in the same year, 30% of online searches will be made on devices without a screen. This suggests an enormous rise in voice search, as well as the increased adoption of smart speakers. Earlier this year, Juniper Research predicted that 3.25 billion voice assistants were in use – a figure they forecast to reach eight billion by 2023.
The effects of voice on SEO
Voice is, therefore, transforming our approaches to technology and the internet, but what impact is it having on search engine optimization?
With improved and reliable voice recognition systems, voice technology is well adapted to follow everyday language use, so users can give commands as if they were speaking to a human. For any areas of potential confusion, emerging technologies are seeking to improve the user experience. The 2018 Internet Trends Report by venture capitalist and internet trends specialist, Mary Meeker, found that 70% of English language voice searches were made in natural or conversational language.
Spoken language usually isn’t as concise as the written word, so queries will be longer than the three or four keyword searches more common to graphical user interfaces (GUI). Voice searches currently average 29 words in length, according to Backlinko. SEO strategists will need to adjust by using more long-tail keywords, with the added benefit that the longer the keyword phrases are, the higher the probability of conversion.
Voice searches will more frequently include the question words who, which, when, where, and how, that are usually omitted in written searches. Marketers need to ensure content can deliver accurate and relevant answers to voice search queries, and distinguish between simple questions and those that require more comprehensive answers. Queries that can be answered with very short responses typically won’t generate traffic to a website because Google will often provide the required information via featured search snippets.
According to SeoClarity, 20% of voice searches are triggered by just 25 keywords. These include question words and other commonly used verbs, such as make, do and can, as well as key nouns and adjectives, including recipe, new, easy, types and home. These can be worked into SEO strategies, and question-form queries reveal user intent to a higher degree. Marketers are therefore able to optimize content for questions of higher value.
As opposed to lexical searches that look for literal matches of keywords, semantic searches attempt to find the user’s intended meaning within the context of the terms used. This understanding can be aided by user search history, global search history, the location of the user and keyword spelling variations.
Google’s RankBrain is an artificial intelligence system designed to recognize words and phrases in order to improve internet search outcomes. This independent thinking quality of RankBrain helps it take query handling to a more sophisticated level. Hummingbird is another Google technology that helps natural language queries. It helps search result pages be more relevant based on context and intent, causing relevant pages to rank higher.
Voice technology has brought an increased emphasis on the use of local search. Consumers are three times more likely to search locally when searching by voice. Research carried out over the last year shows that 58% of consumers find local businesses using voice search, and 46 percent use voice technology to find information on local businesses daily. Marketing strategies should account for this change and optimize for “near me” queries.
Around 75% of voice search results rank in the top three positions in search engine results pages (SERPs). Most voice searches are answered by Rich Answer Boxes shown at the top of results pages. Featured snippets are included in 30 percent of Google queries. These are extracts from any website on the first page of SERPs, and brands are credited in voice search as well as in usual GUI searches. Brands only need to be on the first page to be used in featured snippets, rather than already holding position zero.
Ecommerce is especially impacted by voice, as consumers are much more likely to use voice to make purchases. Sixty-two percent of voice speaker owners have made purchases through their virtual assistant, and 40 percent of millennials use voice assistants before making online purchases. Digital assistants – and the best ways to optimize for them – should, therefore, be a priority for online retailers.
Adapting to voice search
With voice technology impacting SEO in various ways, here are a few recommended steps brands can take to adapt accordingly.
- Voice search prioritizes quick-loading websites, so brands should ensure images are optimized, files are compressed, response times are reduced, and the site is fully responsive.
- Content should be optimized with long-tail keywords that reflect popular queries used in voice search. Focus on natural language.
- Featured snippets are summary answers from web pages that may be used in position zero. To optimize content for this, include identifiable extracts to be featured and make content easier for Google to read by using H-tags and bullet points.
- Structured data and schema markup provide more information about a brand and drive traffic. They help pages appear in rich snippets, which will increase the chances of being the first result delivered in voice searches.
- Local information for your brand should be provided to meet the increased search volume for local businesses with voice – using Google My Business will help.
- Increasing domain authority will help with search rankings – this can be improved by including high-quality links.
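As an illustration of the structured-data point above, a minimal schema.org LocalBusiness snippet can be generated as JSON-LD. All field values here are placeholders, and real markup would be embedded in a page inside a `<script type="application/ld+json">` tag.

```python
# Build a minimal, hypothetical schema.org LocalBusiness JSON-LD block of the
# kind the structured-data and local-search recommendations above describe.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Coffee Shop",          # placeholder business details
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 High Street",
        "addressLocality": "London",
        "postalCode": "EC1A 1AA",
    },
    "telephone": "+44-20-0000-0000",
    "openingHours": "Mo-Fr 08:00-18:00",
}

# Serialize for embedding in the page's <head> or <body>.
json_ld = json.dumps(local_business, indent=2)
print(json_ld)
```

Markup like this gives search engines explicit name, address, and hours data, which helps a business surface for the local and “near me” voice queries discussed earlier.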
The impact of voice technology on SEO is already clear. Given the huge rise in the adoption and use of voice, the impact on businesses will be considerable. Brands that can anticipate the changes and stay ahead of them will surely reap the benefits in years to come.
Roy Castleman is founder and managing director of EC-MSP Ltd., a London-based IT support organization focusing on small and medium-sized businesses.
The post What impact will voice search have on SEO in 2020? appeared first on Search Engine Watch.
Learn about the seven biggest mistakes and misconceptions in mobile A/B testing along with tips & tricks for successful app store A/B testing.
Read more at PPCHero.com
- SEO competition research: The complete guide
- 4 Ways You’re Not Utilising AI Properly In Your PPC Campaigns
- Harvestr gathers user feedback in one place
- 44% of TikTok’s all-time downloads were in 2019, but app hasn’t figured out monetization
- PopSockets, Sonos, and Tile Ask Congress to Rein in Big Tech