Monthly Archives: November 2019
Europe’s lead data regulator has issued its first ever sanction of an EU institution — taking enforcement action against the European parliament over its use of US-based digital campaign company, NationBuilder, to process citizens’ voter data ahead of the spring elections.
Software provider NationBuilder is a veteran of the digital campaign space (indeed, we first covered the company back in 2011) and has become a nearly ubiquitous tool for digital campaigns in some markets.
But in recent years European privacy regulators have raised questions over whether all its data processing activities comply with regional data protection rules, responding to growing concern around election integrity and data-fuelled online manipulation of voters.
The European parliament had used NationBuilder as a data processor for a public engagement campaign to promote voting in the spring election, which was run via a website called thistimeimvoting.eu.
The website collected personal data from more than 329,000 people interested in the EU election campaign — data that was processed on behalf of the parliament by NationBuilder.
The European Data Protection Supervisor (EDPS), which started an investigation in February 2019, acting on its own initiative — and “taking into account previous controversy surrounding this company” as its press release puts it — found the parliament had contravened regulations governing how EU institutions can use personal data related to the selection and approval of sub-processors used by NationBuilder.
The sub-processors in question are not named. (We’ve asked for more details.)
“The issue EDPS had was with the Parliament’s lack of awareness of the extent of the processing being carried out by third parties and the lack of prior authorisation, by Parliament as data controller, provided in advance of the processing,” an EDPS spokesman told us.
The EDPS also has an ongoing investigation into whether the Parliament’s use of the voter mobilization website, and related processing operations of personal data, were in accordance with rules applicable to EU institutions (as set out in Regulation (EU) 2018/1725).
The enforcement actions had not been made public until a hearing earlier this week — when assistant data protection supervisor, Wojciech Wiewiórowski, mentioned the matter during a Q&A session in front of MEPs.
He referred to the investigation as “one of the most important cases we did this year”, without naming the data processor. “Parliament was not able to create the real auditing actions at the processor,” he told MEPs. “Neither control the way the contract has been done.”
“Fortunately nothing bad happened with the data but we had to make this contract terminated the data being erased,” he added.
When TechCrunch asked the EDPS for more details about this case on Tuesday a spokesperson told us the matter is “still ongoing” and “being finalized” and that it would communicate about it soon.
Today’s press release looks to be the upshot.
Providing canned commentary in the release, Wiewiórowski writes:
The EU parliamentary elections came in the wake of a series of electoral controversies, both within the EU Member States and abroad, which centred on the threat posed by online manipulation. Strong data protection rules are essential for democracy, especially in the digital age. They help to foster trust in our institutions and the democratic process, through promoting the responsible use of personal data and respect for individual rights. With this in mind, starting in February 2019, the EDPS acted proactively and decisively in the interest of all individuals in the EU to ensure that the European Parliament upholds the highest of standards when collecting and using personal data. It has been encouraging to see a good level of cooperation developing between the EDPS and the European Parliament over the course of this investigation.
One question that arises is why no firmer sanction has been issued to the European parliament — beyond a (now public) reprimand, some nine months after the investigation began.
The EDPS spokesman told us the decision was taken not to impose an administrative fine because the parliament complied with its recommendations.
Another question is why the matter was not more transparently communicated to EU citizens. On that the spokesman said it was because part of the investigation is ongoing.
“The EDPS is still investigating with the European Parliament, and received additional evidence. We are now completing our analysis of that evidence, and we anticipate closing the investigation in the near future,” he added.
The EDPS’ press release says it will “continue to check the parliament’s data protection processes”, revealing that the European Parliament has finished informing individuals of a revised intention to retain personal data collected by the thistimeimvoting website until 2024.
“The outcome of these checks could lead to additional findings,” it also warns, adding that it intends to finalise the investigation by the end of this year.
Asked about the case, a spokeswoman for the European parliament told us that the thistimeimvoting campaign had been intended to motivate EU citizens to participate in the democratic process, and that it used a mix of digital tools and traditional campaigning techniques in order to try to reach as many potential voters as possible.
She said NationBuilder had been used as a customer relations management platform to support staying in touch with potential voters — via an offer to interested citizens to sign up to receive information from the parliament about the elections (including events and general info).
Subscribers were also asked about their interests — which allowed the parliament to send personalized information to people who had signed up.
Some of the regulatory concerns around NationBuilder have centered on how it allows campaigns to match data held in their databases (from people who have signed up) with social media data that’s publicly available, such as an unlocked Twitter account or public Facebook profile.
TechCrunch understands the European parliament was not using this feature.
In 2017 in France, after an intervention by the national data watchdog, NationBuilder suspended the data matching tool in that market.
The same feature has attracted attention from the UK’s Information Commissioner, which warned last year that political parties should be providing a privacy notice to individuals whose data is collected from public sources such as social media and matched, yet many aren’t doing so.
“The ICO is concerned about political parties using this functionality without adequate information being provided to the people affected,” the ICO said in the report, while stopping short of ordering a ban on the use of the matching feature.
Its investigation confirmed that up to 200 political parties or campaign groups used NationBuilder during the 2017 UK general election.
NationBuilder has now sent us a statement in response to the news of the regulator’s action. In it a spokesperson said:
NationBuilder exists to help people participate in the democratic process. Our software is designed to scale authentic, one-to-one relationships. As the European Parliament has explained, they used NationBuilder’s software for customer relationship management to motivate democratic participation among EU citizens in the 2019 European Parliament elections. We are incredibly proud to have helped power that effort.
NationBuilder was founded on the belief that everyone should own their own data and, as such, our software incorporates advanced privacy and consent tools that enable our customers to comply with relevant data protection laws. The sanctity of customer data is core to our company — we do not share or sell our customers’ data, and every NationBuilder customer has a self-contained database.
We agree with the EDPS that strong data protection rules are essential for democracy, especially in the digital age. NationBuilder is — and always has been — committed to the highest standards of privacy and data protection.
The company also disputes that its contract with the EU parliament was terminated — saying it came to a natural end at the conclusion of the spring election.
This report was updated with additional comment.
Lead generation via SEO is one of the best ways to improve the overall conversion rate of your website. There are several go-to SEO tools like SEMrush, Ahrefs, Moz, and Google Keyword Planner that most marketers use for keyword research, competitor tracking, and SERP movements. However, this is only one side of the equation.
Once people’s organic searches have pointed them to your web pages, what’s the best way to ensure they take the next step and opt into your email list?
Let’s take a look at the top five SEO lead generation tools and how you can use them to convert more of your site’s visitors in 2020 and beyond.
1. Hello Bar
With Hello Bar, you can convert your existing visitors into customers. You can design custom messages for your visitors and display them at just the right time.
Hello Bar sits at the top of your site, and it can be used to display irresistible offers to your visitors. You can even collect email addresses from your visitors to increase your subscriber database.
You can also use Hello Bar to create pop-ups that collect the names and email addresses of your visitors.
Pop-ups can drive up to 1,375% more subscribers.
You can easily customize your headline, CTA and the overall design of the bar and the pop-up. The platform automatically chooses the best color combination for the CTAs so you don’t need to spend hours testing that.
With Hello Bar, you can customize your message targeting by:
- Sending holiday-related messages to visitors during the holidays.
- Customizing your pop-up for mobile audiences, since their screens are smaller.
- Customizing your message based on the location of your customer.
- Displaying the pop-up on exit intent, just as visitors are about to leave your website.
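The targeting options above amount to simple conditional rules. Here is a minimal sketch of that logic in Python; the visitor fields and messages are hypothetical illustrations, not Hello Bar’s actual API:

```python
# Hypothetical sketch of rule-based message targeting, not Hello Bar's API.
def pick_message(visitor):
    """Choose a pop-up message based on visitor context."""
    if visitor.get("exiting"):            # exit intent: last chance to convert
        return "Wait! Grab 10% off before you go."
    if visitor.get("holiday_season"):     # seasonal campaigns
        return "Holiday sale: free shipping this week only."
    if visitor.get("device") == "mobile": # shorter copy for small screens
        return "Tap to subscribe"
    if visitor.get("country") == "DE":    # localized by visitor location
        return "Jetzt abonnieren"
    return "Subscribe to our newsletter"

print(pick_message({"device": "mobile"}))
print(pick_message({"exiting": True, "device": "mobile"}))
```

Exit intent is checked first so it always wins over the other rules, mirroring the idea that a departing visitor is the most urgent case.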
2. ClickMeeting

Webinars are one of the best ways to generate leads.
Webinars offer a dual advantage. Firstly, you can generate leads right when you run a webinar, and secondly, you can repurpose your webinar into a blog post.
Generate leads directly via webinars
With ClickMeeting, you can run custom webinars to share product demos, conduct training sessions or run online courses. You can customize your webinar with a few clicks, and run them without worrying about the type of device and operating system. You can even stream your webinar live to Facebook or YouTube, allowing you to acquire even more leads.
But the true SEO-based lead capture power of webinars is to be found in evergreen topics that will continue to attract relevant audience members over time.
On-demand webinars are one of the fastest and easiest ways to expand your lead base.
Repurpose your webinar
Repurposing your webinar into a lengthy blog post, consisting of more than 2000 words, helps it to rank for new search queries. When your site achieves higher rankings for new keywords, it automatically maximizes your organic traffic, leading to more conversions.
Here are some great ways to repurpose webinars to generate leads:
- You can divide your webinar recordings into short videos of three to five minutes each and post the video on channels like LinkedIn, Twitter, Facebook, and YouTube. Add a compelling call to action, and people who watch the video are likely to reach out.
- Turn the entire webinar into a blog post and promote it on your social networks for added visibility. Try to present the blog post in a series of steps. This helps your site to get ranked as a featured snippet.
- Turn your webinar Q&A into a support resource page. FAQ pages offer an excellent opportunity to rank as a featured snippet. When people find answers to questions related to your business niche, they will be all the more likely to connect with your business.
- Create a transcript of your webinar and include long-tail (especially question keywords) in it.
3. VideoBoost

It is difficult to succeed in your lead generation efforts in 2020 without videos.
VideoBoost is an app that lets you create trendy videos easily. It has an impressive collection of ready-to-use video templates and marketing copy. You can easily brand them and start generating leads for your business.
Next time you plan to optimize your website for the holiday season, head over to VideoBoost and create a video for your audience using templates for Black Friday, Thanksgiving, and Cyber Monday.
4. vCita

vCita offers a dynamic widget that you can add to your site to convert your visitors into leads or customers.
With vCita’s lead generation widget, you can capture leads from all the pages on your website with a floating CTA that follows the users from page to page.
The tool also lets your audience book appointments without leaving the site. All the contact details of your visitors are stored in a built-in CRM that can later be used to trigger follow-up nurture messages via email or SMS.
The best place to start with this kind of strategy is to identify the pages on your site with the most traffic from high-intent organic search terms and add the vCita widget to them. You should notice a difference in the number of conversions happening on your site.
5. OptinMonster

OptinMonster is one of the most powerful conversion optimization tools available, and it easily integrates with all the major email marketing and CRM platforms.
One of the tricks that OptinMonster uses to generate leads is content upgrades. With a content upgrade, you offer users bonus content for performing an action on your site, such as joining your email list or filling out a form.
SnackNation was able to generate 1200 new leads each month by using OptinMonster for content upgrades.
With features like MonsterLinks, you can convert any image or link into a two-step opt-in process. It works on the Zeigarnik effect which states that people are more likely to complete a task if they start it.
SEO is all about generating relevant, quality leads for a business. Your SEO strategy should also focus on converting the leads you acquire; both lead generation and CRO form integral parts of a comprehensive SEO strategy.
Start making the most of the five SEO tools above to generate quality leads in 2020 and beyond. Happy marketing!
Video Ad Sequencing (VAS) is a recent addition to the Google Ads video campaign types that allows advertisers to “tell your product or brand story by showing people a series of videos in the order that you define.” But it is capable of a lot more.
Video Ad Sequencing can be used to take your target audience on a video journey based upon, to a limited extent, their behavior. By telling a story VAS lets you drive deeper awareness, engagement, and consideration.
Examples of Video Sequencing usage
Let’s say you want to let people know about “five key elements of your product” and why they make you better than the competition. With VAS, you can effectively ensure that potential customers see each video, in a set sequence.
We used VAS with one of our clients who had one long-form video that was just too long to capture the short attention span of users on YouTube. So, instead, we split the ad into five short vignettes, each with a quick intro and value prop within the first five seconds (the portion of a skippable ad that users must watch before the skip button appears) to ensure our message got out before a user could skip the full 30-second video. We then set up a VAS campaign to show these ads in sequence, so that users would see the full story and all of the value that the product could offer.
What’s great about VAS is that you can go beyond a flat sequence and actually vary the content a user sees depending on how they interact with each video in the sequence. For example, if a user skips your first ad, then rather than having them continue through your sequence, you can show them an alternate video outside of it. If they skip that too, you drop them out of the sequence entirely.
Another potential usage of Video Ad Sequencing
Another potential use of Video Ad Sequencing is rewarding users for watching your content, or calling them out when they skip your videos. For users who skipped a prior video in the sequence, you can show alternate content such as a different value proposition, drop them out of the sequence, or even directly acknowledge that they skipped your last video but you still think your product is right for them. Alternatively, if a user views your first video, you can put them into a sequence with longer-form content for the second video, effectively creating exclusive content that only those viewers get to see.
Things you must know
The settings allow you to dictate what content a user sees next after they: see an ad (an impression) without watching it; view an ad (watch the full video if it is shorter than 30 seconds, or at least 30 seconds of a longer one); or skip an ad.
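Those three outcomes can be modeled as a small transition table. The sketch below shows only the branching logic described above; the video names are hypothetical and this is not the Google Ads API:

```python
# Hypothetical sketch of VAS branching: each sequence step maps a user's
# interaction (impression / view / skip) to the next video shown.
# None means the user is dropped out of the sequence.
SEQUENCE = {
    "intro":      {"view": "value_prop", "skip": "alt_intro", "impression": "intro"},
    "alt_intro":  {"view": "value_prop", "skip": None,        "impression": "alt_intro"},
    "value_prop": {"view": "deep_dive",  "skip": "alt_intro", "impression": "value_prop"},
    "deep_dive":  {"view": None,         "skip": None,        "impression": "deep_dive"},
}

def next_video(current, interaction):
    """Return the next video in the sequence, or None to drop the user."""
    return SEQUENCE[current][interaction]

# A user who views the intro, then skips the value prop, gets the alternate video.
step = next_video("intro", "view")
step = next_video(step, "skip")
print(step)
```

Note that an impression without a watch keeps the user at the same step here, while two skips in a row (intro, then the alternate) drop the user entirely, matching the flow described above.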
If you are looking to try out video ad sequencing, keep this in mind: you are limited to Target CPM or Maximum CPV bidding, and you cannot target by content.

This means no specific placements, topics, or keywords (though you can exclude them). You can only target by demographics and audiences, and YouTube does not currently allow custom affinity or custom intent audiences, so you are stuck with life events or in-market audiences. Google recommends testing sequencing alongside brand lift studies, which basically means: “This campaign can spend a lot if you let it.”
Available bid strategies
- Target CPM (recommended by Google): Google optimizes bids to show your entire sequence campaign to your audience, which can help you get a higher sequence completion rate.
- Maximum CPV
Ad formats include the following:
- Skippable in-stream ads
- Non-skippable in-stream ads
- Bumper ads
- A combination of the above
The bid strategy you select also dictates the ad formats you can use:

- Target CPM (tCPM): skippable in-stream ads, non-skippable in-stream ads, or a combination of the two
- Maximum CPV (CPV): skippable in-stream ads only
I would also strongly recommend mapping out your sequence beforehand. Every step of a sequence is set up as a new ad group in the campaign, so it can get big and messy quite quickly.
It’s also good to know how you want to deal with the different interactions at different steps in the sequence. Just because a user skips one video doesn’t mean they won’t watch another and get back into the sequence. But similarly, if a user keeps skipping your videos, do you really want to keep showing them ads in a sequence they care nothing about? Maybe at that point you show them a totally different, tried-and-true video and then drop them out of the sequence.
My testing with Video Ad Sequencing so far has been limited, but I am very excited about the opportunity to keep working with several of our larger clients on sequencing. It is a really powerful tool that Google has shown can grow brand awareness and consideration.
Next, I’ll have a guide for setting up your first video ad sequence should you still need help.
The post An introduction to Google Ads Video Ad Sequencing (VAS) appeared first on Search Engine Watch.
Science is exciting in theory, but it can also be dreadfully dull. Some experiments require hundreds or thousands of repetitions or trials — an excellent opportunity to automate. That’s just what MIT scientists have done, creating a robot that performs a certain experiment, observes the results, and plans a follow-up… and has now done so 100,000 times in the year it’s been operating.
The field of fluid dynamics involves a lot of complex and unpredictable forces, and sometimes the best way to understand them is to repeat things over and over until patterns emerge. (Well, it’s a little more complex than that, but this is neither the time nor the place to delve into the general mysteries of fluid dynamics.)
One of the observations that needs to be performed is of “vortex-induced vibration,” a kind of disturbance that matters a lot to designing ships that travel through water efficiently. It involves close observation of an object moving through water… over, and over, and over.
Turns out it’s also a perfect duty for a robot to take over. But the Intelligent Tow Tank, as they call this robotic experimentation platform, is designed not just to do the mechanical work of dragging something through the water, but to intelligently observe the results, change the setup accordingly to pursue further information, and continue doing that until it has something worth reporting.
“The ITT has already conducted about 100,000 experiments, essentially completing the equivalent of all of a Ph.D. student’s experiments every 2 weeks,” say the researchers in their paper, published today in Science Robotics.
The hard part, of course, was not designing the robot (though that was undoubtedly difficult as well) but the logic that lets it understand, at a surface level so to speak, the currents and flows of the fluid system and conduct follow-up experiments that produce useful results.
Normally a human (probably a grad student) would have to observe every trial — the parameters of which may be essentially random — and decide how to move forward. But this is rote work — not the kind of thing an ambitious researcher would like to spend their time doing.
So it’s a blessing that this robot, and others like it, could soon take over the grunt work while humans focus on high-level concepts and ideas. The paper notes other robots at CMU and elsewhere that have demonstrated how automation of such work could proceed.
“This constitutes a potential paradigm shift in conducting experimental research, where robots, computers, and humans collaborate to accelerate discovery and to search expeditiously and effectively large parametric spaces that are impracticable with the traditional approach,” the team writes.
A recently granted patent from Google is about supporting querying and predictions, and it does this by focusing on user-specific knowledge graphs.
Because each of those knowledge graphs is specific to a particular user, Google can use them to provide results in response to queries submitted by that user, and/or to surface data that might be relevant to the user.
Seeing this patent reminded me of another one I recently wrote about in the post Answering Questions Using Knowledge Graphs, in which Google may perform a search on a question someone asks and build a knowledge graph from the returned search results to find the answer to that question.
So Google doesn’t just have one knowledge graph; it may use many, building new ones for questions that may be asked, or for different people asking those questions.
This User-Specific Knowledge Graph patent tells us that innovative aspects of the process behind it include:
- Receiving user-specific content
- The user-specific content can be associated with a user of one or more computer services
- That user-specific content is processed using one or more parsers to identify one or more entities and one or more relationships between those entities
- A parser being specific to a schema, and the one or more entities and the one or more relationships between entities being identified based on the schema
- This process provides one or more user-specific knowledge graphs
- A user-specific knowledge graph being specific to the user, which includes nodes and edges between nodes to define relationships between entities based on the schema
- The process includes storing the one or more user-specific knowledge graphs
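The steps above, parsing content with a schema-specific parser and adding any absent nodes and edges, might be sketched as follows. The toy parser and its one-rule schema are illustrative assumptions, not the patent’s implementation:

```python
# Illustrative sketch of building a user-specific knowledge graph:
# a schema-specific parser extracts (entity, relationship, entity)
# triples, and absent nodes and edges are added to the graph.
import re

def parse_sports_content(text):
    """Toy parser for a 'sports' schema: finds 'playing <sport> in <place>'."""
    m = re.search(r"playing (\w+) in ([\w ]+)", text, re.IGNORECASE)
    if not m:
        return []
    sport, place = m.group(1).title(), m.group(2).title()
    return [(sport, "/Location/Play_In", place)]

def update_graph(graph, triples):
    """Add any nodes and edges absent from the user-specific graph."""
    for source, relation, target in triples:
        graph.setdefault(source, {})   # node for the source entity
        graph.setdefault(target, {})   # node for the target entity
        graph[source][target] = relation  # edge labeled with the relationship
    return graph

user_graph = {}
triples = parse_sports_content("We had a great time playing tennis in Mountain View")
update_graph(user_graph, triples)
print(user_graph)
```

A real system would run one such parser per schema, which is why the patent describes a parser as being specific to a schema.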
Optional Features involving providing one or more user-specific knowledge graphs may also include:
- Determining that a node representing an entity of the one or more entities and an edge representing a relationship associated with the entity are absent from a user-specific knowledge graph
- Adding the node and the edge to the user-specific knowledge graph
- The edge connecting the node to another node of the user-specific knowledge graph
Actions further include:
- Receiving a query
- Receiving one or more user-specific results that are responsive to the query
- The one or more user-specific results are provided based on the one or more user-specific knowledge graphs
- Providing the one or more user-specific results for display to the user
- An edge is associated with a weight
- The weight indicating a relevance of a relationship represented by the edge
- A value of the weight increases based on reinforcement of the relationship in subsequent user-specific content
- A value of the weight decreases based on lack of reinforcement of the relationship in subsequent user-specific content
- A number of user-specific knowledge graphs are provided based on the user-specific content
- Each user-specific knowledge graph being specific to a respective schema
- The user-specific content is provided through use of the one or more computer-implemented services by the user
Advantages of Using the User-Specific Knowledge Graph System
The patent describes the advantages of implementing the process in this patent:
- Enables knowledge about individual users to be captured in a structured manner
- Enabling results to be provided in response to complex queries, e.g., a series of queries, regarding a user
- The user-specific knowledge graph may provide a single canonical representation of the user based on user activity inferred from one or more computer-implemented services
- User activities could be overlapping, where reconciliation of the user-specific knowledge graph ensures a canonical entry is provided for each activity
- Joining these together could lead to a universal knowledge graph, e.g., non-user-specific knowledge graph, and user-specific knowledge graphs
(That Universal Knowledge Graph sounds interesting.)
Information from sources like the following may be used to create User-Specific Knowledge Graphs:
- A user’s social network
- Social actions or activities
- A user’s preferences
- A user’s current location
This is so that content that could be more relevant to the user is used in those knowledge graphs.
We are told also that “a user’s identity may be treated so that no personally identifiable information can be determined for the user,” and that “a user’s geographic location may be generalized so that a particular location of a user cannot be determined.”
The User-specific Knowledge Graph Patent
This patent can be found at:
Structured user graph to support querying and predictions
Inventors: Pranav Khaitan and Shobha Diwakar
Assignee: Google LLC
US Patent: 10,482,139
Granted: November 19, 2019
Filed: November 5, 2013
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving user-specific content, the user-specific content being associated with a user of one or more computer-implemented services, processing the user-specific content using one or more parsers to identify one or more entities and one or more relationships between entities, a parser being specific to a schema, and the one or more entities and the one or more relationships between entities being identified based on the schema, providing one or more user-specific knowledge graphs, a user-specific knowledge graph being specific to the user and including nodes and edges between nodes to define relationships between entities based on the schema, and storing the one or more user-specific knowledge graphs.
What Content is in User-Specific Knowledge Graphs?
The types of services that user-specific knowledge graph information could be pulled from can include:
- A search service
- An electronic mail service
- A chat service
- A document sharing service
- A calendar sharing service
- A photo sharing service
- A video sharing service
- Blogging service
- A micro-blogging service
- A social networking service
- A location (location-aware) service
- A check-in service
- A ratings and review service
A User-Specific Knowledge Graph System
This patent describes a search system that includes a user-specific knowledge graph system, connected to the search system either directly or over a network.
The search system may interact with the user-specific knowledge graph system to create a user-specific knowledge graph.
That user-specific knowledge graph system may provide one or more user-specific knowledge graphs, which can be stored in a data store.
Each user-specific knowledge graph is specific to a user of the one or more computer-implemented services, e.g., search services provided by the search system.
The search system may interact with the user-specific knowledge graph system to provide one or more user-specific search results in response to a search query.
Structured User Graphs For Querying and Predictions
A user-specific knowledge graph is created based on content associated with the user.
These user-specific knowledge graphs include a number of nodes and edges between nodes.
A node represents an entity and an edge represents a relationship between entities.
Nodes and/or entities of a user-specific knowledge graph can be provided based on the content associated with a respective user, to which the user-specific knowledge graph is specific.
User-Specific Knowledge Graphs and Schemas
The user-specific knowledge graphs can be created based on one or more schemas (examples follow). A schema describes how data is structured in the user-specific knowledge graph.
A schema defines a structure for information provided in the graph.
A schema structures data based on domains, types, and properties.
A domain includes one or more types that share a namespace.
A namespace is provided as a directory of uniquely named objects, where each object in the namespace has a unique name or identifier.
For example, a type denotes an “is a” relationship about a topic, and is used to hold a collection of properties.
A topic can represent an entity, such as a person, place or thing.
Each of these topics can have one or more types associated with them.
A property can be associated with a topic and defines a “has a” relationship between the topic and a value of the property.
In some examples, the value of the property can include another topic.
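A small data model can illustrate those “is a” and “has a” relationships. The class and the example topics below are illustrative, not drawn from the patent:

```python
# Illustrative sketch of the schema described in the patent:
# a topic has types ("is a" relationships) and properties
# ("has a" relationships) whose values may be other topics.
class Topic:
    def __init__(self, name, types=None):
        self.name = name
        self.types = types or []     # "is a" relationships
        self.properties = {}         # "has a" relationships

    def has_a(self, prop, value):
        self.properties[prop] = value  # the value may itself be a Topic

mountain_view = Topic("Mountain View", types=["/location/citytown"])
tennis = Topic("Tennis", types=["/sports/sport"])
tennis.has_a("played_in", mountain_view)  # property value is another topic

print(tennis.types)                        # Tennis "is a" sport
print(tennis.properties["played_in"].name)
```

The `played_in` property having another topic as its value is the point of the last sentence above: properties can link topics together, which is what gives the graph its edges.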
A user-specific knowledge graph can be created based on content associated with a respective user.
That content may be processed by one or more parsers to populate the user-specific structured graph.
A parser may be specific to a particular schema.
Confidence or Weights in Connections
Weights that are assigned between nodes indicate a relative strength in the relationship between nodes.
The weights can be determined based on the content associated with the user, which content underlies provision of the user-specific knowledge graph.
That content can provide a single instance of a relationship between nodes, or multiple instances of a relationship between nodes.
So a weight can range between a minimum value and a maximum value.
Weights can also be dynamic:
- Varying over time based on content associated with the user
- Based on content associated with the user at a first time
- Based on content or a lack of content associated with the user at a second time
- The content at the first time can indicate a relationship between nodes
- Weights can decay over time
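A minimal sketch of such dynamic weights: reinforcement bumps a weight toward a maximum, and each period without reinforcement decays it toward a minimum. The constants here are assumptions for illustration, not values from the patent:

```python
# Illustrative sketch of dynamic edge weights: reinforcement increases
# a weight up to a maximum; each period without reinforcing content
# decays it toward a minimum. All constants are assumptions.
MIN_WEIGHT, MAX_WEIGHT = 0.0, 1.0
REINFORCE_STEP, DECAY_FACTOR = 0.2, 0.9

def reinforce(weight):
    """New user content mentions the relationship again."""
    return min(MAX_WEIGHT, weight + REINFORCE_STEP)

def decay(weight):
    """A time period passes with no reinforcing content."""
    return max(MIN_WEIGHT, weight * DECAY_FACTOR)

w = 0.5
w = reinforce(w)      # the relationship appears in new content -> 0.7
w = decay(decay(w))   # two quiet periods erode the weight
print(round(w, 3))
```

Clamping at the minimum and maximum reflects the bounded weight range mentioned earlier, while the decay step captures relevance fading when a relationship stops appearing in the user’s content.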
Multiple User Specific Knowledge Graphs
More than one user-specific knowledge graph can be provided for a particular user.
Each user-specific knowledge graph may be specific to a particular schema.
Generally, a user-specific knowledge graph includes knowledge about a specific user in a structured manner. (It represents a portion of the user’s world through content associated with the user through one or more services.)
Knowledge captured in the user-specific knowledge graph can include things such as:
- Social connections, e.g., real-world and/or virtual
- General likes
- General dislikes
User-Specific Knowledge Graph Versus User-Specific Social Graph
A social graph contains information about the people someone might be connected to, while a user-specific knowledge graph also covers knowledge about those connections, such as shared activities between the people connected in the graph.
Examples of Queries and User-Specific Knowledge Graphs
These are examples from the patent. Note that searches, emails, and social network posts may all work together to build a user-specific knowledge graph. Taken together, the combined messages and actions below may cause the weights on edges between nodes to become stronger, and may cause new nodes and edges to be added to that knowledge graph.
Example search query: [playing tennis with my kids in mountain view] to a search service
Search results may provide information about playing tennis with kids in Mountain View, Calif.
Nodes can be provided, with one representing the entity “Tennis,” one representing “Mountain View,” one representing “Family,” and a couple more each representing “Child.”
An edge can be provided that represents a “/Location/Play_In” relationship, another edge may represent a “/Sport/Played_With” relationship, and other edges may represent “/Family/Member_Of” relationships between the relevant nodes.
Weights may be generated for each of the edges to represent different values as well.
A person may publish the example post “We had a great time playing tennis with our kids today!” in a social networking service, associated with geo-location data indicating Mountain View, Calif.
Nodes may be identified representing tennis, Mountain View, family and children, and edges between those nodes.
Weights may be generated for those edges.
Someone may receive an electronic message from a hotel, which says “Confirming your hotel reservation in Waikiki, Hi. from Oct. 15, 2014, through Oct. 20, 2014. We’re looking forward to making your family’s vacation enjoyable!”
Nodes can be added to the user-specific knowledge graph, where those nodes represent the entities “Vacation” and “Waikiki.”
Edges can be created in the user-specific knowledge graph in response to that email, including one that represents a “/Vacation/Travelled_With” relationship between the nodes and another that represents a “/Vacation/CityTown” relationship between the nodes.
Timing nodes may also be associated with the other nodes, such as a timing node representing October 2014, or a node representing a date range of Oct. 15, 2014, through Oct. 20, 2014.
The user can submit the example search query [kids tennis lessons in waikiki] to a search service.
Nodes may be created in the user-specific knowledge graph representing tennis, Waikiki, family, and children, as well as respective edges between at least some of the nodes.
That example search query may reinforce the relevance of the various entities and the relationships between the entities to the particular user.
That reinforcement may cause the respective weights associated with the edges to be increased.
The user can receive an email from a tennis club, which can include “Confirming tennis lessons at The Club of Tennis, Waikiki, Hi.”
Nodes may be identified representing tennis and Waikiki, along with the edges between them.
That email reinforces the relevance of the entities and the relationships between the entities to the particular user.
The weights between the entities could be increased, and a node could be added to represent the entity “The Club of Tennis,” which could then be connected to one or more other nodes.
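The reinforcement behavior running through these examples can be sketched as follows: each signal (a search query, a social post, an email) either creates an edge between nodes or strengthens an existing one. The relation name "/Location/Located_In" and the +0.1 increment are illustrative assumptions, not values from the patent.

```python
class UserKnowledgeGraph:
    def __init__(self):
        self.edges = {}  # (node_a, relation, node_b) -> weight

    def observe(self, node_a, relation, node_b, increment=0.1):
        key = (node_a, relation, node_b)
        # A first observation adds the edge; later ones reinforce it.
        self.edges[key] = self.edges.get(key, 0.0) + increment

g = UserKnowledgeGraph()
# Search query [kids tennis lessons in waikiki] creates an edge...
g.observe("Tennis", "/Location/Play_In", "Waikiki")
# ...the confirmation email from the tennis club reinforces it...
g.observe("Tennis", "/Location/Play_In", "Waikiki")
# ...and introduces a node for the new entity "The Club of Tennis".
g.observe("The Club of Tennis", "/Location/Located_In", "Waikiki")
```

After these three signals, the Tennis–Waikiki edge carries twice the weight of the newly added club edge, matching the pattern in the examples above where repeated, consistent signals strengthen relationships.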
User-Specific Knowledge Graphs Takeaways
This reminds me of personalized search, but tells us that it is looking at more than just our search history – it includes data from sources such as emails that we might send or receive, or posts that we might make to social networks. This knowledge graph may contain information about the social connections we have, but it also contains knowledge about those connections. The patent tells us that personally identifiable information (including location information) will be protected, as well.
And it tells us that User-specific knowledge graph information could be joined together to build a universal knowledge graph, which means that Google is building knowledge graphs to answer specific questions and for specific users that could potentially be joined together, to enable them to avoid the limitations of a knowledge graph based upon human-edited sources like Wikipedia.
The post User-Specific Knowledge Graphs to Support Queries and Predictions appeared first on SEO by the Sea ⚓.
Why are we all trapped in enterprise chat apps if we talk 6X faster than we type, and our brain processes visual info 60,000X faster than text? Thanks to Instagram, we’re not as camera-shy anymore. And everyone’s trying to remain in flow instead of being distracted by multi-tasking.
That’s why now is the time for Loom. It’s an enterprise collaboration video messaging service that lets you send quick clips of yourself so you can get your point across and get back to work. Talk through a problem, explain your solution, or narrate a screenshare. Some engineering hocus pocus sees videos start uploading before you finish recording so you can share instantly viewable links as soon as you’re done.
“What we felt was that more visual communication could be translated into the workplace and deliver disproportionate value” co-founder and CEO Joe Thomas tells me. He actually conducted our whole interview over Loom, responding to emailed questions with video clips.
Launched in 2016, Loom is finally hitting its growth spurt. It’s up from 1.1 million users and 18,000 companies in February to 1.8 million people at 50,000 businesses sharing 15 million minutes of Loom videos per month. Remote workers are especially keen on Loom since it gives them face-to-face time with colleagues without the annoyance of scheduling synchronous video calls. “80% of our professional power users had primarily said that they were communicating with people that they didn’t share office space with” Thomas notes.
A smart product, swift traction, and a shot at riding the consumerization of enterprise trend has secured Loom a $30 million Series B. The round that’s being announced later today was led by prestigious SaaS investor Sequoia and joined by Kleiner Perkins, Figma CEO Dylan Field, Front CEO Mathilde Collin, and Instagram co-founders Kevin Systrom and Mike Krieger.
“At Instagram, one of the biggest things we did was focus on extreme performance and extreme ease of use and that meant optimizing every screen, doing really creative things about when we started uploading, optimizing everything from video codec to networking” Krieger says. “Since then I feel like some products have managed to try to capture some of that but few as much as Loom did. When I first used Loom I turned to Kevin who was my Instagram co-founder and said, ‘oh my god, how did they do that? This feels impossibly fast.’”
Systrom concurs about the similarities, saying “I’m most excited because I see how they’re tackling the problem of visual communication in the same way that we tried to tackle that at Instagram.” Loom is looking to double-down there, potentially adding the ability to Like and follow videos from your favorite productivity gurus or sharpest co-workers.
Loom is also prepping some of its most requested features. The startup is launching an iOS app next month with Android coming the first half of 2020, improving its video editor with blurring for hiding your bad hair day and stitching to connect multiple takes. New branding options will help external sales pitches and presentations look right. What I’m most excited for is transcription, which is also slated for the first half of next year through a partnership with another provider, so you can skim or search a Loom. Sometimes even watching at 2X speed is too slow.
But the point of raising a massive $30 million Series B just a year after Loom’s $11 million Kleiner-led Series A is to nail the enterprise product and sales process. To date, Loom has focused on a bottom-up distribution strategy similar to Dropbox. It tries to get so many individual employees to use Loom that it becomes a team’s default collaboration software. Now it needs to grow up so it can offer the security and permissions features IT managers demand. Loom for teams is rolling out in beta access this year before officially launching in early 2020.
Loom’s bid to become essential to the enterprise, though, is its team video library. This will let employees organize their Looms into folders of a knowledge base so they can explain something once on camera, and everyone else can watch whenever they need to learn that skill. No more redundant one-off messages begging for a team’s best employees to stop and re-teach something. The Loom dashboard offers analytics on who’s actually watching your videos. And integration directly into popular enterprise software suites will let recipients watch without stopping what they’re doing.
To build out these features Loom has already grown to a headcount of 45. It’s also hired away former head of growth at Dropbox Nicole Obst, head of design for Slack Joshua Goldenberg, and VP of commercial product strategy for Intercom Matt Hodges.
Still, the elephants in the room remain Slack and Microsoft Teams. Right now, they’re mainly focused on text messaging with some additional screensharing and video chat integrations. They’re not building Loom-style asynchronous video messaging…yet. “We want to be clear about the fact that we don’t think we’re in competition with Slack or Microsoft Teams at all. We are a complementary tool to chat” Thomas insists. But given the similar productivity and communication ethos, those incumbents could certainly opt to compete. Slack already has 12 million daily users it could provide with video tools.
Hodges, Loom’s head of marketing, tells me “I agree Slack and Microsoft could choose to get into this territory, but what’s the opportunity cost for them in doing so? It’s the classic build vs. buy vs. integrate argument.” Slack bought screensharing tool Screenhero, but partners with Zoom and Google for video chat. Loom will focus on being easily integratable so it can plug into would-be competitors. And Hodges notes that “Delivering asynchronous video recording and sharing at scale is non-trivial. Loom holds a patent on its streaming, transcoding, and storage technology, which has proven to provide a competitive advantage to this day.”
The tea leaves point to video invading more and more of our communication, so I expect rival startups and features to Loom will crop up. Vidyard and Wistia’s Soapbox are already pushing into the space. As long as it has the head start, Loom needs to move as fast as it can. “It’s really hard to maintain focus to deliver on the core product experience that we set out to deliver versus spreading ourselves too thin. And this is absolutely critical” Thomas tells me.
One thing that could set Loom apart? A commitment to financial fundamentals. “When you grow really fast, you can sometimes lose sight of what is the core reason for a business entity to exist, which is to become profitable. . . Even in a really bold market where cash can be cheap, we’re trying to keep profitability at the top of our minds.”
While ecommerce businesses are in the midst of the Q4 craziness and rising CPCs of the holiday season, B2B clients are planning for their business to pick up at the start of 2020.
In this post, I’ll walk through a few things to consider and refresh before Q1 gets here.
1. Study the competitive landscape
One of the most valuable sources of knowledge from Google campaigns is the ‘Auction Insights’ report, which provides info on when competitors have come into and out of the auction during the year. It’s also valuable to look at competitors that might be newer in the space and have recently entered the auction. With this information, you can dive into new keyword research by using tools like SEMrush and SpyFu. I also recommend studying the creative, offers, and copy that your competitors are using across their ads, which can help inform creative development and testing for the start of the year.
2. Reevaluate budgets for 2020
As the start of the year approaches, look to set budgets based on historical performance and anticipated seasonality. In order to have a strong plan in place, you should look beyond monthly breakdowns.
Some questions to consider:
- Did you expand into new channels late into the year?
- Do you need to invest more budget in certain channels?
- Are your remarketing campaigns fully funded across channels?
- Are you planning on investing budget into new channels?
- How much of the budget will you set aside for testing?
Answering these questions will help ensure you budget appropriately for both historically efficient channels and promising new channels that can get you some early-adoption benefits.
3. Refresh and rethink audiences
It’s important to review the audiences that you have been targeting over the past few months. Along with identifying new audiences to add and poor-performing audiences to pause, consider re-engaging qualified leads that went dark, bolstering account-based marketing efforts, and testing new lookalike audiences.
4. Map out new creative and content
Creative and content are some of the most crucial aspects of campaign development. While you are preparing for Q1, make sure to do an audit of your current and planned creative and content. Are you thinking about the full funnel? Users who haven’t engaged with the brand before are typically looking to download a piece of content that they find valuable. It could be a whitepaper, case study, infographic, or something else that could engage them.
As users progress down the funnel, they will be more willing to give their information to request a demo or get contacted by your company. It’s important to understand where a user is in the funnel and offer them content that aligns with that step. Make sure you’re analyzing content from 2019 and identifying your successes. Which can be spun forward, made into a series, or meaningfully refreshed? Give yourself a leg up by producing content you know to be effective.
Looking at historical performance will help you understand your successes and failures in 2019 and incorporate those into the 2020 planning. Creative, testing, competitive insights, and new audiences will be key efforts in driving growth and performance in the new year, so lay the groundwork now to build up a head of steam going into January.
The post Four initiatives B2Bs must tackle now to win in 2020 appeared first on Search Engine Watch.