Monthly Archives: October 2017
Optimize is now available in 37 new languages. Got a team in Thailand? No trouble. Cross-functional partner in Croatia? You’re covered. You’ll find the full list of supported languages here.
We’re always glad to bring our products to more of the world. But in this case, we’re extra excited about the way this will help teams collaborate and innovate not just across the office but across the globe.
In this data-rich world, everyone in your company needs to be part of building a culture of growth: a culture that embraces testing and analytics as the best way to learn what customers like most and to improve their experience day by day. Optimize opens the door for innovators at every level to explore how even tiny UI changes can improve results.
Often those innovators take the form of a small “X-team” — maybe an analyst, a designer, and an engineer working together and totally focused on testing and optimization. With Optimize, a group like that can create changes in minutes instead of days, and they can more easily share that growth mindset and inspire others across their organization.
Now with 37 more languages in play, Optimize makes it possible for many more local teams to take on the role of optimizers, innovators, and culture-changers.
If you have team members who have selected one of the 37 new languages in their Google Account preferences, they’ll see Optimize in that language whenever they next sign in. (If you’d like to select a language preference just for Optimize, you can do so in your Optimize user settings at any time.) And if you’re happy with your current Optimize language, you’re fine: No action is needed.
To learn more about your global language options, visit our help center. Happy optimizing!
Posted by Rotimi Iziduh, Product Manager, Google Optimize
MongoDB has finished up what is essentially the final step in going public, pricing its IPO at $24 and raising $192 million in the process. The company will debut on the public markets tomorrow and will once again test the waters for companies looking to build full-fledged businesses on the back of open-source software. MongoDB provides open-source database software that can be… Read More
Enterprise – TechCrunch
The $300 drip machine from Breville offers many ways to tinker with the variables that go into brewing your morning cup.
Feed: All Latest
Twitter, a platform infested with trolls, hate and abuse, can be one of the worst places on the internet. As a follow-up to Twitter CEO Jack Dorsey’s tweetstorm last week, in which he promised to crack down on hate and abuse by implementing more aggressive rules, Twitter is gearing up to roll out some updates in the coming weeks, Wired reported earlier today. Read More
Social – TechCrunch
Director Ryan Coogler’s movie looks amazing—in all the best ways.
Feed: All Latest
I have a new habit. It quenches a thirst. It soothes a weakened, battered piece of my psyche. It repairs my wounds and unleashes me, more powerful, into my day. It fulfills the saddest, most regrettable pieces of myself. It may also be a habit that you want to pick up. You too may find that this simple step into the breach of social engagement online can make you… Fitter. Happier.… Read More
Social – TechCrunch
At one point in time, search engines such as Google learned about topics on the Web from sources such as Yahoo! and the Open Directory Project, which provided hierarchical categories of sites within directories that people could browse to find something they might be interested in.
Those listings of categories included hierarchical topics and subtopics, but they were managed by human beings, and both directories have since closed down.
In addition to learning about categories and topics from such places, search engines used to use such sources to do focused crawls of the web, to make sure that they were indexing as wide a range of topics as they could.
In an earlier context vectors patent, we learned that Google was using information from knowledge bases (sources such as Yahoo Finance, IMDB, Wikipedia, and other data-rich and well-organized places) to learn about words that may have more than one meaning.
An example from that patent was that the word “horse” has different meanings in different contexts.
To an equestrian, a horse is an animal. To a carpenter, a horse is a work tool used in carpentry. To a gymnast, a horse is a piece of equipment upon which they perform maneuvers during competitions with other gymnasts.
A context vector takes these different meanings from knowledge bases, along with the number of times each meaning is mentioned in those places, to catalogue how often the word is used in each context.
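As a rough sketch, here is how such a context vector might be tallied from knowledge-base mentions. The counting scheme, terms, and numbers below are my own illustration, not the method from Google's patent:

```python
from collections import Counter

def build_context_vector(term, knowledge_base_mentions):
    """Tally how often a term appears in each context, normalized
    into a distribution over contexts (senses).

    knowledge_base_mentions: list of (term, context) pairs.
    """
    counts = Counter(ctx for t, ctx in knowledge_base_mentions if t == term)
    total = sum(counts.values())
    return {ctx: n / total for ctx, n in counts.items()}

# Invented example mentions: "horse" appears mostly in an animal context.
mentions = [
    ("horse", "animal"), ("horse", "animal"), ("horse", "animal"),
    ("horse", "carpentry"), ("horse", "gymnastics"),
]
vector = build_context_vector("horse", mentions)
# The "animal" sense dominates: 3 of the 5 mentions.
```

The resulting distribution is one simple way a search engine could weigh which meaning of an ambiguous word a searcher most likely intends.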
I thought knowing about context vectors was useful for doing keyword research, but I was excited to see another patent from Google appear in which the word “context” plays a featured role. When you search for something such as a “horse”, the search results you receive will be a mix of horses of different types, depending upon the meaning. As this new patent tells us about such search results:
The ranked list of search results may include search results associated with a topic that the user does not find useful and/or did not intend to be included within the ranked list of search results.
If I were searching for a horse of the animal type, I might add another word to my query to better identify the context of my search. The inventors of this new patent seem to have had a similar idea. The patent mentions:
In yet another possible implementation, a system may include one or more server devices to receive a search query and context information associated with a document identified by the client; obtain search results based on the search query, the search results identifying documents relevant to the search query; analyze the context information to identify content; and generate a group of first scores for a hierarchy of topics, each first score, of the group of first scores, corresponding to a respective measure of relevance of each topic, of the hierarchy of topics, to the content.
From the pictures that accompany the patent, it looks like this context information takes the form of headings that appear above each group of search results, identifying the context those results fit within. Here’s a drawing from the patent showing off topical search results (showing rock/music and geology/rocks):
This patent does remind me of the context vector patent, and the processes described in the two look like they could work together. This patent is:
Context-based filtering of search results
Inventors: Sarveshwar Duddu, Kuntal Loya, Minh Tue Vo Thanh and Thorsten Brants
Assignee: Google Inc.
US Patent: 9,779,139
Granted: October 3, 2017
Filed: March 15, 2016
A server is configured to receive, from a client, a query and context information associated with a document; obtain search results, based on the query, that identify documents relevant to the query; analyze the context information to identify content; generate first scores for a hierarchy of topics, that correspond to measures of relevance of the topics to the content; select a topic that is most relevant to the context information when the topic is associated with a greatest first score; generate second scores for the search results that correspond to measures of relevance, of the search results, to the topic; select one or more of the search results as being most relevant to the topic when the search results are associated with one or more greatest second scores; generate a search result document that includes the selected search results; and send, to a client, the search result document.
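To make that claim concrete, here is a minimal sketch of the two-stage scoring it describes: first score a hierarchy of topics against content extracted from the client's document, then re-score the search results against the winning topic. The topics, the simple word-overlap relevance measure, and the data are all invented for illustration; the patent does not specify this scoring:

```python
def overlap_score(words_a, words_b):
    # Toy relevance measure: count of shared words.
    return len(set(words_a) & set(words_b))

def filter_by_context(results, topics, context_words):
    """results: list of (doc, words); topics: dict of topic -> words."""
    # First scores: relevance of each topic to the page's context.
    best_topic = max(topics, key=lambda t: overlap_score(topics[t], context_words))
    # Second scores: relevance of each result to the winning topic.
    scored = [(overlap_score(words, topics[best_topic]), doc)
              for doc, words in results]
    scored.sort(reverse=True)
    return best_topic, [doc for score, doc in scored if score > 0]

# Invented example mirroring the patent drawing (rock/music vs. geology/rocks).
topics = {"music": ["rock", "band", "album"],
          "geology": ["rock", "mineral", "strata"]}
results = [("rolling-stones.html", ["rock", "band"]),
           ("igneous.html", ["rock", "mineral"])]
topic, docs = filter_by_context(results, topics, ["band", "album", "rock"])
# Context about bands and albums selects the "music" topic, and the
# band page outranks the geology page.
```

The point of the sketch is the shape of the pipeline, not the scoring function; a real system would use something far richer than word overlap.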
It will be exciting to see topical search results start appearing at Google.
The post Does Tomorrow Deliver Topical Search Results at Google? appeared first on SEO by the Sea ⚓.
Today we’re happy to announce that data for the Google Analytics BigQuery export can be streamed as often as every 10 minutes into Google Cloud.
If you’re a Google Analytics 360 client who wants to do current-day analysis, this means you can choose to send data to BigQuery up to six times per hour for almost real-time analysis and action. That’s a 48x improvement over the existing three-times-per-day exports.
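A quick sanity check on that 48x figure, assuming exports run at the maximum stated rate around the clock:

```python
# Up to six exports per hour, around the clock, versus the
# previous three exports per day.
exports_per_day_new = 6 * 24   # 144 exports per day
exports_per_day_old = 3
improvement = exports_per_day_new / exports_per_day_old  # 48.0
```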
What can I do with streaming data delivery?
Many businesses use faster access to their data to identify and engage with clients who show an intent to convert.
For example, it’s well known that a good time to offer a discount to consumers is just after they’ve shown intent (like adding a product to their cart) but then abandoned the conversion funnel. An offer at that moment can bring back large numbers of consumers who then convert. In a case like this, it’s critical to use the freshest data to identify those users in minutes and deploy the right campaign to bring them back.
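As a toy sketch of that idea, here is how fresh session data might be scanned for cart abandoners. The event names and session structure are invented for illustration and are not the Google Analytics export schema:

```python
def find_abandoners(sessions):
    """Flag sessions that added to cart but never purchased.

    sessions: dict of session_id -> list of event names.
    """
    return [sid for sid, events in sessions.items()
            if "add_to_cart" in events and "purchase" not in events]

# Invented example sessions.
sessions = {
    "s1": ["view", "add_to_cart", "purchase"],  # converted
    "s2": ["view", "add_to_cart"],              # abandoned the funnel
    "s3": ["view"],                             # never showed intent
}
abandoners = find_abandoners(sessions)
```

With 10-minute data delivery, a scan like this can feed a win-back campaign while the shopper is still deciding, rather than the next day.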
More frequent updates also help clients recognize and fix issues more quickly, and react to cultural trends in time to join the conversation. BigQuery is an important part of the process: it helps you join other datasets from CRM systems, call centers, or offline sales that are not available in Google Analytics today to gain greater context into those clients, issues, or emerging trends.
When streaming data is combined with BigQuery’s robust programmatic and statistical tools, predictive user models can capture a greater understanding of your audience, and help you engage those users where and when they’re ready to convert. That means more sales opportunities and better returns on your investment.
Those who opt in to streaming Google Analytics data into BigQuery will see data delivered to their selected BigQuery project as fast as every 10 minutes.
Those who don’t opt in will continue to see data delivered just as it has been, arriving about every eight hours.
Why is opt-in required?
The new export uses Cloud Streaming Service, which costs a little extra: $0.05 per GB (that is, “a nickel a gig”). The opt-in is our way of making sure nobody gets surprised by the additional cost. If you don’t take any action, your account will continue to run as it does now, and there will be no added cost.
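For a rough sense of scale, here is the nickel-a-gig math, assuming a hypothetical 200 GB of exported data per month (the volume is an invented example, not a quoted figure):

```python
# Cloud Streaming Service pricing from the announcement: $0.05 per GB.
price_per_gb = 0.05
monthly_gb = 200          # hypothetical export volume
monthly_cost = price_per_gb * monthly_gb  # about $10/month
```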
What data is included?
Most data sent directly to Google Analytics is included. However, data pulled in from other sources such as AdWords and DoubleClick (also referred to as “integration sources”) operates with additional requirements like fraud detection. That means this data is purposefully delayed for your benefit and is therefore exempt from the new streaming functionality.
For further details on what is supported or not supported, please read the help center article here.
How do I get started?
You can start receiving the more frequent data feeds by opting in. To do so, just visit the Google Analytics BigQuery linking page in the Property Admin section and choose the streaming delivery option.
You can also visit our Help Center for full details on this change and opt-in instructions.
Posted by Breen Baker, on behalf of the Google Analytics team
The last time I spoke to Red Hat CEO Jim Whitehurst, he had set a pretty audacious goal for his company to achieve $5 billion in revenue. At the time, that seemed a bit far-fetched. But the company has continued to thrive and is on track to pass $3 billion in revenue some time in the next couple of quarters. Read More
Enterprise – TechCrunch