CBPO

Monthly Archives: July 2018

The Istio service mesh hits version 1.0

July 31, 2018

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open source projects out there right now. It sits at the intersection of a number of industry trends like containers, microservices and serverless computing and makes it easier for enterprises to embrace them. Istio now has over 200 contributors and the code has seen over 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
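Istio itself is configured declaratively rather than through application code, but the core idea behind a canary release, sending a small, fixed share of traffic to a new version while the rest goes to the stable one, can be sketched in a few lines. The service names and weights below are hypothetical, chosen only to illustrate weighted routing:

```python
import random
from collections import Counter

# Hypothetical weighted routing table for a canary release:
# 95% of traffic goes to the stable release, 5% to the canary.
ROUTES = [("reviews-v1", 95), ("reviews-v2-canary", 5)]

def pick_backend(routes):
    """Pick a backend service version in proportion to its weight."""
    total = sum(weight for _, weight in routes)
    roll = random.uniform(0, total)
    upto = 0
    for version, weight in routes:
        upto += weight
        if roll <= upto:
            return version
    return routes[-1][0]

# Simulate 10,000 requests and count how many hit each version.
print(Counter(pick_backend(ROUTES) for _ in range(10_000)))
```

The point of putting this logic in a mesh rather than in each application is that the split can be changed, observed, and rolled back centrally without touching service code.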

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem that a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we had a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.


Enterprise – TechCrunch


Putting machine learning into the hands of every advertiser

July 31, 2018


This post originally appeared on the Inside AdWords blog

The ways people get things done are constantly changing, from finding the closest coffee shop to organizing family photos. Earlier this year, we explored how machine learning is being used to improve our consumer products and help people get stuff done.

In just one hour, we’ll share how we’re helping marketers unlock more opportunities for their businesses with our largest deployment of machine learning in ads. We’ll explore how this technology works in our products and why it’s key to delivering the helpful and frictionless experiences consumers expect from brands.

Join us live today at 9am PT (12pm ET).

Deliver more relevance with responsive search ads

Consumers today are more curious, more demanding, and they expect to get things done faster because of mobile. As a result, they expect your ads to be helpful and personalized. Doing this isn’t easy, especially at scale. That’s why we’re introducing responsive search ads. Responsive search ads combine your creativity with the power of Google’s machine learning to help you deliver relevant, valuable ads.

Simply provide up to 15 headlines and 4 description lines, and Google will do the rest. By testing different combinations, Google learns which ad creative performs best for any search query. So people searching for the same thing might see different ads based on context.
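Google doesn’t describe the mechanics of how combinations are chosen, but “testing different combinations” to learn which creative performs best is, in spirit, a multi-armed bandit problem. Here is a minimal, illustrative epsilon-greedy sketch over made-up headline and description pairs; it is not Google’s system, just one way to picture the idea:

```python
import random

headlines = ["Free Shipping Today", "Shop New Arrivals", "Save 20% Online"]
descriptions = ["Fast delivery on every order.", "New styles added weekly."]

# One "arm" per headline/description combination.
arms = [(h, d) for h in headlines for d in descriptions]
clicks = {arm: 0 for arm in arms}
impressions = {arm: 0 for arm in arms}

def ctr(arm):
    """Observed click-through rate for a combination (0 if never shown)."""
    return clicks[arm] / impressions[arm] if impressions[arm] else 0.0

def choose_arm(epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known combination, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(arms)
    return max(arms, key=ctr)

def record(arm, clicked):
    impressions[arm] += 1
    clicks[arm] += int(clicked)

# Toy simulation: pretend one combination has a higher true CTR.
for _ in range(1000):
    arm = choose_arm()
    true_ctr = 0.08 if arm == (headlines[0], descriptions[0]) else 0.03
    record(arm, random.random() < true_ctr)

print(max(arms, key=ctr))  # usually converges to the better-performing combination
```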

We know this kind of optimization works: on average, advertisers who use Google’s machine learning to test multiple creative see up to 15 percent more clicks.1 Responsive search ads will start rolling out to advertisers over the next several months.

Maximize relevance and performance on YouTube

People watch over 1 billion hours of video on YouTube every day. And increasingly, they’re tuning in for inspiration and information on purchases large and small. For example, nearly 1 in 2 car buyers say they turn to YouTube for information before their purchase.2 And nearly 1 in 2 millennials go there for food preparation tips before deciding what ingredients to buy.3 That means it’s critical your video ads show at the right moment to the right audience.

Machine learning helps us turn that attention into results on YouTube. In the past, we’ve helped you optimize campaigns for views and impressions. Later this year, we’re rolling out Maximize lift to help you reach people who are most likely to consider your brand after seeing a video ad. This new Smart Bidding strategy is also powered by machine learning. It automatically adjusts bids at auction time to maximize the impact your video ads have on brand perception throughout the consumer journey.

Maximize lift is available now as a beta and will roll out to advertisers globally later this year.

Drive more foot traffic with Local campaigns

Whether they start their research on YouTube or Google, people still make the majority of their purchases in physical stores. In fact, mobile searches for “near me” have grown over 3X in the past two years4, and almost 80 percent of shoppers will go in store when there’s an item they want immediately.5 For many of you, that means driving foot traffic to your brick-and-mortar locations is critical—especially during key moments in the year, like in-store events or promotions.

Today we’re introducing Local campaigns: a new campaign type designed to drive store visits exclusively. Provide a few simple things—like your business locations and ad creative—and Google automatically optimizes your ads across properties to bring more customers into your store.

Show your business locations across Google properties and networks

Local campaigns will roll out to advertisers globally over the coming months.

Get the most from your Shopping campaigns

Earlier this year, we rolled out a new Shopping campaign type that optimizes performance based on your goals. These Smart Shopping campaigns help you hit your revenue goals without the need to manually manage and bid to individual products. In the coming months, we’re improving them to optimize across multiple business goals.

Beyond maximizing conversion value, you’ll also be able to select store visits or new customers as goals. Machine learning factors in the likelihood that a click will result in any of these outcomes and helps adjust bids accordingly.
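Google doesn’t publish how its bidding models work, but the idea of weighing the likelihood of several outcomes and adjusting bids accordingly can be pictured as an expected-value calculation. The goal probabilities, values, and target return below are invented for illustration only:

```python
# Hypothetical per-goal predictions for one auction: probability that a click
# leads to the outcome, and the value the advertiser assigns to that outcome.
predictions = {
    "purchase":     {"probability": 0.04, "value": 50.0},
    "store_visit":  {"probability": 0.10, "value": 8.0},
    "new_customer": {"probability": 0.02, "value": 30.0},
}

def expected_click_value(preds):
    """Expected value of a click across all goals the campaign optimizes for."""
    return sum(p["probability"] * p["value"] for p in preds.values())

def bid(preds, target_roas=4.0):
    """Scale the bid so spend stays in line with a target return on ad spend."""
    return expected_click_value(preds) / target_roas

print(round(bid(predictions), 2))  # 0.85 for the numbers above
```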

Machine learning is also used to optimize where your Shopping ads show—on Google.com, Image Search, YouTube and millions of sites and apps across the web—and which products are featured. It takes into account a wide range of signals, like seasonal demand and pricing. Brands like GittiGidiyor, an eBay company, are using Smart Shopping campaigns to simplify how they manage their ads and deliver better results. GittiGidiyor was able to increase return on ad spend by 28 percent and drive 4 percent more sales, while saving time managing campaigns.

We’re also adding support for leading e-commerce platforms to help simplify campaign management. In the coming weeks, you’ll be able to set up and manage Smart Shopping campaigns right from Shopify, in addition to Google Ads.

Tune in to see more

This is an important moment for marketers and we’re excited to be on this journey with you. Tune in at 9am PT (12pm ET) today to see it all unfold at Google Marketing Live.

For the latest news, follow the new Google Ads blog. And check out g.co/adsannouncements for more information about product updates and announcements.

1 Internal Google data.
2 Google / Kantar TNS, Auto CB Gearshift Study, US, 2017. n=312 new car buyers who watched online video.
3 Google / Ipsos, US, November 2017.
4 Internal Google data, U.S., July–Dec. 2015 vs. July–Dec. 2017.
5 Google/Ipsos, U.S., “Shopping Tracker,” Online survey, n=3,613 online Americans 13+ who shopped in the past two days, Oct.–Dec. 2017.


Google Analytics Blog


Quality Scores for Queries: Structured Data, Synthetic Queries and Augmentation Queries

July 31, 2018

Augmentation Queries

In general, the subject matter of this specification relates to identifying or generating augmentation queries, storing the augmentation queries, and identifying stored augmentation queries for use in augmenting user searches. An augmentation query can be a query that performs well in locating desirable documents identified in the search results. The performance of the query can be determined by user interactions. For example, if many users that enter the same query often select one or more of the search results relevant to the query, that query may be designated an augmentation query.

In addition to actual queries submitted by users, augmentation queries can also include synthetic queries that are machine generated. For example, an augmentation query can be identified by mining a corpus of documents and identifying search terms for which popular documents are relevant. These popular documents can, for example, include documents that are often selected when presented as search results. Yet another way of identifying an augmentation query is mining structured data, e.g., business telephone listings, and identifying queries that include terms of the structured data, e.g., business names.
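As a rough illustration of that last idea, here is a small sketch of how candidate synthetic queries might be generated from structured records such as business listings. The record fields and query templates are assumptions made for this example, not details from the patent:

```python
# Hypothetical structured records (e.g. business telephone listings).
listings = [
    {"name": "Bayside Coffee Roasters", "city": "Monterey", "category": "coffee shop"},
    {"name": "Anchor Books", "city": "Santa Cruz", "category": "bookstore"},
]

def synthetic_queries(record):
    """Generate candidate synthetic queries from the terms of a structured record."""
    name, city, category = record["name"], record["city"], record["category"]
    return [
        name.lower(),
        f"{name.lower()} {city.lower()}",
        f"{category} {city.lower()}",
    ]

candidates = [q for record in listings for q in synthetic_queries(record)]
print(candidates)
```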

These augmentation queries can be stored in an augmentation query data store. When a user submits a search query to a search engine, the terms of the submitted query can be evaluated and matched to terms of the stored augmentation queries to select one or more similar augmentation queries. The selected augmentation queries, in turn, can be used by the search engine to augment the search operation, thereby obtaining better search results. For example, search results obtained by a similar augmentation query can be presented to the user along with the search results obtained by the user query.

This past March, Google was granted a patent that involves giving quality scores to queries (the quote above is from that patent). The patent refers to high scoring queries as augmentation queries. It is interesting to see that searcher selection is one way that might be used to determine the quality of queries. So, when someone searches, Google may compare the SERPs returned for the original query to results for augmentation queries, based upon previous searches using the same query terms or upon synthetic queries. This evaluation against augmentation queries is based upon which search results have received more clicks in the past. Google may decide to add results from an augmentation query to the results for the original query in order to improve the overall search results.

How does Google find augmentation queries? One place to look for those is in query logs and click logs. As the patent tells us:

To obtain augmentation queries, the augmentation query subsystem can examine performance data indicative of user interactions to identify queries that perform well in locating desirable search results. For example, augmentation queries can be identified by mining query logs and click logs. Using the query logs, for example, the augmentation query subsystem can identify common user queries. The click logs can be used to identify which user queries perform best, as indicated by the number of clicks associated with each query. The augmentation query subsystem stores the augmentation queries mined from the query logs and/or the click logs in the augmentation query store.
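A toy sketch of that mining step might look like the following, assuming simplified formats for the query log and click log; the field names and thresholds are invented for illustration:

```python
from collections import Counter

# Assumed, simplified log formats.
query_log = ["best hiking boots", "best hiking boots", "hiking boots review",
             "best hiking boots", "waterproof boots"]
click_log = [{"query": "best hiking boots", "clicked": True},
             {"query": "best hiking boots", "clicked": True},
             {"query": "hiking boots review", "clicked": False},
             {"query": "waterproof boots", "clicked": True}]

query_counts = Counter(query_log)  # how often each query is issued
click_counts = Counter(e["query"] for e in click_log if e["clicked"])  # clicks per query

MIN_QUERIES = 2  # hypothetical thresholds
MIN_CLICKS = 2

augmentation_query_store = {
    q for q, n in query_counts.items()
    if n >= MIN_QUERIES and click_counts[q] >= MIN_CLICKS
}
print(augmentation_query_store)  # {'best hiking boots'}
```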

This doesn’t mean that Google is using clicks to directly determine rankings, but it is deciding which augmentation queries might be worth using to provide SERPs that people may be satisfied with.

There are other things that Google may look at to decide which augmentation queries to use in a set of search results. The patent points out some other factors that may be helpful:

In some implementations, a synonym score, an edit distance score, and/or a transformation cost score can be applied to each candidate augmentation query. Similarity scores can also be determined based on the similarity of search results of the candidate augmentation queries to the search query. In other implementations, the synonym scores, edit distance scores, and other types of similarity scores can be applied on a term by term basis for terms in search queries that are being compared. These scores can then be used to compute an overall similarity score between two queries. For example, the scores can be averaged; the scores can be added; or the scores can be weighted according to the word structure (nouns weighted more than adjectives, for example) and averaged. The candidate augmentation queries can then be ranked based upon relative similarity scores.
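As an illustration of how per-term scores might be combined into an overall similarity, here is a small sketch that uses an edit-distance-style similarity only (the patent also mentions synonym and transformation-cost scores) and a simple average rather than the weighted schemes described above. This is just one possible reading, not the patent’s actual formula:

```python
import difflib

def edit_similarity(a, b):
    """Rough per-term similarity in [0, 1] based on sequence matching."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def query_similarity(query, candidate):
    """Average the best per-term match between a user query and a candidate augmentation query."""
    q_terms, c_terms = query.split(), candidate.split()
    scores = [max(edit_similarity(qt, ct) for ct in c_terms) for qt in q_terms]
    return sum(scores) / len(scores)

def rank_candidates(query, candidates):
    """Rank candidate augmentation queries by their overall similarity score."""
    return sorted(candidates, key=lambda c: query_similarity(query, c), reverse=True)

print(rank_candidates("cheap hotels nyc",
                      ["cheap hotels new york", "budget flights nyc", "cheap motels"]))
```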

I’ve seen white papers from Google before that mention synthetic queries, which are queries performed by the search engine instead of by human searchers. It makes sense for Google to explore query spaces in this manner, to see what the results look like, and to use information such as structured data as a source of those synthetic queries. I’ve written about synthetic queries at least a couple of times before, including in the post Does Google Search Google? How Google May Create and Use Synthetic Queries.

Implicit Signals of Query Quality

It is an interesting patent in that it talks about things such as long clicks and short clicks, and about ranking web pages on the basis of them. The patent refers to these as “implicit signals of query quality.” More about that from the patent here:

In some implementations, implicit signals of query quality are used to determine if a query can be used as an augmentation query. An implicit signal is a signal based on user actions in response to the query. Example implicit signals can include click-through rates (CTR) related to different user queries, long click metrics, and/or click-through reversions, as recorded within the click logs. A click-through for a query can occur, for example, when a user of a user device, selects or “clicks” on a search result returned by a search engine. The CTR is obtained by dividing the number of users that clicked on a search result by the number of times the query was submitted. For example, if a query is input 100 times, and 80 persons click on a search result, then the CTR for that query is 80%.

A long click occurs when a user, after clicking on a search result, dwells on the landing page (i.e., the document to which the search result links) of the search result or clicks on additional links that are present on the landing page. A long click can be interpreted as a signal that the query identified information that the user deemed to be interesting, as the user either spent a certain amount of time on the landing page or found additional items of interest on the landing page.

A click-through reversion (also known as a “short click”) occurs when a user, after clicking on a search result and being provided the referenced document, quickly returns to the search results page from the referenced document. A click-through reversion can be interpreted as a signal that the query did not identify information that the user deemed to be interesting, as the user quickly returned to the search results page.

These example implicit signals can be aggregated for each query, such as by collecting statistics for multiple instances of use of the query in search operations, and can further be used to compute an overall performance score. For example, a query having a high CTR, many long clicks, and few click-through reversions would likely have a high-performance score; conversely, a query having a low CTR, few long clicks, and many click-through reversions would likely have a low-performance score.
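Pulling those implicit signals together, a toy version of an overall performance score and threshold check might look like this; the weights and the threshold value are assumptions for illustration, not numbers from the patent:

```python
def performance_score(impressions, clicks, long_clicks, reversions,
                      weights=(0.5, 0.3, 0.2)):
    """Combine CTR, long-click rate and (inverted) reversion rate into one score in [0, 1].
    The weights are illustrative, not from the patent."""
    if impressions == 0 or clicks == 0:
        return 0.0
    ctr = clicks / impressions
    long_click_rate = long_clicks / clicks
    reversion_rate = reversions / clicks
    w_ctr, w_long, w_rev = weights
    return w_ctr * ctr + w_long * long_click_rate + w_rev * (1 - reversion_rate)

PERFORMANCE_THRESHOLD = 0.6  # hypothetical

# The patent's example: a query issued 100 times with 80 clicks has a CTR of 80%.
score = performance_score(impressions=100, clicks=80, long_clicks=60, reversions=10)
print(score >= PERFORMANCE_THRESHOLD, round(score, 3))  # True 0.8
```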

The reasons for the process behind the patent are explained in the description section of the patent where we are told:

Often users provide queries that cause a search engine to return results that are not of interest to the users or do not fully satisfy the users’ need for information. Search engines may provide such results for a number of reasons, such as the query including terms having term weights that do not reflect the users’ interest (e.g., in the case when a word in a query that is deemed most important by the users is attributed less weight by the search engine than other words in the query); the queries being a poor expression of the information needed; or the queries including misspelled words or unconventional terminology.

A quality signal for a query can be defined in this way:

the quality signal being indicative of the performance of the first query in identifying information of interest to users for one or more instances of a first search operation in a search engine; determining whether the quality signal indicates that the first query exceeds a performance threshold; and storing the first query in an augmentation query data store if the quality signal indicates that the first query exceeds the performance threshold.

The patent can be found at:

Query augmentation
Inventors: Anand Shukla, Mark Pearson, Krishna Bharat and Stefan Buettcher
Assignee: Google LLC
US Patent: 9,916,366
Granted: March 13, 2018
Filed: July 28, 2015

Abstract

Methods, systems, and apparatus, including computer program products, for generating or using augmentation queries. In one aspect, a first query stored in a query log is identified and a quality signal related to the performance of the first query is compared to a performance threshold. The first query is stored in an augmentation query data store if the quality signal indicates that the first query exceeds a performance threshold.

References Cited about Augmentation Queries

These are a number of references cited by the applicants of the patent that looked interesting, so I looked them up to read and to share here.

  1. Boyan, J. et al., “A Machine Learning Architecture for Optimizing Web Search Engines,” School of Computer Science, Carnegie Mellon University, May 10, 1996, pp. 1-8.
  2. Brin, S. et al., “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer Science Department, 1998.
  3. Sahami, M. et al., “A Web-Based Kernel Function for Measuring the Similarity of Short Text Snippets,” in Proceedings of the 15th International Conference on World Wide Web (Edinburgh, Scotland, May 23-26, 2006), WWW ’06, ACM Press, New York, NY, pp. 377-386.
  4. Baeza-Yates, R.A. et al., “The Intention Behind Web Queries,” SPIRE 2006, pp. 98-109.
  5. Smith et al., “Leveraging the Structure of the Semantic Web to Enhance Information Retrieval for Proteomics,” vol. 23, Oct. 7, 2007, 7 pages.
  6. Robertson, S.E., “On Term Selection for Query Expansion,” Journal of Documentation, 46(4), Dec. 1990, pp. 359-364.
  7. Abdessalem, T., Cautis, B., and Derouiche, N., “ObjectRunner: Lightweight, Targeted Extraction and Querying of Structured Web Data,” Proc. VLDB Endow. 3(1-2), Sep. 2010.
  8. Hsu, J.Y. and Yih, W., “Template-Based Information Mining from HTML Documents,” in Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence (AAAI ’97/IAAI ’97), AAAI Press, 1997, pp. 256-262.
  9. Agarwal, G., Kabra, G., and Chang, K.C.-C., “Towards Rich Query Interpretation: Walking Back and Forth for Mining Query Templates,” in Proceedings of the 19th International Conference on World Wide Web (WWW ’10), ACM, New York, NY, 2010, pp. 1-10. http://doi.acm.org/10.1145/1772690.1772692

This is a Second Look at Augmentation Queries

This is a continuation patent, which means that an earlier version was granted with the same description, and this version has new claims. When that happens, it can be worth looking at the old claims and the new claims to see how they have changed. I like that the new version seems to focus more strongly upon structured data. It tells us that Google might use structured data from sites that appear for queries as a source of synthetic queries, and if those synthetic queries meet the performance threshold, their results may be added to the search results that appear for the original queries. The claims do seem to focus a little more on structured data as synthetic queries, but they haven’t changed much, and not enough to be worth publishing side by side to compare them.

What Google Has Said about Structured Data and Rankings

Google spokespeople had been telling us that structured data doesn’t impact rankings directly, but what they have been saying seems to have changed somewhat recently. In the Search Engine Roundtable post Google: Structured Data Doesn’t Give You A Ranking Boost But Can Help Rankings, we are told that just having structured data on a site doesn’t automatically boost the rankings of a page. But if the structured data for a page is used as a synthetic query, and that query meets the performance threshold as an augmentation query, the page might be shown in the rankings for the original query, thus helping its rankings (as this patent tells us).

Note that this isn’t new, and the continuation patent’s claims don’t appear to have changed that much: structured data is still being used as a source of synthetic queries, which are checked to see whether they work as augmentation queries. This does seem to be a really good reason to make sure you are using the appropriate structured data for your pages.




SEO by the Sea ⚓


It’s Not Game Over: How Programmatic Can Nurture Lead Gen Wins

July 30, 2018

Programmatic is known to be a bit confusing, so we’re having Criteo’s Ned Samuelson and Hanapin’s Bryan Gaynor take an hour to discuss the ways you can utilize programmatic to build brand awareness and create fantastic content around your product and services.

Read more at PPCHero.com
PPC Hero


Fabric offers an alternative to Facebook sharing with a private timeline of personal moments

July 30, 2018

Fabric, a personal journaling app that emerged from Y Combinator’s 2016 batch of startups, is relaunching itself as a Facebook alternative. The app is giving itself a makeover in the wake of Facebook’s closure of the Moves location tracker, by offering its own tool to record your activities, photos, memories and other moments shared with friends and family. But unlike on Facebook, everything in Fabric is private by default and data isn’t shared with marketers.

Instead, the startup hopes to build something users will eventually pay for, via premium features or subscriptions.

The idea for the startup came from two people who helped create Facebook’s core features.

Co-founders Arun Vijayvergiya and Nikolay Valtchanov worked for several years at the social network, where Vijayvergiya built the product that would later become Facebook Timeline at an internal hackathon. He also worked on products like Friendship Pages, Year in Review and On This Day, while Valtchanov developed integrations between Facebook and fitness applications.

After leaving Facebook, both were inspired to work on Fabric because of their interest in personal journaling – and that became the key focus for the original version of the Fabric app. But while other journaling apps may offer a blank space for recording thoughts, Fabric automates the process by pulling in photos, posts from elsewhere on social media, places you visited, and more, and putting those on its map interface.

The longer-term goal is that Fabric users will be able to look back across their personal history to answer any kind of question about where they had been, what they did, and who they were with – but in a more private environment than what’s available on Facebook.

Facebook could have built something similar, but its focus has been more on how personal profile data could be useful to advertisers.

Despite numerous check-ins, posts where you tagged friends, shared photos and more, there’s still not an easy way to ask Facebook about that great Indian restaurant you tried last March, or who was on that group beach trip with you a few years ago, for example. At best, Facebook offers memory flashbacks through its On This Day feature (now available at any time via the Memories tab), or round-ups and collages that appear at various times throughout the year.

As a search engine for your own memories, it’s not that great.

What’s New 

This is where Fabric comes in. It will automatically record your activities, checking you in to places you visit, which you can then choose to add friends to.

While the idea of automatic location gathering may turn a good number of users off, the difference is that Fabric’s data collection is meant for your eyes only, unless you explicitly choose to share something with friends.

Fabric doesn’t use third-party software for its location system – it’s written in-house, so the data is never touched by a third-party. It also uses industry standard encryption for data transfer and storage, and login information is stored in a separate system from the rest of your data as an added precaution.

Notably, Fabric doesn’t plan to generate revenue by selling data or offering it to advertisers for targeting purposes. Instead, the company hopes users will eventually pay for its product – perhaps as a subscription or through premium upgrades. (It’s not doing this yet, however.)

“The whole motivation behind Fabric is that many meaningful parts of your life do not belong in the public sphere,” explains Vijayvergiya. “In order to be able to capture these moments, user trust is essential and is something we have baked into our company culture. Internally, we refer to ourselves as a ‘private-first’ company. Everything on Fabric is private by default. You have to choose to include friends in your moments. We don’t share any data with marketers, and we don’t intend to share personally identifiable information with advertisers,” he says.

Since its 2016 release, Fabric has been downloaded 70,000 times by users across 117 countries, and has seen 112 million automatic check-ins.

The new version of the app has been redesigned to be something users engage with more often, as opposed to the more passive journaling app it was before.

The app now offers an outline of your activities, which it also calls Timeline. Here, you can add people, photos and memorable anecdotes to those automated entries. You can jump back to any day to see your history with any person or place that appears on the Timeline.

You can also turn any moment into one you collaborate on with friends, by allowing others to add photos and comments. That is, instead of a broad post to a group of so-called “friends” on Facebook, you share the moment with those who really matter. This isn’t all that different from how people use private messaging apps and group chats today – in order to share things with people that aren’t necessarily meant for everyone to see.

In addition, Fabric allows you to add your friends to the app, so you can be automatically tagged when you both spend time together in the real world. This also simplifies sharing because you won’t have to think about which posts should be shared with which audience.

For instance, Vijayvergiya says, “this means you can add your mom as a friend, and only share with her the moments you spend together in the same place.”

The most compelling feature in the updated app may not be check-ins or sharing, but search.

In Fabric, you can now search for past events in your life similar to how you search the web. That is, you could type in “restaurant rome 2017” or “camila los angeles birthday” and find the matching posts, Vijayvergiya suggests. And because you can import your Facebook, Instagram, and Camera Roll to Fabric, it’s now offering the search engine that Facebook itself forgot to build. (You can import your Facebook Moves history, too, ahead of its shutdown.)

Fabric’s search will also be available on the desktop web, where it’s currently in beta.

Fabric’s real challenger, as it turns out, may not be Facebook, though. It’s Google Photos.

Because of advances in image recognition technology, Google Photos (and some other photo apps) have built advanced search capabilities that let you pull up places, things, people, and more, using data recognized in the image itself. Users can also share those photos with others, collaborate on albums, and leave notes as comments.

The difference is that Fabric offers import from a variety of sources and encourages journaling. But that may not be enough to attract a large user base, especially when automatic check-ins rely on the app’s use of background location which has some impact on battery life.

Fabric is a free download on iOS.


Social – TechCrunch




3 Chances to Win A Free Ticket to Hero Conf London

July 29, 2018

Our ever-popular Golden Ticket Giveaway is returning for Hero Conf London! So how do you win? The rules are simple…

Read more at PPCHero.com
PPC Hero


Twitter will suspend repeat offenders posting abusive comments on Periscope live streams

July 28, 2018

As part of Twitter’s attempted crackdown on abusive behavior across its network, the company announced on Friday afternoon a new policy aimed at those who repeatedly harass, threaten or otherwise make abusive comments during a Periscope broadcaster’s live stream. According to Twitter, the company will begin to more aggressively enforce its Periscope Community Guidelines by reviewing and suspending the accounts of habitual offenders.

The plans were announced via a Periscope blog post and tweet that said everyone should be able to feel safe watching live video.

Currently, Periscope’s comment moderation policy involves group moderation.

That is, when one viewer reports a comment as “abuse,” “spam” or selects “other reason,” Periscope’s software will then randomly select a few other viewers to take a look and decide if the comment is abuse, spam or if it looks okay. The randomness factor here prevents a person (or persons) from using the reporting feature to shut down conversations. Only if a majority of the randomly selected voters agree the comment is spam or abuse does the commenter get suspended.
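The flow described above, where a single report triggers a small random jury of other viewers and a majority vote temporarily disables the commenter’s chat, can be sketched roughly as follows. The jury size, vote labels, and voting logic here are assumptions; Periscope’s actual implementation isn’t public:

```python
import random
from collections import Counter

def viewer_vote(viewer, comment):
    # Placeholder: in reality each sampled viewer chooses "abuse", "spam", or "looks okay".
    return random.choice(["abuse", "spam", "looks okay"])

def moderate_comment(reported_comment, viewers, jury_size=5):
    """Randomly sample other viewers to vote on a reported comment; a majority
    of 'abuse' or 'spam' votes temporarily disables the commenter's chat."""
    jury = random.sample(viewers, min(jury_size, len(viewers)))
    votes = Counter(viewer_vote(v, reported_comment) for v in jury)
    flagged = votes["abuse"] + votes["spam"]
    return "chat_disabled" if flagged > len(jury) / 2 else "no_action"

print(moderate_comment("you are terrible", viewers=[f"viewer{i}" for i in range(50)]))
```

Sampling the jury randomly, as the article notes, keeps a single person or coordinated group from weaponizing the reporting feature to shut down conversations.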

However, this suspension would only disable their ability to chat during the broadcast itself — it didn’t prevent them from continuing to watch other live broadcasts and make further abusive remarks in the comments. Though they would risk the temporary ban by doing so, they could still disrupt the conversation, and make the video creator — and their community — feel threatened or otherwise harassed.

Twitter says that accounts that repeatedly get their comments suspended for violating its guidelines will soon be reviewed and may be suspended outright. This enhanced enforcement begins on August 10, and is one of several changes Twitter is making across Periscope and Twitter that are focused on user safety.

To what extent those changes have been working is questionable. Twitter may have policies in place around online harassment and abuse, but its enforcement has been hit-or-miss. But ridding its platform of unwanted accounts — including spam, despite the impact to monthly active user numbers — is something the company must do for its long-term health. The fact that so much hate and abuse is seemingly tolerated or overlooked on Twitter has been an issue for some time, and the problem continues today. And it could be one of the factors in Twitter’s stagnant user growth. After all, who willingly signs up for harassment?

The company is at least attempting to address the problem, most recently by acquiring the anti-abuse technology provider Smyte. Its transition to Twitter didn’t go so well, but the technology it offers the company could help Twitter address abuse at a greater scale in the future.


Social – TechCrunch


Space Photos of the Week: A Stormy Summer on Mars

July 28, 2018

Planet-encircling dust storms are shrouding skies and imperiling a NASA rover.
Feed: All Latest


New brand, new home: Where to find Google Marketing Platform online

July 28, 2018

When we brought together DoubleClick and the Google Analytics 360 Suite under Google Marketing Platform, we knew we had to make some changes to our websites, blogs and social media channels too. Now, the resources you’ve been reading and visiting over the years have been updated to reflect our new brand, so you can find the latest news, tips and more on our advertising and analytics solutions in one spot.

First, you should know that we’ve moved our content and product information to marketingplatform.google.com. You’ll also find product sign-in links there. (Those bookmarks you have for the old DoubleClick and Google Analytics websites should automatically redirect you.)

We’ve also launched new and improved blogs, with information for our product users and enterprise customers. We’ll be regularly updating them with product news and digital marketing insights. Bookmark us.

Of course, you can also connect with Google Marketing Platform on social:

Twitter: Follow @GMktgPlatform

LinkedIn: Follow Google Marketing Platform for updates

YouTube: Subscribe for new videos

You’ll find customer stories, major product announcements, research, reports and other advertising and analytics content intended for large enterprises.

And don’t worry: We haven’t changed the Google Analytics social channels. We will continue to bring you product news and tips on Google+, Twitter, YouTube, LinkedIn and Facebook.

We hope you like our new home. Thanks for visiting, and come back soon!


Google Analytics Blog