CBPO

Monthly Archives: August 2019

Twitter to test a new filter for spam and abuse in the Direct Message inbox

August 17, 2019

Twitter is testing a new way to filter unwanted messages from your Direct Message inbox. Today, Twitter allows users to set their Direct Message inbox as being open to receiving messages from anyone, but this can invite a lot of unwanted messages, including abuse. While one solution is to adjust your settings so only those you follow can send you private messages, that doesn’t work for everyone. Some people — like reporters, for example — want to have an open inbox in order to have private conversations and receive tips.

This new experiment will test a filter that will move unwanted messages, including those with offensive content or spam, to a separate tab.

Instead of lumping all your messages into a single view, the Message Requests section will include the messages from people you don’t follow, and below that, you’ll find a way to access these newly filtered messages.

Users would have to click on the “Show” button to even read these, which protects them from having to face the stream of unwanted content that can pour in at times when the inbox is left open.

And even upon viewing this list of filtered messages, all the content itself isn’t immediately visible.

In the case that Twitter identifies content that’s potentially offensive, the message preview will say the message is hidden because it may contain offensive content. That way, users can decide if they want to open the message itself or just click the delete button to trash it.

The change could allow Direct Messages to become a more useful tool for those who prefer an open inbox, as well as an additional means of clamping down on online abuse.

It’s also similar to how Facebook Messenger handles requests — those from people you aren’t friends with are relocated to a separate Message Requests area. And those that are spammy or more questionable are in a hard-to-find Filtered section below that.

It’s not clear why a feature like this really requires a “test,” however — arguably, most people would want junk and abuse filtered out. And those who for some reason did not, could just toggle a setting to turn off the filter.

Instead, this feels like another example of Twitter’s slow pace when it comes to making changes to clamp down on abuse. Facebook Messenger has been filtering messages in this way since late 2017. Twitter should just launch a change like this, instead of “testing” it.

The idea of hiding — instead of entirely deleting — unwanted content is something Twitter has been testing in other areas, too. Last month, for example, it began piloting a new “Hide Replies” feature in Canada, which allows users to hide unwanted replies to their tweets so they’re not visible to everyone. The tweets aren’t deleted, but rather placed behind an extra click — similar to this Direct Message change.

Twitter is updating its Direct Message system in other ways, too.

At a press conference this week, Twitter announced several changes coming to its platform, including a way to follow topics, plus a search tool for the Direct Message inbox, as well as support for iOS Live Photos as GIFs, the ability to reorder photos and more.


Social – TechCrunch



5 Reasons Why You Should Use Google Experiments

August 17, 2019

Google Experiments is an A/B testing tool that is available within the Google Analytics interface. This post is not about what A/B testing is, why you should conduct A/B tests, or what other tools are available, but rather makes a case for using Google Analytics as your testing platform. I am not getting paid to write this, nor do I have any affiliation with Google. This post is in response to a question I received from a reader of my blog.

  1. Free – There is absolutely no cost for the tool. You can’t beat free; it is a great way to start with A/B testing and learn how testing works. I strongly recommend that you try this tool before moving to more sophisticated paid tools. Additionally, if you are just trying to make a case for testing within your organization, cost can be a barrier, and this tool removes that barrier.
  2. Easy To Set Up – An easy-to-use wizard allows you to choose the pages to test and set up test parameters.
  3. Easy Implementation – Once you are done setting up the page(s) you want to test (point 2 above), you have to implement some code on your site. It may sound daunting, but that code is very easy to implement. Google provides you the code after your setup is done, and all you have to do is stick it on your pages. Since you already have Google Analytics installed, you are already halfway there. Easy setup makes it easier to cross the IT/development team barrier.
  4. Setting Up Objectives – If you have already defined Goals in Google Analytics, you can use them as the objective of your test. During setup you can pick a goal that you have already defined in Google Analytics as your desired optimization objective. If you have not defined them already, you can quickly define them while setting up your test.
  5. Segments – Many tools just give you the final results based on the data of the entire population or on some predefined segments. With Google Experiments, you can pick segments that you have defined in Google Analytics and see how each variation is performing for each of your segments. Since not all segments behave in a similar fashion, this kind of analysis helps you drive even more conversions by understanding which variation of your page(s) works better for which segments (see the sketch after this list).
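To make the segment idea concrete, here is a minimal, hypothetical sketch in Python (not the JavaScript snippet Google actually gives you) of how a 50/50 experiment might assign visitors to variations and compare conversion rates per segment. The variation names, segment labels, and traffic data are all made up for illustration.

```python
import hashlib
import random
from collections import defaultdict

VARIATIONS = ["original", "variation_b"]  # hypothetical page variations

def assign_variation(visitor_id: str) -> str:
    """Deterministically split visitors 50/50 across the variations."""
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % len(VARIATIONS)
    return VARIATIONS[bucket]

def conversion_rates_by_segment(hits):
    """hits: dicts with 'visitor_id', 'segment', and 'converted' keys."""
    totals = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # segment -> variation -> [visits, conversions]
    for hit in hits:
        counts = totals[hit["segment"]][assign_variation(hit["visitor_id"])]
        counts[0] += 1
        counts[1] += int(hit["converted"])
    return {segment: {variation: conversions / visits
                      for variation, (visits, conversions) in variations.items()}
            for segment, variations in totals.items()}

# Simulated traffic where "variation_b" converts better only for mobile visitors.
random.seed(0)
sample = []
for i in range(10_000):
    visitor = f"visitor-{i}"
    segment = "mobile" if i % 2 else "desktop"
    rate = 0.08 if segment == "mobile" and assign_variation(visitor) == "variation_b" else 0.05
    sample.append({"visitor_id": visitor, "segment": segment,
                   "converted": random.random() < rate})
print(conversion_rates_by_segment(sample))
```

In the real tool the assignment and reporting happen inside Google Analytics, of course; the point is simply that a segment-level breakdown can reveal a winning variation that the overall numbers hide.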

Keep in mind that no matter how good your conversions are, there is always room for improvement, and A/B testing helps you find it. As Bryan Eisenberg would say, Always Be Testing.

This post was originally posted on http://anilbatra.com/analytics/2013/11/5-reasons-to-use-google-experiments/


Google Analytics Premium


Facebook Dynamic Creative: Level Up Your Ads

August 15, 2019

Understand Facebook’s dynamic creative feature, how it works, how to set it up, best practices, and real-world results with positive impact.

Read more at PPCHero.com
PPC Hero


Huawei pushes back launch of 5G foldable, the Mate X

August 15, 2019

If you were desperately ripping days off of your calendar until you could get your hands on Huawei’s $2,600 5G foldable, the Mate X — which was originally slated to launch next month — it sounds like you’re going to have to wait a bit longer, per TechRadar, which attended a press event at Huawei’s Shenzhen headquarters today.

It reports being told there is no possibility of a September launch. Instead Huawei is now aiming for November. But the company would only profess itself certain its first smartphone that folds out to a (square) tablet will launch before 2020. So it seems Mate X buyers may need to wait until circa Christmas to fondle this foldable.

It’s not clear exactly why the launch is being delayed. But — speculating wildly — we imagine it’s something to do with the fact that the screen, er, folds.

We’ve reached out to Huawei for official comment on the delay.

Huawei’s Mate X date slippage suggests Samsung will still be first to market with its (previously) delayed Galaxy Fold — which was itself delayed after a bunch of review units broke (because, well, did we tell you the screen folds?).

Last we heard, the Galaxy Fold is slated for a September release — Samsung seemingly confident it’s fixed the problem of how to make a foldable phone survive actual use.

Of course, survival in the wild very much remains to be seen with any of these foldables. So expect TC’s in-house hardware guru, Brian Heater, to put all of these expensively hinged touchscreens through their paces.

Returning to Huawei’s Mate X, potential buyers may not be entirely reassured to learn the company appeared to dangle rather more information about a planned sequel in front of reporters at the press event.

A sequel which may or may not have even more screens, as Huawei is apparently considering putting glass on the back. Yes, glass. (The gen-one Mate X will have a steel back.) Glass panels which it says could double as touchscreens. On the back. As well as the front. We have no idea if that means the price-tag will double too.

This theoretical quad (?) screen foldable follow-up to the still unreleased Mate X might even be released as soon as next year, according to TechRadar’s reportage. Or — again speculating wildly — it might never be released. Because, frankly, it sounds mental. But that’s the wacky world of foldables for ya.

There may be method in this madness too. Because, since smartphones turned into all-screen devices — making it almost impossible to tell one touch-sensitive slab from another — plucky Android device makers are trying to find a way to put more screen on the slab so you can see more.

If they can pull that off it might be great. However sticking a hinge right through the middle of a smartphone’s primary feature and function without that simultaneously causing problems is certainly a major engineering challenge.

Mobile – TechCrunch


Google Knowledge Graph Reconciliation

August 15, 2019

Exploring how Google’s knowledge graph works can provide some insights into how it is growing and improving, and may influence what we see on the web. A newly granted Google patent from the end of last month tells us about one way that Google may improve the amount of data that its knowledge graph contains.

The process involved in that patent doesn’t work quite the same way as the patent I wrote about in the post How the Google Knowledge Graph Updates Itself by Answering Questions but taken together, they tell us about how the knowledge graph is growing and improving. But part of the process involves the entity extraction that I wrote about in Google Shows Us How It Uses Entity Extractions for Knowledge Graphs.

This patent tells us that information that may make its way into Google’s knowledge graph isn’t limited to content on the Web, but may also “originate from another document corpus, such as internal documents not available over the Internet or another private corpus, from a library, from books, from a corpus of scientific data, or from some other large corpus.”

What Is Knowledge Graph Reconciliation?

The patent tells us how a knowledge graph is constructed and the processes it follows to update and improve itself.

The site Wordlift includes some definitions related to entities and the Semantic Web. The definition it provides for reconciling entities is “providing computers with unambiguous identifications of the entities we talk about.” This patent from Google focuses upon a broader use of the word “reconciliation” and how it applies to knowledge graphs: making sure that a knowledge graph takes advantage of all of the information about entities that web sources may contribute to it.

This process involves finding missing entities and missing facts about entities from a knowledge graph by using web-based sources to add information to a knowledge graph.

Problems with knowledge graphs

Large data graphs like Google’s Knowledge Graph store data and rules that describe knowledge about the data in a way that allows the information they provide to be built upon. A patent granted to Google describes how Google may build upon data within a knowledge graph so that it contains more information. The patent doesn’t just cover information from within the knowledge graph itself, but can look to sources such as online news.

Tuples as Units of Knowledge Graphs

The patent presents some definitions that are worth learning. One of those is about facts involving entities:

A fact for an entity is an object related to the entity by a predicate. A fact for a particular entity may thus be described or represented as a predicate/object pair.

The relationship between the Entity (a subject) and a fact about the entity (a predicate/object pair) is known as a tuple.

In a knowledge graph, entities, such as people, places, things, concepts, etc., may be stored as nodes and the edges between those nodes may indicate the relationship between the nodes.

For example, the nodes “Maryland” and “United States” may be linked by the edges of “in country” and/or “has state.”

A basic unit of such a data graph can be a tuple that includes two entities, a subject entity and an object entity, and a relationship between the entities.

Tuples often represent real-world facts, such as “Maryland is a state in the United States.” (A Subject, A Verb, and an Object.)

A tuple may also include information, such as:

  • Context information
  • Statistical information
  • Audit information
  • Metadata about the edges
  • etc.

When a knowledge graph contains information about a tuple, it may also know about the source of that tuple and a score for the originating source of the tuple.
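As a way to picture what a tuple plus its bookkeeping might look like in code, here is a minimal sketch. The field names (source, source_score, metadata) are illustrative choices of mine, not anything specified by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class FactTuple:
    """A basic unit of a data graph: a subject entity, a predicate, and an
    object entity, plus the kind of extra information the patent mentions
    (context, statistics, audit data, and a score for the originating source)."""
    subject: str
    predicate: str
    obj: str
    source: str = ""                  # e.g. the URL the fact was extracted from
    source_score: float = 0.0         # confidence in the originating source
    metadata: dict = field(default_factory=dict)  # context, audit info, etc.

# "Maryland is a state in the United States" as a subject/predicate/object tuple
fact = FactTuple("Maryland", "is a state in", "United States",
                 source="https://example.com/maryland", source_score=0.9)
print((fact.subject, fact.predicate, fact.obj))
```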

A knowledge graph may lack information about some entities. Those entities may be described in document sources, such as web pages, but manual addition of that entity information can be slow and does not scale.

This is a problem facing knowledge graphs – missing entities and their relationships to other entities can reduce the usefulness of querying the data graph. Knowledge graph reconciliation provides a way to make a knowledge graph richer and stronger.

The patent tells us about inverse tuples, which reverse the subject and object entities.

For example, if the potential tuples include a tuple such as <Maryland, is a state in, United States>, the system may generate the inverse tuple <United States, has state, Maryland>.

Sometimes inverse tuples may be generated for some predicates but not for others. For example, tuples with a date or measurement as the object may not be good candidates for inverse occurrences, and may not have many inverse occurrences.

For example, the tuple <Planet of the Apes, released in, 2001> is not likely to have an inverse occurrence of <2001, is the year of release, Planet of the Apes> in the target data graph.
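Here is a small sketch of that idea, with made-up predicate names and an inverse-predicate mapping of my own (the patent does not spell out such a table):

```python
# Predicates whose objects are dates or measurements are poor candidates for
# inversion, per the patent's example; these sets are illustrative only.
NON_INVERTIBLE_PREDICATES = {"released in", "has length", "born on"}
INVERSE_PREDICATE = {
    "is a state in": "has state",
    "acted in": "has actor",
}

def inverse_tuple(fact):
    """Return the inverse of (subject, predicate, object), or None if the
    predicate should not be inverted or has no known inverse form."""
    subject, predicate, obj = fact
    if predicate in NON_INVERTIBLE_PREDICATES:
        return None
    inverse = INVERSE_PREDICATE.get(predicate)
    if inverse is None:
        return None
    return (obj, inverse, subject)

print(inverse_tuple(("Maryland", "is a state in", "United States")))
# ('United States', 'has state', 'Maryland')
print(inverse_tuple(("Planet of the Apes", "released in", "2001")))
# None -- date-valued predicates are skipped
```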

Clustering of Tuples is also discussed in the patent. We are told that the system may then cluster the potential tuples by:

  • source
  • provenance
  • subject entity type
  • subject entity name

This kind of clustering takes place in order to generate source data graphs.

The process behind the knowledge graph reconciliation patent:

  1. Potential entities may be identified from facts generated from web-based sources
  2. Facts from those sources are analyzed and cleaned, generating a small source data graph that includes entities and facts from those sources
  3. The source graph may be generated for a potential source entity that does not have a matching entity in the target data graph
  4. The system may repeat the analysis and generation of source data graphs for many source documents, generating many source graphs, each for a particular source document
  5. The system may cluster the source data graphs together by type of source entity and source entity name
  6. The entity name may be a string extracted from the text of the source
  7. Thus, the system generates clusters of source data graphs of the same source entity name and type
  8. The system may split a cluster of source graphs into buckets based on the object entity of one of the relationships, or predicates
  9. The system may use a predicate that is determinative for splitting the cluster
  10. A determinative predicate generally has a unique value, e.g., object entity, for a particular entity
  11. The system may repeat the dividing a predetermined number of times, for example using two or three different determinative predicates, splitting the buckets into smaller buckets. When the iteration is complete, graphs in the same bucket share two or three common facts
  12. The system may discard buckets without sufficient reliability and discard any conflicting facts from graphs in the same bucket
  13. The system may merge the graphs in the remaining buckets, and use the merged graphs to suggest new entities and new facts about those entities for inclusion in a target data graph (see the sketch after this list)
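Here is a rough, simplified sketch of steps 5 through 13, assuming each source data graph is represented as a plain dictionary; the data shape, threshold, and names are mine, not the patent’s.

```python
from collections import defaultdict

def cluster_and_merge(source_graphs, determinative_predicates, min_sources=2):
    """Sketch of steps 5-13: cluster source graphs by (entity name, type),
    split clusters into buckets on determinative facts, drop unreliable
    buckets, and merge what is left. Each source graph is a dict with keys
    'entity', 'entity_type', 'source', and 'facts' (a set of
    (subject, predicate, object) tuples) -- a made-up shape for illustration."""
    # Step 5: cluster by source entity name and type.
    clusters = defaultdict(list)
    for graph in source_graphs:
        clusters[(graph["entity"], graph["entity_type"])].append(graph)

    merged_suggestions = []
    for (entity, entity_type), graphs in clusters.items():
        # Steps 8-11: bucket graphs by the objects of their determinative predicates,
        # collapsing the iterative splitting into a single grouping key.
        buckets = defaultdict(list)
        for graph in graphs:
            key = tuple(sorted((p, o) for (s, p, o) in graph["facts"]
                               if s == entity and p in determinative_predicates))
            buckets[key].append(graph)

        for bucket in buckets.values():
            # Step 12: discard buckets without sufficient corroboration.
            if len({g["source"] for g in bucket}) < min_sources:
                continue
            # Step 13: merge the surviving graphs into one suggested entity.
            merged_facts = set().union(*(g["facts"] for g in bucket))
            merged_suggestions.append({"entity": entity,
                                       "entity_type": entity_type,
                                       "facts": merged_facts})
    return merged_suggestions

graphs = [
    {"entity": "Planet of the Apes", "entity_type": "Movie", "source": "https://a.example/1",
     "facts": {("Planet of the Apes", "released in", "1968"),
               ("Planet of the Apes", "directed by", "Franklin J. Schaffner")}},
    {"entity": "Planet of the Apes", "entity_type": "Movie", "source": "https://b.example/2",
     "facts": {("Planet of the Apes", "released in", "1968"),
               ("Planet of the Apes", "starred", "Charlton Heston")}},
    {"entity": "Planet of the Apes", "entity_type": "Movie", "source": "https://c.example/3",
     "facts": {("Planet of the Apes", "released in", "2001")}},  # remake: own bucket, dropped (one source)
]
print(cluster_and_merge(graphs, determinative_predicates={"released in"}))
```

Treating the iterative splitting as a single grouping key is a simplification, but the effect is the same: graphs land in the same bucket only when they agree on the determinative facts.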

How Googlebot may be Crawling Facts to Build a Knowledge Graph

This is where some clustering comes into play. Imagine that the web sources are about science fiction movies, and they contain information about movies in the “Planet of the Apes” series, which has been remade at least once; there are a number of related movies in the series, as well as movies with the same names. The information about those movies may be found from sources on the Web, clustered together, and put through a reconciliation process because of the similarities. Relationships between the many entities involved may be determined and captured. We are told about the following steps:

  1. Each source data graph is associated with a source document, includes a source entity with an entity type that exists in the target data graph, and includes fact tuples
  2. The fact tuples identify a subject entity, a relationship connecting the subject entity to an object entity, and the object entity
  3. The relationship is associated with the entity type of the subject entity in the target data graph
  4. The computer system also includes instructions that, when executed by the at least one processor, cause the computer system to perform operations that include generating a cluster of source data graphs, the cluster including source data graphs associated with a first source entity of a first source entity type that share at least two fact tuples that have the first source entity as the subject entity and a determinative relationship as the relationship connecting the subject entity to the object entity
  5. The operations also include generating a reconciled graph by merging the source data graphs in the cluster when the source data graphs meet a similarity threshold and generating a suggested new entity and entity relationships for the target data graph based on the reconciled graph
More Features of Knowledge Graph Reconciliation

There appear to be 9 movies across the original “Planet of the Apes” series and the rebooted series. The first “Planet of the Apes” was released in 1968, and the second “Planet of the Apes” was released in 2001. Since they have the same name, things could get confusing if they weren’t separated from each other, so the system uses facts about those movies to break the “Planet of the Apes” cluster down into buckets, based upon facts that tell us there was an original series and a rebooted series.

(Image from the patent: entity graph reconciliation for Planet of the Apes)

I’ve provided details of an example that Google pointed out, but here is how they describe breaking a cluster down into buckets based on facts:

For example, generating the cluster can include generating a first bucket for source data graphs associated with the first source entities and the first source entity type, splitting the first bucket into second buckets based on a first fact tuple, the first fact tuple having the first source entity as the subject entity and a first determinative relationship, so that source data graphs sharing the first fact tuple are in a same second bucket; and generating final buckets by repeating the splitting a quantity of times, each iteration using another fact tuple for the first source entity that represents a distinct determinative relationship, so that source data graphs sharing the first fact tuple and the other fact tuples are in the same final bucket, wherein the cluster is one of the final buckets.

So this aspect of knowledge graph reconciliation involves understanding related entities, including some that may share the same name, and removing ambiguity from how they might be presented within a knowledge graph.

Another aspect of knowledge graph reconciliation may involve merging data, such as seeing when one of the versions of the movie “Planet of the Apes” has more than one actor who is in the movie and merging that information together to make the knowledge graph more complete. The image below from the patent shows how that can be done:

(Image from the patent: knowledge graph reconciliation of actors from Planet of the Apes)

The patent also tells us that discarding fact tuples that represent conflicting facts from a particular data source may take place as well. Some types of facts about entities have only one answer, such as the birthdate of a person or the release date of a movie. If more than one of those appears, they will be checked to see if one of them is wrong and should be removed. It is also possible that this may happen with inverse tuples, which the patent also tells us about.
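A sketch of that kind of conflict check, using made-up predicate names for the single-valued (“determinative”) facts:

```python
from collections import defaultdict

# Predicates assumed to have exactly one correct value per entity, so two
# different objects for the same subject/predicate pair signal a conflict.
DETERMINATIVE_PREDICATES = {"born on", "released in"}

def find_conflicts(facts):
    """facts: iterable of (subject, predicate, object) tuples from one source."""
    seen = defaultdict(set)
    for subject, predicate, obj in facts:
        if predicate in DETERMINATIVE_PREDICATES:
            seen[(subject, predicate)].add(obj)
    return {key: values for key, values in seen.items() if len(values) > 1}

conflicts = find_conflicts([
    ("Planet of the Apes", "released in", "1968"),
    ("Planet of the Apes", "released in", "2001"),  # same name, different film
])
print(conflicts)  # {('Planet of the Apes', 'released in'): {'1968', '2001'}} (set order may vary)
```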

Inverse Tuples Generated and Discarded

(Image from the patent: knowledge graph reconciliation of reverse tuples)

When a tuple is a subject-verb-object triple, what are known as inverse tuples may be generated. If we have fact tuples such as “Maryland is a state in the United States of America” and “California is a state in the United States of America,” we may generate inverse tuples such as “The United States of America has a state named Maryland” and “The United States of America has a state named California.”

Sometimes tuples generated from one source may conflict with tuples from another source when they are clustered by topic. An example comes from the recent trade deadline in Major League Baseball, when the right fielder Yasiel Puig was traded from the Cincinnati Reds to the Cleveland Indians. The tuple “Yasiel Puig plays for the Cincinnati Reds” conflicts with the tuple “The Cleveland Indians have a player named Yasiel Puig.” One of those tuples may be discarded during knowledge graph reconciliation.

There is a reliability threshold for tuples, and tuples that don’t meet it may be discarded as having insufficient evidence. For instance, a tuple that is only from one source may not be considered reliable and may be discarded. If there are three sources for a tuple that are all from the same domain, that may also be considered insufficient evidence, and that tuple may be discarded.
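A minimal sketch of such a reliability check; the thresholds here are illustrative, since the patent doesn’t publish exact numbers:

```python
from urllib.parse import urlparse

def is_reliable(tuple_sources, min_sources=2, min_domains=2):
    """A tuple supported by only one source, or by sources that all share one
    domain, is treated as having insufficient evidence and can be discarded."""
    if len(tuple_sources) < min_sources:
        return False
    domains = {urlparse(url).netloc for url in tuple_sources}
    return len(domains) >= min_domains

print(is_reliable(["https://a.example.com/page1"]))                        # False: one source
print(is_reliable(["https://news.example.com/1",
                   "https://news.example.com/2",
                   "https://news.example.com/3"]))                         # False: one domain
print(is_reliable(["https://news.example.com/1", "https://other.org/x"]))  # True
```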

Advantages of the Knowledge Graph Reconciliation Patent Process

  1. A data graph may be extended more quickly by identifying entities in documents and facts concerning the entities
  2. The entities and facts may be of high quality due to the corroborative nature of the graph reconciliation process
  3. The identified entities may be identified from news sources, to more quickly identify new entities to be added to the data graph
  4. Potential new entities and their facts may be identified from thousands or hundreds of thousands of sources, providing potential entities on a scale that is not possible with manual evaluation of documents
  5. Entities and facts added to the data graph can be used to provide more complete or accurate search results

The Knowledge Graph Reconciliation Patent can be found here:

Automatic discovery of new entities using graph reconciliation
Inventors: Oksana Yakhnenko and Norases Vesdapunt
Assignee: GOOGLE LLC
US Patent: 10,331,706
Granted: June 25, 2019
Filed: October 4, 2017

Abstract

Systems and methods can identify potential entities from facts generated from web-based sources. For example, a method may include generating a source data graph for a potential entity from a text document in which the potential entity is identified. The source data graph represents the potential entity and facts about the potential entity from the text document. The method may also include clustering a plurality of source data graphs, each for a different text document, by entity name and type, wherein at least one cluster includes the potential entity. The method may also include verifying the potential entity using the cluster by corroborating at least a quantity of determinative facts about the potential entity and storing the potential entity and the facts about the potential entity, wherein each stored fact has at least one associated text document.

Takeaways

The patent points out in one place that human evaluators may review additions to a knowledge graph. It is interesting to see how it can use sources such as news sources to add new entities and facts about those entities. Being able to use web-based news to add to the knowledge graph means that it isn’t relying upon human-edited sources such as Wikipedia to grow, and the knowledge graph reconciliation process was interesting to learn about as well.




SEO by the Sea ⚓


Robust Reporting with Google Sheets Query Function

August 14, 2019

The reporting steps and examples in this article will assist in automating your own complex report and creating dynamic tables to save your time and sanity.

Read more at PPCHero.com
PPC Hero


Facebook denies making contradictory claims on Cambridge Analytica and other ‘sketchy’ apps

August 13, 2019

Facebook has denied contradicting itself in evidence presented to the U.K. parliament and a U.S. public prosecutor.

Last month the Department for Digital, Culture, Media and Sport (DCMS) committee wrote to the company to raise what it said were discrepancies in evidence Facebook has given to international parliamentarians versus evidence submitted in response to the Washington, DC Attorney General — which is suing Facebook on its home turf over the Cambridge Analytica data misuse scandal.

Yesterday Bloomberg obtained Facebook’s response to the committee.

In the letter, Rebecca Stimson, the company’s head of U.K. public policy, denies any inconsistency in evidence submitted on both sides of the Atlantic, writing:

The evidence given to the Committees by Mike Schroepfer (Chief Technology Officer), Lord Allan (Vice President for Policy Solutions), and other Facebook representatives is entirely consistent with the allegations in the SEC Complaint filed 24 July 2019. In their evidence, Facebook representatives truthfully answered questions about when the company first learned of Aleksandr Kogan / GSR’s improper transfer of data to Cambridge Analytica, which was in December 2015 through The Guardian’s reporting. We are aware of no evidence to suggest that Facebook learned any earlier of that improper transfer.

As we have told regulators, and many media stories have since reported, we heard speculation about data scraping by Cambridge Analytica in September 2015. We have also testified publicly that we first learned Kogan sold data to Cambridge Analytica in December 2015. These are two different things and this is not new information.

Stimson goes on to claim that Facebook merely heard “rumours in September 2015 that Cambridge Analytica was promoting its ability to scrape user data from public Facebook pages.” (In statements made earlier this year to the press on this same point Facebook has also used the word “speculation” to refer to the internal concerns raised by its staff, writing that “employees heard speculation that Cambridge Analytica was scraping data.”)

In the latest letter, Stimson repeats Facebook’s earlier line about data scraping being common for public pages (which may be true, but plenty of Facebook users’ pages aren’t public to anyone other than their hand-picked friends so…), before claiming it’s not the same as the process by which Cambridge Analytica obtained Facebook data (i.e. by paying a developer on Facebook’s platform to build an app that harvested users’ and users’ friends’ data).

“The scraping of data from public pages (which is unfortunately common for any internet service) is different from, and has no relationship to, the illicit transfer to third parties of data obtained by an app developer (which was the subject of the December 2015 Guardian article and of Facebook representatives’ evidence),” she writes, suggesting a “sketchy” data modeling company with deep Facebook platform penetration looked like “business as usual” for Facebook management back in 2015.

As we’ve reported before, it has emerged this year — via submissions to other U.S. legal proceedings against Facebook — that staff working for its political advertising division raised internal concerns about what Cambridge Analytica was up to in September 2015, months prior to The Guardian article which Facebook founder Mark Zuckerberg has claimed is the point when he personally learned what Cambridge Analytica was doing on his platform.

These Facebook staff described Cambridge Analytica as a “sketchy (to say the least) data modeling company that has penetrated our market deeply” — months before the newspaper published its scoop on the story, per an SEC complaint which netted Facebook a $100 million fine, in addition to the FTC’s $5 billion privacy penalty.

Nonetheless, Facebook is once again claiming there’s nothing but “rumors” to see here.

The DCMS committee also queried Facebook’s flat denial to the Washington, DC Attorney General that the company knew of any other apps misusing user data; failed to take proper measures to secure user data by failing to enforce its own platform policy; and failed to disclose to users when their data was misused — pointing out that Facebook reps told it on multiple occasions that Facebook knew of other apps violating its policies and had taken action against them.

Again, Facebook denies any contradiction whatsoever here.

“The particular allegation you cite asserts that Facebook knew of third party applications that violated its policies and failed to take reasonable measures to enforce against them,” writes Stimson. “As we have consistently stated to the Committee and elsewhere, we regularly take action against apps and developers who violate our policies. We therefore appropriately, and consistently with what we told the Committee, denied the allegation.”

So, turns out, Facebook was only flat denying some of the allegations in para 43 of the Washington, DC Attorney General’s complaint. But the company doesn’t see bundling responses to multiple allegations under one blanket denial as in any way misleading…

In a tweet responding to Facebook’s latest denial, DCMS committee chair Damian Collins dubbed the company’s response “typically disingenuous” — before pointing out: “They didn’t previously disclose to us concerns about Cambridge Analytica prior to Dec 2015, or say what they did about it & haven’t shared results of investigations into other Apps.”

On the app audit issue, Stimson’s letter justifies Facebook’s failure to provide the DCMS committee with the requested information on other “sketchy” apps it’s investigating, writing this is because the investigation — which CEO Mark Zuckerberg announced in a Facebook blog post on March 21, 2018; saying then that it would “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; “ban any developer from our platform that does not agree to a thorough audit”; and ban any developers found to have misused user data; and “tell everyone affected by those apps” — is, er, “ongoing.”

More than a year ago Facebook did reveal that it had suspended around 200 suspicious apps out of “thousands” reviewed. However, updates on Zuckerberg’s great app audit have been thin on the ground since then, to say the least.

“We will update the Committee as we publicly share additional information about that extensive effort,” says Stimson now.


Social – TechCrunch


Xiaomi tops Indian smartphone market for eighth straight quarter

August 13, 2019

Xiaomi has now been India’s top smartphone seller for eight straight quarters. The company has become a constant headache for Samsung in the world’s second largest smartphone market as sales have slowed pretty much everywhere else in the world.

The Chinese electronics giant shipped 10.4 million handsets in the quarter that ended in June, commanding 28.3% of the market, research firm IDC reported Tuesday. Its closest rival, Samsung — which once held the top spot in India — shipped 9.3 million handsets in the nation during the same period, settling for a 25.3% market share.

Overall, 36.9 million handsets were shipped in India during the second quarter of this year, up 9.9% from the same period last year, IDC reported. This was the highest volume of handsets ever shipped in India for Q2, the research firm said.

As smartphone shipments slow or decline in most of the world, India has emerged as an outlier that continues to show strong momentum as tens of millions of people purchase their first handset in the country each quarter.

Research firm Counterpoint told TechCrunch that there are about 450 million smartphone users in India, up from about 350 million late last year and 300 million in late 2017. This growth has made India, home to more than 1.3 billion people, the fastest growing market worldwide.

Globally, meanwhile, smartphone shipments declined by 2.3% year-over-year in Q2 2019, according to IDC.

Chinese phone makers Vivo and Oppo, both of which spent lavishly in marketing during the recent local favorite cricket season in India, also expanded their base in the country. Vivo had 15.1% of the local market share, up from 12.6% in Q2 2018, while Oppo’s share grew from 7.6% to 9.7% during the same period. The market share of Realme, which has gained following after it started to replicate some of Xiaomi’s early models, also shot up, moving from 1.2% in Q2 2018 to 7.7% in Q2 2019.


Samsung showroom demonstrator seen showing the features of new S10 Smartphone during the launching ceremony (Photo by Avishek Das/SOPA Images/LightRocket via Getty Images)

The key to gaining market share in India has remained unchanged over the years: better specs at lower prices. The average selling price of a handset in the quarter that ended in June this year was $159. Seventy-eight percent of the 36.9 million phones that shipped in India during this period sported a sticker price below $200, IDC said.

That’s not to say that phones priced above $200 don’t have a market in India. Per IDC, the fastest growing smartphone segment in the nation was priced between $200 and $300, witnessing 105.2% growth over the same period last year.

Smartphones priced between $400 and $600 were the second-fastest growing segment in the country, with 16.1% growth since the same period last year. Chinese phone maker OnePlus assumed 63.6% of this premium segment, followed by Apple (which has less than 2% of the overall local market share) and Samsung.

Feature phones, which have long held a crucial position in India’s handset market, continue to maintain a significant footprint, though their popularity is beginning to wane: 32.4 million feature phones shipped in India during Q2 this year, down 26.3% from the same period last year.

Xiaomi versus Samsung

India has become Xiaomi’s biggest market. It entered the country five years ago, and for the first two, relied mostly on selling handsets online to cut overhead. But the company has since established and expanded its presence in the brick and mortar market, which continues to account for much of the sales in the country.

Earlier this month, the Chinese phone maker said it had set up its 2,000th Mi Home store in India. It is on track to have a presence in 10,000 physical stores in the country by the end of the year, and expects to see half of its sales come from the offline market by that time frame.

Samsung has stepped up its game in India in the last two years as well. The company, which opened the world’s largest phone factory in the country last year, has ramped up production of its Galaxy A series of smartphones, which are aimed at budget-conscious customers, and conceptualized a similar series that includes the Galaxy M10, M20 and M30 smartphone models for the Indian market. The Galaxy A series handsets drove much of the growth for the company, IDC said.

Even as it lags behind Xiaomi, Samsung shipped more handsets in Q2 2019 compared to Q2 2018 (9.3 million versus 8 million) and its market share grew from 23.9% to 25.3% during the same period.

“The vendor was also offering attractive channel schemes to clear the stocks of Galaxy J series. Galaxy M series (exclusive online till the end of 2Q19) saw price reductions, which helped retain the 13.5% market share in the online channel in 2Q19 for Samsung,” IDC said.

But the South Korean giant continues to have a tough time passing Xiaomi, which continues to maintain low profit margins (Xiaomi says it only makes 5% profit on any hardware it sells). Xiaomi has also expanded its local production efforts in India and created more than 10,000 jobs in the country, more than 90% of which have been filled by women.

Gadgets – TechCrunch


How to Filter out Bots and Spiders from Google Analytics

August 13, 2019

A common misconception is that Google Analytics, or any other JavaScript-based web analytics solution, filters out spiders and bots automatically. This was true until a few years ago, because most spiders and bots were not capable of executing JavaScript and hence were never captured by JavaScript-based web analytics solutions. As shown in 4 reasons why your bounce rate might be wrong, these days bots and spiders can execute JavaScript and hence show up in your web analytics reports.

Google Analytics has released a new feature that will let you filter out known spiders and bots. Here are a few things to keep in mind:

  1. The filter only applies to data from the day you enable this setting onward. It won’t be applied to data that has already been processed.
  2. Since this will filter out bots, you might notice a drop in your visits, page views, etc.

 

Here are the steps to filter out spiders and bots:

  1. Go to the Admin section of your Google Analytics report
  2. Click the “View” section and choose the right report view
  3. Click on “View Settings” (see Image 1 below)
  4. Check the box under “Bot Filtering” that says “Exclude all hits from known bots and spiders” (see Image 2 below)
  5. Click the “Save” button at the bottom and you are done.

 

Image 1

Image 2
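Google has tied this setting to the IAB/ABC International Spiders & Bots List. Purely as a conceptual sketch of what that kind of filtering does (not Google’s implementation, and with a made-up signature list), user-agent-based bot exclusion might look like this:

```python
KNOWN_BOT_SIGNATURES = [            # a stand-in for the IAB/ABC list
    "googlebot", "bingbot", "ahrefsbot", "semrushbot", "headlesschrome",
]

def is_known_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(signature in ua for signature in KNOWN_BOT_SIGNATURES)

def exclude_bot_hits(hits):
    """hits: dicts with a 'user_agent' key; bot hits are dropped before they
    would be counted, mirroring how the view setting only affects data
    collected after it is enabled."""
    return [hit for hit in hits if not is_known_bot(hit["user_agent"])]

sample_hits = [
    {"page": "/home", "user_agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/76.0"},
    {"page": "/home", "user_agent": "Mozilla/5.0 (compatible; Googlebot/2.1)"},
]
print(exclude_bot_hits(sample_hits))  # only the human visit remains
```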


Google Analytics Premium