

The case against behavioral advertising is stacking up

January 22, 2019

No one likes being stalked around the Internet by adverts. It’s the uneasy joke you can’t enjoy laughing at. Yet vast people-profiling ad businesses have made pots of money off of an unregulated Internet by putting surveillance at their core.

But what if creepy ads don’t work as claimed? What if all the filthy lucre that’s currently being sunk into the coffers of ad tech giants — and far less visible but no less privacy-trampling data brokers — is literally being sunk, and could be both more honestly and far better spent?

Case in point: This week Digiday reported that the New York Times managed to grow its ad revenue after it cut off ad exchanges in Europe. The newspaper did this in order to comply with the region’s updated privacy framework, GDPR, which includes a regime of supersized maximum fines.

The newspaper business decided it simply didn’t want to take the risk, so first blocked all open-exchange ad buying on its European pages and then nixed behavioral targeting. The result? A significant uptick in ad revenue, according to Digiday’s report.

“NYT International focused on contextual and geographical targeting for programmatic guaranteed and private marketplace deals and has not seen ad revenues drop as a result, according to Jean-Christophe Demarta, SVP for global advertising at New York Times International,” it writes.

“Currently, all the ads running on European pages are direct-sold. Although the publisher doesn’t break out exact revenues for Europe, Demarta said that digital advertising revenue has increased significantly since last May and that has continued into early 2019.”

It also quotes Demarta summing up the learnings: “The desirability of a brand may be stronger than the targeting capabilities. We have not been impacted from a revenue standpoint, and, on the contrary, our digital advertising business continues to grow nicely.”

So while (of course) not every publisher is the NYT, publishers that have or can build brand cachet, and pull in a community of engaged readers, should pause for thought — and ask who really wins from the notion that digitally served ads must creep on consumers to work.

The NYT’s experience also casts fresh doubt on long-running efforts by tech giants like Facebook to press publishers to give up more control and ownership of their audiences by serving and even producing content directly for third-party platforms. (Pivot to video, anyone?)

Such efforts benefit platforms because they get to make media businesses dance to their tune. But the self-serving nature of pulling publishers away from their own distribution channels (and content convictions) looks to have an even baser string to its bow — as a cynical means of weakening the link between publishers and their audiences, and so risking leaving them falsely reliant on adtech intermediaries squatting in the middle of the value chain.

There are other signs that behavioral advertising might be a gigantically self-serving con too.

Look at non-tracking search engine DuckDuckGo, for instance, which has been making a profit by serving keyword-based ads and not profiling users since 2014, all the while continuing to grow usage — and doing so in a market that’s dominated by search giant Google.

DDG recently took in $10M in VC funding from a pension fund that believes there’s an inflection point in the online privacy story. These investors are also displaying strong conviction in the soundness of the underlying (non-creepy) ad business, again despite the overbearing presence of Google.

Meanwhile, Internet users continue to express widespread fear and loathing of the ad tech industry’s bandwidth- and data-sucking practices by running into the arms of ad blockers. Figures for usage of ad-blocking tools step up each year, with between a quarter and a third of U.S. connected device users estimated to be blocking ads as of 2018 (rates are higher among younger users).

Ad-blocking firm Eyeo, maker of the popular Adblock Plus product, has achieved such a position of leverage that it gets Google et al to pay it to have their ads whitelisted by default — under its self-styled ‘acceptable ads’ program. (Though no one will say how much they’re paying to circumvent default ad blocks.)

So the creepy ad tech industry is not above paying other third parties for continued — and, at this point, doubly grubby (given the ad blocking context) — access to eyeballs. Does that sound even slightly like a functional market?

In recent years, expressions of disgust and displeasure have also been coming from the ad-spending side — triggered by brand-denting scandals attached to the hateful stuff algorithms have been serving shiny marketing messages alongside. You don’t even have to be worried about what this stuff might be doing to democracy to be a concerned advertiser.

Fast-moving consumer goods giants Unilever and Procter & Gamble are two big spenders that have expressed concerns. The former threatened to pull ad spend if social network giants didn’t clean up their act and prevent their platforms from algorithmically accelerating hateful and divisive content.

The latter has been actively reevaluating its marketing spend, taking a closer look at what digital actually does for it. Last March Adweek reported it had slashed $200M from its digital ad budget yet seen its reach grow 10 per cent, reinvesting the money into areas with “media reach”, including television, audio and ecommerce.

The company’s CMO, Marc Pritchard, declined to name which companies it had pulled ads from, but in a speech at an industry conference he said it had reduced spending “with several big players” by 20 per cent to 50 per cent, and still its business grew.

So chalk up another tale of reduced reliance on targeted ads yielding unexpected business uplift.

At the same time, academics are digging into the opaque question of who really benefits from behavioral advertising, and perhaps getting closer to an answer.

Last fall, at an FTC hearing on the economics of big data and personal information, Carnegie Mellon University professor of IT and public policy Alessandro Acquisti teased a piece of yet-to-be-published research, carried out with a large U.S. publisher that provided the researchers with millions of transactions to study.

Acquisti said the research showed that behaviorally targeted advertising had increased the publisher’s revenue, but only marginally. At the same time, they found that marketers were having to pay many times more to buy these targeted ads, despite the minuscule additional revenue they generated for the publisher.

“What we found was that, yes, advertising with cookies — so targeted advertising — did increase revenues — but by a tiny amount. Four per cent. In absolute terms the increase in revenues was $0.000008 per advertisement,” Acquisti told the hearing. “Simultaneously we were running a study, as merchants, buying ads with a different degree of targeting. And we found that for the merchants sometimes buying targeted ads over untargeted ads can be 500% times as expensive.”

“How is it possible that for merchants the cost of targeting ads is so much higher whereas for publishers the return on increased revenues for targeted ads is just 4%,” he wondered, posing a question that publishers should really be asking themselves — given, in this example, they’re the ones doing the dirty work of snooping on (and selling out) their readers.
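Some back-of-the-envelope arithmetic, using only the figures cited above plus an assumed merchant CPM, shows how lopsided that spread is (a rough sketch, not the study’s methodology):

```python
# Back-of-the-envelope arithmetic from the figures cited above (illustrative only).
uplift_per_ad = 0.000008      # extra publisher revenue per targeted ad (USD)
uplift_share = 0.04           # that uplift as a share of revenue (4%)

baseline_per_ad = uplift_per_ad / uplift_share
print(f"Implied untargeted revenue per ad: ${baseline_per_ad:.6f}")  # ~$0.0002, i.e. ~$0.20 per 1,000 ads

# Assumed buyer side: if an untargeted impression costs a merchant $1.00 CPM and a
# targeted one can cost up to ~5x as much (the '500%' figure), the extra spend dwarfs
# the extra $0.000008 the publisher sees; the gap sits with the intermediaries.
untargeted_cpm, targeting_multiple = 1.00, 5
extra_cost_per_impression = untargeted_cpm * (targeting_multiple - 1) / 1000
print(f"Extra merchant cost per impression (assumed): ${extra_cost_per_impression:.4f}")
print(f"Extra publisher revenue per impression:       ${uplift_per_ad:.6f}")
```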

Acquisti also made the point that a lack of data protection creates economic winners and losers, arguing this is unavoidable — thereby undercutting the oft-parroted tech industry lobby line that privacy regulation is a bad idea because it would benefit an already dominant group of players. The rebuttal is that a lack of privacy rules does that too. And that’s exactly where we are now.

“There is a sort of magical thinking happening when it comes to targeted advertising [that claims] everyone benefits from this,” Acquisti continued. “Now at first glance this seems plausible. The problem is that upon further inspection you find there is very little empirical validation of these claims… What I’m saying is that we actually don’t know very well to what extent these claims are true and false. And this is a pretty big problem because so many of these claims are accepted uncritically.”

There’s clearly far more research that needs to be done to robustly interrogate the effectiveness of targeted ads against platform claims, and versus more vanilla types of advertising (i.e. ads that don’t demand reams of personal data to function). But the fact that such robust research hasn’t been done is itself interesting.

Acquisti noted the difficulty of researching “opaque blackbox” ad exchanges that aren’t at all incentivized to be transparent about what’s going on, also pointing out that Facebook has sometimes admitted to making mistakes that significantly inflated its ad engagement metrics.

His wider point is that much current research into the effectiveness of digital ads is problematically narrow, and so misses the broader picture of how consumers might engage with alternative, less privacy-hostile types of marketing.

In a nutshell, then, the problem is a lack of transparency from ad platforms; and that lack of transparency serves the selfsame opaque giants.

But there’s more. Critics of the current system point out it relies on mass scale exploitation of personal data to function, and many believe this simply won’t fly under Europe’s tough new GDPR framework.

They are applying legal pressure via a set of GDPR complaints, filed last fall, that challenge the legality of a fundamental piece of the (current) adtech industry’s architecture: Real-time bidding (RTB); arguing the system is fundamentally incompatible with Europe’s privacy rules.

We covered these complaints last November but the basic argument is that bid requests essentially constitute systematic data breaches because personal data is broadcast widely to solicit potential ad buys and thereby poses an unacceptable security risk — rather than, as GDPR demands, people’s data being handled in a way that “ensures appropriate security”.

To spell it out, the contention is the entire behavioral advertising business is illegal because it’s leaking personal data at such vast and systematic scale it cannot possibly comply with EU data protection law.
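For readers who haven’t seen one, here is a simplified, purely illustrative sketch of the kind of personal data a single bid request can broadcast to hundreds of would-be buyers. Field names loosely follow the OpenRTB convention, but the values and exact structure are invented for illustration:

```python
# Illustrative only: a simplified bid request of the kind an ad exchange broadcasts
# to many bidders at once. Field names loosely follow the OpenRTB convention; the
# values and overall shape are invented for this sketch.
bid_request = {
    "id": "auction-123",
    "site": {"page": "https://example-news-site.com/health/depression-treatments"},
    "device": {
        "ip": "203.0.113.42",                     # user's IP address
        "geo": {"lat": 51.5072, "lon": -0.1276},  # fairly precise location
        "ua": "Mozilla/5.0 (iPhone; ...)",        # device fingerprinting material
    },
    "user": {
        "id": "exchange-user-7f3a",               # persistent pseudonymous ID
        "buyeruid": "dsp-cookie-match-id",        # cookie-matched ID for the buyer
    },
}

# The complaints' core point: this bundle of identifiers, location and browsing
# context goes out to every participating bidder on every page load, with no
# technical control over what losing bidders then do with it.
```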

Regulators are considering the argument, and courts may follow. But it’s clear adtech systems that have operated in opaque darkness for years, with no worry of major compliance fines, no longer have the luxury of being able to take their architecture as a given.

Greater legal risk might be catalyst enough to encourage a market shift towards less intrusive targeting; ads that aren’t targeted based on profiles of people synthesized from heaps of personal data but, much like DuckDuckGo’s contextual ads, are only linked to a real-time interest and a generic location. No creepy personal dossiers necessary.
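To show how little machinery that alternative needs, here is a minimal sketch of contextual ad selection under assumed inventory and matching rules, where the ad is picked from the page’s keywords and a coarse region with no user profile involved:

```python
# Minimal sketch of contextual ad selection (invented inventory and rules).
ADS = [
    {"creative": "Hiking boots sale", "keywords": {"hiking", "outdoors", "boots"}, "regions": {"UK", "DE"}},
    {"creative": "Cloud backup for photographers", "keywords": {"camera", "photography"}, "regions": {"*"}},
]

def pick_ad(page_keywords, region):
    """Return the ad whose keywords best match this page and region; no user profile needed."""
    eligible = [ad for ad in ADS if "*" in ad["regions"] or region in ad["regions"]]
    scored = [(len(ad["keywords"] & page_keywords), ad) for ad in eligible]
    best_score, best_ad = max(scored, key=lambda pair: pair[0], default=(0, None))
    return best_ad if best_score > 0 else None

print(pick_ad({"best", "hiking", "trails", "boots"}, region="UK"))  # -> the hiking boots ad
```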

If Acquisti’s research is to be believed — and here’s the kicker for Facebook et al — there’s little reason to think such ads would be substantially less effective than the vampiric microtargeted variant that Facebook founder Mark Zuckerberg likes to describe as “relevant”.

The ‘relevant ads’ badge is of course a self-serving concept which Facebook uses to justify creeping on users while also pushing the notion that its people-tracking business inherently generates major extra value for advertisers. But does it really do that? Or are advertisers buying into another puffed up fake?

Facebook isn’t providing access to internal data that could be used to quantify whether its targeted ads are really worth all the extra conjoined cost and risk, while the company’s habit of buying masses of additional data on users, via brokers and other third-party sources, makes for a rather strange qualification, suggesting things aren’t quite what you might imagine behind Zuckerberg’s drawn curtain.

Behavioral ad giants are facing growing legal risk on another front. The adtech market has long been referred to as a duopoly, on account of the proportion of digital ad spending that gets sucked up by just two people-profiling giants: Google and Facebook (the pair accounted for 58% of the market in 2018, according to eMarketer data) — and in Europe a number of competition regulators have been probing the duopoly.

Earlier this month the German Federal Cartel Office was reported to be on the brink of partially banning Facebook from harvesting personal data from third party providers (including but not limited to some other social services it owns). Though an official decision has yet to be handed down.

In March 2018, the French Competition Authority published a meaty opinion raising multiple concerns about the online advertising sector — calling for an overhaul and a rebalancing of transparency obligations to address publisher concerns that dominant platforms aren’t providing them with access to data about their own content.

The EC’s competition commissioner, Margrethe Vestager, is also taking a closer look at whether data hoarding constitutes a monopoly, and has expressed the view that, rather than breaking companies up in order to control platform monopolies, the better way to go in the modern ICT era might be to limit access to data — suggesting another potentially looming legal headwind for personal data-sucking platforms.

At the same time, the political risks of social surveillance architectures have become all too clear.

Whether microtargeted political propaganda works as intended or not is still a question mark. But few would support letting attempts to fiddle elections just go ahead and happen anyway.

Yet Facebook has rushed to normalize what are abnormally hostile uses of its tools; aka the weaponizing of disinformation to further divisive political ends — presenting ‘election security’ as just another day-to-day cost of being in the people-farming business, when the ‘cost’ for democracies and societies is anything but normal.

Whether or not voters can be manipulated en masse via the medium of targeted ads, the act of targeting itself certainly has an impact — by fragmenting the shared public sphere which civilized societies rely on to drive consensus and compromise. Ergo, unregulated social media is inevitably an agent of antisocial change.

The solution to technology threatening democracy is far more transparency: regulating platforms so that we can understand how, why and where data is flowing, and thus get a proper handle on impacts in order to shape desired outcomes.

Greater transparency also offers a route to begin to address commercial concerns about how the modern adtech market functions.

And if and when ad giants are forced to come clean — about how they profile people; where data and value flows; and what their ads actually deliver — you have to wonder what if anything will be left unblemished.

People who know they’re being watched alter their behavior. Similarly, platforms may find behavioral change enforced upon them, from above and below, when it becomes impossible for everyone else to ignore what they’re doing.




Facebook fears no FTC fine

January 19, 2019

Reports emerged today that the FTC is considering a fine against Facebook that would be the largest ever from the agency. Even if it were 10 times the size of the largest, a $22.5 million bill sent to Google in 2012, the company would basically laugh it off. Facebook is made of money. But the FTC may make it provide something it has precious little of these days: accountability.

A Washington Post report cites sources inside the agency (currently on hiatus due to the shutdown) saying that regulators have “met to discuss imposing a record-setting fine.” We may as well say here that this must be taken with a grain of salt at the outset; that Facebook is non-compliant with terms set previously by the FTC is an established fact, so how much they should be made to pay is the natural next topic of discussion.

But how much would it be? The scale of the violation is hugely negotiable. Our summary of the FTC’s settlement requirements for Facebook indicates that it was:

  • barred from making misrepresentations about the privacy or security of consumers’ personal information;
  • required to obtain consumers’ affirmative express consent before enacting changes that override their privacy preferences;
  • required to prevent anyone from accessing a user’s material more than 30 days after the user has deleted his or her account;
  • required to establish and maintain a comprehensive privacy program designed to address privacy risks associated with the development and management of new and existing products and services, and to protect the privacy and confidentiality of consumers’ information; and
  • required, within 180 days, and every two years after that for the next 20 years, to obtain independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order, and to ensure that the privacy of consumers’ information is protected.

How many of those did it break, and how many times? Is it per user? Per account? Per post? Per offense? What is “accessing” under such and such a circumstance? The FTC is no doubt deliberating these things.

Yet it is hard to imagine them coming up with a number that really scares Facebook. A hundred million dollars is a lot of money, for instance. But Facebook took in more than $13 billion in revenue last quarter. Double that fine, triple it, and Facebook bounces back.
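Some rough, purely illustrative arithmetic underlines the point (the hypothetical fine below simply scales the 2012 Google penalty by ten, and the revenue figure is the quarterly number cited above):

```python
# Rough arithmetic on why even a record fine barely registers (illustrative figures only).
largest_ftc_fine = 22_500_000          # 2012 Google fine, USD
hypothetical_fine = 10 * largest_ftc_fine
quarterly_revenue = 13_000_000_000     # "more than $13 billion" last quarter

share_of_quarter = hypothetical_fine / quarterly_revenue
days_of_revenue = share_of_quarter * 90            # roughly 90-day quarter

print(f"Fine: ${hypothetical_fine:,}")                             # $225,000,000
print(f"Share of one quarter's revenue: {share_of_quarter:.1%}")   # ~1.7%
print(f"Roughly {days_of_revenue:.1f} days of revenue")            # ~1.6 days
```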

If even a fine 10 times the size of the largest it has ever levied can’t faze the target, what can the FTC do to scare Facebook into playing by the book? Make it do what it’s already supposed to be doing, but publicly.

How many ad campaigns is a user’s data being used for? How many internal and external research projects? How many copies are there? What data specifically and exactly is it collecting on any given user, how is that data stored, who has access to it, to whom is it sold or for whom is it aggregated or summarized? What is the exact nature of the privacy program it has in place, who works for it, who do they report to and what are their monthly findings?

These and dozens of other questions come immediately to mind as things Facebook should be disclosing publicly in some way or another, either directly to users in the case of how one’s data is being used, or in a more general report, such as what concrete measures are being taken to prevent exfiltration of profile data by bad actors, or how user behavior and psychology is being estimated and tracked.

Not easy or convenient questions to answer at all, let alone publicly and regularly. But if the FTC wants the company to behave, it has to impose this level of responsibility and disclosure. Because, as Facebook has already shown, it cannot be trusted to disclose it otherwise. Light touch regulation is all well and good… until it isn’t.

This may in fact be such a major threat to Facebook’s business — imagine having to publicly state metrics that are clearly at odds with what you tell advertisers and users — that it might attempt to negotiate a larger initial fine in order to avoid punitive measures such as those outlined here. Volkswagen spent billions not on fines, but on a sort of punitive community service to mitigate the effects of its emissions cheating. Facebook too could be made to shell out in this indirect way.

What the FTC is capable of requiring from Facebook is an open question, since the scale and nature of these violations are unprecedented. But whatever they come up with, the part with a dollar sign in front of it — however many places it goes to — will be the least of Facebook’s worries.




Facebook finds and kills another 512 Kremlin-linked fake accounts

January 17, 2019

Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.

In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.

In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.

One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.

“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

Sputnik link

Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found what looked like innocuous or general interest pages to be linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.

“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”

Facebook has included some sample posts from the removed accounts in the blog which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to obviously militaristic and political protest imagery.

In all Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop; adding that around 790,000 accounts followed one or more of the removed Pages.

It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).

“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”

These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.

Facebook adds that open source reporting and work by partners which investigate disinformation helped identify the network. (For more on the open source investigation check out this blog post from DFRLab.)

It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.

Ukraine tip-off

In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip off from U.S. law enforcement.

In all it says around 180,000 Facebook accounts were following one or more of the removed pages, while the fake Instagram accounts were followed by more than 55,000 accounts.

Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)

“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”

In the Ukraine case it says it found no Events being hosted by the pages.

“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”

A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.

This week it also emerged the company is extending some of its nascent election security measures by bringing in requirements for political advertisers to more international markets ahead of major elections in the coming months, such as checks that a political advertiser is located in the country.

However, in other countries that also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.




Instagram caught selling ads to follower-buying services it banned

January 15, 2019

Instagram has been earning money from businesses flooding its social network with spam notifications. Instagram hypocritically continues to sell ad space to services that charge clients for fake followers or that automatically follow/unfollow other people to get them to follow the client back. This is despite Instagram reiterating a ban on these businesses in November and threatening the accounts of people who employ them.

A TechCrunch investigation initially found 17 services openly advertising on Instagram that sell fake followers or automated notification spam for luring in followers, despite blatantly violating the network’s policies. This demonstrates Instagram’s failure to adequately police its app and ad platform. That neglect led to users being distracted by notifications for follows and Likes generated by bots or fake accounts. Instagram raked in revenue from these services while they diluted the quality of Instagram notifications and wasted people’s time.

In response to our investigation, Instagram tells me it’s removed all ads as well as disabled all the Facebook Pages and Instagram accounts of the services we reported were violating its policies. Pages and accounts that themselves weren’t in violation but whose ads were have been banned from advertising on Facebook and Instagram. However, a day later TechCrunch still found ads from two of these services on Instagram, and discovered five more companies paying to promote policy-violating follower growth services.

This raises a big question about whether Instagram properly protects its community from spammers. Why would it take a journalist’s investigation to remove these ads and businesses that brazenly broke Instagram’s rules when the company is supposed to have technical and human moderation systems in place? The Facebook-owned app’s quest to “move fast” to grow its user base and business seems to have raced beyond what its watchdogs could safeguard.

Hunting Spammers

I first began this investigation a month ago after being pestered with Instagram Stories ads by a service called GramGorilla. The slicked-back hipster salesman boasted about how many followers he gained with the service and how I could pay to do the same. The ads linked to the website of a division of Krends Marketing, where for $46 to $126 per month it promised to score me 1,000 to 2,500 Instagram followers.

Some apps like this sell followers directly, though these are typically fake accounts. They might boost your follower count (unless they’re detected and terminated) but won’t actually engage with your content or help your business, and end up dragging down your metrics so Instagram shows your posts to fewer people. But I discovered that GramGorilla/Krends and the majority of apps selling Instagram audience growth do something even worse.

You give these scammy businesses your Instagram username and password, plus some relevant topics or demographics, and they automatically follow and unfollow, like, and comment on strangers’ Instagram profiles. The goal is to generate notifications those strangers will see in hopes that they’ll get curious or want to reciprocate and so therefore follow you back. By triggering enough of this notification spam, they trick enough strangers to follow you to justify the monthly subscription fee.

That pissed me off. Facebook, Instagram, and other social networks send enough real notifications as is, growth hacking their way to more engagement, ad views, and daily user counts. But at least they have to weigh the risk of annoying you so much that you turn off notifications all together. Services that sell followers don’t care if they pollute Instagram and ruin your experience as long as they make money. They’re classic villains in the ‘tragedy of the commons’ of our attention.

This led me to start cataloging these spam company ads, and I was startled by how many different ones I saw. Soon, Instagram’s ad targeting and retargeting algorithms were backfiring, purposefully feeding me ads for similar companies that also violated Instagram’s policies.

The 17 services selling followers or spam that I originally indexed were Krends Marketing / GramGorilla, SocialUpgrade, MagicSocial, EZ-Grow, Xplod Social, Macurex, GoGrowthly, Instashop / IG Shops, TrendBee, JW Social Media Marketing, YR Charisma, Instagrocery, SocialSensational, SocialFuse, WeGrowSocial, IGWildfire, and GramFlare. TrendBee and GramFlare were found to still be running Instagram ads after the platform said they’d been banned from doing so. Upon further investigation after Instagram’s supposed crackdown, I discovered five more services selling prohibited growth services: FireSocial, InstaMason/IWentMissing, NexStore2019, InstaGrow, and Servantify.

Knowingly Poisoning The Well

I wanted to find out if these companies were aware that they violate Instagram’s policies and how they justify generating spam. Most hide their contact info and merely provide a customer support email, but eventually I was able to get on the phone with some of the founders.

“What we’re doing is obviously against their terms of service,” said GoGrowthly’s co-founder, who refused to provide their name. “We’re going in and piggybacking off their free platform and not giving them any of the revenue. Instagram doesn’t like us at all. We utilize private proxies depending on clients’ geographic location. That’s sort of our trick to reduce any sort of liability” so clients’ accounts don’t get shut down, they said. “It’s a careful line that we tread with Instagram. Similar to SEO companies and Google, Google wants the best results for customers and customers want the best results for them. There’s a delicate dance,” said Macurex founder Gun Hudson.

EZ-Grow’s co-founder Elon refused to give his last name on the record, but told me “[Clients] always need something new. At first it was follows and likes. Now we even watch Stories for them. Every new feature that Instagram has we take advantage of it to make more visibility for our clients.” He says EZ-Grow spends $500 per day on Instagram ads, which are its core strategy for finding new customers. SocialFuse founder Alexander Heit says his company spends a couple hundred dollars per day on Instagram and Facebook ads, and was worried when Instagram reiterated its ban on his kind of service in November, but says “We thought that we were definitely going to get shut down but nothing has changed on our end.”

Several of the founders tried to defend their notification spam services by saying that at least they weren’t selling fake followers. Lacking any self-awareness, Macurex’s Hudson said “If it’s done the wrong way it can ruin the user experience. There are all sorts of marketers who will market in untasteful or spammy ways. Instagram needs to keep a check on that.” GoGrowthly’s founder actually told me “We’re actually doing good for the community by generating those targeted interactions.” WeGrowSocial’s co-founder Brandon also refused to give his last name, but was willing to rat out his competitor SocialSensational for selling followers.

Only EZ-Grow’s Elon seemed to have a moment of clarity. “Because the targeting goes to the right people . . . and it’s something they would like, it’s not spam” he said before his epiphany. “People can also look at it as spam, maybe.”

Instagram Finally Shuts Down The Spammers

In response to our findings, an Instagram spokesperson provided this lengthy statement confirming it’s shut down the ads and accounts of the violators we discovered, claiming that it works hard to fight spam, and admitting it needs to do better:

“Nobody likes receiving spammy follows, likes and comments. It’s really important to us that the interactions people have on Instagram are genuine, and we’re working hard to keep the community free from spammy behavior. Services that offer to boost an account’s popularity via inauthentic likes, comments and followers, as well as ads that promote these services, aren’t allowed on Instagram. We’ve taken action on the services raised in this article, including removing violating ads, disabling Pages and accounts, and stopping Pages from placing further ads. We have various systems in place that help us catch and remove these types of ads before anyone sees them, but given the number of ads uploaded to our platform every day, there are times when some still manage to slip through. We know we have more to do in this area and we’re committed to improving.”

Instagram tells me it uses machine learning tools to identify accounts that pay third-party apps to boost their popularity and claims to remove inauthentic engagement before it reaches the recipient of the notifications. By nullifying the results of these services, Instagram believes users will have less incentive to use them. It uses automated systems to evaluate the images, captions, and landing pages of all its ads before they run, and sends some to human moderators. It claims this lets it catch most policy-violating ads, and that users can report those it misses.

But these ads and their associated accounts were filled with terms like “get followers”, “boost your Instagram followers”, “real followers”, “grow your engagement”, “get verified”, “engagement automation”, and other terms tightly linked to policy-violating services. That casts doubt on just how hard Instagram was working on this problem. It may have simply relied on cheap and scalable technical approaches to catching services with spam bots or fake accounts instead of properly screening ads or employing sufficient numbers of human moderators to police the network.
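To underline how low the bar was, even a naive keyword screen over ad copy would have flagged the phrases quoted above. The phrase list and logic below are illustrative assumptions, not Instagram’s actual review system:

```python
# Illustrative sketch: a naive keyword screen over ad text. The phrase list and
# matching rule are assumptions for illustration, not Instagram's real pipeline.
FLAGGED_PHRASES = [
    "get followers", "boost your instagram followers", "real followers",
    "grow your engagement", "get verified", "engagement automation",
]

def flag_ad(ad_text):
    """Return the prohibited-growth phrases found in an ad's text."""
    text = ad_text.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in text]

print(flag_ad("Get verified and boost your Instagram followers today!"))
# -> ['boost your instagram followers', 'get verified']
```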

That misplaced dependence on AI and other tech solutions appears to be a trend in the industry. When I recently reported that child sexual abuse imagery was easy to find on WhatsApp and Microsoft Bing, both seemed to be understaffing the human moderation team that could have hunted down this illegal content with common sense where complex algorithms failed. As with Instagram, these products have highly profitable parent companies who can afford to pour more dollars in policy enforcement.

Kicking these services off Instagram is an important step, but the company must be more proactive. Social networks and self-serve ad networks have been treated as efficient cash cows for too long. The profits from these products should be reinvested in policing them. Otherwise, crooks will happily fleece users for our money and attention.

To learn more about the future of Instagram, check out this article’s author Josh Constine’s SXSW 2019 keynote with Instagram co-founders Kevin Systrom and Mike Krieger — their first talk together since leaving the company.




Daily Crunch: Bing has a child porn problem

January 12, 2019

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Microsoft Bing not only shows child pornography, it suggests it

A TechCrunch-commissioned report has found damning evidence on Microsoft’s search engine. Our findings show a massive failure on Microsoft’s part to adequately police its Bing search engine and to prevent its suggested searches and images from assisting pedophiles.

2. Unity pulls nuclear option on cloud gaming startup Improbable, terminating game engine license

Unity, the widely popular gaming engine, has pulled the rug out from underneath U.K.-based cloud gaming startup Improbable and revoked its license — effectively shutting them out from a top customer source. The conflict arose after Unity claimed Improbable broke the company’s Terms of Service and distributed Unity software on the cloud.

3. Improbable and Epic Games establish $25M fund to help devs move to ‘more open engines’ after Unity debacle

Just when you thought things were going south for Improbable, the company inked a late-night deal with Unity competitor Epic Games to establish a fund geared toward open gaming engines. This raises the question of how Unity and Improbable’s relationship managed to sour so quickly after this public debacle.

4. The next phase of WeChat 

WeChat boasts more than 1 billion daily active users, but user growth is starting to hit a plateau. That’s been expected for some time, but it is forcing the Chinese juggernaut to build new features to generate more time spent on the app to maintain growth.

5. Bungie takes back its Destiny and departs from Activision 

The creator behind games like Halo and Destiny is splitting from its publisher Activision to go its own way. This is good news for gamers, as Bungie will no longer be under the strict deadlines of a big gaming studio that plagued the launch of Destiny and its sequel.

6. Another server security lapse at NASA exposed staff and project data

The leaking server was — ironically — a bug-reporting server, running the popular Jira bug triaging and tracking software. In NASA’s case, the software wasn’t properly configured, allowing anyone to access the server without a password.

7. Is Samsung getting serious about robotics? 

This week Samsung made a surprise announcement during its CES press conference and unveiled three new consumer and retail robots and a wearable exoskeleton. It was a pretty massive reveal, but the company’s look-but-don’t-touch approach raised far more questions than it answered.




Zuckerberg’s 2019 challenge is to hold public talks on tech & society

January 8, 2019

Rather than just focus on Facebook’s problems like his 2018 challenge, this year Mark Zuckerberg wants to give transparency to his deliberations and invite the views of others. Today he announced his 2019 challenge will be “to host a series of public discussions about the future of technology in society — the opportunities, the challenges, the hopes, and the anxieties.” He plans to hold the talks with different leaders, experts and community members in a variety of formats and venues, though they’ll all be publicly viewable from his Facebook and Instagram accounts or traditional media.

This isn’t the first time Zuckerberg has held a series of public talks. He ran community Q&A sessions in 2014 and 2015 to take questions directly from his users. The idea for Facebook Reactions for expressing emotions beyond “Likes” first emerged during those talks.

From his initial framing of the 2019 challenge, though, it already sounds like Zuckerberg sees more Facebook as the answer to many of the issues facing society. He asks, “There are so many big questions about the world we want to live in and technology’s place in it. Do we want technology to keep giving more people a voice, or will traditional gatekeepers control what ideas can be expressed? Should we decentralize authority through encryption or other means to put more power in people’s hands? In a world where many physical communities are weakening, what role can the internet play in strengthening our social fabric?”

The implied answers there are “people should have a voice through Facebook,” “people should use Facebook’s encrypted chat app WhatsApp,” and “people should collaborate through Facebook Groups.” Hopefully the talks will also address how too much social media can impact polarization, self-image and focus.

[Update: Zuckerberg asked me in the comments of his posts for some format and speaker suggestions. My ideas include:

  • A formal debate between him and a civil but pointed critic.
  • An independent moderator asking him questions with no pre-brief and/or selecting questions from public submissions.
  • A talk where he’s challenged to never say the word “Facebook” while discussing larger issues facing society & technology.
  • A mythbusting talk where he addresses the biggest Facebook conspiracy theories.
  • An open discussion between him and Jack Dorsey.
  • A referendum where he asks or is asked questions where the public can select from multiple-choice answers, with him then discussing the publicly visible tallies.
  • A discussion with an early employee like Ruchi Sanghvi, Leah Pearlman or Naomi Gleit about how Facebook’s culture and priorities have changed.
  • A talk with Bill Gates and Warren Buffett on longitudinal approaches to philanthropy.
  • A round-table with high-achieving high school students about the next generation’s concerns about privacy and the internet.
  • A talk with the heads of Messenger (Stan Chudnovsky), Instagram (Adam Mosseri), and WhatsApp (Chris Daniels) about how the arms of the company work together.
  • A panel with top Facebook Group and Page admins about what the app’s most dedicated users want from the product.]

It’s nice that one of the de facto leaders of the world will shed more light on his thoughts. But given Zuckerberg is prone to sticking to his talking points, the public would benefit from talks held by moderators who don’t give the CEO all the questions ahead of time.

Hearing Zuckerberg’s candid thoughts on the inherent trade-offs of “bringing the world closer together” or “making the world more open and connected” could help users determine whose interests he has at heart.

Zuckerberg’s past challenges have been:

2009 – Wear a neck tie every day

2010 – Learn Mandarin Chinese

2011 – Only eat animals he killed himself

2012 – Write code every day

2013 – Meet a new person who isn’t a Facebook employee every day

2014 – Write a thank-you note every day

2015 – Read a new book every two weeks

2016 – Build an artificial intelligence home assistant like Iron Man’s Jarvis

2017 – Visit all 50 states he hadn’t already to meet and talk to people

2018 – Fix Facebook’s problems




Indonesia unblocks Tumblr following its ban on adult content

December 29, 2018

Indonesia, the world’s fourth largest country by population, has unblocked Tumblr nine months after it blocked the social networking site over pornographic content.

Tumblr — which, disclaimer, is owned by Verizon Media Group (formerly Oath), just like TechCrunch — announced earlier this month that it would remove all “adult content” from its platform. That decision, which angered many in the adult entertainment industry who valued the platform as an increasingly rare outlet that supported erotica, was a response to Apple removing Tumblr’s app from the iOS App Store after child pornography was found within the service.

The impact of this new policy has made its way to Indonesia, where KrAsia reports that the service was unblocked earlier this week. The service had been blocked in March after falling foul of the country’s anti-pornography laws.

“Tumblr sent an official statement regarding the commitment to clean the platform from pornographic content,” Ferdinandus Setu, acting head of the Ministry of Communication and Informatics Bureau, is reported to have said in a press statement.

Messaging apps WhatsApp and Line are among the other services that have been forced to comply with the government’s ban on “unsuitable” content in order to keep their services open in the country. Telegram, meanwhile, removed suspected terrorist content last year after its service was partially blocked.

While perhaps not widely acknowledged in the West, Indonesia is a huge market, with a population of more than 260 million people. The world’s largest Muslim-majority country, it is the largest economy in Southeast Asia and its growth is tipped to help triple the region’s digital economy to $240 billion by 2025.

In other words, Indonesia is a huge market for internet companies.

The country’s anti-porn laws have been used to block as many as 800,000 websites as of 2017 — so potentially over a million by now — but they have also been used to take aim at gay dating apps, some of which have been removed from the Google Play Store. As Vice notes, “while homosexuality is not illegal in Indonesia, it’s no secret that the country has become a hostile place for the LGBTQ community.”




People lost their damn minds when Instagram accidentally went horizontal

December 28, 2018

Earlier today, when Instagram suddenly transformed into a landscape-oriented Tinder-esque nightmare, the app’s dedicated users extremely lost their minds and immediately took to Twitter to be vocal about it.

As we reported, the company admitted that the abrupt shift from Instagram’s well-established vertical scrolling was a mistake. The mea culpa came quickly enough, but Instagram’s accidental update was already solidified as one of the last meme-able moments of 2018.

Why learn about the thing itself and why it happened when you could watch the meta-story play out in frantic, quippy tweets, all vying for relevance as we slide toward 2019’s horrific gaping maw? If you missed it the first time around, here you go.

A handful of memes even managed to incorporate another late-2018 meme, Sandra Bullock in Bird Box — a Netflix original that is not a birds-on-demand service, we are told.

Unupdate might not be a word, but it is absolutely a state of mind.

For better or worse, the Met got involved with what we can only assume is a Very Important Artifact for the cause.

But can we ever really go back? Can we unsee a fate so great, one still looming on some distant social influencer shore? Probably yeah, but that doesn’t mean we won’t all lose it if it happens again.




My product launch wishlist for Instagram, Twitter, Uber and more

December 26, 2018

‘Twas the night before Xmas, and all through the house, not a feature was stirring from the designer’s mouse . . . Not Twitter! Not Uber, Not Apple or Pinterest! On Facebook! On Snapchat! On Lyft or on Insta! . . . From the sidelines I ask you to flex your code’s might. Happy Xmas to all if you make these apps right.

Instagram

See More Like This – A button on feed posts that when tapped inserts a burst of similar posts before the timeline continues. Want to see more fashion, sunsets, selfies, food porn, pets, or Boomerangs? Instagram’s machine vision technology and metadata would gather them from people you follow and give you a dose. You shouldn’t have to work through search, hashtags, or the Explore page, nor permanently change your feed by following new accounts. Pinterest briefly had this feature (and should bring it back) but it’d work better on Insta.

Web DMs – Instagram’s messaging feature has become the de facto place for sharing memes and trash talk about people’s photos, but it’s stuck on mobile. For all the college kids and entry-level office workers out there, this would make being stuck on laptops all day much more fun. Plus, youth culture truthsayer Taylor Lorenz wants Instagram web DMs too.

Upload Quality Indicator – Try to post a Story video or Boomerang from a crummy internet connection and they turn out a blurry mess. Instagram should warn us if our signal strength is low compared to what we usually have (since some places it’s always mediocre) and either recommend we wait for Wi-Fi, or post a low-res copy that’s replaced by the high-res version when possible.

Oh, and if new VP of product Vishal Shah is listening, I’d also like Bitmoji-style avatars, plus a better way to discover accounts that shows a selection of their recent posts and their bio, instead of the single post with no context you get in Explore, which is better for discovering content than accounts.

Twitter

DM Search – Ummm, this is pretty straightforward. It’s absurd that you can’t even search DMs by person, let alone keyword. Twitter knows messaging is a big thing on mobile right? And DMs are one of the most powerful ways to get in contact with mid-level public figures and journalists. PS: My DMs are open if you’ve got a news tip — @JoshConstine.

Unfollow Suggestions – Social networks are obsessed with getting us to follow more people, but do a terrible job of helping us clean up our feeds. With Twitter bringing back the option to see a chronological feed, we need unfollow suggestions more than ever. It should analyze who I follow but never click, fave, reply to, retweet, or even slow down to read and ask if I want to nix them. I asked for this 5 years ago and the problem has only gotten worse. Since people feel like their feeds are already overflowing, they’re stingy with following new people. That’s partly why you see accounts get only a handful of new followers when their tweets go viral and are seen by millions. I recently had a tweet with 1.7 million impressions and 18,000 Likes that drove just 11 follows. Yes I know that’s a self-own.
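For what it’s worth, the core logic isn’t complicated; here’s a rough sketch with invented data and thresholds, not Twitter’s actual systems:

```python
# Rough sketch of unfollow suggestions: rank followed accounts by how little you
# actually engage with them. Data and threshold are invented for illustration; a
# real system would also use impressions and dwell time.
from collections import Counter

follows = ["@newsbot", "@oldcolleague", "@favoriteauthor"]
engagements = Counter({            # taps, faves, replies, retweets per account
    "@favoriteauthor": 42,
    "@oldcolleague": 1,
    # @newsbot: never interacted with
})

def unfollow_suggestions(follows, engagements, threshold=2):
    """Suggest accounts you follow but essentially never engage with."""
    return sorted(a for a in follows if engagements[a] < threshold)

print(unfollow_suggestions(follows, engagements))  # -> ['@newsbot', '@oldcolleague']
```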

Analytics Benchmarks – If Twitter wants to improve conversation quality, it should teach us what works. Twitter offers analytics about each of your tweets, but not in context of your other posts. Did this drive more or fewer link clicks or follows than my typical tweet? That kind of info could guide users to create more compelling content.

Facebook

(Obviously we could get into Facebook’s myriad problems here. A less sensationalized feed that doesn’t reward exaggerated claims would top my list. Hopefully its plan to downrank “borderline content” that almost violates its policies will help when it rolls out.)

Batched Notifications – Facebook sends way too many notifications. Some are downright useless and should be eliminated. “14 friends responded to events happening tomorrow”? “Someone’s fundraiser is halfway to its goal?” Get that shit out of here. But there are other notifications I want to see that aren’t urgent or crucial to know about individually. Facebook should let us decide to batch notifications so we’d only get one of a certain type every 12 or 24 hours, or only when a certain number of similar ones are triggered. I’d love a digest of posts to my Groups or Events from the past day rather than every time someone opens their mouth.


Notifications In The “Time Well Spent” Feature – Facebook tells you how many minutes you spent on it each day over the past week and on average, but my total time on Facebook matters less to me than how often it interrupts my life with push notifications. The “Your Time On Facebook” feature should show how many notifications of each type I’ve received, which ones I actually opened, and let me turn off or batch the ones I want fewer of.

Oh, and for Will Cathcart, Facebook’s VP of apps, can I also get proper syncing so I don’t rewatch the same Stories on Instagram and Facebook, the ability to invite people to Events on mobile based on past invite lists of those I’ve hosted or attended, and the See More Like This feature I recommended for Instagram?

Uber/Lyft/Ridesharing

“Quiet Ride” Button – Sometimes you’re just not in the mood for small talk. Had a rough day, need to get work done, or want to just zone out? Ridesharing apps should offer a quiet ride request: if the driver opts in via a preset and accepts before you get in, you pay them an extra dollar (or get it free as a loyalty perk), and you get ferried to your destination without unnecessary conversation. I get that it’s a bit dehumanizing for the driver, but I’d bet some would happily take a little extra cash for the courtesy.

“I Need More Time” Button – Sometimes you overestimate the ETA and suddenly your car is arriving before you’re ready to leave. Instead of cancelling and rebooking a few minutes later, frantically rushing so you don’t miss your window and get smacked with a no-show fee, or making the driver wait while they and the company aren’t getting paid, Uber, Lyft, and the rest should offer the “I Need More Time” button that simply rebooks you a car that’s a little further away.

Spotify/Music Streaming Apps

Scan My Collection – I wish I could just take photos of the album covers, spines, or even discs of my CD or record collection and have them instantly added to a playlist or folder. It’s kind of sad that after lifetimes of collecting physical music, most of it now sits on a shelf and we forget to play what we used to love. Music apps want more data on what we like, and it’s just sitting there gathering dust. There’s obviously some fun viral potential here too. Let me share what’s my most embarrassing CD. For me, it’s my dual copies of Limp Bizkit’s “Significant Other” because I played the first one so much it got scratched.

Friends Weekly – Spotify ditched its in-app messaging, third-party app platform, and other ways to discover music so its playlists would decide what becomes a hit, in order to exert leverage over the record labels and negotiate better deals. But music discovery is inherently social, and the little desktop ticker of what friends are playing doesn’t cut it. Spotify should let me choose to recommend my new favorite song, or agree to let it share what I’ve recently played most, and put those into a Discover Weekly-style social playlist of what friends are listening to.

Snapchat

Growth – I’m sorry, I had to.

Bulk Export Memories – But seriously, Snapchat is shrinking. That’s worrisome because some users’ photos and videos are trapped on its Memories cloud hosting feature that’s supposed to help free up space on your phone. But there’s no bulk export option, meaning it could take hours of saving shots one at a time to your camera roll if you needed to get off of Snapchat, if for example it was shutting down, or got acquired, or you’re just bored of it.

Add-On Cameras – Snapchat’s Spectacles are actually pretty neat for recording first-person or underwater shots in a circular format. But otherwise they don’t do much more, and in some ways do much less, than your phone’s camera, and are a long way from being a Magic Leap competitor. That’s why, if Snapchat really wants to become a “Camera Company”, it should build sleek add-on cameras that augment our phone’s hardware. Snap previously explored selling a 360-camera but never launched one. A little Giroptic iO-style 360 lens that attaches to your phone’s charging port could let you capture a new kind of content that really makes people feel like they’re there with you. An Aukey Aura-style zoom lens attachment that easily fits in your pocket, unlike a DSLR, could also be a hit.

iOS

Switch Wi-Fi/Bluetooth From Control Center – I thought the whole point of Control Center was one touch access, but I can only turn on or off the Wi-Fi and Bluetooth. It’s silly having to dig into the Settings menu to switch to a different Wi-Fi network or Bluetooth device, especially as we interact with more and more of them. Control Center should unfurl a menu of networks or devices you can choose from.

Shoot GIFs – Live Photos are a clumsy proprietary format. Instagram’s Boomerang nailed what we want out of live-action GIFs, and we should be able to shoot them straight from the iOS camera and export them as actual GIFs that can be used across the web. Give us some extra GIF settings and iPhones could have a new reason for teens to choose them over Androids.
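For the curious, here’s a minimal, hypothetical sketch (in Swift, using Apple’s ImageIO framework) of what the export step could look like: writing a handful of already-captured frames out as a real, looping GIF file. The function name and parameters are invented, and extracting the frames from a Live Photo or Boomerang-style burst is left out.

```swift
import Foundation
import CoreGraphics
import ImageIO
import UniformTypeIdentifiers

// Hypothetical sketch: write already-captured frames out as a looping animated GIF.
// Where `frames` comes from (Live Photo, burst, short video) is omitted here.
func writeGIF(frames: [CGImage], frameDelay: TimeInterval, to url: URL) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.gif.identifier as CFString, frames.count, nil
    ) else { return false }

    // Loop count 0 means loop forever, which is what a Boomerang-style GIF wants.
    let fileProperties: [String: Any] = [
        kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: 0]
    ]
    CGImageDestinationSetProperties(destination, fileProperties as CFDictionary)

    // The per-frame delay controls playback speed.
    let frameProperties: [String: Any] = [
        kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: frameDelay]
    ]
    for frame in frames {
        CGImageDestinationAddImage(destination, frame, frameProperties as CFDictionary)
    }
    return CGImageDestinationFinalize(destination)
}
```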

Gradual Alarms – Anyone else have a heart attack whenever they hear their phone’s Alarm Clock ringtone? I know I do, because I leave my alarms so loud that I’ll never miss them, but I end up being rudely shocked awake. A setting that gradually increases the iOS Alarm Clock’s volume every 15 seconds or every minute would rouse me gently, only getting louder if I refuse to get up.
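The mechanics here are trivial. As a rough, hypothetical sketch of the idea (in Swift, assuming a bundled "alarm.caf" sound file and invented names), a third-party alarm app could fade a looping sound up from a whisper instead of starting at full blast; the real ask is for Apple to build a ramp option like this into the stock Clock app.

```swift
import AVFoundation

// Hypothetical sketch of a "gradual alarm": start the sound barely audible and
// fade it up to full volume over a chosen interval instead of blasting at once.
final class GradualAlarm {
    private var player: AVAudioPlayer?

    func start(rampDuration: TimeInterval = 60) throws {
        // Assumes an "alarm.caf" sound file is bundled with the app.
        guard let url = Bundle.main.url(forResource: "alarm", withExtension: "caf") else { return }
        let player = try AVAudioPlayer(contentsOf: url)
        player.numberOfLoops = -1                           // keep ringing until dismissed
        player.volume = 0.05                                // start at a whisper
        player.play()
        player.setVolume(1.0, fadeDuration: rampDuration)   // ramp up gently over the interval
        self.player = player
    }

    func stop() {
        player?.stop()
        player = nil
    }
}
```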

Maybe some of these apply to Android, but I wouldn’t know because I’m a filthy casual iPhoner. Send me your Android suggestions, as well as what else you want to see added to your favorite apps.

[Image Credit: Hanson Inc]




How Juul made vaping viral to become worth a dirty $38 billion

December 22, 2018 No Comments

A Juul is not a cigarette. It’s much easier than that. Through devilishly slick product design, which I’ll discuss here, the startup has massively lowered the barrier to getting hooked on nicotine. Juul has dismantled every deterrent to taking a puff.

The result is both a new $38 billion valuation, thanks to a $12.8 billion investment this week from Marlboro-maker Altria, and an explosion in the popularity of vaping amongst teenagers and the rest of the population. Game recognize game, and Altria’s game is nicotine addiction. It knows it’s been one-upped by Juul’s tactics, so it’s hedged its own success by handing the startup over a tenth of the public corporation’s market cap in cash.

Juul argues it can help people switch from obviously dangerous smoking to supposedly healthier vaping. But in reality, the tiny aluminum device helps people switch from nothing to vaping…which can lead some to start smoking the real thing. A study found it causes more people to pick up cigarettes than put them down.

Photographer: Gabby Jones/Bloomberg via Getty Images

How fast has Juul swept the nation? Nielsen says it controls 75 percent of the U.S. e-cigarette market, up from 27 percent in September of last year. In the year since then, the CDC says, the percentage of high school students who’ve used an e-cigarette in the last 30 days has grown 75 percent. That’s 3 million teens, or roughly 20 percent of all high school kids. CNBC reports that Juul’s 2018 revenue could be around $1.5 billion.

Health consequences aside, Juul makes it radically simple to pick up a lifelong vice. Parents, regulators, and potential vapers need to understand why Juul works so well if they’re to have any hope of resisting its temptations.

Shareable

It’s tough to try a cigarette for the first time. The heat and smoke burn your throat. The taste is harsh and overwhelming. The smell coats your fingers and clothes, marking you as a smoker. There’s pressure to smoke a whole one lest you waste the tobacco. Even if you want to try a friend’s, they have to ignite one first. And unlike bigger box mod vaporizers, where you customize the temperature and e-juice, a Juul doesn’t make you look like some dorky hardcore vapelord.

Juul is much gentler on your throat. The taste is milder and can be masked with flavors. The vapor doesn’t stain you with a smell as quickly. You can try just a single puff from a friend’s at a bar or during a smoking break with no pressure to inhale more. The elegant, discreet form factor doesn’t brand you as a serious vape user. It’s casual. Yet the public gesture and the clouds people exhale are still eye-catching enough to trigger the questions, “What’s that? Can I try?” There’s a whole other article to be written about how Juul memes and Instagram Stories glamorizing the nicotine dispensers contributed to the device’s spread.

And perhaps most insidiously, vaping seems healthier. A lifetime of anti-smoking ads and warning labels drilled the dangers into our heads. But how much harm could a little vapor do?

A friend who had never smoked tells me he burns through a full Juul pod per day now. Someone got him to try a single puff at a nightclub. Soon he was asking for a drag off of strangers’ Juuls. Then he bought one and never looked back. He’d been around cigarettes at parties his whole life but never got into them. Juul made it too effortless to resist.

Concealable

Lighting up a cigarette is a garish activity prohibited in many places. Not so with discreetly sipping from a Juul.

Cigarettes often aren’t allowed to be smoked inside. Hiding one is no easy feat and can get you kicked out. You need to have a lighter and play with fire to get one started. They can get crushed or damp in your pocket. The burning tip makes them unruly in tight quarters, and the butt or falling ash can damage clothing and make a mess. You smoke a cigarette because you really want to smoke a cigarette.

Public establishments are still figuring out how to handle Juuls and other vaporizers. Many places that ban smoking don’t explicitly do the same for vaping. The less stinky vapor and more discreet motion make it easy to hide. Outside of airplanes, you could probably play dumb and say you didn’t know the rules if you did get caught. The metal stick is hard to break. You won’t singe anyone. There’s no mess, no need for an ashtray, and no holes in your jackets or couches.

As long as your battery is charged, there’s no need for extra equipment, and you won’t draw attention like you would with a lighter. Battery life is a major concern for heavy Juulers that smokers don’t have to worry about, but I know people who now carry a giant portable charger just to keep their Juul alive. There’s also a network effect developing. Similar to iPhone cords, Juuls are becoming common enough that you can often conveniently borrow a battery stick or charger from another user.

And again, the modular ability to take as few or as many puffs as you want lets you absent-mindedly Juul at any moment: at your desk, on the dance floor, as you drive, or even in bed. A friend’s nieces and nephews say they see fellow teens Juul in class by concealing the device in the cuff of their sleeve. No kid would be so brazen as to try to smoke a cigarette in the middle of a math lesson.

Distributable

Gillette pioneered the brilliant razor and blade business model. Buy the sometimes-discounted razor, and you’re compelled to keep buying the expensive proprietary blades. Dollar Shave Club leveled up the strategy by offering a subscription that delivers the consumable blades to your door. Juul combines both with a product that’s physically addictive.

When you finish a pack of cigarettes, you could be done smoking. There’s nothing left. But with Juul you’ve still got the $35 battery pack when you finish vaping a pod. There’s a sunk cost fallacy goading you to keep buying the pods to get the most out of your investment and stay locked into the Juul ecosystem.

(Photo by Scott Olson/Getty Images)

One of Juul’s few virality disadvantages compared to cigarettes is that the devices aren’t as ubiquitously available. Some stores that sell cigs just don’t carry them yet. But more and more shops are picking them up, which will continue with Altria’s help. And Juul offers an “auto-ship” delivery option that knocks $2 off the $16 pack of four pods, so you don’t even have to think about buying more. Catch the urge to quit? Well, you’ve got pods on the way, so you might as well use them. Whether due to regulation or a lack of innovation, I couldn’t find subscription delivery options for traditional cigarettes.

And for minors that want to buy Juuls or Juul pods illegally, their tiny size makes them easy to smuggle and resell. A recent South Park episode featured warring syndicates of fourth-graders selling Juul pods to even younger kids.

Dishonorable

Juul co-founder James Monsees told the San Jose Mercury News that “The first phase is proving the value and creating a product that makes cigarettes obsolete.” But notice he didn’t say Juul wants to make nicotine obsolete or reduce the number of people addicted to it.

Juul co-founder James Monsees

If Juul actually cared about fighting addiction, it’d offer a regimen for weaning yourself off of nicotine. Yet it doesn’t sell low-dose or no-dose pods that could help people quit entirely. In the US it only sells 5% and 3% nicotine versions. It does make 1.7% pods for foreign markets like Israel, where that’s the maximum legal strength, yet it refuses to sell them in the States. Along with taking over $12 billion from one of the largest cigarette companies, that makes the mission statement ring hollow.

Juul is the death stick business as usual, but strengthened by the product design and virality typically reserved for Apple and Facebook.

