Author: CBPO

New Webinar! Getting Relevant Traffic with Quora Ads

January 20, 2019

New webinar with Hanapin’s Emma Franks, Hero Conf speakers JD Prater and Joe Martinez, and SEMrush’s Alex Ponomareva. Learn how to significantly increase your visibility, engagement, and lead generation on Quora.

Read more at PPCHero.com
PPC Hero


Salesforce is building new tower in Dublin and adding hundreds of jobs

January 19, 2019

Salesforce put the finishing touches on a tower in San Francisco last year. In October, it announced Salesforce Tower in Atlanta. Today, it was Dublin’s turn. Everyone gets a tower.

Salesforce first opened an office in Dublin back in 2001, and has since expanded to 1,400 employees. Today’s announcement represents a significant commitment to expand even further, adding 1,500 new jobs over the next five years.

The new tower in Dublin will actually be a campus made up of four interconnected buildings on the River Liffey. It will eventually encompass 430,000 square feet, with the first employees expected to move into the new facility sometime in the middle of 2021.

Artist’s rendering of Salesforce Tower Dublin rooftop garden. Picture: Salesforce

Martin Shanahan, CEO of IDA Ireland, the state agency responsible for attracting foreign investment to Ireland, called this one of the largest single jobs announcements in his organization’s 70-year history.

As with all things Salesforce, the company will do this up big, with an “immersive video lobby” and a hospitality space for Salesforce employees, customers and partners. This space, which will be known as the “Ohana Floor,” will also be available for use by nonprofits. The company also plans to build paths along the river that will connect the campus to the city center.

Artist’s rendering of Salesforce Tower Dublin lobby. Picture: Salesforce

The company intends to make the project “one of the most sustainable building projects to-date” in Dublin, according to a statement announcing the project. What does that mean? It will, among other things, be a nearly Net Zero Energy building and it will use 100 percent renewable energy, including onsite solar panels.

Finally, as part of the company’s commitment to the local communities in which it operates, it announced a $1 million grant to Educate Together, an education nonprofit. The grant should help the organization expand its mission of running equality-based schools. Salesforce has been supporting the group since 2009 with software grants, as well as a program where Salesforce employees volunteer at some of the organization’s schools.


Enterprise – TechCrunch


Curate Your Audience to Get Better Results from Your Digital Campaigns

January 19, 2019

Knowing your audience is an important part of your digital marketing strategy. These tips will help you sharpen your audience targeting.

Read more at PPCHero.com
PPC Hero


Facebook fears no FTC fine

January 19, 2019

Reports emerged today that the FTC is considering a fine against Facebook that would be the largest ever from the agency. Even if it were 10 times the size of the largest, a $22.5 million bill sent to Google in 2012, the company would basically laugh it off. Facebook is made of money. But the FTC may make it provide something it has precious little of these days: accountability.

A Washington Post report cites sources inside the agency (currently on hiatus due to the shutdown) saying that regulators have “met to discuss imposing a record-setting fine.” We may as well say at the outset that this must be taken with a grain of salt; still, that Facebook is non-compliant with terms the FTC set previously is an established fact, so how much it should be made to pay is the natural next topic of discussion.

But how much would it be? The scale of the violation is hugely negotiable. Our summary of the FTC’s settlement requirements for Facebook indicates that the company was:

  • barred from making misrepresentations about the privacy or security of consumers’ personal information;
  • required to obtain consumers’ affirmative express consent before enacting changes that override their privacy preferences;
  • required to prevent anyone from accessing a user’s material more than 30 days after the user has deleted his or her account;
  • required to establish and maintain a comprehensive privacy program designed to address privacy risks associated with the development and management of new and existing products and services, and to protect the privacy and confidentiality of consumers’ information; and
  • required, within 180 days, and every two years after that for the next 20 years, to obtain independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order, and to ensure that the privacy of consumers’ information is protected.

How many of those did it break, and how many times? Is it per user? Per account? Per post? Per offense? What is “accessing” under such and such a circumstance? The FTC is no doubt deliberating these things.

Yet it is hard to imagine them coming up with a number that really scares Facebook. A hundred million dollars is a lot of money, for instance. But Facebook took in more than $13 billion in revenue last quarter. Double that fine, triple it, and Facebook bounces back.

If even a fine 10 times the size of the largest it has ever levied can’t faze the target, what can the FTC do to scare Facebook into playing by the book? Make it do what it’s already supposed to be doing, but publicly.

How many ad campaigns is a user’s data being used for? How many internal and external research projects? How many copies are there? What data specifically and exactly is it collecting on any given user, how is that data stored, who has access to it, to whom is it sold or for whom is it aggregated or summarized? What is the exact nature of the privacy program it has in place, who works for it, who do they report to and what are their monthly findings?

These and dozens of other questions come immediately to mind as things Facebook should be disclosing publicly in some way or another, either directly to users in the case of how one’s data is being used, or in a more general report, such as what concrete measures are being taken to prevent exfiltration of profile data by bad actors, or how user behavior and psychology is being estimated and tracked.

Not easy or convenient questions to answer at all, let alone publicly and regularly. But if the FTC wants the company to behave, it has to impose this level of responsibility and disclosure. Because, as Facebook has already shown, it cannot be trusted to disclose it otherwise. Light-touch regulation is all well and good… until it isn’t.

This may in fact be such a major threat to Facebook’s business — imagine having to publicly state metrics that are clearly at odds with what you tell advertisers and users — that it might attempt to negotiate a larger initial fine in order to avoid punitive measures such as those outlined here. Volkswagen spent billions not on fines, but on a sort of punitive community service to mitigate the effects of its emissions cheating. Facebook, too, could be made to shell out in this indirect way.

What the FTC is capable of requiring from Facebook is an open question, since the scale and nature of these violations are unprecedented. But whatever they come up with, the part with a dollar sign in front of it — however many places it goes to — will be the least of Facebook’s worries.


Social – TechCrunch


Google starts pulling unvetted Android apps that access call logs and SMS messages

January 19, 2019

Google is removing apps from Google Play that request permission to access call logs and SMS text message data but haven’t been manually vetted by Google staff.

The search and mobile giant said the move is part of an effort to cut down on apps that have access to sensitive calling and texting data.

Google said in October that Android apps would no longer be allowed to use the legacy permissions, part of a wider push for developers to use newer, more secure and privacy-minded APIs. Many apps request access to call logs and texting data to verify two-factor authentication codes, for social sharing, or to replace the phone dialer. But Google acknowledged that this level of access can be, and has been, abused by developers who misuse the permissions to gather sensitive data or mishandle it altogether.
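
For the two-factor authentication case, one of the alternatives Google points developers to is the SMS Retriever API, which hands the app only the single verification message (matched by an app-specific hash) and needs no SMS permissions at all. Here is a minimal Kotlin sketch, assuming the play-services-auth dependency is on the classpath; the receiver name and the extractOneTimeCode helper are ours, invented for illustration:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import com.google.android.gms.auth.api.phone.SmsRetriever
import com.google.android.gms.common.api.CommonStatusCodes
import com.google.android.gms.common.api.Status

// Receives the verification SMS via Play services instead of READ_SMS.
// Register it for SmsRetriever.SMS_RETRIEVED_ACTION; the app never sees
// the user's inbox, only the one message carrying its app-specific hash.
class OtpReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action != SmsRetriever.SMS_RETRIEVED_ACTION) return
        val extras = intent.extras ?: return
        when ((extras.get(SmsRetriever.EXTRA_STATUS) as Status).statusCode) {
            CommonStatusCodes.SUCCESS -> {
                val message = extras.getString(SmsRetriever.EXTRA_SMS_MESSAGE)
                val code = message?.let(::extractOneTimeCode)
                // ...hand the code to the app's verification flow...
            }
            CommonStatusCodes.TIMEOUT -> {
                // No matching SMS arrived in time; fall back to manual entry.
            }
        }
    }
}

// Start a retrieval window, then ask the server to send the SMS.
fun startSmsListener(context: Context) {
    SmsRetriever.getClient(context)
        .startSmsRetriever()
        .addOnSuccessListener { /* trigger the server-side SMS send */ }
        .addOnFailureListener { /* Play services unavailable; fall back */ }
}

// Hypothetical helper: pull the first six-digit group out of the message.
fun extractOneTimeCode(message: String): String? =
    Regex("""\b\d{6}\b""").find(message)?.value
```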

“Our new policy is designed to ensure that apps asking for these permissions need full and ongoing access to the sensitive data in order to accomplish the app’s primary use case, and that users will understand why this data would be required for the app to function,” wrote Paul Bankhead, Google’s director of product management for Google Play.

Any developer wanting to retain the ability to ask a user’s permission for calling and texting data has to fill out a permissions declaration.

Google will review the app and why it needs to retain access, weighing several considerations: why the developer is requesting access, the user benefit of the feature that’s requesting access, and the risks associated with having access to call and texting data.

Bankhead conceded that under the new policy, some use cases will “no longer be allowed,” rendering some apps obsolete.

So far, tens of thousands of developers have submitted new versions of their apps that no longer need the call and texting permissions, Google said, or have submitted a permissions declaration.

Developers with a submitted declaration have until March 9 to receive approval or remove the permissions. In the meantime, Google has a full list of permitted use cases for the call log and text message permissions, as well as alternatives.

The last two years alone have seen several high-profile cases of Android apps or other services leaking or exposing call and text data. In late 2017, the popular Android keyboard ai.type exposed a massive database of 31 million users, including 374 million phone numbers.

Mobile – TechCrunch



Facebook finds and kills another 512 Kremlin-linked fake accounts

January 17, 2019

Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.

In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.

In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.

One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.

“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

Sputnik link

Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found what looked like innocuous or general interest pages to be linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.

“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”

Facebook has included some sample posts from the removed accounts in the blog which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to obviously militaristic and political protest imagery.

In all, Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop, adding that around 790,000 accounts followed one or more of the removed Pages.

It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).

“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”

These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.

Facebook adds that open source reporting and work by partners which investigate disinformation helped identify the network. (For more on the open source investigation check out this blog post from DFRLab.)

It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.

Ukraine tip-off

In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip-off from U.S. law enforcement.

In all, it says around 180,000 Facebook accounts were following one or more of the removed Pages, while the fake Instagram accounts were being followed by more than 55,000 accounts.

Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)

“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”

In the Ukraine case it says it found no Events being hosted by the pages.

“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”

A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.

This week it also emerged that the company is extending some of its nascent election security measures, bringing requirements for political advertisers to more international markets ahead of major elections in the coming months, such as checks that a political advertiser is located in the country.

However, in other countries that also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.


Social – TechCrunch


Robots learn to grab and scramble with new levels of agility

January 17, 2019

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; it may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as able to be grabbed either by an ordinary pincer grip or with a suction cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.
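
To make the decision rule concrete, here is a toy Kotlin sketch: score a candidate view with both a pincer model and a suction model, and commit to whichever tool is more confident. Every name and the threshold are invented for illustration; the Berkeley system’s actual models are large learned networks, not this little interface.

```kotlin
// Toy sketch of the two-gripper decision rule described above.
data class DepthImage(val pixels: FloatArray, val width: Int, val height: Int)

interface GraspQualityModel {
    // Returns a confidence in [0, 1] that this grasp mode will succeed.
    fun score(image: DepthImage): Float
}

enum class Tool { PINCER, SUCTION }

class GraspPlanner(
    private val pincerModel: GraspQualityModel,
    private val suctionModel: GraspQualityModel,
    private val minConfidence: Float = 0.5f, // invented threshold
) {
    // Pick the tool whose learned model is most confident, or give up
    // (e.g. nudge the pile and re-image) if neither clears the threshold.
    fun plan(image: DepthImage): Tool? {
        val pincer = pincerModel.score(image)
        val suction = suctionModel.score(image)
        if (maxOf(pincer, suction) < minConfidence) return null
        return if (pincer >= suction) Tool.PINCER else Tool.SUCTION
    }
}
```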

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell what exactly Dex-Net 4.0 is actually basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot, but they taught it an amazing new trick: getting up from a fall. Any fall.

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
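
The underlying loop is simple to state, even if the simulation doing the heavy lifting is not: evaluate many candidate behaviors, keep the ones that work, perturb them, and repeat. A minimal select-and-mutate sketch follows, with the policy encoding, fitness function, and all parameters invented for illustration; the ETH Zurich work uses far more sophisticated learning than this.

```kotlin
import kotlin.random.Random

// A "policy" here is just a parameter vector, e.g. weights of a small controller.
typealias Policy = DoubleArray

fun mutate(p: Policy, noise: Double = 0.05): Policy =
    DoubleArray(p.size) { i -> p[i] + Random.nextDouble(-noise, noise) }

fun evolve(
    simulateFitness: (Policy) -> Double, // runs one simulated episode, returns a score
    populationSize: Int = 1000,
    generations: Int = 200,
    policySize: Int = 64,
): Policy {
    var population = List(populationSize) {
        DoubleArray(policySize) { Random.nextDouble(-1.0, 1.0) }
    }
    repeat(generations) {
        // Evaluate every candidate once in simulation, keep the top 10%,
        // and refill the population with mutated copies of the survivors.
        val scored = population.map { it to simulateFitness(it) }
        val elites = scored.sortedByDescending { it.second }
            .take(populationSize / 10)
            .map { it.first }
        population = elites + List(populationSize - elites.size) {
            mutate(elites[Random.nextInt(elites.size)])
        }
    }
    return population.maxByOrNull(simulateFitness)!!
}
```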

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re given a sheet of paper with a simple diagram: red and green circles, with an arrow pointing left from the red ones and an arrow pointing right from the green ones.

As a human with a brain, you take this paper for instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you choose to decide the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps towards being able to connect abstract representations like the above with the real world, a task that involves a significant amount of what amounts to a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

Gadgets – TechCrunch


How to Use Asset Placement Customization in Facebook

January 16, 2019

Learn step-by-step how to optimize images and videos across the various placements available in the Facebook Ads Platform.

Read more at PPCHero.com
PPC Hero


Instagram caught selling ads to follower-buying services it banned

January 15, 2019

Instagram has been earning money from businesses flooding its social network with spam notifications. Instagram hypocritically continues to sell ad space to services that charge clients for fake followers or that automatically follow/unfollow other people to get them to follow the client back. This is despite Instagram reiterating a ban on these businesses in November and threatening the accounts of people who employ them.

A TechCrunch investigation initially found 17 services that sell fake followers or automated notification spam for luring in followers, all openly advertising on Instagram despite blatantly violating the network’s policies. This demonstrates Instagram’s failure to adequately police its app and ad platform. That neglect led to users being distracted by notifications for follows and Likes generated by bots or fake accounts. Instagram raked in revenue from these services while they diluted the quality of Instagram notifications and wasted people’s time.

In response to our investigation, Instagram tells me it’s removed all the ads, as well as disabled the Facebook Pages and Instagram accounts of the services we reported for violating its policies. Pages and accounts that weren’t themselves in violation, but whose ads were, have been banned from advertising on Facebook and Instagram. However, a day later TechCrunch still found ads from two of these services on Instagram, and discovered five more companies paying to promote policy-violating follower growth services.

This raises a big question about whether Instagram properly protects its community from spammers. Why would it take a journalist’s investigation to remove these ads and businesses that brazenly broke Instagram’s rules when the company is supposed to have technical and human moderation systems in place? The Facebook-owned app’s quest to “move fast” to grow its user base and business seems to have raced beyond what its watchdogs could safeguard.

Hunting Spammers

I began this investigation a month ago after being pestered with Instagram Stories ads by a service called GramGorilla. The slicked-back hipster salesman boasted about how many followers he’d gained with the service and said I could pay to do the same. The ads linked to the website of a division of Krends Marketing, where for $46 to $126 per month it promised to score me 1,000 to 2,500 Instagram followers.

Some apps like this sell followers directly, though these are typically fake accounts. They might boost your follower count (unless they’re detected and terminated) but won’t actually engage with your content or help your business, and end up dragging down your metrics so Instagram shows your posts to fewer people. But I discovered that GramGorilla/Krends and the majority of apps selling Instagram audience growth do something even worse.

You give these scammy businesses your Instagram username and password, plus some relevant topics or demographics, and they automatically follow and unfollow, like, and comment on strangers’ Instagram profiles. The goal is to generate notifications those strangers will see in hopes that they’ll get curious or want to reciprocate and so therefore follow you back. By triggering enough of this notification spam, they trick enough strangers to follow you to justify the monthly subscription fee.

That pissed me off. Facebook, Instagram, and other social networks send enough real notifications as is, growth hacking their way to more engagement, ad views, and daily user counts. But at least they have to weigh the risk of annoying you so much that you turn off notifications altogether. Services that sell followers don’t care if they pollute Instagram and ruin your experience as long as they make money. They’re classic villains in the ‘tragedy of the commons’ of our attention.

This led me to start cataloging these spam company ads, and I was startled by how many different ones I saw. Soon, Instagram’s ad targeting and retargeting algorithms were backfiring, purposefully feeding me ads for similar companies that also violated Instagram’s policies.

The 17 services selling followers or spam that I originally indexed were Krends Marketing / GramGorilla, SocialUpgrade, MagicSocial, EZ-Grow, Xplod Social, Macurex, GoGrowthly, Instashop / IG Shops, TrendBee, JW Social Media Marketing, YR Charisma, Instagrocery, SocialSensational, SocialFuse, WeGrowSocial, IGWildfire, and GramFlare. TrendBee and GramFlare were found to still be running Instagram ads after the platform said they’d been banned from doing so. Upon further investigation after Instagram’s supposed crackdown, I discovered five more services selling prohibited growth schemes: FireSocial, InstaMason/IWentMissing, NexStore2019, InstaGrow, and Servantify.

Knowingly Poisoning The Well

I wanted to find out if these companies were aware that they violate Instagram’s policies and how they justify generating spam. Most hide their contact info and merely provide a customer support email, but eventually I was able to get on the phone with some of the founders.

“What we’re doing is obviously against their terms of service,” said GoGrowthly’s co-founder, who refused to provide their name. “We’re going in and piggybacking off their free platform and not giving them any of the revenue. Instagram doesn’t like us at all. We utilize private proxies depending on clients’ geographic location. That’s sort of our trick to reduce any sort of liability” so clients’ accounts don’t get shut down, they said. “It’s a careful line that we tread with Instagram. Similar to SEO companies and Google, Google wants the best results for customers and customers want the best results for them. There’s a delicate dance,” said Macurex founder Gun Hudson.

EZ-Grow’s co-founder Elon refused to give his last name on the record, but told me, “[Clients] always need something new. At first it was follows and likes. Now we even watch Stories for them. Every new feature that Instagram has we take advantage of it to make more visibility for our clients.” He says EZ-Grow spends $500 per day on Instagram ads, which are its core strategy for finding new customers. SocialFuse founder Alexander Heit says his company spends a couple hundred dollars per day on Instagram and Facebook ads, and was worried when Instagram reiterated its ban on his kind of service in November, but says, “We thought that we were definitely going to get shut down but nothing has changed on our end.”

Several of the founders tried to defend their notification spam services by saying that at least they weren’t selling fake followers. Lacking any self-awareness, Macurex’s Hudson said, “If it’s done the wrong way it can ruin the user experience. There are all sorts of marketers who will market in untasteful or spammy ways. Instagram needs to keep a check on that.” GoGrowthly’s founder actually told me, “We’re actually doing good for the community by generating those targeted interactions.” WeGrowSocial’s co-founder Brandon also refused to give his last name, but was willing to rat out his competitor SocialSensational for selling followers.

Only EZ-Grow’s Elon seemed to have a moment of clarity. “Because the targeting goes to the right people . . . and it’s something they would like, it’s not spam,” he said before his epiphany. “People can also look at it as spam, maybe.”

Instagram Finally Shuts Down The Spammers

In response to our findings, an Instagram spokesperson provided this lengthy statement confirming it’s shut down the ads and accounts of the violators we discovered, claiming that it works hard to fight spam, and admitting it needs to do better:

“Nobody likes receiving spammy follows, likes and comments. It’s really important to us that the interactions people have on Instagram are genuine, and we’re working hard to keep the community free from spammy behavior. Services that offer to boost an account’s popularity via inauthentic likes, comments and followers, as well as ads that promote these services, aren’t allowed on Instagram. We’ve taken action on the services raised in this article, including removing violating ads, disabling Pages and accounts, and stopping Pages from placing further ads. We have various systems in place that help us catch and remove these types of ads before anyone sees them, but given the number of ads uploaded to our platform every day, there are times when some still manage to slip through. We know we have more to do in this area and we’re committed to improving.”

Instagram tells me it uses machine learning tools to identify accounts that pay third-party apps to boost their popularity and claims to remove inauthentic engagement before it reaches the recipient of the notifications. By nullifying the results of these services, Instagram believes users will have less incentive to use them. It uses automated systems to evaluate the images, captions, and landing pages of all its ads before they run, and sends some to human moderators. It claims this lets it catch most policy-violating ads, and that users can report those it misses.

But these ads and their associated accounts were filled with terms like “get followers”, “boost your Instagram followers”, “real followers”, “grow your engagement”, “get verified”, “engagement automation”, and other terms tightly linked to policy-violating services. That casts doubt on just how hard Instagram was working on this problem. It may have simply relied on cheap and scalable technical approaches to catching services with spam bots or fake accounts instead of properly screening ads or employing sufficient numbers of human moderators to police the network.

That misplaced dependence on AI and other tech solutions appears to be a trend in the industry. When I recently reported that child sexual abuse imagery was easy to find on WhatsApp and Microsoft Bing, both seemed to be understaffing the human moderation teams that could have hunted down this illegal content with common sense where complex algorithms failed. As with Instagram, these products have highly profitable parent companies that can afford to pour more dollars into policy enforcement.

Kicking these services off Instagram is an important step, but the company must be more proactive. Social networks and self-serve ad networks have been treated as efficient cash cows for too long. The profits from these products should be reinvested in policing them. Otherwise, crooks will happily fleece users for our money and attention.

To learn more about the future of Instagram, check out this article’s author Josh Constine’s SXSW 2019 keynote with Instagram co-founders Kevin Systrom and Mike Krieger — their first talk together since leaving the company.


Social – TechCrunch