CBPO


New Instagram features flag potentially offensive comments, allow you to quietly ‘restrict’ users

July 9, 2019

Instagram announced two new features today that it said are designed to combat online bullying.

In both cases, the Facebook-owned service seems to be trying to find ways to limit bad behavior without outright blocking posts or banning users.

“We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves,” wrote Instagram head Adam Mosseri in the announcement. “Today we’re announcing one new feature in both areas. These tools are grounded in a deep understanding of how people bully each other and how they respond to bullying on Instagram, but they’re only two steps on a longer path.”

The first feature is supposed to use artificial intelligence to flag comments that “may be considered offensive.” In those cases, users are asked, “Are you sure you want to post this?” and given the option to “undo” their comment before it posts.

This might seem like a relatively tame response, particularly because users can still go ahead and post the original comment if they want, but Mosseri said that in early tests, his team found that the prompt “encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”
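The flow described above is essentially a classifier score gating a confirmation prompt. A minimal sketch in Python, where the scoring function, threshold and function names are illustrative assumptions rather than Instagram’s actual implementation:

```python
# Hypothetical sketch of the "Are you sure?" flow: a classifier scores a
# comment, and high-scoring comments trigger an undo prompt before posting.
# The toy blocklist scorer and the threshold are stand-ins, not Instagram's.

def score_offensiveness(text):
    # Stand-in for the real ML model: fraction of words on a toy blocklist.
    blocklist = {"ugly", "stupid", "loser"}
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in blocklist)
    return hits / max(len(words), 1)

def submit_comment(text, wants_to_undo, threshold=0.2):
    """Return the posted comment, or None if the user undoes it."""
    if score_offensiveness(text) >= threshold:
        # In the app this is the "Are you sure you want to post this?" prompt.
        if wants_to_undo:
            return None  # user chose "undo"; nothing is posted
    return text          # posted as-is; users can still post the original

assert submit_comment("nice photo!", wants_to_undo=True) == "nice photo!"
assert submit_comment("you are so stupid", wants_to_undo=True) is None
assert submit_comment("you are so stupid", wants_to_undo=False) == "you are so stupid"
```

Note that the prompt only interposes a moment of friction; as the article says, users who confirm can still post the original comment unchanged.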

Instagram warning

The other addition, which Mosseri said the service will start testing soon, is the ability to “restrict” users looking at your account.

“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” Mosseri wrote.

So by using this new option, you can limit another user’s interaction with your account without making it obvious. If you restrict someone, their comments on your posts will only be visible to them, unless you approve a comment for general consumption. They also won’t be able to see if you’re active on Instagram or if you’ve read their direct messages.

Mosseri described earlier versions of these features at Facebook’s F8 developer conference in April.


Social – TechCrunch


Daily Crunch: Instagram influencer contact info exposed

May 22, 2019

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Millions of Instagram influencers had their private contact data scraped and exposed

A massive database containing contact information for millions of Instagram influencers, celebrities and brand accounts was found online by a security researcher.

We traced the database back to Mumbai-based social media marketing firm Chtrbox. Shortly after we reached out, Chtrbox pulled the database offline.

2. US mitigates Huawei ban by offering temporary reprieve

Last week, the Trump administration effectively banned Huawei from importing U.S. technology, a decision that forced several American companies, including Google, to take steps to sever their relationships. Now, the Department of Commerce has announced that Huawei will receive a “90-day temporary general license” to continue to use U.S. technology to which it already has a license.

3. GM’s car-sharing service Maven to exit eight cities

GM is scaling back its Maven car-sharing company and will stop service in nearly half of the 17 North American cities in which it operates.

4. Maisie Williams’ talent discovery startup Daisie raises $2.5M, hits 100K members

The actress who became famous playing Arya Stark on “Game of Thrones” has fresh funding for her startup.

5. ByteDance, TikTok’s parent company, plans to launch a free music streaming app

The company, which operates popular app TikTok, has held discussions with music labels to launch the app as soon as the end of this quarter.

6. Future Family launches a $200 membership for fertility coaching

In its recent user research, Future Family found that around 70% of new customers had yet to see a fertility doctor. So today, the startup is rolling out a new membership plan that offers customers a dedicated fertility coach, and helps them find a doctor in their area.

7. When will customers start buying all those AI chips?

Danny Crichton says it’s the best and worst time to be in semiconductors right now. (Extra Crunch membership required.)



Instagram will let you appeal post takedowns

May 10, 2019

Instagram isn’t just pretty pictures. It now also harbors bullying, misinformation and controversial self-expression content. So today Instagram is announcing a bevy of safety updates to protect users and give them more of a voice. Most significantly, Instagram will now let users appeal the company’s decision to take down one of their posts.

A new in-app interface, rolling out over the next few months starting today, will let users “get a second opinion on the post,” says Instagram’s head of policy, Karina Newton. A different Facebook moderator will review the post, restore its visibility if it was wrongly removed, and inform the user of the conclusion either way. Instagram has always let users appeal account suspensions, but now someone can appeal a takedown if, say, their post was mistakenly removed for nudity when they weren’t nude, or for hate speech that was actually friendly joshing.

Blocking vaccine misinfo hashtags

On the misinformation front, Instagram will begin blocking vaccine-related hashtag pages when the content surfacing on a hashtag page features a large proportion of verifiably false claims about vaccines. If there is some violating content but it falls under that threshold, Instagram will lock the hashtag into a “Top-only” mode, where Recent posts won’t show up, to decrease the visibility of problematic content. Instagram says it will test this approach and expand it to other problematic content genres if it works. Instagram will also surface educational information via a pop-up to people who search for vaccine content, similar to what it has used in the past for self-harm and opioid content.

Instagram says that now that health agencies like the Centers for Disease Control and the World Health Organization are confirming that VACCINES DO NOT CAUSE AUTISM, it’s comfortable declaring contradictory information verifiably false, and such content can be aggressively demoted on the platform.

The automated system scans and scores every post uploaded to Instagram, checking it against classifiers of prohibited content and what the company calls “text-matching banks.” These are collections of fingerprinted content Instagram has already banned, with their text indexed and words pulled out of imagery through optical character recognition, so Instagram can find posts containing the same words later. It’s working on extending this technology to videos, and all the systems are being trained to spot obvious issues like threats, unwanted contact and insults, but also subtler ones like intentionally causing fear of missing out, taunting, shaming and betrayal.
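A “text-matching bank,” as described above, amounts to an index of text from previously banned posts (including words OCR’d out of images) that new posts are checked against. A hedged sketch; the class and matching logic are illustrative assumptions, not Instagram’s implementation:

```python
# Hypothetical sketch of a "text-matching bank": text from already-banned
# posts is indexed so new posts containing the same phrases can be flagged.
# Names and the naive substring matching are illustrative, not Instagram's.

class TextMatchingBank:
    def __init__(self):
        self.banned_phrases = set()

    def add_banned_post(self, text):
        # In the real system, `text` may come from OCR on an image.
        self.banned_phrases.add(text.lower())

    def matches(self, text):
        t = text.lower()
        return any(phrase in t for phrase in self.banned_phrases)

bank = TextMatchingBank()
bank.add_banned_post("vaccines cause autism")
assert bank.matches("I heard VACCINES CAUSE AUTISM, is it true?")
assert not bank.matches("Vaccines are safe and effective.")
```

A production system would presumably use normalized fingerprints rather than raw substring search, but the indexing idea is the same.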

If the AI is confident a post violates policies, it’s taken down and counted as a strike against any hashtag included. If a hashtag has too high of a percentage of violating content, the hashtag will be blocked. If it had fewer strikes, it’d get locked in Top-Only mode. The change comes after stern criticism from CNN and others about how hashtag pages like #VaccinesKill still featured tons of dangerous misinformation as recently as yesterday.
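The hashtag strike logic above reduces to a ratio check against two thresholds. A sketch under made-up numbers, since Instagram doesn’t disclose the real thresholds or names:

```python
# Hypothetical sketch of the hashtag strike logic described above.
# Instagram does not publish its thresholds; these values are invented.

BLOCK_THRESHOLD = 0.5     # share of violating posts that blocks a hashtag
TOP_ONLY_THRESHOLD = 0.2  # smaller share that hides the Recent tab

def hashtag_status(total_posts, violating_posts):
    """Classify a hashtag page based on its share of violating content."""
    if total_posts == 0:
        return "ok"
    ratio = violating_posts / total_posts
    if ratio >= BLOCK_THRESHOLD:
        return "blocked"   # hashtag page is removed entirely
    if ratio >= TOP_ONLY_THRESHOLD:
        return "top_only"  # only Top posts shown, Recent posts hidden
    return "ok"

assert hashtag_status(100, 60) == "blocked"
assert hashtag_status(100, 30) == "top_only"
assert hashtag_status(100, 5) == "ok"
```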

Tally-based suspensions

One other new change announced this week is that Instagram will no longer determine whether to suspend an account based on the percentage of their content that violates policies, but by a tally of total violations within a certain period of time. Otherwise, Newton says, “It would disproportionately benefit those that have a large amount of posts,” because even a large number of violations would be a smaller percentage than a rare violation by someone who doesn’t post often. To prevent bad actors from gaming the system, Instagram won’t disclose the exact time frame or number of violations that trigger suspensions.
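Newton’s point about percentages can be made concrete with a toy comparison; the numbers, window and tally threshold below are invented, since Instagram deliberately keeps the real ones secret:

```python
# Toy comparison of percentage-based vs. tally-based suspension, showing why
# a percentage rule disproportionately benefits prolific posters. All values
# are illustrative; Instagram does not disclose its window or threshold.

def percentage_rule(violations, total_posts, max_pct=0.05):
    """Suspend when violations exceed a share of all posts."""
    return violations / total_posts > max_pct

def tally_rule(violations_in_window, max_violations=3):
    """Suspend when absolute violations in a time window exceed a cap."""
    return violations_in_window > max_violations

# A prolific account with 100 violations out of 10,000 posts (1%) passes the
# percentage rule, while a small account with 2 out of 20 posts (10%) fails.
assert not percentage_rule(violations=100, total_posts=10_000)
assert percentage_rule(violations=2, total_posts=20)

# The tally rule judges both accounts by absolute violations instead.
assert tally_rule(violations_in_window=100)
assert not tally_rule(violations_in_window=2)
```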

Instagram recently announced at F8 several new tests on the safety front, including a “nudge” not to post a potentially hateful comment a user has typed, “away mode” for taking a break from Instagram without deleting your account and a way to “manage interactions” so you can ban people from taking certain actions like commenting on your content or DMing you without blocking them entirely.

The announcements come as Instagram has solidified its central place in youth culture. That means it has intense responsibility to protect its user base from bullying, hate speech, graphic content, drugs, misinformation and extremism. “We work really closely with subject matter experts, raise issues that might be playing out differently on Instagram than Facebook, and we identify gaps where we need to change how our policies are operationalized or our policies are changed,” says Newton.



Instagram hides Like counts in leaked design prototype

April 19, 2019

“We want your followers to focus on what you share, not how many likes your posts get. During this test, only the person who shares a post will see the total number of likes it gets.” That’s how Instagram describes a seemingly small design change test with massive potential impact on users’ well-being.

Hiding Like counts could reduce herd mentality, where people just Like what’s already got tons of Likes. It could reduce the sense of competition on Instagram, since users won’t compare their own counts with those of more popular friends or superstar creators. And it could encourage creators to post what feels most authentic rather than trying to rack up Likes for everyone to see.

The design change test was spotted by Jane Manchun Wong, the prolific reverse-engineering expert and frequent TechCrunch tipster who has spotted tons of Instagram features before they’re officially confirmed or launched. Wong discovered the design change test in Instagram’s Android code and was able to generate the screenshots above.

You can see on the left that the Instagram feed post lacks a Like count, but still shows a few faces and the names of other people who’ve Liked it. Users are alerted that only they will see their posts’ Like counts. Many users delete posts that don’t immediately get “enough” Likes, or post to their fake “Finstagram” accounts if they don’t think they’ll be proud of the hearts they collect. Hiding Like counts might get users posting more, because they’ll be less self-conscious.
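The visibility rule being tested boils down to rendering the count only for the post’s author. A minimal sketch; the function and field names are hypothetical, not from Instagram’s code:

```python
# Hypothetical sketch of the tested Like-count visibility rule: only the
# post's author sees the total, while other viewers see a few recent likers
# but no number. Function and parameter names are illustrative.

def render_like_ui(post_author, viewer, like_count, recent_likers):
    if viewer == post_author:
        return f"{like_count} likes"
    # Everyone else sees faces/names instead of a count.
    names = " and ".join(recent_likers[:2])
    return f"Liked by {names} and others"

assert render_like_ui("alice", "alice", 1024, ["bob", "carol"]) == "1024 likes"
assert render_like_ui("alice", "bob", 1024, ["bob", "carol"]) == "Liked by bob and carol and others"
```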

Instagram confirmed to TechCrunch that this design is an internal prototype that’s not visible to the public yet. A spokesperson told us: “We’re not testing this at the moment, but exploring ways to reduce pressure on Instagram is something we’re always thinking about.” Other features we’ve reported on in the same phase, such as video calling, soundtracks for Stories and the app’s time well spent dashboard, all went on to receive official launches.

Instagram’s prototypes (from left): feed post reactions, Stories lyrics and Direct stickers

Meanwhile, Wong has also recently spotted several other Instagram prototypes lurking in its Android code. Those include chat thread stickers for Direct messages, augmented reality filters for Direct Video calls, simultaneous co-watching of recommended videos through Direct, karaoke-style lyrics that appear synced to soundtracks in Stories, emoji reactions to feed posts and a shopping bag for commerce.

It appears there’s no plan to hide follower counts on user profiles, which are the true measure of popularity, but also serve a purpose of distinguishing great content creators and assessing their worth to marketers. Hiding Likes could just put more of a spotlight on follower and comment counts. And even if users don’t see Like counts, they still massively impact the feed’s ranking algorithm, so creators will still have to battle for them to be seen.

Close-up of Instagram’s design for feed posts without Like counters

The change matches a growing belief that Like counts can be counter-productive or even harmful to users’ psyches. Instagram co-founder Kevin Systrom told me back in 2016 that getting away from the pressure of Like counts was one impetus for Instagram launching Stories. Last month, Twitter began testing a design that hides retweet counts behind an extra tap to similarly discourage inauthentic competition and herd mentality. And Snapchat has never shown Like counts or even follower counts, which has made it feel less stressful but also less useful for influencers.

Narcissism, envy spiraling and low self-image can all stem from staring at Like counts. They’re a constant reminder of the status hierarchies that have emerged from social networks. For many users, at some point it stopped being fun and started to feel more like working in the heart mines. If Instagram rolls out the feature, it could put the emphasis back on sharing art and self-expression, not trying to win some popularity contest.

Mobile – TechCrunch


Instagram now demotes vaguely ‘inappropriate’ content

April 11, 2019

Instagram is home to plenty of scantily clad models and edgy memes that may start to get fewer views starting today. Now Instagram says, “We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines.” That means if a post is sexually suggestive, but doesn’t depict a sex act or nudity, it could still get demoted. Similarly, if a meme doesn’t constitute hate speech or harassment, but is considered in bad taste, lewd, violent or hurtful, it could get fewer views.

Specifically, Instagram says, “this type of content may not appear for the broader community in Explore or hashtag pages,” which could severely hurt the ability of creators to gain new followers. The news came amidst a flood of “Integrity” announcements from Facebook to safeguard its family of apps revealed today at a press event at the company’s Menlo Park headquarters.

“We’ve started to use machine learning to determine if the actual media posted is eligible to be recommended to our community,” Instagram’s product lead for Discovery, Will Ruben, said. Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations, and Instagram then uses those labels to train an algorithm to identify borderline content on its own.

These posts won’t be fully removed from the feed, and Instagram tells me for now the new policy won’t impact Instagram’s feed or Stories bar. But Facebook CEO Mark Zuckerberg’s November manifesto described the need to broadly reduce the reach of this “borderline content,” which on Facebook would mean being shown lower in News Feed. That policy could easily be expanded to Instagram in the future. That would likely reduce the ability of creators to reach their existing fans, which can impact their ability to monetize through sponsored posts or direct traffic to ways they make money like Patreon.

Facebook’s Henry Silverman explained that “As content gets closer and closer to the line of our Community Standards at which point we’d remove it, it actually gets more and more engagement. It’s not something unique to Facebook but inherent in human nature.” The borderline content policy aims to counteract this incentive to toe the policy line. “Just because something is allowed on one of our apps doesn’t mean it should show up at the top of News Feed or that it should be recommended or that it should be able to be advertised,” said Facebook’s head of News Feed Integrity, Tessa Lyons.

This all makes sense when it comes to clickbait, false news and harassment, which no one wants on Facebook or Instagram. But when it comes to sexualized but not explicit content that has long been uninhibited and in fact popular on Instagram, or memes or jokes that might offend some people despite not being abusive, this is a significant step up of censorship by Facebook and Instagram.

Creators currently have no guidelines about what constitutes borderline content — there’s nothing in Instagram’s rules or terms of service that even mention non-recommendable content or what qualifies. The only information Instagram has provided was what it shared at today’s event. The company specified that violent, graphic/shocking, sexually suggestive, misinformation and spam content can be deemed “non-recommendable” and therefore won’t appear on Explore or hashtag pages.

[Update: After we published, Instagram posted to its Help Center a brief note about its borderline content policy, but with no visual examples, mentions of impacted categories other than sexually suggestive content, or indications of what qualifies content as “inappropriate.” So officially, it’s still leaving users in the dark.]

Instagram denied an account from a creator claiming that the app reduced their feed and Stories reach after one of their posts that actually violated the content policy was taken down.

One female creator with around a half-million followers likened receiving a two-week demotion that massively reduced their content’s reach to Instagram defecating on them. “It just makes it like, ‘Hey, how about we just show your photo to like 3 of your followers? Is that good for you? . . . I know this sounds kind of tin-foil hatty but . . . when you get a post taken down or a story, you can set a timer on your phone for two weeks to the godd*mn f*cking minute and when that timer goes off you’ll see an immediate change in your engagement. They put you back on the Explore page and you start getting followers.”

As you can see, creators are pretty passionate about Instagram demoting their reach. Instagram’s Will Ruben said regarding the feed/Stories reach reduction: “No, that’s not happening. We distinguish between feed and surfaces where you’ve taken the choice to follow somebody, and Explore and hashtag pages where Instagram is recommending content to people.”

The questions now are whether borderline content demotions are ever extended to Instagram’s feed and Stories, and how content is classified as recommendable, non-recommendable or violating. With artificial intelligence involved, this could turn into another situation where Facebook is seen as shirking its responsibilities in favor of algorithmic efficiency — but this time in removing or demoting too much content rather than too little.

Given the lack of clear policies to point to, the subjective nature of deciding what’s offensive but not abusive, Instagram’s 1 billion user scale and its nine years of allowing this content, there are sure to be complaints and debates about fair and consistent enforcement.



Instagram prototypes video co-watching

March 8, 2019

The next phase of social media is about hanging out together while apart. Rather than performing on a live stream or engaging with a video chat, Instagram may allow you to chill and watch videos together with a friend. Facebook already has Watch Party for group co-viewing, and in November we broke the news that Facebook Messenger’s code contains an unreleased “Watch Videos Together” feature. Now Instagram’s code reveals a “co-watch content” feature hidden inside Instagram Direct Messaging.

It’s unclear what users might be able to watch simultaneously, but the feature could give IGTV a much-needed boost, or just let you laugh and cringe at Instagram feed videos and Stories together. Either way, co-viewing could expose you to more ads, drive more attention to creators (winning Instagram their favor) or simply rack up your time spent in the app without requiring you to create anything.

The Instagram co-watch code was discovered by TechCrunch’s favorite tipster and reverse-engineering specialist Jane Manchun Wong, who previously spotted the Messenger Watch Together code. Her past findings include Instagram’s video calling, music soundtracks and Time Well Spent dashboard, months before they were officially released. The code mentions that you can “cowatch content” that comes from a “Playlist” similar to the queues of videos Facebook Watch Party admins can tee up. Users could also check out “Suggested” videos from Instagram, which would give it a new way to promote creators or spawn a zeitgeist moment around a video. It’s not certain whether users will be able to appear picture-in-picture while watching so friends can see their reactions, but that would surely be more fun.

Instagram declined to comment on the findings, which is typical of the company when a feature has been prototyped internally but hasn’t begun externally testing with users. At this stage, products can still get scrapped or take many months or even more than a year to launch. But given Facebook’s philosophical intention to demote mindless viewing and promote active conversation around videos, Instagram co-watching is a sensible direction.

Facebook launched Watch Party to this end back in July, and by November, 12 million had been started from Groups and they generated 8X more comments than non-synced or Live videos. That proves co-watching can make video feel less isolating. That’s important as startups like Houseparty group video chatrooms and Squad screenshare messaging try to nip at Insta’s heels.

It’s also another sign that following the departure of the Instagram founders, Facebook has been standardizing features across its apps, eroding their distinct identities. Mark Zuckerberg plans to unify the backend of Facebook Messenger, WhatsApp, and Instagram to allow cross-app messaging. But Instagram has always been Facebook’s content-first app, so while Watch Party might have been built for Facebook Groups, Instagram could be where it hits its stride.

Speaking of the Instagram founders Kevin Systrom and Mike Krieger, this article’s author Josh Constine will be interviewing them on Monday 3/11 at SXSW. Come see them at 2 pm in the Austin Convention Center’s Ballroom D to hear about their thoughts on the creator economy, why they left Facebook and what they’ll do next. Check out the rest of TechCrunch’s SXSW panels here, and RSVP for our party on Sunday.



Instagram confirms that a bug is causing follower counts to change

February 13, 2019

Instagram confirmed today that an issue has been causing some accounts’ follower numbers to change. Users began noticing the bug about 10 hours ago, and the drastic drop in followers caused some to wonder whether Instagram was culling inactive and fake accounts as part of its fight against spam.

“We’re aware of an issue that is causing a change in account follower numbers for some people right now. We’re working to resolve this as quickly as possible,” the company said on Twitter.

The Instagram bug comes a few hours after a Twitter bug messed with the Like count on tweets, causing users to wonder if accounts were being suspended en masse or if they were just very bad at tweeting.




Facebook plans new products as Instagram Stories hits 500M users/day

January 31, 2019

Roughly half of Instagram’s 1 billion users now use Instagram Stories every day. That 500 million daily user count is up from 400 million in June 2018, and 2 million advertisers are now buying Stories ads across Facebook’s properties.

CEO Mark Zuckerberg called Stories the last big game-changing feature from Facebook, but after concentrating on security last year, it plans to ship more products that make “major improvements” in people’s lives.

During today’s Q4 2018 earnings call, Zuckerberg outlined several areas where Facebook will push new products this year:

  • Encryption and ephemerality will be added to more features for security and privacy
  • Messaging features will make Messenger and WhatsApp “the center of [your] social experiences”
  • WhatsApp payments will expand to more countries
  • Stories will gain new private sharing options
  • Groups will become an organizing function of Facebook on par with friends & family
  • Facebook Watch will become mainstream this year as video is moved there from the News Feed, Zuckerberg expects
  • Augmented and virtual reality will be improved, and Oculus Quest will ship this spring
  • Instagram commerce and shopping will get new features

Zuckerberg was asked about Facebook’s plan to unify the infrastructure to allow encrypted cross-app messaging between Facebook Messenger, Instagram, and WhatsApp, as first reported by NYT’s Mike Isaac. Zuckerberg explained that the plan wasn’t about a business benefit, but supposedly to improve the user experience. Specifically, it would allow Marketplace buyers and sellers in countries where WhatsApp dominates messaging to use that app to chat instead of Messenger. And for Android users who use Messenger as their SMS client, the unification would allow those messages to be sent with encryption too. He sees expanding encryption here as a way to decentralize Facebook and keep users’ data safe by never having it on the company’s servers. However, Zuckerberg says this will take time and could be a “2020 thing”.

Facebook says it now has 2.7 billion monthly users across the Facebook family of apps: Facebook, Instagram, Messenger, and WhatsApp. However, Facebook CFO David Wehner says “Over time we expect family metrics to play the primary role in how we talk about our company and we will eventually phase out Facebook-only community metrics.” That shows Facebook is self-conscious about how its user base is shifting away from its classic social network and towards Instagram and its messaging apps. Family-only metrics could mask how teens are slipping away.

