
Tag: Instagram

Instagram launches Create mode with On This Day throwbacks

October 9, 2019

Instagram has finally turned Throwback Thursday into an official feature. It’s part of the new Instagram “Create” mode that launches today in Stories, bringing the app beyond the camera. Create makes Instagram a more omni-purpose social network with the flexibility to adapt to a broader range of content formats.

For now, the highlight of Create is the “On This Day” option that shows a random feed post you shared on the same calendar date in the past. Tap the dice button to view a different On This Day post, and once you find one you prefer, you can share it to Stories as an embedded post people can open.

The launch could make it easy for users to convert their old permanent content into fresh ephemeral content. That could be especially helpful because not everyone does something Stories-worthy every day. And given how many #TBT throwbacks get shared already, there’s clearly demand for sharing nostalgia with new commentary.

Instagram Create On This Day

When asked about Create mode, an Instagram spokesperson told me, “this new mode helps you combine interactive stickers, drawings and text without needing a photo or video to share . . . On This Day suggests memories and lets you share them via Direct and Stories.” It’d sure be nice if embedded On This Day video posts played inside of Stories, but for now you have to tap to open them on their own page.

Instagram actually launched a different way to share throwbacks, called “Memories,” earlier this year. But most users didn’t know about it because it was tucked behind the Profile -> three-line ‘hamburger’ sidebar -> Archive option used for highlighting or restoring expired Stories or posts you’d hidden.

Instagram Archive Memories

Now On This Day is much more accessible as part of the new Create mode inside the Stories composer, which replaces Type mode and offers more ways to share without your camera than just posting text. You can access it by swiping right at the bottom of the screen from the Stories camera, instead of left to other options like Boomerang. Create lets you use features otherwise added as stickers atop photos and videos, but on their own, with new suggestions of what to share:

- Countdown timer with suggestions for “The Weekend,” “Quittin’ Time,” and “School’s Out”

Instagram Create Countdown

- Quiz with suggestions including “What’s my biggest fear?” and “Only one of these is true” (The Quiz sticker already had suggestions)

Instagram Create Quiz

- Poll with suggestions including “Sweet or savory?” and “Better first date: dinner or movie?”

Instagram Create Poll

- Question with suggestions including “If you had 3 wishes…” and “Any hidden talents?”

Instagram Create Questions

Instagram is also offering a new version of its Giphy-powered GIFs feature inside Create. It lets you search for a GIF and see it tiled three times vertically as the background of your Create post, rather than laid on top.

Instagram Create GIFs

Through all these features, Create lets people generate new things to share even if they’re lying in bed or stuck somewhere. As Instagram grows internationally to more users with lower-quality phones, and replaces Facebook for many people, the ability to share text and other content without having to use the camera could increase people’s posting. Between the camera shutter modes and room for more sharing styles in Create, Instagram can encompass almost any content.

As of today, Instagram is about more than photos and videos. It’s stepping up as a multi-faceted social app just as Facebook’s battered brand becomes desperate to turn Instagram into its reputation and business lifeboat.


Social – TechCrunch


Instagram ad partner secretly sucked up and tracked millions of users’ locations and stories

August 8, 2019

Hyp3r, an apparently trusted marketing partner of Facebook and Instagram, has been secretly collecting and storing location and other data on millions of users, against the policies of the social networks, Business Insider reported today. It’s hard to see how it could have done this for years without intervention by the platforms unless they were either ignorant or complicit.

After BI informed Instagram, the company confirmed that Hyp3r (styled HYP3R) had violated its policies and has now been removed from the platform. In a statement to TechCrunch, a Facebook spokesperson confirmed the report, saying:

HYP3R’s actions were not sanctioned and violate our policies. As a result, we’ve removed them from our platform. We’ve also made a product change that should help prevent other companies from scraping public location pages in this way.

The company started several years ago as a platform via which advertisers could target users attending a given event, like a baseball game or concert. Originally it used Instagram’s official API to hoover up data, the kind of data-gathering that unsavory tech firms have engaged in for years, most infamously Cambridge Analytica.

The idea of getting an ad because you’re at a ball game isn’t so scary, but if the company maintains a persistent record not just of your exact locations, but objects in your photos and types of places you visit, in order to combine that with other demographics and build a detailed shadow profile… well, that’s a little scary. And so Hyp3r’s business model evolved.

Unfortunately for Hyp3r, the API was severely restricted in early 2018, limiting its access to location and user data. Although there were unconfirmed reports that this led to layoffs at the company around that time, it seems to have survived (and raised millions shortly afterward) not by adapting its business model, but by sneaking around the apparently quite minimal barriers Instagram put in place to prevent location data from being scraped.

Some of this was done by taking advantage of Instagram’s Location pages, which would serve up public accounts visiting them to anyone who asked, logged in or not. (This was one of the features turned off today by Instagram.)

According to BI’s report, Hyp3r built tools to circumvent limitations on both location collection and saving of personal accounts’ stories — content meant to disappear after 24 hours. If a user posted anything at one of thousands of locations and regions monitored by Hyp3r, their data would be sucked up and added to their shadow profile.

To be clear, it only collected information from public stories and accounts. Naturally these people opted out of a certain amount of privacy by choosing a public account, but as the Cambridge Analytica case and others have shown, no one expects or should have to expect that their data is being secretly and systematically assembled into a personal profile by a company they’ve never heard of.

Facebook and Instagram, however, had definitely heard of Hyp3r. In fact, Hyp3r could until today be found in the official Facebook Marketing Partners directory, a curated list of companies it recommends for various tasks and services that advertisers might need.

And Hyp3r has been quite clear about what it is doing, though not about the methods by which it is doing it. It wasn’t a secret that the company was building profiles based around tracking locations and brands — that was presumably what Facebook listed it for. It was only when this report surfaced that Hyp3r had its Facebook Marketing Partner privileges rescinded.

For its part Hyp3r claims to be “compliant with consumer privacy regulations and social network Terms of Services,” and emphasized in a statement that it only accessed public data.

It’s unclear how Hyp3r could exist as a privileged member of Facebook’s stable of recommended companies and simultaneously be in such blatant violation of its policies. If these partners receive even cursory reviews of their products and methods, wouldn’t it have been obvious to any informed auditor that there was no legitimate source for the location and other data that Hyp3r was collecting? Wouldn’t it have been obvious that it was engaging in Automated Data Collection, which is specifically prohibited without Facebook’s permission?

I’ve asked Facebook for more detail on how and when its Marketing Partners are reviewed, and how this seemingly fundamental violation of the prohibition against automated data collection could have gone undetected for so long. This story is developing and may be updated further.





New Instagram features flag potentially offensive comments, allow you to quietly ‘restrict’ users

July 9, 2019

Instagram announced two new features today that it said are designed to combat online bullying.

In both cases, the Facebook-owned service seems to be trying to find ways to limit bad behavior without outright blocking posts or banning users.

“We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves,” wrote Instagram head Adam Mosseri in the announcement. “Today we’re announcing one new feature in both areas. These tools are grounded in a deep understanding of how people bully each other and how they respond to bullying on Instagram, but they’re only two steps on a longer path.”

The first feature is supposed to use artificial intelligence to flag comments that “may be considered offensive.” In those cases, users are asked, “Are you sure you want to post this?” and then given the option to “undo” their comment before it posts.

This might seem like a relatively tame response, particularly because users can still go ahead and post the original comment if they want, but Mosseri said that in early tests, his team found that the prompt “encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”

Instagram warning

The other addition, which Mosseri said the service will start testing soon, is the ability to “restrict” users looking at your account.

“We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” Mosseri wrote.

So by using this new option, you can limit another user’s interaction with your account without making it obvious. If you restrict someone, their comments on your posts will only be visible to them, unless you approve a comment for general consumption. They also won’t be able to see if you’re active on Instagram or if you’ve read their direct messages.
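The visibility rule described above can be sketched as a short, hypothetical decision function; the data model and names here are illustrative assumptions, not Instagram’s actual implementation:

```python
# Hypothetical sketch of the "restrict" comment-visibility rule: a restricted
# user's comments are visible only to themselves, unless the account owner
# approves them for everyone. Names and data shapes are assumptions.

def can_see_comment(viewer, comment, restricted_users, approved_comments):
    author = comment["author"]
    if author not in restricted_users:
        return True                            # normal comments stay public
    if viewer == author:
        return True                            # restricted users still see their own
    return comment["id"] in approved_comments  # others see it only once approved
```

For example, with `restricted_users = {"bully"}` and no approvals, the bully still sees their own comment on your post, but other visitors don’t, so the restriction isn’t obvious to them.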

Mosseri described earlier versions of these features at Facebook’s F8 developer conference in April.




Daily Crunch: Instagram influencer contact info exposed

May 22, 2019

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Millions of Instagram influencers had their private contact data scraped and exposed

A massive database containing contact information for millions of Instagram influencers, celebrities and brand accounts was found online by a security researcher.

We traced the database back to Mumbai-based social media marketing firm Chtrbox. Shortly after we reached out, Chtrbox pulled the database offline.

2. US mitigates Huawei ban by offering temporary reprieve

Last week, the Trump administration effectively banned Huawei from importing U.S. technology, a decision that forced several American companies, including Google, to take steps to sever their relationships. Now, the Department of Commerce has announced that Huawei will receive a “90-day temporary general license” to continue to use U.S. technology to which it already has a license.

3. GM’s car-sharing service Maven to exit eight cities

GM is scaling back its Maven car-sharing company and will stop service in nearly half of the 17 North American cities in which it operates.

4. Maisie Williams’ talent discovery startup Daisie raises $2.5M, hits 100K members

The actress who became famous playing Arya Stark on “Game of Thrones” has fresh funding for her startup.

5. ByteDance, TikTok’s parent company, plans to launch a free music streaming app

The company, which operates popular app TikTok, has held discussions with music labels to launch the app as soon as the end of this quarter.

6. Future Family launches a $200 membership for fertility coaching

In its recent user research, Future Family found that around 70% of new customers had yet to see a fertility doctor. So today, the startup is rolling out a new membership plan that offers customers a dedicated fertility coach, and helps them find a doctor in their area.

7. When will customers start buying all those AI chips?

Danny Crichton says it’s the best and worst time to be in semiconductors right now. (Extra Crunch membership required.)




Instagram will let you appeal post takedowns

May 10, 2019

Instagram isn’t just pretty pictures. It now also harbors bullying, misinformation and controversial self-expression content. So today Instagram is announcing a bevy of safety updates to protect users and give them more of a voice. Most significantly, Instagram will now let users appeal the company’s decision to take down one of their posts.

A new in-app interface, rolling out over the next few months starting today, will let users “get a second opinion on the post,” says Instagram’s head of policy, Karina Newton. A different Facebook moderator will review the post, restore its visibility if it was wrongly removed, and inform users of the conclusion either way. Instagram has always let users appeal account suspensions, but now someone can appeal a takedown if their post was mistakenly removed for nudity when they weren’t nude, or for hate speech that was actually friendly joshing.

Blocking vaccine misinfo hashtags

On the misinformation front, Instagram will begin blocking vaccine-related hashtag pages when the content surfaced on a hashtag page features a large proportion of verifiably false claims about vaccines. If there is some violating content, but under that threshold, Instagram will lock the hashtag into a “Top-only” mode, where the Recent posts tab won’t show up, to decrease the visibility of problematic content. Instagram says that it will test this approach and expand it to other problematic content genres if it works. Instagram will also surface educational information via a pop-up to people who search for vaccine content, similar to what it’s used in the past for self-harm and opioid content.

Instagram says that now that health agencies like the Centers for Disease Control and Prevention and the World Health Organization have confirmed that VACCINES DO NOT CAUSE AUTISM, it’s comfortable declaring contradictory information verifiably false, and aggressively demoting it on the platform.

The automated system scans and scores every post uploaded to Instagram, checking them against classifiers of prohibited content and what it calls “text-matching banks.” These are collections of fingerprinted content Instagram has already banned, with the text indexed and words pulled out of imagery through optical character recognition, so Instagram can find posts with the same words later. It’s working on extending this technology to videos, and all the systems are being trained to spot obvious issues like threats, unwanted contact and insults, but also subtler ones like intentionally causing fear-of-missing-out, taunting, shaming and betrayals.

If the AI is confident a post violates policies, it’s taken down and counted as a strike against any hashtag it includes. If a hashtag has too high a percentage of violating content, the hashtag is blocked; with fewer strikes, it gets locked in Top-only mode. The change comes after stern criticism from CNN and others about how hashtag pages like #VaccinesKill still featured tons of dangerous misinformation as recently as yesterday.
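The strike-based hashtag logic described above can be sketched roughly as follows. This is a hypothetical illustration: the threshold values, the classifier score cutoff and the function names are assumptions for the sketch, since Instagram does not disclose its real numbers.

```python
def moderate_hashtag(posts, block_threshold=0.30, top_only_threshold=0.10):
    """Hypothetical sketch of the strike-based hashtag policy.

    Each post carries a classifier violation score; confident violations
    (score > 0.9 here, an invented cutoff) count as strikes against the
    hashtag. Too high a violating fraction blocks the hashtag page; a
    smaller fraction locks it into Top-only mode.
    """
    strikes = sum(1 for post in posts if post["violation_score"] > 0.9)
    violating_fraction = strikes / len(posts) if posts else 0.0

    if violating_fraction >= block_threshold:
        return "blocked"    # hashtag page removed entirely
    if violating_fraction >= top_only_threshold:
        return "top_only"   # Recent tab hidden, only Top posts shown
    return "normal"
```

For instance, a hashtag where 4 of 10 recent posts are confident violations would be blocked under these invented thresholds, while one with a single violation among 10 posts would merely lose its Recent tab.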

Tally-based suspensions

One other new change announced this week is that Instagram will no longer determine whether to suspend an account based on the percentage of their content that violates policies, but by a tally of total violations within a certain period of time. Otherwise, Newton says, “It would disproportionately benefit those that have a large amount of posts,” because even a large number of violations would be a smaller percentage than a rare violation by someone who doesn’t post often. To prevent bad actors from gaming the system, Instagram won’t disclose the exact time frame or number of violations that trigger suspensions.
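A small, invented numerical example shows why the tally approach closes that loophole. The limits below are made up for illustration, since Instagram deliberately won’t disclose the real time frame or violation count.

```python
# Hypothetical illustration of why a raw tally is fairer than a percentage.
# Both thresholds are invented; Instagram does not disclose its limits.

def should_suspend_by_percentage(violations, total_posts, max_fraction=0.05):
    """Old approach: suspend when violations exceed a share of all posts."""
    return violations / total_posts > max_fraction

def should_suspend_by_tally(violations_in_window, max_violations=3):
    """New approach: suspend on total violations within a time window."""
    return violations_in_window > max_violations

# A prolific account: 10 violations buried among 1,000 posts is only 1%,
# so the percentage rule lets it off, while a tally of 10 does not.
prolific_escapes = should_suspend_by_percentage(10, 1000)   # False
prolific_caught = should_suspend_by_tally(10)               # True

# An occasional poster: 2 violations among 20 posts is 10%, so the
# percentage rule punishes them harder than the heavy violator above.
occasional_hit = should_suspend_by_percentage(2, 20)        # True
```

This is exactly the disproportionate benefit Newton describes: under a percentage rule, posting more dilutes your violations.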

Instagram recently announced at F8 several new tests on the safety front, including a “nudge” not to post a potentially hateful comment a user has typed, “away mode” for taking a break from Instagram without deleting your account and a way to “manage interactions” so you can ban people from taking certain actions like commenting on your content or DMing you without blocking them entirely.

The announcements come as Instagram has solidified its central place in youth culture. That means it has intense responsibility to protect its user base from bullying, hate speech, graphic content, drugs, misinformation and extremism. “We work really closely with subject matter experts, raise issues that might be playing out differently on Instagram than Facebook, and we identify gaps where we need to change how our policies are operationalized or our policies are changed,” says Newton.




Instagram hides Like counts in leaked design prototype

April 19, 2019

“We want your followers to focus on what you share, not how many likes your posts get. During this test, only the person who shares a post will see the total number of likes it gets.” That’s how Instagram describes a seemingly small design change test with massive potential impact on users’ well-being.

Hiding Like counts could reduce herd mentality, where people just Like what’s already got tons of Likes. It could reduce the sense of competition on Instagram, since users won’t compare their own counts with those of more popular friends or superstar creators. And it could encourage creators to post what feels most authentic rather than trying to rack up Likes for everyone to see.

The design change test was spotted by Jane Manchun Wong, the prolific reverse-engineering expert and frequent TechCrunch tipster who has spotted tons of Instagram features before they’re officially confirmed or launched. Wong discovered the design change test in Instagram’s Android code and was able to generate the screenshots above.

You can see on the left that the Instagram feed post lacks a Like count, but still shows a few faces and the name of other people who’ve Liked it. Users are alerted that only they will see their posts’ Like counts. Many users delete posts that don’t immediately get “enough” Likes, or post to their fake “Finstagram” accounts if they don’t think they’ll be proud of the hearts they collect. Hiding Like counts might get users posting more because they’ll be less self-conscious.

Instagram confirmed to TechCrunch that this design is an internal prototype that’s not visible to the public yet. A spokesperson told us: “We’re not testing this at the moment, but exploring ways to reduce pressure on Instagram is something we’re always thinking about.” Other features we’ve reported on in the same phase, such as video calling, soundtracks for Stories and the app’s time well spent dashboard, all went on to receive official launches.

Instagram’s prototypes (from left): feed post reactions, Stories lyrics and Direct stickers

Meanwhile, Wong has also recently spotted several other Instagram prototypes lurking in its Android code. Those include chat thread stickers for Direct messages, augmented reality filters for Direct Video calls, simultaneous co-watching of recommended videos through Direct, karaoke-style lyrics that appear synced to soundtracks in Stories, emoji reactions to feed posts and a shopping bag for commerce.

It appears there’s no plan to hide follower counts on user profiles, which are the true measure of popularity, but also serve a purpose of distinguishing great content creators and assessing their worth to marketers. Hiding Likes could just put more of a spotlight on follower and comment counts. And even if users don’t see Like counts, they still massively impact the feed’s ranking algorithm, so creators will still have to battle for them to be seen.

Close-up of Instagram’s design for feed posts without Like counters

The change matches a growing belief that Like counts can be counter-productive or even harmful to users’ psyches. Instagram co-founder Kevin Systrom told me back in 2016 that getting away from the pressure of Like counts was one impetus for Instagram launching Stories. Last month, Twitter began testing a design that hides retweet counts behind an extra tap to similarly discourage inauthentic competition and herd mentality. And Snapchat has never shown Like counts or even follower counts, which has made it feel less stressful but also less useful for influencers.

Narcissism, envy spiraling and low self-image can all stem from staring at Like counts. They’re a constant reminder of the status hierarchies that have emerged from social networks. For many users, at some point it stopped being fun and started to feel more like working in the heart mines. If Instagram rolls out the feature, it could put the emphasis back on sharing art and self-expression, not trying to win some popularity contest.

Mobile – TechCrunch


Instagram now demotes vaguely ‘inappropriate’ content

April 11, 2019

Instagram is home to plenty of scantily clad models and edgy memes that may start to get fewer views starting today. Now Instagram says, “We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines.” That means if a post is sexually suggestive, but doesn’t depict a sex act or nudity, it could still get demoted. Similarly, if a meme doesn’t constitute hate speech or harassment, but is considered in bad taste, lewd, violent or hurtful, it could get fewer views.

Specifically, Instagram says, “this type of content may not appear for the broader community in Explore or hashtag pages,” which could severely hurt the ability of creators to gain new followers. The news came amidst a flood of “Integrity” announcements from Facebook to safeguard its family of apps revealed today at a press event at the company’s Menlo Park headquarters.

“We’ve started to use machine learning to determine if the actual media posted is eligible to be recommended to our community,” Instagram’s product lead for Discovery, Will Ruben, said. Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations, and Instagram then uses those labels to train an algorithm to identify borderline content.

These posts won’t be fully removed from the feed, and Instagram tells me for now the new policy won’t impact Instagram’s feed or Stories bar. But Facebook CEO Mark Zuckerberg’s November manifesto described the need to broadly reduce the reach of this “borderline content,” which on Facebook would mean being shown lower in News Feed. That policy could easily be expanded to Instagram in the future. That would likely reduce the ability of creators to reach their existing fans, which can impact their ability to monetize through sponsored posts or direct traffic to ways they make money like Patreon.

Facebook’s Henry Silverman explained that, “As content gets closer and closer to the line of our Community Standards at which point we’d remove it, it actually gets more and more engagement. It’s not something unique to Facebook but inherent in human nature.” The borderline content policy aims to counteract this incentive to toe the policy line. “Just because something is allowed on one of our apps doesn’t mean it should show up at the top of News Feed or that it should be recommended or that it should be able to be advertised,” said Facebook’s head of News Feed Integrity, Tessa Lyons.

This all makes sense when it comes to clickbait, false news and harassment, which no one wants on Facebook or Instagram. But when it comes to sexualized but not explicit content that has long been uninhibited and in fact popular on Instagram, or memes or jokes that might offend some people despite not being abusive, this is a significant step up of censorship by Facebook and Instagram.

Creators currently have no guidelines about what constitutes borderline content — there’s nothing in Instagram’s rules or terms of service that even mention non-recommendable content or what qualifies. The only information Instagram has provided was what it shared at today’s event. The company specified that violent, graphic/shocking, sexually suggestive, misinformation and spam content can be deemed “non-recommendable” and therefore won’t appear on Explore or hashtag pages.

[Update: After we published, Instagram posted to its Help Center a brief note about its borderline content policy, but with no visual examples, mentions of impacted categories other than sexually suggestive content, or indications of what qualifies content as “inappropriate.” So officially, it’s still leaving users in the dark.]

Instagram denied an account from a creator who claimed that the app reduced their feed and Stories reach after one of their posts that actually violated the content policy was taken down.

One female creator with around a half-million followers likened receiving a two-week demotion that massively reduced their content’s reach to Instagram defecating on them. “It just makes it like, ‘Hey, how about we just show your photo to like 3 of your followers? Is that good for you? . . . I know this sounds kind of tin-foil hatty but . . . when you get a post taken down or a story, you can set a timer on your phone for two weeks to the godd*mn f*cking minute and when that timer goes off you’ll see an immediate change in your engagement. They put you back on the Explore page and you start getting followers.”

As you can see, creators are pretty passionate about Instagram demoting their reach. Instagram’s Will Ruben said regarding the feed/Stories reach reduction: “No, that’s not happening. We distinguish between feed and surfaces where you’ve taken the choice to follow somebody, and Explore and hashtag pages where Instagram is recommending content to people.”

The questions now are whether borderline content demotions are ever extended to Instagram’s feed and Stories, and how content is classified as recommendable, non-recommendable or violating. With artificial intelligence involved, this could turn into another situation where Facebook is seen as shirking its responsibilities in favor of algorithmic efficiency — but this time in removing or demoting too much content rather than too little.

Given the lack of clear policies to point to, the subjective nature of deciding what’s offensive but not abusive, Instagram’s 1 billion user scale and its nine years of allowing this content, there are sure to be complaints and debates about fair and consistent enforcement.


Social – TechCrunch


Instagram now demotes vaguely ‘inappropriate’ content

April 11, 2019 No Comments

Instagram is home to plenty of scantily clad models and edgy memes that may start to get fewer views starting today. Now Instagram says, “We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines.” That means if a post is sexually suggestive, but doesn’t depict a sex act or nudity, it could still get demoted. Similarly, if a meme doesn’t constitute hate speech or harassment, but is considered in bad taste, lewd, violent or hurtful, it could get fewer views.

Specifically, Instagram says, “this type of content may not appear for the broader community in Explore or hashtag pages,” which could severely hurt the ability of creators to gain new followers. The news came amidst a flood of “Integrity” announcements from Facebook to safeguard its family of apps revealed today at a press event at the company’s Menlo Park headquarters.

“We’ve started to use machine learning to determine if the actual media posted is eligible to be recommended to our community,” Instagram’s product lead for Discovery, Will Ruben, said. Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations, and Instagram then uses those labels to train an algorithm to identify.

These posts won’t be fully removed from the feed, and Instagram tells me for now the new policy won’t impact Instagram’s feed or Stories bar. But Facebook CEO Mark Zuckerberg’s November manifesto described the need to broadly reduce the reach of this “borderline content,” which on Facebook would mean being shown lower in News Feed. That policy could easily be expanded to Instagram in the future. That would likely reduce the ability of creators to reach their existing fans, which can impact their ability to monetize through sponsored posts or direct traffic to ways they make money like Patreon.

Facebook’s Henry Silverman explained that, “As content gets closer and closer to the line of our Community Standards at which point we’d remove it, it actually gets more and more engagement. It’s not something unique to Facebook but inherent in human nature.” The borderline content policy aims to counteract this incentive to toe the policy line. Just because something is allowed on one of our apps doesn’t mean it should show up at the top of News Feed or that it should be recommended or that it should be able to be advertised,” said Facebook’s head of News Feed Integrity, Tessa Lyons.

This all makes sense when it comes to clickbait, false news and harassment, which no one wants on Facebook or Instagram. But when it comes to sexualized but not explicit content that has long been uninhibited and in fact popular on Instagram, or memes or jokes that might offend some people despite not being abusive, this is a significant step up of censorship by Facebook and Instagram.

Creators currently have no guidelines about what constitutes borderline content — there’s nothing in Instagram’s rules or terms of service that even mention non-recommendable content or what qualifies. The only information Instagram has provided was what it shared at today’s event. The company specified that violent, graphic/shocking, sexually suggestive, misinformation and spam content can be deemed “non-recommendable” and therefore won’t appear on Explore or hashtag pages.

[Update: After we published, Instagram posted to its Help Center a brief note about its borderline content policy, but with no visual examples, mentions of impacted categories other than sexually suggestive content, or indications of what qualifies content as “inappropriate.” So officially, it’s still leaving users in the dark.]

Instagram denied an account from a creator claiming that the app reduced their feed and Stories reach after one of their posts that actually violates the content policy taken down.

One female creator with around a half-million followers likened receiving a two-week demotion that massively reduced their content’s reach to Instagram defecating on them. “It just makes it like, ‘Hey, how about we just show your photo to like 3 of your followers? Is that good for you? . . . I know this sounds kind of tin-foil hatty but . . . when you get a post taken down or a story, you can set a timer on your phone for two weeks to the godd*mn f*cking minute and when that timer goes off you’ll see an immediate change in your engagement. They put you back on the Explore page and you start getting followers.”

As you can see, creators are pretty passionate about Instagram demoting their reach. Instagram's Will Ruben said regarding the feed/Stories reach reduction: "No, that's not happening. We distinguish between feed and surfaces where you've taken the choice to follow somebody, and Explore and hashtag pages where Instagram is recommending content to people."

The questions now are whether borderline content demotions will ever be extended to Instagram's feed and Stories, and how content gets classified as recommendable, non-recommendable or violating. With artificial intelligence involved, this could turn into another situation where Facebook is seen as shirking its responsibilities in favor of algorithmic efficiency — but this time for removing or demoting too much content rather than too little.

Given the lack of clear policies to point to, the subjective nature of deciding what’s offensive but not abusive, Instagram’s 1 billion user scale and its nine years of allowing this content, there are sure to be complaints and debates about fair and consistent enforcement.

Mobile – TechCrunch


Instagram prototypes video co-watching

March 8, 2019

The next phase of social media is about hanging out together while apart. Rather than performing on a live stream or engaging with a video chat, Instagram may allow you to chill and watch videos together with a friend. Facebook already has Watch Party for group co-viewing, and in November we broke the news that Facebook Messenger’s code contains an unreleased “Watch Videos Together” feature. Now Instagram’s code reveals a “co-watch content” feature hidden inside Instagram Direct Messaging.

It’s unclear what users might be able to watch simultaneously, but the feature could give IGTV a much-needed boost, or just let friends laugh and cringe together at Instagram feed videos and Stories. Either way, co-viewing could make you see more ads, drive more attention to creators (winning Instagram their favor) or simply rack up your time spent in the app without forcing you to create anything.

The Instagram co-watch code was discovered by TechCrunch’s favorite tipster and reverse-engineering specialist Jane Manchun Wong, who previously spotted the Messenger Watch Together code. Her past findings include Instagram’s video calling, music soundtracks and Time Well Spent dashboard, months before they were officially released. The code mentions that you can “cowatch content” that comes from a “Playlist” similar to the queues of videos Facebook Watch Party admins can tee up. Users could also check out “Suggested” videos from Instagram, which would give it a new way to promote creators or spawn a zeitgeist moment around a video. It’s not certain whether users will be able to appear picture-in-picture while watching so friends can see their reactions, but that would surely be more fun.

Instagram declined to comment on the findings, which is typical of the company when a feature has been prototyped internally but hasn’t begun external testing with users. At this stage, products can still get scrapped, or can take many months or even more than a year to launch. But given Facebook’s philosophical intention to demote mindless viewing and promote active conversation around videos, Instagram co-watching is a sensible direction.

Facebook launched Watch Party to this end back in July, and by November, 12 million Watch Parties had been started from Groups, generating 8X more comments than non-synced or Live videos. That proves co-watching can make video feel less isolating. That’s important as startups like Houseparty, with its group video chat rooms, and Squad, with its screen-sharing messaging, try to nip at Insta’s heels.

It’s also another sign that following the departure of the Instagram founders, Facebook has been standardizing features across its apps, eroding their distinct identities. Mark Zuckerberg plans to unify the backend of Facebook Messenger, WhatsApp, and Instagram to allow cross-app messaging. But Instagram has always been Facebook’s content-first app, so while Watch Party might have been built for Facebook Groups, Instagram could be where it hits its stride.

Speaking of the Instagram founders Kevin Systrom and Mike Krieger, this article’s author Josh Constine will be interviewing them on Monday 3/11 at SXSW. Come see them at 2 pm in the Austin Convention Center’s Ballroom D to hear about their thoughts on the creator economy, why they left Facebook and what they’ll do next. Check out the rest of TechCrunch’s SXSW panels here, and RSVP for our party on Sunday.


Social – TechCrunch