CBPO

Tag: Time

It’s time for Facebook and Twitter to coordinate efforts on hate speech

September 2, 2018

Since the election of Donald Trump in 2016, there has been burgeoning awareness of the hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group. The Times would likely be sued for publishing hate speech, and the plaintiffs might well prevail. Yet, if that same op-ed were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used it to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps: when a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit, finding under section 230 that a user’s misrepresentation of his age was not the app’s responsibility.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like his claims that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real-life consequences. From the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant’s nonexistent basement, his messages have done serious harm to many.

Alex Jones and Infowars were finally suspended from ten platforms by our count – with even Twitter falling in line and suspending him for a week after first dithering. But the varying and delayed responses exposed how different platforms handle the same speech.  

Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, has led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online – especially when fully two-thirds of Americans now report getting at least some of their news from social media? Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target.

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section, enacted in the mid-1990s, states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”

Section 230 goes on to declare that it is the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.” On the basis of these findings, section 230 then offers its now-infamous liability protection for online platforms.

From the simple fact that most of what we see on our social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to increased polarization because of the propagation of fake news on social media, one can quickly see how Congress’s words in 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of the original drafters of section 230, admits today that its drafters never expected an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children” to be enabled through the protections offered by section 230.

It would be hard to argue that today’s Congress – having shown little understanding in recent hearings of how social media operates to begin with – is any better qualified to predict the effects of regulating speech online twenty years from now.

More importantly, the burden of complying with new regulations would create a significant barrier to entry for startups and thereby have the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle compliance with the increased moderation or pre-vetting of posts that regulations might impose, smaller startups would be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. While any industry consortium formed to coordinate hate speech policies is certain to be dominated by the largest tech companies, those companies can ensure that the resulting policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.


Social – TechCrunch


Tried Everything to Get Qualified Leads? Time to Try Programmatic

July 27, 2018

Join us on Thursday, August 2nd for the webinar you need to attend to understand why programmatic could be a PPC life-changer. Criteo’s Ned Samuelson and Hanapin’s Bryan Gaynor can’t wait to show you their tips and tricks. 

Read more at PPCHero.com
PPC Hero


First look at Instagram’s self-policing Time Well Spent tool

June 17, 2018

Are you Overgramming? Instagram is stepping up to help you manage overuse rather than leaving it to iOS and Android’s new screen time dashboards. Last month, after TechCrunch first reported that Instagram was prototyping a Usage Insights feature, the Facebook sub-company’s CEO Kevin Systrom confirmed its forthcoming launch.

Tweeting our article, Systrom wrote “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Now we have our first look at the tool via Jane Manchun Wong, who’s recently become one of TechCrunch’s favorite sources thanks to her skills at digging new features out of apps’ Android APK code. Though Usage Insights might change before an official launch, these screenshots give us an idea of what Instagram will include. Instagram declined to comment, saying it didn’t have any more to share about the feature at this time.

This unlaunched version of Instagram’s Usage Insights tool offers users a daily tally of their minutes spent on the app. They’ll be able to set a daily limit on time spent and get a reminder once they exceed it. There’s also a shortcut to manage Instagram’s notifications so the app is less interruptive. Instagram has been spotted testing a new hamburger button that opens a slide-out navigation menu on the profile; that might be where the link to Usage Insights shows up, judging by this screenshot.

Instagram doesn’t appear to be going so far as to lock you out of the app after your limit, or fading it to grayscale which might annoy advertisers and businesses. But offering a handy way to monitor your usage that isn’t buried in your operating system’s settings could make users more mindful.

Instagram has an opportunity to be a role model here, especially if it gives its Usage Insights feature sharper teeth. For example, rather than a single notification when you hit your daily limit, it could remind you every 15 minutes thereafter, or create some persistent visual flag so you know you’ve broken your self-imposed rule.

Instagram has already started to push users towards healthier behavior with a “You’re all caught up” notice when you’ve seen everything in your feed and should stop scrolling.

I expect more apps to attempt to self-police with tools like these rather than leaving themselves at the mercy of iOS’s Screen Time and Android’s Digital Wellbeing features that offer more drastic ways to enforce your own good intentions.

Both let you see overall usage of your phone and stats about individual apps. iOS lets you easily dismiss alerts about hitting your daily limit in an app but delivers a weekly usage report (ironically via notification), while Android will gray out an app’s icon and force you to go to your settings to unlock an app once you exceed your limit.

For Android users especially, Instagram wants to avoid looking like such a time sink that you put one of those hard limits on your use. In that sense, self-policing shows empathy for users’ mental health, but it is also a self-preservation strategy. With Instagram slated to launch a long-form video hub this week that could drive even longer session times, Usage Insights could be seen as either hypocritical or more necessary than ever.

New time management tools coming to iOS (left) and Android (right). Images via The Verge.

Instagram is one of the world’s most beloved apps, but also one of the most easily abused. From envy spiraling as you watch the highlights of your friends’ lives to body image issues propelled by its endless legions of models, there are plenty of ways to make yourself feel bad scrolling the Insta feed. And since there’s so little text, no links, and few calls for participation, it’s easy to zombie-browse in the passive way research shows is most dangerous.

We’re in a crisis of attention. Mobile app business models often rely on maximizing our time spent to maximize their ad or in-app purchase revenue. But carrying the bottomless temptation of the Internet in our pockets threatens to leave us distracted, less educated, and depressed. We’ve evolved to crave dopamine hits from blinking lights and novel information, but never had such an endless supply.

There’s value to connecting with friends by watching their days unfold through Instagram and other apps. But tech giants are thankfully starting to be held responsible for helping us balance that with living our own lives.


Social – TechCrunch


Time to VOTE! 2018 Top 25 Most Influential PPC Experts

April 12, 2018

This year is flying by, but GUESS WHAT? The time has come to vote! And I’m not talking about politics – I’m talking about voting for the Top 25 Most Influential PPC Experts of 2018. Are there PPC experts that you love following on Twitter? Or that you never miss a chance to watch speak? […]

Read more at PPCHero.com
PPC Hero


Hit The Facebook Ad On The Head Every Time

February 11, 2018

Making Facebook ad targeting simple starts with laying out a plan for each campaign. Each dollar you put into your campaigns needs to have a purpose.

Read more at PPCHero.com
PPC Hero


A New Bill Wants Jail Time for Execs Who Hide Data Breaches

December 3, 2017

A bill to punish hack hiders, Apple bug fix bumbling, and more of the week’s top security stories.
Feed: All Latest


GoodTime nabs $2M to match job applicants with interviewers to save time and build rapport

August 21, 2017

 Despite countless attempts and millions in venture capital, the calendar, one of the most ubiquitous work tools, has remained largely unchanged for as long as I can remember. Rather than overwrite the calendar in an effort to make it obsolete, Ahryun Moon and Jasper Sone, co-founders of GoodTime, are putting the calendar front and center — embracing it as a means of understanding people. Read More
Startups – TechCrunch


Ava DuVernay’s ‘A Wrinkle in Time’ Is the Ultimate Adaptation

July 16, 2017

In the director’s vision, Madeleine L’Engle’s 1963 story looks like a fantasy world—and the real one as well.
Wired


Now is the time to apply for Startup Battlefield Africa

June 16, 2017

 We’re looking for a few good startups based in sub-Saharan Africa to participate in our inaugural Startup Battlefield Africa, and all you startup folks out there in the region aren’t going to want to miss this. TechCrunch is excited to be partnering with Facebook to bring our illustrious startup competition, the Startup Battlefield, to Nairobi, Kenya, later this year. The… Read More
Startups – TechCrunch