Facebook plans voter drive, partners with Democratic/Republican Institutes

September 19, 2018

Facebook will push users to register to vote through a partnership with TurboVote; it has partnered with the International Republican Institute and National Democratic Institute nonprofits to monitor foreign election interference; and it will publish a weekly report of trends and issues emerging from its new political ads archive. Facebook has also confirmed that its election integrity war room is up and running, and that the team is now “red teaming” how it would react to problem scenarios such as a spike in voter suppression content.

These were the major announcements from today’s briefing call between Facebook’s election integrity team and reporters.

Facebook’s voter registration drive will also partner with TurboVote, which Instagram announced yesterday will assist it with a similar initiative

Much of the call reviewed Facebook’s past efforts, but also took time to focus on the upcoming Brazilian election. There, Facebook has engaged with over 1,000 prosecutors, judges and clerks to establish a dialog with election authorities. It’s partnered with three fact-checkers in the country and worked with them on Messenger bots like “Fátima” and “Projeto Lupe” that can help people spot fake news.

The voter registration drive mirrors Instagram’s plan, announced yesterday, to work with TurboVote to push users to registration info via ads. Facebook says it will also remind people to vote on election day and let them share with friends that “I voted.” One concern is that voter registration and voting efforts by Facebook could unevenly advantage one political party, for instance a party with a base of middle-aged constituents young enough to use Facebook but not so young that they’ve abandoned it for YouTube and Snapchat. If Facebook can’t prove the efforts are fair, the drive could turn into a talking point for members of Congress eager to paint the social network as biased against their party.

The partnerships with the Institutes, which don’t operate domestically, are designed “to understand what they’re seeing on the ground in elections” around the world so Facebook can move faster to safeguard its systems, says Katie Harbath, director of Facebook’s Global Politics and Government Outreach team. Here, Facebook is admitting this problem is too big to tackle on its own. Beyond working with independent fact-checkers and government election commissions, it’s tasking nonprofits to help be its eyes and ears on the ground.

The war room isn’t finished yet, according to a story from The New York Times published in the middle of the press call. Still under construction in a central hallway between two of Facebook’s Menlo Park HQ buildings, it will fit about 20 of Facebook’s 300 staffers working on election integrity. It will feature screens showing dashboards about information flowing through Facebook to help the team quickly identify and respond to surges in false news or fake accounts.

Overall, Facebook is trying to do its homework so it’s ready for a “heat of the moment, last day before the election scenario” and won’t get caught flat-footed, says Facebook’s director of Product Management for News Feed Greg Marra. He says Facebook is “being a lot more proactive and building systems to look for problems so they don’t become big problems on our platform.” Samidh Chakrabarti, Facebook’s director of Product Management for Elections and Civic Engagement, noted that this is “one of the biggest cross-team efforts we’ve seen.”




Twitter is bringing back the chronological timeline

September 18, 2018

Your Twitter prayers are answered! Well, maybe not the prayers about harassment or the ones about an edit tweet button, but your other prayers.

Today in a series of tweets, the company announced that it had heard the cries of its various disgruntled users and will bring back a form of the pure chronological timeline that users can opt into. Twitter first took an interest in a more algorithmic timeline three-ish years ago and committed to it in 2016.

Some users were under the impression that they were living that algo-free life already by toggling off the “Show the best Tweets first” option in the account settings menu. Unfortunately for all of us, unchecking this box didn’t revert Twitter to ye olde pure chronological timeline so much as it removed some of the more prominent algorithmic bits that would otherwise be served to users first thing.  Users regularly observed non-chronological timeline behaviors even with the option toggled off.

As Twitter Product Lead Kayvon Beykpour elaborated, “We’re working on making it easier for people to control their Twitter timeline, including providing an easy switch to see the most recent tweets.”

Nostalgic users who want regular old Twitter back can expect to see the feature in testing “in the coming weeks.”

We’re ready to pull the switch; just tell us when.




Facebook is hiring a director of human rights policy to work on “conflict prevention” and “peace-building”

September 16, 2018

Facebook is advertising for a human rights policy director to join its business, located either at its Menlo Park HQ or in Washington DC — with “conflict prevention” and “peace-building” among the listed responsibilities.

In the job ad, Facebook writes that as the reach and impact of its various products continues to grow “so does the responsibility we have to respect the individual and human rights of the members of our diverse global community”, saying it’s:

… looking for a Director of Human Rights Policy to coordinate our company-wide effort to address human rights abuses, including by both state and non-state actors. This role will be responsible for: (1) Working with product teams to ensure that Facebook is a positive force for human rights and apply the lessons we learn from our investigations, (2) representing Facebook with key stakeholders in civil society, government, international institutions, and industry, (3) driving our investigations into and disruptions of human rights abusers on our platforms, and (4) crafting policies to counteract bad actors and help us ensure that we continue to operate our platforms consistent with human rights principles.

Among the minimum requirements for the role, Facebook lists experience “working in developing nations and with governments and civil society organizations around the world”.

It adds that “global travel to support our international teams is expected”.

The company has faced fierce criticism in recent years over its failure to take greater responsibility for the spread of disinformation and hate speech on its platform, especially in international markets it has targeted for business growth via its Internet.org initiative, which seeks to get more people ‘connected’ to the Internet (and thus to Facebook).

More connections means more users for Facebook’s business and growth for its shareholders. But the costs of that growth have been cast into sharp relief over the past several years as the human impact of handing millions of people lacking in digital literacy some very powerful social sharing tools — without a commensurately large investment in local education programs (or even in moderating and policing Facebook’s own platform) — has become all too clear.

In Myanmar Facebook’s tools have been used to spread hate and accelerate ethnic cleansing and/or the targeting of political critics of authoritarian governments — earning the company widespread condemnation, including a rebuke from the UN earlier this year which blamed the platform for accelerating ethnic violence against Myanmar’s Muslim minority.

In the Philippines Facebook also played a pivotal role in the election of president Rodrigo Duterte — who now stands accused of plunging the country into its worst human rights crisis since the dictatorship of Ferdinand Marcos in the 1970s and 80s.

While in India the popularity of the Facebook-owned WhatsApp messaging platform has been blamed for accelerating the spread of misinformation — leading to mob violence and the deaths of several people.

Facebook famously failed even to spot mass manipulation campaigns going on in its own backyard — when in 2016 Kremlin-backed disinformation agents injected masses of anti-Clinton, pro-Trump propaganda into its platform and garnered hundreds of millions of American voters’ eyeballs at a bargain basement price.

So it’s hardly surprising the company has been equally naive in markets it understands far less. Though also hardly excusable — given all the signals it has access to.

In Myanmar, for example, local organizations that are sensitive to the cultural context repeatedly complained to Facebook that it lacked Burmese-speaking staff — complaints that apparently fell on deaf ears for the longest time.

The cost to American society of social media-enabled political manipulation and increased social division is certainly very high. The costs of the weaponization of digital information in markets such as Myanmar look incalculable.

In the Philippines Facebook also indirectly has blood on its hands — having provided services to the Duterte government to help it make more effective use of its tools. This same government is now waging a bloody ‘war on drugs’ that Human Rights Watch says has claimed the lives of around 12,000 people, including children.

Facebook’s job ad for a human rights policy director includes the pledge that “we’re just getting started” — referring to its stated mission of helping  people “build stronger communities”.

But when you consider the impact its business decisions have already had in certain corners of the world it’s hard not to read that line with a shudder.

Citing the UN Guiding Principles on Business and Human Rights (and “our commitments as a member of the Global Network Initiative”), Facebook writes that its product policy team is dedicated to “understanding the human rights impacts of our platform and to crafting policies that allow us both to act against those who would use Facebook to enable harm, stifle expression, and undermine human rights, and to support those who seek to advance rights, promote peace, and build strong communities”.

Clearly it has an awful lot of “understanding” to do on this front. And hopefully it will now move fast to understand the impact of its own platform, circa fifteen years into its great ‘society reshaping experiment’, and prevent Facebook from being repeatedly used to trash human rights.

As well as representing the company in meetings with politicians, policymakers, NGOs and civil society groups, Facebook says the new human rights director will work on formulating internal policies governing user, advertiser, and developer behavior on Facebook. “This includes policies to encourage responsible online activity as well as policies that deter or mitigate the risk of human rights violations or the escalation of targeted violence,” it notes. 

The director will also work with internal public policy, community ops and security teams to try to spot and disrupt “actors that seek to misuse our platforms and target our users” — while also working to support “those using our platforms to foster peace-building and enable transitional justice”.

So you have to wonder how, for example, Holocaust denial continuing to be protected speech on Facebook will square with the human rights policy director’s stated mission.

At the same time, Facebook is currently hiring for a public policy manager in Francophone Africa — who it writes can “combine a passion for technology’s potential to create opportunity and to make Africa more open and connected, with deep knowledge of the political and regulatory dynamics across key Francophone countries in Africa”.

That job ad does not explicitly reference human rights — talking only about “interesting public policy challenges… including privacy, safety and security, freedom of expression, Internet shutdowns, the impact of the Internet on economic growth, and new opportunities for democratic engagement”.

As well as “new opportunities for democratic engagement”, among the role’s other listed responsibilities is working with Facebook’s Politics & Government team to “promote the use of Facebook as a platform for citizen and voter engagement to policymakers and NGOs and other political influencers”.

So here, in a second policy job, Facebook looks to be continuing its ‘business as usual’ strategy of pushing for more political activity to take place on Facebook.

And if Facebook wants an accelerated understanding of human rights issues around the world, it might be better advised to take a more joined-up approach to human rights across its own policy staff, and at least include human rights among the listed responsibilities of all the policy shapers it’s looking to hire.




Facebook’s new ‘SapFix’ AI automatically debugs your code

September 14, 2018

Facebook has quietly built and deployed an artificial intelligence programming tool called SapFix that scans code, automatically identifies bugs, tests different patches and suggests the best ones that engineers can choose to implement. Revealed today at Facebook’s @Scale engineering conference, SapFix is already running on Facebook’s massive code base and the company plans to eventually share it with the developer community.

“To our knowledge, this marks the first time that a machine-generated fix — with automated end-to-end testing and repair — has been deployed into a codebase of Facebook’s scale,” writes Facebook’s developer tool team. “It’s an important milestone for AI hybrids and offers further evidence that search-based software engineering can reduce friction in software development.” SapFix can run with or without Sapienz, Facebook’s previously announced automated bug spotter; when the two run together, SapFix suggests fixes for the problems Sapienz discovers.
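
To make the “search-based” idea concrete, here is a minimal sketch of a generate-and-validate repair loop in Python. Everything in it is hypothetical: the mutation strategies, the pytest harness and the function names are invented for illustration and are not Facebook’s actual SapFix implementation.

```python
# Hypothetical sketch of generate-and-validate program repair.
# Mutation strategies and test harness are invented for illustration.
import subprocess
from typing import List, Optional

def candidate_patches(source: str, crash_line: int) -> List[str]:
    """Propose simple mutations around the crash site: guard it or drop it."""
    lines = source.splitlines()
    indent = lines[crash_line][: len(lines[crash_line]) - len(lines[crash_line].lstrip())]
    guarded = lines[:crash_line] + [indent + "if value is None: return None"] + lines[crash_line:]
    removed = lines[:crash_line] + lines[crash_line + 1:]
    return ["\n".join(guarded), "\n".join(removed)]

def passes_tests(patched_source: str) -> bool:
    """Write the candidate patch to disk and run the project's test suite."""
    with open("candidate_module.py", "w") as f:
        f.write(patched_source)
    return subprocess.run(["pytest", "-q"], capture_output=True).returncode == 0

def repair(source: str, crash_line: int) -> Optional[str]:
    """Return the first candidate patch that makes the tests pass, if any."""
    for candidate in candidate_patches(source, crash_line):
        if passes_tests(candidate):
            return candidate  # per Facebook's description, an engineer still reviews the fix
    return None
```

The loop of proposing candidate patches, validating them against automated tests and leaving the final call to a human engineer matches the workflow the announcement describes, just at a vastly smaller scale.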

These types of tools could allow smaller teams to build more powerful products, or let big corporations save a ton on wasted engineering time. That’s critical for Facebook as it has so many other problems to worry about.


Glow AI hardware partners

Meanwhile, Facebook is pressing forward with its strategy of reorienting the computing hardware ecosystem around its own machine learning software. Today it announced that top silicon manufacturers, including Cadence, Esperanto, Intel, Marvell, and Qualcomm, have signed up to support Glow, its compiler for machine learning hardware acceleration. The plan mirrors Facebook’s Open Compute Project for open sourcing server designs and its Telecom Infra Project for connectivity technology.

Glow works with a wide array of machine learning frameworks and hardware accelerators to speed up how they perform deep learning processes. It was open sourced earlier this year at Facebook’s F8 conference.

“Hardware accelerators are specialized to solve the task of machine learning execution. They typically contain a large number of execution units, on-chip memory banks, and application-specific circuits that make the execution of ML workloads very efficient,” Facebook’s team writes. “To execute machine learning programs on specialized hardware, compilers are used to orchestrate the different parts and make them work together . . . Hardware partners that use Glow can reduce the time it takes to bring their product to market.”
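
As a rough illustration of the orchestration described in that quote, here is a toy sketch of what an ML compiler’s lowering and code-generation stages do. The op names, the tiny IR and the “accel0” backend label are all invented for this example; this is not Glow’s real API.

```python
# Toy ML-compiler pipeline: lower high-level graph ops into primitives,
# then emit an instruction stream for a hardware backend.
# The ops, IR and "accel0" backend are invented; this is not Glow's real API.
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    op: str            # operation name, e.g. "FullyConnected"
    inputs: List[str]  # names of input tensors
    output: str        # name of the produced tensor

def lower(graph: List[Node]) -> List[Node]:
    """Decompose high-level ops into primitives a backend can execute."""
    lowered: List[Node] = []
    for node in graph:
        if node.op == "FullyConnected":
            # A fully connected layer becomes a matrix multiply plus a bias add.
            lowered.append(Node("MatMul", node.inputs[:2], node.output + "_mm"))
            lowered.append(Node("Add", [node.output + "_mm", node.inputs[2]], node.output))
        else:
            lowered.append(node)
    return lowered

def codegen(graph: List[Node], backend: str) -> List[str]:
    """Emit a (fake) instruction stream for the chosen accelerator."""
    return [f"{backend}: {n.op}({', '.join(n.inputs)}) -> {n.output}" for n in graph]

# Example: one fully connected layer with input x, weights w and bias b.
for instruction in codegen(lower([Node("FullyConnected", ["x", "w", "b"], "y")]), "accel0"):
    print(instruction)
```

The appeal of a shared compiler layer is that lowering and optimization are written once while each silicon partner plugs in its own backend, which is presumably the time-to-market benefit Facebook is pitching to its partners.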

Facebook VP of infrastructure Jason Taylor

Essentially, Facebook needs help in the silicon department. Instead of isolating itself and building its own chips like Apple and Google, it’s effectively outsourcing the hardware development to the experts. That means it might forego a competitive advantage from this infrastructure, but it also allows it to save money and focus on its core strengths.

“What I talked about today was the difficulty of predicting what chip will really do well in the market. When you build a piece of silicon, you’re making predictions about where the market is going to be in two years,” Facebook’s VP of infrastructure Jason Taylor tells me. “The big question is if the workload that they design for is the workload that’s really important at the time. You’re going to see this fragmentation. At Facebook, we want to work with all the partners out there so we have good options now and over the next several years.” Essentially, by partnering with all the chip makers instead of building its own, Facebook future-proofs its software against volatility in which chip becomes the standard.

The technologies aside, the Scale conference was evidence that Facebook will keep hacking, policy scandals be damned. There was nary a mention of Cambridge Analytica or election interference as a packed room of engineers chuckled at nerdy jokes during keynotes stuffed with enough coding jargon to make the unindoctrinated assume it was in another language. If Facebook is burning, you couldn’t tell from here.




Snapchat shares hit all-time low as search acquisition Vurb’s CEO bails

September 13, 2018

Snapchat’s sagging share price is making it tough to retain talent. Bobby Lo, founder and CEO of the mobile search app Vurb, which Snap Inc. acquired for $114.5 million two years ago, is leaving day-to-day operations at the company. That means Lo cut out early on his four-year retention package vesting schedule, a decision likely influenced by Snapchat’s fall to new share price lows. Snap is trading around $9.15 today, compared to its $17 IPO price and $24 first-day close.

That’s down over 7 percent from yesterday, after BTIG analyst Rich Greenfield gave Snap a sell rating with a target price of $5, saying, “We are tired of Snapchat’s excuses for missing numbers and are no longer willing to give management ‘time’ to figure out monetization.” Greenfield is known as one of the top social network analysts, so people take him seriously when he says, “We have been disappointed in SNAP’s product evolution (as have users) and see no reason to believe this will change.”

Vurb is a good example of this. The app let users make plans with friends to visit local places, allowing them to bundle restaurants, movie theaters, and more into shareable decks of search cards. It took over a year after the October 2016 acquisition for the tech to be integrated into Snapchat in the form of context cards in search. But Snap never seemed to figure out how to make its content-craving teen audience care about Vurb’s utility. Snap could have built powerful offline meetup tools out of the cards but never did, and lackluster Snap Map adoption further clouded the company’s path forward around local businesses.

Now Lo tells TechCrunch of his departure, “Building experiences at Snap has been a wonderful culmination of my seven-year startup journey with Vurb. My transition to an advisor at Snap lets me continue supporting the amazing people there while directing my time back into startups, starting with investing and advising in founders.”

Lo was early to embrace the monolithic app style pioneered by WeChat in China that’s become increasingly influential in the States. Snap confirmed the departure while trying to downplay it. A spokesperson tells me, “Bobby transitioned to an advisory role this summer, and we appreciate his continued contributions to Snap.”

Given that Snap is known to back-weight its stock vesting schedules, Lo could be leaving over half of his retention shares on the table. That decision should worry investors. As a solo founder, Lo already made off with a big chunk of the acquisition price, which included $21 million in cash and $83 million in stock, so with the company’s share price so low, he might have had little incentive to stay.


Snapchat Context Cards built from Vurb’s acquired technology

Since last July, Snap has lost a ton of talent, including SVP of Engineering Tim Sehn, early employee Chloe Drimal, VP of HR and Legal Robyn Thomas, VP of Security and Facilities Martin Lev, CFO Drew Vollero, VP of product Tom Conrad, TimeHop co-founder Jonathan Wegener, Spectacles team lead Mark Randall, ad tech manager Sriram Krishnan, head of sales Jeff Lucas and, just last week, its COO Imran Khan.

With its user count shrinking, constant competition from Facebook and Instagram, and talent fleeing, it’s hard to see a bright future for Snap. Unless CEO Evan Spiegel, without the help of his departed lieutenants, can come up with a groundbreaking new product that’s not easy to copy, we could be looking at a downward spiral for the ephemeral app. At what point must Snap consider selling itself to Google, Apple, Tencent, Disney, or whoever else will take on the distressed social network?




Joe Biden is headed to IGTV

September 11, 2018

What better way to reach millennial voters ahead of a 2020 presidential run than through Instagram?

Joe Biden, in partnership with ATTN:, will host a 10-episode series streaming on IGTV beginning September 12. In reality, he has yet to confirm a presidential run; the partnership, rather, is meant to help combat digital misinformation in an era of “fake news.” 

The show, called “Here’s the Deal,” will air weekly until the midterm elections on November 6. Each episode will hit on big issues, including gun safety, education, infrastructure and healthcare.

“Folks, with less than 100 days until the most consequential election of our lifetimes, we’ve got to keep our eye on the ball,” Biden says in the announcement, adding that the show will not have “complicated, policy-wonk language or acronyms. Just facts — at least as I see them.”

Biden had just come off an Instagram hiatus when digital media startup ATTN: announced the news. On Saturday, former President Barack Obama posted a tribute welcoming Biden back to Instagram.

In a bid to compete with YouTube, Instagram launched IGTV at the end of June. The new feature lets users upload vertical videos of up to one hour in length.




Twitter launches audio-only broadcasting feature on its iOS app and Periscope

September 7, 2018

Twitter is launching a new feature that allows users to create audio-only broadcasts directly from Twitter itself, as well as Twitter’s Periscope. The feature, which Twitter CEO Jack Dorsey confirmed in a tweet this morning, is available from the same interface where you would normally launch live video. It’s currently accessible on the Twitter for iOS app, as well as on Periscope.

Now, instead of only having the option to record video after you tap “Live,” there’s a button you can tap to pick audio-only broadcast.

The feature was seen in beta testing in recent weeks, but @jack’s tweet, along with the mobile app’s update log, indicates it has now rolled out to all.

Twitter also confirmed to TechCrunch that the feature is available only on the Twitter app for iOS and on Periscope for the time being. It hasn’t provided a time frame for when it will reach other platforms.

While only those users can record audio at present, all Twitter users across platforms will be able to see the recordings and play them back.

As the update text explains, the feature is valuable for those times when you want viewers to hear you but not see you. This could allow people to share live news on Twitter of an audio-only nature, record sharable mini-podcasts, or post something to their followers that takes longer than 280 characters to explain.

Similar to live video, audio broadcasters will be able to view their stats, like number of live viewers, replay viewers, time watched and other metrics.

The company plans to share the news through an official Twitter Engineering blog post shortly.

Update: Twitter has now tweeted the news on its own account, as well.




Justice Department’s threat to social media giants is wrong

September 6, 2018

Never has it been so clear that the attorneys charged with enforcing the laws of the country have a complete disregard for the very laws they’re meant to enforce.

As executives of Twitter and Facebook took to the floor of the Senate to testify about their companies’ responses to international meddling in U.S. elections and addressed the problem of propagandists and polemicists using their platforms to spread misinformation, the legal geniuses at the Justice Department were focused on a free speech debate that isn’t just unprecedented, but also potentially illegal.

These attorneys general convened to confabulate on the “growing concern” that social media companies are stifling expression and hurting competition. What’s really at issue is a conservative canard and talking point that tries to make a case that private companies have a First Amendment obligation to allow any kind of speech on their platforms.

The simple fact is that they do not. Let me repeat that. They simply do not.

What the government’s lawyers are trying to do is foist a responsibility that they have to uphold the First Amendment onto private companies that are under no such obligation. Why are these legal eagles so up in arms? The simple answer is the decision made by many platforms to silence voices that violate the expressed policies of the platforms they’re using.

Chief among these is Alex Jones, who has claimed that the Sandy Hook school shooting was a hoax and accused victims of the Parkland school shooting of being crisis actors.

Last month a number of those social media platforms that distributed Jones finally decided that enough was enough.

The decision to boot Jones is their prerogative as private companies. While Jones has the right to shout whatever he wants from a soapbox in free speech alley (or a back alley, or into a tin can) — and while he can’t be prosecuted for anything that he says (no matter how offensive, absurd or insane) — he doesn’t have the right to have his opinions automatically amplified by every social media platform.

Almost all of the big networking platforms have come to that conclusion.

The technology-lobbying body has already issued a statement excoriating the Department of Justice for its ham-handed approach.

[The] U.S. Department of Justice (DOJ) today released a statement saying that it was convening state attorneys general to discuss its concerns that these companies were “hurting competition and intentionally stifling the free exchange of ideas.” Social media platforms have the right to determine what types of legal speech they will permit on their platforms. It is inappropriate for the federal government to use the threat of law enforcement to limit companies from exercising this right. In particular, law enforcement should not threaten social media companies with unwarranted investigations for their efforts to rid their platforms of extremists who incite hate and violence.

While the Justice Department’s approach muddies the waters and makes it more difficult for legitimate criticism and reasoned regulation of the social media platforms to take hold, there are legitimate issues that legislators need to address.

Indeed, many of them were raised in a white paper from Senator Mark Warner, which was released in the dog days of summer.

Or the Justice Department could focus on the issues that Senator Ron Wyden emphasized in the hours after the hearing.

Instead of focusing on privacy or security, attorneys general for the government are waging a Pyrrhic war against censorship that doesn’t exist and ignoring the real cold war for platform security.




It’s time for Facebook and Twitter to coordinate efforts on hate speech

September 2, 2018

Since the election of Donald Trump in 2016, there has been burgeoning awareness of the hate speech on social media platforms like Facebook and Twitter. While activists have pressured these companies to improve their content moderation, few groups (outside of the German government) have outright sued the platforms for their actions.

That’s because of a legal distinction between media publications and media platforms that has made solving hate speech online a vexing problem.

Take, for instance, an op-ed published in the New York Times calling for the slaughter of an entire minority group.  The Times would likely be sued for publishing hate speech, and the plaintiffs may well be victorious in their case. Yet, if that op-ed were published in a Facebook post, a suit against Facebook would likely fail.

The reason for this disparity? Section 230 of the Communications Decency Act (CDA), which provides platforms like Facebook with a broad shield from liability when a lawsuit turns on what its users post or share. The latest uproar against Alex Jones and Infowars has led many to call for the repeal of section 230 – but that may lead to government getting into the business of regulating speech online. Instead, platforms should step up to the plate and coordinate their policies so that hate speech will be considered hate speech regardless of whether Jones uses Facebook, Twitter or YouTube to propagate his hate. 

A primer on section 230 

Section 230 is considered a bedrock of freedom of speech on the internet. Passed in the mid-1990s, it is credited with freeing platforms like Facebook, Twitter, and YouTube from the risk of being sued for content their users upload, and therefore powering the exponential growth of these companies. If it weren’t for section 230, today’s social media giants would have long been bogged down with suits based on what their users post, with the resulting necessary pre-vetting of posts likely crippling these companies altogether. 

Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.

Private regulation of hate speech 

Of course, section 230 has not meant that hate speech online has gone unchecked. Platforms like Facebook, YouTube and Twitter all have their own extensive policies prohibiting users from posting hate speech. Social media companies have hired thousands of moderators to enforce these policies and to hold violating users accountable by suspending them or blocking their access altogether. But the recent debacle with Alex Jones and Infowars presents a case study on how these policies can be inconsistently applied.  

Jones has for years fabricated conspiracy theories, like the one claiming that the Sandy Hook school shooting was a hoax and that Democrats run a global child-sex trafficking ring. With thousands of followers on Facebook, Twitter, and YouTube, Jones’ hate speech has had real life consequences. From the brutal harassment of Sandy Hook parents to a gunman storming a pizza restaurant in D.C. to save kids from the restaurant’s nonexistent basement, his messages have had serious deleterious consequences for many. 

Alex Jones and Infowars were finally suspended from ten platforms by our count – with even Twitter falling in line and suspending him for a week after first dithering. But the varying and delayed responses exposed how different platforms handle the same speech.  

Inconsistent application of hate speech rules across platforms, compounded by recent controversies involving the spread of fake news and the contribution of social media to increased polarization, have led to calls to amend or repeal section 230. If the printed press and cable news can be held liable for propagating hate speech, the argument goes, then why should the same not be true online – especially when fully two-thirds of Americans now report getting at least some of their news from social media.  Amid the chorus of those calling for more regulation of tech companies, section 230 has become a consistent target. 

Should hate speech be regulated? 

But if you need convincing as to why the government is not best placed to regulate speech online, look no further than Congress’s own wording in section 230. The section, enacted in the mid-90s, states that online platforms “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”

Section 230 goes on to declare that it is the “policy of the United States . . . to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet.”  Based on the above, section 230 offers the now infamous liability protection for online platforms.  

From the simple fact that most of what we see on our social media is dictated by algorithms over which we have no control, to the Cambridge Analytica scandal, to increased polarization because of the propagation of fake news on social media, one can quickly see how Congress’s words in 1996 read today as a catalogue of inaccurate predictions. Even Ron Wyden, one of the original drafters of section 230, himself admits today that drafters never expected an “individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children” to be enabled through the protections offered by section 230.

It would be hard to argue that today’s Congress – having shown little understanding in recent hearings of how social media operates to begin with – is any more qualified at predicting the effects of regulating speech online twenty years from now.   

More importantly, the burden of complying with new regulations will definitely result in a significant barrier to entry for startups and therefore have the unintended consequence of entrenching incumbents. While Facebook, YouTube, and Twitter may have the resources and infrastructure to handle compliance with increased moderation or pre-vetting of posts that regulations might impose, smaller startups will be at a major disadvantage in keeping up with such a burden.

Last chance before regulation 

The answer has to lie with the online platforms themselves. Over the past two decades, they have amassed a wealth of experience in detecting and taking down hate speech. They have built up formidable teams with varied backgrounds to draft policies that take into account an ever-changing internet. Their profits have enabled them to hire away top talent, from government prosecutors to academics and human rights lawyers.  

These platforms also have been on a hiring spree in the last couple of years to ensure that their product policy teams – the ones that draft policies and oversee their enforcement – are more representative of society at large. Facebook proudly announced that its product policy team now includes “a former rape crisis counselor, an academic who has spent her career studying hate organizations . . . and a teacher.” Gone are the days when a bunch of engineers exclusively decided where to draw the lines. Big tech companies have been taking the drafting and enforcement of their policies ever more seriously.

What they now need to do is take the next step and start to coordinate policies so that those who wish to propagate hate speech can no longer game policies across platforms. Waiting for controversies like Infowars to become a full-fledged PR nightmare before taking concrete action will only increase calls for regulation. Proactively pooling resources when it comes to hate speech policies and establishing industry-wide standards will provide a defensible reason to resist direct government regulation.

The social media giants can also build public trust by helping startups get up to speed on the latest approaches to content moderation. While any industry consortium around coordinating hate speech is certain to be dominated by the largest tech companies, they can ensure that policies are easy to access and widely distributed.

Coordination between fierce competitors may sound counterintuitive. But the common problem of hate speech and the gaming of online platforms by those trying to propagate it call for an industry-wide response. Precedent exists for tech titans coordinating when faced with a common threat. Just last year, Facebook, Microsoft, Twitter, and YouTube formalized their “Global Internet Forum to Counter Terrorism” – a partnership to curb the threat of terrorist content online. Fighting hate speech is no less laudable a goal.

Self-regulation is an immense privilege. To the extent that big tech companies want to hold onto that privilege, they have a responsibility to coordinate the policies that underpin their regulation of speech and to enable startups and smaller tech companies to get access to these policies and enforcement mechanisms.




Twitter hints at new threaded conversations and who’s online features

September 1, 2018

Twitter head Jack Dorsey sent out a tweet this afternoon hinting the social platform might get a couple of interesting updates to tell us who else is currently online and to help us more easily follow Twitter conversation threads.

“Playing with some new Twitter features: presence (who else is on Twitter right now?) and threading (easier to read convos),” Dorsey tweeted, along with samples.

The “presence” feature would make it easier to engage with those you follow who are online at the moment, and the “threading” feature would let Twitter users follow a conversation more easily than the current embed and click-through method allows.

However, several responders seemed concerned about followers seeing them online.

Twitter’s head of product, Sarah Haider, responded to one such concern, saying she “would definitely want you to have full control over sharing your presence.” So it seems there will be some way to hide that you’re online if you don’t want people to know you’re there.

There were also a few design concerns about threading conversations together. TC OG reporter turned VC M.G. Siegler wasn’t a fan of the UI’s flat tops. Another user wanted to see something more like iMessage. I personally like the nesting idea: it cleans things up and makes it easier to follow along, and I really don’t care how it’s designed (flat tops, round tops) as long as I don’t have to click through a bunch like I do with the @reply.

I also don’t think I’d want others knowing if I’m online and it’s not a feature I need for those I tweet at, either. Conversations happen at a ripping pace on the platform sometimes. You are either there for it or you can read about it later. I get the thinking on letting users know who’s live but it’s not necessary and seems to be something a lot of people don’t want.

It’s unclear when either of these features will roll out to the general public, though they’re available to a select test group. We’ve asked Twitter for more information and are waiting to hear back. Of course, plenty of users are still wondering when we’re getting that edit button.

