
VOCHI raises additional $2.4 million for its computer vision-powered video editing app

July 22, 2021

VOCHI, a Belarus-based startup behind a clever computer vision-based video editing app used by online creators, has raised an additional $2.4 million in a “late-seed” round that follows the company’s initial $1.5 million round led by Ukraine-based Genesis Investments last year. The new funds follow a period of significant growth for the mobile tool, which is now used by over 500,000 people per month and has achieved a $4 million-plus annual run rate in a year’s time.

Investors in the most recent round include TA Ventures, Angelsdeck, A.Partners, Startup Wise Guys, Kolos VC, and angels from other Belarus-based companies like Verv and Bolt. Along with the fundraise, VOCHI is elevating the company’s first employee, Anna Bulgakova, who began as head of marketing, to the position of co-founder and Chief Product Officer.

According to VOCHI co-founder and CEO Ilya Lesun, the company’s idea was to provide an easy way for people to create professional edits that produce unique and trendy social media content, helping them stand out and become more popular. To do so, VOCHI leverages a proprietary computer vision-based video segmentation algorithm that applies various effects to specific moving objects in a video or to static images.

“To get this result, there are two trained [convolutional neural networks] to perform semi-supervised Video Object Segmentation and Instance Segmentation,” explains Lesun, of VOCHI’s technology. “Our team also developed a custom rendering engine for video effects that enables instant application in 4K on mobile devices. And it works perfectly without quality loss,” he adds. It works pretty fast, too — effects are applied in just seconds.
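For a rough sense of how mask-gated video effects of this kind work, here is a minimal sketch in Python with OpenCV. To be clear, this is not VOCHI’s code: `segment_person` is a hypothetical stand-in for the app’s proprietary segmentation CNNs, and the glow effect is a simple illustrative blend.

```python
# Minimal sketch of mask-gated video effects (not VOCHI's actual pipeline).
import cv2
import numpy as np

def segment_person(frame: np.ndarray) -> np.ndarray:
    """Placeholder: return a binary mask of the tracked object.
    A real implementation would run a segmentation CNN per frame."""
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    h, w = mask.shape
    cv2.circle(mask, (w // 2, h // 2), min(h, w) // 4, 255, -1)  # dummy region
    return mask

def apply_glow(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend a blurred, brightened copy of the frame back in,
    but only where the mask says the object is."""
    glow = cv2.GaussianBlur(frame, (0, 0), sigmaX=15)
    glow = cv2.addWeighted(frame, 0.6, glow, 0.8, 10)
    mask3 = cv2.merge([mask] * 3).astype(np.float32) / 255.0
    return (frame * (1 - mask3) + glow * mask3).astype(np.uint8)

cap = cv2.VideoCapture("input.mp4")  # illustrative filename
out = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if out is None:
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        out = cv2.VideoWriter("output.mp4", fourcc, 30.0,  # fps assumed
                              (frame.shape[1], frame.shape[0]))
    out.write(apply_glow(frame, segment_person(frame)))
cap.release()
if out is not None:
    out.release()
```

The per-frame mask is what lets an effect track a moving subject rather than being applied to the whole frame; VOCHI’s pitch is doing this in real time, in 4K, on a phone.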

The company used the initial seed funding to invest in marketing and product development, growing its catalog to over 80 unique effects and more than 30 filters.

Image Credits: VOCHI

Today, the app offers a number of tools that let you give a video a particular aesthetic (like a dreamy vibe, artistic feel, or 8-bit look, for example). It can also highlight the moving content with glowing lines, add blurs or motion, apply different filters, insert 3D objects into the video, add glitter or sparkles, and much more.

In addition to editing their content directly, users can swipe through a vertical home feed in the app where they can view the video edits others have applied to their own content for inspiration. When they see something they like, they can then tap a button to use the same effect on their own video. The finished results can then be shared out to other platforms, like Instagram, Snapchat and TikTok.

Though based in Belarus, most of VOCHI’s users are young adults from the U.S. Others hail from Russia, Saudi Arabia, Brazil and parts of Europe, Lesun says.

Unlike some of its video editor rivals, VOCHI offers a robust free experience where around 60% of the effects and filters are available without having to pay, along with other basic editing tools and content. More advanced features, like effect settings, unique presets and various special effects, require a subscription. This subscription, however, isn’t cheap — it’s either $7.99 per week or $39.99 for 12 weeks. This seemingly aims the subscription more at professional content creators than at casual users just looking to have fun with their videos from time to time. (A one-time purchase of $150 is also available, if you prefer.)

To date, around 20,000 of VOCHI’s 500,000 monthly active users have committed to a paid subscription, and that number is growing at a rate of 20% month-over-month, the company says.

Image Credits: VOCHI

The numbers VOCHI has delivered, however, aren’t as important as what the startup has been through to get there.

The company has been growing its business at a time when a dictatorial regime has been cracking down on opposition, leading to arrests and violence in the country. Last year, employees from U.S.-headquartered enterprise startup PandaDoc were arrested in Minsk by the Belarus police, in an act of state-led retaliation for their protests against President Alexander Lukashenko. In April, Imaguru, the country’s main startup hub, event and co-working space in Minsk — and birthplace of a number of startups, including MSQRD, which was acquired by Facebook — was also shut down by the Lukashenko regime.

Meanwhile, VOCHI was being featured as App of the Day in the App Store across 126 countries worldwide, and growing revenues to around $300,000 per month.

“Personal videos take an increasingly important place in our lives and for many has become a method of self-expression. VOCHI helps to follow the path of inspiration, education and provides tools for creativity through video,” said Andrei Avsievich, General Partner at Bulba Ventures, where VOCHI was incubated. “I am happy that users and investors love VOCHI, which is reflected both in the revenue and the oversubscribed round.”

The additional funds will put VOCHI on the path to a Series A as it continues to work to attract more creators, improve user engagement, and add more tools to the app, says Lesun.




Ireland must ‘swiftly’ investigate legality of Facebook-WhatsApp data sharing, says EDPB

July 17, 2021

Facebook’s lead regulator in the European Union must “swiftly” investigate the legality of data sharing related to a controversial WhatsApp policy update, following an order by the European Data Protection Board (EDPB).

We’ve reached out to the Irish Data Protection Commission (DPC) for a response. (Update: See below for their statement.)

Updated terms had been set to be imposed upon users of the Facebook-owned messaging app early this year — but in January Facebook delayed the WhatsApp terms update until May after a major privacy backlash and ongoing confusion over the details of its user data processing.

Despite WhatsApp going ahead with the policy update, the ToS has continued to face scrutiny from regulators and rights organizations around the world.

The Indian government, for example, has repeatedly ordered Facebook to withdraw the new terms. In Europe, meanwhile, privacy regulators and consumer protection organizations have raised objections about how opaque terms are being pushed on users — and in May a German data protection authority issued a temporary (national) blocking order.

Today’s development follows that and is significant as it’s the first urgent binding decision adopted by the EDPB under the bloc’s General Data Protection Regulation (GDPR).

The Board has not, however, agreed to order the adoption of final measures against Facebook-WhatsApp as the requesting data supervisor, the Hamburg DPA, had asked — saying that “conditions to demonstrate the existence of an infringement and an urgency are not met”.

The Board’s intervention in the confusing mess around the WhatsApp policy update follows the use of GDPR Article 66 powers by Hamburg’s data protection authority.

In May the latter ordered Facebook not to apply the new terms to users in Germany — saying its analysis found the policy granted “far-reaching powers” to WhatsApp to share data with Facebook, without it being clear what legal basis the tech giant was relying upon to be able to process users’ data.

Hamburg also accused the Irish DPC of failing to investigate the Facebook-WhatsApp data sharing when it raised concerns — hence seeking to take matters into its own hands by making an Article 66 intervention.

As part of the process it asked the EDPB to take a binding decision — asking it to take definitive steps to block data-sharing between WhatsApp and Facebook — in a bid to circumvent the Irish regulator’s glacial procedures by getting the Board to order enforcement measures that could be applied stat across the whole bloc.

However, the Board’s assessment found that Hamburg had not met the bar for demonstrating the Irish DPC “failed to provide information in the context of a formal request for mutual assistance under Article 61 GDPR”, as it puts it.

It also decided that the adoption of updated terms by WhatsApp — which it nonetheless says “contain similar problematic elements as the previous version” — cannot “on its own” justify the urgency for the EDPB to order the lead supervisor to adopt final measures under Article 66(2) GDPR.

The upshot — as the Hamburg DPA puts it — is that data exchange between WhatsApp and Facebook remains “unregulated at the European level”.

Article 66 powers

The importance of Article 66 of the GDPR is that it allows EU data protection authorities to derogate from the regulation’s one-stop-shop mechanism — which otherwise funnels cross border complaints (such as those against Big Tech) via a lead data supervisor (oftentimes the Irish DPC), and is thus widely seen as a bottleneck to effective enforcement of data protection (especially against tech giants).

An Article 66 urgency proceeding allows any data supervisor across the EU to immediately adopt provisional measures — provided a situation meets the criteria for this kind of emergency intervention. Which is one way to get around a bottleneck, even if only for a time-limited period.

A number of EU data protection authorities have used (or threatened to use) Article 66 powers in recent years, since GDPR came into application in 2018, and the power is increasingly proving its worth in reconfiguring certain Big Tech practices — with, for example, Italy’s DPA using it recently to force TikTok to remove hundreds of thousands of suspected underage accounts.

Just the threat of Article 66’s use back in 2019 (also by Hamburg) was enough to encourage Google to suspend manual reviews of audio recordings captured by its voice AI, Google Assistant. (And later led to a number of major policy changes by several tech giants who had similarly been manually reviewing users’ interactions with their voice AIs.)

At the same time, Article 66 provisional measures can only last three months — and only apply nationally, not across the whole EU. So it’s a bounded power. (Perhaps especially in this WhatsApp-Facebook case, where the target is a ToS update, and Facebook could just wait out the three months and apply the policy anyway in Germany after the suspension order lapses.)

This is why Hamburg wanted the EDPB to make a binding decision. And it’s certainly a blow to privacy watchers eager for GDPR enforcement to fall on tech giants like Facebook that the Board has declined to do so in this case.

Unregulated data sharing

Responding to the Board’s decision not to impose definitive measures to prevent data sharing between WhatsApp and Facebook, the Hamburg authority expressed disappointment — see below for its full statement — and also lamented that the EDPB has not set a deadline for the Irish DPC to conduct the investigation into the legal basis of the data sharing.

Ireland’s data protection authority has only issued one final GDPR decision against a tech giant to date (Twitter) — so there is plenty of cause to be concerned that without a concrete deadline the ordered probe could be kicked down the road for years.

Nonetheless, the EDPB’s order to the Irish DPC to “swiftly” investigate the finer-grained detail of the Facebook-WhatsApp data sharing does look like a significant intervention by a pan-EU body — as it very publicly pokes a regulator with a now infamous reputation for reluctance to actually do the job of rigorously investigating privacy concerns. 

Demonstrably it has failed to do so in this WhatsApp case. Despite major concerns being raised about the policy update — within Europe and globally — Facebook’s lead EU data supervisor did not open a formal investigation and has not raised any public objections to the update.

Back in January when we asked about concerns over the update, the DPC told TechCrunch it had obtained a “confirmation” from Facebook-owned WhatsApp that there was no change to data-sharing practices that would affect EU users — reiterating Facebook’s line that the update didn’t change anything, ergo “nothing to see here”. 

“The updates made by WhatsApp last week are about providing clearer, more detailed information to users on how and why they use data. WhatsApp have confirmed to us that there is no change to data-sharing practices either in the European Region or the rest of the world arising from these updates,” the DPC told us then, although it also noted that it had received “numerous queries” from stakeholders who it described as “confused and concerned about these updates”, mirroring Facebook’s own characterization of complaints.

“We engaged with WhatsApp on the matter and they confirmed to us that they will delay the date by which people will be asked to review and accept the terms from February 8th to May 15th,” the DPC went on, referring to a pause in the ToS application deadline which Facebook enacted after a public backlash that saw scores of users signing up to alternative messaging apps, before adding: “In the meantime, WhatsApp will launch information campaigns to provide further clarity about how privacy and security works on the platform. We will continue to engage with WhatsApp on these updates.”

The EDPB’s assessment of the knotty WhatsApp-Facebook data-sharing terms looks rather different — with the Board calling out WhatsApp’s user communications as confusing and simultaneously raising concerns about the legal basis for the data exchange.

In a press release, the EDPB writes that there’s a “high likelihood of infringements” — highlighting purposes contained in the updated ToS in the areas of “safety, security and integrity of WhatsApp IE [Ireland] and the other Facebook Companies, as well as for the purpose of improvement of the products of the Facebook Companies” as being of particular concern.

From the Board’s PR [emphasis its]:

Considering the high likelihood of infringements in particular for the purpose of safety, security and integrity of WhatsApp IE [Ireland] and the other Facebook Companies, as well as for the purpose of improvement of the products of the Facebook Companies, the EDPB considered that this matter requires swift further investigations. In particular to verify if, in practice, Facebook Companies are carrying out processing operations which imply the combination or comparison of WhatsApp IE’s [Ireland] user data with other data sets processed by other Facebook Companies in the context of other apps or services offered by the Facebook Companies, facilitated inter alia by the use of unique identifiers. For this reason, the EDPB requests the IE SA [Irish supervisory authority] to carry out, as a matter of priority, a statutory investigation to determine whether such processing activities are taking place or not, and if this is the case, whether they have a proper legal basis under Article 5(1)(a) and Article 6(1) GDPR.

NB: It’s worth recalling that WhatsApp users were initially told they must accept the updated policy or else the app would stop working. (Although Facebook later changed its approach — after the public backlash.) WhatsApp users who still haven’t accepted the terms continue to be nagged to do so via regular pop-ups, although the tech giant does not appear to be taking steps to degrade the user experience further as yet (i.e. beyond annoying, recurring pop-ups).

The EDPB’s concerns over the WhatsApp-Facebook data sharing extend to what it says is “a lack of information around how data is processed for marketing purposes, cooperation with the other Facebook Companies and in relation to WhatsApp Business API” — hence its order to Ireland to fully investigate.

The Board also essentially confirms the view that WhatsApp users themselves have no hope of understanding what Facebook is doing with their data by reading the comms material it has provided them with — with the Board writing [emphasis ours]:

Based on the evidence provided, the EDPB concluded that there is a high likelihood that Facebook IE [Ireland] already processes WhatsApp IE [Ireland] user data as a (joint) controller for the common purpose of safety, security and integrity of WhatsApp IE [Ireland] and the other Facebook Companies, and for the common purpose of improvement of the products of the Facebook Companies. However, in the face of the various contradictions, ambiguities and uncertainties noted in WhatsApp’s user-facing information, some written commitments adopted by Facebook IE [Ireland] and WhatsApp IE’s [Ireland] written submissions, the EDPB concluded that it is not in a position to determine with certainty which processing operations are actually being carried out and in which capacity.

We contacted Facebook for a response to the EDPB’s order, and the company sent us this statement — attributed to a WhatsApp spokesperson:

We welcome the EDPB’s decision not to extend the Hamburg DPA’s order, which was based on fundamental misunderstandings as to the purpose and effect of the update to our terms of service. We remain fully committed to delivering secure and private communications for everyone and will work with the Irish Data Protection Commission as our lead regulator in the region in order to fully address the questions raised by the EDPB.

Facebook also claimed it has controls in place for “controller to processor data sharing” (i.e. between WhatsApp and Facebook) — which it said prohibit it (Facebook) from using WhatsApp user data for its own purposes.

The tech giant went on to reiterate its line that the update does not expand WhatsApp’s ability to share data with Facebook.

GDPR enforcement stalemate

A further vital component to this saga is the fact the Irish DPC has, for years, been investigating long-standing complaints against WhatsApp’s compliance with GDPR’s transparency requirements — and still hasn’t issued a final decision.

So when the EDPB says it’s highly likely that some of the WhatsApp-Facebook data-processing being objected to is already going on it doesn’t mean Facebook gets a pass for that — because the DPC hasn’t issued a verdict on whether or not WhatsApp has been up front enough with users.

tl;dr: The regulatory oversight process is still ongoing.

The DPC provisionally concluded its WhatsApp transparency investigation last year — saying in January that it sent a draft decision to the other EU data protection authorities for review (and the chance to object) on December 24, 2020; a step that’s required under the GDPR’s co-decision-making process.

In January, when it said it was still waiting to receive comments on the draft decision, it also said: “When the process is completed and a final decision issues, it will make clear the standard of transparency to which WhatsApp is expected to adhere as articulated by EU Data Protection Authorities.”

Over half a year later, WhatsApp users in the EU are still waiting to find out whether the company’s comms live up to the required legal standard of transparency or not — with their data continuing to pass between Facebook and WhatsApp in the meanwhile.

The Irish DPC was contacted for comment on the EDPB’s order today and with questions on the current status of the WhatsApp transparency investigation.

It told us it would have a response later today — we’ll update this report when we get it.

Update: The DPC’s deputy commissioner Graham Doyle said [emphasis his]:

This Article 66 procedure was about whether the EDPB on request from Hamburg would take final measures confirming the provisional measures applied by the Hamburg SA against Facebook. The EDPB decision decided not to take measures as insufficient evidence to ground such measures was presented by the Hamburg SA.

Measures, had they been decided by the Board, would not in any case be measures that would be adopted by the Irish DPC. They would be measures adopted by the EDPB. This is a decision of the Board based on a request from Hamburg SA under a provision that is a derogation to the cooperation and consistency mechanism.

The DPC, of course, has already carried out an in-depth inquiry into WhatsApp’s privacy policy user facing material in the context of its transparency inquiry. That inquiry reached the Article 60 (co-decision making) stage in December 2020 and is now progressing through the dispute resolution procedure. The Hamburg SA has been actively involved in the decision-making process since December 2020 and the dispute resolution process (which commenced in June) is an EDPB-led initiative, involving all other supervisory authorities.

The DPC notes the request of the Board and will give consideration to any appropriate regulatory follow-up where it identifies matters canvassed in the EDPB decision have not already been addressed in the Article 60 draft decision transmitted by the DPC (and now currently with the Board under Article 65).

The DPC also has a separate, complaint-based inquiry ongoing that considers the legal basis that WhatsApp relies upon for processing. That inquiry is also at an advanced stage.

Back in November the Irish Times reported that WhatsApp Ireland had set aside €77.5 million for “possible administrative fines arising from regulatory compliance matters presently under investigation”. No fines against Facebook have yet been forthcoming, though.

Indeed, the DPC has yet to issue a single final GDPR decision against Facebook (or a Facebook-owned company) — despite more than three years having passed since the regulation started being applied.

Scores of GDPR complaints against Facebook’s data-processing empire — such as this May 2018 complaint against Facebook, Instagram and WhatsApp’s use of so-called “forced consent” — continue to languish without regulatory enforcement in the EU because there have been no decisions from Ireland (and sometimes no investigations either).

The situation is a huge black mark against the EU’s flagship data protection regulation. So the Board’s failure to step in more firmly now — to course-correct — does look like a missed opportunity to tackle a problematic GDPR enforcement bottleneck.

That said, any failure to follow the procedural letter of the law could invite a legal challenge that unpicked any progress. So it’s hard to see any quick wins in the glacial game of GDPR enforcement.

In the meanwhile, the winners of the stalemate are of course the tech giants who get to continue processing people’s data how they choose, with plenty of time to work on reconfiguring their legal, business and system structures to route around any enforcement damage that does eventually come.

Hamburg’s deputy commissioner for data protection, Ulrich Kühn, essentially warns as much in a statement responding to the EDPB’s decision, in which he writes:

The decision of the European Data Protection Board is disappointing. The body, which was created to ensure the uniform application of the GDPR throughout the European Union, is missing the opportunity to clearly stand up for the protection of the rights and freedoms of millions of data subjects in Europe. It continues to leave this solely to the Irish supervisory authority. Despite our repeated requests over more than two years to investigate and, if necessary, sanction the matter of data exchanges between WhatsApp and Facebook, the IDPC has not taken action in this regard. It is a success of our efforts over many years that IDPC is now being urged to conduct an investigation. Nonetheless, this non-binding measure does not do justice to the importance of the issue. It is hard to imagine a case in which, against the background of the risks for the rights and freedoms of a very large number of data subjects and their de facto powerlessness vis-à-vis monopoly-like providers, the urgent need for concrete action is more obvious. The EDPB is thus depriving itself of a crucial instrument for enforcing the GDPR throughout Europe. This is no good news for data subjects and data protection in Europe as a whole.

In further remarks the Hamburg authority emphasizes that the Board noted “considerable inconsistencies between the information with which WhatsApp users are informed about the extensive use of their data by Facebook on the one hand, and on the other the commitments made by the company to data protection authorities not (yet) to do so”; and also that it “expressed considerable doubts about the legal basis on which Facebook intends to rely when using WhatsApp data for its own or joint processing” — arguing that the Board therefore agrees with the “essential parts” of its arguments against WhatsApp-Facebook data sharing.

Despite carrying that weight of argument, the call for action is once again back in Ireland’s court.

 




Streamlabs launches Crossclip, a new tool for sharing Twitch clips to TikTok, Instagram and YouTube

July 16, 2021

The company behind ubiquitous livestreaming software Streamlabs is introducing a new way for streamers to share their gaming highlights to platforms well beyond Twitch. Streamlabs calls the new tool Crossclip, and it’s available now as an iOS app and as a lightweight web tool.

With Crossclip, creators can easily convert Twitch clips into a format friendly to TikTok, Instagram Reels, YouTube Shorts and Facebook videos. Adapting a snippet from Twitch that you’d like to share is as simple as putting in the clip’s URL and choosing an output format (landscape, vertical or square) and a pre-loaded layout.

Crossclip iOS app

You can crop the clip’s length within Crossclip, blur part of the background and choose from a handful of layouts that let you place the frames in different places (to show the facecam view and the stream view together in vertical orientation, for example).
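To illustrate the kind of transformation involved, here is a hedged sketch (not Streamlabs’ implementation) of converting a landscape clip into a 9:16 vertical video with a blurred background fill, driving ffmpeg from Python. Filenames and filter parameters are illustrative.

```python
# Sketch of a landscape-to-vertical conversion with a blurred background,
# the kind of reformatting Crossclip automates. Requires ffmpeg on PATH.
import subprocess

filter_graph = (
    # Duplicate the input video stream into background and foreground.
    "[0:v]split[bg][fg];"
    # Background: fill a 1080x1920 canvas, then blur it heavily.
    "[bg]scale=1080:1920:force_original_aspect_ratio=increase,"
    "crop=1080:1920,boxblur=20:2[bgb];"
    # Foreground: scale the sharp clip to fit the 1080px width.
    "[fg]scale=1080:-2[fgs];"
    # Overlay the sharp clip centered on the blurred canvas.
    "[bgb][fgs]overlay=(W-w)/2:(H-h)/2"
)

subprocess.run([
    "ffmpeg", "-i", "twitch_clip.mp4",   # illustrative input
    "-filter_complex", filter_graph,
    "-c:a", "copy",                      # keep the original audio
    "vertical_clip.mp4",
], check=True)
```

Crossclip’s layouts (facecam above stream view, for example) are essentially variations on this compositing step, with different crops overlaid at different positions.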

Crossclip’s core functionality is free, but a premium subscription ($4.99/month or $49.99/year) removes a branded watermark, unlocks 1080p/60fps exports, larger uploads and added layers, and pushes your edits to the front of the processing queue.

Discovery on Twitch is tough. Established streamers grow their audiences easily but anybody just getting started usually has to slog through long stretches of lonely Stardew Valley sessions with only the occasional viewer popping in to say hi. The idea behind Crossclip is to make it easier for streamers to build audiences on other social networks that have better discoverability features, subcommunities and tags to make that process less grueling.

“For a creator, making your content more discoverable is a huge advantage,” Streamlabs Head of Product Ashray Urs told TechCrunch. “When you consider the most popular Twitch streamers, you will notice that they have extremely popular YouTube channels and actively post on Twitter, Instagram, TikTok. If you aren’t sharing content and building your audience with different platforms, you’re making things more difficult for yourself.”

Urs notes that creators are increasingly using TikTok’s algorithmic discovery abilities to grow their audiences. TikTok’s recent addition of longer, three-minute videos is a boon for many kinds of creators interested in leveraging the platform, including gamers and other Twitch streamers.

Anyone with an established audience will find Crossclip a breeze to use too, making it dead-simple to share gaming highlights or Just Chatting clips wherever they’re trying to build up a following. The average clip conversion takes two to three minutes and is a simple one-click process. There are a few tools out there with similar functionality, independent web tool StreamLadder probably being the most notable, but Streamlabs takes the same idea, refines it and adds a mobile app.

Streamlabs, now owned by Logitech, has released a few useful products in recent months. In February, the company launched Willow, its own link-in-bio tool with built-in tipping. In May, Streamlabs deepened its relationship with TikTok — an emerging hub for all kinds of gaming content — adding the ability to “go live” on TikTok into its core livestreaming platform, Streamlabs OBS.




Twitter now lets you limit who can reply to a tweet after the fact

July 14, 2021

If you’re tired of sending brilliant takes into the Twitterverse only to be met with wave after wave of reply guys, a new Twitter feature could give you some relief.

Starting today, anyone on Twitter will be able to adjust who can reply to individual tweets after they’ve been sent. Previously, you could limit who could reply to tweets when they were created, but you couldn’t go in and change your selection after the fact.

On Twitter, you don’t always have a sense of what kind of tweets will attract unwanted attention until it’s too late. The new feature makes the option to limit replies to people you follow or only people mentioned in a tweet much more useful, particularly because the mute button doesn’t always cut it.

Twitter added the option to limit replies last August to boost “meaningful conversations” on the social network and to help people feel safer from harassment when they tweet. Product researcher Jane Manchun Wong first spotted the feature’s expansion in June.




Controversial WhatsApp policy change hit with consumer law complaint in Europe

July 12, 2021

Facebook has been accused of multiple breaches of European Union consumer protection law as a result of its attempts to force WhatsApp users to accept controversial changes to the messaging platform’s terms of use — such as threatening users that the app would stop working if they did not accept the updated policies by May 15.

The consumer protection association umbrella group, the Beuc, said today that together with eight of its member organizations it’s filed a complaint with the European Commission and with the European network of consumer authorities.

“The complaint is first due to the persistent, recurrent and intrusive notifications pushing users to accept WhatsApp’s policy updates,” it wrote in a press release.

“The content of these notifications, their nature, timing and recurrence put an undue pressure on users and impair their freedom of choice. As such, they are a breach of the EU Directive on Unfair Commercial Practices.”

After earlier telling users that notifications about the need to accept the new policy would become persistent, interfering with their ability to use the service, WhatsApp later rowed back from its own draconian deadline.

However the app continues to bug users to accept the update — with no option not to do so (users can close the policy prompt but are unable to decline the new terms or stop the app continuing to pop-up a screen asking them to accept the update).

“In addition, the complaint highlights the opacity of the new terms and the fact that WhatsApp has failed to explain in plain and intelligible language the nature of the changes,” the Beuc went on. “It is basically impossible for consumers to get a clear understanding of what consequences WhatsApp’s changes entail for their privacy, particularly in relation to the transfer of their personal data to Facebook and other third parties. This ambiguity amounts to a breach of EU consumer law which obliges companies to use clear and transparent contract terms and commercial communications.”

The organization pointed out that WhatsApp’s policy updates remain under scrutiny from privacy regulators in Europe — which it argues is another factor that makes Facebook’s aggressive attempts to push the policy on users highly inappropriate.

And while this consumer-law focused complaint is separate to the privacy issues the Beuc also flags — which are being investigated by EU data protection authorities (DPAs) — it has called on those regulators to speed up their investigations, adding: “We urge the European network of consumer authorities and the network of data protection authorities to work in close cooperation on these issues.”

The Beuc has produced a report setting out its concerns about the WhatsApp ToS change in more detail — where it hits out at the “opacity” of the new policies, further asserting:

“WhatsApp remains very vague about the sections it has removed and the ones it has added. It is up to users to seek out this information by themselves. Ultimately, it is almost impossible for users to clearly understand what is new and what has been amended. The opacity of the new policies is in breach of Article 5 of the UCTD [Unfair Contract Terms Directive] and is also a misleading and unfair practice prohibited under Article 5 and 6 of the UCPD [Unfair Commercial Practices Directive].”

Reached for comment on the consumer complaint, a WhatsApp spokesperson told us:

“Beuc’s action is based on a misunderstanding of the purpose and effect of the update to our terms of service. Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. The update does not expand our ability to share data with Facebook, and does not impact the privacy of your messages with friends or family, wherever they are in the world. We would welcome an opportunity to explain the update to Beuc and to clarify what it means for people.”

The Commission was also contacted for comment on the Beuc’s complaint — we’ll update this report if we get a response.

The complaint is just the latest pushback in Europe over the controversial terms change by Facebook-owned WhatsApp — which triggered a privacy warning from Italy back in January, followed by an urgency procedure in Germany in May when Hamburg’s DPA banned the company from processing additional WhatsApp user data.

Although, earlier this year, Facebook’s lead data regulator in the EU, Ireland’s Data Protection Commission, appeared to accept Facebook’s reassurances that the ToS changes do not affect users in the region.

German DPAs were less happy, though. And Hamburg invoked emergency powers allowed for in the General Data Protection Regulation (GDPR) in a bid to circumvent a mechanism in the regulation that (otherwise) funnels cross-border complaints and concerns via a lead regulator — typically where a data controller has their regional base (in Facebook/WhatsApp’s case that’s Ireland).

Such emergency procedures are time-limited to three months. But the European Data Protection Board (EDPB) confirmed today that its plenary meeting will discuss the Hamburg DPA’s request for it to make an urgent binding decision — which could see the Hamburg DPA’s intervention set on a more lasting footing, depending upon what the EDPB decides.

In the meanwhile, calls for Europe’s regulators to work together to better tackle the challenges posed by platform power are growing, with a number of regional competition authorities and privacy regulators actively taking steps to dial up their joint working — in a bid to ensure that expertise across distinct areas of law doesn’t stay siloed and, thereby, risk disjointed enforcement, with conflicting and contradictory outcomes for Internet users.

There seems to be a growing understanding on both sides of the Atlantic of the need for a joined-up approach to regulating platform power and ensuring powerful platforms don’t simply get let off the hook.

 




Google faces a major multi-state antitrust lawsuit over Google Play fees

July 9, 2021

A group of 37 attorneys general filed a second major multi-state antitrust lawsuit against Google Wednesday, accusing the company of abusing its market power to stifle competitors and forcing consumers into in-app payments that grant the company a hefty cut.

New York Attorney General Letitia James is co-leading the suit alongside the Tennessee, North Carolina and Utah attorneys general. The bipartisan coalition represents 36 U.S. states, including California, Florida, Massachusetts, New Jersey, New Hampshire, Colorado and Washington, as well as the District of Columbia.

“Through its illegal conduct, the company has ensured that hundreds of millions of Android users turn to Google, and only Google, for the millions of applications they may choose to download to their phones and tablets,” James said in a press release. “Worse yet, Google is squeezing the lifeblood out of millions of small businesses that are only seeking to compete.”

In December, 35 states filed a separate antitrust suit against Google, alleging that the company engaged in illegal behavior to maintain a monopoly on the search business. The Justice Department filed its own antitrust case focused on search last October.

In the new lawsuit, embedded below, the bipartisan coalition of states allege that Google uses “misleading” security warnings to keep consumers and developers within its walled app garden, the Google Play store. But the fees that Google collects from Android app developers are likely the meat of the case.

“Not only has Google acted unlawfully to block potential rivals from competing with its Google Play Store, it has profited by improperly locking app developers and consumers into its own payment processing system and then charging high fees,” District of Columbia Attorney General Karl Racine said.

Like Apple, Google herds all app payment processing into its own service, Google Play Billing, and reaps the rewards: a 30 percent cut of all payments. Much of the criticism here is a case that could — and likely will — be made against Apple, which exerts even more control over its own app ecosystem. Google, though, doesn’t have an iMessage-style exclusive app that keeps users locked in in quite the same way.

While the lawsuit discusses Google’s “monopoly power” in the app marketplace, the elephant in the room is Apple — Google’s thriving direct competitor in the mobile software space. The lawsuit argues that consumers face pressure to stay locked into the Android ecosystem, but on the Android side at least, much of that is ultimately familiarity and sunk costs. The argument on the Apple side of the equation here is likely much stronger.

The din over tech giants squeezing app developers with high mobile payment fees is just getting louder. The new multi-state lawsuit is the latest beat, but the topic has been white hot since Epic took Apple to court over its desire to bypass Apple’s fees by accepting mobile payments outside the App Store. When Epic set up a workaround, Apple kicked it out of the App Store and Epic Games v. Apple was born.

The Justice Department is reportedly already interested in Apple’s own app store practices, along with many state AGs who could launch a separate suit against the company at any time.




YouTube’s recommender AI still a horrorshow, finds major crowdsourced study

July 7, 2021

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab-bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory.

And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted.

The suspicion remains that it’s nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side-effect of the platform’s rapacious appetite to harvest views to serve ads.

That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform.

The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of ‘commercial secrecy’.

But regulation that could help crack open proprietary AI blackboxes is now on the cards — at least in Europe.

To fix YouTube’s algorithm, Mozilla is calling for “common sense transparency laws, better oversight, and consumer pressure” — suggesting a combination of laws that mandate transparency into AI systems; protect independent researchers so they can interrogate algorithmic impacts; and empower platform users with robust controls (such as the ability to opt out of “personalized” recommendations) are what’s needed to rein in the worst excesses of the YouTube AI.

Regrets, YouTube users have had a few…

To gather data on specific recommendations being made to YouTube users — information that Google does not routinely make available to external researchers — Mozilla took a crowdsourced approach, via a browser extension (called RegretsReporter) that lets users self-report YouTube videos they “regret” watching.

The tool can generate a report which includes details of the videos the user had been recommended, as well as earlier video views, to help build up a picture of how YouTube’s recommender system was functioning. (Or, well, ‘dysfunctioning’ as the case may be.)

The crowdsourced volunteers whose data fed Mozilla’s research reported a wide variety of ‘regrets’, including videos spreading COVID-19 fear-mongering, political misinformation and “wildly inappropriate” children’s cartoons, per the report — with the most frequently reported content categories being misinformation, violent/graphic content, hate speech and spam/scams.

A substantial majority (71%) of the regret reports came from videos that had been recommended by YouTube’s algorithm itself, underscoring the AI’s starring role in pushing junk into people’s eyeballs.

The research also found that recommended videos were 40% more likely to be reported by the volunteers than videos they’d searched for themselves.
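As a rough illustration of the kind of aggregate comparison behind those figures, here is a toy sketch assuming a simplified, hypothetical report schema; Mozilla’s actual dataset and methodology are far richer.

```python
# Toy aggregation over self-reported "regret" records, illustrating the
# comparison of regret rates for recommended vs. searched-for videos.
# The schema is hypothetical; RegretsReporter's real data is richer.
reports = [
    {"video_id": "a1", "source": "recommended", "regret": True},
    {"video_id": "b2", "source": "searched", "regret": False},
    {"video_id": "c3", "source": "recommended", "regret": True},
    {"video_id": "d4", "source": "recommended", "regret": False},
    {"video_id": "e5", "source": "searched", "regret": True},
    {"video_id": "f6", "source": "searched", "regret": False},
]

def regret_rate(records, source):
    """Share of videos from a given source that users flagged as regrets."""
    subset = [r for r in records if r["source"] == source]
    return sum(r["regret"] for r in subset) / len(subset)

recommended = regret_rate(reports, "recommended")
searched = regret_rate(reports, "searched")
# Mozilla found recommended videos ~40% more likely to be reported.
print(f"recommended vs searched regret rate: {recommended / searched:.2f}x")
```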

Mozilla even found “several” instances when the recommender algorithm put content in front of users that violated YouTube’s own community guidelines and/or was unrelated to the previous video watched. So a clear fail.

A very notable finding was that regrettable content appears to be a greater problem for YouTube users in non-English speaking countries: Mozilla found YouTube regrets were 60% higher in countries without English as a primary language — with Brazil, Germany and France generating what the report said were “particularly high” levels of regretful YouTubing. (And none of the three can be classed as minor international markets.)

Pandemic-related regrets were also especially prevalent in non-English speaking countries, per the report — a worrying detail to read in the middle of an ongoing global health crisis.

The crowdsourced study — which Mozilla bills as the largest-ever into YouTube’s recommender algorithm — drew on data from more than 37,000 YouTube users who installed the extension, although it was a subset of 1,162 volunteers — from 91 countries — who submitted reports that flagged 3,362 regrettable videos which the report draws on directly.

These reports were generated between July 2020 and May 2021.

What exactly does Mozilla mean by a YouTube “regret”? It says this is a crowdsourced concept based on users self-reporting bad experiences on YouTube, so it’s a subjective measure. But Mozilla argues that taking this “people-powered” approach centres the lived experiences of Internet users and is therefore helpful in foregrounding the experiences of marginalised and/or vulnerable people and communities (vs, for example, applying only a narrower, legal definition of ‘harm’).

“We wanted to interrogate and explore further [people’s experiences of falling down the YouTube ‘rabbit hole’] and frankly confirm some of these stories — but then also just understand further what are some of the trends that emerged in that,” explained Brandi Geurkink, Mozilla’s senior manager of advocacy and the lead researcher for the project, discussing the aims of the research.

“My main feeling in doing this work was being — I guess — shocked that some of what we had expected to be the case was confirmed… It’s still a limited study in terms of the number of people involved and the methodology that we used but — even with that — it was quite simple; the data just showed that some of what we thought was confirmed.

“Things like the algorithm recommending content essentially accidentally, that it later is like ‘oops, this actually violates our policies; we shouldn’t have actively suggested that to people’… And things like the non-English-speaking user base having worse experiences — these are things you hear discussed a lot anecdotally and activists have raised these issues. But I was just like — oh wow, it’s actually coming out really clearly in our data.”

Mozilla says the crowdsourced research uncovered “numerous examples” of reported content that would likely or actually breach YouTube’s community guidelines — such as hate speech or debunked political and scientific misinformation.

But it also says the reports flagged a lot of what YouTube “may” consider ‘borderline content’. Aka, stuff that’s harder to categorize — junk/low quality videos that perhaps toe the acceptability line and may therefore be trickier for the platform’s algorithmic moderation systems to respond to (and thus content that may also survive the risk of a take down for longer).

However, a related issue the report flags is that YouTube doesn’t provide a definition for borderline content, despite discussing the category in its own guidelines. That, says Mozilla, makes it impossible to verify the researchers’ assumption that much of what the volunteers were reporting as ‘regretful’ would likely fall into YouTube’s own ‘borderline content’ category.

The challenge of independently studying the societal effects of Google’s tech and processes is a running theme underlying the research. But Mozilla’s report also accuses the tech giant of meeting YouTube criticism with “inertia and opacity”.

It’s not alone there, either. Critics have long accused YouTube’s ad giant parent of profiting off of engagement generated by hateful outrage and harmful disinformation — allowing “AI-generated bubbles of hate” to surface ever more baleful (and thus stickily engaging) stuff, exposing unsuspecting YouTube users to increasingly unpleasant and extremist views, even as Google gets to shield its low grade content business under a user-generated content umbrella.

Indeed, ‘falling down the YouTube rabbit hole’ has become a well-trodden metaphor for discussing the process of unsuspecting Internet users being dragged into the darkest and nastiest corners of the web. This user reprogramming takes place in broad daylight, via AI-generated suggestions that yell at people to follow the conspiracy breadcrumb trail right from inside a mainstream web platform.

Back in 2017 — when concern was riding high about online terrorism and the proliferation of ISIS content on social media — politicians in Europe were accusing YouTube’s algorithm of exactly this: Automating radicalization.

However it’s remained difficult to get hard data to back up anecdotal reports of individual YouTube users being ‘radicalized’ after viewing hours of extremist content or conspiracy theory junk on Google’s platform.

Ex-YouTube insider — Guillaume Chaslot — is one notable critic who’s sought to pull back the curtain shielding the proprietary tech from deeper scrutiny, via his algotransparency project.

Mozilla’s crowdsourced research adds to those efforts by sketching a broad — and broadly problematic — picture of the YouTube AI by collating reports of bad experiences from users themselves.

Of course externally sampling platform-level data that only Google holds in full (at its true depth and dimension) can’t be the whole picture — and self-reporting, in particular, may introduce its own set of biases into Mozilla’s data-set. But the problem of effectively studying big tech’s blackboxes is a key point accompanying the research, as Mozilla advocates for proper oversight of platform power.

In a series of recommendations the report calls for “robust transparency, scrutiny, and giving people control of recommendation algorithms” — arguing that without proper oversight of the platform, YouTube will continue to be harmful by mindlessly exposing people to damaging and braindead content.

The problematic lack of transparency around so much of how YouTube functions can be picked up from other details in the report. For example, Mozilla found that around 9% of recommended regrets (or almost 200 videos) had since been taken down — for a variety of not always clear reasons (sometimes, presumably, after the content was reported and judged by YouTube to have violated its guidelines).

Collectively, just this subset of videos had had a total of 160M views prior to being removed for whatever reason.

In other findings, the research found that videos users regret watching tend to perform well on the platform.

A particularly stark metric is that reported regrets acquired a full 70% more views per day than other videos watched by the volunteers on the platform — lending weight to the argument that YouTube’s engagement-optimising algorithms disproportionately select for triggering/misinforming content over quality (thoughtful/informing) stuff, simply because it brings in the clicks.

While that might be great for Google’s ad business, it’s clearly a net negative for democratic societies which value truthful information over nonsense; genuine public debate over artificial/amplified binaries; and constructive civic cohesion over divisive tribalism.

But without legally-enforced transparency requirements on ad platforms — and, most likely, regulatory oversight and enforcement that features audit powers — these tech giants are going to continue to be incentivized to turn a blind eye and cash in at society’s expense.

Mozilla’s report also underlines instances where YouTube’s algorithms are clearly driven by a logic that’s unrelated to the content itself — finding that in 43.6% of cases where the researchers had data about the videos a participant had watched before a reported regret, the recommendation was completely unrelated to the previous video.

The report gives examples of some of these logic-defying AI content pivots/leaps/pitfalls — such as a person watching videos about the U.S. military and then being recommended a misogynistic video entitled ‘Man humiliates feminist in viral video.’

In another instance, a person watched a video about software rights and was then recommended a video about gun rights. So two rights make yet another wrong YouTube recommendation right there.

In a third example, a person watched an Art Garfunkel music video and was then recommended a political video entitled ‘Trump Debate Moderator EXPOSED as having Deep Democrat Ties, Media Bias Reaches BREAKING Point.’

To which the only sane response is, umm what???

YouTube’s output in such instances seems — at best — some sort of ‘AI brain fart’.

A generous interpretation might be that the algorithm got stupidly confused. Albeit, in a number of the examples cited in the report, the confusion is leading YouTube users toward content with a right-leaning political bias. Which seems, well, curious.

Asked what she views as the most concerning findings, Mozilla’s Geurkink told TechCrunch: “One is how clearly misinformation emerged as a dominant problem on the platform. I think that’s something, based on our work talking to Mozilla supporters and people from all around the world, that is a really obvious thing that people are concerned about online. So to see that that is what is emerging as the biggest problem with the YouTube algorithm is really concerning to me.”

She also highlighted the problem of the recommendations being worse for non-English-speaking users as another major concern, suggesting that global inequalities in users’ experiences of platform impacts “doesn’t get enough attention” — even when such issues do get discussed.

Responding to Mozilla’s report, a Google spokesperson sent us this statement:

“The goal of our recommendation system is to connect viewers with content they love and on any given day, more than 200 million videos are recommended on the homepage alone. Over 80 billion pieces of information is used to help inform our systems, including survey responses from viewers on what they want to watch. We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%.”

Google also claimed it welcomes research into YouTube — and suggested it’s exploring options to bring in external researchers to study the platform, without offering anything concrete on that front.

At the same time, its response queried how Mozilla’s study defines ‘regrettable’ content — and went on to claim that its own user surveys generally show users are satisfied with the content that YouTube recommends.

In further non-quotable remarks, Google noted that earlier this year it started disclosing a ‘violative view rate’ (VVR) metric for YouTube — disclosing for the first time the percentage of views on YouTube that comes from content that violates its policies.

The most recent VVR stands at 0.16-0.18% — which Google says means that out of every 10,000 views on YouTube, 16-18 come from violative content. It said that figure is down by more than 70% when compared to the same quarter of 2017 — crediting its investments in machine learning as largely being responsible for the drop.

However, as Geurkink noted, the VVR is of limited use without Google releasing more data to contextualize and quantify how far its AI was involved in accelerating views of content its own rules state shouldn’t be viewed on its platform. Without that key data the suspicion must be that the VVR is a nice bit of misdirection.

“What would be going further than [VVR] — and what would be really, really helpful — is understanding what’s the role that the recommendation algorithm plays in this?” Geurkink told us on that, adding: “That’s what is a complete blackbox still. In the absence of greater transparency [Google’s] claims of progress have to be taken with a grain of salt.”

Google also flagged a 2019 change it made to how YouTube’s recommender algorithm handles ‘borderline content’ — aka, content that doesn’t violate policies but falls into a problematic grey area — saying that that tweak had also resulted in a 70% drop in watchtime for this type of content.

The company confirmed, though, that this borderline category is a moveable feast — saying it factors in changing trends as well as context, and also works with experts to determine what gets classed as borderline — which makes the aforementioned percentage drop pretty meaningless, since there’s no fixed baseline to measure against.

It’s notable that Google’s response to Mozilla’s report makes no mention of the poor experience reported by survey participants in non-English-speaking markets. And Geurkink suggested that, in general, many of the claimed mitigating measures YouTube applies are geographically limited — i.e. to English-speaking markets like the US and UK. (Or at least arrive in those markets first, before a slower rollout to other places.) 

A January 2019 tweak to reduce amplification of conspiracy theory content in the US was only expanded to the UK market months later — in August — for example.

“YouTube, for the past few years, have only been reporting on their progress of recommendations of harmful or borderline content in the US and in English-speaking markets,” she also said. “And there are very few people questioning that — what about the rest of the world? To me that is something that really deserves more attention and more scrutiny.”

We asked Google to confirm whether it had since applied the 2019 conspiracy theory related changes globally — and a spokeswoman told us that it had. But the much higher rate at which ‘regrettable’ content (a broader measure, yes) was reported to Mozilla in non-English-speaking markets remains notable.

And while there could be other factors at play, which might explain some of the disproportionately higher reporting, the finding may also suggest that, where YouTube’s negative impacts are concerned, Google directs the greatest resource at markets and languages where its reputational risk and the capacity of its machine learning tech to automate content categorization are strongest.

Yet any such unequal response to AI risk obviously means leaving some users at greater risk of harm than others — adding another harmful dimension and layer of unfairness to what is already a multi-faceted, many-headed-hydra of a problem.

It’s yet another reason why leaving it up to powerful platforms to rate their own AIs, mark their own homework and counter genuine concerns with self-serving PR is for the birds.

(In additional filler background remarks it sent us, Google described itself as the first company in the industry to incorporate “authoritativeness” into its search and discovery algorithms — without explaining when exactly it claims to have done that or how it imagined it would be able to deliver on its stated mission of ‘organizing the world’s information and making it universally accessible and useful’ without considering the relative value of information sources… So color us baffled at that claim. Most likely it’s a clumsy attempt to throw disinformation shade at rivals.)

Returning to the regulation point, an EU proposal — the Digital Services Act — is set to introduce some transparency requirements on large digital platforms, as part of a wider package of accountability measures. And asked about this Geurkink described the DSA as “a promising avenue for greater transparency”.

But she suggested the legislation needs to go further to tackle recommender systems like the YouTube AI.

“I think that transparency around recommender systems specifically and also people having control over the input of their own data and then the output of recommendations is really important — and is a place where the DSA is currently a bit sparse, so I think that’s where we really need to dig in,” she told us.

One idea she voiced support for is having a “data access framework” baked into the law — to enable vetted researchers to get more of the information they need to study powerful AI technologies — i.e. rather than the law trying to come up with “a laundry list of all of the different pieces of transparency and information that should be applicable”, as she put it.

The EU also now has a draft AI regulation on the table. The legislative plan takes a risk-based approach to regulating certain applications of artificial intelligence. However it’s not clear whether YouTube’s recommender system would fall under one of the more closely regulated categories — or, as seems more likely (at least with the initial Commission proposal), fall entirely outside the scope of the planned law.

“An earlier draft of the proposal talked about systems that manipulate human behavior which is essentially what recommender systems are. And one could also argue that’s the goal of advertising at large, in some sense. So it was sort of difficult to understand exactly where recommender systems would fall into that,” noted Geurkink.

“There might be a nice harmony between some of the robust data access provisions in the DSA and the new AI regulation,” she added. “I think transparency is what it comes down to, so anything that can provide that kind of greater transparency is a good thing.

“YouTube could also just provide a lot of this… We’ve been working on this for years now and we haven’t seen them take any meaningful action on this front but it’s also, I think, something that we want to keep in mind — legislation can obviously take years. So even if a few of our recommendations were taken up [by Google] that would be a really big step in the right direction.”




This week in growth marketing on TechCrunch

July 5, 2021 No Comments

TechCrunch is trying to help you find the best growth marketer to work with through founder recommendations that we get in this survey. We’re sharing a few of our favorites so far, below.

We’re using your recommendations to find top experts to interview and have them write their own columns here. This week we talked to Kathleen Estreich and Emily Kramer of new growth advising firm MKT1 and veteran designer Scott Tong, and published a pair of articles by growth marketing agency Demand Curve.

Demand Curve: Email marketing tactics that convert subscribers into customers
The growth marketing firm shares its approach to subject line length, the three outcomes of an email, and how to optimize your format for each outcome.

(Extra Crunch) Demand Curve: 7 ad types that increase click-through rates
The growth marketing agency tells us how to use customer reactions, testimonials and other ad types to a startup’s advantage.

MKT1: Developer marketing is what startup marketing should look like
MKT1 co-founders Kathleen Estreich (previously at Facebook, Box, Intercom and Scalyr) and Emily Kramer (previously at Ticketfly, Asana, Astro and Carta) tell us about the importance of finding the right marketer at the right time, and the biggest mistakes founders are still making in 2021.

The pandemic showed why product and brand design need to sit together
Scott Tong shares the importance of understanding users, and his thoughts on how companies can work together collaboratively in a remote world.

(Extra Crunch) 79% more leads without more traffic: Here’s how we did it — Conversion rate optimization expert Jasper Kuria shared a detailed case study deconstructing the CRO techniques he used to boost conversion rates by nearly 80% for China Expat Health, a lead generation company.

This week’s recommended growth marketers

As always, if you have a top-tier marketer that you think we should know about, tell us!

Marketer: Dipti Parmar
Recommended by: Brody Dorland, co-founder, DivvyHQ
Testimonial: “She gave me an easy-to-implement plan to start with clear outcomes and timeline. She delivered it within one month and I was able to see the results in a couple of months. This encouraged me to hand over bigger parts of our content strategy and publishing to her.”

Marketer: Amy Konefal (Closed Loop)
Recommended by: Dan Reardon, Vudu
Testimonial: “Amy drove scale for us as we grew to a half-billion-dollar company. She identified and exploited efficiencies and built out a rich portfolio of channels.”

Marketer: Karl Hughes (draft.dev)
Recommended by: Joshua Shulman, Bitmovin.com
Testimonial: “Karl is incredibly knowledgeable in the field of content and growth marketing to a large (and equally niche) target audience of developers. He and his team at Draft.dev are some of the best at “developer marketing,” which is a greatly underrated target audience.”

Marketer: Ladder
Recommended by: Anonymous
Testimonial: “They really get what I need. By testing different messaging on different personas, we discover what works and what doesn’t to better understand our users and prospects. This is gold for a company at our stage. Showing those results to our investors blew their minds.”




Multilingual SEO for voice searches: Comprehensive guide

July 2, 2021 No Comments

30-second summary:

  • Search engines are laser-focused on improving user experience and voice search plays an increasingly key role
  • With 100+ global languages, people are prone to searching in their native language
  • How do you optimize your website for multilingual search while keeping a natural and conversational tone?
  • Atul Jindal guides you through the process

Google’s voice search now recognizes 119 different languages, which is great for user experience. But it makes ranking a bit more challenging for website owners, especially those whose sites attract multi-linguistic traffic. Website owners must act to cater to these people who are taking a different linguistic approach to search. That’s where multilingual SEO comes in, done with voice search in mind.

But before we begin digging deeper into multilingual SEO for voice search, let us first introduce the search of the future, aka multilingual voice search.

What is Multilingual Voice Search?

With the evolution of technology, search engines like Google, Bing, Yandex, and others are working to enhance their user experience and make search easier than ever.

Keeping up with these efforts, they now let people talk to them in their own language, understand it, and yield the results people were searching for.

Moreover, more than 23 percent of American households use digital assistants, and nearly 27 percent of people conduct voice searches using smartphones. This number is expected to increase by more than nine percent in 2021 alone.

This means more and more people will converse with Google in languages other than English. A German native, for example, is likely to search by speaking in German. An Indian user could use any of the 100+ languages spoken in India, and a US national may use English, Spanish, or another language.

This increase in the popularity of voice assistants and multilingual voice search inevitably leads to an increase in the demand for multilingual SEO for voice search.

But do you need to optimize your website for multilingual searches? Yes. How else will your website reach the part of your target audience that searches in its native language?

Combining Multilingual SEO with voice search

So far, there have only been guides for either multilingual SEO or voice search. However, gauging the rising importance of this relatively new kind of search, we present a guide that combines the two.

What is Multilingual SEO?

Multilingual SEO is the practice of adapting your website to cater to a target audience that searches in multiple languages. It involves translating web pages, using the right keywords, and optimizing each page accordingly. We will go into the details below.

Notice how Google yields Hindi results for a search conducted in Urdu/Hindi. That’s because these results were optimized for multilingual voice searches.

Voice search: The search of the future

Voice searches are hugely different from regular typed searches. When typing, you want results for minimal physical effort. When speaking, there is no typing at all, just talking. Voice searches therefore tend to be longer and have a more conversational style and tone.

Let’s take an example

A person looking for a Chinese restaurant will go about it in two different ways when using voice search and regular search.

When typing, this person will type something like “best Chinese restaurant near me.”

On the other hand, when using voice search, he or she will simply say “Hey Google, tell me about the best Chinese restaurants I can go to right now.”

Do you see the difference? To optimize for voice assistants, you have to adapt to this difference when doing SEO.

Add the multilingual touch to this and you have multilingual voice search.

In the example above, I searched for the weather in my city.

If I were typing, I simply would’ve typed “[my city name] weather.”

However, when using voice, I used a complete phrase in my native language, and Google yielded results in that language. Those results were optimized for multilingual voice searches.

How to Do Multilingual SEO for Voice Searches?

Now, suppose you want to cater to a global audience and expand your reach, and you want your website to rank when your target audience searches for something you offer in their own language. For that, you need multilingual SEO.

Below, we discuss the steps to optimize your website for multilingual voice searches:

Keyword Research

No SEO strategy can ever start without keyword research. Therefore, before you begin doing multilingual SEO for your website, you need to perform proper keyword research.

When translating your website, you can’t just translate the keywords or phrases, because a keyword that has a high search volume in one language may not be that viable when translated into another language.

Let’s look at a case study from Ahrefs to understand this point.

Ahrefs looked at the search volume for the key phrase “last minute holidays” and found that it received 117k searches from the UK in a month.

However, the same phrase translated into French, “vacances dernière minute”, had a total search volume of just 8.4k.

[Images: geography-specific keyword research comparison from the Ahrefs case study]

The findings from this case study show the importance of independent keyword research for multilingual SEO: simply translating the keywords won’t yield good results.

So, what you can do is pick up the phrases from your original website (which we assume is in English and optimized for voice search), translate them, brainstorm additional relevant keywords, and plug them into any of the keyword research tools to see their search volume and competition.

Additionally, keywords for voice searches are different from regular keywords: you need to take an intuitive approach, getting inside your target audience’s mind to see what they think and say when searching, and how they phrase it. Then use these phrases to go ahead with your keyword research and make a list based on high search volume and low competition, as in the sketch below.
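To make that filtering step concrete, here is a minimal Python sketch. The UK and French volumes echo the Ahrefs figures above; the third keyword and all of the competition scores are invented for illustration, and in practice every number here would come from your keyword research tool.

```python
# Shortlisting multilingual keyword candidates by volume and competition.
# Data is illustrative: the first two volumes mirror the Ahrefs case study,
# the rest is made up for the example.

CANDIDATES = [
    # (keyword, market, monthly searches, competition score 0..1)
    ("last minute holidays", "en-GB", 117_000, 0.62),
    ("vacances dernière minute", "fr-FR", 8_400, 0.35),
    ("vols dernière minute", "fr-FR", 14_000, 0.41),
]

def shortlist(candidates, min_volume=5_000, max_competition=0.5):
    """Keep keywords with enough demand and beatable competition."""
    return [
        (keyword, market)
        for keyword, market, volume, competition in candidates
        if volume >= min_volume and competition <= max_competition
    ]

print(shortlist(CANDIDATES))
# [('vacances dernière minute', 'fr-FR'), ('vols dernière minute', 'fr-FR')]
```

The exact thresholds are a judgment call; the point is that each language market gets its own shortlist rather than a translation of the English one.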

Translation

Once you have a list of keywords to optimize for, the next step is to translate the content that’s already on your website and optimize it with those keywords.

When translating a website, the best approach is to hire a human translator who is a native speaker of the target language.

You may be tempted to use Google Translate or some other automatic translation tool. But even though Google endorses its own translator, it leaves a subtle recommendation to use human translators, because robots have yet to compete with and beat humans, at least when it comes to translation.

[Image: translation code for multilingual SEO]

Additionally, make sure the translator aligns the content with the tone of your original website.  

Hreflang Annotation

Here comes the technical part. Did you really think you could get by in multilingual SEO without getting involved in the technicalities?

Hreflang annotation is critical for websites that have different versions in different languages.

It enables Google to identify which web page to show to which visitor. For example, you don’t want your English visitors to land on the French version of your page. Using hreflang ensures English visitors land on the English page, and French-speaking people on the page in French.

Another important attribute that will go in your website’s code when doing multilingual SEO is the alternate attribute. It tells the search engine that a translated page is a different version, in an alternate language, of a pre-existing page, and not a duplicate. This matters because Google cracks down on duplicate pages and can penalize your website if you haven’t used the alternate tag. A sketch of the markup follows.
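Here is a minimal sketch of what the annotation looks like, generated with Python so the pattern is easy to adapt. The URLs are placeholder values; the full set of tags, including a self-reference and an x-default fallback, should appear in the head of every language version of the page.

```python
# Generating hreflang annotations for a page that exists in English and French.
# URLs are illustrative placeholders.

LANGUAGE_VERSIONS = {
    "en": "https://example.com/en/pricing/",
    "fr": "https://example.com/fr/pricing/",
}

def hreflang_tags(versions, default_lang="en"):
    """Build the <link rel="alternate"> tags for a page's <head>."""
    tags = [
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in sorted(versions.items())
    ]
    # x-default tells the search engine which version to serve
    # when no language matches the visitor.
    tags.append(
        f'<link rel="alternate" hreflang="x-default" href="{versions[default_lang]}" />'
    )
    return "\n".join(tags)

print(hreflang_tags(LANGUAGE_VERSIONS))
```

Note that rel="alternate" and hreflang travel together: the same tag both declares the language version and tells Google the page is not a duplicate.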

URL structure

You can’t discuss multilingual SEO without talking about URL structure.

When doing multilingual SEO, you are often serving different versions of your website under the same domain. This means you have to create a URL structure for each version, so the search engine can take each visitor to the right page.

When it comes to URLs for multilingual websites, you have many options, and each option has its pros and cons. You can check out how Google lists these pros and cons in the image below.

[Image: URL structure options with pros and cons. Source: Google Search Central]

Confused about which URL structure to use?

You can choose any option as per your preferences. According to Google, no URL structure has a special impact on SEO, except for using parameters within URLs. I personally think a subdomain (as Wikipedia uses) or a subfolder/directory (as Apple uses) are the easiest options for creating a multilingual site. And if you’re using WordPress, you can use a plugin like Polylang to go multilingual. The common options are summarized below.
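As a quick reference, here is an illustrative summary of the options. The domains are placeholders, and the trade-offs paraphrase the Google guidance referenced above.

```python
# Common URL structures for serving a French version of a site.
# Domains are placeholders; trade-offs paraphrase Google's guidance.

URL_STRUCTURES = {
    "country-specific domain": ("https://example.fr/",
                                "clear geo-targeting, but one domain per market is costly"),
    "subdomain":               ("https://fr.example.com/",
                                "easy to set up; the language signal is less obvious to users"),
    "subdirectory":            ("https://example.com/fr/",
                                "easy to maintain under a single domain and host"),
    "URL parameters":          ("https://example.com/?lang=fr",
                                "not recommended; harder for search engines to segment"),
}

for option, (url, trade_off) in URL_STRUCTURES.items():
    print(f"{option:>24}  {url:<32}  {trade_off}")
```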

Content style

Content writing style is quite important when optimizing your website for multilingual SEO. Your content should focus on a conversational style rather than academic or complex sentence structures. As noted, voice-related queries mostly come in question format, so FAQs and short paragraphs that emphasize addressing questions will serve voice-related search queries better; one way to surface such content to search engines is sketched below.
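One concrete way to act on the FAQ advice, which this guide doesn’t spell out, is to expose question-and-answer content as schema.org FAQPage structured data. The questions below are illustrative placeholders; generate the list from your real multilingual FAQ content.

```python
# Emitting schema.org FAQPage JSON-LD for FAQ content, so search engines
# can parse question-and-answer pairs. Questions are placeholders.

import json

FAQS = [
    ("Quel temps fait-il à Paris ?",
     "Consultez notre page météo pour les prévisions du jour à Paris."),
    ("What is the weather like in Paris?",
     "See our weather page for today's Paris forecast."),
]

def faq_jsonld(faqs):
    """Build a FAQPage object ready to embed in a <script> tag."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }, ensure_ascii=False, indent=2)

print(f'<script type="application/ld+json">\n{faq_jsonld(FAQS)}\n</script>')
```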

The importance of multilingual SEO for Voice Search

Now that you know how to set your website for multilingual SEO, you might be wondering whether it is worth all the hassle.

If your website sees a lot of multilingual traffic, you have no choice but to go for multilingual SEO for voice search, because:

  1. Voice search is the future of search: 51 percent of people already use it for product research before buying. Therefore, starting with multilingual voice search right now will prepare you to tackle the challenges of search and SEO that the future brings.
  2. Your business can’t grow all that much unless it personalizes its offerings to the visitor. In this case, speaking to them in their own language adds up to a good user experience.
  3. Multilingual SEO will expand your website’s reach by catering to multi-linguistic searchers. If your business is global or spread across multiple countries with different languages, and your website is restricted to English, you are missing a big chunk of easy traffic, which would be difficult to win with English keywords given their higher global competition and keyword difficulty.

Final thoughts

Multilingual SEO for voice search is something you’ll see all website owners who receive multi-linguistic traffic doing in the future. Therefore, it is better to start now and get ahead of your competitors.

The key takeaways for optimizing your website for multilingual voice searches are target-language keyword research, human translation, hreflang tags, and the right URL structure.

With the right keyword research, a meaningful translation, thorough technical SEO, and the URL structure that best fits your unique web requirements, you can enjoy riding the wave of multilingual voice search when it arrives. And it will arrive soon.

Atul Jindal is Sr. Web Engineer at Adobe Research.




Instagram is developing its own version of Twitter’s Super Follow with ‘Exclusive Stories’

June 30, 2021 No Comments

Instagram is building its own version of Twitter’s Super Follow with a feature that would allow online creators to publish “exclusive” content to their Instagram Stories that’s only available to their fans — access that would likely come with a subscription payment of some kind. Instagram confirmed the screenshots of the feature recently circulated across social media are from an internal prototype that’s now in development, but not yet being publicly tested. The company declined to share any specific details about its plans, saying it is not at a place to talk about this project just yet.

Image Credits: Exclusive Story in development via Alessandro Paluzzi

The screenshots, however, convey a lot about Instagram’s thinking, as they show a way that creators could publish what are being called “Exclusive Stories” to their account, designated with a different color (currently purple). When other Instagram users come across an Exclusive Story, they’ll be shown a message saying that “only members” can view this content. The Stories cannot be screenshotted either, it appears, though they can be shared as Highlights. A new prompt encourages creators to “save this to a Highlight for your Fans,” explaining that, by doing so, “fans always have something to see when they join.”

The Exclusive Stories feature was uncovered by reverse engineer Alessandro Paluzzi, who often finds unreleased features in the code of mobile apps. Over the past week, he’s published a series of screenshots to an ongoing Twitter thread about his findings.

Image Credits: Instagram Exclusive Story Highlight feature in development via Alessandro Paluzzi

Exclusive Stories are only one part of Instagram’s broader plans for expanded creator monetization tools.

The company has been slowly revealing more details about its efforts in this space, with Instagram Head Adam Mosseri first telling The Information in May that the company was “exploring” subscriptions along with other new features, like NFTs.

Paluzzi also recently found references to the NFT feature, Collectibles, which show how digital collectibles could appear on a creator’s Instagram profile in a new tab.

Image Credits: Instagram NFT feature in development via Alessandro Paluzzi

Instagram, so far, hasn’t made a public announcement about these specific product developments, instead choosing to speak at a high level about its plans around things like subscriptions and tips.

For example, during Instagram’s Creator Week in early June — an event that could have served as an ideal place to offer a first glimpse at some of these ideas — Mosseri talked more generally about the sort of creator tools Instagram was interested in building, without saying which were actually in active development.

“We need to create, if we want to be the best platform for creators long term, a whole suite of things, or tools, that creators can use to help do what they do,” he said, explaining that Instagram was also working on more creative tools and safety features for creators, as well as tools that could help creators make a living.

“I think it’s super important that we create a whole suite of different tools, because what you might use and what would be relevant for you as a creator might be very different than an athlete or a writer,” he said.

“And so, largely, [the creator monetization tools] fall into three categories. One is commerce — so either we can do more to help with branded content; we can do more with affiliate marketing…we can do more with merch,” he explained. “The second is ways for users to actually pay creators directly — so whether it is gated content or subscriptions or tips, like badges, or other user payment-type products. I think there’s a lot to do there. I love those because those give creators a direct relationship with their fans — which I think is probably more sustainable and more predictable over the long run,” Mosseri said.

The third area is focused on revenue share, as with IGTV long-form video and short-form video, like Reels, he added.

Image Credits: Instagram Exclusive Story feature in development via Alessandro Paluzzi

Instagram isn’t the only large social platform moving forward with creator monetization efforts.

The membership model, popularized by platforms like OnlyFans and Patreon, has been more recently making its way to a number of mainstream social networks as the creator economy has become better established.

Twitter, for example, first announced its own take on creator subscriptions, with the unveiling of its plans for the Super Follow feature during an Analyst Day event in February. Last week, it began rolling out applications for Super Follows and Ticketed Spaces — the latter, a competitor to Clubhouse’s audio social networking rooms.

Meanwhile, Facebook just yesterday launched its Substack newsletter competitor, Bulletin, which offers a way for creators to sell premium subscriptions and access member-only groups and live audio rooms. Even Spotify has launched an audio chat room and Clubhouse rival, Greenroom, which it also plans to eventually monetize.

Though the new screenshots offer a deeper look into Instagram’s product plans on this front, we should caution that an in-development feature is not necessarily representative of what a feature will look like at launch or how it will ultimately behave. It’s also not a definitive promise of a public launch — though, in this case, it would be hard to see Instagram scrapping its plans for exclusive, member-only content given its broader interest in serving creators, where such a feature is essentially part of a baseline offering.

