Monthly Archives: July 2021
Kobo’s Elipsa is the latest in the Amazon rival’s e-reading line, and it’s a big one. The 10.3-inch e-paper display brings it up to iPad dimensions and puts it in direct competition with the reMarkable and Boox’s e-reader tablets. It excels on reading experience, gets by on note taking and drawing, but falls a bit short on versatility.
Kobo has been creeping upmarket for a few years now, and though the cheaper Clara HD is still the pick of the litter in my opinion, the Forma and Libra H2O are worthy competitors to the Kindle lines. The $400 Elipsa represents a big step up in size, function and price, and it does justify itself — though there are a few important caveats.
The device is well designed but lacks any flourishes. The tilted “side chin” of the Forma and Libra is flattened out into a simple wide bezel on the right side. The lopsided appearance doesn’t bother me much, and much of the competition has it as well. (Though my favorite is Boox’s ultracompact, flush-fronted Poke 3.)
The 10.3″ screen has a resolution of 1404 x 1872, giving it 227 pixels per inch. That’s well below the 300 PPI of the Clara and Forma, and the typography suffers from noticeably more aliasing if you look closely. Of course, you won’t be looking that closely, since as a larger device you’ll probably be giving the Elipsa a bit more distance and perhaps using a larger type size. I found it perfectly comfortable to read on — 227 PPI isn’t bad, just not the best.
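That 227 figure is easy to verify: pixel density is just the diagonal pixel count divided by the diagonal screen size. A quick illustrative sketch:

```python
import math

def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal resolution divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

# The Elipsa's 1404 x 1872 panel across 10.3 inches:
print(round(pixels_per_inch(1404, 1872, 10.3)))  # 227
```

The same formula confirms why the 6-inch Clara HD's 1072 x 1448 panel lands at 300 PPI.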
There is a front light, which is easily adjustable by sliding your finger up and down the left side of the screen, but unlike other Kobo devices there is no way to change the color temperature. I’ve been spoiled by other devices and now the default cool grey I lived with for years doesn’t feel right, especially with a warmer light shining on your surroundings. The important part is that it is consistent across the full display and adjustable down to a faint glow, something my eyes have thanked me for many times.
It’s hard to consider the Elipsa independent of the accessories it’s bundled with, and in fact there’s no way to buy one right now without the “sleep cover” and stylus. The truth is they really complete the package, though they do add considerably to its weight and bulk. Naked, the Elipsa is lighter and feels smaller than a standard iPad; once you put its case on and stash the surprisingly weighty stylus at the top, it is heavier and larger.
The cover is nicely designed, if a bit stiff, and will definitely protect your device from harm. The cover, secured by magnets at the bottom, flips off like a sheet on a legal pad and folds flat behind the device, attaching itself with the same magnets from the other direction. A couple folds in it also stiffen up with further magnetic arrangement into a nice, sturdy little stand. The outside is a grippy faux leather and the inside is soft microfiber.
You can wake and turn off the device by opening and closing the cover, but the whole thing comes with a small catch: you have to have the power button, charging port and big bezel on the right. When out of its case the Elipsa can, like the others of its lopsided type, be inverted and your content instantly flips. But once you put it in the case, you’re locked in to a semi-right-handed mode. This may or may not bother people but it’s worth mentioning.
The reading experience is otherwise very similar to that on Kobo’s other devices: a relatively clean interface that surfaces your most recently accessed content and a not overwhelming but still unwelcome amount of promotional stuff (“Find your next great read”). E-books, free and paid, display well, though it’s never been my preference to read on a large screen like this. I truly wish one of these large e-readers would offer a landscape mode with facing pages. Isn’t that more booklike?
Articles from the web, synced via Pocket, look great and are a pleasure to read in this format. It feels more like a magazine page, which is great when you’re reading an online version of one. It’s simple, foolproof and well integrated.
Kobo’s new note-taking prowess
What’s new on the bottom row, though, is “Notebooks,” where unsurprisingly you can create notebooks for scribbling down lists, doodles and of course notes, and generally make use of the stylus.
The writing experience is adequate. Here I am spoiled by the reMarkable 2, which boasts extremely low lag and high accuracy, as well as much more expression in the line. Kobo doesn’t approach that, and the writing experience is fairly basic, with a noticeable amount of lag, but admirable accuracy.
There are five pen tips, five line widths and five line shades, and they’re all fine. The stylus has a nice heft to it, though I’d like a grippier material. Two buttons on it let you quickly switch from the current pen style to a highlighter or eraser, where you have stroke-deleting or brush modes. The normal notebooks have the usual gridded, dotted, lined and blank styles, and unlimited pages, but you can’t zoom in or out (not so good for artists).
Then there are the “advanced” notebooks, which you must use if you want handwriting recognition and other features. These have indelible lines on which you can write, and a double tap captures your words into type very quickly. You can also put in drawings and equations in their own sections.
The handwriting recognition is fast and good enough for rough notes, but don’t expect to send these directly to your team without any editing. Likewise the diagram tool that turns gestural sketches of shapes and labels into finalized flowcharts and the like — better than the original wobbly art but still a rough draft. There are a few clever shortcuts and gestures to add or subtract spaces and other common tasks, something you’ll probably get used to fairly quickly if you use the Elipsa regularly.
The notebook interface is snappy enough going from page to page or up and down on the “smart” notebooks but nothing like the fluidity of a design program or an art-focused one on an iPad. But it’s also unobtrusive, has good palm blocking, and feels nice in action. The lag on the line is definitely a con, but something you can get used to if you don’t mind the resulting product being a little sloppy.
You can also mark up e-books, which is nice for highlights but ultimately not that much better than simply selecting the text. And there’s no way you’re writing in the margins with the limitations of this stylus.
Exporting notebooks can be done via a linked Dropbox account or over a USB connection. Again the reMarkable has a leg up here: even if its app is a bit restrictive, the live syncing means you don’t ever have to worry about what version of what is where, as long as it’s in the system. On the Kobo it’s more traditional.
Compared to the reMarkable, the Kobo is really just an easier platform for everyday reading, so if you’re looking for a device that focuses on that and has the option of doodling or note taking on the side, it’s a much better deal. On the other hand, those just looking for an improvement on that stylus-focused tablet should look elsewhere — writing and sketching still feel way better on a reMarkable than on almost anything else on the market. Compared with something like a Boox tablet, the Elipsa is simpler and more focused, but doesn’t offer the option of adding Android apps and games.
At $400 — though this includes a case and stylus — the Elipsa is a considerable investment and comparably priced to an iPad, which is certainly a more versatile device. But I don’t particularly enjoy reading articles or books on my iPad, and the simplicity of an e-reader in general helps me focus when I’m making notes on a paper or something. It’s a different device for a different purpose, but not for everyone.
It is however probably the best way right now to step into the shallow end of the “big e-reader” pool, with more complex or expensive options available should you desire them.
In the post 10 Most Important SEO Patents, Part 5 – Phrase-Based Indexing, I wrote about how Google’s then-Head of Webspam sent a newsletter to librarians. It described the inverted index that Google used to organize terms in its index of the web. It is no longer available online, but it was a great … Read more
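For readers unfamiliar with the term, an inverted index maps each term to the documents that contain it, rather than each document to its list of terms. A toy sketch of the idea (illustrative only — nothing like Google's actual implementation):

```python
from collections import defaultdict

def build_inverted_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each term to the set of document IDs containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    "d1": "google organizes the web",
    "d2": "an index of the web",
}
index = build_inverted_index(docs)
print(sorted(index["web"]))  # ['d1', 'd2']
```

Looking up a term then returns its posting set directly, which is what makes queries over a web-scale corpus fast.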
If you want people to find your video content, you need the right keywords. It sounds so simple, doesn’t it? Well, the key is knowing which keywords are the right ones for you, where to find them, and how to implement them on your video to boost your SEO ranking. Just as you put a […]
Read more at PPCHero.com
Last year, during the pandemic, a free browser extension called Netflix Party gained traction because it enabled people trapped in their homes to connect with far-flung friends and family by watching the same Netflix TV shows and movies simultaneously. It also enabled them to dish about the action in a side bar chat.
Yet that company — later renamed Teleparty — was just the beginning, argue two young companies that have raised seed funding. One, a year-old upstart in London that launched in December, just closed its round this week led by Craft Ventures. The other, a four-year-old, Bay Area-based startup, has raised $3 million in previously undisclosed seed funding, including from 500 Startups.
Both believe that while investors have thrown money at virtual events and edtech companies, there is an even bigger opportunity in developing a kind of multiplayer browsing experience that enables people to do much more together online. From watching sports to watching movies to perhaps even reviewing X-rays with one’s doctor some day, both say more web surfing together is inevitable, particularly for younger users.
The companies are taking somewhat different approaches. The startup on which Craft just made a bet, leading its $2.2 million seed round, is Giggl, a year-old, London-based startup whose web app lets users tap into virtual sessions it calls “portals,” where they can invite friends to browse content together, as well as text chat and call in. The portals can be private rooms or switched to “public” so that anyone can join.
Giggl was founded by four teenagers who grew up together, including its 19-year-old chief product officer, Tony Zog. It only recently graduated from the LAUNCH accelerator program. Still, it already has enough users — roughly 20,000 of whom use the service on an active monthly basis — that it’s beginning to build its own custom server infrastructure to minimize downtime and reduce its costs.
The bigger idea is to build a platform for all kinds of scenarios and to charge for these accordingly. For example, while people can chat for free while web surfing or watching events together like Apple Worldwide Developers Conference, Giggl plans to charge for more premium features, as well as to sell subscriptions to enterprises that are looking for more ways to collaborate. (You can check out a demo of Giggl’s current service below.)
Hearo.live is the other “multiplayer” startup — the one backed by 500 Startups, along with numerous angel investors. The company is the brainchild of Ned Lerner, who previously spent 13 years as a director of engineering with Sony Worldwide Studios and a short time before that as the CTO of an Electronic Arts division.
Hearo has a narrower strategy in that users can’t browse absolutely anything together as with Giggl. Instead, Hearo enables users to access upwards of 35 broadcast services in the U.S. (from NBC Sports to YouTube to Disney+), and it relies on data synchronization to ensure that every user sees the same original video quality.
Hearo has also focused a lot of its efforts on sound, aiming to ensure that when multiple streams of audio are being created at the same time — say users are watching the basketball playoffs together and also commenting — not everyone involved is confronted with a noisy feedback loop.
Indeed, Lerner says, through echo cancellation and other “special audio tricks” that Hearo’s small team has developed, users can enjoy the experience without “noise and other stuff messing up the experience.” (“Pretty much we can do everything Clubhouse can do,” says Lerner. “We’re just doing it as you’re watching something else because I honestly didn’t think people just sitting around talking would be a big thing.”)
Like Giggl, Hearo envisions a subscription model; Lerner also anticipates an eventual ad revenue split with sports broadcasters and says the company is already working with the European Broadcasting Union on that front. Like Giggl’s, Hearo’s user numbers are conservative by most standards, with 300,000 downloads to date of its app for iOS, Android, Windows, and macOS, and 60,000 monthly active users.
All of which raises the question of whether “watching together online” is a huge opportunity, and the answer doesn’t yet seem clear, even if Hearo and Giggl have more compelling tech and viable paths to generating revenue than their predecessors.
The startups aren’t the first to focus on watch-together type experiences. Scener, a startup founded by serial entrepreneur Richard Wolpert, says it has 2 million active registered users and “the best, most active relationship with all the studios.” But it markets itself as a virtual movie theater, which is a slightly different use case.
Rabbit, a company founded in 2013, enabled people to more widely browse and watch the same content simultaneously, as well as to text and video chat. It’s closer to what Giggl is building. But Rabbit eventually ran aground.
Lerner says that’s because the company was screen-sharing other people’s copyrighted material and so couldn’t charge for its service. (“Essentially,” he notes, “you can get away with some amount of piracy if it’s not for your personal financial benefit.”) But it’s probably fair to wonder if there will ever be massive demand for services like his, particularly as the coronavirus fades into the distance and people reengage more actively in the physical world.
For his part, Lerner isn’t worried. He points to a generation that is far more comfortable watching video on a phone than elsewhere. He also notes that screen time has become “an isolating thing,” and predicts it will eventually become “an ideal time to hang out with your buddies,” akin to watching a game on the couch together.
There is a precedent, in his mind. “Over the last 20 years, games went from single player to multiplayer to voice chats showing up in games so people can actually hang out,” he says. “Because mobile is everywhere and social is fun, we think the same is going to happen to the rest of the media business.”
Zog thinks the trends play in Giggl’s favor, too. “It’s obvious that people are going to meet up more often” as the pandemic winds down, he says. But all that real-world socializing “isn’t really going to be a substitute” for the kind of online socializing that’s already happening in so many corners of the internet.
Besides, he adds, Giggl wants to “make it so that being together online is just as good as being together in real life. That’s the end goal here.”
Netskope, focused on Secure Access Service Edge architecture, announced Friday a $300 million investment round on a post-money valuation of $7.5 billion.
The oversubscribed insider investment was led by ICONIQ Growth, which was joined by other existing investors, including Lightspeed Venture Partners, Accel, Sequoia Capital Global Equities, Base Partners, Sapphire Ventures and Geodesic Capital.
Netskope co-founder and CEO Sanjay Beri told TechCrunch that since its founding in 2012, the company’s mission has been to guide companies through their digital transformation by finding what is most valuable to them — sensitive data — and protecting it.
“What we had before in the market didn’t work for that world,” he said. “The theory is that digital transformation is inevitable, so our vision is to transform that market so people could do that, and that is what we are building nearly a decade later.”
With this new round, Netskope continues to rack up large rounds: it raised $340 million last February, which gave it a valuation of nearly $3 billion. Prior to that, it raised $168.7 million at the end of 2018.
Similar to other rounds, the company was not actively seeking new capital; rather, it was “an inside round with people who know everything about us,” Beri said.
“The reality is we could have raised $1 billion, but we don’t need more capital,” he added. “However, having a continued strong balance sheet isn’t a bad thing. We are fortunate to be in that situation, and our destination is to be the most impactful cybersecurity company in the world.”
Beri said the company just completed a “three-year journey building the largest cloud network that is 15 milliseconds from anyone in the world,” and intends to invest the new funds into continued R&D, expanding its platform and Netskope’s go-to-market strategy to meet demand for a market it estimates will be valued at $30 billion by 2024.
Even pre-pandemic the company saw strong hypergrowth; over the past year it surpassed the market’s average annual growth of 50%, he added.
Today’s investment brings the total raised by Santa Clara-based Netskope to just over $1 billion, according to Crunchbase data.
With the company racking up that kind of capital, the next natural step would be to become a public company. Beri admits that Netskope could be public now, though it doesn’t have to do it for the traditional reasons of raising capital or marketing.
“Going public is one day on our path, but you probably won’t see us raise another private round,” Beri said.
Gary Vaynerchuk is a good friend; he has 15+ million social media followers, is chairman of VaynerX, and is the active CEO of VaynerMedia, a digital marketing agency you might have already heard of. Go to SESlingshots.GaryVaynerchukMedia.com and get your website on Google’s first page.
Search engine optimization (SEO) is the number one form of advertising in terms of return on investment. That means if you have a website or a business, search matters more to you than anything else out there if you’re looking for more leads, sales, customers, and friends. As a matter of fact, if you aren’t purposely and actively pursuing SEO for your business, you’re leaving more than money on the table: you’re neglecting your entire business profile and reviews, and ultimately damaging your reputation.
Content created in partnership with Gary V.
If you’re still asking, “What is search engine optimization?”: it’s the digital version of the yellow or white pages. If someone is looking for something, they typically use Google or another search engine and type in what’s on their mind. From there, whatever websites show up in the first results will get clicked on.
Basically, if you have a website that’s selling t-shirts or sportswear, you want your website to show up on Google when someone types in “red shirt” or “giants hat” (if you live in San Francisco). The neat thing about search engine marketing is that anyone can get it done, and save the $1,000s you would otherwise spend elsewhere. You can have a website up and running and use SESlingshots.GaryVaynerchukMedia.com to get your search terms ranked on Google as the first result in as little as 72 hours. It’s called backlink building, and if done right, you can get links from other trusted pages that tell Google your website is positively referred to and belongs at the top of search results.
It’s a really smart and effective strategy and, once again, it has the highest return on investment of any form of marketing or advertising to have ever existed on the planet. Visit SESlingshots.GaryVaynerchukMedia.com and use the search tool to find out what your website needs to get to the top result on Google.
Click here to show Gary V your support: SESlingshots.GaryVaynerchukMedia.com
A group of 37 attorneys general filed a second major multi-state antitrust lawsuit against Google Wednesday, accusing the company of abusing its market power to stifle competitors and forcing consumers into in-app payments that grant the company a hefty cut.
New York Attorney General Letitia James is co-leading the suit alongside the Tennessee, North Carolina and Utah attorneys general. The bipartisan coalition represents 36 U.S. states, including California, Florida, Massachusetts, New Jersey, New Hampshire, Colorado and Washington, as well as the District of Columbia.
“Through its illegal conduct, the company has ensured that hundreds of millions of Android users turn to Google, and only Google, for the millions of applications they may choose to download to their phones and tablets,” James said in a press release. “Worse yet, Google is squeezing the lifeblood out of millions of small businesses that are only seeking to compete.”
In December, 35 states filed a separate antitrust suit against Google, alleging that the company engaged in illegal behavior to maintain a monopoly on the search business. The Justice Department filed its own antitrust case focused on search last October.
In the new lawsuit, the bipartisan coalition of states alleges that Google uses “misleading” security warnings to keep consumers and developers within its walled app garden, the Google Play store. But the fees that Google collects from Android app developers are likely the meat of the case.
“Not only has Google acted unlawfully to block potential rivals from competing with its Google Play Store, it has profited by improperly locking app developers and consumers into its own payment processing system and then charging high fees,” District of Columbia Attorney General Karl Racine said.
Like Apple, Google herds all app payment processing into its own service, Google Play Billing, and reaps the rewards: a 30 percent cut of all payments. Much of the criticism here is a case that could — and likely will — be made against Apple, which exerts even more control over its own app ecosystem. Google doesn’t have an iMessage-equivalent exclusive app that keeps users locked in in quite the same way.
While the lawsuit discusses Google’s “monopoly power” in the app marketplace, the elephant in the room is Apple — Google’s thriving direct competitor in the mobile software space. The lawsuit argues that consumers face pressure to stay locked into the Android ecosystem, but on the Android side at least, much of that is ultimately familiarity and sunk costs. The argument on the Apple side of the equation here is likely much stronger.
The din over tech giants squeezing app developers with high mobile payment fees is just getting louder. The new multi-state lawsuit is the latest beat, but the topic has been white hot since Epic took Apple to court over its desire to bypass Apple’s fees by accepting mobile payments outside the App Store. When Epic set up a workaround, Apple kicked it out of the App Store and Epic Games v. Apple was born.
The Justice Department is reportedly already interested in Apple’s own app store practices, along with many state AGs who could launch a separate suit against the company at any time.
For the second time in a month, Microsoft issued an update that doesn’t fully address a severe security vulnerability in Windows.
Now is a prime opportunity for travel brands to expand top-of-funnel acquisitions with Discovery Campaigns, Brand Awareness Campaigns, and Lead Forms.
Read more at PPCHero.com
For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab-bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory.
And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted.
The suspicion remains that it’s nowhere near far enough.
New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side-effect of the platform’s rapacious appetite to harvest views to serve ads.
That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform.
The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of ‘commercial secrecy’.
But regulation that could help crack open proprietary AI blackboxes is now on the cards — at least in Europe.
To fix YouTube’s algorithm, Mozilla is calling for “common sense transparency laws, better oversight, and consumer pressure” — suggesting a combination of laws that mandate transparency into AI systems; protect independent researchers so they can interrogate algorithmic impacts; and empower platform users with robust controls (such as the ability to opt out of “personalized” recommendations) are what’s needed to rein in the worst excesses of the YouTube AI.
Regrets, YouTube users have had a few…
To gather data on specific recommendations being made to YouTube users — information that Google does not routinely make available to external researchers — Mozilla took a crowdsourced approach, via a browser extension (called RegretsReporter) that lets users self-report YouTube videos they “regret” watching.
The tool can generate a report which includes details of the videos the user had been recommended, as well as earlier video views, to help build up a picture of how YouTube’s recommender system was functioning. (Or, well, ‘dysfunctioning’ as the case may be.)
The crowdsourced volunteers whose data fed Mozilla’s research reported a wide variety of ‘regrets’, including videos spreading COVID-19 fear-mongering, political misinformation and “wildly inappropriate” children’s cartoons, per the report — with the most frequently reported content categories being misinformation, violent/graphic content, hate speech and spam/scams.
A substantial majority (71%) of the regret reports came from videos that had been recommended by YouTube’s algorithm itself, underscoring the AI’s starring role in pushing junk into people’s eyeballs.
The research also found that recommended videos were 40% more likely to be reported by the volunteers than videos they’d searched for themselves.
Mozilla even found “several” instances when the recommender algorithm put content in front of users that violated YouTube’s own community guidelines and/or was unrelated to the previous video watched. So a clear fail.
A very notable finding was that regrettable content appears to be a greater problem for YouTube users in non-English speaking countries: Mozilla found YouTube regrets were 60% higher in countries without English as a primary language — with Brazil, Germany and France generating what the report said were “particularly high” levels of regretful YouTubing. (And none of the three can be classed as minor international markets.)
Pandemic-related regrets were also especially prevalent in non-English speaking countries, per the report — a worrying detail to read in the middle of an ongoing global health crisis.
The crowdsourced study — which Mozilla bills as the largest-ever into YouTube’s recommender algorithm — drew on data from more than 37,000 YouTube users who installed the extension, although it was a subset of 1,162 volunteers — from 91 countries — who submitted reports that flagged 3,362 regrettable videos which the report draws on directly.
These reports were generated between July 2020 and May 2021.
What exactly does Mozilla mean by a YouTube “regret”? It says this is a crowdsourced concept based on users self-reporting bad experiences on YouTube, so it’s a subjective measure. But Mozilla argues that taking this “people-powered” approach centres the lived experiences of Internet users and is therefore helpful in foregrounding the experiences of marginalised and/or vulnerable people and communities (vs, for example, applying only a narrower, legal definition of ‘harm’).
“We wanted to interrogate and explore further [people’s experiences of falling down the YouTube ‘rabbit hole’] and frankly confirm some of these stories — but then also just understand further what are some of the trends that emerged in that,” explained Brandi Geurkink, Mozilla’s senior manager of advocacy and the lead researcher for the project, discussing the aims of the research.
“My main feeling in doing this work was being — I guess — shocked that some of what we had expected to be the case was confirmed… It’s still a limited study in terms of the number of people involved and the methodology that we used but — even with that — it was quite simple; the data just showed that some of what we thought was confirmed.
“Things like the algorithm recommending content essentially accidentally, that it later is like ‘oops, this actually violates our policies; we shouldn’t have actively suggested that to people’… And things like the non-English-speaking user base having worse experiences — these are things you hear discussed a lot anecdotally and activists have raised these issues. But I was just like — oh wow, it’s actually coming out really clearly in our data.”
Mozilla says the crowdsourced research uncovered “numerous examples” of reported content that would likely or actually breach YouTube’s community guidelines — such as hate speech or debunked political and scientific misinformation.
But it also says the reports flagged a lot of what YouTube “may” consider ‘borderline content’. Aka, stuff that’s harder to categorize — junk/low quality videos that perhaps toe the acceptability line and may therefore be trickier for the platform’s algorithmic moderation systems to respond to (and thus content that may also survive the risk of a take down for longer).
However, a related issue the report flags is that YouTube doesn’t provide a definition for borderline content, despite discussing the category in its own guidelines. That, says Mozilla, makes it impossible to verify the researchers’ assumption that much of what the volunteers were reporting as ‘regretful’ would likely fall into YouTube’s own ‘borderline content’ category.
The challenge of independently studying the societal effects of Google’s tech and processes is a running theme underlying the research. But Mozilla’s report also accuses the tech giant of meeting YouTube criticism with “inertia and opacity”.
It’s not alone there, either. Critics have long accused YouTube’s ad giant parent of profiting off of engagement generated by hateful outrage and harmful disinformation — allowing “AI-generated bubbles of hate” to surface ever more baleful (and thus stickily engaging) stuff, exposing unsuspecting YouTube users to increasingly unpleasant and extremist views, even as Google gets to shield its low grade content business under a user-generated content umbrella.
Indeed, ‘falling down the YouTube rabbit hole’ has become a well-trodden metaphor for discussing the process of unsuspecting Internet users being dragged into the darkest and nastiest corners of the web. This user reprogramming takes place in broad daylight, via AI-generated suggestions that yell at people to follow the conspiracy breadcrumb trail right from inside a mainstream web platform.
Back in 2017 — when concern was riding high about online terrorism and the proliferation of ISIS content on social media — politicians in Europe were accusing YouTube’s algorithm of exactly this: Automating radicalization.
However it’s remained difficult to get hard data to back up anecdotal reports of individual YouTube users being ‘radicalized’ after viewing hours of extremist content or conspiracy theory junk on Google’s platform.
Ex-YouTube insider — Guillaume Chaslot — is one notable critic who’s sought to pull back the curtain shielding the proprietary tech from deeper scrutiny, via his algotransparency project.
Mozilla’s crowdsourced research adds to those efforts by sketching a broad — and broadly problematic — picture of the YouTube AI by collating reports of bad experiences from users themselves.
Of course externally sampling platform-level data that only Google holds in full (at its true depth and dimension) can’t be the whole picture — and self-reporting, in particular, may introduce its own set of biases into Mozilla’s data-set. But the problem of effectively studying big tech’s blackboxes is a key point accompanying the research, as Mozilla advocates for proper oversight of platform power.
In a series of recommendations the report calls for “robust transparency, scrutiny, and giving people control of recommendation algorithms” — arguing that without proper oversight of the platform, YouTube will continue to be harmful by mindlessly exposing people to damaging and braindead content.
The problematic lack of transparency around so much of how YouTube functions can be picked up from other details in the report. For example, Mozilla found that around 9% of recommended regrets (or almost 200 videos) had since been taken down — for a variety of not always clear reasons (sometimes, presumably, after the content was reported and judged by YouTube to have violated its guidelines).
Collectively, just this subset of videos had had a total of 160M views prior to being removed for whatever reason.
In other findings, the research found that regretful views tend to perform well on the platform.
A particularly stark metric is that reported regrets acquired a full 70% more views per day than other videos watched by the volunteers on the platform — lending weight to the argument that YouTube’s engagement-optimising algorithms disproportionately select for triggering/misinforming content over quality (thoughtful/informing) stuff, simply because it brings in the clicks.
While that might be great for Google’s ad business, it’s clearly a net negative for democratic societies which value truthful information over nonsense; genuine public debate over artificial/amplified binaries; and constructive civic cohesion over divisive tribalism.
But without legally-enforced transparency requirements on ad platforms — and, most likely, regulatory oversight and enforcement that features audit powers — these tech giants are going to continue to be incentivized to turn a blind eye and cash in at society’s expense.
Mozilla’s report also underlines instances where YouTube’s algorithms are clearly driven by a logic that’s unrelated to the content itself — with a finding that in 43.6% of the cases where the researchers had data about the videos a participant had watched before a reported regret, the recommendation was completely unrelated to the previous video.
The report gives examples of some of these logic-defying AI content pivots/leaps/pitfalls — such as a person watching videos about the U.S. military and then being recommended a misogynistic video entitled ‘Man humiliates feminist in viral video.’
In another instance, a person watched a video about software rights and was then recommended a video about gun rights. So two rights make yet another wrong YouTube recommendation right there.
In a third example, a person watched an Art Garfunkel music video and was then recommended a political video entitled ‘Trump Debate Moderator EXPOSED as having Deep Democrat Ties, Media Bias Reaches BREAKING Point.’
To which the only sane response is, umm what???
YouTube’s output in such instances seems — at best — some sort of ‘AI brain fart’.
A generous interpretation might be that the algorithm got stupidly confused. Albeit, in a number of the examples cited in the report, the confusion is leading YouTube users toward content with a right-leaning political bias. Which seems, well, curious.
Asked what she views as the most concerning findings, Mozilla’s Geurkink told TechCrunch: “One is how clearly misinformation emerged as a dominant problem on the platform. I think that’s something, based on our work talking to Mozilla supporters and people from all around the world, that is a really obvious thing that people are concerned about online. So to see that that is what is emerging as the biggest problem with the YouTube algorithm is really concerning to me.”
She also highlighted the problem of the recommendations being worse for non-English-speaking users as another major concern, suggesting that global inequalities in users’ experiences of platform impacts “doesn’t get enough attention” — even when such issues do get discussed.
Responding to Mozilla’s report, a Google spokesperson sent us this statement:
“The goal of our recommendation system is to connect viewers with content they love and on any given day, more than 200 million videos are recommended on the homepage alone. Over 80 billion pieces of information is used to help inform our systems, including survey responses from viewers on what they want to watch. We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%.”
Google also claimed it welcomes research into YouTube — and suggested it’s exploring options to bring in external researchers to study the platform, without offering anything concrete on that front.
At the same time, its response queried how Mozilla’s study defines ‘regrettable’ content — and went on to claim that its own user surveys generally show users are satisfied with the content that YouTube recommends.
In further non-quotable remarks, Google noted that earlier this year it started disclosing a ‘violative view rate’ (VVR) metric for YouTube — revealing for the first time the percentage of views on YouTube that come from content that violates its policies.
The most recent VVR stands at 0.16-0.18% — which Google says means that out of every 10,000 views on YouTube, 16-18 come from violative content. It said that figure is down by more than 70% when compared to the same quarter of 2017 — crediting its investments in machine learning as largely being responsible for the drop.
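The per-10,000 framing Google uses is just a rescaling of the percentage. A minimal sketch of that arithmetic (the function name and variables here are illustrative assumptions, not anything Google publishes as code):

```python
# Illustrative sketch of the VVR arithmetic described above.
# The helper name is hypothetical; it simply rescales a percentage
# to a per-10,000-views figure.

def violative_views_per_10k(vvr_percent: float) -> float:
    """Convert a violative view rate, given as a percentage, to views per 10,000."""
    return vvr_percent / 100 * 10_000

# Google's reported 0.16-0.18% range maps to roughly 16-18 views
# out of every 10,000.
low = violative_views_per_10k(0.16)
high = violative_views_per_10k(0.18)
```

The point Geurkink makes still stands, though: the rate alone says nothing about how many of those violative views the recommender itself drove.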
However, as Geurkink noted, the VVR is of limited use without Google releasing more data to contextualize and quantify how far its AI was involved in accelerating views of content its own rules state shouldn’t be viewed on its platform. Without that key data the suspicion must be that the VVR is a nice bit of misdirection.
“What would be going further than [VVR] — and what would be really, really helpful — is understanding what’s the role that the recommendation algorithm plays in this?” Geurkink told us on that, adding: “That’s what is a complete blackbox still. In the absence of greater transparency [Google’s] claims of progress have to be taken with a grain of salt.”
Google also flagged a 2019 change it made to how YouTube’s recommender algorithm handles ‘borderline content’ — aka, content that doesn’t violate policies but falls into a problematic grey area — saying that that tweak had also resulted in a 70% drop in watchtime for this type of content.
Although the company confirmed this borderline category is a moveable feast — saying it factors in changing trends as well as context, and also works with experts to determine what gets classed as borderline — which makes the aforementioned percentage drop pretty meaningless, since there’s no fixed baseline to measure against.
It’s notable that Google’s response to Mozilla’s report makes no mention of the poor experience reported by survey participants in non-English-speaking markets. And Geurkink suggested that, in general, many of the claimed mitigating measures YouTube applies are geographically limited — i.e. to English-speaking markets like the US and UK. (Or at least arrive in those markets first, before a slower rollout to other places.)
A January 2019 tweak to reduce amplification of conspiracy theory content in the US was only expanded to the UK market months later — in August — for example.
“YouTube, for the past few years, have only been reporting on their progress of recommendations of harmful or borderline content in the US and in English-speaking markets,” she also said. “And there are very few people questioning that — what about the rest of the world? To me that is something that really deserves more attention and more scrutiny.”
We asked Google to confirm whether it had since applied the 2019 conspiracy theory related changes globally — and a spokeswoman told us that it had. But the much higher rate of reports made to Mozilla of (admittedly broader) ‘regrettable’ content in non-English-speaking markets remains notable.
And while there could be other factors at play, which might explain some of the disproportionately higher reporting, the finding may also suggest that, where YouTube’s negative impacts are concerned, Google directs the greatest resource at markets and languages where its reputational risk, and the capacity of its machine learning tech to automate content categorization, are strongest.
Yet any such unequal response to AI risk obviously means leaving some users at greater risk of harm than others — adding another harmful dimension and layer of unfairness to what is already a multi-faceted, many-headed-hydra of a problem.
It’s yet another reason why leaving it up to powerful platforms to rate their own AIs, mark their own homework and counter genuine concerns with self-serving PR is for the birds.
(In additional filler background remarks it sent us, Google described itself as the first company in the industry to incorporate “authoritativeness” into its search and discovery algorithms — without explaining when exactly it claims to have done that or how it imagined it would be able to deliver on its stated mission of ‘organizing the world’s information and making it universally accessible and useful’ without considering the relative value of information sources… So color us baffled at that claim. Most likely it’s a clumsy attempt to throw disinformation shade at rivals.)
Returning to the regulation point, an EU proposal — the Digital Services Act — is set to introduce some transparency requirements on large digital platforms, as part of a wider package of accountability measures. And asked about this Geurkink described the DSA as “a promising avenue for greater transparency”.
But she suggested the legislation needs to go further to tackle recommender systems like the YouTube AI.
“I think that transparency around recommender systems specifically and also people having control over the input of their own data and then the output of recommendations is really important — and is a place where the DSA is currently a bit sparse, so I think that’s where we really need to dig in,” she told us.
One idea she voiced support for is having a “data access framework” baked into the law — to enable vetted researchers to get more of the information they need to study powerful AI technologies — i.e. rather than the law trying to come up with “a laundry list of all of the different pieces of transparency and information that should be applicable”, as she put it.
The EU also now has a draft AI regulation on the table. The legislative plan takes a risk-based approach to regulating certain applications of artificial intelligence. However it’s not clear whether YouTube’s recommender system would fall under one of the more closely regulated categories — or, as seems more likely (at least with the initial Commission proposal), fall entirely outside the scope of the planned law.
“An earlier draft of the proposal talked about systems that manipulate human behavior which is essentially what recommender systems are. And one could also argue that’s the goal of advertising at large, in some sense. So it was sort of difficult to understand exactly where recommender systems would fall into that,” noted Geurkink.
“There might be a nice harmony between some of the robust data access provisions in the DSA and the new AI regulation,” she added. “I think transparency is what it comes down to, so anything that can provide that kind of greater transparency is a good thing.
“YouTube could also just provide a lot of this… We’ve been working on this for years now and we haven’t seen them take any meaningful action on this front but it’s also, I think, something that we want to keep in mind — legislation can obviously take years. So even if a few of our recommendations were taken up [by Google] that would be a really big step in the right direction.”