CBPO

Monthly Archives: March 2018

The real threat to Facebook is the Kool-Aid turning sour

March 31, 2018

These kinds of leaks didn’t happen when I started reporting on Facebook eight years ago. It was a tight-knit cult convinced of its mission to connect everyone, but with the discipline of a military unit where everyone knew loose lips sink ships. Motivational posters with bold corporate slogans dotted its offices, rallying the troops. Employees were happy to be evangelists.

But then came the fake news, News Feed addiction, violence on Facebook Live, cyberbullying, abusive ad targeting, election interference and, most recently, the Cambridge Analytica app data privacy scandals. All the while, Facebook either willfully believed the worst case scenarios could never come true, was naive to their existence or calculated the benefits and growth outweighed the risks. And when finally confronted, Facebook often dragged its feet before admitting the extent of the issues.

Inside the social network’s offices, the bonds began to fray. An ethics problem metastasized into a morale problem. Slogans took on sinister second meanings. The Kool-Aid tasted different.

Some hoped they could right the ship but couldn’t. Some craved the influence and intellectual thrill of running one of humanity’s most popular inventions, but now question if that influence and their work is positive. Others surely just wanted to collect salaries, stock and resumé highlights, but lost the stomach for it.

Now the convergence of scandals has come to a head in the form of constant leaks.

The trouble tipping point

The more benign leaks merely cost Facebook a bit of competitive advantage. We’ve learned it’s building a smart speaker, a standalone VR headset and a Houseparty split-screen video chat clone.

Yet policy-focused leaks have exacerbated the backlash against Facebook, putting more pressure on the conscience of employees. As blame fell to Facebook for Trump’s election, word of Facebook prototyping a censorship tool for operating in China escaped, triggering questions about its respect for human rights and free speech. Facebook’s content rulebook got out alongside disturbing tales of the filth the company’s contracted moderators have to sift through. Its ad targeting was revealed to be able to pinpoint emotionally vulnerable teens.

In recent weeks, the leaks have accelerated to a maddening pace in the wake of Facebook’s soggy apologies regarding the Cambridge Analytica debacle. Its weak policy enforcement left the door open to exploitation of data users gave third-party apps, deepening the perception that Facebook doesn’t care about privacy.

And it all culminated with BuzzFeed publishing a leaked “growth at all costs” internal post from Facebook VP Andrew “Boz” Bosworth that substantiated people’s worst fears about the company’s disregard for user safety in pursuit of world domination. Even the ensuing internal discussion about the damage caused by leaks and how to prevent them…leaked.

But the leaks are not the disease, just the symptom. Sunken morale is the cause, and it’s dragging down the company. Former Facebook employee and Wired writer Antonio Garcia Martinez sums it up, saying this kind of vindictive, intentionally destructive leak fills Facebook’s leadership with “horror.”

And that sentiment was confirmed by Facebook’s VP of News Feed Adam Mosseri, who tweeted that leaks “create strong incentives to be less transparent internally and they certainly slow us down,” and will make it tougher to deal with the big problems.

Those thoughts weigh heavy on Facebook’s team. A source close to several Facebook executives tells us they feel “embarrassed to work there” and are increasingly open to other job opportunities. One current employee told us to assume anything certain execs tell the media is “100% false.”

If Facebook can’t internally discuss the problems it faces without being exposed, how can it solve them?

Implosion

The consequences of Facebook’s failures are typically pegged as external hazards.

You might assume the government will finally step in and regulate Facebook. But the Honest Ads Act and other rules about ads transparency and data privacy could end up protecting Facebook by being simply a paperwork speed bump for it while making it tough for competitors to build a rival database of personal info. In our corporation-loving society, it seems unlikely that the administration would go so far as to split up Facebook, Instagram and WhatsApp — one of the few feasible ways to limit the company’s power.

Users have watched Facebook make misstep after misstep over the years, but can’t help but stay glued to its feed. Even those who don’t scroll rely on it as a fundamental utility for messaging and login on other sites. Privacy and transparency are too abstract for most people to care about. Hence, first-time Facebook downloads held steady and its App Store rank actually rose in the week after the Cambridge Analytica fiasco broke. As for the #DeleteFacebook movement, Mark Zuckerberg himself said, “I don’t think we’ve seen a meaningful number of people act on that.” And as long as they’re browsing, advertisers will keep paying Facebook to reach them.

That’s why the greatest threat of the scandal convergence comes from inside. The leaks are the canary in the noxious blue coal mine.

Can Facebook survive slowing down?

If employees wake up each day unsure whether Facebook’s mission is actually harming the world, they won’t stay. Facebook doesn’t have the same internal work culture problems as some giants like Uber. But there are plenty of other tech companies with less questionable impacts. Some are still private and offer the chance to win big on an IPO or acquisition. At the very least, those in the Bay could find somewhere to work without spending hours a day on the traffic-snarled 101 freeway.

If they do stay, they won’t work as hard. It’s tough to build if you think you’re building a weapon. Especially if you thought you were going to be making helpful tools. The melancholy and malaise set in. People go into rest-and-vest mode, living out their days at Facebook as a sentence not an opportunity. The next killer product Facebook needs a year or two from now might never coalesce.

And if they do work hard, a culture of anxiety and paralysis will work against them. No one wants to code with their hands tied, and some would prefer a less scrutinized environment. Every decision will require endless philosophizing and risk-reduction. Product changes will be reduced to the lowest common denominator, designed not to offend or appear too tyrannical.


In fact, that’s partly how Facebook got into this whole mess. A leak by an anonymous former contractor led Gizmodo to report Facebook was suppressing conservative news in its Trending section. Terrified of appearing liberally biased, Facebook reportedly hesitated to take decisive action against fake news. That hands-off approach led to the post-election criticism that degraded morale and pushed the growing snowball of leaks down the mountain.

It’s still rolling.

How to stop morale’s downward momentum will be one of Facebook’s greatest tests of leadership. This isn’t a bug to be squashed. It can’t just roll back a feature update. And an apology won’t suffice. It will have to expel or reeducate the leakers and those disloyal without instilling a witch hunt’s sense of dread. Compensation may have to jump upward to keep talent aboard, as Twitter did when it was floundering. Its top brass will need to show candor and accountability without fueling more indiscretion. And it may need to make a shocking, landmark act of contrition to convince employees it’s capable of change.

When asked how Facebook could address the morale problem, Mosseri told me “it starts with owning our mistakes and being very clear about what we’re doing now” and noted that “it took a while to get into this place and I think it’ll take a while to work our way out . . . Trust is lost quickly, and takes a long time to rebuild.”

This isn’t about whether Facebook will disappear tomorrow, but whether it will remain unconquerable for the foreseeable future.

Growth has been the driving mantra for Facebook since its inception. No matter how employees are evaluated, it’s still the underlying ethos. Facebook has positioned itself as a mission-driven company. The implication was always that connecting people is good, so connecting more people is better. The only question was how to grow faster.

Now Zuckerberg will have to figure out how to get Facebook to cautiously foresee the consequences of what it says and does while remaining an appealing place to work. “Move slow and think things through” just doesn’t have the same ring to it.

If you’re a Facebook employee or anyone else who has information to share with TechCrunch, you can contact us at Tips@techcrunch.com, or reach this article’s author, Josh Constine, whose DMs are open on Twitter.

 


Social – TechCrunch


China and the Children Will Save Electric Cars From the EPA

March 31, 2018

The EPA may be able to roll back regulations in the US, but other forces will push automakers to keep making electric cars.
Feed: All Latest


As marketing data proliferates, consumers should have more control

March 31, 2018

At the Adobe Summit in Las Vegas this week, privacy was on the minds of many people. It was no wonder with social media data abuse dominating the headlines, GDPR just around the corner, and Adobe announcing the concept of a centralized customer experience record.

With so many high-profile breaches in recent years, putting your customer data in a central record-keeping system would seem to be a dangerous proposition, yet Adobe sees so many positives for marketers that it likely believes this to be a worthy trade-off.

Which is not to say that the company doesn’t see the risks. Executives speaking at the conference continually insisted that privacy is always part of the conversation at Adobe as they build tools — and they have built security and privacy safeguards into the customer experience record.

Ben Kepes, an independent analyst, says this kind of data collection does raise ethical questions about how to use it. “This new central repository of data about individuals is going to be incredibly attractive to Adobe’s customers. The company is doing what big brands and corporations ask for. But in these post-Cambridge Analytica days, I wonder how much of a moral obligation Adobe and the other vendors have to ensure their tools are used for good purposes,” Kepes said.

Offering better experiences

It’s worth pointing out that the goal of this exercise isn’t simply to collect data for data’s sake. It’s to offer consumers a more customized and streamlined experience. How does that work? There was a demo in the keynote illustrating a woman’s experience with a hotel brand.

Brad Rencher, EVP and GM at Adobe Experience Cloud, explains Adobe’s Cloud offerings. Photo: Jeff Bottari/Invision for Adobe/AP Images

The mythical woman started a reservation for a trip to New York City, got distracted in the middle and was later “reminded” to return to it via Facebook ad. She completed the reservation and was later issued a digital key to her room, allowing her to bypass the front desk check-in process. Further, there was a personal greeting on the television in her room with a custom message and suggestions for entertainment based on her known preferences.

As one journalist pointed out at the press event, this level of detail from the hotel is not something that would thrill him (beyond the electronic check-in). Yet there doesn’t seem to be a way to opt out of that data collection (unless you live in the EU and will be subject to GDPR rules).

Consumers may want more control

As it turns out, that reporter wasn’t alone. According to a survey conducted last year by The Economist Intelligence Unit in conjunction with ForgeRock, an identity management company, consumers are not the willing sheep that tech companies may think they are.

The survey was conducted last October with 1,629 consumers participating from eight countries: Australia, China, France, Germany, Japan, South Korea, the UK and the US. It’s worth noting that the survey questions were asked in the context of Internet of Things data, but it seems the results could apply more broadly to any type of data collection activity by brands.

There are a couple of interesting data points that brands should heed as they collect customer data in the fashion outlined by Adobe. Particularly relevant to the central customer profile that Adobe and other marketing software companies are trying to build: when asked to rate the statement, “I am uncomfortable with companies building a ‘profile’ of me to predict my consumer behaviour,” 39 percent strongly agreed and another 35 percent somewhat agreed. That would suggest consumers aren’t necessarily thrilled with this idea.

When presented with the statement, “Providing my personal information may have more drawbacks than benefits,” 32 percent strongly agreed and 41 percent somewhat agreed.

That would suggest it is on brands to make it clearer to consumers that they are collecting this data to provide a better overall experience, because the consumers who answered this survey do not appear to be making that connection.

Perhaps it wasn’t a coincidence that at a press conference after the Day One keynote announcing the unified customer experience record, many questions from analysts and journalists focused on notions of privacy. If Adobe is helping companies gather and organize customer data, what role does it have in how its customers use that data, what role does the brand have and how much control should consumers have over their own data?

These are questions we seem to be answering on the fly. The technology is here now or very soon will be, and wherever the data comes from, whether the web, mobile devices or the Internet of Things, we need to get a grip on the privacy implications — and we need to do it quickly. If consumers want more control as this survey suggests, maybe it’s time for companies to give it to them.


Enterprise – TechCrunch


Creating a Seamless Pre- to Post-Click Experience

March 31, 2018

How often are you taking into consideration what you’re offering users throughout their entire experience with your brand? Is the pre- to post-click experience seamless?

Read more at PPCHero.com
PPC Hero


Introducing the Google Analytics Sample Dataset for BigQuery

March 31, 2018
The Google Analytics integration with Google BigQuery gives analysts the opportunity to glean new business insights by accessing session and hit level data and combining it with separate data sets. Organizations and developers can analyze unsampled analytics data in seconds through BigQuery, a web service that lets developers and businesses conduct interactive analysis of big data sets and tap into powerful data analytics.

To help you learn or teach practical skills for analyzing analytics data in BigQuery, we are pleased to announce the availability of a Google Analytics sample dataset, accessible directly through the BigQuery interface. The dataset includes data from the Google Merchandise Store, an ecommerce site that sells Google-branded merchandise. The typical Google Analytics data you would expect to see, such as AdWords, Goals and Enhanced Ecommerce data, can be queried. You can see the fields of the export schema that you can query here.

Google Analytics Sample Dataset for BigQuery

When it comes to helping businesses ask advanced questions on unsampled Google Analytics data, we like to use BigQuery. It’s fast and scalable for big data analytics. When providing training on the benefits of the Google Analytics and BigQuery integration, there is nothing like having a high-quality dataset with sufficient volume to be meaningful. That’s why we are so pleased to see the public availability of a robust Google Analytics sample dataset with marketing and ecommerce data. Everyone can experience big data analytics!
– Doug Hall, Director of Analytics, Conversion Works

Self-Learning 
You can use the sample dataset to learn how granular information can be extracted from analytics data in BigQuery. We’ve created this guide to help you write queries that answer the following questions for the Google Merchandise Store (an illustrative query follows the list):

  • What is the average number of transactions per purchaser?
  • What is the percentage of stock sold per product?
  • What is the average bounce rate per marketing channel segmented by purchasers?
  • What are the products purchased by customers who previously purchased a particular product?
  • What is the average number of user interactions before a purchase?
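
For instance, the first question above can be answered with a short query run through the BigQuery client library for Python. This is a minimal sketch, assuming the client library is installed and credentials are configured; the dataset path and field names (fullVisitorId, totals.transactions) follow the public sample dataset and the standard export schema, and the exact query is illustrative rather than the guide’s official solution.

# Minimal sketch: average number of transactions per purchaser in the public
# Google Analytics sample dataset. Assumes google-cloud-bigquery is installed
# and application credentials are configured.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  SUM(totals.transactions) / COUNT(DISTINCT fullVisitorId)
    AS avg_transactions_per_purchaser
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE totals.transactions IS NOT NULL
"""

for row in client.query(sql).result():
    print(f"Average transactions per purchaser: {row.avg_transactions_per_purchaser:.2f}")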

Education Programs

If you’re an educator trying to teach others to use BigQuery, then we encourage you to use the sample dataset as a tool. You can use it to create task-based assessments and other learning materials for your students. We’ve started to do just that by integrating it into our education courses.

The Analytics Academy provides an introduction to BigQuery in their Getting Started with Google Analytics 360 course. The Data Insights course by the Google Cloud team provides an in-depth look at BigQuery with practical exercises.

Access the Dataset
You can learn more about the dataset, including how to get access, in this help article. If you need some help, please let us know through the Advertiser Community. Share any feature requests or ideas to make the dataset more useful. We hope the dataset gives you a practical way to learn about the benefits of analysing Google Analytics data in BigQuery.

Happy analyzing!
Posted by Deepak Aujla, Program Manager, Google Analytics Solutions


Google Analytics Blog


Stay Up-to-Date about Hero Conf London

March 29, 2018

Subscribe today to become a Hero Conf Insider to stay informed on Hero Conf London’s exclusive offers, early bird registration prices, speaker announcements, and pass promotions!

Read more at PPCHero.com
PPC Hero


Festo’s latest bio-inspired creations are a robo-bat and rolling robo-spider

March 29, 2018

Festo’s flashy biomimetic robots are more or less glorified tech demos, but that doesn’t mean they aren’t cool. The engineering is still something to behold, although these robot critters likely won’t be doing any serious work. Its latest units move in imitation of two unusual animals: a tumbling spider and a flying fox (think big bat).

The BionicWheelBot, when walking, isn’t anything we haven’t seen before: hexapodal locomotion has been achieved by countless roboticists — one recent project even attempted to capture the spontaneity of an insect’s gait.

But its next trick is new, at least if you haven’t watched the Star Wars prequels. It uses the legs on each side to form a wheel and propels itself with the last pair. Useful for getting downhill or blowing in the wind, as some spiders and insects in fact do.

It looks as if it can get going quite fast, and although it seems to me it would be in a fix if knocked over, it had no problem dropping off the end of the table and rolling on in the Festo video.

The other robo-critter is the BionicFlyingFox, modeled on the enormous fruit bats bearing that name. As with all flying creatures, there is a great emphasis on lightness and simplicity, allowing this robot (like its distant forebear, Festo’s bird) to flap around realistically and stay aloft for a time.

In imitation of the strong but light and flexible membrane that forms flying mammals’ wings, the Festo bot uses a modified elastane material (sort of a super-Spandex) that’s airtight and won’t crease or rip.

If you’re lucky, you might see one of these majestic robeasts demonstrated at a robotics conference one day.

Gadgets – TechCrunch


Data is not the new oil

March 27, 2018

 

It’s easier than ever to build software, which makes it harder than ever to build a defensible software business. So it’s no wonder investors and entrepreneurs are optimistic about the potential of data to form a new competitive advantage. Some have even hailed data as “the new oil.” We invest exclusively in startups leveraging data and AI to solve business problems, so we certainly see the appeal — but the oil analogy is flawed.

In all the enthusiasm for big data, it’s easy to lose sight of the fact that all data is not created equal. Startups and large corporations alike boast about the volume of data they’ve amassed, ranging from terabytes of data to quantities surpassing all of the information contained in the Library of Congress. Quantity alone does not make a “data moat.”

Firstly, raw data is not nearly as valuable as data employed to solve a problem. We see this in the public markets: companies that serve as aggregators and merchants of data, such as Nielsen and Acxiom, sustain much lower valuation multiples than companies that build products powered by data in combination with algorithms and ML, such as Netflix or Facebook. The current generation of AI startups recognize this difference and apply machine learning models to extract value from the data they collect.

Even when data is put to work powering ML-based solutions, the size of the data set is only one part of the story. The value of a data set, the strength of a data moat, comes from context. Some applications require models to be trained to a high degree of accuracy before they can provide any value to a customer, while others need little or no data at all. Some data sets are truly proprietary, others are readily duplicated. Some data decays in value over time, while other data sets are evergreen. The application determines the value of the data.

Defining the “data appetite”

Machine learning applications can require widely different amounts of data to provide valuable features to the end user.

MAP threshold

In the cloud era, the idea of the minimum viable product (or MVP) has taken hold — that collection of software features which has just enough value to seek initial customers. In the intelligence era, we see the analog emerging for data and models: the minimum level of accurate intelligence required to justify adoption. We call this the minimum algorithmic performance (MAP).

Most applications don’t require 100 percent accuracy to create value. For example, a productivity tool for doctors might initially streamline data entry into electronic health record systems, but over time could automate data entry by learning from what doctors enter in the system. In this case, the MAP is zero, because the application has value from day one based on software features alone. Intelligence can be added later. However, solutions where AI is central to the product (for example, a tool to identify strokes from CT scans), would likely need to equal the accuracy of status quo (human-based) solutions. In this case the MAP is to match the performance of human radiologists, and an immense volume of data might be needed before a commercial launch is viable.

Performance threshold

Not every problem can be solved with near 100 percent accuracy. Some problems are too complex to fully model given the current state of the art; in that case, volume of data won’t be a silver bullet. Adding data might incrementally improve the model’s performance, but quickly hit diminishing marginal returns.

At the other extreme, some problems can be solved with near 100 percent accuracy with a very small training set, because the problem being modeled is relatively simple, with few dimensions to track and few variations in outcome.

In short, the amount of data you need to effectively solve a problem varies widely. We call the amount of training data needed to reach viable levels of accuracy the performance threshold.
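
One practical way to estimate where this threshold falls is to plot a learning curve: train the same model on progressively larger samples and note where held-out accuracy flattens. Below is a minimal sketch, assuming scikit-learn and a synthetic classification task rather than any particular company’s data.

# Illustrative sketch: estimate a performance threshold empirically by
# training on progressively larger subsets and watching where held-out
# accuracy stops improving. Synthetic data, purely for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

train_sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000),
    X, y,
    train_sizes=np.linspace(0.05, 1.0, 10),
    cv=5,
    scoring="accuracy",
)

for size, scores in zip(train_sizes, test_scores):
    print(f"{int(size):5d} examples -> mean accuracy {scores.mean():.3f}")

# Where the curve flattens is the point of diminishing returns; the number of
# examples needed to reach "good enough" accuracy approximates the performance
# threshold for this (toy) problem.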

AI-powered contract processing is a good example of an application with a low performance threshold. There are thousands of contract types, but most of them share key fields: the parties involved, the items of value being exchanged, time frame, etc. Specific document types like mortgage applications or rental agreements are highly standardized in order to comply with regulation. Across multiple startups, we’ve seen algorithms that automatically process documents needing only a few hundred examples to train to an acceptable degree of accuracy.

Entrepreneurs need to thread a needle. If the performance threshold is high, you’ll have a bootstrap problem acquiring enough data to create a product to drive customer usage and more data collection. Too low, and you haven’t built much of a data moat!

Stability threshold

Machine learning models train on examples taken from the real-world environment they represent. If conditions change over time, gradually or suddenly, and the model doesn’t change with them, the model will decay. In other words, the model’s predictions will no longer be reliable.

For example, Constructor.io is a startup that uses machine learning to rank search results for e-commerce websites. The system observes customer clicks on search results and uses that data to predict the best order for future search results. But e-commerce product catalogs are constantly changing. A model that weighs all clicks equally, or trained only on a data set from one period of time, risks overvaluing older products at the expense of newly introduced and currently popular products.

Keeping the model stable requires ingesting fresh training data at the same rate that the environment changes. We call this rate of data acquisition the stability threshold.
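
One simple way to keep such a model tracking a changing catalog is to weight training examples by recency, so fresh clicks outweigh stale ones. The sketch below is an illustrative assumption, not Constructor.io’s actual system, and the half-life value is invented for demonstration.

# Illustrative sketch (not Constructor.io's actual system): decay each click's
# training weight by its age so the ranking model follows a changing catalog.
import time
from typing import Optional

HALF_LIFE_DAYS = 14.0  # assumption: a click loses half its weight every two weeks

def recency_weight(click_timestamp: float, now: Optional[float] = None) -> float:
    """Exponentially decay a click's sample weight by its age in days."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - click_timestamp) / 86_400)
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

now = time.time()
print(recency_weight(now, now))                # a click from just now: weight ~1.0
print(recency_weight(now - 42 * 86_400, now))  # a six-week-old click: weight ~0.125

Feeding weights like these into training keeps fresh behavior dominant; how quickly new clicks must arrive to keep the model reliable is the stability threshold.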

Perishable data doesn’t make for a very good data moat. On the other hand, ongoing access to abundant fresh data can be a formidable barrier to entry when the stability threshold is low.

Identifying opportunities with long-term defensibility

The MAP, performance threshold and stability threshold are all central elements to identifying strong data moats.

First-movers may have a low MAP to enter a new category, but once they have created a category and lead it, the minimum bar for future entrants is to equal or exceed the first mover.

Domains requiring less data to reach the performance threshold and less data to maintain that performance (the stability threshold) are not very defensible. New entrants can readily amass enough data and match or leapfrog your solution. On the other hand, companies attacking problems with low performance threshold (don’t require too much data) and a low stability threshold (data decays rapidly) could still build a moat by acquiring new data faster than the competition.

More elements of a strong data moat

AI investors talk enthusiastically about “public data” versus “proprietary data” to classify data sets, but the strength of a data moat has more dimensions, including:

  • Accessibility
  • Time — how quickly can the data be amassed and used in the model? Can the data be accessed instantly, or does it take a significant amount of time to obtain and process?
  • Cost — how much money is needed to acquire this data? Does the user of the data need to pay for licensing rights or pay humans to label the data?
  • Uniqueness — is similar data widely available to others who could then build a model and achieve the same result? Such so-called proprietary data might better be termed “commodity data” — for example: job listings, widely available document types (like NDAs or loan applications), images of human faces.
  • Dimensionality — how many different attributes are described in a data set? Are many of them relevant to solving the problem?
  • Breadth — how widely do the values of attributes vary? Does the data set account for edge cases and rare exceptions? Can data or learnings be pooled across customers to provide greater breadth of coverage than data from just one customer?
  • Perishability — how broadly applicable over time is this data? Is a model trained from this data durable over a long time period, or does it need regular updates?
  • Virtuous loop — can outcomes such as performance feedback or predictive accuracy be used as inputs to improve the algorithm? Can performance compound over time?

Software is now a commodity, making data moats more important than ever for companies to build a long-term competitive advantage. With tech titans democratizing access to AI toolkits to attract cloud computing customers, data sets are one of the most important ways to differentiate. A truly defensible data moat doesn’t come from just amassing the largest volume of data. The best data moats are tied to a particular problem domain, in which unique, fresh, data compounds in value as it solves problems for customers.


Startups – TechCrunch


Rackspace may reportedly go public again after a $4.3B deal took it private in 2016

March 27, 2018

Rackspace, which was taken private in a $4.3 billion deal in August 2016 by private equity firm Apollo Global Management, is being considered for an IPO by the firm, according to a report by Bloomberg.

The company could have an enterprise value of up to $10 billion, according to the report. Rackspace opted to go private amid an increasingly challenging climate, facing competition on all sides from much better-capitalized companies like Amazon, Microsoft and Google. Despite getting an early start in the cloud hosting space, Rackspace quickly found itself focusing on services in order to continue to gain traction. But under scrutiny from Wall Street as a public company, it’s harder to make that kind of pivot.

Bloomberg reports that the firm has held early talks with advisers and may seek to begin the process by the end of the year, and these processes can always change over time. Rackspace offers managed services, including data migration, architecture to support on-boarding, and ongoing operational support for companies looking to work with cloud providers like AWS, Google Cloud and Azure. Since going private, Rackspace acquired Datapipe, and in July said it would begin working with Pivotal to continue to expand its managed services business.

Rackspace isn’t alone among companies that have opted to go private; Dell, for example, went private in 2013 in a $24.4 billion deal in order to resolve issues with its business model without the quarter-to-quarter fiduciary obligations to public investors. Former Qualcomm executive chairman Paul Jacobs, too, expressed some interest in buying out Qualcomm in a process that would take the company private. There are different motivations for all these operations, but each has the same underlying principle: make some agile moves under the purview of a private owner rather than release financial statements every three months or so and watch the stock continue to tumble.

Should Rackspace actually end up going public, it would catch both a wave of successful IPOs like Zscaler and Dropbox — though things could definitely change by the end of the year — and an increased need by companies to manage their services in cloud environments. So it makes sense that the private equity firm would consider taking the company public in the latter half of the year to capitalize on Wall Street’s interest.

A spokesperson for Rackspace said the company does not comment on rumors or speculation. We also reached out to Apollo Global Management and will update the post when we hear back.


Enterprise – TechCrunch


New Analytics Academy course: Getting Started With Google Analytics 360

March 27, 2018
Today, we are introducing a new course in Analytics Academy: Getting Started With Google Analytics 360.


Krista Seiden and Ashish Vij Introduce Getting Started With Google Analytics 360 (Video)

In this course, you will join instructors Ashish Vij and Krista Seiden as you learn key Google Analytics 360 features such as Roll-Up Reporting, Custom Funnels, Unsampled Reports, and Custom Tables. You’ll gain insight into how you can benefit from reporting with BigQuery and native integrations with DoubleClick products, and we will provide you with real-world examples to illustrate how you can leverage Analytics 360’s features and integrations to drive performance and achieve your business goals.

By participating in the course, you’ll learn how to:

  • Set up Roll-Up Reporting
  • Analyze customer journeys with Custom Funnels
  • Leverage Unsampled Reports and Custom Tables
  • Analyze big data with BigQuery Export
  • Evaluate marketing performance with DoubleClick reporting integrations
Sign up for Getting Started With Google Analytics 360 now and start learning today.

Happy analyzing!

Helen Huang & The Google Analytics Education Team


Google Analytics Blog