You are now living in the midst of a tantalizing revolution as the great minds of user experience (UX) and search engine optimization (SEO) finally converge to produce beautiful on-page content designed to rank in search results AND engage or educate the user.
Gone are the days of plugging keyword phrases into your blog posts to hit just the right density, or of building landing page after landing page targeted at keyword variations like “automobiles for sale,” “cars for sale,” and “trucks for sale.”
Since the introduction of RankBrain, the machine-learning component of Google’s Core Algorithm, in late 2015, Google has moved farther away from a simple question and answer engine and has become a truly intelligent source of information matching the user’s intent — not just the user’s query.
Crafting compelling content is tough, especially in such a competitive landscape. How can you avoid vomiting up a 1,500-word blog post that will meet the deadline but fall very short of the user’s expectations? If you follow these 10 on-page essential elements, your brand will be on the right track to provide a rich content experience designed to resonate with your audience for months to come.
Always seen in the <head> block or the beginning of a web page’s source code, the title tag is text wrapped in the <title> HTML tag. Visible as the headline of the search listing on results pages, on the user’s browser tab, and sometimes in social media applications when an Open Graph Tag is not present, this text is intended to describe the overarching intent of the page and the type of content a user can expect to see when browsing.
What I mean by “intent” can be illustrated with the following example. Say my title tag for a product page was Beef for Dogs | Brand Name. As a user, I would not expect to find a product page, but rather, information about whether I can feed beef to my dogs.
A better title tag to accurately match my users’ intent would be Beef Jerky Dog Treats | Brand Name.
Query = “beef for dogs”
Query = “beef jerky dog treats”
How do I know what the title tag of my page is?
Identifying what has been set as the title tag or meta description of your pages can be done URL-by-URL or at scale for many URLs. There are distinct uses for each discovery method, and it is always important to remember that Google may choose to display another headline for your page in search results if it feels that its title is a better representation for the user. Here are a few great online tools to get you started:
NOTE: If you are one who prefers to “live in the moment”, you can also view the page source of the page you are currently on and search for “<title>” in the code to determine what should be output in search results. Lifewire produced this handy guide on viewing the source code of a webpage, regardless of the internet browser you are using.
Are there guidelines for crafting the perfect title tag?
Yes. The optimal title tag is designed to fit the width of the devices it’s displayed on. In my experience, the sweet spot for most screens is between 50 and 60 characters. In addition, a page title should:
- Be descriptive and concise
- Be on-brand
- Avoid keyword stuffing
- Avoid templated/boilerplate content
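Putting these guidelines together, a title tag for the beef jerky example above might be marked up like this (a sketch; the brand name is a placeholder):

```html
<head>
  <!-- Descriptive, on-brand, and well within the 50-60 character sweet spot -->
  <title>Beef Jerky Dog Treats | Brand Name</title>
</head>
```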
Though the text below the headline of your search result, also known as the meta description, does not influence the ranking of your business’ URL in search results, this text is still important for providing a summary of the webpage. The meta description is your chance to correctly set a potential user’s expectations and engage them to click-through to the website.
How do I build the perfect meta description?
Pay close attention to three things when crafting a great meta description for each of your website’s pages: branding, user intent, and what’s working well in the vertical (the competitive landscape). These 150-160 characters are a special opportunity for your page to stand out from the crowd.
Do your page descriptions look and sound like they are templated? Investing time in describing the page in a unique way that answers users’ questions before they reach the website can go a long way toward delighting customers and improving search performance.
Take for example the following product page for the Outdoor Products Multi-Purpose Poncho. The top listing for this product page is via Amazon.com, with a very obviously templated meta description. The only information provided is the product name, aggregate rating, and an indication of free delivery.
While not the top listing, the following result from REI Co-op clearly includes the product name, breadcrumbs, aggregate rating, price, availability, and a unique, non-templated meta description. What makes this meta description stand out is that it does not copy the manufacturer’s text and includes product differentiators like “easy to pull out of your bag” and “great travel item” that speak to user questions about portability.
The meta description plays an important role in complementing other elements of a well-defined rich result, and it is often overlooked when retail businesses are using rich results to improve the ecommerce search experience specifically. That said, the same considerations apply to information-focused pages as well.
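As a sketch, a unique, intent-focused meta description for the poncho example might be marked up like this (the wording here is hypothetical):

```html
<head>
  <!-- ~150 characters summarizing the page and answering portability questions up front -->
  <meta name="description" content="This multi-purpose poncho packs down small, pulls easily out of your bag, and doubles as a ground cover - a great travel item for sudden rain.">
</head>
```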
Section heading elements (H1-H6) were originally intended to resize text on a webpage, with the H1 used to style the primary title of a document as the largest text on the page. With the advent of Cascading Style Sheets (CSS) in the late ’90s, CSS took over most of this styling work, and the heading tags came to act more as a “table of contents” for a variety of user-agents (i.e. Googlebot) and users alike.
For this reason, the primary header (h1) and subheaders (h2-h6) can be important in helping search engines understand the organization of and context around a particular page of written content. Users do not want to read through a huge brick of text and neither do search engines. Organizing written words into smaller entities (sections) will help digestion and lead to better organic results, as seen in the example below:
In the example above, the primary topic (How to Teach a Child to Ride a Bike) is marked-up with an H1 tag, indicating that it is the primary topic of the information to follow. The next section “Getting Ready to Ride” is marked-up with an H2 tag, indicating that it’s a secondary topic. Subsequent sections are marked up with <h3> tags. As a result of carefully crafted headings, which organize the content in a digestible way and supporting written content (among other factors), this particular page boasts 1,400 search listings in the top 100 positions on Google — with only 1,400 words.
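The heading outline described above can be sketched in HTML like so (the h3 subsection title is a hypothetical stand-in):

```html
<h1>How to Teach a Child to Ride a Bike</h1>
<p>Introductory copy...</p>

<h2>Getting Ready to Ride</h2>
<p>Section copy...</p>

<h3>Adjusting the Seat</h3>  <!-- hypothetical subsection title -->
<p>Subsection copy...</p>
```

Although headings render flat in the markup, search engines read the h1 > h2 > h3 sequence as an outline of the page’s topics.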
Over 92% of long-tail (greater than 3 words) keyword phrases get less than 10 searches per month, but they are more likely to convert users than their head term counterparts.
Focusing on providing your potential users with answers to their search questions about a particular topic, rather than on granular keyword phrases, will lead to a more authentic reading experience, more engaged readers, and more chances of capturing the plethora of long-tail phrases popping up by the minute.
Internal links are hyperlinks in your piece of content that point back to a page on your own website. What is important to note here is that one should not create a link in a piece simply to provide a link pathway for SEO success. This is an old practice, and it will lead to a poor user experience. Instead, focus on providing a link to a supplemental resource if it will genuinely help a user answer a question or learn more about a specific topic.
A great example of helpful internal linking can be found above. In this article about “How to Ride a Bike”, the author has linked the text “Braking” to an article about types of bicycle brakes and more specifically how to adjust each type for optimal performance.
If there is supplemental information on your own website to substantiate your claims or provide further education to the reader, link to this content. If it doesn’t exist, or there’s a better source of information on a particular topic, link out to that external content. There’s no harm in linking out to third parties; in many if not all cases, this will serve as a citation of sorts, making your content more legitimate and credible in the user’s eyes.
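In markup, the helpful internal link from the bike article might look like this (the URL path is hypothetical):

```html
<!-- Internal link: anchor text "Braking" points to a supplemental guide on the same site -->
<p>Teach <a href="/guides/bicycle-brake-types">braking</a> before the first full ride,
   so your child knows how to stop safely.</p>
```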
Links from sources outside your own domain, also known as external links, are often seen as one of the major ranking factors in organic search. An external entity linking to your content is like a neighbor publicly calling you a good neighbor, with a credibility effect similar to the citations you put in a term paper or an article on Wikipedia.
When writing a post or crafting a page for your own website, consider the following:
- How can I substantiate my statistics or claims?
- Why should my users believe what I have to say?
- Can anyone (customers or companies) back up my thoughts?
If you are crafting the best user experience, you will want to take special care in building an authentic, data-driven relationship with your past and present customers.
There are no magic rules or hacks in how you link to external sources. As the SEO industry evolves, you will realize professionals are simply “internet custodial engineers,” cleaning up the manipulations of the past (part of the reason for Penguin, Panda, Hummingbird, and less notable algorithm changes by Google) and promoting the creation of expert, authoritative, and trustworthy (E.A.T.) content on the web.
For more information on E.A.T., check out Google’s Official Quality Raters Guidelines.
Now more than ever, visual search as an alternative to text search is becoming a reality. In fact, Pinterest CEO Ben Silbermann has said, “the future of search will be about pictures rather than keywords.” Seen below is data from Jumpshot compiled by Rand Fishkin at SparkToro showing that Google Image Search made up more than 20% of web searches as of February 2018. As a result, including images in your content has some unique benefits as it relates to search engine optimization (SEO):
- Images break up large blocks of text with useful visuals.
- Alternate text embedded within an image can provide more context to search engines about the object, place, or person it represents, which can help improve your rankings in this medium.
- According to a 2017 study by Clutch, written articles, videos, and images are the three most engaging types of content on social media, so adding images to your text can improve a piece’s shareability.
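When adding images, the alternate text lives in the img element’s alt attribute (the file name and wording below are hypothetical):

```html
<!-- Descriptive alt text gives search engines context about the object pictured -->
<img src="/images/windsor-knot-diagram.jpg"
     alt="Step-by-step diagram of tying a Windsor knot"
     width="800" height="600">
```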
A great example of using varying types of content to break up a topic can be seen below. In the article titled, “How to Tie the Windsor Knot”, the author has provided an informative primary header (h1) based on the functional query and also included video content (in case the user prefers this method of consumption), origin information, a comparison of this knot to others, and an explanatory graphic to walk anyone through the entire process.
By providing an abundance of detail and multimedia, your business can not only capture the additional search opportunities in the form of video object structured data and alternate text on the images, but also meet the E.A.T. standards that will delight your potential users and drive performance.
Open Graph Tags
Introduced by Facebook in 2010, with inspiration from Microformats and RDFa, the Open Graph protocol is one element of your page that can be easily forgotten because it’s often built into popular content management systems. Forgetting to review how your shared content will display on popular social networks can kill productivity as you race to add an image, name, and description after publishing. A lack of “OG Tags” can also hurt the shareability of the piece, decreasing the chances of its promotion being successful.
“OG Tags,” as they are commonly called, are similar to other forms of structured data but are specifically relevant to social media sharing. They can also act as a failsafe when a page title is not available, as Google commonly looks to this field when it cannot find text between the <title> elements.
How can I construct and validate open graph tags on my website?
Unless your content management system automatically generates Open Graph tags for you, you will have to build a few snippets of code to populate this information for those sharing your posts. You can find a few tools to help you out below:
Code snippet generators:
Code snippet validation:
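A minimal set of Open Graph tags, placed in the <head> of the page, might look like the following (all values here are hypothetical):

```html
<head>
  <!-- Core OG properties read by social networks when the URL is shared -->
  <meta property="og:title" content="10 On-Page SEO Essentials" />
  <meta property="og:description" content="Crafting the perfect piece of content." />
  <meta property="og:image" content="https://example.com/images/share-card.jpg" />
  <meta property="og:url" content="https://example.com/blog/on-page-seo-essentials" />
  <meta property="og:type" content="article" />
</head>
```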
Meta Robots Tags
The content your team produces will never see the success it deserves in organic search if no one can find it. While a powerful tool for keeping search results nice and tidy, the meta robots tag can also be a content marketer’s worst enemy. Similar to the robots.txt file, it gives crawlers instructions on how to treat a single URL in search results and whether to follow the links it contains; one line of code can make your page or post disappear.
Where can I find the meta robots instructions?
This specific tag (if your website contains one) is generally contained within the <head> section of the HTML document and may appear to look similar to the following:
<meta name="robots" content="noindex, nofollow">
What instructions can I provide to crawlers via the meta robots tag?
At bare minimum, your URL will need to be eligible for indexing by Google or other search engines. Indexing is the default behavior when no meta robots tag is present, but it can also be stated explicitly with an INDEX directive in the content field above.
Note: It is still up to the search engine’s discretion if your URL is worthy and high-quality enough to include in search results.
In addition to the INDEX directive, you can also pass the following instructions via the meta robots tag:
NOINDEX – Tells a search engine crawler to exclude this page from their index
NOFOLLOW – Instructs the crawler to ignore following any links on the given page
NOARCHIVE – Excludes the particular page from being cached in search results
NOSNIPPET – Prevents a description from displaying below the headline in search results
NOODP – Blocks the usage of the Open Directory Project description from search results.
NONE – Acts as a NOINDEX, NOFOLLOW tag.
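These directives can be combined in one tag. For example, to keep a page indexable and its links followed while suppressing the cached copy, a sketch might look like this:

```html
<!-- Allow indexing and link-following (the defaults), but exclude the cached copy -->
<meta name="robots" content="index, follow, noarchive">
```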
If you are taking the time to produce a high-quality article, make sure the world can see it with ease! Competing against yourself with duplicate articles and/or pages can lead to index bloat, and your search performance will not live up to its true potential.
The canonicalization and the canonical tag can be a tricky subject, but it is one that should not be taken lightly. Duplicate content can be the root of many unforeseen problems with your business’ organic search efforts.
What does a canonical tag (rel=”canonical”) do?
In simple terms, utilizing a canonical tag is a way of indicating to search engines that the destination URL noted in this tag is the “master copy” or the “single point of truth” that is worthy of being included in the search index. When implemented correctly, this should prevent multiple URLs with the same information or identical wording from being indexed and competing against each other on search engine results pages (SERPs).
Can my canonical tag be self-referential?
Absolutely. If it’s the best version of a page, do not leave it up to a search engine to decide this. Wear the “single source of truth” badge with pride and potentially prevent the incorrect implementation of canonical tags on other pages that are identical or similar.
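A self-referential canonical tag is simply the page declaring its own preferred URL (the URL below is hypothetical):

```html
<head>
  <!-- On https://example.com/products/poncho: declares itself the master copy,
       so parameterized duplicates like ?color=green consolidate to this URL -->
  <link rel="canonical" href="https://example.com/products/poncho">
</head>
```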
Page Speed Test
Last but not least, we can’t forget about page speed on individual pages of a business’ website. While the elements listed above are great for helping search engines and users better understand the context around a piece of content, page speed is important for ensuring the user gets a quality technical experience.
The entire premise of using a search engine is centered around getting a quick answer to a particular question or topic search. Delivering a slow page to a user will likely lead to them leaving your website altogether. According to a study from Google across multiple verticals, increasing page load time from 1 to 5 seconds increases the probability of a bounce by 90%. That could be a huge loss in revenue for a business.
Source: Google/SOASTA Research, 2017.
Tools for testing page speed:
Page by page:
Crafting the perfect piece of content is more than simply understanding your audience and what they want to read about online. There are many technical elements outlined above that can make or break your success in organic search or many other marketing mediums. As you think about producing a blog, an informational guide, or even a product page, consider all of the information a user needs to take the desired next step.
(All screenshots were taken by the author for the purpose of this article.)
The post 10 on-page SEO essentials: Crafting the perfect piece of content appeared first on Search Engine Watch.
Taylor Lorenz was in high demand this week. As a prolific journalist at The Atlantic and about-to-be member of Harvard’s prestigious Nieman Fellowship for journalism, that’s perhaps not surprising. Nor was this the first time she’s had a bit of a moment: Lorenz has already served as an in-house expert on social media and the internet for several major companies, while having written and edited for publications as diverse as The Daily Beast, The Hill, People, The Daily Mail, and Business Insider, all while remaining hip and in touch enough to currently serve as a kind of youth zeitgeist translator, on her beat as a technology writer for The Atlantic.
Lorenz is in fact publicly busy enough that she’s one of only two people I personally know to have openly ‘quit email,’ the other being my friend Russ, an 82-year-old retired engineer and MIT alum who literally spends all day, most days, working on a plan to reinvent the bicycle.
I wonder if any of Lorenz’s previous professional experiences, however, could have matched the weight of the events she encountered these past several days, when the nightmarish massacre in Christchurch, New Zealand brought together two of her greatest areas of expertise: political extremism (which she covered for The Hill), and internet culture. As her first Atlantic piece after the shootings said, the Christchurch killer’s manifesto was “designed to troll.” Indeed, his entire heinous act was a calculated effort to manipulate our current norms of Internet communication and connection, for fanatical ends.
Lorenz responded with characteristic insight, focusing on the ways in which the stylized insider subcultures the Internet supports can be used to confuse, distract, and mobilize millions of people for good and for truly evil ends:
Before people can even begin to grasp the nuances of today’s internet, they can be radicalized by it. Platforms such as YouTube and Facebook can send users barreling into fringe communities where extremist views are normalized and advanced. Because these communities have so successfully adopted irony as a cloaking device for promoting extremism, outsiders are left confused as to what is a real threat and what’s just trolling. The darker corners of the internet are so fragmented that even when they spawn a mass shooting, as in New Zealand, the shooter’s words can be nearly impossible to parse, even for those who are Extremely Online.
Such insights are among the many reasons I was so grateful to be able to speak with Taylor Lorenz for this week’s installment of my TechCrunch series interrogating the ethics of technology.
As I’ve written in my previous interviews with author and inequality critic Anand Giridharadas, and with award-winning Google exec turned award-winning tech critic James Williams, I come to tech ethics from 25 years of studying religion. My personal approach to religion, however, has essentially always been that it plays a central role in human civilization not only or even primarily because of its theistic beliefs and “faith,” but because of its culture — its traditions, literature, rituals, history, and the content of its communities.
And because I don’t mind comparing technology to religion (not saying they are one and the same, but that there is something to be learned from the comparison), I’d argue that if we really want to understand the ethics of the technologies we are creating, particularly the Internet, we need to explore, as Taylor and I did in our conversation below, “the ethics of internet culture.”
What resulted was, like Lorenz’s work in general, at times whimsical, at times cool enough to fly right over my head, but at all times fascinating and important.
Editor’s Note: we ungated the first of 11 sections of this interview. Reading time: 22 minutes / 5,500 words.
Joking with the Pope
Greg Epstein: Taylor, thanks so much for speaking with me. As you know, I’m writing for TechCrunch about religion, ethics, and technology, and I recently discovered your work when you brought all those together in an unusual way. You subtweeted the Pope, and it went viral.
Taylor Lorenz: I know. [People] were freaking out.
Greg: What was that experience like?
Taylor: The Pope tweeted some insane tweet about how Mary, Jesus’ mother, was the first influencer. He tweeted it out, and everyone was spamming that tweet to me because I write so much about influencers, and I was just laughing. There’s a meme on Instagram about Jesus being the first influencer and how he killed himself or faked his death for more followers.
I just tweeted it out. I think a lot of people didn’t know the joke, the meme, and I think they just thought that it was new & funny. Also [some people] were saying, “how can you joke about Jesus wanting more followers?” I’m like, the Pope literally compared Mary to a social media influencer, so calm down. My whole family is Irish Catholic.
A bunch of people were sharing my tweet. I was like, oh, god. I’m not trying to lead into some religious controversy, but I did think whether my Irish Catholic mother would laugh. She has a really good sense of humor. I thought, I think she would laugh at this joke. I think it’s fine.
Greg: I loved it because it was a real Rorschach test for me. Sitting there looking at that tweet, I was one of the people who didn’t know that particular meme. I’d like to think I love my memes but …
Taylor: I can’t claim credit.
Greg: No, no, but anyway most of the memes I know are the ones my students happen to tell me about. The point is I’ve spent 15-plus years being a professional atheist. I’ve had my share of religious debates, but I also have had all these debates with others I’ll call Professional Strident Atheists, who are more aggressive in their anti-religion than I am. And I’m thinking, “Okay, this is clearly a tweet that Richard Dawkins would love. Do I love it? I don’t know. Wait, I think I do!”
Taylor: I treated it with the greatest respect for all faiths. I thought it was funny to drag the Pope on Twitter.
The influence of Instagram
In a convoluted letter to Congress, Attorney General William Barr summarized Robert Mueller’s report on the Russia investigation and said he won’t charge President Trump with obstruction.
Google remarketing strategies can help reel back in previously lost conversions. Learn which strategies work and apply them in your campaigns today!
Read more at PPCHero.com
Further details have emerged about when and how much Facebook knew about data-scraping by the disgraced and now defunct Cambridge Analytica political data firm.
Last year a major privacy scandal hit Facebook after it emerged CA had paid GSR, a developer with access to Facebook’s platform, to extract personal data on as many as 87 million Facebook users without proper consent.
Cambridge Analytica’s intention was to use the data to build psychographic profiles of American voters to target political messages — with the company initially working for the Ted Cruz and later the Donald Trump presidential candidate campaigns.
But employees at Facebook appear to have raised internal concerns about CA scraping user data in September 2015 — i.e. months earlier than Facebook previously told lawmakers it became aware of the GSR/CA breach (December 2015).
The latest twist in the privacy scandal has emerged via a redacted court filing in the U.S. — where the District of Columbia is suing Facebook in a consumer protection enforcement case.
Facebook is seeking to have documents pertaining to the case sealed, while the District argues there is nothing commercially sensitive to require that.
In its opposition to Facebook’s motion to seal the document, the District includes a redacted summary (screengrabbed below) of the “jurisdictional facts” it says are contained in the papers Facebook is seeking to keep secret.
According to the District’s account, a Washington, DC-based Facebook employee warned others in the company about Cambridge Analytica’s data-scraping practices as early as September 2015.
Under questioning in Congress last April, Mark Zuckerberg was asked directly by congressman Mike Doyle when Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article (which broke the story).
Zuckerberg responded with a “yes” to Doyle’s question.
Damian Collins, the chair of the DCMS committee — which made repeat requests for Zuckerberg himself to testify in front of its enquiry into online disinformation, only to be repeatedly rebuffed — tweeted yesterday that the new detail could suggest Facebook “consistently mislead” the British parliament.
— Damian Collins (@DamianCollins) March 21, 2019
The DCMS committee has previously accused Facebook of deliberately misleading its enquiry on other aspects of the CA saga, with Collins taking the company to task for displaying a pattern of evasive behavior.
The earlier charge that it misled the committee refers to a hearing in Washington in February 2018 — when Facebook sent its U.K. head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field DCMS’ questions — where the pair failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015.
The committee’s final report was also damning of Facebook, calling for regulators to instigate antitrust and privacy probes of the tech giant.
Meanwhile, questions have continued to be raised about Facebook’s decision to hire GSR co-founder Joseph Chancellor, who reportedly joined the company around November 2015.
The question now is if Facebook knew there were concerns about CA data-scraping prior to hiring the co-founder of the company that sold scraped Facebook user data to CA, why did it go ahead and hire Chancellor?
The GSR co-founder has never been made available by Facebook to answer questions from politicians (or press) on either side of the pond.
Last fall he was reported to have quietly left Facebook, with no comment from Facebook on the reasons behind his departure — just as it had never explained why it hired him in the first place.
But the new timeline that has emerged of what Facebook knew when makes those questions more pressing than ever.
Reached for a response to the details contained in the District of Columbia’s court filing, a Facebook spokeswoman sent us this statement:
Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath
In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service. In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.
Facebook did not engage with questions about any of the details and allegations in the court filing.
A little later in the court filing, the District of Columbia writes that the documents Facebook is seeking to seal are “consistent” with its allegations that “Facebook has employees embedded within multiple presidential candidate campaigns who… knew, or should have known… [that] Cambridge Analytica [was] using the Facebook consumer data harvested by [GSR’s] [Aleksandr] Kogan throughout the 2016 [United States presidential] election.”
It goes on to suggest that Facebook’s concern to seal the document is “reputational,” suggesting — in another redacted segment (below) — that it might “reflect poorly” on Facebook that a DC-based employee had flagged Cambridge Analytica months prior to news reports of its improper access to user data.
“The company may also seek to avoid publishing its employees’ candid assessments of how multiple third-parties violated Facebook’s policies,” it adds, chiming with arguments made last year by GSR’s Kogan, who suggested the company failed to enforce the terms of its developer policy, telling the DCMS committee it therefore didn’t have a “valid” policy.
As we’ve reported previously, the U.K.’s data protection watchdog — which has an ongoing investigation into CA’s use of Facebook data — was passed information by Facebook as part of that probe, which showed that three “senior managers” had been involved in email exchanges, prior to December 2015, concerning the CA breach.
It’s not clear whether these exchanges are the same correspondence the District of Columbia has obtained and which Facebook is seeking to seal, or whether there were multiple email threads raising concerns about the company.
The ICO passed the correspondence it obtained from Facebook to the DCMS committee — which last month said it had agreed at the request of the watchdog to keep the names of the managers confidential. (The ICO also declined to disclose the names or the correspondence when we made a Freedom of Information request last month — citing rules against disclosing personal data and its ongoing investigation into CA meaning the risk of release might be prejudicial to its investigation.)
In its final report, the committee said this internal correspondence indicated “profound failure of governance within Facebook” — writing:
[I]t would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case. The scale and importance of the GSR/Cambridge Analytica breach was such that its occurrence should have been referred to Mark Zuckerberg as its CEO immediately. The fact that it was not is evidence that Facebook did not treat the breach with the seriousness it merited. It was a profound failure of governance within Facebook that its CEO did not know what was going on, the company now maintains, until the issue became public to us all in 2018. The incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.
We reached out to the ICO for comment on the information to emerge via the Columbia suit, and also to the Irish Data Protection Commission, the lead DPA for Facebook’s international business, which currently has 15 open investigations into Facebook or Facebook-owned businesses related to various security, privacy and data protection issues.
An ICO spokesperson told us: “We are aware of these reports and will be considering the points made as part of our ongoing investigation.”
Last year the ICO issued Facebook with the maximum possible fine under U.K. law for the CA data breach.
Shortly after, Facebook announced it would appeal, saying the watchdog had not found evidence that any U.K. users’ data was misused by CA.
A date for the hearing of the appeal set for earlier this week was canceled without explanation. A spokeswoman for the tribunal court told us a new date would appear on its website in due course.
This report was updated with comment from the ICO.
Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines.
What a Friday. This afternoon (mere hours after we released our regularly scheduled episode no less!), both Pinterest and Zoom dropped their public S-1 filings. So we rolled up our proverbial sleeves and ran through the numbers. If you want to follow along, the Pinterest S-1 is here, and the Zoom document is here.
Got it? Great. Pinterest’s long-awaited IPO filing paints a picture of a company cutting its losses while expanding its revenue. That’s the correct direction for both its top and bottom lines.
As Kate points out, it’s not in the same league as Lyft when it comes to scale, but it’s still quite large.
It’s more than big enough to go public; whether it’s big enough to meet, let alone surpass, its final private valuation ($12.3 billion) isn’t clear yet. Peeking through the numbers, Pinterest has been improving margins and accelerating growth, a surprisingly winsome brace of metrics for the decacorn.
Pinterest has raised a boatload of venture capital, about $1.5 billion since it was founded in 2010. Its IPO filing lists both early and late-stage investors, like Bessemer Venture Partners, FirstMark Capital, Andreessen Horowitz, Fidelity and Valiant Capital Partners as key stakeholders. Interestingly, it doesn’t state the percent ownership of each of these entities, an omission we can’t recall seeing before.
Next, Zoom’s S-1 filing was more dark horse entrance than Katy Perry album drop, but the firm has a history of rapid growth (over 100 percent yearly) and, more recently, profit. Yes, the enterprise-facing video conferencing unicorn actually makes money!
In 2019, a year in which the market is waiting with bated breath for Uber’s debut, profit almost feels out of place. We know Zoom’s CEO Eric Yuan, which helps. As Kate explains, this isn’t his first time as a founder. Nor is it his first major success. Yuan sold his last company, WebEx, to Cisco for $3.2 billion years ago and later vowed never to sell Zoom (he wasn’t thrilled with how that WebEx acquisition turned out).
Should we have been that surprised to see a VC-backed tech company post a profit? No. But that tells you a little something about this bubble we live in, doesn’t it?
When we think of enterprise SaaS companies today, just about every startup in the space aspires to be a platform. That means they want people using their stack of services to build entirely new applications, either to enhance the base product, or even build entirely independent companies. But when Salesforce launched Force.com, the company’s Platform as a Service, in 2007, there was no model to follow.
It turns out that Force.com was actually the culmination of a series of incremental steps after the launch of the first version of Salesforce in February 2000, all of which were designed to make the software more flexible for customers. Company co-founder and CTO Parker Harris says becoming a platform wasn’t a goal early on. “We were a solution first, I would say. We didn’t say ‘let’s build a platform and then build sales-force automation on top of it.’ We wanted a solution that people could actually use,” Harris told TechCrunch.
The march toward becoming a full-fledged platform started with simple customization. That first version of Salesforce was pretty basic, and the company learned over time that customers didn’t always use the same language it did to describe customers and accounts — and that was something that would need to change.
Customizing the product
Constantly evolving search results driven by Google’s increasing implementation of AI are challenging SEOs to keep pace. Search is more dynamic, competitive, and faster than ever before.
Where SEOs used to focus almost exclusively on what Google and other search engines were looking for in their site structure, links, and content, digital marketing now revolves solidly around the needs and intent of consumers.
This past year was perhaps the most transformative in SEO, an industry expected to top $80 billion in spending by 2020. AI is creating entirely new engagement possibilities across multiple channels and devices. Consumers are choosing to find and interact with information via voice search, connected IoT appliances, and other devices. As a result, brands are being challenged to reimagine the entire customer journey and how they optimize content for search.
How do you even begin to prioritize when your to-do list and the data available to you are growing at such a rapid pace? The points below are intended to help you do just that.
From analysis to activation, data is key
SEO is becoming less a matter of simply optimizing for search. Today, SEO success hinges on our ability to seize every opportunity. Research from my company’s Future of Marketing and AI Study highlights current opportunities in five important areas.
1. Data cleanliness and structure
As the volume of data consumers produce in their searches and interactions increases, it’s critically important that SEOs properly tag and structure the information we want search engines to match to those queries. Google offers rich snippets and cards that enable you to expand and enhance your search results, making them not only more visually appealing but also adding functionality and opportunities to engage.
Google has experimented with a wide variety of rich results, and you can expect them to continue evolving. Therefore, it’s best practice to properly mark up all content so that when a rich search feature becomes available, your content is in place to capitalize on the opportunity.
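As an illustration of that markup, here is a minimal sketch of a JSON-LD structured-data block for a product page, generated with Python’s standard library. The product name, rating, and price are hypothetical; the field names follow schema.org’s Product type, which Google’s product rich results are built on.

```python
import json

# Hypothetical product data shaped after the schema.org Product type.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Beef Jerky Dog Treats",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "127",
    },
    "offers": {
        "@type": "Offer",
        "priceCurrency": "USD",
        "price": "12.99",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the data as a JSON-LD script tag in the page's <head>.
json_ld = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(
    product, indent=2
)
print(json_ld)
```

Because the markup sits in the page rather than in your templates’ visible HTML, it can be added to every eligible page now and simply wait for Google to roll out a matching rich result.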
2. Increasingly automated actionable insights
While Google is using AI to interpret queries and understand results, marketers are deploying AI to analyze data, recognize patterns, and deliver insights at rates humans simply cannot achieve. AI is helping SEOs interpret market trends, analyze site performance, gather and understand competitor performance, and more.
It’s not just that we’re able to get insights faster, though. The insights available to us now may have gone unnoticed, if not for the in-depth analysis we can accomplish with AI.
Machines are helping us analyze different types of media, understanding the content and context of millions of images at a time, and it goes beyond images and video. With Google Lens, for example, augmented reality will be used to glean query intent from objects rather than expressed words.
Opportunities for SEOs include:
- Defining the opportunity space more precisely in a competitive context
- Understanding the underlying need at each stage of the customer journey
- Deploying longer-tail content informed by advanced search insights
- Better content mapping to specific expressions of consumer intent across the buying journey
3. Real-time response and interactions
In a recent “State of Chatbots” report, researchers asked consumers to identify problems with traditional online experiences by posing the question, “What frustrations have you experienced in the past month?”
The researchers found that at least seven of the top consumer frustrations they identified can be solved with properly programmed chatbots. It’s no wonder they also found that 69% of consumers prefer chatbots for quick communication with brands.
Search query and online behavior data can make smart bots so compelling and efficient in delivering on consumer needs that, in some cases, the visitor may not even realize they’re dealing with an automated tool. It’s a win for the consumer, who probably isn’t there for a social visit anyway, and for the brand that seeks to deliver an exceptional experience while improving operational efficiency.
SEOs have an opportunity to:
- Facilitate more productive online store consumer experiences with smart chatbots.
- Redesign websites to support visual and voice search.
- Deploy deep learning, where possible, to empower machines to make decisions, and respond in real-time.
4. Smart automation
SEOs have been pretty ingenious at automating repetitive, time-consuming tasks such as pulling rankings reports, backlink monitoring, and keyword research. In fact, a lot of quality digital marketing software was born out of SEOs automating their own client work.
Now, AI is enabling us to make automation smarter by moving beyond simple task completion to prioritization, decision-making, and executing new tasks based on those data-backed decisions.
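As a toy illustration of that shift from task completion to prioritization, the sketch below scores pages with a simple, data-backed heuristic: high impressions with a below-benchmark click-through rate at a decent ranking position suggests a title or snippet rewrite could win clicks. The URLs, metrics, and the 6% CTR benchmark are all hypothetical; a real pipeline would pull these numbers from analytics and rank-tracking tools rather than hard-coding them.

```python
# Hypothetical page metrics; in practice these would come from analytics
# and rank-tracking exports rather than hard-coded values.
pages = [
    {"url": "/blog/dog-treats", "impressions": 12000, "ctr": 0.011, "position": 8.2},
    {"url": "/blog/cat-toys",   "impressions":   900, "ctr": 0.052, "position": 3.1},
    {"url": "/blog/bird-seed",  "impressions":  7500, "ctr": 0.009, "position": 11.4},
]

def opportunity_score(page):
    """Data-backed priority: many impressions but few clicks at a workable
    ranking position means a snippet rewrite has the most upside."""
    if page["position"] > 20:   # too far back in results; a different problem
        return 0.0
    expected_ctr = 0.06         # rough benchmark, an assumption for this sketch
    ctr_gap = max(expected_ctr - page["ctr"], 0)
    return page["impressions"] * ctr_gap

# Rank pages so the automation (or a human) tackles the biggest wins first.
for page in sorted(pages, key=opportunity_score, reverse=True):
    print(page["url"], round(opportunity_score(page), 1))
```

The point isn’t the particular formula but the pattern: once the inputs flow in automatically, the system can decide what to work on next instead of merely reporting numbers.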
Content marketing is one area where AI can have a massive impact, and marketers are on board. We found that just 4% of respondents felt they were unlikely to use AI/deep learning in their content strategy in 2018, while over 42% had already implemented it.
In content marketing, AI can help us quickly analyze consumer behavior and data, in order to:
- Identify content opportunities
- Build optimized content
- Promote the right content to the most motivated audience segments and individuals
5. Personalizations that drive business results
Personalization was identified as the top trend in marketing at the time of our survey, followed closely by AI (which certainly drives more accurate personalization). In fact, you could argue that the top four trends, namely personalization, AI, voice search, and mobile optimization, are closely connected, if not overlapping in places.
Across emails, landing pages, paid advertising campaigns, and more, search insights are being injected into and utilized across multiple channels. These insights help us better connect content to consumer needs.
Each piece of content produced must be purposeful. It needs to be optimized for discovery, a process that begins in content planning as you identify where consumers are going to find and engage with each piece. Smart content is personalized in such a way that it meets a specific consumer’s need, but it must deliver on the monetary needs of the business, as well.
Check out these 5 steps for making your content smarter from a previous column for more.
How SEOs are uniquely positioned to drive smarter digital marketing forward
As marketing professionals with one foot in analysis and the other planted solidly in creative, SEOs have a unique opportunity to lead the smart utilization and activation of all manner of consumer data.
You understand the critical importance of clean data input (or intelligent systems that can clean and make sense of unstructured data) and differentiating between first and third-party data. You understand economies of scale in SEO and the value in building that scalability into systems from the ground up.
SEOs have long nurtured a deep understanding of how people search for and discover information, and how technology delivers it. Make the most of your current opportunities by picking the low-hanging fruit for quick wins. Focus your efforts on putting in place the scalable, smart systems that will allow you to anticipate consumer needs, react quickly, report on SEO appropriately, and convey business results to the stakeholders who will determine future budgets.
The post Five ways SEOs can utilize data with insights, automation, and personalization appeared first on Search Engine Watch.
Flip the “days since last Facebook security incident” back to zero.
The discovery was made in January, said Facebook’s Pedro Canahuati, as part of a routine security review. None of the passwords were visible to anyone outside Facebook, he said. Facebook admitted the security lapse months later, after security reporter Brian Krebs said logs were accessible to some 2,000 engineers and developers.
Krebs said the bug dated back to 2012.
“This caught our attention because our login systems are designed to mask passwords using techniques that make them unreadable,” said Canahuati. “We have found no evidence to date that anyone internally abused or improperly accessed them,” he said, though he did not say how the company reached that conclusion.
Facebook said it will notify “hundreds of millions of Facebook Lite users” (Facebook Lite is a lighter version of the app for markets where internet speeds are slow and bandwidth is expensive) and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.
Krebs said as many as 600 million users could be affected — about one-fifth of the company’s 2.7 billion users, but Facebook has yet to confirm the figure.
Facebook also didn’t say how the bug came to be. Storing passwords in readable plaintext is insecure. Companies like Facebook typically hash and salt passwords, two techniques that further scramble the stored value, so that they can verify a user’s password without ever knowing what it is.
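The company hasn’t detailed its actual implementation, but hashing and salting in general can be sketched with Python’s standard library: a fresh random salt per user plus a slow key-derivation function, so the stored digest can verify a login without revealing the password.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per user means identical passwords produce
    # different digests, defeating precomputed (rainbow-table) attacks.
    if salt is None:
        salt = os.urandom(16)
    # PBKDF2 is deliberately slow, making brute-force guessing expensive.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Recompute with the stored salt and compare in constant time.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

Only the salt and digest are stored; the plaintext never needs to be written anywhere, which is exactly the property the logging bug violated.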
It’s the latest in a string of embarrassing security issues at the company, which have prompted congressional inquiries and government investigations. It was reported last week that Facebook’s deals allowing other tech companies to access account data without consent were under criminal investigation.
It’s not known why Facebook took months to confirm the incident, or if the company informed state or international regulators per U.S. breach notification and European data protection laws. We asked Facebook but a spokesperson did not immediately comment beyond the blog post.
The Irish data protection office, which covers Facebook’s European operations, said the company “informed us of this issue” and the regulator is “currently seeking further information.”