Monthly Archives: August 2018
The company also announced that Paula Long, most recently CEO at DataGravity, has joined the company as SVP of engineering.
This tool combines customer content with automation, chatbots and machine learning. It’s designed to help teams who work directly with customers get at the information they need faster, and the machine learning element should allow it to improve over time.
You can deploy the product as a widget on your website to give customers direct access to the information, but Rob May, company founder and CEO, says the most common use case involves helping sales, customer service and customer success teams get access to the most relevant and current information, whether that’s maintenance or pricing.
The information can get into the knowledge base in several ways. First, you can enter elements like product pages and FAQs directly into the Talla product, as with any knowledge base. Second, if an employee asks a question and there isn’t an adequate answer, the system exposes the gaps in its information.
“It really shows you the unknown unknowns in your business. What are the questions people are asking that you didn’t realize you don’t have content for or you don’t have answers for. And so that allows you to write new content and better content,” May explained.
Finally, the company can import information into the knowledge base from Salesforce, ServiceNow, Jira or wherever it happens to live, and that can be added to a new page or incorporated into an existing page as appropriate.
Employees interact with the system by asking a bot questions, and it supplies the answer if one exists. It works with Slack, Microsoft Teams or Talla Chat.
Customer service remains a major pain point for many companies. It is the direct link to customers when they are having issues. A single bad experience can taint a person’s view of a brand, and chances are when a customer is unhappy they let their friends know on social media, making an isolated incident much bigger. Having quicker access to more accurate information could help limit negative experiences.
Today’s announcement builds on an earlier version of the product that took aim at IT help desks. Talla found customers kept asking for a solution that provided similar functionality for customer-facing information, and the company has tuned it accordingly.
May launched Talla in 2015 after selling his former startup Backupify to Datto in 2014. The company, which is based near Boston, has raised $12.3 million.
Page speed has been a ranking factor for desktop searches since April 2010, but it was never officially a ranking factor for mobile searches (despite what we’ve all suspected for a long time). Not until July 2018, that is, when Google rolled out the Speed Update.
Google’s pushing for a faster mobile experience
The Speed Update is the latest in a long list of speed-related updates, tools, and technologies that Google has developed over the last decade – many of which specifically target the mobile experience.
For example, PageSpeed tools, such as the modules for servers like Apache and Nginx, PageSpeed reports in Google Search Console and Google Analytics, and plugins like the PageSpeed Chrome Developer Tools extension have become par for the course since their introduction back in 2010.
Since then, Google has introduced tools such as the Mobile-Friendly Test to help websites gauge their responsiveness.
They’ve also launched Accelerated Mobile Pages (AMP), which allows content creators to make lightweight and lightning-fast versions of pages for their mobile audiences, and Progressive Web Apps (PWA), which load content instantly regardless of a user’s network state.
And, in the past 6 months alone, Google has further introduced an onslaught of new speed-related tools, including:
- Lighthouse – helps users automatically audit and optimize web pages
- Impact Calculator and Mobile Speed Scorecard – grade your mobile site’s speed and calculate what impact your site speed is having on your conversion rates and revenue
- Chrome User Experience Report (CrUX) – a database of real user experience metrics from Chrome users.
Google also transitioned to ‘mobile-first’ indexing in February 2018, which means it now prioritizes the mobile versions of websites over desktop versions when it comes to ranking and indexing.
And last but not least, the Speed Update has ushered in page speed as a ranking factor for mobile websites.
Recent changes to how Google measures page speed
Another recent change you may have noticed is that PageSpeed Insights looks a little different these days. Entering a URL a few months ago would return a report that looked something like this:
As you can see, your site receives one rating and it’s evaluated based on a set of clear technical criteria: redirects, compression, minification, etc. Optimizing, while not always easy per se, was straightforward.
But if you plug in your URL today, you’ll see a screen that looks more like this:
Now you’re scored according to two different categories: speed and optimization.
Optimization is the new name given to the technical checklist you were already familiar with. Anyone who’s used PageSpeed Insights in the past should instantly recognize these recommendations.
Speed, however, is something new. It’s scored based on two new metrics: First Contentful Paint (FCP), which measures how long it takes a user to see the first visual response from a page, and DOM Content Loaded (DCL), which measures the time it takes an HTML document to be loaded and parsed.
These two new metrics are the game-changers because even if you were measuring them before the update (most SEOs I know weren’t), there’s a high chance that Google’s numbers don’t match yours.
So why the disconnect? Well, while you’re measuring DCL based on your website’s optimal performance, Google is pulling its results from its CrUX database. In other words, these metrics are based on real user measurements.
That means that even if everything looks perfectly optimized on your end, Google may consider your website to be ‘slow’ if most of your users have poor connection speeds or outdated mobile devices.
In other words, Google has switched from measuring ‘lab’ data to ‘field’ data. You can’t manipulate field data directly; the only way to improve it is to optimize your website so that it loads faster even on slow connections and older devices.
Our experiment measuring the impact of the Speed Update
My team recently conducted a series of experiments to determine what impact, if any, the Speed Update has had on mobile rankings.
First, we analyzed one million pages in mobile search results to understand the relationship between page speed and mobile SERPs before the update. Our research revealed that a page’s Page Speed Optimization Score had a high correlation (0.97) to its position in SERPs. FCP and DCL, however, had almost no bearing on a page’s rank.
Three months later, after Google’s Speed Update went live, we ran the same experiment. Again, we analyzed one million different pages and we collected Optimization Scores, median FCPs, and median DCLs for each unique URL.
What we discovered is that the correlation between a page’s average speed Optimization Score and its position in SERPs remains static: 0.97.
We also discovered that there is still no significant correlation between a page’s position in mobile SERPs and the median FCP/DCL metrics.
The only change we did notice was an industry-wide increase in the performance of mobile pages: pages ranking in the first 30 positions in mobile search improved by 0.83 Optimization Score points between our first and second experiments.
So, what’s the takeaway? At this point in time, it’s very important to continue improving your Optimization Score. FCP and DCL metrics seem to play a minor role where search results are concerned, but the standards for the top positions in SERPs keep increasing.
Advanced checklist for optimizing page speed
Optimizing mobile page speed requires you to test your page speed first. Before you begin making any improvements, plug your URLs into PageSpeed Insights. Or, if you find the thought of checking every page one-by-one exhausting, use a tool that can monitor all of your pages at once.
My team uses the tool we developed, WebSite Auditor. It’s integrated with PageSpeed Insights, which makes it easy to test, analyze, and optimize each page’s performance. GTmetrix and Pingdom are two other great tools for testing and optimizing page speed.
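If you’d rather script these checks than paste URLs into the web UI one at a time, PageSpeed Insights also exposes an HTTP API. Here’s a minimal Python sketch; the v4 endpoint and the `ruleGroups`/`loadingExperience` response fields are assumptions based on the API as of this writing, so check the current documentation before relying on them:

```python
import urllib.parse

# PageSpeed Insights v4 endpoint (an assumption -- Google versions this API,
# so verify the endpoint and field names against the current docs).
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v4/runPagespeed"

def psi_request_url(page_url, strategy="mobile", api_key=""):
    """Build the request URL for a PageSpeed Insights report on one page."""
    params = {"url": page_url, "strategy": strategy}
    if api_key:
        params["key"] = api_key
    return PSI_ENDPOINT + "?" + urllib.parse.urlencode(params)

def summarize(report):
    """Pull the Optimization Score and median FCP/DCL out of a parsed report."""
    metrics = report.get("loadingExperience", {}).get("metrics", {})
    return {
        "optimization_score": report.get("ruleGroups", {})
                                    .get("SPEED", {}).get("score"),
        "fcp_median_ms": metrics.get("FIRST_CONTENTFUL_PAINT_MS", {}).get("median"),
        "dcl_median_ms": metrics.get("DOM_CONTENT_LOADED_EVENT_FIRED_MS", {}).get("median"),
    }

# Usage (requires network access):
#   import json, urllib.request
#   with urllib.request.urlopen(psi_request_url("https://example.com")) as resp:
#       print(summarize(json.load(resp)))
```

Looping `psi_request_url` over a sitemap gives you exactly the Optimization Score plus field FCP/DCL split described above, one page at a time.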
Once you’ve tested your mobile site speed and identified areas of improvement, it’s time to get to work:
- Ensure each page has no more than one redirect
– If you need to use a redirect: use 301 for permanent redirects (e.g. deleted content) and 302 for temporary redirects (e.g. limited-time promotions)
- Enable compression to reduce file size
– Gzip all compressible content or use a Gzip alternative (e.g. Brotli)
– Remove unnecessary data whenever possible
– Use different compression techniques for HTML code and digital assets
- Aim for a server response time of <200ms
– Use HTTP/2 for a performance boost
– Enable OCSP stapling
– Support both IPv6 and IPv4
– Add resource hints like dns-prefetch, preconnect, prefetch, and preload.
- Implement a caching policy
– Use cache-control to automatically control how and how long browsers cache responses
– Use Etags to enable efficient revalidation
– Double-check Google’s caching checklist to determine the optimal caching policy
- Minify resources
– Minify HTML, CSS, and JavaScript, and compress images, videos, and other content if they’re slowing down your page speed
– Automate minification using third-party tools
- Optimize images
– Eliminate unnecessary resources
– Replace images with CSS3 where possible
– Don’t encode text in images; use web fonts instead
– Minify and compress SVG assets
– Remove metadata if it’s not needed
– Select smaller raster formats if they don’t interfere with quality
– Resize and scale images to fit display size
– Choose the image quality settings that best fit your site needs.
- Optimize CSS delivery
– Inline small CSS files directly into the HTML to remove small external resources.
- Keep above-the-fold content under 148kB (compressed)
– Reduce the size of data required to render above-the-fold content
– Organize HTML markup to quickly render above-the-fold content.
– Inline critical scripts
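Several of the server-side items above — a single permanent redirect, Gzip compression, OCSP stapling, HTTP/2, and a caching policy — can be sketched together in one nginx configuration. This is an illustrative fragment, not a drop-in config; the host name is hypothetical and the directive values should be tuned to your site:

```nginx
server {
    listen 443 ssl http2;          # HTTP/2 for a multiplexing performance boost
    server_name www.example.com;   # hypothetical host

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;

    # Gzip all compressible text content
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 256;

    # Long-lived caching (with ETags) for fingerprinted static assets
    location ~* \.(css|js|png|jpg|webp|svg|woff2)$ {
        add_header Cache-Control "public, max-age=31536000, immutable";
        etag on;
    }
}

# One 301 redirect straight to the canonical host -- never a redirect chain
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}
```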
Needless to say, there are a lot of technical SEO tips and tricks you can apply to keep tweaking and refining your mobile page speed. If you need more information on how, exactly, to perform any of the above actions, see Google’s PageSpeed Insights Rules for more detail.
Conclusion: why you need to be optimizing mobile page speed
Year after year, search engines continue to push the importance of mobile optimization. And it’s no secret why: recent studies suggest that 53% of all mobile visits are abandoned when a page takes longer than 3 seconds to load, and you lose 10% of your users with every additional second.
Page speed has always mattered, but providing people with a fast mobile experience is now more important than ever before. This is especially true when you consider mobile-first indexing and the news that the average Optimization Scores of top ranking pages continue to rise.
Jack Dorsey is hedging his bets. In an interview with CNN’s Brian Stelter, the beard-rocking CEO said Twitter is reluctant to commit to a timetable for enacting policies aimed at curbing heated political rhetoric on the site.
The executive’s lukewarm comments reflect an embattled social network that has borne the brunt of criticism from both sides of the political divide. The left has taken Twitter to task for relative inaction over incendiary comments from far-right pundits like Alex Jones. The site was slow to act, compared to the likes of services including YouTube, Facebook and even YouPorn (yep).
When it ultimately did ban Jones’ Infowars, it was a seven-day “timeout.” That move, expectedly, has drawn scrutiny from the other side of the aisle. Yesterday, Trump tweeted a critique of social media in general that is widely regarded as a thinly veiled allusion to his embattled supporter, Jones.
Trump also recently called for an end to what the right has deemed the “shadow banning” of conservative voices on social media.
“How do we earn people’s trust?” the CEO asked rhetorically during the conversation. “How do we guide people back to healthy conversation?”
Social Media is totally discriminating against Republican/Conservative voices. Speaking loudly and clearly for the Trump Administration, we won’t let that happen. They are closing down the opinions of many people on the RIGHT, while at the same time doing nothing to others…….
— Donald J. Trump (@realDonaldTrump) August 18, 2018
Dorsey suggested that his company is “more left-leaning,” a notion that has made him extra cautious of blowback from the right. He also maintained his refusal to hold the company accountable for fact-checking, a policy that runs counter to the proclamations of other social media companies like Facebook.
“We have not figured this out,” Dorsey said, “but I do think it would be dangerous for a company like ours… to be arbiters of truth.”
.@BrianStelter: Is your job to make sure people are not misinformed on Twitter?
— CNN (@CNN) August 19, 2018
For now, Dorsey and co. appear to be in a holding pattern, an indecisiveness that has drawn fire from all sides. The exec pines for a less polarized dialogue, citing NBA and K-Pop accounts as examples of Twitter subcultures that have been more measured in their approach.
Of course, anyone who’s spent time reading replies to LeBron or The Warriors can tell you that that’s a pretty low bar for discourse.
The fact of the matter is that this is the state of politics in 2018. Things are vicious and rhetoric can be incendiary. All of that is amplified by social media, as political pundits lean into troubling comments, conspiracy theory and outright lies to drive clicks.
Dorsey says he’s pushing for policies “that encourage people to talk and to have healthy conversation.” Whatever Twitter’s “small staff” might have in the works, it certainly feels a long way off.
Twitter tried to downplay the impact deactivating its legacy APIs would have on its community and the third-party Twitter clients preferred by many power users by saying that “less than 1%” of Twitter developers were using these old APIs. Twitter is correct in its characterization of the size of this developer base, but it’s overlooking millions of third-party app users in the process. According to data from Sensor Tower, six million App Store and Google Play users installed the top five third-party Twitter clients between January 2014 and July 2018.
Over the past year, these top third-party apps were downloaded 500,000 times.
This data is largely free of reinstalls, the firm also said.
The top third-party Twitter apps users installed over the past three-and-a-half years have included: Twitterrific, Echofon, TweetCaster, Tweetbot and Ubersocial.
Of course, some portion of those users may have since switched to Twitter’s native app for iOS or Android, or they may run both a third-party app and Twitter’s own app in parallel.
Even if only some of these six million users remain, they represent a small, vocal and — in some cases, prominent — user base. It’s one that is very upset right now, too. And for a company that just posted a loss of one million users in its last earnings report, it seems odd that Twitter would not figure out a way to accommodate this crowd, or even bring them on board its new API platform to make money from them.
Twitter, apparently, was weighing data and facts, not user sentiment and public perception, when it made this decision. But some things have more value than numbers on a spreadsheet. They are part of a company’s history and culture. Of course, Twitter has every right to blow all that up and move on, but that doesn’t make it the right decision.
To be fair, Twitter is not lying when it says this is a small group. The third-party user base is tiny compared with Twitter’s native app user base. During the same time that six million people were downloading third-party apps, the official Twitter app was installed a whopping 560 million times across iOS and Android. That puts the third-party apps’ share of installs at about 1.1 percent of the total.
That user base may have been shrinking over the years, too. During the past year, while the top third-party apps were installed half a million times, Twitter’s app was installed 117 million times. This made third-party apps’ share only about 0.4 percent of downloads, giving the official app a 99 percent market share.
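As a quick sanity check on those install-share figures, here is the arithmetic with the rounded download counts quoted above (all counts are approximations from Sensor Tower, as reported in the article):

```python
# Install counts as quoted in the article (approximate).
third_party_total = 6_000_000    # Jan 2014 - Jul 2018, top five third-party clients
official_total = 560_000_000     # official Twitter app, same period

third_party_year = 500_000       # past year
official_year = 117_000_000      # past year

share_total = third_party_total / (third_party_total + official_total) * 100
share_year = third_party_year / (third_party_year + official_year) * 100

print(round(share_total, 1))  # third-party share of all installs, in percent
print(round(share_year, 1))   # third-party share over the past year, in percent
```

The results — roughly 1.1 percent overall and 0.4 percent over the past year — match the shares cited above.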
But third-party app developers and the apps’ users are power users. Zealots, even. Evangelists.
Twitter itself credited them with pioneering “product features we all know and love,” like the mute option, pull-to-refresh and more. That means the apps’ continued existence brings more value to Twitter’s service than numbers alone can show.
They are part of Twitter’s history. You can even credit one of the apps for Twitter’s logo! Initially, Twitter only had a typeset version of its name. Then Twitterrific came along and introduced a bird for its logo. Twitter soon followed.
Twitterrific was also the first to use the word “tweet,” which is now standard Twitter lingo. (The company used “twitter-ing.” Can you imagine?)
These third-party apps also play a role in retaining users who struggle with the new user experience Twitter has adopted — its algorithmic timeline. Instead, the apps offer a chronological view of tweets, as some continue to prefer.
Twitter’s decision to cripple these developers’ apps is shameful.
It shows a lack of respect for Twitter’s history, its power user base, its culture of innovation and its very own nature as a platform, not a destination.
According to a report by the American Cancer Society, an estimated 266,120 women will be newly diagnosed with breast cancer in the United States this year and (according to a 2016 estimate) can expect to pay between $60,000 and $134,000 on average for treatment and care. But, after hundreds of thousands of dollars and non-quantifiable emotional stress for them and their families, the American Cancer Society still estimates 40,920 women will lose their battle with the disease this year.
Worldwide, roughly 1.7 million women will be diagnosed with the disease yearly, according to a 2012 estimate by The World Cancer Research Fund International.
While these numbers are stark, they do little to fully capture just how devastating a breast cancer diagnosis is for women and their loved ones. This is a feeling that Higia Technologies‘ co-founder and CEO Julián Ríos Cantú is unfortunately very familiar with.
“My mom is a two-time breast cancer survivor,” Cantú told TechCrunch. “The first time she was diagnosed I was eight years old.”
Cantú says that his mother’s second diagnosis was originally missed through standard screenings because her high breast density obscured the tumors from the X-ray. As a result, she lost both of her breasts, but has since fully recovered.
“At that moment I realized that if that was the case for a woman with private insurance and a prevention mindset, then for most women in developing countries, like Mexico where we’re from, the outcome could have been not a mastectomy but death,” said Cantú.
Following his mother’s experience, Cantú resolved to develop a way to improve the value of women’s lives and support them in identifying breast abnormalities and cancers early in order to ensure the highest likelihood of survival.
To do this, at the age of 18 Cantú designed EVA — a bio-sensing bra insert that uses thermal sensing and artificial intelligence to identify abnormal temperatures in the breast that can correlate to tumor growth. Cantú says that EVA is not only an easy tool for self-screening but also fills in gaps in current screening technology.
Today, women have fairly limited options when it comes to breast cancer screening. They can opt for a breast ultrasound (which has lower specificity than other options), or a breast MRI (which has higher associated costs), but the standard option is a yearly or bi-yearly mammogram for women 45 and older. This method requires a visit to a doctor, manual manipulation of the breasts by a technologist and exposure to low-levels of radiation for an X-ray scan of the breast tissue.
While this method is relatively reliable, there are still crucial shortcomings, Higia Technologies’ medical adviser Dr. Richard Kaszynski M.D., PhD told TechCrunch.
“We need to identify a real-world solution to diagnosing breast cancer earlier,” said Dr. Kaszynski. “It’s always a trade-off when we’re talking about mammography because you have the radiation exposure, discomfort and anxiety in regards to exposing yourself to a third-party.”
Dr. Kaszynski continued to say that these yearly or bi-yearly mammograms also leave a gap in care in which interval cancers — cancers that begin to take hold between screenings — have time to grow unhindered.
Additionally, Dr. Kaszynski says mammograms are not highly sensitive when it comes to detecting tumors in dense breast tissue, like that of Cantú’s mom. Dense breast tissue, which is more common in younger women and is present in 40 percent of women globally and 80 percent of Asian women, can mask the presence of tumors in the breast from mammograms.
Through its use of non-invasive, thermal sensors EVA is able to collect thermal data from a variety of breast densities that can enable women of all ages to more easily (and more frequently) perform breast examinations.
Here’s how it works:
To start, the user inserts the thermal sensing cups (which come in three standard sizes ranging from A-D) into a sports bra, opens the associated EVA Health App, follows the instructions and waits for 60 minutes while the cup collects thermal data. From there, EVA sends the data via Bluetooth to the app, and an AI analyzes the results to provide the user with an evaluation. If EVA believes the user may have an abnormality that puts them at risk, the app will recommend follow-up steps for further screening with a healthcare professional.
While sacrificing your personal health data to the whims of an AI might seem like a scary (and dangerous, if the device were to be hacked) idea to some, Cantú says Higia Technologies has taken steps to protect its users’ data, including advanced encryption of its server and a HIPAA-compliant privacy infrastructure.
So far, EVA has undergone clinical trials in Mexico, and through these trials has seen 87.9 percent sensitivity and 81.7 percent specificity from the device. In Mexico, the company has already sold 5,000 devices and plans to begin shipping the first several hundred by October of this year.
And the momentum for EVA is only increasing. In 2017, Cantú was awarded Mexico’s Presidential Medal for Science and Technology and so far this year Higia Technologies has won first place in the SXSW’s International Pitch Competition, been named one of “30 Most Promising Businesses of 2018” by Forbes Magazine Mexico and this summer received a $120,000 investment from Y Combinator.
Moving forward, the company is looking to enter the U.S. market and has plans to begin clinical trials with Stanford Medicine X in October 2018 that should run for about a year. Following these trials, Dr. Kaszynski says that Higia Technologies will continue the process of seeking FDA approval to sell the inserts first as a medical device, accessible at a doctor’s office, and then as a device that users can have at home.
The final pricing for the device is still being decided, but Cantú says he wants the product to be as affordable and accessible as possible so it can be the first choice for women in developing countries where preventative cancer screening is desperately needed.
Cryptocurrency projects can crash and burn if developers don’t predict how humans will abuse their blockchains. Once a decentralized digital economy is released into the wild and the coins start to fly, it’s tough to implement fixes to the smart contracts that govern them. That’s why Incentivai is coming out of stealth today with its artificial intelligence simulations that test not just for security holes, but for how greedy or illogical humans can crater a blockchain community. Crypto developers can use Incentivai’s service to fix their systems before they go live.
“There are many ways to check the code of a smart contract, but there’s no way to make sure the economy you’ve created works as expected,” says Incentivai’s solo founder Piotr Grudzień. “I came up with the idea to build a simulation with machine learning agents that behave like humans so you can look into the future and see what your system is likely to behave like.”
Incentivai will graduate from Y Combinator next week and already has a few customers. They can either pay Incentivai to audit their project and produce a report, or they can host the AI simulation tool like a software-as-a-service. The first deployments of blockchains it’s checked will go out in a few months, and the startup has released some case studies to prove its worth.
“People do theoretical work or logic to prove that under certain conditions, this is the optimal strategy for the user. But users are not rational. There’s lots of unpredictable behavior that’s difficult to model,” Grudzień explains. Incentivai explores those illogical trading strategies so developers don’t have to tear out their hair trying to imagine them.
Protecting crypto from the human x-factor
There’s no rewind button in the blockchain world. The immutable and irreversible qualities of this decentralized technology prevent inventors from meddling with it once in use, for better or worse. If developers don’t foresee how users could make false claims and bribe others to approve them, or take other actions to screw over the system, they might not be able to thwart the attack. But given the right open-ended incentives (hence the startup’s name), AI agents will try everything they can to earn the most money, exposing the conceptual flaws in the project’s architecture.
“The strategy is the same as what DeepMind does with AlphaGo, testing different strategies,” Grudzień explains. He developed his AI chops earning a master’s at Cambridge before working on natural language processing research for Microsoft.
Here’s how Incentivai works. First a developer writes the smart contracts they want to test for a product like selling insurance on the blockchain. Incentivai tells its AI agents what to optimize for and lays out all the possible actions they could take. The agents can have different identities, like a hacker trying to grab as much money as they can, a faker filing false claims or a speculator that cares about maximizing coin price while ignoring its functionality.
Incentivai then tweaks these agents to make them more or less risk averse, or care more or less about whether they disrupt the blockchain system in its totality. The startup monitors the agents and pulls out insights about how to change the system.
For example, Incentivai might learn that uneven token distribution leads to pump and dump schemes, so the developer should more evenly divide tokens and give fewer to early users. Or it might find that an insurance product where users vote on what claims should be approved needs to increase its bond price that voters pay for verifying a false claim so that it’s not profitable for voters to take bribes from fraudsters.
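Incentivai hasn’t published its implementation, but the underlying idea — populate a system with self-interested agents, then compare outcomes as you tweak a design parameter — can be illustrated with a toy sketch. Everything here is hypothetical (the agent payoffs, the `bond` parameter, the risk-aversion range); it is not Incentivai’s actual model, just a minimal agent simulation of the bribe-versus-bond example above:

```python
import random

def simulate(bond, bribe=5.0, honest_payoff=2.0,
             catch_prob=0.6, agents=1000, seed=0):
    """Return the fraction of agents who approve a false claim for a bribe.

    Each agent compares the honest payoff against the expected value of
    taking the bribe: the bribe minus the bond forfeited if caught, with
    risk aversion scaling how heavily the potential loss is weighed.
    """
    rng = random.Random(seed)
    cheaters = 0
    for _ in range(agents):
        risk_aversion = rng.uniform(0.5, 2.0)  # heterogeneous agents
        expected_cheat = bribe - catch_prob * bond * risk_aversion
        if expected_cheat > honest_payoff:
            cheaters += 1
    return cheaters / agents

# Raising the bond should make bribe-taking unattractive for most agents.
low_bond = simulate(bond=1.0)
high_bond = simulate(bond=10.0)
assert high_bond < low_bond
```

Running the same population under two parameter settings and comparing aggregate behavior is the design loop the article describes: the developer raises the bond until taking bribes stops being profitable for the simulated voters.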
Grudzień has done some predictions about his own startup too. He thinks that if the use of decentralized apps rises, there will be a lot of startups trying to copy his approach to security services. He says there are already some doing token engineering audits, incentive design and consultancy, but he hasn’t seen anyone else with a functional simulation product that’s produced case studies. “As the industry matures, I think we’ll see more and more complex economic systems that need this.”
In this podcast, Hanapin experts Matt Umbro and Jeff Allen discuss the new Google Ads update and how it will affect the PPC industry, and offer ways you can take advantage of this extra ad space.
Read more at PPCHero.com
That access was suspended last month, with Facebook saying it was investigating whether Crimson Hexagon had violated any of its data use policies. (The social network, of course, has been dealing with the fallout from a separate controversy over user data.)
In this case, the issue appears to be related to some of Crimson Hexagon’s contracts with the U.S. government, with Facebook saying it wasn’t aware of those contracts when contacted by The Wall Street Journal.
What followed, according to a blog post by Crimson Hexagon’s Dan Shore, was “several weeks of constructive discussion and information exchange.” It seems that Facebook was satisfied with what it learned and ended Crimson Hexagon’s suspension.
Shore said that government customers make up less than 5 percent of the company’s business, adding, “To our knowledge, no government customer has used the Crimson Hexagon platform for surveillance of any individual or group.”
“Over time we have enhanced our vetting procedures for government customers,” he said. “Nevertheless, we recognize it is important to go beyond vetting by monitoring these government customers on an ongoing basis to ensure the public’s expectations of privacy are met. As governments and government-sponsored organizations change how they use data, we too must change.”