Nvidia today announced that its new Ampere-based data center GPUs, the A100 Tensor Core GPUs, are now available in alpha on Google Cloud. As the name implies, these GPUs were designed for AI workloads, as well as data analytics and high-performance computing solutions.
The A100 promises a significant performance improvement over previous generations. Nvidia says the A100 can boost training and inference performance by more than 20x compared to its predecessors (though in most benchmarks you’re more likely to see 6x or 7x improvements), and it tops out at about 19.5 TFLOPs of single-precision performance and 156 TFLOPs for Tensor Float 32 workloads.
“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, Director of Product Management at Google Cloud, in today’s announcement. “With our new A2 VM family, we are proud to be the first major cloud provider to market Nvidia A100 GPUs, just as we were with Nvidia’s T4 GPUs. We are excited to see what our customers will do with these new capabilities.”
Google Cloud users can get access to instances with up to 16 of these A100 GPUs, for a total of 640GB of GPU memory and 1.3TB of system memory.
Over the past couple of weeks, paid search specialists Adthena have been sharing some fascinating insights into how the coronavirus pandemic is affecting the paid search sector in markets around the globe.
I spoke to Adthena’s VP of marketing, Ashley Fletcher, about the questions C-level executives are asking, their short- and longer-term plans, and what he is observing in the data.
We’re past the shock stage
C-level executives now want to see the lay of the land amid the coronavirus outbreak. Retailers, for instance, want a view of who’s moving out, and many are asking:
- What’s happened to strategy?
- How are markets reacting?
- How do we now adjust?
Paid search is a fantastic window on all of this. While our offline lives have been massively disrupted by the coronavirus, the paid search sector has remained comparatively ever-present. We see customers switch to the channel when they can’t use others, and we have good data segmentation across products and business verticals.
“Search intelligence offers not only remarkable clarity but also a real-time lens into market movements, trends, and opportunities across verticals,” Fletcher writes at the Adthena blog.
“PPC is a stable, transparent refuge every marketer needs to be leveraging right now to keep the oars in the water.”
There is positivity even in industries that have been hardest hit
One of the surprises for Fletcher is that the sentiment among marketers he is speaking to is not all doom and gloom.
“Businesses like the UK travel sector (we’re seeing this with some of our hotel chain clients) have been the hardest hit. But the positive aspect of this is we are already seeing this sector with eyes on their recovery and looking at where they go next,” says Fletcher.
“People are prepared to lower spend now, but are gearing up for coming out the other side.”
Data showing significant feats of agility
It is not only the travel sector which has had to change tack quickly.
“In the food vertical, many brands have been seen to suspend some generic ads, but they are keeping the lights on for brand traffic,” he says.
“Managers are coming to the paid search data asking: What’s my brand looking like while competition might be able to take more capacity?”
This is particularly visible as vast numbers of users seek to use delivery services offered by the likes of Tesco and Sainsbury’s in the UK, as well as Coles in Australia (see below).
Digital-first brands like Amazon, Catch, and Hello Fresh are jumping into the gaps created as the legacy supermarkets quickly hit capacity for grocery deliveries.
We can also see Amazon shifting paid ad priorities to essential products, which is creating further gaps. This means other companies like Best Buy have been able to garner clicks for things Amazon has had a near-monopoly on until now, such as TVs, kitchenware, and mobile phones.
Fletcher is seeing this agility being demonstrated in other sectors too – from online banking to online betting.
Takeaways for digital marketers
The paid search sector gives us a fascinating glimpse into the disruption at play across global business. But the positivity, agility, and resolve on display are heartening too.
The real-time data available to paid search marketers answers three key questions:
- How consumer habits sometimes shift rapidly
- How their brands are retaining visibility in the melee
- How competitors are changing strategy and focus in order to adapt
In some cases, we can certainly see prices go up and clicks go down as users and brands change their ways. The flipside of this is that gaps and opportunities are opening up in surprising places as big names shift their focus to specific products and services. Smart marketers will be observing those gaps and acting on them.
Yet the most important takeaway from Adthena’s data concerns long-term strategy
Here in the UK and US, we may still be in the beginning stages of this global event, but while many businesses have been forced to make some quick near-term changes, some are already making plans as to what their priorities will be when coronavirus is behind them.
Marketers can expect that business and consumer habits may well be altered entirely, but at the very least search and data will remain vital. In order to stay agile and competitive in the markets of tomorrow, they are likely to become even more important.
The post Coronavirus and the paid search sector: How businesses are gearing up to come out the other side appeared first on Search Engine Watch.
More than a decade after announcing that it would keep Polaroid’s abandoned instant film alive, The Impossible Project has done the… improbable: It has officially become the brand it set out to save. And to commemorate the occasion, there’s a new camera, the Polaroid Now.
The convergence of the two brands has been in the works for years, and in fact Impossible Project products were already Polaroid-branded. But this marks a final and satisfying shift in one of the stranger relationships in startups or photography.
I first wrote about The Impossible Project in early 2009 (and apparently thought it was a good idea to Photoshop a Bionic Commando screenshot as the lead image), when the company announced its acquisition of some Polaroid instant film manufacturing assets.
Polaroid at the time was little more than a shell. Having declined since the ’80s and more or less shuttered in 2001, the company was relaunched as a digital brand and film sales were phased out. This was unsuccessful, and in 2008 Polaroid filed for bankruptcy again.
This time, however, it was getting rid of its film production factories, and a handful of Dutch entrepreneurs and Polaroid experts took over the lease as The Impossible Project. But although the machinery was there, the patents and other IP for the famed Polaroid instant film were not. So they basically had to reinvent the process from scratch — and the early results were pretty rough.
But they persevered, aided by a passionate community of Polaroid owners, continuously augmented by the film-curious who want something more than a Fujifilm Instax but less than a 35mm SLR. In time the process matured and Impossible developed new films and distribution partners, growing more successful even as Polaroid continued applying its brand to random, never particularly good photography-adjacent products. They even hired Lady Gaga as “Creative Director,” but the devices she hyped at CES never really materialized.
In 2017, the student became the master as Impossible’s CEO purchased the Polaroid brand name and IP. They relaunched Impossible as “Polaroid Originals” and released the OneStep 2 camera using a new “i-Type” film process that more closely resembled old Polaroids (while avoiding the expensive cartridge battery).
Polaroid continued releasing new products in the meantime — presumably projects that were under contract or in development under the brand before its acquisition. While the quality has increased from the early days of rebranded point-and-shoots, none of the products has ever really caught on, and digital instant printing (Polaroid’s last redoubt) has been eclipsed by a wave of nostalgia for real film, Instax Mini in particular.
But at last the merger dance is complete and Polaroid, Polaroid Originals and The Impossible Project are finally one and the same. All devices and film will be released under the Polaroid name, though there may be new sub-brands like i-Type and the new Polaroid Now camera.
Speaking of which, the Now is far from a complete reinvention of the camera. It’s a “friendlier” redesign that takes after the popular OneStep but adds improved autofocus, a flash-adjusting light sensor, a better battery and a few other nips and tucks. At $100 it’s not too hard on the wallet, but remember that film is going to run you about $2 per shot. That’s how they get you.
It’s been a long, strange trip to watch, but ultimately a satisfying one: Impossible made a bet on the fundamental value of instant film photography, while a series of owners bet on the Polaroid brand name to sell anything they put it on. The riskier long-term play won out in the end (though many got rich running Polaroid into the ground over and over), and now with a little luck the brand that started it all will continue its success.
A decade ago, it was almost inconceivable that nearly every household item could be hooked up to the internet. These days, it’s near impossible to avoid smart home gadgets, and they’re vacuuming up a ton of new data that we’d never normally think about.
Thermostats know the temperature of your house, and smart cameras and sensors know when someone’s walking around your home. Smart assistants know what you’re asking for, and smart doorbells know who’s coming and going. And thanks to the cloud, that data is available to you from anywhere — you can check in on your pets from your phone or make sure your robot vacuum cleaned the house.
Because this data is stored by or accessible to the smart home tech makers, law enforcement and government agencies have increasingly sought data from the companies to solve crimes.
And device makers won’t say if your smart home gadgets have been used to spy on you.
For years, tech companies have published transparency reports — a semi-regular disclosure of the number of demands or requests a company gets from the government for user data. Google was first in 2010. Other tech companies followed in the wake of Edward Snowden’s revelations that the government had enlisted tech companies’ aid in spying on their users. Even telcos, implicated in wiretapping and turning over Americans’ phone records, began to publish their figures to try to rebuild their reputations.
As the smart home revolution began to thrive, police saw new opportunities to obtain data where they hadn’t before. Police sought Echo data from Amazon to help solve a murder. Fitbit data was used to charge a 90-year-old man with the murder of his stepdaughter. And recently, Nest was compelled to turn over surveillance footage that led to gang members pleading guilty to identity theft.
Yet, Nest — a division of Google — is the only major smart home device maker that has published how many data demands it receives.
As first noted by Forbes last week, Nest’s little-known transparency report doesn’t reveal much — only that it’s turned over user data about 300 times since mid-2015 on over 500 Nest users. Nest also said it hasn’t to date received a secret order for user data on national security grounds, such as in cases of investigating terrorism or espionage. Nest’s transparency report is woefully vague compared to some of the more detailed reports by Apple, Google and Microsoft, which break out their data requests by lawful request, by region and often by the kind of data the government demands.
As Forbes said, “a smart home is a surveilled home.” But at what scale?
We asked some of the most well-known smart home makers on the market if they plan to release a transparency report, or disclose the number of demands they receive for data from their smart home devices.
For the most part, we received fairly dismal responses.
What the big four tech giants said
Amazon did not respond to requests for comment when asked if it will break out the number of demands it receives for Echo data, but a spokesperson told me last year that while its reports include Echo data, it would not break out those figures.
Facebook said that its transparency report section will include “any requests related to Portal,” its new hardware screen with a camera and a microphone. Although the device is new, a spokesperson did not comment on whether the company will break out the hardware figures separately.
Google pointed us to Nest’s transparency report but did not comment on its own efforts in the hardware space — notably its Google Home products.
And Apple said that there’s no need to break out its smart home figures — such as its HomePod — because there would be nothing to report. The company said user requests made to HomePod are given a random identifier that cannot be tied to a person.
What the smaller but notable smart home players said
August, a smart lock maker, said it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA),” but did not comment on the number of subpoenas, warrants and court orders it receives. “August does comply with all laws and when faced with a court order or warrant, we always analyze the request before responding,” a spokesperson said.
Roomba maker iRobot said it “has not received any demands from governments for customer data,” but wouldn’t say if it planned to issue a transparency report in the future.
Both Arlo, the former Netgear smart home division, and Signify, formerly Philips Lighting, said they do not have transparency reports. Arlo didn’t comment on its future plans, and Signify said it has no plans to publish one.
Ring, a smart doorbell and security device maker, did not answer our questions on why it doesn’t have a transparency report, but said it “will not release user information without a valid and binding legal demand properly served on us” and that Ring “objects to overbroad or otherwise inappropriate demands as a matter of course.” When pressed, a spokesperson said it plans to release a transparency report in the future, but did not say when.
Spokespeople for Honeywell and Canary — both of which have smart home security products — did not comment by our deadline.
And, Samsung, a maker of smart sensors, trackers and internet-connected televisions and other appliances, did not respond to a request for comment.
Only Ecobee, a maker of smart switches and sensors, said it plans to publish its first transparency report “at the end of 2018.” A spokesperson confirmed that, “prior to 2018, Ecobee had not been requested nor required to disclose any data to government entities.”
All in all, that paints a fairly dire picture for anyone wondering whether the gadgets in their home, when they aren’t working for them, could be helping the government.
As helpful and useful as smart home gadgets can be, few fully understand the breadth of data that the devices collect — even when we’re not using them. Your smart TV may not have a camera to spy on you, but it knows what you’ve watched and when — which police used to secure a conviction of a sex offender. Even data from the moment a murder suspect pushed the button on his home alarm key fob was enough to help convict him.
Two years ago, former U.S. director of national intelligence James Clapper said the government was looking at smart home devices as a new foothold for intelligence agencies to conduct surveillance. And it’s only going to become more common as the number of internet-connected devices grows. Gartner said more than 20 billion devices will be connected to the internet by 2020.
The chances that the government is spying on you through the internet-connected camera in your living room or your thermostat may be slim, but it’s naive to think that it can’t.
But the smart home makers wouldn’t want you to know that. At least, most of them.
As a society, we have been conditioned with the age-old saying “Build it and they shall come”.
However, does this hold true for the digital world and your website? And more specifically, what about Google?
In most organizations, organic search optimization becomes a layer that is applied after the fact, once the brand teams, product owners and tech teams have decided what a website’s architecture should be.
However, what if I were to tell you that if search were a primary driver in your site’s architecture you could see a 200%+ performance gain out of your organic channel (and paid quality scores if you drive paid to organic pages), along with meeting brand guidelines and tech requirements?
The top 5 benefits of architecture driven by organic search
- Matching Google relevancy signals with audience segmentation and user demand
- Categorization of topical and thematic content silos
- A defined taxonomy and targeted URL naming schemes
- The ability to scale content as you move up the funnel
- A logical user experience that both your audience and Google can understand
When search strategy is aligned with your architecture you gain important relevancy signals that Google needs to understand your website.
You position yourself to acquire volume and market share that you would otherwise lose out on. In addition, you will be poised to win organic sitelinks in Google, answer box results and local map pack placements.
Imagine opening a 1,000-page hardcover book and looking for the table of contents, only to find it is either missing completely or reads with zero logic. As a user, how would you feel? Would you know what the chapters are about? Get a sense of what the book is about?
If you want Google to understand what your website is about and how it is put together, then make sure to communicate it properly. That is the first step of proper site architecture.
Let us pick on a few common, simplistic examples:
/about-us (About who?)
/contact-us (Contact who?)
/products/ (What kind of products?)
/articles (Articles about what?)
/categories (Categories of what?)
And my very favorite…
/blog (Blog? What is that about? Could be anything in the world)
These sub-directories within the infrastructure of your website are key components – they are the “chapter names” in your book. Naming something “articles” lacks the relevancy signals needed to describe what your chapter is about.
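To make this concrete, compare generic sub-directory names with descriptive ones. (The descriptive names below are hypothetical, invented purely for illustration; substitute the terms your own keyword research surfaces.)

```
Generic (weak signal)       Descriptive (clear topical signal)
/articles               →   /business-plan-guides
/products               →   /accounting-software
/blog                   →   /small-business-advice
```

The right-hand names tell both users and Google what the “chapter” contains before a single page is read.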
The upper-level sub-directories are known as parent level pages, which means any pages underneath them are child level pages. As you build and scale child level pages, they should be categorized under the proper parent level page. This allows all of the related content of the child pages to “roll up” and become relevant for the parent level page.
Google thrives on this sort of organization, as it provides a good user experience for its users and communicates systematically what the pages are supposed to be about and how they relate to each other.
Example of a proper architecture
As you can see from this example, the two category levels (business plan template and how to write a business plan) both have relevancy that rolls up to the term business plans.
Then as you drill down one level deeper, you can see that you would isolate and build pages that are for business plan outline and business plan samples. These both roll up to the business plan template category.
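Based on the description above, the hierarchy can be sketched as a simple tree (the URL slugs are illustrative reconstructions, not taken from the original figure):

```
/business-plans/                            ← parent (primary silo)
├── /business-plans/template/               ← category: business plan template
│   ├── /business-plans/template/outline/   ← child: business plan outline
│   └── /business-plans/template/samples/   ← child: business plan samples
└── /business-plans/how-to-write/           ← category: how to write a business plan
```

Each child’s relevancy “rolls up” to its parent, so the silo as a whole builds authority for the head term business plans.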
Through proper keyword targeting and research, you would locate the primary keyword driver that matches the page intent and carries high search volume, and use it in the URL naming conventions. This communicates to Google what the page will be about, while matching high customer demand from a search perspective.
Most brand or product teams create and name a structure based on internal reasons, or no particular reason at all. So rather than applying search filters after the fact and trying to retrofit, do the research and understand the volume drivers – then apply them to the architectural plan. You will have significant gains in your rankings and share of voice.
With a structure like this, every page has a home and a purpose. This architecture is not only designed for the “current state” but will also scale easily for a “future state”. It becomes very easy to add child categories under the primary silo category, allowing you to scale and move up the funnel to capture new market share and volume.
How does user experience (UX) play a role in architecture?
A common crossroads we encounter is the UX as it relates to search, content marketing and architecture. UX typically wants minimal content, limited navigational options and a controlled user journey.
However, keep in mind that a UX journey is considered from one point of entry (typically the home page), while with search, done properly, every page becomes a point of entry. So we need to solve for both.
The good news is that the pure architecture structure and URL naming schemes can be completely different from the UX. Build the architecture the proper way and you can still apply any UX as an overlay.
The primary differences arise between UX and navigation. Here again, UX typically wants to limit the choices and control the journey, which means the navigation is reduced and not all architectural levels are available and visible.
The challenge here is that you want Google to rank all of these pages number one in the world; yet by leaving them out of your navigation, you are also telling Google they are not important enough to you.
A rule of thumb I learned almost 20 years ago is to make sure every page can stand on its own. A user should never have to go “back” in order to go forward. So make sure your navigation and categorical pages are available from every page, especially since, with organic search, a user can enter your site and the journey at any level.
Now does this mean abandoning UX? No. You can still control the journey through your primary CTAs and imagery, without sacrificing navigation or architecture.
On this week’s episode of Technotopia we talk to Sarah Kaufman, the Assistant Director at the Rudin Center for Transportation Policy & Management. Kaufman is working to create new transit opportunities for New Yorkers – and the world – and expects the future to be quite interesting. Her prediction? As we move towards self-driving cars we will see more options for…
Startups – TechCrunch