“How Long is Harry Potter?” is asked in a diagram from a Google patent. The answer is unlikely to have anything to do with the dimensions of the fictional character, but may have something to do with one of the best-selling books featuring Harry Potter as a main character.
When questions are asked as queries at Google, sometimes they aren’t asked clearly, with enough precision to make an answer easy to provide. How do vague questions get answered?
Question answering seems to be a common topic in Google patents recently. I wrote about one not long ago in the post How Google May Handle Question Answering when Facts are Missing.
So this post is also about question answering, but it focuses on issues with the questions rather than the answers, and particularly on vague questions.
Early in the description for a recently granted Google Patent, we see this line, which is the focus of the patent:
Some queries may indicate that the user is searching for a particular fact to answer a question reflected in the query.
I’ve written a few posts about Google working on answering questions, and it is good seeing more information about that topic being published in a new patent. As I have noted, this one focuses upon when questions asking for facts may be vague:
When a question-and-answer (Q&A) system receives a query, such as in the search context, the system must interpret the query, determine whether to respond, and if so, select one or more answers with which to respond. Not all queries may be received in the form of a question, and some queries might be vague or ambiguous.
The patent provides an example query for “Washington’s age.”
Washington’s Age could be referring to:
- President George Washington
- Actor Denzel Washington
- The state of Washington
- Washington D.C.
For the Q&A system to work correctly, it would have to decide which Washington the searcher who typed that query into a search box was likely interested in finding the age of. Trying that query, Google decided that I was interested in George Washington:
The problem that this patent is intended to resolve is captured in this line from the summary of the patent:
The techniques described in this paper describe systems and methods for determining whether to respond to a query with one or more factual answers, including how to rank multiple candidate topics and answers in a way that indicates the most likely interpretation(s) of a query.
How would Google potentially resolve this problem?
It would likely start by trying to identify one or more candidate topics from a query. It may try to generate, for each candidate topic, a candidate topic-answer pair that includes both the candidate topic and an answer to the query for the candidate topic.
It would obtain search results based on the query that reference an annotated resource, which is a resource that, based on an automated evaluation of its content, is associated with an annotation identifying one or more likely topics associated with that resource.
For each candidate topic-answer pair, a score would be determined based on:
(i) The candidate topic appearing in the annotations of the resources referenced by one or more of the search results
(ii) The query answer appearing in annotations of the resources referenced by the search results, or in the resources referenced by the search results.
A decision would also be made on whether to respond to the query, with one or more answers from the candidate topic-answer pairs, based on the scores for each.
The patent tells us about some optional features as well.
- The scores for the candidate topic-answer pairs would have to meet a predetermined threshold
- This process may decide not to respond to the query with any of the candidate topic-answer pairs
- One or more of the highest-scoring topic-answer pairs might be shown
- A topic-answer pair might be selected from one of a number of interconnected nodes of a graph
- The score for a topic-answer pair may also be based upon a respective query relevance score of the search results that include annotations in which the candidate topic occurs
- The score to the topic-answer pair may also be based upon a confidence measure associated with each of one or more annotations in which the candidate topic in a respective candidate topic-answer pair occurs, which could indicate the likelihood that the answer is correct for that question
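To make the scoring flow above concrete, here is a minimal Python sketch of how candidate topic-answer pairs for a vague query like “Washington’s age” might be scored against annotated search results. Everything here (the candidate pairs, the simulated results, the weights, and the threshold) is an assumption for illustration; the patent describes the approach only in general terms.

```python
# Hypothetical sketch of the patent's scoring flow; all names, weights
# and the threshold are illustrative assumptions, not from the patent.

# Candidate interpretations of the vague query "Washington's age".
candidate_pairs = [
    {"topic": "George Washington", "answer": "67 years (1732-1799)"},
    {"topic": "Denzel Washington", "answer": "64 years"},
    {"topic": "Washington (state)", "answer": "130 years (statehood 1889)"},
]

# Simulated search results: each references an annotated resource, i.e.
# a page tagged with likely topics, plus the text of the page itself.
search_results = [
    {"annotations": ["George Washington", "US Presidents"],
     "text": "George Washington died at age 67.", "relevance": 0.9},
    {"annotations": ["George Washington"],
     "text": "The first president was born in 1732.", "relevance": 0.7},
    {"annotations": ["Denzel Washington"],
     "text": "The actor turned 64 this year.", "relevance": 0.4},
]

SCORE_THRESHOLD = 1.0  # assumed predetermined threshold

def score_pair(pair, results):
    """Score a pair by (i) the topic occurring in result annotations and
    (ii) the answer occurring in the referenced resource, weighted by
    each result's query relevance score."""
    score = 0.0
    for r in results:
        if pair["topic"] in r["annotations"]:
            score += r["relevance"]            # (i) topic in annotations
        if pair["answer"].split()[0] in r["text"]:
            score += 0.5 * r["relevance"]      # (ii) answer in resource text
    return score

scored = sorted(((score_pair(p, search_results), p) for p in candidate_pairs),
                reverse=True, key=lambda t: t[0])

# Respond only if the best interpretation clears the threshold.
best_score, best = scored[0]
if best_score >= SCORE_THRESHOLD:
    print(f"{best['topic']}: {best['answer']}")
else:
    print("No direct answer shown")
```

With these made-up results, the George Washington interpretation dominates the scoring and clears the threshold, so a direct answer would be shown; had no pair cleared it, the system would fall back to ordinary search results.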
Knowledge Graph Connection to Vague Questions?
This question-answering system can include a knowledge repository which includes a number of topics, each of which includes attributes and associated values for those attributes.
It may use a mapping module to identify one or more candidate topics from the topics in the knowledge repository, which may be determined to relate to a possible subject of the query.
An answer generator may generate for each candidate topic, a candidate topic-answer pair that includes:
(i) the candidate topic, and
(ii) an answer to the query for the candidate topic, wherein the answer for each candidate topic is identified from information in the knowledge repository.
A search engine may return search results based on the query, which can reference an annotated resource: a resource that, based on an automated evaluation of its content, is associated with an annotation that identifies one or more likely topics associated with the resource.
A score may be generated for each candidate topic-answer pair based on:
(i) an occurrence of the candidate topic in the annotations of the resources referenced by one or more of the search results
(ii) an occurrence of the answer in annotations of the resources referenced by the one or more search results, or in the resources referenced by the one or more search results. A front-end system at the one or more computing devices can determine whether to respond to the query with one or more answers from the candidate topic-answer pairs, based on the scores.
The additional features above for topic-answers appear to be repeated in this knowledge repository approach:
- The front end system can determine whether to respond to the query based on a comparison of one or more of the scores to a predetermined threshold
- Each of the number of topics in the knowledge repository can be represented by a node in a graph of interconnected nodes
- The returned search results can be associated with a respective query relevance score and the score can be determined by the scoring module for each candidate topic-answer pair based on the query relevance scores of one or more of the search results that reference an annotated resource in which the candidate topic occurs
- For one or more of the candidate topic-answer pairs, the score can be further based on a confidence measure associated with each of one or more annotations in which the candidate topic in a respective candidate topic-answer pair occurs, or each of one or more annotations in which the answer in a respective candidate topic-answer pair occurs
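As a rough illustration of the knowledge repository approach, here is a toy sketch: topics stored as nodes with attributes and values, a mapping module that identifies candidate topics for a query, and an answer generator that builds topic-answer pairs from repository values. The topic names, attributes, and matching rules are invented for the example and are not taken from the patent.

```python
# Toy knowledge repository: each topic is a node carrying attributes and
# associated values, as in a knowledge graph. Contents are illustrative.
knowledge_repository = {
    "George Washington": {"type": "person", "age": "67 (1732-1799)"},
    "Denzel Washington": {"type": "person", "age": "64"},
    "Washington (state)": {"type": "place", "area": "184,661 km2"},
}

def candidate_topics(query):
    """Mapping module: return topics whose name shares a term with the
    query (a deliberately naive stand-in for real entity matching)."""
    terms = set(query.lower().replace("'s", "").split())
    cands = []
    for topic in knowledge_repository:
        tokens = {w.strip("()") for w in topic.lower().split()}
        if terms & tokens:
            cands.append(topic)
    return cands

def topic_answer_pairs(query, attribute):
    """Answer generator: pair each candidate topic with the value of the
    queried attribute, when the repository holds one."""
    pairs = []
    for topic in candidate_topics(query):
        value = knowledge_repository[topic].get(attribute)
        if value is not None:
            pairs.append((topic, value))
    return pairs

print(topic_answer_pairs("washington's age", "age"))
```

Note that “Washington (state)” is matched as a candidate topic but produces no pair, since the repository holds no age attribute for it; in the patent’s terms, only topics with an identifiable answer survive to be scored.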
Advantages of this Vague Questions Approach
- Candidate responses to the query can be scored so that a Q&A system or method can determine whether to provide a response to the query.
- If the query is not asking a question or none of the candidate answers are sufficiently relevant to the query, then no response may be provided
- The techniques described herein can interpret a vague or ambiguous query and provide a response that is most likely to be relevant to what a user desired in submitting the query.
This patent about answering vague questions is:
Determining question and answer alternatives
Inventors: David Smith, Engin Cinar Sahin and George Andrei Mihaila
Assignee: Google Inc.
US Patent: 10,346,415
Granted: July 9, 2019
Filed: April 1, 2016
A computer-implemented method can include identifying one or more candidate topics from a query. The method can generate, for each candidate topic, a candidate topic-answer pair that includes both the candidate topic and an answer to the query for the candidate topic. The method can obtain search results based on the query, wherein one or more of the search results references an annotated resource. For each candidate topic-answer pair, the method can determine a score for the candidate topic-answer pair for use in determining a response to the query, based on (i) an occurrence of the candidate topic in the annotations of the resources referenced by one or more of the search results, and (ii) an occurrence of the answer in annotations of the resources referenced by the one or more search results, or in the resources referenced by the one or more search results.
Vague Questions Takeaways
I am reminded of a 2005 Google Blog post called Just the Facts, Fast when this patent tells us that sometimes it is “most helpful to a user to respond directly with one or more facts that answer a question determined to be relevant to a query.”
The different factors that might be used to determine which answer to show, if an answer is shown, include a confidence level, which may be confidence that an answer to a question is correct. That reminds me of the association scores of attributes related to entities that I wrote about in Google Shows Us How It Uses Entity Extractions for Knowledge Graphs. That patent told us that those association scores for entity attributes might be generated over the corpus of web documents as Googlebot crawled pages extracting entity information, so those confidence levels might be built into the knowledge graph for attributes that may be topic-answers for a question answering query.
A webpage that is relevant for such a query, and that an answer might be taken from may be used as an annotation for a displayed answer in search results.
The post How would Google Answer Vague Questions in Queries? appeared first on SEO by the Sea ⚓.
Don’t get me wrong, Google Grants is an amazing “in-kind” gift for qualified 501(c)(3) nonprofits (especially for those who are utilizing it efficiently). However, times have changed since its inception in 2003, and considering the multi-device environment that we live in, Google should consider adapting their mobile network as a viable option for Google Grantees. Maybe call it “GrantsMobile”?
In this post, I will discuss the reasons why Google should revamp their Grants program to be more mobile app friendly.
Nonprofits have been “Going Mobile” for a while
The idea that Nonprofits have become “less savvy” as compared to “For-Profit” organizations is simply not true. Even though nonprofits may not have the big advertising budgets as do for-profit companies, they are savvy enough to “fish where the fish are” in trying to increase awareness, volunteerism and most importantly fundraising. In a Capterra Nonprofit Technology Blog article published back in 2014 entitled “The Essential Guide to Going Mobile for Nonprofits“, author Leah Readings talks about the importance for Nonprofits to be more mobile because it creates a wider range of communication between the organization and its members. Readings also states “Allowing for online donation pages or portals, or donation apps, makes it much easier for your members to donate—when all they have to do is click a few buttons in order to make a donation, giving becomes easier, and in turn will encourage more people to give.“
Need more convincing? In a 2013 article from InternetRetailer.com entitled “Mobile donations triple in 2012” (which was also mentioned in the Capterra article) the author goes on to quote from a fundraising technology and services provider Frontstream (formerly Artez Interactive) which states “nonprofits that offer mobile web sites, apps or both for taking donations generate up to 123% more individual donations per campaign than organizations that don’t.“
Why Google Mobile is Ripe for Nonprofits:
If you have ever done any mobile advertising within Google AdWords (formerly AdMob), you know that the system is pretty robust and is considered one of the best platforms to promote apps on both Google Play and the iTunes store. Moreover, advertisers can easily track engagements and downloads back to the specific audience that they are targeting. The costs are also much more affordable than the traditional $1-2 CPC offered to Google Grants accounts, which can only run on Google.com.
Here are the mobile app promotion campaigns offered by Google AdWords:
Universal App Campaigns:
AdWords creates ads for your Android app in a variety of auto-generated formats to show across the Search, Display and YouTube networks.
- Ads are generated for you based on creative text you enter, as well as your app details in the Play Store (e.g. your icon and images). These ads can appear on all available networks
- Add an optional YouTube video link for your ads to show on YouTube as well.
Mobile app installs
Increase app downloads with ads sending people directly to app stores to download your app.
- Available for Search Network, Display Network, and YouTube
- Ad formats include standard, image and video app install ads
Mobile app engagement
Re-engage your existing users with ads that deep link to specific screens within your mobile app. Mobile app engagement campaigns are a great choice if you’re focused on finding folks interested in your app content, getting people who have installed your app to try your app again, or to open your app and take a specific action. These types of ads allow flexibility for counting conversions, bidding and targeting.
- Available for Search Network and Display Network campaigns
- Ad formats include standard and image app engagement ads
A lot has changed since the birth of Google Grants in 2003, and Google needs to continue to be socially responsible and catch up to their own standards of the online world that they helped create. Nonprofits are now, more than ever, relying on the internet to drive awareness, volunteerism and fundraising. Nonprofits, as well as everyone else for that matter, are getting their information from Facebook, Twitter, TV, radio and (still) Google using laptops, tablets and mobile devices, and it’s time for Google Grants to adapt to this new world.
At first glance launching a new social app may seem as sensible a startup idea as plunging headfirst into shark-infested waters. But with even infamous curtain-ripper Facebook now making grand claims about a ‘pivot to privacy’ it’s clear something is shifting in the commercial shipping channels that contain our digital chatter.
Whisper it: Feeds are tiring. Follows are tedious. Attention is expiring. There’s also, of course, the damage that personal digital baggage left out in the open can wreak long after the fact of a blown fuse or fleeting snap.
Public feeds have become vehicles of self-promotion; carefully and heavily curated — which of course brings its own peer pressures to keep up with friends’ lux exploits and the influencer ‘gram aesthetic that pretends life looks like a magazine spread.
Yet for a brief time, in the gritty early years of social media, there was something akin to spontaneous, confessional reality on show online. People do like to share. That’s mostly been swapped for the polish of aspirational faking it on apps like Facebook-owned Instagram. While genuine friend chatter has moved behind the quasi-closed doors of group messaging apps, like Facebook-owned WhatsApp (or rival Telegram).
If you want to chat more freely online without being defined by your existing social graph the options are less mainstream friendly to say the least.
Twitter is genuinely great if you’re willing to put in the time and effort to find interesting strangers. But its user growth problem shows most consumers just aren’t willing (or able) to do that. Telegram groups also require time and effort to track down.
Also relevant in interest-based chat: Veteran forum Reddit, and game chat platform Discord — both pretty popular, though not in a way that really cuts across the mainstream, tending to cater to more niche and/or focused interests. Neither is designed for mobile first either.
This is why Capture’s founders are convinced there’s a timely opportunity for a new social app to slot in — one which leverages smartphone sensors and AI smarts to make chatting about anything as easy as pointing a camera to take a shot.
They’re not new to the social app game, either. As we reported last year, two of Capture’s founders were part of the team behind the style transfer app Prisma, which racked up tens of millions of downloads over a few viral months of 2016.
And with such a bright feather in their cap, a number of investors, led by General Catalyst, were unsurprisingly eager to chip into Capture’s $1M seed, setting them on the road to today’s launch.
Point and chat
“The main idea behind the app is during the day you’ve got different experiences — working, watching some TV series etc, you’re sitting in an arena watching some sports, or something like that. So we imagine that you should open the app during any type of experience you have during the day,” says Capture co-founder and CEO Alexey Moiseenkov fleshing out the overarching vision for the app.
“It’s not for your friends; it’s the moment when you should share something or just ask something or discuss something with other people. Like news, for example… I want to discuss news with the people who are relevant, who want to discuss it. And so on and on. So I imagine it is about small groups with the same goal, discussing the same experience, or something like that. It’s all about your everyday life.”
“Basically you can imagine our app as like real-time forum,” he adds. “Real-time social things like Reddit. So it’s more about live discussion, not postponing something.”
Chat(room) recommendations are based on contextual inferences that Capture can glean from the mobile hardware. Namely where you are (so the app needs access to your location) and even whether you’re on the move or lounging around (it also accesses the accelerometer so can tell the angle of the phone).
The primary sensory input comes from the camera of course. So like Snap it’s a camera-first app, opening straight into the rear lens’ live view.
By default chats in Capture are public so it also knows what topics users are discussing — which in turn further feeds and hones its recommendations for chats (and indeed matching users).
Co-founder and CMO Aram Hardy (also formerly at Prisma) gives the example of the free-flowing discussion you can see unrolling in YouTube comments when a movie trailer gets its first release — as the sort of energetic, expressive discussion Capture wants to channel inside its app.
“It’s exploding,” he says. “People are throwing those comments, discussing it on YouTube, on web, and that’s a real pain because there is no tool where you can simply discuss it with people, maybe with people around you, who are just interested in this particular trailer live on a mobile device — that’s a real pain.”
“Everything which is happening around the person should be taken into consideration to be suggested in Capture — that’s our simple vision,” he adds.
Everything will mean pop culture, news, local events and interest-based communities.
Though some of the relevant sources of pop/events content aren’t yet live in the app. But the plan is to keep bulking out the suggestive mix to expand what can be discovered via chat suggestions. (There’s also a discovery tab to surface public chats.)
Hardy even envisages Capture being able to point users to an unfolding accident in their area — which could generate a spontaneous need for locals or passers by to share information.
The aim for the app — which is launching on iOS today (Android will come later; maybe by fall) — is to provide an ever ready, almost no-barrier-to-entry chat channel that offers mobile users no-strings-attached socializing free from the pressures (and limits) of existing social graphs/friend networks; as well as being a context-savvy aid for content and event discovery, which means helping people dive into relevant discussion communities based on shared interests and/or proximity.
Of course location-based chatting is hardly a new idea. (And messaging giant Telegram just added a location-based chats feature to its platform.)
But the team’s premise is that mobile users are now looking for smart ways to supplement their social graph — and it’s betting on a savvy interface unlocking and (re)channelling underserved demand.
“People are really tired of something really follower based,” argues Moiseenkov. “All this stuff with a following, liking and so on. I feel there is a huge opportunity for all the companies around the world to make something based on real-time communication. It’s more like you will be heard in this chat so you can’t miss a thing. And I think that’s a powerful shot.
“We want to create a smaller room for every community in the Internet… So you can always join any group and just start talking in a free way. So you never shared your real identity — or it’s under your control. You can share or not, it’s up to you. And I think we need that.
“It’s what we miss during this Facebook age where everybody is ‘real’. Imagine that it’s like a game. In a game you’re really free — you can express yourself what way you want. I think that’s a great idea.”
“The entry threshold [for Twitter] is enormous,” adds Hardy. “You can’t have an account on Twitter and get famous within a week if you’re not an influencer. If you’re a simple person who wants to discuss something it’s impossible. But you can just create a chat or enter any chat within Capture and instantly be heard.
“You can create a chat manually. We have an add button — you can add any chat. It will be automatically recognized and suggested to other users who are interested in these sort of things. So we want every user to be heard within Capture.”
How it works
Capture’s AI-powered chatroom recommendations are designed to work as an onboarding engine for meeting relevant strangers online — using neural networks and machine learning to do the legwork of surfacing relevant people and chats.
Here’s how the mobile app works: Open the app, point the camera at something you view as a conversational jumping off point — and watch as it processes the data using computer vision technology to figure out what you’re looking at and recommend related chats for you to join.
For example, you might point the camera around your front room and be suggested a chatroom for ‘interior design trends and ideas’, or at a pot plant and get ‘gardeners’ chat, or at your cat and get ‘pet chat’ or ‘funny pets’.
Point the camera at yourself and you might see suggestions like ‘Meet new friends’, ‘Hot or not?’, ‘Dating’, ‘Beautiful people’ — or be nudged to start a ‘Selfie chat’, which is where the app will randomly connect you with another Capture user for a one-to-one private chat.
Chat suggestions are based on an individual user’s inferred interests and local context (pulled via the phone) and also on matching users across the app based on respective usage of the app.
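As a toy sketch of that point-and-chat loop: on-device recognition yields labels for what the camera sees, and chatrooms are ranked by keyword overlap with those labels. The rooms, labels, and scoring here are invented for illustration; Capture has not published how its matching actually works.

```python
# Illustrative only: a keyword-overlap stand-in for Capture's neural
# recommendation system. Rooms and keywords are made up for the example.
chatrooms = {
    "funny pets": {"cat", "dog", "pet"},
    "gardeners": {"plant", "flower", "pot"},
    "office talk": {"pen", "desk", "laptop"},
}

def suggest_chats(labels, top_n=2):
    """Rank rooms by overlap between recognized labels and room keywords,
    returning the top suggestions (empty if nothing matches)."""
    scored = []
    for room, keywords in chatrooms.items():
        overlap = len(keywords & set(labels))
        if overlap:
            scored.append((overlap, room))
    scored.sort(reverse=True)
    return [room for _, room in scored[:top_n]]

# Pointing the camera at a cat sitting on a desk might yield:
print(suggest_chats(["cat", "desk", "pen"]))
```

In the real app the labels come from on-device neural networks and the ranking also folds in location, motion, and the user’s chat history, but the basic shape (sensor input in, ranked chatrooms out) is the same.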
At the same time, the user data being gathered is not used to pervasively profile users, as is the case with ad-supported social networks. Rather, Capture’s founders say personal data pulled from the phone — such as location — is only retained for a short time and used to power the next set of recommendations.
Capture users are also not required to provide any personal data (beyond creating a nickname) to start chatting. If they want to use Capture’s web platform they can provide an email to link their app and web accounts — but again that email address does not have to include anything linked to their real identity.
“The key tech we want to develop is a machine learning system that can suggest you the most relevant stuff and topics for you right now — based on data we have from your phone,” continues Moiseenkov. “This is like a magical moment. We do not know who you are — but we can suggest something relevant.
“This is like a smart system because we’ve got some half graph of connection between people. It’s not like the entire graph like your friends and family but it’s a graph on what chat you are in, so where are you discussing something. So we know this connection between people [based on the chats you’re participating in]… so we can use this information.
“Imagine this is somehow sort of a graph. That’s a really key part of our system. We know these intersections, we know the queries, and the intersection of queries from different people. And that’s the key here — the key machine learning system then want to match this between people and interests, between people and topics, and so on.
“On top of that we’ve got recognition stuff for images — like six or seven neural networks that are working to recognize the stuff, what are you seeing, how, what position and so on. We’ve got some quite slick computer vision filters that can do some magic and do not miss.
“Basically we want to perform like Google in terms of query we’ve got — it’s really big system, lots of tabs — to suggest relevant chats.”
Image recognition processing is all done locally on the user’s device so Capture is not accessing any actual image data from the camera view — just mathematical models of what the AI believes it’s seen (and again they claim they don’t hold that data for long).
“Mostly the real-time stuff comes from machine learning, analyzing the data we have from your phone — everybody has location. We do not store this location… we never store your data for a long time. We’re trying to move into more private world where we do not know who you are,” says Moiseenkov.
“When you log into our app you just enter the nickname. It’s not about your phone number, it’s not about your social networks. We sometimes — when you just want to log in from other device — we ask you an email. But that’s all. Email and nickname it’s nothing. We do not know nothing about you. About your person, like where you work, who’s your friends, so on and so on. We do not know anything.
“I think that’s the true way for now. That’s why gaming is so fast in terms of growing. People just really want to share, really want to log in and sign up [in a way] that’s easy. And there is no real barriers for that — I think that’s what we want to explore more.”
Having tested Capture’s app prior to launch I can report that the first wave chat suggestions are pretty rudimentary and/or random.
Plus its image recognition often misfires: for instance my cat was identified as, among other things, a dog, hamster, mouse and even a polar bear (!), as well as a cat. So clearly the AI’s eye isn’t flawless, and variable environmental conditions around the user can produce some odd and funny results.
The promise from the founders is that recommendations will get better as the app ingests more data and the AI (and indeed Capture staff performing manual curation of chat suggestions) get a better handle on what people are clicking on and therefore wanting to talk with other users about.
They also say they’re intending to make better linkage leaps in chat suggestions — so rather than being offered a chatroom called ‘Pen’ (as I was), if you point the Capture camera at a pen, the app might instead nudge you towards more interesting-sounding chats — like ‘office talk’ or ‘writing room’ and so on.
Equally, if a bunch of users point their Capture cameras at the same pen the app might in future be smart enough to infer that they all want to join the same chatroom — and suggest creating a private group chat just for them.
On that front you could imagine members of the same club, say, being able to hop into the same discussion channel — summoning it by scanning a mutual object or design they all own or have access to. And you could also imagine people being delighted by a scanner-based interface linked to custom stuff in their vicinity — as a lower friction entry point vs typing in their directions. (Though — to be clear — the app isn’t hitting those levels of savvy right now.)
“Internally we imagine that we’re like Google but without direct query typing,” Moiseenkov tells TechCrunch. “So basically you do the query — like scanning the world around you. Like you are in some location, like some venue, imagine all this data is like a query — so then step by step we know what people are clicking, then improving the results and this step by step, month by month, so after three month or four month we will be better. So we know what people are clicking, we know what people are discussing and that’s it.”
“It’s tricky stuff,” he adds. “It’s really really hard. So we need lots of machine learning, we need lots of like our hands working on this moderating stuff, replacing some stuff, renaming, suggest different things. But I think that’s the way — that’s the way for onboarding people.
“So when people will know that they will open the app in the arena and they will receive the right results the most relevant stuff for this arena — for the concert, for the match, or something like that, it will be the game. That’s what we want to achieve. So every time during the day you open the app you receive relevant community to join. That’s the key.”
Right now the founders say they’re experimenting with various chat forms and features so they can figure out how people want to use the app and ensure they adapt to meet demand.
Hence, for example, the chatroulette-style random ‘selfie chat’ feature. Which does what it says on the tin — connecting you to another random user for a one-to-one chat. (If selfie chats do end up getting struck out of the app I hope they’ll find somewhere else to house the cute slide-puzzle animation that’s displayed as the algorithms crunch data to connect you to a serendipitous interlocutor.)
They’re also not yet decided on whether public chat content in Capture will persist indefinitely — thereby potentially creating ongoing, topics-based resources — or be ephemeral by default, with a rolling delete which kicks in after a set time to wipe the chat slate clean.
“We actually do not know what will be in the next one to three months. We need to figure out — will it be consistent or ephemeral,” admits Moiseenkov. “We need to figure out certain areas, like usage patterns. We should watch how people behave in our app and then decide what will be the feed.”
Capture does support private group chats as well as public channels — so there’s certainly overlap with the messaging platform Telegram, which also supports both. Though one nuance between them is Capture Channels let everyone comment but only admins post vs Telegram channels being a pure one-way broadcast.
But it’s on interface and user experience where Capture’s approach really diverges from the more standard mobile messaging playbook.
If you imagine it as a mash-up of existing social apps, Capture could be thought of as something like a Snap-style front end atop a Telegram-esque body, yet altogether sleeker, with none of the usual social baggage and clutter. (Some of that may creep in of course, if users demand it, and they do have a reactions-style feature lined up to add in soon.)
“With our tool you can find people not from your graph,” says Moiseenkov. “That’s the key here. So with WhatsApp it’s really hard to invite people not from your graph — or like friends of friends. And that’s a really tough question — where I can find the relevant people whom I chat about football? So now we add the tool for you in our app to just find these people and invite them to your [chat].”
“It’s really really hard not to like your friend’s post on Instagram because it’s social capital,” he adds. “You are always liking these posts. And we are not in this space. We do not want to move in this direction of followers, likers, and all this stuff — scrolling and endless communication.
“Time is changing, my life is changing, my friends and family somehow is changing because life is changing… We’re mobile like your everyday life… the app is suggesting you something relevant for this life [now]. And you can just find people also doing the same things, studying, discussing the same things.”
Why include private chats at all in Capture, given that the main premise (and promise) of the app is its ability to connect strangers with similar interests in the same virtual spaces — thereby expanding interest communities and helping mobile users escape the bubbles of closed chat groups?
On that Moiseenkov says they envisage communities will still want to be able to create their own closed groups — to maintain “a persistent, consistent community”.
So Capture has been designed to contain backchannels as well as open multiple windows into worlds anyone can join. “It’s one of opportunities to make this and I think that we should add it because we do not know exact scenarios right from the launch,” he says of including private conduits alongside public chats.
Given the multiple chat channels in the first release Capture does risk being a bit confusing. And during our interview the founders joke about having created a “maximal viable product” rather than the usual MVP.
But they say they’re also armed to be able to respond quickly to usage patterns — with bits and pieces lined up in the background so they can move quickly to add/remove features based on the usage feedback they get. So, basically, watch this space.
All the feature creep and experimentation has delayed their launch a little though. The app had been slated to arrive in Q4 last year. Albeit, a later-than-expected launch is hardly an unusual story for a startup.
Capture also of course suffers from a lack of users for people to chat to at the point of release — aka, the classic network effect problem (which also makes testing it prior to launch pretty tricky; safe to say, it was a very minimalist messaging experience).
Not having many users also means Capture’s chat suggestions aren’t as intelligent and savvy as the founders imply they’ll be.
So again the MVP will need some time to mature before it’s safe to pass judgement on the underlying idea. It does feel a bit laggy right now — and chat suggestions are definitely hit and miss, but it will be interesting to see how that evolves as/if users pile in.
Part of their plan is to encourage and nurture movie/TV/entertainment discussion communities specifically — with Hardy arguing there’s “no such tool” that easily supports that. So in future they want Capture users to be notified about new series coming up on Netflix, or Disney’s latest release. Then, as users watch that third party content, their idea is they’ll be encouraged to discuss it live on their mobiles via Capture.
But movie content is only partially launched at this point. So that’s all still just a nice idea for now.
Testing pre-launch on various celebrity visages also drew a suggestive blank — and Hardy confirmed they’ve got more pop culture adds planned for the future.
Such gaps will likely translate into a low stickiness rate at first. But when the team’s ambition is to support a Google-esque level of content queries, the scale of the routing and pattern-matching task ahead of them is really both massive and unending.
To get usage off the ground they’re aiming to break the content recommendation problem down into more bite-size chunks — starting by seeding links to local events and news (sourced from parsing the public Internet); and also by focusing on serving specific communities (say around sports), and also linked to particular locations, such as cities — the latter two areas likely informed by where, and around what, the app gets traction.
They’ve also hired a content manager to help with content recommendations. This person is also in charge of “banning some bad things and all that stuff”, as they put it. (From the get go they’re running a filter to ban nudity; and don’t yet support video uploads/streams to reduce their moderation risk. Clearly they will need to be very ‘on it’ to avoid problem usage mushrooming into view and discouraging positive interactions and community growth within the app. But again they say they’re drawing on their Prisma experience.)
They also say they want this social app to be more a slow burn on the growth front — having seen the flip side of burn out viral success at Prisma — which, soon after flooding the social web with painterly selfies, had to watch as tech giants ruthlessly cloned the style transfer effect, reducing their novelty factor and pushing users to move on to their next selfie lens fix.
“As data-driven guys we’re mostly looking for some numbers,” says Moiseenkov when asked where they hope to be with Capture in 12 months’ time. “So I think achieving something like 1M or 2M MAU with a good retention and engagement loop by then is our goal.
“We want to keep this growth under control. So we could release the features step by step, more about engagement not more about viral growth. So our focus is doing something that can keep engagement loop, that can increase our spend time in the app, increase the usage and so on, not driving this into the peak and like acquiring all the trends.”
“Conclusions are drawn from Prisma!” adds Hardy with investor-winning levels of chutzpah.
While it’s of course super early to talk business model, the question is a valid one given Capture’s claims of zero user profiling. Free apps backed by VC will need to monetize the hoped-for scale and usage at some point. So how does Capture plan to do that?
The founders say they envisage the app acting as a distribution tool. And for that use case their knowing (only) the timing, location and subject of chats is plenty enough data to carry out contextual targeting of whatever stuff they can get paid to distribute to their users.
They are also toying with models in a Patreon style — such as users being able to donate to content authors who are in turn distributing stuff to them via Capture. But again plans aren’t fully formed at this nascent stage.
“Our focus right now is more like going into partnerships with different companies that have lots of content and lots of events going on,” says Hardy. “We also are going to ask for permission to get access to music apps like Spotify or Apple Music to be aware of those artists and songs a person is interested in and is listening to. So this will give us an opportunity to suggest relevant new albums, maybe music events, concerts and so on and so forth.
“For example if a band is coming to your city and we know we have access to Apple Music we know you’re listening to it we’ll suggest a concert — we’ll say ‘hey maybe you can win a free ticket’ if we can partner… with someone, so yeah we’re moving into this in the near future I think.”
We’re less than two months out from our first TC Sessions: Enterprise event, which is happening in San Francisco on September 5. And did you know our buy-one-get-one-free ticket sale ends today? Among the many enterprise and startup executives joining us for the event is Qualtrics’ Julie Larson-Green. If that name sounds familiar, that’s most likely because you remember her from her 25 years at Microsoft. After a successful career in Redmond, Larson-Green left Microsoft in 2017 to become the Chief Experience Officer at SAP’s Qualtrics.
In that role, she’s perfect for our panel about — you guessed it — customer experience management.
Larson-Green joined Microsoft as a program manager for Visual C++ back in 1993. After moving up the ladder inside the company, she oversaw the launch of Windows 7 and became the co-lead of Microsoft’s hardware, games, music and entertainment division in 2013. At the time, she was seen as a potential replacement for then-CEO Steve Ballmer.
Later, during a period of reshuffling at the company in the wake of the Nokia acquisition, she became the Chief Experience Officer of Microsoft’s My Life and Work group.
Larson-Green joined Qualtrics before it was acquired by SAP for $8 billion in cash. Qualtrics offers a number of products that range from customer experience tools to brand tracking and ad testing services, as well as employee research products for gathering feedback about managers, for example. At the core of its product is an analytics engine that helps businesses make sense of their employee and customer data, which in turn should help them optimize their customer experience scores and reduce employee attrition rates.
Our buy-one-get-one-free ticket deal ends today! Book a ticket for just $249 and you can bring a buddy for free. Book here before this deal ends.
We’re still selling startup demo tables, and each package comes with 4 tickets. Learn more here.
If you’re looking for a way to optimize your site for technical SEO and rank better, consider deleting your pages.
I know, crazy, right? But hear me out.
We all know Google can be slow to index content, especially on new websites. But occasionally, it can aggressively index anything and everything it can get its robot hands on, whether you want it or not. This can cause terrible headaches, hours of clean up, and subsequent maintenance, especially on large sites and/or ecommerce sites.
Our job as search engine optimization experts is to make sure Google and other search engines can first find our content so that they can then understand it, index it, and rank it appropriately. When we have an excess of indexed pages, we are not being clear with how we want search engines to treat our pages. As a result, they take whatever action they deem best which sometimes translates to indexing more pages than needed.
Before you know it, you’re dealing with index bloat.
What is index bloat?
Put simply, index bloat is when you have too many low-quality pages on your site indexed in search engines. Similar to bloating in the human digestive system (disclaimer: I’m not a doctor), the result of processing this excess content can be seen in search engine indices when their information-retrieval process becomes less efficient.
Index bloat can even make your life difficult without you knowing it. In this puffy and uncomfortable situation, Google has to go through much more content than necessary (most of the time low-quality and internal duplicate content) before it can get to the pages you want it to index.
Think of it this way: Google visits your XML sitemap and finds 5,000 pages, then crawls all your pages and finds even more of them via internal linking, and ultimately decides to index 30,000 URLs. That’s an indexation rate of 600%, an excess of approximately 500% or even more.
But don’t worry, diagnosing your indexation rate to measure against index bloat can be a very simple and straightforward check. You simply need to cross-reference which pages you want to get indexed versus the ones that Google is indexing (more on this later).
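As a back-of-the-envelope sketch of that check (the page counts below are the hypothetical ones from the example above, not real data), the indexation rate is simply the number of indexed URLs divided by the number of URLs you actually want indexed:

```python
def indexation_rate(indexed_count, desired_count):
    """Return the indexation rate as a percentage.

    100% means the search engine indexes exactly the pages you want;
    anything well above 100% is a symptom of index bloat.
    """
    return indexed_count / desired_count * 100


# Hypothetical numbers: a 5,000-URL sitemap but 30,000 indexed URLs.
rate = indexation_rate(30_000, 5_000)
print(f"Indexation rate: {rate:.0f}%")  # an excess of 500% over the ideal 100%
```
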
The objective is to find that disparity and take the most appropriate action. We have two options:
- Content is of good quality = Keep indexability
- Content is of low quality (thin, duplicate, or paginated) = noindex
You will find that most of the time, fixing index bloat means removing a relatively large number of pages from the index by adding a noindex meta tag. However, through this indexation analysis, it is also possible to find pages that were missed during the creation of your XML sitemap(s); those can then be added to your sitemap(s) for better indexing.
Why index bloat is detrimental to SEO
Index bloat can slow processing time, consume more resources, and open up avenues outside of your control in which search engines can get stuck. One of the objectives of SEO is to remove roadblocks that hinder great content from ranking in search engines, which are very often technical in nature. For example, slow load speeds, using noindex or nofollow meta tags where you shouldn’t, not having proper internal linking strategies in place, and other such implementations.
Ideally, you would have a 100% indexation rate, meaning every quality page on your site would be indexed: no pollution, no unwanted material, no bloating. For the sake of this analysis, let’s consider anything above 100% to be bloat. Index bloat forces search engines to spend more of their limited resources than needed processing the pages they have in their database.
At best, index bloat causes inefficient crawling and indexing, hindering your ranking capability. At worst, it can lead to keyword cannibalization across many pages on your site, limiting your ability to rank in top positions and potentially impacting the user experience by sending searchers to low-quality pages.
To summarize, index bloat causes the following issues:
- Exhausts the limited resources Google allocates for a given site
- Creates orphaned content (sending Googlebot to dead-ends)
- Negatively impacts the website’s ranking capability
- Decreases the quality evaluation of the domain in the eyes of search engines
Sources of index bloat
1. Internal duplicate content
Unintentional duplicate content is one of the most common sources of index bloat. This is because most sources of internal duplicate content revolve around technical errors that generate large numbers of URL combinations that end up indexed. For example, using URL parameters to control the content on your site without proper canonicalization.
Faceted navigation has also been one of the “thorniest SEO challenges” for large ecommerce sites, as Portent describes, and has the potential of generating billions of duplicate content pages by overlooking a simple feature.
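One common fix for parameter-driven duplicates of this kind is a canonical tag that points every URL variation back at the preferred version. A minimal sketch (the URLs here are hypothetical):

```html
<!-- Placed in the <head> of /shoes?color=red&sort=price and every
     other variation, this points search engines at the one version
     of the page you want indexed: -->
<link rel="canonical" href="https://www.example.com/shoes/" />
```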
2. Thin content
It’s important to mention an issue introduced by the Yoast SEO plugin version 7.0 around attachment pages. This WordPress plugin bug led to “Panda-like problems” in March of 2018, causing heavy ranking drops for affected sites as Google deemed these sites to be lower in the overall quality they provided to searchers. In summary, there is a setting within the Yoast plugin to remove attachment pages in WordPress – a page created for each image in your library, with minimal content – the epitome of thin content for most sites. For some users, updating to the newest version (7.0 at the time) caused the plugin to overwrite the previous selection to remove these pages and default to indexing all attachment pages.
This then meant that having five images per blog post would turn every post into six indexed pages (the original plus five attachment pages), with only about 16% of those URLs containing actual quality content, causing a massive drop in domain value.
3. Pagination
Pagination refers to the concept of splitting up content into a series of pages to make content more accessible and improve user experience. This means that if you have 30 blog posts on your site, you may have ten blog posts per page that go three pages deep.
You’ll see this often on shopping pages, press releases, and news sites, among others.
Within the purview of SEO, the pages beyond the first in the series will very often contain the same page title and meta description, along with very similar (near-duplicate) body content, introducing keyword cannibalization to the mix. Additionally, since the purpose of these pages is a better browsing experience for users already on your site, it doesn’t make sense to send search engine visitors to the third page of your blog.
4. Under-performing content
If you have content on your site that is not generating traffic, has not resulted in any conversions, and does not have any backlinks, you may want to consider changing your strategy. Repurposing content is a great way to maximize any value that can be salvaged from under-performing pages to create stronger and more authoritative pages.
Remember, as SEO experts our job is to help increase the overall quality and value that a domain provides, and improving content is one of the best ways to do so. For this, you will need a content audit to evaluate your own individual situation and what the best course of action would be.
5. Soft 404 pages
Even a 404 page that returns a 200 (OK) HTTP status code is a thin, low-quality page that should not be indexed.
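A quick way to spot that situation is to request a deliberately nonexistent URL and inspect the status code: a healthy server answers 404, while a soft-404 setup answers 200. A minimal sketch using only the standard library (the domain is a placeholder, and the fetcher is injectable so the logic can be exercised without a live site):

```python
from urllib.request import urlopen
from urllib.error import HTTPError


def status_of(url):
    """Fetch a URL and return its HTTP status code."""
    try:
        return urlopen(url).status
    except HTTPError as err:
        # urllib raises for 4xx/5xx; the code is still what we want.
        return err.code


def is_soft_404(url, fetch=status_of):
    """True if a URL that should not exist returns 200 instead of 404."""
    return fetch(url) == 200


# Hypothetical usage against a page that cannot possibly exist:
# is_soft_404("https://www.example.com/this-page-should-return-404")
```
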
Common index bloat issues
One of the first things I do when auditing a site is to pull up their XML sitemap. If they’re on a WordPress site using a plugin like Yoast SEO or All in One SEO, you can very quickly find page types that do not need to be indexed. Check for the following:
- Custom post types
- Testimonial pages
- Case study pages
- Team pages
- Author pages
- Blog category pages
- Blog tag pages
- Thank you pages
- Test pages
Whether the pages in your XML sitemap are low-quality and need to be removed from search really depends on the purpose they serve on your site. For instance, some sites do not use author pages in their blog but still have those author pages live, which is unnecessary. “Thank you” pages should not be indexed at all, as that can cause conversion-tracking anomalies. Test pages usually mean there’s a duplicate somewhere else. Similarly, some plugins or developers build custom features on web builds and create lots of pages that do not need to be indexed. If you find a plugin-generated sitemap full of pages like these, they probably don’t need to be indexed.
Different methods to diagnose index bloat
Remember that our objective here is to find the greatest contributors of low-quality pages bloating the index. Most of the time it’s very easy to find these pages at scale, since a lot of thin-content pages follow a pattern.
This is a quantitative analysis of your content, looking for volume discrepancies based on the number of pages you have, the number of pages you are linking to, and the number of pages Google is indexing. Any disparity between these numbers means there’s room for technical optimization, which often results in an increase in organic rankings once solved. You want to make these sets of numbers as similar as possible.
As you go through the various methods to diagnose index bloat below, look out for patterns in URLs by reviewing the following:
- URLs that have /dev/
- URLs that have “test”
- Subdomains that should not be indexed
- Subdirectories that should not be indexed
- A large number of PDF files that should not be indexed
Next, I will walk you through a few simple steps you can take on your own using some of the most basic tools available for SEO. Here are the tools you will need:
- Paid Screaming Frog
- Verified Google Search Console
- Your website’s XML sitemap
- Editor access to your Content Management System (CMS)
As you start finding anomalies, start adding them to a spreadsheet so they can be manually reviewed for quality.
1. Screaming Frog crawl
Under Configuration > Spider > Basics, configure Screaming Frog for a thorough crawl of your site: check “crawl all subdomains” and “crawl outside of start folder”, and manually add your XML sitemap(s) if you have them. Once the crawl has been completed, take note of all the indexable pages it has listed. You can find this in the “Self-Referencing” report under the Canonicals tab.
Take a look at the number you see. Are you surprised? Do you have more or fewer pages than you thought? Make a note of the number. We’ll come back to this.
2. Google’s Search Console
Open up your Google Search Console (GSC) property and go to the Index > Coverage report. Take a look at the valid pages. On this report, Google is telling you how many total URLs they have found on your site. Review the other reports as well, GSC can be a great tool to evaluate what the Googlebot is finding when it visits your site.
How many pages does Google say it’s indexing? Make a note of the number.
3. Your XML sitemaps
This one is a simple check. Visit your XML sitemap and count the number of URLs included. Is the number off? Are there unnecessary pages? Are there not enough pages?
Conduct a crawl with Screaming Frog, add your XML sitemap to the configuration and run a crawl analysis. Once it’s done, you can visit the Sitemaps tab to see which specific pages are included in your XML sitemap and which ones aren’t.
Make a note of the number of indexable pages.
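Counting sitemap URLs can also be scripted rather than done by hand. A small sketch using only the standard library (the sitemap content below is an inline, hypothetical stand-in for a file you would fetch from your own site):

```python
import xml.etree.ElementTree as ET

# Namespace used by the standard sitemap protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"


def sitemap_urls(xml_text):
    """Return the list of <loc> URLs declared in a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")]


# Hypothetical two-URL sitemap:
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://www.example.com/blog/</loc></url>
</urlset>"""

print(len(sitemap_urls(sample)))  # 2
```
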
4. Your own Content Management System (CMS)
This one is a simple check too, so don’t overthink it. How many pages does your site have? How many blog posts do you have? Add them up. We’re ultimately looking for quality content that provides value, but at this stage we’re only counting; the actual quality of each piece of content can be measured later via a content audit, so the number doesn’t have to be exact.
Make a note of the number you see.
5. Google’s “site:” search
At last, we come to the final check of our series. Sometimes Google throws a number at you and you have no idea where it comes from, but try to be as objective as possible. Do a “site:domain.com” search on Google and check how many results Google serves you from its index. Remember, this is purely a numeric value and does not truly determine the quality of your pages.
Make a note of the number you see and compare it to the other numbers you found. Any discrepancies you find indicate symptoms of inefficient indexation. Completing a simple quantitative analysis will help direct you to areas that may not meet minimum qualitative criteria. In other words, comparing numeric values from multiple sources will help you find low-value pages on your site.
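Once you have URL lists from the sitemap, the crawl, and the index, the actual cross-reference is just set arithmetic. A sketch with hypothetical URLs:

```python
def index_gaps(wanted, indexed):
    """Compare the URLs you want indexed against the URLs a search engine indexes.

    Returns (bloat, missing): pages indexed that you never asked for,
    and pages you want indexed that are being skipped.
    """
    wanted, indexed = set(wanted), set(indexed)
    return sorted(indexed - wanted), sorted(wanted - indexed)


# Hypothetical URL lists:
wanted = ["https://example.com/", "https://example.com/blog/"]
indexed = ["https://example.com/", "https://example.com/?sort=price"]

bloat, missing = index_gaps(wanted, indexed)
print(bloat)    # parameter page indexed that we never wanted
print(missing)  # blog page we want indexed but Google is skipping
```
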
The quality criteria we evaluate against can be found in Google’s Webmaster guidelines.
How to resolve index bloat
Resolving index bloat is a slow and tedious process, but you have to trust the optimizations you’re performing on the site and have patience during the process, as the results may be slow to become noticeable.
1. Deleting pages (Ideal)
In an ideal scenario, low-quality pages would not exist on your site, and thus, not consume any limited resources from search engines. If you have a large number of outdated pages that you no longer use, cleaning them up (deleting) can often lead to other benefits like fewer redirects and 404s, fewer thin-content pages, less room for error and misinterpretation from search engines, to name a few.
The less control you give search engines by limiting their options on what action to take, the more control you will have on your site and your SEO.
Of course, this isn’t always realistic. So here are a few alternatives.
2. Using Noindex (Alternative)
Using a noindex meta tag at the page level (please don’t add a site-wide noindex – it happens more often than we’d like), or across a set of pages, is probably the most efficient method, as it can be completed very quickly on most platforms.
- Do you use all those testimonial pages on your site?
- Do you have a proper blog tag/category in place, or are they just bloating the index?
- Does it make sense for your business to have all those blog author pages indexed?
All of the above can be noindexed and removed from your XML sitemap(s) with a few clicks on WordPress if you use Yoast SEO or All in One SEO.
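At the page level, the noindex directive is a single line in the `<head>` of each page you want removed from the index; plugins like Yoast SEO and All in One SEO emit it for you when you toggle the setting:

```html
<!-- Tells search engines to drop this page from their index,
     while still allowing them to crawl it and follow its links: -->
<meta name="robots" content="noindex, follow">
```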
3. Using Robots.txt (Alternative)
Using the robots.txt file to disallow sections or pages of your site is not recommended for most websites unless it has been explicitly recommended by an SEO Expert after auditing your website. It’s incredibly important to look at the specific environment your site is in and how a disallow of certain pages would affect the indexation of the rest of the site. Making a careless change here may result in unintended consequences.
Now that we’ve got that disclaimer out of the way, disallowing certain areas of your site means that you’re blocking search engines from even reading those pages. This means that if you added a noindex, and also disallowed, Google won’t even get to read the noindex tag on your page or follow your directive because you’ve blocked them from access. Order of operations, in this case, is absolutely crucial in order for Google to follow your directives.
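As an illustration of that order-of-operations trap, a hypothetical robots.txt like this one would stop Googlebot from ever seeing a noindex tag on the blocked pages:

```text
# Hypothetical robots.txt: these paths are blocked from crawling.
# If the pages under /tag/ also carry a noindex meta tag, Google
# cannot read it, because crawling is forbidden here first.
User-agent: *
Disallow: /tag/
Disallow: /dev/
```

In other words, add the noindex first, let search engines re-crawl and drop the pages, and only then (if at all) disallow the paths.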
4. Using Google Search Console’s manual removal tool (Temporary)
As a last resort, an action item that does not require developer resources is using the manual removal tool within the old Google Search Console. Using this method to remove pages, whole subdirectories, and entire subdomains from Google Search is only temporary. It can be done very quickly, all it takes is a few clicks. Just be careful of what you’re asking Google to deindex.
A successful removal request lasts only about 90 days, but it can be revoked manually. This option can also be done in conjunction with a noindex meta tag to get URLs out of the index as soon as possible.
Search engines despise thin content and try very hard to filter out all the spam on the web, hence the never-ending search quality updates that happen almost daily. In order to appease search engines and show them all the amazing content we spent so much time creating, webmasters must make sure their technical SEO is buttoned up as early in the site’s lifespan as possible before index bloat becomes a nightmare.
Using the different methods described above can help you diagnose any index bloat affecting your site so you can figure out which pages need to be deleted. Doing this will help you optimize your site’s overall quality evaluation in search engines, rank better, and get a cleaner index, allowing Google to find the pages you’re trying to rank quickly and efficiently.
Pablo Villalpando is a Bilingual SEO Strategist for Victorious. He can be found on Twitter @pablo_vi.
The post Delete your pages and rank higher in search – Index bloat and technical optimization 2019 appeared first on Search Engine Watch.
Is Facebook preparing to launch a serious competitor to TikTok? If so, the company just picked up some key talent to make that happen. Last week, Facebook announced plans for a new division, called the NPE Team, which will build experimental consumer-focused apps where it will try different ideas and features, then see how people react. Now, Facebook has picked up former Vine GM Jason Toff to join the NPE team as a Product Management Director.
Now that we've moved to CA, I suppose it's a good time to share what I'm up to next! In two weeks, I'll be joining Facebook as a PM Director starting up a new initiative under the recently formed NPE team (https://t.co/HzK6Bjqzqx)
Toff’s experience also includes time spent at Google, most notably as a Product Lead for YouTube before exiting to Vine in 2014. At the short-form video app maker, Toff worked as Head of Product for a year, then became Vine’s General Manager.
Vine, of course, was later snatched up by Twitter — and there, Toff moved up to Director of Product Management before boomeranging back to Google, where his initial focus was on AR and VR projects.
Most recently, Toff worked as a Partner at Google’s Area 120, Google’s in-house incubator where employees work on experimental projects.
That’s not all that different from what Facebook appears to have in store with its own NPE Team ambitions. Similar to Area 120 or Microsoft Garage, for example, the NPE Team plans to deliver apps that will “change very rapidly” in response to consumer feedback. It will also be quick to close down experiments that aren’t useful to people in fairly short order.
That’s not how Facebook itself operates. Its more experimental apps have had longer runs, as the company used them to gain feedback to inform its larger projects. For example, its photo-sharing app Moments ran from 2015 through early 2019, and its TrueCaller-like app Hello for emerging markets ran for several years, despite fairly limited adoption.
Facebook has also tried and failed with a number of other offshoots over the past decade, like Facebook Paper, Notify, a Snapchat clone called Lifestage, and others, as well as those it picked up through acquisitions, then later shut down like tbh or Moves. It also previously ran an internal incubator of sorts called Facebook Creative Labs, which birthed now-failed projects like Slingshot, Riff, and Rooms.
Many of these efforts were fairly high-profile at launch, which made their eventual shutdown more problematic for Facebook’s image. With NPE Team — as with Area 120 or Microsoft Garage — there’s a layer of separation between the test apps and the larger company. Many of the apps that the NPE Team puts out will bomb, and that’s the point — it wants to get the failures out of the way faster so others can find success.
While Toff can’t yet say what he’ll be working on at Facebook, there’s a lot of speculation that NPE Team will try to come up with some sort of answer to TikTok, the Beijing-based short-form video app that sucked up Musical.ly in 2018 and now is a Gen Z social networking hit with some 500 million-plus monthly users. Toff’s background with Vine could certainly be helpful if that were the case.
Toff says he’s hiring for NPE Team, including both UX designers and engineers.
I can't talk project specifics but can share that I'll be HIRING. I'm looking to assemble a diverse and mighty 2-pizza dream team full of creative can-doers, so if you're a UX designer or engineer (or both) and thrive in zero-to-one environments, HMU!
— Jason Toff (@jasontoff) July 15, 2019
The unchecked digital land grab for consumers’ personal data that has been going on for more than a decade is coming to an end, and the dominoes have begun to fall when it comes to the regulation of consumer privacy and data security.
We’re witnessing the beginning of a sweeping upheaval in how companies are allowed to obtain, process, manage, use and sell consumer data, and the implications for the digital ad competitive landscape are massive.
Against the backdrop of evolving privacy expectations and requirements, we’re seeing the rise of a new class of digital advertising player: consumer-facing apps and commerce platforms. These commerce companies are emerging as the most likely beneficiaries of this new regulatory privacy landscape — and we’re not just talking about e-commerce giants like Amazon.
Traditional commerce companies like eBay, Target and Walmart have publicly spoken about advertising as a major focus area for growth, but even companies like Starbucks and Uber have an edge in consumer data consent and, thus, an edge over incumbent media players in the fight for ad revenues.
Tectonic regulatory shifts
By now, most executives, investors and entrepreneurs are aware of the growing acronym soup of privacy regulation, the two most prominent ingredients being the GDPR (General Data Protection Regulation) and the CCPA (California Consumer Privacy Act).
It’s becoming quite apparent that the startup world is experiencing a slowdown, and there appears to be no quick end in sight unless you scrap everything you’re doing and start over. According to many articles from very reputable news sites around the web, the common issue at hand is the lack of funding coming from venture capital firms. In addition, due to this shift in funding, startups are now forced to change their thinking on how to grow their business on a shoestring budget. In this post, I will discuss the importance of hiring a digital marketing firm that can not only get your business off the ground but also do it without relying on the stress of getting additional funding to keep the doors open for another month.
Why is Venture Capital Slowing Down?
Across the major news sites and tech blogs, there has been a slew of articles discussing the apparent slowdown of Venture Capital funding across the globe (not just in the USA). According to the Forbes article entitled Tech CEO Shares Difficulties of Raising Venture Capital in a Down Market, the authors Samantha Walravens & Heather Cabot of GeekGirlRising stated “According to a 2016 report from PricewaterhouseCoopers and the National Venture Capital Association, funding in Silicon Valley startups fell 19.5% in the first quarter of 2016 compared to a year earlier, and is down 10% for seed stage companies in the first quarter 2016, amidst fears over the global economy and the run-up in startups’ valuations.”
Reinforcing this trend, in another article from Bloomberg.com, entitled “Is There a Slowdown in Venture Capital?”, Phil Libin says that the pause and/or decline in Venture Capital funding is due to waning investor interest in the “Me Too Businesses” that once thrived with the evolution of smartphones and social media. However, he does go on to say that right now is a great time for startups that can offer something new and original. See the video below for the interview (courtesy of Bloomberg.com).
Is Online Finally Catching up to Traditional Media?
In yet another predictable twist, it appears Social Media has finally started to crack the old TV Advertising egg and is creeping its way deeper into the $70+ billion annual TV ad budget. According to the AdAge.com article “TV Budgets Shifting to Social? Yes, It’s Time to Worry,” author Debra Aho Williamson states, “… eMarketer believes the conversation about social and TV will change. For buyers who want the best way to reach their audience, the growing video businesses of Facebook, Instagram, Twitter and Snapchat now present a viable alternative to TV.”
Williamson also notes that even though this shift sounds monumental, the actual amount of ad dollars moving from TV to social is still very small. On the other hand, she believes this trend could easily become a real “game changer” in the near future. So, with more advertising dollars potentially making their way to the online marketing world, startups are going to have to rely more on digital agencies to promote their products and services.
Getting Big Agency Results on a Shoe-String Budget
Since VCs and investors are mainly interested in funding companies that offer something new, exciting and, most importantly, different, what does that mean for the “me too businesses”? Due to this shift in the business ecosystem, startups need to find a more affordable way to launch their “baby” to the world without going bankrupt in the process. To help with this, startups need a reliable digital agency that can jump right in and “move the needle,” providing guidance and helping build the foundation needed to compete in a highly competitive online space. Here’s a recent article entitled “What Every Startup Needs to Know Before Choosing a Digital Agency,” which highlights other services that startups can benefit from.
Here is a brief outline of the services that startups need to remain competitive:
Many startups, especially early-stage startups, operate on very small ad budgets and often second-guess themselves on where they can get the best bang for their buck with regard to online advertising. Based on the trends discussed in this article, namely the decline in VC funding and social media claiming more of the overall ad budget, it’s pretty clear that startups need to focus on finding an affordable digital agency that treats them like a partner and not just another typical client.