CBPO


Bandit ML helps e-commerce businesses present the most effective offer to each shopper

December 23, 2020

Bandit ML aims to optimize and automate the process of presenting the right offer to the right customer.

The startup was part of the summer 2020 class at accelerator Y Combinator. It also raised a $1.32 million seed round in September from YC, Haystack Fund, Webb Investment Network, Liquid 2 Ventures, Jigsaw Ventures, Basecamp Fund, Pathbreaker Ventures and various angels, including what CEO Edoardo Conti said are 10 current and former Uber employees.

Conti (who founded the company with Lionel Vital and Joseph Gilley) is a former Uber software engineer and researcher himself.

The idea, as he explained via email, is that one customer might be more excited about a $5 discount, while another might be more effectively enticed by free shipping, and a third might be completely uninterested because they just made a large purchase. Using a merchant’s order history and website activity data, Bandit ML is supposed to help them determine which offer will be most effective with which shopper.
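The company’s name hints at the underlying technique: a multi-armed bandit, which balances trying different offers against repeating the ones that perform best. The sketch below is a generic, illustrative epsilon-greedy bandit, not Bandit ML’s actual system (which Conti says optimizes longer-horizon metrics); the offers and the reward signal are invented.

```python
import random
from collections import defaultdict

# Hypothetical set of offers a store might test.
OFFERS = ["$5 discount", "free shipping", "no offer"]

class OfferBandit:
    """Epsilon-greedy bandit: explore offers at random with probability
    epsilon, otherwise exploit the offer with the best average reward."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = defaultdict(int)     # times each offer was presented
        self.reward = defaultdict(float)  # total observed reward per offer

    def choose(self):
        # Before any data exists, or with probability epsilon, explore.
        if not self.shows or random.random() < self.epsilon:
            return random.choice(OFFERS)
        # Otherwise pick the offer with the best average reward so far.
        return max(OFFERS, key=lambda o: self.reward[o] / max(self.shows[o], 1))

    def update(self, offer, observed_reward):
        self.shows[offer] += 1
        self.reward[offer] += observed_reward
```

A contextual bandit, closer to what an offer engine would need, would additionally condition `choose()` on shopper features such as order history and site activity.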

Bandit ML screenshot (Image Credits: Bandit ML)

Conti acknowledged that there’s other discount-optimizing software out there, but he suggested none of them offers what Bandit ML does: “off the shelf tools that use machine learning the way giants like Uber, Amazon and Walmart do.”

He added that Bandit ML’s technology is unique in its support for full automation (“some stores sent their first batch of offers within 10 minutes of signing up”) and its ability to optimize for longer-term metrics, like purchases over a 120-day period, rather than focusing on one-off redemptions. In fact, Conti said the technology the startup uses to make these decisions is similar to the ReAgent project that he worked on at Facebook.

Bandit ML is currently focused on merchants with Shopify stores, though it also supports other stores not on Shopify, like Calii. Conti said the platform has been used to send millions of dollars’ worth of promotions since July, with one clothing company seeing a 20% increase in net revenue.

“Starting with an always-on incentive engine for every online business, we aim to build functioning out-of-the-box machine learning tools that a small online business needs to compete with the Walmarts and Amazons of the world,” he said.

 


Startups – TechCrunch


LinkedIn’s Career Explorer helps you identify new kinds of jobs based on the skills you have

October 29, 2020

One of the key side effects of the Covid-19 pandemic has been how it has played out in the economy. There are currently 12.6 million people out of work in the U.S. alone, and the International Labour Organization estimates that globally some 245 million full-time jobs have been impacted.

To meet some of that challenge, today LinkedIn is launching a new Career Explorer tool to help people find new jobs. Out in beta today in English (with further languages coming soon), this is not another job search engine. It’s a tool that matches a person’s skills with jobs they might not have otherwise considered, and then provides pointers on what extra skills they might want to learn to be even more relevant.

Alongside this, LinkedIn is launching a new skills portal specifically to hone digital skills; subtle profile picture “frames” to indicate when you’re looking for work, or when you are hiring; and interview prep tools.

The Career Explorer tool is perhaps the most interesting of the new features.

Built with flexibility in mind, the tool leans on LinkedIn’s own trove of data to map career paths that people have taken, combines that with data on jobs that are currently in higher demand, and extrapolates from there to help people get more creative about the jobs they could go for.

This would be especially useful if there are none in their current field, or if they are considering using the opportunity of a job loss to rethink what they are doing (if Covid-19 hasn’t done the rethinking for them).

The example that LinkedIn gives for how this works is a notable one. It notes that a food server and a customer service specialist (an in-demand job) have a 71% skills overlap.

Neither might be strictly considered a “knowledge worker” (interesting that LinkedIn is positioning itself in that way, as it’s been a tool largely dominated by the category up to now), but both interface with customers. LinkedIn uses the Explorer to then suggest what training you could undertake (on its platform) to learn or improve the skills you might not already have.
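LinkedIn has not published how it computes that 71% figure, but a simple set-based overlap conveys the idea: treat each role as a set of skills and measure what fraction of the target role’s skills the current role already covers. The skill lists below are invented for illustration.

```python
# Hypothetical skills-overlap measure. LinkedIn's real metric is not
# public, and these skill sets are made up for the example.
def skills_overlap(current_skills, target_skills):
    # Fraction of the target role's skills the current role covers.
    return len(current_skills & target_skills) / len(target_skills)

food_server = {"customer service", "communication", "teamwork", "point of sale"}
customer_service_specialist = {"customer service", "communication", "teamwork", "crm software"}

print(f"{skills_overlap(food_server, customer_service_specialist):.0%} overlap")
# prints: 75% overlap
```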

The Career Explorer builds on the skills assessment tool that LinkedIn launched last year, which offered tests that people could take to verify the skills they had and identify the skills they still needed to learn for a particular role.

In the midst of a pandemic, that effort took on a more pointed recovery role, with skills training developed in partnership with Microsoft (which owns LinkedIn) specifically to address digital gaps in the employment market that, when filled, could help the economy rebuild. LinkedIn said that to date, around 13 million people have used those tools to learn new skills for the most in-demand jobs.

The idea with these new tools is that while people may be losing their jobs, there is still work out there. LinkedIn itself says it has more than 14 million positions open right now, with close to 40 million people coming to the site to search for work every week, and three people getting hired each minute. So the aim is to figure out how best to connect people with the opportunities around them.

And given that LinkedIn, now with 722 million users, has long made recruitment and job searches a central part of its business, both in terms of traffic and in terms of the revenue it makes from those services (I often think of it as the place where professionals go to network and look for work), launching these tools does more than help LinkedIn be a more useful partner in the job-search process. It also keeps that jobs business evolving at a time when it might otherwise feel somewhat stagnant. And after all, despite the activity on LinkedIn, unemployment remains high, and some believe it will get worse before it gets better.


Social – TechCrunch


A Well-Formed Query Helps Search Engines Understand User Intent in the Query

July 1, 2020


To start this post, I wanted to include a couple of whitepapers whose authors include Google researchers. The authors of the first paper are the inventors of a patent application that was published on May 28, 2020, and it is good to see a whitepaper from the inventors of a recent Google patent. Both papers are worth reading to get a sense of how Google is trying to rewrite queries into “Well-Formed Natural Language Questions.”

August 28, 2018 – Identifying Well-formed Natural Language Questions

The abstract for that paper:

Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline can perform more accurate interpretation, thus reducing downstream compounding errors.

Hence, identifying whether or not a query is well-formed can enhance query understanding. Here, we introduce a new task of identifying a well-formed natural language question. We construct and release a dataset of 25,100 publicly available questions classified into well-formed and non-wellformed categories and report an accuracy of 70.7% on the test set.

We also show that our classifier can be used to improve the performance of neural sequence-to-sequence models for generating questions for reading comprehension.

The paper provides examples of well-formed queries and ill-formed queries:

Examples of well-formed and non-well-formed queries

November 21, 2019 – How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for Rewriting Ill-Formed Questions

The abstract for that paper:

We present a large-scale dataset for the task of rewriting an ill-formed natural language question to a well-formed one. Our multi-domain question rewriting (MQR) dataset is constructed from human contributed Stack Exchange question edit histories.

The dataset contains 427,719 question pairs which come from 303 domains. We provide human annotations for a subset of the dataset as a quality estimate. When moving from ill-formed to well-formed questions, the question quality improves by an average of 45 points across three aspects.

We train sequence-to-sequence neural models on the constructed dataset and obtain an improvement of 13.2% in BLEU-4 over baseline methods built from other data resources. We release the MQR dataset to encourage research on the problem of question rewriting.

Examples of ill-formed and well-formed questions

The patent application I am writing about was filed on January 18, 2019, which puts it around halfway between those two whitepapers, and both papers are recommended if you want a good sense of the topic, especially if you are interested in featured snippets, People Also Ask questions, and the queries that Google tries to respond to. The second whitepaper refers to the first one and tells us how it tries to improve upon it:

Faruqui and Das (2018) introduced the task of identifying well-formed natural language questions. In this paper, we take a step further to investigate methods to rewrite ill-formed questions into well-formed ones without changing their semantics. We create a multi-domain question rewriting dataset (MQR) from human contributed StackExchange question edit histories.

Rewriting Ill-Formed Search Queries into Well-Formed Queries

Interestingly, the patent is also about rewriting search queries.

It starts by telling us that “Rules-based rewrites of search queries have been utilized in query processing components of search systems.”

Sometimes this happens by removing certain stop-words from queries, such as “the”, “a”, etc.
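That kind of rules-based rewrite is easy to sketch; the stop-word list below is a small illustrative one, not what any search system actually uses.

```python
# Minimal rules-based query rewrite of the kind the patent mentions:
# drop common stop-words such as "the" and "a" from the query.
STOP_WORDS = {"the", "a", "an", "of"}

def rewrite_query(query):
    return " ".join(word for word in query.split()
                    if word.lower() not in STOP_WORDS)

print(rewrite_query("the directions to a Hypothetical Cafe"))
# -> directions to Hypothetical Cafe
```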

After Rewriting a Query

Once a query is rewritten, it may be “submitted to the search system and search results returned that are responsive to the rewritten query.”

The patent also tells us about “people also search for X” queries (the first patent I have seen them mentioned in).

We are told that these similar queries are used to recommend additional queries that are related to a submitted query (e.g., “people also search for X”).

These “similar queries to a given query are often determined by navigational clustering.”

As an example, we are told that for the query “funny cat pictures”, a similar query of “funny cat pictures with captions” may be determined because that similar query is frequently submitted by searchers following submission of the query “funny cat pictures”.
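Counting which queries most often follow a given query in search sessions is one simple way to surface such candidates. The session log below is invented; real systems cluster over vastly larger logs.

```python
from collections import Counter

# Sketch of mining "people also search for" candidates from session logs:
# count the queries that most frequently follow a given query.
def related_queries(sessions, query, top_n=1):
    follow_ups = Counter()
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            if prev == query:
                follow_ups[nxt] += 1
    return [q for q, _ in follow_ups.most_common(top_n)]

sessions = [
    ["funny cat pictures", "funny cat pictures with captions"],
    ["funny cat pictures", "funny cat pictures with captions", "cat memes"],
    ["funny cat pictures", "dog pictures"],
]
print(related_queries(sessions, "funny cat pictures"))
# -> ['funny cat pictures with captions']
```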

Determining if a Query is a Well-Formed Query

The patent tells us about a process that can be used to determine if a natural language search query is well-formed, and if it is not, to use a trained canonicalization model to create a well-formed variant of that natural language search query.

First, we are given a definition of “well-formedness.” We are told that it is “an indication of how well a word, a phrase, and/or another additional linguistic element(s) conform to the grammar rules of a particular language.”

There are three criteria for deciding whether something is a well-formed query. It:

  • Is grammatically correct
  • Does not contain spelling errors
  • Asks an explicit question
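The patent uses a trained classifier for this, but the criteria can be caricatured with plain string heuristics. Everything here is illustrative: the tiny word list stands in for a real spell-checker, and the grammaticality check is omitted entirely.

```python
# Toy heuristic version of two of the three well-formedness criteria.
# A real system would use a trained model; a grammar check is omitted.
KNOWN_WORDS = {"what", "are", "directions", "to", "hypothetical", "cafe"}
QUESTION_WORDS = {"what", "where", "when", "who", "why", "how", "which"}

def looks_well_formed(query):
    words = query.rstrip("?").lower().split()
    no_spelling_errors = all(w in KNOWN_WORDS for w in words)
    asks_explicit_question = (bool(words)
                              and words[0] in QUESTION_WORDS
                              and query.endswith("?"))
    return no_spelling_errors and asks_explicit_question

print(looks_well_formed("What are directions to Hypothetical Cafe?"))  # well-formed
print(looks_well_formed("Hypothetical Cafe directions"))               # not
```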

The first paper from the authors of this patent tells us the following about queries:

The lack of regularity in the structure of queries makes it difficult to train models that can optimally process the query to extract information that can help understand the user intent behind the query.

That translates to the most important takeaway for this post:

A Well-Formed Query is structured in a way that allows a search engine to understand the user intent behind the query

The patent gives us an example:

“What are directions to Hypothetical Café?” is an example of a well-formed version of the natural language query “Hypothetical Café directions”.
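The patent’s canonicalization model is a neural network that learns this mapping; a hand-written template merely illustrates, for this one example, the kind of transformation it performs.

```python
# Illustrative template for one query pattern. This is NOT the patent's
# method (which uses a trained canonicalization model); it only shows
# the input/output relationship for the "directions" example.
def canonicalize(query):
    words = query.split()
    if words and words[-1].lower() == "directions":
        entity = " ".join(words[:-1])
        return f"What are directions to {entity}?"
    return query  # leave queries that don't match the pattern unchanged

print(canonicalize("Hypothetical Cafe directions"))
# -> What are directions to Hypothetical Cafe?
```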

How the Classification Model Works

It also tells us that the purpose behind the process in the patent is to use a trained classification model to determine whether a query is well-formed and, if it is not, to generate a well-formed variant of the query using a trained canonicalization model.

The classification model takes features of the search query as input and decides whether the search query is well-formed.

Those features of the search query can include, for example:

  • Character(s)
  • Word(s)
  • Part(s) of speech
  • Entities included in the search query
  • And/or other linguistic representation(s) of the search query (such as word n-grams, character bag of words, etc.)
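Two of the feature types listed above (words and character n-grams) are easy to sketch; part-of-speech tags and entity features would need an NLP library and are omitted here.

```python
# Sketch of extracting word and character n-gram features from a query.
def query_features(query, n=3):
    return {
        "words": query.lower().split(),
        "char_ngrams": {query[i:i + n] for i in range(len(query) - n + 1)},
    }

feats = query_features("cat pics")
print(feats["words"])                 # -> ['cat', 'pics']
print(sorted(feats["char_ngrams"]))
```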

And the patent tells us more about the nature of the classification model:

The classification model is a machine learning model, such as a neural network model that contains one or more layers such as one or more feed-forward layers, softmax layer(s), and/or additional neural network layers. For example, the classification model can include several feed-forward layers utilized to generate feed-forward output. The resulting feed-forward output can be applied to softmax layer(s) to generate a measure (e.g., a probability) that indicates whether the search query is well-formed.
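The computation described there (feed-forward layers followed by a softmax over two classes) can be sketched in a few lines of NumPy. The weights below are random and untrained, and the feature vector is a stand-in, so this shows only the shape of the computation, not a working classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def well_formed_probability(features, w1, b1, w2, b2):
    hidden = np.maximum(0, features @ w1 + b1)  # feed-forward layer with ReLU
    probs = softmax(hidden @ w2 + b2)           # softmax over {ill-, well-}formed
    return probs[1]                             # P(query is well-formed)

features = rng.normal(size=16)                  # stand-in feature vector
w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)  # untrained weights
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
p = well_formed_probability(features, w1, b1, w2, b2)
print(round(float(p), 3))
```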

A Canonicalization Model May Be Used

If the classification model determines that the search query is not well-formed, the query is handed to a trained canonicalization model to generate a well-formed version of the search query.

Features extracted from the search query, and/or additional input, are processed using the canonicalization model to generate a well-formed version that correlates with the search query.

The canonicalization model may be a neural network model. The patent provides more details on the nature of the neural network used.

The neural network can indicate a well-formed query version of the original query.

We are also told that in addition to identifying a well-formed query, it may also determine “one or more related queries for a given search query.”

A related query can be determined based on the related query being frequently submitted by users following the submission of the given search query.

The query canonicalization system can also determine if the related query is well-formed. If it isn’t, then it can determine a well-formed variant of the related query.

For example, in response to the submission of the given search query, a selectable version of the well-formed variant can be presented along with search results for the given query and, if selected, the well-formed variant (or the related query itself in some implementations) can be submitted as a search query and results for the well-formed variant (or the related query) then presented.

Again, the idea of “intent” surfaces in the patent regarding related queries (“people also search for” queries).

The value of showing a well-formed variant of a related query, instead of the related query itself, is to let a searcher more easily and/or more quickly understand the intent of the related query.

The patent tells us that this has a lot of value by stating:

Such efficient understanding enables the user to quickly submit the well-formed variant to quickly discover additional information (i.e., result(s) for the related query or well-formed variant) in performing a task and/or enables the user to only submit such query when the intent indicates likely relevant additional information in performing the task.

We are given an example of a related well-formed query in the patent:

As one example, the system can determine the phrase “hypothetical router configuration” is related to the query “reset hypothetical router” based on historical data indicating the two queries are submitted proximate (in time and/or order) to one another by a large number of users of a search system.

In some such implementations, the query canonicalization system can determine the related query “reset hypothetical router” is not a well-formed query, and can determine a well-formed variant of the related query, such as: “how to reset hypothetical router”.

The well-formed variant “how to reset hypothetical router” can then be associated, in a database, as a related query for “hypothetical router configuration”—and can optionally supplant any related query association between “reset hypothetical router” and “hypothetical router configuration”.
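That database association can be pictured as a simple lookup table, using the patent’s own hypothetical queries; the dictionaries here are of course a toy stand-in for a real query database.

```python
# Toy stand-in for the association the patent describes: related queries
# are supplanted by their well-formed variants before being shown.
related = {"hypothetical router configuration": "reset hypothetical router"}
well_formed_variant = {"reset hypothetical router": "how to reset hypothetical router"}

# Supplant each raw related query with its well-formed variant, if one exists.
for query, rel in related.items():
    related[query] = well_formed_variant.get(rel, rel)

print(related["hypothetical router configuration"])
# -> how to reset hypothetical router
```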

The patent tells us that sometimes a well-formed related query might be presented as a link to search results.

Again, one of the features of a well-formed query is that it is grammatical, is an explicit question, and contains no spelling errors.

The patent application can be found at:

Canonicalizing Search Queries to Natural Language Questions
Inventors Manaal Faruqui and Dipanjan Das
Applicants Google LLC
Publication Number 20200167379
Filed: January 18, 2019
Publication Date May 28, 2020

Abstract

Techniques are described herein for training and/or utilizing a query canonicalization system. In various implementations, a query canonicalization system can include a classification model and a canonicalization model. A classification model can be used to determine if a search query is well-formed. Additionally, a canonicalization model can be used to determine a well-formed variant of a search query in response to determining a search query is not well-formed. In various implementations, a canonicalization model portion of a query canonicalization system can be a sequence to sequence model.

Well-Formed Query Takeaways

I have summarized the summary of the patent; if you want to learn more details, click through and read the detailed description. The two whitepapers I started the post off with describe databases of well-formed questions that people at Google (including the inventors of this patent) have built, and they show the effort Google has put into rewriting queries into well-formed queries, where the intent behind them can be better understood by the search engine.

As we have seen from this patent, the analysis that is undertaken to find canonical queries also is used to surface “people also search for” queries, which may also be canonicalized and displayed in search results.

A well-formed query is grammatically correct, contains no spelling mistakes, and asks an explicit question. It also makes it clear to the search engine what the intent behind the query may be.


Copyright © 2020 SEO by the Sea ⚓. This Feed is for personal non-commercial use only. If you are not reading this material in your news aggregator, the site you are looking at may be guilty of copyright infringement. Please contact SEO by the Sea, so we can take appropriate action immediately.

The post A Well-Formed Query Helps Search Engines Understand User Intent in the Query appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


A Well-Formed Query Helps Search Engines Understand User Intent in the Query

June 13, 2020 No Comments

A Well-Formed Query Helps a Search Engine understand User Intent Behind the Query

To start this post, I wanted to include a couple of whitepapers that include authors from Google. The authors of the first paper are the inventors of a patent application that was just published on April 28, 2020, and it is very good seeing a white paper from the inventors of a recent patent published by Google. Both papers are worth reading to get a sense of how Google is trying to rewrite queries into “Well-Formed Natural Language Questions.

August 28, 2018 – Identifying Well-formed Natural Language Questions

The abstract for that paper:

Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well-formed can enhance query understanding. Here, we introduce a new task of identifying a well-formed natural language question. We construct and release a dataset of 25,100 publicly available questions classified into well-formed and non-wellformed categories and report an accuracy of 70.7% on the test set. We also show that our classifier can be used to improve the performance of neural sequence-to-sequence models for generating questions for reading comprehension.

The paper provides examples of well-formed queries and ill-formed queries:

Examples of Well forned and non wll formed queries

November 21, 2019 – How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for Rewriting Ill-Formed Questions

The abstract for that paper:

We present a large-scale dataset for the task of rewriting an ill-formed natural language question to a well-formed one. Our multi-domain question rewriting (MQR) dataset is constructed from human contributed Stack Exchange question edit histories. The dataset contains 427,719 question pairs which come from 303 domains. We provide human annotations for a subset of the dataset as a quality estimate. When moving from ill-formed to well-formed questions, the question quality improves by an average of 45 points across three aspects. We train sequence-to-sequence neural models on the constructed dataset and obtain an improvement of 13.2%in BLEU-4 over baseline methods built from other data resources. We release the MQR dataset to encourage research on the problem of question rewriting.

examples of ill-formed and well-formed questions

The patent application I am writing about was filed on January 18, 2019, which puts it around halfway between those two whitepapers, and both of them are recommended to get a good sense of the topic if you are interested in featured snippets, people also ask questions, and queries that Google tries to respond to. The Second Whitepaper refers to the first one, and tells us how it is trying to improve upon it:

Faruqui and Das (2018) introduced the task of identifying well-formed natural language questions. In this paper,we take a step further to investigate methods to rewrite ill-formed questions into well-formed ones without changing their semantics. We create a multi-domain question rewriting dataset (MQR) from human contributed StackExchange question edit histories.

Rewriting Ill-Formed Search Queries into Well-Formed Queries

Interestingly, the patent is also about rewriting search Queries.

It starts by telling us that “Rules-based rewrites of search queries have been utilized in query processing components of search systems.”

Sometimes this happens by removing certain stop-words from queries, such as “the”, “a”, etc.

After Rewriting a Query

Once a query is rewritten, it many be “submitted to the search system and search results returned that are responsive to the rewritten query.”

The patent also tells us about “people also search for X” queries (first patent I have seen them mentioned in.)

We are told that these similar queries are used to recommend additional queries that are related to a submitted query (e.g., “people also search for X”).

These “similar queries to a given query are often determined by navigational clustering.”

As an example, we are told that for the query “funny cat pictures”, a similar query of “funny cat pictures with captions” may be determined because that similar query is frequently submitted by searchers following submission of the query “funny cat pictures”.

Determining if a Query is a Well Formed Query

The patent tells us about a process that can be used to determine if a natural language search query is well-formed, and if it is not, to use a trained canonicalization model to create a well-formed variant of that natural language search query.

First, we are given a definition of “Well-formedness” We are told that it is “an indication of how well a word, a phrase, and/or another additional linguistic element (s) conform to the grammar rules of a particular language.”

These are three steps to tell whether something is a well-formed query. It is:

  • Grammatically correct
  • Does not contain spelling errors
  • Asks an explicit question

The first paper from the authors of this patent tells us the following about queries:

The lack of regularity in the structure of queries makes it difficult to train models that can optimally process the query to extract information that can help understand the user intent behind the query.

That translates to the most important takeaway for this post:

A Well-Formed Query is structured in a way that allows a search engine to understand the user intent behind the query

The patent gives us an example:

“What are directions to Hypothetical Café?” is an example of a well-formed version of the natural language query “Hypothetical Café directions”.

How the Classification Model Works

It also tells us that the purpose behind the process in the patent is to determine whether a query is well-formed using a trained classification model and/or a well-formed variant of a query and if that well-formed version can be generated using a trained canonicalization model.

It can create that model by using features of the search query as input to the classification model and deciding whether the search query is well-formed.

Those features of the search query can include, for example:

  • Character(s)
  • Word(s)
  • Part(s) of speech
  • Entities included in the search query
  • And/or other linguistic representation(s) of the search query (such as word n-grams, character bag of words, etc.)

And the patent tells us more about the nature of the classification model:

The classification model is a machine learning model, such as a neural network model that contains one or more layers such as one or more feed-forward layers, softmax layer(s), and/or additional neural network layers. For example, the classification model can include several feed-forward layers utilized to generate feed-forward output. The resulting feed-forward output can be applied to softmax layer(s) to generate a measure (e.g., a probability) that indicates whether the search query is well-formed.

A Canonicalization Model May Be Used

If the Classification model determines that the search query is not a well-formed query, the query is turned over to a trained canonicalization model to generate a well-formed version of the search query.

The search query may have some of its features extracted from the search query, and/or additional input processed using the canonicalization model to generate a well-formed version that correlates with the search query.

The canonicalization model may be a neural network model. The patent provides more details on the nature of the neural network used.

The neural network can indicate a well-formed query version of the original query.

We are also told that in addition to identifying a well-formed query, it may also determine “one or more related queries for a given search query.”

A related query can be determined based on the related query being frequently submitted by users following the submission of the given search query.

The query canonicalization system can also determine if the related query is a well-formed query. If it isn’t, then it can determine a well-formed variant of the related query.

For example, in response to the submission of the given search query, a selectable version of the well-formed variant can be presented along with search results for the given query and, if selected, the well-formed variant (or the related query itself in some implementations) can be submitted as a search query and results for the well-formed variant (or the related query) then presented.

Again, the idea of “intent” surfaces in the patent regarding related queries (people also search for queries)

The value of showing a well-formed variant of a related query, instead of the related query itself, is to let a searcher more easily and/or more quickly understand the intent of the related query.

The patent tells us that this has a lot of value by stating:

Such efficient understanding enables the user to quickly submit the well-formed variant to quickly discover additional information (i.e., result(s) for the related query or well-formed variant) in performing a task and/or enables the user to only submit such query when the intent indicates likely relevant additional information in performing the task.

We are given an example of a related well-formed query in the patent:

As one example, the system can determine the phrase “hypothetical router configuration” is related to the query “reset hypothetical router” based on historical data indicating the two queries are submitted proximate (in time and/or order) to one another by a large number of users of a search system.

In some such implementations, the query canonicalization system can determine the related query “reset hypothetical router” is not a well-formed query, and can determine a well-formed variant of the related query, such as: “how to reset hypothetical router”.

The well-formed variant “how to reset hypothetical router” can then be associated, in a database, as a related query for “hypothetical router configuration”—and can optionally supplant any related query association between “reset hypothetical router” and “hypothetical router configuration”.

The patent tells us that sometimes a well-formed related query might be presented as a link to search results.


A Well-Formed Query Helps a Search Engine Understand User Intent in the Query

June 9, 2020

To start this post, I wanted to include a couple of white papers whose authors are from Google. The authors of the first paper are the inventors of a patent application that was just published on May 28, 2020, and it is good to see a white paper from the inventors of a recently published Google patent. Both papers are worth reading to get a sense of how Google is trying to rewrite queries into “well-formed natural language questions.”

August 28, 2018 – Identifying Well-formed Natural Language Questions

The abstract for that paper:

Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well-formed can enhance query understanding. Here, we introduce a new task of identifying a well-formed natural language question. We construct and release a dataset of 25,100 publicly available questions classified into well-formed and non-wellformed categories and report an accuracy of 70.7% on the test set. We also show that our classifier can be used to improve the performance of neural sequence-to-sequence models for generating questions for reading comprehension.

The paper provides examples of well-formed queries and ill-formed queries:

Examples of well-formed and non-well-formed queries

November 21, 2019 – How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for Rewriting Ill-Formed Questions

The abstract for that paper:

We present a large-scale dataset for the task of rewriting an ill-formed natural language question to a well-formed one. Our multi-domain question rewriting (MQR) dataset is constructed from human contributed Stack Exchange question edit histories. The dataset contains 427,719 question pairs which come from 303 domains. We provide human annotations for a subset of the dataset as a quality estimate. When moving from ill-formed to well-formed questions, the question quality improves by an average of 45 points across three aspects. We train sequence-to-sequence neural models on the constructed dataset and obtain an improvement of 13.2% in BLEU-4 over baseline methods built from other data resources. We release the MQR dataset to encourage research on the problem of question rewriting.

examples of ill-formed and well-formed questions

The patent application I am writing about was filed on January 18, 2019, which puts it roughly halfway between those two white papers, and both are recommended reading if you are interested in featured snippets, People Also Ask questions, and the queries Google tries to respond to. The second white paper refers to the first one and tells us how it tries to improve upon it:

Faruqui and Das (2018) introduced the task of identifying well-formed natural language questions. In this paper, we take a step further to investigate methods to rewrite ill-formed questions into well-formed ones without changing their semantics. We create a multi-domain question rewriting dataset (MQR) from human contributed StackExchange question edit histories.

Rewriting Ill-Formed Search Queries into Well-Formed Queries

Interestingly, the patent is also about rewriting search queries.

It starts by telling us that “Rules-based rewrites of search queries have been utilized in query processing components of search systems.”

Sometimes this happens by removing certain stop-words from queries, such as “the”, “a”, etc.

After Rewriting a Query

Once a query is rewritten, it may be “submitted to the search system and search results returned that are responsive to the rewritten query.”

The patent also tells us about “people also search for X” queries (the first patent I have seen mention them).

We are told that these similar queries are used to recommend additional queries that are related to a submitted query (e.g., “people also search for X”).

These “similar queries to a given query are often determined by navigational clustering.”

As an example, we are told that for the query “funny cat pictures”, a similar query of “funny cat pictures with captions” may be determined because that similar query is frequently submitted by searchers following submission of the query “funny cat pictures”.

Determining if a Query is a Well Formed Query

The patent tells us about a process that can be used to determine if a natural language search query is well-formed, and if it is not, to use a trained canonicalization model to create a well-formed variant of that natural language search query.

First, we are given a definition of “well-formedness.” We are told that it is “an indication of how well a word, a phrase, and/or another additional linguistic element(s) conform to the grammar rules of a particular language.”

There are three criteria for deciding whether something is a well-formed query. It:

  • Is grammatically correct
  • Contains no spelling errors
  • Asks an explicit question
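Those three criteria can be sketched as a toy rule-based check. This is illustrative only; the patent describes a trained classification model, not hand-written rules, and the tiny vocabulary and crude grammar test below are stand-ins:

```python
# Illustrative only: a toy rule-based check of the three well-formedness
# criteria. A real system would use a trained classification model.

QUESTION_WORDS = {"what", "who", "where", "when", "why", "how", "which"}
KNOWN_WORDS = {  # tiny stand-in vocabulary for spell checking
    "what", "are", "directions", "to", "hypothetical", "cafe", "how",
    "do", "i", "reset", "my", "router", "funny", "cat", "pictures",
}

def looks_well_formed(query: str) -> bool:
    words = query.lower().rstrip("?").split()
    if not words:
        return False
    # criterion 3: asks an explicit question
    asks_explicit_question = (query.rstrip().endswith("?")
                              and words[0] in QUESTION_WORDS)
    # criterion 2: contains no spelling errors (checked against the toy vocab)
    no_spelling_errors = all(w in KNOWN_WORDS for w in words)
    # criterion 1: grammaticality is the hard part; a minimum length is
    # only a crude placeholder for a real grammar check
    grammatical = len(words) >= 3
    return asks_explicit_question and no_spelling_errors and grammatical

print(looks_well_formed("What are directions to hypothetical cafe?"))  # True
print(looks_well_formed("hypothetical cafe directions"))               # False
```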

The first paper from the authors of this patent tells us the following about queries:

The lack of regularity in the structure of queries makes it difficult to train models that can optimally process the query to extract information that can help understand the user intent behind the query.

That translates to the most important takeaway for this post:

A Well-Formed Query is structured in a way that allows a search engine to understand the user intent behind the query

The patent gives us an example:

“What are directions to Hypothetical Café?” is an example of a well-formed version of the natural language query “Hypothetical Café directions”.

How the Classification Model Works

It also tells us that the purpose of the process in the patent is to determine whether a query is well-formed using a trained classification model and, if it is not, to generate a well-formed variant of the query using a trained canonicalization model.

It does this by using features of the search query as input to the classification model, which decides whether the search query is well-formed.

Those features of the search query can include, for example:

  • Character(s)
  • Word(s)
  • Part(s) of speech
  • Entities included in the search query
  • And/or other linguistic representation(s) of the search query (such as word n-grams, character bag of words, etc.)

And the patent tells us more about the nature of the classification model:

The classification model is a machine learning model, such as a neural network model that contains one or more layers such as one or more feed-forward layers, softmax layer(s), and/or additional neural network layers. For example, the classification model can include several feed-forward layers utilized to generate feed-forward output. The resulting feed-forward output can be applied to softmax layer(s) to generate a measure (e.g., a probability) that indicates whether the search query is well-formed.
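To make the quoted architecture concrete, here is a minimal sketch of feed-forward layers whose output passes through a softmax to yield a probability that the query is well-formed. The weights, features, and class ordering are arbitrary illustrations, not anything specified in the patent:

```python
import math

# Minimal sketch: dense (feed-forward) layers plus a softmax, as the patent
# describes. Weights here are made up; a real model learns them from
# labeled well-formed/ill-formed queries.

def dense(inputs, weights, bias, relu=True):
    # one fully connected layer, optionally with ReLU activation
    out = [sum(w * x for w, x in zip(row, inputs)) + b
           for row, b in zip(weights, bias)]
    return [max(0.0, o) for o in out] if relu else out

def softmax(logits):
    # stable softmax: shift by the max before exponentiating
    exps = [math.exp(z - max(logits)) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# toy features: [ends_with_question_mark, starts_with_wh_word, word_count/10]
features = [1.0, 1.0, 0.6]
hidden = dense(features, [[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [0.0, 0.0])
logits = dense(hidden, [[1.0, 0.0], [-1.0, 0.5]], [0.0, 0.0], relu=False)
p_well, p_ill = softmax(logits)  # class order chosen arbitrarily here
print(f"P(well-formed) = {p_well:.2f}")  # → 0.98
```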

A Canonicalization Model May Be Used

If the classification model determines that the search query is not a well-formed query, the query is turned over to a trained canonicalization model to generate a well-formed version of the search query.

Features may be extracted from the search query and, along with any additional input, processed using the canonicalization model to generate a well-formed version that corresponds to the search query.

The canonicalization model may be a neural network model. The patent provides more details on the nature of the neural network used.

The neural network can indicate a well-formed query version of the original query.
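As a rough illustration of the input/output contract of that canonicalization step, the sketch below substitutes hand-written templates for the trained sequence-to-sequence model the patent actually describes; the trigger words and templates are invented for the example:

```python
# Illustrative stand-in for canonicalization. The patent describes a trained
# sequence-to-sequence neural model; these templates only show the contract:
# keyword query in, well-formed question out.

TEMPLATES = [
    # (trigger word found in the keyword query, question template)
    ("directions", "what are directions to {rest}?"),
    ("reset",      "how do i reset {rest}?"),
    ("price",      "what is the price of {rest}?"),
]

def canonicalize(query: str) -> str:
    words = query.lower().split()
    for trigger, template in TEMPLATES:
        if trigger in words:
            rest = " ".join(w for w in words if w != trigger)
            return template.format(rest=rest).capitalize()
    return query  # no rewrite found; return the query unchanged

print(canonicalize("hypothetical cafe directions"))
# → What are directions to hypothetical cafe?
```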

We are also told that in addition to identifying a well-formed query, it may also determine “one or more related queries for a given search query.”

A related query can be determined based on the related query being frequently submitted by users following the submission of the given search query.

The query canonicalization system can also determine if the related query is a well-formed query. If it isn’t, then it can determine a well-formed variant of the related query.

For example, in response to the submission of the given search query, a selectable version of the well-formed variant can be presented along with search results for the given query and, if selected, the well-formed variant (or the related query itself in some implementations) can be submitted as a search query and results for the well-formed variant (or the related query) then presented.

Again, the idea of “intent” surfaces in the patent regarding related queries (“people also search for” queries):

The value of showing a well-formed variant of a related query, instead of the related query itself, is to let a searcher more easily and/or more quickly understand the intent of the related query.

The patent tells us that this has a lot of value by stating:

Such efficient understanding enables the user to quickly submit the well-formed variant to quickly discover additional information (i.e., result(s) for the related query or well-formed variant) in performing a task and/or enables the user to only submit such query when the intent indicates likely relevant additional information in performing the task.

We are given an example of a related well-formed query in the patent:

As one example, the system can determine the phrase “hypothetical router configuration” is related to the query “reset hypothetical router” based on historical data indicating the two queries are submitted proximate (in time and/or order) to one another by a large number of users of a search system.

In some such implementations, the query canonicalization system can determine the related query “reset hypothetical router” is not a well-formed query, and can determine a well-formed variant of the related query, such as: “how to reset hypothetical router”.

The well-formed variant “how to reset hypothetical router” can then be associated, in a database, as a related query for “hypothetical router configuration”—and can optionally supplant any related query association between “reset hypothetical router” and “hypothetical router configuration”.
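The router example above can be sketched end to end. Everything in this snippet (the co-occurrence table, the toy well-formedness check, the stored variant) is an illustrative assumption about how such a lookup might be wired together, not Google's implementation:

```python
from typing import Optional

# Hedged sketch of the related-query flow: look up a query users frequently
# submit next, check whether it is well-formed, and if not, supplant it with
# its well-formed variant. All data below is invented for the example.

CO_SUBMITTED = {  # query -> query frequently submitted next by users
    "hypothetical router configuration": "reset hypothetical router",
}

WELL_FORMED_VARIANTS = {  # output a canonicalization model might produce
    "reset hypothetical router": "how to reset hypothetical router",
}

def is_well_formed(query: str) -> bool:
    # stand-in for the trained classification model
    return query.endswith("?") or query.startswith(("how ", "what ", "why "))

def related_suggestion(query: str) -> Optional[str]:
    related = CO_SUBMITTED.get(query)
    if related is None:
        return None
    if is_well_formed(related):
        return related
    # show the well-formed variant in place of the raw related query
    return WELL_FORMED_VARIANTS.get(related, related)

print(related_suggestion("hypothetical router configuration"))
# → how to reset hypothetical router
```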

The patent tells us that sometimes a well-formed related query might be presented as a link to search results.

Again, one of the features of a well-formed query is that it is grammatical, is an explicit question, and contains no spelling errors.

The patent application can be found at:

Canonicalizing Search Queries to Natural Language Questions
Inventors: Manaal Faruqui and Dipanjan Das
Applicants: Google LLC
Publication Number: 20200167379
Filed: January 18, 2019
Publication Date: May 28, 2020

Abstract

Techniques are described herein for training and/or utilizing a query canonicalization system. In various implementations, a query canonicalization system can include a classification model and a canonicalization model. A classification model can be used to determine if a search query is well-formed. Additionally or alternatively, a canonicalization model can be used to determine a well-formed variant of a search query in response to determining a search query is not well-formed. In various implementations, a canonicalization model portion of a query canonicalization system can be a sequence to sequence model.

Well-Formed Query Takeaways

I have summarized the summary of the patent; if you want more details, click through and read the detailed description. The two white papers I started the post off with describe databases of well-formed questions that people at Google (including the inventors of this patent) have built, and they show the effort Google has put into rewriting queries so that they are well-formed queries whose intent can be better understood by the search engine.

A well-formed query is grammatically correct, contains no spelling mistakes, and asks an explicit question. It also makes it clear to the search engine what the intent behind the query may be.


Copyright © 2020 SEO by the Sea ⚓. This Feed is for personal non-commercial use only. If you are not reading this material in your news aggregator, the site you are looking at may be guilty of copyright infringement. Please contact SEO by the Sea, so we can take appropriate action immediately.

The post A Well-Formed Query Helps a Search Engine Understand User Intent in the Query appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓


Funerals are tough. Ever Loved helps you pay for them

March 9, 2019

Alison Johnston didn’t plan to build a startup around death. An early employee at Q&A app Aardvark, which was bought by Google, she founded tutoring app InstaEDU and sold it to Chegg. She made mass-market consumer products. But then, “I had a family member who was diagnosed with terminal cancer and I thought about how she’d be remembered,” she recalls. Inventing the next big social app suddenly felt less consequential.

“I started looking into the funeral industry and discovered that there were very few resources to support and guide families who had recently experienced a death. It was difficult to understand and compare options and prices (which were also much higher than I ever imagined), and there weren’t good tools to share information and memories with others,” Johnston tells me. Bombarded by options and steep costs that average $9,000 per funeral in the US, families in crisis become overwhelmed.

Ever Loved co-founder and CEO Alison Johnston

Johnston’s startup Ever Loved wants to provide peace of mind during the rest-in-peace process. It’s a comparison-shopping and review site for funeral homes, cemeteries, caskets, urns, and headstones. It offers price guides and recommends top Amazon funeral products, taking a 5 percent affiliate fee that finances Ever Loved’s free memorial-site maker for sharing funeral details and collecting memories and remembrances. Families can even set up fundraisers to cover their costs or support a charity.

The startup took seed funding from Social Capital and a slew of angel investors about a year ago. Now hundreds of thousands of users are visiting Ever Loved shopping and memorial sites each month. Eventually Ever Loved wants to build its own marketplace of funeral services and products that takes a 10 percent cut of purchases, while also selling commerce software to funeral homes.

“People don’t talk about death. It’s taboo in our society and most people don’t plan ahead at all” Johnston tells me. Rushing to arrange end-of-life logistics is enormously painful, and Johnston believes Ever Loved can eliminate some of that stress. “I wanted to explore areas where fewer people in Silicon Valley had experience and that weren’t just for young urban professionals.”

There’s a big opportunity to modernize this aging industry with a sustainable business model and empathy as an imperative. Eighty-six percent of funeral homes are independent, Johnston says, so few have the resources to build tech products. One of the few big companies in the space, the publicly traded Service Corporation International, with a $7 billion market cap, has rolled up funeral homes and cemeteries but has done little to improve pricing transparency or the user experience for families in hardship. Rates and reviews often aren’t available, so customers can end up overpaying for an underwhelming selection.

On the startup side, there are direct competitors like FuneralWise, which is focused on education and forums but lacks robust booking features or a memorial-site maker. Funeral360 is Ever Loved’s biggest rival, but Ever Loved’s memorial sites look better, and it has much deeper step-by-step pricing estimates and information on funeral homes.

Johnston wants to use revenue from end-of-life commerce to subsidize Ever Loved’s memorial and fundraiser features so they can stay free or cheap while generating leads and awareness for the marketplace side. But no one has hit scale and truly become the funeral-industry equivalent of wedding site The Knot.

I’ve known Johnston since college, and she’s always had impressive foresight for what was about to blow up. From an extremely early gig at Box.com to Q&A and on-demand answers with Aardvark to the explosion of online education with InstaEDU, she’s managed to get out in front of the megatrends. And tech’s destiny to overhaul unsexy businesses is one of the biggest right now.

Amazon has made us expect to see prices and reviews up front, so Ever Loved has gathered rate estimates for about two-thirds of US funeral homes and is pulling in testimonials. You can search for 4-star+ funeral homes nearby and instantly get high-quality results. Meanwhile, funeral homes can sign up to claim their page and add information.

Facebook popularized online event pages. But its heavy-handed prerogatives, generalist tone, and backlash can make it feel like a disrespectful place to host funeral service details. And with people leaving their hometowns, newspapers can’t spread the info properly. Ever Loved is purpose-built for these serious moments, makes managing invites easy, and also offers a place to collect obituaries, photos, and memories.

Rather than making mourners click through a link to a GoFundMe page, which can be a chore, Ever Loved hosts fundraisers right on its memorial sites to maximize donations. That’s crucial, since funerals cost more than most people have saved. Ever Loved only charges a processing fee and allows visitors to add an additional tip, so it’s no more expensive than popular fundraising sites.

Next, “the two big things are truly building out booking through our site and expanding into some of the other end-of-life logistics,” Johnston tells me. Since the funeral is just the start of the post-death process, Ever Loved is well positioned to move into estate planning. “There are literally dozens of things you have to do after someone passes away — contacting the social security office, closing out bank accounts and Facebook profiles…”

Johnston reveals that 44 percent of families say they had arguments while divvying up assets, a process that takes an average of 560 hours, or about three months of full-time work. As the baby boomer era ends over the next 30 years, $30 trillion in assets are expected to transfer through estates, she claims. Earning a tiny cut of that by giving mourners tools that outline popular ways to divide estates and alleviate disagreements could make Ever Loved quite lucrative.

“When I first started out, I was pretty awkward about telling people about this. We’re death averse, and that hinders us in a lot of ways,” Johnston concludes. My own family struggled with this, as an unwillingness to accept mortality kept my grandparents from planning for after they were gone. “But I quickly learned that this was a huge conversation starter rather than a turn-off. This is a topic people want to talk about more and educate themselves more on.” Tech too often merely makes life and work easier for those who already have it good. Tech that tempers tragedy is a welcome evolution for Silicon Valley.


Social – TechCrunch


Foursquare’s Hypertrending helps you spy on the coolest local happenings

March 9, 2019

Ten years after the launch of Foursquare at SXSW, the company is laying its technology bare with a futuristic version of its old app that doesn’t require a check-in at all. The godfather of location apps is returning to the launchpad with Hypertrending, but this time it hopes to learn what developers might do with real-time info about where people are and where they aren’t.

Hypertrending uses Foursquare’s Pilgrim technology, which is baked into Foursquare’s apps and offered as a third-party enterprise tool, to show where phones are in real time over the course of SXSW in Austin, Texas.

This information is relayed through dots on a map. The size of those dots is a reflection of the number of devices in that place at a given time. Users can filter the map by All places, Food, Nightlife and Fun (events and parties).

Hypertrending also has a Top 100 list that is updated in real time to show which places are super popular, with arrows to show whether a place is trending up or down.

Before you throw up your hands in outrage, the information on Hypertrending is aggregated and anonymized (just like it is within Pilgrim), and there are no trails showing the phone’s route from one place to another. Dots only appear on the map when the phone arrives at a destination.
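As a rough sketch of what “aggregated and anonymized” means here, the toy code below collapses individual device arrivals into per-venue counts (the dot sizes) and derives an up/down arrow per venue, as on the Top 100 list. Venue names and numbers are made up; this is not Pilgrim's actual pipeline:

```python
from collections import Counter

# Illustrative aggregation: device ids are dropped entirely; only per-venue
# counts (dot sizes) and a trend direction vs. the previous interval remain.

def dot_sizes(arrivals):
    """arrivals: list of (device_id, venue) pairs -> per-venue device counts."""
    return Counter(venue for _device, venue in arrivals)

def trending(current: Counter, previous: Counter):
    """Arrow direction per venue, like the Top 100 list's up/down arrows."""
    return {v: "up" if current[v] > previous.get(v, 0) else "down"
            for v in current}

prev = Counter({"Taco Stand": 40, "Convention Hall": 90})
now = dot_sizes([("a", "Taco Stand"), ("b", "Taco Stand"),
                 ("c", "Convention Hall")] * 30)
print(now)                  # Counter({'Taco Stand': 60, 'Convention Hall': 30})
print(trending(now, prev))  # {'Taco Stand': 'up', 'Convention Hall': 'down'}
```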

Hypertrending was cooked up in Foursquare’s skunkworks division, Foursquare Labs, led by the company’s co-founder Dennis Crowley.

The feature is only available during SXSW and in the Austin area, and thus far Foursquare has no plans to launch this publicly. So… what’s the deal?

First and foremost, Hypertrending is about showing off the technology. In many ways, Hypertrending isn’t new at all, in that it runs off the Pilgrim technology that has powered Foursquare since around 2014.

Pilgrim is the tech that recognizes you’ve just sat down at a restaurant and offers up a tip about the menu on Foursquare City Guide, and it’s the same tech that notices you’ve just touched down in a new city and makes some recommendations on places to go. In Swarm, it’s the tech that offers up a list of all the places you’ve been in case you want to retroactively check in to them.

That sounds rather simple, but a combination of Foursquare’s 10 years’ worth of location data and Pilgrim’s hyper-precision is unparalleled when it comes to accuracy, according to Crowley.

Whereas other location tech might not understand the difference between you being in the cafe on the first floor or the salon on the second floor, or the bar that shares a wall with both, Pilgrim does.

This is what led Foursquare to build out the Pilgrim SDK, which now sees more than 100 million user-confirmed visits per month. Apps that use the Pilgrim SDK offer users the ability to opt-in to Foursquare’s always-on location tracking for its mobile app panel in the U.S., which has grown to 10 million devices.

These 10 million phones provide the data that powers Hypertrending.

Now, the data itself might not be new, per se. But Foursquare has never visualized the information quite like this, even for enterprise customers.

Whereas customers of Foursquare’s Place Insights, Pinpoint and Attribution products get snapshots into their own respective audiences, Hypertrending shows on a large scale what Foursquare’s tech is capable of: knowing not only where people are, but where people aren’t.

This brings us back to SXSW, which happens to be the place where Foursquare first launched back in 2009.

“This week has felt a little nostalgic as we try to get this thing ready to go,” said Crowley. “It’s not that dissimilar to when we went to SXSW in 2009 and showed off Foursquare 1.0. There is this curious uncertainty and my whole thing is to get a sense of what people think of it.”

Crowley recalled his first trip to SXSW with co-founder Naveen Selvadurai. They couldn’t afford an actual pass to the show so they just went from party to party showing people the app and hearing what they thought. Crowley said that he doesn’t expect Hypertrending to be some huge consumer app.

“I want to show off what we can do with the technology and the data and hopefully inspire developers to do interesting stuff with this raw visualization of where phones are at,” said Crowley. “What would you do if you had access to this? Would you make something cool and fun or make something obnoxious and creepy?”

Beyond the common tie of SXSW, Hypertrending brings Foursquare’s story full circle in the fact that it’s potentially the most poignant example of what Crowley always wanted Foursquare to be. Location is one of the most powerful pieces of information about an individual. One’s physical location is, in many ways, the most purely truthful piece of information about them in a sea of digital clicks and scroll-bys.

If this data could be harnessed properly, without any work on the side of the consumer, what possibilities might open up?

“We’ve long talked about making ‘a check-in button you never had to press,’ ” said Crowley in the blog post. “Hypertrending is part of that vision realized, spread across multiple apps and services.”

Crowley also admits in the blog post that Hypertrending walks a fine line between creepy and cool, which is another reason for the ephemeral nature of the feature. It’s also the exact reason he wants to open it up to everyone.

From the blog post:

After 10 years, it’s clear that we (Foursquare!) are going to play a role in influencing how contextual-aware technologies shape the future – whether that’s apps that react to where you are and where you’ve been, smarter virtual assistants (e.g Alexa, Siri, Marsbot) that understand how you move through cities, or AR objects that need to appear at just the right time in just the right spot. We want to build a version of the future that we’re proud of, and we want your input as we get to work building it.

And…

We made Hypertrending to show people how Foursquare’s panel works in terms of what it can do (and what it will not do), as well as to show people how we as a company think about navigating this space. We feel the general trend with internet and technology companies these days has been to keep giving users a more and more personalized (albeit opaquely personalized) view of the world, while the companies that create these feeds keep the broad “God View” to themselves. Hypertrending is one example of how we can take Foursquare’s aggregate view of the world and make it available to the users who make it what it is. This is what we mean when we talk about “transparency” – we want to be honest, in public, about what our technology can do, how it works, and the specific design decisions we made in creating it.

We asked Crowley what would happen if brands and marketers loved the idea of Hypertrending, but general consumers were freaked out?

“This is an easy question,” said Crowley. “If this freaks people out, we don’t build stuff with it. We’re not ready for it yet. But I’d go back to the drawing board and ask ‘What do we learn from people that are freaked out about it that would help us communicate to them,’ or ‘what are the changes we could make to this that would make people comfortable,’ or ‘what are the things we could build that would illustrate the value of this that this view didn’t communicate?’ ”

As mentioned above, Hypertrending is only available during the SXSW conference in the Austin area. Users can access Hypertrending through both the Foursquare City Guide app and Swarm by simply shaking their phone.


Enterprise – TechCrunch


Koala-sensing drone helps keep tabs on drop bear numbers

March 2, 2019

It’s obviously important to Australians to make sure their koala population is closely tracked — but how can you do so when the suckers live in forests and climb trees all the time? With drones and AI, of course.

A new project from Queensland University of Technology combines some well-known techniques in a new way to help keep an eye on wild populations of the famous and soft marsupials. They used a drone equipped with a heat-sensing camera, then ran the footage through a deep learning model trained to look for koala-like heat signatures.

It’s similar in some ways to an earlier project from QUT in which dugongs — endangered sea cows — were counted along the shore via aerial imagery and machine learning. But this is considerably harder.

A koala

“A seal on a beach is a very different thing to a koala in a tree,” said study co-author Grant Hamilton in a news release, perhaps choosing not to use dugongs as an example because comparatively few know what one is.

“The complexity is part of the science here, which is really exciting,” he continued. “This is not just somebody counting animals with a drone, we’ve managed to do it in a very complex environment.”

The team sent their drone out in the early morning, when they expected to see the greatest contrast between the temperature of the air (cool) and tree-bound koalas (warm and furry). It traveled as if it were a lawnmower trimming the tops of the trees, collecting data from a large area.

Infrared image, left, and output of the neural network highlighting areas of interest

This footage was then put through a deep learning system trained to recognize the size and intensity of the heat put out by a koala, while ignoring other objects and animals like cars and kangaroos.
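The study used a trained deep learning model, but the filtering idea (keep warm blobs whose size and intensity look koala-like, reject hotter or larger objects) can be illustrated with a toy threshold check. All numbers and thresholds below are invented for the example:

```python
# Illustrative only: a toy stand-in for the trained model, keeping warm
# blobs in a koala-like size/temperature range while rejecting larger or
# hotter objects such as cars. Values are made up for the example.

# detections: (label, blob_area_px, mean_temperature_c)
detections = [
    ("blob-1", 120, 34.5),   # koala-sized, body temperature
    ("blob-2", 2000, 60.0),  # large and hot: likely a car engine
    ("blob-3", 900, 36.0),   # warm but too big: likely a kangaroo
]

def koala_like(area_px, temp_c,
               area_range=(50, 400), temp_range=(30.0, 40.0)):
    # keep only blobs whose size AND heat fall in the koala-like window
    return (area_range[0] <= area_px <= area_range[1]
            and temp_range[0] <= temp_c <= temp_range[1])

candidates = [name for name, area, temp in detections if koala_like(area, temp)]
print(candidates)  # ['blob-1']
```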

For these initial tests, the accuracy of the system was checked by comparing the inferred koala locations with ground truth measurements provided by GPS units on some animals and radio tags on others. Turns out the system found about 86 percent of the koalas in a given area, considerably better than an “expert koala spotter,” who rates about a 70. Not only that, but it’s a whole lot quicker.

“We cover in a couple of hours what it would take a human all day to do,” Hamilton said. But it won’t replace human spotters or ground teams. “There are places that people can’t go and there are places that drones can’t go. There are advantages and downsides to each one of these techniques, and we need to figure out the best way to put them all together. Koalas are facing extinction in large areas, and so are many other species, and there is no silver bullet.”

Having tested the system in one area of Queensland, the team is now going to head out and try it in other areas of the coast. Other classifiers are planned to be added as well, so other endangered or invasive species can be identified with similar ease.

Their paper was published today in the journal Nature Scientific Reports.

Gadgets – TechCrunch


ExceptionAlly helps parents navigate the special needs education labyrinth

June 28, 2018

The challenges faced by parents of kids with special needs are always unique, but in one way they are surely much alike: making sure the kids are getting what they need from schools is way harder than it ought to be. ExceptionAlly is a new startup that aims to help parents understand, organize and communicate all the info they need to make sure their child is getting the help they require.

“There are millions of parents out there trying to navigate special education. And parents with special needs should have access to more information than what one school tells them,” said ExceptionAlly co-founder and CEO Rayford Davis. “Those with the means actually hire special education attorneys, but those are few and far between. We thought, how can we democratize this? So we’re trying to do what TurboTax did for CPAs: deliver a large percentage of the value for a small percentage of the cost.”

The company just emerged from Y Combinator and is pursuing full deployment ahead of this school year, with a visibility push during the usual back-to-school dates. It’s still early days, but Davis tells me they already have thousands of users who are taking advantage of the free and paid aspects of the service.

Just because a parent has a kid with dyslexia, or a hearing impairment, or a physical disability, doesn’t mean they suddenly become an expert in what resources are out there for those kids — what’s required by law, what a school offers voluntarily and so on. Achieving fluency in these complex issues is a big ask on top of all the usual parental duties — and on top of that, parents and schools are often put in adversarial positions.

There are resources out there for parents, certainly, but they’re scattered and often require a great deal of effort on the parents’ part. So the first goal of the service is to educate and structure the parents’ information on the systems they’re dealing with.

Based on information provided by the parent, such as their kid’s conditions or needs, along with other details like school district and state, the platform helps the parent understand the condition itself, what they can expect from a school and what their rights are. That could range from something as simple as moving a kid to the front row of a classroom to knowing how frequently the school is required to share reports on that kid’s progress.

Parents rarely know the range of accommodations a school can offer, Davis said, and even the schools themselves might not know or properly explain what they can or must provide if asked.

For instance, an IEP, or individual education plan, and yearly goals are required for every student with special needs, along with meetings and progress reports. These are often skipped or, if not, done in a rote way that isn’t personalized.

Davis said that by helping parents collaborate with the school and teacher on IEPs and other facets of the process, they accomplish several things. First, the parent feels more confident and involved in their kid’s education, having brought something to the table. Second, less pressure is put on overworked teachers to produce these things in addition to everything else they have to do. And third, it either allows or compels schools to provide all the resources they have available.

Naturally, this whole process produces reams of documents: evaluations, draft plans, lesson lists, observations, reports and so on. “If you talk to any parent of a child with special needs, they’ll tell you how they have file cabinets full of paperwork,” Davis said.

ExceptionAlly lets you scan or send in all these docs, which it helps you organize into various categories and find again should you need them. A search feature based on OCR processing of the text is in development and should be in place for the latter half of the coming school year, which Davis pointed out is really when it starts being necessary.

That, he said, is when parents need to keep schools accountable. Being informed both on the kid’s progress and what the school is supposed to be doing lets the resulting process be collaborative rather than combative. But if the latter comes to pass, the platform has resources for parents to deploy to make sure the schools don’t dominate the power equation.

“If things progress that way, there’s a ‘take action toolkit’ to develop communications with the school,” Davis said. Ideally you don’t want to be the parent threatening legal action or calling the principal at home. A timely reminder of what was agreed upon and a nudge to keep things on track keeps it positive. “It’s sort of a reminder that we should all be on ‘team kid,’ if you will,” he added.

Schools, unfortunately, have not shown themselves to be highly willing to collaborate.

“We spent about six months talking to over a hundred schools and districts. What we found was not a lot of energy to provide parents with any more information than what the school was already providing,” Davis explained.

The sad truth here is that many schools are already neck-deep in administrative woes, teachers are overworked and take on new responsibilities every year, and the idea of volunteering for more doesn’t strike even the most well-intentioned schools as attractive. So instead, ExceptionAlly has focused on going directly to parents who, confident and well-armed, can take their case to the school on their own.

“Listen, we’re not getting ready to solve all of education today with our solution. We’re going to find that one mom who says, ‘I know there’s more out there, can someone help me find it?’ Yes, we’re going to help you do that,” he said. “Could that put pressure on the system? As long as it does it legally and lawfully, I am perfectly okay with advocating for a child and parents’ legal rights and putting pressure on the system to give them what they by law deserve.”

After the official launch ahead of this school year, the company plans to continue adding features. Rich text search is among them, and deeper understanding of the documents could both help automate storage and retrieval and also lead to new insights. At some point there will also be an optional program to submit a child’s information (anonymously, of course) to help create a database of what accommodations in which places and cases led to what outcomes — essentially aggregating information direct from the source.

ExceptionAlly has some free content to peruse if you’re curious whether it might be helpful for you or someone you know, and there are a variety of paid options should it seem like a good fit.


Startups – TechCrunch


Drip Capital helps exporters access working capital

June 21, 2018 No Comments

Drip Capital is raising a $ 20 million funding round from Accel, Wing VC and Sequoia India. The company is helping small exporters in emerging markets access working capital in order to finance big orders.

The startup also participated in Y Combinator back in 2015. Many small companies in emerging markets have to turn down big orders because they can’t finance them. Even if you’ve found a client in the U.S. or Europe, chances are the client won’t pay for the order until a month or two after signing the contract.

If you’re an importer or an exporter, capital is arguably your most valuable resource. You know where to source your products and how to ship them, but you still need to buy the goods yourself first.

And in many emerging markets, you have to pay for those goods right away. That creates a capital gap.

At the same time, local banks are often too slow and reject too many credit applications. Drip Capital thinks there’s an opportunity for a tech platform that can finance exporters much more quickly.

The startup is first focusing on India because it meets many of the criteria I listed. This could be particularly useful for small and medium businesses. Large companies don’t necessarily face the same issues as they can access capital more easily.

So far, Drip Capital has funded more than $ 100 million of trade. After signing up for the platform, you can submit invoices and open a credit line to finance your next orders. Family offices and institutional investors can also put money into Drip Capital’s fund and earn a return on their investment.
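To make the mechanics concrete, here is a minimal sketch of how invoice financing typically bridges that capital gap. All of the figures (an 80% advance rate, a 1% monthly fee, a two-month payment delay) are illustrative assumptions, not Drip Capital’s actual terms.

```python
# Hypothetical invoice-financing math: the lender advances most of the
# invoice value up front, charges a fee while the invoice is outstanding,
# and pays out the remainder when the buyer settles. Figures are examples.

def finance_invoice(invoice_value, advance_rate=0.80,
                    monthly_fee=0.01, months_outstanding=2):
    """Return (cash advanced now, total fee, remainder paid on settlement)."""
    advance = invoice_value * advance_rate            # paid to exporter today
    fee = advance * monthly_fee * months_outstanding  # cost of the financing
    remainder = invoice_value - advance - fee         # paid when buyer settles
    return advance, fee, remainder

advance, fee, remainder = finance_invoice(50_000)
print(advance, fee, remainder)  # 40000.0 800.0 9200.0
```

Under these assumed terms, an exporter with a $50,000 invoice gets $40,000 immediately instead of waiting two months, at a cost of $800, which is what lets them accept the next big order.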

This isn’t the only platform that helps exporters get paid faster, but larger competitors tend to do a bit of everything and optimize the supply chain for the world’s biggest companies. Drip Capital is focusing on a specific vertical.

With today’s funding round, the company plans to get more customers and expand to other countries.


Startups – TechCrunch