Toyota is enlisting the help of startup Preferred Networks, a Japanese company founded in 2014 with a focus on artificial intelligence and deep learning, to help move forward its goal of developing useful service robots that can assist people in everyday life.
The two companies announced a partnership today to collaborate on research and development that will use Toyota’s Human Support Robot (HSR) robotics platform. The platform, which Toyota originally created in 2012 and has been developing since, is a basic robot designed to work alongside people in everyday settings. Its primary uses involve offering basic care and support assistance in nursing and long-term care applications. Equipped with one arm, a display, cameras and a wheeled base, it can collect and retrieve items, and provide remote control and communication capabilities.
Preferred Networks already has some experience with Toyota’s HSR – it demonstrated one such robot cleaning a room fully autonomously at Japan’s CEATEC conference in 2018. The system could identify objects, respond to specific human instructions and, importantly, safely pick up and put down objects it couldn’t match to anything in its database.

Toyota will be providing “several dozen” HSR units to Preferred Networks for the startup to work on. Over the next three years, the two will collaborate on R&D, sharing the results of their work and the resulting intellectual property, with no restrictions on how either party uses the results of the joint work.
One of Toyota’s guiding goals as a company is to develop commercial home robotics that can work with people where they live. The automaker has a number of different projects in the works to make this happen, including through research at its Toyota Research Institute (TRI) subsidiary, which works with a number of academic institutions. Toyota also recently revealed a number of robotics projects it’s bringing to the 2020 Olympic Games in Tokyo, which will help it field test a number of its projects.
Want to rock out together even when you’re apart? Spotify has prototyped an unreleased feature called “Social Listening” that lets multiple people add songs to a queue they can all listen to. You just all scan one friend’s QR-style Spotify Social Listening code, and then anyone can add songs to the real-time playlist. Spotify could potentially expand the feature to synchronize playback so you’d actually hear the same notes at the same time, but for now it’s just a shared queue.
Social Listening could give Spotify a new viral growth channel, as users could urge friends to download the app to sync up. The intimate experience of co-listening might lead to longer sessions with Spotify, boosting ad plays or subscription retention. Plus, it could differentiate Spotify from Apple Music, YouTube Music, Tidal and other competing streaming services.
A Spotify spokesperson tells TechCrunch that “We’re always testing new products and experiences, but have no further news to share at this time.” Spotify already offers Collaborative Playlists friends can add to, but Social Listening is designed for real-time sharing. The company refused to provide further details on the prototype or when it might launch.
The feature is reminiscent of Turntable.fm, a 2011 startup that let people DJ in virtual desktop rooms that others could join to chat, vote on the next song and watch everyone’s avatars dance. But the company struggled to properly monetize through ad-free subscriptions and shut down in 2014. Facebook briefly offered its own version, called “Listen With…,” in 2012 that let Spotify or Rdio users synchronize music playback.
Spotify Social Listening was first spotted by reverse-engineering sorceress and frequent TechCrunch tipster Jane Manchun Wong. She discovered code for the feature buried in Spotify’s Android app, but for now it’s only available to Spotify employees. Social Listening appears in the menu of connected devices you can open while playing a song beside nearby Wi-Fi and Bluetooth devices. “Connect with friends: Your friends can add tracks by scanning this code – You can also scan a friend’s code,” the feature explains.
A help screen describes Social Listening as “Listen to music together. 1. On your phone, play a song and select (Connected Devices). You’ll see a code at the bottom of the screen. 2. On your friend’s phone, select the same (Connected Devices) icon, tap SCAN CODE, and point the camera at your code. 3. Now you can control the music together.” You’ll then see friends who are part of your Social Listening session listed in the Connected Devices menu. Users can also copy and share a link to join their Social Listening session that starts with the URL prefix https://open.spotify.com/socialsession/. Note that Spotify never explicitly says that playback will be synchronized.
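Based on that URL prefix, joining a session by link presumably means pulling a session identifier out of the path. Here is a rough illustrative sketch in Python; the format of the segment after /socialsession/ is an assumption, since Spotify hasn’t documented the links:

```python
from typing import Optional
from urllib.parse import urlparse

# Prefix reported in Spotify's app code; the session-ID path segment
# that follows it is assumed for illustration.
SOCIAL_SESSION_PREFIX = "https://open.spotify.com/socialsession/"

def extract_session_id(link: str) -> Optional[str]:
    """Return the session identifier from a Social Listening link, or None."""
    if not link.startswith(SOCIAL_SESSION_PREFIX):
        return None
    parsed = urlparse(link)
    # Path looks like /socialsession/<id>; keep only non-empty segments.
    segments = [s for s in parsed.path.split("/") if s]
    return segments[-1] if len(segments) == 2 else None

print(extract_session_id("https://open.spotify.com/socialsession/abc123"))  # abc123
```

Scanning a friend’s code would presumably resolve to the same kind of session identifier, just delivered via the camera instead of a shared link.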
With streaming apps largely having the same music catalog and similar $9.99 per month premium pricing, they have to compete on discovery and user experience. Spotify has long been in the lead here with its algorithmically personalized Discover Weekly playlists, which were promptly copied by Apple and SoundCloud.
Oddly, Spotify has stripped out some of its own social features over the years, eliminating the in-app messaging inbox and instead pushing users to share songs over third-party messaging apps. The de-emphasis of discovery through friends conveniently puts the focus on Spotify’s owned playlists. That gives it leverage over the record labels during rate negotiations, since it’s Spotify that influences which songs become hits; if labels don’t play nice, their artists might not get promoted via playlists.
That’s why it’s good to see Spotify remembering that music is an inherently social experience. Music physically touches us through its vibrations, and when people listen to the same songs and are literally moved by them at the same time, it creates a sense of togetherness we’re too often deprived of on the internet.
Citizens Reserve, a Bay Area startup, has a broad goal of digitizing the supply chain. Last fall, the company launched the Alpha version of Suku, a Supply Chain as a Service platform built on the blockchain. Today, it announced a partnership with Smartrac, an RFID tag manufacturer, based in Amsterdam, as a key identity piece for the platform.
Companies use RFID to track products from field or factory to market. Eric Piscini, CEO at Citizens Reserve, says this partnership helps solve a crucial piece of digitizing the supply chain. It provides a way to trace products on their journey to market and ensure their provenance – whether that means verifying no labor was exploited in production, that environmental standards were maintained or that products were stored under the proper conditions to ensure freshness.
One of the big issues in track and trace on the supply chain is simply identifying the universe of items in motion across the world at any given moment. RFID tagging provides a way to give each of these items a digital identity, which can be placed on the blockchain to help prevent fraud. Once you have an irrefutable digital identity, it solves a big problem around digitizing the supply chain.
He said this is all part of a broader effort to move the supply chain to the digital realm by building a platform on the blockchain. This not only provides an irrefutable, traceable digital record, it can have all kinds of additional benefits like reducing theft and fraud and ensuring provenance.
There are so many parties involved in this process, from farmers and manufacturers, to customs authorities, to shipping and container companies, to the logistics firms moving products to market, to the stores that sell the goods. Getting all of these parties to move to a blockchain solution remains a huge challenge.
Today’s partnership offers one way to help build an identity mechanism for the Citizens Reserve solution. The company is also working on other partnerships to help solve other problems like warehouse management and logistics.
The company currently has 11 employees based in Los Gatos, California. It has raised $11 million, according to Piscini.
Salesforce first opened an office in Dublin back in 2001, and has since expanded to 1,400 employees. Today’s announcement represents a significant commitment to expand even further, adding 1,500 new jobs over the next five years.
The new tower in Dublin is actually going to be a campus made up of four interconnecting buildings on the River Liffey. It will eventually encompass 430,000 square feet with the first employees expected to move into the new facility sometime in the middle of 2021.
Martin Shanahan, who is CEO at IDA Ireland, the state agency responsible for attracting foreign investment in Ireland, called this one of the largest single jobs announcements in the 70-year history of his organization.
As with all things Salesforce, they will do this up big with an “immersive video lobby” and a hospitality space for Salesforce employees, customers and partners. This space, which will be known as the “Ohana Floor,” will also be available for use by nonprofits. They also plan to build paths along the river that will connect the campus to the city center.
The company intends to make the project “one of the most sustainable building projects to-date” in Dublin, according to a statement announcing the project. What does that mean? It will, among other things, be a nearly Net Zero Energy building and it will use 100 percent renewable energy, including onsite solar panels.
Finally, as part of the company’s commitment to the local communities in which it operates, it announced a $1 million grant to Educate Together, an education nonprofit. The grant should help the organization expand its mission running equality-based schools. Salesforce has been supporting the group since 2009 with software grants, as well as a program where Salesforce employees volunteer at some of the organization’s schools.
Netflix and chill from afar? Facebook Messenger is now internally testing simultaneous co-viewing of videos. That means you and your favorite people could watch a synchronized video over group chat on your respective devices while discussing or joking about it. This “Watch Videos Together” feature could make you spend more time on Facebook Messenger while creating shared experiences that are more meaningful and positive for well-being than passively zombie-viewing videos solo. This new approach to Facebook’s Watch Party feature might feel more natural as part of messaging than through a feed, Groups or Events post.
The feature was first spotted in Messenger’s codebase by Ananay Arora, the founder of deadline management app Timebound as well as a mobile investigator in the style of frequent TechCrunch tipster Jane Manchun Wong. The code he discovered describes Messenger allowing you to “tap to watch together now” and “chat about the same videos at the same time” with chat thread members receiving a notification that a co-viewing is starting. “Everyone in this chat can control the video and see who’s watching,” the code explains.
A Facebook spokesperson confirmed to TechCrunch that this is an “internal test” and that it doesn’t have any more to share right now. But other features originally discovered in Messenger’s code, like contact syncing with Instagram, have eventually received official launches.
A fascinating question this co-viewing feature brings up is where users will find videos to watch. It might just let you punch in a URL from Facebook or share a video from there to Messenger. The app could put a new video browsing option into the message composer or Discover tab. Or, if it really wanted to get serious about chat-based co-viewing, Facebook could allow the feature to work with video partners, ideally YouTube.
Co-viewing of videos could also introduce a new revenue opportunity for Messenger. It might suggest sponsored videos, such as recent movie trailers. Or it could simply serve video ads between a queue of videos lined up for co-viewing. Facebook has recently been putting more pressure on its subsidiaries like Messenger and Instagram to monetize as News Feed ad revenue growth slows due to plateauing user growth and limited News Feed ad space.
Other apps like YouTube’s Uptime (since shut down) and Facebook’s first president Sean Parker’s Airtime (never took off) have tried and failed to make co-watching a popular habit. The problem is that coordinating these synced-up experiences with friends can be troublesome. By baking simultaneous video viewing directly into Messenger, Facebook could make it as seamless as sharing a link.
“Yeah! Well of course we’re working on it,” Facebook’s head of augmented reality Ficus Kirkpatrick told me when I asked him at TechCrunch’s AR/VR event in LA if Facebook was building AR glasses. “We are building hardware products. We’re going forward on this . . . We want to see those glasses come into reality, and I think we want to play our part in helping to bring them there.”
This is the clearest confirmation we’ve received yet from Facebook about its plans for AR glasses. The product could be Facebook’s opportunity to own a mainstream computing device on which its software could run after a decade of being beholden to smartphones built, controlled and taxed by Apple and Google.
This month, Facebook launched its first self-branded gadget out of its Building 8 lab, the Portal smart display, and now it’s revving up hardware efforts. For AR, Kirkpatrick told me, “We have no product to announce right now. But we have a lot of very talented people doing really, really compelling cutting-edge research that we hope plays a part in the future of headsets.”
There’s a war brewing here. AR startups like Magic Leap and Thalmic Labs are starting to release their first headsets and glasses. Microsoft is considered a leader thanks to its early HoloLens product, while Google Glass is still being developed for the enterprise. And Apple has acquired AR hardware developers like Akonia Holographics and Vrvana to accelerate development of its own headsets.
Technological progress and competition seems to have sped up Facebook’s timetable. Back in April 2017, CEO Mark Zuckerberg said, “We all know where we want this to get eventually, we want glasses,” but explained that “we do not have the science or technology today to build the AR glasses that we want. We may in five years, or seven years.” He explained that “We can’t build the AR product that we want today, so building VR is the path to getting to those AR glasses.” The company’s Oculus division had talked extensively about the potential of AR glasses, yet similarly characterized them as far off.
But a few months later, a Facebook patent application for AR glasses was spotted by Business Insider that detailed using “waveguide display with two-dimensional scanner” to project media onto the lenses. Cheddar’s Alex Heath reports that Facebook is working on Project Sequoia that uses projectors to display AR experiences on top of physical objects like a chess board on a table or a person’s likeness on something for teleconferencing. These indicate Facebook was moving past AR research.
Last month, The Information spotted four Facebook job listings seeking engineers with experience building custom AR computer chips to join the Facebook Reality Lab (formerly known as Oculus research). And a week later, Oculus’ Chief Scientist Michael Abrash briefly mentioned amidst a half-hour technical keynote at the company’s VR conference that “No off the shelf display technology is good enough for AR, so we had no choice but to develop a new display system. And that system also has the potential to bring VR to a different level.”
But Kirkpatrick clarified that he sees Facebook’s AR efforts not just as a mixed reality feature of VR headsets. “I don’t think we converge to one single device . . . I don’t think we’re going to end up in a Ready Player One future where everyone is just hanging out in VR all the time,” he tells me. “I think we’re still going to have the lives that we have today where you stay at home and you have maybe an escapist, immersive experience or you use VR to transport yourself somewhere else. But I think those things like the people you connect with, the things you’re doing, the state of your apps and everything needs to be carried and portable on-the-go with you as well, and I think that’s going to look more like how we think about AR.”
Oculus virtual reality headsets and Facebook augmented reality glasses could share an underlying software layer, though, which might speed up engineering efforts while making the interface more familiar for users. “I think that all this stuff will converge in some way maybe at the software level,” Kirkpatrick said.
The problem for Facebook AR is that it may run into the same privacy concerns that people had about putting a Portal camera inside their homes. While VR headsets generate a fictional world, AR must collect data about your real-world surroundings. That could raise fears about Facebook surveilling not just our homes but everything we do, and using that data to power ad targeting and content recommendations. This brand tax haunts Facebook’s every move.
Startups with a cleaner slate like Magic Leap and giants with a better track record on privacy like Apple could have an easier time getting users to put a camera on their heads. Facebook would likely need a best-in-class gadget that does much that others can’t in order to convince people it deserves to augment their reality.
You can watch our full interview with Facebook’s director of camera and head of augmented reality engineering Ficus Kirkpatrick from our TechCrunch Sessions: AR/VR event in LA.
While hiring is only one part of building your team, hiring the right candidates is a vital step in creating a high-performing team. Molly Nagy, Senior HR Coordinator, talks about her experience working with hiring managers within and outside of Hanapin, and gives her top tips on building out your digital marketing team.
Instagram tells me Regramming, or the ability to instantly repost someone else’s feed post to your followers like a retweet, is “not happening,” not being built and not being tested. And that’s good news for all Instagrammers. The denial comes after it initially issued a “no comment” to The Verge’s Casey Newton, who published that he’d seen screenshots of a native Instagram resharing feature sent to him by a source.
Regramming would be a fundamental shift in how Instagram works, not necessarily in terms of functionality, but in terms of the accepted norms of what and how to post. You could always screenshot, cite the original creator, and post. But Instagram has always been about sharing your window to the world — what you’ve lived and seen. Regramming would legitimize suddenly assuming someone else’s eyes.
The result would be that users couldn’t trust that when they follow someone, that’s whose vision would appear in their feed. Instagram would feel a lot more random and unpredictable. And it’d become more like its big brother Facebook whose News Feed has waned in popularity – susceptible to viral clickbait bullshit, vulnerable to foreign misinformation campaigns, and worst of all, impersonal.
Newton’s report suggested Instagram reposts would appear under the profile picture of the original sharer, and regrams could be regrammed once more in turn, showing a stack of profile thumbnails of everyone who previously shared it. That would at least prevent massive chains of reposts turning posts into all-consuming feed bombs.
Regramming could certainly widen what appears in your feed, which some might consider more interesting. It could spur growth by creating a much easier way for users to share in feed, especially if they don’t live a glamorous life themselves. I can see a case for this being a feature for businesses only, which are already impersonal and act as curators. And Instagram’s algorithm could hide the least engaging regrams.
These benefits are why Instagram has internally considered building regramming for years. CEO Kevin Systrom told Wired last year “We debate the re-share thing a lot . . . But really that decision is about keeping your feed focused on the people you know rather than the people you know finding other stuff for you to see. And I think that is more of a testament of our focus on authenticity.”
See, right now, Instagram profiles are cohesive. You can easily get a feel for what someone posts and make an educated decision about whether to follow them from a quick glance at their grid. What they share reflects on them, so they’re cautious and deliberate. Everyone is putting on a show for Likes, so maybe it’s not quite ‘authentic’, but at least the content is personal. Regramming would make it impossible to tell what someone would post next, and put your feed at the mercy of their impulses without the requisite accountability. If they regram something lame, ugly, or annoying, it’s the original author who’d be blamed.
Instagram already has a release valve for demand for regramming in the form of the ability to turn people’s public feed posts into Stickers you can paste into your Story. Launched in May, the feature lets you add your commentary, complimenting or dunking on the author. There, regrams are ephemeral, and your followers have to pull them out of their Stories tray rather than having them force-fed via the feed. Effectively, you can reshare others’ content, but not make it a central facet of Instagram or an emblem of your identity. And if you want to just make sure a few friends see something awesome you’ve discovered, you can send them people’s feed posts as Direct messages.
Making it much easier to repost to your feed instead of sharing something original could turn Instagram into an echo chamber. It’d turn Instagram even more into a popularity contest, with users jockeying for viral distribution and a chance to plug their SoundCloud mixtapes like on Twitter. Personal self-expression would be overshadowed even further by people playing to the peanut gallery. Businesses might get lazy rather than finding their own styles. If you want to discover something new and unexpected, there’s a whole Explore page full of it.
Newton is a great reporter, and I suspect the screenshots he saw were real, but I think Instagram should have given him the firm denial right away. My guess is that it wanted to give its standard no comment because if it always outright denied inaccurate rumors and speculation, journalists could assume they were right whenever it did say “no comment.”
But once Newton published his report, backlash quickly mounted about how regramming could ruin Instagram. Rather than leaving users worried, confused, and constantly asking when the feature would launch and how it would work, the company decided to issue firm denials after the fact. It became worth diverging from its PR playbook. Maybe it had already chosen to scrap its regramming prototype, maybe the screenshots were just of an early mock-up never meant to be seriously considered, or maybe it hadn’t actually finalized that decision to abort until the public weighed in against the feature yesterday.
In any case, introducing regramming would risk an unforced error. The elemental switch from the chronological to the algorithmic feed, while criticized, was critical to Instagram being able to show the best of the massive influx of content; Instagram would eventually have broken without it. There’s no corresponding urgency to fix what ain’t broke when it comes to not allowing regramming.
Instagram is already growing like crazy. It just hit a billion monthly users. Stories now has 400 million daily users, and that feature is growing six times faster than Snapchat as a whole. The app is utterly dominant in the photo and short video sharing world. Regramming would be an unnecessary gamble.
Building conversational interfaces is a hot new area for developers. Chatbots can be a way to reduce friction in websites and apps and to give customers quick answers to commonly asked questions in a conversational framework. Today, Google announced it was making Dialogflow Enterprise Edition generally available. It had previously been in Beta.
This technology came to Google via the API.AI acquisition in 2016. Google wisely decided to change the name of the tool along the way, giving it a moniker that more closely matches what it actually does. The company reports that hundreds of thousands of developers are already using the tool to build conversational interfaces.
This isn’t just an all-Google tool though. It works across voice interface platforms including Google Assistant, Amazon Alexa and Facebook Messenger, giving developers a tool to develop their chat apps once and use them across several devices without having to change the underlying code in a significant way.
What’s more, with today’s release the company is providing increased functionality and making it easier to transition to the enterprise edition at the same time.
“Starting today, you can combine batch operations that would have required multiple API calls into a single API call, reducing lines of code and shortening development time. Dialogflow API V2 is also now the default for all new agents, integrating with Google Cloud Speech-to-Text, enabling agent management via API, supporting gRPC, and providing an easy transition to Enterprise Edition with no code migration,” Dan Aharon, Google’s product manager for Cloud AI, wrote in a company blog post announcing the tool.
The company showed off a few new customers using Dialogflow to build chat interfaces for their customers including KLM Royal Dutch Airlines, Domino’s and Ticketmaster.
The new tool, which is available today, supports over 30 languages and as a generally available enterprise product comes with a support package and service level agreement (SLA).