The potential of mobile augmented reality is clear. Last summer Pokemon Go gave a glimpse of just how big this craze could be, as thousands of excited humans converged on parks, bus stops and other locations around the world to chase virtual monsters through the lens of their smartphones.
Apple was also watching. And this summer the company signaled its own conviction in the technology by announcing ARKit: a developer toolkit to help iOS developers build augmented reality apps. CEO Tim Cook said iOS will become the world’s biggest augmented reality platform once iOS 11 hits consumers’ devices this fall — underlining Cupertino’s expectation that big things are coming down the mobile AR pipe.
Y Combinator-backed, MIT spin-out Escher Reality’s belief in the social power of mobile AR predates both these trigger points. It’s building a cross-platform toolkit and custom backend for mobile AR developers, aiming to lower the barrier to entry to building “compelling experiences”, as the co-founders put it.
“Keep in mind this was before Pokemon Go,” says CEO Ross Finman, discussing how he and CTO Diana Hu founded the company about a year and a half ago, initially as a bit of a side project — before going all in full time last November. “Everyone thought we were crazy at that time, and now this summer it’s the summer for mobile augmented reality… ARKit has been the best thing ever for us.”
But if Apple has ARKit, and you can bet Google will be coming out with an Android equivalent in the not-too-distant future, where exactly does Escher Reality come in?
“Think of us more as the backend for augmented reality,” says Finman. “What we offer is the cross-platform, multiuser and persistent experiences — so those are three things that Apple and ARKit don’t do. So if you want to do any type of shared AR experience you need to connect the two different devices together — so then that’s what we offer… There’s a lot of computer vision problems associated with that.”
“Think about the problem of what ARKit doesn’t provide you,” adds Hu. “If you’ve seen a lot of the current demos outside, they’re okay-ish, you can see 3D models there, but when you start thinking longer term what does it take to create compelling AR experiences? And part of that is a lot of the tooling and a lot of the SDK are not there to provide that functionality. Because as game developers or app developers they don’t want to think about all that low level stuff and there’s a lot of really complex tech going on that we have built.
“If you think about the future, as AR becomes a bigger movement, as the next computing platform, it will need a backend to support a lot of the networking, it will need a lot of the tools that we’re building — in order to build compelling AR experiences.”
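Escher Reality hasn’t published how its backend works, but the minimum plumbing a shared AR session needs on top of ARKit — relaying one device’s placements to every other device in the session — can be sketched roughly like this (the class and method names here are hypothetical, purely for illustration):

```python
# Hypothetical shared-session relay: the backend fans each client's
# placement events out to every other device in the session. A real
# system would do this over a low-latency network protocol; an in-memory
# inbox per client stands in for that here.
class SharedSession:
    def __init__(self):
        self.clients = {}

    def join(self, client_id):
        self.clients[client_id] = []

    def place(self, sender, obj):
        # Broadcast to everyone except the device that placed the object.
        for cid, inbox in self.clients.items():
            if cid != sender:
                inbox.append(obj)

    def poll(self, client_id):
        # Hand the client its pending events and clear its inbox.
        inbox, self.clients[client_id] = self.clients[client_id], []
        return inbox

# Cross-platform in miniature: an iPhone places a paddle, and an
# Android device in the same session receives it.
session = SharedSession()
session.join("iphone")
session.join("android")
session.place("iphone", {"kind": "paddle", "pos": [0.2, 0.0, 1.1]})
android_sees = session.poll("android")
```

The hard computer vision problems Finman alludes to sit underneath this layer: before a relayed position means anything, both devices have to agree on a shared coordinate frame.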
“We will be offering Android support for now, but then we imagine Google will probably come out with something like that in the future,” adds Finman, couching that part of the business as the free bit in freemium — and one they’re therefore more than happy to hand off to Google when the time comes.
The team has put together a demo to illustrate the sorts of mobile AR gaming experiences they’re aiming to support — in which two people play the same mobile AR game, each using their own device as a paddle…
What you’re looking at here is “very low latency, custom computer vision network protocols” enabling two players to share augmented reality at the same time, as Hu explains it.
Sketching another scenario the tech could enable, Finman says it could support a version of Pokemon Go in which friends could battle each other at the same time and “see their Pokemons fight in real time”. Or allow players to locate a Gym at a “very specific location — that makes sense in the real-world”.
In essence, the team’s bet is that mobile AR — especially mobile AR gaming — gets a whole lot more interesting with support for richly interactive and multiplayer apps that work cross-platform and cross-device. So they’re building tools and a backend to support developers wanting to build apps that can connect Android users and iPhone owners in the same augmented play space.
After all, Apple especially isn’t incentivized to help support AR collaboration on Android. Which leaves room for a neutral third party to help bridge platform and hardware gaps — and smooth AR play for every mobile gamer.
The core tech is essentially knitting different SLAM maps and network connections together in an efficient way, says Finman, i.e. without the latency that would make a game unplayable, so that “it runs in real-time and is a consistent experience”. That means tuning everything for mobile processors.
“We go down to, not just even the network layer, but even to the assembly level so that we can run some of the execution instructions very efficiently and some of the image processing on the GPU for phones,” says Hu. “So on a high level it is a SLAM system, but the exact method and how we engineered it is novel for efficient mobile devices.”
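The geometric core of “knitting different SLAM maps together” is re-expressing one device’s map in another device’s coordinate frame via a commonly observed anchor. A minimal 2D sketch of that idea — using simplified (x, y, heading) poses in place of full SE(3) transforms, with illustrative function names — looks like this:

```python
import math

# A 2D pose (x, y, theta) stands in for a full 3D SLAM pose here.
def compose(a, b):
    """Apply pose b within the frame defined by pose a."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse pose, so compose(p, invert(p)) is the identity."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

def frame_alignment(anchor_in_a, anchor_in_b):
    """Transform that re-expresses device B's map in device A's frame.

    Both devices have observed the same physical anchor; chaining
    A->anchor with anchor->B gives the A->B alignment."""
    return compose(anchor_in_a, invert(anchor_in_b))

# Device B sees a virtual object at (1, 0) in its own map. Once the
# shared anchor is matched, device A can place that object consistently
# in its own map.
t_ab = frame_alignment((2.0, 1.0, 0.0), (0.0, 0.0, 0.0))
obj_in_a = compose(t_ab, (1.0, 0.0, 0.0))
```

The production version of this is vastly harder — matching anchors from raw imagery across different cameras, in real time, on a phone — which is where the assembly-level and GPU tuning Hu describes comes in.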
“Consider ARKit as step one, we’re steps two and three,” adds Finman. “You can do multi-user experiences, but then you can also do persistent experiences — once you turn off the app, once you start it up again, all the objects that you left will be in the same location.”
Consider ARKit as step one, we’re steps two and three.
“People can collaborate in AR experiences at the same time,” adds Hu. “That’s one main thing that we can really provide, that Google or Apple wouldn’t provide.”
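The persistence Finman describes — objects staying put between app launches — amounts to storing virtual objects relative to recognized real-world anchors and restoring only those whose anchors the device has re-localized. A toy sketch of that layer (file-based here purely for illustration; Escher Reality’s actual backend is not public):

```python
import json
import os
import tempfile

# Hypothetical persistence layer: each virtual object is stored relative
# to a named real-world anchor, so it reappears in the same physical
# spot once the app relaunches and re-localizes that anchor.
def save_session(path, objects):
    """objects: {object_id: {"anchor": str, "offset": [x, y, z]}}"""
    with open(path, "w") as f:
        json.dump(objects, f)

def restore_session(path, recognized_anchors):
    """Return only the objects whose anchor has been re-localized."""
    with open(path) as f:
        objects = json.load(f)
    return {oid: obj for oid, obj in objects.items()
            if obj["anchor"] in recognized_anchors}

path = os.path.join(tempfile.gettempdir(), "ar_session.json")
save_session(path, {
    "pikachu": {"anchor": "kitchen-table", "offset": [0, 0.1, 0]},
    "gym": {"anchor": "park-statue", "offset": [1, 0, 2]},
})
# On relaunch the device has only re-localized the kitchen table so far,
# so only the object attached to it comes back.
restored = restore_session(path, {"kitchen-table"})
```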
Hardware-wise, their system supports premium smartphones from the last three years. Although, looking ahead, they say they see no reason why they wouldn’t expand to support additional types of hardware — such as headsets — when/if those start gaining traction too.
“In mobile there’s a billion devices out there that can run augmented reality right now,” notes Finman. “Apple has one part of the market, Android has a larger part. That’s where you’re going to see the most adoption by developers in the short term.”
Escher Reality was founded about a year and a half ago, spun out of MIT and initially bootstrapped in Finman’s dorm room — first as a bit of a side project, before they went all in full time in November. The co-founders go back a decade or so as friends, and say they had often kicked around startup ideas and been interested in augmented reality.
Finman describes the business they’ve ended up co-founding as “really just a nice blend of both of our backgrounds”. “For me I was working on my PhD at MIT in 3D perception — it’s the same type of technology underneath,” he tells TechCrunch.
“I’ve been in industry running a lot of different teams in computer vision and data science,” adds Hu. “So a lot of experience bringing research into production and building large scale data systems with low latency.”
They now have five people working full time on the startup, and two part time. At this point the SDK is being used by a limited number of developers, with a wait-list for new sign ups. They’re aiming to open up to all comers in fall.
“We’re targeting games studios to begin with,” says Finman. “The technology can be used across many different industries but we’re going after gaming first because they are usually at the cutting edge of new technology and adoption, and then there’s a whole bunch of really smart developers that are going after interesting new projects.”
“One of the reasons why augmented reality is considered so much bigger is the shared experiences in the real world, which really open up a whole lot of new capabilities and interactions and experiences that are going to improve the current thinking around augmented reality. But really it opens up the door for so many different possibilities,” he adds.
Discussing some of the “compelling experiences” the team see coming down the mobile AR pipe, he points to three areas he reckons the technology can especially support — namely: instruction, visualization and entertainment.
“When you have to look at a piece of paper and imagine what’s in the real world — for building anything, getting direction, having distance professions, that’s all going to need shared augmented reality experiences,” he suggests.
Although, in the nearer term, consumer entertainment (and specifically gaming) is the team’s first bet for traction.
“In the entertainment space in the consumer side, you’re going to see short films — so beyond just Snapchat, it’s kind of real time special effects, that you can video and set up your own kind of movie scene,” he suggests.
Designing games in AR also presents developers with new conceptual and design challenges, of course, which in turn bring additional development work — and the toolkit is being designed to help with that.
“If you think about augmented reality there’s two new mechanics that you can work with; one is the position of the phone now matters,” notes Finman. “The second thing is… the real world becomes content. So like the map data, the real world, can be integrated into the game. So those are two mechanics that didn’t exist in any other medium before.
“From a developer standpoint, one added constraint with augmented reality is because it depends on the real world it’s difficult to debug… so we’ve developed tools so that you can play back logs. So then you can actually go through videos that were in the real world and interact with it in a simulated environment.”
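The log-playback idea Finman describes — recording a real-world session once, then replaying it deterministically at a desk — can be sketched as a tiny harness that feeds recorded frames to the same handler the live app would use (class and field names here are illustrative, not Escher Reality’s actual tooling):

```python
import json

# Hypothetical log-replay harness: timestamped camera-pose/detection
# frames are recorded during a real-world session, then fed back to the
# same frame handler the live app uses, so a bug seen in the field can
# be reproduced in a simulated environment.
class SessionLog:
    def __init__(self):
        self.frames = []

    def record(self, timestamp, pose, detections):
        self.frames.append({"t": timestamp, "pose": pose,
                            "detections": detections})

    def dumps(self):
        # Serialize so a session can be saved and replayed later.
        return json.dumps(self.frames)

def replay(log_json, handler):
    """Run a frame handler over a recorded session instead of live input."""
    for frame in json.loads(log_json):
        handler(frame)

# Record two frames of a walk past a table, then replay them.
log = SessionLog()
log.record(0.00, [0, 0, 0], ["table"])
log.record(0.03, [0, 0, 0.1], ["table", "chair"])

seen = []
replay(log.dumps(), lambda frame: seen.extend(frame["detections"]))
```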
Discussing some of the ideas and “clever mechanics” they’re seeing early developers playing with, he suggests color as one interesting area. “Thinking about the real world as content is really fascinating,” he says. “Think about color as a resource. So then you can mine color from the real world. So if you want more gold, put up more Post-It notes.”
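The “color as a resource” mechanic reduces to scanning camera frames for pixels near a target color — put up yellow Post-It notes, mine more gold. A toy version of that scan, with a frame modeled as a grid of RGB tuples (the function and threshold are illustrative, not a real developer’s implementation):

```python
# Hypothetical "mine color from the real world" mechanic: count how many
# pixels in a camera frame fall within a per-channel tolerance of the
# target color, and award that as the mined resource.
def mine_color(frame, target, tolerance=40):
    """Count pixels within `tolerance` of `target` on every RGB channel."""
    return sum(1 for row in frame for px in row
               if all(abs(c - t) <= tolerance for c, t in zip(px, target)))

GOLD = (255, 215, 0)

# A 2x3 toy frame: two Post-It-yellow pixels, the rest background.
frame = [[(250, 210, 10), (30, 30, 30), (30, 30, 30)],
         [(255, 215, 0), (200, 200, 200), (30, 30, 30)]]
gold_mined = mine_color(frame, GOLD)
```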
The business model for Escher Reality’s SDK is usage based, meaning they will charge developers for usage on a sliding scale that reflects the success of their applications. It’s also offered as a Unity plug-in, so target developers can easily integrate it into their current dev environments.
“It’s a very similar model to Unity, which encourages a very healthy indie developer ecosystem where they’re not charging any money until you actually start making money,” says Hu. “So developers can start working on it and during development time they don’t get charged anything, even when they launch it, if they don’t have that many users they don’t get charged, it’s only when they start making money we also start making money — so in that sense a lot of the incentives align pretty well.”
The startup, which is graduating YC in the summer 2017 batch and now headed towards demo day, will be looking to raise funding so they can amp up their bandwidth to support more developers. Once they’ve got additional outside investment secured the plan is to “sign on and work with as many gaming studios as possible”, says Finman, as well as be “head down” on building the product.
“The AR space is just exploding at the moment so we need to make sure we can move fast enough to keep up with it,” he adds.