Between Microsoft Build and Google I/O, there are probably more people saying "AI" this week than in any previous week in history. But the AI these companies deploy tends to live in a cloud somewhere – XNOR puts it on devices that may not even have an internet connection. The startup just raised $12 million to continue its quest to bring AI to the edge.
I wrote about the company when it spun out of AI2, the Seattle-based research institute backed by Paul Allen. Its product is essentially a proprietary method of converting machine-learning models so they can be executed efficiently by almost any processor. The savings in speed, memory and power are enormous, allowing edge devices with bargain-bin processors to perform serious tasks, like real-time object recognition and tracking, that usually require serious hardware.
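The company's name alludes to binarized neural networks (the "XNOR-Net" approach), in which weights and activations are reduced to ±1 so that expensive multiply-accumulate operations collapse into bitwise XNOR and popcount instructions that even a weak chip can execute quickly. A minimal sketch of that core arithmetic identity – illustrative only, with hypothetical helper names, not XNOR.ai's actual code:

```python
def pack_bits(values):
    """Pack a list of +1/-1 values into an int: bit i is set when values[i] == +1."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed ±1 vectors of length n.

    Bits are equal exactly when the product of the corresponding ±1
    values is +1, so: dot = n - 2 * popcount(a XOR b).
    """
    diff = (a_bits ^ b_bits) & ((1 << n) - 1)
    return n - 2 * bin(diff).count("1")

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binary_dot(pack_bits(a), pack_bits(b), 4))  # 0, same as sum(x * y for x, y in zip(a, b))
```

Replacing floating-point dot products with this bit trick is why a model's memory footprint and energy use can drop by an order of magnitude or more.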
Since its inception, the company has raised $2.6 million in seed funding, and it has now closed its series A round, led by Madrona Venture Group with participation from NGP Capital, Autotech Ventures and Catapult Ventures.
"The AI has done a good job," said co-founder Ali Farhadi, "but for it to become revolutionary, it must go beyond what it is. she is now. "
The fundamental problem, he said, is that AI is too expensive – in both the processing time and the money it requires.
Almost all major "AI" products do their magic by means of huge banks of computers in the cloud. You send off your image, voice clip or whatever; a machine-learning model hosted in a data center does the processing; and the results come back.
For many things, that's fine. It doesn't matter whether Alexa responds in a second or two, or whether your images get tagged with metadata over the course of a few hours while you're not paying attention. But if you need a result not in a second but in a hundredth of a second, there's no time for a round trip to the cloud. And, increasingly, there's no need.
XNOR's technique allows things like computer vision and voice recognition to be stored and executed on devices with extremely limited processing power and RAM – and we're talking Raspberry Pi Zero here, not, say, an older iPhone.
If you wanted a camera or smart home device in every room of your home – listening for voice commands, watching the video feed for unauthorized visitors or emergencies – that constant pipe to the cloud gets crowded very quickly. Better not to send the data at all.
This has the pleasant byproduct of not requiring that potentially personal data be sent to a cloud server, where you have to trust that it won't be stored or used against your will. If the data is processed entirely on the device, it never reaches any third party. That's an increasingly attractive proposition.
Developing a model for edge computing isn't cheap, however. Although AI developers are multiplying, relatively few are trying to make models run on resource-limited devices like old phones or cheap security cameras.
XNOR's system lets a developer or manufacturer plug in a few basic attributes and get back a pre-trained model suited to their needs.
Say you're the maker of a cheap security camera. You need to recognize people, animals and fires, but not cars, boats or plants; you're using this or that ARM chip and camera; and you need to run at five frames per second with only 128 MB of RAM to work with. Ding – here's your model.
Or say you're a parking management company and you need to recognize empty spaces, license plates and people acting suspiciously, on this or that hardware configuration. Ding – here's your model.
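The requests described above boil down to a handful of constraints. A purely hypothetical sketch of such a spec for the security-camera example – none of these field names come from XNOR; they just restate the article's constraints in config form:

```python
# Hypothetical request spec for an off-the-shelf edge model
# (illustrative only; not an actual XNOR API).
camera_model_spec = {
    "detect": ["person", "animal", "fire"],  # classes the model must recognize
    "ignore": ["car", "boat", "plant"],      # classes it can safely skip
    "target_chip": "some ARM SoC",           # the camera's processor
    "min_fps": 5,                            # required inference rate
    "max_ram_mb": 128,                       # memory budget for the model
}
```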
These AI agents can be dropped fairly easily into various code bases, and they never need to phone home or have their data audited or updated remotely – they'll run like greased lightning on the target platform. Farhadi said they have identified the most common use cases and devices through research and customer feedback, and many customers should be able to get a model "off the shelf" like this. That's Phase 1, as he called it, and it should launch this fall.
Phase 2 (early 2019) will allow for greater customization – for example, if your parking model becomes a police parking model and needs to recognize a specific set of cars and people, or has to run on hardware that isn't on the standard list. New models can be trained on demand.
And Phase 3 is taking models that normally run on cloud infrastructure and adapting them – "XNOR-ing" them, so to speak – for deployment at the edge. No timeline on that one.
Although the technology is in some ways well suited to the needs of autonomous cars, Farhadi said they aren't going after that sector – for now. It's still essentially in the prototype phase, he said, and autonomous vehicle makers are currently trying to prove the idea works at all, not trying to optimize it and deliver it at lower cost.
Edge-based AI models will certainly become increasingly important as algorithms grow more efficient, devices grow more powerful and demand for fast-turnaround applications increases. XNOR seems to be in the vanguard of this emerging field, but you can almost certainly expect competition to grow along with the market.