The Kinect has had a bad run in the video game world, with the original Xbox 360 sensor failing to catch on, and history repeating itself after the launch of the Xbox One. In October last year, Microsoft stopped manufacturing the Kinect 2.0, but it is making a surprise return this year. At the Microsoft Build conference yesterday, the company announced that the sensor would be repurposed for Azure AI.
Project Kinect for Azure will utilise updated Kinect sensors that have been shrunk down and upgraded. The new sensor is smaller than the one found inside the Kinect 2.0, and the depth sensor has had a resolution bump from 640×480 to 1024×1024. Alex Kipman, one of Microsoft’s AI and Mixed Reality leads, explains that what makes Project Kinect for Azure unique is the combination of the depth sensor with Azure’s AI services. The idea is that the sensor will allow developers to make ‘the intelligent edge’ more perceptive than before.
Image credit: Microsoft.
Microsoft also says that the Kinect’s depth sensor will allow developers to use smaller networks for deep learning on depth images, which should make AI algorithms cheaper to deploy.
This is actually Microsoft’s fourth iteration of Kinect. The first two we all know as Xbox peripherals, a third iteration was implemented in the HoloLens, and now the fourth is being geared towards AI. The sensor may not have found its home in gaming, but the Kinect is alive and well in other areas.
KitGuru Says: As the Build conference continues, we can expect plenty of announcements out of Microsoft. One thing is already clear, though: the company’s current focus is AI and cloud services.