Motion tracking for exercise, gaming, and even safety has been around for quite a while now, but it has always required cameras or body scanning to work well. That changes now thanks, as you might have guessed, to research and development out of MIT. MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) built a smart magic carpet that might not be able to fly or emote like the one from Aladdin, but it can estimate human poses, without the use of cameras, based strictly on a neural network interpreting the tactile information the mat registers. If we're being completely honest, it's a little creepy that it can determine whether we're standing, doing squats, or doing jumping jacks from the same two points of contact with the floor, but that's the kind of magic that gives it such potential for the future. You can read the full paper, "Intelligent Carpet: Inferring 3D Human Pose from Tactile Signals," at the link below if you really want to get into the technical details.
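To make the idea concrete, here is a minimal sketch of the pipeline's shape: a window of pressure readings from the mat goes in, and a set of 3D joint positions comes out. This is not the paper's actual network; the grid size, window length, keypoint count, and the toy linear model standing in for the neural network are all illustrative assumptions.

```python
import numpy as np

def infer_pose(tactile_frames, weights):
    """Toy stand-in for a tactile-to-pose model: flatten a short window
    of pressure frames and apply a linear map to 3D keypoints.
    All shapes here are assumptions, not the paper's architecture."""
    x = tactile_frames.reshape(-1)   # flatten the T x H x W pressure window
    keypoints = weights @ x          # project to 21 joints * 3 coordinates
    return keypoints.reshape(21, 3)  # one (x, y, z) triple per joint

# Demo with random data: a 10-frame window from a hypothetical 32x32 sensor grid.
rng = np.random.default_rng(0)
frames = rng.random((10, 32, 32))
W = rng.random((21 * 3, 10 * 32 * 32)) * 0.01
pose = infer_pose(frames, W)
print(pose.shape)  # (21, 3)
```

The real system replaces the linear map with a learned neural network, which is what lets it distinguish full-body poses like squats from jumping jacks using only foot-contact pressure patterns.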