Prepare to have your mind blown. While everyone is talking about AI, university researchers are unlocking next-gen, highly consequential AI-powered use cases with tech we already have in our homes.
Let me paint you a scene. You’re home alone, music’s pumping, no one’s watching. You start dancing like nobody’s business – we’ve all been there! Busting out those slick moves you’d never attempt in public. Little did you know, your WiFi router may have been watching the whole time, analyzing every bit of fancy footwork and fluid motion. Sound crazy? Stick with me here…
CMU researchers Jiaqi Geng, Dong Huang, and Fernando De la Torre have been working on something they call “DensePose from WiFi.”
The trio took a look at the current ways we estimate human poses – you know, the science of figuring out how a person is standing or moving based on images or videos. Today, we typically rely on things like RGB cameras, LiDAR, and radar.
But who needs extra cameras and sensors when you already have a WiFi router? The researchers unlocked how to use WiFi antennas for body segmentation and keypoint detection. Yes, the same WiFi that’s probably letting you read this blog right now.
The concept is as fascinating as it is groundbreaking. WiFi signals, bouncing around your room, can be used to map your body’s position and movements without the need for intrusive cameras or expensive sensors. Think of it like this: every time you perform a grand jeté, your WiFi signals are taking notes.
The DensePose from WiFi method has some potential advantages compared to using RGB cameras, LiDAR, and radars for human pose estimation:
- Privacy – nobody wants their every movement tracked by high-tech sensors in their own home! Using your WiFi means you don’t need to buy or install video cameras all over the place, or other unfamiliar sensors that make people nervous. The WiFi signals pass through clothing and do not reveal private visual details.
- Occlusions – WiFi can work even if the person is occluded from the camera view, whereas RGB cameras need to see the body parts. Let’s say you’re attempting the Macarena in a dimly lit room, or there’s a pesky potted plant partially blocking the view. Cameras could get confused and might perceive you as a strange mix of human and fern.
- Materials – WiFi signals can pass through some materials that would be opaque to RGB cameras and some radars.
- Cost – WiFi hardware is ubiquitous and inexpensive compared to specialized depth cameras or radars. Radars and LiDAR? Well, they require specialized hardware that could cost a small fortune and eat up power like a teenager devours pizza.
- Line-of-sight – WiFi does not require a direct line-of-sight like cameras, LiDAR and some radars, allowing more flexibility.
- Lighting – WiFi does not rely on visible light and works in all lighting conditions, unlike RGB cameras. So even if you are in a dimly lit room doing the moonwalk, your WiFi can still detect those motions while cameras may struggle.
Potential disadvantages compared to other modalities:
- Range – WiFi may be limited to shorter ranges than radars and LiDAR.
- Accuracy – In some cases the accuracy may be lower than vision or depth-based methods.
- Occlusion – Dense obstacles like metal or thick walls can still attenuate or block WiFi signals, even though no camera-style line of sight is required.
Here is how this magic works, in simplistic terms:
- A WiFi transmitter sends out wireless signals that reflect off a person’s body in different ways depending on their pose.
- A WiFi receiver gathers the reflected signals, which contain information about how the signals bounced off the person’s shape and pose. This is called Channel State Information (CSI).
- The CSI data is fed into a neural network model as input.
- The neural network analyzes the patterns in the CSI data to estimate the locations of key body joints in 2D and 3D space. This gives the pose keypoints.
- The network also looks at fine-grained CSI patterns to generate dense correspondence maps, which indicate how each point on the person’s surface corresponds to a point in some canonical pose.
- The model was trained on data from a motion capture system to have ground truth poses to compare against.
In testing, the WiFi pose estimation was similar in accuracy to vision-based methods for line-of-sight scenarios.
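To make the data flow concrete, here is a minimal sketch of the pipeline’s shapes in Python. Everything here is an illustrative assumption, not the paper’s actual architecture: the 3×3 antenna pairing, 30 subcarriers, 5 frames, 17 keypoints, and the stand-in “network” (a random linear map) are all placeholders for the real trained model.

```python
import numpy as np

# Assumed CSI dimensions: 3 transmit x 3 receive antennas,
# 30 subcarriers, 5 time frames (illustrative, not the paper's exact setup).
TX, RX, SUBCARRIERS, FRAMES = 3, 3, 30, 5

rng = np.random.default_rng(0)

def csi_to_features(csi):
    """Split complex CSI into amplitude and unwrapped-phase channels."""
    amplitude = np.abs(csi)
    phase = np.unwrap(np.angle(csi), axis=2)  # unwrap along subcarriers
    return np.stack([amplitude, phase], axis=0)  # (2, TX, RX, S, F)

def estimate_keypoints(features, n_keypoints=17):
    """Stand-in for the trained neural network: a random linear map
    from flattened CSI features to 2D keypoint coordinates."""
    w = rng.standard_normal((features.size, n_keypoints * 2)) * 0.01
    coords = features.ravel() @ w
    return coords.reshape(n_keypoints, 2)

# Fake complex CSI measurements in place of real router data.
csi = rng.standard_normal((TX, RX, SUBCARRIERS, FRAMES)) \
    + 1j * rng.standard_normal((TX, RX, SUBCARRIERS, FRAMES))
feats = csi_to_features(csi)
keypoints = estimate_keypoints(feats)
print(feats.shape, keypoints.shape)
```

The takeaway is the overall shape of the problem: a grid of complex channel measurements goes in, and a small set of body-joint coordinates (plus, in the real system, dense correspondence maps) comes out.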
In other words, think of it as teaching your WiFi to see and understand. The WiFi signals bounce off you and your surroundings, painting an invisible picture of your movements. Meanwhile, the machine learning algorithms study these signals to recognize patterns, learn from them, and predict your next twirl or leap. The key idea is that a person’s pose and position relative to a WiFi transmitter affects the wireless signals due to reflection, scattering, and absorption.
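Why does a pose leave any trace in the signal at all? A toy multipath model makes the physics tangible: each path from transmitter to receiver contributes a complex exponential whose phase depends on the path length, so when a body moves, the reflected path length changes and the channel measurably shifts. The carrier frequency, subcarrier spacing, and path lengths below are assumptions for illustration only.

```python
import numpy as np

C = 3e8  # speed of light, m/s
# 30 subcarriers in the 5 GHz band (assumed values, for illustration).
FREQS = 5.18e9 + np.arange(30) * 312.5e3

def csi(path_lengths, gains):
    """Ideal multipath channel: each propagation path adds a complex
    exponential whose phase grows with path length and frequency."""
    delays = np.asarray(path_lengths) / C
    return sum(g * np.exp(-2j * np.pi * FREQS * d)
               for g, d in zip(gains, delays))

# A direct 4 m path plus a weaker reflection off a person (6 m round trip).
still = csi([4.0, 6.0], [1.0, 0.3])
# The person shifts slightly; only the reflected path length changes.
moved = csi([4.0, 6.2], [1.0, 0.3])
change = np.max(np.abs(still - moved))
print(change)
```

Even a 20 cm shift in the reflected path produces a clear difference in the per-subcarrier channel values – that difference is the raw material the neural network learns to decode into poses.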
There are many implications for tech like this. We’re stepping into a world where your WiFi does more than just watch your dance moves. It could change the way we interact with our homes, turning them into responsive environments that adapt to our behaviors and routines. Imagine your lights dimming as you settle down for a movie night or your thermostat adjusting itself as you snuggle into bed.
This technology has the potential to revolutionize industries. Health care could benefit enormously. For instance, it could detect changes in an elderly person’s walking pace or posture that suggest potential neurological issues like Parkinson’s. For adults with disabilities, it could trigger assistance if an unsafe fall occurs. Parents may use this to detect if a toddler has wandered into a no-go area.
In sports training, athletes could get feedback on their form and technique without the need for expensive motion capture systems. Virtual and augmented reality systems could become more immersive and responsive without the need for cumbersome sensors or controllers. Retail could see virtual changing rooms where you try on clothes in the comfort of your home.
But it goes far beyond that. This tech could monitor driver drowsiness and distraction in automotive settings. It could analyze poses in crowds to assist emergency responders. Factories could use it to optimize worker ergonomics and prevent injuries. WiFi-based motion capture could transform animation and CGI in films and games. Gestures could control devices for improved accessibility. Even underwater pose tracking of divers and robots is possible.
The possibilities for WiFi pose estimation to enhance products, spaces, and experiences are truly boundless. As leaders, we must keep our eyes open to emerging cross-disciplinary technologies like this and imagine how they might shape the future. The next game-changing innovation could come from anywhere.
But we must also be mindful. With every step forward, there are potential missteps. There are still challenges to overcome, questions to answer. How well does the system cope with cluttered environments or rapid movements? How do we ensure that the technology is used responsibly and ethically?
The same technology that allows your WiFi to watch your dance could be used to invade your privacy in unsettling ways. And what about the implications for employment? As these technologies become more sophisticated, they could replace jobs, leading to economic and social disruption.
This could drastically reshape industries like healthcare, retail, and entertainment. Jobs like motion capture specialists may decline, while positions in AI training and implementation surge. As the technology evolves, we will have to re-evaluate notions of privacy and autonomy in the home.
And then there’s the question of when all this will become a reality in our homes and workplaces. The timeline for commercialization isn’t clear, as is usually the case with groundbreaking research. There are hurdles to overcome, from technical refinements and rigorous testing to regulatory approvals. But rest assured, the day might not be far off when your WiFi does more than just connect your devices. It could become an integral part of your daily life, responding to your routines, ensuring your safety, and maybe, just maybe, enjoying your spontaneous dance performances.
In this enchanting dance with technology, we must ensure that we lead and not just follow. The possibilities are endless, but so are the pitfalls if not carefully implemented. We need to start having open discussions about the ethics, policies, and societal impact as these technologies come online. I encourage you to engage with organizations looking at the future of AI and join the conversation. Something with such profound potential requires input from us all – because the future should be a collaborative dance, not a solo performance.
While our WiFi may be learning our dance moves, we must remember to choreograph a future that we’d all like to live in.