Research Spotlight: Event-Based Eye Tracking Is Moving From Lab to Wearables

Over the past year, we’ve seen quiet but meaningful progress in event-based eye tracking—a shift from frame cameras to neuromorphic sensors that report only changes. Why this matters: for AR/XR and low-power medical wearables, latency and power beat megapixels.
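
If event cameras are unfamiliar: each pixel fires independently when its log-brightness changes, so the sensor emits a sparse, timestamped stream instead of dense frames. Here is a minimal sketch of that data model (the field names, dtypes, and values below are our own illustration, not any vendor's API):

```python
import numpy as np

# Minimal event-camera data model: a pixel fires only when its
# brightness changes, so the output is a sparse stream of
# (timestamp, x, y, polarity) records rather than frames.
event_dtype = np.dtype([
    ("t", np.int64),   # timestamp in microseconds
    ("x", np.int16),   # pixel column
    ("y", np.int16),   # pixel row
    ("p", np.int8),    # polarity: +1 brighter, -1 darker
])

# Three events near a moving pupil edge; a perfectly static eye
# would produce no events at all, which is where the latency and
# power advantages come from.
events = np.array([
    (1000, 120, 64, 1),
    (1012, 121, 64, -1),
    (1030, 119, 65, 1),
], dtype=event_dtype)

print(f"{len(events)} events over {events['t'][-1] - events['t'][0]} us")
```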

What’s new:

• A CVPR’24 challenge review documented state-of-the-art methods that infer the pupil center from sparse event streams, explicitly weighing accuracy–efficiency trade-offs for real-time operation. This signals a community push toward fast, power-aware gaze estimation; a toy baseline illustrating the task follows this list. (CVF Open Access)
• Researchers demonstrated fully event-based pipelines for smart eyewear, indicating that lower latency and reduced compute are achievable directly on-device—key for all-day AR glasses. (ACM Digital Library)
• A complementary line of work co-designs the computational image sensor with the tracking algorithm to cut readout bandwidth and end-to-end power—precisely the direction we believe the industry must take. (arXiv)
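
The challenge entrants use learned models; purely to show the task’s input/output contract (sparse events in, pupil center out), here is a naive, self-contained centroid baseline. The decay heuristic and constants are our own, not any published method:

```python
import numpy as np

# Event record layout matching the sketch earlier in the post
# (illustrative, not a specific sensor's format).
event_dtype = np.dtype([("t", np.int64), ("x", np.int16),
                        ("y", np.int16), ("p", np.int8)])

def pupil_center_naive(events, t_now, tau_us=5000.0):
    """Toy baseline: exponentially time-weighted centroid of recent events.

    Recent events dominate via exp(-age / tau_us); older events decay away.
    Real challenge entries replace this heuristic with learned models.
    """
    age = t_now - events["t"].astype(np.float64)  # microseconds
    weights = np.exp(-age / tau_us)
    if weights.sum() == 0.0:                      # no events: eye is static
        return None                               # caller keeps last estimate
    return (float(np.average(events["x"], weights=weights)),
            float(np.average(events["y"], weights=weights)))

# Events clustered around a pupil edge near (120, 64):
events = np.array([(1000, 118, 63, 1), (1010, 122, 65, -1),
                   (1025, 120, 64, 1)], dtype=event_dtype)
print(pupil_center_naive(events, t_now=1030))    # -> roughly (120.0, 64.0)
```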

Why we care at Eyechip:

1. Latency is UX. Event cameras deliver microsecond timestamps, enabling gaze-based interfaces responsive enough that users can feel the difference.
2. Power is product. Pushing intelligence into the sensor reduces data movement, the biggest hidden energy tax in wearables; a back-of-envelope comparison follows this list.
3. Privacy by design. Less raw imagery leaving the sensor means a smaller attack surface for biometric leakage.
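
On the data-movement point in item 2, a back-of-envelope comparison makes the scale concrete. Every number below is an illustrative assumption of ours, not a measurement from the cited work:

```python
# Hypothetical readout budgets for a near-eye tracker.
frame_rate_hz = 120            # assumed frame-camera rate
resolution_px = 400 * 400      # assumed sensor resolution
bits_per_pixel = 8
frame_bps = frame_rate_hz * resolution_px * bits_per_pixel

event_rate_eps = 200_000       # assumed events/s during active gaze
bits_per_event = 64            # assumed packed (t, x, y, p) encoding
event_bps = event_rate_eps * bits_per_event

print(f"frame readout: {frame_bps / 1e6:6.1f} Mbit/s")   # 153.6 Mbit/s
print(f"event readout: {event_bps / 1e6:6.1f} Mbit/s")   #  12.8 Mbit/s
print(f"reduction:     {frame_bps / event_bps:.0f}x")     # 12x
```

Under these assumptions, the event stream moves roughly an order of magnitude less data off the sensor before any in-sensor processing is even applied.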

What’s next:

We expect broader benchmarks that couple accuracy with system power and latency budgets, not just F1 scores; a hypothetical scoring rule below sketches the idea. That’s where in-sensor computing and algorithm–sensor co-design will separate shipping products from demos.
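
As one hypothetical shape such a benchmark could take, here is a scoring rule that simply zeroes out any entry exceeding its system budgets (the metric, budgets, and p10 convention are our own sketch, not a published protocol):

```python
def system_score(p10_accuracy: float, power_mw: float, latency_ms: float,
                 power_budget_mw: float = 20.0,
                 latency_budget_ms: float = 5.0) -> float:
    """Hypothetical benchmark score coupling accuracy with system budgets.

    p10_accuracy: fraction of gaze predictions within 10 px of ground
    truth. Budgets and the hard cutoff are illustrative assumptions.
    """
    if power_mw > power_budget_mw or latency_ms > latency_budget_ms:
        return 0.0  # over budget: a demo, not a shippable product
    return p10_accuracy

# A slightly less accurate but budget-compliant tracker outranks an
# accurate one that blows the power budget:
print(system_score(0.92, power_mw=15.0, latency_ms=3.0))  # 0.92
print(system_score(0.97, power_mw=45.0, latency_ms=3.0))  # 0.0
```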
If you’re building AR/XR, HCI, or clinical tools and want to discuss embedded, power-aware eye tracking, let’s connect.
