Workshop on Event-Based Vision - IROS 2025


Location

Hangzhou International Expo Center, Room 301

Important Dates

Context

Event-based cameras are bio-inspired visual sensors that mimic the transient pathway of the human visual system, offering key advantages (e.g., microsecond temporal resolution and high dynamic range) that hold the potential to revolutionize robot state estimation and image processing. Since the first commercially available event camera in 2008 and the first Workshop on Event-based Vision at ICRA 2017, the community has witnessed a surge in event-based/-enhanced solutions for robotics and computer vision. However, the community faces a chicken-and-egg dilemma: on one hand, the high price of event cameras stifles community growth; on the other hand, the absence of large-scale deployment of event-based solutions discourages mass production of these cameras. To this end, this workshop is dedicated to event-based vision, with a particular focus on its development in state estimation and image processing.
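For readers new to the modality, the sketch below shows the basic data format: instead of frames, an event camera emits a sparse, asynchronous stream of (timestamp, x, y, polarity) tuples, which can be accumulated over a short window into a frame-like image for conventional pipelines. This is a generic illustration, not tied to any particular camera SDK; the array layout and example events are made up.

```python
import numpy as np

# Accumulate an event stream into a 2-D "event frame" by summing
# per-pixel polarities. Each event is a (t, x, y, p) tuple with
# microsecond timestamp t and polarity p in {-1, +1}.
def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    frame = np.zeros((height, width), dtype=np.float32)
    # np.add.at handles repeated pixel indices correctly.
    np.add.at(frame,
              (events[:, 2].astype(int), events[:, 1].astype(int)),
              events[:, 3])
    return frame

# Three synthetic events on a 4x4 sensor.
evts = np.array([[10, 1, 2, +1],
                 [15, 1, 2, +1],
                 [20, 3, 0, -1]], dtype=np.float32)
print(events_to_frame(evts, height=4, width=4))
```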


This workshop builds on the tradition of inviting pioneering figures in the community as speakers, while also serving as a bridge between international/domestic start-ups and academia. It aims to promote discussion of the roadblocks that hinder progress in the field and to foster collaborative solutions for overcoming them. In addition, the first-ever Event-based SLAM Challenge will be held at this workshop. The challenge seeks to benchmark state-of-the-art algorithms, encourage innovation in event-driven/-enhanced approaches, and push the boundaries of what is achievable in real-time, ultra-frame-rate state estimation for high-speed robots. Overall, the workshop places a strong emphasis on the reproducibility of research findings in real-world scenarios and their tangible impact on advancing robotics technology.


Program

Time Speaker Topic/Title
13:30–13:40 Organizers Welcome Talk - Introduction of the Workshop
13:40–14:00 Prof. Tobias Fischer Localizing Faster and Sooner: Adventures in Event Cameras and Spiking Neural Networks
14:00–14:20 Prof. Yuchao Dai Event Camera Vision: Motion Perception and Generation
14:20–14:35 Dr. Ning Qiao (CEO of SynSense) Neuromorphic Sensing and Computing Empowering Industrial Intelligence
14:35–14:50 Dr. Min Liu (CEO of Dvsense) Revolutionizing Vision with Event Cameras: Insights from an Industry Startup
14:50–15:20 - Tea Break
15:20–15:40 Prof. Lei Yu How to Integrate Asynchronous Events in Our Imaging Pipeline?
15:40–16:00 Prof. Jinshan Pan Event-Based Imaging: Advancements in Enhancing Visual Perception under Challenging Conditions
16:00–16:15 Prof. Yulia Sandamirskaya Neuromorphic Computing: From Theory to Applications
16:15–16:30 Prof. Kuk-Jin Yoon Multi-Modal Fusion in Computer Vision: Leveraging Event Data for Enhanced Object Detection and Scene Understanding
16:30–16:40 Organizers Introduction of the Event-based SLAM Challenge: Background and Setup
16:40–16:45 Organizers Awards Ceremony
16:45–17:00 Winner Winner Presentation
17:00–17:30 Panelists Panel Discussion - Community Dilemma: High Event Camera Costs vs. Limited Adoption Hindering Growth and Mass Production
17:30 End of Workshop

Note: All times are in the local time zone of IROS 2025 (Beijing).

Speakers  


Localizing Faster and Sooner: Adventures in Event Cameras and Spiking Neural Networks

Tobias Fischer, Queensland University of Technology
Personal website
Abstract

Knowing your location has long been fundamental to robotics and has driven major technological advances from industry to academia. Despite significant research progress, critical challenges to enduring deployment remain, including running these methods on resource-constrained robots and providing robust localisation in GPS-denied, challenging environments. This talk explores Visual Place Recognition (VPR), the ability to recognise previously visited locations using only visual data. I will demonstrate how neuromorphic approaches using event-based cameras and spiking neural networks can provide low-power edge devices with location information while offering superior energy efficiency, adaptability, and data efficiency.
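As a toy illustration of the retrieval step at the heart of VPR (a generic sketch, not the speaker's method), each visited place can be summarized by a global descriptor, for instance a flattened, normalized event-count image, and a query is then localized by nearest-neighbour search over the reference database; the descriptor size and data below are invented for the example.

```python
import numpy as np

# Toy VPR matching: localize a query by finding the most similar
# reference descriptor under cosine similarity.
def cosine_match(query: np.ndarray, database: np.ndarray) -> int:
    """Return the index of the database descriptor most similar to the query."""
    q = query / (np.linalg.norm(query) + 1e-12)
    db = database / (np.linalg.norm(database, axis=1, keepdims=True) + 1e-12)
    return int(np.argmax(db @ q))

rng = np.random.default_rng(0)
database = rng.random((100, 1024))               # 100 reference places
query = database[42] + 0.05 * rng.random(1024)   # a noisy revisit of place 42
print(cosine_match(query, database))             # -> 42
```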



Event Camera Vision: Motion Perception and Generation

Yuchao Dai, Northwestern Polytechnical University
Personal website
Abstract

As a new type of neuromorphic vision sensor, the event camera asynchronously responds to pixel-level brightness changes, breaking through the limitations of traditional frame-based cameras in high-speed motion and high-dynamic-range scenarios. Event cameras show great potential in fields such as autonomous driving, robot navigation, military defense, deep space exploration, and high-speed industrial inspection. This talk focuses on our research group's work in event camera-based motion perception and generation, covering sub-tasks such as 2D and 3D motion estimation, long-term point trajectory tracking, moving object tracking and segmentation, video frame generation, and novel view synthesis. The goal is to overcome existing perception bottlenecks of frame-based cameras and demonstrate the potential of event cameras for perception and generation in complex dynamic scenes.



Neuromorphic Sensing and Computing Empowering Industrial Intelligence

Ning Qiao, CEO of SynSense
Personal website
Abstract

TBD



Revolutionizing Vision with Event Cameras: Insights from an Industry Startup

Min Liu, CEO of Dvsense
Personal website
Abstract

TBD



How to Integrate Asynchronous Events in Our Imaging Pipeline?

Lei Yu, Wuhan University
Personal website
Abstract

We explore the integration of asynchronous event-based vision with traditional imaging pipelines to enhance visual perception capabilities. Event cameras, which capture pixel-level brightness changes asynchronously with microsecond temporal resolution, offer significant advantages over conventional frame-based cameras in challenging scenarios such as high-speed motion, extreme lighting conditions, and power-constrained environments. We present novel methodologies for seamlessly incorporating event data into existing imaging systems, including aperture synthesis, auto-focusing, shutter control, and post-processing fusion. Our approach demonstrates substantial improvements across all components of the imaging system and exhibits significant potential for downstream tasks including tracking and scene reconstruction, particularly in scenarios where traditional cameras struggle. We will discuss the key challenges and future perspectives for developing next-generation computer vision systems that can leverage the complementary strengths of both event-based and frame-based sensing modalities.
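One classic building block behind such event/frame fusion, stated generically here rather than as the speaker's specific pipeline, is that each event signals a log-brightness change of a fixed contrast threshold C at its pixel. A base intensity frame can therefore be propagated forward in time by integrating the events that arrive after it; the value of C and the event layout below are assumptions for illustration.

```python
import numpy as np

# Minimal sketch: "roll" an intensity frame captured at t=0 forward to
# t_target by applying the log-brightness increments encoded by events.
# Each event is (t, x, y, p) with polarity p in {-1, +1}; C is the
# (assumed known) contrast threshold of the camera.
def roll_forward(frame: np.ndarray, events: np.ndarray,
                 t_target: float, C: float = 0.2) -> np.ndarray:
    log_img = np.log(frame.astype(np.float64) + 1e-6)
    for t, x, y, p in events:             # events sorted by timestamp
        if t > t_target:
            break
        log_img[int(y), int(x)] += C * p  # each event shifts log-intensity by C*p
    return np.exp(log_img)
```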



Event-Based Imaging: Advancements in Enhancing Visual Perception under Challenging Conditions

Jinshan Pan, Nanjing University of Science and Technology
Personal website
Abstract

TBD



Neuromorphic Computing: From Theory to Applications

Yulia Sandamirskaya, Zurich University of Applied Sciences
Personal website
Abstract

TBD



Multi-Modal Fusion in Computer Vision: Leveraging Event Data for Enhanced Object Detection and Scene Understanding

Kuk-Jin Yoon, Korea Advanced Institute of Science & Technology (KAIST)
Personal website
Abstract

TBD



Event-based SLAM Challenge

We introduce a benchmarking framework for the task of event-based state estimation.

This framework is instantiated through an IROS 2025 Workshop Challenge that benchmarks state-of-the-art methods, yielding insights into optimal architectures and persistent challenges.
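For reference, most SLAM benchmarks score a submitted trajectory with the Absolute Trajectory Error (ATE RMSE) after a closed-form rigid alignment of the estimate to the ground truth; whether this challenge uses exactly this protocol is an assumption. A minimal sketch:

```python
import numpy as np

# ATE RMSE: rigidly align the estimated positions (N, 3) to the
# ground-truth positions (N, 3) with the Kabsch algorithm, then report
# the RMSE of the remaining position residuals.
def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g                  # centered trajectories
    U, _, Vt = np.linalg.svd(E.T @ G)             # cross-covariance SVD
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ S @ Vt).T                            # optimal rotation est -> gt
    aligned = (R @ E.T).T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Sanity check: a trajectory compared with itself has (near-)zero error.
traj = np.cumsum(np.random.default_rng(1).normal(size=(100, 3)), axis=0)
print(ate_rmse(traj, traj))  # ~0.0
```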

Please visit the challenge websites for more details: Overview and Submission.

Cash Awards: First Prize 3,000 RMB; Second Prize 2,000 RMB; Third Prize 1,000 RMB.

Any questions about the challenge can be directed to junkainiu@hnu.edu.cn.

Workshop Organizers

Yi Zhou
Hunan University
Personal website
Jianhao Jiao
UCL
Personal website
Yifu Wang
Vertex Lab
Personal website
Boxin Shi
Peking University
Personal website
Liyuan Pan
Beijing Institute of Technology
Personal website
Laurent Kneip
ShanghaiTech University
Personal website
Richard Hartley
Australian National University
Personal website

Challenge Organizers

Junkai Niu
HNU, NAIL Lab
Personal website
Sheng Zhong
HNU, NAIL Lab
Personal website
Kaizhen Sun
HNU, NAIL Lab
Personal website
Yi Zhou
HNU, NAIL Lab
Personal website
Davide Scaramuzza
(Advisory Board)
UZH, RPG Lab
Personal website
Guillermo Gallego
(Advisory Board)
TU Berlin, Robotic Interactive Perception Lab
Personal website

Sponsor

SynSense
IniVation

Contact

Name Email Responsibility
Prof. Yi Zhou eeyzhou(at)hnu(dot)edu(dot)cn General workshop inquiries
Dr. Jianhao Jiao jiaojh1994(at)gmail(dot)com Website and advertising-related questions
Dr. Yifu Wang usasuper(at)126(dot)com Speaker information and program details