Localizing Faster and Sooner: Adventures in Event Cameras and Spiking Neural Networks
Tobias Fischer, Queensland University of Technology (Personal website)
Hangzhou International Expo Center, Room 301
Time | Speaker | Topic/Title |
---|---|---|
13:30–13:40 | Organizers | Welcome Talk: Introduction of the Workshop |
13:40–14:00 | Prof. Tobias Fischer | Localizing Faster and Sooner: Adventures in Event Cameras and Spiking Neural Networks |
14:00–14:20 | Prof. Yuchao Dai | Event Camera Vision: Motion Perception and Generation |
14:20–14:35 | Dr. Ning Qiao (CEO of SynSense) | Neuromorphic Sensing and Computing Empowering Industrial Intelligence |
14:35–14:50 | Dr. Min Liu (CEO of Dvsense) | Revolutionizing Vision with Event Cameras: Insights from an Industry Startup |
14:50–15:20 | - | Tea Break |
15:20–15:40 | Prof. Yu Lei | Integrating Asynchronous Event Data with New Deep Learning Models: Challenges, Techniques, and Future Directions |
15:40–16:00 | Prof. Jinshan Pan | Event-Based Imaging: Advancements in Enhancing Visual Perception under Challenging Conditions |
16:00–16:15 | Prof. Yulia Sandamirskaya | Neuromorphic Computing: From Theory to Applications |
16:15–16:30 | Prof. Kuk-Jin Yoon | Multi-Modal Fusion in Computer Vision: Leveraging Event Data for Enhanced Object Detection and Scene Understanding |
16:30–16:40 | Organizers | Introduction of the Event-based SLAM Challenge: Background and Setup |
16:40–16:45 | Organizers | Awards Ceremony |
16:45–17:00 | Winner | Winner Presentation |
17:00–17:30 | Panelists | Community Dilemma: High Event Camera Costs vs. Limited Adoption Hindering Growth and Mass Production |
17:30 | - | End |
Note: All times are in the local time zone of IROS 2025 (Beijing time, UTC+8).
Knowing your location has long been fundamental to robotics and has driven major technological advances across industry and academia. Despite significant research progress, critical challenges to enduring deployment remain, including running these methods on resource-constrained robots and providing robust localisation in challenging, GPS-denied environments. This talk explores Visual Place Recognition (VPR): the ability to recognise previously visited locations using only visual data. I will demonstrate how neuromorphic approaches built on event cameras and spiking neural networks can provide low-power edge devices with location information, offering superior energy efficiency, adaptability, and data efficiency.
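The core neuromorphic ingredient named above is the spiking neuron. As a minimal illustration of how a spiking network can digest event-camera output, the sketch below runs binned event counts through a leaky integrate-and-fire (LIF) layer and reduces the result to a spike-rate signature that could be matched against signatures stored from an earlier traverse. This is a toy sketch under assumed parameters, not the speaker's actual VPR pipeline.

```python
import numpy as np

def lif_forward(inputs, tau=20.0, v_thresh=1.0, dt=1.0):
    """inputs: (T, N) input currents per timestep; returns (T, N) binary spikes."""
    T, N = inputs.shape
    v = np.zeros(N)                 # membrane potentials
    spikes = np.zeros((T, N))
    decay = np.exp(-dt / tau)       # exponential leak per step
    for t in range(T):
        v = v * decay + inputs[t]   # leak, then integrate this step's input
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = 0.0              # hard reset after a spike
    return spikes

# Toy usage (all numbers illustrative): 100 timesteps of event counts
# binned onto 64 coarse pixels, reduced to a per-neuron spike rate.
rng = np.random.default_rng(0)
event_counts = rng.poisson(0.3, size=(100, 64)).astype(float)
signature = lif_forward(event_counts).mean(axis=0)
# A query signature could then be matched by nearest neighbour against
# stored signatures to recognise a previously visited place.
```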
As a new type of neuromorphic vision sensor, the event camera responds asynchronously to pixel-level brightness changes, overcoming the limitations of traditional frame-based cameras in high-speed motion and high-dynamic-range scenarios. Event cameras show great potential in fields such as autonomous driving, robot navigation, military defense, deep-space exploration, and high-speed industrial inspection. This talk focuses on our research group's work on event camera-based motion perception and generation, covering sub-tasks such as 2D and 3D motion estimation, long-term point trajectory tracking, moving object tracking and segmentation, video frame generation, and novel view synthesis. The goal is to overcome the existing perception bottlenecks of frame-based cameras and demonstrate the potential of event cameras for perception and generation in complex dynamic scenes.
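Most of the sub-tasks listed above begin from a shared preprocessing step: converting the asynchronous event stream, where each event is a tuple (x, y, timestamp, polarity), into a tensor a network can consume. The sketch below shows one common such representation, a voxel grid with temporally bilinear weighting; the bin count, sensor size, and toy data are illustrative assumptions rather than the group's specific design.

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, H, W, num_bins=5):
    """Scatter events (x, y, t, p) into a (num_bins, H, W) tensor, splitting
    each event's polarity bilinearly between its two nearest time bins."""
    grid = np.zeros((num_bins, H, W))
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    b0 = np.floor(t_norm).astype(int)
    w1 = t_norm - b0                                 # weight of the upper bin
    np.add.at(grid, (b0, y, x), p * (1.0 - w1))      # lower-bin contribution
    np.add.at(grid, (np.clip(b0 + 1, 0, num_bins - 1), y, x), p * w1)
    return grid

# Toy usage: 1000 synthetic events on a 180x240 sensor.
rng = np.random.default_rng(1)
n = 1000
vox = events_to_voxel_grid(
    x=rng.integers(0, 240, n), y=rng.integers(0, 180, n),
    t=np.sort(rng.uniform(0.0, 1e5, n)), p=rng.choice([-1.0, 1.0], n),
    H=180, W=240)
```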
TBD
TBD
We explore the integration of asynchronous event-based vision with traditional imaging pipelines to enhance visual perception capabilities. Event cameras, which capture pixel-level brightness changes asynchronously with microsecond temporal resolution, offer significant advantages over conventional frame-based cameras in challenging scenarios such as high-speed motion, extreme lighting conditions, and power-constrained environments. We present novel methodologies for seamlessly incorporating event data into existing imaging systems, including aperture synthesis, auto-focusing, shutter control, and post-processing fusion. Our approach demonstrates substantial improvements across all components of the imaging system and exhibits significant potential for downstream tasks including tracking and scene reconstruction, particularly in scenarios where traditional cameras struggle. We will discuss the key challenges and future perspectives for developing next-generation computer vision systems that can leverage the complementary strengths of both event-based and frame-based sensing modalities.
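A useful mental model behind this kind of event/frame fusion is the event generation model: a pixel emits an event whenever its log intensity changes by a fixed contrast threshold, so events recorded after an exposure can carry the last frame forward in time. The sketch below illustrates only this textbook model; the threshold value and interface are assumptions, not the methods presented in the talk.

```python
import numpy as np

def propagate_log_frame(log_frame, x, y, p, C=0.2):
    """Advance a log-intensity frame using events that arrived after it:
    each event at pixel (x, y) marks a signed contrast step of size C."""
    updated = log_frame.copy()
    np.add.at(updated, (y, x), C * p)  # accumulate +/-C per event in log space
    return updated

# Toy usage: nudge a uniform grey frame with 50 synthetic events.
rng = np.random.default_rng(2)
frame = np.full((180, 240), np.log(0.5))
xs, ys = rng.integers(0, 240, 50), rng.integers(0, 180, 50)
predicted = np.exp(propagate_log_frame(frame, xs, ys, rng.choice([-1.0, 1.0], 50)))
```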
TBD
TBD
TBD
We introduce a benchmarking framework for the task of event-based state estimation.
This framework is instantiated through an IROS 2025 Workshop Challenge that benchmarks state-of-the-art methods, yielding insights into optimal architectures and persistent challenges.
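For concreteness, the sketch below computes one metric such a benchmark commonly reports: absolute trajectory error (ATE), i.e. the RMSE between estimated and ground-truth positions after a best-fit rigid alignment. This follows standard SLAM-evaluation convention and is offered as an assumption; the challenge's actual protocol is described on the websites linked below.

```python
import numpy as np

def ate_rmse(est, gt):
    """est, gt: (N, 3) time-synchronised positions; returns the RMSE (metres)
    after a Kabsch/Umeyama rigid alignment (rotation + translation, no scale)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g                 # centre both trajectories
    U, _, Vt = np.linalg.svd(E.T @ G)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # rotation mapping est -> gt
    aligned = E @ R.T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```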
Please visit the challenge websites for more details: Overview and Submission.
Cash Awards: First Prize 3,000 RMB; Second Prize 2,000 RMB; Third Prize 1,000 RMB.
Any questions about the challenge can be directed to junkainiu@hnu.edu.cn.
Name | Affiliation | Link |
---|---|---|
Yi Zhou | Hunan University | Personal website |
Jianhao Jiao | UCL | Personal website |
Yifu Wang | Vertex Lab | Personal website |
Boxin Shi | Peking University | Personal website |
Liyuan Pan | Beijing Institute of Technology | Personal website |
Laurent Kneip | ShanghaiTech University | Personal website |
Richard Hartley | Australian National University | Personal website |
Name | Affiliation | Link |
---|---|---|
Junkai Niu | HNU, NAIL Lab | Personal website |
Sheng Zhong | HNU, NAIL Lab | Personal website |
Kaizhen Sun | HNU, NAIL Lab | Personal website |
Yi Zhou | HNU, NAIL Lab | Personal website |
Davide Scaramuzza (Advisory Board) | UZH, RPG Lab | Personal website |
Guillermo Gallego (Advisory Board) | TU Berlin, Robotic Interactive Perception Lab | Personal website |
SynSense
iniVation
Name | Email | Responsibility |
---|---|---|
Prof. Yi Zhou | eeyzhou(at)hnu(dot)edu(dot)cn | General workshop inquiries |
Dr. Jianhao Jiao | jiaojh1994(at)gmail(dot)com | Website and advertising-related questions |
Dr. Yifu Wang | usasuper(at)126(dot)com | Speaker information and program details |