Chenghao Qian

Hi! 👋 I am currently pursuing a Ph.D. at Virtuocity. Prior to this, I completed my master’s degree at the University of Sydney and spent several years working on robotics and autonomous driving at XPENG and UBTech. My overarching goal is to enable robots to operate reliably in any location, under any weather, at any time, and perhaps, one day, even jump on the moon. 🚀
My current research tackles the core challenges that autonomous driving faces in adverse conditions. To address them, I focus on the following methodologies:
- Robust Perception: Designing perception algorithms for reliable scene understanding in challenging real-world scenarios, including snowy, rainy, foggy, and nighttime conditions.
- Photorealistic Simulation: Developing high-fidelity simulations that replicate challenging scenarios to support foundation model validation.
- Generative Modelling: Using generative frameworks to synthesize long-tailed, safety-critical driving scenes for dataset enrichment and to mitigate weather-induced visual effects for improved visual clarity.
Always open to collaboration: if you have any questions about my research or would like to work together, please feel free to email me at tscq@leeds.ac.uk. I’d be glad to connect and chat!
Publications
2025
- arXiv
- WeatherGS: 3D Scene Reconstruction in Adverse Weather Conditions via Gaussian Splatting. IEEE International Conference on Robotics and Automation (ICRA), 2025
- WeatherDG: LLM-assisted Procedural Weather Generation for Domain-Generalized Semantic Segmentation. IEEE Robotics and Automation Letters (RA-L), 2025
2024
- 🌟 Best Paper Award (1/2106): AllWeather-Net: Unified Image Enhancement for Autonomous Driving Under Adverse Weather and Low-Light Conditions. International Conference on Pattern Recognition (ICPR), 2024