Yan Di
I am a final-year PhD student at the Chair for Computer Aided Medical Procedures & Augmented Reality (CAMP), Technical University of Munich. My supervisors are PD Dr. Federico Tombari and Prof. Nassir Navab. I also work with Dr. Fabian Manhardt from Google.
I joined Google Munich as a Student Researcher in August 2023.
In 2020, I received my master's degree from the Department of Automation at Tsinghua University, under the supervision of Prof. Xiangyang Ji. In 2017, I received my B.Eng. degree from the same department.
Research
My research focuses on object pose estimation and its applications in 3D part assembly, shape retrieval, shape matching, and robotic grasping.
News
- [2024/02] Five papers are accepted to CVPR2024. KP-RED and ShapeMaker focus on joint shape canonicalization, segmentation, retrieval, and deformation. HiPose achieves near state-of-the-art performance on instance-level pose estimation while running very fast. SecondPose outperforms competitors on category-level pose estimation. MOHO uses a synthetic-to-real strategy for hand-held object reconstruction and provides a new synthetic dataset for training.
- [2024/01] Our paper SG-Bot on scene-graph-based object rearrangement is accepted to ICRA2024.
- [2023/09] Our paper DDF-HO on hand-held object reconstruction is accepted to NeurIPS2023.
- [2023/09] Our paper CommonScenes on scene generation from scene graphs is accepted to NeurIPS2023.
- [2023/07] Our paper U-RED on unsupervised shape retrieval and deformation in indoor scenes is accepted to ICCV2023.
- [2023/03] Our paper SST on neural reconstruction from RGB sequences is accepted to ICME2023.
- [2023/02] Our paper IPCC-TP on trajectory prediction in traffic scenes is accepted to CVPR2023.
- [2023/02] Our paper on self-supervised category-level pose estimation is accepted to RAL2023.
- [2023/01] Our robotic grasping paper MonoGraspNet is accepted to ICRA2023.
- [2023/01] Our 3D object detection paper OPA-3D (category-level pose estimation in traffic scenes) is accepted to IEEE Robotics and Automation Letters (RAL2023).
- [2022/10] Our method ZebraPoseSAT won the 'Overall Best Segmentation Method' and 'Best BlenderProc-Trained Segmentation Method' awards in the BOP Challenge at ECCV 2022. It was also the second-best RGB-only pose estimation method. I contributed part of the code.
- [2022/06] Our category-level pose estimation papers GPV-Pose, RBP-Pose, and SSP-Pose are accepted to CVPR2022, ECCV2022, and IROS2022, respectively.
- [2021/06] Our instance-level pose estimation work SO-Pose is accepted to ICCV2021.
- [2020/06] Our dynamic reconstruction works are accepted to ICCV2019 and ICRA2020, respectively.