From 2D to 3D / From Image to Video
Time / Place:
⏱️ 09/15 (Fri.) 14:00-14:30 at R2 - 2nd Conference Room
Abstract:
Generative models, particularly diffusion models, have transformed artificial intelligence by enabling machines to generate realistic and creative content. While most work has focused on generating and manipulating static 2D images, recent months have seen rapid advances in 3D content and dynamic video generation. This marks a significant extension in both spatial and temporal dimensions: from 2D to 3D, and from images to videos. This expansion brings a confluence of intriguing shared properties and formidable challenges. This talk will briefly introduce the current research landscape for both 3D and video synthesis, highlighting their distinctive attributes and common hurdles.
Biography:
- 李昕穎 Hsin-Ying Lee
Website: http://hsinyinglee.com/
- Snap Inc. / Senior Research Scientist, Creative Vision Team, Snap Research
- Hsin-Ying Lee is a Senior Research Scientist on the Creative Vision team at Snap Research. He has worked on generative models since 2017, and his recent research interests lie in 3D and 4D generation.