Efficient Cloth Try-on Image Synthesis via Denoising Diffusion Model
Pose-guided person image synthesis aims to render an image of a person with a desired pose and appearance. Specifically, the appearance is defined by a given source image and the pose by a set of keypoints. Control over the pose and style of synthesized person images is an important requisite for applications such as e-commerce, virtual reality, the metaverse, and content generation for the entertainment industry. However, current diffusion-based solutions usually take a significant amount of time to generate a single image, because sampling requires many sequential denoising steps. This project aims to develop a new diffusion model that can generate images in seconds.
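The speed-up idea above hinges on reducing the number of denoising steps at sampling time. Below is a minimal, self-contained sketch of a few-step DDIM-style sampling loop. The schedule, step count, and the `eps_model` stub are all illustrative assumptions; a real model for this task would be a trained network conditioned on the source (appearance) image and pose keypoints.

```python
import numpy as np

def eps_model(x, t):
    # Hypothetical noise-prediction network. A real try-on model would be
    # conditioned on the source image and pose keypoints; this stub simply
    # predicts zero noise so the sketch stays self-contained and runnable.
    return np.zeros_like(x)

def ddim_sample(shape, num_steps=4, seed=0):
    """Deterministic DDIM-style sampling with only a few steps
    (few steps are what make generation take seconds, not minutes)."""
    rng = np.random.default_rng(seed)
    # Illustrative alpha-bar schedule from very noisy (0.01) to nearly
    # clean (0.9999) over the selected timesteps (an assumption).
    alphas_bar = np.linspace(0.01, 0.9999, num_steps + 1)
    x = rng.standard_normal(shape)  # start from pure Gaussian noise
    for i in range(num_steps):
        a_t, a_prev = alphas_bar[i], alphas_bar[i + 1]
        eps = eps_model(x, i)
        # Predict the clean image x0, then step to the next noise level.
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
    return x

img = ddim_sample((1, 3, 8, 8), num_steps=4)
print(img.shape)  # (1, 3, 8, 8)
```

The key design choice is that DDIM-style samplers are deterministic given the initial noise, which lets them skip most of the hundreds of steps a standard DDPM sampler would take while following the same learned denoiser.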
Supervisor: Mingming Gong, Feng Liu