
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

This paper explores a new way of controlling GANs: "dragging" any points of the image to precisely reach target points in a user-interactive manner. The proposed method, DragGAN, deforms an image with precise control over where pixels go, manipulating the pose, shape, expression, and layout of the generated objects.

by Sophia

Xingang Pan 1,2   Ayush Tewari 3   Thomas Leimkühler 1   Lingjie Liu 1,4   Abhimitra Meka 5   Christian Theobalt 1,2  
1 Max Planck Institute for Informatics   2 Saarbrücken Research Center for Visual Computing, Interaction and AI   3 MIT   4 University of Pennsylvania   5 Google AR/VR

Abstract

Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects. Existing approaches gain controllability of generative adversarial networks (GANs) via manually annotated training data or a prior 3D model, which often lack flexibility, precision, and generality. In this work, we study a powerful yet much less explored way of controlling GANs, that is, to "drag" any points of the image to precisely reach target points in a user-interactive manner, as shown in Fig. 1. To achieve this, we propose DragGAN, which consists of two main components: 1) a feature-based motion supervision that drives the handle point to move towards the target position, and 2) a new point tracking approach that leverages the discriminative GAN features to keep localizing the position of the handle points. Through DragGAN, anyone can deform an image with precise control over where pixels go, thus manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, landscapes, etc. As these manipulations are performed on the learned generative image manifold of a GAN, they tend to produce realistic outputs even for challenging scenarios such as hallucinating occluded content and deforming shapes that consistently follow the object's rigidity. Both qualitative and quantitative comparisons demonstrate the advantage of DragGAN over prior approaches in the tasks of image manipulation and point tracking. We also showcase the manipulation of real images through GAN inversion.
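
To make the two components concrete, here is a minimal PyTorch-style sketch of one DragGAN-style editing iteration: a feature-based motion-supervision loss that pulls the content at the handle point one step toward the target, and a nearest-neighbour point-tracking search over the generator's intermediate feature map. The function names, patch radii (r1, r2), integer rounding of the step, the L1 feature distance, and the generator accessor are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def motion_supervision_loss(feat, handle, target, r1=3):
    """Simplified feature-based motion supervision.

    feat:   (C, H, W) intermediate generator feature map (differentiable
            w.r.t. the latent code being optimized)
    handle: (y, x) current handle point (assumed at least r1+1 pixels
            from the image border)
    target: (y, x) user-specified target point
    """
    # Unit direction from the handle toward the target.
    d = (torch.tensor(target, dtype=torch.float32)
         - torch.tensor(handle, dtype=torch.float32))
    d = d / (d.norm() + 1e-8)

    y, x = handle
    # Feature patch around the handle point.
    patch = feat[:, y - r1:y + r1 + 1, x - r1:x + r1 + 1]

    # The same patch shifted one (rounded) pixel step toward the target;
    # the paper uses bilinear sampling rather than rounding.
    ys = int(round(y + d[0].item()))
    xs = int(round(x + d[1].item()))
    shifted = feat[:, ys - r1:ys + r1 + 1, xs - r1:xs + r1 + 1]

    # Push the shifted patch to match the *detached* current patch, which
    # drags the handle's content one step toward the target when the
    # latent code is updated by gradient descent on this loss.
    return F.l1_loss(shifted, patch.detach())

def track_handle(feat, feat0, handle0, handle, r2=12):
    """Simplified point tracking: nearest neighbour in feature space.

    feat0, handle0: feature map and handle position before editing
    feat, handle:   current feature map and current handle estimate
    """
    C, H, W = feat.shape
    f0 = feat0[:, handle0[0], handle0[1]]  # reference feature of the handle
    y0, x0 = handle
    best, best_pos = float("inf"), handle
    for yy in range(max(0, y0 - r2), min(H, y0 + r2 + 1)):
        for xx in range(max(0, x0 - r2), min(W, x0 + r2 + 1)):
            dist = (feat[:, yy, xx] - f0).abs().sum().item()
            if dist < best:
                best, best_pos = dist, (yy, xx)
    return best_pos

# One editing iteration, given a StyleGAN-like generator whose feature
# accessor is hypothetical here:
#   feat = generator.synthesize_with_features(w)   # assumed API
#   loss = motion_supervision_loss(feat, handle, target)
#   loss.backward(); optimizer.step()              # updates the latent w
#   feat = generator.synthesize_with_features(w)
#   handle = track_handle(feat, feat0, handle0, handle)
```

The full method also supports a binary mask that keeps user-selected regions fixed during motion supervision and handles multiple handle-target pairs at once; both details are omitted from this sketch for brevity.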

Results

[Video demos of DragGAN manipulations]

