High-Resolution Image Inpainting Using Multi-Scale Neural Patch Synthesis

Chao Yang
Xin Lu
Zhe Lin
Eli Shechtman
Oliver Wang
Hao Li
USC/Adobe Research
CVPR 2017



High-resolution inpainting results on held-out images of ImageNet.


New: Thanks to Arunabh Sharma for sharing inpainting results obtained by editing his own photos (click for the original pictures):




New: We have released the inpainting results for 200 ImageNet images and 100 Paris StreetView images.
New: Faster inpainting code that increases speed by 6x.
New: All raw images are available for download on the project website.

Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context-aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris StreetView datasets and achieve state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images.
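To make the joint optimization concrete, the sketch below illustrates the general idea in PyTorch rather than the released Torch code: the hole region is optimized under a content constraint (a coarse prediction from a separately trained content network), a neural-patch texture constraint that matches feature patches inside the hole to their nearest neighbors in the surrounding context on VGG-19 relu3_1 features, and a total-variation prior. Names such as inpaint, content_pred, hole_mask, and the loss weights are illustrative assumptions, not the paper's exact settings.

# Minimal single-scale sketch of content + texture optimization (assumptions noted above).
# Assumes torchvision >= 0.13 and an input image already in VGG-normalized space.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Mid-layer feature extractor: VGG-19 truncated after relu3_1.
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:12].to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def extract_patches(feat, size=3, stride=1):
    # Unfold a (1, C, H, W) map into (N, C, size, size) patches.
    patches = F.unfold(feat, kernel_size=size, stride=stride)      # (1, C*size*size, N)
    return patches.squeeze(0).t().view(-1, feat.shape[1], size, size)

def texture_loss(x, hole_mask):
    # Match each feature patch overlapping the hole to its nearest context patch.
    feat = vgg(x)
    mask = F.interpolate(hole_mask, size=feat.shape[-2:], mode="nearest")
    patches = extract_patches(feat)
    in_hole = extract_patches(mask).mean(dim=(1, 2, 3)) > 0.5
    hole_p, ctx_p = patches[in_hole], patches[~in_hole].detach()
    if hole_p.numel() == 0 or ctx_p.numel() == 0:
        return x.new_zeros(())
    # Cosine-similarity nearest neighbors computed as a cross-correlation.
    w = ctx_p / (ctx_p.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
    sims = F.conv2d(hole_p, w).squeeze(-1).squeeze(-1)              # (N_hole, N_ctx)
    nn_idx = sims.argmax(dim=1)
    return F.mse_loss(hole_p, ctx_p[nn_idx])

def tv_loss(x):
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def inpaint(image, hole_mask, content_pred, steps=300, w_tex=1e-3, w_tv=1e-4):
    # image: (1,3,H,W) with the hole zeroed; hole_mask: (1,1,H,W), 1 inside the hole;
    # content_pred: coarse hole prediction from the content network (illustrative input).
    x = (image * (1 - hole_mask) + content_pred * hole_mask).clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        content = F.mse_loss(x * hole_mask, content_pred * hole_mask)
        loss = content + w_tex * texture_loss(x, hole_mask) + w_tv * tv_loss(x)
        loss.backward()
        opt.step()
        with torch.no_grad():                     # keep the known pixels fixed
            x.data = image * (1 - hole_mask) + x.data * hole_mask
    return x.detach()

# Usage (hypothetical tensors): result = inpaint(img, mask, coarse_prediction)

In the paper this optimization is run coarse-to-fine over multiple scales, with each scale initialized by upsampling the previous result; the sketch shows a single scale for brevity.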


Demo and Source Code

Code
[GitHub]
[Content Network]
[Texture Prototxt]
[Content and Texture Model]


Paper and Supplementary Material


[Paper] 
[Supplementary Material]
[Raw Images]
[CVPR Poster]
[ImageNet 200 Images Inpainting Results]
[Paris StreetView 100 Images Inpainting Results]

Citation
 
Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, Hao Li. High-Resolution Image Inpainting Using Multi-Scale Neural Patch Synthesis. In CVPR 2017.

Bibtex
 
@InProceedings{Yang_2017_CVPR,
author = {Yang, Chao and Lu, Xin and Lin, Zhe and Shechtman, Eli and Wang, Oliver and Li, Hao},
title = {High-Resolution Image Inpainting Using Multi-Scale Neural Patch Synthesis},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}




Acknowledgements

This research is supported in part by Adobe, Oculus & Facebook, Huawei, the Google Faculty Research Award, the Okawa Foundation Research Grant, the Office of Naval Research (ONR) / U.S. Navy, under award number N00014-15-1-2639, the Office of the Director of National Intelligence (ODNI) and Intelligence Advanced Research Projects Activity (IARPA), under contract number 2014-14071600010, and the U.S. Army Research Laboratory (ARL) under contract W911NF-14-D-0005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, ARL, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
