

PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference

NeurIPS 2024
Kendong Liu · Zhiyu Zhu* · Chuanhao Li* · Hui Liu · Huanqiang Zeng · Junhui Hou ·

arXiv · Project Page · Google Colab


This repository contains the PyTorch implementation for the paper PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference, NeurIPS 2024.

teaser.png

News

  • [2024-12-16] Released a Colab notebook for assessing inpainting results (Open In Colab).
  • [2024-12-16] Released the inpainting model and the InpaintReward model.
  • [2024-12-16] Released the reinforcement learning code.
  • [2024-09-26] Our paper has been accepted to NeurIPS 2024!

Todo

  • Release training code.
  • Release inpainting model and inpaint reward model.
  • Release InpaintReward training code.
  • Release Datasets.

Setup

Cloning the Repository

Use the following command to clone:

git clone https://github.com/Kenkenzaii/PrefPaint.git

Python Environment

To prepare the Python environment needed to run PrefPaint, execute the following commands:

conda create -n prefpaint python=3.8
conda activate prefpaint

# By default, we use PyTorch 2.0.0 with CUDA 11.8 (torch==2.0.0+cu118).
pip install -r requirements.txt
pip install clip@git+https://github.com/openai/CLIP.git
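Before running anything, it can help to confirm the environment actually resolved. Below is a small, hedged sanity check; the package list is an assumption based on the install commands above, so adjust it to match requirements.txt:

```python
# Sanity check: verify the core packages are importable before running
# PrefPaint. REQUIRED is an assumed list; edit it to match requirements.txt.
import importlib.util

REQUIRED = ["torch", "diffusers", "PIL", "clip"]

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    gaps = missing_packages(REQUIRED)
    print("Missing packages:", ", ".join(gaps) if gaps else "none")
```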

Usage

import os
from PIL import Image
from diffusers import AutoPipelineForInpainting

pipe = AutoPipelineForInpainting.from_pretrained('kd5678/prefpaint-v1.0').to("cuda")

color_path = './examples/image.jpg'
mask_path = './examples/mask.jpg'
os.makedirs('results', exist_ok=True)

image, mask = Image.open(color_path), Image.open(mask_path).convert('L')
# You can provide your prompt here.
prompt = ""
result = pipe(prompt=prompt, image=image, mask_image=mask, eta=1.0).images[0]            
result.save('./results/result.jpg')
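Latent-diffusion inpainting pipelines typically expect image dimensions divisible by 8; whether this pipeline enforces that is an assumption here, not something the repo states. A minimal sketch for snapping arbitrary sizes before calling the pipe:

```python
# Sketch: round (width, height) down to the nearest multiple of `base`.
# Assumes this pipeline shares the usual divisible-by-8 constraint of
# latent-diffusion models; verify against your own inputs.
def snap_size(width: int, height: int, base: int = 8) -> tuple:
    return (width - width % base, height - height % base)
```

Resize both the image and the mask to the snapped size (e.g. with PIL's `Image.resize`) so they stay aligned with each other.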

Training the inpainting reward model

To train your own inpainting reward model, follow these steps:

  1. Prepare Datasets. Please download the training dataset from this Google Drive link.
  2. Modify the path in the config files (configs/imagereward_train_configs.yaml).
  3. Follow the steps in ImageReward/README.md.
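For background, reward models in the ImageReward family are typically trained with a Bradley-Terry-style pairwise loss over human-ranked pairs. The sketch below illustrates that loss; the function name and scalar interface are illustrative, not this repo's API:

```python
import math

def pairwise_preference_loss(r_preferred: float, r_rejected: float) -> float:
    """-log sigmoid(r_preferred - r_rejected): near zero when the
    human-preferred inpainting scores higher, large when the ranking
    is violated. (Illustrative Bradley-Terry loss, not this repo's code.)"""
    margin = r_preferred - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```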

Training the inpainting model

To train your own inpainting model, follow these steps:

  1. Prepare Datasets. Please download the training dataset from this Google Drive link.
  2. Download the inpaint reward model:
import os
from huggingface_hub import hf_hub_download

def reward_model_download(filename: str, root: str) -> str:
    """Fetch the reward checkpoint from the Hugging Face Hub into `root`."""
    os.makedirs(root, exist_ok=True)
    # hf_hub_download returns the local path of the downloaded file.
    return hf_hub_download(repo_id="kd5678/prefpaintReward",
                           filename=filename,
                           local_dir=root)

model_path = reward_model_download('prefpaintReward.pt', './checkpoint')
  3. Run bash train.sh or accelerate launch train_inpainting_model.py.
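For intuition, the reinforcement-learning stage aligns the diffusion model with the reward model's scores. Below is a highly simplified, hypothetical sketch of a reward-weighted loss; it is not the paper's exact objective, which lives in train_inpainting_model.py:

```python
def reward_weighted_loss(per_sample_losses, rewards, baseline=0.0):
    """Average of per-sample denoising losses weighted by centered
    rewards, so high-reward generations are reinforced and low-reward
    ones suppressed. (Illustrative sketch, not the paper's objective.)"""
    advantages = [r - baseline for r in rewards]
    weighted = [a * l for a, l in zip(advantages, per_sample_losses)]
    return sum(weighted) / len(weighted)
```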

Testing

For testing the performance of our inpainting model, a Colab notebook (Google Colab) is provided for a quick evaluation of the inpainted results, where you can upload your own inputs.

Acknowledgement

Thanks to the following wonderful works: Diffusers and ImageReward.

Citation

@article{liu2024prefpaint,
  title={Prefpaint: Aligning image inpainting diffusion model with human preference},
  author={Liu, Kendong and Zhu, Zhiyu and Li, Chuanhao and Liu, Hui and Zeng, Huanqiang and Hou, Junhui},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={30554--30589},
  year={2024}
}
