Spaces: Running on Zero

yrr committed · 4f92f9f
1 Parent(s): a713a09

update inference code
README.md CHANGED
@@ -1,180 +1,12 @@
-<p align="center">
-  <a href="https://huggingface.co/Shitao/OmniGen-v1">
-    <img alt="Build" src="https://img.shields.io/badge/HF%20Model-🤗-yellow">
-  </a>
-</p>
-
-<h4 align="center">
-  <p>
-    <a href="#1-news">News</a> |
-    <a href="#3-methodology">Methodology</a> |
-    <a href="#4-what-can-omnigen-do">Capabilities</a> |
-    <a href="#5-quick-start">Quick Start</a> |
-    <a href="#6-finetune">Finetune</a> |
-    <a href="#license">License</a> |
-    <a href="#citation">Citation</a>
-  </p>
-</h4>
-
-## 1. News
-- 2024-10-28: We released a new version of the inference code, optimizing memory usage and inference time. See [docs/inference.md](docs/inference.md#requiremented-resources) for details.
-- 2024-10-22: :fire: We released the code for OmniGen. Inference: [docs/inference.md](docs/inference.md). Training: [docs/fine-tuning.md](docs/fine-tuning.md).
-- 2024-10-22: :fire: We released the first version of OmniGen. Model weights: [Shitao/OmniGen-v1](https://huggingface.co/Shitao/OmniGen-v1). HF demo: [🤗](https://huggingface.co/spaces/Shitao/OmniGen).
-
-## 2. Overview
-
-OmniGen is a unified image generation model that can generate a wide range of images from multi-modal prompts. It is designed to be simple, flexible, and easy to use. We provide [inference code](#5-quick-start) so that everyone can explore more functionalities of OmniGen.
-
-Existing image generation models often require loading several additional network modules (such as ControlNet, IP-Adapter, Reference-Net, etc.) and performing extra preprocessing steps (e.g., face detection, pose estimation, cropping, etc.) to generate a satisfactory image. However, **we believe that the future image generation paradigm should be simpler and more flexible: generating various images directly from arbitrary multi-modal instructions, without additional plugins or operations, similar to how GPT works in language generation.**
-
-Due to limited resources, OmniGen still has room for improvement. We will continue to optimize it, and we hope it inspires more universal image generation models. You can also easily fine-tune OmniGen without worrying about designing networks for specific tasks; you just need to prepare the corresponding data and run the [script](#6-finetune). Imagination is no longer limited; everyone can construct any image generation task, and perhaps we can achieve very interesting, wonderful, and creative things.
-
-If you have any questions, ideas, or interesting tasks you want OmniGen to accomplish, feel free to discuss with us: [email protected], [email protected], [email protected]. We welcome any feedback to help us improve the model.
-
-## 3. Methodology
-
-You can see details in our [paper](https://arxiv.org/abs/2409.11340).
-
-## 4. What Can OmniGen Do?
-
-OmniGen is a unified image generation model that you can use to perform various tasks, including but not limited to text-to-image generation, subject-driven generation, identity-preserving generation, image editing, and image-conditioned generation. **OmniGen doesn't need additional plugins or operations; it can automatically identify the features (e.g., required objects, human pose, depth maps) in input images according to the text prompt.**
-We showcase some examples in [inference.ipynb](inference.ipynb), and in [inference_demo.ipynb](inference_demo.ipynb) we show an interesting pipeline to generate and modify an image.
-
-Here is an illustration of OmniGen's capabilities:
-- You can flexibly control image generation via OmniGen.
-![demo](./imgs/demo_cases.png)
-- Referring expression generation: you can input multiple images and use simple, general language to refer to the objects within those images. OmniGen can automatically recognize the necessary objects in each image and generate new images based on them. No additional operations, such as image cropping or face detection, are required.
-![demo](./imgs/referring.png)
-
-If you are not entirely satisfied with certain functionalities or wish to add new capabilities, you can try [fine-tuning OmniGen](#6-finetune).
-
-## 5. Quick Start
-
-### Using OmniGen
-Install via GitHub:
-```bash
-git clone https://github.com/staoxiao/OmniGen.git
-cd OmniGen
-pip install -e .
-```
-
-Here are some examples:
-```python
-from OmniGen import OmniGenPipeline
-
-pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")
-
-# Text to Image
-images = pipe(
-    prompt="A curly-haired man in a red shirt is drinking tea.",
-    height=1024,
-    width=1024,
-    guidance_scale=2.5,
-    seed=0,
-)
-images[0].save("example_t2i.png")  # save output PIL Image
-
-# Multi-modal to Image
-# In the prompt, we use a placeholder to represent each input image, in the format <img><|image_*|></img>.
-# You can pass multiple images in input_images; make sure each image has its own placeholder. For example, for input_images=[img1_path, img2_path], the prompt needs two placeholders: <img><|image_1|></img> and <img><|image_2|></img>.
-images = pipe(
-    prompt="A man in a black shirt is reading a book. The man is the right man in <img><|image_1|></img>.",
-    input_images=["./imgs/test_cases/two_man.jpg"],
-    height=1024,
-    width=1024,
-    guidance_scale=2.5,
-    img_guidance_scale=1.6,
-    seed=0,
-)
-images[0].save("example_ti2i.png")  # save output PIL image
-```
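-
-Building on the placeholder format above, here is a minimal sketch of composing two input images in one prompt. This example is not from the original README; the prompt and file paths are hypothetical:
-```python
-from OmniGen import OmniGenPipeline
-
-pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")
-
-# Two input images require two placeholders, numbered in list order:
-# <|image_1|> refers to input_images[0], <|image_2|> to input_images[1].
-images = pipe(
-    prompt="The woman in <img><|image_1|></img> and the dog in <img><|image_2|></img> "
-           "are walking together in a park.",
-    input_images=["./imgs/woman.jpg", "./imgs/dog.jpg"],  # hypothetical paths
-    height=1024,
-    width=1024,
-    guidance_scale=2.5,
-    img_guidance_scale=1.6,
-    seed=0,
-)
-images[0].save("example_two_refs.png")  # save output PIL image
-```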
-- For the required resources and how to run OmniGen efficiently, please refer to [docs/inference.md#requiremented-resources](docs/inference.md#requiremented-resources).
-- For more image generation examples, refer to [inference.ipynb](inference.ipynb) and [inference_demo.ipynb](inference_demo.ipynb).
-- For more details about the inference arguments, please refer to [docs/inference.md](docs/inference.md).
-
-### Using Diffusers
-Coming soon.
-
-### Gradio Demo
-
-We host an online demo on [Hugging Face](https://huggingface.co/spaces/Shitao/OmniGen).
-
-For a local Gradio demo, install the extra dependencies and run the app:
-```bash
-pip install gradio spaces
-python app.py
-```
-
-## 6. Finetune
-We provide a training script `train.py` to fine-tune OmniGen.
-Here is a toy example of LoRA fine-tuning:
-```bash
-accelerate launch --num_processes=1 train.py \
-    --model_name_or_path Shitao/OmniGen-v1 \
-    --batch_size_per_device 2 \
-    --condition_dropout_prob 0.01 \
-    --lr 1e-3 \
-    --use_lora \
-    --lora_rank 8 \
-    --json_file ./toy_data/toy_subject_data.jsonl \
-    --image_path ./toy_data/images \
-    --max_input_length_limit 18000 \
-    --keep_raw_resolution \
-    --max_image_size 1024 \
-    --gradient_accumulation_steps 1 \
-    --ckpt_every 10 \
-    --epochs 200 \
-    --log_every 1 \
-    --results_dir ./results/toy_finetune_lora
-```
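-
-The training data comes from the `--json_file` JSONL. As a minimal sketch only (the field names below are an assumption, not confirmed by this README; see [docs/fine-tuning.md](docs/fine-tuning.md) for the actual schema), a tiny subject-driven dataset could be generated like this:
-```python
-import json
-from pathlib import Path
-
-# Hypothetical record schema: an instruction with image placeholders,
-# optional input images, and a target output image; image paths are
-# resolved relative to --image_path. Check docs/fine-tuning.md for
-# the real field names.
-records = [
-    {
-        "instruction": "A photo of <img><|image_1|></img> sitting on a sofa.",
-        "input_images": ["subject/cat_ref.png"],
-        "output_image": "subject/cat_on_sofa.png",
-    },
-]
-
-Path("./toy_data").mkdir(exist_ok=True)
-with open("./toy_data/toy_subject_data.jsonl", "w") as f:
-    for record in records:
-        f.write(json.dumps(record) + "\n")
-```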
-
-Please refer to [docs/fine-tuning.md](docs/fine-tuning.md) for more details (e.g., full fine-tuning).
-
-## License
-This repo is licensed under the [MIT License](LICENSE).
-
-## Citation
-If you find this repository useful, please consider giving a star ⭐ and a citation.
-```
-@article{xiao2024omnigen,
-  title={Omnigen: Unified image generation},
-  author={Xiao, Shitao and Wang, Yueze and Zhou, Junjie and Yuan, Huaying and Xing, Xingrun and Yan, Ruiran and Wang, Shuting and Huang, Tiejun and Liu, Zheng},
-  journal={arXiv preprint arXiv:2409.11340},
-  year={2024}
-}
-```
+---
+title: OmniGen
+emoji: 🖼
+colorFrom: purple
+colorTo: red
+sdk: gradio
+sdk_version: 5.0.1
+app_file: app.py
+pinned: false
+license: mit
+---
+
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference