Spaces: Running on Zero
Update app.py
app.py CHANGED
@@ -250,17 +250,17 @@ For example, use an image of a woman to generate a new image:
 prompt = "A woman holds a bouquet of flowers and faces the camera. The woman is \<img\>\<|image_1|\>\</img\>."
 
 Tips:
-- For image editing task and controlnet task, we recommend
+- For image editing and ControlNet tasks, we recommend setting the output image's height and width to match the input image. For example, to edit a 512x512 image, set the output height and width to 512x512. You can also set `use_input_image_size_as_output` to do this automatically.
 - If you run out of memory or inference takes too long, you can set `offload_model=True` or refer to [./docs/inference.md#requiremented-resources](https://github.com/VectorSpaceLab/OmniGen/blob/main/docs/inference.md#requiremented-resources) to select an appropriate setting.
 - If inference takes too long when inputting multiple images, try reducing `max_input_image_size`. For more details, refer to [./docs/inference.md#requiremented-resources](https://github.com/VectorSpaceLab/OmniGen/blob/main/docs/inference.md#requiremented-resources).
 - Oversaturated: If the image appears oversaturated, reduce the `guidance_scale`.
 - Low-quality: More detailed prompts lead to better results.
-- Animate Style: If the
+- Anime style: If the generated images come out in an anime style, try adding `photo` to the prompt.
-- Edit generated image. If you generate
+- Editing a generated image: If you generate an image with OmniGen and then want to edit it, do not reuse the same seed. For example, if you generated the image with seed=0, edit it with seed=1.
 - For image editing tasks, we recommend placing the image before the editing instruction. For example, use `<img><|image_1|></img> remove suit` rather than `remove suit <img><|image_1|></img>`.
 
 
-HF Spaces often encounter errors due to quota limitations, so recommend to run it locally
+**HF Spaces often encounter errors due to quota limitations, so we recommend running it locally.**
 
 """
 
@@ -268,7 +268,7 @@ article = """
 ---
 **Citation**
 <br>
-If you find this repository useful, please consider giving a star ⭐ and citation
+If you find this repository useful, please consider giving a star ⭐ and a citation
 ```
 @article{xiao2024omnigen,
 title={Omnigen: Unified image generation},
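The tips in this diff reference several pipeline arguments (`offload_model`, `max_input_image_size`, `use_input_image_size_as_output`, `guidance_scale`, the seed, and the `<img><|image_1|></img>` placeholder). Below is a minimal sketch of how they might fit together in an editing call. The `OmniGenPipeline` class, the `Shitao/OmniGen-v1` checkpoint id, and the exact call signature are assumed from the OmniGen repository rather than taken from this diff, and the input file path is hypothetical; check the names against the version you have installed.

```python
# Minimal sketch, assuming the OmniGen pipeline API from
# https://github.com/VectorSpaceLab/OmniGen (not part of this diff).
from OmniGen import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")

# Editing a 512x512 image: the image placeholder comes before the instruction,
# and the output size matches the input size, as the tips recommend.
images = pipe(
    prompt="<img><|image_1|></img> remove suit",
    input_images=["./woman_512x512.png"],    # hypothetical local input file
    height=512,
    width=512,
    # use_input_image_size_as_output=True,   # alternative to setting height/width by hand
    guidance_scale=2.5,           # lower this if the result looks oversaturated
    max_input_image_size=1024,    # reduce if multi-image inference is too slow
    offload_model=True,           # lower GPU memory use at some speed cost
    seed=1,                       # not the seed that generated the original image
)
images[0].save("edited.png")
```

If the input image was itself generated by OmniGen with seed=0, passing seed=1 here follows the tip about not reusing the generation seed when editing.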