---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- sd3
- sd3-diffusers
- template:sd-lora
instance_prompt: a photo of ohwx dog
widget:
- text: A photo of ohwx dog sitting on the tool in the studio with gray background
  output:
    url: image_0.png
- text: A photo of ohwx dog sitting on the tool in the studio with gray background
  output:
    url: image_1.png
- text: A photo of ohwx dog sitting on the tool in the studio with gray background
  output:
    url: image_2.png
- text: A photo of ohwx dog sitting on the tool in the studio with gray background
  output:
    url: image_3.png
---

# SD3 DreamBooth - ainjarts/smalldog-sd3

## Model description

These are ainjarts/smalldog-sd3 DreamBooth weights for stabilityai/stable-diffusion-3-medium-diffusers.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).

Was the text encoder fine-tuned? No.

## Trigger words

You should use `a photo of ohwx dog` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'ainjarts/smalldog-sd3', torch_dtype=torch.float16
).to('cuda')
image = pipeline('A photo of ohwx dog sitting on the tool in the studio with gray background').images[0]
```

## License

Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
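As noted in the trigger-words section, every prompt should contain the phrase `a photo of ohwx dog` for the DreamBooth subject to appear. A minimal sketch of composing prompts around the trigger phrase; the scene suffixes are illustrative examples, not drawn from the training data:

```python
# Trigger phrase from this model card; prompts must include it
# so that the fine-tuned subject is generated.
TRIGGER = "a photo of ohwx dog"

# Illustrative scene descriptions (hypothetical, not from the training set).
scenes = ["running on a beach", "wearing a red scarf", "in the snow"]

# Prepend the trigger phrase to each scene to build full prompts.
prompts = [f"{TRIGGER} {scene}" for scene in scenes]
```

Each resulting string can then be passed to the pipeline shown above in place of the example prompt.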