Expected speed on Colab
What is the expected processing time in Google Colab? I am trying this in Colab and it shows an estimated time of half an hour, which has me a bit confused.
The expected time depends on the runtime you use in Colab. On an A100 GPU it takes just a few seconds; however, 30 minutes is too long even for CPU inference. Could you share more details about your setup (GPU/CPU type, number of timesteps, batch size)?
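For reference, here is a minimal sketch for checking which runtime Colab actually assigned you and timing a generation. The checkpoint id and prompt are placeholders, not the exact model from this thread; substitute whatever you are benchmarking:

```python
import time
import torch
from diffusers import StableDiffusionPipeline

# Which runtime did Colab assign? Timing differs by orders of magnitude.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("device:", device, torch.cuda.get_device_name(0) if device == "cuda" else "")

# Placeholder checkpoint -- replace with the model you are actually running.
# fp16 roughly halves memory and speeds up GPU inference; CPU needs fp32.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

start = time.time()
image = pipe("a photo of an apple", num_inference_steps=50).images[0]
print(f"{time.time() - start:.1f}s for 50 steps on {device}")
```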
Running on CPU in Colab, it took me about 6 minutes to generate the apple. Full log:
Downloading (…)ain/model_index.json: 100% 560/560 [00:00<00:00, 33.4kB/s]
safety_checker/model.safetensors not found
Fetching 15 files: 100% 15/15 [02:00<00:00, 8.69s/it]
Downloading (…)rocessor_config.json: 100% 518/518 [00:00<00:00, 17.9kB/s]
Downloading (…)_checker/config.json: 100% 4.79k/4.79k [00:00<00:00, 159kB/s]
Downloading (…)cial_tokens_map.json: 100% 472/472 [00:00<00:00, 27.0kB/s]
Downloading (…)cheduler_config.json: 100% 459/459 [00:00<00:00, 10.4kB/s]
Downloading (…)tokenizer/vocab.json: 100% 1.06M/1.06M [00:00<00:00, 1.36MB/s]
Downloading (…)okenizer_config.json: 100% 836/836 [00:00<00:00, 12.3kB/s]
Downloading (…)_encoder/config.json: 100% 560/560 [00:00<00:00, 9.16kB/s]
Downloading (…)tokenizer/merges.txt: 100% 525k/525k [00:00<00:00, 894kB/s]
Downloading (…)140/unet/config.json: 100% 748/748 [00:00<00:00, 54.7kB/s]
Downloading (…)8140/vae/config.json: 100% 581/581 [00:00<00:00, 29.4kB/s]
Downloading pytorch_model.bin: 100% 1.22G/1.22G [01:02<00:00, 20.7MB/s]
Downloading pytorch_model.bin: 100% 492M/492M [00:25<00:00, 21.4MB/s]
Downloading (…)on_pytorch_model.bin: 100% 335M/335M [00:17<00:00, 20.2MB/s]
Downloading (…)on_pytorch_model.bin: 100% 2.32G/2.32G [01:56<00:00, 21.0MB/s]
Loading pipeline components...: 100% 7/7 [00:05<00:00, 1.50it/s]
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden.
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden.
The config attributes {'predict_epsilon': True} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:128: FutureWarning: The configuration file of this scheduler: DPMSolverMultistepScheduler {
"_class_name": "DPMSolverMultistepScheduler",
"_diffusers_version": "0.20.2",
"algorithm_type": "dpmsolver++",
"beta_end": 0.012,
"beta_schedule": "scaled_linear",
"beta_start": 0.00085,
"dynamic_thresholding_ratio": 0.995,
"lambda_min_clipped": -Infinity,
"lower_order_final": true,
"num_train_timesteps": 1000,
"predict_epsilon": true,
"prediction_type": "epsilon",
"sample_max_value": 1.0,
"solver_order": 2,
"solver_type": "midpoint",
"steps_offset": 0,
"thresholding": false,
"timestep_spacing": "linspace",
"trained_betas": null,
"use_karras_sigmas": false,
"variance_type": null
}
is outdated. `steps_offset` should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving `steps_offset` might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
100% 50/50 [06:46<00:00, 7.46s/it]
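That final bar is the denoising loop itself: 50 steps at roughly 7.5 s per step, 6 min 46 s in total on CPU, which lines up with the ~6 minutes reported above. The downloads before it (about 4.4 GB of weights) are a one-time cost. On CPU the easiest win is fewer steps, since DPM-Solver++ converges well before 50 steps. A minimal sketch, assuming `pipe` is the already-loaded pipeline:

```python
# Assuming `pipe` is the StableDiffusionPipeline loaded earlier.
# With DPM-Solver++, 20-25 steps is usually visually comparable to 50
# and cuts the sampling time by more than half.
image = pipe("a photo of an apple", num_inference_steps=20).images[0]
```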
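The `predict_epsilon` and `steps_offset` warnings in the log are harmless for this run, but you can silence them by rebuilding the scheduler from the pipeline's own config. A sketch, again assuming `pipe` is already loaded; the two overrides come straight from the warning text:

```python
from diffusers import DPMSolverMultistepScheduler

# Copy the existing scheduler config, drop the legacy `predict_epsilon`
# flag (superseded by `prediction_type`), and set `steps_offset` to 1
# as the deprecation warning asks.
config = dict(pipe.scheduler.config)
config.pop("predict_epsilon", None)
config["steps_offset"] = 1
pipe.scheduler = DPMSolverMultistepScheduler.from_config(config)
```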