Architecture of the VAE
In your paper you state that you used the basic architecture of Taming Transformers, but when I tried to continue pretraining from the LDM3D-4C checkpoint, its state_dict keys did not match the VQModel at all. Full log:
Global seed set to 23
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/loggers/test_tube.py:105: LightningDeprecationWarning: The TestTubeLogger is deprecated since v1.5 and will be removed in v1.7. We recommend switching to the pytorch_lightning.loggers.TensorBoardLogger as an alternative.
rank_zero_deprecation(
Multiprocessing is handled by SLURM.
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py:51: LightningDeprecationWarning: Setting Trainer(resume_from_checkpoint=) is deprecated in v1.5 and will be removed in v1.7. Please pass Trainer.fit(ckpt_path=) directly instead.
rank_zero_deprecation(
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py:91: PossibleUserWarning: max_epochs was not set. Setting it to 1000 epochs. To train without an epoch limit, set max_epochs=-1.
rank_zero_warn(
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py:151: LightningDeprecationWarning: Setting Trainer(checkpoint_callback=<pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint object at 0x7f110227fd30>) is deprecated in v1.5 and will be removed in v1.7. Please consider using Trainer(enable_checkpointing=<pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint object at 0x7f110227fd30>).
rank_zero_deprecation(
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:808: LightningDeprecationWarning: trainer.resume_from_checkpoint is deprecated in v1.5 and will be removed in v2.0. Specify the fit checkpoint path with trainer.fit(ckpt_path=) instead.
ckpt_path = ckpt_path or self.resume_from_checkpoint
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/configuration_validator.py:391: LightningDeprecationWarning: The Callback.on_pretrain_routine_start hook has been deprecated in v1.6 and will be removed in v1.8. Please use Callback.on_fit_start instead.
rank_zero_deprecation(
/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/configuration_validator.py:342: LightningDeprecationWarning: Base Callback.on_train_batch_end hook signature has changed in v1.5. The dataloader_idx argument will be removed in v1.7.
rank_zero_deprecation(
Restoring states from the checkpoint path at /mnt/petrelfs/zhangmengchen/impaint/VAE/taming-transformers/logs/ldm3d-4c-vae/checkpoints/last.ckpt
Running on GPUs 0,
Working with z of shape (1, 4, 16, 16) = 1024 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
accumulate_grad_batches = 1
Setting learning rate to 2.25e-05 = 1 (accumulate_grad_batches) * 1 (num_gpus) * 5 (batchsize) * 4.50e-06 (base_lr)
Summoning checkpoint.
Traceback (most recent call last):
File "/mnt/petrelfs/zhangmengchen/impaint/VAE/taming-transformers/main.py", line 565, in <module>
trainer.fit(model, data)
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 771, in fit
self._call_and_handle_interrupt(
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 724, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 812, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1180, in _run
self._restore_modules_and_callbacks(ckpt_path)
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1142, in _restore_modules_and_callbacks
self._checkpoint_connector.restore_model()
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 179, in restore_model
self.trainer.strategy.load_model_state_dict(self._loaded_checkpoint)
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 321, in load_model_state_dict
self.lightning_module.load_state_dict(checkpoint["state_dict"])
File "/mnt/petrelfs/zhangmengchen/anaconda3/envs/pt20/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for VQModel:
Missing key(s) in state_dict: "encoder.down.0.block.0.norm1.weight", "encoder.down.0.block.0.norm1.bias", "encoder.down.0.block.0.conv1.weight", "encoder.down.0.block.0.conv1.bias", "encoder.down.0.block.0.norm2.weight", "encoder.down.0.block.0.norm2.bias", "encoder.down.0.block.0.conv2.weight", "encoder.down.0.block.0.conv2.bias", "encoder.down.0.block.1.norm1.weight", "encoder.down.0.block.1.norm1.bias", "encoder.down.0.block.1.conv1.weight", "encoder.down.0.block.1.conv1.bias", "encoder.down.0.block.1.norm2.weight", "encoder.down.0.block.1.norm2.bias", "encoder.down.0.block.1.conv2.weight", "encoder.down.0.block.1.conv2.bias", "encoder.down.0.downsample.conv.weight", "encoder.down.0.downsample.conv.bias", "encoder.down.1.block.0.norm1.weight", "encoder.down.1.block.0.norm1.bias", "encoder.down.1.block.0.conv1.weight", "encoder.down.1.block.0.conv1.bias", "encoder.down.1.block.0.norm2.weight", "encoder.down.1.block.0.norm2.bias", "encoder.down.1.block.0.conv2.weight", "encoder.down.1.block.0.conv2.bias", "encoder.down.1.block.1.norm1.weight", "encoder.down.1.block.1.norm1.bias", "encoder.down.1.block.1.conv1.weight", "encoder.down.1.block.1.conv1.bias", "encoder.down.1.block.1.norm2.weight", "encoder.down.1.block.1.norm2.bias", "encoder.down.1.block.1.conv2.weight", "encoder.down.1.block.1.conv2.bias", "encoder.down.1.downsample.conv.weight", "encoder.down.1.downsample.conv.bias", "encoder.down.2.block.0.norm1.weight", "encoder.down.2.block.0.norm1.bias", "encoder.down.2.block.0.conv1.weight", "encoder.down.2.block.0.conv1.bias", "encoder.down.2.block.0.norm2.weight", "encoder.down.2.block.0.norm2.bias", "encoder.down.2.block.0.conv2.weight", "encoder.down.2.block.0.conv2.bias", "encoder.down.2.block.0.nin_shortcut.weight", "encoder.down.2.block.0.nin_shortcut.bias", "encoder.down.2.block.1.norm1.weight", "encoder.down.2.block.1.norm1.bias", "encoder.down.2.block.1.conv1.weight", "encoder.down.2.block.1.conv1.bias", "encoder.down.2.block.1.norm2.weight", 
"encoder.down.2.block.1.norm2.bias", "encoder.down.2.block.1.conv2.weight", "encoder.down.2.block.1.conv2.bias", "encoder.down.2.downsample.conv.weight", "encoder.down.2.downsample.conv.bias", "encoder.down.3.block.0.norm1.weight", "encoder.down.3.block.0.norm1.bias", "encoder.down.3.block.0.conv1.weight", "encoder.down.3.block.0.conv1.bias", "encoder.down.3.block.0.norm2.weight", "encoder.down.3.block.0.norm2.bias", "encoder.down.3.block.0.conv2.weight", "encoder.down.3.block.0.conv2.bias", "encoder.down.3.block.1.norm1.weight", "encoder.down.3.block.1.norm1.bias", "encoder.down.3.block.1.conv1.weight", "encoder.down.3.block.1.conv1.bias", "encoder.down.3.block.1.norm2.weight", "encoder.down.3.block.1.norm2.bias", "encoder.down.3.block.1.conv2.weight", "encoder.down.3.block.1.conv2.bias", "encoder.down.3.downsample.conv.weight", "encoder.down.3.downsample.conv.bias", "encoder.down.4.block.0.norm1.weight", "encoder.down.4.block.0.norm1.bias", "encoder.down.4.block.0.conv1.weight", "encoder.down.4.block.0.conv1.bias", "encoder.down.4.block.0.norm2.weight", "encoder.down.4.block.0.norm2.bias", "encoder.down.4.block.0.conv2.weight", "encoder.down.4.block.0.conv2.bias", "encoder.down.4.block.0.nin_shortcut.weight", "encoder.down.4.block.0.nin_shortcut.bias", "encoder.down.4.block.1.norm1.weight", "encoder.down.4.block.1.norm1.bias", "encoder.down.4.block.1.conv1.weight", "encoder.down.4.block.1.conv1.bias", "encoder.down.4.block.1.norm2.weight", "encoder.down.4.block.1.norm2.bias", "encoder.down.4.block.1.conv2.weight", "encoder.down.4.block.1.conv2.bias", "encoder.down.4.attn.0.norm.weight", "encoder.down.4.attn.0.norm.bias", "encoder.down.4.attn.0.q.weight", "encoder.down.4.attn.0.q.bias", "encoder.down.4.attn.0.k.weight", "encoder.down.4.attn.0.k.bias", "encoder.down.4.attn.0.v.weight", "encoder.down.4.attn.0.v.bias", "encoder.down.4.attn.0.proj_out.weight", "encoder.down.4.attn.0.proj_out.bias", "encoder.down.4.attn.1.norm.weight", 
"encoder.down.4.attn.1.norm.bias", "encoder.down.4.attn.1.q.weight", "encoder.down.4.attn.1.q.bias", "encoder.down.4.attn.1.k.weight", "encoder.down.4.attn.1.k.bias", "encoder.down.4.attn.1.v.weight", "encoder.down.4.attn.1.v.bias", "encoder.down.4.attn.1.proj_out.weight", "encoder.down.4.attn.1.proj_out.bias", "encoder.mid.block_1.norm1.weight", "encoder.mid.block_1.norm1.bias", "encoder.mid.block_1.conv1.weight", "encoder.mid.block_1.conv1.bias", "encoder.mid.block_1.norm2.weight", "encoder.mid.block_1.norm2.bias", "encoder.mid.block_1.conv2.weight", "encoder.mid.block_1.conv2.bias", "encoder.mid.attn_1.norm.weight", "encoder.mid.attn_1.norm.bias", "encoder.mid.attn_1.q.weight", "encoder.mid.attn_1.q.bias", "encoder.mid.attn_1.k.weight", "encoder.mid.attn_1.k.bias", "encoder.mid.attn_1.v.weight", "encoder.mid.attn_1.v.bias", "encoder.mid.attn_1.proj_out.weight", "encoder.mid.attn_1.proj_out.bias", "encoder.mid.block_2.norm1.weight", "encoder.mid.block_2.norm1.bias", "encoder.mid.block_2.conv1.weight", "encoder.mid.block_2.conv1.bias", "encoder.mid.block_2.norm2.weight", "encoder.mid.block_2.norm2.bias", "encoder.mid.block_2.conv2.weight", "encoder.mid.block_2.conv2.bias", "encoder.norm_out.weight", "encoder.norm_out.bias", "decoder.mid.block_1.norm1.weight", "decoder.mid.block_1.norm1.bias", "decoder.mid.block_1.conv1.weight", "decoder.mid.block_1.conv1.bias", "decoder.mid.block_1.norm2.weight", "decoder.mid.block_1.norm2.bias", "decoder.mid.block_1.conv2.weight", "decoder.mid.block_1.conv2.bias", "decoder.mid.attn_1.norm.weight", "decoder.mid.attn_1.norm.bias", "decoder.mid.attn_1.q.weight", "decoder.mid.attn_1.q.bias", "decoder.mid.attn_1.k.weight", "decoder.mid.attn_1.k.bias", "decoder.mid.attn_1.v.weight", "decoder.mid.attn_1.v.bias", "decoder.mid.attn_1.proj_out.weight", "decoder.mid.attn_1.proj_out.bias", "decoder.mid.block_2.norm1.weight", "decoder.mid.block_2.norm1.bias", "decoder.mid.block_2.conv1.weight", "decoder.mid.block_2.conv1.bias", 
"decoder.mid.block_2.norm2.weight", "decoder.mid.block_2.norm2.bias", "decoder.mid.block_2.conv2.weight", "decoder.mid.block_2.conv2.bias", "decoder.up.0.block.0.norm1.weight", "decoder.up.0.block.0.norm1.bias", "decoder.up.0.block.0.conv1.weight", "decoder.up.0.block.0.conv1.bias", "decoder.up.0.block.0.norm2.weight", "decoder.up.0.block.0.norm2.bias", "decoder.up.0.block.0.conv2.weight", "decoder.up.0.block.0.conv2.bias", "decoder.up.0.block.1.norm1.weight", "decoder.up.0.block.1.norm1.bias", "decoder.up.0.block.1.conv1.weight", "decoder.up.0.block.1.conv1.bias", "decoder.up.0.block.1.norm2.weight", "decoder.up.0.block.1.norm2.bias", "decoder.up.0.block.1.conv2.weight", "decoder.up.0.block.1.conv2.bias", "decoder.up.0.block.2.norm1.weight", "decoder.up.0.block.2.norm1.bias", "decoder.up.0.block.2.conv1.weight", "decoder.up.0.block.2.conv1.bias", "decoder.up.0.block.2.norm2.weight", "decoder.up.0.block.2.norm2.bias", "decoder.up.0.block.2.conv2.weight", "decoder.up.0.block.2.conv2.bias", "decoder.up.1.block.0.norm1.weight", "decoder.up.1.block.0.norm1.bias", "decoder.up.1.block.0.conv1.weight", "decoder.up.1.block.0.conv1.bias", "decoder.up.1.block.0.norm2.weight", "decoder.up.1.block.0.norm2.bias", "decoder.up.1.block.0.conv2.weight", "decoder.up.1.block.0.conv2.bias", "decoder.up.1.block.0.nin_shortcut.weight", "decoder.up.1.block.0.nin_shortcut.bias", "decoder.up.1.block.1.norm1.weight", "decoder.up.1.block.1.norm1.bias", "decoder.up.1.block.1.conv1.weight", "decoder.up.1.block.1.conv1.bias", "decoder.up.1.block.1.norm2.weight", "decoder.up.1.block.1.norm2.bias", "decoder.up.1.block.1.conv2.weight", "decoder.up.1.block.1.conv2.bias", "decoder.up.1.block.2.norm1.weight", "decoder.up.1.block.2.norm1.bias", "decoder.up.1.block.2.conv1.weight", "decoder.up.1.block.2.conv1.bias", "decoder.up.1.block.2.norm2.weight", "decoder.up.1.block.2.norm2.bias", "decoder.up.1.block.2.conv2.weight", "decoder.up.1.block.2.conv2.bias", "decoder.up.1.upsample.conv.weight", 
"decoder.up.1.upsample.conv.bias", "decoder.up.2.block.0.norm1.weight", "decoder.up.2.block.0.norm1.bias", "decoder.up.2.block.0.conv1.weight", "decoder.up.2.block.0.conv1.bias", "decoder.up.2.block.0.norm2.weight", "decoder.up.2.block.0.norm2.bias", "decoder.up.2.block.0.conv2.weight", "decoder.up.2.block.0.conv2.bias", "decoder.up.2.block.1.norm1.weight", "decoder.up.2.block.1.norm1.bias", "decoder.up.2.block.1.conv1.weight", "decoder.up.2.block.1.conv1.bias", "decoder.up.2.block.1.norm2.weight", "decoder.up.2.block.1.norm2.bias", "decoder.up.2.block.1.conv2.weight", "decoder.up.2.block.1.conv2.bias", "decoder.up.2.block.2.norm1.weight", "decoder.up.2.block.2.norm1.bias", "decoder.up.2.block.2.conv1.weight", "decoder.up.2.block.2.conv1.bias", "decoder.up.2.block.2.norm2.weight", "decoder.up.2.block.2.norm2.bias", "decoder.up.2.block.2.conv2.weight", "decoder.up.2.block.2.conv2.bias", "decoder.up.2.upsample.conv.weight", "decoder.up.2.upsample.conv.bias", "decoder.up.3.block.0.norm1.weight", "decoder.up.3.block.0.norm1.bias", "decoder.up.3.block.0.conv1.weight", "decoder.up.3.block.0.conv1.bias", "decoder.up.3.block.0.norm2.weight", "decoder.up.3.block.0.norm2.bias", "decoder.up.3.block.0.conv2.weight", "decoder.up.3.block.0.conv2.bias", "decoder.up.3.block.0.nin_shortcut.weight", "decoder.up.3.block.0.nin_shortcut.bias", "decoder.up.3.block.1.norm1.weight", "decoder.up.3.block.1.norm1.bias", "decoder.up.3.block.1.conv1.weight", "decoder.up.3.block.1.conv1.bias", "decoder.up.3.block.1.norm2.weight", "decoder.up.3.block.1.norm2.bias", "decoder.up.3.block.1.conv2.weight", "decoder.up.3.block.1.conv2.bias", "decoder.up.3.block.2.norm1.weight", "decoder.up.3.block.2.norm1.bias", "decoder.up.3.block.2.conv1.weight", "decoder.up.3.block.2.conv1.bias", "decoder.up.3.block.2.norm2.weight", "decoder.up.3.block.2.norm2.bias", "decoder.up.3.block.2.conv2.weight", "decoder.up.3.block.2.conv2.bias", "decoder.up.3.upsample.conv.weight", "decoder.up.3.upsample.conv.bias", 
"decoder.up.4.block.0.norm1.weight", "decoder.up.4.block.0.norm1.bias", "decoder.up.4.block.0.conv1.weight", "decoder.up.4.block.0.conv1.bias", "decoder.up.4.block.0.norm2.weight", "decoder.up.4.block.0.norm2.bias", "decoder.up.4.block.0.conv2.weight", "decoder.up.4.block.0.conv2.bias", "decoder.up.4.block.1.norm1.weight", "decoder.up.4.block.1.norm1.bias", "decoder.up.4.block.1.conv1.weight", "decoder.up.4.block.1.conv1.bias", "decoder.up.4.block.1.norm2.weight", "decoder.up.4.block.1.norm2.bias", "decoder.up.4.block.1.conv2.weight", "decoder.up.4.block.1.conv2.bias", "decoder.up.4.block.2.norm1.weight", "decoder.up.4.block.2.norm1.bias", "decoder.up.4.block.2.conv1.weight", "decoder.up.4.block.2.conv1.bias", "decoder.up.4.block.2.norm2.weight", "decoder.up.4.block.2.norm2.bias", "decoder.up.4.block.2.conv2.weight", "decoder.up.4.block.2.conv2.bias", "decoder.up.4.attn.0.norm.weight", "decoder.up.4.attn.0.norm.bias", "decoder.up.4.attn.0.q.weight", "decoder.up.4.attn.0.q.bias", "decoder.up.4.attn.0.k.weight", "decoder.up.4.attn.0.k.bias", "decoder.up.4.attn.0.v.weight", "decoder.up.4.attn.0.v.bias", "decoder.up.4.attn.0.proj_out.weight", "decoder.up.4.attn.0.proj_out.bias", "decoder.up.4.attn.1.norm.weight", "decoder.up.4.attn.1.norm.bias", "decoder.up.4.attn.1.q.weight", "decoder.up.4.attn.1.q.bias", "decoder.up.4.attn.1.k.weight", "decoder.up.4.attn.1.k.bias", "decoder.up.4.attn.1.v.weight", "decoder.up.4.attn.1.v.bias", "decoder.up.4.attn.1.proj_out.weight", "decoder.up.4.attn.1.proj_out.bias", "decoder.up.4.attn.2.norm.weight", "decoder.up.4.attn.2.norm.bias", "decoder.up.4.attn.2.q.weight", "decoder.up.4.attn.2.q.bias", "decoder.up.4.attn.2.k.weight", "decoder.up.4.attn.2.k.bias", "decoder.up.4.attn.2.v.weight", "decoder.up.4.attn.2.v.bias", "decoder.up.4.attn.2.proj_out.weight", "decoder.up.4.attn.2.proj_out.bias", "decoder.up.4.upsample.conv.weight", "decoder.up.4.upsample.conv.bias", "decoder.norm_out.weight", "decoder.norm_out.bias", 
"loss.perceptual_loss.scaling_layer.shift", "loss.perceptual_loss.scaling_layer.scale", "loss.perceptual_loss.net.slice1.0.weight", "loss.perceptual_loss.net.slice1.0.bias", "loss.perceptual_loss.net.slice1.2.weight", "loss.perceptual_loss.net.slice1.2.bias", "loss.perceptual_loss.net.slice2.5.weight", "loss.perceptual_loss.net.slice2.5.bias", "loss.perceptual_loss.net.slice2.7.weight", "loss.perceptual_loss.net.slice2.7.bias", "loss.perceptual_loss.net.slice3.10.weight", "loss.perceptual_loss.net.slice3.10.bias", "loss.perceptual_loss.net.slice3.12.weight", "loss.perceptual_loss.net.slice3.12.bias", "loss.perceptual_loss.net.slice3.14.weight", "loss.perceptual_loss.net.slice3.14.bias", "loss.perceptual_loss.net.slice4.17.weight", "loss.perceptual_loss.net.slice4.17.bias", "loss.perceptual_loss.net.slice4.19.weight", "loss.perceptual_loss.net.slice4.19.bias", "loss.perceptual_loss.net.slice4.21.weight", "loss.perceptual_loss.net.slice4.21.bias", "loss.perceptual_loss.net.slice5.24.weight", "loss.perceptual_loss.net.slice5.24.bias", "loss.perceptual_loss.net.slice5.26.weight", "loss.perceptual_loss.net.slice5.26.bias", "loss.perceptual_loss.net.slice5.28.weight", "loss.perceptual_loss.net.slice5.28.bias", "loss.perceptual_loss.lin0.model.1.weight", "loss.perceptual_loss.lin1.model.1.weight", "loss.perceptual_loss.lin2.model.1.weight", "loss.perceptual_loss.lin3.model.1.weight", "loss.perceptual_loss.lin4.model.1.weight", "loss.discriminator.main.0.weight", "loss.discriminator.main.0.bias", "loss.discriminator.main.2.weight", "loss.discriminator.main.3.weight", "loss.discriminator.main.3.bias", "loss.discriminator.main.3.running_mean", "loss.discriminator.main.3.running_var", "loss.discriminator.main.5.weight", "loss.discriminator.main.6.weight", "loss.discriminator.main.6.bias", "loss.discriminator.main.6.running_mean", "loss.discriminator.main.6.running_var", "loss.discriminator.main.8.weight", "loss.discriminator.main.9.weight", "loss.discriminator.main.9.bias", 
"loss.discriminator.main.9.running_mean", "loss.discriminator.main.9.running_var", "loss.discriminator.main.11.weight", "loss.discriminator.main.11.bias", "quantize.embedding.weight".
Unexpected key(s) in state_dict: "state_dict", "encoder.conv_norm_out.bias", "encoder.conv_norm_out.weight", "encoder.down_blocks.0.downsamplers.0.conv.bias", "encoder.down_blocks.0.downsamplers.0.conv.weight", "encoder.down_blocks.0.resnets.0.conv1.bias", "encoder.down_blocks.0.resnets.0.conv1.weight", "encoder.down_blocks.0.resnets.0.conv2.bias", "encoder.down_blocks.0.resnets.0.conv2.weight", "encoder.down_blocks.0.resnets.0.norm1.bias", "encoder.down_blocks.0.resnets.0.norm1.weight", "encoder.down_blocks.0.resnets.0.norm2.bias", "encoder.down_blocks.0.resnets.0.norm2.weight", "encoder.down_blocks.0.resnets.1.conv1.bias", "encoder.down_blocks.0.resnets.1.conv1.weight", "encoder.down_blocks.0.resnets.1.conv2.bias", "encoder.down_blocks.0.resnets.1.conv2.weight", "encoder.down_blocks.0.resnets.1.norm1.bias", "encoder.down_blocks.0.resnets.1.norm1.weight", "encoder.down_blocks.0.resnets.1.norm2.bias", "encoder.down_blocks.0.resnets.1.norm2.weight", "encoder.down_blocks.1.downsamplers.0.conv.bias", "encoder.down_blocks.1.downsamplers.0.conv.weight", "encoder.down_blocks.1.resnets.0.conv1.bias", "encoder.down_blocks.1.resnets.0.conv1.weight", "encoder.down_blocks.1.resnets.0.conv2.bias", "encoder.down_blocks.1.resnets.0.conv2.weight", "encoder.down_blocks.1.resnets.0.conv_shortcut.bias", "encoder.down_blocks.1.resnets.0.conv_shortcut.weight", "encoder.down_blocks.1.resnets.0.norm1.bias", "encoder.down_blocks.1.resnets.0.norm1.weight", "encoder.down_blocks.1.resnets.0.norm2.bias", "encoder.down_blocks.1.resnets.0.norm2.weight", "encoder.down_blocks.1.resnets.1.conv1.bias", "encoder.down_blocks.1.resnets.1.conv1.weight", "encoder.down_blocks.1.resnets.1.conv2.bias", "encoder.down_blocks.1.resnets.1.conv2.weight", "encoder.down_blocks.1.resnets.1.norm1.bias", "encoder.down_blocks.1.resnets.1.norm1.weight", "encoder.down_blocks.1.resnets.1.norm2.bias", "encoder.down_blocks.1.resnets.1.norm2.weight", "encoder.down_blocks.2.downsamplers.0.conv.bias", 
"encoder.down_blocks.2.downsamplers.0.conv.weight", "encoder.down_blocks.2.resnets.0.conv1.bias", "encoder.down_blocks.2.resnets.0.conv1.weight", "encoder.down_blocks.2.resnets.0.conv2.bias", "encoder.down_blocks.2.resnets.0.conv2.weight", "encoder.down_blocks.2.resnets.0.conv_shortcut.bias", "encoder.down_blocks.2.resnets.0.conv_shortcut.weight", "encoder.down_blocks.2.resnets.0.norm1.bias", "encoder.down_blocks.2.resnets.0.norm1.weight", "encoder.down_blocks.2.resnets.0.norm2.bias", "encoder.down_blocks.2.resnets.0.norm2.weight", "encoder.down_blocks.2.resnets.1.conv1.bias", "encoder.down_blocks.2.resnets.1.conv1.weight", "encoder.down_blocks.2.resnets.1.conv2.bias", "encoder.down_blocks.2.resnets.1.conv2.weight", "encoder.down_blocks.2.resnets.1.norm1.bias", "encoder.down_blocks.2.resnets.1.norm1.weight", "encoder.down_blocks.2.resnets.1.norm2.bias", "encoder.down_blocks.2.resnets.1.norm2.weight", "encoder.down_blocks.3.resnets.0.conv1.bias", "encoder.down_blocks.3.resnets.0.conv1.weight", "encoder.down_blocks.3.resnets.0.conv2.bias", "encoder.down_blocks.3.resnets.0.conv2.weight", "encoder.down_blocks.3.resnets.0.norm1.bias", "encoder.down_blocks.3.resnets.0.norm1.weight", "encoder.down_blocks.3.resnets.0.norm2.bias", "encoder.down_blocks.3.resnets.0.norm2.weight", "encoder.down_blocks.3.resnets.1.conv1.bias", "encoder.down_blocks.3.resnets.1.conv1.weight", "encoder.down_blocks.3.resnets.1.conv2.bias", "encoder.down_blocks.3.resnets.1.conv2.weight", "encoder.down_blocks.3.resnets.1.norm1.bias", "encoder.down_blocks.3.resnets.1.norm1.weight", "encoder.down_blocks.3.resnets.1.norm2.bias", "encoder.down_blocks.3.resnets.1.norm2.weight", "encoder.mid_block.attentions.0.group_norm.bias", "encoder.mid_block.attentions.0.group_norm.weight", "encoder.mid_block.attentions.0.key.bias", "encoder.mid_block.attentions.0.key.weight", "encoder.mid_block.attentions.0.proj_attn.bias", "encoder.mid_block.attentions.0.proj_attn.weight", 
"encoder.mid_block.attentions.0.query.bias", "encoder.mid_block.attentions.0.query.weight", "encoder.mid_block.attentions.0.value.bias", "encoder.mid_block.attentions.0.value.weight", "encoder.mid_block.resnets.0.conv1.bias", "encoder.mid_block.resnets.0.conv1.weight", "encoder.mid_block.resnets.0.conv2.bias", "encoder.mid_block.resnets.0.conv2.weight", "encoder.mid_block.resnets.0.norm1.bias", "encoder.mid_block.resnets.0.norm1.weight", "encoder.mid_block.resnets.0.norm2.bias", "encoder.mid_block.resnets.0.norm2.weight", "encoder.mid_block.resnets.1.conv1.bias", "encoder.mid_block.resnets.1.conv1.weight", "encoder.mid_block.resnets.1.conv2.bias", "encoder.mid_block.resnets.1.conv2.weight", "encoder.mid_block.resnets.1.norm1.bias", "encoder.mid_block.resnets.1.norm1.weight", "encoder.mid_block.resnets.1.norm2.bias", "encoder.mid_block.resnets.1.norm2.weight", "decoder.conv_norm_out.bias", "decoder.conv_norm_out.weight", "decoder.mid_block.attentions.0.group_norm.bias", "decoder.mid_block.attentions.0.group_norm.weight", "decoder.mid_block.attentions.0.key.bias", "decoder.mid_block.attentions.0.key.weight", "decoder.mid_block.attentions.0.proj_attn.bias", "decoder.mid_block.attentions.0.proj_attn.weight", "decoder.mid_block.attentions.0.query.bias", "decoder.mid_block.attentions.0.query.weight", "decoder.mid_block.attentions.0.value.bias", "decoder.mid_block.attentions.0.value.weight", "decoder.mid_block.resnets.0.conv1.bias", "decoder.mid_block.resnets.0.conv1.weight", "decoder.mid_block.resnets.0.conv2.bias", "decoder.mid_block.resnets.0.conv2.weight", "decoder.mid_block.resnets.0.norm1.bias", "decoder.mid_block.resnets.0.norm1.weight", "decoder.mid_block.resnets.0.norm2.bias", "decoder.mid_block.resnets.0.norm2.weight", "decoder.mid_block.resnets.1.conv1.bias", "decoder.mid_block.resnets.1.conv1.weight", "decoder.mid_block.resnets.1.conv2.bias", "decoder.mid_block.resnets.1.conv2.weight", "decoder.mid_block.resnets.1.norm1.bias", 
"decoder.mid_block.resnets.1.norm1.weight", "decoder.mid_block.resnets.1.norm2.bias", "decoder.mid_block.resnets.1.norm2.weight", "decoder.up_blocks.0.resnets.0.conv1.bias", "decoder.up_blocks.0.resnets.0.conv1.weight", "decoder.up_blocks.0.resnets.0.conv2.bias", "decoder.up_blocks.0.resnets.0.conv2.weight", "decoder.up_blocks.0.resnets.0.norm1.bias", "decoder.up_blocks.0.resnets.0.norm1.weight", "decoder.up_blocks.0.resnets.0.norm2.bias", "decoder.up_blocks.0.resnets.0.norm2.weight", "decoder.up_blocks.0.resnets.1.conv1.bias", "decoder.up_blocks.0.resnets.1.conv1.weight", "decoder.up_blocks.0.resnets.1.conv2.bias", "decoder.up_blocks.0.resnets.1.conv2.weight", "decoder.up_blocks.0.resnets.1.norm1.bias", "decoder.up_blocks.0.resnets.1.norm1.weight", "decoder.up_blocks.0.resnets.1.norm2.bias", "decoder.up_blocks.0.resnets.1.norm2.weight", "decoder.up_blocks.0.resnets.2.conv1.bias", "decoder.up_blocks.0.resnets.2.conv1.weight", "decoder.up_blocks.0.resnets.2.conv2.bias", "decoder.up_blocks.0.resnets.2.conv2.weight", "decoder.up_blocks.0.resnets.2.norm1.bias", "decoder.up_blocks.0.resnets.2.norm1.weight", "decoder.up_blocks.0.resnets.2.norm2.bias", "decoder.up_blocks.0.resnets.2.norm2.weight", "decoder.up_blocks.0.upsamplers.0.conv.bias", "decoder.up_blocks.0.upsamplers.0.conv.weight", "decoder.up_blocks.1.resnets.0.conv1.bias", "decoder.up_blocks.1.resnets.0.conv1.weight", "decoder.up_blocks.1.resnets.0.conv2.bias", "decoder.up_blocks.1.resnets.0.conv2.weight", "decoder.up_blocks.1.resnets.0.norm1.bias", "decoder.up_blocks.1.resnets.0.norm1.weight", "decoder.up_blocks.1.resnets.0.norm2.bias", "decoder.up_blocks.1.resnets.0.norm2.weight", "decoder.up_blocks.1.resnets.1.conv1.bias", "decoder.up_blocks.1.resnets.1.conv1.weight", "decoder.up_blocks.1.resnets.1.conv2.bias", "decoder.up_blocks.1.resnets.1.conv2.weight", "decoder.up_blocks.1.resnets.1.norm1.bias", "decoder.up_blocks.1.resnets.1.norm1.weight", "decoder.up_blocks.1.resnets.1.norm2.bias", 
"decoder.up_blocks.1.resnets.1.norm2.weight", "decoder.up_blocks.1.resnets.2.conv1.bias", "decoder.up_blocks.1.resnets.2.conv1.weight", "decoder.up_blocks.1.resnets.2.conv2.bias", "decoder.up_blocks.1.resnets.2.conv2.weight", "decoder.up_blocks.1.resnets.2.norm1.bias", "decoder.up_blocks.1.resnets.2.norm1.weight", "decoder.up_blocks.1.resnets.2.norm2.bias", "decoder.up_blocks.1.resnets.2.norm2.weight", "decoder.up_blocks.1.upsamplers.0.conv.bias", "decoder.up_blocks.1.upsamplers.0.conv.weight", "decoder.up_blocks.2.resnets.0.conv1.bias", "decoder.up_blocks.2.resnets.0.conv1.weight", "decoder.up_blocks.2.resnets.0.conv2.bias", "decoder.up_blocks.2.resnets.0.conv2.weight", "decoder.up_blocks.2.resnets.0.conv_shortcut.bias", "decoder.up_blocks.2.resnets.0.conv_shortcut.weight", "decoder.up_blocks.2.resnets.0.norm1.bias", "decoder.up_blocks.2.resnets.0.norm1.weight", "decoder.up_blocks.2.resnets.0.norm2.bias", "decoder.up_blocks.2.resnets.0.norm2.weight", "decoder.up_blocks.2.resnets.1.conv1.bias", "decoder.up_blocks.2.resnets.1.conv1.weight", "decoder.up_blocks.2.resnets.1.conv2.bias", "decoder.up_blocks.2.resnets.1.conv2.weight", "decoder.up_blocks.2.resnets.1.norm1.bias", "decoder.up_blocks.2.resnets.1.norm1.weight", "decoder.up_blocks.2.resnets.1.norm2.bias", "decoder.up_blocks.2.resnets.1.norm2.weight", "decoder.up_blocks.2.resnets.2.conv1.bias", "decoder.up_blocks.2.resnets.2.conv1.weight", "decoder.up_blocks.2.resnets.2.conv2.bias", "decoder.up_blocks.2.resnets.2.conv2.weight", "decoder.up_blocks.2.resnets.2.norm1.bias", "decoder.up_blocks.2.resnets.2.norm1.weight", "decoder.up_blocks.2.resnets.2.norm2.bias", "decoder.up_blocks.2.resnets.2.norm2.weight", "decoder.up_blocks.2.upsamplers.0.conv.bias", "decoder.up_blocks.2.upsamplers.0.conv.weight", "decoder.up_blocks.3.resnets.0.conv1.bias", "decoder.up_blocks.3.resnets.0.conv1.weight", "decoder.up_blocks.3.resnets.0.conv2.bias", "decoder.up_blocks.3.resnets.0.conv2.weight", 
"decoder.up_blocks.3.resnets.0.conv_shortcut.bias", "decoder.up_blocks.3.resnets.0.conv_shortcut.weight", "decoder.up_blocks.3.resnets.0.norm1.bias", "decoder.up_blocks.3.resnets.0.norm1.weight", "decoder.up_blocks.3.resnets.0.norm2.bias", "decoder.up_blocks.3.resnets.0.norm2.weight", "decoder.up_blocks.3.resnets.1.conv1.bias", "decoder.up_blocks.3.resnets.1.conv1.weight", "decoder.up_blocks.3.resnets.1.conv2.bias", "decoder.up_blocks.3.resnets.1.conv2.weight", "decoder.up_blocks.3.resnets.1.norm1.bias", "decoder.up_blocks.3.resnets.1.norm1.weight", "decoder.up_blocks.3.resnets.1.norm2.bias", "decoder.up_blocks.3.resnets.1.norm2.weight", "decoder.up_blocks.3.resnets.2.conv1.bias", "decoder.up_blocks.3.resnets.2.conv1.weight", "decoder.up_blocks.3.resnets.2.conv2.bias", "decoder.up_blocks.3.resnets.2.conv2.weight", "decoder.up_blocks.3.resnets.2.norm1.bias", "decoder.up_blocks.3.resnets.2.norm1.weight", "decoder.up_blocks.3.resnets.2.norm2.bias", "decoder.up_blocks.3.resnets.2.norm2.weight".
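Judging from the key names in the error, the mismatch looks like a naming-scheme difference rather than a corrupted file: the VQModel in this repo expects taming-transformers names (encoder.down.i.block.j...), while the checkpoint contains diffusers-style AutoencoderKL names (encoder.down_blocks.i.resnets.j...). A minimal sketch to check which scheme a checkpoint uses; the helper name is mine, and the two sample keys are taken from the traceback above:

```python
def classify_keys(keys):
    """Split checkpoint keys into (taming_style, diffusers_style) lists.

    taming-transformers' VQModel names blocks "encoder.down.<i>.block.<j>...",
    while diffusers' AutoencoderKL uses "encoder.down_blocks.<i>.resnets.<j>...".
    """
    taming = [k for k in keys
              if k.startswith(("encoder.down.", "decoder.up."))]
    diffusers = [k for k in keys
                 if k.startswith(("encoder.down_blocks.", "decoder.up_blocks."))]
    return taming, diffusers


# Two keys from the error message: one the VQModel expects (missing),
# one the checkpoint actually contains (unexpected).
keys = [
    "encoder.down.0.block.0.norm1.weight",           # taming-transformers naming
    "encoder.down_blocks.0.resnets.0.norm1.weight",  # diffusers naming
]
taming, diffusers = classify_keys(keys)
print(taming)     # → ['encoder.down.0.block.0.norm1.weight']
print(diffusers)  # → ['encoder.down_blocks.0.resnets.0.norm1.weight']
```

In practice you would run this over `torch.load(path, map_location="cpu")["state_dict"].keys()`; if every key lands in the diffusers bucket, the checkpoint was saved from a diffusers AutoencoderKL and cannot be loaded into this repo's VQModel without remapping the key names first.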