RichardForests's Collections
Diffusion models
FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline
Paper • 2311.13073 • Published • 56
MetaDreamer: Efficient Text-to-3D Creation With Disentangling Geometry and Texture
Paper • 2311.10123 • Published • 15
GPT4Motion: Scripting Physical Motions in Text-to-Video Generation via Blender-Oriented GPT Planning
Paper • 2311.12631 • Published • 13
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models
Paper • 2312.00845 • Published • 36
DiffiT: Diffusion Vision Transformers for Image Generation
Paper • 2312.02139 • Published • 13
AnimateZero: Video Diffusion Models are Zero-Shot Image Animators
Paper • 2312.03793 • Published • 17
HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image
Paper • 2312.04543 • Published • 21
Self-conditioned Image Generation via Generating Representations
Paper • 2312.03701 • Published • 7
Schrodinger Bridges Beat Diffusion Models on Text-to-Speech Synthesis
Paper • 2312.03491 • Published • 34
Analyzing and Improving the Training Dynamics of Diffusion Models
Paper • 2312.02696 • Published • 31
GenTron: Delving Deep into Diffusion Transformers for Image and Video Generation
Paper • 2312.04557 • Published • 12
ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations
Paper • 2312.04655 • Published • 20
DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing
Paper • 2312.07409 • Published • 22
Mosaic-SDF for 3D Generative Models
Paper • 2312.09222 • Published • 15
FreeInit: Bridging Initialization Gap in Video Diffusion Models
Paper • 2312.07537 • Published • 26
Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model
Paper • 2312.13252 • Published • 27
Adaptive Guidance: Training-free Acceleration of Conditional Diffusion Models
Paper • 2312.12487 • Published • 9
Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models
Paper • 2312.13913 • Published • 22
PanGu-Draw: Advancing Resource-Efficient Text-to-Image Synthesis with Time-Decoupled Training and Reusable Coop-Diffusion
Paper • 2312.16486 • Published • 6
VideoDrafter: Content-Consistent Multi-Scene Video Generation with LLM
Paper • 2401.01256 • Published • 19
PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models
Paper • 2401.05252 • Published • 45
Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation
Paper • 2402.10210 • Published • 29
FiT: Flexible Vision Transformer for Diffusion Model
Paper • 2402.12376 • Published • 48
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Model
Paper • 2402.17412 • Published • 21
Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation
Paper • 2403.12015 • Published • 64
LCM-LoRA: A Universal Stable-Diffusion Acceleration Module
Paper • 2311.05556 • Published • 80
Isotropic3D: Image-to-3D Generation Based on a Single CLIP Embedding
Paper • 2403.10395 • Published • 7
lllyasviel/sd-controlnet-scribble
Image-to-Image • Updated • 5.16k • 50
stabilityai/stable-diffusion-2-depth
Updated • 373k • 382