From 1aef162643d34db4d2765fee87d59e24c3c1ace8 Mon Sep 17 00:00:00 2001
From: Zhang Peiyuan
Date: Tue, 18 Feb 2025 11:44:22 -0800
Subject: [PATCH] Update index.md

---
 content/blogs/sta/index.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/content/blogs/sta/index.md b/content/blogs/sta/index.md
index cde7168..13ecfda 100644
--- a/content/blogs/sta/index.md
+++ b/content/blogs/sta/index.md
@@ -150,11 +150,9 @@ Since early sampling steps are crucial for global structure, we retain full atte
 STA accelerates attention by exploiting redundancy in 3D full attention. Another approach to speeding up video generation focuses on caching, which leverages redundancy across diffusion steps. We demonstrate that STA is **compatible** with [TeaCache](https://github.com/ali-vilab/TeaCache), a state-of-the-art diffusion acceleration technique based on caching. Together, our solution brings a **3×** speedup, reducing DiT inference time from 945 seconds to **317** seconds with no quality loss.
 
-We evaluate our method on 200 randomly selected prompts from the MovieGen Bench. Below, we provide additional qualitative comparisons between the original Hunyuan model and our 3× speedup solution. The embedded webpage below is scrollable.
-
-{{< rawhtml >}}
-
-{{< /rawhtml >}}
+We evaluate our method on 200 randomly selected prompts from the MovieGen Bench. Below, we provide additional **uncherry-picked** qualitative comparisons between the original Hunyuan model and our 3× speedup solution.
+{{}}
+More results can be found [here](https://fast-video.github.io/).
 
 ## Training with STA Unlocks Greater Speedup
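Note on the caching idea this hunk references: the sketch below is a minimal, assumption-laden illustration of TeaCache-style step caching, not the actual TeaCache or FastVideo implementation. The `StepCache` class, the `dit` callable, and the `0.1` threshold are hypothetical names and values chosen for illustration; the real TeaCache estimates step-to-step change from timestep-modulated inputs with a fitted rescaling, which is reduced here to a raw relative difference on the latent. STA is orthogonal to this: it accelerates the attention inside the transformer call itself, which is why the two techniques compose.

```python
import torch


class StepCache:
    """Indicator-based step caching (TeaCache-style sketch): when consecutive
    denoiser inputs barely change, replay the last cached residual instead of
    running the expensive transformer forward pass."""

    def __init__(self, threshold: float = 0.1) -> None:
        self.threshold = threshold  # change budget before forcing a full pass (illustrative)
        self.accum = 0.0            # relative change accumulated since the last full pass
        self.prev = None            # denoiser input from the previous step
        self.residual = None        # (output - input) cached at the last full pass

    def __call__(self, dit, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Accumulate a cheap proxy for how much the trajectory has moved.
        if self.prev is not None:
            self.accum += ((x - self.prev).abs().mean() / self.prev.abs().mean()).item()
        self.prev = x

        # Cheap step: input has drifted little, so reuse the cached residual.
        if self.residual is not None and self.accum < self.threshold:
            return x + self.residual

        # Expensive step: run the full transformer (where STA would live).
        out = dit(x, t)
        self.residual = out - x
        self.accum = 0.0
        return out
```

Caching the residual (output minus input) rather than the raw output is the more robust choice here, since the input keeps moving along the sampling trajectory; the accumulated indicator also forces a periodic full pass, so approximation error cannot compound across many consecutive skipped steps.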