
Conversation

@Edenzzzz (Collaborator) commented Aug 5, 2025

torch will automatically call this when it runs out of memory, and calling it manually only slows the program down.
See https://discuss.pytorch.org/t/about-torch-cuda-empty-cache/34232/9
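
For context, here is a minimal sketch of the kind of code this PR removes; the helper name free_memory_manually is hypothetical, not taken from this repo:

```python
import gc

import torch

# The pattern this PR removes: flushing the caching allocator by hand.
# empty_cache() returns cached blocks to the driver, which synchronizes
# the device and forces subsequent allocations back through the driver.
def free_memory_manually() -> None:
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()

# After this PR there is no explicit call: PyTorch's caching allocator
# reuses freed blocks and only releases cached memory when an allocation
# would otherwise fail with an out-of-memory error.
```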

@gemini-code-assist (bot, Contributor) left a comment


Summary of Changes

Hello @Edenzzzz, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on optimizing performance and simplifying memory management within the PyTorch-based components. By removing manual empty_cache() and gc.collect() calls, the system will now rely entirely on PyTorch's automatic memory handling, which is generally more efficient and less prone to introducing performance bottlenecks. Additionally, a minor update was included in an example script.

Highlights

  • Performance Optimization: Explicit calls to torch.cuda.empty_cache() and torch.mps.empty_cache() have been removed across the codebase. PyTorch's caching allocator manages GPU memory automatically, so manual calls add unnecessary overhead and can even slow down execution (see the sketch after this list).
  • Memory Management Simplification: The gc.collect() calls that were often paired with empty_cache() have also been removed, streamlining the memory-management logic and relying on PyTorch's internal mechanisms.
  • Example Update: A minor update was made to an inference example (examples/inference/lora/wan_lora_inference_from_ckpt.py), adjusting a LoRA checkpoint path and adding an additional video generation call with a new prompt.
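
For context, here is a hedged sketch of the allocator behavior the first highlight relies on; it is illustrative only (not part of the PR) and assumes a CUDA device:

```python
import torch

# memory_allocated() counts memory held by live tensors, while
# memory_reserved() counts the allocator's cache, including freed
# blocks kept around for reuse.
assert torch.cuda.is_available()

x = torch.empty(1024, 1024, device="cuda")
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())

del x  # the blocks return to the allocator's cache, not to the driver
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())

torch.cuda.empty_cache()  # forcibly releases cached blocks back to the driver
print(torch.cuda.memory_reserved())
```

After del x, memory_allocated() drops while memory_reserved() stays high; that is the cache doing its job, and empty_cache() only matters when another process needs that memory.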
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  [1] Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot, Contributor) left a comment


Code Review

This pull request focuses on removing manual calls to torch.cuda.empty_cache() and torch.mps.empty_cache(), which is a good optimization to avoid unnecessary synchronization and performance overhead. The changes across the codebase are consistent with this goal.

However, I've identified some scope creep in examples/inference/lora/wan_lora_inference_from_ckpt.py. The modifications in this file, including a change in a checkpoint path and the addition of a new example, are unrelated to the core purpose of this PR. To maintain a clean and atomic commit history, I've recommended moving these changes to a separate pull request.

Once the out-of-scope changes are addressed, this PR will be a valuable improvement to the codebase's performance and clarity.

      text_encoder_cpu_offload=True,
      pin_cpu_memory=False,
    - lora_path="checkpoints/wan_t2v_finetune_lora/checkpoint-1250/transformer",
    + lora_path="checkpoints/wan_t2v_finetune_lora/checkpoint-1000/transformer",

Severity: medium

This change to the lora_path seems unrelated to the main purpose of this pull request, which is to remove empty_cache calls. To maintain a clean and atomic commit history, it's best to revert this change and submit it in a separate PR if needed. This helps in reviewing and understanding the purpose of each change.

Comment on lines 37 to 43
    prompt = "A colorful puzzle ball is being crushed by a large metal cylinder, which flattens the objects as if they were under a hydraulic press."
    video = generator.generate_video(
        prompt,
        output_path=OUTPUT_PATH,
        save_video=True,
        **kwargs
    )
Severity: medium

This new video generation block appears to be out of scope for this PR. The goal here is to remove empty_cache calls. To maintain a clean commit history, please move this example to a separate pull request.

@Edenzzzz added the go (Trigger Buildkite CI) label on Aug 5, 2025
@SolitaryThinker (Collaborator) commented

Going to retry ssim, as I'm not sure why it took so long.

@Edenzzzz force-pushed the remove_empty_cache branch from c59db30 to eaa65e0 on August 7, 2025 at 17:56
@SolitaryThinker (Collaborator) commented

I'm not 100% sure about these changes. @RandNMR73 could you take a look at the mps changes? Maybe try this branch on MPS?

@RandNMR73 (Collaborator) commented

What branch is this under?

@Edenzzzz (Collaborator, Author) commented Aug 8, 2025

https://github.com/Edenzzzz/FastVideo/tree/remove_empty_cache

@RandNMR73 (Collaborator) commented

Everything works on the mps backend with these changes.

@SolitaryThinker merged commit 6c6bcd9 into hao-ai-lab:main on Aug 8, 2025 (1 check passed)
@SolitaryThinker (Collaborator) commented

Thanks for checking.

qimcis pushed a commit to qimcis/FastVideo that referenced this pull request Oct 30, 2025