This repository was archived by the owner on Nov 10, 2023. It is now read-only.

why optimizer.zero_grad() after optimizer.step()? #15

@littleWangyu

Description


In the training epoch, why is optimizer.zero_grad() called after optimizer.step()?
Does it matter?
The usual order is optimizer.zero_grad() -- loss.backward() -- optimizer.step().
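For reference, here is a minimal sketch of both orderings, using a hypothetical nn.Linear model, SGD optimizer, and random data (none of which come from this repo). As long as gradients are cleared at some point between consecutive loss.backward() calls, the two loops compute identical updates:

```python
import torch
import torch.nn as nn

# Hypothetical model, data, and optimizer for illustration only.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
x, y = torch.randn(32, 10), torch.randn(32, 1)

# Ordering A (the common pattern): zero -> backward -> step.
for _ in range(3):
    optimizer.zero_grad()           # clear gradients left over from the last step
    loss = criterion(model(x), y)
    loss.backward()                 # accumulate fresh gradients
    optimizer.step()                # update parameters

# Ordering B (as asked about here): backward -> step -> zero.
optimizer.zero_grad()               # start clean once, since loop A left gradients behind
for _ in range(3):
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()           # clear now, so the next backward() starts from zero
```

In other words, placing zero_grad() after step() appears functionally equivalent: PyTorch accumulates gradients across backward() calls, so what matters is only that they are zeroed between one backward() and the next, not where in the loop that happens.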
