
Make use of learning rate scheduler optional #449

Merged

ashleve merged 1 commit into ashleve:main from amorehead:patch-1 on Oct 5, 2022
Conversation

amorehead (Contributor) commented Sep 28, 2022

What does this PR do?

Makes the LightningModule's `configure_optimizers()` robust to an unspecified learning rate scheduler.
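The idea can be sketched as follows. This is a minimal, dependency-free sketch, not the template's exact code: the `Dummy*` classes stand in for the torch optimizer/scheduler classes, and the standalone function stands in for the LightningModule method, which in the template receives Hydra-instantiated `_partial_` factories.

```python
class DummyOptimizer:
    """Stand-in for a torch optimizer (illustrative only)."""
    def __init__(self, params):
        self.params = list(params)


class DummyScheduler:
    """Stand-in for a torch lr scheduler (illustrative only)."""
    def __init__(self, optimizer):
        self.optimizer = optimizer


def configure_optimizers(params, optimizer_cls, scheduler_cls=None):
    """Build the Lightning optimizer config; the scheduler is optional."""
    optimizer = optimizer_cls(params=params)
    if scheduler_cls is not None:
        # A scheduler was configured: return the full dict Lightning expects.
        scheduler = scheduler_cls(optimizer=optimizer)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "val/loss"},
        }
    # No scheduler configured: return only the optimizer instead of failing.
    return {"optimizer": optimizer}


# With a scheduler configured:
out = configure_optimizers([0.0], DummyOptimizer, DummyScheduler)
assert "lr_scheduler" in out

# Without one, the method still works and omits the key entirely:
out = configure_optimizers([0.0], DummyOptimizer)
assert "lr_scheduler" not in out
```

The key point is that a `None` scheduler is never handed to Lightning; the `lr_scheduler` entry is simply omitted.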

Before submitting

  • Did you make sure the title is self-explanatory and the description concisely explains the PR?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you test your PR locally with the pytest command?
  • Did you run the pre-commit hooks with the pre-commit run -a command?

Did you have fun?

Yep 🙃

* Made `trainer.configure_optimizers()` robust to unspecified learning rate schedulers
@amorehead amorehead marked this pull request as draft October 5, 2022 22:14
amorehead (Contributor, Author) commented

@ashleve, LGTY?

@amorehead amorehead marked this pull request as ready for review October 5, 2022 22:15
ashleve (Owner) commented Oct 5, 2022

@amorehead Thank you for the ping!

I like this improvement.

ashleve (Owner) left a comment

LGTM!

@ashleve ashleve merged commit ad0f46c into ashleve:main Oct 5, 2022
@ashleve ashleve added enhancement New feature or request refactoring labels Oct 5, 2022
Unity-Billal-mesloub referenced this pull request in Unity-Billal-mesloub/lightning-hydra-template Dec 30, 2025
* Dev (#135)

* remove logging best so far metric values from mnist model

* update tests

* update README.md

* comment out wandb tests

* Dev (#136)

* comment improvements

* add __init__.py

* add disabling pin memory on fast dev run

* rename template_utils.py to utils.py

* improve README.md

* add num_classes property to mnist datamodule

* add sh to requirements.txt

* bump package versions

* Release/0.9 (#141)

* add flake8 and prettier to pre-commit-config
* add setup.cfg
* add workers=True to seed_everything()
* update lightning badge logo
* bump package versions
* update README.md
* add __init__.py files
* add more logger configs parameters
* add default Dockerfile
* change .env.template to .env.example
* move inference example to readme
* remove img_dataset.py
* simplify names of wandb callbacks
* remove wandb test marker
* format files with prettier

* Refactor dockerfile (#149)

* refactor dockerfile to allow mounting

* update README.md

* Release/1.0 (#156)

* update to Hydra 1.1
* add bash folder with scripts for conda setup and run scheduling
* add saving seed in `log_hyperparameters()` method
* redesign Dockerfile to make it weigh less
* add `test_after_training` parameter to config
* add inheritance to trainer configs
* remove forcing ddp-friendly configuration
* refactor tests
* remove conda_env_gpu.yaml
* remove default language version from pre-commit config
* add mnist datamodule unit test
* rename test folder from 'smoke' to 'shell'
* remove wandb import from utils.py
* change 'use_artifact' to 'log_artifact' in wandb callbacks
* add rank zero decorator to wandb callbacks
* add dumping rich tree config to file
* get rid of `=` character in ckpt names
* update requirements.txt
* update LICENSE
* update README.md

* Update README.md

* Update wandb_callbacks.py (#152)

* Update wandb_callbacks.py
* UploadCodeAsArtifact: add the functionality to use git to decide which files are source files, so it can upload all files not ignored by git instead of only '*.py' files.
* UploadCheckpointsAsArtifact: ckpts are output of the run, so use `experiment.log_artifact(ckpts)` instead

* Release/1.1 (#174)

* introduce different running modes: default, debug, experiment
* fix pytorch installation in setup_conda.sh
* fix incorrect calculation of precision, recall and f1 score in wandb callback
* add `_self_` to config.yaml for compatibility with hydra1.1
* fix setting seed in `train.py` so it's skipped when `seed=null`
* add exception message when trying to use wandb callbacks with `trainer.fast_dev_run=true`
* change `axis=-1` to `dim=-1` in LogImagePredictions callback
* add 'Reproducibility' section to README.md

* Enhance trainer debug #177 (#179)

* modify `configs/trainer/debug.yaml` to enable some debug options
* remove unused `if config.get("debug"):` in `extras`
* update the documentation

* Move LICENSE to README.md (#180)

* Hotfix (#187)

* make debug mode automatically set level of all command line loggers to `DEBUG`
* make debug mode automatically set the trainer config to `debug.yaml`
* add generator seed to fix test data leaking to train data in datamodule `setup()`
* move Dockerfile to `dockerfiles` branch
* update README.md

* Update `mnist_datamodule.py`

* General documentation improvements (#193)

* add a new way for accessing datamodule attributes to the README
* general documentation improvements

* Clarify meaning of EarlyStopping patience (#195)

* specify that EarlyStopping patience is counted in validation epochs and not in training epochs.

* Release/1.2 (#198)

* update template to pytorch 1.10+ and lightning 1.5+
* add experiment mode to all experiment configs and implement special logging paths for experiment mode
* add `MaxMetric` to model, for computation of best so far validation accuracy
* add RichProgressBar to default callbacks
* get rid of trick for preventing auto hparam logging, since lightning now supports it with `self.save_hyperparameters(logger=False)`
* add `self.save_hyperparameters()` to datamodule since lightning now supports it
* deprecate Apex support
* deprecate bash script for conda setup
* change `terminate_on_nan` debug option to `detect_anomaly` for compatibility with lightning v1.5
* specify model and datamodule during `trainer.test()`, for compatibility with lightning v1.5
* remove `configs/trainer/all_params.yaml`
* make hyperparameter optimization compatible with lightning v1.5
* general documentation improvements

* Release/1.2 (#199)

* add manual resetting of metrics at the end of every epoch
* set `pytorch-lightning>=1.5` in `requirements.txt`

* Update documentation

* Local config files (#205)

* Introduce local config files

* Add vendor dir (#207)

Implement "best practice" for storing third party code

* Unify log dir structure (#211)

* Unify log dir paths
* Add a little helper script `logs/latest` which can be used to quickly change into the latest log dir of a specific kind
* Update README

* Avoid conflict between isort and black (#214)

Specify black profile for isort inside .pre-commit-config.yaml just in case someone deletes the setup.cfg

* Fix missing parameter in "Accessing datamodule attributes" trick in the README (#219)

* Remove deprecated trainer arguments (#223)

* Remove deprecated trainer arguments: `weight summary` and `progress_bar_refresh_rate`
* Remove deprecated example from README
* Add RichModelSummary to the callbacks

* Rename accelerator="ddp" to strategy="ddp" (#228)

* Remove redundant parts in filenames (#213)

* Remove redundant parts in config filenames
* Change `mnist_model.py` to `mnist_module.py`
* Change `MNISTLitModel` to `MNISTLitModule`
* Change folder `modules/` to `components/`
* Update README

* Add metric name validation for hyperparameter search (#241)

* Documentation and comment improvements (#242)

* Update requirements.txt for mac compatibility (#247)

* Refactoring configs (#248)

- make every run be an experiment by default
- deprecate `mode` configs
- introduce `debug` configs
- introduce `log_dir` configs
- remove unnecessary utilities from `utils.extras()`
- add config flag for skipping training
- update README.md

* Refactor rich config printing (#249)

- Refactor `print_config()` method
- Modify `print_config()` so all config groups are always printed

* Multiple pipelines - training and evaluation (#250)

- introduce best practice for multiple pipelines
- Implement evaluation pipeline
- allow for using relative ckpt paths
- general refactoring
- update README.md

* Add nbstripout to pre-commit hooks (#252)

- Add jupyter notebook output cell clearing before commit

* Rename folder `bash` to `scripts` (#253)

* Move wandb callbacks to the `wandb-callbacks` branch (#254)

* Remove `src/callbacks/`
* Remove `configs/callbacks/wandb.yaml`
* Update readme

* Update requirements and pre-commit hooks (#255)

* Update package versions in requirements.txt
* Update pre-commit hooks versions in pre-commit-config.yaml
* Specify testpath in setup.cfg
* Update readme

* Remove optuna default values (#235)

* Remove some of the optuna default values
* Add more explanatory comments in mnist_optuna.yaml

* fix: Not printing best checkpoint (#261)

* change `config.trainer.get("train")` to `config.get("train")` in `training_pipeline.py`

* Recursively instantiate `SimpleDenseNet` using Hydra (#209)

* Add recursive instantiation of `net` object which is passed to `MNISTLitModule` on init

* Log model hparams first (#263)

* Change the order of logging hparams for more convenient tensorboard usage

* Fix logging path in mlflow (#269)

* make MLFlow logger place all run logs in the same directory so UI can read all of them at once

* Add docformatter to pre-commit hooks (#270)

* Add docformatter to pre-commit hooks
* Autoformat all docstrings

* Add verifying logger is not None before logging hparams (#275)

* Readme improvements (#276)

* Fix errors in 'accessing datamodule attributes trick' in the readme (#279)

* fix accessing attribute from config syntax in the readme
* fix config printing warning in the readme

* Fix val_acc_best calculation (#287)

* Add val_acc_best metric reset at the start of the run to prevent storing results from validation sanity checks

* Remove logs/latest script

* Rename `quee` to `queue` in `print_config()`

* Refactor ckpt_path (#293)

* Move setting seed to `utils.extras()` (#294)

* Move optimized metric retrieval to `utils.get_metric_value()` (#295)

* Improve comments and code formatting (#296)

* Rename `config` to `cfg` (#297)

* Update requirements.txt and pre-commit hooks (#298)

* Add `utils.finish()` to test pipeline (#299)

* Move callbacks and loggers instantiation to utils (#300)

* Remove `_convert_=partial` from trainer instantiation (#301)

* Add pipeline wrapper decorator (#304)

* Add _pipeline_wrapper.py

* Add pyrootutils (#305)

* Refactor configs (#306)

* Format files with pre-commit hooks and improve comments

* Refactoring tests (#307)

* Rename test.py to eval.py (#309)

* Rename "pipelines" to "tasks" (#311)

* Add object instantiation to module files for quick debugging (#312)

* Adapt template to hydra 1.2 - no more changing work dir by default (#313)

* Fix evaluation config (#315)

* Fix optuna search space (#316)

* Add bandit (#317)

* Github Actions (#318)

* Rename utils.extras() to utils.start() (#319)

* Replace setup.cfg with pyproject.toml (#320)

* Add closing neptune, mlflow and comet to utils.finish() (#321)

* Fix output dir path in logger configs (#323)

* Replace `os` with `pathlib` (#324)

* Fix MLFlowLogger class name (#326)

* Disable callbacks in overfit.yaml (#329)

* Improve comments (#330)

* Split utils into several files (#332)

* Improve trainer configs (#333)

* Add code coverage to CI (#335)

* Disable hydra command line logger during debugging (#336)

* Disable `pyrootutils` changing work dir by default (#337)

* Update `train.py` and `eval.py` (#338)

* Add test coverage settings to `pyproject.toml` (#339)

* Add `deterministic` flag to trainer config (#342)

* Add `mps.yaml` trainer config for accelerated training on mac (#344)

* Add enforcing tags (#340)

* Remove experiment name (#345)

* Move extra config utils to `configs/extras/default.yaml` (#346)

* Configure Optimizer in Hydra using _partial_: true (#334)

* Add saving tags to file (#354)

* Replace asserts with warnings in task_utils.py (#353)

* Improve main in mnist_datamodule.py and simple_dense_net.py (#355)

* Comment out experiment name in logger configs (#357)

* Fix typos in configs (#358)

* Add sampler override to default hparams_search config (#360)

* Split callbacks config into multiple files (#365)

* Refactor utils (#366)

* Update train_task.py (#367)

* Update logger configs (#368)

* Update tests (#370)

* Add setup.py (#369)

* Update train.py and eval.py (#371)

* Add code quality workflows for main branch and pull requests (#364)

* Add `logs/` folder (#373)

* Fix missing logging of seed hyperparameter (#374)

* Add missing function imports to `utils/__init__.py` (#376)

* Add docstrings to tests (#375)

* Add codespell linter and fix spelling mistakes (#377)

* Update README.md (#372)

* Remove tracking gradient norm from default debug config (#378)

* Make tasks return metric dict and object dict (#379)

* Replace switching optuna sampler with a comment (#381)

* Add testing metric values to `test_train.py` and `test_eval.py` (#380)

* Remove comment

* Add names to all steps of workflows (#382)

* Update train_task.py

* Rename `task_utils.py` to `utils.py` (#383)

* Fix typo

* Add pull request template (#385)

* Remove unnecessary comments (#384)

* Construct config path from root (#387)

* Add `md-format` to pre-commits (#386)

* Add Makefile (#388)

* Update README.md (#389)

* Add warning when best ckpt is not found (#390)

* Fix warning message in `train_task.py` (#391)

* Set `pythonpath=True` in model and datamodule (#392)

* Rename workflows (#395)

* Update README.md (#394)

* Add `work_dir` to config paths (#397)

* Update `requirements.txt` (#398)

* Update dependabot (#399)

* Bump torchmetrics from 0.8.0 to 0.9.2 (#400)

* Fix typing (#401)

* Fix missing cpu trainer (#402)

* Bump pytorch-lightning from 1.6.5 to 1.7.1 (#408)

* Update README.md (#419)

* Add separate job for windows in tests workflow (#422)

* Update Makefile (#423)

* Move tasks code inside entry files (#421)

* Update README.md (#425)

* Fix docstring (#428)

* Improve comments (#429)

* Update `make clean-logs` in Makefile (#430)

* Pre-commit config updates for jupyter notebooks and flake8 W503 (#435)

* Fix logging metrics in DDP mode (#426)

* Add shellcheck linter (#427)

* Add Vertex AI integration repo to readme (#440)

* Improve comments (#441)

* Update readme (#442)

* Fix broken link of datamodule (#444)

* Add scheduler example (#439)

* Make use of learning rate scheduler optional (#449)

* Made `trainer.configure_optimizers()` robust to unspecified learning rate schedulers

* Bump torchmetrics from 0.9.3 to 0.10.0 (#454)

* Bump pytorch-lightning from 1.7.1 to 1.8.1 (#468)

* Update command for using tags in readme (#465)

* Rename `step()` to `model_step()` for compatibility with recent lightning release (#472)

* Fix deprecated TPU import (#473)

* Move macos testing to separate workflow task (#474)

* Add `task` and `num_classes` args to accuracy metric (#475)

* Improve comments (#476)

* Send hparams to all loggers (#479)

* Upgrade to hydra 1.3 (#480)

* Remove debug from makefile (#482)

* Improve utils warnings (#483)

* Fix loading MNISTLitModule from ckpt (#481)

* Add explicit comment/warning to `training_epoch_end()` (#486)

* Add `codecov.yml` to prevent failing pipeline on coverage decrease (#484)

* Refactor `task_wrapper` decorator (#488)

* Bump hydra-core from 1.3.0 to 1.3.1 (#492)

* Add release drafter (#493)

* Update code examples in readme (#494)

* Add `dev` branch to pr tests workflow (#497)

* Improve root setup (#496)

* Refactor tests (#498)

* Rename `datamodules` folder to `data` (#501)

* Update `README.md` (#499)

* Remove object instantiation from `__main__` methods  (#502)

* Add PR authors to release draft config (#503)

* Fix readme typo (#507)

* Fix use of deprecated LightningLoggerBase class (#517)

* Hotfix for isort poetry incompatibility (#515)

* Bump pytorch-lightning from 1.8.3 to 1.9.1 (#522)

* Deprecate Python 3.7 (#523)

* Update `README.md` (#524)

* Bump hydra-core from 1.3.1 to 1.3.2 (#536)

Bumps [hydra-core](https://github.com/facebookresearch/hydra) from 1.3.1 to 1.3.2.
- [Release notes](https://github.com/facebookresearch/hydra/releases)
- [Changelog](https://github.com/facebookresearch/hydra/blob/v1.3.2/NEWS.md)
- [Commits](facebookresearch/hydra@v1.3.1...v1.3.2)

---
updated-dependencies:
- dependency-name: hydra-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Set hydra version to 1.3 in tests (#542)

* Encourage resetting all validation metrics when training starts (#540)

* Support for installing dependencies with conda (#532)

* Add `__init__.py` to `configs/` folder (#539)

* Release v2.0.0 (#545)

* Support for logging with Aim (#534)

* Update template to Lightning 2.0 (#548)

* Update pre-commit hooks (#549)

* Refactor utils (#541)

* Add option for pytorch 2.0 model compilation (#550)

* Update `README.md` (#551)

---------

Co-authored-by: Mattie Tesfaldet <mattie@meta.com>
Co-authored-by: Johnny <johnnynuca14@gmail.com>

* Update `README.md` (#553)

* Fix name `instantiatiators.py` -> `instantiators.py` (#558)

* Fix dead links in callback configs (#557)

* Set strategy to `ddp` in ddp config (#571)

* Set `sync_dist=True` when logging best so far validation accuracy (#572)

* Lightning + Aim dependency fix in conda environment.yaml and setup.py. (#575)

* Change "pytorch_lightning" to "lightning" in instantiators.py (#577)

* Fix WandB config improper hierarchical display of keys (#583)

* Remove yaml extension from hydra defaults lists (#584)

* Docstrings revamp (#589)

* Rename `pyrootutils` to `rootutils` (#592)

* Fix `.log` being saved in project root dir instead of log dir (#588)

* Fix accelerator in `tests/test_train.py` (#595)

* Update Lightning DDP documentation links in `README.md` (#601)

* Fix `torch.compile` on `nn.module` instead of on `LightningModule` (#587)

* Update function arguments typing (#603)

* DDP-related improvements to datamodule and logging (#594)

* Dividing batch size by number of devices in MNISTDataModule's setup fn
* .log file is now the same across devices when training in a DDP setting
* Adding rank-aware pylogger
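The batch-size bullet above can be sketched with a small helper. This is a hypothetical illustration of the pattern (dividing the configured global batch size by the trainer's world size so the effective global batch size stays constant under DDP); the function name and error message are not the template's exact code.

```python
def per_device_batch_size(global_batch_size: int, world_size: int) -> int:
    """Return the batch size each DDP process should load.

    Under DDP every process loads its own batches, so the configured
    (global) batch size is divided by the number of devices to keep the
    effective global batch size as configured.
    """
    if global_batch_size % world_size != 0:
        raise RuntimeError(
            f"Batch size ({global_batch_size}) is not divisible by the "
            f"number of devices ({world_size})."
        )
    return global_batch_size // world_size


print(per_device_batch_size(128, 4))  # -> 32
```

Raising on a non-divisible batch size (rather than silently rounding) keeps the effective global batch size exactly what the config says.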

* Add FSDP support to `MNISTLitModule` (#604)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Łukasz Zalewski <ashlevegalaxy@gmail.com>
Co-authored-by: Yiyao Wei <yiyao.wei@icloud.com>
Co-authored-by: Zhengyu Yang <25852061+zhengyu-yang@users.noreply.github.com>
Co-authored-by: Giuseppe Scriva <67858959+gscriva@users.noreply.github.com>
Co-authored-by: ashleve <zalewski.ukas@gmail.com>
Co-authored-by: Eungbean Lee <27231912+eungbean@users.noreply.github.com>
Co-authored-by: Charles Gaydon <11660435+CharlesGaydon@users.noreply.github.com>
Co-authored-by: Nils Werner <nils@hey.com>
Co-authored-by: Zhenyu Jiang <stevetod98@gmail.com>
Co-authored-by: charles <charlesguan94@gmail.com>
Co-authored-by: LEE HANBIN <hanbin@kakao.com>
Co-authored-by: Xin Hu <laohuxin@gmail.com>
Co-authored-by: Johnny <johnnync13@gmail.com>
Co-authored-by: lyp <38273833+yipliu@users.noreply.github.com>
Co-authored-by: yu-xiang-wang <104632366+yu-xiang-wang@users.noreply.github.com>
Co-authored-by: Eli Simhayev <elisimhayev@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alexander Tong <alexandertongdev@gmail.com>
Co-authored-by: Yongtae723 <50616781+Yongtae723@users.noreply.github.com>
Co-authored-by: yangliz5 <yangyang.li@northwestern.edu>
Co-authored-by: Alex Morehead <alex.morehead@gmail.com>
Co-authored-by: Lukas <lukasz.zalewski.ai@gmail.com>
Co-authored-by: YuCao16 <62466929+YuCao16@users.noreply.github.com>
Co-authored-by: Guilherme Pires <mail@gpir.es>
Co-authored-by: Mattie Tesfaldet <tesfaldet@hotmail.com>
Co-authored-by: Mattie Tesfaldet <mattie@meta.com>
Co-authored-by: Johnny <johnnynuca14@gmail.com>
Co-authored-by: Yunchong Gan <yunchong@pku.edu.cn>
Co-authored-by: Théis Bazin <9104039+tbazin@users.noreply.github.com>
Co-authored-by: André Aquilina <32460579+dreaquil@users.noreply.github.com>
Co-authored-by: Stefan Geyer <git@caplett.com>