mirror of
https://github.com/labmlai/annotated_deep_learning_paper_implementations.git
synced 2025-11-01 20:28:41 +08:00
optimizers path fix
@@ -53,7 +53,7 @@ and
 * [Adam Optimizer with warmup](https://lab-ml.com/labml_nn/optimizers/adam_warmup.html)
 * [Noam Optimizer](https://lab-ml.com/labml_nn/optimizers/noam.html)
 * [Rectified Adam Optimizer](https://lab-ml.com/labml_nn/optimizers/radam.html)
-* [AdaBelief Optimizer](https://lab-ml.com/labml_nn/optimizers/adabelief.html)
+* [AdaBelief Optimizer](https://lab-ml.com/labml_nn/optimizers/ada_belief.html)

 ### Installation
||||
@@ -14,7 +14,7 @@ summary: >
 * [Adam Optimizer with warmup](adam_warmup.html)
 * [Noam Optimizer](noam.html)
 * [Rectified Adam Optimizer](radam.html)
-* [AdaBelief Optimizer](adabelief.html)
+* [AdaBelief Optimizer](ada_belief.html)

 This [MNIST example](mnist_experiment.html) uses these optimizers.