This repository was archived by the owner on Nov 17, 2023. It is now read-only.

Commit 91ad266

TEChopra1000 authored and aaronmarkham committed

fixed broken links across multiple files (#16581)

1 parent 5296ddc · commit 91ad266

File tree

17 files changed: +35 −38 lines

docs/python_docs/python/tutorials/getting-started/crash-course/5-predict.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ A saved model can be used in multiple places, such as to continue training, to f
 
 ## Prerequisites
 
-Please run the [previous tutorial](train.md) to train the network and save its parameters to file. You will need this file to run the following steps.
+Please run the [previous tutorial](4-train.html) to train the network and save its parameters to file. You will need this file to run the following steps.
 
 ```{.python .input n=1}
 from mxnet import nd
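For context, the step this hunk refers to (rebuilding the network and loading its saved parameters) looks roughly like the sketch below. It is not part of the diff; the layer sizes and the `net.params` file name are placeholders for whatever the training tutorial actually used.

```python
from mxnet import nd
from mxnet.gluon import nn

# Rebuild the same architecture that was trained in the previous tutorial
# (the layers here are placeholders).
net = nn.Sequential()
net.add(nn.Dense(128, activation='relu'),
        nn.Dense(10))

# Load the parameters that the training tutorial saved to file.
net.load_parameters('net.params')

# A quick forward pass on dummy data confirms the weights loaded.
print(net(nd.random.uniform(shape=(1, 784))))
```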

docs/python_docs/python/tutorials/getting-started/crash-course/6-use_gpus.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ net(x)
 
 Finally, we show how to use multiple GPUs to jointly train a neural network through data parallelism. Let's assume there are *n* GPUs. We split each data batch into *n* parts, and then each GPU will run the forward and backward passes using one part of the data.
 
-Let's first copy the data definitions and the transform function from the [previous tutorial](predict.md).
+Let's first copy the data definitions and the transform function from the [previous tutorial](5-predict.html).
 
 ```{.python .input}
 batch_size = 256
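The data-parallel recipe described in this hunk boils down to splitting each batch across devices. Below is a minimal sketch, assuming `net`, `loss`, `trainer`, `train_data`, and `batch_size` are defined as in the tutorial and two GPUs are available.

```python
from mxnet import autograd, gluon, gpu

ctx = [gpu(0), gpu(1)]  # one context per available GPU

for data, label in train_data:
    # Split the batch into one part per GPU.
    data_parts = gluon.utils.split_and_load(data, ctx)
    label_parts = gluon.utils.split_and_load(label, ctx)
    with autograd.record():
        losses = [loss(net(X), y)
                  for X, y in zip(data_parts, label_parts)]
    for l in losses:          # backward pass runs on each GPU's part
        l.backward()
    trainer.step(batch_size)  # aggregate gradients and update
```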

docs/python_docs/python/tutorials/getting-started/gluon_from_experiment_to_deployment.md

Lines changed: 5 additions & 5 deletions
@@ -20,7 +20,7 @@
 
 ## Overview
 MXNet Gluon API comes with a lot of great features, and it can provide you everything you need: from experimentation to deploying the model. In this tutorial, we will walk you through a common use case on how to build a model using gluon, train it on your data, and deploy it for inference.
-This tutorial covers training and inference in Python, please continue to [C++ inference part](https://mxnet.apache.org/versions/master/tutorials/c++/mxnet_cpp_inference_tutorial.html) after you finish.
+This tutorial covers training and inference in Python, please continue to [C++ inference part](/api/cpp/docs/tutorials/cpp_inference) after you finish.
 
 Let's say you need to build a service that provides flower species recognition. A common problem is that you don't have enough data to train a good model. In such cases, a technique called Transfer Learning can be used to make a more robust model.
 In Transfer Learning we make use of a pre-trained model that solves a related task, and was trained on a very large standard dataset, such as ImageNet. ImageNet is from a different domain, but we can utilize the knowledge in this pre-trained model to perform the new task at hand.

@@ -77,7 +77,7 @@ from mxnet.gluon.data.vision import transforms
 from mxnet.gluon.model_zoo.vision import resnet50_v2
 ```
 
-Next, we define the hyper-parameters that we will use for fine-tuning. We will use the [MXNet learning rate scheduler](../packages/gluon/training/learning_rates/learning_rate_schedules.html) to adjust learning rates during training.
+Next, we define the hyper-parameters that we will use for fine-tuning. We will use the [MXNet learning rate scheduler](/api/python/docs/tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html) to adjust learning rates during training.
 Here we set the `epochs` to 1 for quick demonstration, please change to 40 for actual training.
 
 ```python

@@ -161,7 +161,7 @@ test_data = gluon.data.DataLoader(
 
 We will use pre-trained ResNet50_v2 model which was pre-trained on the [ImageNet Dataset](http://www.image-net.org/) with 1000 classes. To match the classes in the Flower dataset, we must redefine the last softmax (output) layer to be 102, then initialize the parameters.
 
-Before we go to training, one unique Gluon feature you should be aware of is hybridization. It allows you to convert your imperative code to a static symbolic graph, which is much more efficient to execute. There are two main benefits of hybridizing your model: better performance and easier serialization for deployment. The best part is that it's as simple as just calling `net.hybridize()`. To know more about Gluon hybridization, please follow the [hybridization tutorial](https://mxnet.apache.org/tutorials/gluon/hybrid.html).
+Before we go to training, one unique Gluon feature you should be aware of is hybridization. It allows you to convert your imperative code to a static symbolic graph, which is much more efficient to execute. There are two main benefits of hybridizing your model: better performance and easier serialization for deployment. The best part is that it's as simple as just calling `net.hybridize()`. To know more about Gluon hybridization, please follow the [hybridization tutorial](/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html).

@@ -265,7 +265,7 @@ finetune_net.export("flower-recognition", epoch=epochs)
 ## Load the model and run inference using the MXNet Module API
 
 MXNet provides various useful tools and interfaces for deploying your model for inference. For example, you can use [MXNet Model Server](https://github.com/awslabs/mxnet-model-server) to start a service and host your trained model easily.
-Besides that, you can also use MXNet's different language APIs to integrate your model with your existing service. We provide [Python](https://mxnet.apache.org/api/python/module/module.html), [Java](https://mxnet.apache.org/api/java/index.html), [Scala](https://mxnet.apache.org/api/scala/index.html), and [C++](https://mxnet.apache.org/api/c++/index.html) APIs.
+Besides that, you can also use MXNet's different language APIs to integrate your model with your existing service. We provide [Python](/api/python.html), [Java](/api/java.html), [Scala](/api/scala.html), and [C++](/api/cpp) APIs.
 
 Here we will briefly introduce how to run inference using Module API in Python. There is more detailed explanation available in the [Predict Image Tutorial](https://mxnet.apache.org/tutorials/python/predict_image.html).
 In general, prediction consists of the following steps:

@@ -315,7 +315,7 @@ You can continue to the [next tutorial](https://mxnet.apache.org/versions/master
 
 You can also find more ways to run inference and deploy your models here:
 1. [Java Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
-2. [Scala Inference examples](https://mxnet.apache.org/tutorials/scala/)
+2. [Scala Inference examples](/api/scala/docs/tutorials/infer)
 4. [MXNet Model Server Examples](https://github.com/awslabs/mxnet-model-server/tree/master/examples)
 
 ## References
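The hybridize-then-export flow linked from the third and fourth hunks above amounts to a few calls. Below is a sketch, assuming `finetune_net` is the fine-tuned ResNet50_v2 and the 224x224 input shape used in the tutorial.

```python
import mxnet as mx

# Compile the imperative Gluon model into a static graph.
finetune_net.hybridize()

# One forward pass builds the cached graph before export.
finetune_net(mx.nd.random.uniform(shape=(1, 3, 224, 224)))

# Writes flower-recognition-symbol.json and flower-recognition-0001.params.
finetune_net.export("flower-recognition", epoch=1)
```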

docs/python_docs/python/tutorials/getting-started/to-mxnet/pytorch.md

Lines changed: 6 additions & 6 deletions
@@ -164,7 +164,7 @@ mx_trainer = gluon.Trainer(mx_net.collect_params(),
 'sgd', {'learning_rate': 0.1})
 ```
 
-The code difference between frameworks is small. The main difference is that in Apache MXNet we use [Trainer](https://mxnet.apache.org/api/python/docs/api/gluon/mxnet.gluon.Trainer.html) class, which accepts optimization algorithm as an argument. We also use [.collect_params()](/api/python/docs/api/gluon/_autogen/mxnet.gluon.nn.Block.collect_params.html) method to get parameters of the network.
+The code difference between frameworks is small. The main difference is that in Apache MXNet we use [Trainer](/api/python/docs/api/gluon/trainer.html) class, which accepts optimization algorithm as an argument. We also use [.collect_params()](/api/python/docs/api/gluon/block.html#mxnet.gluon.Block.collect_params) method to get parameters of the network.
 
 ### 4. Training

@@ -212,13 +212,13 @@ Some of the differences in Apache MXNet when compared to PyTorch are as follows:
 
 * In Apache MXNet, you don't need to flatten the 4-D input into 2-D when feeding the data into forward pass.
 
-* In Apache MXNet, you need to perform the calculation within the [autograd.record()](/api/python/docs/api/gluon-related/_autogen/mxnet.autograd.record.html) scope so that it can be automatically differentiated in the backward pass.
+* In Apache MXNet, you need to perform the calculation within the [autograd.record()](/api/python/docs/api/autograd/index.html?autograd%20record#mxnet.autograd.record) scope so that it can be automatically differentiated in the backward pass.
 
 * It is not necessary to clear the gradient every time as with PyTorch's `trainer.zero_grad()` because by default the new gradient is written in, not accumulated.
 
-* You need to specify the update step size (usually batch size) when performing [step()](/api/python/docs/api/gluon/_autogen/mxnet.gluon.Trainer.step.html) on the trainer.
+* You need to specify the update step size (usually batch size) when performing [step()](/api/python/docs/api/gluon/trainer.html?#mxnet.gluon.Trainer.step) on the trainer.
 
-* You need to call [.asscalar()](/api/python/docs/api/ndarray/_autogen/mxnet.ndarray.NDArray.asscalar.html) to turn a multidimensional array into a scalar.
+* You need to call [.asscalar()](/api/python/docs/api/ndarray/ndarray.html?#mxnet.ndarray.NDArray.asscalar) to turn a multidimensional array into a scalar.
 
 * In this sample, Apache MXNet is twice as fast as PyTorch. Though you need to be cautious with such toy comparisons.

@@ -230,9 +230,9 @@ As we saw above, Apache MXNet Gluon API and PyTorch have many similarities. The
 
 While Apache MXNet Gluon API is very similar to PyTorch, there are some extra functionality that can make your code even faster.
 
-* Check out [Hybridize tutorial](/api/python/docs/guide/packages/gluon/hybridize.html) to learn how to write imperative code which can be converted to symbolic one.
+* Check out [Hybridize tutorial](/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html) to learn how to write imperative code which can be converted to symbolic one.
 
-* Also, check out how to extend Apache MXNet with your own [custom layers](/api/python/docs/guide/extend/custom_layer.html).
+* Also, check out how to extend Apache MXNet with your own [custom layers](/api/python/docs/tutorials/packages/gluon/blocks/custom-layer.html?custom_layers).
 
 ## Appendix
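Taken together, the bullets in this diff describe a Gluon training step of roughly the following shape. This is a sketch, assuming `mx_net`, `mx_trainer`, a `loss_fn`, a `train_dataloader`, and `batch_size` as in the tutorial.

```python
from mxnet import autograd

for data, label in train_dataloader:
    # No manual flattening of the 4-D input is needed.
    with autograd.record():          # calculations in this scope are auto-differentiated
        output = mx_net(data)
        loss = loss_fn(output, label)
    loss.backward()                  # no zero_grad(): gradients are overwritten, not accumulated
    mx_trainer.step(batch_size)      # the update step size is passed explicitly
    print(loss.mean().asscalar())    # asscalar() turns a one-element array into a Python scalar
```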
docs/python_docs/python/tutorials/packages/gluon/image/mnist.md

Lines changed: 1 addition & 1 deletion
@@ -112,7 +112,7 @@ to train the MLP network we defined above.
 
 For our training, we will make use of the stochastic gradient descent (SGD) optimizer. In particular, we'll be using mini-batch SGD. Standard SGD processes train data one example at a time. In practice, this is very slow and one can speed up the process by processing examples in small batches. In this case, our batch size will be 100, which is a reasonable choice. Another parameter we select here is the learning rate, which controls the step size the optimizer takes in search of a solution. We'll pick a learning rate of 0.02, again a reasonable choice. Settings such as batch size and learning rate are what are usually referred to as hyper-parameters. What values we give them can have a great impact on training performance.
 
-We will use [Trainer](https://mxnet.io/api/python/docs/api/gluon/mxnet.gluon.Trainer.html) class to apply the
+We will use [Trainer](/api/python/docs/api/gluon/trainer.html) class to apply the
 [SGD optimizer](https://mxnet.io/api/python/docs/api/gluon-related/_autogen/mxnet.optimizer.SGD.html) on the
 initialized parameters.
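The Trainer construction the hunk refers to is a one-liner. With the stated hyper-parameters (batch size 100, learning rate 0.02) it would look like this sketch, assuming `net` is the MLP defined earlier in the tutorial.

```python
from mxnet import gluon

# Mini-batch SGD with the learning rate chosen above.
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.02})

# Inside the training loop, each call advances one mini-batch:
# trainer.step(batch_size)   # batch_size = 100
```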
docs/static_site/src/pages/api/architecture/note_data_loading.md

Lines changed: 1 addition & 1 deletion
@@ -125,7 +125,7 @@ then compress into JPEG format.
 After that, we save a header that indicates the index and label
 for that image to be used when constructing the *Data* field for that record.
 We then pack several images together into a file.
-You may want to also review the [example using im2rec.py to create a RecordIO dataset](https://mxnet.apache.org/tutorials/basic/data.html#loading-data-using-image-iterators).
+You may want to also review the [example using im2rec.py to create a RecordIO dataset](https://mxnet.apache.org/api/faq/recordio).
 
 ### Access Arbitrary Parts Of Data
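In Python, the header-plus-JPEG packing described above maps onto MXNet's `mx.recordio` helpers. Below is a sketch; the file names and the one-image list are placeholders.

```python
import cv2
import mxnet as mx

record = mx.recordio.MXIndexedRecordIO('data.idx', 'data.rec', 'w')
for i, (path, label) in enumerate([('flower.jpg', 0.0)]):  # placeholder list
    img = cv2.imread(path)
    # The header carries the index and label used for the record's Data field.
    header = mx.recordio.IRHeader(flag=0, label=label, id=i, id2=0)
    # pack_img compresses the image into JPEG and prepends the header.
    packed = mx.recordio.pack_img(header, img, quality=95, img_fmt='.jpg')
    record.write_idx(i, packed)
record.close()
```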
docs/static_site/src/pages/api/cpp/docs/tutorials/mxnet_cpp_inference_tutorial.md

Lines changed: 5 additions & 5 deletions
@@ -29,7 +29,7 @@ tag: cpp
 ## Overview
 MXNet provides various useful tools and interfaces for deploying your model for inference. For example, you can use [MXNet Model Server](https://github.com/awslabs/mxnet-model-server) to start a service and host your trained model easily.
 Besides that, you can also use MXNet's different language APIs to integrate your model with your existing service. We provide [Python]({{'/api/python/docs/api/symbol-related/mxnet.module'|relative_url}}), [Java]({{'/api/java/docs/api'|relative_url}}), [Scala]({{'/api/scala/docs/api'|relative_url}}), and [C++]({{'/api/cpp/docs/api'|relative_url}}) APIs.
-We will focus on the MXNet C++ API. We have slightly modified the code in [C++ Inference Example](https://github.com/apache/incubator-mxnet/tree/master/example/inference) for our use case.
+We will focus on the MXNet C++ API. We have slightly modified the code in [C++ Inference Example](https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example/inference) for our use case.
 
 ## Prerequisites

@@ -105,7 +105,7 @@ class Predictor {
 
 ### Load the model, synset file, and normalization values
 
-In the Predictor constructor, you need to provide paths to saved json and param files. After that, add the following methods `LoadModel` and `LoadParameters` to load the network and its parameters. This part is the same as [the example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/inception_inference.cpp).
+In the Predictor constructor, you need to provide paths to saved json and param files. After that, add the following methods `LoadModel` and `LoadParameters` to load the network and its parameters. This part is the same as [the example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp).
 
 Next, we need to load synset file, and normalization values. We have made the following change since our synset file contains flower names and we used both mean and standard deviation for image normalization.

@@ -280,12 +280,12 @@ Then it will predict your image:
 
 Now you can explore more ways to run inference and deploy your models:
 1. [Java Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
-2. [Scala Inference examples]({{'/api/scala/docs/tutorials'|relative_url}})
-3. [ONNX model inference examples]({{'/api/python/docs/tutorials/deploy/index.html'|relative_url}})
+2. [Scala Inference examples](/api/scala/docs/tutorials)
+3. [ONNX model inference examples](/api/python/docs/tutorials/deploy/index.html)
 4. [MXNet Model Server Examples](https://github.com/awslabs/mxnet-model-server/tree/master/examples)
 
 ## References
 
-1. [Gluon end to end tutorial]({{'/api/python/docs/tutorials/packages/gluon/gluon_from_experiment_to_deployment.html'|relative_url}})
+1. [Gluon end to end tutorial](/api/python/docs/tutorials/getting-started/gluon_from_experiment_to_deployment.html)
 2. [Gluon C++ inference example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/)
 3. [Gluon C++ package](https://github.com/apache/incubator-mxnet/tree/master/cpp-package)
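For comparison only (the file above is the C++ tutorial), the load-json-and-params step performed by the Predictor has a compact Python counterpart. This sketch assumes a model exported as in the Gluon end-to-end tutorial; the file names are placeholders.

```python
import mxnet as mx
from mxnet.gluon import nn

# Load the symbol and parameter files written by net.export("flower-recognition", ...).
net = nn.SymbolBlock.imports("flower-recognition-symbol.json", ['data'],
                             "flower-recognition-0000.params")
out = net(mx.nd.random.uniform(shape=(1, 3, 224, 224)))
```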

docs/static_site/src/pages/api/faq/distributed_training.md

Lines changed: 3 additions & 3 deletions
@@ -91,7 +91,7 @@ In the case of distributed training though, we would need to divide the dataset
 
 Typically, this split of data for each worker happens through the data iterator,
 on passing the number of parts and the index of parts to iterate over.
-Some iterators in MXNet that support this feature are [mxnet.io.MNISTIterator]({{'//api/mxnet/io/index.html#mxnet.io.MNISTIter'|relative_url}}) and [mxnet.io.ImageRecordIter]({{'/api/mxnet/io/index.html#mxnet.io.ImageRecordIter'|relative_url}}).
+Some iterators in MXNet that support this feature are [mxnet.io.MNISTIterator](/api/python/docs/api/mxnet/io/index.html?MNISTIter#mxnet.io.MNISTIter) and [mxnet.io.ImageRecordIter](api/python/docs/api/mxnet/io/index.html?imagerecorditer#mxnet.io.ImageRecordIter).
 If you are using a different iterator, you can look at how the above iterators implement this.
 We can use the kvstore object to get the number of workers (`kv.num_workers`) and rank of the current worker (`kv.rank`).
 These can be passed as arguments to the iterator.

@@ -101,7 +101,7 @@ to see an example usage.
 ### Updating weights
 KVStore server supports two modes, one which aggregates the gradients and updates the weights using those gradients, and second where the server only aggregates gradients. In the latter case, when a worker process pulls from kvstore, it gets the aggregated gradients. The worker then uses these gradients and applies the weights locally.
 
-When using Gluon there is an option to choose between these modes by passing `update_on_kvstore` variable when you create the [Trainer]({{'/api/python/docs/api/gluon/mxnet.gluon.Trainer.html'|relative_url}}) object like this:
+When using Gluon there is an option to choose between these modes by passing `update_on_kvstore` variable when you create the [Trainer](/api/python/docs/api/gluon/trainer.html) object like this:
 
 ```
 trainer = gluon.Trainer(net.collect_params(), optimizer='sgd',

@@ -190,7 +190,7 @@ git clone --recursive https://github.com/apache/incubator-mxnet
 ```
 
 #### Example
-Let us consider training a VGG11 model on the CIFAR10 dataset using [example/gluon/image_classification.py](https://github.com/apache/incubator-mxnet/blob/master/example/gluon/image_classification.py).
+Let us consider training a VGG11 model on the CIFAR10 dataset using [example/gluon/image_classification.py](https://github.com/apache/incubator-mxnet/blob/master/tools/launch.py).
 ```
 cd example/gluon/
 ```
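Concretely, the sharding and `update_on_kvstore` options discussed in these hunks fit together roughly as follows. This is a sketch, assuming a `dist_sync` kvstore, a RecordIO dataset, and a `net` that is already defined.

```python
import mxnet as mx
from mxnet import gluon

kv = mx.kv.create('dist_sync')

# Each worker reads only its own shard of the dataset.
train_iter = mx.io.ImageRecordIter(
    path_imgrec='data.rec', data_shape=(3, 224, 224), batch_size=128,
    num_parts=kv.num_workers, part_index=kv.rank)

# update_on_kvstore=True: the servers apply the optimizer to the weights.
# update_on_kvstore=False: workers pull aggregated gradients and update locally.
trainer = gluon.Trainer(net.collect_params(), optimizer='sgd',
                        optimizer_params={'learning_rate': 0.1},
                        kvstore=kv, update_on_kvstore=True)
```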

docs/static_site/src/pages/api/faq/float16.md

Lines changed: 4 additions & 4 deletions
@@ -39,21 +39,21 @@ The float16 data type is a 16 bit floating point representation according to the
 - CUDA 9 or higher
 - cuDNN v7 or higher
 
-This tutorial also assumes understanding of how to train a network with float32 (the default). Please refer to [logistic regression tutorial](https://mxnet.apache.org/versions/master/tutorials/gluon/logistic_regression_explained.html) to get started with Apache MXNet and Gluon API. This tutorial focuses on the changes needed to switch from float32 to mixed precision and tips on achieving the best performance with mixed precision.
+This tutorial also assumes understanding of how to train a network with float32 (the default). Please refer to [logistic regression tutorial](/api/python/docs/tutorials/getting-started/logistic_regression_explained.html) to get started with Apache MXNet and Gluon API. This tutorial focuses on the changes needed to switch from float32 to mixed precision and tips on achieving the best performance with mixed precision.
 
 ## Using the Gluon API
 
 ### Training or Inference
 
 With Gluon API, you need to take care of three things to convert a model to support computation with float16.
 
-1. Cast Gluon `Block`'s parameters and expected input type to float16 by calling the [cast]({{'/api/python/docs/api/gluon/mxnet.gluon.nn.Block.html#mxnet.gluon.nn.Block.cast'|relative_url}}) method of the `Block` representing the network.
+1. Cast Gluon `Block`'s parameters and expected input type to float16 by calling the [cast](/api/python/docs/api/gluon/block.html?cast#mxnet.gluon.Block.cast) method of the `Block` representing the network.
 
 ```python
 net.cast('float16')
 ```
 
-2. Ensure the data input to the network is of float16 type. If your `DataLoader` or `Iterator` produces output in another datatype, then you would have to cast your data. There are different ways you can do this. The easiest would be to use the [astype]({{'/api/python/docs/api/ndarray/_autogen/mxnet.ndarray.NDArray.astype.html#mxnet.ndarray.NDArray.astype'|relative_url}}) method of NDArrays.
+2. Ensure the data input to the network is of float16 type. If your `DataLoader` or `Iterator` produces output in another datatype, then you would have to cast your data. There are different ways you can do this. The easiest would be to use the [astype](/api/python/docs/api/ndarray/ndarray.html?astype#mxnet.ndarray.NDArray.astype) method of NDArrays.
 
 ```python
 data = data.astype('float16', copy=False)

@@ -98,7 +98,7 @@ net.features = pretrained_net.features
 net.cast('float16')
 ```
 
-You can check the parameters of the model by calling [summary]({{'/api/python/docs/api/gluon/mxnet.gluon.nn.Block.html#mxnet.gluon.nn.Block.summary'|relative_url}}) with some fake data. Notice the provided `dtype=np.float16` in the line below. As it was mentioned earlier, we have to provide data as float16 as well.
+You can check the parameters of the model by calling [summary](/api/python/docs/api/gluon/block.html?block%20summary#mxnet.gluon.Block.summary) with some fake data. Notice the provided `dtype=np.float16` in the line below. As it was mentioned earlier, we have to provide data as float16 as well.
 
 ```python
 net.summary(mx.nd.uniform(shape=(1, 3, 224, 224), dtype=np.float16))
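Put together, the three float16 steps covered in this diff look like the sketch below, assuming a GPU context, an initialized Gluon `net`, and `import numpy as np` as in the tutorial.

```python
import mxnet as mx
import numpy as np

ctx = mx.gpu(0)

# 1. Cast the Block's parameters and expected input type to float16.
net.cast('float16')

# 2. Cast incoming data to float16 before the forward pass.
data = mx.nd.uniform(shape=(1, 3, 224, 224), ctx=ctx).astype('float16', copy=False)

# 3. Inspect the parameters with fake data of the matching dtype.
net.summary(mx.nd.uniform(shape=(1, 3, 224, 224), dtype=np.float16, ctx=ctx))
```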
