## Overview
The MXNet Gluon API comes with many great features and provides everything you need, from experimentation to deploying the model. In this tutorial, we will walk you through a common use case: how to build a model using Gluon, train it on your data, and deploy it for inference.
This tutorial covers training and inference in Python; please continue to the [C++ inference part](/api/cpp/docs/tutorials/cpp_inference) after you finish.

Let's say you need to build a service that provides flower species recognition. A common problem is that you don't have enough data to train a good model. In such cases, a technique called Transfer Learning can be used to build a more robust model.
In Transfer Learning we make use of a pre-trained model that solves a related task and was trained on a very large standard dataset, such as ImageNet. ImageNet is from a different domain, but we can utilize the knowledge in this pre-trained model to perform the new task at hand.
```python
from mxnet.gluon.data.vision import transforms
from mxnet.gluon.model_zoo.vision import resnet50_v2
```

Next, we define the hyper-parameters that we will use for fine-tuning. We will use the [MXNet learning rate scheduler](/api/python/docs/tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html) to adjust learning rates during training. Here we set `epochs` to 1 for a quick demonstration; change it to 40 for actual training.

83 | 83 | ```python |
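# The tutorial's actual hyper-parameter block is omitted in this excerpt.
# As an illustrative sketch (the values below are assumptions, not the
# tutorial's), a factor-based schedule multiplies the learning rate by
# `factor` every `step` optimizer updates:
def factor_schedule(base_lr, factor, step, num_update):
    """Return the learning rate after `num_update` optimizer updates."""
    return base_lr * factor ** (num_update // step)

# e.g. with base_lr=0.001, factor=0.5, step=10:
# updates 0-9 train at 0.001, updates 10-19 at 0.0005, and so on.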
```

We will use the pre-trained ResNet50_v2 model, which was pre-trained on the [ImageNet Dataset](http://www.image-net.org/) with 1000 classes. To match the classes in the Flower dataset, we must redefine the last softmax (output) layer to have 102 outputs, then initialize the parameters.
Before we go to training, one unique Gluon feature you should be aware of is hybridization. It allows you to convert your imperative code to a static symbolic graph, which is much more efficient to execute. There are two main benefits of hybridizing your model: better performance and easier serialization for deployment. The best part is that it's as simple as calling `net.hybridize()`. To learn more about Gluon hybridization, please follow the [hybridization tutorial](/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html).

After training, we export the hybridized model so it can be loaded for inference:

```python
finetune_net.export("flower-recognition", epoch=epochs)
```

## Load the model and run inference using the MXNet Module API

MXNet provides various useful tools and interfaces for deploying your model for inference. For example, you can use [MXNet Model Server](https://github.com/awslabs/mxnet-model-server) to start a service and host your trained model easily.
Besides that, you can also use MXNet's different language APIs to integrate your model with your existing service. We provide [Python](/api/python.html), [Java](/api/java.html), [Scala](/api/scala.html), and [C++](/api/cpp) APIs.

Here we will briefly introduce how to run inference using the Module API in Python. A more detailed explanation is available in the [Predict Image Tutorial](https://mxnet.apache.org/tutorials/python/predict_image.html).
In general, prediction consists of the following steps:
You can also find more ways to run inference and deploy your models here:
1. [Java Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
2. [Scala Inference examples](/api/scala/docs/tutorials/infer)
3. [MXNet Model Server Examples](https://github.com/awslabs/mxnet-model-server/tree/master/examples)

## References