This repository was archived by the owner on Nov 17, 2023. It is now read-only.

Commit b1e4911

anirudh2290, szha, ptrendx, TaoLv, and JiangZhaoh authored
Multithreaded Inference Support (#16654)
* Add cached op threadsafe version with corresponding C APIs, CPP Package changes, CI changes and tests
* Fix download cmd in runtime_functions
* Add CI changes
* Add stage, fix indentation
* Fix lint
* Change to DEFAULT for C API
* Fix mxnet_unit_tests path
* export correct LD_LIBRARY_PATH
* Add cpp include dirs
* Build test with USE_CPP_PACKAGE
* Add cached op threadsafe version with corresponding C APIs, CPP Package changes, CI changes and tests
* Fix download cmd in runtime_functions
* Merge
* change mkldnn lib name
* Add static_alloc, static_shape support
* Address review comments
* Make GetCachedOpThreadSafeState similar to cached_op
* Address review comments: comments for locking strategy
* multithreaded inference tutorial
* [Estimator] handle composite metrics in estimator (#16676)
* handle composite metrics in estimator
* fix composite metric case in handlers
* remove unused import
* [Estimator] refactor estimator to allow overriding evaluate/fit of a batch (#16678)
* refactor estimator to allow overriding evaluate/fit of a batch
* add doc to explain call structure and how to override
* fix and doc
* Pointwise fusion for GPU (#15167)
* Beginning of RTC of pointwise ops
* Code generation from the given JSON
* add initial simple_partition_pass and use it for pointwise fusion
* fix the fusion, use a symbol.Copy() at the beginning of binding function, use the name of input nodes in the cuda code
* Fixes
* Adding support for attribute inference for backward nodes when fusing
* keep proper input ordering for fused Op
* instantiate the indexed_graph before starting the subgraph replacement, return a new graph to reset the indexed_graph
* Fuse backward
* fix ordering of subgraph node inputs using subgraph topological ordering instead of main graph topological ordering, add tvm.patch
* exclude forward node fusion during the fusion of the nodes in the backward graph
* Dealing with fused backward nodes inferattr
* use subgraph.indexed_graph() instead of main for _FusedOpHelper nodes node_id, invert control_deps loop to modify topology of subgraph before calling its indexed_graph(), check that all nodes of the first DFSVisit are actually in the subgraph
* Adding support for other reqs in codegen
* Fix
* Cleaning
* Change the TVM submodule
* More cleaning
* Making linter happy
* Do fusion only if default context is GPU
* Fixes for tests: add powerscalar and rpowerscalar, fix return type of zero and one; cleaning, fixing lint; go back to proper TVM submodule
* Fix the TVM commit
* Fix lint
* Guard fusion with MXNET_USE_CUDA
* Fix
* Fix clang-tidy
* Add erf and erfinv backward
* Gluon support for fusion
* Cleaning
* Cleaning and allow shape/type change in FusedOp
* Fixing Gluon bugs
* Fixing after rebase
* Fixing race condition and guarding against races when using NVRTC
* Cleaning and renaming FusedOp to _FusedOp
* Going easy on Windows compiler
* Disable fusion on Windows for now
* Refactor InferAttr and InferShapeAttr
* Added slice and half2 support to FusedOp
* Fix lint errors
* Added multiple types support for vector loading/storing
* add slice fusion when it's at the beginning of subgraphs
* Removed constant ndim assumption in fused op
* Fix memory alignment issue in slice for FusedOp
* Fixes
* Fix lint errors
* Do not include cuda_fp16.h
* Refactor fused op op lists
* Make linter happy
* Changes from review
* Fixes after rebase
* Expand FusedOp support for slice
* Fix for fp16 _zeros and _ones
* Fix
* Moving aux functions to unnamed namespace and detail namespace -> fusion namespace
* Disabling fusion if it alters topological order of inputs
* Print code only when env variable is set
* Fix
* Fix lint and 2 tests that specify the same names for multiple inputs
* Fixes from review and disabling fusion of slice with non-default step
* Add amp_cast to fusion, fixes
* Add amp_multicast and its backward to the list of supported ops
* Apply wording suggestions from code review. Co-Authored-By: Aaron Markham <markhama@amazon.com>
* Apply wording suggestions from code review. Co-Authored-By: Aaron Markham <markhama@amazon.com>
* Make clearer comment
* Adding punctuation and capitalization to \brief descriptions
* Fix
* Fix
* Add backward_cast to fusion
* Adding unittests for fusion. Fix for erfinv_grad
* Adding slice ops and add_n to tests
* Fixes from review
* Setting inplace option
* Fix lint
* Storing double in half
* Retrigger CI
* Slight relaxing of the relative tolerance in the test
* Move the env variable check to the end
* Fix a race condition between InferShape and scheduled Forward
* Fix flaky test_fusion test involving fp32 erfinv op.
* Fix from review
* Added broadcast_like and slice_like to fused op
* Minor fix and cleanup
* Added negative axis support in slice_axis, temporarily disabled fusion of slice_like and broadcast_like
* Added axes support to slice_like
* Added axis support to broadcast_like
* Add fast_load_slice function to fused op code
* Added runtime switch for choosing fast and slow slice kernel
* Fix lint and warning
* Going easy on Windows compiler (again)
* Fix slice_like
* Debug broadcast_like fusion
* Fix lint
* Fix lint
* Trigger CI
* Get rid of the initializer list
* Fix backward calls with different gradient type
* avoid cycle when adding node specific for inputs of subgraph for pointwise fusion
* Fix lint
* Add namespace to the fusion implementations
* Set launch bounds on the fused kernel
* Fix NumPy tests
* Test showcasing an issue fixed in PR #16553
* Cast scalars to FP32 and perform (a*1.0/b) instead of (a/b); fix lint errors; fix lint
* Fix a bug in cycle detection for inputs only op in pointwise fusion
* Add comments to simple_partition_pass.h file
* fix install dir (#16690)
* [numpy] add numpy operator : append (#16564)
* add operator : append ; fix op concatenate when axis = None
* pylint disable; remove mistaken pylint disable
* Initializer.__eq__ (#16680)
* fix binary dependencies in CD and nightly (#16693)
* [MKL-DNN] Add mxnet mkldnn cmake tutorial (#16688)
* add mxnet mkldnn cmake instruction
* improve doc
* OMP->OpenMP
* Revert "[MKLDNN]Fix reorder2default (#16602)" (#16697). This reverts commit dd4eaf5.
* [Estimator] refactor estimator and clarify docs (#16694)
* refactor estimator and clarify docs
* fix info message and test
* clean up after releasing logging handler
* Eliminate common expressions (#15657)
* Eliminate common expressions from a graph
* Guarding against optimizing out stateful ops and ops that require resource
* Fix lint
* Added THasDeterministicOutput to multiple ops
* Debug eliminate common expr
* Added test
* Expose get_optimized_symbol
* Fix
* Fix 2
* Add doc to the Python call
* Add env var MXNET_ELIMINATE_COMMON_EXPR, default true
* Add comments, improve readability of eliminate_common_expr_pass.cc
* Expand testing
* Lower priority of THasDeterministicOutput attr for equal Node test
* Change mx.gpu() to mx.cpu() in tests
* Skip CSE test on Windows (as env variable setting during test does not work there)
* Add missing import sys
* Add missing import logging
* Backport of #16711, #16737, #16408 to 1.6 branch (#16763)
* support mixed-precision true_divide (#16711)
* [MKLDNN] use dim_t instead of int in slice/transpose operators (#16737)
* use dim_t instead of int
* fix same issue in pooling
* rebase code
* trigger CI
* Add MXNet Ops for fast multihead attention (#16408)
* add MXNet Ops for fast multihead attention
* add cutlass as 3rdparty dependency
* add cutlass to compilation flags
* remove all cutlass stuff
* add better error message and description and remove cutlass from compilation flags
* change credit for the approach since the code has changed
* fix typos
* correct another typo
* Add all the cuda/cublas helper functions
* remove tests using kAddTo
* only use cublasStridedBatchedGemm if CUDA >= 9.1
* add equivalent mxnet code in description of mha ops
* remove a wrong copy-paste
* add _contrib for namespace and add GPU only on description
* add warning in bwd_ignore_zero_init description, also test with fp32
* add error return if bwd_ignore_zero_init is used without MXNET_EXEC_ENABLE_ADDTO
* remove std::move for clang
* remove bwd_ignore_zero_init flag
* remove bwd_ignore_zero_init in test_operator_gpu.py
* fix typo
* fix another typo
* Removed unrelated test
* Add example and documentation for multi threaded inference
* Add LICENSE
* Add get_model.py
* Add license for README
* Refactor cached op and cached op threadsafe
* Add limitation
* Add tests for naive engine
* Add latest test changes
* Thread Safety tests in NaiveEngine mode
* Thread Safety tests update
* Update thread safety tests, add unsupported use cases
* Changes to doc and refactor
* Fix todo owner, indentation and mx_float->float
* Refactor cached op code, remove num_threads arg from example
* Fix lint
* Fix warning
* Add back cython, required for unix-gpu build
* Fix for windows
* Add bulking support for thread safe cached op version
* Add support for subgraph testing
* import mxnet before calling get_backend_symbol
* Fix symbol json name
* Refactor DynamicForward
* Add comments
* Add DMLC_ATTRIBUTE_UNUSED
* Fix use_naive_run issue
* Fix lint
* Revert unittest_cpp to old test since it doesn't test thread safety
* Fix doc

Co-authored-by: Sheng Zha <szha@users.noreply.github.com>
Co-authored-by: Przemyslaw Tredak <ptrendx@gmail.com>
Co-authored-by: Tao Lv <tao.a.lv@intel.com>
Co-authored-by: JiangZhaoh <54654391+JiangZhaoh@users.noreply.github.com>
Co-authored-by: Leonard Lausen <leonard@lausen.nl>
Co-authored-by: Xinyu Chen <xinyu1.chen@intel.com>
Co-authored-by: Zhennan Qin <zhennan.qin@intel.com>
1 parent a726c40 commit b1e4911

File tree

26 files changed: +2361 −330 lines

CMakeLists.txt

Lines changed: 5 additions & 1 deletion
@@ -314,6 +314,10 @@ if(USE_MKLDNN)
   set(INSTALL_MKLDNN ON)
 endif()

+if(USE_CPP_PACKAGE)
+  add_definitions(-DMXNET_USE_CPP_PACKAGE=1)
+endif()
+
 # Allow Cuda compiles outside of src tree to find things in 'src' and 'include'
 include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)
 include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src)
@@ -853,7 +857,6 @@ if(MSVC AND USE_MXNET_LIB_NAMING)
   set_target_properties(mxnet PROPERTIES OUTPUT_NAME "libmxnet")
 endif()

-add_subdirectory(tests)

 include(GNUInstallDirs)
 install(TARGETS ${MXNET_INSTALL_TARGETS}
@@ -915,6 +918,7 @@ endif()
 if(BUILD_CPP_EXAMPLES)
   add_subdirectory(example/image-classification/predict-cpp)
 endif()
+add_subdirectory(tests)

 # ---[ Linter target
 if(MSVC)

Makefile

Lines changed: 1 addition & 0 deletions
@@ -646,6 +646,7 @@ $(BIN) :
 # CPP Package
 ifeq ($(USE_CPP_PACKAGE), 1)
 	include cpp-package/cpp-package.mk
+	CFLAGS += -DMXNET_USE_CPP_PACKAGE=1
 endif

 include mkldnn.mk

ci/docker/runtime_functions.sh

Lines changed: 38 additions & 0 deletions
@@ -786,7 +786,27 @@ build_ubuntu_gpu_cuda101_cudnn7() {
         CUDA_ARCH="$CI_CUDA_COMPUTE_CAPABILITIES" \
         USE_SIGNAL_HANDLER=1 \
         -j$(nproc)
+    make cython PYTHON=python2
+    make cython PYTHON=python3
+}

+build_ubuntu_gpu_cuda101_cudnn7_mkldnn_cpp_test() {
+    set -ex
+    build_ccache_wrappers
+    make \
+        DEV=1 \
+        USE_BLAS=openblas \
+        USE_MKLDNN=1 \
+        USE_CUDA=1 \
+        USE_CUDA_PATH=/usr/local/cuda \
+        USE_CUDNN=1 \
+        USE_TVM_OP=0 \
+        USE_CPP_PACKAGE=1 \
+        USE_DIST_KVSTORE=1 \
+        CUDA_ARCH="$CI_CUDA_COMPUTE_CAPABILITIES" \
+        USE_SIGNAL_HANDLER=1 \
+        -j$(nproc)
+    make test USE_CPP_PACKAGE=1 -j$(nproc)
     make cython PYTHON=python2
     make cython PYTHON=python3
 }
@@ -1323,6 +1343,24 @@ integrationtest_ubuntu_gpu_cpp_package() {
     cpp-package/tests/ci_test.sh
 }

+integrationtest_ubuntu_gpu_capi_cpp_package() {
+    set -ex
+    export PYTHONPATH=./python/
+    export LD_LIBRARY_PATH=/work/mxnet/lib:$LD_LIBRARY_PATH
+    python3 -c "import mxnet as mx; mx.test_utils.download_model(\"imagenet1k-resnet-18\"); mx.test_utils.download_model(\"imagenet1k-resnet-152\"); mx.test_utils.download_model(\"imagenet1k-resnet-50\");"
+    # Load symbol, convert symbol to leverage fusion with subgraphs, save the model
+    python3 -c "import mxnet as mx; x = mx.sym.load(\"imagenet1k-resnet-152-symbol.json\"); x.get_backend_symbol(\"MKLDNN\"); x.save(\"imagenet1k-resnet-152-subgraph-symbol.json\");"
+    # Copy params file with a different name, used in subgraph symbol testing
+    cp imagenet1k-resnet-152-0000.params imagenet1k-resnet-152-subgraph-0000.params
+    build/tests/cpp/mxnet_unit_tests --gtest_filter="ThreadSafety.*"
+    build/tests/cpp/mxnet_unit_tests --gtest_filter="ThreadSafety.*" --thread-safety-with-cpu
+    # Also run thread safety tests in NaiveEngine mode
+    export MXNET_ENGINE_TYPE=NaiveEngine
+    build/tests/cpp/mxnet_unit_tests --gtest_filter="ThreadSafety.*"
+    build/tests/cpp/mxnet_unit_tests --gtest_filter="ThreadSafety.*" --thread-safety-with-cpu
+    unset MXNET_ENGINE_TYPE
+}
+
 integrationtest_ubuntu_cpu_dist_kvstore() {
     set -ex
     pushd .

ci/jenkins/Jenkins_steps.groovy

Lines changed: 29 additions & 0 deletions
@@ -39,6 +39,7 @@ mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/l
 mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, lib/tvmop.conf, build/libcustomop_lib.so, build/libcustomop_gpu_lib.so, build/libsubgraph_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
 mx_tensorrt_lib = 'build/libmxnet.so, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/tvmop.conf, lib/libnvonnxparser_runtime.so.0, lib/libnvonnxparser.so.0, lib/libonnx_proto.so, lib/libonnx.so'
 mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, lib/tvmop.conf, build/libcustomop_lib.so, build/libcustomop_gpu_lib.so, build/libsubgraph_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cpp_capi = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, lib/tvmop.conf, libsample_lib.so, lib/libmkldnn.so.1, lib/libmklml_intel.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so, build/tests/cpp/mxnet_unit_tests'
 mx_lib_cpp_examples_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, build/libcustomop_lib.so, build/libcustomop_gpu_lib.so, build/libsubgraph_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 mx_lib_cpp_examples_cpu = 'build/libmxnet.so, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/tvmop.conf, build/cpp-package/example/*'

@@ -261,6 +262,20 @@ def compile_unix_full_gpu() {
   }]
 }

+def compile_unix_full_gpu_mkldnn_cpp_test() {
+  return ['GPU: CUDA10.1+cuDNN7+MKLDNN+CPPTEST': {
+    node(NODE_LINUX_CPU) {
+      ws('workspace/build-gpu-mkldnn-cpp') {
+        timeout(time: max_time, unit: 'MINUTES') {
+          utils.init_git()
+          utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_cuda101_cudnn7_mkldnn_cpp_test', false)
+          utils.pack_lib('gpu_mkldnn_cpp_test', mx_lib_cpp_capi)
+        }
+      }
+    }
+  }]
+}
+
 def compile_unix_full_gpu_no_tvm_op() {
   return ['GPU: CUDA10.1+cuDNN7 TVM_OP OFF': {
     node(NODE_LINUX_CPU) {
@@ -1010,6 +1025,20 @@ def test_unix_cpp_package_gpu() {
   }]
 }

+def test_unix_capi_cpp_package() {
+  return ['capi-cpp-package GPU': {
+    node(NODE_LINUX_GPU) {
+      ws('workspace/it-capi-cpp-package') {
+        timeout(time: max_time, unit: 'MINUTES') {
+          utils.unpack_and_init('gpu_mkldnn_cpp_test', mx_lib_cpp_capi)
+          utils.docker_run('ubuntu_gpu_cu101', 'integrationtest_ubuntu_gpu_capi_cpp_package', true)
+          utils.publish_test_coverage()
+        }
+      }
+    }
+  }]
+}
+
 def test_unix_scala_cpu() {
   return ['Scala: CPU': {
     node(NODE_LINUX_CPU) {

ci/jenkins/Jenkinsfile_unix_gpu

Lines changed: 2 additions & 0 deletions
@@ -43,6 +43,7 @@ core_logic: {
     custom_steps.compile_unix_int64_gpu(),
     custom_steps.compile_unix_full_gpu_no_tvm_op(),
     custom_steps.compile_unix_cmake_gpu_no_tvm_op(),
+    custom_steps.compile_unix_full_gpu_mkldnn_cpp_test()
   ])

   utils.parallel_stage('Tests', [
@@ -64,6 +65,7 @@ core_logic: {
     custom_steps.test_unix_distributed_kvstore_gpu(),
     custom_steps.test_static_python_gpu(),
     custom_steps.test_unix_python3_gpu_no_tvm_op(),
+    custom_steps.test_unix_capi_cpp_package(),

     // Disabled due to: https://github.com/apache/incubator-mxnet/issues/11407
     //custom_steps.test_unix_caffe_gpu()

cpp-package/include/mxnet-cpp/ndarray.hpp

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ inline NDArray::NDArray(const mx_float *data, const Shape &shape,
   CHECK_EQ(MXNDArrayCreate(shape.data(), shape.ndim(), context.GetDeviceType(),
                            context.GetDeviceId(), false, &handle),
            0);
-  MXNDArraySyncCopyFromCPU(handle, data, shape.Size());
+  CHECK_EQ(MXNDArraySyncCopyFromCPU(handle, data, shape.Size()), 0);
   blob_ptr_ = std::make_shared<NDBlob>(handle);
 }
 inline NDArray::NDArray(const std::vector<mx_float> &data, const Shape &shape,

cpp-package/include/mxnet-cpp/symbol.h

Lines changed: 2 additions & 0 deletions
@@ -174,6 +174,8 @@ class Symbol {
   *unnamed (empty string).
   */
   std::vector<std::string> ListArguments() const;
+  /*! \return lists all argument names and aux states of the symbol */
+  std::vector<std::string> ListInputs() const;
   /*! \return get the descriptions of outputs for this symbol */
   std::vector<std::string> ListOutputs() const;
   /*! \return get the descriptions of auxiliary data for this symbol */

cpp-package/include/mxnet-cpp/symbol.hpp

Lines changed: 12 additions & 0 deletions
@@ -151,6 +151,18 @@ inline std::vector<std::string> Symbol::ListArguments() const {
   }
   return ret;
 }
+
+inline std::vector<std::string> Symbol::ListInputs() const {
+  std::vector<std::string> ret;
+  mx_uint size;
+  const char **sarr;
+  NNSymbolListInputNames(GetHandle(), 0, &size, &sarr);
+  for (mx_uint i = 0; i < size; ++i) {
+    ret.push_back(std::string(sarr[i]));
+  }
+  return ret;
+}
+
 inline std::vector<std::string> Symbol::ListOutputs() const {
   std::vector<std::string> ret;
   mx_uint size;
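The `Symbol::ListInputs()` helper added above returns argument and auxiliary-state names in a single list, which is convenient when lining up the input NDArrays for a cached op. A minimal usage sketch (the symbol file name is only illustrative):

```c++
#include <iostream>
#include <string>

#include "mxnet-cpp/MxNetCpp.h"

int main() {
  using namespace mxnet::cpp;
  // Load a symbol from JSON and list every input it expects:
  // arguments (data and weights) followed by auxiliary states.
  Symbol net = Symbol::Load("imagenet1k-inception-bn-symbol.json");
  for (const std::string &name : net.ListInputs()) {
    std::cout << name << std::endl;  // e.g. "data", "conv_1_weight", ...
  }
  return 0;
}
```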
Lines changed: 199 additions & 0 deletions

---
layout: page_api
title: Multi Threaded Inference
action: Get Started
action_url: /get_started
permalink: /api/cpp/docs/tutorials/multi_threaded_inference
is_tutorial: true
tag: cpp
---

<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->

<!--- http://www.apache.org/licenses/LICENSE-2.0 -->

<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->

# Multi Threaded Inference API

A long-standing request from MXNet users has been to invoke parallel inference on a model from multiple threads while sharing the parameters.
With this use case in mind, the thread safe version of CachedOp was added so that MXNet users can run multi-threaded inference.
This doc attempts to do the following:
1. Discuss the current state of thread safety in MXNet
2. Explain how one can use the C API and the thread safe version of cached op, along with the CPP package, to achieve multithreaded inference. This will be useful for end users as well as frontend developers of different language bindings
3. Discuss the limitations of the above approach
4. Future Work

## Current state of Thread Safety in MXNet

Examining the current state of thread safety in MXNet, we can arrive at the following conclusions:

1. The MXNet Dependency Engine is thread safe (except for WaitToRead invoked inside a spawned thread; please see the Limitations section)
2. The Graph Executor, which is the backend of the Module/Symbolic/C Predict APIs, is not thread safe
3. Cached Op (the Gluon backend) is not thread safe

CachedOpThreadSafe and the corresponding C APIs were added to address point 3 above and provide a way for MXNet users to do multi-threaded inference.

```
/*!
 * \brief create cached operator, allows to choose thread_safe version
 * of cachedop
 */
MXNET_DLL int MXCreateCachedOpEX(SymbolHandle handle,
                                 int num_flags,
                                 const char** keys,
                                 const char** vals,
                                 CachedOpHandle *out,
                                 bool thread_safe DEFAULT(false));
```
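As a quick illustration (not part of the example code), a thread safe cached op could be created from an existing symbol handle roughly as follows; the `static_alloc`/`static_shape` flags shown here are assumptions that mirror the defaults used by the example:

```c++
#include <stdexcept>

#include <mxnet/c_api.h>

// Create the thread safe version of cached op from an already-created symbol
// handle (e.g. obtained via MXSymbolCreateFromFile).
CachedOpHandle CreateThreadSafeCachedOp(SymbolHandle sym) {
  const char *keys[] = {"static_alloc", "static_shape"};
  const char *vals[] = {"true", "true"};
  CachedOpHandle op = nullptr;
  // The trailing `true` selects CachedOpThreadSafe instead of CachedOp.
  if (MXCreateCachedOpEX(sym, 2, keys, vals, &op, true) != 0) {
    throw std::runtime_error(MXGetLastError());
  }
  return op;
}
```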
## Multithreaded inference in MXNet with C API and CPP Package

### Prerequisites
To complete this tutorial you need to:
- Learn the basics about [MXNet C++ API](/api/cpp)
- Build MXNet from source with make/cmake
- Build the multi-threaded inference example

### Setup the MXNet C++ API
To use the C++ API in MXNet, you need to build MXNet from source with the C++ package. Please follow the [build from source guide](/get_started/ubuntu_setup.html) and the [C++ Package documentation](/api/cpp).
In summary, you need to build MXNet from source with the `USE_CPP_PACKAGE` flag set to 1, for example: `make -j USE_CPP_PACKAGE=1 USE_CUDA=1 USE_CUDNN=1`.
This example requires a build with CUDA and CUDNN.

### Build the example
If you have built mxnet from source with make, then do the following:

```bash
$ cd example/multi_threaded_inference
$ make
```

If you have built mxnet from source with cmake, please uncomment the specific lines for the cmake build or set the following environment variables: `MKLDNN_BUILD_DIR` (default is `$(MXNET_ROOT)/3rdparty/mkldnn/build`), `MKLDNN_INCLUDE_DIR` (default is `$(MXNET_ROOT)/3rdparty/mkldnn/include`) and `MXNET_LIB_DIR` (default is `$(MXNET_ROOT)/lib`).

### Download the model and run the multi threaded inference example
To download a model, use the `get_model.py` script. This downloads a model to run inference on.

```bash
python3 get_model.py --model <model_name>
```
e.g.
```bash
python3 get_model.py --model imagenet1k-inception-bn
```
Only the models supported by `get_model.py` work with multi threaded inference.

To run the multi threaded inference example:

First export `LD_LIBRARY_PATH`:

```bash
$ export LD_LIBRARY_PATH=<MXNET_LIB_DIR>:$LD_LIBRARY_PATH
```

```bash
$ ./multi_threaded_inference [model_name] [is_gpu] [file_names]
```
e.g.

```bash
./multi_threaded_inference imagenet1k-inception-bn 2 1 grace_hopper.jpg dog.jpg
```

The above command spawns 2 threads, shares the same cached op and params between the two threads, and runs inference on the GPU. It returns the inference results in the order in which the files are provided.

NOTE: This example demonstrates multi-threaded inference with cached op. The inference results work well only for specific models (e.g. imagenet1k-inception-bn); the results may not necessarily be accurate for other models because of the different preprocessing steps required, etc.

### Code walkthrough: multi-threaded inference with CachedOp

The multi threaded inference example (`multi_threaded_inference.cc`) involves the following steps:

1. Parse arguments and load the input image into an ndarray
2. Prepare input data and load parameters, copying data to a specific context
3. Prepare arguments to pass to the CachedOp and call the C API to **create cached op**
4. Prepare a lambda function which will run in spawned threads. Call the C API to **invoke cached op** within the lambda function.
5. Spawn multiple threads and wait for all threads to complete.
6. Post process data to obtain inference results and clean up.

### Step 1: Parse arguments and load input image into ndarray

[multi_threaded_inference.cc#L299-L341](https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L299-L341)

The above code parses arguments and loads the image file into an ndarray with a specific shape. There are a few things that are set by default and are not configurable. For example, `static_alloc` and `static_shape` are set to true by default.


### Step 2: Prepare input data and load parameters, copying data to a specific context

[multi_threaded_inference.cc#L147-L205](https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L147-L205)

The above code loads the params and copies the input data and params to the specific context.
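A rough sketch of this step, assuming the `.params` file produced by `get_model.py` and the `mxnet-cpp` headers (the function name and structure are illustrative, not the example's exact code):

```c++
#include <map>
#include <string>

#include "mxnet-cpp/MxNetCpp.h"

using namespace mxnet::cpp;

// Load a saved .params file and copy every array to the target context,
// splitting on the "arg:"/"aux:" prefixes used in the saved file.
void LoadParams(const std::string &param_file, const Context &ctx,
                std::map<std::string, NDArray> *args,
                std::map<std::string, NDArray> *aux) {
  std::map<std::string, NDArray> params = NDArray::LoadToMap(param_file);
  for (const auto &kv : params) {
    if (kv.first.rfind("arg:", 0) == 0) {
      (*args)[kv.first.substr(4)] = kv.second.Copy(ctx);
    } else if (kv.first.rfind("aux:", 0) == 0) {
      (*aux)[kv.first.substr(4)] = kv.second.Copy(ctx);
    }
  }
  NDArray::WaitAll();  // make sure the copies finish before inference starts
}
```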
### Step 3: Prepare arguments to pass to the CachedOp and call the C API to create cached op

[multi_threaded_inference.cc#L207-L233](https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L207-L233)

The above code prepares `flag_key_cstrs` and `flag_val_cstrs` to be passed to the cached op.
The C API call is made with `MXCreateCachedOpEX`. This creates the thread safe version of cached op, since `thread_safe` (the last parameter to `MXCreateCachedOpEX`) is set to true. When it is set to false, a regular CachedOp is created instead of CachedOpThreadSafe.


### Step 4: Prepare a lambda function which will run in spawned threads

[multi_threaded_inference.cc#L248-L262](https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L248-L262)

The above creates the lambda function, which takes the thread number as its argument.
If `random_sleep` is set, the thread sleeps for a random duration between 0 and 5 seconds.
Following this, it invokes `MXInvokeCachedOpEx` (from the handle it determines whether to invoke the thread safe version of cached op or not): if the handle was created with `thread_safe` set to false, this invokes CachedOp instead of CachedOpThreadSafe.
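A simplified sketch of the work each spawned thread performs (the name `RunInference` and the output bookkeeping are illustrative; the example's lambda additionally handles `random_sleep`):

```c++
#include <iostream>
#include <vector>

#include <mxnet/c_api.h>

// Run one forward pass of the (thread safe) cached op on the inputs prepared
// for this thread and record the raw output handles.
void RunInference(CachedOpHandle op, int thread_id,
                  const std::vector<NDArrayHandle> &inputs,
                  std::vector<NDArrayHandle> *outputs_out) {
  int num_outputs = 0;
  NDArrayHandle *outputs = nullptr;
  const int *out_stypes = nullptr;  // storage types of the outputs
  if (MXInvokeCachedOpEx(op, static_cast<int>(inputs.size()),
                         const_cast<NDArrayHandle *>(inputs.data()),
                         &num_outputs, &outputs, &out_stypes) != 0) {
    std::cerr << "thread " << thread_id << ": " << MXGetLastError() << "\n";
    return;
  }
  outputs_out->assign(outputs, outputs + num_outputs);
}
```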
### Step 5: Spawn multiple threads and wait for all threads to complete

[multi_threaded_inference.cc#L264-L276](https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L264-L276)

This spawns multiple threads, joins them, and waits for all ops to complete.
An alternative is to wait on the output ndarray inside each thread and remove the WaitAll after the join.
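A condensed sketch of the spawn/join/WaitAll pattern, reusing the hypothetical `RunInference` helper from the previous step (variable names are illustrative):

```c++
#include <functional>
#include <thread>
#include <vector>

#include <mxnet/c_api.h>

#include "mxnet-cpp/MxNetCpp.h"

// Helper sketched in Step 4.
void RunInference(CachedOpHandle op, int thread_id,
                  const std::vector<NDArrayHandle> &inputs,
                  std::vector<NDArrayHandle> *outputs_out);

// Spawn one worker per set of prepared inputs, join them all, then drain the
// engine before reading the results.
void RunAllThreads(CachedOpHandle op,
                   const std::vector<std::vector<NDArrayHandle>> &inputs,
                   std::vector<std::vector<NDArrayHandle>> *outputs) {
  std::vector<std::thread> workers;
  for (size_t i = 0; i < inputs.size(); ++i) {
    workers.emplace_back(RunInference, op, static_cast<int>(i),
                         std::cref(inputs[i]), &(*outputs)[i]);
  }
  for (std::thread &t : workers) {
    t.join();
  }
  mxnet::cpp::NDArray::WaitAll();  // wait for all ops pushed by the workers
}
```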
### Step 6: Post process data to obtain inference results and cleanup

[multi_threaded_inference.cc#L286-L293](https://github.com/apache/incubator-mxnet/example/multi_threaded_inference/multi_threaded_inference.cc#L286-L293)

The above code outputs the results for the different threads and cleans up the thread safe cached op.
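Cleanup typically releases the raw output handles returned by `MXInvokeCachedOpEx` and then the cached op itself; a minimal sketch using the (illustrative) handles from the previous sketches:

```c++
// `per_thread_outputs` and `cached_op` come from the earlier sketches;
// error handling omitted for brevity.
for (std::vector<NDArrayHandle> &outputs : per_thread_outputs) {
  for (NDArrayHandle out : outputs) {
    MXNDArrayFree(out);  // release each raw output array
  }
}
MXFreeCachedOp(cached_op);  // release the thread safe cached op itself
```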
## Current Limitations

1. Only operators tested with the existing model coverage are supported. Other operators and operator types (stateful operators, custom operators) are not supported. Existing model coverage is as follows (this list will keep growing as we test more models with different model types):

    | Models Tested | MKLDNN | CUDNN | NO-CUDNN |
    | --- | --- | --- | --- |
    | imagenet1k-resnet-18 | Yes | Yes | Yes |
    | imagenet1k-resnet-152 | Yes | Yes | Yes |
    | imagenet1k-resnet-50 | Yes | Yes | Yes |

2. Only dense storage types are currently supported.
3. Multi GPU inference is not currently supported.
4. Instantiating multiple instances of SymbolBlockThreadSafe is not supported; parallel inference can only be run on one model per process.
5. Dynamic shapes are not supported in the thread safe cached op.
6. Bulking of ops is not supported.
7. This currently supports only inference use cases; training use cases are not supported.
8. Graph rewrites with the subgraph API are not currently supported.
9. There is currently no frontend API support to run multi threaded inference. Users can use CreateCachedOpEX and InvokeCachedOp in combination with the CPP frontend to run multi-threaded inference as of today.
10. Multi threaded inference with the threaded engine through the Module/Symbolic API and the C Predict API is not currently supported.
11. Exceptions thrown from `wait_to_read` in individual threads can cause issues. Calling invoke from each thread and calling WaitAll after the threads join should still work fine.
12. Tested only on environments supported by CI. This means that MacOS is not supported.

## Future Work

Future work includes increasing model coverage and addressing most of the limitations mentioned under Current Limitations, except the training use case.
For more updates, please subscribe to the discussion activity on the RFC: https://github.com/apache/incubator-mxnet/issues/16431
