mindspore 2.3.0rc1-cp38-cp38-manylinux1_x86_64.whl → 2.3.0rc2-cp38-cp38-manylinux1_x86_64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of mindspore might be problematic.

Files changed (223)
  1. mindspore/.commit_id +1 -1
  2. mindspore/__init__.py +1 -1
  3. mindspore/_akg/akg/utils/tbe_codegen_utils.py +13 -3
  4. mindspore/_c_dataengine.cpython-38-x86_64-linux-gnu.so +0 -0
  5. mindspore/_c_expression.cpython-38-x86_64-linux-gnu.so +0 -0
  6. mindspore/_checkparam.py +20 -0
  7. mindspore/_extends/parse/parser.py +1 -1
  8. mindspore/_extends/parse/standard_method.py +6 -5
  9. mindspore/_mindspore_offline_debug.cpython-38-x86_64-linux-gnu.so +0 -0
  10. mindspore/amp.py +5 -5
  11. mindspore/boost/boost_cell_wrapper.py +1 -1
  12. mindspore/boost/group_loss_scale_manager.py +1 -1
  13. mindspore/common/__init__.py +4 -2
  14. mindspore/common/_register_for_recompute.py +48 -0
  15. mindspore/common/_stub_tensor.py +1 -0
  16. mindspore/common/api.py +56 -4
  17. mindspore/common/dtype.py +5 -3
  18. mindspore/common/dump.py +2 -2
  19. mindspore/common/hook_handle.py +51 -4
  20. mindspore/common/initializer.py +1 -1
  21. mindspore/common/jit_config.py +17 -6
  22. mindspore/common/parameter.py +7 -2
  23. mindspore/common/recompute.py +247 -0
  24. mindspore/common/sparse_tensor.py +2 -2
  25. mindspore/common/symbol.py +1 -1
  26. mindspore/common/tensor.py +74 -36
  27. mindspore/communication/__init__.py +3 -3
  28. mindspore/communication/management.py +30 -30
  29. mindspore/context.py +28 -15
  30. mindspore/dataset/__init__.py +5 -5
  31. mindspore/dataset/audio/__init__.py +2 -2
  32. mindspore/dataset/audio/transforms.py +51 -51
  33. mindspore/dataset/callback/ds_callback.py +2 -2
  34. mindspore/dataset/engine/cache_client.py +1 -1
  35. mindspore/dataset/engine/datasets.py +3 -3
  36. mindspore/dataset/engine/datasets_audio.py +14 -14
  37. mindspore/dataset/engine/datasets_standard_format.py +3 -3
  38. mindspore/dataset/engine/datasets_text.py +38 -38
  39. mindspore/dataset/engine/datasets_user_defined.py +3 -3
  40. mindspore/dataset/engine/datasets_vision.py +68 -68
  41. mindspore/dataset/text/__init__.py +3 -3
  42. mindspore/dataset/text/transforms.py +26 -26
  43. mindspore/dataset/transforms/__init__.py +1 -1
  44. mindspore/dataset/vision/__init__.py +3 -3
  45. mindspore/dataset/vision/transforms.py +92 -92
  46. mindspore/dataset/vision/utils.py +1 -1
  47. mindspore/experimental/optim/adadelta.py +2 -2
  48. mindspore/experimental/optim/adagrad.py +2 -2
  49. mindspore/experimental/optim/adam.py +2 -2
  50. mindspore/experimental/optim/adamax.py +2 -2
  51. mindspore/experimental/optim/adamw.py +2 -2
  52. mindspore/experimental/optim/asgd.py +2 -2
  53. mindspore/experimental/optim/lr_scheduler.py +24 -20
  54. mindspore/experimental/optim/nadam.py +2 -2
  55. mindspore/experimental/optim/optimizer.py +1 -1
  56. mindspore/experimental/optim/radam.py +2 -2
  57. mindspore/experimental/optim/rmsprop.py +2 -2
  58. mindspore/experimental/optim/rprop.py +2 -2
  59. mindspore/experimental/optim/sgd.py +2 -2
  60. mindspore/hal/stream.py +2 -0
  61. mindspore/include/mindapi/base/types.h +5 -0
  62. mindspore/lib/libdnnl.so.2 +0 -0
  63. mindspore/lib/libmindspore.so +0 -0
  64. mindspore/lib/libmindspore_backend.so +0 -0
  65. mindspore/lib/libmindspore_common.so +0 -0
  66. mindspore/lib/libmindspore_core.so +0 -0
  67. mindspore/lib/libmindspore_gpr.so.15 +0 -0
  68. mindspore/lib/libmindspore_grpc++.so.1 +0 -0
  69. mindspore/lib/libmindspore_grpc.so.15 +0 -0
  70. mindspore/lib/libmindspore_shared_lib.so +0 -0
  71. mindspore/lib/libopencv_core.so.4.5 +0 -0
  72. mindspore/lib/libopencv_imgcodecs.so.4.5 +0 -0
  73. mindspore/lib/libopencv_imgproc.so.4.5 +0 -0
  74. mindspore/lib/plugin/ascend/custom_aicpu_ops/op_impl/cpu/aicpu_kernel/impl/libcust_cpu_kernels.so +0 -0
  75. mindspore/lib/plugin/ascend/custom_aicpu_ops/op_impl/cpu/config/cust_aicpu_kernel.json +6 -6
  76. mindspore/lib/plugin/ascend/custom_aicpu_ops/op_proto/libcust_op_proto.so +0 -0
  77. mindspore/lib/plugin/ascend/libdvpp_utils.so +0 -0
  78. mindspore/lib/plugin/ascend/libmindspore_cpu_kernels.so +0 -0
  79. mindspore/lib/plugin/gpu/libcuda_ops.so.10 +0 -0
  80. mindspore/lib/plugin/gpu/libcuda_ops.so.11 +0 -0
  81. mindspore/lib/plugin/gpu10.1/libnccl.so.2 +0 -0
  82. mindspore/lib/plugin/gpu11.1/libnccl.so.2 +0 -0
  83. mindspore/lib/plugin/gpu11.6/libnccl.so.2 +0 -0
  84. mindspore/lib/plugin/libmindspore_ascend.so.2 +0 -0
  85. mindspore/lib/plugin/libmindspore_gpu.so.10.1 +0 -0
  86. mindspore/lib/plugin/libmindspore_gpu.so.11.1 +0 -0
  87. mindspore/lib/plugin/libmindspore_gpu.so.11.6 +0 -0
  88. mindspore/log.py +2 -2
  89. mindspore/mint/__init__.py +457 -0
  90. mindspore/mint/nn/__init__.py +430 -0
  91. mindspore/mint/nn/functional.py +424 -0
  92. mindspore/mint/optim/__init__.py +24 -0
  93. mindspore/mint/optim/adamw.py +186 -0
  94. mindspore/multiprocessing/__init__.py +4 -0
  95. mindspore/nn/__init__.py +3 -0
  96. mindspore/nn/cell.py +51 -47
  97. mindspore/nn/extend/__init__.py +29 -0
  98. mindspore/nn/extend/basic.py +140 -0
  99. mindspore/nn/extend/embedding.py +143 -0
  100. mindspore/nn/extend/layer/__init__.py +27 -0
  101. mindspore/nn/extend/layer/normalization.py +107 -0
  102. mindspore/nn/extend/pooling.py +117 -0
  103. mindspore/nn/generator.py +297 -0
  104. mindspore/nn/layer/basic.py +109 -1
  105. mindspore/nn/layer/container.py +2 -2
  106. mindspore/nn/layer/conv.py +6 -6
  107. mindspore/nn/layer/embedding.py +1 -1
  108. mindspore/nn/layer/normalization.py +21 -43
  109. mindspore/nn/layer/padding.py +4 -0
  110. mindspore/nn/optim/ada_grad.py +2 -2
  111. mindspore/nn/optim/adadelta.py +1 -1
  112. mindspore/nn/optim/adafactor.py +1 -1
  113. mindspore/nn/optim/adam.py +7 -7
  114. mindspore/nn/optim/adamax.py +2 -2
  115. mindspore/nn/optim/adasum.py +2 -2
  116. mindspore/nn/optim/asgd.py +2 -2
  117. mindspore/nn/optim/ftrl.py +1 -1
  118. mindspore/nn/optim/lamb.py +3 -3
  119. mindspore/nn/optim/lars.py +1 -1
  120. mindspore/nn/optim/lazyadam.py +2 -2
  121. mindspore/nn/optim/momentum.py +2 -2
  122. mindspore/nn/optim/optimizer.py +2 -2
  123. mindspore/nn/optim/proximal_ada_grad.py +2 -2
  124. mindspore/nn/optim/rmsprop.py +2 -2
  125. mindspore/nn/optim/rprop.py +2 -2
  126. mindspore/nn/optim/sgd.py +2 -2
  127. mindspore/nn/optim/thor.py +2 -2
  128. mindspore/nn/wrap/cell_wrapper.py +9 -9
  129. mindspore/nn/wrap/grad_reducer.py +5 -5
  130. mindspore/ops/_grad_experimental/grad_comm_ops.py +4 -2
  131. mindspore/ops/_vmap/vmap_grad_nn_ops.py +41 -2
  132. mindspore/ops/_vmap/vmap_math_ops.py +27 -8
  133. mindspore/ops/_vmap/vmap_nn_ops.py +66 -8
  134. mindspore/ops/auto_generate/cpp_create_prim_instance_helper.py +73 -1
  135. mindspore/ops/auto_generate/gen_arg_dtype_cast.py +12 -3
  136. mindspore/ops/auto_generate/gen_arg_handler.py +24 -0
  137. mindspore/ops/auto_generate/gen_extend_func.py +274 -0
  138. mindspore/ops/auto_generate/gen_ops_def.py +889 -22
  139. mindspore/ops/auto_generate/gen_ops_prim.py +3541 -253
  140. mindspore/ops/auto_generate/pyboost_inner_prim.py +282 -0
  141. mindspore/ops/composite/multitype_ops/_compile_utils.py +2 -1
  142. mindspore/ops/composite/multitype_ops/_constexpr_utils.py +9 -0
  143. mindspore/ops/extend/__init__.py +9 -1
  144. mindspore/ops/extend/array_func.py +134 -27
  145. mindspore/ops/extend/math_func.py +3 -3
  146. mindspore/ops/extend/nn_func.py +363 -2
  147. mindspore/ops/function/__init__.py +19 -2
  148. mindspore/ops/function/array_func.py +463 -439
  149. mindspore/ops/function/clip_func.py +7 -18
  150. mindspore/ops/function/grad/grad_func.py +5 -5
  151. mindspore/ops/function/linalg_func.py +4 -4
  152. mindspore/ops/function/math_func.py +260 -243
  153. mindspore/ops/function/nn_func.py +825 -62
  154. mindspore/ops/function/random_func.py +73 -4
  155. mindspore/ops/function/sparse_unary_func.py +1 -1
  156. mindspore/ops/function/vmap_func.py +1 -1
  157. mindspore/ops/functional.py +2 -2
  158. mindspore/ops/op_info_register.py +1 -31
  159. mindspore/ops/operations/__init__.py +2 -3
  160. mindspore/ops/operations/_grad_ops.py +2 -107
  161. mindspore/ops/operations/_inner_ops.py +5 -5
  162. mindspore/ops/operations/_sequence_ops.py +2 -2
  163. mindspore/ops/operations/array_ops.py +11 -233
  164. mindspore/ops/operations/comm_ops.py +32 -32
  165. mindspore/ops/operations/custom_ops.py +7 -89
  166. mindspore/ops/operations/manually_defined/ops_def.py +329 -4
  167. mindspore/ops/operations/math_ops.py +13 -163
  168. mindspore/ops/operations/nn_ops.py +9 -316
  169. mindspore/ops/operations/random_ops.py +1 -1
  170. mindspore/ops/operations/sparse_ops.py +3 -3
  171. mindspore/ops/primitive.py +2 -2
  172. mindspore/ops_generate/arg_dtype_cast.py +12 -3
  173. mindspore/ops_generate/arg_handler.py +24 -0
  174. mindspore/ops_generate/gen_ops_inner_prim.py +2 -0
  175. mindspore/ops_generate/gen_pyboost_func.py +13 -6
  176. mindspore/ops_generate/pyboost_utils.py +2 -17
  177. mindspore/parallel/__init__.py +3 -2
  178. mindspore/parallel/_auto_parallel_context.py +106 -1
  179. mindspore/parallel/_parallel_serialization.py +34 -2
  180. mindspore/parallel/_utils.py +16 -0
  181. mindspore/parallel/algo_parameter_config.py +4 -4
  182. mindspore/parallel/checkpoint_transform.py +249 -77
  183. mindspore/parallel/cluster/process_entity/_api.py +1 -1
  184. mindspore/parallel/parameter_broadcast.py +1 -1
  185. mindspore/parallel/shard.py +1 -1
  186. mindspore/profiler/parser/ascend_analysis/fwk_cann_parser.py +1 -0
  187. mindspore/profiler/parser/ascend_analysis/profiler_info_parser.py +17 -5
  188. mindspore/profiler/parser/ascend_msprof_exporter.py +3 -3
  189. mindspore/profiler/parser/ascend_msprof_generator.py +10 -3
  190. mindspore/profiler/parser/ascend_op_generator.py +26 -9
  191. mindspore/profiler/parser/ascend_timeline_generator.py +7 -4
  192. mindspore/profiler/parser/profiler_info.py +11 -1
  193. mindspore/profiler/profiling.py +13 -5
  194. mindspore/rewrite/api/node.py +12 -12
  195. mindspore/rewrite/api/symbol_tree.py +11 -11
  196. mindspore/run_check/_check_version.py +1 -1
  197. mindspore/safeguard/rewrite_obfuscation.py +2 -2
  198. mindspore/train/amp.py +4 -4
  199. mindspore/train/anf_ir_pb2.py +8 -2
  200. mindspore/train/callback/_backup_and_restore.py +2 -2
  201. mindspore/train/callback/_callback.py +4 -4
  202. mindspore/train/callback/_checkpoint.py +2 -2
  203. mindspore/train/callback/_early_stop.py +2 -2
  204. mindspore/train/callback/_landscape.py +4 -4
  205. mindspore/train/callback/_loss_monitor.py +2 -2
  206. mindspore/train/callback/_on_request_exit.py +2 -2
  207. mindspore/train/callback/_reduce_lr_on_plateau.py +2 -2
  208. mindspore/train/callback/_summary_collector.py +2 -2
  209. mindspore/train/callback/_time_monitor.py +2 -2
  210. mindspore/train/dataset_helper.py +8 -3
  211. mindspore/train/loss_scale_manager.py +2 -2
  212. mindspore/train/metrics/metric.py +3 -3
  213. mindspore/train/mind_ir_pb2.py +22 -17
  214. mindspore/train/model.py +15 -15
  215. mindspore/train/serialization.py +18 -18
  216. mindspore/train/summary/summary_record.py +7 -7
  217. mindspore/train/train_thor/convert_utils.py +3 -3
  218. mindspore/version.py +1 -1
  219. {mindspore-2.3.0rc1.dist-info → mindspore-2.3.0rc2.dist-info}/METADATA +1 -1
  220. {mindspore-2.3.0rc1.dist-info → mindspore-2.3.0rc2.dist-info}/RECORD +223 -209
  221. {mindspore-2.3.0rc1.dist-info → mindspore-2.3.0rc2.dist-info}/WHEEL +0 -0
  222. {mindspore-2.3.0rc1.dist-info → mindspore-2.3.0rc2.dist-info}/entry_points.txt +0 -0
  223. {mindspore-2.3.0rc1.dist-info → mindspore-2.3.0rc2.dist-info}/top_level.txt +0 -0
@@ -138,14 +138,14 @@ def init(backend_name=None):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> from mindspore.communication import init
  >>> init()
@@ -226,14 +226,14 @@ def release():
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> from mindspore.communication import init, release
  >>> init()
@@ -270,14 +270,14 @@ def get_rank(group=GlobalComm.WORLD_COMM_GROUP):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> from mindspore.communication import init, get_rank
  >>> init()
@@ -320,14 +320,14 @@ def get_local_rank(group=GlobalComm.WORLD_COMM_GROUP):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> import mindspore as ms
  >>> from mindspore.communication import init, get_rank, get_local_rank
@@ -373,14 +373,14 @@ def get_group_size(group=GlobalComm.WORLD_COMM_GROUP):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> import mindspore as ms
  >>> from mindspore.communication import init, get_group_size
@@ -425,14 +425,14 @@ def get_local_rank_size(group=GlobalComm.WORLD_COMM_GROUP):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> import mindspore as ms
  >>> from mindspore.communication import init, get_local_rank_size
@@ -480,14 +480,14 @@ def get_world_rank_from_group_rank(group, group_rank_id):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> import mindspore as ms
  >>> from mindspore import set_context
@@ -539,14 +539,14 @@ def get_group_rank_from_world_rank(world_rank_id, group):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> import mindspore as ms
  >>> from mindspore import set_context
@@ -595,14 +595,14 @@ def create_group(group, rank_ids):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun Startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> import mindspore as ms
  >>> from mindspore import set_context
@@ -648,14 +648,14 @@ def destroy_group(group):
 
  For the Ascend devices, users need to prepare the rank table, set rank_id and device_id.
  Please see the `rank table startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/rank_table.html>`_
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/rank_table.html>`_
  for more details.
 
  For the GPU devices, users need to prepare the host file and mpi, please see the `mpirun startup
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/mpirun.html>`_ .
+ <https://www.mindspore.cn/tutorials/experts/en/master/parallel/mpirun.html>`_ .
 
  For the CPU device, users need to write a dynamic cluster startup script, please see the `Dynamic Cluster
- Startup <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/parallel/dynamic_cluster.html>`_ .
+ Startup <https://www.mindspore.cn/tutorials/experts/en/master/parallel/dynamic_cluster.html>`_ .
 
  >>> import mindspore as ms
  >>> from mindspore import set_context
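
The hunks above only retarget documentation links from the r2.3.q1 branch to master; the communication API itself is unchanged. For orientation, here is a minimal, hedged sketch of how these helpers are typically driven, assuming the distributed job has already been launched with one of the startup methods the docstrings reference (rank table, mpirun, or dynamic cluster) and that MindSpore is installed for the target backend. The sub-group name and member list are placeholders, not taken from the diff.

```python
# Hedged sketch: basic use of the communication helpers documented above.
import mindspore as ms
from mindspore.communication import init, get_rank, get_group_size, create_group

ms.set_context(mode=ms.GRAPH_MODE)
init()                         # initialize the default communication backend for the launched job
rank = get_rank()              # global rank of this process
world_size = get_group_size()  # number of processes in the world group

# Hypothetical sub-group over the first two ranks; name and members are placeholders.
if world_size >= 2:
    create_group("group_01", [0, 1])
print(f"rank {rank} of {world_size}")
```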
mindspore/context.py CHANGED
@@ -701,6 +701,7 @@ class _Context:
  "enable_task_opt": (ms_ctx_param.enable_task_opt, bool),
  "enable_grad_comm_opt": (ms_ctx_param.enable_grad_comm_opt, bool),
  "interleaved_matmul_comm": (ms_ctx_param.interleaved_matmul_comm, bool),
+ "bias_add_comm_swap": (ms_ctx_param.bias_add_comm_swap, bool),
  "enable_opt_shard_comm_opt": (ms_ctx_param.enable_opt_shard_comm_opt, bool),
  "enable_begin_end_inline_opt": (ms_ctx_param.enable_begin_end_inline_opt, bool),
  "enable_concat_eliminate_opt": (ms_ctx_param.enable_concat_eliminate_opt, bool),
@@ -758,7 +759,8 @@ def _context():
  strategy_ckpt_save_file=str, full_batch=bool, enable_parallel_optimizer=bool, enable_alltoall=bool,
  all_reduce_fusion_config=list, pipeline_stages=int, pipeline_segments=int,
  pipeline_result_broadcast=bool, parallel_optimizer_config=dict,
- comm_fusion=dict, strategy_ckpt_config=dict)
+ pipeline_config=dict,
+ comm_fusion=dict, strategy_ckpt_config=dict, force_fp32_communication=bool)
  def set_auto_parallel_context(**kwargs):
  r"""
  Set auto parallel context, only data parallel supported on CPU.
@@ -782,11 +784,10 @@ def set_auto_parallel_context(**kwargs):
  parallel_mode parameter_broadcast
  all_reduce_fusion_config strategy_ckpt_load_file
  enable_parallel_optimizer strategy_ckpt_save_file
- parallel_optimizer_config full_batch
- enable_alltoall dataset_strategy
- \ pipeline_stages
- \ pipeline_result_broadcast
- \ auto_parallel_search_mode
+ parallel_optimizer_config dataset_strategy
+ enable_alltoall pipeline_stages
+ pipeline_config auto_parallel_search_mode
+ force_fp32_communication pipeline_result_broadcast
  \ comm_fusion
  \ strategy_ckpt_config
  \ group_ckpt_save_file
@@ -850,6 +851,9 @@
  data parallel training in the benefit of time and memory saving. Currently, auto and semi auto
  parallel mode support all optimizers in both Ascend and GPU. Data parallel mode only supports
  `Lamb` and `AdamWeightDecay` in Ascend . Default: ``False`` .
+ force_fp32_communication (bool): A switch that determines whether reduce operators (AllReduce, ReduceScatter)
+ are forced to use the fp32 data type for communication during communication. True is the enable
+ switch. Default: ``False`` .
  enable_alltoall (bool): A switch that allows AllToAll operators to be generated during communication. If its
  value is ``False`` , there will be a combination of operators such as AllGather, Split and
  Concat instead of AllToAll. Default: ``False`` .
@@ -861,6 +865,12 @@
  Default: ``1`` .
  pipeline_result_broadcast (bool): A switch that broadcast the last stage result to all other stage in pipeline
  parallel inference. Default: ``False`` .
+ pipeline_config (dict): A dict contains the keys and values for setting the pipeline parallelism configuration.
+ It supports the following keys:
+
+ - pipeline_interleave(bool): Indicates whether to enable the interleaved execution mode.
+ - pipeline_scheduler(str): Indicates the scheduling mode for pipeline parallelism. Only support
+ ``gpipe/1f1b``.
  parallel_optimizer_config (dict): A dict contains the keys and values for setting the parallel optimizer
  configure. The configure provides more detailed behavior control about parallel training
  when parallel optimizer is enabled. The configure will be effective when we use
@@ -999,6 +1009,7 @@ def reset_auto_parallel_context():
  - strategy_ckpt_save_file: ''.
  - full_batch: False.
  - enable_parallel_optimizer: False.
+ - force_fp32_communication: False
  - enable_alltoall: False.
  - pipeline_stages: 1.
  - pipeline_result_broadcast: False.
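
The added docstring text above introduces two new `set_auto_parallel_context` keys in rc2: `force_fp32_communication` and `pipeline_config`. Below is a hedged usage sketch; the two new keyword arguments come straight from the diff, while the parallel mode, device count, and stage count are illustrative values only.

```python
# Hedged sketch: enabling the auto-parallel options added in 2.3.0rc2.
import mindspore as ms

ms.set_auto_parallel_context(
    parallel_mode=ms.ParallelMode.SEMI_AUTO_PARALLEL,  # illustrative choice
    device_num=8,                                      # illustrative value
    pipeline_stages=2,                                 # illustrative value
    force_fp32_communication=True,   # new in rc2: run AllReduce/ReduceScatter in fp32
    pipeline_config={                # new in rc2, per the added docstring
        "pipeline_interleave": True,     # interleaved pipeline execution
        "pipeline_scheduler": "1f1b",    # or "gpipe"
    },
)
```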
@@ -1289,7 +1300,7 @@ def set_context(**kwargs):
  If enable_graph_kernel is set to ``True`` , acceleration can be enabled.
  For details of graph kernel fusion, please check
  `Enabling Graph Kernel Fusion
- <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/optimize/graph_fusion_engine.html>`_.
+ <https://www.mindspore.cn/tutorials/experts/en/master/optimize/graph_fusion_engine.html>`_.
  graph_kernel_flags (str):
  Optimization options of graph kernel fusion, and the priority is higher when it conflicts
  with enable_graph_kernel. Only for experienced users.
@@ -1373,7 +1384,7 @@
  range of ['ON', 'OFF'], and the default value is ``'OFF'`` .
 
  - ON: Enable the memory Offload function. On Ascend hardware platform, this parameter does not take effect
- when jit_level of JitConfig is not set 'O0'; This parameter does not take effect when
+ when the graph compilation level is not 'O0'; This parameter does not take effect when
  memory_optimize_level is set 'O1'.
  - OFF: Turn off the memory Offload function.
  ascend_config (dict): Set the parameters specific to Ascend hardware platform. It is not set by default.
@@ -1384,16 +1395,18 @@
  is ``force_fp16`` . The value range is as follows:
 
  - force_fp16: When the operator supports both float16 and float32, select float16 directly.
- - allow_fp32_to_fp16: When the operator does not support the float32 data type, directly reduce
- the precision of float16.
+ - allow_fp32_to_fp16: For cube operators, use the float16. For vector operators,
+ prefer to keep the origin dtype, if the operator in model can support float32,
+ it will keep original dtype, otherwise it will reduce to float16.
  - allow_mix_precision: Automatic mixing precision, facing the whole network operator, according
  to the built-in optimization strategy, automatically reduces the precision of some operators
  to float16 or bfloat16.
  - must_keep_origin_dtype: Keep the accuracy of the original drawing.
  - force_fp32: When the input of the matrix calculation operator is float16 and the output supports
  float16 and float32, output is forced to float32.
- - allow_fp32_to_bf16: When the operator does not support the float32 data type, directly reduce
- the precision of bfloat16.
+ - allow_fp32_to_bf16: For cube operators, use the bfloat16. For vector operators,
+ prefer to keep the origin dtype, if the operator in model can support float32,
+ it will keep original dtype, otherwise it will reduce to bfloat16.
  - allow_mix_precision_fp16: Automatic mixing precision, facing the whole network operator, automatically
  reduces the precision of some operators to float16 according to the built-in optimization strategy.
  - allow_mix_precision_bf16: Automatic mixing precision, facing the whole network operator, according to
@@ -1434,7 +1447,7 @@
 
  - parallel_speed_up_json_path(Union[str, None]): The path to the parallel speed up json file, configuration
  can refer to `parallel_speed_up.json
- <https://gitee.com/mindspore/mindspore/blob/r2.3.q1/config/parallel_speed_up.json>`_ .
+ <https://gitee.com/mindspore/mindspore/blob/master/config/parallel_speed_up.json>`_ .
  If its value is None or '', it does not take effect. Default None.
 
  - recompute_comm_overlap (bool): Enable overlap between recompute ops and communication ops if True.
@@ -1445,11 +1458,11 @@
  Default: False.
  - enable_grad_comm_opt (bool): Enable overlap between dx ops and data parallel communication ops if True.
  Currently, do not support
- `LazyInline <https://www.mindspore.cn/docs/en/r2.3.q1/api_python/mindspore/mindspore.lazy_inline.html>`
+ `LazyInline <https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.lazy_inline.html>`
  Default: False.
  - enable_opt_shard_comm_opt (bool): Enable overlap between forward ops
  and optimizer parallel allgather communication if True. Currently, do not support
- `LazyInline <https://www.mindspore.cn/docs/en/r2.3.q1/api_python/mindspore/mindspore.lazy_inline.html>`
+ `LazyInline <https://www.mindspore.cn/docs/en/master/api_python/mindspore/mindspore.lazy_inline.html>`
  Default: False.
  - compute_communicate_fusion_level (int): Enable the fusion between compute and communicate.
  Default: ``0``.
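
The remaining context.py hunks adjust wording around `memory_offload` and the `ascend_config` precision modes without changing the API surface. As a hedged illustration of how those options are passed to `set_context`, here is one possible combination; the specific values shown (graph mode, Ascend target, offload enabled, `allow_fp32_to_fp16`) are examples drawn from the option lists above, not a configuration recommended by the diff.

```python
# Hedged sketch: passing the set_context options discussed in the hunks above.
import mindspore as ms

ms.set_context(
    mode=ms.GRAPH_MODE,
    device_target="Ascend",
    memory_offload="ON",  # per the docstring, only effective when the graph compilation level is 'O0'
    ascend_config={"precision_mode": "allow_fp32_to_fp16"},  # one of the documented modes
)
```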
@@ -21,7 +21,7 @@ Besides, this module provides APIs to sample data while loading.
 
  We can enable cache in most of the dataset with its key arguments 'cache'. Please notice that cache is not supported
  on Windows platform yet. Do not use it while loading and processing data on Windows. More introductions and limitations
- can refer `Single-Node Tensor Cache <https://www.mindspore.cn/tutorials/experts/en/r2.3.q1/dataset/cache.html>`_ .
+ can refer `Single-Node Tensor Cache <https://www.mindspore.cn/tutorials/experts/en/master/dataset/cache.html>`_ .
 
  Common imported modules in corresponding API examples are as follows:
 
@@ -55,11 +55,11 @@ The specific steps are as follows:
  - Dataset operation: The user uses the dataset object method `.shuffle` / `.filter` / `.skip` / `.split` /
  `.take` / ... to further shuffle, filter, skip, and obtain the maximum number of samples of datasets;
  - Dataset sample transform operation: The user can add data transform operations
- ( `vision transform <https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.\
+ ( `vision transform <https://mindspore.cn/docs/en/master/api_python/mindspore.\
  dataset.transforms.html#module-mindspore.dataset.vision>`_ ,
- `NLP transform <https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.\
+ `NLP transform <https://mindspore.cn/docs/en/master/api_python/mindspore.\
  dataset.transforms.html#module-mindspore.dataset.text>`_ ,
- `audio transform <https://mindspore.cn/docs/en/r2.3.q1/api_python/mindspore.\
+ `audio transform <https://mindspore.cn/docs/en/master/api_python/mindspore.\
  dataset.transforms.html#module-mindspore.dataset.audio>`_ ) to the map
  operation to perform transformations. During data preprocessing, multiple map operations can be defined to
  perform different transform operations to different fields. The data transform operation can also be a
@@ -73,7 +73,7 @@ Quick start of Dataset Pipeline
  -------------------------------
 
  For a quick start of using Dataset Pipeline, download `Load & Process Data With Dataset Pipeline
- <https://www.mindspore.cn/docs/en/r2.3.q1/api_python/samples/dataset/dataset_gallery.html>`_
+ <https://www.mindspore.cn/docs/en/master/api_python/samples/dataset/dataset_gallery.html>`_
  to local and run in sequence.
 
  """
@@ -40,10 +40,10 @@ Descriptions of common data processing terms are as follows:
  The data transform operation can be executed in the data processing pipeline or in the eager mode:
 
  - Pipeline mode is generally used to process big datasets. Examples refer to
- `introduction to data processing pipeline <https://www.mindspore.cn/docs/en/r2.3.q1/api_python/
+ `introduction to data processing pipeline <https://www.mindspore.cn/docs/en/master/api_python/
  mindspore.dataset.html#introduction-to-data-processing-pipeline>`_ .
  - Eager mode is more like a function call to process data. Examples refer to
- `Lightweight Data Processing <https://www.mindspore.cn/tutorials/en/r2.3.q1/advanced/dataset/eager.html>`_ .
+ `Lightweight Data Processing <https://www.mindspore.cn/tutorials/en/master/advanced/dataset/eager.html>`_ .
  """
  from __future__ import absolute_import
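
The last hunk contrasts pipeline mode with the eager mode of running a transform as a plain function call. A hedged sketch of that eager style follows; it uses a synthetic NumPy image rather than real data.

```python
# Hedged sketch: eager mode, where a transform object is called directly on
# in-memory data, outside any dataset pipeline.
import numpy as np
import mindspore.dataset.vision as vision

img = np.random.randint(0, 255, size=(300, 300, 3), dtype=np.uint8)  # fake HWC image
resize = vision.Resize((224, 224))
out = resize(img)   # eager call: the transform runs immediately
print(out.shape)    # (224, 224, 3)
```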