azureml-registry-tools 0.1.0a27__py3-none-any.whl → 0.1.0a29__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,2 +1,65 @@
- <!-- REMOVE THESE HEADERS AFTER UPDATING -->
- <!-- `description.md` is required. -->
+ # Description
+
+ Include 1-2 sentences describing the model's core value proposition.
+
+ ## Azure Direct Models
+ For Microsoft PMs: Include 1-2 sentences describing the core value proposition of Azure Direct Models.
+
+ # Technical specs
+ Short paragraph describing the model and model family.
+
+ ### Training cut-off date
+ Date the training was finished.
+
+ ### Input formats
+ Specify the preferred input format for interacting with the model, especially if it was trained or fine-tuned on structured prompts. Provide a response output schema to illustrate the expected format.
+
+ ### Supported languages
+ Supported input human languages.
+
+ ## Supported Azure regions
+ List of supported Azure regions.
+
+ ## Sample JSON response
+ Input and output.
+
+ ## Model architecture
+ 10-20 word description of the model architecture.
+
+ # Long context
+ Indicate whether the model supports extended context lengths and describe the types of tasks this enables.
+
+ ## Optimizing model performance
+ Describe the methods and best practices used to improve the model's efficiency, accuracy, and cost-effectiveness in real-world use.
+
+ ## Additional assets
+ List of additional assets (e.g. training data, technical reports, data processing code, model training code, model inference code, model evaluation code), if any, that are made available, with a link, a description of how each can be accessed, and what licenses, if any, relate to their use.
+
+ # Key capabilities
+
+ ## About this model
+ A blurb to share more about the model, what it excels at, and why it's valuable for developers.
+
+ ## Key model capabilities
+ List 3-4 core capabilities of the model and include a value proposition statement.
+
+ # Pricing
+ Pricing is based on a number of factors. See pricing details here. (Please link the last sentence to the ACOM pricing page.)
+
+ # Use cases
+ See Responsible AI for additional considerations for responsible use.
+
+ ## Key use cases
+ Key use cases are the main practical applications of a model, specific to an industry and task. Short paragraph describing the intended uses of the model. Highlight specific scenarios where the model excels.
+
+ ## Out of scope use cases
+ Short paragraph describing model limitations and constraints, and identifying any restricted or prohibited uses.
+
+ # Distribution channels
+ A list of the methods of distribution (e.g. enterprise or subscription-based access through existing software suites or enterprise-specific solutions; public or subscription-based access through an API; public or proprietary access through integrated development environments, device-specific applications or firmware, open-source repositories) through which the model can be made available to downstream providers both within and outside the EU.
+
+ # Azure Direct Models
+ For Microsoft PMs: Include 1-2 sentences describing the core value proposition of Azure Direct Models.
+
+ # More information
+ Whatever you would like to include here!
@@ -1,2 +1,10 @@
- <!-- REMOVE THESE HEADERS AFTER UPDATING -->
- <!-- `evaluation.md` is highly recommended, but not required. It captures information about the performance of the model. We highly recommend including this section as this information is often used to decide what model to use. -->
+ # Benchmarks
+
+ ## Quality and performance evaluations
+ Summarize the model’s performance across public and internal benchmarks and describe how it compares to other third-party models. Include the types of benchmarks used, grouped by capability areas (reasoning, language understanding, math, multilingual, etc.) and limitations (knowledge capacity, etc.). Include specific datasets and tasks that demonstrate the model’s performance across diverse domains, highlight any custom or adversarial evaluations, and note any planned mitigations.
+
+ ## Benchmarking methodology
+ Share additional details about your team’s benchmarking methodology, including how prompts are standardized across models for fair comparison. Note any exceptions and clarify what is and isn’t allowed when adapting prompts. If needed, organize the appendix (A, B, C, etc.) based on specific benchmarks such as robustness, short & long context, multilingual, etc.
+
+ ## Public data summary
+ Link to the relevant public data summary or summaries for this model.
@@ -251,7 +251,7 @@
  "inputModalities": {
  "description": "Input modalities supported (e.g., text, image) as a comma-separated string",
  "type": "string",
- "pattern": "^(audio|csv|embeddings|image|json|pdf|text|video)(?:\\s*,\\s*(audio|csv|embeddings|image|json|pdf|text|video))*$"
+ "pattern": "^(audio|csv|embeddings|image|json|pdf|text|video|code)(?:\\s*,\\s*(audio|csv|embeddings|image|json|pdf|text|video|code))*$"
  },
  "inference_supported_envs": {
  "description": "Supported inference environments",
@@ -361,7 +361,7 @@
  "modelCapabilities": {
  "description": "Model capabilities (e.g., agents, assistants) as a comma-separated string",
  "type": "string",
- "pattern": "^(agents|assistants|routing|reasoning|streaming|tool-calling)(?:\\s*,\\s*(agents|assistants|routing|reasoning|streaming|tool-calling))*$"
+ "pattern": "^(agents|agentsV2|assistants|routing|reasoning|streaming|tool-calling|function-calling|image-input)(?:\\s*,\\s*(agents|agentsV2|assistants|routing|reasoning|streaming|tool-calling|function-calling|image-input))*$"
  },
  "modelHash": {
  "description": "Hash of the model",
@@ -395,7 +395,7 @@
  "task": {
  "description": "Tasks supported by the model as a comma-separated string",
  "type": "string",
- "pattern": "^(audio-analysis|audio-classification|audio-generation|automatic-speech-recognition|chat-completion|completions|content-filters|content-safety|conversational-ai|custom-extraction|data-generation|document-analysis|document-ingestion|document-translation|embeddings|face-detection|fill-mask|forecasting|image-analysis|image-classification|image-feature-extraction|image-text-to-text|image-to-image|image-to-text|intelligent-content-processing|intelligent-document-processing|text-pii-extraction|conversation-pii-extraction|document-pii-extraction|detect-language|optical-character-recognition|protein-sequence-generation|protein-structure-prediction|responses|responsible-ai|retrosynthesis-prediction|summarization|text-analysis|text-analytics|text-classification|text-generation|text-to-image|text-to-speech|time-series-forecasting|translation|speech-to-text|speech-translation|video-analysis|video-generation|video-text-to-text|visual-question-answering|zero-shot-classification|zero-shot-image-classification|materials-design|atomistic-modelling|image-to-3D|text-to-3D|3D-generation|task-completion-verification|action-affordance|next-plausible-action-prediction|Structure-Prediction|Genomics|biomolecular-complex-structure-prediction|structure-prediction|protein-folding)(?:\\s*,\\s*(audio-analysis|audio-classification|audio-generation|automatic-speech-recognition|chat-completion|completions|content-filters|content-safety|conversational-ai|custom-extraction|data-generation|document-analysis|document-ingestion|document-translation|embeddings|face-detection|fill-mask|forecasting|image-analysis|image-classification|image-feature-extraction|image-text-to-text|image-to-image|image-to-text|intelligent-content-processing|intelligent-document-processing|optical-character-recognition|protein-sequence-generation|protein-structure-prediction|responses|responsible-ai|summarization|text-analysis|text-analytics|text-classification|text-generation|text-pii-extraction|conversation-pii-extraction|document-pii-extraction|detect-language|text-to-image|text-to-speech|time-series-forecasting|translation|speech-to-text|speech-translation|video-analysis|video-generation|video-text-to-text|visual-question-answering|zero-shot-classification|zero-shot-image-classification|materials-design|atomistic-modelling|image-to-3D|text-to-3D|3D-generation|task-completion-verification|action-affordance|next-plausible-action-prediction|Structure-Prediction|Genomics|biomolecular-complex-structure-prediction|structure-prediction|protein-folding))*$"
+ "pattern": "^(audio-analysis|audio-classification|audio-generation|automatic-speech-recognition|chat-completion|completions|content-filters|content-safety|conversational-ai|custom-extraction|data-generation|document-analysis|document-ingestion|document-translation|embeddings|face-detection|fill-mask|forecasting|image-analysis|image-classification|image-feature-extraction|image-text-to-text|image-to-image|image-to-text|intelligent-content-processing|intelligent-document-processing|text-pii-extraction|conversation-pii-extraction|document-pii-extraction|detect-language|optical-character-recognition|protein-sequence-generation|protein-structure-prediction|responses|responsible-ai|retrosynthesis-prediction|summarization|text-analysis|text-analytics|text-classification|text-generation|text-to-image|text-to-speech|time-series-forecasting|translation|speech-to-text|speech-translation|video-analysis|video-generation|video-text-to-text|visual-question-answering|zero-shot-classification|zero-shot-image-classification|materials-design|atomistic-modelling|image-to-3D|text-to-3D|3D-generation|task-completion-verification|action-affordance|next-plausible-action-prediction|Structure-Prediction|Genomics|biomolecular-complex-structure-prediction|structure-prediction|protein-folding|web-agent-tasks|gui-grounding|messages)(?:\\s*,\\s*(audio-analysis|audio-classification|audio-generation|automatic-speech-recognition|chat-completion|completions|content-filters|content-safety|conversational-ai|custom-extraction|data-generation|document-analysis|document-ingestion|document-translation|embeddings|face-detection|fill-mask|forecasting|image-analysis|image-classification|image-feature-extraction|image-text-to-text|image-to-image|image-to-text|intelligent-content-processing|intelligent-document-processing|optical-character-recognition|protein-sequence-generation|protein-structure-prediction|responses|responsible-ai|summarization|text-analysis|text-analytics|text-classification|text-generation|text-pii-extraction|conversation-pii-extraction|document-pii-extraction|detect-language|text-to-image|text-to-speech|time-series-forecasting|translation|speech-to-text|speech-translation|video-analysis|video-generation|video-text-to-text|visual-question-answering|zero-shot-classification|zero-shot-image-classification|materials-design|atomistic-modelling|image-to-3D|text-to-3D|3D-generation|task-completion-verification|action-affordance|next-plausible-action-prediction|Structure-Prediction|Genomics|biomolecular-complex-structure-prediction|structure-prediction|protein-folding|web-agent-tasks|gui-grounding|messages))*$"
  },
  "textContextWindow": {
  "description": "Context window size",
@@ -593,11 +593,14 @@
  "type": "string",
  "enum": [
  "agents",
+ "agentsV2",
  "assistants",
  "routing",
  "reasoning",
  "streaming",
- "tool-calling"
+ "tool-calling",
+ "function-calling",
+ "image-input"
  ]
  },
  "description": "Model capabilities (e.g., agents, assistants)"
@@ -1,2 +1,22 @@
- <!-- REMOVE THESE HEADERS AFTER UPDATING -->
- <!-- `notes.md` is highly recommended, but not required. It captures information about how your model is created. We highly recommend including this section to provide transparency for the customers. -->
+ # Responsible AI considerations
+
+ ## Safety techniques
+ Policies, techniques and safety frameworks applied to enable model hosting on Foundry (i.e. Provenance).
+
+ Describe the safety alignment strategy used during post-training. Include the types of datasets leveraged and specify the techniques applied, along with the safety objectives they target.
+
+ ## Safety evaluations
+ Describe the evaluation methods used to assess model safety prior to release. Include both quantitative and qualitative approaches, specify the risk categories evaluated [i.e. disallowed content (sexual, violent, hateful, or self-harm content), copyright content/IP, and jailbreaks], and note any collaboration with internal or external safety teams. Refer to other relevant documentation for more details.
+
+ ## Known limitations
+ Outline known limitations and potential risks associated with the model, including fairness, representation, offensive content, reliability, and misuse. Clearly state areas where the model may underperform (non-English languages, sensitive domains, etc.). Provide specific guidance for developers on applying responsible AI practices, legal compliance, and appropriate safeguards for high-risk or consequential use cases.
+
+ # Acceptable use
+
+ ## Acceptable use policy
+ Link to any relevant acceptable use policies. Otherwise, state N/A.
+
+ # Terms of Service
+
+ ## Terms of Service Link
+ Type or category of license (e.g. free/open source or proprietary). If there is no license, describe how access to the model is provided. State the license.
@@ -18,6 +18,7 @@ from azure.identity import DefaultAzureCredential
  from azureml.registry.data.validate_model_schema import validate_model_schema
  from azureml.registry.data.validate_model_variant_schema import validate_model_variant_schema
  from azureml.registry._rest_client.registry_management_client import RegistryManagementClient
+ from azureml.registry.mgmt.util import resolve_from_file_for_asset

  # Windows compatibility patch - must be applied before importing azureml.assets
  from subprocess import run
@@ -98,6 +99,9 @@ def put_system_metadata(ml_client: MLClient, asset: AssetConfig, registry: str,
  registry (str): Name of registry.
  system_metadata (dict): System metadata payload.
  """
+ # First, transform system metadata such that any files are read and their content used
+ system_metadata = {k: resolve_from_file_for_asset(asset, v) for k, v in system_metadata.items()}
+
  # Use RegistryManagementClient for discovery
  registry_mgmt_client = RegistryManagementClient(registry_name=registry)
  discovery = registry_mgmt_client.discovery()
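The added comprehension swaps any metadata value that names a file under the asset's directory for that file's contents before the payload is sent; other values pass through unchanged. A self-contained sketch of the idea using a stand-in resolver (`resolve_value` and the paths below are illustrative; the real helper is `resolve_from_file_for_asset` in the new `util.py` shown later in this diff):

```python
from pathlib import Path


def resolve_value(asset_dir: Path, value):
    """Stand-in for resolve_from_file_for_asset: return file contents when value names a file under asset_dir."""
    candidate = asset_dir / str(value)
    return candidate.read_text() if candidate.is_file() else value


asset_dir = Path("assets/my-model")  # hypothetical asset directory
system_metadata = {"description": "description.md", "publisher": "Contoso"}

# Mirrors the new comprehension: file-backed values become file contents, the rest stay as-is.
system_metadata = {k: resolve_value(asset_dir, v) for k, v in system_metadata.items()}
```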
@@ -120,7 +124,7 @@ def put_system_metadata(ml_client: MLClient, asset: AssetConfig, registry: str,
  }

  print("Attempting to PUT system metadata")
- print(f"Payload: {system_metadata}")
+ print(f"System Metadata Payload: {system_metadata}")

  response = requests.put(url, headers=headers, json=system_metadata)

@@ -0,0 +1,75 @@
+ """File resolution utilities for asset management."""
+
+ import os
+ from pathlib import Path, PurePath
+ from typing import Tuple, Union, Any
+
+ import azureml.assets as assets  # noqa: E402
+
+
+ def is_file_relative_to_asset_path(asset: assets.AssetConfig, value: Any) -> bool:
+     """Check if the value is a file with respect to the asset path.
+
+     Args:
+         asset (AssetConfig): the asset to try and resolve the value for
+         value: value to check
+
+     Returns:
+         bool: True if value represents a file relative to asset path, False otherwise
+     """
+     if not isinstance(value, str) and not isinstance(value, PurePath):
+         return False
+
+     path_value = value if isinstance(value, Path) else Path(value)
+
+     if not path_value.is_relative_to(asset.file_path):
+         path_value = asset._append_to_file_path(path_value)
+
+     return os.path.isfile(path_value)
+
+
+ def resolve_from_file_for_asset(asset: assets.AssetConfig, value: Any) -> Any:
+     """Resolve the value from a file for an asset if it is a file, otherwise returns the value.
+
+     Args:
+         asset (AssetConfig): the asset to try and resolve the value for
+         value: value to try and resolve
+
+     Returns:
+         Any: File content if value is a file path relative to asset, otherwise the original value
+     """
+     if not is_file_relative_to_asset_path(asset, value):
+         return value
+
+     path_value = value if isinstance(value, Path) else Path(value)
+
+     if not path_value.is_relative_to(asset.file_path):
+         path_value = asset._append_to_file_path(path_value)
+
+     is_resolved_from_file, resolved_value = _resolve_from_file(path_value)
+
+     if is_resolved_from_file:
+         return resolved_value
+     else:
+         return value
+
+
+ def _resolve_from_file(value: Union[str, Path]) -> Tuple[bool, Union[str, None]]:
+     """Resolve file content (internal helper).
+
+     Args:
+         value: File path to resolve
+
+     Returns:
+         Tuple[bool, Union[str, None]]: (success, content) where success indicates
+             if file was read and content is the file content
+     """
+     if os.path.isfile(value):
+         try:
+             with open(value, 'r') as f:
+                 content = f.read()
+                 return (True, content)
+         except Exception as e:
+             raise Exception(f"Failed to read file {value}: {e}")
+     else:
+         return (False, None)
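A quick way to exercise the new helper, as a sketch assuming pytest's `tmp_path` fixture (the test itself is not part of the package):

```python
from pathlib import Path

from azureml.registry.mgmt.util import _resolve_from_file


def test_resolve_from_file(tmp_path: Path):
    md = tmp_path / "description.md"
    md.write_text("# Description\nHello")

    # Existing file: content is read and returned.
    assert _resolve_from_file(md) == (True, "# Description\nHello")
    # Missing file: the helper signals that nothing was resolved.
    assert _resolve_from_file(tmp_path / "missing.md") == (False, None)
```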
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: azureml-registry-tools
- Version: 0.1.0a27
+ Version: 0.1.0a29
  Summary: AzureML Registry tools and CLI
  Author: Microsoft Corp
  License: https://aka.ms/azureml-sdk-license
@@ -10,29 +10,30 @@ azureml/registry/_rest_client/registry_management_client.py,sha256=NsqWRaDdOlPyI
  azureml/registry/_rest_client/registry_model_client.py,sha256=LqJGTuYQtBnHSeSbOl5KVbiO6vGBkKQ7HD5jc4fvV3k,14466
  azureml/registry/data/__init__.py,sha256=cW3X6HATz6XF-K_7uKdybTbJb9EZSecBN4J27NGdZmU,231
  azureml/registry/data/asset.yaml.template,sha256=WTgfuvKEBp-EVFSQ0JpU0h4z_ULJdULO9kHmB_9Fr1o,96
- azureml/registry/data/description.md.template,sha256=wQLk54U8hoXU1y9235R4irc6FGYPXGO-x9EHUorP15Q,84
- azureml/registry/data/evaluation.md.template,sha256=JaDecIfLV9vZDUrZzVRPzVHHnKD-BQGBgQ-cw1dHavU,277
+ azureml/registry/data/description.md.template,sha256=DiVAQEXGXoKmhV4LPqE3NupxVtcsuDZ1pn2UA5Fzd6U,2821
+ azureml/registry/data/evaluation.md.template,sha256=FC9U8EI_1Dg9Vz18ftAFDDmTqvpwELDkIDlYqB8C9Dk,1031
  azureml/registry/data/model-variant.schema.json,sha256=AT4Dy6cCtp_SFUfSqYIqcER8AldpYm0QIEy1abY3QWE,1699
- azureml/registry/data/model.schema.json,sha256=LAclWqHYMY0f62Hp33tGETjv2mjSffAwVHNIuXngp6o,46832
+ azureml/registry/data/model.schema.json,sha256=GbpT9kqZ-7t_2YxoU4QX1ojI_yo9XJfR2bAhrG0LgZc,47085
  azureml/registry/data/model.yaml.template,sha256=h5uqAN22FLaWrbPxIb8yVKH9cGDBrIwooXYYfsKhxDw,245
- azureml/registry/data/notes.md.template,sha256=zSRyOR__9NGL2j0tugY7HgFkwkAdcE2pJyyyGsz1SAk,248
+ azureml/registry/data/notes.md.template,sha256=rgGGHQaxfVg6COIzZU8EVBa48sRPnNezVkCMGNyRRNo,1528
  azureml/registry/data/validate_model_schema.py,sha256=OQp2E01kdxSphvUQYQvelSiD24-qUG6nTFuzW60wX2c,8322
  azureml/registry/data/validate_model_variant_schema.py,sha256=JPVNtRBn6qciMu4PaRXOvS86OGGW0cocL2Rri4xYKo8,3629
  azureml/registry/mgmt/__init__.py,sha256=LMhqcEC8ItmmpKZljElGXH-6olHlT3SLl0dJU01OvuM,226
- azureml/registry/mgmt/asset_management.py,sha256=kt607xr81-sTa7qq5ndpVfq4IaCutNsrqZWmmBoZz2g,14189
+ azureml/registry/mgmt/asset_management.py,sha256=NeYjjtOFlXJT1c0s9pW5mBjJgyxQ7zqZbB-YzmNOg9s,14465
  azureml/registry/mgmt/create_asset_template.py,sha256=ejwLuIsmzJOoUePoxbM-eGMg2E3QHfdX-nPMBzYUVMQ,3525
  azureml/registry/mgmt/create_manifest.py,sha256=N9wRmjAKO09A3utN_lCUsM_Ufpj7PL0SJz-XHPHWuyM,9528
  azureml/registry/mgmt/create_model_spec.py,sha256=1PdAcUf-LomvljoT8wKQihXMTLd7DoTgN0qDX4Lol1A,10473
  azureml/registry/mgmt/model_management.py,sha256=STTr_uvdPKV2NaJ5UvS5aMi3yejVF6Hkj9DjofJLQik,7453
  azureml/registry/mgmt/syndication_manifest.py,sha256=8Sfd49QuCA5en5_mIOLE21kZVpnReUXowx_g0TVRgWg,9025
+ azureml/registry/mgmt/util.py,sha256=BeVUsiMbZdO6rc9tHlsFg0AkVX62lzKn4Ko_y329r-I,2385
  azureml/registry/tools/__init__.py,sha256=IAuWWpGfZm__pAkBIxmpJz84QskpkxBr0yDk1TUSnkE,223
  azureml/registry/tools/config.py,sha256=tjPaoBsWtPXBL8Ww1hcJtsr2SuIjPKt79dR8iovcebg,3639
  azureml/registry/tools/create_or_update_assets.py,sha256=7LcuBzwU-HNE9ADG2igFXI696aKR028yTYxMuDtjVmA,16095
  azureml/registry/tools/registry_utils.py,sha256=zgYlCiOONtQJ4yZ9wg8tKVoE8dh6rrjB8hYBGhpV9-0,1403
  azureml/registry/tools/repo2registry_config.py,sha256=eXp_tU8Jyi30g8xGf7wbpLgKEPpieohBANKxMSLzq7s,4873
- azureml_registry_tools-0.1.0a27.dist-info/licenses/LICENSE.txt,sha256=n20rxwp7_NGrrShv9Qvcs90sjI1l3Pkt3m-5OPCWzgs,845
- azureml_registry_tools-0.1.0a27.dist-info/METADATA,sha256=vLbpMUGdaBjXBOIPby_z9ZtQ3xgdbizApSaYpcVccDQ,522
- azureml_registry_tools-0.1.0a27.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- azureml_registry_tools-0.1.0a27.dist-info/entry_points.txt,sha256=iRUkAeQidMnO6RQzpLqMUBTcyYtNzAfSin9WnSdVGLw,147
- azureml_registry_tools-0.1.0a27.dist-info/top_level.txt,sha256=ZOeEa0TAXo6i5wOjwBoqfIGEuxOcKuscGgNSpizqREY,8
- azureml_registry_tools-0.1.0a27.dist-info/RECORD,,
+ azureml_registry_tools-0.1.0a29.dist-info/licenses/LICENSE.txt,sha256=n20rxwp7_NGrrShv9Qvcs90sjI1l3Pkt3m-5OPCWzgs,845
+ azureml_registry_tools-0.1.0a29.dist-info/METADATA,sha256=l6pLVmMr-nw8Cd7XPT2XSlcxd2r6prXEkAK8ch6OGqE,522
+ azureml_registry_tools-0.1.0a29.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ azureml_registry_tools-0.1.0a29.dist-info/entry_points.txt,sha256=iRUkAeQidMnO6RQzpLqMUBTcyYtNzAfSin9WnSdVGLw,147
+ azureml_registry_tools-0.1.0a29.dist-info/top_level.txt,sha256=ZOeEa0TAXo6i5wOjwBoqfIGEuxOcKuscGgNSpizqREY,8
+ azureml_registry_tools-0.1.0a29.dist-info/RECORD,,