monocle-apptrace 0.0.1__py3-none-any.whl → 0.1.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of monocle-apptrace might be problematic.

Files changed (31)
  1. monocle_apptrace/README.md +52 -28
  2. monocle_apptrace/__init__.py +0 -2
  3. monocle_apptrace/constants.py +22 -0
  4. monocle_apptrace/exporters/file_exporter.py +63 -0
  5. monocle_apptrace/haystack/__init__.py +5 -24
  6. monocle_apptrace/haystack/wrap_node.py +1 -1
  7. monocle_apptrace/haystack/wrap_openai.py +1 -9
  8. monocle_apptrace/haystack/wrap_pipeline.py +22 -9
  9. monocle_apptrace/instrumentor.py +29 -32
  10. monocle_apptrace/langchain/__init__.py +5 -94
  11. monocle_apptrace/llamaindex/__init__.py +7 -63
  12. monocle_apptrace/metamodel/README.md +47 -0
  13. monocle_apptrace/metamodel/entities/README.md +54 -0
  14. monocle_apptrace/metamodel/entities/entity_types.json +157 -0
  15. monocle_apptrace/metamodel/entities/entity_types.py +51 -0
  16. monocle_apptrace/metamodel/maps/haystack_methods.json +25 -0
  17. monocle_apptrace/metamodel/maps/lang_chain_methods.json +106 -0
  18. monocle_apptrace/metamodel/maps/llama_index_methods.json +70 -0
  19. monocle_apptrace/metamodel/spans/README.md +121 -0
  20. monocle_apptrace/metamodel/spans/span_example.json +140 -0
  21. monocle_apptrace/metamodel/spans/span_format.json +55 -0
  22. monocle_apptrace/utils.py +56 -16
  23. monocle_apptrace/wrap_common.py +143 -46
  24. monocle_apptrace/wrapper.py +3 -3
  25. monocle_apptrace-0.1.1.dist-info/METADATA +111 -0
  26. monocle_apptrace-0.1.1.dist-info/RECORD +29 -0
  27. monocle_apptrace-0.0.1.dist-info/METADATA +0 -76
  28. monocle_apptrace-0.0.1.dist-info/RECORD +0 -17
  29. {monocle_apptrace-0.0.1.dist-info → monocle_apptrace-0.1.1.dist-info}/WHEEL +0 -0
  30. {monocle_apptrace-0.0.1.dist-info → monocle_apptrace-0.1.1.dist-info}/licenses/LICENSE +0 -0
  31. {monocle_apptrace-0.0.1.dist-info → monocle_apptrace-0.1.1.dist-info}/licenses/NOTICE +0 -0
@@ -0,0 +1,47 @@
+ # Monocle metamodel
+
+ ## Overview
+ The Monocle metamodel is how standardization is managed across all supported GenAI component stacks. It lists the components that Monocle can identify and extract metadata for. This helps in understanding and analyzing traces from applications that span multiple components and evolve over time, and it is one of the core values Monocle provides to its user community.
+
+ ## Meta model
+ The Monocle metamodel comprises three things:
+ - Entity types: definitions of technology types and the supported vendor implementations.
+ - A JSON format, overlaid on the OpenTelemetry tracing format, that includes the common attributes for each entity type.
+ - A map of the component methods to trace, paired with the instrumentation methods provided by Monocle.
+
+ ### Entity type
+ An entity type defines a type of GenAI component that Monocle understands and for which the Monocle instrumentation can extract the relevant information. There is a fixed set of [entity types](./entity_types.py) defined by Monocle out of the box, e.g. workflow, model, etc. As the GenAI landscape evolves, the Monocle community will introduce a new entity type if the current entities cannot represent a new technology component.
+ Each entity type has a number of supported technology components that Monocle handles out of the box, e.g. LlamaIndex is a supported workflow. The Monocle community will continue to expand the breadth of the project by adding more components.
+
+ ### Span types
+ GenAI applications have specific [types of spans](./spans/README.md#span-types-and-events) where different entities integrate. The Monocle metamodel defines these types and specifies the format for the tracing data and metadata generated in such spans.
+
+ ### Consistent trace format
+ Monocle generates [traces](../../../Monocle_User_Guide.md#traces) which comprise [spans](../../../Monocle_User_Guide.md#spans). Note that Monocle traces are compatible with the [OpenTelemetry format](https://opentelemetry.io/docs/concepts/signals/traces/). Each span is essentially a step in the execution that interacts with one or more GenAI technology components. Please refer to the [full spec of the JSON format](./span_format.json) and a detailed [example](./span_example.json).
+ The ```attributes``` section of the span includes the list of entities used in that span.
+ The runtime data and metadata collected during the execution of that span are included in the ```events``` section of the trace (as per the OTel spec). Each entry in the events corresponds to an entity involved in that span's execution, if it produced any runtime output.
+ Please see the [span format](./spans/README.md) for details.
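
As a quick illustration of consuming this format, here is an editor's sketch (not part of the package) that reads a JSON-exported trace and prints the entities and events of each span. The `trace.json` file name and the top-level list-of-spans layout are assumptions; the attribute keys and event names follow the span format documented in this metamodel.

```python
# Editor's sketch: walking spans that follow the documented Monocle span format.
# "trace.json" and the list-of-spans layout are assumptions.
import json

with open("trace.json") as f:
    spans = json.load(f)

for span in spans:
    attrs = span.get("attributes", {})
    print("span type:", attrs.get("span.type"))
    # Entities are flattened as entity.<index>.<field> in the attributes section.
    for i in range(1, int(attrs.get("entity.count", 0)) + 1):
        print("  entity:", attrs.get(f"entity.{i}.name"), attrs.get(f"entity.{i}.type"))
    # Events carry the input, output and metadata captured during the span.
    for event in span.get("events", []):
        if event.get("name") in ("data.input", "data.output", "metadata"):
            print(" ", event["name"], event.get("attributes", {}))
```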
+
+ ### Instrumentation method map
+ The map dictates which Monocle tracing method is relevant for a given GenAI tech component method/API. It also specifies the span name to set in the trace output.
+ ```json
+ {
+     "package": "llama_index.core.base.base_query_engine",
+     "object": "BaseQueryEngine",
+     "method": "query",
+     "span_name": "llamaindex.query",
+     "wrapper_package": "wrap_common",
+     "wrapper_method": "task_wrapper"
+ }
+ ```
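
To make the role of this map concrete, here is an editor's sketch of how such entries could be applied with `wrapt`-style monkey patching. It is not the package's actual loader, and the stand-in `_trace_wrapper` deliberately ignores the signatures of the real Monocle wrappers such as `task_wrapper`.

```python
# Editor's sketch of the mechanism, not the package's implementation.
import json

import wrapt


def _trace_wrapper(wrapped, instance, args, kwargs):
    # Stand-in for a Monocle wrapper: do something before/after the wrapped
    # call (a real wrapper would open and close a span here).
    print(f"entering {wrapped.__qualname__}")
    try:
        return wrapped(*args, **kwargs)
    finally:
        print(f"exiting {wrapped.__qualname__}")


def apply_method_map(map_path: str) -> None:
    """Patch every package.object.method listed in a method-map JSON file."""
    with open(map_path) as f:
        entries = json.load(f)["wrapper_methods"]
    for entry in entries:
        wrapt.wrap_function_wrapper(
            entry["package"],                        # e.g. "llama_index.core.base.base_query_engine"
            f"{entry['object']}.{entry['method']}",  # e.g. "BaseQueryEngine.query"
            _trace_wrapper,
        )
```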
+
+ ## Extending the meta model
+ Monocle is highly extensible. This section describes when one would need to extend the meta model. Please refer to the Monocle [User guide](../../../Monocle_User_Guide.md) and [Contributor guide](../../../Monocle_contributor_guide.md) for detailed steps.
+ ### Trace a new method/API
+ Suppose you have overloaded existing functionality in one of the supported components by creating a new function. Monocle doesn't know that this function should be traced, say because it calls an LLM. You can define a new mapping so that the Monocle instrumentation traces this function the same way it handles other LLM invocation functions, as sketched below.
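
A minimal sketch of registering such a mapping at setup time, assuming the `setup_monocle_telemetry`/`WrapperMethod` API referenced in `monocle_apptrace/README.md` in this diff; the module, class and method names below are hypothetical, and the exact parameter names may differ from the released API.

```python
# Editor's sketch; my_package/MyLLMClient/complete are hypothetical names and the
# WrapperMethod/setup_monocle_telemetry signature is assumed from the package README.
from monocle_apptrace.instrumentor import setup_monocle_telemetry
from monocle_apptrace.wrap_common import task_wrapper
from monocle_apptrace.wrapper import WrapperMethod

setup_monocle_telemetry(
    workflow_name="my_app",
    wrapper_methods=[
        WrapperMethod(
            package="my_package.llm_client",   # hypothetical module holding the new function
            object="MyLLMClient",              # hypothetical class
            method="complete",                 # hypothetical method that calls the LLM
            span_name="my_app.llm_complete",
            wrapper=task_wrapper,
        ),
    ],
)
```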
+
+ ### Adding a new component provider
+ Let's say there's a new database with vector search capability that Monocle does not yet support. In this case, first add that database under the ``MonocleEntity.VectorStore`` list. Then extend the method map and test whether the existing Monocle tracing functions have the logic to trace the new component effectively. If not, you might need to implement a new method to cover the gap and update the mapping table accordingly; the sketch after this paragraph shows the kind of changes involved.
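
For illustration only, the sketch below shows the two kinds of additions this implies for a hypothetical `my_vector_db` provider: a new member in the `VectorStore` enum from `entity_types.py`, and a new method-map entry pointing at an assumed client class.

```python
# Editor's sketch; "my_vector_db" and its client module/class/method are hypothetical.
import enum


class VectorStore(enum.Enum):
    # Existing members from entity_types.py ...
    generic = 0
    chroma = 1
    aws_es = 2
    Milvus = 3
    Pinecone = 4
    # ... plus the new provider.
    my_vector_db = 5


# Corresponding method-map entry so the provider's search call gets traced.
new_mapping = {
    "package": "my_vector_db.client",      # hypothetical module
    "object": "SearchClient",              # hypothetical class
    "method": "similarity_search",         # hypothetical method
    "span_name": "my_vector_db.search",
    "wrapper_package": "wrap_common",
    "wrapper_method": "task_wrapper",
}
```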
+
+ ### Support a new type of entity
+ If there's a new component that can't be mapped to any of the existing entity types, supporting it requires extending the metamodel and implementing new instrumentation. We recommend initiating a discussion with the Monocle community to add the support.
@@ -0,0 +1,54 @@
+ # Monocle Entities
+ The entity type defines the type of GenAI component that Monocle understands and for which the Monocle instrumentation can extract the relevant information. There is a fixed set of [entity types](./entity_types.py) defined by Monocle out of the box, e.g. workflow, model, etc. As the GenAI landscape evolves, the Monocle community will introduce a new entity type if the current entities cannot represent a new technology component.
+
+ ## Entity Types
+ The following attributes are supported for all entities:
+
+ | Name | Description | Required |
+ | - | - | - |
+ | name | Entity name generated by Monocle | Required |
+ | type | Monocle Entity type | Required |
+
+ ### MonocleEntity.Workflow
+ Workflow, i.e. the core application code. Supported types are:
+ - generic
+ - langchain
+ - llama_index
+ - haystack
+
+ ### MonocleEntity.Model
+ GenAI models. Supported types are:
+ - generic
+ - llm
+ - embedding
+
+ The following attributes are supported for all model-type entities:
+
+ | Name | Description | Required |
+ | - | - | - |
+ | model_name | Name of model | Required |
+
+ ### MonocleEntity.AppHosting
+ Application hosting services where the workflow code runs. Supported types are:
+ - generic
+ - aws_lambda
+ - aws_sagemaker
+ - azure_func
+ - github_codespace
+ - azure_mlw
+
+ ### MonocleEntity.Inference
+ Model hosting infrastructure and services. Supported types are:
+ - generic
+ - nvidia_triton
+ - openai
+ - azure_oai
+ - aws_sagemaker
+ - aws_bedrock
+ - hugging_face
+
+ ### MonocleEntity.VectorStore
+ Vector search data stores. Supported types are:
+ - generic
+ - chroma
+ - aws_es
+ - milvus
+ - pinecone
@@ -0,0 +1,157 @@
+ {
+     "description": "Monocle entities represents kinds GenAI technology components and their implementations supported by Monocle",
+     "monocle_entities": [
+         {
+             "attributes" : [
+                 {
+                     "attribute_name": "name",
+                     "attribute_description": "Monocle entity name",
+                     "required": true
+                 },
+                 {
+                     "attribute_name": "type",
+                     "attribute_description": "Monocle entity type",
+                     "required": true
+                 }
+             ],
+             "entities": [
+                 {
+                     "name": "workflow",
+                     "attributes" : [],
+                     "types": [
+                         {
+                             "type": "llama_index",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "langchain",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "haystack",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "generic",
+                             "attributes" : []
+                         }
+                     ]
+                 },
+                 {
+                     "name": "model",
+                     "attributes" : [
+                         {
+                             "attribute_name": "model_name",
+                             "attribute_description": "Model name",
+                             "required": true
+                         }
+                     ],
+                     "types": [
+                         {
+                             "type": "llm",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "embedding",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "generic",
+                             "attributes" : []
+                         }
+                     ]
+                 },
+                 {
+                     "name": "vector_store",
+                     "attributes" : [],
+                     "types": [
+                         {
+                             "type": "chroma",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "aws_es",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "milvus",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "pinecone",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "generic",
+                             "attributes" : []
+                         }
+                     ]
+                 },
+                 {
+                     "name": "app_hosting",
+                     "attributes" : [],
+                     "types": [
+                         {
+                             "type": "aws_lambda",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "aws_sagemaker",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "azure_func",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "azure_mlw",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "github_codespace",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "generic",
+                             "attributes" : []
+                         }
+                     ]
+                 },
+                 {
+                     "name": "inference",
+                     "attributes" : [],
+                     "types": [
+                         {
+                             "type": "aws_sagemaker",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "aws_bedrock",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "azure_oai",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "openai",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "nvidia_triton",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "hugging_face",
+                             "attributes" : []
+                         },
+                         {
+                             "type": "generic",
+                             "attributes" : []
+                         }
+                     ]
+                 }
+             ]
+         }
+     ]
+ }
@@ -0,0 +1,51 @@
+ # Monocle meta model:
+ # Monocle Entities --> Entity Type --> Entity
+
+ import enum
+
+
+ class MonocleEntity:
+     # Supported workflow/language frameworks
+     class Workflow(enum.Enum):
+         generic = 0
+         langchain = 1
+         llama_index = 2
+         haystack = 3
+
+     # Supported model types
+     class Model(enum.Enum):
+         generic = 0
+         llm = 1
+         embedding = 2
+
+     # Supported vector databases
+     class VectorStore(enum.Enum):
+         generic = 0
+         chroma = 1
+         aws_es = 2
+         Milvus = 3
+         Pinecone = 4
+
+     # Supported application hosting frameworks
+     class AppHosting(enum.Enum):
+         generic = 0
+         aws_lambda = 1
+         aws_sagemaker = 2
+         azure_func = 3
+         github_codespace = 4
+         azure_mlw = 5
+
+     # Supported inference infra/services
+     class Inference(enum.Enum):
+         generic = 0
+         nvidia_triton = 1
+         openai = 2
+         azure_oai = 3
+         aws_sagemaker = 4
+         aws_bedrock = 5
+         hugging_face = 6
+
+
+ class SpanType(enum.Enum):
+     internal = 0
+     retrieval = 2
+     inference = 3
+     workflow = 4
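
For orientation, a small editor's usage sketch of these enums; the import path mirrors the package layout shown in this diff, and whether these modules are importable in 0.1.1 is an assumption.

```python
# Editor's illustration; the import path follows the file layout in this diff.
from monocle_apptrace.metamodel.entities.entity_types import MonocleEntity, SpanType

print(MonocleEntity.Workflow.langchain.name)    # "langchain"
print(MonocleEntity.Inference.azure_oai.value)  # 3
print(SpanType.retrieval.name)                  # "retrieval"
```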
@@ -0,0 +1,25 @@
+ {
+     "wrapper_methods" : [
+         {
+             "package": "haystack.components.generators.openai",
+             "object": "OpenAIGenerator",
+             "method": "run",
+             "wrapper_package": "haystack.wrap_openai",
+             "wrapper_method": "wrap_openai"
+         },
+         {
+             "package": "haystack.components.generators.chat.openai",
+             "object": "OpenAIChatGenerator",
+             "method": "run",
+             "wrapper_package": "haystack.wrap_openai",
+             "wrapper_method": "wrap_openai"
+         },
+         {
+             "package": "haystack.core.pipeline.pipeline",
+             "object": "Pipeline",
+             "method": "run",
+             "wrapper_package": "haystack.wrap_pipeline",
+             "wrapper_method": "wrap"
+         }
+     ]
+ }
@@ -0,0 +1,106 @@
+ {
+     "wrapper_methods" : [
+         {
+             "package": "langchain.prompts.base",
+             "object": "BasePromptTemplate",
+             "method": "invoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "langchain.prompts.base",
+             "object": "BasePromptTemplate",
+             "method": "ainvoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         },
+         {
+             "package": "langchain.chat_models.base",
+             "object": "BaseChatModel",
+             "method": "invoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "llm_wrapper"
+         },
+         {
+             "package": "langchain.chat_models.base",
+             "object": "BaseChatModel",
+             "method": "ainvoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "allm_wrapper"
+         },
+         {
+             "package": "langchain_core.language_models.llms",
+             "object": "LLM",
+             "method": "_generate",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "llm_wrapper"
+         },
+         {
+             "package": "langchain_core.language_models.llms",
+             "object": "LLM",
+             "method": "_agenerate",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "llm_wrapper"
+         },
+         {
+             "package": "langchain_core.retrievers",
+             "object": "BaseRetriever",
+             "method": "invoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "langchain_core.retrievers",
+             "object": "BaseRetriever",
+             "method": "ainvoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         },
+         {
+             "package": "langchain.schema",
+             "object": "BaseOutputParser",
+             "method": "invoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "langchain.schema",
+             "object": "BaseOutputParser",
+             "method": "ainvoke",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         },
+         {
+             "package": "langchain.schema.runnable",
+             "object": "RunnableSequence",
+             "method": "invoke",
+             "span_name": "langchain.workflow",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "langchain.schema.runnable",
+             "object": "RunnableSequence",
+             "method": "ainvoke",
+             "span_name": "langchain.workflow",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         },
+         {
+             "package": "langchain.schema.runnable",
+             "object": "RunnableParallel",
+             "method": "invoke",
+             "span_name": "langchain.workflow",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "langchain.schema.runnable",
+             "object": "RunnableParallel",
+             "method": "ainvoke",
+             "span_name": "langchain.workflow",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         }
+     ]
+ }
@@ -0,0 +1,70 @@
+ {
+     "wrapper_methods" : [
+         {
+             "package": "llama_index.core.indices.base_retriever",
+             "object": "BaseRetriever",
+             "method": "retrieve",
+             "span_name": "llamaindex.retrieve",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "llama_index.core.indices.base_retriever",
+             "object": "BaseRetriever",
+             "method": "aretrieve",
+             "span_name": "llamaindex.retrieve",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         },
+         {
+             "package": "llama_index.core.base.base_query_engine",
+             "object": "BaseQueryEngine",
+             "method": "query",
+             "span_name": "llamaindex.query",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "llama_index.core.base.base_query_engine",
+             "object": "BaseQueryEngine",
+             "method": "aquery",
+             "span_name": "llamaindex.query",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         },
+         {
+             "package": "llama_index.core.llms.custom",
+             "object": "CustomLLM",
+             "method": "chat",
+             "span_name": "llamaindex.llmchat",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "task_wrapper"
+         },
+         {
+             "package": "llama_index.core.llms.custom",
+             "object": "CustomLLM",
+             "method": "achat",
+             "span_name": "llamaindex.llmchat",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "atask_wrapper"
+         },
+         {
+             "package": "llama_index.llms.openai.base",
+             "object": "OpenAI",
+             "method": "chat",
+             "span_name": "llamaindex.openai",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "llm_wrapper",
+             "span_name_getter_package" : "llamaindex",
+             "span_name_getter_mothod" : "get_llm_span_name_for_openai"
+         },
+         {
+             "package": "llama_index.llms.openai.base",
+             "object": "OpenAI",
+             "method": "achat",
+             "span_name": "llamaindex.openai",
+             "wrapper_package": "wrap_common",
+             "wrapper_method": "allm_wrapper"
+         }
+     ]
+ }
@@ -0,0 +1,121 @@
+ # Monocle Span format
+ Monocle generates [traces](../../../../Monocle_User_Guide.md#traces) which comprise [spans](../../../../Monocle_User_Guide.md#spans). Note that Monocle traces are compatible with the [OpenTelemetry format](https://opentelemetry.io/docs/concepts/signals/traces/). Each span is essentially a step in the execution that interacts with one or more GenAI technology components. This document explains the [span format](./span_format.json) that Monocle generates for GenAI application tracing.
+
+ Per the OpenTelemetry convention, each span contains an attributes section and an events section. In a Monocle-generated trace, the attributes section includes details of the GenAI entities used in the span. The events section includes the input, output and metadata related to the execution of that span.
+
+ ## Attributes
+ The attributes section includes details of the GenAI entities used in the span. For each entity used in the span, it includes the entity name and entity type. For every type of entity, the required and optional attributes are listed below.
+ ### JSON format
+ ```json
+ attributes:
+     "span.type": "Monocle-span-type",
+     "entity.count": "count-of-entities",
+
+     "entity.<index>.name": "Monocle-Entity-name",
+     "entity.<index>.type": "MonocleEntity.<entity-type>"
+     ...
+ ```
+ The ```entity.count``` indicates the total number of entities used in the given span. For each entity, the details are captured in ```entity.<index>.X```. For example,
+ ```json
+ "attributes": {
+     "span.type": "Inference",
+     "entity.count": 2,
+     "entity.1.name": "AzureOpenAI",
+     "entity.1.type": "Inference.Azure_oai",
+     "entity.2.name": "gpt-35-turbo",
+     "entity.2.type": "Model.LLM",
+     "entity.2.model_name": "gpt-35-turbo"
+ }
+ ```
+
+ ### Entity type specific attributes
+ #### MonocleEntity.Workflow
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | Entity name generated by Monocle | Name String | Required |
+ | type | Monocle Entity type | MonocleEntity.Workflow | Required |
+ | optional-attribute | Additional attribute specific to entity | | Optional |
+
+ #### MonocleEntity.Model
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | Entity name generated by Monocle | Name String | Required |
+ | type | Monocle Entity type | MonocleEntity.Model | Required |
+ | model_name | Name of model | String | Required |
+ | optional-attribute | Additional attribute specific to entity | | Optional |
+
+ #### MonocleEntity.AppHosting
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | Entity name generated by Monocle | Name String | Required |
+ | type | Monocle Entity type | MonocleEntity.AppHosting | Required |
+ | optional-attribute | Additional attribute specific to entity | | Optional |
+
+ #### MonocleEntity.Inference
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | Entity name generated by Monocle | Name String | Required |
+ | type | Monocle Entity type | MonocleEntity.Inference | Required |
+ | optional-attribute | Additional attribute specific to entity | | Optional |
+
+ #### MonocleEntity.VectorStore
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | Entity name generated by Monocle | Name String | Required |
+ | type | Monocle Entity type | MonocleEntity.VectorStore | Required |
+ | optional-attribute | Additional attribute specific to entity | | Optional |
+
+ ## Events
+ The events section includes the input, output and metadata generated by that span's execution. For each type of span, the required and optional input, output and metadata items are listed below. If no data is generated in the span, the events will be an empty array.
+
+ ### JSON format
+ ```json
+ "events" : [
+     {
+         "name": "data.input",
+         "timestamp": "UTC timestamp",
+         "attributes": {
+             "input_attribute": "value"
+         }
+     },
+     {
+         "name": "data.output",
+         "timestamp": "UTC timestamp",
+         "attributes": {
+             "output_attribute": "value"
+         }
+     },
+     {
+         "name": "metadata",
+         "timestamp": "UTC timestamp",
+         "attributes": {
+             "metadata_attribute": "value"
+         }
+     }
+ ]
+ ```
+
+ ## Span types and events
+ The ```span.type``` captured in the ```attributes``` section of the span dictates the format of the ```events```.
+ ### SpanType.Retrieval
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | event name | data.input or data.output or metadata | Required |
+ | timestamp | timestamp when the event occurred | UTC timestamp | Required |
+ | attributes | input/output/metadata attributes generated in span | Dictionary | Required |
+
+ ### SpanType.Inference
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | event name | data.input or data.output or metadata | Required |
+ | timestamp | timestamp when the event occurred | UTC timestamp | Required |
+ | attributes | input/output/metadata attributes generated in span | Dictionary | Required |
+
+ ### SpanType.Workflow
+ | Name | Description | Values | Required |
+ | - | - | - | - |
+ | name | event name | data.input or data.output or metadata | Required |
+ | timestamp | timestamp when the event occurred | UTC timestamp | Required |
+ | attributes | input/output/metadata attributes generated in span | Dictionary | Required |
+
+ ### SpanType.Internal
+ Events will be empty.