hume 0.9.2 → 0.9.3

This diff compares the contents of publicly released package versions as they appear in their respective public registries. It is provided for informational purposes only.
Files changed (140)
  1. package/.mock/definition/api.yml +6 -6
  2. package/.mock/definition/empathic-voice/__package__.yml +2810 -2703
  3. package/.mock/definition/empathic-voice/chat.yml +143 -143
  4. package/.mock/definition/empathic-voice/chatGroups.yml +579 -508
  5. package/.mock/definition/empathic-voice/chats.yml +490 -449
  6. package/.mock/definition/empathic-voice/configs.yml +913 -871
  7. package/.mock/definition/empathic-voice/customVoices.yml +255 -234
  8. package/.mock/definition/empathic-voice/prompts.yml +523 -526
  9. package/.mock/definition/empathic-voice/tools.yml +588 -588
  10. package/.mock/definition/expression-measurement/batch/__package__.yml +1758 -1758
  11. package/.mock/definition/expression-measurement/stream/__package__.yml +486 -485
  12. package/.mock/fern.config.json +3 -3
  13. package/api/resources/empathicVoice/resources/chatGroups/client/Client.d.ts +11 -0
  14. package/api/resources/empathicVoice/resources/chatGroups/client/Client.js +77 -3
  15. package/api/resources/empathicVoice/resources/chatGroups/client/requests/ChatGroupsGetAudioRequest.d.ts +25 -0
  16. package/api/resources/empathicVoice/resources/chatGroups/client/requests/index.d.ts +1 -0
  17. package/api/resources/empathicVoice/resources/chats/client/Client.d.ts +10 -0
  18. package/api/resources/empathicVoice/resources/chats/client/Client.js +63 -2
  19. package/api/resources/empathicVoice/resources/configs/client/Client.js +9 -9
  20. package/api/resources/empathicVoice/resources/configs/client/requests/ConfigsListConfigsRequest.d.ts +1 -1
  21. package/api/resources/empathicVoice/resources/customVoices/client/Client.d.ts +2 -7
  22. package/api/resources/empathicVoice/resources/customVoices/client/Client.js +8 -13
  23. package/api/resources/empathicVoice/resources/customVoices/client/requests/CustomVoicesListCustomVoicesRequest.d.ts +1 -1
  24. package/api/resources/empathicVoice/resources/customVoices/client/requests/PostedCustomVoiceName.d.ts +0 -6
  25. package/api/resources/empathicVoice/resources/prompts/client/Client.js +9 -9
  26. package/api/resources/empathicVoice/resources/prompts/client/requests/PromptsListPromptsRequest.d.ts +2 -2
  27. package/api/resources/empathicVoice/resources/tools/client/Client.js +9 -9
  28. package/api/resources/empathicVoice/resources/tools/client/requests/ToolsListToolsRequest.d.ts +1 -1
  29. package/api/resources/empathicVoice/types/AudioOutput.d.ts +1 -1
  30. package/api/resources/empathicVoice/types/PostedCustomVoice.d.ts +2 -2
  31. package/api/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.d.ts +3 -2
  32. package/api/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.js +2 -1
  33. package/api/resources/empathicVoice/types/PostedCustomVoiceParameters.d.ts +51 -9
  34. package/api/resources/empathicVoice/types/PostedVoice.d.ts +1 -1
  35. package/api/resources/empathicVoice/types/ReturnCustomVoice.d.ts +2 -2
  36. package/api/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.d.ts +3 -2
  37. package/api/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.js +2 -1
  38. package/api/resources/empathicVoice/types/ReturnCustomVoiceParameters.d.ts +51 -9
  39. package/api/resources/empathicVoice/types/ReturnVoice.d.ts +1 -1
  40. package/api/resources/empathicVoice/types/UserInput.d.ts +3 -1
  41. package/api/resources/empathicVoice/types/VoiceNameEnum.d.ts +2 -1
  42. package/api/resources/empathicVoice/types/VoiceNameEnum.js +1 -0
  43. package/api/resources/empathicVoice/types/index.d.ts +0 -2
  44. package/api/resources/empathicVoice/types/index.js +0 -2
  45. package/api/resources/expressionMeasurement/resources/batch/client/Client.js +6 -6
  46. package/api/resources/index.d.ts +1 -1
  47. package/api/resources/index.js +2 -2
  48. package/dist/api/resources/empathicVoice/resources/chatGroups/client/Client.d.ts +11 -0
  49. package/dist/api/resources/empathicVoice/resources/chatGroups/client/Client.js +77 -3
  50. package/dist/api/resources/empathicVoice/resources/chatGroups/client/requests/ChatGroupsGetAudioRequest.d.ts +25 -0
  51. package/dist/api/resources/empathicVoice/resources/chatGroups/client/requests/index.d.ts +1 -0
  52. package/dist/api/resources/empathicVoice/resources/chats/client/Client.d.ts +10 -0
  53. package/dist/api/resources/empathicVoice/resources/chats/client/Client.js +63 -2
  54. package/dist/api/resources/empathicVoice/resources/configs/client/Client.js +9 -9
  55. package/dist/api/resources/empathicVoice/resources/configs/client/requests/ConfigsListConfigsRequest.d.ts +1 -1
  56. package/dist/api/resources/empathicVoice/resources/customVoices/client/Client.d.ts +2 -7
  57. package/dist/api/resources/empathicVoice/resources/customVoices/client/Client.js +8 -13
  58. package/dist/api/resources/empathicVoice/resources/customVoices/client/requests/CustomVoicesListCustomVoicesRequest.d.ts +1 -1
  59. package/dist/api/resources/empathicVoice/resources/customVoices/client/requests/PostedCustomVoiceName.d.ts +0 -6
  60. package/dist/api/resources/empathicVoice/resources/prompts/client/Client.js +9 -9
  61. package/dist/api/resources/empathicVoice/resources/prompts/client/requests/PromptsListPromptsRequest.d.ts +2 -2
  62. package/dist/api/resources/empathicVoice/resources/tools/client/Client.js +9 -9
  63. package/dist/api/resources/empathicVoice/resources/tools/client/requests/ToolsListToolsRequest.d.ts +1 -1
  64. package/dist/api/resources/empathicVoice/types/AudioOutput.d.ts +1 -1
  65. package/dist/api/resources/empathicVoice/types/PostedCustomVoice.d.ts +2 -2
  66. package/dist/api/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.d.ts +3 -2
  67. package/dist/api/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.js +2 -1
  68. package/dist/api/resources/empathicVoice/types/PostedCustomVoiceParameters.d.ts +51 -9
  69. package/dist/api/resources/empathicVoice/types/PostedVoice.d.ts +1 -1
  70. package/dist/api/resources/empathicVoice/types/ReturnCustomVoice.d.ts +2 -2
  71. package/dist/api/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.d.ts +3 -2
  72. package/dist/api/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.js +2 -1
  73. package/dist/api/resources/empathicVoice/types/ReturnCustomVoiceParameters.d.ts +51 -9
  74. package/dist/api/resources/empathicVoice/types/ReturnVoice.d.ts +1 -1
  75. package/dist/api/resources/empathicVoice/types/UserInput.d.ts +3 -1
  76. package/dist/api/resources/empathicVoice/types/VoiceNameEnum.d.ts +2 -1
  77. package/dist/api/resources/empathicVoice/types/VoiceNameEnum.js +1 -0
  78. package/dist/api/resources/empathicVoice/types/index.d.ts +0 -2
  79. package/dist/api/resources/empathicVoice/types/index.js +0 -2
  80. package/dist/api/resources/expressionMeasurement/resources/batch/client/Client.js +6 -6
  81. package/dist/api/resources/index.d.ts +1 -1
  82. package/dist/api/resources/index.js +2 -2
  83. package/dist/serialization/resources/empathicVoice/types/PostedCustomVoice.d.ts +1 -1
  84. package/dist/serialization/resources/empathicVoice/types/PostedCustomVoice.js +1 -1
  85. package/dist/serialization/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.d.ts +1 -1
  86. package/dist/serialization/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.js +1 -1
  87. package/dist/serialization/resources/empathicVoice/types/PostedCustomVoiceParameters.d.ts +9 -2
  88. package/dist/serialization/resources/empathicVoice/types/PostedCustomVoiceParameters.js +9 -2
  89. package/dist/serialization/resources/empathicVoice/types/ReturnCustomVoice.d.ts +1 -1
  90. package/dist/serialization/resources/empathicVoice/types/ReturnCustomVoice.js +1 -1
  91. package/dist/serialization/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.d.ts +1 -1
  92. package/dist/serialization/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.js +1 -1
  93. package/dist/serialization/resources/empathicVoice/types/ReturnCustomVoiceParameters.d.ts +9 -2
  94. package/dist/serialization/resources/empathicVoice/types/ReturnCustomVoiceParameters.js +9 -2
  95. package/dist/serialization/resources/empathicVoice/types/VoiceNameEnum.d.ts +1 -1
  96. package/dist/serialization/resources/empathicVoice/types/VoiceNameEnum.js +1 -0
  97. package/dist/serialization/resources/empathicVoice/types/index.d.ts +0 -2
  98. package/dist/serialization/resources/empathicVoice/types/index.js +0 -2
  99. package/dist/serialization/resources/index.d.ts +1 -1
  100. package/dist/serialization/resources/index.js +2 -2
  101. package/dist/version.d.ts +1 -1
  102. package/dist/version.js +1 -1
  103. package/package.json +1 -1
  104. package/reference.md +573 -607
  105. package/serialization/resources/empathicVoice/types/PostedCustomVoice.d.ts +1 -1
  106. package/serialization/resources/empathicVoice/types/PostedCustomVoice.js +1 -1
  107. package/serialization/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.d.ts +1 -1
  108. package/serialization/resources/empathicVoice/types/PostedCustomVoiceBaseVoice.js +1 -1
  109. package/serialization/resources/empathicVoice/types/PostedCustomVoiceParameters.d.ts +9 -2
  110. package/serialization/resources/empathicVoice/types/PostedCustomVoiceParameters.js +9 -2
  111. package/serialization/resources/empathicVoice/types/ReturnCustomVoice.d.ts +1 -1
  112. package/serialization/resources/empathicVoice/types/ReturnCustomVoice.js +1 -1
  113. package/serialization/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.d.ts +1 -1
  114. package/serialization/resources/empathicVoice/types/ReturnCustomVoiceBaseVoice.js +1 -1
  115. package/serialization/resources/empathicVoice/types/ReturnCustomVoiceParameters.d.ts +9 -2
  116. package/serialization/resources/empathicVoice/types/ReturnCustomVoiceParameters.js +9 -2
  117. package/serialization/resources/empathicVoice/types/VoiceNameEnum.d.ts +1 -1
  118. package/serialization/resources/empathicVoice/types/VoiceNameEnum.js +1 -0
  119. package/serialization/resources/empathicVoice/types/index.d.ts +0 -2
  120. package/serialization/resources/empathicVoice/types/index.js +0 -2
  121. package/serialization/resources/index.d.ts +1 -1
  122. package/serialization/resources/index.js +2 -2
  123. package/version.d.ts +1 -1
  124. package/version.js +1 -1
  125. package/api/resources/empathicVoice/types/ExtendedVoiceArgs.d.ts +0 -9
  126. package/api/resources/empathicVoice/types/VoiceArgs.d.ts +0 -13
  127. package/dist/api/resources/empathicVoice/types/ExtendedVoiceArgs.d.ts +0 -9
  128. package/dist/api/resources/empathicVoice/types/ExtendedVoiceArgs.js +0 -5
  129. package/dist/api/resources/empathicVoice/types/VoiceArgs.d.ts +0 -13
  130. package/dist/api/resources/empathicVoice/types/VoiceArgs.js +0 -5
  131. package/dist/serialization/resources/empathicVoice/types/ExtendedVoiceArgs.d.ts +0 -15
  132. package/dist/serialization/resources/empathicVoice/types/ExtendedVoiceArgs.js +0 -36
  133. package/dist/serialization/resources/empathicVoice/types/VoiceArgs.d.ts +0 -19
  134. package/dist/serialization/resources/empathicVoice/types/VoiceArgs.js +0 -40
  135. package/serialization/resources/empathicVoice/types/ExtendedVoiceArgs.d.ts +0 -15
  136. package/serialization/resources/empathicVoice/types/ExtendedVoiceArgs.js +0 -36
  137. package/serialization/resources/empathicVoice/types/VoiceArgs.d.ts +0 -19
  138. package/serialization/resources/empathicVoice/types/VoiceArgs.js +0 -40
  139. /package/api/resources/empathicVoice/{types/ExtendedVoiceArgs.js → resources/chatGroups/client/requests/ChatGroupsGetAudioRequest.js} +0 -0
  140. /package/{api/resources/empathicVoice/types/VoiceArgs.js → dist/api/resources/empathicVoice/resources/chatGroups/client/requests/ChatGroupsGetAudioRequest.js} +0 -0
@@ -1,528 +1,529 @@
  channel:
- path: /v0/stream/models
- auth: false
- headers:
- X-Hume-Api-Key:
- type: string
- name: humeApiKey
- messages:
- subscribe:
- origin: server
- body: SubscribeEvent
- publish:
- origin: client
- body:
- type: StreamModelsEndpointPayload
- docs: Models endpoint payload
- examples:
- - messages:
- - type: publish
- body: {}
- - type: subscribe
- body: {}
+ path: /v0/stream/models
+ auth: false
+ display-name: Stream
+ headers:
+ X-Hume-Api-Key:
+ type: string
+ name: humeApiKey
+ messages:
+ subscribe:
+ origin: server
+ body: SubscribeEvent
+ publish:
+ origin: client
+ body:
+ type: StreamModelsEndpointPayload
+ docs: Models endpoint payload
+ examples:
+ - messages:
+ - type: publish
+ body: {}
+ - type: subscribe
+ body: {}
  types:
- StreamModelPredictionsJobDetails:
+ StreamModelPredictionsJobDetails:
+ docs: >
+ If the job_details flag was set in the request, details about the current
+ streaming job will be returned in the response body.
+ properties:
+ job_id:
+ type: optional<string>
+ docs: ID of the current streaming job.
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsBurstPredictionsItem:
+ properties:
+ time: optional<TimeRange>
+ emotions: optional<EmotionEmbedding>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsBurst:
+ docs: Response for the vocal burst emotion model.
+ properties:
+ predictions: optional<list<StreamModelPredictionsBurstPredictionsItem>>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsFacePredictionsItem:
+ properties:
+ frame:
+ type: optional<double>
+ docs: Frame number
+ time:
+ type: optional<double>
+ docs: Time in seconds when face detection occurred.
+ bbox: optional<StreamBoundingBox>
+ prob:
+ type: optional<double>
+ docs: The predicted probability that a detected face was actually a face.
+ face_id:
+ type: optional<string>
+ docs: >-
+ Identifier for a face. Not that this defaults to `unknown` unless face
+ identification is enabled in the face model configuration.
+ emotions: optional<EmotionEmbedding>
+ facs: optional<EmotionEmbedding>
+ descriptions: optional<EmotionEmbedding>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsFace:
+ docs: Response for the facial expression emotion model.
+ properties:
+ predictions: optional<list<StreamModelPredictionsFacePredictionsItem>>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsFacemeshPredictionsItem:
+ properties:
+ emotions: optional<EmotionEmbedding>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsFacemesh:
+ docs: Response for the facemesh emotion model.
+ properties:
+ predictions: optional<list<StreamModelPredictionsFacemeshPredictionsItem>>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsLanguagePredictionsItem:
+ properties:
+ text:
+ type: optional<string>
+ docs: A segment of text (like a word or a sentence).
+ position: optional<TextPosition>
+ emotions: optional<EmotionEmbedding>
+ sentiment: optional<Sentiment>
+ toxicity: optional<Toxicity>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsLanguage:
+ docs: Response for the language emotion model.
+ properties:
+ predictions: optional<list<StreamModelPredictionsLanguagePredictionsItem>>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsProsodyPredictionsItem:
+ properties:
+ time: optional<TimeRange>
+ emotions: optional<EmotionEmbedding>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictionsProsody:
+ docs: Response for the speech prosody emotion model.
+ properties:
+ predictions: optional<list<StreamModelPredictionsProsodyPredictionsItem>>
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamModelPredictions:
+ docs: Model predictions
+ properties:
+ payload_id:
+ type: optional<string>
  docs: >
- If the job_details flag was set in the request, details about the current
- streaming job will be returned in the response body.
- properties:
- job_id:
- type: optional<string>
- docs: ID of the current streaming job.
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsBurstPredictionsItem:
- properties:
- time: optional<TimeRange>
- emotions: optional<EmotionEmbedding>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsBurst:
+ If a payload ID was passed in the request, the same payload ID will be
+ sent back in the response body.
+ job_details:
+ type: optional<StreamModelPredictionsJobDetails>
+ docs: >
+ If the job_details flag was set in the request, details about the
+ current streaming job will be returned in the response body.
+ burst:
+ type: optional<StreamModelPredictionsBurst>
  docs: Response for the vocal burst emotion model.
- properties:
- predictions: optional<list<StreamModelPredictionsBurstPredictionsItem>>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsFacePredictionsItem:
- properties:
- frame:
- type: optional<double>
- docs: Frame number
- time:
- type: optional<double>
- docs: Time in seconds when face detection occurred.
- bbox: optional<StreamBoundingBox>
- prob:
- type: optional<double>
- docs: The predicted probability that a detected face was actually a face.
- face_id:
- type: optional<string>
- docs: >-
- Identifier for a face. Not that this defaults to `unknown` unless face
- identification is enabled in the face model configuration.
- emotions: optional<EmotionEmbedding>
- facs: optional<EmotionEmbedding>
- descriptions: optional<EmotionEmbedding>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsFace:
+ face:
+ type: optional<StreamModelPredictionsFace>
  docs: Response for the facial expression emotion model.
- properties:
- predictions: optional<list<StreamModelPredictionsFacePredictionsItem>>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsFacemeshPredictionsItem:
- properties:
- emotions: optional<EmotionEmbedding>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsFacemesh:
+ facemesh:
+ type: optional<StreamModelPredictionsFacemesh>
  docs: Response for the facemesh emotion model.
- properties:
- predictions: optional<list<StreamModelPredictionsFacemeshPredictionsItem>>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsLanguagePredictionsItem:
- properties:
- text:
- type: optional<string>
- docs: A segment of text (like a word or a sentence).
- position: optional<TextPosition>
- emotions: optional<EmotionEmbedding>
- sentiment: optional<Sentiment>
- toxicity: optional<Toxicity>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsLanguage:
+ language:
+ type: optional<StreamModelPredictionsLanguage>
  docs: Response for the language emotion model.
- properties:
- predictions: optional<list<StreamModelPredictionsLanguagePredictionsItem>>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsProsodyPredictionsItem:
- properties:
- time: optional<TimeRange>
- emotions: optional<EmotionEmbedding>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictionsProsody:
+ prosody:
+ type: optional<StreamModelPredictionsProsody>
  docs: Response for the speech prosody emotion model.
- properties:
- predictions: optional<list<StreamModelPredictionsProsodyPredictionsItem>>
- source:
- openapi: streaming-asyncapi.yml
- StreamModelPredictions:
- docs: Model predictions
- properties:
- payload_id:
- type: optional<string>
- docs: >
- If a payload ID was passed in the request, the same payload ID will be
- sent back in the response body.
- job_details:
- type: optional<StreamModelPredictionsJobDetails>
- docs: >
- If the job_details flag was set in the request, details about the
- current streaming job will be returned in the response body.
- burst:
- type: optional<StreamModelPredictionsBurst>
- docs: Response for the vocal burst emotion model.
- face:
- type: optional<StreamModelPredictionsFace>
- docs: Response for the facial expression emotion model.
- facemesh:
- type: optional<StreamModelPredictionsFacemesh>
- docs: Response for the facemesh emotion model.
- language:
- type: optional<StreamModelPredictionsLanguage>
- docs: Response for the language emotion model.
- prosody:
- type: optional<StreamModelPredictionsProsody>
- docs: Response for the speech prosody emotion model.
- source:
- openapi: streaming-asyncapi.yml
- JobDetails:
+ source:
+ openapi: streaming-asyncapi.yml
+ JobDetails:
+ docs: >
+ If the job_details flag was set in the request, details about the current
+ streaming job will be returned in the response body.
+ properties:
+ job_id:
+ type: optional<string>
+ docs: ID of the current streaming job.
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamErrorMessage:
+ docs: Error message
+ properties:
+ error:
+ type: optional<string>
+ docs: Error message text.
+ code:
+ type: optional<string>
+ docs: Unique identifier for the error.
+ payload_id:
+ type: optional<string>
  docs: >
- If the job_details flag was set in the request, details about the current
- streaming job will be returned in the response body.
- properties:
- job_id:
- type: optional<string>
- docs: ID of the current streaming job.
- source:
- openapi: streaming-asyncapi.yml
- StreamErrorMessage:
- docs: Error message
- properties:
- error:
- type: optional<string>
- docs: Error message text.
- code:
- type: optional<string>
- docs: Unique identifier for the error.
- payload_id:
- type: optional<string>
- docs: >
- If a payload ID was passed in the request, the same payload ID will be
- sent back in the response body.
- job_details:
- type: optional<JobDetails>
- docs: >
- If the job_details flag was set in the request, details about the
- current streaming job will be returned in the response body.
- source:
- openapi: streaming-asyncapi.yml
- StreamWarningMessageJobDetails:
+ If a payload ID was passed in the request, the same payload ID will be
+ sent back in the response body.
+ job_details:
+ type: optional<JobDetails>
  docs: >
- If the job_details flag was set in the request, details about the current
- streaming job will be returned in the response body.
- properties:
- job_id:
- type: optional<string>
- docs: ID of the current streaming job.
- source:
- openapi: streaming-asyncapi.yml
- StreamWarningMessage:
- docs: Warning message
- properties:
- warning:
- type: optional<string>
- docs: Warning message text.
- code:
- type: optional<string>
- docs: Unique identifier for the error.
- payload_id:
- type: optional<string>
- docs: >
- If a payload ID was passed in the request, the same payload ID will be
- sent back in the response body.
- job_details:
- type: optional<StreamWarningMessageJobDetails>
- docs: >
- If the job_details flag was set in the request, details about the
- current streaming job will be returned in the response body.
- source:
- openapi: streaming-asyncapi.yml
- SubscribeEvent:
- discriminated: false
- union:
- - type: StreamModelPredictions
- docs: Model predictions
- - type: StreamErrorMessage
- docs: Error message
- - type: StreamWarningMessage
- docs: Warning message
- source:
- openapi: streaming-asyncapi.yml
- StreamFace:
+ If the job_details flag was set in the request, details about the
+ current streaming job will be returned in the response body.
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamWarningMessageJobDetails:
+ docs: >
+ If the job_details flag was set in the request, details about the current
+ streaming job will be returned in the response body.
+ properties:
+ job_id:
+ type: optional<string>
+ docs: ID of the current streaming job.
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamWarningMessage:
+ docs: Warning message
+ properties:
+ warning:
+ type: optional<string>
+ docs: Warning message text.
+ code:
+ type: optional<string>
+ docs: Unique identifier for the error.
+ payload_id:
+ type: optional<string>
  docs: >
- Configuration for the facial expression emotion model.
+ If a payload ID was passed in the request, the same payload ID will be
+ sent back in the response body.
+ job_details:
+ type: optional<StreamWarningMessageJobDetails>
+ docs: >
+ If the job_details flag was set in the request, details about the
+ current streaming job will be returned in the response body.
+ source:
+ openapi: streaming-asyncapi.yml
+ SubscribeEvent:
+ discriminated: false
+ union:
+ - type: StreamModelPredictions
+ docs: Model predictions
+ - type: StreamErrorMessage
+ docs: Error message
+ - type: StreamWarningMessage
+ docs: Warning message
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamFace:
+ docs: >
+ Configuration for the facial expression emotion model.
 
 
- Note: Using the `reset_stream` parameter does not have any effect on face
- identification. A single face identifier cache is maintained over a full
- session whether `reset_stream` is used or not.
- properties:
- facs:
- type: optional<map<string, unknown>>
- docs: >-
- Configuration for FACS predictions. If missing or null, no FACS
- predictions will be generated.
- descriptions:
- type: optional<map<string, unknown>>
- docs: >-
- Configuration for Descriptions predictions. If missing or null, no
- Descriptions predictions will be generated.
- identify_faces:
- type: optional<boolean>
- docs: >
- Whether to return identifiers for faces across frames. If true, unique
- identifiers will be assigned to face bounding boxes to differentiate
- different faces. If false, all faces will be tagged with an "unknown"
- ID.
- default: false
- fps_pred:
- type: optional<double>
- docs: >
- Number of frames per second to process. Other frames will be omitted
- from the response.
- default: 3
- prob_threshold:
- type: optional<double>
- docs: >
- Face detection probability threshold. Faces detected with a
- probability less than this threshold will be omitted from the
- response.
- default: 3
- min_face_size:
- type: optional<double>
- docs: >
- Minimum bounding box side length in pixels to treat as a face. Faces
- detected with a bounding box side length in pixels less than this
- threshold will be omitted from the response.
- default: 3
- source:
- openapi: streaming-asyncapi.yml
- StreamLanguage:
- docs: Configuration for the language emotion model.
- properties:
- sentiment:
- type: optional<map<string, unknown>>
- docs: >-
- Configuration for sentiment predictions. If missing or null, no
- sentiment predictions will be generated.
- toxicity:
- type: optional<map<string, unknown>>
- docs: >-
- Configuration for toxicity predictions. If missing or null, no
- toxicity predictions will be generated.
- granularity:
- type: optional<string>
- docs: >-
- The granularity at which to generate predictions. Values are `word`,
- `sentence`, `utterance`, or `passage`. To get a single prediction for
- the entire text of your streaming payload use `passage`. Default value
- is `word`.
- source:
- openapi: streaming-asyncapi.yml
- Config:
+ Note: Using the `reset_stream` parameter does not have any effect on face
+ identification. A single face identifier cache is maintained over a full
+ session whether `reset_stream` is used or not.
+ properties:
+ facs:
+ type: optional<map<string, unknown>>
+ docs: >-
+ Configuration for FACS predictions. If missing or null, no FACS
+ predictions will be generated.
+ descriptions:
+ type: optional<map<string, unknown>>
+ docs: >-
+ Configuration for Descriptions predictions. If missing or null, no
+ Descriptions predictions will be generated.
+ identify_faces:
+ type: optional<boolean>
+ docs: >
+ Whether to return identifiers for faces across frames. If true, unique
+ identifiers will be assigned to face bounding boxes to differentiate
+ different faces. If false, all faces will be tagged with an "unknown"
+ ID.
+ default: false
+ fps_pred:
+ type: optional<double>
+ docs: >
+ Number of frames per second to process. Other frames will be omitted
+ from the response.
+ default: 3
+ prob_threshold:
+ type: optional<double>
+ docs: >
+ Face detection probability threshold. Faces detected with a
+ probability less than this threshold will be omitted from the
+ response.
+ default: 3
+ min_face_size:
+ type: optional<double>
  docs: >
- Configuration used to specify which models should be used and with what
- settings.
- properties:
- burst:
- type: optional<map<string, unknown>>
- docs: |
- Configuration for the vocal burst emotion model.
+ Minimum bounding box side length in pixels to treat as a face. Faces
+ detected with a bounding box side length in pixels less than this
+ threshold will be omitted from the response.
+ default: 3
+ source:
+ openapi: streaming-asyncapi.yml
+ StreamLanguage:
+ docs: Configuration for the language emotion model.
+ properties:
+ sentiment:
+ type: optional<map<string, unknown>>
+ docs: >-
+ Configuration for sentiment predictions. If missing or null, no
+ sentiment predictions will be generated.
+ toxicity:
+ type: optional<map<string, unknown>>
+ docs: >-
+ Configuration for toxicity predictions. If missing or null, no
+ toxicity predictions will be generated.
+ granularity:
+ type: optional<string>
+ docs: >-
+ The granularity at which to generate predictions. Values are `word`,
+ `sentence`, `utterance`, or `passage`. To get a single prediction for
+ the entire text of your streaming payload use `passage`. Default value
+ is `word`.
+ source:
+ openapi: streaming-asyncapi.yml
+ Config:
+ docs: >
+ Configuration used to specify which models should be used and with what
+ settings.
+ properties:
+ burst:
+ type: optional<map<string, unknown>>
+ docs: |
+ Configuration for the vocal burst emotion model.
 
- Note: Model configuration is not currently available in streaming.
+ Note: Model configuration is not currently available in streaming.
 
- Please use the default configuration by passing an empty object `{}`.
- face:
- type: optional<StreamFace>
- docs: >
- Configuration for the facial expression emotion model.
+ Please use the default configuration by passing an empty object `{}`.
+ face:
+ type: optional<StreamFace>
+ docs: >
+ Configuration for the facial expression emotion model.
 
 
- Note: Using the `reset_stream` parameter does not have any effect on
- face identification. A single face identifier cache is maintained over
- a full session whether `reset_stream` is used or not.
- facemesh:
- type: optional<map<string, unknown>>
- docs: |
- Configuration for the facemesh emotion model.
+ Note: Using the `reset_stream` parameter does not have any effect on
+ face identification. A single face identifier cache is maintained over
+ a full session whether `reset_stream` is used or not.
+ facemesh:
+ type: optional<map<string, unknown>>
+ docs: |
+ Configuration for the facemesh emotion model.
 
- Note: Model configuration is not currently available in streaming.
+ Note: Model configuration is not currently available in streaming.
 
- Please use the default configuration by passing an empty object `{}`.
- language:
- type: optional<StreamLanguage>
- docs: Configuration for the language emotion model.
- prosody:
- type: optional<map<string, unknown>>
- docs: |
- Configuration for the speech prosody emotion model.
+ Please use the default configuration by passing an empty object `{}`.
+ language:
+ type: optional<StreamLanguage>
+ docs: Configuration for the language emotion model.
+ prosody:
+ type: optional<map<string, unknown>>
+ docs: |
+ Configuration for the speech prosody emotion model.
 
- Note: Model configuration is not currently available in streaming.
+ Note: Model configuration is not currently available in streaming.
 
- Please use the default configuration by passing an empty object `{}`.
329
- source:
330
- openapi: streaming-asyncapi.yml
331
- StreamModelsEndpointPayload:
332
- docs: Models endpoint payload
333
- properties:
334
- data:
335
- type: optional<string>
336
- models:
337
- type: optional<Config>
338
- docs: >
339
- Configuration used to specify which models should be used and with
340
- what settings.
341
- stream_window_ms:
342
- type: optional<double>
343
- docs: >
344
- Length in milliseconds of streaming sliding window.
329
+ Please use the default configuration by passing an empty object `{}`.
330
+ source:
331
+ openapi: streaming-asyncapi.yml
332
+ StreamModelsEndpointPayload:
333
+ docs: Models endpoint payload
334
+ properties:
335
+ data:
336
+ type: optional<string>
337
+ models:
338
+ type: optional<Config>
339
+ docs: >
340
+ Configuration used to specify which models should be used and with
341
+ what settings.
342
+ stream_window_ms:
343
+ type: optional<double>
344
+ docs: >
345
+ Length in milliseconds of streaming sliding window.
345
346
 
346
347
 
347
- Extending the length of this window will prepend media context from
348
- past payloads into the current payload.
348
+ Extending the length of this window will prepend media context from
349
+ past payloads into the current payload.
349
350
 
350
351
 
351
- For example, if on the first payload you send 500ms of data and on the
352
- second payload you send an additional 500ms of data, a window of at
353
- least 1000ms will allow the model to process all 1000ms of stream
354
- data.
352
+ For example, if on the first payload you send 500ms of data and on the
353
+ second payload you send an additional 500ms of data, a window of at
354
+ least 1000ms will allow the model to process all 1000ms of stream
355
+ data.
355
356
 
356
357
 
357
- A window of 600ms would append the full 500ms of the second payload to
358
- the last 100ms of the first payload.
358
+ A window of 600ms would append the full 500ms of the second payload to
359
+ the last 100ms of the first payload.
359
360
 
360
361
 
361
- Note: This feature is currently only supported for audio data and
362
- audio models. For other file types and models this parameter will be
363
- ignored.
364
- default: 5000
365
- validation:
366
- min: 500
367
- max: 10000
368
- reset_stream:
369
- type: optional<boolean>
370
- docs: >
371
- Whether to reset the streaming sliding window before processing the
372
- current payload.
362
+ Note: This feature is currently only supported for audio data and
363
+ audio models. For other file types and models this parameter will be
364
+ ignored.
365
+ default: 5000
366
+ validation:
367
+ min: 500
368
+ max: 10000
369
+ reset_stream:
370
+ type: optional<boolean>
371
+ docs: >
372
+ Whether to reset the streaming sliding window before processing the
373
+ current payload.
373
374
 
374
375
 
375
- If this parameter is set to `true` then past context will be deleted
376
- before processing the current payload.
376
+ If this parameter is set to `true` then past context will be deleted
377
+ before processing the current payload.
377
378
 
378
379
 
379
- Use reset_stream when one audio file is done being processed and you
380
- do not want context to leak across files.
381
- default: false
382
- raw_text:
383
- type: optional<boolean>
384
- docs: >
385
- Set to `true` to enable the data parameter to be parsed as raw text
386
- rather than base64 encoded bytes.
380
+ Use reset_stream when one audio file is done being processed and you
381
+ do not want context to leak across files.
382
+ default: false
383
+ raw_text:
384
+ type: optional<boolean>
385
+ docs: >
386
+ Set to `true` to enable the data parameter to be parsed as raw text
387
+ rather than base64 encoded bytes.
387
388
 
388
- This parameter is useful if you want to send text to be processed by
389
- the language model, but it cannot be used with other file types like
390
- audio, image, or video.
391
- default: false
392
- job_details:
393
- type: optional<boolean>
394
- docs: >
395
- Set to `true` to get details about the job.
389
+ This parameter is useful if you want to send text to be processed by
390
+ the language model, but it cannot be used with other file types like
391
+ audio, image, or video.
392
+ default: false
393
+ job_details:
394
+ type: optional<boolean>
395
+ docs: >
396
+ Set to `true` to get details about the job.
396
397
 
397
398
 
398
- This parameter can be set in the same payload as data or it can be set
399
- without data and models configuration to get the job details between
400
- payloads.
399
+ This parameter can be set in the same payload as data or it can be set
400
+ without data and models configuration to get the job details between
401
+ payloads.
401
402
 
402
403
 
403
- This parameter is useful to get the unique job ID.
404
- default: false
405
- payload_id:
406
- type: optional<string>
407
- docs: >
408
- Pass an arbitrary string as the payload ID and get it back at the top
409
- level of the socket response.
404
+ This parameter is useful to get the unique job ID.
405
+ default: false
406
+ payload_id:
407
+ type: optional<string>
408
+ docs: >
409
+ Pass an arbitrary string as the payload ID and get it back at the top
410
+ level of the socket response.
410
411
 
411
412
 
412
- This can be useful if you have multiple requests running
413
- asynchronously and want to disambiguate responses as they are
414
- received.
415
- source:
416
- openapi: streaming-asyncapi.yml
417
- EmotionEmbeddingItem:
418
- properties:
419
- name:
420
- type: optional<string>
421
- docs: Name of the emotion being expressed.
422
- score:
423
- type: optional<double>
424
- docs: Embedding value for the emotion being expressed.
425
- source:
426
- openapi: streaming-asyncapi.yml
427
- EmotionEmbedding:
428
- docs: A high-dimensional embedding in emotion space.
429
- type: list<EmotionEmbeddingItem>
430
- StreamBoundingBox:
431
- docs: A bounding box around a face.
432
- properties:
433
- x:
434
- type: optional<double>
435
- docs: x-coordinate of bounding box top left corner.
436
- validation:
437
- min: 0
438
- "y":
439
- type: optional<double>
440
- docs: y-coordinate of bounding box top left corner.
441
- validation:
442
- min: 0
443
- w:
444
- type: optional<double>
445
- docs: Bounding box width.
446
- validation:
447
- min: 0
448
- h:
449
- type: optional<double>
450
- docs: Bounding box height.
451
- validation:
452
- min: 0
453
- source:
454
- openapi: streaming-asyncapi.yml
455
- TimeRange:
456
- docs: A time range with a beginning and end, measured in seconds.
457
- properties:
458
- begin:
459
- type: optional<double>
460
- docs: Beginning of time range in seconds.
461
- validation:
462
- min: 0
463
- end:
464
- type: optional<double>
465
- docs: End of time range in seconds.
466
- validation:
467
- min: 0
468
- source:
469
- openapi: streaming-asyncapi.yml
470
- TextPosition:
471
- docs: >
472
- Position of a segment of text within a larger document, measured in
473
- characters. Uses zero-based indexing. The beginning index is inclusive and
474
- the end index is exclusive.
475
- properties:
476
- begin:
477
- type: optional<double>
478
- docs: The index of the first character in the text segment, inclusive.
479
- validation:
480
- min: 0
481
- end:
482
- type: optional<double>
483
- docs: The index of the last character in the text segment, exclusive.
484
- validation:
485
- min: 0
486
- source:
487
- openapi: streaming-asyncapi.yml
488
- SentimentItem:
489
- properties:
490
- name:
491
- type: optional<string>
492
- docs: Level of sentiment, ranging from 1 (negative) to 9 (positive)
493
- score:
494
- type: optional<double>
495
- docs: Prediction for this level of sentiment
496
- source:
497
- openapi: streaming-asyncapi.yml
498
- Sentiment:
499
- docs: >-
500
- Sentiment predictions returned as a distribution. This model predicts the
501
- probability that a given text could be interpreted as having each
502
- sentiment level from 1 (negative) to 9 (positive).
413
+ This can be useful if you have multiple requests running
414
+ asynchronously and want to disambiguate responses as they are
415
+ received.
416
+ source:
417
+ openapi: streaming-asyncapi.yml
418
+ EmotionEmbeddingItem:
419
+ properties:
420
+ name:
421
+ type: optional<string>
422
+ docs: Name of the emotion being expressed.
423
+ score:
424
+ type: optional<double>
425
+ docs: Embedding value for the emotion being expressed.
426
+ source:
427
+ openapi: streaming-asyncapi.yml
428
+ EmotionEmbedding:
429
+ docs: A high-dimensional embedding in emotion space.
430
+ type: list<EmotionEmbeddingItem>
431
+ StreamBoundingBox:
432
+ docs: A bounding box around a face.
433
+ properties:
434
+ x:
435
+ type: optional<double>
436
+ docs: x-coordinate of bounding box top left corner.
437
+ validation:
438
+ min: 0
439
+ 'y':
440
+ type: optional<double>
441
+ docs: y-coordinate of bounding box top left corner.
442
+ validation:
443
+ min: 0
444
+ w:
445
+ type: optional<double>
446
+ docs: Bounding box width.
447
+ validation:
448
+ min: 0
449
+ h:
450
+ type: optional<double>
451
+ docs: Bounding box height.
452
+ validation:
453
+ min: 0
454
+ source:
455
+ openapi: streaming-asyncapi.yml
456
+ TimeRange:
457
+ docs: A time range with a beginning and end, measured in seconds.
458
+ properties:
459
+ begin:
460
+ type: optional<double>
461
+ docs: Beginning of time range in seconds.
462
+ validation:
463
+ min: 0
464
+ end:
465
+ type: optional<double>
466
+ docs: End of time range in seconds.
467
+ validation:
468
+ min: 0
469
+ source:
470
+ openapi: streaming-asyncapi.yml
471
+ TextPosition:
472
+ docs: >
473
+ Position of a segment of text within a larger document, measured in
474
+ characters. Uses zero-based indexing. The beginning index is inclusive and
475
+ the end index is exclusive.
476
+ properties:
477
+ begin:
478
+ type: optional<double>
479
+ docs: The index of the first character in the text segment, inclusive.
480
+ validation:
481
+ min: 0
482
+ end:
483
+ type: optional<double>
484
+ docs: The index of the last character in the text segment, exclusive.
485
+ validation:
486
+ min: 0
487
+ source:
488
+ openapi: streaming-asyncapi.yml
489
+ SentimentItem:
490
+ properties:
491
+ name:
492
+ type: optional<string>
493
+ docs: Level of sentiment, ranging from 1 (negative) to 9 (positive)
494
+ score:
495
+ type: optional<double>
496
+ docs: Prediction for this level of sentiment
497
+ source:
498
+ openapi: streaming-asyncapi.yml
499
+ Sentiment:
500
+ docs: >-
501
+ Sentiment predictions returned as a distribution. This model predicts the
502
+ probability that a given text could be interpreted as having each
503
+ sentiment level from 1 (negative) to 9 (positive).
503
504
 
504
505
 
505
- Compared to returning one estimate of sentiment, this enables a more
506
- nuanced analysis of a text's meaning. For example, a text with very
507
- neutral sentiment would have an average rating of 5. But also a text that
508
- could be interpreted as having very positive sentiment or very negative
509
- sentiment would also have an average rating of 5. The average sentiment is
510
- less informative than the distribution over sentiment, so this API returns
511
- a value for each sentiment level.
512
- type: list<SentimentItem>
513
- ToxicityItem:
514
- properties:
515
- name:
516
- type: optional<string>
517
- docs: Category of toxicity.
518
- score:
519
- type: optional<double>
520
- docs: Prediction for this category of toxicity
521
- source:
522
- openapi: streaming-asyncapi.yml
523
- Toxicity:
524
- docs: >-
525
- Toxicity predictions returned as probabilities that the text can be
526
- classified into the following categories: toxic, severe_toxic, obscene,
527
- threat, insult, and identity_hate.
528
- type: list<ToxicityItem>
506
+ Compared to returning one estimate of sentiment, this enables a more
507
+ nuanced analysis of a text's meaning. For example, a text with very
508
+ neutral sentiment would have an average rating of 5. But also a text that
509
+ could be interpreted as having very positive sentiment or very negative
510
+ sentiment would also have an average rating of 5. The average sentiment is
511
+ less informative than the distribution over sentiment, so this API returns
512
+ a value for each sentiment level.
513
+ type: list<SentimentItem>
514
+ ToxicityItem:
515
+ properties:
516
+ name:
517
+ type: optional<string>
518
+ docs: Category of toxicity.
519
+ score:
520
+ type: optional<double>
521
+ docs: Prediction for this category of toxicity
522
+ source:
523
+ openapi: streaming-asyncapi.yml
524
+ Toxicity:
525
+ docs: >-
526
+ Toxicity predictions returned as probabilities that the text can be
527
+ classified into the following categories: toxic, severe_toxic, obscene,
528
+ threat, insult, and identity_hate.
529
+ type: list<ToxicityItem>
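
The `StreamModelsEndpointPayload` schema above describes what a client sends over the streaming socket. As an illustrative sketch (this object literal is not taken from the hume SDK; it simply mirrors the schema's field names, and the `data` value is a placeholder for base64-encoded audio bytes):

```typescript
// Hypothetical payload mirroring StreamModelsEndpointPayload.
const payload = {
  data: "UklGRgAAAABXQVZF", // placeholder base64 audio chunk
  models: {
    // Per the docs above, burst and prosody take an empty object `{}`
    // to select the default configuration.
    burst: {},
    prosody: {},
    // StreamLanguage allows sentiment/toxicity configs and a granularity.
    language: { sentiment: {}, granularity: "sentence" },
  },
  // Sliding window: if each payload carries 500 ms of audio, a 1000 ms
  // window lets the model see two consecutive chunks. Default is 5000,
  // valid range 500-10000.
  stream_window_ms: 1000,
  // Set to true at a file boundary so context does not leak across files.
  reset_stream: false,
  // Echoed back at the top level of the socket response, which helps
  // disambiguate concurrent requests.
  payload_id: "chunk-0001",
};

console.log(payload.models.language.granularity);
```

Fields omitted here (`raw_text`, `job_details`) default to `false` per the schema; `raw_text: true` would let `data` carry plain text for the language model instead of base64 bytes.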