thinkingdata-ruby 1.1.0 → 1.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
- SHA1:
3
- metadata.gz: af3bee9acd623d7588b142a62e6ecef998c359c5
4
- data.tar.gz: d9f7170bfc7dc600a58fcafbbfba45f92049f6d0
2
+ SHA256:
3
+ metadata.gz: cdd6d7eb248bfbddc35fdc947a9323ff6a1a16d10c2e396b5cd3f80dd272f866
4
+ data.tar.gz: e6ee651f3d35226a5f640f0c57868964abe0eba5841335249b262a81b264a018
5
5
  SHA512:
6
- metadata.gz: 6adee954d6aee35f30ed2a97074ce00500a71b4c02f3b4c1dded0ca44bcfc0e1636a0a564b49b4640114eae5d5a34c978630a8247d59bc6d0547f47deacd2ba6
7
- data.tar.gz: d9052f017aae1f57252b7a15abd921f4e23d026c6010bba6d868b39f638a63f782a6a47c1f2faaa46352da11372aae62769601c65e53cde3895ec8d12cbd3be6
6
+ metadata.gz: 442a30338ce592df0f6c9e3dd505255390e8797d405326009064531f45637bd2c86f1bbca10efd14a8e6dd8b12ebbfe183be0795c8e029fcec19362627024033
7
+ data.tar.gz: 605e8773859e0147486ed023c7ab187cd308aaf4e810cc1f633d9e72810cbadfd01fe19b45bebd1042e8741b1ac0e791d8662c36c5097e6d94694d5b051d0a0d
data/.gitignore ADDED
@@ -0,0 +1,59 @@
1
+ *.gem
2
+ *.rbc
3
+ /.config
4
+ /coverage/
5
+ /InstalledFiles
6
+ /pkg/
7
+ /spec/reports/
8
+ /spec/examples.txt
9
+ /test/tmp/
10
+ /test/version_tmp/
11
+ /tmp/
12
+
13
+ # Used by dotenv library to load environment variables.
14
+ # .env
15
+
16
+ # Ignore Byebug command history file.
17
+ .byebug_history
18
+
19
+ ## Specific to RubyMotion:
20
+ .dat*
21
+ .repl_history
22
+ build/
23
+ *.bridgesupport
24
+ build-iPhoneOS/
25
+ build-iPhoneSimulator/
26
+
27
+ ## Specific to RubyMotion (use of CocoaPods):
28
+ #
29
+ # We recommend against adding the Pods directory to your .gitignore. However
30
+ # you should judge for yourself, the pros and cons are mentioned at:
31
+ # https://guides.cocoapods.org/using/using-cocoapods.html#should-i-check-the-pods-directory-into-source-control
32
+ #
33
+ # vendor/Pods/
34
+
35
+ ## Documentation cache and generated files:
36
+ /.yardoc/
37
+ /_yardoc/
38
+ /doc/
39
+ /rdoc/
40
+
41
+ ## Environment normalization:
42
+ /.bundle/
43
+ /vendor/bundle
44
+ /lib/bundler/man/
45
+
46
+ # for a library or gem, you might want to ignore these files since the code is
47
+ # intended to run in multiple environments; otherwise, check them in:
48
+ # Gemfile.lock
49
+ # .ruby-version
50
+ # .ruby-gemset
51
+
52
+ # unless supporting rvm < 1.11.0 or doing something fancy, ignore this:
53
+ .rvmrc
54
+
55
+ # Used by RuboCop. Remote config files pulled in from inherit_from directive.
56
+ # .rubocop-https?--*
57
+
58
+ .idea/*
59
+ Gemfile.lock
data/CHANGELOG.md CHANGED
@@ -1,10 +1,16 @@
1
- **v1.1.0** (2020/02/11)
2
- - 数据类型支持array类型
3
- - 新增 user_append 接口,支持用户的数组类型的属性追加
4
- - BatchConsumer 性能优化:支持选择是否压缩;移除 Base64 编码
5
- - DebugConsumer 优化: 在服务端对数据进行更完备准确地校验
6
-
7
- **v1.0.0** (2019-11-20)
8
- - 支持三种模式的上报: DebugConsumer, BatchConsumer, LoggerConsumer.
9
- - 支持事件上报和用户属性上报.
10
- - 支持公共事件属性.
1
+ ### v1.2.1
2
+ **Date:** 2023/03/20
3
+
4
+ **Notes:**
5
+
6
+ * Compatible with Ruby 3
7
+ * Supports a different '#app_id' for each event
8
+
9
+
10
+ ### v1.2.0
11
+ **Date:** 2020/08/28
12
+
13
+ **Notes:**
14
+
15
+ * Added track_update interface to support updatable events
16
+ * Added track_overwrite interface to support overwritable events
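
The entries above can be illustrated with a short sketch based on the interfaces that appear later in this diff (`track`, `track_update`, `track_overwrite`, and the `'#app_id'` preset property). The server URL, APP IDs, and event IDs below are placeholders, not values from the package:

```ruby
require 'thinkingdata-ruby'

# Placeholder endpoint and project APP ID.
consumer = TDAnalytics::BatchConsumer.new('https://YOUR_SERVER_URL', 'DEFAULT_APP_ID')
ta = TDAnalytics::Tracker.new(consumer)

# v1.2.1: route a single event to another project by passing '#app_id'
# in the event properties; it is lifted out of `properties` before upload.
ta.track(event_name: 'purchase',
         distinct_id: 'user_1',
         properties: { amount: 9.9, '#app_id': 'OTHER_APP_ID' })

# v1.2.0: updatable and overwritable events are keyed by an event_id.
ta.track_update(event_name: 'order_status', event_id: 'order_1001',
                distinct_id: 'user_1', properties: { status: 'paid' })
ta.track_overwrite(event_name: 'order_status', event_id: 'order_1001',
                   distinct_id: 'user_1', properties: { status: 'refunded' })

ta.flush
ta.close
```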
data/README.md CHANGED
@@ -1,202 +1,10 @@
1
- # ThinkingData Analytics API for Ruby
1
+ # ThinkingData SDK for Ruby
2
+ ![output](https://user-images.githubusercontent.com/53337625/205621683-ed9b97ef-6a52-4903-a2c0-a955dddebb7d.png)
2
3
 
3
- thinkingdata-ruby 是数数科技提供给客户,方便客户导入用户数据的 Ruby 接口实现, 支持 Ruby 2.0 以上版本。如需了解详细信息,请参考 [数数科技官方网站](https://www.thinkingdata.cn).
4
+ This is the [ThinkingData](https://www.thinkingdata.cn)™ SDK for Ruby. Documentation is available on our help center in the following languages:
4
5
 
5
- ### 一、集成 SDK
6
+ - [English](https://docs.thinkingdata.cn/ta-manual/latest/en/installation/installation_menu/server_sdk/ruby_sdk_installation/ruby_sdk_installation.html)
7
+ - [中文](https://docs.thinkingdata.cn/ta-manual/latest/installation/installation_menu/server_sdk/ruby_sdk_installation/ruby_sdk_installation.html)
8
+ - [日本語](https://docs.thinkingdata.io/ta-manual/v4.0/ja/installation/installation_menu/server_sdk/ruby_sdk_installation/ruby_sdk_installation.html)
6
9
 
7
- #### 1. 安装 SDK
8
-
9
- ```sh
10
- # 获取 SDK
11
- gem install thinkingdata-ruby
12
- ```
13
- #### 2. 创建 SDK 实例
14
- 首先在代码文件开头引入 `thinkingdata-ruby`:
15
- ```ruby
16
- require 'thinkingdata-ruby'
17
- ```
18
-
19
- 使用 SDK 上传数据,需要首先创建 `TDAnalytics::Tracker` 对象. `TDAnalytics::Tracker` 是数据上报的核心类,使用此类上报事件数据和更新用户属性. 创建 `Tracker` 对象需要传入 consumer 对象,consumer 决定了如何处理格式化的数据(存储在本地日志文件还是上传到服务端).
20
-
21
- ```ruby
22
- ta = TDAnalytics::Tracker.new(consumer)
23
- ta.track('your_event', distinct_id: 'distinct_id_of_user')
24
- ```
25
- TDAnalytics 提供了三种 consumer 实现:
26
-
27
- **(1) LoggerConsumer**: 将数据实时写入本地文件,文件以 天/小时 切分,并需要与 LogBus 搭配使用进行数据上传.
28
- ```ruby
29
- # 默认写入当前目录的文件,按日期命名(daily),例如: tda.log.2019-11-15
30
- consumer = TDAnalytics::LoggerConsumer.new
31
-
32
- # 也可以修改配置,如下配置会创建 LoggerConsumer,并将数据写入: /path/to/log/demolog.2019-11-15-18 (18 为小时)
33
- consumer = TDAnalytics::LoggerConsumer.new('/path/to/log', 'hourly', prefix: 'demolog')
34
- ```
35
-
36
- **(2) DebugConsumer**: 逐条实时向 TA 服务器传输数据,当数据格式错误时会返回详细的错误信息。建议先使用 DebugConsumer 校验数据格式。初始化传入项目 APP ID 和接收端地址.
37
- ```ruby
38
- # 创建 DebugConsumer
39
- consumer = TDAnalytics::DebugConsumer.new(SERVER_URL, YOUR_APPID)
40
- ```
41
-
42
- **(3) BatchConsumer**: 批量实时地向 TA 服务器传输数据,不需要搭配传输工具。在网络条件不好的情况下有可能会导致数据丢失,因此不建议在生产环境中大量使用. 初始化传入项目 APP ID 和接收端地址.
43
-
44
- BatchConsumer 会先将数据存放在缓冲区中,当数据条数超过设定的缓冲区最大值(max_buffer_length, 默认为20),触发上报. 您也可以在初始化 SDK 时传入整数类型的参数配置缓冲区大小:
45
- ```ruby
46
- # BatchConsumer,数据将先存入缓冲区,达到指定条数时上报,默认为 20 条
47
- consumer = TDAnalytics::BatchConsumer.new(SERVER_URL, YOUR_APPID)
48
-
49
- # 创建指定缓冲区大小为 3 条的 BatchConsumer
50
- consumer = TDAnalytics::BatchConsumer.new(SERVER_URL, YOUR_APPID, 3)
51
- ```
52
-
53
- 您也可以传入自己实现的 Consumer,只需实现以下接口:
54
- - add(message): (必须) 接受 Hash 类型的数据对象
55
- - flush: (可选) 将缓冲区的数据发送到指定地址
56
- - close: (可选) 程序退出时用户可以主动调用此接口以保证安全退出
57
-
58
- #### 3. 上报数据
59
- SDK 初始化完成后,后续即可使用 ta 的接口来上报数据.
60
-
61
- ### 使用示例
62
-
63
- #### a. 发送事件
64
- 您可以调用 track 来上传事件,建议您根据预先梳理的文档来设置事件的属性以及发送信息的条件。上传事件示例如下:
65
- ```ruby
66
- # 定义事件数据
67
- event = {
68
- # 事件名称 (必填)
69
- event_name: 'test_event',
70
- # 账号 ID (可选)
71
- account_id: 'ruby_test_aid',
72
- # 访客 ID (可选),账号 ID 和访客 ID 不可以都为空
73
- distinct_id: 'ruby_distinct_id',
74
- # 事件时间 (可选) 如果不填,将以调用接口时的时间作为事件时间
75
- time: Time.now,
76
- # 事件 IP (可选) 当传入 IP 地址时,后台可以解析所在地
77
- ip: '202.38.64.1',
78
- # 事件属性 (可选)
79
- properties: {
80
- prop_date: Time.now,
81
- prop_double: 134.1,
82
- prop_string: 'hello world',
83
- prop_bool: true,
84
- },
85
- # 跳过本地格式校验 (可选)
86
- # skip_local_check: true,
87
- }
88
-
89
- # 上传事件
90
- ta.track(event)
91
- ```
92
-
93
- 参数说明:
94
- * 事件的名称只能以字母开头,可包含数字,字母和下划线“_”,长度最大为 50 个字符,对字母大小写不敏感
95
- * 事件的属性是 Hash 类型,其中每个元素代表一个属性
96
- * 事件属性的 Key 值为属性的名称,为 string 类型,规定只能以字母开头,包含数字,字母和下划线“_”,长度最大为 50 个字符,对字母大小写不敏感
97
- * 事件属性的 Value 值为该属性的值,支持 String、数值类型、bool、Time
98
-
99
- SDK 会在本地对数据格式做校验,如果希望跳过本地校验,可以在调用 track 接口的时候传入 skip_local_check 参数.
100
-
101
- #### 2. 设置公共事件属性
102
- 公共事件属性是每个事件都会包含的属性. 也可以设置动态公共属性。如果有相同的属性,则动态公共属性会覆盖公共事件属性。
103
-
104
- ```ruby
105
- # 定义公共属性
106
- super_properties = {
107
- super_string: 'super_string',
108
- super_int: 1,
109
- super_bool: false,
110
- super_date: Time.rfc2822("Thu, 26 Oct 2019 02:26:12 +0545")
111
- }
112
-
113
- # 设置公共事件属性,公共事件属性会添加到每个事件中
114
- ta.set_super_properties(super_properties)
115
-
116
- # 清空公共事件属性
117
- ta.clear_super_properties
118
- ```
119
-
120
- #### 3. 设置用户属性
121
- 对于一般的用户属性,您可以调用 user_set 来进行设置. 使用该接口上传的属性将会覆盖原有的属性值,如果之前不存在该用户属性,则会新建该用户属性:
122
- ```ruby
123
- # 定义用户属性数据
124
- user_data = {
125
- # 账号 ID (可选)
126
- account_id: 'ruby_test_aid',
127
- # 访客 ID (可选),账号 ID 和访客 ID 不可以都为空
128
- distinct_id: 'ruby_distinct_id',
129
- # 用户属性
130
- properties: {
131
- prop_date: Time.now,
132
- prop_double: 134.12,
133
- prop_string: 'hello',
134
- prop_int: 666,
135
- },
136
- }
137
-
138
- # 设置用户属性
139
- ta.user_set(user_data);
140
- ```
141
- 如果您要上传的用户属性只要设置一次,则可以调用 user_set_once 来进行设置,当该属性之前已经有值的时候,将会忽略这条信息:
142
- ```ruby
143
- # 设置用户属性,如果已有同名属性,则忽略新设置属性
144
- ta.user_set_once(user_data);
145
- ```
146
- 当您要上传数值型的属性时,可以调用 user_add 来对该属性进行累加操作,如果该属性还未被设置,则会赋值 0 后再进行计算:
147
- ```ruby
148
- # 对数值类型的属性进行累加操作
149
- ta.user_add(distinct_id: 'ruby_distinct_id', properties: {prop_int: 10, prop_double: 15.88})
150
- ```
151
-
152
- 当您需要删除某个用户属性的值时,可以调用 user_unset.
153
- ```ruby
154
- # 删除某个用户属性
155
- ta.user_unset(distinct_id: 'ruby_distinct_id', property: :prop_string)
156
-
157
- # 删除一组用户属性
158
- ta.user_unset(distinct_id: 'ruby_distinct_id', property: Array.[](:prop_a, :prop_b, :prob_c))
159
- ```
160
-
161
- 如果您要删除某个用户,可以调用 user_del 将这名用户删除. 之后您将无法再查询该用户的用户属性,但该用户产生的事件仍然可以被查询到:
162
- ```ruby
163
- # 删除用户
164
- ta.user_del(
165
- # 账号 ID (可选)
166
- account_id: 'ruby_test_aid',
167
- # 访客 ID (可选),账号 ID 和访客 ID 不可以都为空
168
- distinct_id: 'ruby_distinct_id',
169
- );
170
- ```
171
-
172
- #### 4. 立即进行数据 IO
173
- 此操作与具体的 Consumer 实现有关. 在收到数据时, Consumer 可以先将数据存放在缓冲区, 并在特定情况下触发真正的数据 IO 操作, 以提高整体性能. 在某些情况下需要立即提交数据,可以调用 flush 接口:
174
- ```ruby
175
- # 立即提交数据到相应的接收端
176
- ta.flush
177
- ```
178
-
179
- #### 5. 关闭 SDK
180
- 请在退出程序前调用本接口,以避免缓存内的数据丢失:
181
- ```ruby
182
- # 关闭并退出 SDK
183
- ta.close
184
- ```
185
-
186
- #### 6 其他说明
187
- 默认情况下,除初始化参数不合法外,其他 Error 会被忽略,如果您希望自己处理接口调用中的 Error,可以传入自定义的 error handler.
188
-
189
- ```ruby
190
- # (可选) 定义一个错误处理器,当出现 Error 时会调用
191
- class MyErrorHandler < TDAnalytics::ErrorHandler
192
- def handle(error)
193
- puts error
194
- raise error
195
- end
196
- end
197
- my_error_handler = MyErrorHandler.new
198
-
199
- # 创建 TA 实例, 第一个参数为任意一种 Consumer, 第二个参数可选,如果设定了会在出错时调用
200
- ta = TDAnalytics::Tracker.new(consumer, my_error_handler, uuid: true)
201
- ```
202
- uuid 如果为 true,每条数据都会被带上随机 UUID 作为 #uuid 属性的值上报,该值不会入库,仅仅用于后台做数据重复检测.
10
+ ---
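
The quick-start that used to live in this README now only exists in the hosted documentation, so here is a minimal sketch of the flow it described: install the gem with `gem install thinkingdata-ruby`, create a consumer, create a tracker, and report an event. The log path and IDs below are placeholders:

```ruby
require 'thinkingdata-ruby'

# LoggerConsumer writes events to local files that LogBus uploads;
# DebugConsumer and BatchConsumer (shown further down) send data directly.
consumer = TDAnalytics::LoggerConsumer.new('/path/to/log', 'daily')
ta = TDAnalytics::Tracker.new(consumer)

ta.track(event_name: 'your_event', distinct_id: 'distinct_id_of_user')

ta.close
```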
data/demo/demo.rb CHANGED
@@ -2,19 +2,13 @@ $LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)
2
2
 
3
3
  require 'thinkingdata-ruby'
4
4
  require 'time'
5
- #require 'pry'
6
5
 
7
6
  if __FILE__ == $0
8
- # 替换 DEMO_APPID 为您项目的 APP ID
9
- DEMO_APPID = 'APPID'
10
- # 替换 SERVER_URL 为您项目的 URL
11
- SERVER_URL = 'https://sdk.tga.thinkinggame.cn'
12
- # 账号 ID
13
- DEMO_ACCOUNT_ID = 'ruby_demo_aid'
14
- # 访客 ID
15
- DEMO_DISTINCT_ID = 'ruby_demo_did'
16
-
17
- # (可选) 定义一个错误处理器,当出现 Error 时会调用
7
+ DEMO_APPID = 'app id'
8
+ SERVER_URL = 'server url'
9
+ DEMO_ACCOUNT_ID = '123'
10
+ DEMO_DISTINCT_ID = 'aaa'
11
+
18
12
  class MyErrorHandler < TDAnalytics::ErrorHandler
19
13
  def handle(error)
20
14
  puts error
@@ -23,120 +17,89 @@ if __FILE__ == $0
23
17
  end
24
18
  my_error_handler = MyErrorHandler.new
25
19
 
26
- # 定义 consumer: consumer 实现了 add、flush、close 等接口,将经过 SDK 格式化的数据以不同的方式存储或者发送到接收端
20
+ TDAnalytics::set_stringent(false)
21
+ TDAnalytics::set_enable_log(true)
22
+
27
23
  consumer = nil
28
- $ARGV = 1
24
+ $ARGV = 0
29
25
  case $ARGV
30
26
  when 0
31
- # LoggerConsumer,数据将写入本地文件(当前目录,按小时切分,前缀为 demolog),需要配合 Logbus 上传数据到 TA 服务器
32
- consumer = TDAnalytics::LoggerConsumer.new '.', 'hourly', prefix: 'demolog'
27
+ consumer = TDAnalytics::LoggerConsumer.new './log', 'hourly'
33
28
  when 1
34
- # DebugConsumer,数据将被逐条同步的上报到 TA 服务器。出错时会返回详细的错误信息
35
- consumer = TDAnalytics::DebugConsumer.new(SERVER_URL, DEMO_APPID)
36
- # 如果不想上传到TA,只想校验数据格式,可以如下初始化
29
+ consumer = TDAnalytics::DebugConsumer.new(SERVER_URL, DEMO_APPID, device_id: "123456789")
37
30
  # consumer = TDAnalytics::DebugConsumer.new(SERVER_URL, DEMO_APPID,false)
38
31
  when 2
39
- # BatchConsumer,数据将先存入缓冲区,达到指定条数时上报,默认为 20 条
40
32
  consumer = TDAnalytics::BatchConsumer.new(SERVER_URL, DEMO_APPID, 30)
41
- #设置是否压缩数据,默认gzip压缩,内网可以这样设置
42
33
  #consumer._set_compress(false)
43
34
  else
44
- # LoggerConsumer,数据将写入本地文件(当前目录,按天切分,前缀为 tda.log),需要配合 Logbus 上传数据到 TA 服务器
45
35
  consumer = TDAnalytics::LoggerConsumer.new
46
36
  end
47
37
 
48
- # 创建 TA 实例, 第一个参数为任意一种 Consumer, 第二个参数可选,如果设定了会在出错时调用
49
38
  ta = TDAnalytics::Tracker.new(consumer, my_error_handler, uuid: true)
50
39
 
51
- # 定义公共属性
52
40
  super_properties = {
53
41
  super_string: 'super_string',
54
42
  super_int: 1,
55
43
  super_bool: false,
56
- super_date: Time.rfc2822("Thu, 26 Oct 2019 02:26:12 +0545")
44
+ super_date: Time.rfc2822("Thu, 26 Oct 2019 02:26:12 +0545"),
45
+ '#app_id': "123123123123123"
57
46
  }
58
47
 
59
- # 设置公共事件属性,公共事件属性会添加到每个事件中
60
48
  ta.set_super_properties(super_properties)
61
49
 
62
- # 定义事件数据
63
- event = {
64
- # 事件名称 (必填)
65
- event_name: 'test_event',
66
- # 账号 ID (可选)
67
- account_id: DEMO_ACCOUNT_ID,
68
- # 访客 ID (可选),账号 ID 和访客 ID 不可以都为空
69
- distinct_id: DEMO_DISTINCT_ID,
70
- # 事件时间 (可选) 如果不填,将以调用接口时的时间作为事件时间
71
- time: Time.now,
72
- # 事件 IP (可选) 当传入 IP 地址时,后台可以解析所在地
73
- ip: '202.38.64.1',
74
- # 事件属性 (可选)
75
- properties: {
76
- array: ["str1", "11", "22.22", "2020-02-11 17:02:52.415"],
77
- prop_date: Time.now,
78
- prop_double: 134.1,
79
- prop_string: 'hello world',
80
- prop_bool: true,
81
- },
50
+ properties = {
51
+ array: ["str1", "11", Time.now, "2020-02-11 17:02:52.415"],
52
+ prop_date: Time.now,
53
+ prop_double: 134.1,
54
+ prop_string: 'hello world',
55
+ prop_bool: true,
56
+ '#ip': '123.123.123.123',
57
+ '#uuid': 'aaabbbccc',
82
58
  }
83
59
 
84
- # 上报事件
85
- 5.times do
86
- ta.track(event)
87
- ta.clear_super_properties
60
+ ta.set_dynamic_super_properties do
61
+ {:dynamic_time => Time.now}
88
62
  end
89
63
 
90
- # 定义用户属性数据
64
+ ta.track(event_name: 'test_event', distinct_id: DEMO_DISTINCT_ID, account_id: DEMO_ACCOUNT_ID, properties: properties)
65
+
66
+ ta.clear_dynamic_super_properties
67
+ ta.clear_super_properties
68
+
69
+ ta.track(event_name: 'test_event', distinct_id: DEMO_DISTINCT_ID, account_id: DEMO_ACCOUNT_ID, properties: properties)
70
+
91
71
  user_data = {
92
- # 账号 ID (可选)
93
- account_id: DEMO_ACCOUNT_ID,
94
- # 访客 ID (可选),账号 ID 和访客 ID 不可以都为空
95
- distinct_id: DEMO_DISTINCT_ID,
96
- # 用户属性
97
- properties: {
98
- array: ["str1", 11, 22.22],
99
- prop_date: Time.now,
100
- prop_double: 134.12,
101
- prop_string: 'hello',
102
- prop_int: 666,
103
- },
72
+ array: ["str1", 11, 22.22],
73
+ prop_date: Time.now,
74
+ prop_double: 134.12,
75
+ prop_string: 'hello',
76
+ prop_int: 666,
104
77
  }
105
- # 设置用户属性, 覆盖同名属性
106
- ta.user_set(user_data)
107
-
108
- #追加user的一个或者多个列表的属性
109
- user_data_arr = {
110
- # 账号 ID (可选)
111
- account_id: DEMO_ACCOUNT_ID,
112
- # 访客 ID (可选),账号 ID 和访客 ID 不可以都为空
113
- distinct_id: DEMO_DISTINCT_ID,
114
- # 用户属性
115
- properties: {
116
- array: ["33", "44"],
117
- },
78
+ ta.user_set(distinct_id: DEMO_DISTINCT_ID, account_id: DEMO_ACCOUNT_ID, properties: user_data)
79
+
80
+ user_append_data = {
81
+ array: %w[33 44]
118
82
  }
83
+ ta.user_append(distinct_id: DEMO_DISTINCT_ID, account_id: DEMO_ACCOUNT_ID, properties: user_append_data)
119
84
 
120
- ta.user_append(user_data_arr)
85
+ user_uniq_append_data = {
86
+ array: %w[44 55]
87
+ }
88
+ ta.user_uniq_append(distinct_id: DEMO_DISTINCT_ID, account_id: DEMO_ACCOUNT_ID, properties: user_uniq_append_data)
121
89
 
122
- # 设置用户属性,不会覆盖已经设置的同名属性
123
- user_data[:properties][:prop_int_new] = 800
124
- ta.user_set_once(user_data)
90
+ user_set_once_data = {
91
+ prop_int_new: 888,
92
+ }
93
+ ta.user_set_once(distinct_id: DEMO_DISTINCT_ID, account_id: DEMO_ACCOUNT_ID, properties: user_set_once_data)
125
94
 
126
- # 累加用户属性
127
95
  ta.user_add(distinct_id: DEMO_DISTINCT_ID, properties: {prop_int: 10, prop_double: 15.88})
128
96
 
129
-
130
- # 删除某个用户属性
131
97
  ta.user_unset(distinct_id: DEMO_DISTINCT_ID, property: [:prop_string, :prop_int])
132
98
 
99
+ ta.user_del(distinct_id: DEMO_DISTINCT_ID)
133
100
 
134
- # 删除用户。此操作之前的事件数据不会被删除
135
- # ta.user_del(distinct_id: DEMO_DISTINCT_ID)
136
-
137
- #binding.pry
101
+ ta.flush
138
102
 
139
- # 退出前调用此接口
140
103
  ta.close
141
104
  end
142
105
 
@@ -2,13 +2,13 @@ require 'json'
2
2
  require 'net/http'
3
3
 
4
4
  module TDAnalytics
5
- # BatchConsumer 批量同步的发送数据.
6
- # 有数据时,首先会加入本地缓冲区,当条数到达上限后会发起上报
7
5
  class BatchConsumer
8
- # 默认缓冲区大小
9
- MAX_LENGTH = 20
10
6
 
11
- def initialize(server_url, app_id, max_buffer_length = MAX_LENGTH)
7
+ # default and maximum buffer lengths
8
+ DEFAULT_LENGTH = 20
9
+ MAX_LENGTH = 2000
10
+
11
+ def initialize(server_url, app_id, max_buffer_length = DEFAULT_LENGTH)
12
12
  @server_uri = URI.parse(server_url)
13
13
  @server_uri.path = '/sync_server'
14
14
  @app_id = app_id
@@ -43,7 +43,13 @@ module TDAnalytics
43
43
  data = chunk.to_json
44
44
  end
45
45
  compress_type = @compress ? 'gzip' : 'none'
46
- headers = {'Content-Type' => 'application/plaintext', 'appid' => @app_id, 'compress' => compress_type}
46
+ headers = {'Content-Type' => 'application/plaintext',
47
+ 'appid' => @app_id,
48
+ 'compress' => compress_type,
49
+ 'TA-Integration-Type'=>'Ruby',
50
+ 'TA-Integration-Version'=>TDAnalytics::VERSION,
51
+ 'TA-Integration-Count'=>@buffers.count,
52
+ 'TA_Integration-Extra'=>'batch'}
47
53
  request = CaseSensitivePost.new(@server_uri.request_uri, headers)
48
54
  request.body = data
49
55
 
@@ -86,7 +92,6 @@ module TDAnalytics
86
92
  end
87
93
  end
88
94
 
89
- # 内部使用,为了兼容老版本服务端,将 Header 名称限定为小写
90
95
  class CaseSensitivePost < Net::HTTP::Post
91
96
  def initialize_http_header(headers)
92
97
  @header = {}
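
A sketch of how the buffer size introduced above is used: the third constructor argument overrides `DEFAULT_LENGTH` (20), the new `MAX_LENGTH` constant (2000) presumably acts as an upper bound, and `_set_compress(false)` (referenced in demo.rb) disables gzip, e.g. inside an intranet. URL and APP ID are placeholders:

```ruby
require 'thinkingdata-ruby'

# Buffer 100 records before each upload instead of the default 20.
consumer = TDAnalytics::BatchConsumer.new('https://YOUR_SERVER_URL', 'YOUR_APP_ID', 100)
consumer._set_compress(false) # disable gzip compression (enabled by default)
ta = TDAnalytics::Tracker.new(consumer)
```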
@@ -2,22 +2,34 @@ require 'json'
2
2
  require 'net/http'
3
3
 
4
4
  module TDAnalytics
5
- # DebugConsumer 逐条、同步地向服务端上报数据
6
- # DebugConsumer 会返回详细的报错信息,建议在集成阶段先使用 DebugConsumer 调试接口
5
+ # Data is reported one record at a time; when an error occurs, detailed error information is printed to the console.
7
6
  class DebugConsumer
8
7
 
9
- def initialize(server_url, app_id, write_data = true)
8
+ def test
9
+
10
+ end
11
+
12
+ def initialize(server_url, app_id, write_data = true, device_id: nil)
10
13
  @server_uri = URI.parse(server_url)
11
14
  @server_uri.path = '/data_debug'
12
15
  @app_id = app_id
13
16
  @write_data = write_data
17
+ @device_id = device_id
14
18
  end
15
19
 
16
20
  def add(message)
17
21
  puts message.to_json
22
+ headers = {
23
+ 'TA-Integration-Type'=>'Ruby',
24
+ 'TA-Integration-Version'=>TDAnalytics::VERSION,
25
+ 'TA-Integration-Count'=>'1',
26
+ 'TA_Integration-Extra'=>'debug'
27
+ }
18
28
  form_data = {"data" => message.to_json, "appid" => @app_id, "dryRun" => @write_data ? "0" : "1", "source" => "server"}
29
+ @device_id.is_a?(String) ? form_data["deviceId"] = @device_id : nil
30
+
19
31
  begin
20
- response_code, response_body = request(@server_uri, form_data)
32
+ response_code, response_body = request(@server_uri, form_data,headers)
21
33
  rescue => e
22
34
  raise ConnectionError.new("Could not connect to TA server, with error \"#{e.message}\".")
23
35
  end
@@ -36,8 +48,8 @@ module TDAnalytics
36
48
  end
37
49
  end
38
50
 
39
- def request(uri, form_data)
40
- request = Net::HTTP::Post.new(uri.request_uri)
51
+ def request(uri, form_data,headers)
52
+ request = Net::HTTP::Post.new(uri.request_uri,headers)
41
53
  request.set_form_data(form_data)
42
54
 
43
55
  client = Net::HTTP.new(uri.host, uri.port)
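
A sketch of the new keyword added above: `device_id:` is forwarded as the `deviceId` form field, and passing `false` as the third argument validates the data format without storing it (both usages also appear in demo.rb). URL and IDs are placeholders:

```ruby
require 'thinkingdata-ruby'

consumer = TDAnalytics::DebugConsumer.new('https://YOUR_SERVER_URL', 'YOUR_APP_ID',
                                          device_id: '123456789')

# Validate the format only, without writing the data:
# consumer = TDAnalytics::DebugConsumer.new('https://YOUR_SERVER_URL', 'YOUR_APP_ID', false)
```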
@@ -1,21 +1,16 @@
1
1
  module TDAnalytics
2
2
 
3
- # TD Analytics SDK 的错误
4
3
  TDAnalyticsError = Class.new(StandardError)
5
4
 
6
- # 参数不合法
7
5
  IllegalParameterError = Class.new(TDAnalyticsError)
8
6
 
9
- # 网络连接错误
10
7
  ConnectionError = Class.new(TDAnalyticsError)
11
8
 
12
- # 服务器返回错误
13
9
  ServerError = Class.new(TDAnalyticsError)
14
10
 
15
11
 
16
- # 默认情况下,所有异常都不会被抛出。如果希望自己处理异常,可以实现继承自 ErrorHandler 的
17
- # 错误处理类,并在初始化 SDK 的时候作为参数传入.
18
- # 例如:
12
+ # usage example:
13
+ #
19
14
  # class MyErrorHandler < TDAnalytics::ErrorHandler
20
15
  # def handle(error)
21
16
  # puts error
@@ -2,14 +2,25 @@ require 'logger'
2
2
  require 'thinkingdata-ruby/errors'
3
3
 
4
4
  module TDAnalytics
5
- # 将数据写入本地文件, 需配合 LogBus 将数据上传到服务器
6
- # 由于 LogBus 有完善的失败重传机制,因此建议用户首先考虑此方案
5
+
6
+ # Logger subclass that suppresses the log header normally written when a new log file is created
7
+ class HeadlessLogger < Logger
8
+ def initialize(logdev, shift_age = 0, shift_size = 1048576)
9
+ super(nil )
10
+ if logdev
11
+ @logdev = HeadlessLogger::LogDevice.new(logdev, shift_age: shift_age, shift_size: shift_size)
12
+ end
13
+ end
14
+
15
+ class LogDevice < ::Logger::LogDevice
16
+ def add_log_header(file); end
17
+ end
18
+ end
19
+
20
+ # write data to local log files; used together with LogBus to upload the data to the server
7
21
  class LoggerConsumer
8
- # LoggerConsumer 构造函数
9
- # log_path: 日志文件存放目录
10
- # mode: 日志文件切分模式,可选 daily/hourly
11
- # prefix: 日志文件前缀,默认为 'tda.log', 日志文件名格式为: tda.log.2019-11-15
12
- def initialize(log_path='.', mode='daily', prefix:'tda.log')
22
+
23
+ def initialize(log_path='.', mode='daily', prefix:'te.log')
13
24
  case mode
14
25
  when 'hourly'
15
26
  @suffix_mode = '%Y-%m-%d-%H'
@@ -22,9 +33,8 @@ module TDAnalytics
22
33
  raise IllegalParameterError.new("prefix couldn't be empty") if prefix.nil? || prefix.length == 0
23
34
 
24
35
  @current_suffix = Time.now.strftime(@suffix_mode)
25
-
26
- @full_prefix = "#{log_path}/#{prefix}."
27
-
36
+ @log_path = log_path
37
+ @full_prefix = "#{log_path}/#{prefix}"
28
38
  _reset
29
39
  end
30
40
 
@@ -37,17 +47,16 @@ module TDAnalytics
37
47
  @logger.info(msg.to_json)
38
48
  end
39
49
 
40
- # 关闭 logger
41
50
  def close
42
51
  @logger.close
43
52
  end
44
53
 
45
54
  private
46
55
 
47
- # 重新创建 logger 对象. LogBus 判断新文件会同时考虑文件名和 inode,因此默认的切分方式会导致数据重传
48
56
  def _reset
49
- @logger = Logger.new("#{@full_prefix}#{@current_suffix}")
50
- @logger.level = Logger::INFO
57
+ Dir::mkdir(@log_path) unless Dir::exist?(@log_path)
58
+ @logger = HeadlessLogger.new("#{@full_prefix}.#{@current_suffix}")
59
+ @logger.level = HeadlessLogger::INFO
51
60
  @logger.formatter = proc do |severity, datetime, progname, msg|
52
61
  "#{msg}\n"
53
62
  end
@@ -3,34 +3,39 @@ require 'thinkingdata-ruby/errors'
3
3
  require 'thinkingdata-ruby/version'
4
4
 
5
5
  module TDAnalytics
6
- # TDAnalytics::Tracker 是数据上报的核心类,使用此类上报事件数据和更新用户属性.
7
- # 创建 Tracker 类需要传入 consumer 对象,consumer 决定了如何处理格式化的数据(存储在本地日志文件还是上传到服务端).
8
- #
9
- # ta = TDAnalytics::Tracker.new(consumer)
10
- # ta.track('your_event', distinct_id: 'distinct_id_of_user')
11
- #
12
- # TDAnalytics 提供了三种 consumer 实现:
13
- # LoggerConsumer: 数据写入本地文件
14
- # DebugConsumer: 数据逐条、同步的发送到服务端,并返回详细的报错信息
15
- # BatchConsumer: 数据批量、同步的发送到服务端
16
- #
17
- # 您也可以传入自己实现的 Consumer,只需实现以下接口:
18
- # add(message): 接受 hash 类型的数据对象
19
- # flush: (可选) 将缓冲区的数据发送到指定地址
20
- # close: (可选) 程序退出时用户可以主动调用此接口以保证安全退出
21
- class Tracker
6
+ @is_enable_log = false
7
+ @is_stringent = false
8
+
9
+ def self.set_enable_log(enable)
10
+ unless [true, false].include? enable
11
+ enable = false
12
+ end
13
+ @is_enable_log = enable
14
+ end
15
+
16
+ def self.get_enable_log
17
+ @is_enable_log
18
+ end
19
+
20
+ def self.set_stringent(enable)
21
+ unless [true, false].include? enable
22
+ enable = false
23
+ end
24
+ @is_stringent = enable
25
+ end
26
+
27
+ def self.get_stringent
28
+ @is_stringent
29
+ end
22
30
 
31
+ class Tracker
23
32
  LIB_PROPERTIES = {
24
33
  '#lib' => 'ruby',
25
34
  '#lib_version' => TDAnalytics::VERSION,
26
35
  }
27
36
 
28
- # SDK 构造函数,传入 consumer 对象
29
- #
30
- # 默认情况下,除参数不合法外,其他 Error 会被忽略,如果您希望自己处理接口调用中的 Error,可以传入自定义的 error handler.
31
- # ErrorHandler 的定义可以参考 thinkingdata-ruby/errors.rb
32
- #
33
- # uuid 如果为 true,每条数据都会被带上随机 UUID 作为 #uuid 属性的值上报,该值不会入库,仅仅用于后台做数据重复检测
37
+ @@dynamic_block = nil
38
+
34
39
  def initialize(consumer, error_handler = nil, uuid: false)
35
40
  @error_handler = error_handler || ErrorHandler.new
36
41
  @consumer = consumer
@@ -38,10 +43,9 @@ module TDAnalytics
38
43
  @uuid = uuid
39
44
  end
40
45
 
41
- # 设置公共事件属性,公共事件属性是所有事件都会带上的属性. 此方法会将传入的属性与当前公共属性合并.
42
- # 如果希望跳过本地格式校验,可以传入值为 true 的 skip_local_check 参数
46
+ # set common (super) properties; they are merged into every event
43
47
  def set_super_properties(properties, skip_local_check = false)
44
- unless skip_local_check || _check_properties(:track, properties)
48
+ unless TDAnalytics::get_stringent == false || skip_local_check || _check_properties(:track, properties)
45
49
  @error_handler.handle(IllegalParameterError.new("Invalid super properties"))
46
50
  return false
47
51
  end
@@ -54,20 +58,28 @@ module TDAnalytics
54
58
  end
55
59
  end
56
60
 
57
- # 清除公共事件属性
58
61
  def clear_super_properties
59
62
  @super_properties = {}
60
63
  end
61
64
 
62
- # 上报事件. 每个事件都包含一个事件名和 Hash 对象的时间属性. 其参数说明如下:
63
- # event_name: (必须) 事件名 必须是英文字母开头,可以包含字母、数字和 _, 长度不超过 50 个字符.
64
- # distinct_id: (可选) 访客 ID
65
- # account_id: (可选) 账号ID distinct_id 和 account_id 不能同时为空
66
- # properties: (可选) Hash 事件属性。支持四种类型的值:字符串、数值、Time、boolean
67
- # time: (可选)Time 事件发生时间,如果不传默认为系统当前时间
68
- # ip: (可选) 事件 IP,如果传入 IP 地址,后端可以通过 IP 地址解析事件发生地点
69
- # skip_local_check: (可选) boolean 表示是否跳过本地检测
70
- def track(event_name: nil, distinct_id: nil, account_id: nil, properties: {}, time: nil, ip: nil, skip_local_check: false)
65
+ def set_dynamic_super_properties(&block)
66
+ @@dynamic_block = block
67
+ end
68
+
69
+ def clear_dynamic_super_properties
70
+ @@dynamic_block = nil
71
+ end
72
+
73
+ # report ordinary event
74
+ # event_name: (required) a string of up to 50 letters and digits that starts with '#' or a letter
75
+ # distinct_id: (optional) distinct ID
76
+ # account_id: (optional) account ID. distinct_id and account_id cannot both be empty.
77
+ # properties: (optional) values of type String, Number, Time, Boolean, or Array
78
+ # time: (optional) event time as a Time object; defaults to the current time
79
+ # ip: (optional) IP address of the event
80
+ # first_check_id: (optional) cannot be nil when reporting a first event
81
+ # skip_local_check: (optional) whether to skip local data validation
82
+ def track(event_name: nil, distinct_id: nil, account_id: nil, properties: {}, time: nil, ip: nil,first_check_id:nil, skip_local_check: false)
71
83
  begin
72
84
  _check_name event_name
73
85
  _check_id(distinct_id, account_id)
@@ -79,21 +91,44 @@ module TDAnalytics
79
91
  return false
80
92
  end
81
93
 
82
- data = {}
83
- data[:event_name] = event_name
84
- data[:distinct_id] = distinct_id if distinct_id
85
- data[:account_id] = account_id if account_id
86
- data[:time] = time if time
87
- data[:ip] = ip if ip
88
- data[:properties] = properties
94
+ _internal_track(:track, event_name: event_name, distinct_id: distinct_id, account_id: account_id, properties: properties, time: time, ip: ip, first_check_id: first_check_id)
95
+ end
89
96
 
90
- _internal_track(:track, data)
97
+ # report overwritable event
98
+ def track_overwrite(event_name: nil,event_id: nil, distinct_id: nil, account_id: nil, properties: {}, time: nil, ip: nil, skip_local_check: false)
99
+ begin
100
+ _check_name event_name
101
+ _check_event_id event_id
102
+ _check_id(distinct_id, account_id)
103
+ unless skip_local_check
104
+ _check_properties(:track_overwrite, properties)
105
+ end
106
+ rescue TDAnalyticsError => e
107
+ @error_handler.handle(e)
108
+ return false
109
+ end
110
+
111
+ _internal_track(:track_overwrite, event_name: event_name, event_id: event_id, distinct_id: distinct_id, account_id: account_id, properties: properties, time: time, ip: ip)
91
112
  end
92
113
 
93
- # 设置用户属性. 如果出现同名属性,则会覆盖之前的值.
94
- # distinct_id: (可选) 访客 ID
95
- # account_id: (可选) 账号ID distinct_id 和 account_id 不能同时为空
96
- # properties: (可选) Hash 用户属性。支持四种类型的值:字符串、数值、Time、boolean
114
+ # report updatable event
115
+ def track_update(event_name: nil,event_id: nil, distinct_id: nil, account_id: nil, properties: {}, time: nil, ip: nil, skip_local_check: false)
116
+ begin
117
+ _check_name event_name
118
+ _check_event_id event_id
119
+ _check_id(distinct_id, account_id)
120
+ unless skip_local_check
121
+ _check_properties(:track_update, properties)
122
+ end
123
+ rescue TDAnalyticsError => e
124
+ @error_handler.handle(e)
125
+ return false
126
+ end
127
+
128
+ _internal_track(:track_update, event_name: event_name, event_id: event_id, distinct_id: distinct_id, account_id: account_id, properties: properties, time: time, ip: ip)
129
+ end
130
+
131
+ # set user properties. Existing properties with the same name are overwritten.
97
132
  def user_set(distinct_id: nil, account_id: nil, properties: {}, ip: nil)
98
133
  begin
99
134
  _check_id(distinct_id, account_id)
@@ -103,15 +138,10 @@ module TDAnalytics
103
138
  return false
104
139
  end
105
140
 
106
- _internal_track(:user_set,
107
- distinct_id: distinct_id,
108
- account_id: account_id,
109
- properties: properties,
110
- ip: ip,
111
- )
141
+ _internal_track(:user_set, distinct_id: distinct_id, account_id: account_id, properties: properties, ip: ip)
112
142
  end
113
143
 
114
- # 设置用户属性. 如果有重名属性,则丢弃, 参数与 user_set 相同
144
+ # set user properties once. If a property has already been set, this message is ignored.
115
145
  def user_set_once(distinct_id: nil, account_id: nil, properties: {}, ip: nil)
116
146
  begin
117
147
  _check_id(distinct_id, account_id)
@@ -129,7 +159,7 @@ module TDAnalytics
129
159
  )
130
160
  end
131
161
 
132
- # 追加用户的一个或多个列表类型的属性
162
+ # append values to user properties of Array type
133
163
  def user_append(distinct_id: nil, account_id: nil, properties: {})
134
164
  begin
135
165
  _check_id(distinct_id, account_id)
@@ -146,7 +176,23 @@ module TDAnalytics
146
176
  )
147
177
  end
148
178
 
149
- # 删除用户属性, property 可以传入需要删除的用户属性的 key 值,或者 key 值数组
179
+ def user_uniq_append(distinct_id: nil, account_id: nil, properties: {})
180
+ begin
181
+ _check_id(distinct_id, account_id)
182
+ _check_properties(:user_uniq_append, properties)
183
+ rescue TDAnalyticsError => e
184
+ @error_handler.handle(e)
185
+ return false
186
+ end
187
+
188
+ _internal_track(:user_uniq_append,
189
+ distinct_id: distinct_id,
190
+ account_id: account_id,
191
+ properties: properties,
192
+ )
193
+ end
194
+
195
+ # delete the specified user properties; property accepts a single key or an array of keys
150
196
  def user_unset(distinct_id: nil, account_id: nil, property: nil)
151
197
  properties = {}
152
198
  if property.is_a?(Array)
@@ -172,10 +218,7 @@ module TDAnalytics
172
218
  )
173
219
  end
174
220
 
175
- # 累加用户属性, 如果用户属性不存在,则会设置为 0,然后再累加
176
- # distinct_id: (可选) 访客 ID
177
- # account_id: (可选) 账号ID distinct_id 和 account_id 不能同时为空
178
- # properties: (可选) Hash 数值类型的用户属性
221
+ # accumulate numeric user properties; a property not yet set defaults to 0 before adding
179
222
  def user_add(distinct_id: nil, account_id: nil, properties: {})
180
223
  begin
181
224
  _check_id(distinct_id, account_id)
@@ -192,7 +235,7 @@ module TDAnalytics
192
235
  )
193
236
  end
194
237
 
195
- # 删除用户,用户之前的事件数据不会被删除
238
+ # delete a user. This operation cannot be undone; previously reported events are not deleted.
196
239
  def user_del(distinct_id: nil, account_id: nil)
197
240
  begin
198
241
  _check_id(distinct_id, account_id)
@@ -207,7 +250,7 @@ module TDAnalytics
207
250
  )
208
251
  end
209
252
 
210
- # 立即上报数据,对于 BatchConsumer 会触发上报
253
+ # report data immediately.
211
254
  def flush
212
255
  return true unless defined? @consumer.flush
213
256
  ret = true
@@ -220,7 +263,7 @@ module TDAnalytics
220
263
  ret
221
264
  end
222
265
 
223
- # 退出前调用,保证 Consumer 安全退出
266
+ # Close and exit the SDK
224
267
  def close
225
268
  return true unless defined? @consumer.close
226
269
  ret = true
@@ -235,35 +278,39 @@ module TDAnalytics
235
278
 
236
279
  private
237
280
 
238
- # 出现异常的时候返回 false, 否则 true
239
- def _internal_track(type, properties: {}, event_name: nil, account_id: nil, distinct_id: nil, ip: nil, time: Time.now)
240
- if account_id == nil && distinct_id == nil
241
- raise IllegalParameterError.new('account id or distinct id must be provided.')
281
+ def _internal_track(type, properties: {}, event_name: nil, event_id:nil, account_id: nil, distinct_id: nil, ip: nil,first_check_id: nil, time: nil)
282
+ if type == :track || type == :track_update || type == :track_overwrite
283
+ dynamic_properties = @@dynamic_block.respond_to?(:call) ? @@dynamic_block.call : {}
284
+ properties = LIB_PROPERTIES.merge(@super_properties).merge(dynamic_properties).merge(properties)
242
285
  end
243
286
 
244
- if type == :track
245
- raise IllegalParameterError.new('event name is empty for track') if event_name == nil
246
- properties = {'#zone_offset': time.utc_offset / 3600.0}.merge(LIB_PROPERTIES).merge(@super_properties).merge(properties)
247
- end
287
+ data = {
288
+ '#type' => type,
289
+ }
248
290
 
249
- # 格式化 Time 类型
250
291
  properties.each do |k, v|
251
292
  if v.is_a?(Time)
252
293
  properties[k] = _format_time(v)
253
294
  end
254
295
  end
255
296
 
256
- data = {
257
- '#type' => type,
258
- '#time' => _format_time(time),
259
- 'properties' => properties,
260
- }
297
+ _move_preset_properties([:'#ip', :"#time", :"#app_id", :"#uuid"], data, properties: properties)
298
+
299
+ if data[:'#time'] == nil
300
+ if time == nil
301
+ time = Time.now
302
+ end
303
+ data[:'#time'] = _format_time(time)
304
+ end
261
305
 
262
- data['#event_name'] = event_name if type == :track
306
+ data['properties'] = properties
307
+ data['#event_name'] = event_name if (type == :track || type == :track_update || type == :track_overwrite)
308
+ data['#event_id'] = event_id if (type == :track_update || type == :track_overwrite)
263
309
  data['#account_id'] = account_id if account_id
264
310
  data['#distinct_id'] = distinct_id if distinct_id
265
311
  data['#ip'] = ip if ip
266
- data['#uuid'] = SecureRandom.uuid if @uuid
312
+ data['#first_check_id'] = first_check_id if first_check_id
313
+ data[:'#uuid'] = SecureRandom.uuid if @uuid and data[:'#uuid'] == nil
267
314
 
268
315
  ret = true
269
316
  begin
@@ -276,33 +323,44 @@ module TDAnalytics
276
323
  ret
277
324
  end
278
325
 
279
- # 将 Time 类型格式化为数数指定格式的字符串
280
326
  def _format_time(time)
281
327
  time.strftime("%Y-%m-%d %H:%M:%S.#{((time.to_f * 1000.0).to_i % 1000).to_s.rjust(3, "0")}")
282
328
  end
283
329
 
284
- # 属性名或者事件名检查
330
+ def _check_event_id(event_id)
331
+ if TDAnalytics::get_stringent == false
332
+ return true
333
+ end
334
+
335
+ raise IllegalParameterError.new("the event_id or property cannot be nil") if event_id.nil?
336
+ true
337
+ end
338
+
285
339
  def _check_name(name)
340
+ if TDAnalytics::get_stringent == false
341
+ return true
342
+ end
343
+
286
344
  raise IllegalParameterError.new("the name of event or property cannot be nil") if name.nil?
287
345
 
288
346
  unless name.instance_of?(String) || name.instance_of?(Symbol)
289
347
  raise IllegalParameterError.new("#{name} is invalid. It must be String or Symbol")
290
348
  end
291
-
292
- unless name =~ /^[a-zA-Z][a-zA-Z0-9_]{1,49}$/
293
- raise IllegalParameterError.new("#{name} is invalid. It must be string starts with letters and contains letters, numbers, and _ with max length of 50")
294
- end
295
349
  true
296
350
  end
297
351
 
298
- # 属性类型检查
299
352
  def _check_properties(type, properties)
353
+ if TDAnalytics::get_stringent == false
354
+ return true
355
+ end
356
+
300
357
  unless properties.instance_of? Hash
301
358
  return false
302
359
  end
303
360
 
304
361
  properties.each do |k, v|
305
362
  _check_name k
363
+ next if v.nil?
306
364
  unless v.is_a?(Integer) || v.is_a?(Float) || v.is_a?(Symbol) || v.is_a?(String) || v.is_a?(Time) || !!v == v || v.is_a?(Array)
307
365
  raise IllegalParameterError.new("The value of properties must be type in Integer, Float, Symbol, String, Array,and Time")
308
366
  end
@@ -321,16 +379,30 @@ module TDAnalytics
321
379
  true
322
380
  end
323
381
 
324
- # 检查用户 ID 合法性
325
382
  def _check_id(distinct_id, account_id)
383
+ if TDAnalytics::get_stringent == false
384
+ return true
385
+ end
386
+
326
387
  raise IllegalParameterError.new("account id or distinct id must be provided.") if distinct_id.nil? && account_id.nil?
388
+ end
327
389
 
328
- unless distinct_id.nil?
329
- raise IllegalParameterError.new("The length of distinct id should in (0, 64]") if distinct_id.to_s.length < 1 || distinct_id.to_s.length > 64
330
- end
390
+ def _move_preset_properties(keys, data, properties: {})
391
+ property_keys = properties.keys
392
+ keys.each { |k|
393
+ if property_keys.include? k
394
+ data[k] = properties[k]
395
+ properties.delete(k)
396
+ end
397
+ }
398
+ end
399
+ end
331
400
 
332
- unless account_id.nil?
333
- raise IllegalParameterError.new("The length of account id should in (0, 64]") if account_id.to_s.length < 1 || account_id.to_s.length > 64
401
+ class TELog
402
+ def self.info(*msg)
403
+ if TDAnalytics::get_enable_log
404
+ print("[ThinkingEngine][#{Time.now}][info]-")
405
+ puts(msg)
334
406
  end
335
407
  end
336
408
  end
@@ -1,3 +1,3 @@
1
1
  module TDAnalytics
2
- VERSION = '1.1.0'
2
+ VERSION = '1.2.1'
3
3
  end
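
Tying the tracker changes together, a sketch of the new behaviour: preset keys ('#ip', '#time', '#uuid', '#app_id') passed inside `properties` are lifted to the top level of the record by `_move_preset_properties`, and dynamic super properties are re-evaluated for each event, overriding static super properties on a name clash. The consumer, log path, and values below are placeholders:

```ruby
require 'thinkingdata-ruby'

consumer = TDAnalytics::LoggerConsumer.new('./log', 'hourly')
ta = TDAnalytics::Tracker.new(consumer)

# Evaluated once per event; wins over static super properties on conflict.
ta.set_dynamic_super_properties do
  { dynamic_time: Time.now }
end

# '#ip', '#time', '#uuid' and '#app_id' may be supplied inside properties;
# they are moved out of 'properties' into the record before it reaches the consumer.
ta.track(event_name: 'login',
         distinct_id: 'user_1',
         properties: {
           channel: 'organic',
           '#ip': '123.123.123.123',
           '#time': Time.now,
           '#uuid': 'aaabbbccc'
         })

ta.clear_dynamic_super_properties
ta.flush
ta.close
```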
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: thinkingdata-ruby
3
3
  version: !ruby/object:Gem::Version
4
- version: 1.1.0
4
+ version: 1.2.1
5
5
  platform: ruby
6
6
  authors:
7
7
  - ThinkingData
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2020-02-11 00:00:00.000000000 Z
11
+ date: 2023-03-21 00:00:00.000000000 Z
12
12
  dependencies: []
13
13
  description: The official ThinkingData Analytics API for ruby
14
14
  email: sdk@thinkingdata.cn
@@ -16,6 +16,7 @@ executables: []
16
16
  extensions: []
17
17
  extra_rdoc_files: []
18
18
  files:
19
+ - ".gitignore"
19
20
  - CHANGELOG.md
20
21
  - Gemfile
21
22
  - LICENSE
@@ -48,8 +49,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
48
49
  - !ruby/object:Gem::Version
49
50
  version: '0'
50
51
  requirements: []
51
- rubyforge_project:
52
- rubygems_version: 2.5.2.3
52
+ rubygems_version: 3.0.9
53
53
  signing_key:
54
54
  specification_version: 4
55
55
  summary: Official ThinkingData Analytics API for ruby