redtrack 0.0.1

@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+ metadata.gz: b09961b41ddfee0668b0c3afc544d7f56618592b
+ data.tar.gz: d209715373eef894cf6c4cb70141edea015c1916
+ SHA512:
+ metadata.gz: e03069f4f3ee8ba7a8e02b267ee2cd142dfcdb8dc81d33131b523d5957a4a25a152fac71fb649d31a7d529144e2d3fb498e4a22e69b7d0b4128ddcb3bd00a1eb
+ data.tar.gz: 352dc2543e113fdf26e42f393b5221ab45aaccec76743b9f85bfe1f9032f4d7aa6135a7a9f81ab0368a24359f69f269d4134916543ae205d03c5c1c24a81f52c
@@ -0,0 +1,3 @@
+ # Intellij
+ .idea/
+ *.iml
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in redtrack.gemspec
+ gemspec
data/LICENSE ADDED
@@ -0,0 +1,22 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2014 Red Hot Labs
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
@@ -0,0 +1,173 @@
+ RedTrack
+ ========
+ RedTrack provides infrastructure for tracking events and loading them into [AWS Redshift](http://aws.amazon.com/redshift/), using [AWS Kinesis](http://aws.amazon.com/kinesis/) as a data broker. For more information on its motivation, design goals, and architecture, please see this blog post:
+
+ # Installation / Dependencies
+
+ Add to your Gemfile:
+ ```
+ gem 'redtrack', git: 'git://github.com/redhotlabs/redtrack.git'
+ ```
+
+ Once installed, the library can be used by requiring it:
+ ```
+ require 'redtrack'
+ ```
+
+ You need a Redshift cluster. If you don't have one, launch one starting here: [Redshift AWS console](https://console.aws.amazon.com/redshift/home)
+
+ # Getting Started
+
+ A full application example showing usage is available here: https://github.com/lrajlich/sinatra_example
+
+ RedTrack is used through a client object. To get started, configure and create a RedTrack client, ensure you have the proper AWS resources provisioned and configured, and then call the APIs.
+
+ ### Configure & Create RedTrack client
+ To construct a client object, pass a hash of options, [documented in the next section](https://github.com/redhotlabs/redtrack/blob/master/README.md#constructor-options), to its constructor:
+ ```ruby
+ redtrack_options = {
+   :PARAMETER_NAME => PARAMETER_VALUE
+   ...
+ }
+ redtrack_client = RedTrack::Client.new(redtrack_options)
+ ...
+ ```
+
+ ##### Constructor options
+ ```:access_key_id``` Required. String. Passed to the [aws ruby sdk](https://github.com/aws/aws-sdk-ruby)<br/>
+ ```:secret_access_key``` Required. String. Passed to the [aws ruby sdk](https://github.com/aws/aws-sdk-ruby)<br/>
+ ```:s3_bucket``` Required. String. Name of the bucket used to store file uploads. Must be in the same region as the Redshift cluster.<br/>
+ ```:region``` Required. String. AWS region. Passed to the aws-sdk.<br/>
+ ```:redshift_cluster_name``` Required. String. Name of the Redshift cluster, from the Redshift cluster configuration<br/>
+ ```:redshift_host``` Required. String. The Endpoint under Cluster Database Properties in the Redshift cluster configuration<br/>
+ ```:redshift_port``` Required. String. The Port under Cluster Database Properties in the Redshift cluster configuration. Default is 5439<br/>
+ ```:redshift_dbname``` Required. String. The Database Name under Cluster Database Properties in the Redshift cluster configuration<br/>
+ ```:redshift_user``` Required. String. The Master Username under Cluster Database Properties in the Redshift cluster configuration<br/>
+ ```:redshift_password``` Required. String. Password for the above user<br/>
+ ```:redshift_schema``` Required. Hash. Schema definition for Redshift. For more information, see the [Redshift Schema section](https://github.com/redhotlabs/redtrack#redshift-schema)<br/>
+ ```:kinesis_enabled``` Required. Bool. When true, uses Kinesis as the data broker. When false, writes to a local file instead of Kinesis (use this configuration for development only).<br/>
+
+ For an example / template configuration, see [example configuration](https://github.com/lrajlich/sinatra_example/blob/master/configuration.rb)
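As an illustrative sketch, a filled-in options hash might look like the following. Every value here is a placeholder (not a real bucket, host, or credential) that you would replace with your own settings:

```ruby
# Hypothetical RedTrack client configuration; all values are placeholders.
redtrack_options = {
  :access_key_id => 'YOUR_AWS_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_AWS_SECRET_ACCESS_KEY',
  :s3_bucket => 'your-redtrack-uploads',  # must be in same region as the cluster
  :region => 'us-east-1',
  :redshift_cluster_name => 'your-cluster',
  :redshift_host => 'your-cluster.abc123xyz.us-east-1.redshift.amazonaws.com',
  :redshift_port => '5439',
  :redshift_dbname => 'analytics',
  :redshift_user => 'master',
  :redshift_password => 'YOUR_PASSWORD',
  :redshift_schema => {},        # see the Redshift Schema section below
  :kinesis_enabled => false      # file-backed broker; development only
}
```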
+
+ ### Creating AWS resources
+
+ RedTrack depends on a number of AWS resources being provisioned and configured:
+
+ ###### 1) Redshift cluster
+ This has to be done manually via the [Redshift AWS console](https://console.aws.amazon.com/redshift/home)
+
+ ###### 2) Redshift Database
+ Make sure the configuration parameter ```redshift_dbname``` has a corresponding database in Redshift, otherwise loading events will fail. By default, your Redshift cluster will have a database created along with the cluster. You can create additional databases with ```psql``` using the ```CREATE DATABASE``` command.
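As a sketch (assuming the pg gem, which RedTrack already depends on), an additional database could be created like this; the connection settings are placeholders:

```ruby
# Build the CREATE DATABASE statement (unquoted identifiers are folded to
# lowercase by Redshift, so normalize up front).
def create_database_sql(dbname)
  "CREATE DATABASE #{dbname.downcase}"
end

# Hypothetical usage against a live cluster (requires the pg gem):
# require 'pg'
# conn = PG.connect(:host => 'your-cluster.abc123xyz.us-east-1.redshift.amazonaws.com',
#                   :port => 5439, :dbname => 'dev',
#                   :user => 'master', :password => 'YOUR_PASSWORD')
# conn.exec(create_database_sql('analytics'))
```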
+
+ ###### 3) Redshift Tables
+ For every table in your schema, you need to make sure there is a Redshift table with the same name; otherwise, loading events will fail. The RedTrack client provides a helper method for creating these tables:
+ ```ruby
+ redtrack_client.create_table_from_schema('SOME_TABLE_NAME')
+ ```
+
+ An example usage can be seen here: [Create table example](https://github.com/lrajlich/sinatra_example/blob/master/setup_redtrack_aws_resources.rb#L12)
+
+ ###### 4) Kinesis Streams
+ For every table in your schema, you need to make sure there is a Kinesis stream with a name following the convention ```<redshift_cluster_name>.<redshift_db_name>.<table_name>```. RedTrack provides a helper method for creating these streams:
+ ```ruby
+ redtrack_client.create_kinesis_stream_for_table('SOME_TABLE_NAME')
+ ```
+
+ An example usage can be seen here: [Create kinesis stream example](https://github.com/lrajlich/sinatra_example/blob/master/setup_redtrack_aws_resources.rb#L26)
+
+ ###### 5) Tracking Tables
+ Finally, RedTrack keeps internal state to track which events have already been loaded. The ```kinesis_loads``` table has to exist in the database you are loading into. As above, there is a helper method for creating this table:
+ ```ruby
+ redtrack_client.create_kinesis_loads_table()
+ ```
+
+ An example usage can be seen here: [Create kinesis table example](https://github.com/lrajlich/sinatra_example/blob/master/setup_redtrack_aws_resources.rb#L19)
+
+ # Interface
+ RedTrack has two interfaces: Write and Loader. The gist is that the Write API is called inline with application logic and writes events to the broker, while the Loader is called asynchronously by a recurring job to read events from the broker and load them into Redshift. For an overview of the architecture, see: <INSERT LINK HERE>.
+
+ #### Write API
+ Your web application interacts with the Write API inline with web transactions. Write validates the passed data against the RedTrack schema (since the data is loaded asynchronously into Redshift, RedTrack does not validate the write against Redshift directly) and then writes it to the appropriate Kinesis stream.
+
+ A simple example:
+ ```ruby
+ redtrack_client = RedTrack::Client.new(options)
+ data = {
+   :message => "foo",
+   :timestamp => Time.now.to_i
+ }
+ result = redtrack_client.write("SOME_TABLE",data)
+ ```
+
+ For an application example, see [this example usage](https://github.com/lrajlich/sinatra_example/blob/master/app.rb#L34)
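The schema check that ```write``` performs can be illustrated in isolation. The snippet below mirrors the key-intersection logic in ```RedTrack::Client#write```; the table schema and data are made up for the example:

```ruby
# Mirrors the validation in RedTrack::Client#write: any data key that is not
# a column in the table schema causes the write to raise before the event
# ever reaches the broker.
schema = {
  :columns => {
    :message => { :type => 'varchar(128)' },
    :timestamp => { :type => 'integer', :constraint => 'not null' }
  }
}
data = { :message => 'foo', :bogus => 1 }

unknown_keys = data.keys - schema[:columns].keys
# unknown_keys == [:bogus]; write would raise "Data key bogus is not in schema"
```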
+
+ #### Loader
+ The loader runs asynchronously to consume events off the broker and load them into the warehouse. Events are read from Kinesis starting at the last load point, uploaded to S3, and then copied into Redshift. There is a single function, which takes two parameters: a table name and a stream shard index. The stream shard index corresponds to the index in the array of shards returned by a [DescribeStream](http://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStream.html) request.
+
+ A simple example:
+ ```ruby
+ loader = redtrack_client.new_loader()
+ stream_shard_index=0
+ loader_result = loader.load_redshift_from_broker("SOME_TABLE_NAME",stream_shard_index)
+ ```
+ For an application example, see [this load_redshift script example](https://github.com/lrajlich/sinatra_example/blob/master/load_redshift.rb)
+
+ # Redshift Schema
+ One of RedTrack's features is the ability to pass in a schema matching your table schema. RedTrack can validate that passed events match the schema, and it can also generate SQL statements to create a table matching that schema, or create the table directly. For an overview of the available Redshift schema definitions, see [the docs](http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html)
+
+ The schema is passed in as a hash of this form:
+ ```ruby
+ SCHEMA = {
+   :SOME_TABLE_NAME => {
+     :columns => {
+       :SOME_COLUMN_NAME => {
+         :type => 'varchar(32)',
+         :constraint => 'not null'
+       },
+       ... (OTHER COLUMNS)
+     },
+     :sortkey => 'SOME_COLUMN_NAME',
+     :distkey => 'SOME_COLUMN_NAME'
+   },
+   ... (OTHER TABLES)
+ }
+ ```
+
+ A simple example looks like this:
+ ```ruby
+ SCHEMAS = {
+   :test_events => {
+     :columns => {
+       :client_ip => { :type => 'varchar(32)', :constraint => 'not null'},
+       :timestamp => { :type => 'integer', :constraint => 'not null'},
+       :message => { :type => 'varchar(128)' }
+     },
+     :sortkey => 'timestamp'
+   }
+ }
+ ```
+
+ #### Redshift Type Support
+
+ Since RedTrack loads events asynchronously, events are filtered before they are written to the broker in order to avoid COPY errors and to provide direct feedback to the caller of the ```write``` function.
+
+ ```varchar(n)``` Supported. Current behavior is to truncate any strings that exceed the provided length<br/>
+ ```char``` Supported. <br/>
+ ```smallint``` Supported. <br/>
+ ```bigint``` Supported. <br/>
+ ```timestamp``` Partially supported. Not all time formats are supported. Redshift's TIMEFORMAT is very restrictive (simply checking for a valid Ruby time is not sufficient), so this is done via string matching. [Documentation](http://docs.aws.amazon.com/redshift/latest/dg/r_DATEFORMAT_and_TIMEFORMAT_strings.html)<br/>
+ ```decimal``` Supported. Checks that the value is numeric, e.g., that it converts to a float.
+
+ RedTrack type filtering is done [here](https://github.com/redhotlabs/redtrack/blob/master/lib/redtrack_datatypes.rb) and contributions to the filtering logic are welcome.
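The string matching for the timestamp type can be sketched as follows; this mirrors the ```is_redshift_timestamp``` helper in the client, which accepts only the ```YYYY-MM-DD HH:MI:SS``` form:

```ruby
# Mirrors RedTrack's timestamp check: a strict regex rather than Ruby time
# parsing, since Redshift's TIMEFORMAT accepts far fewer layouts than Ruby.
def redshift_timestamp?(value)
  value.is_a?(String) && !value[/\A\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\z/].nil?
end

redshift_timestamp?('2014-07-01 12:30:00')   # => true
redshift_timestamp?('July 1, 2014 12:30pm')  # => false
```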
+
+ #### Unsupported Redshift schema options
+
+ 1) Creating Redshift tables with Redshift column attributes, [from docs](http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html). This includes the following parameters: DEFAULT, IDENTITY, and ENCODE. DISTKEY and SORTKEY will be created as table attributes, but not as column attributes. You can manually set attributes on the columns.
+
+ 2) Creating Redshift tables with table constraints, [from docs](http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html). This includes UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints. You can manually set these values on the table schema.
+
+ 3) Enforcement of unique column constraints, [from docs](http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html). The RedTrack client will not verify that an event's property is actually unique; what will happen is that the events will fail to load.
+
+ # Documentation / Further reading
+
+ Redshift supports a handful of types: [Redshift Types](http://docs.aws.amazon.com/redshift/latest/dg/c_Supported_data_types.html)
@@ -0,0 +1,2 @@
+ require "bundler/gem_tasks"
+
@@ -0,0 +1,16 @@
+ # Copyright (c) 2014 RedHotLabs, Inc.
+ # Licensed under the MIT License
+
+ # Dependent requires
+ require 'logger'
+ require 'aws-sdk'
+ require 'json'
+ require 'pg'
+ require 'time'
+
+ # Require all of redtrack library
+ require 'redtrack_client'
+ require 'redtrack_kinesisclient'
+ require 'redtrack_loader'
+ require 'redtrack_local_file_stream'
+ require 'redtrack_datatypes'
@@ -0,0 +1,286 @@
+ # The Client provides an application interface for redtrack
+ #
+ # Copyright (c) 2014 RedHotLabs, Inc.
+ # Licensed under the MIT License
+
+ module RedTrack
+   class Client
+
+     TAG='RedTrack::Client'
+
+     @broker = nil
+     @redshift_conn = nil
+     @options = nil
+     @data_types = nil
+     @valid_data_types = nil
+
+     @logger = nil
+
+     # Constructor for the client - initialize instance variables
+     #
+     # @param [Hash] options Options to the client - see README.md
+     def initialize(options)
+
+       # Create logger and add to options (passed to other objects)
+       @logger = Logger.new(STDOUT)
+       options[:logger] = @logger
+
+       # Create the appropriate broker
+       if options[:kinesis_enabled] == true
+         @logger.debug("#{TAG} Kinesis enabled. create KinesisClient")
+         @broker = RedTrack::KinesisClient.new(options)
+       else
+         @logger.debug("#{TAG} Kinesis disabled. create FileClient")
+         @broker = RedTrack::FileClient.new(options)
+       end
+
+       # Bind to the interface for checking data types
+       @data_types = RedTrack::DataTypes.new(options)
+       @valid_data_types = @data_types.valid_data_types
+
+       aws_options = {
+         :access_key_id => options[:access_key_id],
+         :secret_access_key => options[:secret_access_key],
+         :region => options[:region]
+       }
+       AWS.config(aws_options)
+
+       @options = options
+     end
+
51
+
52
+ # Create a new loader client
53
+ #
54
+ # @param [Hash] loader_options The options to pass to the loader
55
+ # @return [RedTrack::Loader] The loader client
56
+ def new_loader(loader_options={})
57
+ merged_options = merge_options(loader_options)
58
+
59
+ if @redshift_conn == nil
60
+ @redshift_conn = new_redshift_connection(loader_options)
61
+ end
62
+
63
+ return RedTrack::Loader.new(merged_options,@broker,@redshift_conn)
64
+ end
65
+
66
+ # Create a new redshift connection
67
+ #
68
+ # @param [Hash] connection_options A set of options to pass to PG.connect. Uses options passed to redtrack client by default
69
+ # @return [PG::Connection] Postgres client connection
70
+ def new_redshift_connection(connection_options={})
71
+ merged_options = merge_options(connection_options)
72
+
73
+ @redshift_conn = PG.connect(
74
+ :host => merged_options[:redshift_host],
75
+ :port => merged_options[:redshift_port],
76
+ :dbname => merged_options[:redshift_dbname],
77
+ :user => merged_options[:redshift_user],
78
+ :password => merged_options[:redshift_password])
79
+
80
+ return @redshift_conn
81
+ end
82
+
+     # Check the data to ensure it conforms to the table schema and write to the databroker for the table.
+     # Determines which shard to write to randomly
+     #
+     # @param [String] table The name of the redshift table to write to
+     # @param [Hash] data hash containing data to write to the table. Key is column name
+     # @param [String] partition_key optional, used to determine which kinesis shard to write the data to
+     # @return [Boolean] Whether or not the write succeeded
+     def write(table,data,partition_key=nil)
+
+       ## Get table schema
+       schema = get_table_schema(table)
+
+       if schema == nil
+         raise "Schema does not exist for table name='#{table}'"
+       end
+
+       ## Ensure that the keys in the passed data are symbols (this is what's expected)
+       data.keys.each do |key|
+         if(key.is_a?(Symbol) == false)
+           raise "Data key #{key} is not a symbol!"
+           # TODO: CONVERT string keys to symbols instead of raising
+         end
+       end
+
+       intersection = schema[:columns].keys & data.keys
+
+       ## Validate no data keys are passed that are not in table schema
+       data.keys.each do |key|
+         if(intersection.include?(key) == false)
+           raise "Data key #{key} is not in schema for #{table} table!"
+         end
+       end
+
+       ## Validate that "not null" columns are present
+       schema[:columns].each do |column_name,column|
+         if(column.keys.include?(:constraint) == true && column[:constraint] == "not null" && intersection.include?(column_name) == false)
+           raise "Column #{column_name} is missing from passed data"
+         end
+       end
+
+       ## Validate column types
+       schema[:columns].each do |column_name,column|
+         if(intersection.include?(column_name) == true)
+
+           value = data[column_name.to_sym]
+           column_type = column[:type]
+
+           if column_type["("] != nil
+             type_name = column_type[/(.*)\(.*/,1]
+           else
+             type_name = column_type
+           end
+
+           if @valid_data_types.include? type_name
+             data[column_name.to_sym] = @data_types.send("check_#{type_name}".to_sym,value,column_type,column_name)
+           else
+             raise "Invalid data type #{type_name}. Valid types [#{@valid_data_types.join(",")}]"
+           end
+         end
+       end
+
+       ## Serialize as json; we load the data as JSON into redshift
+       data_string=data.to_json
+
+       ## Write the serialized data string to the broker
+       partition_key = partition_key || rand(100).to_s
+       stream_name = @broker.stream_name(table)
+       result = @broker.stream_write(stream_name, data_string, partition_key)
+
+       return result
+     end
+
+     # Gets a schema hash object for a specific table
+     #
+     # @param [String] table The name of the redshift table
+     # @return [Hash] Hash object containing the column definitions
+     def get_table_schema(table)
+       if (@options[:redshift_schema] == nil)
+         raise 'Must pass :redshift_schema as option when creating RedTrack client'
+       end
+
+       schema = @options[:redshift_schema]
+
+       if schema[table.to_sym]
+         result = schema[table.to_sym]
+       elsif schema["#{table}"]
+         result = schema["#{table}"]
+       end
+
+       return result
+     end
+
+
+     # Returns a SQL statement for creating a Redshift table per the defined schema above
+     #
+     # @param [String] table The name of the table
+     # @param [Boolean] exec Whether to execute the statement
+     # @param [Hash] schema The table schema to use - if not provided, look it up from the passed schema
+     # @return [String] Returns the create table string
+     def create_table_from_schema(table,exec=true,schema=nil)
+
+       if schema == nil
+         schema = get_table_schema(table)
+         if !schema
+           @logger.warn("#{TAG} No schema exists for table #{table}")
+           return false
+         end
+       end
+
+       query = "create table #{table} (\n"
+       schema[:columns].each_with_index do |(column_name,column),index|
+
+         query += "#{column_name} " + column[:type]
+         if column[:constraint] != nil
+           query += " " + column[:constraint]
+         end
+         if index != schema[:columns].size - 1
+           query += ","
+         end
+         query += "\n"
+       end
+       query += ")"
+       if schema[:sortkey] != nil
+         query += "\nsortkey(" + schema[:sortkey] + ");\n"
+       else
+         query += ";\n"
+       end
+
+       if exec
+         conn = new_redshift_connection()
+         result = conn.exec(query)
+       else
+         result = query
+       end
+
+       return result
+     end
+
+     # @return [String] Executes the create table query against redshift and returns the result
+     def create_kinesis_loads_table
+       schema = {
+         :columns => {
+           :stream_name => { :type => 'varchar(64)' },
+           :shard_id => { :type => 'varchar(64)' },
+           :table_name => { :type => 'varchar(64)' },
+           :starting_sequence_number => { :type => 'varchar(64)' },
+           :ending_sequence_number => { :type => 'varchar(64)' },
+           :load_timestamp => { :type => 'timestamp', :constraint => 'not null' }
+         },
+         :sortkey => 'load_timestamp'
+       }
+
+       return create_table_from_schema('kinesis_loads',true,schema)
+     end
+
+     # Create a kinesis stream for the table - use configuration
+     #
+     # @param [String] table The name of the table
+     # @param [integer] shard_count The number of shards in the stream
+     def create_kinesis_stream_for_table(table,shard_count=1)
+       result = false
+       if @options[:kinesis_enabled]
+         result = @broker.create_kinesis_stream_for_table(table,shard_count)
+       else
+         @logger.warn("#{TAG} Kinesis is not enabled. Nothing done.")
+       end
+       return result
+     end
+
+     private
+
+     # Merge options between passed options and the default options in RedTrack client
+     #
+     # @param [Hash] options The set of options passed
+     def merge_options(options)
+       merged_options=@options
+       options.each do |passed_option_key,passed_option_value|
+         merged_options[passed_option_key] = passed_option_value
+       end
+       return merged_options
+     end
+
+     # Determine whether the value is a valid numeric (e.g., a numeric string)
+     #
+     # @param [Numeric] value The value to check as valid numeric
+     # @return [Boolean] Whether or not the value is a numeric
+     def is_numeric(value)
+       Float(value) != nil rescue false
+     end
+
+     # Determine whether the value is a timestamp as defined by redshift. This is more restrictive than ruby parsing b/c of redshift
+     # See: http://docs.aws.amazon.com/redshift/latest/dg/r_DATEFORMAT_and_TIMEFORMAT_strings.html
+     #
+     # @param [String] value The value to check as a valid timestamp; "YYYY-MM-DD HH:mm:ss" is the only accepted format
+     # @return [Boolean] Whether or not the value is a timestamp as accepted by redshift
+     def is_redshift_timestamp(value)
+       if value.is_a?(String) && value[/\A\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\z/] != nil
+         return true
+       end
+       return false
+     end
+
+   end
+ end