dorothy2 0.0.3 → 1.0.0
- data/README.md +30 -6
- data/TODO +21 -0
- data/bin/dorothy_start +7 -6
- data/bin/dparser_start +13 -1
- data/etc/ddl/dorothive.ddl +2 -31
- data/lib/doroParser.rb +30 -23
- data/lib/dorothy2/BFM.rb +5 -13
- data/lib/dorothy2/do-utils.rb +4 -5
- data/lib/dorothy2/version.rb +1 -1
- data/lib/dorothy2.rb +5 -8
- data/lib/mu/xtractr/about.rb +57 -0
- data/lib/mu/xtractr/content.rb +68 -0
- data/lib/mu/xtractr/field.rb +178 -0
- data/lib/mu/xtractr/flow.rb +162 -0
- data/lib/mu/xtractr/flows.rb +118 -0
- data/lib/mu/xtractr/host.rb +87 -0
- data/lib/mu/xtractr/packet.rb +138 -0
- data/lib/mu/xtractr/packets.rb +122 -0
- data/lib/mu/xtractr/service.rb +77 -0
- data/lib/mu/xtractr/stream/http.rb +103 -0
- data/lib/mu/xtractr/stream.rb +132 -0
- data/lib/mu/xtractr/term.rb +73 -0
- data/lib/mu/xtractr/test/stream/tc_http.rb +53 -0
- data/lib/mu/xtractr/test/tc_field.rb +140 -0
- data/lib/mu/xtractr/test/tc_flow.rb +79 -0
- data/lib/mu/xtractr/test/tc_flows.rb +94 -0
- data/lib/mu/xtractr/test/tc_host.rb +116 -0
- data/lib/mu/xtractr/test/tc_packet.rb +110 -0
- data/lib/mu/xtractr/test/tc_packets.rb +84 -0
- data/lib/mu/xtractr/test/tc_service.rb +66 -0
- data/lib/mu/xtractr/test/tc_stream.rb +56 -0
- data/lib/mu/xtractr/test/tc_term.rb +59 -0
- data/lib/mu/xtractr/test/tc_views.rb +118 -0
- data/lib/mu/xtractr/test/tc_xtractr.rb +151 -0
- data/lib/mu/xtractr/test/test.rb +19 -0
- data/lib/mu/xtractr/views.rb +204 -0
- data/lib/mu/xtractr.rb +257 -0
- metadata +32 -4
data/README.md
CHANGED
@@ -13,7 +13,7 @@ Dorothy2 is a continuation of my Bachelor degree's final project ([Dorothy: insi
 The main framework's structure remained almost the same, and it has been fully detailed in my degree's final project or in this short [paper](http://www.honeynet.it/wp-content/uploads/Dorothy/EC2ND-Dorothy.pdf). More information about the whole project can be found on the Italian Honeyproject [website](http://www.honeynet.it).
 
 
-The framework is manly composed by four big elements that can be even executed separately:
+The framework is mainly composed by four big elements that can be even executed separately:
 
 * The Dorothy analysis engine (included in this gem)
 
@@ -33,7 +33,7 @@ The framework is manly composed by four big elements that can be even executed s
 
 The first three modules are (or will be soon) publicly released under GPL 2/3 license as tribute to the the [Honeynet Project Alliance](http://www.honeynet.org).
 All the information generated by the framework - i.e. binary info, timestamps, dissected network analysis - are stored into a postgres DB (Dorothive) in order to be used for further analysis.
-A no-SQL database (CouchDB) is also used to mass
+A no-SQL database (CouchDB) is also used to mass store all the traffic dumps thanks to the [pcapr/xtractr](https://code.google.com/p/pcapr/wiki/Xtractr) technology.
 
 I started to code this project in late 2009 while learning Ruby at the same time. Since then, I´ve been changing/improving it as long as my Ruby coding skills were improving. Because of that, you may find some parts of code not-really-tidy :)
 
@@ -55,6 +55,11 @@ Dorothy needs the following software (not expressly in the same host) in order t
 * [pcapr-local](https://github.com/mudynamics/pcapr-local ) (only used by doroParser)
 * MaxMind libraries (only used by doroParser)
 
+Regarding the Operating System
+
+* Dorothy has been designed to run on any *nix system. So far it was successfully tested on OSX and Linux.
+* The virtual machines used as sandboxes are meant to be Windows based (successfully tested on XP)
+* Only pcapr-local strictly requires Linux, if you want to use a Mac for executing this gem (like I do), install it into the NAM (as this guide suggests)
 
 ## Installation
 
@@ -116,7 +121,7 @@ It is recommended to follow this step2step process:
 
     #gem install pcapr-local
 
-* Start pcapr-local by using the dorothy's account and configure it. When prompted, insert the folder path used to store the network dumps
+* Start pcapr-local by using the dorothy's system account and configure it. When prompted, insert the folder path used to store the network dumps
 
     $startpcapr
     ....
@@ -164,7 +169,7 @@ or
 2. Configure a dedicated postgres user for Dorothy (or use the default postgres user instead, up to you :)
 
 > Note:
-> If you want to use Postgres "as is", and then configure Dorothy to use "postgres"
+> If you want to use Postgres "as is", and then configure Dorothy to use "postgres" default the user, configure a password for this user at least (by default it comes with no password)
 
 3. Install the following packages
 
@@ -222,8 +227,8 @@ The first time you execute Dorothy, it will ask you to fill those information in
   --infoflow, -i: Print the analysis flow
   --source, -s <s>: Choose a source (from the ones defined in etc/sources.yml)
   --daemon, -d: Stay in the background, by constantly pooling datasources
-
-
+  --SandboxUpdate, -S: Update Dorothive with the new Sandbox file
+  --DorothiveInit, -D: (RE)Install the Dorothy Database (Dorothive)
   --help, -h: Show this message
 
 
@@ -231,6 +236,10 @@ The first time you execute Dorothy, it will ask you to fill those information in
 >
     $dorothy_start -v -s malwarefolder
 
+After the execution, if everything went fine, you will find the analysis output (screens/pcap/bin) into the analysis folder that you have configured e.g. dorothy/opt/analyzed/[:digit:]/
+Other information will be stored into Dorothive.
+If executed in daemon mode, Dorothy2 will poll the datasources every X seconds (where X is defined by the "dtimeout:" field in the configuration file) looking for new binaries.
+
 ### DoroParser usage:
 
     $dparser_start [options]
@@ -245,6 +254,9 @@ The first time you execute Dorothy, it will ask you to fill those information in
     $dparser_start -d start
     $dparser_stop
 
+
+After the execution, if everything went fine, doroParser will store all the donwloaded files into the binary's analysis folder e.g. dorothy/opt/analyzed/[:digit:]/downloads
+Other information -i.e. Network data- will be stored into Dorothive.
 If executed in daemon mode, DoroParser will poll the database every X seconds (where X is defined by the "dtimeout:" field in the configuration file) looking for new pcaps that has been inserted.
 
 ###6. Debugging problems
@@ -268,6 +280,18 @@ Below there are some tips about how understand the root-cause of your crash.
 
 ------------------------------------------
 
+## Acknowledgements
+
+Thanks to all the people who have contributed in making the Dorothy2 project up&running:
+
+* Marco C. (research)
+* Davide C. (Dorothive)
+* Andrea V. (WGUI)
+* Domenico C. - Patrizia P. (Dorothive/JDrone)
+* [All](https://www.honeynet.it/research) the graduating students from [UniMI](http://cdlonline.di.unimi.it/) who have contributed.
+* Sabrina P. (our students "headhunter" :)
+* Jorge C. and Nelson M. (betatesting/first release feedbacks)
+
 ## Contributing
 
 1. Fork it
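The daemon-mode behaviour described in the README text above (poll the datasources every "dtimeout" seconds) can be sketched as a plain Ruby loop. This is a minimal sketch, not Dorothy's actual daemon: `check_sources` is a hypothetical stand-in for the binary-fetching step, and the real daemon runs until stopped rather than for a fixed number of rounds.

```ruby
# Minimal sketch of the daemon-mode polling cycle. `check_sources` is a
# hypothetical stand-in that returns the binaries found in one pass.
def poll_datasources(dtimeout, rounds)
  found = []
  rounds.times do
    found.concat(check_sources)  # look for new binaries
    sleep dtimeout               # "every X seconds" ("dtimeout:" in the config)
  end
  found
end
```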
data/TODO
ADDED
@@ -0,0 +1,21 @@
+##############
+#DOROTHY-TODO#
+##############
+
+-PORT TO Ruby 2.0
+-WGUI
+
+-BINARY STATIC ANALYSIS
+-ANALYZE SYSTEM CHANGES
+-SYSTEM ANALYSIS -VMWARE API: QueryChangedDiskAreas
+-LIST PROCESSES-> pm.ListProcessesInGuest(:vm => vm, :auth => auth).inspect
+
+-CODE- CATCH CTRL-C AND EXIT GRACEFULLY
+-INTERACTIVE CONSOLE FOR NETWORK ANALYSIS
+
+-REVIEW DOROTHIVE (binary fullpath?)
+
+-ADD EMAIL AS SOURCETYPE (use ruby mail gem for retreiving the emails, and parse them)
+
+-REPORT PLUGIN
+-REPORT - MAEC
data/bin/dorothy_start
CHANGED
@@ -37,7 +37,7 @@ opts = Trollop.options do
   opt :source, "Choose a source (from the ones defined in etc/sources.yml)", :type => :string
   opt :daemon, "Stay in the backround, by constantly pooling datasources"
   opt :SandboxUpdate, "Update Dorothive with the new Sandbox file"
-  opt :DorothiveInit, "(RE)Install the Dorothy Database (Dorothive)"
+  opt :DorothiveInit, "(RE)Install the Dorothy Database (Dorothive)", :type => :string
 
 end
 
@@ -103,6 +103,12 @@ end
 sfile = home + '/etc/sources.yml'
 sboxfile = home + '/etc/sandboxes.yml'
 
+if opts[:DorothiveInit]
+  Util.init_db(opts[:DorothiveInit])
+  puts "[Dorothy]".yellow + " Database loaded, now you can restart Dorothy!"
+  exit(0)
+end
+
 #INIT DB Connector
 begin
   db = Insertdb.new
@@ -120,11 +126,6 @@ rescue => e
 end
 
 
-if opts[:DorothiveInit]
-  Util.init_db
-  exit(0)
-end
-
 if opts[:SandboxUpdate]
   puts "[Dorothy]".yellow + " Loading #{sboxfile} into Dorothive"
   DoroConfig.init_sandbox(sboxfile)
data/bin/dparser_start
CHANGED
@@ -63,11 +63,23 @@ LOGGER_PARSER.sev_threshold = DoroSettings.env[:loglevel]
 LOGGER = DoroLogger.new(logout, DoroSettings.env[:logage])
 LOGGER.sev_threshold = DoroSettings.env[:loglevel]
 
+begin
+
+rescue
+  exit(1)
+
+end
+
+
+
 
 begin
   DoroParser.start(daemon)
 rescue => e
   puts "[PARSER]".yellow + " An error occurred: ".red + $!
-
+  if daemon
+    puts "[PARSER]".yellow + " For more information check the logfile" + $!
+    puts "[PARSER]".yellow + "Dorothy-Parser has been stopped"
+  end
   LOGGER_PARSER.error "Parser", "An error occurred: " + $!
   LOGGER_PARSER.debug "Parser", "#{e.inspect} --BACKTRACE: #{e.backtrace}"
   LOGGER_PARSER.info "Parser", "Dorothy-Parser has been stopped"
data/etc/ddl/dorothive.ddl
CHANGED
@@ -128,26 +128,6 @@ CREATE TYPE layer7_protocols AS ENUM (
 
 ALTER TYPE dorothy.layer7_protocols OWNER TO postgres;
 
---
--- Name: sample_type; Type: TYPE; Schema: dorothy; Owner: postgres
---
-
-CREATE TYPE sample_type AS ENUM (
-    'mz',
-    'pe',
-    'elf'
-);
-
-
-ALTER TYPE dorothy.sample_type OWNER TO postgres;
-
---
--- Name: TYPE sample_type; Type: COMMENT; Schema: dorothy; Owner: postgres
---
-
-COMMENT ON TYPE sample_type IS 'Sample file type';
-
-
 --
 -- Name: sanbox_type; Type: TYPE; Schema: dorothy; Owner: postgres
 --
@@ -245,7 +225,6 @@ SELECT pg_catalog.setval('analyses_id_seq', 1, true);
 CREATE TABLE samples (
     hash character(64) NOT NULL,
     size integer NOT NULL,
-    type sample_type,
     path character(256),
     filename character(256),
     md5 character(64),
@@ -276,14 +255,6 @@ COMMENT ON COLUMN samples.hash IS 'SHA256 checksum hash';
 
 COMMENT ON COLUMN samples.size IS 'Sample size';
 
-
---
--- Name: COLUMN samples.type; Type: COMMENT; Schema: dorothy; Owner: postgres
---
-
-COMMENT ON COLUMN samples.type IS 'Sample type';
-
-
 --
 -- Name: CONSTRAINT size_notneg ON samples; Type: COMMENT; Schema: dorothy; Owner: postgres
 --
@@ -298,7 +269,7 @@ COMMENT ON CONSTRAINT size_notneg ON samples IS 'Sample size must not be negativ
 CREATE TABLE traffic_dumps (
     hash character(64) NOT NULL,
     size integer NOT NULL,
-    pcapr_id character(
+    pcapr_id character(32),
     "binary" character varying,
     parsed boolean
 );
@@ -1323,7 +1294,7 @@ COPY roles (id, type, comment) FROM stdin;
 -- Data for Name: samples; Type: TABLE DATA; Schema: dorothy; Owner: postgres
 --
 
-COPY samples (hash, size,
+COPY samples (hash, size, path, filename, md5, long_type) FROM stdin;
 \.
 
 
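A side note on the `character(32)` width chosen above: it exactly fits a hex MD5 digest, which is presumably the shape of the id `calc_pcaprid` (in dorothy2.rb) produces; and because Postgres blank-pads `character(n)` values, the Ruby side calls `.rstrip` on ids read back from the DB. A quick check of the digest length:

```ruby
require 'digest/md5'

# A hex MD5 digest is exactly 32 characters, so it fits character(32)
# without truncation.
Digest::MD5.hexdigest("42-9e107d9d372bb6826bd81d3542a419d6.pcap").length
# => 32
```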
data/lib/doroParser.rb
CHANGED
@@ -17,7 +17,6 @@
 
 
 require 'rubygems'
-require 'mu/xtractr'
 require 'md5'
 require 'rbvmomi'
 require 'rest_client'
@@ -32,15 +31,17 @@ require 'pg'
 require 'iconv'
 require 'tmail'
 require 'ipaddr'
+require 'net/http'
+require 'json'
 
 require File.dirname(__FILE__) + '/dorothy2/environment'
+require File.dirname(__FILE__) + '/mu/xtractr'
 require File.dirname(__FILE__) + '/dorothy2/DEM'
 require File.dirname(__FILE__) + '/dorothy2/do-utils'
 require File.dirname(__FILE__) + '/dorothy2/do-logger'
 require File.dirname(__FILE__) + '/dorothy2/deep_symbolize'
 
 
-
 module DoroParser
 #Host roles
 
@@ -85,12 +86,34 @@ module DoroParser
 
 
       begin
-
+
+        #check if the pcap has been correctly indexed by pcapr
+        xtractr = Doroxtractr.create "http://#{DoroSettings.pcapr[:host]}:#{DoroSettings.pcapr[:port]}/pcaps/1/pcap/#{dump['pcapr_id'].rstrip}"
 
       rescue => e
-        LOGGER_PARSER.fatal "PARSER", "Can't
+        LOGGER_PARSER.fatal "PARSER", "Can't connect to the PCAPR server."
         LOGGER_PARSER.debug "PARSER", "#{$!}"
-        LOGGER_PARSER.debug "PARSER", e
+        LOGGER_PARSER.debug "PARSER", e.backtrace if VERBOSE
+        return false
+      end
+
+      #it may happen that Pcapr has created an instance, but it is still indexing the pcap.
+      #The following section is to avoid a crash while quering such (still-empty instance)
+      #In addition, an added check is inserted, to see if the pcapr instance really match the pcap filename
+      begin
+        pcapr_query = URI.parse "http://#{DoroSettings.pcapr[:host]}:#{DoroSettings.pcapr[:port]}/pcaps/1/about/#{dump['pcapr_id'].rstrip}"
+        pcapr_response = Net::HTTP.get_response(pcapr_query)
+        pcapname = File.basename(JSON.parse(pcapr_response.body)["filename"], ".pcap")
+
+        t ||= $1 if pcapname =~ /[0-9]*\-(.*)$/
+        raise NameError.new if t != dump['sample'].rstrip
+
+      rescue NameError
+        LOGGER_PARSER.error "PARSER", "The pcapr filename mismatchs the one present in Dorothive!. Skipping."
+        next
+
+      rescue
+        LOGGER_PARSER.error "PARSER", "Can't find the PCAP into Pcapr, maybe it has not been indexed yet. Skipping."
         next
       end
 
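The filename cross-check added in the hunk above leans on the new dump naming scheme (dumps are named `<analysis id>-<sample md5>.pcap`, per the dorothy2.rb change in this release): the sample id is whatever follows the leading digits and dash. That extraction is pure string work and can be exercised without a pcapr server; `sample_from_pcapname` is an illustrative helper, not a Dorothy API:

```ruby
# Sketch of the sample-name extraction used in the check above.
def sample_from_pcapname(path)
  pcapname = File.basename(path, ".pcap")
  pcapname =~ /[0-9]*\-(.*)$/ ? $1 : nil
end

sample_from_pcapname("42-9e107d9d372bb6826bd81d3542a419d6.pcap")
# => "9e107d9d372bb6826bd81d3542a419d6"
```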
@@ -103,18 +126,14 @@ module DoroParser
 
       flowdeep = xtractr.flows("flow.id:#{flow.id}")
 
-
-
       #Skipping if NETBIOS spreading activity:
       if flow.dport == 135 or flow.dport == 445
         LOGGER_PARSER.info "PARSER", "Netbios connections, skipping flow" unless NONETBIOS
         next
       end
 
-
       title = flow.title[0..200].gsub(/'/,"") #xtool bug ->')
 
-
       #insert hosts (geo) info into db
       #TODO: check if is a localaddress
       localip = xtractr.flows.first.src.address
@@ -258,10 +277,6 @@ module DoroParser
 
       end
 
-
-
-
-
       end
 
 #case MAIL
@@ -269,8 +284,6 @@ module DoroParser
         LOGGER_PARSER.info "SMTP", "FOUND an SMTP request..".white
         #insert mail
         #by from to subject data id time connection
-
-
         streamdata.each do |m|
           mailfrom = 'null'
           mailto = 'null'
@@ -303,8 +316,6 @@ module DoroParser
           @insertdb.insert("emails", mailvalues )
         end
 
-
-
 #case FTP
       when "FTP" then
         LOGGER_PARSER.info "FTP", "FOUND an FTP request".white
@@ -324,9 +335,7 @@ module DoroParser
         end
       end
 
-
       else
-
         LOGGER_PARSER.info "PARSER", "Unknown traffic, try see if it is IRC traffic"
 
         if Parser.guess(streamdata.inspect).class.inspect =~ /IRC/
@@ -358,14 +367,12 @@ module DoroParser
         end
       end
 
-
       @p.each do |d|
 
         begin
 
           dns = DoroDNS.new(d)
 
-
           dnsvalues = ["default", dns.name, dns.cls_i.inspect, dns.qry?, dns.ttl, flowid, dns.address.to_s, dns.data, dns.type_i.inspect]
 
           LOGGER_PARSER.debug "DB", " Inserting DNS data from #{flow.dst.address.to_s}".blue if VERBOSE
@@ -404,8 +411,8 @@ module DoroParser
 
         rescue => e
 
-          LOGGER_PARSER.error "PARSER", "Error while analyzing flow #{flow.id}"
-          LOGGER_PARSER.debug "PARSER", "#{e.
+          LOGGER_PARSER.error "PARSER", "Error while analyzing flow #{flow.id}: #{e.inspect}"
+          LOGGER_PARSER.debug "PARSER", "#{e.backtrace}" if VERBOSE
           LOGGER_PARSER.info "PARSER", "Flow #{flow.id} will be skipped"
           next
         end
data/lib/dorothy2/BFM.rb
CHANGED
@@ -7,15 +7,15 @@
 ###BINARY FETCHER MODULE###
 ###                     ###
 ###########################
-
+#The BFM module is in charge of retreiving the binary from the sources configured in the sources.yml file.
+#It receive the source hash, and return the downloaded binaries objects.
 module Dorothy
 
-
 class DorothyFetcher
   attr_reader :bins
 
-
-  def initialize(source)
+  #Source struct: Hash, {:dir => "#{HOME}/bins/honeypot", :typeid=> 0 ..}
+  def initialize(source)
     ndownloaded = 0
 
     @bins = []
@@ -26,7 +26,6 @@ module Dorothy
       when "ssh" then
         LOGGER.info "BFM", " Fetching trojan from > Honeypot"
         #file = "/opt/dionaea/var/dionaea/binaries/"
-
         #puts "Start to download malware"
 
         files = []
@@ -37,7 +36,6 @@ module Dorothy
           unless files.include? "#{source["localdir"]}/" + File.basename(name)
             ndownloaded += 1
             files.push "#{source["localdir"]}/" + File.basename(name)
-            # puts ""
           end
           # print "#{File.basename(name)}: #{sent}/#{total}\r"
           # $stdout.flush
@@ -45,26 +43,22 @@ module Dorothy
           LOGGER.info "BFM", "#{ndownloaded} files downloaded"
         end
 
-
       rescue => e
         LOGGER.error "BFM", "An error occurred while downloading malwares from honeypot sensor: " + $!
         LOGGER.error "BFM", "Error: #{$!}, #{e.inspect}, #{e.backtrace}"
       end
 
       #DIRTY WORKAROUND for scp-ing only files without directory
-
       FileUtils.mv(Dir.glob(source["localdir"] + "/binaries/*"), source["localdir"])
       Dir.rmdir(source["localdir"] + "/binaries")
 
 
       begin
-
         unless DoroSettings.env[:testmode]
           Net::SSH.start(source["ip"], source["user"], :password => source["pass"], :port => source["port"]) do |ssh|
             ssh.exec "mv #{source["remotedir"]}/* #{source["remotedir"]}/../analyzed "
           end
         end
-
       rescue
         LOGGER.error "BFM", "An error occurred while erasing parsed malwares in the honeypot sensor: " + $!
       end
@@ -87,8 +81,6 @@ module Dorothy
       end
     end
 
-
-
   private
   def load_malw(f, typeid, sourceinfo = nil)
 
@@ -100,7 +92,7 @@ module Dorothy
       return false
     end
 
-    samplevalues = [bin.sha, bin.size, bin.
+    samplevalues = [bin.sha, bin.size, bin.dir_bin, filename, bin.md5, bin.type ]
    sighvalues = [bin.sha, typeid, bin.ctime, "null"]
 
    begin
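The "DIRTY WORKAROUND" in the BFM hunk above (scp leaves the downloads under a `binaries/` subdirectory, which is then flattened away) boils down to a glob-move plus `rmdir`. Reproduced here against a throwaway directory tree, so it can be run without a honeypot sensor:

```ruby
require 'fileutils'
require 'tmpdir'

# Flatten localdir/binaries/* into localdir, then drop the now-empty dir,
# as the BFM workaround does after the scp download.
Dir.mktmpdir do |localdir|
  FileUtils.mkdir_p("#{localdir}/binaries")
  File.write("#{localdir}/binaries/mal.exe", "MZ")

  FileUtils.mv(Dir.glob("#{localdir}/binaries/*"), localdir)
  Dir.rmdir("#{localdir}/binaries")

  File.exist?("#{localdir}/mal.exe")  # => true
end
```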
data/lib/dorothy2/do-utils.rb
CHANGED
@@ -16,16 +16,16 @@ module Dorothy
       File.exist?(file)
     end
 
-    def init_db(force=false)
-      LOGGER.warn "DB", "The database is going to be initialized, all the data
+    def init_db(ddl=DoroSettings.dorothive[:ddl], force=false)
+      LOGGER.warn "DB", "The database is going to be initialized with the file #{ddl}. If the Dorothive is already present, " + "all the its data will be lost".red + ". Continue?(write yes)"
       answ = "yes"
       answ = gets.chop unless force
 
       if answ == "yes"
         begin
           #ugly, I know, but couldn't find a better and easier way..
-          raise 'An error occurred' unless system "psql -h #{DoroSettings.dorothive[:dbhost]} -U #{DoroSettings.dorothive[:dbuser]} -f #{
-          LOGGER.info "DB", "Database correctly initialized."
+          raise 'An error occurred' unless system "psql -h #{DoroSettings.dorothive[:dbhost]} -U #{DoroSettings.dorothive[:dbuser]} -f #{ddl} 1> /dev/null"
+          LOGGER.info "DB", "Database correctly initialized. Now you can restart Dorothy!"
         rescue => e
           LOGGER.error "DB", $!
           LOGGER.debug "DB", e.inspect
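The rewritten `init_db` above boils down to composing one `psql` command line from the connection settings plus the (now configurable) DDL path, then handing it to `system`. A sketch of just the command composition, with a plain hash standing in for `DoroSettings.dorothive` and a hypothetical helper name:

```ruby
# Sketch of the command string init_db hands to system(); `settings` is a
# stand-in for DoroSettings.dorothive.
def psql_init_cmd(settings, ddl)
  "psql -h #{settings[:dbhost]} -U #{settings[:dbuser]} -f #{ddl} 1> /dev/null"
end

psql_init_cmd({ dbhost: "localhost", dbuser: "postgres" }, "etc/ddl/dorothive.ddl")
# => "psql -h localhost -U postgres -f etc/ddl/dorothive.ddl 1> /dev/null"
```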
@@ -248,7 +248,6 @@ module Dorothy
     @binpath = file
     @filename = File.basename file
     @extension = File.extname file
-    @dbtype = "null" #TODO: remove type column in sample table
 
     File.open(file, 'rb') do |fh1|
       while buffer1 = fh1.read(1024)
data/lib/dorothy2/version.rb
CHANGED
data/lib/dorothy2.rb
CHANGED
@@ -5,7 +5,6 @@
 ##for irb debug:
 ##from $home, irb and :
 ##load 'lib/dorothy2.rb'; include Dorothy; LOGGER = DoroLogger.new(STDOUT, "weekly"); DoroSettings.load!('etc/dorothy.yml')
-#$LOAD_PATH.unshift '/opt/local/lib/ruby/gems/1.8/gems/ruby-filemagic-0.4.2/lib'
 
 require 'net/ssh'
 require 'net/scp'
@@ -152,8 +151,8 @@ module Dorothy
       vsm.copy_file("#{bin.md5}#{bin.extension}",filecontent)
 
       #Start Sniffer
-      dumpname = bin.md5
-      pid = @nam.start_sniffer(guestvm[2],DoroSettings.nam[:interface], dumpname, DoroSettings.nam[:pcaphome])
+      dumpname = anal_id.to_s + "-" + bin.md5
+      pid = @nam.start_sniffer(guestvm[2],DoroSettings.nam[:interface], dumpname, DoroSettings.nam[:pcaphome])
       LOGGER.info "NAM","VM#{guestvm[0]} ".yellow + "Start sniffing module"
       LOGGER.debug "NAM","VM#{guestvm[0]} ".yellow + "Tcpdump instance #{pid} started" if VERBOSE
 
@@ -216,8 +215,7 @@ module Dorothy
 
       #Downloading PCAP
       LOGGER.info "NAM", "VM#{guestvm[0]} ".yellow + "Downloading #{dumpname}.pcap to #{bin.dir_pcap}"
-
-      Ssh.download(DoroSettings.nam[:host], DoroSettings.nam[:user],DoroSettings.nam[:pass], DoroSettings.nam[:pcaphome] + "/" + dumpname + ".pcap", bin.dir_pcap)
+      Ssh.download(DoroSettings.nam[:host], DoroSettings.nam[:user],DoroSettings.nam[:pass], DoroSettings.nam[:pcaphome] + "/#{dumpname}.pcap", bin.dir_pcap)
 
       #Downloading Screenshots from esx
       LOGGER.info "NAM", "VM#{guestvm[0]} ".yellow + "Downloading Screenshots"
@@ -231,11 +229,10 @@ module Dorothy
       #UPDATE DOROTHIBE DB#
       #####################
 
-
-      dump = Loadmalw.new(pcapfile)
+      dump = Loadmalw.new(bin.dir_pcap + dumpname + ".pcap")
 
       #pcaprpath = bin.md5 + "/pcap/" + dump.filename
-      pcaprid = Loadmalw.calc_pcaprid(dump.filename, dump.size)
+      pcaprid = Loadmalw.calc_pcaprid(dump.filename, dump.size).rstrip
 
       LOGGER.debug "NAM", "VM#{guestvm[0]} ".yellow + "Pcaprid: " + pcaprid if VERBOSE
 
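The dorothy2.rb changes above are related: prefixing the dump name with the analysis id keeps two runs of the same sample from colliding on `<md5>.pcap`, and the new `.rstrip` on `calc_pcaprid` trims trailing whitespace before the id is reused elsewhere. The naming scheme itself is a one-liner; `dumpname_for` below is an illustrative helper, not a Dorothy API:

```ruby
# Dump naming after the change: "<analysis id>-<sample md5>", so each
# analysis run gets its own pcap file.
def dumpname_for(anal_id, md5)
  anal_id.to_s + "-" + md5
end

dumpname_for(42, "9e107d9d372bb6826bd81d3542a419d6")
# => "42-9e107d9d372bb6826bd81d3542a419d6"
```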
data/lib/mu/xtractr/about.rb
ADDED
@@ -0,0 +1,57 @@
+# "THE BEER-WARE LICENSE" (Revision 42):
+# Mu[http://www.mudynamics.com] wrote this file. As long as you retain this
+# notice you can do whatever you want with this stuff. If we meet some day,
+# and you think this stuff is worth it, you can buy us a beer in return.
+#
+# All about pcapr
+# * http://www.pcapr.net
+# * http://groups.google.com/group/pcapr-forum
+# * http://twitter.com/pcapr
+#
+# Mu Dynamics
+# * http://www.mudynamics.com
+# * http://labs.mudynamics.com
+
+module Mu
+class Xtractr
+# = About
+# Contains the meta data about the index including the number of packets,
+# flows, hosts, services and also the duration (in seconds) of the indexed
+# pcaps.
+#
+#  xtractr.about.duration
+#  xtractr.about.packets
+class About
+    # Returns the version of the xtractr server
+    attr_reader :version
+
+    # Returns the ##packets in the index
+    attr_reader :packets
+
+    # Returns the ##flows in the index
+    attr_reader :flows
+
+    # Returns the ##hosts in the index
+    attr_reader :hosts
+
+    # Returns the ##services in the index
+    attr_reader :services
+
+    # Returns the total duration of all the pcaps in the index
+    attr_reader :duration
+
+    def initialize json # :nodoc:
+        @version  = json['version']
+        @packets  = json['packets']
+        @flows    = json['flows']
+        @hosts    = json['hosts']
+        @services = json['services']
+        @duration = json['duration']
+    end
+
+    def inspect # :nodoc:
+        "#<about \##{flows} flows, \##{packets} packets>"
+    end
+end
+end # Xtractr
+end # Mu
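`About` is a thin wrapper over the JSON hash the xtractr server returns, so it can be exercised locally by feeding it a literal hash. The sketch below re-declares the class from the diff above (comments trimmed) purely to keep the example self-contained; the sample values are made up:

```ruby
module Mu
  class Xtractr
    # Condensed copy of the About class shown in the diff above.
    class About
      attr_reader :version, :packets, :flows, :hosts, :services, :duration

      def initialize(json)
        @version  = json['version']
        @packets  = json['packets']
        @flows    = json['flows']
        @hosts    = json['hosts']
        @services = json['services']
        @duration = json['duration']
      end

      def inspect
        "#<about \##{flows} flows, \##{packets} packets>"
      end
    end
  end
end

about = Mu::Xtractr::About.new(
  'version' => '4.5', 'packets' => 1000, 'flows' => 59,
  'hosts' => 15, 'services' => 4, 'duration' => 12.5
)
about.inspect  # => "#<about #59 flows, #1000 packets>"
```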
data/lib/mu/xtractr/content.rb
ADDED
@@ -0,0 +1,68 @@
+# "THE BEER-WARE LICENSE" (Revision 42):
+# Mu[http://www.mudynamics.com] wrote this file. As long as you retain this
+# notice you can do whatever you want with this stuff. If we meet some day,
+# and you think this stuff is worth it, you can buy us a beer in return.
+#
+# All about pcapr
+# * http://www.pcapr.net
+# * http://groups.google.com/group/pcapr-forum
+# * http://twitter.com/pcapr
+#
+# Mu Dynamics
+# * http://www.mudynamics.com
+# * http://labs.mudynamics.com
+
+module Mu
+class Xtractr
+# = Content
+# Content is the next level of abstraction beyond Message. When a stream is
+# fetched from xtractr, all registered stream processors are invoked on the
+# various messages. For example, the HTTP content processor, pulls out the
+# response body from HTTP requests and responses, dechunks them and potentially
+# unzips the content. The resulting content represents the HTML file or a JPEG
+# image that can be saved off.
+#
+#  xtractr.packets('http.content.type:gif').first.flow.stream.contents.each do |c|
+#      c.save
+#  end
+class Content
+    # The name of the content (like a jpeg or pdf file).
+    attr_accessor :name
+
+    # The encoding (base64, gzip, deflate, etc) of this content.
+    attr_accessor :encoding
+
+    # The mime type of this content.
+    attr_accessor :type
+
+    # The message from which this content was extracted.
+    attr_reader :message
+
+    # The actual body of the content (gunzip'd PDF file, for example)
+    attr_accessor :body
+
+    def initialize message # :nodoc:
+        @name = "content.#{message.stream.flow.id}.#{message.index}"
+        @type = 'application/unknown'
+        @body = nil
+        @message = message
+    end
+
+    # Save the content to a file. If the filename is not provided then the
+    # content name is used instead. This is a convenience method used for
+    # method chaining.
+    #  flow.stream.contents.first.save
+    def save filename=nil
+        open(filename || name, "w") do |ios|
+            ios.write body
+        end
+        return self
+    end
+
+    def inspect # :nodoc:
+        preview = body[0..32].inspect
+        "#<content #{name} #{type} #{encoding} #{preview}>"
+    end
+end
+end # Xtractr
+end # Mu
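`Content#save` above only needs `#body` and a name, so the save-and-chain behaviour can be tried with stub objects in place of the message/stream/flow chain that xtractr normally supplies. A minimal, self-contained sketch (the `Flow`/`Stream`/`Message` Structs and the condensed `Content` are illustrative stand-ins, not the gem's classes):

```ruby
require 'tmpdir'

# Stand-ins for the message -> stream -> flow chain.
Flow    = Struct.new(:id)
Stream  = Struct.new(:flow)
Message = Struct.new(:index, :stream)

# Condensed Content: name it as the diff does, write #body on save.
class Content
  attr_accessor :name, :body

  def initialize(message)
    @name = "content.#{message.stream.flow.id}.#{message.index}"
  end

  def save(filename = nil)
    File.write(filename || name, body)
    self   # returned for method chaining, as in the original
  end
end

msg = Message.new(0, Stream.new(Flow.new(7)))
c = Content.new(msg)
c.body = "GIF89a..."
c.name  # => "content.7.0"
```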