scooter 0.0.0 → 3.2.19
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +15 -0
- data/.env +5 -0
- data/.gitignore +47 -19
- data/Gemfile +3 -0
- data/HISTORY.md +1539 -0
- data/README.md +69 -10
- data/Rakefile +7 -0
- data/docs/http_dispatchers.md +79 -0
- data/lib/scooter.rb +11 -3
- data/lib/scooter/httpdispatchers.rb +12 -0
- data/lib/scooter/httpdispatchers/activity.rb +46 -0
- data/lib/scooter/httpdispatchers/activity/v1/v1.rb +50 -0
- data/lib/scooter/httpdispatchers/classifier.rb +376 -0
- data/lib/scooter/httpdispatchers/classifier/v1/v1.rb +99 -0
- data/lib/scooter/httpdispatchers/code_manager.rb +31 -0
- data/lib/scooter/httpdispatchers/code_manager/v1/v1.rb +17 -0
- data/lib/scooter/httpdispatchers/consoledispatcher.rb +132 -0
- data/lib/scooter/httpdispatchers/httpdispatcher.rb +168 -0
- data/lib/scooter/httpdispatchers/orchestrator/v1/v1.rb +87 -0
- data/lib/scooter/httpdispatchers/orchestratordispatcher.rb +83 -0
- data/lib/scooter/httpdispatchers/puppetdb/v4/v4.rb +51 -0
- data/lib/scooter/httpdispatchers/puppetdbdispatcher.rb +390 -0
- data/lib/scooter/httpdispatchers/rbac.rb +231 -0
- data/lib/scooter/httpdispatchers/rbac/v1/directory_service.rb +68 -0
- data/lib/scooter/httpdispatchers/rbac/v1/v1.rb +116 -0
- data/lib/scooter/ldap.rb +349 -0
- data/lib/scooter/ldap/ldap_fixtures.rb +60 -0
- data/lib/scooter/middleware/rbac_auth_token.rb +35 -0
- data/lib/scooter/utilities.rb +9 -0
- data/lib/scooter/utilities/beaker_utilities.rb +41 -0
- data/lib/scooter/utilities/string_utilities.rb +32 -0
- data/lib/scooter/version.rb +3 -1
- data/scooter.gemspec +23 -6
- data/spec/scooter/beaker_utilities_spec.rb +53 -0
- data/spec/scooter/httpdispatchers/activity/activity_spec.rb +218 -0
- data/spec/scooter/httpdispatchers/classifier/classifier_spec.rb +542 -0
- data/spec/scooter/httpdispatchers/code_manager/code-manager_spec.rb +67 -0
- data/spec/scooter/httpdispatchers/consoledispatcher_spec.rb +80 -0
- data/spec/scooter/httpdispatchers/httpdispatcher_spec.rb +91 -0
- data/spec/scooter/httpdispatchers/middleware/rbac_auth_token_spec.rb +58 -0
- data/spec/scooter/httpdispatchers/orchestratordispatcher_spec.rb +195 -0
- data/spec/scooter/httpdispatchers/puppetdbdispatcher_spec.rb +246 -0
- data/spec/scooter/httpdispatchers/rbac/rbac_spec.rb +387 -0
- data/spec/scooter/string_utilities_spec.rb +83 -0
- data/spec/spec_helper.rb +8 -0
- metadata +270 -18
- data/LICENSE.txt +0 -15
data/README.md
CHANGED
@@ -1,21 +1,80 @@
 # Scooter
 
-
+#### Table of Contents
 
-
+1. [Overview](#overview)
+2. [Usage](#usage)
+3. [Versioning](#versioning)
+4. [Releasing](#releasing)
+   * [Pushing a new version to the internal rubygems mirror](#pushing-a-new-version-to-the-internal-rubygems-mirror)
+5. [Rdocs](#rdocs)
+6. [Contributing](#contributing)
 
-
+## Overview
 
-
+Scooter is a ruby gem developed by QA to facilitate http traffic between the
+test runner and a Puppet Enterprise installation–specifically the services
+available in the pe-console-services process: Classifier, RBAC, and Activity
+Service.
 
-
+## Usage
 
-
+Scooter only supports versions of Puppet Enterprise 3.7 and higher. Scooter is only available on the internal server, rubygems.delivery.puppetlabs.net.
 
-
+To install Scooter, simply use the gem command with the source flag set to the internal rubygems mirror–remember that you will have to be on Puppet's DNS to see the mirrored gem server.
 
-
+```
+$ gem install scooter --source http://rubygems.delivery.puppetlabs.net
+```
 
-
+Scooter is currently divvied into the following sections:
+
+- [HttpDispatchers](docs/http_dispatchers.md) – These are modules that can be mixed into classes that represent real users: whitelisted certificate users, local console users, or users connected through an LDAP directory. Currently, there is only one dispatcher defined–ConsoleDispatcher–but new dispatchers could be created to facilitate traffic to other products, such as Puppet Server and PuppetDB.
+
+- LDAPdispatcher – This class extends the Net::LDAP library, which is a requirement for RBAC testing with LDAP fixtures.
+- Utilities – Currently, this houses random string generators and convenience methods that use beaker to acquire certificates to impersonate whitelisted certificate users.
+
+## Versioning
+
+Scooter supports semantic versioning, with any 1.x release of scooter supporting all PE versions between 3.7.x and 4.0.
+
+If you are looking for scooter support for PE 4.0, aka shallow gravy, please use any scooter version that is 2.x. Routes to the services changed between PE 3.7 and PE 4.0, requiring a major version bump of scooter between those versions.
+
+## Releasing
+
+The plan is to release Scooter at a regular cadence, probably once a week. Early on, we will release more often, as the port from qatests is not totally complete. Early feedback may significantly change the structure, so be cautious about building any significant dependencies yet. Once the dust has settled, a 1.0 release will be cut and support normal semantic versioning.
+
+Discussion is still ongoing about whether this library will be publicly available on rubygems or not. Please feel free to email the QA team for any further information regarding a public release.
+
+- One issue blocking a public release of Scooter is avoiding the possibility of leaking information about unreleased features/products that Scooter might have information on. This could be mitigated by careful version control of Scooter, releasing it to the public only periodically while releasing internally on a more frequent basis for internal testing.
+
+- Should the gem accept PR's from the public? That seems to require significant overhead in terms of testing and stability of PR's. Perhaps make the gem public without accepting PR's from the public? Make the gem available on rubygems.org while the repo stays private?
+
+### Pushing a new version to the internal rubygems mirror
+
+1. Log into jenkins-qe
+
+2. Trigger the Scooter Release Pipeline
+
+   a. The pipeline is responsible for checking in the version bump, generating a new HISTORY.md file, and creating and pushing the gem
+
+   b. Select an appropriate version number *** WARNING - whatever version number you select will be auto-created and pushed live, BE CAREFUL ***
+
+## Rdocs
+
+Much of the documentation of Scooter is embedded in the ruby code itself, using rdoc standards for documentation. Currently, Puppet does not have an internal server delivering yard documentation; if you wish to view the rdocs, you must build them out yourself after you have downloaded the gem.
+
+```
+prompt:~$ yard server --gems
+
+#>> YARD 0.8.7.4 documentation server at http://0.0.0.0:8808
+#Thin web server (v1.6.2 codename Doc Brown)
+#Maximum connections set to 1024
+#Listening on 0.0.0.0:8808, CTRL+C to stop
+#
+#goto: http://0.0.0.0:8808/docs/scooter/frames
+```
 
-
+## Contributing
 
-
+You are encouraged to fork and submit PR's to Scooter. Sam Woods or Tony Vu are your best bet for getting a PR merged in; that list will grow as QA adds more regular contributors to the repo.
data/docs/http_dispatchers.md
ADDED
@@ -0,0 +1,79 @@
+## HttpDispatchers
+
+### Overview
+
+Currently, there is only one class available in this section, the [`ConsoleDispatcher`](#console-dispatcher). A few general ideas about the organization and procedure are as follows (all of this page needs better organization):
+
+The underlying basis for the `ConsoleDispatcher` is the [Faraday](https://github.com/lostisland/faraday) library. A `Faraday::Connection` object should be utilized, probably as a `@connection` object defined as an `attr_accessor`, for any new httpdispatchers.
+
+An `HttpDispatcher` should include modules representing the products/services and a mechanism for changing the versions of the service.
+
+Method names at the service level should, in general, not use the `@connection` object but use the methods defined in the version module; they do not necessarily need to be representative of the endpoints of the service.
+
+Method names defined in the version module should be representative of the endpoints of the service.
+
+```
+├── classifier
+│   └── v1
+│       └── v1.rb
+├── classifier.rb
+├── consoledispatcher.rb
+├── rbac
+│   └── v1
+│       ├── directory_service.rb
+│       └── v1.rb
+└── rbac.rb
+```
+
+#### Why Faraday and not HTTParty?
+
+Faraday was designed with the concept of middleware–classes that you can add to your connection stack to act on requests and responses. That middleware stack has proven to be very valuable for dealing with redundant methods for API calls, while still retaining much of the versatility that HTTParty had. Faraday's middleware also allows for easy inclusion of other independent libraries, such as the Faraday cookie jar that the ConsoleDispatcher uses for its default connection.
+
+### Console Dispatcher
+
+The `ConsoleDispatcher` is designed to speak to any of the services packaged up in pe-console-services: Node Classifier, RBAC, and Activity Service. Beyond those services, the Console Dispatcher also handles connecting to the /auth endpoints, managing sessions and the CSRF token if necessary.
+
+#### Certificate Dispatcher
+
+``` ruby
+require 'scooter'
+
+# example beaker config
+# HOSTS:
+#   ubuntu1404:
+#     vmname: ubuntu-1404
+#     roles:
+#       - master
+#       - agent
+#       - database
+#       - dashboard
+#     platform: ubuntu-14.04-amd64
+
+certificate_dispatcher = Scooter::HttpDispatchers::ConsoleDispatcher.new(dashboard)
+```
+
+A `ConsoleDispatcher` assumes it is a certificate dispatcher if no credentials are passed in during initialization. As a certificate dispatcher, you do not need to maintain a session or sign in. Once created, a certificate dispatcher talks directly to the services, defined by the dashboard object passed in.
+
+#### Credential Dispatcher
+
+``` ruby
+require 'scooter'
+
+# example beaker config
+# HOSTS:
+#   ubuntu1404:
+#     vmname: ubuntu-1404
+#     roles:
+#       - master
+#       - agent
+#       - database
+#       - dashboard
+#     platform: ubuntu-14.04-amd64
+
+credential_dispatcher = Scooter::HttpDispatchers::ConsoleDispatcher.new(dashboard,
+                                                                        login: 'admin',
+                                                                        password: 'password')
+```
+
+A `ConsoleDispatcher` assumes it is a credential dispatcher if credentials–login and password–are passed in as parameters during creation. As a credential dispatcher, you will need to sign in successfully to get a session before making calls to any of the services.
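The certificate-versus-credential distinction above amounts to a constructor check on whether credentials were supplied. A minimal plain-Ruby sketch of that mode switch (the class and method names here are illustrative, not Scooter's actual API, which takes a beaker host object and builds a Faraday connection):

``` ruby
# Illustrative sketch only: shows how a dispatcher can decide its mode
# from the presence or absence of credentials at construction time.
class MiniDispatcher
  attr_reader :mode

  def initialize(dashboard, credentials = nil)
    @dashboard = dashboard
    @credentials = credentials
    # No credentials passed in => assume whitelisted-certificate access.
    @mode = credentials ? :credential : :certificate
  end

  # Only credential dispatchers need to sign in and maintain a session.
  def requires_signin?
    @mode == :credential
  end
end

cert = MiniDispatcher.new('dashboard-host')
cred = MiniDispatcher.new('dashboard-host', login: 'admin', password: 'password')
cert.mode  # => :certificate
cred.mode  # => :credential
```

The design point mirrors the docs: one class serves both access styles, and only the credential path carries session state.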
data/lib/scooter.rb
CHANGED
@@ -1,5 +1,13 @@
-require
+require 'scooter/version'
+require 'beaker'
+require 'net/ldap'
+require 'faraday'
+require 'faraday_middleware'
+require 'faraday-cookie_jar'
+require 'nokogiri'
 
 module Scooter
-
-
+  %w( utilities httpdispatchers ldap ).each do |lib|
+    require "scooter/#{lib}"
+  end
+end
data/lib/scooter/httpdispatchers.rb
ADDED
@@ -0,0 +1,12 @@
+%w( httpdispatcher consoledispatcher puppetdbdispatcher orchestratordispatcher ).each do |lib|
+  require "scooter/httpdispatchers/#{lib}"
+end
+
+module Scooter
+  # This module is just the housing for the single dispatcher we have right now,
+  # ConsoleDispatcher, but should eventually include other dispatchers for other
+  # services that talk over http, such as the Puppet Server and PuppetDB.
+  module HttpDispatchers
+  end
+end
+
data/lib/scooter/httpdispatchers/activity.rb
ADDED
@@ -0,0 +1,46 @@
+%w( v1 ).each do |lib|
+  require "scooter/httpdispatchers/activity/v1/#{lib}"
+end
+
+module Scooter
+  module HttpDispatchers
+    module Activity
+      include Scooter::HttpDispatchers::Activity::V1
+      include Scooter::Utilities
+
+      def set_activity_service_path(connection=self.connection)
+        set_url_prefix
+        connection.url_prefix.path = '/activity-api'
+      end
+
+      # Used to compare replica activity to master; returns false (and logs a warning) if it does not match.
+      # @param [BeakerHost] host_name
+      def activity_database_matches_self?(host_name)
+        original_host_name = self.host
+        begin
+          self.host = host_name.to_s
+          initialize_connection
+          other_rbac_events = get_rbac_events.env.body
+          other_classifier_events = get_classifier_events.env.body
+        ensure
+          self.host = original_host_name
+          initialize_connection
+        end
+
+        self_rbac_events = get_rbac_events.env.body
+        self_classifier_events = get_classifier_events.env.body
+
+        rbac_events_match = other_rbac_events == self_rbac_events
+        classifier_events_match = other_classifier_events == self_classifier_events
+
+        errors = ''
+        errors << "Rbac events do not match\r\n" unless rbac_events_match
+        errors << "Classifier events do not match\r\n" unless classifier_events_match
+
+        @faraday_logger.warn(errors.chomp) unless errors.empty?
+        errors.empty?
+      end
+
+    end
+  end
+end
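`set_activity_service_path` above re-points one shared connection at a different service prefix rather than building a connection per service. The same idea can be shown with just Ruby's stdlib `URI` (the host below is made up for illustration):

``` ruby
require 'uri'

# One base URL per PE console host; each service hangs off its own path prefix.
base = URI('https://console.example.com:4433/')

# Mirrors connection.url_prefix.path = '/activity-api' in the dispatcher.
base.path = '/activity-api'
activity_events = base.dup
activity_events.path = File.join(base.path, 'v1/events')

# Swapping the prefix on the same object re-targets it at another service.
base.path = '/classifier-api'

puts activity_events  # https://console.example.com:4433/activity-api/v1/events
```

Keeping one connection and swapping `url_prefix.path` is what lets a single dispatcher serve the classifier, RBAC, and activity services behind the same host and port.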
data/lib/scooter/httpdispatchers/activity/v1/v1.rb
ADDED
@@ -0,0 +1,50 @@
+module Scooter
+  module HttpDispatchers
+    module Activity
+      # Methods here are generally representative of endpoints, and depending
+      # on the method, return either a Faraday response object or some sort of
+      # instance of the object created/modified.
+      module V1
+        # Gets events from classifier
+        #
+        # @param [Hash] filters query strings to filter the activity events:
+        # @option filters [String] :subject_type [optional; required only when subject_id is provided]
+        # @option filters [String] :subject_id [optional; comma-separated list of subject_ids]
+        # @option filters [String] :object_type [optional; required only when object_id is provided]
+        # @option filters [String] :object_id [optional; comma-separated list of object_ids]
+        # @option filters [String] :offset [optional; skip n event commits]
+        # @option filters [String] :limit [optional; return no more than n event commits; defaults to 1000]
+        # @return [Object] The events queried
+        def get_classifier_events(filters = {})
+          set_activity_service_path
+          @connection.get 'v1/events' do |request|
+            request.params['service_id'] = 'classifier'
+            filters.each { |param, value|
+              request.params[param] = value
+            }
+          end
+        end
+
+        # Gets events from rbac
+        #
+        # @param [Hash] filters query strings to filter the activity events:
+        # @option filters [String] :subject_type [optional; required only when subject_id is provided]
+        # @option filters [String] :subject_id [optional; comma-separated list of subject_ids]
+        # @option filters [String] :object_type [optional; required only when object_id is provided]
+        # @option filters [String] :object_id [optional; comma-separated list of object_ids]
+        # @option filters [String] :offset [optional; skip n event commits]
+        # @option filters [String] :limit [optional; return no more than n event commits; defaults to 1000]
+        # @return [Object] The events queried
+        def get_rbac_events(filters = {})
+          set_activity_service_path
+          @connection.get 'v1/events' do |request|
+            request.params['service_id'] = 'rbac'
+            filters.each { |param, value|
+              request.params[param] = value
+            }
+          end
+        end
+      end
+    end
+  end
+end
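The `filters` hash in `get_classifier_events`/`get_rbac_events` is copied straight into the request's query parameters alongside the fixed `service_id`. A stdlib-only sketch of the resulting query string (assumption: Faraday serializes `request.params` much like `URI.encode_www_form` does here):

``` ruby
require 'uri'

# Hypothetical helper, for illustration only: builds the query string that
# a v1/events call with the given service_id and filters would carry.
def events_query(service_id, filters = {})
  URI.encode_www_form({ 'service_id' => service_id }.merge(filters))
end

puts events_query('rbac', 'limit' => 10, 'offset' => 5)
# => service_id=rbac&limit=10&offset=5
```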
data/lib/scooter/httpdispatchers/classifier.rb
ADDED
@@ -0,0 +1,376 @@
+%w( v1 ).each do |lib|
+  require "scooter/httpdispatchers/classifier/v1/#{lib}"
+end
+module Scooter
+  module HttpDispatchers
+    # Methods added here are not representative of endpoints, but are more
+    # generalized to be helpers to acquire/transform data, such as getting
+    # the uuid of a node group based on the name. Be cautious about using
+    # these methods if you are utilizing a dispatcher with credentials;
+    # the user is not guaranteed to have privileges for all the methods
+    # defined here, or the user may not be signed in. If you have a method
+    # defined here that is using the connection object directly, you should
+    # probably be using a method defined in the version module instead.
+    module Classifier
+
+      include Scooter::HttpDispatchers::Classifier::V1
+      include Scooter::Utilities
+      Rootuuid = '00000000-0000-4000-8000-000000000000'
+
+      def set_classifier_path(connection=self.connection)
+        set_url_prefix
+        connection.url_prefix.path = '/classifier-api'
+      end
+
+      # This returns a tree-like hash of all node groups in the classifier; each
+      # key is a uuid, each value is an array of direct children. If no direct
+      # children are found, the value is an empty array. This representation
+      # is useful for iterating over specific children of known node groups,
+      # and is used primarily in <tt>delete_node_group_descendents</tt> and
+      # <tt>delete_tree_recursion</tt>.
+      # === Example return hash for a default PE installation
+      #   { "00000000-0000-4000-8000-000000000000" => # root group with one child
+      #       ["6d37be98-42ee-400d-a66e-ebced989546c"],
+      #     "6d37be98-42ee-400d-a66e-ebced989546c" => # node group with 6 kids
+      #       ["42e385ca-8fb2-442d-a0af-3b14c86d321b",
+      #        "32e6a36d-59ad-44ea-8708-feb910907058",
+      #        "44ebbb50-501c-45f1-8c92-95d5b0313d24",
+      #        "58448b7d-3175-4695-ac32-31cf4ee25754",
+      #        "00350026-bfb6-4ce7-bd06-1a1cfea445f9",
+      #        "b6400234-2a61-4417-b85e-c2dcc123686b"],
+      #     "42e385ca-8fb2-442d-a0af-3b14c86d321b" => [], # childless node group
+      #     "32e6a36d-59ad-44ea-8708-feb910907058" => [],
+      #     "44ebbb50-501c-45f1-8c92-95d5b0313d24" => [],
+      #     "58448b7d-3175-4695-ac32-31cf4ee25754" => [],
+      #     "00350026-bfb6-4ce7-bd06-1a1cfea445f9" => [],
+      #     "b6400234-2a61-4417-b85e-c2dcc123686b" => [] }
+      def get_node_group_trees_of_direct_descendents
+        node_groups = get_list_of_node_groups
+        groups = node_groups.map { |each| [each['id'], each['parent']] }
+
+        # Constants for the array of tuples just created.
+        id = 0
+        parent = 1
+
+        # Create root node and insert it into the tree hash.
+        rootindex = groups.find_index { |e| e[id] == e[parent] }
+        rootid = (groups.delete_at(rootindex))[id]
+        tree = Object::Hash.new
+        tree[rootid] = Object::Array.new
+
+        # Construct the rest of the tree as a hash of
+        # id => [ child1, child2,...] nodes.
+        groups.each do |g|
+          tree[g[id]] = Object::Array.new
+          if tree.has_key?(g[parent]) then
+            tree[g[parent]] << g[id]
+          else
+            tree[g[parent]] = Object::Array.new
+            tree[g[parent]] << g[id]
+          end
+        end
+        tree
+      end
+
+      # This method deletes all the descendents of the Rootuuid; the root group
+      # can never be deleted. If you are looking to clean out a system entirely,
+      # consider using <tt>import_baseline_hierarchy</tt> instead, as this
+      # method doesn't clean out any classes or other settings the root group
+      # might have.
+      def delete_all_node_groups
+        delete_node_group_descendents(get_node_group(Rootuuid))
+      end
+
+      # This takes an optional hash of node group parameters, and auto-fills
+      # any required keys for node group generation. It returns the response
+      # body from the server.
+      def create_new_node_group_model(options={})
+        # name, classes, parent are the only required keys
+        name = options['name'] || RandomString.generate
+        classes = options['classes'] || {}
+        parent = options['parent'] || Rootuuid
+        rule = options['rule']
+        id = options['id']
+        environment = options['environment']
+        variables = options['variables']
+        description = options['description']
+        environment_trumps = options['environment_trumps']
+
+        hash = { "name" => name,
+                 "parent" => parent,
+                 "classes" => classes }
+
+        if environment_trumps
+          hash['environment_trumps'] = environment_trumps
+        end
+        if rule
+          hash['rule'] = rule
+        end
+        if environment
+          hash['environment'] = environment
+        end
+        if variables
+          hash['variables'] = variables
+        end
+        if description
+          hash['description'] = description
+        end
+        if id
+          hash['id'] = id
+        end
+
+        create_node_group(hash).env.body
+      end
+
+      alias_method :generate_node_group, :create_new_node_group_model
+
+      # This takes an optional hash of node group parameters. If a "name" option
+      # is provided it will attempt to find an existing node group with that
+      # name in the environment specified in the "environment" option (default:
+      # "production").
+      #
+      # If an existing node group is found, it will be updated according to the
+      # provided options (via `#replace_node_group_with_update_hash`).
+      #
+      # If no existing node group is found, the options hash will be used to
+      # create a new node group (via `#create_new_node_group_model`).
+      def find_or_create_node_group_model(options={})
+        options["environment"] ||= "production"
+        if options["name"] && existing = get_node_group_by_name(options["name"], options["environment"])
+          replace_node_group_with_update_hash(existing, options)
+        else
+          create_new_node_group_model(options)
+        end
+      end
+
+      # If for some reason your node group model is out of sync with the
+      # server's state for that node group, you can use this method to just
+      # update your model with the server state.
+      def refresh_node_group_model(node_group_model)
+        get_node_group(node_group_model['id'])
+      end
+
+      # This will delete anything that inherits from the node group specified,
+      # but not the actual node group itself.
+      def delete_node_group_descendents(node_group_model)
+        id = node_group_model['id']
+        tree = get_node_group_trees_of_direct_descendents
+        tree[id].each do |childid|
+          delete_tree_recursion(tree, childid)
+        end
+      end
+
+      # This will return a node group hash given the name of the node group. It
+      # defaults to production for the environment, since node names can be
+      # the same for different environments.
+      def get_node_group_by_name(name, environment='production')
+        nodes = get_list_of_node_groups
+        nodes.each do |node|
+          if node['name'] == name && node['environment'] == environment
+            return node
+          end
+        end
+        nil # return nil if no matching name found
+      end
+
+      # This will return the node group id given the name of the node group. It
+      # defaults to production for the environment, since node names can be
+      # the same for different environments.
+      def get_node_group_id_by_name(name, environment='production')
+        nodes = get_list_of_node_groups
+        nodes.each do |node|
+          if node['name'] == name && node['environment'] == environment
+            return node['id']
+          end
+        end
+        nil # return nil if no matching name found
+      end
+
+      # The tree parameter required here is generated from the method
+      # <tt>get_node_group_trees_of_direct_descendents</tt>. This method
+      # will delete the node_group_id itself as well.
+      def delete_tree_recursion(tree, node_group_id)
+        tree[node_group_id].each do |childid|
+          delete_tree_recursion(tree, childid)
+        end
+        # protect against trying to delete the Rootuuid
+        delete_node_group(node_group_id) if node_group_id != Rootuuid
+      end
+
+      # This method imports a bare root group into the NC, cleaning out and
+      # deleting any node groups that might have been available. Consider
+      # using this or <tt>delete_all_node_groups</tt> at the beginning of your
+      # test, depending on the requirements of the test.
+      def import_baseline_hierarchy
+        hierarchy = [{ "environment_trumps" => false,
+                       "parent" => Rootuuid,
+                       "name" => "default",
+                       "rule" => ["and", ["~", "name", ".*"]],
+                       "variables" => {},
+                       "id" => Rootuuid,
+                       "environment" => "production",
+                       "classes" => {} }]
+        import_hierarchy(hierarchy)
+      end
+
+      # This doesn't have a home right now, so it will exist in this module
+      # until it has a proper home.
+      def deep_merge(group, update_hash)
+        # TODO: This doesn't work if v is ever an array. Needs to be
+        # reimplemented a la is_deep_subset?
+
+        update_hash.each do |k, v|
+          if v.is_a? Hash
+            group[k] ||= {}
+            deep_merge(group[k], update_hash[k])
+          else
+            group[k] = update_hash[k]
+          end
+        end
+      end
+
+      private :deep_merge
+
+      def remove_nil_values(hash)
+        hash.each do |k, v|
+          case v
+          when Hash
+            remove_nil_values(v)
+          else
+            hash.delete(k) if v == nil
+          end
+        end
+      end
+
+      private :remove_nil_values
+
+      # This uses a PUT instead of a POST to update a node group; when using
+      # PUT, it will delete and replace the entire node group instead of just
+      # updating the keys provided.
+      def replace_node_group_with_update_hash(node_group_model, update_hash)
+        merged_model = node_group_model.merge(update_hash)
+        replace_node_group(merged_model['id'], merged_model)
+        # no verification of the response for now, will need to write some code
+        # that verifies this and takes care of array ordering
+      end
+
+      # This uses the POST method to update a node group; when using POST, it
+      # will only send and update the specified keys.
+      def update_node_group_with_node_group_model(node_group_model, update_hash)
+        id = node_group_model['id']
+        response = update_node_group(id, update_hash)
+
+        deep_merge(node_group_model, update_hash)
+        node_group_model = remove_nil_values(node_group_model)
+
+        # check to see if the update hash had any class changes that require
+        # transforms to the groups['deleted'] object
+        if node_group_model['deleted'] && node_group_model['classes'] != {} && update_hash['classes']
+          update_hash['classes'].each do |classname, parameters|
+            if node_group_model['deleted'][classname] == nil
+              next
+            end
+            parameters.each do |parameter, value|
+              if value == nil
+                node_group_model['deleted'][classname].delete(parameter)
+              else
+                node_group_model['deleted'][classname][parameter]['value'] = value
+              end
+            end
+          end
+        end
+
+        # check to see if we need to delete any classes from the model's
+        # node_group_model['deleted'] key
+        if node_group_model['deleted']
+          node_group_model['deleted'].each do |classname, parameters|
+            if node_group_model['classes'][classname] == nil || node_group_model['deleted'][classname].keys == ['puppetlabs.classifier/deleted']
+              node_group_model['deleted'].delete(classname)
+            end
+          end
+          node_group_model.delete('deleted') if node_group_model['deleted'] == {}
+        end
+
+        if node_group_model != response.env.body
+          raise "node_group_model did not match the server response:\n#{node_group_model}\n#{response.env.body}"
+        end
+
+        # If we got this far, return the "model" hash.
+        node_group_model
+      end
+
+      # Used to compare replica classifier to master; returns false (and logs a warning) if it does not match.
+      # @param [BeakerHost] host_name
+      def classifier_database_matches_self?(host_name)
+        original_host_name = self.host
+        begin
+          self.host = host_name.to_s
+          initialize_connection
+          other_nodes = get_list_of_nodes
+          other_classes = get_list_of_classes
+          other_environments = get_list_of_environments
+          other_groups = get_list_of_node_groups
+        ensure
+          self.host = original_host_name
+          initialize_connection
+        end
+
+        self_nodes = get_list_of_nodes
+        self_classes = get_list_of_classes
+        self_environments = get_list_of_environments
+        self_groups = get_list_of_node_groups
+
+        nodes_match = nodes_match?(other_nodes, self_nodes)
+        classes_match = classes_match?(other_classes, self_classes)
+        environments_match = environments_match?(other_environments, self_environments)
+        groups_match = groups_match?(other_groups, self_groups)
+
+        errors = ''
+        errors << "Nodes do not match\r\n" unless nodes_match
+        errors << "Classes do not match\r\n" unless classes_match
+        errors << "Environments do not match\r\n" unless environments_match
+        errors << "Groups do not match\r\n" unless groups_match
+
+        @faraday_logger.warn(errors.chomp) unless errors.empty?
+        errors.empty?
+      end
+
+      # Check to see if all nodes match between two query responses
+      # @param [Object] other_nodes - response from get_list_of_nodes
+      # @param [Object] self_nodes - response from get_list_of_nodes
+      # @return [Boolean]
+      def nodes_match?(other_nodes, self_nodes=nil)
+        self_nodes = get_list_of_nodes if self_nodes.nil?
+        return other_nodes == self_nodes
+      end
+
+      # Check to see if all classes match between two query responses
+      # @param [Object] other_classes - response from get_list_of_classes
+      # @param [Object] self_classes - response from get_list_of_classes
+      # @return [Boolean]
+      def classes_match?(other_classes, self_classes=nil)
+        self_classes = get_list_of_classes if self_classes.nil?
+        return other_classes == self_classes
+      end
+
+      # Check to see if all groups match between two query responses
+      # @param [Object] other_groups - response from get_list_of_node_groups
+      # @param [Object] self_groups - response from get_list_of_node_groups
+      # @return [Boolean]
+      def groups_match?(other_groups, self_groups=nil)
+        self_groups = get_list_of_node_groups if self_groups.nil?
+        return other_groups == self_groups
+      end
+
+      # Check to see if all environments match between two query responses
+      # @param [Object] other_environments - response from get_list_of_environments
+      # @param [Object] self_environments - response from get_list_of_environments
+      # @return [Boolean]
+      def environments_match?(other_environments, self_environments=nil)
+        self_environments = get_list_of_environments if self_environments.nil?
+        return other_environments == self_environments
+      end
+
+    end
+  end
+end
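The tree/recursion pair above (`get_node_group_trees_of_direct_descendents` plus `delete_tree_recursion`) is easier to follow detached from the HTTP layer. A self-contained sketch with made-up uuids, building the parent-to-children hash and depth-first "deleting" a subtree while protecting the root (it uses a default-proc Hash instead of the manual branching in the original, and collects ids rather than issuing DELETE calls):

``` ruby
ROOT = '00000000-0000-4000-8000-000000000000'

# Each group is [id, parent]; the root group is its own parent.
GROUPS = [
  [ROOT, ROOT],
  ['aaa', ROOT],
  ['bbb', 'aaa'],
  ['ccc', 'aaa'],
  ['ddd', ROOT],
]

# Build { id => [direct children] }, mirroring the classifier helper:
# every group gets an entry, childless groups map to an empty array.
def build_tree(groups)
  tree = Hash.new { |h, k| h[k] = [] }
  groups.each do |id, parent|
    tree[id] # touch the key so childless groups appear with []
    tree[parent] << id unless id == parent
  end
  tree
end

# Depth-first traversal: children are deleted before their parent,
# and the root uuid itself is never deleted.
def delete_recursively(tree, id, deleted = [])
  tree[id].each { |child| delete_recursively(tree, child, deleted) }
  deleted << id unless id == ROOT
  deleted
end

tree = build_tree(GROUPS)
p tree['aaa']                      # => ["bbb", "ccc"]
p delete_recursively(tree, 'aaa')  # => ["bbb", "ccc", "aaa"]
```

Deleting children before their parent matters because the classifier will not delete a node group that still has descendants; the same bottom-up ordering is what `delete_tree_recursion` relies on.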