knuckle_cluster 1.0.0
- checksums.yaml +7 -0
- data/.gitignore +13 -0
- data/.rspec +2 -0
- data/.travis.yml +5 -0
- data/CODE_OF_CONDUCT.md +13 -0
- data/Gemfile +4 -0
- data/LICENSE.txt +21 -0
- data/README.md +227 -0
- data/Rakefile +6 -0
- data/bin/console +7 -0
- data/bin/knuckle_cluster +45 -0
- data/bin/setup +8 -0
- data/knuckle_cluster.gemspec +31 -0
- data/lib/knuckle_cluster/agent.rb +28 -0
- data/lib/knuckle_cluster/agent_registry.rb +75 -0
- data/lib/knuckle_cluster/configuration.rb +52 -0
- data/lib/knuckle_cluster/container.rb +15 -0
- data/lib/knuckle_cluster/task.rb +25 -0
- data/lib/knuckle_cluster/task_registry.rb +73 -0
- data/lib/knuckle_cluster/version.rb +3 -0
- data/lib/knuckle_cluster.rb +196 -0
- metadata +136 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA1:
  metadata.gz: d1938a27084e6da4bd72ce501a8e373a5a919b18
  data.tar.gz: e27ce437a1519ce998b79429d00ca18c793a9fae
SHA512:
  metadata.gz: 3b470a48b031d80f4761aa538243862beb946e46825b37fbdd361db52216452d21bca92ae02641e62804303b50823590edf2a5eea91ff081d6f26c94a685872c
  data.tar.gz: 56ef57f3f84bc4cc707296d67592b5ac8e5867a94663b0f2fe21eda609c154fbc70c9f94feb6f9e231a05e09c6dd9f247a789bebe40151d528ef1f10cbd46ad7
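These checksums let you verify the integrity of a downloaded gem artifact; a minimal Ruby sketch (the local file name and contents here are stand-ins, not the real archive):

```ruby
require 'digest'

# Recompute the kind of SHA512 digest recorded in checksums.yaml.
def sha512_of(path)
  Digest::SHA512.file(path).hexdigest
end

# Stand-in file; for a real check, point this at the downloaded data.tar.gz
# and compare the result against the SHA512 entry above.
File.write('example.tar.gz', 'stand-in bytes')
puts sha512_of('example.tar.gz') # 128 hex characters
```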
data/.gitignore
ADDED
data/.rspec
ADDED
data/.travis.yml
ADDED
data/CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,13 @@
# Contributor Code of Conduct

As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion.

Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team.

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.0.0, available at [http://contributor-covenant.org/version/1/0/0/](http://contributor-covenant.org/version/1/0/0/)
data/Gemfile
ADDED
data/LICENSE.txt
ADDED
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2017 Peter Hofmann

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,227 @@
# KnuckleCluster

Have you ever wanted to shuck away the hard, rough exterior of an ECS cluster and get to the soft, chewy innards? Sounds like you need KnuckleCluster!
This tool provides scripts, invoked via the CLI or a Rakefile, to list, connect to, and/or run commands on ECS agents and containers via SSH. This makes it very easy to interrogate ECS agents and containers without having to go digging for IP addresses and things.

## Features
* See what the agents in your ECS cluster are doing
* Easily connect to running agents
* Easily connect and get a console inside running containers
* Create shortcuts to oft-used commands and run them easily
* Optionally integrates with [aws-vault](https://github.com/99designs/aws-vault) for AWS authentication

## Development Status
KnuckleCluster is being used in production for various projects and is considered stable. New features, bug fixes, etc. are most welcome! Test coverage is minimal.

## Installation

KnuckleCluster can be used in two ways:
1. As a system-wide executable with a config file (preferred)
1. As a rake task in your project

Install the executable with:

    $ gem install knuckle_cluster

OR

Add this line to your application's Gemfile:

```ruby
source 'https://rubygems.envato.com/' do
  gem 'knuckle_cluster'
end
```

And then execute:

    $ bundle


## Usage

Knuckle Cluster can be used either as part of an application via a Rakefile, or at a system level.

You'll need to execute knuckle_cluster with appropriate AWS permissions on the cluster in question, stored in your ENV. I like to use `aws-vault` to handle this for me.

### System usage

Create a file: `~/.ssh/knuckle_cluster`. This is the YAML config file that will be used to make connections from the command line. The connection name is the key, and all parameters sit below it. EG:
```
platform:
  cluster_name: platform-ecs-cluster-ABC123
  region: us-east-1
  bastion: platform_bastion
  rsa_key_location: ~/.ssh/platform_rsa_key
  ssh_username: ubuntu
  sudo: true
  aws_vault_profile: platform_super_user
  shortcuts:
    console:
      container: web
      command: bundle exec rails console
    db:
      container: worker
      command: script/db_console
  tunnels:
    db:
      local_port: 54321
      remote_host: postgres-db.yourcompany.com
      remote_port: 5432
```

You can also use inheritance to simplify the inclusion of multiple similar targets. Note that the anchor (`default_platform` here) must be defined before it is referenced:
```
default_platform: &default_platform
  region: us-east-1
  bastion: platform_bastion
  rsa_key_location: ~/.ssh/platform_rsa_key
  ssh_username: ubuntu
  sudo: true
  aws_vault_profile: platform_super_user

super_platform:
  <<: *default_platform
  cluster_name: super-platform-ecs-cluster-ABC123

ultra_platform:
  <<: *default_platform
  cluster_name: ultra-platform-ecs-cluster-DEF987
  sudo: false
```

See [Options for Knuckle Cluster](#options-for-knuckle-cluster) below for a list of what each option does.

Command line options:

```
knuckle_cluster CLUSTER_PROFILE agents - list all agents and select one to start a shell
knuckle_cluster CLUSTER_PROFILE containers - list all containers and select one to start a shell
knuckle_cluster CLUSTER_PROFILE logs CONTAINER_NAME - tail the logs for a container
knuckle_cluster CLUSTER_PROFILE CONTAINER_NAME [OPTIONAL COMMANDS] - connect to a container and start a shell or run a command
knuckle_cluster CLUSTER_PROFILE SHORTCUT_NAME - run a shortcut defined in your knuckle_cluster configuration
knuckle_cluster CLUSTER_PROFILE tunnel TUNNEL_NAME - open a tunnel defined in your knuckle_cluster configuration
```

### Rakefile usage

`KnuckleCluster.new` takes one argument at minimum: `cluster_name`. A region is likely also required, as it defaults to `us-east-1`.
Eg:
```
kc = KnuckleCluster.new(
  cluster_name: 'platform-ecs-cluster-ABC123',
  region: 'us-east-1',
  bastion: 'platform_bastion',
  rsa_key_location: "~/.ssh/platform_rsa_key",
  ssh_username: "ubuntu",
  sudo: true
)

task :agents do
  kc.connect_to_agents
end

task :containers do
  kc.connect_to_containers
end
```

Invoke with `rake agents` or `rake containers`.


Once you have an instance of KnuckleCluster, you can now do things!
```
$ knuckle_cluster super_platform containers
```
Which will give you the following output, then run bash for you on the selected docker container:
```
Listing Containers
TASK              | AGENT               | INDEX | CONTAINER
------------------|---------------------|-------|--------------------
task-one          | i-123abc123abc123ab | 1     | t1-container-one
task-two          | i-123abc123abc123ab | 2     | t2-container-one
                  |                     | 3     | t2-container-two
task-three        | i-456def456def456de | 4     | t3-container-one
                  |                     | 5     | t3-container-two
                  |                     | 6     | t3-container-three

Connect to which container?
```

The same goes for connecting directly to agents:
```
$ knuckle_cluster super_platform agents
```
```
Listing Agents
INDEX | INSTANCE_ID         | TASK       | CONTAINER
------|---------------------|------------|--------------------
1     | i-123abc123abc123ab | task-one   | t1-container-one
      |                     | task-two   | t2-container-one
      |                     |            | t2-container-two
2     | i-456def456def456de | task-three | t3-container-one
      |                     |            | t3-container-two
      |                     |            | t3-container-three

Connect to which agent?
```

Both `connect_to_containers` and `connect_to_agents` accept the following optional arguments:

Argument | Description
-------- | -----------
command | Runs a command on the specified container/agent. When connecting to containers, this defaults to `bash`.
auto | Automatically connects to the first container/agent it can find. Handy when used in conjunction with `command`.


Eg: To connect to a container, echo something, and then immediately disconnect, you could use:
```
kc.connect_to_containers(auto: true, command: "echo I love KnuckleCluster!")
```


## Options for Knuckle Cluster
Possible options are listed below. If left blank, they will be ignored and defaults used where available:

Argument | Description
-------- | -----------
cluster_name | The name of the cluster (not the ARN), eg 'my-super-cluster'. Required.
region | The AWS region you would like to use. Defaults to `us-east-1`.
bastion | If you have a bastion host that proxies to your ECS cluster via SSH, put its name here as defined in your `~/.ssh/config` file.
rsa_key_location | The RSA key needed to connect to an ECS agent, eg `~/.ssh/id_rsa`.
ssh_username | The username to connect with. Defaults to `ec2-user`.
sudo | true or false - will sudo the `docker` command on the target machine. Usually not needed unless the user is not a part of the `docker` group.
aws_vault_profile | If you use the `aws-vault` tool to manage your AWS credentials, you can specify a profile here that will automatically be used to connect to this cluster.
profile | Another profile to inherit settings from. Settings from lower profiles can be overridden in higher ones.


## Maintainer
[Envato](https://github.com/envato)

## Contributors
- [Peter Hofmann](https://github.com/envatopoho)
- [Giancarlo Salamanca](https://github.com/salamagd)
- [Jiexin Huang](https://github.com/jiexinhuang)

## License

`KnuckleCluster` uses the MIT license. See
[`LICENSE.txt`](https://github.com/envato/knuckle_cluster/blob/master/LICENSE.txt) for
details.

## Code of conduct

We welcome contribution from everyone. Read more about it in
[`CODE_OF_CONDUCT.md`](https://github.com/envato/knuckle_cluster/blob/master/CODE_OF_CONDUCT.md)

## Contributing

For bug fixes, documentation changes, and small features:

1. Fork it ( https://github.com/[my-github-username]/knuckle_cluster/fork )
2. Create your feature branch (git checkout -b my-new-feature)
3. Commit your changes (git commit -am 'Add some feature')
4. Push to the branch (git push origin my-new-feature)
5. Create a new Pull Request

For larger new features: Do everything as above, but first also make contact with the project maintainers to be sure your change fits with the project direction and you won't be wasting effort going in the wrong direction.
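The README's inheritance example relies on standard YAML anchors and merge keys, which Ruby's Psych resolves at load time; a minimal sketch (profile names reused from the example; `aliases: true` is needed on modern Psych, which disables aliases by default):

```ruby
require 'yaml'

yaml = <<~YAML
  default_platform: &default_platform
    region: us-east-1
    sudo: true

  super_platform:
    <<: *default_platform
    cluster_name: super-platform-ecs-cluster-ABC123
YAML

# `<<: *default_platform` copies the anchored keys into super_platform;
# keys set locally would override the inherited ones.
config = YAML.safe_load(yaml, aliases: true)
puts config['super_platform']
```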
data/Rakefile
ADDED
data/bin/console
ADDED
data/bin/knuckle_cluster
ADDED
@@ -0,0 +1,45 @@
#!/usr/bin/env bundle exec ruby
require "knuckle_cluster"
require "yaml"

profile = ARGV[0]

if ARGV.count < 2
  puts <<~USAGE
    knuckle_cluster CLUSTER_PROFILE agents - list all agents and select one to start a shell
    knuckle_cluster CLUSTER_PROFILE containers - list all containers and select one to start a shell
    knuckle_cluster CLUSTER_PROFILE logs CONTAINER_NAME - tail the logs for a container
    knuckle_cluster CLUSTER_PROFILE CONTAINER_NAME [OPTIONAL COMMANDS] - connect to a container and start a shell or run a command
    knuckle_cluster CLUSTER_PROFILE SHORTCUT_NAME - run a shortcut defined in your knuckle_cluster configuration
    knuckle_cluster CLUSTER_PROFILE tunnel TUNNEL_NAME - open a tunnel defined in your knuckle_cluster configuration
  USAGE
  exit
end

begin
  config = KnuckleCluster::Configuration.load_parameters(profile: profile)
rescue => e
  puts "ERROR: There was a problem loading your configuration: #{e}"
  exit
end

kc = KnuckleCluster.new(
  config
)

if ARGV[1] == 'agents'
  kc.connect_to_agents
elsif ARGV[1] == 'containers'
  kc.connect_to_containers
elsif ARGV[1] == 'logs'
  kc.container_logs(name: ARGV[2])
elsif ARGV[1] == 'tunnel'
  kc.open_tunnel(name: ARGV[2])
else
  command = ARGV.drop(2)
  if command.any?
    kc.connect_to_container(name: ARGV[1], command: command.join(' '))
  else
    kc.connect_to_container(name: ARGV[1])
  end
end
data/bin/setup
ADDED
data/knuckle_cluster.gemspec
ADDED
@@ -0,0 +1,31 @@
# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'knuckle_cluster/version'

Gem::Specification.new do |spec|
  spec.name          = "knuckle_cluster"
  spec.version       = KnuckleCluster::VERSION
  spec.authors       = ["Envato"]
  spec.email         = ["rubygems@envato.com"]

  spec.summary       = %q{Handy cluster tool}
  spec.description   = %q{Interrogation of AWS ECS clusters, with the ability to directly connect to hosts and containers.}
  spec.homepage      = "https://github.com/envato/knuckle_cluster"
  spec.license       = "MIT"

  spec.files = `git ls-files -z`.split("\x0").reject do |f|
    f.match(%r{^(test|spec|features)/})
  end
  spec.require_paths = ["lib"]

  spec.add_development_dependency "bundler", "~> 1.14"
  spec.add_development_dependency "rake", "~> 10.0"
  spec.add_development_dependency "rspec", "~> 3.0"

  spec.add_dependency 'aws-sdk', '~> 2.8'
  spec.add_dependency 'table_print', '~> 1.5'

  spec.bindir        = "bin"
  spec.executables   = 'knuckle_cluster'
end
data/lib/knuckle_cluster/agent.rb
ADDED
@@ -0,0 +1,28 @@
module KnuckleCluster
  class Agent
    def initialize(
      index:,
      instance_id:,
      public_ip:,
      private_ip:,
      availability_zone:,
      container_instance_arn:,
      task_registry:
    )
      @index = index
      @instance_id = instance_id
      @public_ip = public_ip
      @private_ip = private_ip
      @availability_zone = availability_zone
      @container_instance_arn = container_instance_arn
      @task_registry = task_registry
    end

    attr_reader :index, :instance_id, :public_ip, :private_ip,
                :availability_zone, :container_instance_arn, :task_registry

    def tasks
      task_registry.where(container_instance_arn: container_instance_arn)
    end
  end
end
data/lib/knuckle_cluster/agent_registry.rb
ADDED
@@ -0,0 +1,75 @@
require 'knuckle_cluster/agent'
require 'knuckle_cluster/task_registry'

require 'forwardable'

module KnuckleCluster
  class AgentRegistry
    extend Forwardable

    def initialize(aws_client_config:, cluster_name:)
      @aws_client_config = aws_client_config
      @cluster_name = cluster_name
    end

    def agents
      @agents ||= load_agents
    end

    def find_by(container_instance_arn:)
      agents_by_container_instance_arn[container_instance_arn]&.first
    end

    def_delegators :task_registry, :tasks, :containers

    private

    attr_reader :aws_client_config, :cluster_name

    def load_agents
      container_instance_arns = ecs_client.list_container_instances(cluster: cluster_name)
                                          .container_instance_arns
      return [] if container_instance_arns.empty?

      ecs_instances_by_id = ecs_client.describe_container_instances(
        cluster: cluster_name,
        container_instances: container_instance_arns,
      ).container_instances.group_by(&:ec2_instance_id)

      ec2_instance_reservations = ec2_client.describe_instances(instance_ids: ecs_instances_by_id.keys)
                                            .reservations

      ec2_instance_reservations.map(&:instances).flatten.map.with_index do |instance, index|
        Agent.new(
          index: index + 1,
          instance_id: instance[:instance_id],
          public_ip: instance[:public_ip_address],
          private_ip: instance[:private_ip_address],
          availability_zone: instance.dig(:placement, :availability_zone),
          container_instance_arn: ecs_instances_by_id[instance[:instance_id]].first.container_instance_arn,
          task_registry: task_registry,
        )
      end
    end

    def agents_by_container_instance_arn
      @agents_by_container_instance_arn ||= agents.group_by(&:container_instance_arn)
    end

    def task_registry
      @task_registry ||= TaskRegistry.new(
        ecs_client: ecs_client,
        cluster_name: cluster_name,
        agent_registry: self,
      )
    end

    def ec2_client
      @ec2_client ||= Aws::EC2::Client.new(aws_client_config)
    end

    def ecs_client
      @ecs_client ||= Aws::ECS::Client.new(aws_client_config)
    end
  end
end
data/lib/knuckle_cluster/configuration.rb
ADDED
@@ -0,0 +1,52 @@
class KnuckleCluster::Configuration
  require 'yaml'
  DEFAULT_PROFILE_FILE = File.join(ENV['HOME'], '.ssh/knuckle_cluster').freeze

  def self.load_parameters(profile:, profile_file: nil)
    profile_file ||= DEFAULT_PROFILE_FILE
    raise "File #{profile_file} not found" unless File.exists?(profile_file)

    data = YAML.load_file(profile_file)

    unless data.keys.include?(profile)
      raise "Config file does not include profile for #{profile}"
    end

    # Figure out all the profiles to inherit from
    tmp_data = data[profile]
    profile_inheritance = [profile]
    while tmp_data && tmp_data.keys.include?('profile')
      profile_name = tmp_data['profile']
      break if profile_inheritance.include? profile_name
      profile_inheritance << profile_name
      tmp_data = data[profile_name]
    end

    # Starting at the very lowest profile, build an options hash
    output = {}
    profile_inheritance.reverse.each do |prof|
      output.merge!(data[prof] || {})
    end

    output.delete('profile')

    keys_to_symbols(output)
  end

  private

  def self.keys_to_symbols(data)
    # Implemented here - beats including activesupport
    return data unless Hash === data
    ret = {}
    data.each do |k, v|
      if Hash === v
        # Look, it doesn't need to be recursive but WHY NOT?!?
        ret[k.to_sym] = keys_to_symbols(v)
      else
        ret[k.to_sym] = v
      end
    end
    ret
  end
end
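The recursive `keys_to_symbols` transform above can be exercised in isolation; a sketch of the same idea under a hypothetical `symbolize` name:

```ruby
# Recursively convert string hash keys to symbols (mirrors keys_to_symbols,
# without pulling in ActiveSupport's deep_symbolize_keys).
def symbolize(data)
  return data unless data.is_a?(Hash)
  data.each_with_object({}) { |(k, v), out| out[k.to_sym] = symbolize(v) }
end

config = symbolize(
  'cluster_name' => 'demo-cluster',
  'shortcuts'    => { 'db' => { 'command' => 'script/db_console' } }
)
puts config.inspect
```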
data/lib/knuckle_cluster/task.rb
ADDED
@@ -0,0 +1,25 @@
module KnuckleCluster
  class Task
    def initialize(
      arn:,
      container_instance_arn:,
      agent:,
      definition:,
      name:,
      task_registry:
    )
      @arn = arn
      @container_instance_arn = container_instance_arn
      @agent = agent
      @definition = definition
      @name = name
      @task_registry = task_registry
    end

    attr_reader :arn, :container_instance_arn, :agent, :definition, :name, :task_registry

    def containers
      task_registry.containers_where(task: self)
    end
  end
end
data/lib/knuckle_cluster/task_registry.rb
ADDED
@@ -0,0 +1,73 @@
require 'knuckle_cluster/task'
require 'knuckle_cluster/container'

module KnuckleCluster
  class TaskRegistry
    def initialize(ecs_client:, cluster_name:, agent_registry:)
      @ecs_client = ecs_client
      @cluster_name = cluster_name
      @agent_registry = agent_registry
    end

    def tasks
      @tasks ||= load_tasks
    end

    def containers
      tasks && all_containers
    end

    def where(container_instance_arn:)
      tasks_by_container_instance_arn[container_instance_arn]
    end

    def containers_where(task:)
      containers_by_task[task]
    end

    private

    attr_reader :ecs_client, :cluster_name, :agent_registry, :all_containers

    def load_tasks
      task_arns = ecs_client.list_tasks(cluster: cluster_name).task_arns
      task_ids = task_arns.map { |x| x[/.*\/(.*)/, 1] }

      return [] if task_ids.empty?

      @all_containers = []
      index = 0

      ecs_client.describe_tasks(tasks: task_ids, cluster: cluster_name).tasks.flat_map do |task|
        agent = agent_registry.find_by(container_instance_arn: task.container_instance_arn)

        Task.new(
          arn: task.task_arn,
          container_instance_arn: task.container_instance_arn,
          agent: agent,
          definition: task.task_definition_arn[/.*\/(.*):.*/, 1],
          name: task.task_definition_arn[/.*\/(.*):\d/, 1],
          task_registry: self,
        ).tap do |new_task|
          task.containers.each do |container|
            index += 1

            all_containers << Container.new(
              index: index,
              name: container.name,
              task: new_task,
            )
          end
        end
      end
    end

    def tasks_by_container_instance_arn
      @tasks_by_container_instance_arn ||= tasks.group_by(&:container_instance_arn)
    end

    def containers_by_task
      @containers_by_task ||= all_containers.group_by(&:task)
    end
  end
end
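The `definition:` and `name:` arguments above are carved out of the task definition ARN with regexes; a quick sketch against a made-up ARN:

```ruby
# An ECS task definition ARN ends in "<family>:<revision>".
arn = 'arn:aws:ecs:us-east-1:123456789012:task-definition/web-app:7'

definition = arn[/.*\/(.*):.*/, 1]  # text between the last "/" and the last ":"
name       = arn[/.*\/(.*):\d/, 1]  # same idea, requiring a numeric revision
puts definition
puts name
```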
@@ -0,0 +1,196 @@
|
|
1
|
+
require 'knuckle_cluster/agent_registry'
|
2
|
+
require "knuckle_cluster/version"
|
3
|
+
require "knuckle_cluster/configuration"
|
4
|
+
|
5
|
+
require 'aws-sdk'
|
6
|
+
require 'forwardable'
|
7
|
+
require 'table_print'
|
8
|
+
|
9
|
+
module KnuckleCluster
|
10
|
+
class << self
|
11
|
+
extend Forwardable
|
12
|
+
|
13
|
+
def new(
|
14
|
+
cluster_name:,
|
15
|
+
region: 'us-east-1',
|
16
|
+
bastion: nil,
|
17
|
+
rsa_key_location: nil,
|
18
|
+
ssh_username: 'ec2-user',
|
19
|
+
sudo: false,
|
20
|
+
aws_vault_profile: nil,
|
21
|
+
shortcuts: {},
|
22
|
+
tunnels: {})
|
23
|
+
@cluster_name = cluster_name
|
24
|
+
@region = region
|
25
|
+
@bastion = bastion
|
26
|
+
@rsa_key_location = rsa_key_location
|
27
|
+
@ssh_username = ssh_username
|
28
|
+
@sudo = sudo
|
29
|
+
@aws_vault_profile = aws_vault_profile
|
30
|
+
@shortcuts = shortcuts
|
31
|
+
@tunnels = tunnels
|
32
|
+
self
|
33
|
+
end
|
34
|
+
|
35
|
+
def connect_to_agents(command: nil, auto: false)
|
36
|
+
agent = select_agent(auto: auto)
|
37
|
+
run_command_in_agent(agent: agent, command: command)
|
38
|
+
end
|
39
|
+
|
40
|
+
def connect_to_containers(command: 'bash', auto: false)
|
41
|
+
container = select_container(auto: auto)
|
42
|
+
run_command_in_container(container: container, command: command)
|
43
|
+
end
|
44
|
+
|
45
|
+
def connect_to_container(name:, command: 'bash')
|
46
|
+
if shortcut = shortcuts[name.to_sym]
|
47
|
+
name = shortcut[:container]
|
48
|
+
new_command = shortcut[:command]
|
49
|
+
new_command += " #{command}" unless command == 'bash'
|
50
|
+
command = new_command
|
51
|
+
end
|
52
|
+
|
53
|
+
container = find_container(name: name)
|
54
|
+
run_command_in_container(container: container, command: command)
|
55
|
+
end
|
56
|
+
|
57
|
+
def container_logs(name:)
|
58
|
+
container = find_container(name: name)
|
59
|
+
subcommand = "#{'sudo ' if sudo}docker logs -f \\`#{get_container_id_command(container.name)}\\`"
|
60
|
+
run_command_in_agent(agent: container.task.agent, command: subcommand)
|
61
|
+
end
|
62
|
+
|
63
|
+
def open_tunnel(name:)
|
64
|
+
if tunnel = tunnels[name.to_sym]
|
65
|
+
agent = select_agent(auto: true)
|
66
|
+
open_tunnel_via_agent(tunnel.merge(agent: agent))
|
67
|
+
else
|
68
|
+
puts "ERROR: A tunnel configuration was not found for '#{name}'"
|
69
|
+
end
|
70
|
+
end
|
71
|
+
|
72
|
+
private
|
73
|
+
|
74
|
+
attr_reader :cluster_name, :region, :bastion, :rsa_key_location, :ssh_username,
|
75
|
+
:sudo, :aws_vault_profile, :shortcuts, :tunnels
|
76
|
+
|
77
|
+
def select_agent(auto:)
|
78
|
+
return agents.first if auto
|
79
|
+
|
80
|
+
puts "\nListing Agents"
|
81
|
+
|
82
|
+
tp agents,
|
83
|
+
:index,
|
84
|
+
:instance_id,
|
85
|
+
# :public_ip,
|
86
|
+
# :private_ip,
|
87
|
+
# :availability_zone,
|
88
|
+
{ task: { display_method: 'tasks.name', width: 999 } },
|
89
|
+
{ container: { display_method: 'tasks.containers.name', width: 999 } }
|
90
|
+
|
91
|
+
puts "\nConnect to which agent?"
|
92
|
+
agents[STDIN.gets.strip.to_i - 1]
|
93
|
+
end
|
94
|
+
|
95
|
+
def select_container(auto:)
|
96
|
+
return containers.first if auto
|
97
|
+
|
98
|
+
puts "\nListing Containers"
|
99
|
+
|
100
|
+
tp tasks,
|
101
|
+
{ task: { display_method: :name, width: 999 } },
|
102
|
+
{ agent: { display_method: 'agent.instance_id' } },
|
103
|
+
{ index: { display_method: 'containers.index' } },
|
104
|
+
{ container: { display_method: 'containers.name', width: 999 } }
|
105
|
+
|
106
|
+
puts "\nConnect to which container?"
|
107
|
+
containers[STDIN.gets.strip.to_i - 1]
|
108
|
+
end
|
109
|
+
|
110
|
+
def find_container(name:)
|
111
|
+
matching = containers.select { |container| container.name.include?(name) }
|
112
|
+
puts "\nAttempting to find a container matching '#{name}'..."
|
113
|
+
|
114
|
+
if matching.empty?
|
115
|
+
puts "No container with a name matching '#{name}' was found"
|
116
|
+
Process.exit
|
117
|
+
end
|
118
|
+
|
119
|
+
unique_names = matching.map(&:name).uniq
|
120
|
+
|
121
|
+
if unique_names.uniq.count > 1
|
122
|
+
puts "Containers with the following names were found, please be more specific:"
|
123
|
+
puts unique_names
|
124
|
+
Process.exit
|
125
|
+
end
|
126
|
+
|
127
|
+
# If there are multiple containers with the same name, choose any one
|
128
|
+
container = matching.first
|
129
|
+
puts "Found container #{container.name} on #{container.task.agent.instance_id}\n\n"
|
130
|
+
container
|
131
|
+
end
|
132
|
+
|
133
|
+
def run_command_in_container(container:, command:)
|
134
|
+
subcommand = "#{'sudo ' if sudo}docker exec -it \\`#{get_container_id_command(container.name)}\\` #{command}"
|
135
|
+
run_command_in_agent(agent: container.task.agent, command: subcommand)
|
136
|
+
end
|
137
|
+
|
138
|
+
def get_container_id_command(container_name)
|
139
|
+
"#{'sudo ' if sudo}docker ps --filter 'label=com.amazonaws.ecs.container-name=#{container_name}' | tail -1 | awk '{print \\$1}'"
|
140
|
+
end
|
141
|
+
|
142
|
+
def run_command_in_agent(agent:, command:)
|
143
|
+
command = generate_connection_string(agent: agent, subcommand: command)
|
144
|
+
system(command)
|
145
|
+
end
|
146
|
+
|
147
|
+
def open_tunnel_via_agent(agent:, local_port:, remote_host:, remote_port:)
|
148
|
+
command = generate_connection_string(
|
149
|
+
agent: agent,
|
150
|
+
port_forward: [local_port, remote_host, remote_port].join(':'),
|
151
|
+
subcommand: <<~SCRIPT
|
152
|
+
echo ""
|
153
|
+
echo "localhost:#{local_port} is now tunneled to #{remote_host}:#{remote_port}"
|
154
|
+
echo "Press Enter to close the tunnel once you are finished."
|
155
|
+
read
|
156
|
+
SCRIPT
|
157
|
+
)
|
158
|
+
system(command)
|
159
|
+
end
|
160
|
+
|
161
|
+
def aws_client_config
|
162
|
+
@aws_client_config ||= { region: region }.tap do |config|
|
163
|
+
config.merge!(aws_vault_credentials) if aws_vault_profile
|
164
|
+
end
|
165
|
+
end
|
166
|
+
|
167
|
+
def aws_vault_credentials
|
168
|
+
environment = `aws-vault exec #{aws_vault_profile} -- env | grep AWS_`
|
169
|
+
vars = environment.split.map { |pair| pair.split('=') }.group_by(&:first)
|
170
|
+
{}.tap do |credentials|
|
171
|
+
%i{access_key_id secret_access_key session_token}.map do |var_name|
|
172
|
+
credentials[var_name] = vars["AWS_#{var_name.upcase}"]&.first&.last
|
173
|
+
end
|
174
|
+
end
|
175
|
+
end

    def generate_connection_string(agent:, subcommand: nil, port_forward: nil)
      ip = bastion ? agent.private_ip : agent.public_ip
      command = "ssh #{ip} -l#{ssh_username}"
      command += " -i #{rsa_key_location}" if rsa_key_location
      command += " -o ProxyCommand='ssh -qxT #{bastion} nc #{ip} 22'" if bastion
      command += " -L #{port_forward}" if port_forward
      command += " -t \"#{subcommand}\"" if subcommand
      command
    end

    def agent_registry
      @agent_registry ||= AgentRegistry.new(
        aws_client_config: aws_client_config,
        cluster_name: cluster_name,
      )
    end

    def_delegators :agent_registry, :agents, :tasks, :containers
  end
end
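`generate_connection_string` is the heart of the connection logic: with a bastion it targets the agent's private IP and tunnels through a `ProxyCommand`, otherwise it connects straight to the public IP. A standalone sketch of that assembly (the `Agent` struct and all option values below are hypothetical stand-ins for the gem's configuration):

```ruby
# Minimal stand-in for the gem's Agent model.
Agent = Struct.new(:public_ip, :private_ip, keyword_init: true)

# Mirrors the command assembly in generate_connection_string, reproduced as a
# free function for illustration.
def connection_string(agent, ssh_username:, bastion: nil, rsa_key_location: nil,
                      port_forward: nil, subcommand: nil)
  # Behind a bastion only the private IP is reachable, via a netcat ProxyCommand.
  ip = bastion ? agent.private_ip : agent.public_ip
  command = "ssh #{ip} -l#{ssh_username}"
  command += " -i #{rsa_key_location}" if rsa_key_location
  command += " -o ProxyCommand='ssh -qxT #{bastion} nc #{ip} 22'" if bastion
  command += " -L #{port_forward}" if port_forward
  command += " -t \"#{subcommand}\"" if subcommand
  command
end

agent = Agent.new(public_ip: '1.2.3.4', private_ip: '10.0.0.5')

# Direct connection to the public IP:
puts connection_string(agent, ssh_username: 'ec2-user')
# Through a bastion, with a local port forward (as open_tunnel_via_agent does):
puts connection_string(agent, ssh_username: 'ec2-user', bastion: 'jump.example.com',
                       port_forward: '5432:db.internal:5432')
```

Note that `port_forward` is exactly the `local:host:remote` triple that `open_tunnel_via_agent` joins with `':'`, and `subcommand` rides along under `-t` so the remote shell allocates a TTY.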
metadata
ADDED
@@ -0,0 +1,136 @@
--- !ruby/object:Gem::Specification
name: knuckle_cluster
version: !ruby/object:Gem::Version
  version: 1.0.0
platform: ruby
authors:
- Envato
autorequire:
bindir: bin
cert_chain: []
date: 2017-09-18 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: bundler
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.14'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.14'
- !ruby/object:Gem::Dependency
  name: rake
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
- !ruby/object:Gem::Dependency
  name: rspec
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
- !ruby/object:Gem::Dependency
  name: aws-sdk
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '2.8'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '2.8'
- !ruby/object:Gem::Dependency
  name: table_print
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.5'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.5'
description: Interrogation of AWS ECS clusters, with the ability to directly connect
  to hosts and containers.
email:
- rubygems@envato.com
executables:
- knuckle_cluster
extensions: []
extra_rdoc_files: []
files:
- ".gitignore"
- ".rspec"
- ".travis.yml"
- CODE_OF_CONDUCT.md
- Gemfile
- LICENSE.txt
- README.md
- Rakefile
- bin/console
- bin/knuckle_cluster
- bin/setup
- knuckle_cluster.gemspec
- lib/knuckle_cluster.rb
- lib/knuckle_cluster/agent.rb
- lib/knuckle_cluster/agent_registry.rb
- lib/knuckle_cluster/configuration.rb
- lib/knuckle_cluster/container.rb
- lib/knuckle_cluster/task.rb
- lib/knuckle_cluster/task_registry.rb
- lib/knuckle_cluster/version.rb
homepage: https://github.com/envato/knuckle_cluster
licenses:
- MIT
metadata: {}
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubyforge_project:
rubygems_version: 2.6.8
signing_key:
specification_version: 4
summary: Handy cluster tool
test_files: []