kite 0.2.0 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (116)
  1. checksums.yaml +5 -5
  2. data/CHANGELOG.md +130 -66
  3. data/LICENSE.md +202 -0
  4. data/README.md +35 -11
  5. data/bin/concourse/out +16 -20
  6. data/docs/kite-concourse-resource.md +45 -0
  7. data/kite.gemspec +1 -0
  8. data/lib/kite.rb +3 -1
  9. data/lib/kite/cloud.rb +1 -0
  10. data/lib/kite/core.rb +8 -2
  11. data/lib/kite/generate.rb +12 -46
  12. data/lib/kite/helpers.rb +0 -72
  13. data/lib/kite/helpers/concourse.rb +3 -2
  14. data/lib/kite/module.rb +76 -0
  15. data/lib/kite/terraform.rb +45 -0
  16. data/lib/kite/version.rb +1 -1
  17. data/tpl/aws/environment/main.tf.tt +5 -0
  18. data/tpl/aws/environment/s3.tf.tt +13 -0
  19. data/tpl/gcp/environment/gcs.tf.tt +18 -0
  20. data/tpl/gcp/environment/main.tf.tt +5 -0
  21. data/tpl/gcp/environment/outputs.tf.tt +5 -0
  22. data/tpl/service/%output_path%/pipelines/review.yml.tt +55 -37
  23. data/tpl/service/%output_path%/pipelines/tasks/create-pull-requests-tag.yml.tt +1 -1
  24. data/tpl/service/%output_path%/pipelines/tasks/create-repository-tag.yml.tt +1 -1
  25. data/tpl/service/%output_path%/pipelines/tasks/run-unit.yml.tt +7 -0
  26. data/tpl/service/Makefile.tt +11 -7
  27. data/tpl/service/docs/getting-started.md +73 -0
  28. data/tpl/service/docs/service.md +101 -0
  29. data/tpl/skel/Gemfile.tt +0 -9
  30. data/tpl/skel/config/cloud.yml +11 -66
  31. metadata +29 -88
  32. data/lib/kite/render.rb +0 -116
  33. data/tpl/aws/README.md +0 -52
  34. data/tpl/aws/bin/base/bootstrap.sh +0 -35
  35. data/tpl/aws/bin/base/cleanup.sh.tt +0 -19
  36. data/tpl/aws/bin/base/set-env.sh.tt +0 -7
  37. data/tpl/aws/bin/base/setup-tunnel.sh.tt +0 -4
  38. data/tpl/aws/bin/bosh-install.sh.tt +0 -23
  39. data/tpl/aws/bin/concourse-deploy.sh.tt +0 -14
  40. data/tpl/aws/bin/ingress-deploy.sh.tt +0 -7
  41. data/tpl/aws/bin/ingress-update.sh.tt +0 -7
  42. data/tpl/aws/bin/kops-delete.sh.erb +0 -5
  43. data/tpl/aws/bin/kops-deploy.sh.erb +0 -11
  44. data/tpl/aws/bin/oauth-deploy.sh.tt +0 -17
  45. data/tpl/aws/bin/prometheus-deploy.sh.tt +0 -23
  46. data/tpl/aws/bin/vault-deploy.sh.tt +0 -10
  47. data/tpl/aws/bosh-vars.yml.erb +0 -12
  48. data/tpl/aws/config/oauth.yml +0 -59
  49. data/tpl/aws/deployments/bosh/bosh.yml +0 -144
  50. data/tpl/aws/deployments/bosh/cloud-config.yml.tt +0 -86
  51. data/tpl/aws/deployments/bosh/cpi.yml +0 -98
  52. data/tpl/aws/deployments/bosh/jumpbox-user.yml +0 -27
  53. data/tpl/aws/deployments/concourse/concourse.yml.tt +0 -98
  54. data/tpl/aws/deployments/ingress/ingress.yml.erb +0 -78
  55. data/tpl/aws/deployments/oauth/oauth.yml.tt +0 -95
  56. data/tpl/aws/deployments/prometheus/monitor-bosh.yml +0 -518
  57. data/tpl/aws/deployments/prometheus/monitor-kubernetes.yml +0 -30
  58. data/tpl/aws/deployments/prometheus/prometheus.yml.tt +0 -184
  59. data/tpl/aws/deployments/vault/vault.yml.erb +0 -38
  60. data/tpl/aws/docs/bosh.md +0 -31
  61. data/tpl/aws/docs/concourse.md +0 -41
  62. data/tpl/aws/docs/ingress.md +0 -14
  63. data/tpl/aws/docs/kops.md +0 -35
  64. data/tpl/aws/docs/oauth.md +0 -24
  65. data/tpl/aws/docs/prometheus.md +0 -31
  66. data/tpl/aws/docs/vault.md +0 -35
  67. data/tpl/aws/terraform/kite_bucket.tf +0 -8
  68. data/tpl/aws/terraform/main.tf.tt +0 -36
  69. data/tpl/aws/terraform/network.tf.tt +0 -252
  70. data/tpl/aws/terraform/outputs.tf +0 -19
  71. data/tpl/aws/terraform/terraform.tfvars.tt +0 -21
  72. data/tpl/aws/terraform/variables.tf +0 -73
  73. data/tpl/gcp/README.md +0 -54
  74. data/tpl/gcp/bin/base/bootstrap.sh +0 -35
  75. data/tpl/gcp/bin/base/cleanup.sh.tt +0 -20
  76. data/tpl/gcp/bin/base/set-env.sh.tt +0 -10
  77. data/tpl/gcp/bin/base/setup-tunnel.sh.tt +0 -13
  78. data/tpl/gcp/bin/bosh-install.sh.tt +0 -22
  79. data/tpl/gcp/bin/concourse-deploy.sh.tt +0 -14
  80. data/tpl/gcp/bin/ingress-deploy.sh.tt +0 -7
  81. data/tpl/gcp/bin/ingress-update.sh.tt +0 -7
  82. data/tpl/gcp/bin/oauth-deploy.sh.tt +0 -19
  83. data/tpl/gcp/bin/prometheus-deploy.sh.tt +0 -23
  84. data/tpl/gcp/bin/vault-deploy.sh.tt +0 -10
  85. data/tpl/gcp/bosh-vars.yml.erb +0 -9
  86. data/tpl/gcp/config/oauth.yml +0 -59
  87. data/tpl/gcp/deployments/bosh/bosh.yml +0 -144
  88. data/tpl/gcp/deployments/bosh/cloud-config.yml.tt +0 -73
  89. data/tpl/gcp/deployments/bosh/cpi.yml +0 -69
  90. data/tpl/gcp/deployments/bosh/jumpbox-user.yml +0 -27
  91. data/tpl/gcp/deployments/concourse/concourse.yml.tt +0 -104
  92. data/tpl/gcp/deployments/concourse/test/test-credentials.yml +0 -3
  93. data/tpl/gcp/deployments/concourse/test/test-pipeline.yml +0 -24
  94. data/tpl/gcp/deployments/ingress/ingress.yml.erb +0 -111
  95. data/tpl/gcp/deployments/oauth/oauth.yml.tt +0 -95
  96. data/tpl/gcp/deployments/prometheus/monitor-bosh.yml +0 -518
  97. data/tpl/gcp/deployments/prometheus/monitor-kubernetes.yml +0 -30
  98. data/tpl/gcp/deployments/prometheus/prometheus.yml +0 -183
  99. data/tpl/gcp/deployments/vault/vault.yml.erb +0 -37
  100. data/tpl/gcp/docs/bosh.md +0 -36
  101. data/tpl/gcp/docs/concourse.md +0 -41
  102. data/tpl/gcp/docs/ingress.md +0 -12
  103. data/tpl/gcp/docs/oauth.md +0 -24
  104. data/tpl/gcp/docs/prometheus.md +0 -27
  105. data/tpl/gcp/docs/vault.md +0 -36
  106. data/tpl/gcp/terraform/gcs.tf.tt +0 -18
  107. data/tpl/gcp/terraform/main.tf +0 -70
  108. data/tpl/gcp/terraform/network.tf +0 -52
  109. data/tpl/gcp/terraform/outputs.tf +0 -7
  110. data/tpl/gcp/terraform/terraform.tfvars.tt +0 -15
  111. data/tpl/gcp/terraform/variables.tf +0 -37
  112. data/tpl/service/%output_path%/pipelines/tasks/helm-deploy.yml.tt +0 -22
  113. data/tpl/service/%output_path%/pipelines/tasks/run-master-tests.yml.tt +0 -12
  114. data/tpl/service/%output_path%/pipelines/tasks/run-pr-tests.yml.tt +0 -12
  115. data/tpl/skel/docs/index.md.tt +0 -0
  116. data/tpl/skel/docs/quickstart.md.tt +0 -0
data/tpl/aws/deployments/prometheus/monitor-kubernetes.yml DELETED
@@ -1,30 +0,0 @@
- # This file assumes bosh_exporter based Service Discovery is being used: ./monitor-bosh.yml
-
- # Exporter jobs
- - type: replace
-   path: /instance_groups/name=prometheus/jobs/-
-   value:
-     name: kube_state_metrics_exporter
-     release: prometheus
-     properties:
-       kube_state_metrics_exporter:
-         apiserver: "((kubernetes_apiserver))"
-         kubeconfig: ((kubernetes_kubeconfig))
-
- # Prometheus Alerts
- - type: replace
-   path: /instance_groups/name=prometheus/jobs/name=kubernetes_alerts?/release
-   value: prometheus
-
- - type: replace
-   path: /instance_groups/name=prometheus/jobs/name=prometheus/properties/prometheus/rule_files/-
-   value: /var/vcap/jobs/kubernetes_alerts/*.alerts
-
- # Grafana Dashboards
- - type: replace
-   path: /instance_groups/name=grafana/jobs/name=kubernetes_dashboards?/release
-   value: prometheus
-
- - type: replace
-   path: /instance_groups/name=grafana/jobs/name=grafana/properties/grafana/prometheus/dashboard_files/-
-   value: /var/vcap/jobs/kubernetes_dashboards/*.json
data/tpl/aws/deployments/prometheus/prometheus.yml.tt DELETED
@@ -1,184 +0,0 @@
- name: prometheus
-
- instance_groups:
- - name: alertmanager
-   azs:
-   - z1
-   instances: 1
-   vm_type: default
-   persistent_disk_type: default
-   stemcell: default
-   networks:
-   - name: platform_net
-     static_ips: [<%= @private_subnet[15] %>]
-   jobs:
-   - name: alertmanager
-     release: prometheus
-     properties:
-       alertmanager:
-         mesh:
-           password: ((alertmanager_mesh_password))
-         route:
-           receiver: default
-         receivers:
-         - name: default
-         test_alert:
-           daily: true
-
- - name: prometheus
-   azs:
-   - z1
-   instances: 1
-   vm_type: default
-   persistent_disk_type: default
-   stemcell: default
-   networks:
-   - name: platform_net
-     static_ips: [<%= @private_subnet[16] %>]
-   jobs:
-   - name: prometheus
-     release: prometheus
-     properties:
-       prometheus:
-         rule_files:
-         - /var/vcap/jobs/postgres_alerts/*.alerts
-         - /var/vcap/jobs/prometheus_alerts/*.alerts
-         scrape_configs:
-         - job_name: prometheus
-           static_configs:
-           - targets:
-             - localhost:9090
-   - name: postgres_alerts
-     release: prometheus
-   - name: prometheus_alerts
-     release: prometheus
-
- - name: database
-   azs:
-   - z1
-   instances: 1
-   vm_type: default
-   persistent_disk_type: default
-   stemcell: default
-   networks:
-   - name: platform_net
-   jobs:
-   - name: postgres
-     release: postgres
-     properties:
-       databases:
-         port: 5432
-         databases:
-         - name: grafana
-           citext: true
-         roles:
-         - name: grafana
-           password: ((postgres_grafana_password))
-   - name: postgres_exporter
-     release: prometheus
-     properties:
-       postgres_exporter:
-         datasource_name: postgresql://grafana:((postgres_grafana_password))@127.0.0.1:5432/?sslmode=disable
-
- - name: grafana
-   azs:
-   - z1
-   instances: 1
-   vm_type: default
-   persistent_disk_type: default
-   stemcell: default
-   networks:
-   - name: platform_net
-     static_ips: [<%= @private_subnet[17] %>]
-   jobs:
-   - name: grafana
-     release: prometheus
-     properties:
-       grafana:
-         database:
-           type: postgres
-           port: 5432
-           name: grafana
-           user: grafana
-           password: ((postgres_grafana_password))
-         session:
-           provider: postgres
-           provider_port: 5432
-           provider_name: grafana
-           provider_user: grafana
-           provider_password: ((postgres_grafana_password))
-         security:
-           admin_user: admin
-           admin_password: ((grafana_password))
-           secret_key: ((grafana_secret_key))
-         dashboards:
-           json:
-             enabled: true
-         prometheus:
-           dashboard_files:
-           - /var/vcap/jobs/grafana_dashboards/*.json
-           - /var/vcap/jobs/postgres_dashboards/*.json
-           - /var/vcap/jobs/prometheus_dashboards/*.json
-   - name: grafana_dashboards
-     release: prometheus
-   - name: postgres_dashboards
-     release: prometheus
-   - name: prometheus_dashboards
-     release: prometheus
-
- - name: nginx
-   azs:
-   - z1
-   instances: 1
-   vm_type: default
-   stemcell: default
-   networks:
-   - name: platform_net
-     static_ips: [<%= @private_subnet[18] %>]
-   jobs:
-   - name: nginx
-     release: prometheus
-     properties:
-       nginx:
-         alertmanager:
-           auth_username: admin
-           auth_password: ((alertmanager_password))
-         prometheus:
-           auth_username: admin
-           auth_password: ((prometheus_password))
-
- variables:
- - name: alertmanager_password
-   type: password
- - name: alertmanager_mesh_password
-   type: password
- - name: prometheus_password
-   type: password
- - name: postgres_grafana_password
-   type: password
- - name: grafana_password
-   type: password
- - name: grafana_secret_key
-   type: password
-
- update:
-   canaries: 1
-   max_in_flight: 32
-   canary_watch_time: 1000-100000
-   update_watch_time: 1000-100000
-   serial: false
-
- stemcells:
- - alias: default
-   os: ubuntu-trusty
-   version: latest
-
- releases:
- - name: postgres
-   version: "20"
-   url: https://bosh.io/d/github.com/cloudfoundry/postgres-release?v=20
-   sha1: 3f378bcab294e20316171d4e656636df88763664
- - name: prometheus
-   version: 18.6.2
-   url: https://github.com/cloudfoundry-community/prometheus-boshrelease/releases/download/v18.6.2/prometheus-18.6.2.tgz
-   sha1: f6b7ed381a28ce8fef99017a89e1122b718d5556
data/tpl/aws/deployments/vault/vault.yml.erb DELETED
@@ -1,38 +0,0 @@
- ---
- name: vault
-
- releases:
- - name: vault
-   version: latest
-
- instance_groups:
- - name: vault
-   instances: 1
-   vm_type: default
-   azs: [z1]
-   stemcell: trusty
-   networks:
-   - name: platform_net
-     static_ips: [<%= @private_subnet[11] %>]
-
-   jobs:
-   - name: vault
-     release: vault
-     properties:
-       vault:
-         ha:
-           redirect: ~
-         storage:
-           use_file: true
-
- update:
-   canaries: 1
-   max_in_flight: 1
-   serial: false
-   canary_watch_time: 1000-60000
-   update_watch_time: 1000-60000
-
- stemcells:
- - alias: trusty
-   name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
-   version: latest
data/tpl/aws/docs/bosh.md DELETED
@@ -1,31 +0,0 @@
- #### [Back](../README.md)
-
- ## BOSH
-
- ### Prerequisites
-
- - Terraform IaC applied
- - [BOSH CLI v2](https://bosh.io/docs/cli-v2.html#install) installed
-
- ### Setup
-
- Render bosh deployment
- ```
- kite render manifest bosh --cloud=aws
- ```
-
- Setup tunnel
- ```
- . bin/setup-tunnel.sh
- ```
-
- Install BOSH
- ```
- ./bin/bosh-install.sh
- ```
-
- Connect to the Director
- ```
- . bin/set-env.sh
-
- ```
data/tpl/aws/docs/concourse.md DELETED
@@ -1,41 +0,0 @@
- #### [Back](../README.md)
-
- ## Concourse
-
- ### Prerequisites
-
- - Vault [deployed and initialized](vault.md)
-
- ### Setup
-
- Fill out the "token" field in `deployments/concourse/concourse.yml` with root token received from `vault init`.
-
- Deploy Concourse by running the script with the Vault token as argument(strong passwords for Concourse auth and db will be generated automatically)
- ```
- ./bin/concourse-deploy.sh *vault_token*
- ```
-
- ### Connect GitHub oAuth
-
- To configure GitHub oAuth, you'll first need to [create](https://developer.github.com/apps/building-integrations/setting-up-and-registering-oauth-apps/registering-oauth-apps) a GitHub oAuth app.
-
- ```
- fly set-team -n concourse \
-   --github-auth-client-id D \
-   --github-auth-client-secret $CLIENT_SECRET \
-   --github-auth-team concourse/Pivotal
- ```
-
- ### Test
-
- To run a test Concourse job:
-
- - Go to test folder: `cd deployments/concourse/test`
- - Fill out `test-credentials.yml`
- - Add necessary secrets to your Vault(see [docs/vault.md](docs/vault.md))
- - Download the `fly` client from Concourse web panel and add it to your PATH: `mv *path_to_fly* /usr/local/bin`
- - Login to Concourse using the `fly` client: `fly -t ci --concourse-url *concourse-url*`
- - Create a test pipeline with `fly set-pipeline -t ci -c test-pipeline.yml -p test --load-vars-from test-credentials.yml -n`
- - Unpause pipeline: `fly unpause-pipeline -t ci -p test`
- - Trigger and unpause the test job: `fly trigger-job -t ci -j test/test-publish`
- - See the results on Concourse web panel or use: `fly watch -p test -j test/test-publish`
data/tpl/aws/docs/ingress.md DELETED
@@ -1,14 +0,0 @@
- #### [Back](../README.md)
-
- ## Ingress
-
- ### Prerequisites
-
- - BOSH environment [ready](bosh.md)
- - All hostnames resolve to the VIP configured in cloud.yml (this is mandatory to issue SSL certificates)
-
- ### Deployment
-
- To deploy Ingress, use `./bin/ingress-deploy.sh`
-
- After each new component deployed, run `./bin/ingress-update`
data/tpl/aws/docs/kops.md DELETED
@@ -1,35 +0,0 @@
- #### KOPS
-
- ### Prerequisites
-
- - [kubectl](https://github.com/kubernetes/kops/blob/master/docs/install.md#kubectl) installed
- - [kops](https://github.com/kubernetes/kops/blob/master/docs/install.md) client installed
- - SSH key generated(needed for accessing cluster's master)
- - Amazon S3 bucket for storing cluster's state created
- - Route 53 domain for cluster access
- - IAM user with correct policies:
-   - AmazonEC2FullAccess
-   - AmazonRoute53FullAccess
-   - AmazonS3FullAccess
-   - IAMFullAccess
-   - AmazonVPCFullAccess
-
- ### Setup
-
- Export AWS access keys and ID if you didn't before
- ```
- export AWS_ACCESS_KEY_ID=<access key>
- export AWS_SECRET_ACCESS_KEY=<secret key>
- ```
-
- Deploy the `kops` cluster
- ```
- ./bin/kops-deploy.sh
- ```
-
- ### Teardown
-
- To tear down the kops cluster you've created, just run
- ```
- ./bin/kops-delete.sh
- ```
data/tpl/aws/docs/oauth.md DELETED
@@ -1,24 +0,0 @@
- #### [Back](../README.md)
-
- ## OAuth (UAA)
-
- ### Configuration
-
- If you want to add initial groups and users, change oauth look,
- configure mail, etc. - you should edit `config/oauth.yml`.
-
- Here are links to uaa config documentation:
-
- * __users:__ [uaa.scim.users](https://bosh.io/jobs/uaa?source=github.com/cloudfoundry/uaa-release&version=52#p=uaa.scim.users)
- * __groups:__ [uaa.scim.groups](https://bosh.io/jobs/uaa?source=github.com/cloudfoundry/uaa-release&version=52#p=uaa.scim.groups)
- * __oauth clients:__ [uaa.clients](https://bosh.io/jobs/uaa?source=github.com/cloudfoundry/uaa-release&version=52#p=uaa.clients)
- * __theming:__ [login.branding](https://bosh.io/jobs/uaa?source=github.com/cloudfoundry/uaa-release&version=52#p=login.branding)
- * __email notifications:__ [login.smtp](https://bosh.io/jobs/uaa?source=github.com/cloudfoundry/uaa-release&version=52#p=login.smtp)
-
- ### Deployment
-
- After editing config, run `./bin/oauth-deploy.sh`
-
- ### Usage
-
- To check if OAuth works, visit [<%= @values['oauth']['hostname'] %>](<%= @values['oauth']['url'] %>).
data/tpl/aws/docs/prometheus.md DELETED
@@ -1,31 +0,0 @@
- #### [Back](../README.md)
-
- ## Prometheus
-
- ### Prerequisites
-
- - BOSH environment [ready](bosh.md)
- - Kops cluster [deployed](kops.md)
-
- ### Setup
-
- Enter path to your Kubernetes config in `config/cloud.yml` and add the Kubernetes API server address to `config/bosh_vars.yml`.
-
- Afterwards, deploy Prometheus
- ```
- ./bin/prometheus-deploy.sh
- ```
-
- ### Access
-
- After the deployment process is done, you can reach each Prometheus' component's web UI at:
-
- If you have [Ingress](ingress.md) deployed and DNS record created, each Prometheus stack component should be accessible by its respective address.
-
- Without Ingress:
-
- - AlertManager: http://10.0.0.18:9093
- - Grafana: http://10.0.0.18:3000
- - Prometheus: http://10.0.0.18:9090
-
- You can find related credentials in `config/creds.yml`
data/tpl/aws/docs/vault.md DELETED
@@ -1,35 +0,0 @@
- #### [Back](../README.md)
-
- ## Vault
-
- ### Prerequisites
-
- Before using Vault, you should have the client installed:
-
- - Download the binary for your OS
- - Unzip it and run `chmod +x vault && sudo mv vault /usr/local/bin/vault`
- - Check if the Vault is installed by running `vault -v`
-
- ### Deployment
-
- To deploy Vault, use `./bin/vault-deploy.sh`
-
- ### Connection
-
- - Export your Vault's IP using `export VAULT_ADDR=http://*vault_ip*:8200`
- - Run `vault init` to initialize the vault
- - Store the keys displayed after init
- - Unseal the vault by running `vault unseal` three times using three keys from the previous step
- - Authenticate to the vault with `vault auth` using the root token you got from `vault init`
-
- [Optional]
- - Try to store a dummy secret: `vault write secret/handshake knock=knock`
- - Read it: `vault read secret/handshake`
-
- ### Usage with Concourse
-
- Before using Vault with Concourse you should mount a secrets backend with `vault mount -path=concourse kv`
-
- To add new secrets accessible for Concourse use `vault write concourse/main/*secret_name* value="*secret_value*"`
-
- #### It's recommended to create a separate token for Concourse by using `vault token-create`