conlink 2.5.6 → 2.5.8

package/README.md CHANGED
@@ -2,6 +2,7 @@

 [![npm](https://img.shields.io/npm/v/conlink.svg)](https://www.npmjs.com/package/conlink)
 [![docker](https://img.shields.io/docker/v/lonocloud/conlink.svg)](https://hub.docker.com/r/lonocloud/conlink)
+ [![Push (compose tests)](https://github.com/Viasat/conlink/actions/workflows/push.yml/badge.svg)](https://github.com/Viasat/conlink/actions/workflows/push.yml)

 ## Declarative Low-Level Networking for Containers

@@ -38,22 +39,22 @@ services:
   - {bridge: s1, ip: 10.0.1.1/24}
 ```

- Check out the [runnable examples](https://github.com/LonoCloud/conlink/tree/master/examples)
- for more ideas on what is possible. [This guide](https://lonocloud.github.io/conlink/#/guides/examples)
+ Check out the [runnable examples](https://github.com/Viasat/conlink/tree/master/examples)
+ for more ideas on what is possible. [This guide](https://viasat.github.io/conlink/#/guides/examples)
 walks through how to run each example.

- The [reference documentation](https://lonocloud.github.io/conlink/#/reference/network-configuration-syntax)
- contains the full list of configuration options. Be sure to also read [usage notes](https://lonocloud.github.io/conlink/#/usage-notes),
+ The [reference documentation](https://viasat.github.io/conlink/#/reference/network-configuration-syntax)
+ contains the full list of configuration options. Be sure to also read [usage notes](https://viasat.github.io/conlink/#/usage-notes),
 which highlight some unique aspects of using conlink-provided networking.

 Conlink also includes tools that make docker compose a much more
 powerful development and testing environment (refer to
- [Compose Tools](https://lonocloud.github.io/conlink/#/guides/compose-tools) for
+ [Compose Tools](https://viasat.github.io/conlink/#/guides/compose-tools) for
 details):

- * [mdc](https://lonocloud.github.io/conlink/#/guides/compose-tools?id=mdc): modular management of multiple compose configurations
- * [wait](https://lonocloud.github.io/conlink/#/guides/compose-tools?id=wait): wait for network and file conditions before continuing
- * [copy](https://lonocloud.github.io/conlink/#/guides/compose-tools?id=copy): recursively copy files with variable templating
+ * [mdc](https://viasat.github.io/conlink/#/guides/compose-tools?id=mdc): modular management of multiple compose configurations
+ * [wait](https://viasat.github.io/conlink/#/guides/compose-tools?id=wait): wait for network and file conditions before continuing
+ * [copy](https://viasat.github.io/conlink/#/guides/compose-tools?id=copy): recursively copy files with variable templating

 ## Why conlink?

@@ -88,7 +89,7 @@ Conlink has the following features:

 General:
 * docker
- * docker-compose version 1.25.4 or later.
+ * docker-compose v2

 Other:
 * For Open vSwitch (OVS) bridging, the `openvswitch` kernel module
package/conlink CHANGED
@@ -8,4 +8,4 @@ die() { echo >&2 "${*}"; exit 1; }

 [ -e "${NBB}" ] || die "Missing ${NBB}. Maybe run 'npm install' in ${TOP_DIR}?"

- exec ${NBB} -cp "${TOP_DIR}/src" -m conlink.core/main "${@}"
+ NODE_PATH="${TOP_DIR}/node_modules" exec ${NBB} -cp "${TOP_DIR}/src" -m conlink.core/main "${@}"
@@ -1,6 +1,6 @@
 # Examples

- The [examples](https://github.com/LonoCloud/conlink/tree/master/examples)
+ The [examples](https://github.com/Viasat/conlink/tree/master/examples)
 directory contains the necessary files to follow along below.

 The examples also require a conlink docker image. Build the image for both
@@ -286,6 +286,9 @@ sub-interface of the same host (using VLAN ID/tag 5). Static NAT
 address/interface to the internal address/interface (dummy) where the
 web server is running.

+ NOTE: The conlink container runs in full privileged mode in order to
+ be able to create VLANs in the root namespace.
+
 Create an environment file with the name of the parent host interface
 and the external IP addresses to assign to each container:

@@ -415,3 +418,61 @@ interfaces to be configured before continuing. Finally the `client`
 service also uses `wait` to probe the `webserver` until it accepts
 TCP connections on port 8080, and then it starts its main loop
 that repeatedly requests a templated file from the `webserver`.
+
+ ## test12: kubernetes (k3s)
+
+ This example demonstrates the use of a kubernetes k3s cluster running
+ in conlink and communicating with other compose services. It also
+ leverages multiple `mdc` modes/modules and the `wait` command.
+
+ NOTE: The k3s containers run in full privileged mode in order to use
+ overlayfs mounts inside the containers. The conlink container must
+ also run in privileged mode in order to create veth endpoints in the
+ k3s containers.
+
+ For convenience, export `MODES_DIR` and create a KUBECTL function that
+ will run kubectl in the `k3s-server` container:
+
+ ```
+ export MODES_DIR=./examples/test12-k3s/modes
+ KUBECTL() { ./mdc k3s exec k3s-server kubectl "${@}"; }
+ ```
+
+ Start the test12 compose configuration:
+
+ ```
+ ./mdc k3s up --build --force-recreate -d
+ ```
+
+ Wait until the nodes are up and the kube-system pods are "Running":
+
+ ```
+ KUBECTL get -w nodes
+ KUBECTL get -w pods -A
+ ```
+
+ Deploy an nginx web proxy that will forward web requests to port 8080
+ of the top-level `test-server` service (non-kubernetes):
+
+ ```
+ KUBECTL apply -f /test/k3s-web-proxy.yaml
+ ```
+
+ Get the IP of the deployed web-proxy service:
+
+ ```
+ SVC_IP=$(KUBECTL -n nettest get svc/web-proxy -o jsonpath='{.spec.clusterIP}')
+ ```
+
+ From the `test-client` container, do an HTTP GET request to the
+ `test-server` container via the k3s-deployed nginx proxy:
+
+ ```
+ ./mdc k3s exec test-client wget -q -O- http://${SVC_IP}:80/README.md | head -n10
+ ```
+
+ Show the logs of all non-probe web requests to the proxy:
+
+ ```
+ KUBECTL -n nettest logs deploy/web-proxy | grep -v kube-probe
+ ```
package/docs/index.html CHANGED
@@ -14,8 +14,8 @@
 <script>
 window.$docsify = {
   name: 'conlink',
-   repo: 'LonoCloud/conlink',
-   homepage: 'https://raw.githubusercontent.com/LonoCloud/conlink/master/README.md',
+   repo: 'Viasat/conlink',
+   homepage: 'https://raw.githubusercontent.com/Viasat/conlink/master/README.md',
   // Uncomment to follow a symlink to root README.md and test rendering locally
   //homepage: '_render-local-README.md',

@@ -0,0 +1,33 @@
+ #!/bin/sh
+ set -eu
+
+ ROLE=$1; shift
+
+ # Point CRI's default CNI paths at where k3s actually puts them
+ mkdir -p /opt/cni /etc/cni
+ ln -sfn /var/lib/rancher/k3s/data/cni /opt/cni/bin
+ ln -sfn /var/lib/rancher/k3s/agent/etc/cni/net.d /etc/cni/net.d
+
+ mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/
+ cat > /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl <<EOF
+ [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
+   SystemdCgroup = false
+
+ [plugins."io.containerd.grpc.v1.cri".cni]
+   bin_dir = "/var/lib/rancher/k3s/data/cni"
+   conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"
+ EOF
+
+ K3S_ARGS=""
+ # Common kubelet arguments to fix cgroup issues
+ # NOTE: still has periodic "Failed to kill all the processes attached to cgroup" log message
+ K3S_ARGS="${K3S_ARGS} --kubelet-arg=cgroup-driver=cgroupfs"
+ K3S_ARGS="${K3S_ARGS} --kubelet-arg=feature-gates=KubeletInUserNamespace=true"
+ K3S_ARGS="${K3S_ARGS} --kubelet-arg=fail-swap-on=false"
+ K3S_ARGS="${K3S_ARGS} --kubelet-arg=cgroup-root=/"
+ K3S_ARGS="${K3S_ARGS} --kubelet-arg=runtime-cgroups=/systemd/system.slice"
+ K3S_ARGS="${K3S_ARGS} --kubelet-arg=kubelet-cgroups=/systemd/system.slice"
+
+ echo exec k3s "${ROLE}" ${K3S_ARGS} "$@"
+ exec k3s "${ROLE}" ${K3S_ARGS} "$@"
+
@@ -0,0 +1,82 @@
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+   name: nettest
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: web-proxy-nginx
+   namespace: nettest
+ data:
+   nginx.conf: |
+     events {}
+     http {
+       # Structured access log to stdout so you can `kubectl logs -f`
+       log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+                       '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
+                       'upstream=$upstream_addr upstream_status=$upstream_status '
+                       'rt=$request_time urt=$upstream_response_time';
+       access_log /dev/stdout main;
+       error_log /dev/stderr warn;
+
+       server {
+         listen 80;
+         location / {
+           proxy_pass http://10.200.0.11:8080;
+           proxy_set_header Host $host;
+           proxy_set_header X-Real-IP $remote_addr;
+           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+           proxy_set_header X-Forwarded-Proto $scheme;
+           proxy_connect_timeout 2s;
+           proxy_read_timeout 10s;
+         }
+       }
+     }
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: web-proxy
+   namespace: nettest
+ spec:
+   replicas: 1  # start with 1 for easier tracing
+   selector:
+     matchLabels: { app: web-proxy }
+   template:
+     metadata:
+       labels: { app: web-proxy }
+     spec:
+       containers:
+       - name: nginx
+         image: nginx:1.27-alpine
+         ports: [ { containerPort: 80, name: http } ]
+         volumeMounts:
+         - name: cfg
+           mountPath: /etc/nginx/nginx.conf
+           subPath: nginx.conf
+         readinessProbe:
+           httpGet: { path: /, port: http }
+           initialDelaySeconds: 2
+           periodSeconds: 3
+         livenessProbe:
+           httpGet: { path: /, port: http }
+           initialDelaySeconds: 5
+           periodSeconds: 10
+       volumes:
+       - name: cfg
+         configMap: { name: web-proxy-nginx }
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: web-proxy
+   namespace: nettest
+ spec:
+   selector: { app: web-proxy }
+   ports:
+   - name: http
+     port: 80
+     targetPort: http
+   type: ClusterIP
+
@@ -0,0 +1,22 @@
+ services:
+   extract-utils:
+     build: {context: .}
+     user: ${USER_ID:-0}:${GROUP_ID:-0}
+     network_mode: none
+     volumes:
+       - ./utils:/conlink_utils
+     command: cp /utils/wait /utils/wait.sh /utils/copy /utils/copy.sh /utils/echo /conlink_utils/
+
+   network:
+     build: {context: .}
+     pid: host
+     network_mode: none
+     #cap_add: [SYS_ADMIN, NET_ADMIN, SYS_NICE, NET_BROADCAST, IPC_LOCK]
+     privileged: true
+     security_opt: [ 'apparmor:unconfined' ]  # needed on Ubuntu 18.04
+     volumes:
+       - /var/run/docker.sock:/var/run/docker.sock
+       - /var/lib/docker:/var/lib/docker
+       - ./:/test
+     working_dir: /test
+     command: /app/build/conlink.js --compose-file ${COMPOSE_FILE:?COMPOSE_FILE must be set}
@@ -0,0 +1,73 @@
+ x-network:
+   bridges:
+     - { bridge: k0, mode: linux }
+   links:
+     - { service: k3s-server, bridge: k0, dev: eth1, ip: 10.200.0.1/24 }
+     - { service: k3s-agent-1, bridge: k0, dev: eth1, ip: 10.200.0.2/24 }
+     - { service: k3s-agent-2, bridge: k0, dev: eth1, ip: 10.200.0.3/24 }
+     - { service: test-client, bridge: k0, dev: eth1, ip: 10.200.0.10/24,
+         route: ["10.42.0.0/15 via 10.200.0.2 dev eth1"] }
+     - { service: test-server, bridge: k0, dev: eth1, ip: 10.200.0.11/24,
+         route: ["10.42.0.0/15 via 10.200.0.2 dev eth1"] }
+
+ x-k3s-base: &k3s-base
+   depends_on: {extract-utils: {condition: service_completed_successfully}}
+   image: rancher/k3s:latest
+   privileged: true
+   cgroup: host
+   volumes:
+     - ./utils:/utils:ro
+     - /dev:/dev
+     - ./examples/test12-k3s/:/test:ro
+     # cgroup quirks
+     - /sys/fs/cgroup:/sys/fs/cgroup:rw
+     # To write kubeconfig to the host
+     #- ./k3s-output:/output
+   entrypoint: /utils/wait -i eth1 -- /test/k3s-init.sh
+
+ services:
+   # Main k3s server
+   k3s-server:
+     <<: *k3s-base
+     hostname: k3s-server
+     # To expose kubernetes control API on the host
+     #ports:
+     #  - "6443:6443" # kubernetes API
+     environment:
+       # To write kubeconfig on host for external kubectl
+       #- K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
+       #- K3S_KUBECONFIG_MODE=644
+       # fixed token for adding agents
+       - K3S_TOKEN=changeme-secret-token
+     command: server --disable=traefik --disable=servicelb
+
+   # k3s Agents
+   k3s-agent-1:
+     <<: *k3s-base
+     hostname: k3s-agent-1
+     environment:
+       - K3S_URL=https://k3s-server:6443
+       - K3S_TOKEN=changeme-secret-token
+     command: agent
+
+   k3s-agent-2:
+     <<: *k3s-base
+     hostname: k3s-agent-2
+     environment:
+       - K3S_URL=https://k3s-server:6443
+       - K3S_TOKEN=changeme-secret-token
+     command: agent
+
+   ### For testing two-way communication
+   test-client:
+     image: alpine
+     network_mode: none
+     command: sleep infinity
+
+   test-server:
+     image: python
+     network_mode: none
+     volumes:
+       - ./:/top
+     working_dir: /top
+     command: python3 -m http.server 8080
@@ -0,0 +1 @@
+ base
@@ -89,7 +89,7 @@ Resources:

 ## Download conlink image and repo
 docker pull lonocloud/conlink
- git clone https://github.com/LonoCloud/conlink /root/conlink
+ git clone https://github.com/Viasat/conlink /root/conlink
 cd /root/conlink

 #cfn-signal -e 0 --stack ${AWS::StackName} --region ${AWS::Region} --resource WaitHandle
package/mdc CHANGED
@@ -15,7 +15,7 @@ ENV_FILE="${ENV_FILE:-.env}"
 MDC_FILES_DIR="${MDC_FILES_DIR:-./.files}"
 LS=$(which ls)
 RESOLVE_DEPS="${RESOLVE_DEPS-./node_modules/@lonocloud/resolve-deps/resolve-deps.py}"
- DOCKER_COMPOSE="${DOCKER_COMPOSE:-docker-compose}"
+ DOCKER_COMPOSE="${DOCKER_COMPOSE:-docker compose}"

 which ${RESOLVE_DEPS} >/dev/null 2>/dev/null \
   || die "Missing ${RESOLVE_DEPS}. Perhaps 'npm install'?"
package/package.json CHANGED
@@ -1,20 +1,20 @@
 {
   "name": "conlink",
-   "version": "2.5.6",
+   "version": "2.5.8",
   "description": "conlink - Declarative Low-Level Networking for Containers",
-   "repository": "https://github.com/LonoCloud/conlink",
+   "repository": "https://github.com/Viasat/conlink",
   "license": "SEE LICENSE IN LICENSE",
   "dependencies": {
     "@lonocloud/resolve-deps": "^0.1.0",
     "ajv": "^8.12.0",
-     "dockerode": "^3.3.4",
+     "dockerode": "^4.0.9",
     "nbb": "^1.2.179",
     "neodoc": "^2.0.2",
     "ts-graphviz": "^1.8.1",
     "yaml": "^2.2.1"
   },
   "devDependencies": {
-     "@lonocloud/dctest": "^0.3.1",
+     "@lonocloud/dctest": "^0.5.0",
     "docsify-cli": "^4.4.4",
     "shadow-cljs": "^2.25.7",
     "source-map-support": "^0.5.21"
package/scripts/copy.sh CHANGED
@@ -25,7 +25,7 @@ dst_dir="${1}"; shift || die 2 "Usage: ${0} [-T|--template] SRC_DIR DST_DIR"
 echo cp -a "${src}" "${dst}"
 cp -a "${src}" "${dst}" || die 1 "Failed to copy file"
 # TODO: make this configurable
- chown root.root "${dst}" || die 1 "Unable to set ownership"
+ chown root:root "${dst}" || die 1 "Unable to set ownership"
 chmod +w "${dst}" || die 1 "Unable to make writable"

 [ -z "${TEMPLATE}" ] && continue
@@ -6,7 +6,7 @@

 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ;; Network Address functions
- ;; - based on github.com/LonoCloud/clj-protocol
+ ;; - based on github.com/Viasat/clj-protocol

 (defn num->string [n base]
   #?(:cljs (.toString n base)
@@ -680,6 +680,21 @@ General Options:
     client)
   #(warn "Could not start docker client on '" path "': " %))))

+ (defn docker-event-stream-handler
+   "Handle docker event stream chunks (newline-delimited JSON)."
+   [event-callback buf-atom chunk]
+   (let [data (str @buf-atom (.toString chunk "utf8"))
+         parts (js->clj (.split data "\n"))
+         tail (last parts)
+         events (butlast parts)]
+     (reset! buf-atom tail)
+     (doseq [line events
+             :when (not (S/blank? line))]
+       (try
+         (event-callback (->clj (js/JSON.parse line)))
+         (catch :default e
+           ((:error @ctx) "Could not parse docker event" line e))))))
+
 (defn docker-listen
   "Listen for docker events from 'client' that match filter 'filters'.
   Calls 'event-callback' function with each decoded event map."
@@ -688,11 +703,15 @@ General Options:
   (P/catch
     (P/let
       [ev-stream ^obj (.getEvents client #js {:filters (json-str filters)})
+        buf (atom "")
        _ ^obj (.on ev-stream "data"
-                #(event-callback client (->clj (js/JSON.parse %))))]
+                (partial docker-event-stream-handler
+                         (partial event-callback client)
+                         buf))]
       ev-stream)
     #(error "Could not start docker listener"))))

+
 (defn link-repr [{:keys [type dev remote outer-dev bridge ip dev-id]}]
   (str dev-id
     (if remote
@@ -788,19 +807,23 @@ General Options:
   the links for that container and then run any commands defined for
   the container. Finally call all-connected-check to check and notify
   if all containers/services are connected."
- [client {:keys [status id]}]
+ [client {:keys [status id] :as evt}]
  (P/let
    [{:keys [log info network-config network-state compose-opts self-pid]} @ctx
-    container-obj (get-container client id)
-    container-data (if (= "die" status)
-                     (P/let [ci (get-in network-state [:containers id])]
-                       (swap! ctx update-in [:network-state :containers]
-                              dissoc id)
-                       ci)
-                     (P/let [ci (query-container-data container-obj)]
-                       (swap! ctx update-in [:network-state :containers]
-                              assoc id ci)
-                       ci))
+    status (or status (:Action evt))
+    id (or id (:ID evt) (get-in evt [:Actor :ID]) (get-in evt [:Actor :id]))
+    container-obj (when (and status id) (get-container client id))
+    container-data (if (or (not status) (not id))
+                     nil
+                     (if (= "die" status)
+                       (P/let [ci (get-in network-state [:containers id])]
+                         (swap! ctx update-in [:network-state :containers]
+                                dissoc id)
+                         ci)
+                       (P/let [ci (query-container-data container-obj)]
+                         (swap! ctx update-in [:network-state :containers]
+                                assoc id ci)
+                         ci)))
    {cname :name clabels :labels} container-data

    svc-match? (and (let [p (:project compose-opts)]
@@ -812,19 +835,21 @@ General Options:
          (get-in network-config [:services (:service clabels)]))
    links (concat (:links containers) (:links services))
    commands (concat (:commands containers) (:commands services))]
- (if (and (not (seq links)) (not (seq commands)))
-   (info (str "Event: no matching config for " cname ", ignoring"))
-   (P/do
-     (info "Event:" status cname id)
-     (P/all (for [link links
-                  :let [link (link-instance-enrich
-                               link container-data self-pid)]]
-              (modify-link link status)))
-     (when (= "start" status)
-       (P/all (for [{:keys [command]} commands]
-                (exec-command cname container-obj command))))
-
-     (all-connected-check)))))
+ (if (or (not status) (not id))
+   nil
+   (if (and (not (seq links)) (not (seq commands)))
+     (info (str "Event: no matching config for " cname ", ignoring"))
+     (P/do
+       (info "Event:" status cname id)
+       (P/all (for [link links
+                    :let [link (link-instance-enrich
+                                 link container-data self-pid)]]
+                (modify-link link status)))
+       (when (= "start" status)
+         (P/all (for [{:keys [command]} commands]
+                  (exec-command cname container-obj command))))
+
+       (all-connected-check))))))

 (defn exit-handler
   "When the process is exiting, delete all links and bridges that are
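The new `docker-event-stream-handler` above exists because Docker's event stream is newline-delimited JSON and a single `data` chunk may end mid-event; the handler keeps everything after the last newline buffered until the next chunk. The same buffering technique can be sketched in Python (illustrative only; `make_ndjson_handler` is a hypothetical name, not part of conlink):

```python
import json

def make_ndjson_handler(event_callback):
    """Return a chunk handler that reassembles newline-delimited JSON."""
    buf = [""]  # mutable buffer, analogous to the buf atom in the cljs code

    def on_chunk(chunk: bytes):
        data = buf[0] + chunk.decode("utf-8")
        # Everything before the last newline is complete; the tail
        # (possibly a partial event) stays buffered for the next chunk.
        *events, buf[0] = data.split("\n")
        for line in events:
            if not line.strip():
                continue
            try:
                event_callback(json.loads(line))
            except json.JSONDecodeError:
                pass  # skip (or log) unparseable lines, as the cljs handler does

    return on_chunk
```

Feeding the handler chunks that split an event across reads still yields one callback per complete event, which is exactly the failure mode the old one-chunk-equals-one-event code could not handle.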
@@ -0,0 +1,50 @@
+ name: "test12: k3s (kubernetes) and nginx proxy"
+
+ env:
+   DC: "${{ process.env.DOCKER_COMPOSE || 'docker compose' }}"
+   MODES_DIR: examples/test12-k3s/modes
+
+ tests:
+   test12:
+     name: "deploy and test nginx proxy in k3s (kubernetes)"
+     steps:
+       - exec: :host
+         run: |
+           ./mdc k3s
+           ${DC} down --remove-orphans --volumes -t1
+           ${DC} up -d --force-recreate
+
+       # Wait for conlink and k3s to start up
+       - exec: :host
+         run: |
+           echo "waiting for conlink startup"
+           ${DC} logs network | grep "All links connected"
+         repeat: { retries: 30, interval: '1s' }
+       - exec: k3s-server
+         run: kubectl get nodes 2>/dev/null | grep -c ' Ready ' | grep "^[345]"
+         repeat: { retries: 30, interval: '2s' }
+       - exec: k3s-server
+         run: kubectl get pods -n kube-system 2>/dev/null | grep -c ' Running ' | grep "^[345]"
+         repeat: { retries: 30, interval: '2s' }
+
+       # apply the proxy
+       - exec: k3s-server
+         run: |
+           kubectl apply -f /test/k3s-web-proxy.yaml
+
+       # test two-way traffic through the proxy
+       - id: svc_ip
+         exec: k3s-server
+         run: |
+           kubectl -n nettest get svc/web-proxy -o jsonpath='{.spec.clusterIP}'
+         outputs:
+           SVC_IP: ${{ step.stdout }}
+       - exec: test-client
+         run: |
+           set -o pipefail
+           URL=http://${{ steps['svc_ip'].outputs.SVC_IP }}:80/README.md
+           wget -q -O- ${URL} | head -n10
+         repeat: { retries: 5, interval: '3s' }
+
+       - exec: :host
+         run: ${DC} down --remove-orphans --volumes -t1