undoing these changes for now #285

Closed
wants to merge 3 commits into from
8 changes: 4 additions & 4 deletions charts/ziti-controller/README.md
@@ -166,12 +166,12 @@ ctrlPlane:

## Extra Security for the Management API

You can split the client and management APIs into separate cluster services by setting `managementApi.service.enabled=true`. With this configuration, you'll have an additional cluster service named `{release}-mgmt` that is the management API, and the client API will not have management features.

This Helm chart's values allow for both operational scenarios: combined and split. The default choice is to expose the combined client and management APIs as the cluster service named `{release}-client`, which is convenient because you can use the `ziti` CLI immediately. For additional security, you may shelter the management API by splitting these two sets of features, exposing them as separate API servers. After the split, you can access the management API in several ways (see the sketch after this list):

* deploy a tunneler to bind a Ziti service targeting {release}-mgmt.{namespace}.svc:{port}.
* `kubectl -n {namespace} port-forward deployments/{release}-mgmt 8443:{port}`
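
For example, a minimal sketch of enabling the split and then reaching the management API with a port-forward. The release name `ziti`, namespace `ziti`, and the `openziti` Helm repo alias are assumptions, and `{port}` stays a placeholder for your management listener port:

```bash
# Enable the separate management service (assumed release "ziti" in namespace "ziti",
# chart installed from a repo added with the alias "openziti").
helm upgrade ziti openziti/ziti-controller \
  --namespace ziti \
  --reuse-values \
  --set managementApi.service.enabled=true

# Forward a local port to the management service created by the split.
# Replace {port} with the management API listener port from your values.
kubectl -n ziti port-forward deployments/ziti-mgmt 8443:{port}
```

With the forward in place, the management API (and ZAC at the `/zac/` path) should be reachable at `https://localhost:8443`.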

The web console (ZAC) is always bound to the same web listener as the management API, so you can access it at that `/zac/` path on the same URL.

6 changes: 3 additions & 3 deletions charts/ziti-edge-tunnel/README.md
@@ -5,11 +5,11 @@

Dial OpenZiti services with a tunneler daemonset

**Homepage:** <https://openziti.io>

## Source Code

-* &lt;https://github.com/openziti/ziti-tunnel-sdk-c>
+* <https://github.com/openziti/ziti-tunnel-sdk-c>

## Requirements

@@ -182,7 +182,7 @@ Once the image is present on every node, you can proceed to upgrade the tunneler
| imagePullSecrets | list | `[]` | |
| livenessProbe.exec.command[0] | string | `"/bin/bash"` | |
| livenessProbe.exec.command[1] | string | `"-c"` | |
-| livenessProbe.exec.command[2] | string | `"if (ziti-edge-tunnel tunnel_status | sed -E 's/(^received\\sresponse\\s&lt;|>$)//g' | jq '.Success'); then true; else false; fi"` | |
+| livenessProbe.exec.command[2] | string | `"if (ziti-edge-tunnel tunnel_status | sed -E 's/(^received\\sresponse\\s<|>$)//g' | jq '.Success'); then true; else false; fi"` | |
| livenessProbe.failureThreshold | int | `3` | |
| livenessProbe.initialDelaySeconds | int | `180` | |
| livenessProbe.periodSeconds | int | `60` | |
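
If the default probe timing is too aggressive for slow nodes, the `livenessProbe.*` values shown in the table above can be overridden at upgrade time. A minimal sketch, assuming an existing release named `ziti-tunnel` and the `openziti` repo alias:

```bash
# Relax the tunneler's liveness probe; only keys listed in the values table are set.
helm upgrade ziti-tunnel openziti/ziti-edge-tunnel \
  --reuse-values \
  --set livenessProbe.initialDelaySeconds=300 \
  --set livenessProbe.periodSeconds=120 \
  --set livenessProbe.failureThreshold=5
```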
2 changes: 1 addition & 1 deletion charts/ziti-router/README.md
@@ -320,7 +320,7 @@ identity:
| tunnel.lanIf | string | `"lo"` | interface device name for setting up INPUT firewall rules if the firewall is enabled. It must be set, but it is not needed in containers, so it defaults to lo |
| tunnel.mode | string | `"none"` | run mode for the router's built-in tunnel component: host, tproxy, proxy, or none |
| tunnel.proxyAdditionalK8sServices | list | `[]` | if tunnel mode is "proxy", create a separate cluster service for each Ziti service listed in "proxyServices" which k8sService == name |
-| tunnel.proxyDefaultK8sService | object | `{"enabled":true,"type":"ClusterIP"}` | if tunnel mode is "proxy", create a cluster service named &lbrace;&lbrace; release }}-proxy-default listening on each "advertisedPort" defined in "proxyServices" |
+| tunnel.proxyDefaultK8sService | object | `{"enabled":true,"type":"ClusterIP"}` | if tunnel mode is "proxy", create a cluster service named {{ release }}-proxy-default listening on each "advertisedPort" defined in "proxyServices" |
| tunnel.proxyServices | list | `[]` | list of Ziti services for which K8s services are to be created by this deployment, default is one cluster service port per Ziti service |
| tunnel.resolver | string | `nil` | Ziti nameserver listener where OS must be configured to send DNS queries (default: udp://127.0.0.1:53) |
| websocket.enableCompression | bool | `true` | enable compression on websocket |
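
The tunnel values in the table above combine as follows. A minimal sketch of switching the router's built-in tunnel component to proxy mode, assuming an existing release named `ziti-router` and the `openziti` repo alias; entries for `tunnel.proxyServices` (each carrying its own "advertisedPort" and, optionally, a "k8sService" name) would come from a values file specific to your Ziti services:

```bash
# Switch the built-in tunnel component to proxy mode and keep the default
# ClusterIP service; only keys shown in the values table are set here.
helm upgrade ziti-router openziti/ziti-router \
  --reuse-values \
  --set tunnel.mode=proxy \
  --set tunnel.proxyDefaultK8sService.enabled=true \
  --set tunnel.proxyDefaultK8sService.type=ClusterIP
```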