mirror of https://github.com/cirruslabs/tart.git
Orchard documentation (#897)

* Orchard documentation
* Fix typo
* architecture-and-security.md: change list order

Co-authored-by: Fedor Korotkov <fedor.korotkov@gmail.com>

Parent commit: 227301436c · This commit: 3fde7d08dd
(new binary file: image, 210 KiB; not shown)
docs/orchard/architecture-and-security.md (new file)

## Architecture

Orchard cluster consists of three components:

* Controller — responsible for managing the cluster and scheduling of resources
* Worker — responsible for executing the VMs
* Client — responsible for creating, modifying and removing the resources on the Controller; can either be the Orchard CLI or [an API consumer](/orchard/integration-guide)
Normally you deploy a single Controller that needs to be accessible to both the Clients and the Workers. Then you can deploy the Workers, which can reside anywhere and be inaccessible to the Clients directly, e.g. behind a NAT.

## Security
When an Orchard Client or a Worker connects to the Controller, it needs to establish trust and verify that it's talking to the right Controller, so that no [man-in-the-middle attack](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) is possible.

Similarly to web browsers (which rely on the [public key infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure)) and SSH (which relies on semi-automated fingerprint verification), Orchard combines these two traits in a hybrid approach: it defaults to automatic PKI verification (which can be disabled by [`--no-pki`](#--no-pki-override)) and falls back to manual verification for self-signed certificates.

This hybrid approach is needed because the Controller can be configured in two ways:

* *Controller with a publicly valid certificate*
    * can be configured manually by passing the `--controller-cert` and `--controller-key` command-line arguments to `orchard controller run`
* *Controller with a self-signed certificate*
    * configured automatically on first Controller start-up when no `--controller-cert` and `--controller-key` command-line arguments are passed
Below we'll explain how the Orchard Client and the Worker secure the connection when accessing these two Controller types.

### Client

The Client is associated with the Controller using the `orchard context create` command, which works as follows:

* the Client attempts to connect to the Controller and validate its certificate using the host's root CA set (can be disabled with [`--no-pki`](#--no-pki-override))
    * if the Client encounters a *Controller with a publicly valid certificate*, that will be the last step and the association will succeed
    * if the Client is dealing with a *Controller with a self-signed certificate*, it will make another connection attempt to probe the Controller's certificate
* the probed Controller's certificate fingerprint is then presented to the user, and if the user agrees to trust it, the Client considers that certificate trusted for the given context
* the Client finally connects to the Controller again with a trusted CA set containing only that certificate, executes the final API sanity checks, and if everything is OK the association succeeds
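The fingerprint presented to the user is typically a hash over the server's DER-encoded certificate. As an illustration only (this is a sketch, not Orchard's exact implementation), a SHA-256 fingerprint can be computed like this:

```python
import hashlib


def fingerprint(der_certificate: bytes) -> str:
    """Return a colon-separated, uppercase SHA-256 fingerprint of a DER-encoded certificate."""
    digest = hashlib.sha256(der_certificate).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))


# With real data you'd pass something like ssl.SSLSocket.getpeercert(binary_form=True) here
print(fingerprint(b"example certificate bytes"))
```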
Afterward, each interaction with the Controller (e.g. the `orchard create vm` command) will stick to the chosen verification method and will re-verify the presented Controller's certificate against:

* *Controller with a self-signed certificate*: the trusted certificate stored in Orchard's configuration file
* *Controller with a publicly valid certificate*: the host's root CA set

### Worker
To make the Worker connect to the Controller, a Bootstrap Token needs to be obtained using the `orchard get bootstrap-token` command.

While this approach provides a less ad-hoc experience than the one you'd have with `orchard context create`, it allows one to mass-deploy Workers non-interactively, using tools such as Ansible.

The resulting Bootstrap Token will either include the Controller's certificate (when the current context points to a *Controller with a self-signed certificate*) or omit it (when the current context points to a *Controller with a publicly valid certificate*).

The way the Worker connects to the Controller using the `orchard worker run` command is as follows:

* when the Bootstrap Token contains the Controller's certificate:
    * the Orchard Worker will try to connect to the Controller with a trusted CA set containing only that certificate
* when the Bootstrap Token has no Controller's certificate:
    * the Orchard Worker will try the PKI approach (can be disabled with [`--no-pki`](#--no-pki-override) to effectively prevent the Worker from connecting) and will fail if certificate verification using PKI is not possible
### `--no-pki` override

If you only intend to access a *Controller with a self-signed certificate* and want to additionally guard yourself against [CA compromises](https://en.wikipedia.org/wiki/Certificate_authority#CA_compromise) and other PKI-specific attacks, pass the `--no-pki` command-line argument to the following commands:

* `orchard context create --no-pki`
    * this will prevent the Client from using PKI and will let you interactively verify the Controller's certificate fingerprint before connecting, thus creating a non-PKI association
* `orchard worker run --no-pki`
    * this will prevent the Worker from trying to use PKI when connecting to the Controller using a Bootstrap Token that has no certificate included in it, thus failing fast and letting you know that you need to create a proper Bootstrap Token

We've deliberately chosen not to use environment variables (e.g. `ORCHARD_NO_PKI`) because they fail silently (e.g. due to a typo), whereas command-line arguments produce an error that is much easier to detect.
docs/orchard/deploying-controller.md (new file)

## Introduction

Compared to the Worker, which needs to be deployed on a macOS machine, the Controller can be deployed on Linux too.

In fact, we've made a [container image](https://github.com/cirruslabs/orchard/pkgs/container/orchard) to ease deploying the Controller in container-native environments such as Kubernetes.

Orchard API is secured by default: all requests must be authenticated with the credentials of a service account. When you first run the Orchard Controller, you can specify `ORCHARD_BOOTSTRAP_ADMIN_TOKEN`, which will automatically create a service account named `bootstrap-admin` with all privileges. Let's first generate an `ORCHARD_BOOTSTRAP_ADMIN_TOKEN`:

```bash
export ORCHARD_BOOTSTRAP_ADMIN_TOKEN=$(openssl rand -hex 32)
```
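If you'd rather not depend on `openssl`, an equivalent token can be generated with Python's standard library (a sketch of an equivalent, not something Orchard requires):

```python
import secrets

# 32 random bytes, hex-encoded into a 64-character string,
# equivalent to `openssl rand -hex 32`
token = secrets.token_hex(32)
print(token)
```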
## Deployment Methods

While you can always run `orchard controller run` manually with the required arguments, this method of deploying the Controller is not recommended.

Instead, we've listed more persistent methods of deploying the Controller below.

Now you can run the Orchard Controller on a server of your choice. In the following sections you'll find several examples of how to run the Orchard Controller in various environments. Feel free to submit PRs with more examples.
### Google Compute Engine

The example below will deploy a single instance of the Orchard Controller in Google Compute Engine in the `us-central1` region.

First, let's create a static IP address for our instance:

```bash
gcloud compute addresses create orchard-ip --region=us-central1
export ORCHARD_IP=$(gcloud compute addresses describe orchard-ip --format='value(address)' --region=us-central1)
```

Once we have the IP address, we can create a new instance with the Orchard Controller running inside a container:

```bash
gcloud compute instances create-with-container orchard-controller \
  --machine-type=e2-micro \
  --zone=us-central1-a \
  --image-family cos-stable \
  --image-project cos-cloud \
  --tags=https-server \
  --address=$ORCHARD_IP \
  --container-image=ghcr.io/cirruslabs/orchard:latest \
  --container-env=PORT=443 \
  --container-env=ORCHARD_BOOTSTRAP_ADMIN_TOKEN=$ORCHARD_BOOTSTRAP_ADMIN_TOKEN \
  --container-mount-host-path=host-path=/home/orchard-data,mode=rw,mount-path=/data
```
Now you can create a new context for your local client:

```bash
orchard context create --name production \
  --service-account-name bootstrap-admin \
  --service-account-token $ORCHARD_BOOTSTRAP_ADMIN_TOKEN \
  https://$ORCHARD_IP:443
```

And select it as the default context:

```bash
orchard context default production
```
docs/orchard/deploying-workers.md (new file)

## Obtain a Bootstrap Token

First, create a service account with the minimal set of roles (`compute:read` and `compute:write`) required for proper Worker functioning:

```bash
orchard create service-account worker-pool-m1 --roles "compute:read" --roles "compute:write"
```

Then, generate a Bootstrap Token for this service account:

```shell
orchard get bootstrap-token worker-pool-m1
```

We will reference the value of the Bootstrap Token generated here as `${BOOTSTRAP_TOKEN}` below.

Further, we assume that the Orchard Controller is available at `orchard.example.com`.
## Deployment Methods

While you can always run `orchard worker run` manually with the required arguments, this method of deploying the Worker is not recommended.

Instead, we've listed more persistent methods of deploying the Worker below.
### launchd

[launchd](https://launchd.info/) is an init system for macOS that manages daemons, agents and other background processes.

In this deployment method, we'll create a new job definition file for launchd to manage on our behalf.

To begin, first install Orchard:

```shell
brew install cirruslabs/cli/orchard
```

Ensure that the following command:

```shell
which orchard
```

...yields `/opt/homebrew/bin/orchard`. If not, you'll need to replace all of the occurrences of `/opt/homebrew/bin/orchard` in the job definition below.

Then, create a launchd job definition in `/Library/LaunchDaemons/org.cirruslabs.orchard.worker.plist` with the following contents:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>org.cirruslabs.orchard.worker</string>
  <key>UserName</key>
  <string>admin</string>
  <key>Program</key>
  <string>/opt/homebrew/bin/orchard</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/bin/orchard</string>
    <string>worker</string>
    <string>run</string>
    <string>--bootstrap-token</string>
    <string>${BOOTSTRAP_TOKEN}</string>
    <string>orchard.example.com</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/bin:/usr/bin:/usr/local/bin:/opt/homebrew/bin</string>
  </dict>
  <key>WorkingDirectory</key>
  <string>/var/empty</string>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/Users/admin/orchard-launchd.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/admin/orchard-launchd.log</string>
</dict>
</plist>
```

This assumes that your macOS user on the host is named `admin`. If not, change all occurrences of `admin` in the job definition above to your actual username (the value of `$USER`).

Finally, change `orchard.example.com` to the FQDN or IP address of your Orchard Controller.
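Also note that launchd performs no shell expansion in `ProgramArguments`, so the `${BOOTSTRAP_TOKEN}` placeholder above must be replaced with the literal token value. A sketch of doing that with `sed` (the token value and the input line here are placeholders):

```shell
# Replace the ${BOOTSTRAP_TOKEN} placeholder with the literal token value;
# in practice you'd run this against the .plist file itself (e.g. with sed -i).
BOOTSTRAP_TOKEN="example-token-value"
echo '<string>${BOOTSTRAP_TOKEN}</string>' \
  | sed "s|\${BOOTSTRAP_TOKEN}|${BOOTSTRAP_TOKEN}|"
```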
Now, you can start the job:

```shell
launchctl load -w /Library/LaunchDaemons/org.cirruslabs.orchard.worker.plist
```
### Ansible

If you have a set of machines that you want to use as Orchard Workers, you can use [Ansible](https://docs.ansible.com/) to configure them.

We've created the [cirruslabs/ansible-orchard](https://github.com/cirruslabs/ansible-orchard) repository with a basic Ansible playbook for convenient setup.

To use it, clone it locally:

```shell
git clone https://github.com/cirruslabs/ansible-orchard.git
cd ansible-orchard/
```

Make sure that the Ansible Galaxy dependencies are installed:

```shell
ansible-galaxy install -r requirements.yml
```

Then, edit the `production-pool` file and populate the following fields:

* `hosts` — replace `worker-1.hosts.internal` with your Worker's FQDN or IP address and add more hosts if needed
* `ansible_user` — set it to the macOS user on the host for SSH to work
* `orchard_worker_user` — set it to the macOS user on the host under which the Worker will run, e.g. `admin`
* `orchard_worker_controller_url` — set it to the FQDN or IP address of your Orchard Controller, for example, `orchard.example.com`
* `orchard_worker_bootstrap_token` — set it to the `${BOOTSTRAP_TOKEN}` we've generated above

Deploy the playbook:

```shell
ansible-playbook --inventory-file production-pool --ask-pass playbook-workers.yml
```
docs/orchard/integration-guide.md (new file)

Orchard has a REST API that follows the [OpenAPI specification](https://swagger.io/specification/) and is described in [`api/openapi.yaml`](https://github.com/cirruslabs/orchard/blob/main/api/openapi.yaml).

You can run `orchard dev` locally and navigate to `http://127.0.0.1:6120/v1/` for interactive documentation.



## Using the API

Below you'll find examples of using the Orchard API via Python's vanilla `requests` library and via the Golang package that the Orchard CLI is built on top of.
### Authentication

When running in non-development mode, the Orchard API expects [basic access authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) credentials to be provided with each API call.

Below you'll find two snippets that retrieve the controller's information and output its version:
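Under the hood, basic access authentication is just an `Authorization: Basic` header carrying the Base64-encoded `name:token` pair; the `HTTPBasicAuth` helper used below produces exactly that. A quick illustration (the credentials are placeholders):

```python
import base64

# Basic access authentication encodes "name:token" in Base64
name, token = "service account name", "service account token"
credentials = base64.b64encode(f"{name}:{token}".encode()).decode()
print(f"Authorization: Basic {credentials}")
```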
#### Authentication in Python

```python
import requests
from requests.auth import HTTPBasicAuth


def main():
    # Authentication
    basic_auth = HTTPBasicAuth("service account name", "service account token")

    response = requests.get("http://127.0.0.1:6120/v1/info", auth=basic_auth)

    print(response.json()["version"])


if __name__ == '__main__':
    main()
```
#### Authentication in Golang

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/cirruslabs/orchard/pkg/client"
)

func main() {
	client, err := client.New()
	if err != nil {
		log.Fatalf("failed to initialize Orchard API client: %v", err)
	}

	controllerInfo, err := client.Controller().Info(context.Background())
	if err != nil {
		log.Fatalf("failed to retrieve controller's information: %v", err)
	}

	fmt.Println(controllerInfo.Version)
}
```

Note that we don't provide any credentials in the Golang version of the snippet: this is because Orchard's Golang API client (`github.com/cirruslabs/orchard/pkg/client`) is able to read the current user's Orchard context automatically.
### Creating a VM

A more intricate example is spinning up a VM with a startup script that outputs the date, reading its logs, and removing the VM from the controller:
#### Creating a VM in Python

```python
import time
import uuid

import requests
from requests.auth import HTTPBasicAuth


def main():
    vm_name = str(uuid.uuid4())

    basic_auth = HTTPBasicAuth("service account name", "service account token")

    # Create VM
    response = requests.post("http://127.0.0.1:6120/v1/vms", auth=basic_auth, json={
        "name": vm_name,
        "image": "ghcr.io/cirruslabs/macos-sonoma-base:latest",
        "cpu": 4,
        "memory": 4096,
        "startup_script": {
            "script_content": "date",
        },
    })
    response.raise_for_status()

    # Retrieve VM's logs
    while True:
        response = requests.get(f"http://127.0.0.1:6120/v1/vms/{vm_name}/events", auth=basic_auth)
        response.raise_for_status()

        result = response.json()

        if isinstance(result, list) and len(result) != 0:
            print(result[0]["payload"])
            break

        time.sleep(1)

    # Delete VM
    response = requests.delete(f"http://127.0.0.1:6120/v1/vms/{vm_name}", auth=basic_auth)
    response.raise_for_status()


if __name__ == '__main__':
    main()
```
#### Creating a VM in Golang

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/cirruslabs/orchard/pkg/client"
	v1 "github.com/cirruslabs/orchard/pkg/resource/v1"
	"github.com/google/uuid"
)

func main() {
	vmName := uuid.New().String()

	client, err := client.New()
	if err != nil {
		log.Fatalf("failed to initialize Orchard API client: %v", err)
	}

	// Create VM
	err = client.VMs().Create(context.Background(), &v1.VM{
		Meta: v1.Meta{
			Name: vmName,
		},
		Image:  "ghcr.io/cirruslabs/macos-sonoma-base:latest",
		CPU:    4,
		Memory: 4096,
		StartupScript: &v1.VMScript{
			ScriptContent: "date",
		},
	})
	if err != nil {
		log.Fatalf("failed to create VM: %v", err)
	}

	// Retrieve VM's logs
	for {
		vmLogs, err := client.VMs().Logs(context.Background(), vmName)
		if err != nil {
			log.Fatalf("failed to retrieve VM logs: %v", err)
		}

		if len(vmLogs) != 0 {
			fmt.Println(vmLogs[0])
			break
		}

		time.Sleep(time.Second)
	}

	// Delete VM
	if err := client.VMs().Delete(context.Background(), vmName); err != nil {
		log.Fatalf("failed to delete VM: %v", err)
	}
}
```
## Resource management

Some resources, such as `Worker` and `VM`, have a `resource` field, which is a dictionary that maps resource names to their amounts (the amount provided by a Worker or requested by a VM) and is useful for scheduling.

Well-known resources:

* `org.cirruslabs.tart-vms` — number of Tart VM slots available on the machine or requested by the VM
    * this number is `2` for Workers and `1` for VMs by default
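As an illustration of how such dictionaries can drive scheduling decisions, here is a sketch (not Orchard's actual scheduler) that checks whether a VM's requested resources fit into a Worker's spare capacity:

```python
def fits(provided: dict, used: dict, requested: dict) -> bool:
    """Check whether a VM's requested resources fit into a Worker's spare capacity."""
    for name, amount in requested.items():
        spare = provided.get(name, 0) - used.get(name, 0)
        if amount > spare:
            return False
    return True


worker_provides = {"org.cirruslabs.tart-vms": 2}  # the default for Workers
vm_requests = {"org.cirruslabs.tart-vms": 1}      # the default for VMs

print(fits(worker_provides, {}, vm_requests))                              # an idle Worker fits the VM
print(fits(worker_provides, {"org.cirruslabs.tart-vms": 2}, vm_requests))  # a fully loaded Worker does not
```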
docs/orchard/managing-cluster.md (new file)

## Backups

To back up the Orchard Controller, simply copy its `ORCHARD_HOME` directory (which defaults to `~/.orchard/`) somewhere safe and restore it when needed.

This directory contains the BadgerDB database.
## Upgrades

Since Orchard's initial release, we've managed to maintain backwards compatibility between versions up to this day, so generally it doesn't matter whether you upgrade the Controller or the Worker(s) first.

When new functionality is introduced, you might be required to finish upgrading both the Controller and the Worker(s) to be able to use it fully.

In case backwards-incompatible changes are introduced in the future, we will do our best to highlight this in the [release notes](https://github.com/cirruslabs/orchard/releases).
## Observability

Both the Controller and the Worker produce useful OpenTelemetry metrics. Metrics are scoped with the `org.cirruslabs.orchard` prefix and include information about resource utilization, statuses of Workers, scheduling/pull time and much more.

By default, the telemetry is sent to `https://localhost:4317` using the gRPC protocol and to `http://localhost:4318` using the HTTP protocol.

You can override this by setting the [standard OpenTelemetry environment variable](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/) `OTEL_EXPORTER_OTLP_ENDPOINT`.
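For example, to point the telemetry at a self-hosted collector (the hostname below is a placeholder):

```shell
# Send OTLP telemetry to a self-hosted collector instead of localhost
export OTEL_EXPORTER_OTLP_ENDPOINT=http://collector.internal:4317
```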
Please refer to the [OTEL Collector documentation](https://opentelemetry.io/docs/collector/) for instructions on how to set up a sidecar for metrics collection, or find out whether your SaaS monitoring solution has an available OTEL endpoint (see [Honeycomb](https://docs.honeycomb.io/send-data/opentelemetry/) as an example).

### Sending metrics to Google Cloud Platform

There are two standard options for ingesting metrics produced by the Orchard Controller and Workers into GCP:

* [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) + [Google Cloud Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/googlecloudexporter/README.md) — an open-source solution that can later be re-purposed to send metrics to any OTLP-compatible endpoint by swapping a single [exporter](https://opentelemetry.io/docs/collector/configuration/#exporters)
* [Ops Agent](https://cloud.google.com/monitoring/agent/ops-agent/otlp) — a Google-backed solution with a syntax similar to the OpenTelemetry Collector, but tied to GCP only
docs/orchard/quick-start.md (new file)

Tart is great for running workloads on a single machine, but what if you have more than one computer at your disposal and a couple of VMs is not enough anymore for your needs? This is where [Orchard](https://github.com/cirruslabs/orchard) comes into play!

It allows you to orchestrate multiple Tart-capable hosts from either the Orchard CLI (which we demonstrate below) or [through the API](/orchard/integration-guide).

The easiest way to start is to run Orchard in local development mode:

```shell
brew install cirruslabs/cli/orchard
orchard dev
```

This will run an Orchard Controller and an Orchard Worker in a single process on your local machine, allowing you to test both the CLI functionality and the API from a tool like cURL or a programming language of your choice, without the need to authenticate requests.

Note that in production deployments these two components are started separately and enable security by default. Please refer to [Deploying Controller](/orchard/deploying-controller) and [Deploying Workers](/orchard/deploying-workers) for more information.
## Creating Virtual Machines

Now, let's create a Virtual Machine:

```shell
orchard create vm --image ghcr.io/cirruslabs/macos-sonoma-base:latest sonoma-base
```

You can check the list of VM resources to see whether the Virtual Machine we've created above is already running:

```shell
orchard list vms
```
## Accessing Virtual Machines

Orchard has the ability to do port forwarding, which the `ssh` and `vnc` commands are built on top of. All port-forwarding connections are made via the Orchard Controller instance, which "proxies" a secure connection to the Orchard Workers.

Therefore, your Workers can be located behind a stricter firewall that only allows connections to the Orchard Controller instance. The Orchard Controller instance is secured by default, and all API calls are authenticated and authorized.
### SSH

To SSH into a VM, use the `orchard ssh` command:

```shell
orchard ssh vm sonoma-base
```

You can use the `--username` and `--password` flags to specify the username/password pair to use for the SSH protocol. By default, `admin`/`admin` is used.

You can also execute remote commands instead of spawning a login shell, similarly to how OpenSSH's `ssh` command accepts a command argument:

```shell
orchard ssh vm sonoma-base "uname -a"
```

You can execute scripts remotely this way too, by telling the remote command-line interpreter to read from the standard input and using the redirection operator as follows:

```shell
orchard ssh vm sonoma-base "bash -s" < script.sh
```
### VNC

Similarly to the `ssh` command, you can use the `vnc` command to open a Screen Sharing session into a remote VM:

```shell
orchard vnc vm sonoma-base
```

You can use the `--username` and `--password` flags to specify the username/password pair to use for the VNC protocol. By default, `admin`/`admin` is used.
## Deleting Virtual Machines

The following command will delete the VM we've created above and clean up the resources associated with it:

```shell
orchard delete vm sonoma-base
```
## Environment variables

In addition to controlling Orchard via CLI arguments, there are environment variables that may be beneficial both when automating Orchard and in daily use:

| Variable name | Description |
|---|---|
| `ORCHARD_HOME` | Override Orchard's home directory. Useful when running multiple Orchard instances on the same host and when testing. |
| `ORCHARD_LICENSE_TIER` | The default license limit only allows connecting 4 Orchard Workers to the Orchard Controller. If you've purchased a [Gold Tier License](/licensing/), set this variable to `gold` to increase the limit to 20 Orchard Workers. If you've purchased a [Platinum Tier License](/licensing/), set this variable to `platinum` to increase the limit to 200 Orchard Workers. |
| `ORCHARD_URL` | Override the Controller URL on a per-command basis. |
| `ORCHARD_SERVICE_ACCOUNT_NAME` | Override the service account name (used for Controller API auth) on a per-command basis. |
| `ORCHARD_SERVICE_ACCOUNT_TOKEN` | Override the service account token (used for Controller API auth) on a per-command basis. |
mkdocs.yml

@@ -91,13 +91,19 @@ nav:
   - "Home": index.md
   - "Quick Start": quick-start.md
   - "Integrations":
-      - "Self-hosted CI": integrations/cirrus-cli.md
-      - "GitHub Actions": https://cirrus-runners.app/
-      - "GitLab Runner": integrations/gitlab-runner.md
-      - "Buildkite": integrations/buildkite.md
-      - "Managing VMs": integrations/vm-management.md
+    - "Self-hosted CI": integrations/cirrus-cli.md
+    - "GitHub Actions": https://cirrus-runners.app/
+    - "GitLab Runner": integrations/gitlab-runner.md
+    - "Buildkite": integrations/buildkite.md
+    - "Managing VMs": integrations/vm-management.md
   - "Support & Licensing": licensing.md
-  - "Orchestration": https://github.com/cirruslabs/orchard
+  - "Orchestration":
+    - "Quick Start": orchard/quick-start.md
+    - "Architecture and Security": orchard/architecture-and-security.md
+    - "Deploying Controller": orchard/deploying-controller.md
+    - "Deploying Workers": orchard/deploying-workers.md
+    - "Managing the Cluster": orchard/managing-cluster.md
+    - "Integrating with the API": orchard/integration-guide.md
   - "FAQ": faq.md
   - "Legal":
     - 'Terms of Service': legal/terms.md