* Support Vetu virtualization on Linux in addition to Tart on macOS
* api(portForward): ensure that rendezvousConn is closed
* Re-try SSH connections in integration tests
Because a VM might still be booting.
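A minimal sketch of this kind of retry, assuming a hypothetical `dialWithRetries` helper (not the actual test code), using `golang.org/x/crypto/ssh`:

```go
package example

import (
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetries repeatedly attempts an SSH connection, because a freshly
// created VM might still be booting when the test first connects.
func dialWithRetries(addr string, config *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, config)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(5 * time.Second) // give the VM more time to boot
	}
	return nil, lastErr
}
```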
* Implement server-side filtering for VMs by worker
* Parse more than one filter but error out when more than one is provided
* Fix off-by-one
* No need to use "\n" in Debugf()
* Load testing: synthetic VMs, multiple worker support and Grafana k6 test
* echoserver: prevent fallthrough when Accept() fails
* Move default local-dev context logic to CreateDevController()
* Synthetic: add a random delay to startup script echoing
* Ability to set VM's power state and retrieve backing Tart VM's name
* Validate user-provided "powerState" field
* Introduce TestSpecUpdatePowerStateSuspend
* Introduce TestSpecUpdatePowerStateStopped
* OpenAPI specification: add note about suspended VMs to "tartName" desc.
* Sometimes we need to wait more than 30 seconds
* Simplify state reconciliation and support changing Softnet settings
* Remove unused "updateFunc" parameter from syncOnDiskVMs()
* Don't take the address of a loop variable
* ensure → ensures
* updateVMState(): don't forget to update VMState
* Introduce TestSpecUpdateSoftnet integration test
* Update OpenAPI specification to include generation/observedGeneration
* Work around Sequoia's "Local Network" permission with a helper process
* README.md: macOS 15 (Sequoia) warning
* Make "orchard dev" unix-specific too, otherwise Release fails
* Fix typo in "localNetworkHerlper"
* Slightly improve the macOS 15 (Sequoia) note
* orchard worker run: better documentation for --user
* Make sure privilege dropping is the first step we do in runWorker()
* Always randomize MAC address
* Worker: check DHCP lease time and print a warning if it's unconfigured
* Further improve the explanation
* Add two leases example to the explanation
* Add an example of the resulting /var/db/dhcpd_leases
* Startup script: implement retries for connection-related operations
* assert.Equal → assert.Contains
* Wait for at least 1,000 lines of logs
* Join slice of strings before calling assert.Contains()
* TestHostDirs: use require.Contains() instead of require.EqualValues()
* TestHostDirs: wait for at least 4 log lines
* Allow creating VMs with implicit CPU and memory
* Clarify why cpu/memory can be 0 a bit better
* Controller(API): don't forget to update DefaultCPU and DefaultMemory
* Add an integration test for implicit CPU and memory
* Introduce WebSocket-based RPC v2
* go test: add -ldflags="-B gobuildid"
* No need to change the "controller.workerNotifier.Notify()" error message
* No need to modify Protocol Buffers/gRPC generated code
* rpcWatch(): explain that the connection shouldn't normally be closed
* Avoid "port forwarding failed: " repetition in error messages
* Improve comments and avoid repetition in IP resolution errors
* Always Close() the Worker instance
* orchard list vms: show assigned worker for each of the VMs
* Stop the failed VMs before we schedule new VMs
To avoid violating resource constraints.
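A hypothetical sketch of this ordering (method names are illustrative, not the actual Orchard code): failed VMs are stopped first so their CPU and memory are released before new VMs are started.

```go
// syncVMs stops failed VMs before starting newly assigned ones, so the
// scheduler never starts a VM while resources it needs are still held
// by a VM that is about to be stopped.
func (worker *Worker) syncVMs() error {
	// 1. Stop failed VMs to free up their resources first.
	for _, vm := range worker.failedVMs() {
		if err := worker.stopVM(vm); err != nil {
			return err
		}
	}

	// 2. Only then start VMs that were newly assigned to this worker.
	for _, vm := range worker.pendingVMs() {
		if err := worker.startVM(vm); err != nil {
			return err
		}
	}

	return nil
}
```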
* syncOnDiskVMs: don't ignore running VMs
* Worker: show correct remote and local VM counts
* Implement restart policy for VMs
* Do not update VM.Resource, we only use it as a read-only specification
* Err()/setErr(): use atomic.Pointer instead of sync.Mutex
* Fail VMs if the worker had crashed/is unhealthy
* OnDiskName: properly handle cases when VM's name contains hyphens
* Worker: introduce Offline() method and check it before scheduling
* tart.List(): use Tart's JSON output
* OnDiskName: remove empty parts check
* Scheduler: move health-checking logic to a separate function
* Only fail "running" VMs
* Only fail orphaned VMs if they're in terminal state
* Integration tests
* Run healthCheckingLoopIteration() before schedulingLoopIteration()
* Worker: sync on-disk VMs only once at start
Previously, there were two main loops: a controller loop that assigns VMs to workers and a worker loop that starts the assigned VMs. Each loop ran on a fixed interval, once every N seconds.
This change introduces a mechanism for reactively requesting a loop execution (see the sketch below):
1. The controller loop is executed upon VM creation, to try to schedule the VM immediately.
2. The worker is notified upon a VM assignment, and the worker loop is requested to sync immediately.
Fixes #31
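A minimal sketch of such a "request an iteration now" mechanism (names are illustrative, not the actual Orchard code): a buffered channel of size 1 coalesces requests, and the loop wakes up on either the periodic ticker or an explicit request.

```go
package example

import "time"

// LoopTrigger lets callers request an immediate loop iteration.
type LoopTrigger struct {
	ch chan struct{}
}

func NewLoopTrigger() *LoopTrigger {
	return &LoopTrigger{ch: make(chan struct{}, 1)}
}

// Request never blocks; multiple requests issued before the loop
// runs collapse into a single iteration.
func (t *LoopTrigger) Request() {
	select {
	case t.ch <- struct{}{}:
	default:
	}
}

// runLoop keeps the old interval-based behavior as a fallback, but also
// reacts immediately when an iteration is requested (e.g. a VM was
// created on the controller or assigned to a worker).
func (t *LoopTrigger) runLoop(interval time.Duration, iteration func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C: // periodic fallback, as before
		case <-t.ch: // reactive request
		}
		iteration()
	}
}
```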