* added containerMode=kubernetes env variables to the runner
* removed unused logging
* restored configs and charts
* restored makefile cert version and acceptance/run
* added workVolumeClaimTemplate in pod definition, including logic
* added claim template name based on the runner
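For context, a workVolumeClaimTemplate in kubernetes container mode could look roughly like the RunnerDeployment snippet below; this is a hedged sketch, and the repository, storage class, and size are illustrative rather than taken from the change itself:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: k8s-mode-runner
spec:
  template:
    spec:
      repository: example/repo              # illustrative repository
      containerMode: kubernetes             # run job containers via the k8s container hooks instead of docker
      workVolumeClaimTemplate:              # per-runner work volume; made mandatory for kubernetes mode below
        storageClassName: my-storage-class  # illustrative storage class name
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```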
* Apply suggestions from code review
update errors
* added concurrent cleanup before runner pod is deleted
* update manifests
* added retry after 30s if pod cleanup contains err
* added admission webhook check, made workVolumeClaimTemplate mandatory for k8s
* style changes and added comments
* added IsZero timestamp check for deleting runner-linked pods
* changed order of local variable to avoid copy if p is deleted
* removed docker from container mode k8s
* restored charts, config, makefile
* restored forked files back and not the ARC ones
* created PersistentVolume on containerMode k8s
* create pv only if storage class name is local-storage
* removed actions if storage class name is local-storage
* added service account validation if container mode kubernetes
* changed the coding style to match rest of the ARC
* added validation to the runnerdeployment webhook
* specified fields more precisely, added webhook validation to the replicaset as well
* remake manifests
* wrapped delete runner-linked-pods in kube mode
* fixed empty line
* fixed import
* makefile changes for hooks
* added cleanup secrets
* create manifests
* docs
* update access modes
* update dockerfile
* nit changes
* fixed dockerfile
* rewrite allowing reuse for runners and runnersets
* staged the forgotten deepcopy changes
* changed privileged
* make manifests
* partly moved to finalizer, still need to apply finalizer first
* finalizer added if env variable used in container mode exists
* bump runner version
* error message moved from Error to Info on cleanup pods/secrets
* removed useless dereferencing, added transformation tests of workVolumeClaimTemplate
* Apply suggestions from code review
* Update controllers/utils_test.go
Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
* Update controllers/utils_test.go
Co-authored-by: Thomas Boop <52323235+thboop@users.noreply.github.com>
* add hook version to cli, update to 0.1.2
* Apply suggestions from code review
* Update controllers/utils_test.go
* Update runner/Makefile
* Fix missing secret permission and the error handling
* Fix a runnerpod reconciler finalizer to not trigger unnecessary retry
Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
Co-authored-by: Nikola Jokic <97525037+nikola-jokic@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
* Fix example manifests for webhook based scaling
I tried running these on my k8s cluster and I got some easy to fix errors, so I am committing them here.
* Fix example manifests for webhook autoscaling with workflow_jobs
* Fix the explanation on how to set up webhooks on your cluster
* Replace unclear comment with actual code examples
There was a comment instructing users to add minReplicas and
maxReplicas to all the HRA yamls, so I just removed it and added
these attributes to the yamls themselves for clarity.
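As a rough illustration of what those inlined attributes look like on a webhook-scaled HRA (resource names and numbers are illustrative, not copied from the examples):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra
spec:
  scaleTargetRef:
    name: example-runner-deployment   # illustrative RunnerDeployment to scale
  minReplicas: 1                      # previously only hinted at in a comment
  maxReplicas: 10
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}                 # scale up on workflow_job webhook events
    duration: "30m"
```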
* Make clear that using the ingress example is just a suggestion
* Apply some text improvements suggested by @mumoshu
* Update examples so the webhook server is exposed on a NodePort
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
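A NodePort exposure of the webhook server could look roughly like the Service below; this is a hedged sketch, and the name, ports, and selector are illustrative rather than the exact manifest from the examples:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: github-webhook-server                        # illustrative name
spec:
  type: NodePort                                     # reachable from outside the cluster without an Ingress
  selector:
    app.kubernetes.io/name: github-webhook-server    # illustrative selector
  ports:
  - port: 80
    targetPort: http                                 # the webhook server's HTTP port
    nodePort: 30080                                  # illustrative node port
```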
* Remove an unnecessary field from one the examples
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
* Apply suggestion from @mumoshu
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
* Remove namespace fields from webhook autoscaler examples
This change was suggested by @mumoshu
* Apply final suggestion from @mumoshu
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
* doc: Use RunnerSet to retain various cache
In relation to #1286 and as a follow-up for #1340
* docs: clarify client vs daemon
* docs: better wording
* Separate RunnerSet examples for docker image layer caching
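The image layer caching examples revolve around keeping /var/lib/docker on a persistent volume per RunnerSet replica; a trimmed, illustrative sketch follows (field values are assumptions, not the exact example added here):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
  name: example-runnerset
spec:
  ephemeral: true
  repository: example/repo               # illustrative
  dockerdWithinRunnerContainer: true     # dockerd shares the runner container so the cache volume is reused
  selector:
    matchLabels:
      app: example-runnerset
  serviceName: example-runnerset
  template:
    metadata:
      labels:
        app: example-runnerset
    spec:
      containers:
      - name: runner
        volumeMounts:
        - name: var-lib-docker
          mountPath: /var/lib/docker     # image layer cache survives pod recreation
  volumeClaimTemplates:
  - metadata:
      name: var-lib-docker
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: local-path       # illustrative storage class
```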
* Revert changes to testdata as it is going to be added via #1471 instead
* Update README.md
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
* fixup! Update README.md
* Remove the outdated RunnerSet limitation
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
* runner: Remove the ability to use the deprecated `--once` flag
Ref #1196
* runner: Ability to opt-out of using --ephemeral
Although we are going to eventually remove the ability to use the legacy --once flag as proposed in #1196, there might be folks still using legacy GHES versions 3.2 or earlier.
This commit removes the existing feature flag for opting in to --ephemeral, while adding another feature flag, RUNNER_FEATURE_FLAG_ONCE, for opting in to --once so that folks stuck on legacy GHES versions can still use ARC.
Since this change, every user gets --ephemeral by default. If they see any issues on a legacy GHES instance, they can set RUNNER_FEATURE_FLAG_ONCE=true to keep using --once, which buys them one more ARC release to upgrade their GHES instance.
But beware, we won't support legacy GHES instances forever as it's going to be a maintenance nightmare. Please upgrade!
Ref #1196
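For anyone who does need to opt back in to --once on a legacy GHES, the flag is an environment variable on the runner; a minimal sketch with an illustrative RunnerDeployment name and repository:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: legacy-ghes-runner
spec:
  template:
    spec:
      repository: example/repo           # illustrative
      env:
      - name: RUNNER_FEATURE_FLAG_ONCE   # opt back in to the deprecated --once flag
        value: "true"
```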
It's probably worth highlighting that it's version 0.X.X by design and that breaking changes are possible until we move it to 1.0.0.
Co-authored-by: toast-gear <toast-gear@users.noreply.github.com>
The unregistration timeout of 1 minute (no matter how long it is) can negatively impact the availability of a static runner that constantly runs workflow jobs, and of an ephemeral runner that runs a long-running job.
We deal with that by completely removing the unregistration timeout, so that regardless of the type of runner (static or ephemeral) it waits forever until it is successfully unregistered before being terminated.
This will work on GHES but not on GitHub Enterprise Cloud, due to the excessive number of GitHub API calls required.
More work is needed, such as adding a cache layer to the GitHub client, to make it usable on GitHub Enterprise Cloud.
Fixes additional cases from https://github.com/actions-runner-controller/actions-runner-controller/pull/1012
If GitHub auth is provided in the webhooks controller, runner groups with custom visibility are supported. Otherwise, all runner groups are assumed to be visible to all repositories.
`getScaleUpTargetWithFunction()` checks whether there is an HRA available using the following flow (an illustrative example follows the list):
1. Search for **repository** HRAs - if one is found, the search ends here
2. Get available HRAs in k8s
3. Compute visible runner groups
a. If GitHub auth is provided - get all the runner groups that are visible to the repository of the incoming webhook using GitHub API calls.
b. If GitHub auth is not provided - assume all runner groups are visible to all repositories
4. Search for **default organization** runners (i.e. runners from the organization's visible default runner group) with matching labels
5. Search for **default enterprise** runners (i.e. runners from the enterprise's visible default runner group) with matching labels
6. Search for **custom organization runner groups** with matching labels
7. Search for **custom enterprise runner groups** with matching labels
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
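To make steps 6 and 7 more concrete, a custom organization runner group scenario could look roughly like the pair below, where the group and labels are what get matched against the incoming workflow_job event (all names are illustrative):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: org-custom-group-runner
spec:
  template:
    spec:
      organization: example-org          # illustrative organization
      group: custom-group                # a non-default runner group, possibly with selected visibility
      labels:
      - custom-label                     # matched against the job's runs-on labels
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: org-custom-group-runner-hra
spec:
  scaleTargetRef:
    name: org-custom-group-runner
  minReplicas: 0
  maxReplicas: 5
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}
    duration: "30m"
```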
* Add env variable to configure the `disableupdate` flag
* Write test for entrypoint disable update
* Rename flag, update docs for DISABLE_RUNNER_UPDATE
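A sketch of how the renamed variable would be set on a runner, assuming the usual env plumbing on the runner spec (fragment only, names illustrative):

```yaml
# illustrative fragment of a RunnerDeployment runner spec
spec:
  template:
    spec:
      env:
      - name: DISABLE_RUNNER_UPDATE   # maps to the runner's --disableupdate flag
        value: "true"
```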
* chore: bump runner version in makefile
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>