* chore(deps): update quay.io/brancz/kube-rbac-proxy docker tag to v0.11.0
* chore(deps): update quay.io/brancz/kube-rbac-proxy make tag to v0.11.0
Co-authored-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: Callum Tait <15716903+toast-gear@users.noreply.github.com>
* docs: clean up of autoscaling section
* docs: clarifying anti-flapping
* docs: more improvements
* docs: more improvements
* docs: adding duration details and caveats
* docs: smaller english and better structure
* docs: use consistent wording
* docs: adding limitation caveat for RunnerSets
* docs: correct helm upgrade order
* docs: align helm upgrade command with help switch
* docs: use existing limitations section
* docs: fix table of contents and headers
* docs: add link to runnersets on first mention
* docs: adding runnerset limitation
* chore: use new enterprise permission for PAT
* docs: bump example deploy to latest version
* docs: adding oauth apps link
* docs: adding caveat to the oauth apps doc
* Revert "chore: support app ids as int or strings (#869)"
This reverts commit 0a3d2b686e.
* docs: adding some comments to the code
* docs: adding comment to the chart values
* Create optional serviceAnnotations value in helm chart
* update annotation key
* update annotation key - webhook service
* fix README.md
* docs: using consistent tense
* docs: making the code comments more generic
* feat: Have githubwebhook monitor a single namespace
When using `scope.singleNamespace: true` in Helm, the expected behaviour is
that the github webhook server behaves the same way as the controller.
The current behaviour is that the webhook server monitors all
namespaces.
* Changing the chart README.md to reflect the scope
The documentation now mentions that both the controller and the github
webhook server will make use of the `scope.watchNamespace` field if
`scope.singleNamespace` is set to `true`.
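As a values sketch (the field names follow the `scope` block described above; verify the exact layout against the chart README):
```
# values.yaml (sketch)
scope:
  # Restrict both the controller and the github webhook server
  # to watching a single namespace.
  singleNamespace: true
  # Namespace watched when singleNamespace is true.
  watchNamespace: my-runners
```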
Co-authored-by: Sebastien Le Digabel <sebastien.ledigabel@skyscanner.net>
This fixes the below error on installing the chart:
```
Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(MutatingWebhookConfiguration.webhooks[0]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook, ValidationError(MutatingWebhookConfiguration.webhooks[1]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook, ValidationError(MutatingWebhookConfiguration.webhooks[2]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook, ValidationError(MutatingWebhookConfiguration.webhooks[3]): missing required field "admissionReviewVersions" in io.k8s.api.admissionregistration.v1.MutatingWebhook]
```
Ref #144
This adds support for two upcoming enhancements on the GitHub side for self-hosted runners: ephemeral runners and `workflow_job` events. You can't use these yet.
**These features are not yet generally available to all GitHub users**. Please take this pull request as a preparation to make it available to actions-runner-controller users as soon as possible after GitHub released the necessary features on their end.
**Ephemeral runners**:
The former, ephemeral runners, is basically the reliable alternative to `--once`, which we've been using whenever you enabled `ephemeral: true` (the default in actions-runner-controller).
`--once` has been suffering from a race issue #466. `--ephemeral` fixes that.
To enable ephemeral runners with `actions/runner`, you pass `--ephemeral` to `config.sh`. This updated version of `actions-runner-controller` does it for you, using `--ephemeral` instead of `--once` when you set `RUNNER_FEATURE_FLAG_EPHEMERAL=true`.
Please read the section `Ephemeral Runners` in the updated version of our README for more information.
Note that ephemeral runners are not released on GitHub yet, and `RUNNER_FEATURE_FLAG_EPHEMERAL=true` won't work at all until the feature gets released on GitHub. Stay tuned for an announcement from GitHub!
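For illustration, opting a RunnerDeployment into the flag might look like the sketch below; the `env` plumbing and the repository name are assumptions, and the README section above is authoritative:
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  template:
    spec:
      repository: example/myrepo   # placeholder
      env:
      # Assumed wiring: tells the controller to pass --ephemeral
      # instead of --once to config.sh. Has no effect until GitHub
      # releases the feature.
      - name: RUNNER_FEATURE_FLAG_EPHEMERAL
        value: "true"
```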
**`workflow_job` events**:
`workflow_job` is an additional webhook event that corresponds to each GitHub Actions workflow job run. It gives `actions-runner-controller` a solid foundation to improve our webhook-based autoscaling.
Formerly, we've been exploiting webhook events like `check_run` for autoscaling. However, as none of our supported events included `labels`, you had to configure an HRA to only match relevant `check_run` events. It wasn't trivial.
In contrast, a `workflow_job` event payload contains the `labels` of the requested runners. `actions-runner-controller` can automatically decide which HRA to scale by filtering the corresponding RunnerDeployment by the `labels` included in the webhook payload. So all you need for webhook-based autoscaling is to enable `workflow_job` events on GitHub and expose actions-runner-controller's webhook server to the internet.
Note that the current implementation of `workflow_job` support works in two ways: increment and decrement. An increment happens when the webhook server receives a `workflow_job` with `queued` status. A decrement happens when it receives a `workflow_job` with `completed` status. The latter makes scaling down faster so that you waste less money than before. You still don't suffer from flapping, as a scale-down is still subject to `scaleDownDelaySecondsAfterScaleOut`.
Please read the section "Example 3: Scale on each `workflow_job` event" in the updated version of our README for more information on its usage.
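As a sketch, an HRA driven by `workflow_job` events could look like the following; the `workflowJob: {}` trigger shape and all names here are assumptions, so consult the README example above for the authoritative form:
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra
spec:
  scaleTargetRef:
    name: example-runnerdeploy
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
  - githubEvent:
      # Increment on `queued`, decrement on `completed` (see above).
      workflowJob: {}
    amount: 1
    duration: "5m"
```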
* Adding a default docker registry mirror
This change allows the controller to start with a specified default
docker registry mirror, avoiding the need to specify it in every runner*
object.
The change is backward compatible: if a runner has a docker registry
mirror specified, it supersedes the default one.
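A sketch of how the default might be wired into the controller's Deployment; the `--docker-registry-mirror` flag name and the mirror URL are assumptions, so check the controller's `--help` output:
```
# manager Deployment snippet (sketch)
spec:
  containers:
  - name: manager
    args:
    # Assumed flag: default mirror applied to every runner that
    # doesn't specify its own.
    - --docker-registry-mirror=https://mirror.gcr.io/
```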
Add `shortNames` to the kube api-resource CRDs. Short names make it easier to interact with and troubleshoot api-resources with kubectl.
We have tried to follow a naming convention similar to what K8s uses, which should also help avoid naming conflicts. For example:
* `Deployment` has a shortName of `deploy`, so we added `rdeploy` for `RunnerDeployment`
* `HorizontalPodAutoscaler` has a shortName of `hpa`, so we added `hra` for `HorizontalRunnerAutoscaler`
* `ReplicaSet` has a shortName of `rs`, so we added `rrs` for `RunnerReplicaSet`
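In CRD terms the change amounts to a `shortNames` entry in each resource's `names` block, along these lines (a sketch, not the verbatim manifest):
```
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: runnerdeployments.actions.summerwind.dev
spec:
  group: actions.summerwind.dev
  scope: Namespaced
  names:
    kind: RunnerDeployment
    plural: runnerdeployments
    singular: runnerdeployment
    shortNames:
    - rdeploy   # enables `kubectl get rdeploy`
  versions: []  # elided; the version schemas are unchanged by this feature
```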
Co-authored-by: abhinav454 <43758739+abhinav454@users.noreply.github.com>
Improves #664 by adding annotations to the server's service. Beyond general applications, I use these annotations within my own projects to configure various LB values.
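A values sketch; the key name follows the commit title above and the annotation itself is only an example, so verify the exact key against the chart's values:
```
# values.yaml (sketch)
serviceAnnotations:
  # Example LB tuning via a cloud provider annotation.
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```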
* Optional override of runner image in chart
This commit adds the option to override the actions runner image. This
allows running the controller in environments where access to Docker Hub
is restricted.
It uses the parameter [--runner-image](https://github.com/actions-runner-controller/actions-runner-controller/blob/master/main.go#L89) from the controller.
The default value is set as a constant
[here](acb906164b/main.go (L40)).
The default value for the chart is the same.
* Fixing actionsRunner name
... to actionsRunnerRepositoryAndTag for consistency.
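For example, in a registry-restricted environment the override might look like this (the registry path is a placeholder):
```
# values.yaml
image:
  actionsRunnerRepositoryAndTag: "registry.example.com/mirror/actions-runner:latest"
```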
* Bumping chart to v0.12.5
There's no reason to create a non-working secret by default. If someone wants to deploy the secrets via the chart, they will need to do some config regardless, so they might as well also set the create flag.
`HRA.Spec.ScaleTargetRef.Kind` is added to denote that the scale-target is a RunnerSet.
It defaults to `RunnerDeployment` for backward compatibility.
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: myhra
spec:
  scaleTargetRef:
    kind: RunnerSet
    name: myrunnerset
```
Ref #629
Ref #613
Ref #612
This option exposes some `KUBERNETES_*` environment variables internally,
which prevents the runner from using KinD (Kubernetes in Docker), since KinD
will try to connect to the Kubernetes cluster where the runner itself is running.
This option is set to `true` by default in any Kubernetes deployment.
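The described behaviour matches the pod-level `enableServiceLinks` field; disabling it on a runner might look like the sketch below, where the field name on the Runner spec is an assumption:
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: kind-capable-runner
spec:
  repository: example/myrepo   # placeholder
  # Assumed field: stops Kubernetes from injecting KUBERNETES_*
  # service env vars that make KinD target the host cluster.
  enableServiceLinks: false
```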
Signed-off-by: Jonathan Gonzalez V <jonathan.gonzalez@enterprisedb.com>
* feat: RunnerSet backed by StatefulSet
Unlike a runner deployment, a runner set can manage a set of stateful runners by combining a statefulset and an admission webhook that mutates statefulset-managed pods with the required envvars and registration tokens. See the sketch after the change list below.
Resolves #613
Ref #612
* Upgrade controller-runtime to 0.9.0
* Bump Go to 1.16.x following controller-runtime 0.9.0
* Upgrade kubebuilder to 2.3.2 for updated etcd and apiserver following local setup
* Fix startup failure due to missing LeaderElectionID
* Fix the issue that any pods become unable to start once actions-runner-controller failed after the mutating webhook has been registered
* Allow force-updating statefulset
* Fix runner container missing work and certs-client volume mounts and DOCKER_HOST and DOCKER_TLS_VERIFY envvars when dockerdWithinRunner=false
* Fix runnerset-controller not applying statefulset.spec.template.spec changes when there were no changes in runnerset spec
* Enable running acceptance tests against arbitrary kind cluster
* RunnerSet supports non-ephemeral runners only today
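A minimal RunnerSet sketch, assuming the spec mirrors a StatefulSet's (selector, serviceName, template) with runner fields such as `repository` on top:
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerSet
metadata:
  name: example-runnerset
spec:
  replicas: 2
  repository: example/myrepo   # placeholder
  # StatefulSet-style fields (assumed to be required, as with StatefulSet):
  selector:
    matchLabels:
      app: example-runnerset
  serviceName: example-runnerset
  template:
    metadata:
      labels:
        app: example-runnerset
```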
* fix: docker-build from root Makefile on intel mac
* fix: arch check fixes for mac and ARM
* ci: aligning test data format and patching checks
* fix: removing namespace in test data
* chore: adding more ignores
* chore: removing leading space in shebang
* Re-add metrics to org hra testdata
* Bump cert-manager to v1.1.1 and fix deploy.sh
Co-authored-by: toast-gear <15716903+toast-gear@users.noreply.github.com>
Co-authored-by: Callum James Tait <callum.tait@photobox.com>
This allows using the `runtimeClassName` directive in the runner's spec.
One of the use cases for this is Kata Containers, which uses `runtimeClassName` in a pod spec to indicate that the pod should run inside a Kata container. This gives us a greater degree of pod isolation.
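For example (a sketch; `kata` assumes a RuntimeClass of that name exists on the cluster):
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: kata-runner
spec:
  repository: example/myrepo   # placeholder
  # Run the runner pod under the Kata Containers runtime.
  runtimeClassName: kata
```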
* docs: adding upgrade notes for Helm
* chore: adding new ignore
* docs: add in cmd to check for stuck runners
* docs: better format
* docs: removing superfluous steps
* docs: moved location of docs
Co-authored-by: Callum James Tait <callum.tait@photobox.com>
Adds a column to help the operator see whether they configured HRA.Spec.ScheduledOverrides correctly, in the form of a "next override schedule recognized by the controller":
```
$ k get horizontalrunnerautoscaler
NAME                            MIN   MAX   DESIRED   SCHEDULE
actions-runner-aos-autoscaler   0     5     0
org                             0     5     0         min=0 time=2021-05-21 15:00:00 +0000 UTC
```
Ref https://github.com/actions-runner-controller/actions-runner-controller/issues/484
This fixes the human-readable output of `kubectl get` on `runnerdeployment`, `runnerreplicaset`, and `runner`.
Most notably, CURRENT and READY of runner replicasets are now computed and printed correctly. Runner deployments now have UP-TO-DATE and AVAILABLE instead of READY, so that they are consistent with the columns of K8s deployments.
A few fixes have also been made to the runner deployment and runner replicaset controllers so that the numbers stored in Status objects are reliably updated and in sync with the actual values.
Finally, `AGE` columns are added to runnerdeployment, runnerreplicaset, and runner to make that more visible to users.
`kubectl get` outputs should now look like the below examples:
```
# Immediately after the runnerdeployment is updated/created
$ k get runnerdeployment
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-runnerdeploy   0         0         0            0           8d
org-runnerdeploy       5         5         5            0           8d

# A few dozen seconds after the update/create, all the runners are registered and the "available" numbers increase
$ k get runnerdeployment
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-runnerdeploy   0         0         0            0           8d
org-runnerdeploy       5         5         5            5           8d
```
```
$ k get runnerreplicaset
NAME                         DESIRED   CURRENT   READY   AGE
example-runnerdeploy-wnpf6   0         0         0       61m
org-runnerdeploy-fsnmr       2         2         0       8m41s
```
```
$ k get runner
NAME                                           ENTERPRISE   ORGANIZATION                REPOSITORY                                        LABELS                      STATUS    AGE
example-runnerdeploy-wnpf6-registration-only                                            actions-runner-controller/mumoshu-actions-test                               Running   61m
org-runnerdeploy-fsnmr-n8kkx                                actions-runner-controller                                                     ["mylabel 1","mylabel 2"]             21s
org-runnerdeploy-fsnmr-sq6m8                                actions-runner-controller                                                     ["mylabel 1","mylabel 2"]             21s
```
Fixes #490
This adds the initial version of ScheduledOverrides to HorizontalRunnerAutoscaler.
`MinReplicas` overriding should just work.
When there are two or more ScheduledOverrides, the earliest one that matches is activated. Each ScheduledOverride can be recurring or one-time. If you have two or more ScheduledOverrides, only one of them should be one-time, and the one-time override should be the earliest item in the list to make sense.
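A sketch of the shape this takes in an HRA spec, with the field names assumed from the description above: one one-time override listed first, then a recurring weekly one:
```
spec:
  minReplicas: 1
  maxReplicas: 10
  scheduledOverrides:
  # One-time override; the earliest matching entry wins.
  - startTime: "2021-05-01T00:00:00+09:00"
    endTime: "2021-05-03T00:00:00+09:00"
    minReplicas: 0
  # Recurring override, repeating weekly.
  - startTime: "2021-05-08T00:00:00+09:00"
    endTime: "2021-05-09T00:00:00+09:00"
    recurrenceRule:
      frequency: Weekly
    minReplicas: 0
```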
Tests will be added in another commit. Logging improvements and additional observability in HRA.Status will also be added in further commits.
Ref #484
- Adds `ephemeral` option to `runner.spec`
```
....
  template:
    spec:
      ephemeral: false
      repository: mumoshu/actions-runner-controller-ci
....
```
- `ephemeral` defaults to `true`
- `entrypoint.sh` in runner/Dockerfile modified to read `RUNNER_EPHEMERAL` flag
- Runner images are backward-compatible. `--once` is omitted only when the new envvar `RUNNER_EPHEMERAL` is explicitly set to `false`.
Resolves #457
Changes:
- Switched to use `jq` in startup.sh
- Enable docker registry mirror configuration, which is useful e.g. for avoiding Docker Hub rate limiting
Check #478 for how this feature is tested and supposed to be used.
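A sketch of the per-runner configuration; the `dockerRegistryMirror` field name and the mirror URL are assumptions, and #478 shows the tested usage:
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: mirrored-runnerdeploy
spec:
  template:
    spec:
      repository: example/myrepo   # placeholder
      # Assumed field: points dockerd inside the runner pod at a
      # pull-through mirror to avoid Docker Hub rate limits.
      dockerRegistryMirror: https://mirror.gcr.io/
```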
* chore: adding Helm app version back
* chore: removing redundant values entry
* chore: bumping to newer version
* chore: bumping app version to latest
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
Enable the user to set a size limit on the runner's volume, to prevent a runner pod from affecting other resources in the same cluster.
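A sketch, assuming the limit surfaces as a `volumeSizeLimit`-style field on the runner spec:
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: bounded-runner
spec:
  repository: example/myrepo   # placeholder
  # Assumed field: caps the runner's work volume (e.g. an emptyDir
  # sizeLimit) so one runner cannot exhaust disk shared with other pods.
  volumeSizeLimit: 10Gi
```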
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
We occasionally encountered those errors while the underlying RunnerReplicaSet was being recreated/replaced on a RunnerDeployment.Spec.Template update. It turned out to be because the RunnerDeployment controller was waiting for the runner pod to become `Running`, instead of waiting for the new replacement runner to have registered to GitHub. This fixes that by setting Runner.Status.Phase to `Running` only after the runner in the runner pod appears to be registered.
A side effect of this change is that the runner controller will call the "ListRunners" GitHub Actions API more often. I've reviewed and improved the runner controller code and the Runner CRD to keep the number of calls to a minimum. In most cases, ListRunners should be called only twice for each runner creation.
This allows you to trigger autoscaling depending on check_run names (i.e. Actions job names). If you want to differentiate the scale amount per job, or want to scale only on a specific job, try this.
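A sketch of scoping a webhook trigger to specific job names; the placement of the `names` filter under `checkRun` is an assumption based on the description:
```
spec:
  scaleTargetRef:
    name: example-runnerdeploy
  scaleUpTriggers:
  - githubEvent:
      checkRun:
        types: ["created"]
        status: "queued"
        # Assumed filter: only scale when these job (check_run) names fire.
        names: ["build", "integration-test"]
    amount: 1
    duration: "5m"
```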
* add replicaCount
* Add authSecret.existingSecret
* set image.tag null by default
* implement ingress for githubwebhook server
* fix deprecated and secretName template
* backward compat .authSecret.enabled
* existingSecret for github webhook secret
* use secretName template
* set default secret names
* do not use app version based image tag
* create and name variable for secrets
* as pointed out in #281, the currently used image for the
kube-rbac-proxy - gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1 - does not
have an ARM64 image
* hence, trying to use the standard deployment manifest / helm chart will
fail on ARM64 systems
* replaced image with quay.io/brancz/kube-rbac-proxy:v0.8.0 which is the
latest version from the upstream maintainer
(https://github.com/brancz/kube-rbac-proxy/blob/master/Makefile#L13)
* successfully tested on both AMD64 and ARM64 clusters
* fixes #281
* feat: HorizontalRunnerAutoscaler Webhook server
This introduces a webhook server that responds to GitHub `check_run`, `pull_request`, and `push` events by scaling up the matched HorizontalRunnerAutoscaler by 1 replica. This allows you to immediately add "resource slack" for future GitHub Actions job runs, without waiting for the next sync period to add the missing runners.
This feature is highly inspired by https://github.com/philips-labs/terraform-aws-github-runner. terraform-aws-github-runner can manage one set of runners per deployment, whereas actions-runner-controller with this feature can manage as many sets of runners as you declare with HorizontalRunnerAutoscaler and RunnerDeployment pairs.
On each GitHub event received, the webhook server queries repository-wide and organizational runners from the cluster and searches for a single target to scale up. The webhook server tries to match HorizontalRunnerAutoscaler.Spec.ScaleUpTriggers[].GitHubEvent.[CheckRun|Push|PullRequest] against the event, and if it finds exactly one HRA, that is the scale target. If no target, or two or more targets, are found among repository-wide runners, it does the same on organizational runners.
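As a sketch, declaring a scale-up trigger on `push` events might look like this, with the trigger shape assumed from the field path quoted above:
```
spec:
  scaleTargetRef:
    name: example-runnerdeploy
  minReplicas: 1
  maxReplicas: 10
  scaleUpTriggers:
  - githubEvent:
      # checkRun and pullRequest triggers are also supported per the
      # description above.
      push: {}
    amount: 1
    duration: "10m"
```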
Changes:
* Fix integration test
* Update manifests
* chart: Add support for github webhook server
* dockerfile: Include github-webhook-server binary
* Do not import unversioned go-github
* Update README
* feat/helm: Bump appVersion to 0.6.1 release
* Also bump chart version to trigger a new chart release
Co-authored-by: Yusuke Kuoka <c-ykuoka@zlab.co.jp>
* Add chart workflows (#1)
* Add chart workflows
* Fix publishing step in CI
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
* Update CI on push-to-master (#3)
* Put helm installation step in the correct CI job
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
* Put helm installation step in the correct CI job (#4)
* Update on-push-master-publish-chart.yml
* Remove references to certmanager dependency
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
* Add ability to customize kube-rbac-proxy image
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
* Only install cert-manager if we're going to spin up KinD
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
* feat: adding manager secret to Helm
* fix: correcting secret data format
* feat: adding in common labels
* fix: updating default values to have config
The auth config needs to be commented out by default as we don't want to deploy both configs empty. This may break stuff, so we want the user to actively uncomment the auth method they want instead
* chore: updating default format of cert
* chore: wording
Adds a dockerEnabled option for users who do not need Docker and do not want to run a privileged container.
If `dockerEnabled: false`, the dind container does not run and there is no privileged container.
Does the same as the closed #96.
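A sketch of the resulting runner spec, assuming the option is a plain boolean there:
```
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: no-docker-runner
spec:
  repository: example/myrepo   # placeholder
  # Assumed field: skips the privileged dind sidecar entirely.
  dockerEnabled: false
```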