Starting with ARC v0.27.2, we changed the `docker.sock` path from `/var/run/docker.sock` to `/var/run/docker/docker.sock`. That ended up breaking some container-based actions, because the `docker.sock` path is hard-coded in various places.
Even `actions/runner` itself appears to use `/var/run/docker.sock` for building container-based actions and for service containers.
Anyway, this fixes that by moving the socket file back to the previous location.
Once this gets merged, users stuck at ARC v0.27.1 (those who previously upgraded to v0.27.2 or v0.27.3 and reverted back to v0.27.1 due to #2519) should be able to upgrade to the upcoming v0.27.4.
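For illustration, this is roughly how the socket location gets wired into the runner container; the constant, env var plumbing, and volume name below are assumptions for the example, not the exact ARC code:

```go
package main

import (
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
)

// Restored pre-v0.27.2 location that container-based actions (and
// actions/runner itself) expect to find.
const defaultDockerSockPath = "/var/run/docker.sock"

// dockerEnvAndMount sketches how the socket path is typically plumbed into
// a runner container: DOCKER_HOST points at the socket, and the directory
// holding it is shared with the dockerd sidecar via a volume mount.
func dockerEnvAndMount() (corev1.EnvVar, corev1.VolumeMount) {
	env := corev1.EnvVar{
		Name:  "DOCKER_HOST",
		Value: "unix://" + defaultDockerSockPath,
	}
	mount := corev1.VolumeMount{
		Name:      "var-run",
		MountPath: filepath.Dir(defaultDockerSockPath), // /var/run
	}
	return env, mount
}

func main() {
	env, mount := dockerEnvAndMount()
	fmt.Println(env.Value, mount.MountPath)
}
```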
Resolves #2519
Resolves #2538
#2490 has been happening since v0.27.2 for non-dind runners based on Ubuntu 20.04 runner images. It does not affect Ubuntu 22.04 non-dind runners (i.e. runners with dockerd sidecars) or Ubuntu 20.04/22.04 dind runners (i.e. runners without dockerd sidecars). However, presuming many folks are still using Ubuntu 20.04 runners and non-dind runners, we should fix it.
This change tries to fix it by defaulting to the docker group id 1001 used by Ubuntu 20.04 runners, and using gid 121 for Ubuntu 22.04 runners. We use the image tag to see which Ubuntu version the runner is based on. The algorithm is simple: we assume the runner is Ubuntu-22.04-based if the image tag contains "22.04".
This might be a breaking change for folks who have already upgraded to Ubuntu 22.04 runners using their own custom runner images. Note again: we rely on the image tag to detect Ubuntu 22.04 runner images and use the proper docker gid, so folks using our official Ubuntu 22.04 runner images are not affected. It is a breaking change anyway, so I have added a remedy:
ARC got a new flag, `--docker-gid`, which defaults to `1001` but can be set to `121` or whatever gid the operator/admin likes. For example, set `--docker-gid=121` if you are using your own custom runner image based on Ubuntu 22.04 and the image tag does not contain "22.04".
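A minimal sketch of the gid-defaulting logic described above, assuming a hypothetical `resolveDockerGID` helper (the real ARC code is structured differently, and the example image tags are just for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveDockerGID picks the docker group id for the runner pod.
// Ubuntu 22.04 runner images are detected purely by the image tag
// containing "22.04"; everything else falls back to the value of the
// --docker-gid flag, which defaults to "1001" (the Ubuntu 20.04 gid).
func resolveDockerGID(runnerImageTag, dockerGIDFlag string) string {
	if strings.Contains(runnerImageTag, "22.04") {
		return "121" // docker group id in the Ubuntu 22.04 runner image
	}
	return dockerGIDFlag
}

func main() {
	fmt.Println(resolveDockerGID("example/actions-runner:ubuntu-22.04", "1001")) // 121, via the tag
	fmt.Println(resolveDockerGID("example/custom-runner:latest", "121"))         // 121, via --docker-gid=121
	fmt.Println(resolveDockerGID("example/actions-runner:ubuntu-20.04", "1001")) // 1001, the default
}
```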
Fixes #2490
Co-authored-by: Cory Miller <cory-miller@github.com>
Co-authored-by: Nikola Jokic <nikola-jokic@github.com>
Co-authored-by: Ava Stancu <AvaStancu@github.com>
Co-authored-by: Ferenc Hammerl <fhammerl@github.com>
Co-authored-by: Francesco Renzi <rentziass@github.com>
Co-authored-by: Bassem Dghaidi <Link-@github.com>
* Update controller package names to match the owning API group name
* feedback.
Co-authored-by: Bassem Dghaidi <568794+Link-@users.noreply.github.com>
* Changed folder structure to allow multi-group registration (see the sketch after this list)
* included actions.github.com directory for resources and controllers
* updated go module to actions/actions-runner-controller
* publish arc packages under actions-runner-controller
* Update charts/actions-runner-controller/docs/UPGRADING.md
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
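For context, this is roughly what multi-group registration looks like in the manager's main.go; the import paths and group/version names below are assumptions for illustration, not necessarily the exact ones in this repo:

```go
package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"

	// One package per API group, each living under its own directory.
	actionsgithubv1alpha1 "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
	summerwindv1alpha1 "github.com/actions/actions-runner-controller/apis/actions.summerwind.net/v1alpha1"
)

var scheme = runtime.NewScheme()

func init() {
	// Register core Kubernetes types plus one AddToScheme call per API
	// group, so controllers for both groups can share a single manager.
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(summerwindv1alpha1.AddToScheme(scheme))
	utilruntime.Must(actionsgithubv1alpha1.AddToScheme(scheme))
}

func main() {
	// Manager and controller setup omitted for brevity.
}
```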
This fixes the aforementioned issue, which I found while running a series of E2E tests to verify other features and pull requests I have recently contributed.
Setting the SecurityContext.Privileged bit to false, which is the default, prevents GKE from admitting Windows pods; the Privileged bit is not supported on Windows.
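A minimal sketch of the idea, assuming a hypothetical helper function; the point is to leave the field unset rather than explicitly false:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// containerSecurityContext returns nil instead of an explicit
// Privileged: false, since the Privileged field is not supported on
// Windows and GKE refuses to admit Windows pods that set it.
func containerSecurityContext(privileged bool) *corev1.SecurityContext {
	if !privileged {
		return nil // leave the field unset, e.g. for Windows pods
	}
	p := true
	return &corev1.SecurityContext{Privileged: &p}
}

func main() {
	fmt.Println(containerSecurityContext(false) == nil) // true
}
```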
* 1770: update log format and add runID and id to workflow logs
* update tests, change log format for controllers.HorizontalRunnerAutoscalerGitHubWebhook
* use logging package
* remove unused modules
* add setup name to setuplog
* add flag to change log format
* change flag name to enableProdLogConfig
* move log opts to logger package (see the sketch after this commit list)
* remove empty else and reset timeEncoder
* update flag description
* use get function to handle nil
* rename flag and update logger function
* Update main.go
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
* Update controllers/horizontal_runner_autoscaler_webhook.go
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
* Update logging/logger.go
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
* copy log opt per each NewLogger call
* revert to use autoscaler.log
* update flag description and remove unused imports
* add logFormat to readme
* rename setupLog to logger
* make fmt
* Fix E2E along the way
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
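To make the flag wiring in the commit list above concrete, here is a minimal sketch of a logger constructor keyed on a log-format value, using controller-runtime's zap helpers; the constants, function name, and defaults are assumptions, not the exact code from this PR:

```go
package logging

import (
	"github.com/go-logr/logr"
	"go.uber.org/zap/zapcore"
	ctrlzap "sigs.k8s.io/controller-runtime/pkg/log/zap"
)

const (
	LogFormatText = "text" // development-friendly console output
	LogFormatJSON = "json" // production-style structured output
)

// NewLogger builds a logr.Logger for the requested format. Each call gets
// its own copy of the options, so callers cannot mutate shared state.
func NewLogger(logFormat string) logr.Logger {
	opts := ctrlzap.Options{
		// Development mode uses the console encoder; otherwise the
		// JSON encoder is used, which suits production log pipelines.
		Development: logFormat != LogFormatJSON,
	}
	if logFormat == LogFormatJSON {
		opts.TimeEncoder = zapcore.ISO8601TimeEncoder
	}
	return ctrlzap.New(ctrlzap.UseFlagOptions(&opts))
}
```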
so that ARC respects the registration timeout, terminationGracePeriodSeconds, and RUNNER_GRACEFUL_STOP_TIMEOUT (#1759) when the runner pod is terminated externally too early after its creation
While I was running E2E tests for #1759, I discovered a potential issue: ARC can terminate runner pods without waiting for the 10-minute registration timeout.
You won't be affected by this under normal circumstances, as the failure scenario can be triggered only when you or another K8s controller like cluster-autoscaler deletes the runner or the runner pod immediately after it has been created. But it's probably worth fixing anyway, because it's not impossible to trigger.
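A minimal sketch of the guard described here, with assumed names and a fixed 10-minute registration timeout; it illustrates the intent rather than ARC's actual reconciliation code:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const registrationTimeout = 10 * time.Minute

// shouldDeferPodDeletion reports whether a runner pod that was asked to
// terminate is still inside its registration window, and if so, how long
// the controller should requeue before actually deleting it.
func shouldDeferPodDeletion(pod *corev1.Pod, now time.Time) (bool, time.Duration) {
	age := now.Sub(pod.CreationTimestamp.Time)
	if age < registrationTimeout {
		return true, registrationTimeout - age
	}
	return false, 0
}

func main() {
	pod := &corev1.Pod{}
	pod.CreationTimestamp = metav1.Now() // pod created just now

	deferDeletion, wait := shouldDeferPodDeletion(pod, time.Now())
	fmt.Println(deferDeletion, wait) // true, roughly 10m
}
```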