fix: add health check to detect and recover stale EphemeralRunner registrations

## Summary

Adds a GitHub-side registration health check to the `EphemeralRunnerReconciler`. During reconciliation of a running EphemeralRunner that has a valid `RunnerId`, the controller now calls `GetRunner()` against the GitHub Actions API to verify the registration still exists. If the API returns 404 (registration gone), the runner is marked as failed so the `EphemeralRunnerSet` can provision a replacement.

This also fixes `deleteRunnerFromService` to tolerate 404 from `RemoveRunner`, since a runner whose registration was already invalidated will also 404 on removal. Previously this caused `markAsFailed` to error and trigger a requeue loop, preventing cleanup of stale runners.

A redundant pod phase check was removed from the health-check path: `checkRunnerRegistration` is only called from within the `cs.State.Terminated == nil` branch (container confirmed running), so the phase check was unnecessary and could cause false negatives, since the local `EphemeralRunner` object may not have the updated phase yet.

## Changes

- **`ephemeralrunner_controller.go`**: New `checkRunnerRegistration()` method that calls `GetRunner()` and returns unhealthy only on a confirmed 404. Transient API errors (500, timeouts, etc.) are logged and ignored to avoid false positives. Called during reconciliation when a running pod has a non-zero `RunnerId`.
- **`ephemeralrunner_controller.go`**: `deleteRunnerFromService()` now treats 404 from `RemoveRunner` as success, since the runner is already gone.
- **`ephemeralrunner_controller_test.go`**: Two new test cases: one verifying that a 404 from `GetRunner` marks the runner as failed, another verifying that a 500 does not. Both simulate a running pod container status before the health check triggers.
- **`fake/client.go`**: New `WithRemoveRunnerError` option for the fake client to support testing the 404 removal path.
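The classification rule at the heart of the health check (a confirmed 404 means the registration is gone; everything else is transient) can be sketched in isolation. This is a hedged sketch, not the controller's actual code: `apiError` and `registrationGone` are hypothetical stand-ins for however the Actions client surfaces HTTP status codes from `GetRunner()`.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// apiError is a hypothetical stand-in for however the Actions client
// surfaces HTTP status codes; the real controller inspects the error
// returned by GetRunner().
type apiError struct{ status int }

func (e *apiError) Error() string { return fmt.Sprintf("actions API error: status %d", e.status) }

// registrationGone reports whether err is a confirmed 404, meaning the
// runner registration was deleted server-side. Any other error (500,
// timeout, etc.) is treated as transient so that a flaky API response
// can never fail a healthy runner.
func registrationGone(err error) bool {
	var apiErr *apiError
	return errors.As(err, &apiErr) && apiErr.status == http.StatusNotFound
}

func main() {
	fmt.Println(registrationGone(&apiError{status: http.StatusNotFound}))            // true: mark runner failed
	fmt.Println(registrationGone(&apiError{status: http.StatusInternalServerError})) // false: log and ignore
	fmt.Println(registrationGone(nil))                                               // false: registration healthy
}
```

Keeping the predicate strict (exact 404 only) is what gives the "unhealthy only on confirmed 404" guarantee described above: any error the client cannot positively identify as a missing registration falls through to the transient path.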
## Issues Fixed

### Directly fixes

- **#4396** (*ARC runners did not recover automatically after GitHub Outage*): Runners that lost their GitHub-side registration during the [2026-03-05 GitHub Actions incident](https://www.githubstatus.com/incidents/g9j4tmfqdd09) remained stuck indefinitely because the `EphemeralRunnerReconciler` never verified registration validity. This change adds exactly that verification: on each reconciliation of a running runner, `GetRunner()` confirms the registration exists. If it returns 404, the runner is marked failed and replaced.
- **#4395** (*GHA Self-hosted runner pods are running but the runner status is showing offline*): Runner pods were running for 5-7+ hours showing "Listening for Jobs" but marked offline in GitHub. The pods' registrations were invalidated server-side (same incident), but the controller had no mechanism to detect this. The new health check detects the 404 and triggers cleanup, preventing these zombie runners from accumulating.

### Partially addresses

- **#4397** (*Stale TotalAssignedJobs causes permanent over-provisioning after platform incidents*): This issue has two components: (1) zombie runner pods occupying slots, and (2) the listener's `TotalAssignedJobs` remaining inflated. This PR fixes component (1): by detecting and cleaning up runners with invalidated registrations, the controller stops accumulating zombie pods. However, the listener-side `TotalAssignedJobs` inflation (a separate code path in `listener.go` → `worker.go`) is not addressed by this change and still requires either a session reconnect mechanism or CR deletion to clear.
- **#4307** (*Ephemeral Runners seem to get stuck when job is canceled or interrupted*): Reports runners getting stuck with `Registration <uuid> was not found` errors in BrokerServer backoff loops after job cancellation. The health check would detect that the registration is gone (404 from `GetRunner`) and mark these runners as failed rather than leaving them in an infinite backoff loop.
- **#4155** (*EphemeralRunner and its pods left stuck Running after runner OOMKill*): Runners that are OOMKilled can end up in a state where the pod is technically running but the runner process is non-functional, and the registration may become stale. The health check provides a secondary detection mechanism: if the GitHub-side registration is invalidated for a non-functional runner, it will be caught and cleaned up.
- **#3821** (*AutoscalingRunnerSet gets stuck thinking runners exist when they do not*): Reports the AutoscalingRunnerSet believing runners exist when no pods are present, requiring CR deletion to recover. While the root cause may vary, stale registrations that the controller cannot clean up (due to `RemoveRunner` 404 errors causing requeue loops) are one contributing factor. The 404 tolerance in `deleteRunnerFromService` directly addresses this cleanup path.

## Test Plan

- [x] Unit test: `GetRunner` returning 404 → runner marked as failed
- [x] Unit test: `GetRunner` returning 500 → runner NOT marked as failed (transient error tolerance)
- [x] Unit test: `RemoveRunner` returning 404 → deletion succeeds (runner already gone)
- [ ] Integration: Deploy to a test cluster, manually delete a runner's registration via the GitHub API, and verify the controller detects and replaces it
- [ ] Integration: Simulate incident conditions by blocking `broker.actions.githubusercontent.com` for running runners, then unblocking; verify stale runners are detected and replaced
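The 404-tolerant removal path can be sketched the same way. `statusError` and `removeRunner` are hypothetical names for this illustration; the real `deleteRunnerFromService()` wraps the Actions client's `RemoveRunner` call, but the rule is the same: a 404 means the registration is already gone, so removal is treated as success rather than as an error that would requeue forever.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// statusError is a hypothetical error type carrying the HTTP status
// returned by the Actions API.
type statusError struct{ code int }

func (e *statusError) Error() string { return fmt.Sprintf("actions API error: status %d", e.code) }

// removeRunner wraps a RemoveRunner-style call and treats 404 as success:
// if the registration is already gone there is nothing left to remove,
// and surfacing an error here would only trigger a requeue loop.
func removeRunner(call func() error) error {
	err := call()
	var se *statusError
	if errors.As(err, &se) && se.code == http.StatusNotFound {
		return nil // already deleted server-side; nothing to do
	}
	return err
}

func main() {
	fmt.Println(removeRunner(func() error { return &statusError{code: 404} })) // <nil>: treated as success
	fmt.Println(removeRunner(func() error { return &statusError{code: 500} })) // surfaced to the caller
	fmt.Println(removeRunner(func() error { return nil }))                     // <nil>: normal removal
}
```

Note that only 404 is swallowed: a 500 or timeout during removal still propagates, so genuine API failures continue to be retried.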
# Actions Runner Controller (ARC)
## About
Actions Runner Controller (ARC) is a Kubernetes operator that orchestrates and scales self-hosted runners for GitHub Actions.
With ARC, you can create runner scale sets that automatically scale based on the number of workflows running in your repository, organization, or enterprise. Because controlled runners can be ephemeral and based on containers, new runner instances can scale up or down rapidly and cleanly. For more information about autoscaling, see "Autoscaling with self-hosted runners."
You can set up ARC on Kubernetes using Helm, then create and run a workflow that uses runner scale sets. For more information about runner scale sets, see "Deploying runner scale sets with Actions Runner Controller."
## People
Actions Runner Controller (ARC) is an open-source project currently developed and maintained in collaboration with the GitHub Actions team, external maintainers @mumoshu and @toast-gear, various contributors, and the awesome community.
If you think the project is awesome and adds value to your business, please consider directly sponsoring community maintainers and individual contributors via GitHub Sponsors.
If you are already the employer of one of the contributors, sponsoring via GitHub Sponsors might not be an option. Just support them by other means!
See the sponsorship dashboard for past and current sponsors.
## Getting Started
To give ARC a try with just a handful of commands, please refer to the Quickstart guide.
For an overview of ARC, please refer to About ARC.
With the introduction of autoscaling runner scale sets, the existing autoscaling modes are now legacy. The legacy modes still serve certain use cases and will continue to be maintained by the community only.
For further information on what is supported by GitHub and what's managed by the community, please refer to this announcement discussion.
## Documentation
ARC documentation is available on docs.github.com.
### Legacy documentation
The following documentation is for the legacy autoscaling modes that continue to be maintained by the community:
- Quickstart guide
- About ARC
- Installing ARC
- Authenticating to the GitHub API
- Deploying ARC runners
- Adding ARC runners to a repository, organization, or enterprise
- Automatically scaling runners
- Using custom volumes
- Using ARC runners in a workflow
- Managing access with runner groups
- Configuring Windows runners
- Using ARC across organizations
- Using entrypoint features
- Deploying alternative runners
- Monitoring and troubleshooting
## Contributing
We welcome contributions from the community. For more details on contributing to the project (including requirements), please refer to "Getting Started with Contributing."
## Troubleshooting
We are very happy to help you with any issues you have. Please refer to the "Troubleshooting" section for common issues.