Merge branch 'master' into master
commit 07c9b64d4e
@ -11,21 +11,22 @@ Actions Runner Controller (ARC) is a Kubernetes operator that orchestrates and s

With ARC, you can create runner scale sets that automatically scale based on the number of workflows running in your repository, organization, or enterprise. Because controlled runners can be ephemeral and based on containers, new runner instances can scale up or down rapidly and cleanly. For more information about autoscaling, see ["Autoscaling with self-hosted runners."](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/autoscaling-with-self-hosted-runners)

You can set up ARC on Kubernetes using Helm, then create and run a workflow that uses runner scale sets. For more information about runner scale sets, see ["Deploying runner scale sets with Actions Runner Controller."](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#runner-scale-set)

## People

Actions Runner Controller (ARC) is an open-source project currently developed and maintained in collaboration with the GitHub Actions team, external maintainers @mumoshu and @toast-gear, various [contributors](https://github.com/actions/actions-runner-controller/graphs/contributors), and the [awesome community](https://github.com/actions/actions-runner-controller/discussions).

If you think the project is awesome and is adding value to your business, please consider directly sponsoring [community maintainers](https://github.com/sponsors/actions-runner-controller) and individual contributors via GitHub Sponsors.

-In case you are already the employer of one of contributors, sponsoring via GitHub Sponsors might not be an option. Just support them in other means!
+If you are already the employer of one of the contributors, sponsoring via GitHub Sponsors might not be an option. Just support them by other means!

See [the sponsorship dashboard](https://github.com/sponsors/actions-runner-controller) for the former and the current sponsors.

## Getting Started

-To give ARC a try with just a handful of commands, Please refer to the [Quickstart guide](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller).
+To give ARC a try with just a handful of commands, please refer to the [Quickstart guide](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller).

-For an overview of ARC, please refer to [About ARC](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller)
+For an overview of ARC, please refer to [About ARC](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller).

With the introduction of [autoscaling runner scale sets](https://github.com/actions/actions-runner-controller/discussions/2775), the existing [autoscaling modes](./docs/automatically-scaling-runners.md) are now legacy. The legacy modes have certain use cases and will continue to be maintained by the community only.

@ -37,7 +38,7 @@ ARC documentation is available on [docs.github.com](https://docs.github.com/en/a

### Legacy documentation

-The following documentation is for the legacy autoscaling modes that continue to be maintained by the community
+The following documentation is for the legacy autoscaling modes that continue to be maintained by the community:

- [Quickstart guide](/docs/quickstart.md)
- [About ARC](/docs/about-arc.md)
@ -21,6 +21,10 @@ import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

+// EphemeralRunnerContainerName is the name of the runner container.
+// It represents the name of the container running the self-hosted runner image.
+const EphemeralRunnerContainerName = "runner"
+
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.githubConfigUrl",name="GitHub Config URL",type=string

@ -46,6 +50,23 @@ func (er *EphemeralRunner) IsDone() bool {
    return er.Status.Phase == corev1.PodSucceeded || er.Status.Phase == corev1.PodFailed
}

+func (er *EphemeralRunner) HasContainerHookConfigured() bool {
+    for i := range er.Spec.Spec.Containers {
+        if er.Spec.Spec.Containers[i].Name != EphemeralRunnerContainerName {
+            continue
+        }
+
+        for _, env := range er.Spec.Spec.Containers[i].Env {
+            if env.Name == "ACTIONS_RUNNER_CONTAINER_HOOKS" {
+                return true
+            }
+        }
+
+        return false
+    }
+
+    return false
+}
+
// EphemeralRunnerSpec defines the desired state of EphemeralRunner
type EphemeralRunnerSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
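The new `HasContainerHookConfigured` helper only inspects the runner container's env for `ACTIONS_RUNNER_CONTAINER_HOOKS`. A minimal illustrative sketch of how it behaves; the `PodTemplateSpec` embedding is inferred from the resource-builder hunk later in this diff, so treat the literal below as an approximation rather than the exact spec shape:

```go
package main

import (
	"fmt"

	"github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Runner whose "runner" container opts into container hooks via env.
	er := &v1alpha1.EphemeralRunner{
		Spec: v1alpha1.EphemeralRunnerSpec{
			PodTemplateSpec: corev1.PodTemplateSpec{ // assumed embedded field name
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: v1alpha1.EphemeralRunnerContainerName, // "runner"
						Env: []corev1.EnvVar{{
							Name:  "ACTIONS_RUNNER_CONTAINER_HOOKS",
							Value: "/tmp/hook/index.js",
						}},
					}},
				},
			},
		},
	}

	// Prints true: the runner container declares ACTIONS_RUNNER_CONTAINER_HOOKS.
	fmt.Println(er.HasContainerHookConfigured())
}
```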
@ -25,7 +25,7 @@ spec:
      labels:
        app.kubernetes.io/part-of: gha-rs-controller
        app.kubernetes.io/component: controller-manager
-       app.kubernetes.io/version: {{ .Chart.Version }}
+       app.kubernetes.io/version: {{ .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
        {{- include "gha-runner-scale-set-controller.selectorLabels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
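The version-label change above exists because Kubernetes label values are limited to 63 characters and a restricted character set, while chart versions can carry SemVer build metadata such as `+abc123`. A rough Go equivalent of the added Helm pipeline (`replace "+" "_" | trunc 63 | trimSuffix "-"`), for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeLabelValue mimics the Helm pipeline added in the deployment template:
// replace "+" "_", truncate to 63 characters, then trim a trailing "-".
func sanitizeLabelValue(v string) string {
	v = strings.ReplaceAll(v, "+", "_")
	if len(v) > 63 {
		v = v[:63]
	}
	return strings.TrimSuffix(v, "-")
}

func main() {
	// A hypothetical chart version with build metadata becomes a valid label value.
	fmt.Println(sanitizeLabelValue("0.5.0+commit.07c9b64")) // 0.5.0_commit.07c9b64
}
```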
@ -136,7 +136,7 @@ volumeMounts:
|
||||||
{{- range $i, $volume := .Values.template.spec.volumes }}
|
{{- range $i, $volume := .Values.template.spec.volumes }}
|
||||||
{{- if eq $volume.name "work" }}
|
{{- if eq $volume.name "work" }}
|
||||||
{{- $createWorkVolume = 0 }}
|
{{- $createWorkVolume = 0 }}
|
||||||
- {{ $volume | toYaml | nindent 2 }}
|
- {{ $volume | toYaml | nindent 2 | trim }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- if eq $createWorkVolume 1 }}
|
{{- if eq $createWorkVolume 1 }}
|
||||||
|
|
@ -150,7 +150,7 @@ volumeMounts:
|
||||||
{{- range $i, $volume := .Values.template.spec.volumes }}
|
{{- range $i, $volume := .Values.template.spec.volumes }}
|
||||||
{{- if eq $volume.name "work" }}
|
{{- if eq $volume.name "work" }}
|
||||||
{{- $createWorkVolume = 0 }}
|
{{- $createWorkVolume = 0 }}
|
||||||
- {{ $volume | toYaml | nindent 2 }}
|
- {{ $volume | toYaml | nindent 2 | trim }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- if eq $createWorkVolume 1 }}
|
{{- if eq $createWorkVolume 1 }}
|
||||||
|
|
@ -165,7 +165,7 @@ volumeMounts:
|
||||||
{{- define "gha-runner-scale-set.non-work-volumes" -}}
|
{{- define "gha-runner-scale-set.non-work-volumes" -}}
|
||||||
{{- range $i, $volume := .Values.template.spec.volumes }}
|
{{- range $i, $volume := .Values.template.spec.volumes }}
|
||||||
{{- if ne $volume.name "work" }}
|
{{- if ne $volume.name "work" }}
|
||||||
- {{ $volume | toYaml | nindent 2 }}
|
- {{ $volume | toYaml | nindent 2 | trim }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
|
|
@ -255,7 +255,7 @@ volumeMounts:
|
||||||
{{- if eq $volMount.name "github-server-tls-cert" }}
|
{{- if eq $volMount.name "github-server-tls-cert" }}
|
||||||
{{- $mountGitHubServerTLS = 0 }}
|
{{- $mountGitHubServerTLS = 0 }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
- {{ $volMount | toYaml | nindent 4 }}
|
- {{ $volMount | toYaml | nindent 4 | trim }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- end }}
|
{{- end }}
|
||||||
{{- if $mountWork }}
|
{{- if $mountWork }}
|
||||||
|
|
|
||||||
|
|
@ -1,6 +1,6 @@
The `contrib` directory is the place for sharing various example code for deploying and operating `actions-runner-controller`.

-Anything contained in this directory is provided as-is. The maintainers of `actions-runner-controller` is not yet commited to provide
-full support for using, fixing, and enhancing it. However, they will do their best effort to collect feedbacks from early adopters and advanced users like you, and may eventually consider graduating any of the examples as an official addition to the project.
+Anything contained in this directory is provided as-is. The maintainers of `actions-runner-controller` are not yet committed to provide
+full support for using, fixing, and enhancing it. However, they will make their best effort to collect feedback from early adopters and advanced users like you, and may eventually consider graduating any of the examples as an official addition to the project.

See https://github.com/actions/actions-runner-controller/pull/1375#issuecomment-1258816470 and https://github.com/actions/actions-runner-controller/pull/1559#issuecomment-1258827496 for more context.
@ -8,29 +8,29 @@ All additional docs are kept in the `docs/` folder, this README is solely for do

> _Default values are the defaults set in the charts values.yaml, some properties have default configurations in the code for when the property is omitted or invalid_

| Key | Description | Default |
| --- | --- | --- |
| `labels` | Set labels to apply to all resources in the chart | |
| `replicaCount` | Set the number of runner pods | 1 |
| `image.repository` | The "repository/image" of the runner container | summerwind/actions-runner |
| `image.tag` | The tag of the runner container | |
| `image.pullPolicy` | The pull policy of the runner image | IfNotPresent |
| `imagePullSecrets` | Specifies the secret to be used when pulling the runner pod containers | |
| `fullnameOverride` | Override the full resource names | |
| `nameOverride` | Override the resource name prefix | |
| `podAnnotations` | Set annotations for the runner pod | |
| `podLabels` | Set labels for the runner pod | |
| `podSecurityContext` | Set the security context to runner pod | |
| `nodeSelector` | Set the pod nodeSelector | |
| `affinity` | Set the runner pod affinity rules | |
| `tolerations` | Set the runner pod tolerations | |
| `env` | Set environment variables for the runner container | |
-| `organization` | Github organization where runner will be registered | test |
+| `organization` | Github organization where the runner will be registered | test |
-| `repository` | Github repository where runner will be registered | |
+| `repository` | Github repository where the runner will be registered | |
| `runnerLabels` | Labels you want to add in your runner | test |
| `autoscaler.enabled` | Enable the HorizontalRunnerAutoscaler, if its enabled then replica count will not be used | true |
| `autoscaler.minReplicas` | Minimum no of replicas | 1 |
| `autoscaler.maxReplicas` | Maximum no of replicas | 5 |
| `autoscaler.scaleDownDelaySecondsAfterScaleOut` | [Anti-Flapping Configuration](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#anti-flapping-configuration) | 120 |
| `autoscaler.metrics` | [Pull driven scaling](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#pull-driven-scaling) | default |
| `autoscaler.scaleUpTriggers` | [Webhook driven scaling](https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#webhook-driven-scaling) | |
@ -26,7 +26,6 @@ import (
    "github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1"
    "github.com/actions/actions-runner-controller/github/actions"
    "github.com/go-logr/logr"
-   "go.uber.org/multierr"
    corev1 "k8s.io/api/core/v1"
    kerrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/runtime"

@ -38,10 +37,6 @@ import (
)

const (
-   // EphemeralRunnerContainerName is the name of the runner container.
-   // It represents the name of the container running the self-hosted runner image.
-   EphemeralRunnerContainerName = "runner"
-
    ephemeralRunnerFinalizerName        = "ephemeralrunner.actions.github.com/finalizer"
    ephemeralRunnerActionsFinalizerName = "ephemeralrunner.actions.github.com/runner-registration-finalizer"
)
@ -81,42 +76,40 @@ func (r *EphemeralRunnerReconciler) Reconcile(ctx context.Context, req ctrl.Requ
    }

    if controllerutil.ContainsFinalizer(ephemeralRunner, ephemeralRunnerActionsFinalizerName) {
-       switch ephemeralRunner.Status.Phase {
-       case corev1.PodSucceeded:
-           // deleted by the runner set, we can just remove finalizer without API calls
-           err := patch(ctx, r.Client, ephemeralRunner, func(obj *v1alpha1.EphemeralRunner) {
-               controllerutil.RemoveFinalizer(obj, ephemeralRunnerActionsFinalizerName)
-           })
-           if err != nil {
-               log.Error(err, "Failed to update ephemeral runner without runner registration finalizer")
-               return ctrl.Result{}, err
-           }
-           log.Info("Successfully removed runner registration finalizer")
-           return ctrl.Result{}, nil
-       default:
-           return r.cleanupRunnerFromService(ctx, ephemeralRunner, log)
-       }
+       log.Info("Trying to clean up runner from the service")
+       ok, err := r.cleanupRunnerFromService(ctx, ephemeralRunner, log)
+       if err != nil {
+           log.Error(err, "Failed to clean up runner from service")
+           return ctrl.Result{}, err
+       }
+       if !ok {
+           log.Info("Runner is not finished yet, retrying in 30s")
+           return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
+       }
+
+       log.Info("Runner is cleaned up from the service, removing finalizer")
+       if err := patch(ctx, r.Client, ephemeralRunner, func(obj *v1alpha1.EphemeralRunner) {
+           controllerutil.RemoveFinalizer(obj, ephemeralRunnerActionsFinalizerName)
+       }); err != nil {
+           return ctrl.Result{}, err
+       }
+       log.Info("Removed finalizer from ephemeral runner")
    }

    log.Info("Finalizing ephemeral runner")
-   done, err := r.cleanupResources(ctx, ephemeralRunner, log)
+   err := r.cleanupResources(ctx, ephemeralRunner, log)
    if err != nil {
        log.Error(err, "Failed to clean up ephemeral runner owned resources")
        return ctrl.Result{}, err
    }
-   if !done {
-       log.Info("Waiting for ephemeral runner owned resources to be deleted")
-       return ctrl.Result{Requeue: true}, nil
-   }

-   done, err = r.cleanupContainerHooksResources(ctx, ephemeralRunner, log)
-   if err != nil {
-       log.Error(err, "Failed to clean up container hooks resources")
-       return ctrl.Result{}, err
-   }
-   if !done {
-       log.Info("Waiting for container hooks resources to be deleted")
-       return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
+   if ephemeralRunner.HasContainerHookConfigured() {
+       log.Info("Runner has container hook configured, cleaning up container hook resources")
+       err = r.cleanupContainerHooksResources(ctx, ephemeralRunner, log)
+       if err != nil {
+           log.Error(err, "Failed to clean up container hooks resources")
+           return ctrl.Result{}, err
+       }
    }

    log.Info("Removing finalizer")

@ -134,15 +127,12 @@ func (r *EphemeralRunnerReconciler) Reconcile(ctx context.Context, req ctrl.Requ

    if ephemeralRunner.IsDone() {
        log.Info("Cleaning up resources after after ephemeral runner termination", "phase", ephemeralRunner.Status.Phase)
-       done, err := r.cleanupResources(ctx, ephemeralRunner, log)
+       err := r.cleanupResources(ctx, ephemeralRunner, log)
        if err != nil {
            log.Error(err, "Failed to clean up ephemeral runner owned resources")
            return ctrl.Result{}, err
        }
-       if !done {
-           log.Info("Waiting for ephemeral runner owned resources to be deleted")
-           return ctrl.Result{Requeue: true}, nil
-       }
        // Stop reconciling on this object.
        // The EphemeralRunnerSet is responsible for cleaning it up.
        log.Info("EphemeralRunner has already finished. Stopping reconciliation and waiting for EphemeralRunnerSet to clean it up", "phase", ephemeralRunner.Status.Phase)
@ -306,52 +296,43 @@ func (r *EphemeralRunnerReconciler) Reconcile(ctx context.Context, req ctrl.Requ
    }
}

-func (r *EphemeralRunnerReconciler) cleanupRunnerFromService(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) (ctrl.Result, error) {
+func (r *EphemeralRunnerReconciler) cleanupRunnerFromService(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) (ok bool, err error) {
    if err := r.deleteRunnerFromService(ctx, ephemeralRunner, log); err != nil {
        actionsError := &actions.ActionsError{}
        if !errors.As(err, &actionsError) {
-           log.Error(err, "Failed to clean up runner from the service (not an ActionsError)")
-           return ctrl.Result{}, err
+           return false, err
        }

        if actionsError.StatusCode == http.StatusBadRequest && actionsError.IsException("JobStillRunningException") {
-           log.Info("Runner is still running the job. Re-queue in 30 seconds")
-           return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
+           return false, nil
        }

-       log.Error(err, "Failed clean up runner from the service")
-       return ctrl.Result{}, err
+       return false, err
    }

-   log.Info("Successfully removed runner registration from service")
-   if err := patch(ctx, r.Client, ephemeralRunner, func(obj *v1alpha1.EphemeralRunner) {
-       controllerutil.RemoveFinalizer(obj, ephemeralRunnerActionsFinalizerName)
-   }); err != nil {
-       return ctrl.Result{}, err
-   }
-
-   log.Info("Successfully removed runner registration finalizer")
-   return ctrl.Result{}, nil
+   return true, nil
}

-func (r *EphemeralRunnerReconciler) cleanupResources(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) (deleted bool, err error) {
+func (r *EphemeralRunnerReconciler) cleanupResources(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) error {
    log.Info("Cleaning up the runner pod")
    pod := new(corev1.Pod)
-   err = r.Get(ctx, types.NamespacedName{Namespace: ephemeralRunner.Namespace, Name: ephemeralRunner.Name}, pod)
+   err := r.Get(ctx, types.NamespacedName{Namespace: ephemeralRunner.Namespace, Name: ephemeralRunner.Name}, pod)
    switch {
    case err == nil:
        if pod.ObjectMeta.DeletionTimestamp.IsZero() {
            log.Info("Deleting the runner pod")
            if err := r.Delete(ctx, pod); err != nil && !kerrors.IsNotFound(err) {
-               return false, fmt.Errorf("failed to delete pod: %w", err)
+               return fmt.Errorf("failed to delete pod: %w", err)
            }
+           log.Info("Deleted the runner pod")
+       } else {
+           log.Info("Pod contains deletion timestamp")
        }
-       return false, nil
-   case !kerrors.IsNotFound(err):
-       return false, err
+   case kerrors.IsNotFound(err):
+       log.Info("Runner pod is deleted")
+   default:
+       return err
    }
-   log.Info("Pod is deleted")

    log.Info("Cleaning up the runner jitconfig secret")
    secret := new(corev1.Secret)
@ -361,53 +342,50 @@ func (r *EphemeralRunnerReconciler) cleanupResources(ctx context.Context, epheme
        if secret.ObjectMeta.DeletionTimestamp.IsZero() {
            log.Info("Deleting the jitconfig secret")
            if err := r.Delete(ctx, secret); err != nil && !kerrors.IsNotFound(err) {
-               return false, fmt.Errorf("failed to delete secret: %w", err)
+               return fmt.Errorf("failed to delete secret: %w", err)
            }
+           log.Info("Deleted jitconfig secret")
+       } else {
+           log.Info("Secret contains deletion timestamp")
        }
-       return false, nil
-   case !kerrors.IsNotFound(err):
-       return false, err
+   case kerrors.IsNotFound(err):
+       log.Info("Runner jitconfig secret is deleted")
+   default:
+       return err
    }
-   log.Info("Secret is deleted")

-   return true, nil
+   return nil
}

-func (r *EphemeralRunnerReconciler) cleanupContainerHooksResources(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) (done bool, err error) {
+func (r *EphemeralRunnerReconciler) cleanupContainerHooksResources(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) error {
    log.Info("Cleaning up runner linked pods")
-   done, err = r.cleanupRunnerLinkedPods(ctx, ephemeralRunner, log)
-   if err != nil {
-       return false, fmt.Errorf("failed to clean up runner linked pods: %w", err)
-   }
-
-   if !done {
-       return false, nil
+   var errs []error
+   if err := r.cleanupRunnerLinkedPods(ctx, ephemeralRunner, log); err != nil {
+       errs = append(errs, err)
    }

    log.Info("Cleaning up runner linked secrets")
-   done, err = r.cleanupRunnerLinkedSecrets(ctx, ephemeralRunner, log)
-   if err != nil {
-       return false, err
+   if err := r.cleanupRunnerLinkedSecrets(ctx, ephemeralRunner, log); err != nil {
+       errs = append(errs, err)
    }

-   return done, nil
+   return errors.Join(errs...)
}

-func (r *EphemeralRunnerReconciler) cleanupRunnerLinkedPods(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) (done bool, err error) {
+func (r *EphemeralRunnerReconciler) cleanupRunnerLinkedPods(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) error {
    runnerLinedLabels := client.MatchingLabels(
        map[string]string{
            "runner-pod": ephemeralRunner.Name,
        },
    )
    var runnerLinkedPodList corev1.PodList
-   err = r.List(ctx, &runnerLinkedPodList, client.InNamespace(ephemeralRunner.Namespace), runnerLinedLabels)
-   if err != nil {
-       return false, fmt.Errorf("failed to list runner-linked pods: %w", err)
+   if err := r.List(ctx, &runnerLinkedPodList, client.InNamespace(ephemeralRunner.Namespace), runnerLinedLabels); err != nil {
+       return fmt.Errorf("failed to list runner-linked pods: %w", err)
    }

    if len(runnerLinkedPodList.Items) == 0 {
        log.Info("Runner-linked pods are deleted")
-       return true, nil
+       return nil
    }

    log.Info("Deleting container hooks runner-linked pods", "count", len(runnerLinkedPodList.Items))
@ -425,24 +403,23 @@ func (r *EphemeralRunnerReconciler) cleanupRunnerLinkedPods(ctx context.Context,
        }
    }

-   return false, multierr.Combine(errs...)
+   return errors.Join(errs...)
}

-func (r *EphemeralRunnerReconciler) cleanupRunnerLinkedSecrets(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) (done bool, err error) {
+func (r *EphemeralRunnerReconciler) cleanupRunnerLinkedSecrets(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, log logr.Logger) error {
    runnerLinkedLabels := client.MatchingLabels(
        map[string]string{
            "runner-pod": ephemeralRunner.ObjectMeta.Name,
        },
    )
    var runnerLinkedSecretList corev1.SecretList
-   err = r.List(ctx, &runnerLinkedSecretList, client.InNamespace(ephemeralRunner.Namespace), runnerLinkedLabels)
-   if err != nil {
-       return false, fmt.Errorf("failed to list runner-linked secrets: %w", err)
+   if err := r.List(ctx, &runnerLinkedSecretList, client.InNamespace(ephemeralRunner.Namespace), runnerLinkedLabels); err != nil {
+       return fmt.Errorf("failed to list runner-linked secrets: %w", err)
    }

    if len(runnerLinkedSecretList.Items) == 0 {
        log.Info("Runner-linked secrets are deleted")
-       return true, nil
+       return nil
    }

    log.Info("Deleting container hooks runner-linked secrets", "count", len(runnerLinkedSecretList.Items))

@ -460,7 +437,7 @@ func (r *EphemeralRunnerReconciler) cleanupRunnerLinkedSecrets(ctx context.Conte
        }
    }

-   return false, multierr.Combine(errs...)
+   return errors.Join(errs...)
}

func (r *EphemeralRunnerReconciler) markAsFailed(ctx context.Context, ephemeralRunner *v1alpha1.EphemeralRunner, errMessage string, reason string, log logr.Logger) error {
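The cleanup helpers above also swap `go.uber.org/multierr` for the standard library's `errors.Join` (available since Go 1.20) and collect per-step failures instead of returning early. A small self-contained sketch of that aggregation pattern, independent of the controller code; the step names and errors below are invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	var errs []error

	// Each cleanup step appends its failure instead of returning early,
	// so one failing step does not hide the others.
	steps := []struct {
		name string
		run  func() error
	}{
		{"delete runner-linked pods", func() error { return errors.New("pod deletion failed") }},
		{"delete runner-linked secrets", func() error { return nil }},
	}
	for _, step := range steps {
		if err := step.run(); err != nil {
			errs = append(errs, fmt.Errorf("%s: %w", step.name, err))
		}
	}

	// errors.Join returns nil when no errors were collected, which mirrors
	// the happy path of the refactored cleanup helpers in the diff above.
	if err := errors.Join(errs...); err != nil {
		fmt.Println(err)
	}
}
```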
@ -536,7 +513,7 @@ func (r *EphemeralRunnerReconciler) updateStatusWithRunnerConfig(ctx context.Con
    }

    for i := range ephemeralRunner.Spec.Spec.Containers {
-       if ephemeralRunner.Spec.Spec.Containers[i].Name == EphemeralRunnerContainerName &&
+       if ephemeralRunner.Spec.Spec.Containers[i].Name == v1alpha1.EphemeralRunnerContainerName &&
            ephemeralRunner.Spec.Spec.Containers[i].WorkingDir != "" {
            jitSettings.WorkFolder = ephemeralRunner.Spec.Spec.Containers[i].WorkingDir
        }

@ -876,7 +853,7 @@ func (r *EphemeralRunnerReconciler) SetupWithManager(mgr ctrl.Manager, opts ...O
func runnerContainerStatus(pod *corev1.Pod) *corev1.ContainerStatus {
    for i := range pod.Status.ContainerStatuses {
        cs := &pod.Status.ContainerStatuses[i]
-       if cs.Name == EphemeralRunnerContainerName {
+       if cs.Name == v1alpha1.EphemeralRunnerContainerName {
            return cs
        }
    }
@ -48,7 +48,7 @@ func newExampleRunner(name, namespace, configSecretName string) *v1alpha1.Epheme
    Spec: corev1.PodSpec{
        Containers: []corev1.Container{
            {
-               Name:    EphemeralRunnerContainerName,
+               Name:    v1alpha1.EphemeralRunnerContainerName,
                Image:   runnerImage,
                Command: []string{"/runner/run.sh"},
                VolumeMounts: []corev1.VolumeMount{

@ -57,6 +57,12 @@ func newExampleRunner(name, namespace, configSecretName string) *v1alpha1.Epheme
                        MountPath: "/runner",
                    },
                },
+               Env: []corev1.EnvVar{
+                   {
+                       Name:  "ACTIONS_RUNNER_CONTAINER_HOOKS",
+                       Value: "/tmp/hook/index.js",
+                   },
+               },
            },
        },
        InitContainers: []corev1.Container{

@ -379,13 +385,10 @@ var _ = Describe("EphemeralRunner", func() {
    podCopy := pod.DeepCopy()
    pod.Status.Phase = phase
    // set container state to force status update
-   pod.Status.ContainerStatuses = append(
-       pod.Status.ContainerStatuses,
-       corev1.ContainerStatus{
-           Name:  EphemeralRunnerContainerName,
-           State: corev1.ContainerState{},
-       },
-   )
+   pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
+       Name:  v1alpha1.EphemeralRunnerContainerName,
+       State: corev1.ContainerState{},
+   })

    err := k8sClient.Status().Patch(ctx, pod, client.MergeFrom(podCopy))
    Expect(err).To(BeNil(), "failed to patch pod status")

@ -439,7 +442,7 @@ var _ = Describe("EphemeralRunner", func() {
        },
    }
    newPod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
-       Name:  EphemeralRunnerContainerName,
+       Name:  v1alpha1.EphemeralRunnerContainerName,
        State: corev1.ContainerState{},
    })
    err := k8sClient.Status().Patch(ctx, newPod, client.MergeFrom(pod))

@ -545,7 +548,7 @@ var _ = Describe("EphemeralRunner", func() {
    }

    pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
-       Name: EphemeralRunnerContainerName,
+       Name: v1alpha1.EphemeralRunnerContainerName,
        State: corev1.ContainerState{
            Terminated: &corev1.ContainerStateTerminated{
                ExitCode: 1,

@ -564,7 +567,7 @@ var _ = Describe("EphemeralRunner", func() {
    err := k8sClient.Get(ctx, client.ObjectKey{Name: ephemeralRunner.Name, Namespace: ephemeralRunner.Namespace}, pod)
    if err == nil {
        pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
-           Name: EphemeralRunnerContainerName,
+           Name: v1alpha1.EphemeralRunnerContainerName,
            State: corev1.ContainerState{
                Terminated: &corev1.ContainerStateTerminated{
                    ExitCode: 1,

@ -611,7 +614,7 @@ var _ = Describe("EphemeralRunner", func() {
    pod.Status.Phase = corev1.PodFailed
    pod.Status.Reason = "Evicted"
    pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
-       Name:  EphemeralRunnerContainerName,
+       Name:  v1alpha1.EphemeralRunnerContainerName,
        State: corev1.ContainerState{},
    })
    err := k8sClient.Status().Update(ctx, pod)

@ -654,7 +657,7 @@ var _ = Describe("EphemeralRunner", func() {
    ).Should(BeEquivalentTo(true))

    pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
-       Name: EphemeralRunnerContainerName,
+       Name: v1alpha1.EphemeralRunnerContainerName,
        State: corev1.ContainerState{
            Terminated: &corev1.ContainerStateTerminated{
                ExitCode: 0,

@ -702,7 +705,7 @@ var _ = Describe("EphemeralRunner", func() {
    // first set phase to running
    pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
-       Name: EphemeralRunnerContainerName,
+       Name: v1alpha1.EphemeralRunnerContainerName,
        State: corev1.ContainerState{
            Running: &corev1.ContainerStateRunning{
                StartedAt: metav1.Now(),

@ -797,7 +800,7 @@ var _ = Describe("EphemeralRunner", func() {
    }, ephemeralRunnerTimeout, ephemeralRunnerInterval).Should(BeEquivalentTo(true))

    pod.Status.ContainerStatuses = append(pod.Status.ContainerStatuses, corev1.ContainerStatus{
-       Name: EphemeralRunnerContainerName,
+       Name: v1alpha1.EphemeralRunnerContainerName,
        State: corev1.ContainerState{
            Terminated: &corev1.ContainerStateTerminated{
                ExitCode: 0,
@ -614,7 +614,7 @@ func (b *ResourceBuilder) newEphemeralRunnerPod(ctx context.Context, runner *v1a
    newPod.Spec.Containers = make([]corev1.Container, 0, len(runner.Spec.PodTemplateSpec.Spec.Containers))

    for _, c := range runner.Spec.PodTemplateSpec.Spec.Containers {
-       if c.Name == EphemeralRunnerContainerName {
+       if c.Name == v1alpha1.EphemeralRunnerContainerName {
            c.Env = append(
                c.Env,
                corev1.EnvVar{
@ -31,7 +31,7 @@ In addition to the increased reliability of the automatic scaling, we have worke

[](https://youtu.be/wQ0k5k6KW5Y)

-> Will take you to Youtube for a short walkthrough of the Autoscaling Runner Scale Sets mode.
+> Will take you to YouTube for a short walkthrough of the Autoscaling Runner Scale Sets mode.

## Setup

@ -201,7 +201,7 @@ Please evaluate these changes carefully before upgrading.
1. Document customization for containerModes [#2777](https://github.com/actions/actions-runner-controller/pull/2777)
1. Bump github.com/cloudflare/circl from 1.1.0 to 1.3.3 [#2628](https://github.com/actions/actions-runner-controller/pull/2628)
1. chore(deps): bump github.com/stretchr/testify from 1.8.2 to 1.8.4 [#2716](https://github.com/actions/actions-runner-controller/pull/2716)
-1. Move gha-* docs out of preview [#2779](https://github.com/actions/actions-runner-controller/pull/2779)
+1. Move gha-\* docs out of preview [#2779](https://github.com/actions/actions-runner-controller/pull/2779)
1. Prepare 0.5.0 release [#2783](https://github.com/actions/actions-runner-controller/pull/2783)
1. Security fix [#2676](https://github.com/actions/actions-runner-controller/pull/2676)
@ -1,14 +1,12 @@
# signrel

-`signrel` is the utility command for downloading `actions-runner-controller` release assets, sigining those, and uploading the signature files.
+`signrel` is a utility command that downloads `actions-runner-controller` release assets, signs them, and uploads the resulting signature files.

## Verifying Release Assets

-For users, browse https://keys.openpgp.org/search?q=D8078411E3D8400B574EDB0441B69B728F095A87 and download the public key, or refer to [the instruction](https://keys.openpgp.org/about/usage#gnupg-retrieve) to import the key onto your machine.
+To get started, browse to <https://keys.openpgp.org/search?q=D8078411E3D8400B574EDB0441B69B728F095A87> to download the public key, or refer to [the instructions](https://keys.openpgp.org/about/usage#gnupg-retrieve) to import the key onto your machine.

-Next, you'll want to verify the signature of the download asset somehow.
-
-With `gpg`, you would usually do that by downloading both the asset and the signature files from our specific release page, and run `gpg --verify` like:
+Next, verify the signature of the downloaded asset. Using `gpg`, you can do this by downloading both the asset and its signature from our release page, then running `gpg --verify` like so:

```console
# Download the asset

@ -21,7 +19,7 @@ curl -LO https://github.com/actions/actions-runner-controller/releases/download/
gpg --verify actions-runner-controller.yaml{.asc,}
```

-On succesful verification, the gpg command would output:
+On successful verification, the `gpg` command will output something similar to:

```
gpg: Signature made Tue 10 May 2022 04:15:32 AM UTC

@ -35,7 +33,7 @@ gpg: Good signature from "Yusuke Kuoka <ykuoka@gmail.com>" [ultimate]

## Signing Release Assets

-Assuming you are a maintainer of the project who has admin permission, run the command like the below to sign assets and upload the signature files:
+If you are a maintainer of the project with admin permission, you can run the following commands to sign assets and upload the signature files:

```console
$ cd hack/signrel

@ -60,8 +58,8 @@ Upload completed: *snip*
actions-runner-controller-0.17.2.tgz.asc"}
```

-To retrieve all the available release tags, run:
+To retrieve all available release tags, run:

-```
+```console
$ go run . tags | jq -r .[].tag_name
```
@ -3,6 +3,6 @@ That being said, we are likely accept bug reports with concrete reproduction ste

To use this, you need to write some Kubernetes manifest and a container image for deployment.

-For other information, please see the original pull request introduced it.
+For other information, please see the original pull request that introduced it.

https://github.com/actions/actions-runner-controller/pull/682