fix: configure kubedog rate limiter to prevent context cancellation (#2446)

yxxhero 2026-03-03 19:24:28 +08:00 committed by GitHub
parent 6e21671228
commit ce09f560d9
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
11 changed files with 887 additions and 48 deletions

KUBEDOG_CONFIG.md (new file, 208 lines)

@@ -0,0 +1,208 @@
# Kubedog Configuration
This document describes how to configure kubedog resource tracking in Helmfile.
## Overview
Kubedog is a library for tracking Kubernetes resources during deployments. Helmfile uses kubedog when `trackMode: kubedog` is set to monitor the rollout of resources like Deployments, StatefulSets, DaemonSets, and Jobs.
## Configuration Options
### Release-level Configuration
You can configure kubedog settings per release:
```yaml
releases:
  - name: my-app
    namespace: default
    chart: my-chart
    trackMode: kubedog
    kubedogQPS: 100    # Queries per second (default: 100)
    kubedogBurst: 200  # Burst capacity (default: 200)
    trackLogs: true
    trackKinds:
      - Deployment
```
### Global Default Configuration
You can also set defaults in `helmDefaults`:
```yaml
helmDefaults:
  trackMode: kubedog
  # Note: QPS and Burst can only be configured at release level
```
## Parameters
### kubedogQPS
- **Type**: `float32`
- **Default**: `100`
- **Description**: Sets the maximum number of queries per second to the Kubernetes API server from the kubedog client. This controls the rate of API requests when tracking resources.
**When to increase**:
- Large clusters with many resources
- When tracking multiple releases simultaneously
- When you see rate limiting errors like "client rate limiter Wait returned an error: context canceled"
**When to decrease**:
- Small clusters or development environments
- When you want to reduce load on the API server
### kubedogBurst
- **Type**: `int`
- **Default**: `200`
- **Description**: Sets the maximum burst of requests that can be made to the Kubernetes API server. This allows temporary spikes above the QPS limit.
**When to increase**:
- When tracking releases with many resources
- When you see connection timeout errors
- In production environments with high throughput needs
**When to decrease**:
- In resource-constrained environments
- When API server is under heavy load
## Tuning Guidelines
### For Small Clusters (< 50 resources)
```yaml
releases:
  - name: my-app
    trackMode: kubedog
    kubedogQPS: 50
    kubedogBurst: 100
```
### For Medium Clusters (50-200 resources)
```yaml
releases:
  - name: my-app
    trackMode: kubedog
    kubedogQPS: 100   # default
    kubedogBurst: 200 # default
```
### For Large Clusters (> 200 resources)
```yaml
releases:
  - name: my-app
    trackMode: kubedog
    kubedogQPS: 200
    kubedogBurst: 400
```
### For Multiple Concurrent Releases
When using the `--concurrent` flag with multiple releases that use kubedog tracking:
```yaml
releases:
  - name: app1
    trackMode: kubedog
    kubedogQPS: 50
    kubedogBurst: 100
  - name: app2
    trackMode: kubedog
    kubedogQPS: 50
    kubedogBurst: 100
```
## Troubleshooting
### Rate Limiting Errors
**Error**:
```
E0302 19:38:41.812322 91 reflector.go:204] "Failed to watch" err="client rate limiter Wait returned an error: context canceled"
```
**Solution**: Increase `kubedogQPS` and `kubedogBurst` values.
### Connection Timeouts
**Error**:
```
context canceled while waiting for API server response
```
**Solution**:
1. Check network connectivity to the API server
2. Increase `kubedogBurst` to allow more concurrent requests
3. Decrease the number of concurrent releases when using the `--concurrent` flag
### Slow Tracking
**Symptom**: Resource tracking takes a long time to complete.
**Solution**:
1. Use `trackKinds` to limit which resource types are tracked
2. Use `skipKinds` to exclude unnecessary resource types
3. Increase `kubedogQPS` to speed up API queries
## Related Configuration
### trackTimeout
Sets the timeout for kubedog tracking (in seconds):
```yaml
releases:
  - name: my-app
    trackMode: kubedog
    trackTimeout: 600 # 10 minutes
```
### trackLogs
Enable/disable log streaming from tracked resources:
```yaml
releases:
  - name: my-app
    trackMode: kubedog
    trackLogs: true # Show pod logs during tracking
```
### trackKinds / skipKinds
Control which resource types to track:
```yaml
releases:
  - name: my-app
    trackMode: kubedog
    trackKinds:
      - Deployment
      - StatefulSet
    skipKinds:
      - ConfigMap
      - Secret
```
## Implementation Details
The kubedog client configuration uses:
- `k8s.io/client-go` for Kubernetes API communication
- Custom rate limiting via `rest.Config.QPS` and `rest.Config.Burst`
- Separate client cache per unique (kubeContext, kubeconfig, QPS, Burst) combination
The default values (QPS=100, Burst=200) were chosen to:
- Prevent rate limiting errors in most common scenarios
- Support tracking of multiple resource types simultaneously
- Allow reasonable burst capacity for initial resource discovery
- Balance between tracking speed and API server load
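
The per-settings client cache can be sketched as follows (a simplified pattern with a placeholder value type; the real cache entry bundles the clientset, dynamic client, cached discovery client, and REST mapper):

```go
package main

import (
	"fmt"
	"sync"
)

// cacheKey mirrors the struct in this change: clients are cached per unique
// combination of kubeContext, kubeconfig path, QPS, and Burst.
type cacheKey struct {
	kubeContext string
	kubeconfig  string
	qps         float32
	burst       int
}

var (
	mu sync.Mutex
	// The value here is a placeholder string; the real entry holds the
	// configured Kubernetes clients.
	cache = make(map[cacheKey]string)
)

// getOrCreate returns the cached entry for key, building it under the lock
// on first use, so releases with identical settings share one client.
func getOrCreate(key cacheKey) string {
	mu.Lock()
	defer mu.Unlock()
	if c, ok := cache[key]; ok {
		return c
	}
	c := fmt.Sprintf("client(ctx=%s, cfg=%s, qps=%v, burst=%d)",
		key.kubeContext, key.kubeconfig, key.qps, key.burst)
	cache[key] = c
	return c
}

func main() {
	a := getOrCreate(cacheKey{"prod", "~/.kube/config", 100, 200})
	b := getOrCreate(cacheKey{"prod", "~/.kube/config", 100, 200})
	c := getOrCreate(cacheKey{"prod", "~/.kube/config", 200, 400})
	// Identical settings share a client; a different QPS/Burst pair does not.
	fmt.Println(a == b, a == c) // true false
}
```

A consequence of keying on QPS/Burst is that changing a release's rate limits creates a separate client with its own limiter rather than retuning a shared one.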
## See Also
- [Issue #2445](https://github.com/helmfile/helmfile/issues/2445) - Original issue that led to configurable QPS/Burst
- [Kubedog Documentation](https://github.com/werf/kubedog)
- [Kubernetes client-go `rest.Config` QPS/Burst](https://pkg.go.dev/k8s.io/client-go/rest#Config)


@@ -0,0 +1,186 @@
# Example: Kubedog Resource Tracking Configuration
This example demonstrates various ways to configure kubedog resource tracking.
## Basic Example
```yaml
releases:
  - name: simple-app
    namespace: default
    chart: ./charts/simple-app
    trackMode: kubedog
```
Uses default QPS (100) and Burst (200).
## Customized Rate Limiting
```yaml
releases:
  - name: high-throughput-app
    namespace: production
    chart: ./charts/app
    trackMode: kubedog
    # Increased limits for large-scale deployments
    kubedogQPS: 200
    kubedogBurst: 400
    trackTimeout: 600
    trackLogs: true
    trackKinds:
      - Deployment
      - StatefulSet
```
## Multiple Releases with Different Settings
```yaml
releases:
  # Small app - conservative limits
  - name: frontend
    namespace: web
    chart: ./charts/frontend
    trackMode: kubedog
    kubedogQPS: 50
    kubedogBurst: 100
  # Medium app - default limits
  - name: backend
    namespace: api
    chart: ./charts/backend
    trackMode: kubedog
  # Large app - increased limits
  - name: data-processor
    namespace: data
    chart: ./charts/processor
    trackMode: kubedog
    kubedogQPS: 150
    kubedogBurst: 300
    trackKinds:
      - Deployment
      - StatefulSet
      - Job
```
## Environment-Specific Configuration
```yaml
environments:
  development:
    values:
      - kubedogQPS: 50
      - kubedogBurst: 100
  staging:
    values:
      - kubedogQPS: 100
      - kubedogBurst: 200
  production:
    values:
      - kubedogQPS: 200
      - kubedogBurst: 400
releases:
  - name: myapp
    namespace: {{ .Environment.Name }}
    chart: ./charts/myapp
    trackMode: kubedog
    kubedogQPS: {{ .Values.kubedogQPS }}
    kubedogBurst: {{ .Values.kubedogBurst }}
```
## With Global Defaults
```yaml
helmDefaults:
  createNamespace: true
  timeout: 300
releases:
  - name: app1
    namespace: default
    chart: ./charts/app
    trackMode: kubedog
    # Uses release-specific settings
    kubedogQPS: 150
    kubedogBurst: 300
  - name: app2
    namespace: default
    chart: ./charts/app
    trackMode: kubedog
    # Uses default QPS=100, Burst=200
```
## Selective Tracking
```yaml
releases:
  - name: complex-app
    namespace: default
    chart: ./charts/complex-app
    trackMode: kubedog
    kubedogQPS: 120
    kubedogBurst: 250
    # Only track deployments and jobs
    trackKinds:
      - Deployment
      - Job
    # Skip these resource types
    skipKinds:
      - ConfigMap
      - Secret
      - Ingress
    # Track specific resources only
    trackResources:
      - kind: Deployment
        name: main-app
      - kind: Job
        name: migration-job
        namespace: default
```
## Testing the Configuration
To test your kubedog configuration:
```bash
# Apply with kubedog tracking
helmfile apply -n my-namespace -l app=myapp
# With debug logging
helmfile apply -n my-namespace -l app=myapp --log-level debug
# With specific environment
helmfile apply -e production -l app=myapp
```
## Expected Output
When kubedog tracking is working correctly, you should see:
```
Tracking 5 resources from release myapp with kubedog
Tracking 5 resources with kubedog (filtered from 5 total)
┌ Status progress
│ DEPLOYMENT   REPLICAS   AVAILABLE   UP-TO-DATE
│ myapp-main   1/1        1           1
└ Status progress
All resources tracked successfully

UPDATED RELEASES:
NAME    NAMESPACE   CHART          VERSION   DURATION
myapp   default     ./charts/app   1.0.0     1m32s
```
## Troubleshooting Commands
```bash
# Check current kubedog settings
helmfile build -n my-namespace -l app=myapp | grep -A 5 "kubedog"
# Test with increased verbosity
helmfile apply -n my-namespace -l app=myapp --log-level debug 2>&1 | grep -i kubedog
# Monitor API server requests (requires cluster access)
kubectl get --raw /metrics | grep apiserver_request_count
```


@@ -0,0 +1,141 @@
# Advanced Kubedog Configuration Examples
# Example 1: Basic kubedog tracking with custom QPS/Burst
releases:
  - name: simple-app
    namespace: default
    chart: ./charts/simple-app
    trackMode: kubedog
    trackTimeout: 300
    trackLogs: true
    kubedogQPS: 50
    kubedogBurst: 100
---
# Example 2: With resource filtering
releases:
  - name: filtered-app
    namespace: production
    chart: ./charts/complex-app
    trackMode: kubedog
    trackTimeout: 600
    trackLogs: true
    trackKinds:
      - Deployment
      - StatefulSet
    skipKinds:
      - ConfigMap
      - Secret
---
# Example 3: With specific resource tracking
releases:
  - name: selective-tracking
    namespace: default
    chart: ./charts/microservices
    trackMode: kubedog
    trackResources:
      - kind: Deployment
        name: api-server
        namespace: default
      - kind: StatefulSet
        name: database
        namespace: default
---
# Example 4: Production-grade configuration
releases:
  - name: production-app
    namespace: production
    chart: ./charts/production-app
    trackMode: kubedog
    trackTimeout: 900
    trackLogs: true
    kubedogQPS: 30
    kubedogBurst: 60
    trackKinds:
      - Deployment
      - StatefulSet
      - DaemonSet
      - Job
    trackResources:
      - kind: Deployment
        name: frontend
        namespace: production
      - kind: Deployment
        name: backend
        namespace: production
      - kind: StatefulSet
        name: redis
        namespace: production
---
# Example 5: With Helm values containing annotations (future feature)
# Note: Annotation support is proposed but not yet implemented
releases:
  - name: annotated-app
    namespace: default
    chart: ./charts/annotated-app
    trackMode: kubedog
    values:
      - values.yaml
    # When annotation support is implemented, these would work:
    # metadata:
    #   annotations:
    #     helmfile.dev/track-termination-mode: "NonBlocking"
    #     helmfile.dev/fail-mode: "HopeUntilEndOfDeployProcess"
    #     helmfile.dev/failures-allowed-per-replica: "2"
    #     helmfile.dev/log-regex: "^(ERROR|WARN)"
    #     helmfile.dev/skip-logs-for-containers: "sidecar,init"
    #     helmfile.dev/show-service-messages: "true"
---
# Example 6: Multi-environment configuration
environments:
  production:
    values:
      - kubedogQPS: 30
      - kubedogBurst: 60
      - trackTimeout: 900
  staging:
    values:
      - kubedogQPS: 100
      - kubedogBurst: 200
      - trackTimeout: 300
releases:
  - name: multi-env-app
    namespace: {{ .Environment.Name }}
    chart: ./charts/app
    trackMode: kubedog
    trackLogs: true
    kubedogQPS: {{ .Values.kubedogQPS }}
    kubedogBurst: {{ .Values.kubedogBurst }}
    trackTimeout: {{ .Values.trackTimeout }}
---
# Example 7: With needs and tracking (tracking happens after dependencies)
releases:
  - name: database
    namespace: default
    chart: ./charts/postgresql
    trackMode: kubedog
    trackTimeout: 600
  - name: backend
    namespace: default
    chart: ./charts/backend
    needs:
      - database
    trackMode: kubedog
    trackTimeout: 300
    trackLogs: true
  - name: frontend
    namespace: default
    chart: ./charts/frontend
    needs:
      - backend
    trackMode: kubedog
    trackTimeout: 300
    trackLogs: true

go.mod (4 changed lines)

@@ -25,7 +25,7 @@ require (
github.com/tatsushid/go-prettytable v0.0.0-20141013043238-ed2d14c29939
github.com/tj/assert v0.0.3
github.com/variantdev/dag v1.1.0
github.com/werf/kubedog v0.13.0
github.com/werf/kubedog-for-werf-helm v0.0.0-20241217155728-9d45c48b82b6
github.com/zclconf/go-cty v1.18.0
github.com/zclconf/go-cty-yaml v1.2.0
go.szostok.io/version v1.2.0
@@ -108,7 +108,7 @@ require (
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/ini.v1 v1.67.1 // indirect
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
sigs.k8s.io/yaml v1.6.0
sigs.k8s.io/yaml v1.6.0 // indirect
)
require (

go.sum (4 changed lines)

@@ -775,8 +775,8 @@ github.com/urfave/cli v1.22.17 h1:SYzXoiPfQjHBbkYxbew5prZHS1TOLT3ierW8SYLqtVQ=
github.com/urfave/cli v1.22.17/go.mod h1:b0ht0aqgH/6pBYzzxURyrM4xXNgsoT/n2ZzwQiEhNVo=
github.com/variantdev/dag v1.1.0 h1:xodYlSng33KWGvIGMpKUyLcIZRXKiNUx612mZJqYrDg=
github.com/variantdev/dag v1.1.0/go.mod h1:pH1TQsNSLj2uxMo9NNl9zdGy01Wtn+/2MT96BrKmVyE=
github.com/werf/kubedog v0.13.0 h1:ys+GyZbIMqm0r2po0HClbONcEnS5cWSFR2BayIfBqsY=
github.com/werf/kubedog v0.13.0/go.mod h1:Y6pesrIN5uhFKqmHnHSoeW4jmVyZlWPFWv5SjB0rUPg=
github.com/werf/kubedog-for-werf-helm v0.0.0-20241217155728-9d45c48b82b6 h1:lpgQPTCp+wNJfTqJWtR6A5gRA4e4m/eRJFV7V18XCoA=
github.com/werf/kubedog-for-werf-helm v0.0.0-20241217155728-9d45c48b82b6/go.mod h1:PA9xGVKX9Il6sCgvPrcB3/FahRme3bXRz4BuylvAssc=
github.com/werf/logboek v0.6.1 h1:oEe6FkmlKg0z0n80oZjLplj6sXcBeLleCkjfOOZEL2g=
github.com/werf/logboek v0.6.1/go.mod h1:Gez5J4bxekyr6MxTmIJyId1F61rpO+0/V4vjCIEIZmk=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=


@@ -18,12 +18,16 @@ type TrackOptions struct {
Logs bool
LogsSince time.Duration
Filter *resource.FilterConfig
QPS float32
Burst int
}
func NewTrackOptions() *TrackOptions {
return &TrackOptions{
Timeout: 5 * time.Minute,
LogsSince: 10 * time.Minute,
QPS: 100,
Burst: 200,
}
}
@@ -41,3 +45,13 @@ func (o *TrackOptions) WithFilterConfig(config *resource.FilterConfig) *TrackOpt
o.Filter = config
return o
}
func (o *TrackOptions) WithQPS(qps float32) *TrackOptions {
o.QPS = qps
return o
}
func (o *TrackOptions) WithBurst(burst int) *TrackOptions {
o.Burst = burst
return o
}


@@ -3,16 +3,23 @@ package kubedog
import (
"context"
"fmt"
"math"
"os"
"strings"
"sync"
"time"
"github.com/werf/kubedog/pkg/kube"
"github.com/werf/kubedog/pkg/tracker"
"github.com/werf/kubedog/pkg/trackers/rollout/multitrack"
"github.com/werf/kubedog-for-werf-helm/pkg/tracker"
"github.com/werf/kubedog-for-werf-helm/pkg/trackers/rollout/multitrack"
"go.uber.org/zap"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/client-go/discovery"
"k8s.io/client-go/discovery/cached/memory"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/restmapper"
"k8s.io/client-go/tools/clientcmd"
"github.com/helmfile/helmfile/pkg/resource"
)
@@ -20,19 +27,32 @@ import (
type cacheKey struct {
kubeContext string
kubeconfig string
qps float32
burst int
}
type clientCacheEntry struct {
clientSet kubernetes.Interface
dynamicClient dynamic.Interface
restConfig *rest.Config
discovery discovery.CachedDiscoveryInterface
mapper meta.RESTMapper
}
var (
kubeInitMu sync.Mutex
clientCache = make(map[cacheKey]kubernetes.Interface)
clientCache = make(map[cacheKey]clientCacheEntry)
)
type Tracker struct {
logger *zap.SugaredLogger
clientSet kubernetes.Interface
trackOptions *TrackOptions
filter *resource.ResourceFilter
namespace string
logger *zap.SugaredLogger
clientSet kubernetes.Interface
dynamicClient dynamic.Interface
discovery discovery.CachedDiscoveryInterface
mapper meta.RESTMapper
trackOptions *TrackOptions
filter *resource.ResourceFilter
namespace string
}
type TrackerConfig struct {
@@ -41,6 +61,8 @@ type TrackerConfig struct {
KubeContext string
Kubeconfig string
TrackOptions *TrackOptions
KubedogQPS *float32
KubedogBurst *int
}
func NewTracker(config *TrackerConfig) (*Tracker, error) {
@@ -54,58 +76,115 @@ func NewTracker(config *TrackerConfig) (*Tracker, error) {
kubeconfig = os.Getenv("KUBECONFIG")
}
clientSet, err := getOrCreateClient(config.KubeContext, kubeconfig)
if err != nil {
return nil, fmt.Errorf("failed to initialize kubernetes client: %w", err)
}
options := config.TrackOptions
if options == nil {
options = NewTrackOptions()
}
qps := options.QPS
if config.KubedogQPS != nil {
qps = *config.KubedogQPS
}
burst := options.Burst
if config.KubedogBurst != nil {
burst = *config.KubedogBurst
}
if qps <= 0 || math.IsInf(float64(qps), 0) || math.IsNaN(float64(qps)) {
return nil, fmt.Errorf("invalid kubedog QPS %v: must be > 0 and finite", qps)
}
if burst < 1 {
return nil, fmt.Errorf("invalid kubedog burst %v: must be >= 1", burst)
}
cacheEntry, err := getOrCreateClients(config.KubeContext, kubeconfig, qps, burst)
if err != nil {
return nil, fmt.Errorf("failed to initialize kubernetes clients: %w", err)
}
var filter *resource.ResourceFilter
if options.Filter != nil {
filter = resource.NewResourceFilter(options.Filter, logger)
}
return &Tracker{
logger: logger,
clientSet: clientSet,
trackOptions: options,
filter: filter,
namespace: config.Namespace,
logger: logger,
clientSet: cacheEntry.clientSet,
dynamicClient: cacheEntry.dynamicClient,
discovery: cacheEntry.discovery,
mapper: cacheEntry.mapper,
trackOptions: options,
filter: filter,
namespace: config.Namespace,
}, nil
}
func getOrCreateClient(kubeContext, kubeconfig string) (kubernetes.Interface, error) {
func getOrCreateClients(kubeContext, kubeconfig string, qps float32, burst int) (clientCacheEntry, error) {
key := cacheKey{
kubeContext: kubeContext,
kubeconfig: kubeconfig,
qps: qps,
burst: burst,
}
kubeInitMu.Lock()
if cache, ok := clientCache[key]; ok {
kubeInitMu.Unlock()
return cache, nil
}
kubeInitMu.Unlock()
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
if kubeconfig != "" {
loadingRules.ExplicitPath = kubeconfig
}
overrides := &clientcmd.ConfigOverrides{}
if kubeContext != "" {
overrides.CurrentContext = kubeContext
}
cc := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, overrides)
restConfig, err := cc.ClientConfig()
if err != nil {
return clientCacheEntry{}, fmt.Errorf("failed to load kubeconfig: %w", err)
}
restConfig.QPS = qps
restConfig.Burst = burst
clientSet, err := kubernetes.NewForConfig(restConfig)
if err != nil {
return clientCacheEntry{}, fmt.Errorf("failed to create kubernetes client: %w", err)
}
dynamicClient, err := dynamic.NewForConfig(restConfig)
if err != nil {
return clientCacheEntry{}, fmt.Errorf("failed to create dynamic client: %w", err)
}
discoveryClient := memory.NewMemCacheClient(clientSet.Discovery())
mapper := restmapper.NewDeferredDiscoveryRESTMapper(discoveryClient)
cache := clientCacheEntry{
clientSet: clientSet,
dynamicClient: dynamicClient,
restConfig: restConfig,
discovery: discoveryClient,
mapper: mapper,
}
kubeInitMu.Lock()
defer kubeInitMu.Unlock()
if client, ok := clientCache[key]; ok {
return client, nil
if existingCache, ok := clientCache[key]; ok {
return existingCache, nil
}
initOpts := kube.InitOptions{
KubeConfigOptions: kube.KubeConfigOptions{
Context: kubeContext,
ConfigPath: kubeconfig,
},
}
clientCache[key] = cache
if err := kube.Init(initOpts); err != nil {
return nil, err
}
client := kube.Kubernetes
clientCache[key] = client
return client, nil
return cache, nil
}
func (t *Tracker) TrackResources(ctx context.Context, resources []*resource.Resource) error {
@@ -155,16 +234,29 @@ func (t *Tracker) TrackResources(ctx context.Context, resources []*resource.Reso
Namespace: namespace,
SkipLogs: !t.trackOptions.Logs,
})
case "canary":
specs.Canaries = append(specs.Canaries, multitrack.MultitrackSpec{
ResourceName: res.Name,
Namespace: namespace,
SkipLogs: !t.trackOptions.Logs,
})
default:
t.logger.Debugf("Skipping unsupported kind %s for resource %s/%s", res.Kind, namespace, res.Name)
}
}
if len(specs.Deployments)+len(specs.StatefulSets)+len(specs.DaemonSets)+len(specs.Jobs) == 0 {
t.logger.Info("No trackable resources found (only Deployment, StatefulSet, DaemonSet, and Job are supported)")
totalResources := len(specs.Deployments) + len(specs.StatefulSets) +
len(specs.DaemonSets) + len(specs.Jobs) + len(specs.Canaries)
if totalResources == 0 {
t.logger.Info("No trackable resources found (only Deployment, StatefulSet, DaemonSet, Job, and Canary are supported)")
return nil
}
t.logger.Infof("Tracking breakdown: Deployments=%d, StatefulSets=%d, DaemonSets=%d, Jobs=%d, Canaries=%d",
len(specs.Deployments), len(specs.StatefulSets), len(specs.DaemonSets),
len(specs.Jobs), len(specs.Canaries))
opts := multitrack.MultitrackOptions{
Options: tracker.Options{
ParentContext: ctx,
@@ -172,6 +264,9 @@ func (t *Tracker) TrackResources(ctx context.Context, resources []*resource.Reso
LogsFromTime: time.Now().Add(-t.trackOptions.LogsSince),
},
StatusProgressPeriod: 5 * time.Second,
DynamicClient: t.dynamicClient,
DiscoveryClient: t.discovery,
Mapper: t.mapper,
}
err := multitrack.Multitrack(t.clientSet, specs, opts)


@@ -1,10 +1,13 @@
package kubedog
import (
"math"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/helmfile/helmfile/pkg/resource"
)
@@ -86,3 +89,189 @@ func TestTrackOptions_WithFilterConfig(t *testing.T) {
assert.Equal(t, []string{"Deployment", "StatefulSet"}, opts.Filter.TrackKinds)
assert.Equal(t, []string{"ConfigMap"}, opts.Filter.SkipKinds)
}
func TestTrackOptions_WithQPS(t *testing.T) {
opts := NewTrackOptions()
opts = opts.WithQPS(50.0)
assert.Equal(t, float32(50.0), opts.QPS)
}
func TestTrackOptions_WithBurst(t *testing.T) {
opts := NewTrackOptions()
opts = opts.WithBurst(100)
assert.Equal(t, 100, opts.Burst)
}
func TestTrackOptions_DefaultQPSBurst(t *testing.T) {
opts := NewTrackOptions()
assert.Equal(t, float32(100), opts.QPS)
assert.Equal(t, 200, opts.Burst)
}
func TestTrackerConfig_WithQPSBurst(t *testing.T) {
qps := float32(50.0)
burst := 100
config := &TrackerConfig{
Logger: nil,
Namespace: "test-ns",
KubeContext: "test-ctx",
Kubeconfig: "/test/kubeconfig",
TrackOptions: NewTrackOptions(),
KubedogQPS: &qps,
KubedogBurst: &burst,
}
assert.NotNil(t, config)
assert.Equal(t, "test-ns", config.Namespace)
assert.Equal(t, &qps, config.KubedogQPS)
assert.Equal(t, &burst, config.KubedogBurst)
assert.Equal(t, float32(50.0), *config.KubedogQPS)
assert.Equal(t, 100, *config.KubedogBurst)
}
func TestNewTracker_InvalidQPS(t *testing.T) {
invalidQPS := float32(-1.0)
burst := 100
cfg := &TrackerConfig{
Logger: nil,
Namespace: "test-ns",
KubeContext: "test-ctx",
Kubeconfig: "/nonexistent/kubeconfig",
TrackOptions: NewTrackOptions(),
KubedogQPS: &invalidQPS,
KubedogBurst: &burst,
}
tr, err := NewTracker(cfg)
assert.Error(t, err)
assert.Nil(t, tr)
assert.Contains(t, err.Error(), "invalid kubedog QPS")
assert.Contains(t, err.Error(), "must be > 0")
}
func TestNewTracker_NaNQPS(t *testing.T) {
nanQPS := float32(math.NaN())
burst := 100
cfg := &TrackerConfig{
Logger: nil,
Namespace: "test-ns",
KubeContext: "test-ctx",
Kubeconfig: "/nonexistent/kubeconfig",
TrackOptions: NewTrackOptions(),
KubedogQPS: &nanQPS,
KubedogBurst: &burst,
}
tr, err := NewTracker(cfg)
assert.Error(t, err)
assert.Nil(t, tr)
assert.Contains(t, err.Error(), "invalid kubedog QPS")
assert.Contains(t, err.Error(), "must be > 0 and finite")
}
func TestNewTracker_InfQPS(t *testing.T) {
infQPS := float32(math.Inf(1))
burst := 100
cfg := &TrackerConfig{
Logger: nil,
Namespace: "test-ns",
KubeContext: "test-ctx",
Kubeconfig: "/nonexistent/kubeconfig",
TrackOptions: NewTrackOptions(),
KubedogQPS: &infQPS,
KubedogBurst: &burst,
}
tr, err := NewTracker(cfg)
assert.Error(t, err)
assert.Nil(t, tr)
assert.Contains(t, err.Error(), "invalid kubedog QPS")
assert.Contains(t, err.Error(), "must be > 0 and finite")
}
func TestNewTracker_InvalidBurst(t *testing.T) {
qps := float32(50.0)
invalidBurst := 0
cfg := &TrackerConfig{
Logger: nil,
Namespace: "test-ns",
KubeContext: "test-ctx",
Kubeconfig: "/nonexistent/kubeconfig",
TrackOptions: NewTrackOptions(),
KubedogQPS: &qps,
KubedogBurst: &invalidBurst,
}
tr, err := NewTracker(cfg)
assert.Error(t, err)
assert.Nil(t, tr)
assert.Contains(t, err.Error(), "invalid kubedog burst")
assert.Contains(t, err.Error(), "must be >= 1")
}
func TestNewTracker_ValidQPSBurst(t *testing.T) {
qps := float32(50.0)
burst := 100
// Create a minimal valid kubeconfig in a temp file
tmpFile, err := os.CreateTemp("", "kubeconfig-*.yaml")
require.NoError(t, err)
defer os.Remove(tmpFile.Name())
kubeconfigContent := `
apiVersion: v1
kind: Config
clusters:
- cluster:
server: https://test-server:6443
name: test-cluster
contexts:
- context:
cluster: test-cluster
user: test-user
name: test-context
current-context: test-context
users:
- name: test-user
user:
token: test-token
`
_, err = tmpFile.WriteString(kubeconfigContent)
require.NoError(t, err)
require.NoError(t, tmpFile.Close())
cfg := &TrackerConfig{
Logger: nil,
Namespace: "test-ns",
KubeContext: "test-context",
Kubeconfig: tmpFile.Name(),
TrackOptions: NewTrackOptions(),
KubedogQPS: &qps,
KubedogBurst: &burst,
}
// This should succeed - validation passes and client is created
tr, err := NewTracker(cfg)
// The test should pass validation. It may fail later due to invalid cluster,
// but that's okay - we're testing that QPS/Burst validation works.
if err != nil {
// If there's an error, it should NOT be about invalid QPS/Burst
assert.NotContains(t, err.Error(), "invalid kubedog QPS")
assert.NotContains(t, err.Error(), "invalid kubedog burst")
} else {
// If no error, tracker should be created successfully
assert.NotNil(t, tr)
}
}


@@ -480,6 +480,8 @@ func (st *HelmState) trackWithKubedog(ctx context.Context, release *ReleaseSpec,
KubeContext: kubeContext,
Kubeconfig: st.kubeconfig,
TrackOptions: trackOpts,
KubedogQPS: release.KubedogQPS,
KubedogBurst: release.KubedogBurst,
})
if err != nil {
return fmt.Errorf("failed to create kubedog tracker: %w", err)


@@ -466,6 +466,10 @@ type ReleaseSpec struct {
SkipKinds []string `yaml:"skipKinds,omitempty"`
// TrackResources is a whitelist of specific resources to track
TrackResources []TrackResourceSpec `yaml:"trackResources,omitempty"`
// KubedogQPS specifies the QPS (queries per second) for kubedog kubernetes client
KubedogQPS *float32 `yaml:"kubedogQPS,omitempty"`
// KubedogBurst specifies the burst for kubedog kubernetes client
KubedogBurst *int `yaml:"kubedogBurst,omitempty"`
}
// TrackResourceSpec specifies a resource to track


@@ -38,39 +38,39 @@ func TestGenerateID(t *testing.T) {
run(testcase{
subject: "baseline",
release: ReleaseSpec{Name: "foo", Chart: "incubator/raw"},
want: "foo-values-dd88b94b8",
want: "foo-values-6d799cf798",
})
run(testcase{
subject: "different bytes content",
release: ReleaseSpec{Name: "foo", Chart: "incubator/raw"},
data: []byte(`{"k":"v"}`),
want: "foo-values-6fb7bbb95f",
want: "foo-values-7f885447bf",
})
run(testcase{
subject: "different map content",
release: ReleaseSpec{Name: "foo", Chart: "incubator/raw"},
data: map[string]any{"k": "v"},
want: "foo-values-56d84c9897",
want: "foo-values-86f5d8fb55",
})
run(testcase{
subject: "different chart",
release: ReleaseSpec{Name: "foo", Chart: "stable/envoy"},
want: "foo-values-6644fc9d47",
want: "foo-values-5cd5c65db5",
})
run(testcase{
subject: "different name",
release: ReleaseSpec{Name: "bar", Chart: "incubator/raw"},
want: "bar-values-859cd849bf",
want: "bar-values-c59b4f979",
})
run(testcase{
subject: "specific ns",
release: ReleaseSpec{Name: "foo", Chart: "incubator/raw", Namespace: "myns"},
want: "myns-foo-values-86d544f7f9",
want: "myns-foo-values-56d6cd88cc",
})
for id, n := range ids {