Compare commits


106 Commits

Author SHA1 Message Date
Kubernetes Prow Robot 59a3ca2cd1
Merge pull request #291 from yonatankahana/changelog-v4.0.3
docs: add changelog for v4.0.3
2024-02-15 20:07:40 -08:00
Kubernetes Prow Robot 26cbd86586
Merge pull request #327 from yonatankahana/security-2024-01
fix: resolve all trivy vulnerabilities (2024-01-25)
2024-02-15 10:07:16 -08:00
Yonatan Kahana 8e39667063
docs: add changelog for v4.0.3 2024-01-25 19:15:00 +02:00
Yonatan Kahana 0a84e4ba40
fix: resolve all trivy vulnerabilities (2024-01-25) 2024-01-25 19:01:15 +02:00
Kubernetes Prow Robot 814e0d8d92
Merge pull request #301 from damacus/base-image
Update container version
2023-10-25 22:34:16 +02:00
Dan Webb ed64e85e91
Fix boilerplate warning
Running make test-boilerplate returned the following error, so it was
fixed accordingly:

Header in kubernetes-sigs/nfs-subdir-external-provisioner/release-tools/../Dockerfile does not match reference, diff:

--- reference
+++ kubernetes-sigs/nfs-subdir-external-provisioner/release-tools/../Dockerfile
@@ -1,4 +1,4 @@
-# Copyright YEAR The Kubernetes Authors.
+# Copyright YEAR-YEAR The Kubernetes Authors.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
2023-08-22 14:46:01 +01:00
Dan Webb 0eb8f747af
Update container version
- Update multiarch build container to use golang:1.19 as the project is tested on 1.19
- Update the multiarch container to run Alpine 3.18
  Updating to this version reduces the number of known vulnerabilities in the image
- Update go.mod to match the container version
2023-08-22 14:38:11 +01:00
Kubernetes Prow Robot 3740e01621
Merge pull request #289 from jongwooo/chore/replace-deprecated-command-with-environment-file
Replace deprecated command with environment file
2023-07-13 08:38:37 -07:00
Kubernetes Prow Robot d6bd72dbba
Merge pull request #287 from yonatankahana/security-2023-06
fix: resolve all trivy vulnerabilities (2023-06-06)
2023-06-18 21:18:20 -07:00
jongwooo 7c6e9564c2 Replace deprecated command with environment file 2023-06-17 00:10:29 +09:00
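For context: GitHub Actions deprecated the `::set-output` workflow command in favor of appending `key=value` pairs to the `$GITHUB_OUTPUT` environment file. A minimal sketch of the migration pattern (the job, step id, and output names here are illustrative, not this workflow's actual ones):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - id: prep            # hypothetical step id
        run: |
          TAGS="example/image:latest"
          # deprecated: echo ::set-output name=tags::${TAGS}
          echo "tags=${TAGS}" >> "$GITHUB_OUTPUT"
      - run: echo "tags are ${{ steps.prep.outputs.tags }}"
```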
Yonatan Kahana fed959e469
fix: resolve all trivy vulnerabilities (2023-06-06)
- bump go version to 1.17
- resolved: CVE-2022-21698, CVE-2022-27664, CVE-2022-41723, CVE-2022-41717, CVE-2022-29526, CVE-2022-32149, CVE-2022-28948.
2023-06-06 23:44:58 +03:00
Kubernetes Prow Robot c2a2d5d544
Merge pull request #267 from justdan96/master
Use a PDB if enabled in the values.yaml
2023-03-13 13:26:49 -07:00
Dan 63586f9412
remove all the extraneous newlines 2023-03-13 18:56:33 +00:00
Dan 915f227a52
fix syntax issues 2023-03-13 18:55:33 +00:00
Dan 1626a8e137
Merge branch 'master' into master 2023-03-07 07:19:54 -05:00
Dan 9f37ddea34
realign title of table in chart Readme 2023-03-07 07:18:30 -05:00
Dan ff5803a052
rebase and reformat chart Readme 2023-03-07 07:17:14 -05:00
Kubernetes Prow Robot cf9332adf0
Merge pull request #252 from megian/switch-to-registry.k8s.io
Switch from k8s.gcr.io to registry.k8s.io
2023-03-03 15:16:57 -08:00
Dan 6fd66db354
update Chart README with PDB values 2023-02-27 10:36:35 +00:00
Gabriel Mainberger 7c1b24a12b Switch from k8s.gcr.io to registry.k8s.io
Reference: https://github.com/kubernetes/registry.k8s.io#background
2023-02-20 15:47:36 +01:00
Dan 35d0a6b7c6
line endings 2023-02-20 10:18:06 +00:00
Dan cc084df759
integrate suggestions for PDB enhancements 2023-02-20 10:15:54 +00:00
Dan 8af604bdc7
add a simple PDB if the replicaCount > 1 2023-02-17 10:49:37 +00:00
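The chart changes appear later in this compare (values, helper, and template). As a sketch, with `podDisruptionBudget.enabled=true` the rendered object looks roughly like this; the metadata and labels below are illustrative, not the chart's exact output:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nfs-subdir-external-provisioner
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: nfs-subdir-external-provisioner
```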
Kubernetes Prow Robot 4c82f69eef
Merge pull request #221 from forselli-stratio/fix-delete
Fix onDelete option for subdirectories
2022-11-24 05:42:05 -08:00
Kubernetes Prow Robot 53d4af87b0
Merge pull request #246 from jsafrane/prow-update-master
master: update release-tools
2022-11-11 14:37:57 -08:00
Jan Safranek 6ffc975321 Merge commit '7f54f258f2cc0c9ddb90c44114736ea976b58190' into prow-update-master 2022-11-11 18:29:46 +01:00
Jan Safranek c711349d1e Squashed 'release-tools/' changes from 31a3f38..78c0fb7
https://github.com/kubernetes-csi/csi-release-tools/commit/78c0fb7 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/208 from jsafrane/skip-selinux
https://github.com/kubernetes-csi/csi-release-tools/commit/36e433e Skip SELinux tests in CI by default
https://github.com/kubernetes-csi/csi-release-tools/commit/348d4a9 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/207 from RaunakShah/reviewers
https://github.com/kubernetes-csi/csi-release-tools/commit/1efc272 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/206 from RaunakShah/update-prow
https://github.com/kubernetes-csi/csi-release-tools/commit/7d410d8 Changes to csi prow to run e2e tests in sidecars
https://github.com/kubernetes-csi/csi-release-tools/commit/cfa5a75 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/203 from humblec/test-vendor
https://github.com/kubernetes-csi/csi-release-tools/commit/4edd1d8 Add RaunakShah to CSI reviewers group
https://github.com/kubernetes-csi/csi-release-tools/commit/7ccc959 release tools update to 1.19
https://github.com/kubernetes-csi/csi-release-tools/commit/d24254f Merge https://github.com/kubernetes-csi/csi-release-tools/pull/202 from xing-yang/kind_0.14.0
https://github.com/kubernetes-csi/csi-release-tools/commit/0faa3fc Update to Kind v0.14.0 images
https://github.com/kubernetes-csi/csi-release-tools/commit/ef4e1b2 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/201 from xing-yang/add_1.24_image
https://github.com/kubernetes-csi/csi-release-tools/commit/4ddce25 Add 1.24 Kind image
https://github.com/kubernetes-csi/csi-release-tools/commit/7fe5149 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/200 from pohly/bump-kubernetes-version
https://github.com/kubernetes-csi/csi-release-tools/commit/70915a8 prow.sh: update snapshotter version

git-subtree-dir: release-tools
git-subtree-split: 78c0fb714fa4448b29962a0f34fa18b7b7d97ae6
2022-11-11 18:29:44 +01:00
Kubernetes Prow Robot a64e0dc323
Merge pull request #228 from pohly/prow-update-master
master: update release-tools
2022-08-08 08:06:19 -07:00
Patrick Ohly 0242f5ea47 Merge commit 'dd31db3bcfa4287989eb45434201a88f3b6ce0ce' into prow-update-master 2022-08-05 17:41:31 +02:00
Patrick Ohly 5a6f321062 Squashed 'release-tools/' changes from e4dab7f..31a3f38
https://github.com/kubernetes-csi/csi-release-tools/commit/31a3f38 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/199 from pohly/bump-kubernetes-version
https://github.com/kubernetes-csi/csi-release-tools/commit/7577454 prow.sh: bump Kubernetes to v1.22.0
https://github.com/kubernetes-csi/csi-release-tools/commit/d29a2e7 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/198 from pohly/csi-test-5.0.0
https://github.com/kubernetes-csi/csi-release-tools/commit/41cb70d prow.sh: sanity testing with csi-test v5.0.0
https://github.com/kubernetes-csi/csi-release-tools/commit/c85a63f Merge https://github.com/kubernetes-csi/csi-release-tools/pull/197 from pohly/fix-alpha-testing
https://github.com/kubernetes-csi/csi-release-tools/commit/b86d8e9 support Kubernetes 1.25 + Ginkgo v2
https://github.com/kubernetes-csi/csi-release-tools/commit/ab0b0a3 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/192 from andyzhangx/patch-1
https://github.com/kubernetes-csi/csi-release-tools/commit/7bbab24 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/196 from humblec/non-alpha
https://github.com/kubernetes-csi/csi-release-tools/commit/e51ff2c introduce control variable for non alpha feature gate configuration
https://github.com/kubernetes-csi/csi-release-tools/commit/ca19ef5 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/195 from pohly/fix-alpha-testing
https://github.com/kubernetes-csi/csi-release-tools/commit/3948331 fix testing with latest Kubernetes
https://github.com/kubernetes-csi/csi-release-tools/commit/9a0260c fix boilerplate header

git-subtree-dir: release-tools
git-subtree-split: 31a3f38b78412afc543a9333a057dedbb8c01289
2022-08-05 17:41:28 +02:00
Kubernetes Prow Robot d080ea9670
Merge pull request #215 from tinguin/172-volumeBindingMode
Fixes Issue #172
2022-08-03 13:31:46 -07:00
Florian Winkler 6d07787c03
Update Chart.yaml 2022-08-03 22:13:29 +02:00
Kubernetes Prow Robot 2d0f29a82b
Merge pull request #220 from NeverBehave/patch-1
fix(docs): managed-nfs-storage -> nfs-client
2022-07-10 14:23:47 -07:00
Francisco Orselli 386af3ff87
Fix onDelete option for subdirectories 2022-07-06 09:45:58 +02:00
NeverBehave 92fd6aac0e
fix(docs): managed-nfs-storage -> nfs-client
Changed at 65b3ac93b6 

But it missed the one in README.md
2022-06-24 14:54:02 -07:00
Florian Winkler aca10bf343 There was no option to set the volumeBindingMode for the StorageClass.
Added the option in
nfs-subdir-external-provisioner/charts/nfs-subdir-external-provisioner/templates/storageclass.yaml
and nfs-subdir-external-provisioner/charts/nfs-subdir-external-provisioner/values.yaml, and updated charts/nfs-subdir-external-provisioner/README.md

This fixes Issue #172
2022-06-10 11:13:43 +02:00
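For reference, the new chart option is set from values; a minimal sketch, with `WaitForFirstConsumer` shown as the alternative to the chart's `Immediate` default:

```yaml
# values.yaml (sketch)
storageClass:
  # Immediate binds a PV as soon as the PVC is created;
  # WaitForFirstConsumer delays binding until a pod uses the claim.
  volumeBindingMode: WaitForFirstConsumer
```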
Kubernetes Prow Robot 0bd4e87b34
Merge pull request #212 from pohly/prow-update-master
master: update release-tools
2022-06-03 16:32:20 -07:00
Patrick Ohly 918cf69774 Merge commit '5dea4236b62f4da49996138fd7b20c498814388a' into prow-update-master 2022-06-03 17:39:01 +02:00
Patrick Ohly 30ffcb5009 Squashed 'release-tools/' changes from 37d1104..e4dab7f
https://github.com/kubernetes-csi/csi-release-tools/commit/e4dab7f Merge https://github.com/kubernetes-csi/csi-release-tools/pull/194 from yselkowitz/registry-k8s-io
https://github.com/kubernetes-csi/csi-release-tools/commit/84a4d5a Move from k8s.gcr.io to registry.k8s.io

git-subtree-dir: release-tools
git-subtree-split: e4dab7ff57c24cf3e8d37cd3365636fddaff7e0a
2022-06-03 17:38:59 +02:00
Kubernetes Prow Robot dc17061230
Merge pull request #208 from yonatankahana/golang-1.18
build: build multiarch image with golang 1.18
2022-05-31 20:47:04 -07:00
Kubernetes Prow Robot e64e04ed3f
Merge pull request #209 from yonatankahana/fix-build-platforms
build: fix build platforms to new scheme
2022-05-31 20:43:04 -07:00
Yonatan Kahana f0a5f68796 build: fix build platforms to new scheme 2022-05-28 23:11:28 +03:00
Yonatan Kahana cb09d0bab2 build: build multiarch image with golang 1.18 2022-05-28 22:27:36 +03:00
Kubernetes Prow Robot c0409446f6
Merge pull request #90 from Weilbyte/Weilbyte/kustomize
Add Kustomize support
2022-05-26 11:03:23 -07:00
Weilbyte aff7adcdde
Move bases to bases instead of resources 2022-05-26 12:29:47 +02:00
Kubernetes Prow Robot c43b38de5c
Merge pull request #207 from yonatankahana/cve-2022-27191
fix: resolve CVE-2022-27191 in golang.org/x/crypto
2022-05-25 20:17:22 -07:00
Yonatan Kahana ce3e66514d fix: resolve CVE-2022-27191 in golang.org/x/crypto
Signed-off-by: Yonatan Kahana <yonatankahana.il@gmail.com>
2022-05-25 21:13:10 +03:00
Kubernetes Prow Robot b8c5a755f1
Merge pull request #199 from humblec/flaky
remove BUILD_PLATFORM variable setting from the makefile
2022-05-10 06:17:44 -07:00
Humble Chirammal 9383bb41cb remove BUILD_PLATFORM variable setting from the makefile
as the tuple of BUILD_PLATFORM changed in release-tools for
multi-arch builds, we need to either correct the variable
in this repo or delegate that setting to csi-release-tools

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-05-10 18:15:31 +05:30
Kubernetes Prow Robot 485d0f2552
Merge pull request #190 from pohly/prow-update-master
master: update release-tools
2022-04-14 02:08:46 -07:00
Patrick Ohly 2d3745ec84 Merge commit '9ee89eec502067088a7775321af5040f57e894fb' into prow-update-master 2022-04-04 12:45:56 +02:00
Patrick Ohly 72b77f082d Squashed 'release-tools/' changes from 335339f..37d1104
https://github.com/kubernetes-csi/csi-release-tools/commit/37d1104 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/191 from pohly/go-1.18
https://github.com/kubernetes-csi/csi-release-tools/commit/db917f5 update to Go 1.18

git-subtree-dir: release-tools
git-subtree-split: 37d1104926b30aa0544abb5a16c980ba82aaa585
2022-04-04 12:45:54 +02:00
Kubernetes Prow Robot aafc18362f
Merge pull request #189 from yonatankahana/component-helpers
build: import GetPersistentVolumeClass from component-helpers
2022-04-03 22:08:10 -07:00
Yonatan Kahana 9015173f00 build: import GetPersistentVolumeClass from component-helpers
Signed-off-by: Yonatan Kahana <yonatankahana.il@gmail.com>
2022-04-02 01:07:15 +03:00
Kubernetes Prow Robot 23794089df
Merge pull request #176 from humblec/new
Handle error and also correct error strings.
2022-03-07 07:02:55 -08:00
Humble Chirammal 7c23a7dd7a Remove unnecessary conversion of resource storage
This commit also makes further corrections: error handling,
formatting, typos, etc.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-03-04 14:31:34 +05:30
Kubernetes Prow Robot 69979f8f49
Merge pull request #175 from humblec/kube-1.23
rebase: update kube version to 1.23
2022-03-03 23:18:53 -08:00
Humble Chirammal 656623a00f rebase: update kube version to 1.23
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-03-03 12:58:23 +05:30
Kubernetes Prow Robot 5fb9469c3c
Merge pull request #179 from pohly/prow-update-master
master: update release-tools
2022-02-16 11:38:46 -08:00
Patrick Ohly 78d26129c2 Merge commit 'd8148a00f44ea7e3a06d6846733d92edadc1d840' into prow-update-master 2022-02-16 17:32:52 +01:00
Patrick Ohly faa1cede8d Squashed 'release-tools/' changes from a6a1a79..335339f
https://github.com/kubernetes-csi/csi-release-tools/commit/335339f Merge https://github.com/kubernetes-csi/csi-release-tools/pull/187 from mauriciopoppe/remove-eol-windows-versions
https://github.com/kubernetes-csi/csi-release-tools/commit/890b87a Merge https://github.com/kubernetes-csi/csi-release-tools/pull/188 from pwschuurman/update-release-notes-docs
https://github.com/kubernetes-csi/csi-release-tools/commit/274bc9b Update Sidecar Release Process documentation to reference latest syntax for release-notes tool
https://github.com/kubernetes-csi/csi-release-tools/commit/87b6c37 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/185 from Garima-Negi/fix-OWNERS-files
https://github.com/kubernetes-csi/csi-release-tools/commit/f1de2c6 Fix OWNERS file - squashed commits
https://github.com/kubernetes-csi/csi-release-tools/commit/59ae38b Remove EOL windows versions from BUILD_PLATFORMS
https://github.com/kubernetes-csi/csi-release-tools/commit/5d66471 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/186 from humblec/sp
https://github.com/kubernetes-csi/csi-release-tools/commit/d066f1b Correct prow.sh typo and make codespell linter pass
https://github.com/kubernetes-csi/csi-release-tools/commit/762e22d Merge https://github.com/kubernetes-csi/csi-release-tools/pull/184 from pohly/image-publishing-troubleshooting
https://github.com/kubernetes-csi/csi-release-tools/commit/81e26c3 SIDECAR_RELEASE_PROCESS.md: add troubleshooting for image publishing
https://github.com/kubernetes-csi/csi-release-tools/commit/31aa44d Merge https://github.com/kubernetes-csi/csi-release-tools/pull/182 from chrishenzie/csi-sanity-version
https://github.com/kubernetes-csi/csi-release-tools/commit/f49e141 Update csi-sanity test suite to v4.3.0
https://github.com/kubernetes-csi/csi-release-tools/commit/d9815c2 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/181 from mauriciopoppe/armv7-support
https://github.com/kubernetes-csi/csi-release-tools/commit/05c1801 Add support to build arm/v7 images
https://github.com/kubernetes-csi/csi-release-tools/commit/4aedf35 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/178 from xing-yang/timeout
https://github.com/kubernetes-csi/csi-release-tools/commit/2b9897e Increase build timeout
https://github.com/kubernetes-csi/csi-release-tools/commit/51d3702 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/177 from mauriciopoppe/kind-image-1.23
https://github.com/kubernetes-csi/csi-release-tools/commit/a30efea Add kind image for 1.23

git-subtree-dir: release-tools
git-subtree-split: 335339f059da0b8b1947794a8c75d9e5b973cb79
2022-02-16 17:32:40 +01:00
Humble Chirammal e53fbc3d59 Handle errors in some conditions and also correct error messages
This commit also changes the `Replace` string method to `ReplaceAll`.

Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-02-10 12:26:03 +05:30
Humble Chirammal 2417ea0219 update OWNERS file
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2022-02-10 12:20:25 +05:30
Kubernetes Prow Robot 7708e43674
Merge pull request #166 from yonatankahana/develop
docs: consistent StorageClass names, multiple provisioners installation
2022-02-09 19:57:46 -08:00
Kubernetes Prow Robot aa70554ae0
Merge pull request #163 from pohly/prow-update-master
master: update release-tools
2022-02-09 02:12:17 -08:00
Yonatan Kahana aa7de73518
Update README.md 2022-02-06 00:14:48 +02:00
Yonatan Kahana d12a02587f Merge branch 'master' of github.com:kubernetes-sigs/nfs-subdir-external-provisioner into develop 2022-01-27 22:27:45 +02:00
Yonatan Kahana 65b3ac93b6 docs: consistent StorageClass names, multiple provisioners installation 2022-01-27 22:24:33 +02:00
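Installing multiple provisioners boils down to giving each release a distinct `storageClass.provisionerName` and `storageClass.name`; the README addition later in this compare shows the equivalent `helm install --set` flags. A values sketch for a hypothetical second release:

```yaml
# values for a second release (sketch; server and path are placeholders)
nfs:
  server: y.y.y.y
  path: /other/exported/path
storageClass:
  name: second-nfs-client
  provisionerName: k8s-sigs.io/second-nfs-subdir-external-provisioner
```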
Kubernetes Prow Robot 8e17ace84f
Merge pull request #171 from danieljkemp/securitycontext
Add securityContext options to Helm Chart, bump chart version - Related to Issue #97
2022-01-27 12:21:49 -08:00
Kubernetes Prow Robot 400c160486
Merge pull request #168 from rkmathi/match-deploy-objects
Match deploy/*.yaml and deploy/objects/*.yaml
2022-01-20 13:26:29 -08:00
Daniel Kemp 5c92c0ae51 Update chart version 2022-01-20 16:19:56 -05:00
Daniel Kemp 9c867b50b8 fix indentation 2022-01-20 16:19:40 -05:00
Richard Kugler 8b7f3ae920 Update Values to support securityContext
Adding securityContext variables to the values file.
2022-01-20 16:16:21 -05:00
Richard Kugler ce2849ccd7 Allow use of securityContext
Update Deployment with securityContext for pod and Container
2022-01-20 16:16:21 -05:00
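As a sketch, the new `podSecurityContext` and `securityContext` values (empty by default) can be populated like this; the specific fields below are illustrative:

```yaml
# values.yaml (sketch)
podSecurityContext:
  fsGroup: 65534              # illustrative: group applied to mounted volumes
securityContext:
  runAsUser: 1000             # illustrative: run the container as non-root
  readOnlyRootFilesystem: true
```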
Ryuichi KAWAMATA d8043f1740 Match deploy/*.yaml and deploy/objects/*.yaml 2022-01-10 17:09:25 +09:00
Patrick Ohly 908d7c01e0 Merge commit 'a881aea5f1a87929fb768f05a395cbcaf9812f30' into prow-update-master 2021-11-30 21:17:25 +01:00
Patrick Ohly 740c55df3b Squashed 'release-tools/' changes from 5489de6..a6a1a79
https://github.com/kubernetes-csi/csi-release-tools/commit/a6a1a79 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/176 from pohly/go-1.17.3
https://github.com/kubernetes-csi/csi-release-tools/commit/0a2cf63 prow.sh: bump Go to 1.17.3
https://github.com/kubernetes-csi/csi-release-tools/commit/fc29fdd Merge https://github.com/kubernetes-csi/csi-release-tools/pull/141 from pohly/prune-replace-optional
https://github.com/kubernetes-csi/csi-release-tools/commit/5b9a1e0 Merge https://github.com/kubernetes-csi/csi-release-tools/pull/175 from jimdaga/patch-1
https://github.com/kubernetes-csi/csi-release-tools/commit/5eeb602 images: use k8s-staging-test-infra/gcb-docker-gcloud
https://github.com/kubernetes-csi/csi-release-tools/commit/b46691a go-get-kubernetes.sh: make replace statement pruning optional

git-subtree-dir: release-tools
git-subtree-split: a6a1a7979bf3ebc2bb10d0e33dd11ab281d6d39e
2021-11-30 21:17:23 +01:00
Kubernetes Prow Robot 2fb90f7760
Merge pull request #140 from olifly/separate-reclaimpolicy-for-sc-and-pv
Separate reclaimPolicies for auto-generated StorageClass and main NFS PV
2021-10-03 13:31:07 -07:00
Ólafur Haukur Flygenring ba68ff6946 Updated documentation. 2021-09-30 11:22:55 +00:00
Ólafur Haukur Flygenring 5fe7e4afff * Change comment location 2021-09-27 16:49:05 +00:00
Ólafur Haukur Flygenring 48fab9587f Bump patch version 2021-09-27 12:54:30 +00:00
Kubernetes Prow Robot ff001a120e
Merge pull request #144 from spiffxp/update-release-tools
Update release-tools to 5489de6
2021-09-20 15:32:23 -07:00
Aaron Crickenberger 63cac3cb3f OWNERS: symlink aliases from release-tools 2021-09-20 15:00:11 -07:00
Aaron Crickenberger a17045a909 Squashed 'release-tools/' changes from 3b6d17b..5489de6
5489de6 Merge pull request #174 from mauriciopoppe/bump-kind-version
0c675d4 Bump kind version to v0.11.1
ef69a88 Merge pull request #173 from nick5616/add-ws2022
44c710c added WS2022 to build platforms
0883be4 Merge pull request #171 from pohly/example-commands
02cda51 build.make: support binaries outside of cmd, with optional go.mod
65922ea Merge pull request #170 from pohly/canary-snapshot-controller
c0bdfb3 prow.sh: deploy canary snapshot-controller in canary jobs
0438f15 Merge pull request #167 from c0va23/feature/release-armv7-image
4786f4d Merge pull request #168 from msau42/update-release-prereq
6a2dc64 Remove requirement to be top-level approver. Only maintainers membership is required to do a release
30a4f7b Release armv7 image
ac8108f Merge pull request #165 from consideRatio/pr/update-github-links-ref-to-master-to-HEAD
999b483 docs: make github links reference HEAD instead of main
fd67069 docs: make github links reference HEAD instead of master
c0a4fb1 Merge pull request #164 from anubha-v-ardhan/patch-1
9c6a6c0 Master to main cleanup
682c686 Merge pull request #162 from pohly/pod-name-via-shell-command
36a29f5 Merge pull request #163 from pohly/remove-bazel
68e43ca prow.sh: remove Bazel build support
c5f59c5 prow.sh: allow shell commands in CSI_PROW_SANITY_POD
71c810a Merge pull request #161 from pohly/mock-test-fixes
9e438f8 prow.sh: fix mock testing
d7146c7 Merge pull request #160 from pohly/kind-update
4b6aa60 prow.sh: update to KinD v0.11.0
7cdc76f Merge pull request #159 from pohly/fix-deployment-selection
ef8bd33 prow.sh: more flexible CSI_PROW_DEPLOYMENT, part II
204bc89 Merge pull request #158 from pohly/fix-deployment-selection
61538bb prow.sh: more flexible CSI_PROW_DEPLOYMENT
2b0e6db Merge pull request #157 from humblec/csi-release
a2fcd6d Adding myself to csi reviewers group
f325590 Merge pull request #149 from pohly/cluster-logs
4b03b30 Merge pull request #155 from pohly/owners
a6453c8 owners: introduce aliases
ad83def Merge pull request #153 from pohly/fix-image-builds
5561780 build.make: fix image publishng
29bd39b Merge pull request #152 from pohly/bump-csi-test
bc42793 prow.sh: use csi-test v4.2.0
b546baa Merge pull request #150 from mauriciopoppe/windows-multiarch-args
bfbb6f3 add parameter base_image and addon_image to BUILD_PARAMETERS
2d61d3b Merge pull request #151 from humblec/cm
48e71f0 Replace `which` command ( non standard)  with `command -v` builtin
feb20e2 prow.sh: collect cluster logs
7b96bea Merge pull request #148 from dobsonj/add-checkpathcmd-to-prow
2d2e03b prow.sh: enable -csi.checkpathcmd option in csi-sanity
09d4151 Merge pull request #147 from pohly/mock-testing
74cfbc9 prow.sh: support mock tests
4a3f110 prow.sh: remove obsolete test suppression
6616a6b Merge pull request #146 from pohly/kubernetes-1.21
510fb0f prow.sh: support Kubernetes 1.21
c63c61b prow.sh: add CSI_PROW_DEPLOYMENT_SUFFIX
51ac11c Merge pull request #144 from pohly/pull-jobs
dd54c92 pull-test.sh: test importing csi-release-tools into other repo
7d2643a Merge pull request #143 from pohly/path-setup
6880b0c prow.sh: avoid creating paths unless really running tests
bc0504a Merge pull request #140 from jsafrane/remove-unused-k8s-libs
5b1de1a go-get-kubernetes.sh: remove unused k8s libs
49b4269 Merge pull request #120 from pohly/add-kubernetes-release
a1e1127 Merge pull request #139 from pohly/kind-for-kubernetes-latest
1c0fb09 prow.sh: use KinD main for latest Kubernetes
1d77cfc Merge pull request #138 from pohly/kind-update-0.10
bff2fb7 prow.sh: KinD 0.10.0
95eac33 Merge pull request #137 from pohly/fix-go-version-check
437e431 verify-go-version.sh: fix check after removal of travis.yml
1748b16 Merge pull request #136 from pohly/go-1.16
ec844ea remove travis.yml, Go 1.16
df76aba Merge pull request #134 from andyzhangx/add-build-arg
e314a56 add build-arg ARCH for building multi-arch images, e.g. ARG ARCH FROM k8s.gcr.io/build-image/debian-base-${ARCH}:v2.1.3
7bc70e5 Merge pull request #129 from pohly/squash-documentation
e0b02e7 README.md: document usage of --squash
316cb95 Merge pull request #132 from yiyang5055/bugfix/boilerplate
26e2ab1 fix: default boilerplate path
1add8c1 Merge pull request #133 from pohly/kubernetes-1.20-tag
3e811d6 prow.sh: fix "on-master" prow jobs
1d60e77 Merge pull request #131 from pohly/kubernetes-1.20-tag
9f10459 prow.sh: support building Kubernetes for a specific version
f7e7ee4 docs: steps for adding testing against new Kubernetes release
fe1f284 Merge pull request #121 from kvaps/namespace-check
8fdf0f7 Merge pull request #128 from fengzixu/master
1c94220 fix: fix a bug of csi-sanity
a4c41e6 Merge pull request #127 from pohly/fix-boilerplate
ece0f50 check namespace for snapshot-controller
dbd8967 verify-boilerplate.sh: fix path to script
9289fd1 Merge pull request #125 from sachinkumarsingh092/optional-spelling-boilerplate-checks
ad29307 Make the spelling and boilerplate checks optional
5f06d02 Merge pull request #124 from sachinkumarsingh092/fix-spellcheck-boilerplate-tests
48186eb Fix spelling and boilerplate errors
71690af Merge pull request #122 from sachinkumarsingh092/include-spellcheck-boilerplate-tests
981be3f Adding spelling and boilerplate checks.
2bb7525 Merge pull request #117 from fengzixu/master
4ab8b15 use the tag to replace commit of csi-test
5d74e45 change the csi-test import path to v4
7dcd0a9 upgrade csi-test to v4.0.2

git-subtree-dir: release-tools
git-subtree-split: 5489de6e743cf8362e5ab8275988cc748d0758b0
2021-09-20 14:59:10 -07:00
Aaron Crickenberger 56f0eca301 Merge commit 'a17045a9096bade0f8bbe669f8ef42db771a062f' into update-release-tools 2021-09-20 14:59:10 -07:00
Ólafur Haukur Flygenring f1ea3f760b Set the default reclaimPolicy for the NFS pv to Retain. 2021-09-10 12:21:25 +00:00
Ólafur Haukur Flygenring 584dc27a9e * New value in values.yaml that allows you to have different reclaimPolicies for the StorageClass and the NFS PV. Closes issue: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/128 2021-09-10 09:41:17 +00:00
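In other words, the chart now distinguishes the reclaim policy of the auto-created StorageClass from that of the provisioner's own NFS PV. A minimal values sketch:

```yaml
# values.yaml (sketch)
storageClass:
  reclaimPolicy: Delete   # applies to PVs provisioned for user PVCs
nfs:
  reclaimPolicy: Retain   # applies to the main NFS volume backing the subdirs
```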
Kubernetes Prow Robot 0a8665a8ca
Merge pull request #132 from sysadmiral/master
test: update image used by test-pod
2021-09-03 21:11:19 -07:00
Kubernetes Prow Robot 4f955d1a21
Merge pull request #130 from crookedstorm/master
helm chart: allow persistentVolumeClaim in psp or pod never launches
2021-08-04 08:27:24 -07:00
Kubernetes Prow Robot 5f97c83a65
Merge pull request #111 from koivunen/patch-1
Describe NFS limitations
2021-08-01 12:13:21 -07:00
Brooke Storm 1271831fbd bump chart version 2021-07-31 20:42:33 -07:00
Amo Chumber 2ee31bdec5
test: update image used by test-pod
Changing the images used in the test-pod to a more recent tag that also has multi-arch builds.

Previously, on an arm processor the test-pod would show Failed with few hints as to why.
2021-07-31 23:47:07 +01:00
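The updated test pod (shown in the diff further down) now uses a multi-arch image tag; a sketch of the manifest, with the command and volume wiring as assumed from the repo's test resources:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:stable   # multi-arch tag, replaces gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh"]
      args: ["-c", "touch /mnt/SUCCESS && exit 0 || exit 1"]
      volumeMounts:
        - name: nfs-pvc
          mountPath: /mnt
  restartPolicy: Never
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
```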
Brooke Storm b74a204cda helm chart: allow persistentVolumeClaim in psp or pod never launches
Simple fix, but if you have podsecuritypolicy in your cluster, this
chart doesn't work without this change.
2021-07-30 16:35:33 -07:00
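The fix amounts to allowing the `persistentVolumeClaim` volume type in the chart's PodSecurityPolicy, as the diff further down shows; a sketch of the relevant spec fragment:

```yaml
# PodSecurityPolicy spec fragment (sketch)
spec:
  volumes:
    - 'secret'
    - 'nfs'
    - 'persistentVolumeClaim'   # without this, pods mounting PVCs never launch
```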
Koivunen 7b9f0e8db2
Update README.md 2021-06-30 15:38:57 +03:00
Kubernetes Prow Robot e289a21201
Merge pull request #108 from larisoncarvalho/larisoncarvalho-patch-1
Update sed command for OpenShift in README.md
2021-06-28 20:54:40 -07:00
Weilbyte 8cd434fc32
Fix apply command 2021-06-17 17:53:42 +02:00
Weilbyte fecf82503f
Merge branch 'master' into Weilbyte/kustomize 2021-06-15 03:07:30 +02:00
Koivunen c848d9c7ce
Describe NFS limitations 2021-06-03 19:46:29 +03:00
Larison Carvalho f552bc6a79
Update README.md
Replace namespace in `./deploy/deployment.yaml` as well for OpenShift deployments
2021-06-02 12:31:02 +05:30
Weilbyte cedae1fee6
Clarify test resource deletion command 2021-04-15 13:43:50 +02:00
Weilbyte ee3f908c60
Change some instructions text 2021-04-15 13:41:52 +02:00
Weilbyte 7560b5ea3b
Trim whitespaces in README 2021-04-15 13:25:00 +02:00
Weilbyte eee6810fe9
Fix instructions to comply with MD031 (blanks-around-fences) 2021-04-15 13:21:34 +02:00
Weilbyte 0d5e4147f5
Add Kustomize instructions to README 2021-04-15 09:28:12 +02:00
Weilbyte 71bb338ca8
Change extension to .yaml 2021-04-14 14:15:48 +02:00
Weilbyte 3158a07d0a
Add Kustomize support 2021-04-14 13:33:07 +02:00
3480 changed files with 1084889 additions and 916 deletions

View File

@@ -15,7 +15,7 @@
 # limitations under the License.
-: ${CSI_PROW_BUILD_PLATFORMS:="linux amd64; linux arm -arm; linux arm64 -arm64; linux ppc64le -ppc64le; linux s390x -s390x"}
+: ${CSI_PROW_BUILD_PLATFORMS:="linux amd64 amd64; linux arm arm -arm; linux arm64 arm64 -arm64; linux ppc64le ppc64le -ppc64le; linux s390x s390x -s390x"}
 # shellcheck disable=SC1091
 . release-tools/cloudbuild.sh

View File

@@ -39,8 +39,8 @@ jobs:
 elif [[ $GITHUB_REF == refs/pull/* ]]; then
 TAGS="${{ secrets.DOCKER_IMAGE }}:pr-${{ github.event.number }}"
 fi
-echo ::set-output name=tags::${TAGS}
-echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
+echo "tags=${TAGS}" >> $GITHUB_OUTPUT
+echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
 - name: Set up QEMU
   uses: docker/setup-qemu-action@v1

View File

@@ -1,3 +1,13 @@
+# v4.0.3
+- Prevent mounting of root directory on empty customPath (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/83)
+- Upgrade k8s client to v1.23.4 (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/175)
+- Add error handling to chmod on volume creation (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/176)
+- Import GetPersistentVolumeClass from component-helpers (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/189)
+- Resolve CVE-2022-27191 in golang.org/x/crypto (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/207)
+- Fix onDelete option for subdirectories (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/221)
+- Resolve all trivy vulnerabilities up to 2024-01-25 (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/327)
 # v4.0.2
 - Add arm7 (32bit) support (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/pull/58)

View File

@@ -1,4 +1,4 @@
-# Copyright 2017-2020 The Kubernetes Authors.
+# Copyright 2023 The Kubernetes Authors.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
-FROM --platform=$BUILDPLATFORM golang:1.14 as build-env
+FROM --platform=$BUILDPLATFORM golang:1.19 as build-env
 # xx wraps go to automatically configure $GOOS, $GOARCH, and $GOARM
 # based on TARGETPLATFORM provided by Docker.
@@ -13,9 +13,9 @@ WORKDIR ${APP_FOLDER}
 ARG TARGETPLATFORM
 RUN CGO_ENABLED=0 go build -a -ldflags '-extldflags "-static"' -o /bin/main ./cmd/nfs-subdir-external-provisioner
-FROM --platform=$TARGETPLATFORM alpine:3.12
+FROM --platform=$TARGETPLATFORM alpine:3.18
 RUN apk update --no-cache && apk add ca-certificates
 COPY --from=build-env /bin/main /app/main
-ENTRYPOINT ["/app/main"]
+ENTRYPOINT ["/app/main"]

View File

@@ -17,4 +17,4 @@ all: build
 include release-tools/build.make
-BUILD_PLATFORMS=linux amd64; linux arm -arm; linux arm64 -arm64; linux ppc64le -ppc64le; linux s390x -s390x
+BUILD_PLATFORMS=linux amd64 amd64; linux arm arm -arm; linux arm64 arm64 -arm64; linux ppc64le ppc64le -ppc64le; linux s390x s390x -s390x

OWNERS (7 changes)
View File

@@ -1,9 +1,10 @@
 # See the OWNERS docs at https://go.k8s.io/owners
 approvers:
-- wongma7
-- ashishranjan738
 - humblec
-- jackielii
 - jsafrane
 - kmova
+- jackielii
+- ashishranjan738
+- yonatankahana
+- wongma7

OWNERS_ALIASES (symbolic link, 1 change)
View File

@@ -0,0 +1 @@
+release-tools/OWNERS_ALIASES

README.md (138 changes)
View File

@@ -3,7 +3,7 @@
 **NFS subdir external provisioner** is an automatic provisioner that use your _existing and already configured_ NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as `${namespace}-${pvcName}-${pvName}`.
 Note: This repository is migrated from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. As part of the migration:
-- The container image name and repository has changed to `k8s.gcr.io/sig-storage` and `nfs-subdir-external-provisioner` respectively.
+- The container image name and repository has changed to `registry.k8s.io/sig-storage` and `nfs-subdir-external-provisioner` respectively.
 - To maintain backward compatibility with earlier deployment files, the naming of NFS Client Provisioner is retained as `nfs-client-provisioner` in the deployment YAMLs.
 - One of the pending areas for development on this repository is to add automated e2e tests. If you would like to contribute, please raise an issue or reach us on the Kubernetes slack #sig-storage channel.
@@ -24,7 +24,126 @@ $ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/n
     --set nfs.path=/exported/path
 ```
-### Without Helm
+### With Kustomize
+**Step 1: Get connection information for your NFS server**
+Make sure your NFS server is accessible from your Kubernetes cluster and get the information you need to connect to it. At a minimum you will need its hostname and exported share path.
+**Step 2: Add the base resource**
+Create a `kustomization.yaml` file in a directory of your choice, and add the [deploy](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy) directory as a base. This will use the kustomization file within that directory as our base.
+```yaml
+namespace: nfs-provisioner
+bases:
+  - github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy
+```
+**Step 3: Create namespace resource**
+Create a file with your namespace resource. The name can be anything as it will get overwritten by the namespace in your kustomization file.
+```yaml
+# namespace.yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: nfs-provisioner
+```
+**Step 4: Configure deployment**
+To configure the deployment, you will need to patch it's container variables with the connection information for your NFS Server.
+```yaml
+# patch_nfs_details.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    app: nfs-client-provisioner
+  name: nfs-client-provisioner
+spec:
+  template:
+    spec:
+      containers:
+        - name: nfs-client-provisioner
+          env:
+            - name: NFS_SERVER
+              value: <YOUR_NFS_SERVER_IP>
+            - name: NFS_PATH
+              value: <YOUR_NFS_SERVER_SHARE>
+      volumes:
+        - name: nfs-client-root
+          nfs:
+            server: <YOUR_NFS_SERVER_IP>
+            path: <YOUR_NFS_SERVER_SHARE>
+```
+Replace occurrences of `<YOUR_NFS_SERVER_IP>` and `<YOUR_NFS_SERVER_SHARE>` with your connection information.
+**Step 5: Add resources and deploy**
+Add the namespace resource and patch you created in earlier steps.
+```yaml
+namespace: nfs-provisioner
+bases:
+  - github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy
+resources:
+  - namespace.yaml
+patchesStrategicMerge:
+  - patch_nfs_details.yaml
+```
+Deploy (run inside directory with your kustomization file):
+```sh
+kubectl apply -k .
+```
+**Step 6: Finally, test your environment!**
+Now we'll test your NFS subdir external provisioner by creating a persistent volume claim and a pod that writes a test file to the volume. This will make sure that the provisioner is provisioning and that the NFS server is reachable and writable.
+Deploy the test resources:
+```sh
+$ kubectl create -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-claim.yaml -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-pod.yaml
+```
+Now check your NFS Server for the `SUCCESS` inside the PVC's directory.
+Delete the test resources:
+```sh
+$ kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-claim.yaml -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-pod.yaml
+```
+Now check the PVC's directory has been deleted.
+**Step 7: Deploying your own PersistentVolumeClaims**
+To deploy your own PVC, make sure that you have the correct `storageClassName` (by default `nfs-client`). You can also patch the StorageClass resource to change it, like so:
+```yaml
+# kustomization.yaml
+namespace: nfs-provisioner
+resources:
+  - github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy
+  - namespace.yaml
+patches:
+  - target:
+      kind: StorageClass
+      name: nfs-client
+    patch: |-
+      - op: replace
+        path: /metadata/name
+        value: <YOUR-STORAGECLASS-NAME>
+```
+### Manually
 **Step 1: Get connection information for your NFS server**
@@ -58,7 +177,7 @@ On OpenShift the service account used to bind volumes does not have the necessar
 ```sh
 # Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
 $ NAMESPACE=`oc project -q`
-$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
+$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
 $ oc create -f deploy/rbac.yaml
 $ oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner
 ```
@@ -89,7 +208,7 @@ spec:
       serviceAccountName: nfs-client-provisioner
       containers:
         - name: nfs-client-provisioner
-          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
+          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
           volumeMounts:
             - name: nfs-client-root
               mountPath: /persistentvolumes
@@ -127,7 +246,7 @@ This is `deploy/class.yaml` which defines the NFS subdir external provisioner's
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
-  name: managed-nfs-storage
+  name: nfs-client
 provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
 parameters:
   pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # waits for nfs.io/storage-path annotation, if not specified will accept as empty string.
@@ -166,7 +285,7 @@ metadata:
   annotations:
     nfs.io/storage-path: "test-path" # not required, depending on whether this annotation was shown in the storage class description
 spec:
-  storageClassName: managed-nfs-storage
+  storageClassName: nfs-client
   accessModes:
     - ReadWriteMany
   resources:
@@ -181,7 +300,7 @@ To build your own custom container image from this repository, you will have to
 ```sh
 make build
 make container
-# `nfs-subdir-external-provisioner:latest` will be created.
+# `nfs-subdir-external-provisioner:latest` will be created.
 # Note: This will build a single-arch image that matches the machine on which container is built.
 # To upload this to your custom registry, say `quay.io/myorg` and arch as amd64, you can use
 # docker tag nfs-subdir-external-provisioner:latest quay.io/myorg/nfs-subdir-external-provisioner-amd64:latest
@@ -206,4 +325,7 @@ The pipeline adds several labels:
 * You also need to provide the `DOCKER_IMAGE` secret specifying your Docker image name, e.g., `quay.io/[username]/nfs-subdir-external-provisioner`.
 ## NFS provisioner limitations/pitfalls
+* The provisioned storage is not guaranteed. You may allocate more than the NFS share's total size. The share may also not have enough storage space left to actually accommodate the request.
+* The provisioned storage limit is not enforced. The application can expand to use all the available storage regardless of the provisioned size.
+* Storage resize/expansion operations are not presently supported in any form. You will end up in an error state: `Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.`

View File

@@ -3,7 +3,7 @@ appVersion: 4.0.2
 description: nfs-subdir-external-provisioner is an automatic provisioner that used your *already configured* NFS server, automatically creating Persistent Volumes.
 name: nfs-subdir-external-provisioner
 home: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
-version: 4.0.12
+version: 4.0.18
 kubeVersion: ">=1.9.0-0"
 sources:
 - https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

View File

@@ -30,7 +30,7 @@ $ helm install my-release nfs-subdir-external-provisioner/nfs-subdir-external-pr
     --set nfs.path=/exported/path
 ```
-The command deploys the given storage class in the default configuration. It can be used afterswards to provision persistent volumes. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+The command deploys the given storage class in the default configuration. It can be used afterwards to provision persistent volumes. The [configuration](#configuration) section lists the parameters that can be configured during installation.
 > **Tip**: List all releases using `helm list`
@@ -48,38 +48,54 @@ The command removes all the Kubernetes components associated with the chart and
 The following tables lists the configurable parameters of this chart and their default values.
-| Parameter | Description | Default |
-| ----------------------------------- | ------------------------------------------------------------------------------------------------------ | --------------------------------------------------------- |
-| `replicaCount` | Number of provisioner instances to deployed | `1` |
-| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
-| `image.repository` | Provisioner image | `k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner` |
-| `image.tag` | Version of provisioner image | `v4.0.2` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `imagePullSecrets` | Image pull secrets | `[]` |
-| `storageClass.name` | Name of the storageClass | `nfs-client` |
-| `storageClass.defaultClass` | Set as the default StorageClass | `false` |
-| `storageClass.allowVolumeExpansion` | Allow expanding the volume | `true` |
-| `storageClass.reclaimPolicy` | Method used to reclaim an obsoleted volume | `Delete` |
-| `storageClass.provisionerName` | Name of the provisionerName | null |
-| `storageClass.archiveOnDelete` | Archive PVC when deleting | `true` |
-| `storageClass.onDelete` | Strategy on PVC deletion. Overrides archiveOnDelete when set to lowercase values 'delete' or 'retain' | null |
-| `storageClass.pathPattern` | Specifies a template for the directory name | null |
-| `storageClass.accessModes` | Set access mode for PV | `ReadWriteOnce` |
-| `storageClass.annotations` | Set additional annotations for the StorageClass | `{}` |
-| `leaderElection.enabled` | Enables or disables leader election | `true` |
-| `nfs.server` | Hostname of the NFS server (required) | null (ip or hostname) |
-| `nfs.path` | Basepath of the mount point to be used | `/nfs-storage` |
-| `nfs.mountOptions` | Mount options (e.g. 'nfsvers=3') | null |
-| `nfs.volumeName` | Volume name used inside the pods | `nfs-subdir-external-provisioner-root` |
-| `resources` | Resources required (e.g. CPU, memory) | `{}` |
-| `rbac.create` | Use Role-based Access Control | `true` |
-| `podSecurityPolicy.enabled` | Create & use Pod Security Policy resources | `false` |
-| `podAnnotations` | Additional annotations for the Pods | `{}` |
-| `priorityClassName` | Set pod priorityClassName | null |
-| `serviceAccount.create` | Should we create a ServiceAccount | `true` |
-| `serviceAccount.name` | Name of the ServiceAccount to use | null |
-| `serviceAccount.annotations` | Additional annotations for the ServiceAccount | `{}` |
-| `nodeSelector` | Node labels for pod assignment | `{}` |
-| `affinity` | Affinity settings | `{}` |
-| `tolerations` | List of node taints to tolerate | `[]` |
-| `labels` | Additional labels for any resource created | `{}` |
+| Parameter | Description | Default |
+| ------------------------------------ | ------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------- |
+| `replicaCount` | Number of provisioner instances to deployed | `1` |
+| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
+| `image.repository` | Provisioner image | `registry.k8s.io/sig-storage/nfs-subdir-external-provisioner` |
+| `image.tag` | Version of provisioner image | `v4.0.2` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `imagePullSecrets` | Image pull secrets | `[]` |
+| `storageClass.name` | Name of the storageClass | `nfs-client` |
+| `storageClass.defaultClass` | Set as the default StorageClass | `false` |
+| `storageClass.allowVolumeExpansion` | Allow expanding the volume | `true` |
+| `storageClass.reclaimPolicy` | Method used to reclaim an obsoleted volume | `Delete` |
+| `storageClass.provisionerName` | Name of the provisionerName | null |
+| `storageClass.archiveOnDelete` | Archive PVC when deleting | `true` |
+| `storageClass.onDelete` | Strategy on PVC deletion. Overrides archiveOnDelete when set to lowercase values 'delete' or 'retain' | null |
+| `storageClass.pathPattern` | Specifies a template for the directory name | null |
+| `storageClass.accessModes` | Set access mode for PV | `ReadWriteOnce` |
+| `storageClass.volumeBindingMode` | Set volume binding mode for Storage Class | `Immediate` |
+| `storageClass.annotations` | Set additional annotations for the StorageClass | `{}` |
+| `leaderElection.enabled` | Enables or disables leader election | `true` |
+| `nfs.server` | Hostname of the NFS server (required) | null (ip or hostname) |
+| `nfs.path` | Basepath of the mount point to be used | `/nfs-storage` |
+| `nfs.mountOptions` | Mount options (e.g. 'nfsvers=3') | null |
+| `nfs.volumeName` | Volume name used inside the pods | `nfs-subdir-external-provisioner-root` |
+| `nfs.reclaimPolicy` | Reclaim policy for the main nfs volume used for subdir provisioning | `Retain` |
+| `resources` | Resources required (e.g. CPU, memory) | `{}` |
+| `rbac.create` | Use Role-based Access Control | `true` |
+| `podSecurityPolicy.enabled` | Create & use Pod Security Policy resources | `false` |
+| `podAnnotations` | Additional annotations for the Pods | `{}` |
+| `priorityClassName` | Set pod priorityClassName | null |
+| `serviceAccount.create` | Should we create a ServiceAccount | `true` |
+| `serviceAccount.name` | Name of the ServiceAccount to use | null |
+| `serviceAccount.annotations` | Additional annotations for the ServiceAccount | `{}` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `affinity` | Affinity settings | `{}` |
+| `tolerations` | List of node taints to tolerate | `[]` |
+| `labels` | Additional labels for any resource created | `{}` |
+| `podDisruptionBudget.enabled` | Create and use Pod Disruption Budget | `false` |
+| `podDisruptionBudget.maxUnavailable` | Set maximum unavailable pods in the Pod Disruption Budget | `1` |
+## Install Multiple Provisioners
+It is possible to install more than one provisioner in your cluster to have access to multiple nfs servers and/or multiple exports from a single nfs server. Each provisioner must have a different `storageClass.provisionerName` and a different `storageClass.name`. For example:
+```console
+helm install second-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
+    --set nfs.server=y.y.y.y \
+    --set nfs.path=/other/exported/path \
+    --set storageClass.name=second-nfs-client \
+    --set storageClass.provisionerName=k8s-sigs.io/second-nfs-subdir-external-provisioner
+```

View File

@@ -61,6 +61,17 @@ Return the appropriate apiVersion for podSecurityPolicy.
 {{- end -}}
 {{- end -}}
+{{/*
+Return the appropriate apiVersion for podDisruptionBudget.
+*/}}
+{{- define "podDisruptionBudget.apiVersion" -}}
+{{- if semverCompare ">=1.21-0" .Capabilities.KubeVersion.GitVersion -}}
+{{- print "policy/v1" -}}
+{{- else -}}
+{{- print "policy/v1beta1" -}}
+{{- end -}}
+{{- end -}}
 {{/*
 Common labels
 */}}

View File

@@ -24,6 +24,8 @@ spec:
         {{- include "nfs-subdir-external-provisioner.podLabels" . | nindent 8 }}
     spec:
       serviceAccountName: {{ template "nfs-subdir-external-provisioner.serviceAccountName" . }}
+      securityContext:
+        {{- toYaml .Values.podSecurityContext | nindent 8 }}
       {{- if .Values.nodeSelector }}
       nodeSelector:
 {{ toYaml .Values.nodeSelector | indent 8 }}
@@ -43,6 +45,8 @@ spec:
         - name: {{ .Chart.Name }}
           image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
           imagePullPolicy: {{ .Values.image.pullPolicy }}
+          securityContext:
+            {{- toYaml .Values.securityContext | nindent 12 }}
           volumeMounts:
             - name: {{ .Values.nfs.volumeName }}
               mountPath: /persistentvolumes

View File

@@ -12,7 +12,7 @@ spec:
   volumeMode: Filesystem
   accessModes:
     - {{ .Values.storageClass.accessModes }}
-  persistentVolumeReclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
+  persistentVolumeReclaimPolicy: {{ .Values.nfs.reclaimPolicy }}
   storageClassName: ""
 {{- if .Values.nfs.mountOptions }}
   mountOptions:

View File

@@ -0,0 +1,13 @@
+{{- if .Values.podDisruptionBudget.enabled }}
+apiVersion: {{ template "podDisruptionBudget.apiVersion" . }}
+kind: PodDisruptionBudget
+metadata:
+  labels:
+    {{- include "nfs-subdir-external-provisioner.labels" . | nindent 4 }}
+  name: {{ template "nfs-subdir-external-provisioner.fullname" . }}
+spec:
+  maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable | default 1 }}
+  selector:
+    matchLabels:
+      {{- include "nfs-subdir-external-provisioner.selectorLabels" . | nindent 6 }}
+{{- end }}

View File

@@ -13,6 +13,7 @@ spec:
   volumes:
     - 'secret'
     - 'nfs'
+    - 'persistentVolumeClaim'
   hostNetwork: false
   hostIPC: false
   hostPID: false

View File

@@ -15,6 +15,7 @@ metadata:
 provisioner: {{ template "nfs-subdir-external-provisioner.provisionerName" . }}
 allowVolumeExpansion: {{ .Values.storageClass.allowVolumeExpansion }}
 reclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
+volumeBindingMode: {{ .Values.storageClass.volumeBindingMode }}
 parameters:
   archiveOnDelete: "{{ .Values.storageClass.archiveOnDelete }}"
 {{- if .Values.storageClass.pathPattern }}

View File

@@ -2,7 +2,7 @@ replicaCount: 1
 strategyType: Recreate
 image:
-  repository: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner
+  repository: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner
   tag: v4.0.2
   pullPolicy: IfNotPresent
 imagePullSecrets: []
@@ -12,6 +12,8 @@ nfs:
   path: /nfs-storage
   mountOptions:
   volumeName: nfs-subdir-external-provisioner-root
+  # Reclaim policy for the main nfs volume
+  reclaimPolicy: Retain
 # For creating the StorageClass automatically:
 storageClass:
@@ -49,6 +51,9 @@ storageClass:
   # Set access mode - ReadWriteOnce, ReadOnlyMany or ReadWriteMany
   accessModes: ReadWriteOnce
+  # Set volume bindinng mode - Immediate or WaitForFirstConsumer
+  volumeBindingMode: Immediate
   # Storage class annotations
   annotations: {}
@@ -72,6 +77,10 @@ podAnnotations: {}
 ## Set pod priorityClassName
 # priorityClassName: ""
+podSecurityContext: {}
+securityContext: {}
 serviceAccount:
   # Specifies whether a ServiceAccount should be created
   create: true
@@ -99,3 +108,7 @@ affinity: {}
 # Additional labels for any resource created
 labels: {}
+podDisruptionBudget:
+  enabled: false
+  maxUnavailable: 1

View File

@@ -35,7 +35,7 @@ import (
     "k8s.io/client-go/kubernetes"
     "k8s.io/client-go/rest"
     "k8s.io/client-go/tools/clientcmd"
-    "k8s.io/kubernetes/pkg/apis/core/v1/helper"
+    storagehelpers "k8s.io/component-helpers/storage/volume"
     "sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller"
 )
@@ -62,13 +62,14 @@ func (meta *pvcMetadata) stringParser(str string) string {
     for _, r := range result {
         switch r[2] {
         case "labels":
-            str = strings.Replace(str, r[0], meta.labels[r[3]], -1)
+            str = strings.ReplaceAll(str, r[0], meta.labels[r[3]])
         case "annotations":
-            str = strings.Replace(str, r[0], meta.annotations[r[3]], -1)
+            str = strings.ReplaceAll(str, r[0], meta.annotations[r[3]])
         default:
-            str = strings.Replace(str, r[0], meta.data[r[1]], -1)
+            str = strings.ReplaceAll(str, r[0], meta.data[r[1]])
         }
     }
     return str
 }
@@ -111,10 +112,13 @@ func (p *nfsProvisioner) Provision(ctx context.Context, options controller.Provi
     }
     glog.V(4).Infof("creating path %s", fullPath)
-    if err := os.MkdirAll(fullPath, 0777); err != nil {
+    if err := os.MkdirAll(fullPath, 0o777); err != nil {
         return nil, controller.ProvisioningFinished, errors.New("unable to create directory to provision new pv: " + err.Error())
     }
-    os.Chmod(fullPath, 0777)
+    err := os.Chmod(fullPath, 0o777)
+    if err != nil {
+        return nil, "", err
+    }
     pv := &v1.PersistentVolume{
         ObjectMeta: metav1.ObjectMeta{
@@ -142,7 +146,7 @@ func (p *nfsProvisioner) Provision(ctx context.Context, options controller.Provi
 func (p *nfsProvisioner) Delete(ctx context.Context, volume *v1.PersistentVolume) error {
     path := volume.Spec.PersistentVolumeSource.NFS.Path
     basePath := filepath.Base(path)
-    oldPath := filepath.Join(mountPath, basePath)
+    oldPath := strings.Replace(path, p.path, mountPath, 1)
     if _, err := os.Stat(oldPath); os.IsNotExist(err) {
         glog.Warningf("path %s does not exist, deletion skipped", oldPath)
@@ -155,14 +159,12 @@ func (p *nfsProvisioner) Delete(ctx context.Context, volume *v1.PersistentVolume
     }
     // Determine if the "onDelete" parameter exists.
-    // If it exists and has a delete value, delete the directory.
-    // If it exists and has a retain value, safe the directory.
+    // If it exists and has a `delete` value, delete the directory.
+    // If it exists and has a `retain` value, safe the directory.
     onDelete := storageClass.Parameters["onDelete"]
     switch onDelete {
     case "delete":
         return os.RemoveAll(oldPath)
     case "retain":
         return nil
     }
@@ -186,19 +188,20 @@ func (p *nfsProvisioner) Delete(ctx context.Context, volume *v1.PersistentVolume
     return os.Rename(oldPath, archivePath)
 }
-// getClassForVolume returns StorageClass
+// getClassForVolume returns StorageClass.
 func (p *nfsProvisioner) getClassForVolume(ctx context.Context, pv *v1.PersistentVolume) (*storage.StorageClass, error) {
     if p.client == nil {
-        return nil, fmt.Errorf("Cannot get kube client")
+        return nil, fmt.Errorf("cannot get kube client")
     }
-    className := helper.GetPersistentVolumeClass(pv)
+    className := storagehelpers.GetPersistentVolumeClass(pv)
     if className == "" {
-        return nil, fmt.Errorf("Volume has no storage class")
+        return nil, fmt.Errorf("volume has no storage class")
     }
     class, err := p.client.StorageV1().StorageClasses().Get(ctx, className, metav1.GetOptions{})
     if err != nil {
         return nil, err
     }
     return class, nil
 }

View File

@@ -1,7 +1,7 @@
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
-  name: managed-nfs-storage
+  name: nfs-client
 provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
 parameters:
   archiveOnDelete: "false"

View File

@@ -21,7 +21,7 @@ spec:
       serviceAccountName: nfs-client-provisioner
       containers:
         - name: nfs-client-provisioner
-          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
+          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
           volumeMounts:
             - name: nfs-client-root
               mountPath: /persistentvolumes

View File

@@ -0,0 +1,4 @@
+resources:
+  - class.yaml
+  - rbac.yaml
+  - deployment.yaml

View File

@@ -1,7 +1,7 @@
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
-  name: managed-nfs-storage
+  name: nfs-client
 provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
 parameters:
   archiveOnDelete: "false"

View File

@@ -5,6 +5,7 @@ metadata:
 subjects:
   - kind: ServiceAccount
     name: nfs-client-provisioner
+    # replace with namespace where provisioner is deployed
     namespace: default
 roleRef:
   kind: ClusterRole

View File

@@ -1,11 +1,18 @@
+apiVersion: apps/v1
 kind: Deployment
-apiVersion: extensions/v1beta1
 metadata:
   name: nfs-client-provisioner
   labels:
     app: nfs-client-provisioner
+  # replace with namespace where provisioner is deployed
+  namespace: default
 spec:
   replicas: 1
+  strategy:
+    type: Recreate
+  selector:
+    matchLabels:
+      app: nfs-client-provisioner
   template:
     metadata:
       labels:
@@ -14,7 +21,7 @@ spec:
       serviceAccountName: nfs-client-provisioner
       containers:
         - name: nfs-client-provisioner
-          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
+          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
           volumeMounts:
             - name: nfs-client-root
               mountPath: /persistentvolumes
@@ -22,11 +29,11 @@ spec:
             - name: PROVISIONER_NAME
               value: k8s-sigs.io/nfs-subdir-external-provisioner
             - name: NFS_SERVER
-              value: 10.10.10.60
+              value: 10.3.243.101
            - name: NFS_PATH
              value: /ifs/kubernetes
       volumes:
         - name: nfs-client-root
           nfs:
-            server: 10.10.10.60
+            server: 10.3.243.101
             path: /ifs/kubernetes

View File

@@ -2,6 +2,8 @@ kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: leader-locking-nfs-client-provisioner
+  # replace with namespace where provisioner is deployed
+  namespace: default
 rules:
   - apiGroups: [""]
     resources: ["endpoints"]

View File

@@ -2,6 +2,8 @@ kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: leader-locking-nfs-client-provisioner
+  # replace with namespace where provisioner is deployed
+  namespace: default
 subjects:
   - kind: ServiceAccount
     name: nfs-client-provisioner

View File

@ -2,3 +2,5 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default

View File

@ -3,7 +3,7 @@ apiVersion: v1
metadata:
name: test-claim
spec:
storageClassName: managed-nfs-storage
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:

View File

@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: test-pod
image: gcr.io/google_containers/busybox:1.24
image: busybox:stable
command:
- "/bin/sh"
args:
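To exercise the renamed `nfs-client` class end to end, the two test manifests can be applied together; a hedged sketch, assuming the usual busybox test pod that writes a file to the mounted volume:

```bash
kubectl create -f test-claim.yaml -f test-pod.yaml
kubectl get pvc test-claim   # should reach STATUS Bound once provisioned
kubectl get pod test-pod     # completes after writing to the volume
```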

118
go.mod
View File

@ -1,44 +1,90 @@
module github.com/kubernetes-sigs/nfs-subdir-external-provisioner
go 1.14
go 1.19
require (
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
github.com/miekg/dns v1.1.29 // indirect
github.com/onsi/ginkgo v1.12.0 // indirect
github.com/onsi/gomega v1.9.0 // indirect
github.com/prometheus/client_golang v1.5.1 // indirect
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d // indirect
golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect
k8s.io/api v0.18.0
k8s.io/apimachinery v0.18.0
k8s.io/client-go v0.18.0
k8s.io/klog v1.0.0 // indirect
k8s.io/kubernetes v1.18.0
k8s.io/utils v0.0.0-20200327001022-6496210b90e8 // indirect
github.com/golang/glog v1.0.0
k8s.io/api v0.23.4
k8s.io/apimachinery v0.23.4
k8s.io/client-go v0.23.4
k8s.io/component-helpers v0.23.4
sigs.k8s.io/sig-storage-lib-external-provisioner/v6 v6.0.0
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.1 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/go-logr/logr v1.2.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.5 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/imdario/mergo v0.3.5 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/miekg/dns v1.1.29 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/prometheus/client_golang v1.7.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.26.0 // indirect
github.com/prometheus/procfs v0.6.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
golang.org/x/crypto v0.14.0 // indirect
golang.org/x/net v0.10.0 // indirect
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f // indirect
golang.org/x/sys v0.15.0 // indirect
golang.org/x/term v0.15.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/klog v1.0.0 // indirect
k8s.io/klog/v2 v2.30.0 // indirect
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
k8s.io/utils v0.0.0-20211116205334-6203023598ed // indirect
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
sigs.k8s.io/yaml v1.2.0 // indirect
)
replace (
k8s.io/api => k8s.io/api v0.18.0
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.18.0
k8s.io/apimachinery => k8s.io/apimachinery v0.18.2-beta.0
k8s.io/apiserver => k8s.io/apiserver v0.18.0
k8s.io/cli-runtime => k8s.io/cli-runtime v0.18.0
k8s.io/client-go => k8s.io/client-go v0.18.0
k8s.io/cloud-provider => k8s.io/cloud-provider v0.18.0
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.18.0
k8s.io/code-generator => k8s.io/code-generator v0.18.3-beta.0
k8s.io/component-base => k8s.io/component-base v0.18.0
k8s.io/cri-api => k8s.io/cri-api v0.18.11-rc.0
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.18.0
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.18.0
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.18.0
k8s.io/kube-proxy => k8s.io/kube-proxy v0.18.0
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.18.0
k8s.io/kubectl => k8s.io/kubectl v0.18.0
k8s.io/kubelet => k8s.io/kubelet v0.18.0
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.18.0
k8s.io/metrics => k8s.io/metrics v0.18.0
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.18.0
)
github.com/prometheus/client_golang => github.com/prometheus/client_golang v1.11.1
golang.org/x/crypto => golang.org/x/crypto v0.17.0
golang.org/x/net => golang.org/x/net v0.17.0
gopkg.in/yaml.v3 => gopkg.in/yaml.v3 v3.0.0-20220521103104-8f96da9f5d5e
k8s.io/api => k8s.io/api v0.23.4
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.23.4
k8s.io/apimachinery => k8s.io/apimachinery v0.23.4
k8s.io/apiserver => k8s.io/apiserver v0.23.4
k8s.io/cli-runtime => k8s.io/cli-runtime v0.23.4
k8s.io/client-go => k8s.io/client-go v0.23.4
k8s.io/cloud-provider => k8s.io/cloud-provider v0.23.4
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.23.4
k8s.io/code-generator => k8s.io/code-generator v0.23.4
k8s.io/component-base => k8s.io/component-base v0.23.4
k8s.io/component-helpers => k8s.io/component-helpers v0.23.4
k8s.io/controller-manager => k8s.io/controller-manager v0.23.4
k8s.io/cri-api => k8s.io/cri-api v0.23.4
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.23.4
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.23.4
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.23.4
k8s.io/kube-proxy => k8s.io/kube-proxy v0.23.4
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.23.4
k8s.io/kubectl => k8s.io/kubectl v0.23.4
k8s.io/kubelet => k8s.io/kubelet v0.23.4
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.23.4
k8s.io/metrics => k8s.io/metrics v0.23.4
k8s.io/mount-utils => k8s.io/mount-utils v0.23.4
k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.23.4
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.23.4
k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.23.4
k8s.io/sample-controller => k8s.io/sample-controller v0.23.4
)
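After a dependency bump of this size it is worth re-verifying the module graph locally; a minimal check, run from the repository root:

```bash
go mod tidy      # should be a no-op if go.mod and go.sum are consistent
go build ./...   # confirms the pinned k8s.io/* replace versions resolve
```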

1027
go.sum

File diff suppressed because it is too large

View File

@ -1,7 +1,23 @@
#! /bin/bash -e
# Copyright 2021 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This is for testing csi-release-tools itself in Prow. All other
# repos use prow.sh for that, but as csi-release-tools isn't a normal
# repo with some Go code in it, it has a custom Prow test script.
./verify-shellcheck.sh "$(pwd)"
./verify-spelling.sh "$(pwd)"
./verify-boilerplate.sh "$(pwd)"

View File

@ -0,0 +1,38 @@
# See the OWNERS docs: https://git.k8s.io/community/contributors/guide/owners.md
aliases:
# SIG-Storage chairs and leads should always have approval rights in all repos.
# Others may be added as needed here or in each repo.
kubernetes-csi-approvers:
- jsafrane
- msau42
- saad-ali
- xing-yang
# Reviewers are automatically assigned to new PRs. The following
# reviewers will be active in all repos. Other reviewers can be
# added in each repo.
#
# Reviewers are encouraged to set the "Busy" flag in their GitHub status
# when they are temporarily unable to review PRs.
kubernetes-csi-reviewers:
- andyzhangx
- chrishenzie
- ggriffiths
- gnufied
- humblec
- j-griffith
- Jiawei0227
- jingxu97
- jsafrane
- pohly
- RaunakShah
- xing-yang
# This documents who previously contributed to Kubernetes-CSI
# as approver.
emeritus_approvers:
- lpabon
- sbezverk
- vladimirvivien

View File

@ -1,11 +1,8 @@
# See the OWNERS docs: https://git.k8s.io/community/contributors/guide/owners.md
approvers:
- saad-ali
- msau42
- kubernetes-csi-approvers
- pohly
reviewers:
- saad-ali
- msau42
- pohly
- kubernetes-csi-reviewers

View File

@ -0,0 +1 @@
KUBERNETES_CSI_OWNERS_ALIASES

View File

@ -21,7 +21,11 @@ The expected repository layout is:
Dockerfile in the root when only building a single command
- `Makefile` - includes `release-tools/build.make` and sets
configuration variables
- `.travis.yml` - a symlink to `release-tools/.travis.yml`
- `.prow.sh` script which imports `release-tools/prow.sh`
and may contain further customization
- `.cloudbuild.sh` and `cloudbuild.yaml` as symlinks to
the corresponding files in `release-tools` or (if necessary)
as custom files
To create a release, tag a certain revision with a name that
starts with `v`, for example `v1.0.0`, then `make push`
@ -38,16 +42,23 @@ images. Building from master creates the main `canary` image.
Sharing and updating
--------------------
[`git subtree`](https://github.com/git/git/blob/master/contrib/subtree/git-subtree.txt)
[`git subtree`](https://github.com/git/git/blob/HEAD/contrib/subtree/git-subtree.txt)
is the recommended way of maintaining a copy of the rules inside the
`release-tools` directory of a project. This way, it is possible to make
changes also locally, test them and then push them back to the shared
repository at a later time.
We no longer care about importing the full commit history, so `--squash` should be used
when submitting a `release-tools` update. Also make sure that the PR for that
contains the automatically generated commit message in the PR description.
It contains the list of individual commits that were squashed. The script from
https://github.com/kubernetes-csi/csi-release-tools/issues/7 can create such
PRs automatically.
Cheat sheet:
- `git subtree add --prefix=release-tools https://github.com/kubernetes-csi/csi-release-tools.git master` - add release tools to a repo which does not have them yet (only once)
- `git subtree pull --prefix=release-tools https://github.com/kubernetes-csi/csi-release-tools.git master` - update local copy to latest upstream (whenever upstream changes)
- `git subtree add --squash --prefix=release-tools https://github.com/kubernetes-csi/csi-release-tools.git master` - add release tools to a repo which does not have them yet (only once)
- `git subtree pull --squash --prefix=release-tools https://github.com/kubernetes-csi/csi-release-tools.git master` - update local copy to latest upstream (whenever upstream changes)
- edit, `git commit`, `git subtree push --prefix=release-tools git@github.com:<user>/csi-release-tools.git <my-new-or-existing-branch>` - push to a new branch before submitting a PR
verify-shellcheck.sh
@ -78,7 +89,7 @@ main
All Kubernetes-CSI repos are expected to switch to Prow. For details
on what is enabled in Prow, see
https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes-csi
https://github.com/kubernetes/test-infra/tree/HEAD/config/jobs/kubernetes-csi
Test results for periodic jobs are visible in
https://testgrid.k8s.io/sig-storage-csi-ci

View File

@ -4,7 +4,7 @@
# to for triaging and handling of incoming issues.
#
# The below names agree to abide by the
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/HEAD/security-release-process-documentation/security-release-process.md#embargo-policy)
# and will be removed and replaced if they violate that agreement.
#
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE

View File

@ -9,13 +9,8 @@ The release manager must:
* Be a member of the kubernetes-csi organization. Open an
[issue](https://github.com/kubernetes/org/issues/new?assignees=&labels=area%2Fgithub-membership&template=membership.md&title=REQUEST%3A+New+membership+for+%3Cyour-GH-handle%3E) in
kubernetes/org to request membership
* Be a top level approver for the repository. To become a top level approver,
the candidate must demonstrate ownership and deep knowledge of the repository
through active maintenance, responding to and fixing issues, reviewing PRs, and
test triage.
* Be part of the maintainers or admin group for the repository. admin is a
superset of maintainers, only maintainers level is required for cutting a
release. Membership can be requested by submitting a PR to kubernetes/org.
* Be part of the maintainers group for the repository.
Membership can be requested by submitting a PR to kubernetes/org.
[Example](https://github.com/kubernetes/org/pull/1467)
## Updating CI Jobs
@ -31,16 +26,16 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
1. "-on-master" jobs are the closest reflection to the new Kubernetes version.
1. Fixes to our prow.sh CI script can be tested in the [CSI hostpath
repo](https://github.com/kubernetes-csi/csi-driver-host-path) by modifying
[prow.sh](https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/release-tools/prow.sh)
[prow.sh](https://github.com/kubernetes-csi/csi-driver-host-path/blob/HEAD/release-tools/prow.sh)
along with any overrides in
[.prow.sh](https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/.prow.sh)
[.prow.sh](https://github.com/kubernetes-csi/csi-driver-host-path/blob/HEAD/.prow.sh)
to mirror the failing environment. Once e2e tests are passing (verify-unit tests
will fail), then the prow.sh changes can be submitted to [csi-release-tools](https://github.com/kubernetes-csi/csi-release-tools).
1. Changes can then be updated in all the sidecar repos and hostpath driver repo
by following the [update
instructions](https://github.com/kubernetes-csi/csi-release-tools/blob/master/README.md#sharing-and-updating).
instructions](https://github.com/kubernetes-csi/csi-release-tools/blob/HEAD/README.md#sharing-and-updating).
1. New pull and CI jobs are configured by adding new K8s versions to the top of
[gen-jobs.sh](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-csi/gen-jobs.sh).
[gen-jobs.sh](https://github.com/kubernetes/test-infra/blob/HEAD/config/jobs/kubernetes-csi/gen-jobs.sh).
New pull jobs that have not yet been verified should initially be made optional by
setting the new K8s version as
[experimental](https://github.com/kubernetes/test-infra/blob/a1858f46d6014480b130789df58b230a49203a64/config/jobs/kubernetes-csi/gen-jobs.sh#L40).
@ -51,10 +46,13 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
## Release Process
1. Identify all issues and ongoing PRs that should go into the release, and
drive them to resolution.
1. Download v2.8+ [K8s release notes
generator](https://github.com/kubernetes/release/tree/master/cmd/release-notes)
1. Download the latest version of the
[K8s release notes generator](https://github.com/kubernetes/release/tree/HEAD/cmd/release-notes)
1. Create a
[Github personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)
with `repo:public_repo` access
1. Generate release notes for the release. Replace arguments with the relevant
information.
information.
* Clean up old cached information (also needed if you are generating release
notes for multiple repos)
```bash
@ -62,15 +60,24 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
```
* For new minor releases on master:
```bash
GITHUB_TOKEN=<token> release-notes --discover=mergebase-to-latest
--github-org=kubernetes-csi --github-repo=external-provisioner
--required-author="" --output out.md
GITHUB_TOKEN=<token> release-notes \
--discover=mergebase-to-latest \
--org=kubernetes-csi \
--repo=external-provisioner \
--required-author="" \
--markdown-links \
--output out.md
```
* For new patch releases on a release branch:
```bash
GITHUB_TOKEN=<token> release-notes --discover=patch-to-latest --branch=release-1.1
--github-org=kubernetes-csi --github-repo=external-provisioner
--required-author="" --output out.md
GITHUB_TOKEN=<token> release-notes \
--discover=patch-to-latest \
--branch=release-1.1 \
--org=kubernetes-csi \
--repo=external-provisioner \
--required-author="" \
--markdown-links \
--output out.md
```
1. Compare the generated output to the new commits for the release to check if
any notable change missed a release note.
@ -95,12 +102,79 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
1. Check [image build status](https://k8s-testgrid.appspot.com/sig-storage-image-build).
1. Promote images from k8s-staging-sig-storage to k8s.gcr.io/sig-storage. From
the [k8s image
repo](https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images/k8s-staging-sig-storage),
repo](https://github.com/kubernetes/k8s.io/tree/HEAD/k8s.gcr.io/images/k8s-staging-sig-storage),
run `./generate.sh > images.yaml`, and send a PR with the updated images.
Once merged, the image promoter will copy the images from staging to prod.
1. Update [kubernetes-csi/docs](https://github.com/kubernetes-csi/docs) sidecar
and feature pages with the new released version.
1. After all the sidecars have been released, update
CSI hostpath driver with the new sidecars in the [CSI repo](https://github.com/kubernetes-csi/csi-driver-host-path/tree/master/deploy)
CSI hostpath driver with the new sidecars in the [CSI repo](https://github.com/kubernetes-csi/csi-driver-host-path/tree/HEAD/deploy)
and [k/k
in-tree](https://github.com/kubernetes/kubernetes/tree/master/test/e2e/testing-manifests/storage-csi/hostpath/hostpath)
in-tree](https://github.com/kubernetes/kubernetes/tree/HEAD/test/e2e/testing-manifests/storage-csi/hostpath/hostpath)
### Troubleshooting
#### Image build jobs
The following jobs are triggered after tagging to produce the corresponding
image(s):
https://k8s-testgrid.appspot.com/sig-storage-image-build
Clicking on a failed build job opens that job in https://prow.k8s.io. Next to
the job title is a rerun icon (circle with arrow). Clicking it opens a popup
with a "rerun" button that maintainers with enough permissions can use. If in
doubt, ask someone on #sig-release to rerun the job.
Another way to rerun a job is to search for it in https://prow.k8s.io and click
the rerun icon in the resulting job list:
https://prow.k8s.io/?job=canary-csi-test-push-images
#### Verify images
Canary and staged images can be viewed at https://console.cloud.google.com/gcr/images/k8s-staging-sig-storage
Promoted images can be viewed at https://console.cloud.google.com/gcr/images/k8s-artifacts-prod/us/sig-storage
## Adding support for a new Kubernetes release
1. Add the new release to `k8s_versions` in
https://github.com/kubernetes/test-infra/blob/090dec5dd535d5f61b7ba52e671a810f5fc13dfd/config/jobs/kubernetes-csi/gen-jobs.sh#L25
to enable generating a job for it. Set `experimental_k8s_version`
in
https://github.com/kubernetes/test-infra/blob/090dec5dd535d5f61b7ba52e671a810f5fc13dfd/config/jobs/kubernetes-csi/gen-jobs.sh#L40
to ensure that the new jobs aren't run for PRs unless explicitly
requested. Generate and submit the new jobs.
1. Create a test PR to try out the new job in some repo with `/test
pull-kubernetes-csi-<repo>-<x.y>-on-kubernetes-<x.y>` where x.y
matches the Kubernetes release. Alternatively, run .prow.sh in that
repo locally with `CSI_PROW_KUBERNETES_VERSION=x.y.z`.
1. Optional: update to a [new
release](https://github.com/kubernetes-sigs/kind/tags) of kind with
pre-built images for the new Kubernetes release. This is optional
if the current version of kind is able to build images for the new
Kubernetes release. However, jobs require less resources when they
don't need to build those images from the Kubernetes source code.
This change needs to be tried out in a PR against a component
first, then get submitted against csi-release-tools.
1. Optional: propagate the updated csi-release-tools to all components
with the script from
https://github.com/kubernetes-csi/csi-release-tools/issues/7#issuecomment-707025402
1. Once it is likely to work in all components, unset
`experimental_k8s_version` and submit the updated jobs.
1. Once all sidecars for the new Kubernetes release are released,
either bump the version number of the images in the existing
[csi-driver-host-path
deployments](https://github.com/kubernetes-csi/csi-driver-host-path/tree/HEAD/deploy)
and/or create a new deployment, depending on what Kubernetes
release an updated sidecar is compatible with. If no new deployment
is needed, then add a symlink to document that there intentionally
isn't a separate deployment. This symlink is not needed for Prow
testing because that will use "kubernetes-latest" as fallback.
Update that link when creating a new deployment.
1. Create a new csi-driver-host-path release.
1. Bump `CSI_PROW_DRIVER_VERSION` in prow.sh to that new release and
(eventually) roll that change out to all repos by updating
`release-tools` in them. This is used when testing manually. The
Prow jobs override that value, so also update
`hostpath_driver_version` in
https://github.com/kubernetes/test-infra/blob/91b04e6af3a40a9bcff25aa030850a4721e2dd2b/config/jobs/kubernetes-csi/gen-jobs.sh#L46-L47

View File

@ -0,0 +1,13 @@
# Copyright YEAR The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@ -0,0 +1,13 @@
# Copyright YEAR The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@ -0,0 +1,13 @@
# Copyright YEAR The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@ -0,0 +1,15 @@
/*
Copyright YEAR The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

View File

@ -0,0 +1,200 @@
#!/usr/bin/env python
# Copyright 2019 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import difflib
import glob
import os
import re
import sys
from datetime import date
parser = argparse.ArgumentParser()
parser.add_argument(
"filenames",
help="list of files to check, all files if unspecified",
nargs='*')
# Rootdir defaults to the directory **above** the repo-infra dir.
rootdir = os.path.dirname(__file__) + "/../../"
rootdir = os.path.abspath(rootdir)
parser.add_argument(
"--rootdir", default=rootdir, help="root directory to examine")
default_boilerplate_dir = os.path.abspath(os.path.dirname(__file__))
parser.add_argument(
"--boilerplate-dir", default=default_boilerplate_dir)
parser.add_argument(
"-v", "--verbose",
help="give verbose output regarding why a file does not pass",
action="store_true")
args = parser.parse_args()
verbose_out = sys.stderr if args.verbose else open("/dev/null", "w")
def get_refs():
refs = {}
for path in glob.glob(os.path.join(args.boilerplate_dir, "boilerplate.*.txt")):
extension = os.path.basename(path).split(".")[1]
ref_file = open(path, 'r')
ref = ref_file.read().splitlines()
ref_file.close()
refs[extension] = ref
return refs
def file_passes(filename, refs, regexs):
try:
f = open(filename, 'r')
except Exception as exc:
print("Unable to open %s: %s" % (filename, exc), file=verbose_out)
return False
data = f.read()
f.close()
basename = os.path.basename(filename)
extension = file_extension(filename)
if extension != "":
ref = refs[extension]
else:
ref = refs[basename]
# remove build tags from the top of Go files
if extension == "go":
p = regexs["go_build_constraints"]
(data, found) = p.subn("", data, 1)
# remove shebang from the top of shell files
if extension == "sh" or extension == "py":
p = regexs["shebang"]
(data, found) = p.subn("", data, 1)
data = data.splitlines()
# if our test file is smaller than the reference it surely fails!
if len(ref) > len(data):
print('File %s smaller than reference (%d < %d)' %
(filename, len(data), len(ref)),
file=verbose_out)
return False
# trim our file to the same number of lines as the reference file
data = data[:len(ref)]
p = regexs["year"]
for d in data:
if p.search(d):
print('File %s is missing the year' % filename, file=verbose_out)
return False
# Replace all occurrences of the regex "CURRENT_YEAR|...|2016|2015|2014" with "YEAR"
p = regexs["date"]
for i, d in enumerate(data):
(data[i], found) = p.subn('YEAR', d)
if found != 0:
break
# if we don't match the reference at this point, fail
if ref != data:
print("Header in %s does not match reference, diff:" % filename, file=verbose_out)
if args.verbose:
print(file=verbose_out)
for line in difflib.unified_diff(ref, data, 'reference', filename, lineterm=''):
print(line, file=verbose_out)
print(file=verbose_out)
return False
return True
def file_extension(filename):
return os.path.splitext(filename)[1].split(".")[-1].lower()
skipped_dirs = ['Godeps', 'third_party', '_gopath', '_output', '.git',
'cluster/env.sh', 'vendor', 'test/e2e/generated/bindata.go',
'repo-infra/verify/boilerplate/test', '.glide']
def normalize_files(files):
newfiles = []
for pathname in files:
if any(x in pathname for x in skipped_dirs):
continue
newfiles.append(pathname)
return newfiles
def get_files(extensions):
files = []
if len(args.filenames) > 0:
files = args.filenames
else:
for root, dirs, walkfiles in os.walk(args.rootdir):
# don't visit certain dirs. This is just a performance improvement
# as we would prune these later in normalize_files(). But doing it
# cuts down the amount of filesystem walking we do and cuts down
# the size of the file list
for d in skipped_dirs:
if d in dirs:
dirs.remove(d)
for name in walkfiles:
pathname = os.path.join(root, name)
files.append(pathname)
files = normalize_files(files)
outfiles = []
for pathname in files:
basename = os.path.basename(pathname)
extension = file_extension(pathname)
if extension in extensions or basename in extensions:
outfiles.append(pathname)
return outfiles
def get_regexs():
regexs = {}
# Search for "YEAR" which exists in the boilerplate, but shouldn't in the real thing
regexs["year"] = re.compile( 'YEAR' )
# dates can be 2014, 2015, 2016, ..., CURRENT_YEAR, company holder names can be anything
years = range(2014, date.today().year + 1)
regexs["date"] = re.compile( '(%s)' % "|".join(map(lambda l: str(l), years)) )
# strip // +build \n\n build constraints
regexs["go_build_constraints"] = re.compile(r"^(// \+build.*\n)+\n", re.MULTILINE)
# strip #!.* from shell scripts
regexs["shebang"] = re.compile(r"^(#!.*\n)\n*", re.MULTILINE)
return regexs
def main():
regexs = get_regexs()
refs = get_refs()
filenames = get_files(refs.keys())
for filename in filenames:
if not file_passes(filename, refs, regexs):
print(filename, file=sys.stdout)
return 0
if __name__ == "__main__":
sys.exit(main())
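A hedged usage sketch for the checker above; the flags come straight from the argparse definitions, while the script path within a consuming repo is an assumption:

```bash
# Print files whose headers do not match the boilerplate.*.txt references;
# -v additionally explains why each file fails:
./release-tools/boilerplate/boilerplate.py --rootdir "$(pwd)" -v
```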

View File

@ -0,0 +1,13 @@
# Copyright YEAR The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@ -0,0 +1,13 @@
# Copyright YEAR The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@ -12,12 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# force the usage of /bin/bash instead of /bin/sh
SHELL := /bin/bash
.PHONY: build-% build container-% container push-% push clean test
# A space-separated list of all commands in the repository, must be
# set in main Makefile of a repository.
# CMDS=
# Normally, commands are expected in "cmd". That can be changed for a
# repository to something else by setting CMDS_DIR before including build.make.
CMDS_DIR ?= cmd
# This is the default. It can be overridden in the main Makefile after
# including build.make.
REGISTRY_NAME?=quay.io/k8scsi
@ -63,30 +70,39 @@ endif
# Specific packages can be excluded from each of the tests below by setting the *_FILTER_CMD variables
# to something like "| grep -v 'github.com/kubernetes-csi/project/pkg/foobar'". See usage below.
# BUILD_PLATFORMS contains a set of <os> <arch> <suffix> triplets,
# BUILD_PLATFORMS contains a set of tuples [os arch buildx_platform suffix base_image addon_image]
# separated by semicolon. An empty variable or empty entry (= just a
# semicolon) builds for the default platform of the current Go
# toolchain.
BUILD_PLATFORMS =
# Add go ldflags using LDFLAGS at the time of compilation.
IMPORTPATH_LDFLAGS = -X main.version=$(REV)
IMPORTPATH_LDFLAGS = -X main.version=$(REV)
EXT_LDFLAGS = -extldflags "-static"
LDFLAGS =
LDFLAGS =
FULL_LDFLAGS = $(LDFLAGS) $(IMPORTPATH_LDFLAGS) $(EXT_LDFLAGS)
# This builds each command (= the sub-directories of ./cmd) for the target platform(s)
# defined by BUILD_PLATFORMS.
$(CMDS:%=build-%): build-%: check-go-version-go
mkdir -p bin
echo '$(BUILD_PLATFORMS)' | tr ';' '\n' | while read -r os arch suffix; do \
if ! (set -x; CGO_ENABLED=0 GOOS="$$os" GOARCH="$$arch" go build $(GOFLAGS_VENDOR) -a -ldflags '$(FULL_LDFLAGS)' -o "./bin/$*$$suffix" ./cmd/$*); then \
# os_arch_seen captures all of the $$os-$$arch-$$buildx_platform seen for the current binary
# that we want to build, if we've seen an $$os-$$arch-$$buildx_platform before it means that
# we don't need to build it again, this is done to avoid building
# the windows binary multiple times (see the default value of $$BUILD_PLATFORMS)
export os_arch_seen="" && echo '$(BUILD_PLATFORMS)' | tr ';' '\n' | while read -r os arch buildx_platform suffix base_image addon_image; do \
os_arch_seen_pre=$${os_arch_seen%%$$os-$$arch-$$buildx_platform*}; \
if ! [ $${#os_arch_seen_pre} = $${#os_arch_seen} ]; then \
continue; \
fi; \
if ! (set -x; cd ./$(CMDS_DIR)/$* && CGO_ENABLED=0 GOOS="$$os" GOARCH="$$arch" go build $(GOFLAGS_VENDOR) -a -ldflags '$(FULL_LDFLAGS)' -o "$(abspath ./bin)/$*$$suffix" .); then \
echo "Building $* for GOOS=$$os GOARCH=$$arch failed, see error(s) above."; \
exit 1; \
fi; \
os_arch_seen+=";$$os-$$arch-$$buildx_platform"; \
done
$(CMDS:%=container-%): container-%: build-%
docker build -t $*:latest -f $(shell if [ -e ./cmd/$*/Dockerfile ]; then echo ./cmd/$*/Dockerfile; else echo Dockerfile; fi) --label revision=$(REV) .
docker build -t $*:latest -f $(shell if [ -e ./$(CMDS_DIR)/$*/Dockerfile ]; then echo ./$(CMDS_DIR)/$*/Dockerfile; else echo Dockerfile; fi) --label revision=$(REV) .
$(CMDS:%=push-%): push-%: container-%
set -ex; \
@ -121,7 +137,7 @@ DOCKER_BUILDX_CREATE_ARGS ?=
# This target builds a multiarch image for one command using Moby BuildKit builder toolkit.
# Docker Buildx is included in Docker 19.03.
#
# ./cmd/<command>/Dockerfile[.Windows] is used if found, otherwise Dockerfile[.Windows].
# ./$(CMDS_DIR)/<command>/Dockerfile[.Windows] is used if found, otherwise Dockerfile[.Windows].
# It is currently optional: if no such file exists, Windows images are not included,
# even when Windows is listed in BUILD_PLATFORMS. That way, projects can test that
# Windows binaries can be built before adding a Dockerfile for it.
@ -131,29 +147,48 @@ DOCKER_BUILDX_CREATE_ARGS ?=
# the tag for the resulting multiarch image.
$(CMDS:%=push-multiarch-%): push-multiarch-%: check-pull-base-ref build-%
set -ex; \
DOCKER_CLI_EXPERIMENTAL=enabled; \
export DOCKER_CLI_EXPERIMENTAL; \
export DOCKER_CLI_EXPERIMENTAL=enabled; \
docker buildx create $(DOCKER_BUILDX_CREATE_ARGS) --use --name multiarchimage-buildertest; \
trap "docker buildx rm multiarchimage-buildertest" EXIT; \
dockerfile_linux=$$(if [ -e ./cmd/$*/Dockerfile ]; then echo ./cmd/$*/Dockerfile; else echo Dockerfile; fi); \
dockerfile_windows=$$(if [ -e ./cmd/$*/Dockerfile.Windows ]; then echo ./cmd/$*/Dockerfile.Windows; else echo Dockerfile.Windows; fi); \
dockerfile_linux=$$(if [ -e ./$(CMDS_DIR)/$*/Dockerfile ]; then echo ./$(CMDS_DIR)/$*/Dockerfile; else echo Dockerfile; fi); \
dockerfile_windows=$$(if [ -e ./$(CMDS_DIR)/$*/Dockerfile.Windows ]; then echo ./$(CMDS_DIR)/$*/Dockerfile.Windows; else echo Dockerfile.Windows; fi); \
if [ '$(BUILD_PLATFORMS)' ]; then build_platforms='$(BUILD_PLATFORMS)'; else build_platforms="linux amd64"; fi; \
if ! [ -f "$$dockerfile_windows" ]; then \
build_platforms="$$(echo "$$build_platforms" | sed -e 's/windows *[^ ]* *.exe//g' -e 's/; *;/;/g')"; \
build_platforms="$$(echo "$$build_platforms" | sed -e 's/windows *[^ ]* *[^ ]* *.exe *[^ ]* *[^ ]*//g' -e 's/; *;/;/g' -e 's/;[ ]*$$//')"; \
fi; \
pushMultiArch () { \
tag=$$1; \
echo "$$build_platforms" | tr ';' '\n' | while read -r os arch suffix; do \
echo "$$build_platforms" | tr ';' '\n' | while read -r os arch buildx_platform suffix base_image addon_image; do \
escaped_base_image=$${base_image/:/-}; \
escaped_buildx_platform=$${buildx_platform//\//-}; \
if ! [ -z $$escaped_base_image ]; then escaped_base_image+="-"; fi; \
docker buildx build --push \
--tag $(IMAGE_NAME):$$arch-$$os-$$tag \
--platform=$$os/$$arch \
--tag $(IMAGE_NAME):$$escaped_buildx_platform-$$os-$$escaped_base_image$$tag \
--platform=$$os/$$buildx_platform \
--file $$(eval echo \$${dockerfile_$$os}) \
--build-arg binary=./bin/$*$$suffix \
--build-arg ARCH=$$arch \
--build-arg BASE_IMAGE=$$base_image \
--build-arg ADDON_IMAGE=$$addon_image \
--label revision=$(REV) \
.; \
done; \
images=$$(echo "$$build_platforms" | tr ';' '\n' | while read -r os arch suffix; do echo $(IMAGE_NAME):$$arch-$$os-$$tag; done); \
images=$$(echo "$$build_platforms" | tr ';' '\n' | while read -r os arch buildx_platform suffix base_image addon_image; do \
escaped_base_image=$${base_image/:/-}; \
escaped_buildx_platform=$${buildx_platform//\//-}; \
if ! [ -z $$escaped_base_image ]; then escaped_base_image+="-"; fi; \
echo $(IMAGE_NAME):$$escaped_buildx_platform-$$os-$$escaped_base_image$$tag; \
done); \
docker manifest create --amend $(IMAGE_NAME):$$tag $$images; \
echo "$$build_platforms" | tr ';' '\n' | while read -r os arch buildx_platform suffix base_image addon_image; do \
if [ $$os = "windows" ]; then \
escaped_base_image=$${base_image/:/-}; \
if ! [ -z $$escaped_base_image ]; then escaped_base_image+="-"; fi; \
image=$(IMAGE_NAME):$$arch-$$os-$$escaped_base_image$$tag; \
os_version=$$(docker manifest inspect mcr.microsoft.com/windows/$${base_image} | grep "os.version" | head -n 1 | awk '{print $$2}' | sed -e 's/"//g') || true; \
docker manifest annotate --os-version $$os_version $(IMAGE_NAME):$$tag $$image; \
fi; \
done; \
docker manifest push -p $(IMAGE_NAME):$$tag; \
}; \
if [ $(PULL_BASE_REF) = "master" ]; then \
@ -275,3 +310,15 @@ test-shellcheck:
.PHONY: check-go-version-%
check-go-version-%:
./release-tools/verify-go-version.sh "$*"
# Test for spelling errors.
.PHONY: test-spelling
test-spelling:
@ echo; echo "### $@:"
@ ./release-tools/verify-spelling.sh "$(pwd)"
# Test the boilerplates of the files.
.PHONY: test-boilerplate
test-boilerplate:
@ echo; echo "### $@:"
@ ./release-tools/verify-boilerplate.sh "$(pwd)"
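With the two targets above in place, the checks run like any other test target:

```bash
make test-spelling test-boilerplate
```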

View File

@ -1,5 +1,19 @@
#! /bin/bash
# Copyright 2021 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# shellcheck disable=SC1091
. release-tools/prow.sh

View File

@ -10,24 +10,23 @@
# because binaries will get built for different architectures and then
# get copied from the built host into the container image
#
# See https://github.com/kubernetes/test-infra/blob/master/config/jobs/image-pushing/README.md
# See https://github.com/kubernetes/test-infra/blob/HEAD/config/jobs/image-pushing/README.md
# for more details on image pushing process in Kubernetes.
#
# To promote release images, see https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images/k8s-staging-sig-storage.
# To promote release images, see https://github.com/kubernetes/k8s.io/tree/HEAD/k8s.gcr.io/images/k8s-staging-sig-storage.
# This must be specified in seconds. If omitted, defaults to 600s (10 mins).
# Building three images in external-snapshotter takes roughly half an hour,
# sometimes more.
timeout: 3600s
# Building three images in external-snapshotter takes more than an hour.
timeout: 7200s
# This prevents errors if you don't use both _GIT_TAG and _PULL_BASE_REF,
# or any new substitutions added in the future.
options:
substitution_option: ALLOW_LOOSE
steps:
# The image must contain bash and curl. Ideally it should also contain
# the desired version of Go (currently defined in release-tools/travis.yml),
# the desired version of Go (currently defined in release-tools/prow.sh),
# but that just speeds up the build and is not required.
- name: 'gcr.io/k8s-testimages/gcb-docker-gcloud:v20200421-a2bf5f8'
- name: 'gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20210917-12df099d55'
entrypoint: ./.cloudbuild.sh
env:
- GIT_TAG=${_GIT_TAG}
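For local experimentation, a build like this can also be submitted by hand; a hedged sketch in which the substitution names come from the config above and the tag value is purely illustrative:

```bash
gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_GIT_TAG=v4.0.3,_PULL_BASE_REF=master .
```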

View File

@ -35,10 +35,18 @@ var (
)
/*
* TestSuite represents a JUnit file. Due to how encoding/xml works, we have
* TestResults represents a JUnit file. Due to how encoding/xml works, we have
* represent all fields that we want to be passed through. It's therefore
* not a complete solution, but good enough for Ginkgo + Spyglass.
*
* Before Kubernetes 1.25 and ginkgo v2, we directly had <testsuite> in the
* JUnit file. Now we get <testsuites> and inside it the <testsuite>.
*/
type TestResults struct {
XMLName string `xml:"testsuites"`
TestSuite TestSuite `xml:"testsuite"`
}
type TestSuite struct {
XMLName string `xml:"testsuite"`
TestCases []TestCase `xml:"testcase"`
@ -93,7 +101,15 @@ func main() {
}
}
if err := xml.Unmarshal(data, &junit); err != nil {
panic(err)
if err.Error() != "expected element type <testsuite> but have <testsuites>" {
panic(err)
}
// Fall back to Ginkgo v2 format.
var junitv2 TestResults
if err := xml.Unmarshal(data, &junitv2); err != nil {
panic(err)
}
junit = junitv2.TestSuite
}
}
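For reference, the two JUnit shapes the filter now accepts; the element names match the struct tags above, the attributes are illustrative:

```bash
# Ginkgo v1 (pre-1.25): <testsuite> is the document root.
cat <<'EOF'
<testsuite><testcase name="example" time="0.1"/></testsuite>
EOF

# Ginkgo v2: wrapped in <testsuites>, handled by the TestResults fallback.
cat <<'EOF'
<testsuites><testsuite><testcase name="example" time="0.1"/></testsuite></testsuites>
EOF
```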

View File

@ -13,7 +13,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script can be used while converting a repo from "dep" to "go mod"
# by calling it after "go mod init" or to update the Kubernetes packages
# in a repo that has already been converted. Only packages that are
@ -26,14 +26,43 @@ set -o pipefail
cmd=$0
function help () {
echo "$cmd <kubernetes version = x.y.z> - update all components from kubernetes/kubernetes to that version"
cat <<EOF
$cmd -p <kubernetes version = x.y.z>
Update all components from kubernetes/kubernetes to that version.
By default, replace statements are added for all Kubernetes packages,
whether they are used or not. This is useful when preparing a
repository for using k8s.io/kubernetes, because those replace
statements are needed to avoid "unknown revision v0.0.0" errors
(https://github.com/kubernetes/kubernetes/issues/79384).
With the optional -p flag, all unused replace statements are
pruned. This makes go.mod smaller, but isn't required.
The replace statements are needed for "go get -u ./..." which
otherwise ends up updating Kubernetes packages like client-go to
incompatible versions (in that case, a very old 1.x release which
happens to have a "higher" version number than the current
0.<Kubernetes minor version>.<Kubernetes patch version> numbers).
EOF
}
prune=false
while getopts "ph" o; do
case "$o" in
h) help; exit 0;;
p) prune=true;;
*) help; exit 1;;
esac
done
shift $((OPTIND-1))
if [ $# -ne 1 ]; then
help
exit 1
fi
case "$1" in -h|--help|help) help; exit 0;; esac
die () {
echo >&2 "$@"
@ -55,6 +84,12 @@ mods=$( (set -x; curl --silent --show-error --fail "https://raw.githubuserconten
sed -n 's|.*k8s.io/\(.*\) => ./staging/src/k8s.io/.*|k8s.io/\1|p'
) || die "failed to determine Kubernetes staging modules"
for mod in $mods; do
if $prune && ! (env GO111MODULE=on go mod graph) | grep "$mod@" > /dev/null; then
echo "Kubernetes module $mod is not used, skipping"
# Remove the module from go.mod "replace" that was added by an older version of this script.
(set -x; env GO111MODULE=on go mod edit "-dropreplace=$mod") || die "'go mod edit' failed"
continue
fi
# The presence of a potentially incomplete go.mod file affects this command,
# so move elsewhere.
modinfo=$(set -x; cd /; env GO111MODULE=on go mod download -json "$mod@kubernetes-${k8s}") ||
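A hedged invocation of the updater above; `-p` comes from the getopts loop, while the script's path and name are assumptions:

```bash
# Bump all used k8s.io/* modules to Kubernetes v1.23.4 and prune unused
# replace statements:
./release-tools/go-get-kubernetes.sh -p 1.23.4
```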

View File

@ -1,5 +1,5 @@
#! /bin/bash
#
# Copyright 2019 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -33,8 +33,6 @@
# The expected environment is:
# - $GOPATH/src/<import path> for the repository that is to be tested,
# with PR branch merged (when testing a PR)
# - optional: bazel installed (when testing against Kubernetes master),
# must be recent enough for Kubernetes master
# - running on linux-amd64
# - kind (https://github.com/kubernetes-sigs/kind) installed
# - optional: Go already installed
@ -65,7 +63,22 @@ get_versioned_variable () {
echo "$value"
}
configvar CSI_PROW_BUILD_PLATFORMS "linux amd64; windows amd64 .exe; linux ppc64le -ppc64le; linux s390x -s390x; linux arm64 -arm64" "Go target platforms (= GOOS + GOARCH) and file suffix of the resulting binaries"
# This takes a version string like CSI_PROW_KUBERNETES_VERSION and
# maps it to the corresponding git tag, branch or commit.
version_to_git () {
version="$1"
shift
case "$version" in
latest|master) echo "master";;
release-*) echo "$version";;
*) echo "v$version";;
esac
}
# the list of windows versions was matched from:
# - https://hub.docker.com/_/microsoft-windows-nanoserver
# - https://hub.docker.com/_/microsoft-windows-servercore
configvar CSI_PROW_BUILD_PLATFORMS "linux amd64 amd64; linux ppc64le ppc64le -ppc64le; linux s390x s390x -s390x; linux arm arm -arm; linux arm64 arm64 -arm64; linux arm arm/v7 -armv7; windows amd64 amd64 .exe nanoserver:1809 servercore:ltsc2019; windows amd64 amd64 .exe nanoserver:20H2 servercore:20H2; windows amd64 amd64 .exe nanoserver:ltsc2022 servercore:ltsc2022" "Go target platforms (= GOOS + GOARCH) and file suffix of the resulting binaries"
# If we have a vendor directory, then use it. We must be careful to only
# use this for "make" invocations inside the project's repo itself because
@ -73,39 +86,17 @@ configvar CSI_PROW_BUILD_PLATFORMS "linux amd64; windows amd64 .exe; linux ppc64
# which is disabled with GOFLAGS=-mod=vendor).
configvar GOFLAGS_VENDOR "$( [ -d vendor ] && echo '-mod=vendor' )" "Go flags for using the vendor directory"
# Go versions can be specified separately for different tasks
# If the pre-installed Go is missing or a different
# version, the required version here will get installed
# from https://golang.org/dl/.
go_from_travis_yml () {
grep "^ *- go:" "${RELEASE_TOOLS_ROOT}/travis.yml" | sed -e 's/.*go: *//'
}
configvar CSI_PROW_GO_VERSION_BUILD "$(go_from_travis_yml)" "Go version for building the component" # depends on component's source code
configvar CSI_PROW_GO_VERSION_BUILD "1.19" "Go version for building the component" # depends on component's source code
configvar CSI_PROW_GO_VERSION_E2E "" "override Go version for building the Kubernetes E2E test suite" # normally doesn't need to be set, see install_e2e
configvar CSI_PROW_GO_VERSION_SANITY "${CSI_PROW_GO_VERSION_BUILD}" "Go version for building the csi-sanity test suite" # depends on CSI_PROW_SANITY settings below
configvar CSI_PROW_GO_VERSION_KIND "${CSI_PROW_GO_VERSION_BUILD}" "Go version for building 'kind'" # depends on CSI_PROW_KIND_VERSION below
configvar CSI_PROW_GO_VERSION_GINKGO "${CSI_PROW_GO_VERSION_BUILD}" "Go version for building ginkgo" # depends on CSI_PROW_GINKGO_VERSION below
# kind version to use. If the pre-installed version is different,
# the desired version is downloaded from https://github.com/kubernetes-sigs/kind/releases
# (if available), otherwise it is built from source.
configvar CSI_PROW_KIND_VERSION "v0.9.0" "kind"
# kind images to use. Must match the kind version.
# The release notes of each kind release list the supported images.
configvar CSI_PROW_KIND_IMAGES "kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600
kindest/node:v1.18.8@sha256:f4bcc97a0ad6e7abaf3f643d890add7efe6ee4ab90baeb374b4f41a4c95567eb
kindest/node:v1.17.11@sha256:5240a7a2c34bf241afb54ac05669f8a46661912eab05705d660971eeb12f6555
kindest/node:v1.16.15@sha256:a89c771f7de234e6547d43695c7ab047809ffc71a0c3b65aa54eda051c45ed20
kindest/node:v1.15.12@sha256:d9b939055c1e852fe3d86955ee24976cab46cba518abcb8b13ba70917e6547a6
kindest/node:v1.14.10@sha256:ce4355398a704fca68006f8a29f37aafb49f8fc2f64ede3ccd0d9198da910146
kindest/node:v1.13.12@sha256:1c1a48c2bfcbae4d5f4fa4310b5ed10756facad0b7a2ca93c7a4b5bae5db29f5" "kind images"
# Use kind node-image --type=bazel by default, but allow to disable that.
configvar CSI_PROW_USE_BAZEL true "use Bazel during 'kind node-image' invocation"
# ginkgo test runner version to use. If the pre-installed version is
# different, the desired version is built from source.
# different, the desired version is built from source. For Kubernetes,
# the version built via "make WHAT=vendor/github.com/onsi/ginkgo/ginkgo" is
# used, which is guaranteed to match what the Kubernetes e2e.test binary
# needs.
configvar CSI_PROW_GINKGO_VERSION v1.7.0 "Ginkgo"
# Ginkgo runs the E2E test in parallel. The default is based on the number
@ -130,7 +121,7 @@ configvar CSI_PROW_BUILD_JOB true "building code in repo enabled"
# use the same settings as for "latest" Kubernetes. This works
# as long as there are no breaking changes in Kubernetes, like
# deprecating or changing the implementation of an alpha feature.
configvar CSI_PROW_KUBERNETES_VERSION 1.17.0 "Kubernetes"
configvar CSI_PROW_KUBERNETES_VERSION 1.22.0 "Kubernetes"
# CSI_PROW_KUBERNETES_VERSION reduced to first two version numbers and
# with underscore (1_13 instead of 1.13.3) and in uppercase (LATEST
@ -140,10 +131,34 @@ configvar CSI_PROW_KUBERNETES_VERSION 1.17.0 "Kubernetes"
# when a Prow job just defines the Kubernetes version.
csi_prow_kubernetes_version_suffix="$(echo "${CSI_PROW_KUBERNETES_VERSION}" | tr . _ | tr '[:lower:]' '[:upper:]' | sed -e 's/^RELEASE-//' -e 's/\([0-9]*\)_\([0-9]*\).*/\1_\2/')"
# Work directory. It has to allow running executables, therefore /tmp
# is avoided. Cleaning up after the script is intentionally left to
# the caller.
configvar CSI_PROW_WORK "$(mkdir -p "$GOPATH/pkg" && mktemp -d "$GOPATH/pkg/csiprow.XXXXXXXXXX")" "work directory"
# Only the latest KinD is (eventually) guaranteed to work with the
# latest Kubernetes. For example, KinD 0.10.0 failed with Kubernetes
# 1.21.0-beta1. Therefore the default version of KinD is "main"
# for that, otherwise the latest stable release for which we then
# list the officially supported images below.
kind_version_default () {
case "${CSI_PROW_KUBERNETES_VERSION}" in
latest|master)
echo main;;
*)
echo v0.14.0;;
esac
}
# kind version to use. If the pre-installed version is different,
# the desired version is downloaded from https://github.com/kubernetes-sigs/kind/releases
# (if available), otherwise it is built from source.
configvar CSI_PROW_KIND_VERSION "$(kind_version_default)" "kind"
# kind images to use. Must match the kind version.
# The release notes of each kind release list the supported images.
configvar CSI_PROW_KIND_IMAGES "kindest/node:v1.24.0@sha256:0866296e693efe1fed79d5e6c7af8df71fc73ae45e3679af05342239cdc5bc8e
kindest/node:v1.23.6@sha256:b1fa224cc6c7ff32455e0b1fd9cbfd3d3bc87ecaa8fcb06961ed1afb3db0f9ae
kindest/node:v1.22.9@sha256:8135260b959dfe320206eb36b3aeda9cffcb262f4b44cda6b33f7bb73f453105
kindest/node:v1.21.12@sha256:f316b33dd88f8196379f38feb80545ef3ed44d9197dca1bfd48bcb1583210207
kindest/node:v1.20.15@sha256:6f2d011dffe182bad80b85f6c00e8ca9d86b5b8922cdf433d53575c4c5212248
kindest/node:v1.19.16@sha256:d9c819e8668de8d5030708e484a9fdff44d95ec4675d136ef0a0a584e587f65c
kindest/node:v1.18.20@sha256:738cdc23ed4be6cc0b7ea277a2ebcc454c8373d7d8fb991a7fcdbd126188e6d7" "kind images"
# By default, this script tests sidecars with the CSI hostpath driver,
# using the install_csi_driver function. That function depends on
@ -171,8 +186,8 @@ configvar CSI_PROW_WORK "$(mkdir -p "$GOPATH/pkg" && mktemp -d "$GOPATH/pkg/csip
# CSI_PROW_DEPLOYMENT variable can be set in the
# .prow.sh of each component when there are breaking changes
# that require using a non-default deployment. The default
# is a deployment named "kubernetes-x.yy" (if available),
# otherwise "kubernetes-latest".
# is a deployment named "kubernetes-x.yy${CSI_PROW_DEPLOYMENT_SUFFIX}" (if available),
# otherwise "kubernetes-latest${CSI_PROW_DEPLOYMENT_SUFFIX}".
# "none" disables the deployment of the hostpath driver.
#
# When no deploy script is found (nothing in `deploy` directory,
@ -181,9 +196,10 @@ configvar CSI_PROW_WORK "$(mkdir -p "$GOPATH/pkg" && mktemp -d "$GOPATH/pkg/csip
# If the deployment script is called with CSI_PROW_TEST_DRIVER=<file name> as
# environment variable, then it must write a suitable test driver configuration
# into that file in addition to installing the driver.
configvar CSI_PROW_DRIVER_VERSION "v1.3.0" "CSI driver version"
configvar CSI_PROW_DRIVER_VERSION "v1.8.0" "CSI driver version"
configvar CSI_PROW_DRIVER_REPO https://github.com/kubernetes-csi/csi-driver-host-path "CSI driver repo"
configvar CSI_PROW_DEPLOYMENT "" "deployment"
configvar CSI_PROW_DEPLOYMENT_SUFFIX "" "additional suffix in kubernetes-x.yy[suffix].yaml files"
# The install_csi_driver function may work also for other CSI drivers,
# as long as they follow the conventions of the CSI hostpath driver.
@ -208,27 +224,21 @@ configvar CSI_PROW_DRIVER_CANARY_REGISTRY "gcr.io/k8s-staging-sig-storage" "regi
# all generated files are present.
#
# CSI_PROW_E2E_REPO=none disables E2E testing.
tag_from_version () {
version="$1"
shift
case "$version" in
latest) echo "master";;
release-*) echo "$version";;
*) echo "v$version";;
esac
}
configvar CSI_PROW_E2E_VERSION "$(tag_from_version "${CSI_PROW_KUBERNETES_VERSION}")" "E2E version"
configvar CSI_PROW_E2E_VERSION "$(version_to_git "${CSI_PROW_KUBERNETES_VERSION}")" "E2E version"
configvar CSI_PROW_E2E_REPO "https://github.com/kubernetes/kubernetes" "E2E repo"
configvar CSI_PROW_E2E_IMPORT_PATH "k8s.io/kubernetes" "E2E package"
# Local path for e2e tests. Set to "none" to disable.
configvar CSI_PROW_SIDECAR_E2E_IMPORT_PATH "none" "CSI Sidecar E2E package"
# csi-sanity testing from the csi-test repo can be run against the installed
# CSI driver. For this to work, deploying the driver must expose the Unix domain
# csi.sock as a TCP service for use by the csi-sanity command, which runs outside
# of the cluster. The alternative would have been to (cross-)compile csi-sanity
# and install it inside the cluster, which is not necessarily easier.
configvar CSI_PROW_SANITY_REPO https://github.com/kubernetes-csi/csi-test "csi-test repo"
configvar CSI_PROW_SANITY_VERSION 5421d9f3c37be3b95b241b44a094a3db11bee789 "csi-test version" # latest master
configvar CSI_PROW_SANITY_IMPORT_PATH github.com/kubernetes-csi/csi-test "csi-test package"
configvar CSI_PROW_SANITY_VERSION v5.0.0 "csi-test version"
configvar CSI_PROW_SANITY_PACKAGE_PATH github.com/kubernetes-csi/csi-test "csi-test package"
configvar CSI_PROW_SANITY_SERVICE "hostpath-service" "Kubernetes TCP service name that exposes csi.sock"
configvar CSI_PROW_SANITY_POD "csi-hostpathplugin-0" "Kubernetes pod with CSI driver"
configvar CSI_PROW_SANITY_CONTAINER "hostpath" "Kubernetes container with CSI driver"
@ -275,25 +285,43 @@ tests_enabled () {
sanity_enabled () {
[ "${CSI_PROW_TESTS_SANITY}" = "sanity" ] && tests_enabled "sanity"
}
sidecar_tests_enabled () {
[ "${CSI_PROW_SIDECAR_E2E_IMPORT_PATH}" != "none" ]
}
tests_need_kind () {
tests_enabled "parallel" "serial" "serial-alpha" "parallel-alpha" ||
sanity_enabled
sanity_enabled || sidecar_tests_enabled
}
tests_need_non_alpha_cluster () {
tests_enabled "parallel" "serial" ||
sanity_enabled
sanity_enabled || sidecar_tests_enabled
}
tests_need_alpha_cluster () {
tests_enabled "parallel-alpha" "serial-alpha"
}
# Enabling mock tests adds the "CSI mock volume" tests from https://github.com/kubernetes/kubernetes/blob/HEAD/test/e2e/storage/csi_mock_volume.go
# to the e2e.test invocations (serial, parallel, and the corresponding alpha variants).
# When testing canary images, those get used instead of the images specified
# in the e2e.test's normal YAML files.
#
# The default is to enable this for all jobs which use canary images
# and the latest Kubernetes because those images will be used for mock
# testing once they are released. Using them for mock testing with
# older Kubernetes releases is too risky because the deployment files
# can be very old (for example, still using a removed -provisioner
# parameter in external-provisioner).
configvar CSI_PROW_E2E_MOCK "$(if [ "${CSI_PROW_DRIVER_CANARY}" = "canary" ] && [ "${CSI_PROW_KUBERNETES_VERSION}" = "latest" ]; then echo true; else echo false; fi)" "enable CSI mock volume tests"
# Regex for non-alpha, feature-tagged tests that should be run.
#
configvar CSI_PROW_E2E_FOCUS_LATEST '\[Feature:VolumeSnapshotDataSource\]' "non-alpha, feature-tagged tests for latest Kubernetes version"
configvar CSI_PROW_E2E_FOCUS "$(get_versioned_variable CSI_PROW_E2E_FOCUS "${csi_prow_kubernetes_version_suffix}")" "non-alpha, feature-tagged tests"
# Serial vs. parallel is always determined by these regular expressions.
# Individual regular expressions are seperated by spaces for readability
# Individual regular expressions are separated by spaces for readability
# and expected to not contain spaces. Use dots instead. The complete
# regex for Ginkgo will be created by joining the individual terms.
configvar CSI_PROW_E2E_SERIAL '\[Serial\] \[Disruptive\]' "tags for serial E2E tests"
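# The regex_join helper used later in this file joins these space-separated
# terms with "|"; a minimal sketch of that behavior (editor's assumption, not
# the exact upstream code):
#   echo '\[Serial\] \[Disruptive\]' | sed -e 's/  */|/g'
#   # prints: \[Serial\]|\[Disruptive\]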
@ -326,15 +354,23 @@ configvar CSI_PROW_E2E_ALPHA "$(get_versioned_variable CSI_PROW_E2E_ALPHA "${csi
# kubernetes-csi components must be updated, either by disabling
# the failing test for "latest" or by updating the test and not running
# it anymore for older releases.
configvar CSI_PROW_E2E_ALPHA_GATES_LATEST 'GenericEphemeralVolume=true,CSIStorageCapacity=true' "alpha feature gates for latest Kubernetes"
configvar CSI_PROW_E2E_ALPHA_GATES_LATEST '' "alpha feature gates for latest Kubernetes"
configvar CSI_PROW_E2E_ALPHA_GATES "$(get_versioned_variable CSI_PROW_E2E_ALPHA_GATES "${csi_prow_kubernetes_version_suffix}")" "alpha E2E feature gates"
configvar CSI_PROW_E2E_GATES_LATEST '' "non alpha feature gates for latest Kubernetes"
configvar CSI_PROW_E2E_GATES "$(get_versioned_variable CSI_PROW_E2E_GATES "${csi_prow_kubernetes_version_suffix}")" "non alpha E2E feature gates"
# Focus for local tests run in the sidecar E2E repo. Only used if CSI_PROW_SIDECAR_E2E_IMPORT_PATH
# is not set to "none". If empty, all tests in the sidecar repo will be run.
configvar CSI_PROW_SIDECAR_E2E_FOCUS '' "tags for local E2E tests"
configvar CSI_PROW_SIDECAR_E2E_SKIP '' "local tests that need to be skipped"
# Which external-snapshotter tag to use for the snapshotter CRD and snapshot-controller deployment
default_csi_snapshotter_version () {
if [ "${CSI_PROW_KUBERNETES_VERSION}" = "latest" ] || [ "${CSI_PROW_DRIVER_CANARY}" = "canary" ]; then
echo "master"
else
echo "v3.0.2"
echo "v4.0.0"
fi
}
configvar CSI_SNAPSHOTTER_VERSION "$(default_csi_snapshotter_version)" "external-snapshotter version tag"
@ -345,16 +381,25 @@ configvar CSI_SNAPSHOTTER_VERSION "$(default_csi_snapshotter_version)" "external
# whether they can run with the current cluster provider, but until
# they are, we filter them out by name. Like the other test selection
# variables, this is again a space separated list of regular expressions.
#
# "different node" test skips can be removed once
# https://github.com/kubernetes/kubernetes/pull/82678 has been backported
# to all the K8s versions we test against
configvar CSI_PROW_E2E_SKIP 'Disruptive|different\s+node' "tests that need to be skipped"
configvar CSI_PROW_E2E_SKIP '\[Disruptive\]|\[Feature:SELinux\]' "tests that need to be skipped"
# This is the directory for additional result files. Usually set by Prow, but
# if not (for example, when invoking manually) it defaults to the work directory.
configvar ARTIFACTS "${CSI_PROW_WORK}/artifacts" "artifacts"
mkdir -p "${ARTIFACTS}"
# This creates directories that are required for testing.
ensure_paths () {
# Work directory. It has to allow running executables, therefore /tmp
# is avoided. Cleaning up after the script is intentionally left to
# the caller.
configvar CSI_PROW_WORK "$(mkdir -p "$GOPATH/pkg" && mktemp -d "$GOPATH/pkg/csiprow.XXXXXXXXXX")" "work directory"
# This is the directory for additional result files. Usually set by Prow, but
# if not (for example, when invoking manually) it defaults to the work directory.
configvar ARTIFACTS "${CSI_PROW_WORK}/artifacts" "artifacts"
mkdir -p "${ARTIFACTS}"
# For additional tools.
CSI_PROW_BIN="${CSI_PROW_WORK}/bin"
mkdir -p "${CSI_PROW_BIN}"
PATH="${CSI_PROW_BIN}:$PATH"
}
run () {
echo "$(date) $(go version | sed -e 's/.*version \(go[^ ]*\).*/\1/') $(if [ "$(pwd)" != "${REPO_DIR}" ]; then pwd; fi)\$" "$@" >&2
@ -374,11 +419,6 @@ die () {
exit 1
}
# For additional tools.
CSI_PROW_BIN="${CSI_PROW_WORK}/bin"
mkdir -p "${CSI_PROW_BIN}"
PATH="${CSI_PROW_BIN}:$PATH"
# Ensure that PATH has the desired version of the Go tools, then run command given as argument.
# Empty parameter uses the already installed Go. In Prow, that version is kept up-to-date by
# bumping the container image regularly.
@ -407,20 +447,21 @@ install_kind () {
chmod u+x "${CSI_PROW_WORK}/bin/kind"
else
git_checkout https://github.com/kubernetes-sigs/kind "${GOPATH}/src/sigs.k8s.io/kind" "${CSI_PROW_KIND_VERSION}" --depth=1 &&
(cd "${GOPATH}/src/sigs.k8s.io/kind" && make install INSTALL_DIR="${CSI_PROW_WORK}/bin")
(cd "${GOPATH}/src/sigs.k8s.io/kind" && run_with_go "$CSI_PROW_GO_VERSION_KIND" make install INSTALL_DIR="${CSI_PROW_WORK}/bin")
fi
}
# Ensure that we have the desired version of the ginkgo test runner.
install_ginkgo () {
if [ -e "${CSI_PROW_BIN}/ginkgo" ]; then
return
fi
# CSI_PROW_GINKGO_VERSION contains the tag with v prefix, the command line output does not.
if [ "v$(ginkgo version 2>/dev/null | sed -e 's/.* //')" = "${CSI_PROW_GINKGO_VERSION}" ]; then
return
fi
git_checkout https://github.com/onsi/ginkgo "$GOPATH/src/github.com/onsi/ginkgo" "${CSI_PROW_GINKGO_VERSION}" --depth=1 &&
# We have to get dependencies and hence can't call just "go build".
run_with_go "${CSI_PROW_GO_VERSION_GINKGO}" go get github.com/onsi/ginkgo/ginkgo || die "building ginkgo failed" &&
mv "$GOPATH/bin/ginkgo" "${CSI_PROW_BIN}"
run_with_go "${CSI_PROW_GO_VERSION_GINKGO}" env GOBIN="${CSI_PROW_BIN}" go install "github.com/onsi/ginkgo/ginkgo@${CSI_PROW_GINKGO_VERSION}" || die "building ginkgo failed"
}
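# The module-aware "go install <package>@<version>" form above requires
# Go 1.16+ and does not modify any local go.mod; standalone equivalent
# (the version shown is illustrative):
#   env GOBIN="$HOME/bin" go install github.com/onsi/ginkgo/ginkgo@v1.14.0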
# Ensure that we have the desired version of dep.
@ -465,20 +506,22 @@ git_checkout () {
# This clones a repo ("https://github.com/kubernetes/kubernetes")
# in a certain location ("$GOPATH/src/k8s.io/kubernetes") at
# the head of a specific branch (e.g., release-1.13, master).
# The directory cannot exist.
git_clone_branch () {
local repo path branch parent
# the head of a specific branch (e.g., release-1.13, master),
# tag (v1.20.0) or commit.
#
# The directory must not exist.
git_clone () {
local repo path name parent
repo="$1"
shift
path="$1"
shift
branch="$1"
name="$1"
shift
parent="$(dirname "$path")"
mkdir -p "$parent"
(cd "$parent" && run git clone --single-branch --branch "$branch" "$repo" "$path") || die "cloning $repo" failed
(cd "$parent" && run git clone --single-branch --branch "$name" "$repo" "$path") || die "cloning $repo" failed
# This is useful for local testing or when switching between different revisions in the same
# repo.
(cd "$path" && run git clean -fdx) || die "failed to clean $path"
@ -562,16 +605,12 @@ start_cluster () {
if [ "$version" = "latest" ]; then
version=master
fi
if ${CSI_PROW_USE_BAZEL}; then
type="bazel"
else
type="docker"
fi
git_clone_branch https://github.com/kubernetes/kubernetes "${CSI_PROW_WORK}/src/kubernetes" "$version" || die "checking out Kubernetes $version failed"
git_clone https://github.com/kubernetes/kubernetes "${CSI_PROW_WORK}/src/kubernetes" "$(version_to_git "$version")" || die "checking out Kubernetes $version failed"
go_version="$(go_version_for_kubernetes "${CSI_PROW_WORK}/src/kubernetes" "$version")" || die "cannot proceed without knowing Go version for Kubernetes"
# Changing into the Kubernetes source code directory is a workaround for https://github.com/kubernetes-sigs/kind/issues/1910
(cd "${CSI_PROW_WORK}/src/kubernetes" && run_with_go "$go_version" kind build node-image --image csiprow/node:latest --type="$type" --kube-root "${CSI_PROW_WORK}/src/kubernetes") || die "'kind build node-image' failed"
# shellcheck disable=SC2046
(cd "${CSI_PROW_WORK}/src/kubernetes" && run_with_go "$go_version" kind build node-image --image csiprow/node:latest --kube-root "${CSI_PROW_WORK}/src/kubernetes") || die "'kind build node-image' failed"
csi_prow_kind_have_kubernetes=true
fi
image="csiprow/node:latest"
@ -605,11 +644,16 @@ EOF
# Deletes kind cluster inside a prow job
delete_cluster_inside_prow_job() {
local name="$1"
# Inside a real Prow job it is better to clean up at runtime
# instead of leaving that to the Prow job cleanup code
# because the latter sometimes times out (https://github.com/kubernetes-csi/csi-release-tools/issues/24#issuecomment-554765872).
#
# This is also a good time to collect logs.
if [ "$JOB_NAME" ]; then
if kind get clusters | grep -q csi-prow; then
run kind export logs --name=csi-prow "${ARTIFACTS}/cluster-logs/$name"
run kind delete cluster --name=csi-prow || die "kind delete failed"
fi
unset KUBECONFIG
@ -619,24 +663,38 @@ delete_cluster_inside_prow_job() {
# Looks for the deployment as specified by CSI_PROW_DEPLOYMENT and CSI_PROW_KUBERNETES_VERSION
# in the given directory.
find_deployment () {
local dir file
dir="$1"
# Fixed deployment name? Use it if it exists, otherwise fail.
if [ "${CSI_PROW_DEPLOYMENT}" ]; then
file="$dir/${CSI_PROW_DEPLOYMENT}/deploy.sh"
if ! [ -e "$file" ]; then
return 1
fi
echo "$file"
return 0
fi
local dir="$1"
local file
# major/minor without release- prefix.
local k8sver
# Ignore: See if you can use ${variable//search/replace} instead.
# shellcheck disable=SC2001
file="$dir/kubernetes-$(echo "${CSI_PROW_KUBERNETES_VERSION}" | sed -e 's/\([0-9]*\)\.\([0-9]*\).*/\1.\2/')/deploy.sh"
k8sver="$(echo "${CSI_PROW_KUBERNETES_VERSION}" | sed -e 's/^release-//' -e 's/\([0-9]*\)\.\([0-9]*\).*/\1.\2/')"
# Desired deployment, either specified completely, including version, or derived from other variables.
local deployment
deployment=${CSI_PROW_DEPLOYMENT:-kubernetes-${k8sver}${CSI_PROW_DEPLOYMENT_SUFFIX}}
# Fixed deployment name? Use it if it exists.
if [ "${CSI_PROW_DEPLOYMENT}" ]; then
file="$dir/${CSI_PROW_DEPLOYMENT}/deploy.sh"
if [ -e "$file" ]; then
echo "$file"
return 0
fi
# CSI_PROW_DEPLOYMENT=kubernetes-x.yy must be mapped to kubernetes-latest
# as fallback. Same for kubernetes-distributed-x.yy.
fi
file="$dir/${deployment}/deploy.sh"
if ! [ -e "$file" ]; then
file="$dir/kubernetes-latest/deploy.sh"
# Replace the first xx.yy number with "latest", for example
# kubernetes-1.21-test -> kubernetes-latest-test.
# Ignore: See if you can use ${variable//search/replace} instead.
# shellcheck disable=SC2001
file="$dir/$(echo "$deployment" | sed -e 's/[0-9][0-9]*\.[0-9][0-9]*/latest/')/deploy.sh"
if ! [ -e "$file" ]; then
return 1
fi
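# Editor's illustration of the fallback rewrite above, using the example from
# the comment:
#   echo "kubernetes-1.21-test" | sed -e 's/[0-9][0-9]*\.[0-9][0-9]*/latest/'
#   # prints: kubernetes-latest-test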
@ -696,7 +754,7 @@ install_csi_driver () {
fi
}
# Installs all nessesary snapshotter CRDs
# Installs all necessary snapshotter CRDs
install_snapshot_crds() {
# Wait until volumesnapshot CRDs are in place.
CRD_BASE_DIR="https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${CSI_SNAPSHOTTER_VERSION}/client/config/crd"
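# Editor's sketch of applying the CRDs from that base URL; the file names are
# assumptions based on external-snapshotter's client/config/crd layout:
#   for crd in snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
#              snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
#              snapshot.storage.k8s.io_volumesnapshots.yaml; do
#       kubectl apply -f "${CRD_BASE_DIR}/${crd}"
#   done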
@ -743,7 +801,7 @@ install_snapshot_controller() {
fi
echo "$(date +%H:%M:%S)" "waiting for snapshot RBAC setup complete, attempt #$cnt"
cnt=$((cnt + 1))
sleep 10
done
SNAPSHOT_CONTROLLER_YAML="${CONTROLLER_DIR}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml"
@ -756,7 +814,7 @@ install_snapshot_controller() {
kind load docker-image --name csi-prow ${NEW_IMG} || die "could not load the snapshot-controller:csiprow image into the kind cluster"
# deploy snapshot-controller
echo "Deploying snapshot-controller"
echo "Deploying snapshot-controller from ${SNAPSHOT_CONTROLLER_YAML} with $NEW_IMG."
# Replace image in SNAPSHOT_CONTROLLER_YAML with snapshot-controller:csiprow and deploy
# NOTE: This logic is similar to the logic here:
# https://github.com/kubernetes-csi/csi-driver-host-path/blob/v1.4.0/deploy/util/deploy-hostpath.sh#L155
@ -773,7 +831,7 @@ install_snapshot_controller() {
modified="$(cat "$i" | while IFS= read -r line; do
nocomments="$(echo "$line" | sed -e 's/ *#.*$//')"
if echo "$nocomments" | grep -q '^[[:space:]]*image:[[:space:]]*'; then
# Split 'image: k8s.gcr.io/sig-storage/snapshot-controller:v3.0.0'
# Split 'image: registry.k8s.io/sig-storage/snapshot-controller:v3.0.0'
# into image (snapshot-controller:v3.0.0),
# name (snapshot-controller),
# tag (v3.0.0).
@ -793,25 +851,37 @@ install_snapshot_controller() {
echo "$modified"
exit 1
fi
echo "kubectl apply -f ${SNAPSHOT_CONTROLLER_YAML}(modified)"
done
elif [ "${CSI_PROW_DRIVER_CANARY}" = "canary" ]; then
echo "Deploying snapshot-controller from ${SNAPSHOT_CONTROLLER_YAML} with canary images."
yaml="$(kubectl apply --dry-run=client -o yaml -f "$SNAPSHOT_CONTROLLER_YAML")"
# Ignore: See if you can use ${variable//search/replace} instead.
# shellcheck disable=SC2001
modified="$(echo "$yaml" | sed -e "s;image: .*/\([^/:]*\):.*;image: ${CSI_PROW_DRIVER_CANARY_REGISTRY}/\1:canary;")"
diff <(echo "$yaml") <(echo "$modified")
if ! echo "$modified" | kubectl apply -f -; then
echo "modified version of $SNAPSHOT_CONTROLLER_YAML:"
echo "$modified"
exit 1
fi
else
echo "kubectl apply -f ${CONTROLLER_DIR}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml"
kubectl apply -f "${CONTROLLER_DIR}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml"
echo "kubectl apply -f $SNAPSHOT_CONTROLLER_YAML"
kubectl apply -f "$SNAPSHOT_CONTROLLER_YAML"
fi
cnt=0
expected_running_pods=$(curl https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/"${CSI_SNAPSHOTTER_VERSION}"/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml | grep replicas | cut -d ':' -f 2-)
while [ "$(kubectl get pods -l app=snapshot-controller | grep 'Running' -c)" -lt "$expected_running_pods" ]; do
expected_running_pods=$(kubectl apply --dry-run=client -o "jsonpath={.spec.replicas}" -f "$SNAPSHOT_CONTROLLER_YAML")
expected_namespace=$(kubectl apply --dry-run=client -o "jsonpath={.metadata.namespace}" -f "$SNAPSHOT_CONTROLLER_YAML")
while [ "$(kubectl get pods -n "$expected_namespace" -l app=snapshot-controller | grep 'Running' -c)" -lt "$expected_running_pods" ]; do
if [ $cnt -gt 30 ]; then
echo "snapshot-controller pod status:"
kubectl describe pods -l app=snapshot-controller
kubectl describe pods -n "$expected_namespace" -l app=snapshot-controller
echo >&2 "ERROR: snapshot controller not ready after over 5 min"
exit 1
fi
echo "$(date +%H:%M:%S)" "waiting for snapshot controller deployment to complete, attempt #$cnt"
cnt=$((cnt + 1))
sleep 10
done
}
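# The readiness loop above reads the expected replica count and namespace from
# the manifest itself via a client-side dry-run; the same technique standalone:
#   kubectl apply --dry-run=client -o "jsonpath={.spec.replicas}" -f setup-snapshot-controller.yaml
#   kubectl apply --dry-run=client -o "jsonpath={.metadata.namespace}" -f setup-snapshot-controller.yaml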
@ -856,17 +926,46 @@ start_loggers () {
done
}
# Patches the image versions of test/e2e/testing-manifests/storage-csi/mock in the k/k
# source code, if needed.
patch_kubernetes () {
local source="$1" target="$2"
if [ "${CSI_PROW_DRIVER_CANARY}" = "canary" ]; then
# We cannot replace registry.k8s.io/sig-storage with gcr.io/k8s-staging-sig-storage because
# e2e.test does not support it (see test/utils/image/manifest.go). Instead we
# invoke the e2e.test binary with KUBE_TEST_REPO_LIST set to a file that
# overrides that registry.
find "$source/test/e2e/testing-manifests/storage-csi/mock" -name '*.yaml' -print0 | xargs -0 sed -i -e 's;registry.k8s.io/sig-storage/\(.*\):v.*;registry.k8s.io/sig-storage/\1:canary;'
cat >"$target/e2e-repo-list" <<EOF
sigStorageRegistry: gcr.io/k8s-staging-sig-storage
EOF
cat >&2 <<EOF
Using a modified version of k/k/test/e2e:
$(cd "$source" && git diff 2>&1)
EOF
fi
}
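# The override file written above is consumed by e2e.test through the
# KUBE_TEST_REPO_LIST environment variable (see run_e2e below); illustrative
# invocation:
#   KUBE_TEST_REPO_LIST="${CSI_PROW_WORK}/e2e-repo-list" ginkgo -v "${CSI_PROW_WORK}/e2e.test" -- -report-dir "${ARTIFACTS}"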
# Makes the E2E test suite binary available as "${CSI_PROW_WORK}/e2e.test".
install_e2e () {
if [ -e "${CSI_PROW_WORK}/e2e.test" ]; then
return
fi
if sidecar_tests_enabled; then
run_with_go "${CSI_PROW_GO_VERSION_BUILD}" go test -c -o "${CSI_PROW_WORK}/e2e-local.test" "${CSI_PROW_SIDECAR_E2E_IMPORT_PATH}"
fi
git_checkout "${CSI_PROW_E2E_REPO}" "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" "${CSI_PROW_E2E_VERSION}" --depth=1 &&
if [ "${CSI_PROW_E2E_IMPORT_PATH}" = "k8s.io/kubernetes" ]; then
patch_kubernetes "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" "${CSI_PROW_WORK}" &&
go_version="${CSI_PROW_GO_VERSION_E2E:-$(go_version_for_kubernetes "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" "${CSI_PROW_E2E_VERSION}")}" &&
run_with_go "$go_version" make WHAT=test/e2e/e2e.test "-C${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
ln -s "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}/_output/bin/e2e.test" "${CSI_PROW_WORK}"
ln -s "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}/_output/bin/e2e.test" "${CSI_PROW_WORK}" &&
run_with_go "$go_version" make WHAT=vendor/github.com/onsi/ginkgo/ginkgo "-C${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
ln -s "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}/_output/bin/ginkgo" "${CSI_PROW_BIN}"
else
run_with_go "${CSI_PROW_GO_VERSION_E2E}" go test -c -o "${CSI_PROW_WORK}/e2e.test" "${CSI_PROW_E2E_IMPORT_PATH}/test/e2e"
fi
@ -879,8 +978,8 @@ install_sanity () (
return
fi
git_checkout "${CSI_PROW_SANITY_REPO}" "${GOPATH}/src/${CSI_PROW_SANITY_IMPORT_PATH}" "${CSI_PROW_SANITY_VERSION}" --depth=1 || die "checking out csi-sanity failed"
run_with_go "${CSI_PROW_GO_VERSION_SANITY}" go test -c -o "${CSI_PROW_WORK}/csi-sanity" "${CSI_PROW_SANITY_IMPORT_PATH}/cmd/csi-sanity" || die "building csi-sanity failed"
git_checkout "${CSI_PROW_SANITY_REPO}" "${GOPATH}/src/${CSI_PROW_SANITY_PACKAGE_PATH}" "${CSI_PROW_SANITY_VERSION}" --depth=1 || die "checking out csi-sanity failed"
( cd "${GOPATH}/src/${CSI_PROW_SANITY_PACKAGE_PATH}/cmd/csi-sanity" && run_with_go "${CSI_PROW_GO_VERSION_SANITY}" go build -o "${CSI_PROW_WORK}/csi-sanity" ) || die "building csi-sanity failed"
)
# Captures pod output while running some other command.
@ -909,25 +1008,38 @@ run_e2e () (
# the full Kubernetes E2E testsuite while only running a few tests.
move_junit () {
if ls "${ARTIFACTS}"/junit_[0-9]*.xml 2>/dev/null >/dev/null; then
run_filter_junit -t="External Storage" -o "${ARTIFACTS}/junit_${name}.xml" "${ARTIFACTS}"/junit_[0-9]*.xml && rm -f "${ARTIFACTS}"/junit_[0-9]*.xml
run_filter_junit -t="External.Storage|CSI.mock.volume" -o "${ARTIFACTS}/junit_${name}.xml" "${ARTIFACTS}"/junit_[0-9]*.xml && rm -f "${ARTIFACTS}"/junit_[0-9]*.xml
fi
}
trap move_junit EXIT
cd "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
run_with_loggers ginkgo -v "$@" "${CSI_PROW_WORK}/e2e.test" -- -report-dir "${ARTIFACTS}" -storage.testdriver="${CSI_PROW_WORK}/test-driver.yaml"
if [ "${name}" == "local" ]; then
cd "${GOPATH}/src/${CSI_PROW_SIDECAR_E2E_IMPORT_PATH}" &&
run_with_loggers env KUBECONFIG="$KUBECONFIG" KUBE_TEST_REPO_LIST="$(if [ -e "${CSI_PROW_WORK}/e2e-repo-list" ]; then echo "${CSI_PROW_WORK}/e2e-repo-list"; fi)" ginkgo -v "$@" "${CSI_PROW_WORK}/e2e-local.test" -- -report-dir "${ARTIFACTS}" -report-prefix local
else
cd "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
run_with_loggers env KUBECONFIG="$KUBECONFIG" KUBE_TEST_REPO_LIST="$(if [ -e "${CSI_PROW_WORK}/e2e-repo-list" ]; then echo "${CSI_PROW_WORK}/e2e-repo-list"; fi)" ginkgo -v "$@" "${CSI_PROW_WORK}/e2e.test" -- -report-dir "${ARTIFACTS}" -storage.testdriver="${CSI_PROW_WORK}/test-driver.yaml"
fi
)
# Run csi-sanity against installed CSI driver.
run_sanity () (
install_sanity || die "installing csi-sanity failed"
if [[ "${CSI_PROW_SANITY_POD}" =~ " " ]]; then
# Contains spaces, more complex than a simple pod name.
# Evaluate as a shell command.
pod=$(eval "${CSI_PROW_SANITY_POD}") || die "evaluation failed: CSI_PROW_SANITY_POD=${CSI_PROW_SANITY_POD}"
else
pod="${CSI_PROW_SANITY_POD}"
fi
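# The eval above allows CSI_PROW_SANITY_POD to hold a command rather than a
# fixed pod name; hypothetical example (label and jsonpath are assumptions):
#   CSI_PROW_SANITY_POD='kubectl get pods -l app.kubernetes.io/name=csi-hostpathplugin -o jsonpath={.items[0].metadata.name}'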
cat >"${CSI_PROW_WORK}/mkdir_in_pod.sh" <<EOF
#!/bin/sh
kubectl exec "${CSI_PROW_SANITY_POD}" -c "${CSI_PROW_SANITY_CONTAINER}" -- mkdir "\$@" && echo "\$@"
kubectl exec "$pod" -c "${CSI_PROW_SANITY_CONTAINER}" -- mkdir "\$@" && echo "\$@"
EOF
# Using "rm -rf" as fallback for "rmdir" is a workaround for:
# Node Service
# should work
# /nvme/gopath.tmp/src/github.com/kubernetes-csi/csi-test/pkg/sanity/node.go:624
# STEP: reusing connection to CSI driver at dns:///172.17.0.2:30896
@ -949,11 +1061,29 @@ EOF
# why it happened.
cat >"${CSI_PROW_WORK}/rmdir_in_pod.sh" <<EOF
#!/bin/sh
if ! kubectl exec "${CSI_PROW_SANITY_POD}" -c "${CSI_PROW_SANITY_CONTAINER}" -- rmdir "\$@"; then
kubectl exec "${CSI_PROW_SANITY_POD}" -c "${CSI_PROW_SANITY_CONTAINER}" -- rm -rf "\$@"
if ! kubectl exec "$pod" -c "${CSI_PROW_SANITY_CONTAINER}" -- rmdir "\$@"; then
kubectl exec "$pod" -c "${CSI_PROW_SANITY_CONTAINER}" -- rm -rf "\$@"
exit 1
fi
EOF
cat >"${CSI_PROW_WORK}/checkdir_in_pod.sh" <<EOF
#!/bin/sh
CHECK_PATH=\$(cat <<SCRIPT
if [ -f "\$@" ]; then
echo "file"
elif [ -d "\$@" ]; then
echo "directory"
elif [ -e "\$@" ]; then
echo "other"
else
echo "not_found"
fi
SCRIPT
)
kubectl exec "$pod" -c "${CSI_PROW_SANITY_CONTAINER}" -- /bin/sh -c "\${CHECK_PATH}"
EOF
chmod u+x "${CSI_PROW_WORK}"/*dir_in_pod.sh
# This cannot run in parallel, because -csi.junitfile output
@ -969,6 +1099,7 @@ EOF
-csi.createmountpathcmd "${CSI_PROW_WORK}/mkdir_in_pod.sh" \
-csi.removestagingpathcmd "${CSI_PROW_WORK}/rmdir_in_pod.sh" \
-csi.removemountpathcmd "${CSI_PROW_WORK}/rmdir_in_pod.sh" \
-csi.checkpathcmd "${CSI_PROW_WORK}/checkdir_in_pod.sh" \
)
ascii_to_xml () {
@ -998,7 +1129,7 @@ make_test_to_junit () {
echo "$line" # pass through
if echo "$line" | grep -q "^### [^ ]*:$"; then
if [ "$testname" ]; then
# previous test succesful
# previous test successful
echo " </system-out>" >>"$out"
echo " </testcase>" >>"$out"
fi
@ -1072,7 +1203,7 @@ make_test_to_junit () {
# version_gt 1.3.1 v1.2.0 (returns true)
# version_gt 1.1.1 release-1.2.0 (returns false)
# version_gt 1.2.0 1.2.2 (returns false)
function version_gt() {
versions=$(for ver in "$@"; do ver=${ver#release-}; ver=${ver#kubernetes-}; echo "${ver#v}"; done)
greaterVersion=${1#"release-"};
greaterVersion=${greaterVersion#"kubernetes-"};
@ -1084,6 +1215,9 @@ main () {
local images ret
ret=0
# Set up work directory.
ensure_paths
images=
if ${CSI_PROW_BUILD_JOB}; then
# A successful build is required for testing.
@ -1132,7 +1266,7 @@ main () {
if [ "$rbac_file_path" == "" ]; then
rbac_file_path="$(pwd)/deploy/kubernetes/rbac.yaml"
fi
if [ -e "$rbac_file_path" ]; then
# This is one of those components which has its own RBAC rules (like external-provisioner).
# We are testing a locally built image and also want to test with the current,
@ -1143,13 +1277,21 @@ main () {
done
fi
# Run the external driver tests and optionally also mock tests.
local focus="External.Storage"
if "$CSI_PROW_E2E_MOCK"; then
focus="($focus|CSI.mock.volume)"
fi
if tests_need_non_alpha_cluster; then
start_cluster || die "starting the non-alpha cluster failed"
# Need to (re)create the cluster.
start_cluster "${CSI_PROW_E2E_GATES}" || die "starting the non-alpha cluster failed"
# Install necessary snapshot CRDs and snapshot controller
install_snapshot_crds
install_snapshot_controller
# Installing the driver might be disabled.
if ${CSI_PROW_DRIVER_INSTALL} "$images"; then
collect_cluster_info
@ -1164,7 +1306,7 @@ main () {
# Ignore: Double quote to prevent globbing and word splitting.
# shellcheck disable=SC2086
if ! run_e2e parallel ${CSI_PROW_GINKO_PARALLEL} \
-focus="External.Storage" \
-focus="$focus" \
-skip="$(regex_join "${CSI_PROW_E2E_SERIAL}" "${CSI_PROW_E2E_ALPHA}" "${CSI_PROW_E2E_SKIP}")"; then
warn "E2E parallel failed"
ret=1
@ -1174,7 +1316,7 @@ main () {
# Ignore: Double quote to prevent globbing and word splitting.
# shellcheck disable=SC2086
if ! run_e2e parallel-features ${CSI_PROW_GINKO_PARALLEL} \
-focus="External.Storage.*($(regex_join "${CSI_PROW_E2E_FOCUS}"))" \
-focus="$focus.*($(regex_join "${CSI_PROW_E2E_FOCUS}"))" \
-skip="$(regex_join "${CSI_PROW_E2E_SERIAL}")"; then
warn "E2E parallel features failed"
ret=1
@ -1183,17 +1325,30 @@ main () {
if tests_enabled "serial"; then
if ! run_e2e serial \
-focus="External.Storage.*($(regex_join "${CSI_PROW_E2E_SERIAL}"))" \
-focus="$focus.*($(regex_join "${CSI_PROW_E2E_SERIAL}"))" \
-skip="$(regex_join "${CSI_PROW_E2E_ALPHA}" "${CSI_PROW_E2E_SKIP}")"; then
warn "E2E serial failed"
ret=1
fi
fi
if sidecar_tests_enabled; then
if ! run_e2e local \
-focus="${CSI_PROW_SIDECAR_E2E_FOCUS}" \
-skip="$(regex_join "${CSI_PROW_E2E_SERIAL}")"; then
warn "E2E sidecar failed"
ret=1
fi
fi
fi
delete_cluster_inside_prow_job
delete_cluster_inside_prow_job non-alpha
fi
if tests_need_alpha_cluster && [ "${CSI_PROW_E2E_ALPHA_GATES}" ]; then
# If the cluster for alpha tests doesn't need any feature gates, then we
# could reuse the same cluster as for the other tests. But that would make
# the flow in this script harder and wouldn't help in practice because
# we have separate Prow jobs for alpha and non-alpha tests.
if tests_need_alpha_cluster; then
# Need to (re)create the cluster.
start_cluster "${CSI_PROW_E2E_ALPHA_GATES}" || die "starting alpha cluster failed"
@ -1209,7 +1364,7 @@ main () {
# Ignore: Double quote to prevent globbing and word splitting.
# shellcheck disable=SC2086
if ! run_e2e parallel-alpha ${CSI_PROW_GINKO_PARALLEL} \
-focus="External.Storage.*($(regex_join "${CSI_PROW_E2E_ALPHA}"))" \
-focus="$focus.*($(regex_join "${CSI_PROW_E2E_ALPHA}"))" \
-skip="$(regex_join "${CSI_PROW_E2E_SERIAL}" "${CSI_PROW_E2E_SKIP}")"; then
warn "E2E parallel alpha failed"
ret=1
@ -1218,14 +1373,14 @@ main () {
if tests_enabled "serial-alpha"; then
if ! run_e2e serial-alpha \
-focus="External.Storage.*(($(regex_join "${CSI_PROW_E2E_SERIAL}")).*($(regex_join "${CSI_PROW_E2E_ALPHA}"))|($(regex_join "${CSI_PROW_E2E_ALPHA}")).*($(regex_join "${CSI_PROW_E2E_SERIAL}")))" \
-focus="$focus.*(($(regex_join "${CSI_PROW_E2E_SERIAL}")).*($(regex_join "${CSI_PROW_E2E_ALPHA}"))|($(regex_join "${CSI_PROW_E2E_ALPHA}")).*($(regex_join "${CSI_PROW_E2E_SERIAL}")))" \
-skip="$(regex_join "${CSI_PROW_E2E_SKIP}")"; then
warn "E2E serial alpha failed"
ret=1
fi
fi
fi
delete_cluster_inside_prow_job
delete_cluster_inside_prow_job alpha
fi
fi
@ -1245,6 +1400,9 @@ gcr_cloud_build () {
# Required for "docker buildx build --push".
gcloud auth configure-docker
# Might not be needed here, but call it just in case.
ensure_paths
if find . -name Dockerfile | grep -v ^./vendor | xargs --no-run-if-empty cat | grep -q ^RUN; then
# Needed for "RUN" steps on non-linux/amd64 platforms.
# See https://github.com/multiarch/qemu-user-static#getting-started

release-tools/pull-test.sh Executable file

@ -0,0 +1,32 @@
#! /bin/sh
# Copyright 2021 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script is called by pull Prow jobs for the csi-release-tools
# repo to ensure that the changes in the PR work when imported into
# some other repo.
set -ex
# It must be called inside the updated csi-release-tools repo.
CSI_RELEASE_TOOLS_DIR="$(pwd)"
# Update the other repo.
cd "$PULL_TEST_REPO_DIR"
git subtree pull --squash --prefix=release-tools "$CSI_RELEASE_TOOLS_DIR" master
git log -n2
# Now fall through to testing.
exec ./.prow.sh

release-tools/travis.yml

@ -1,21 +0,0 @@
language: go
sudo: required
services:
- docker
git:
depth: false
matrix:
include:
- go: 1.15
before_script:
- mkdir -p bin
- wget https://github.com/golang/dep/releases/download/v0.5.1/dep-linux-amd64 -O bin/dep
- chmod u+x bin/dep
- export PATH=$PWD/bin:$PATH
script:
- make -k all test GOFLAGS_VENDOR=$( [ -d vendor ] && echo '-mod=vendor' )
after_success:
- if [ "${TRAVIS_PULL_REQUEST}" == "false" ]; then
docker login -u "${DOCKER_USERNAME}" -p "${DOCKER_PASSWORD}" quay.io;
make push GOFLAGS_VENDOR=$( [ -d vendor ] && echo '-mod=vendor' );
fi

release-tools/verify-boilerplate.sh

@ -0,0 +1,54 @@
#!/bin/bash
# Copyright 2014 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
echo "Verifying boilerplate"
if [[ -z "$(command -v python)" ]]; then
echo "Cannot find python. Make link to python3..."
update-alternatives --install /usr/bin/python python /usr/bin/python3 1
fi
# The csi-release-tools directory (absolute path).
TOOLS="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd -P)"
# Directory to check. Default is the parent of the tools themselves.
ROOT="${1:-${TOOLS}/..}"
boiler="${TOOLS}/boilerplate/boilerplate.py"
mapfile -t files_need_boilerplate < <("${boiler}" --rootdir="${ROOT}" --verbose)
# Run boilerplate.py unit tests
unitTestOut="$(mktemp)"
trap cleanup EXIT
cleanup() {
rm "${unitTestOut}"
}
# Run boilerplate check
if [[ ${#files_need_boilerplate[@]} -gt 0 ]]; then
for file in "${files_need_boilerplate[@]}"; do
echo "Boilerplate header is wrong for: ${file}"
done
exit 1
fi
echo "Done"

release-tools/verify-go-version.sh

@ -29,8 +29,9 @@ die () {
version=$("$GO" version) || die "determining version of $GO failed"
# shellcheck disable=SC2001
majorminor=$(echo "$version" | sed -e 's/.*go\([0-9]*\)\.\([0-9]*\).*/\1.\2/')
# shellcheck disable=SC2001
expected=$(grep "^ *- go:" "release-tools/travis.yml" | sed -e 's/.*go: *\([0-9]*\)\.\([0-9]*\).*/\1.\2/')
# SC1091: Not following: release-tools/prow.sh was not specified as input (see shellcheck -x).
# shellcheck disable=SC1091
expected=$(. release-tools/prow.sh >/dev/null && echo "$CSI_PROW_GO_VERSION_BUILD")
if [ "$majorminor" != "$expected" ]; then
cat >&2 <<EOF

release-tools/verify-shellcheck.sh

@ -84,7 +84,7 @@ done < <(find . -name "*.sh" \
# detect if the host machine has the required shellcheck version installed
# if so, we will use that instead.
HAVE_SHELLCHECK=false
if which shellcheck &>/dev/null; then
if command -v shellcheck &>/dev/null; then
detected_version="$(shellcheck --version | grep 'version: .*')"
if [[ "${detected_version}" = "version: ${SHELLCHECK_VERSION}" ]]; then
HAVE_SHELLCHECK=true

release-tools/verify-spelling.sh

@ -0,0 +1,59 @@
#!/usr/bin/env bash
# Copyright 2019 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
TOOL_VERSION="v0.3.4"
# The csi-release-tools directory (absolute path).
TOOLS="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd -P)"
# Directory to check. Default is the parent of the tools themselves.
ROOT="${1:-${TOOLS}/..}"
# create a temporary directory
TMP_DIR=$(mktemp -d)
# cleanup
exitHandler() (
echo "Cleaning up..."
rm -rf "${TMP_DIR}"
)
trap exitHandler EXIT
if [[ -z "$(command -v misspell)" ]]; then
echo "Cannot find misspell. Installing misspell..."
# perform the install in a temp dir as we are not tracking this version in a go module
# if we do the install in the repo, it could create / update a go.mod and go.sum
cd "${TMP_DIR}"
GO111MODULE=on GOBIN="${TMP_DIR}" go install "github.com/client9/misspell/cmd/misspell@${TOOL_VERSION}"
export PATH="${TMP_DIR}:${PATH}"
fi
# check spelling
RES=0
echo "Checking spelling..."
ERROR_LOG="${TMP_DIR}/errors.log"
cd "${ROOT}"
git ls-files | grep -v vendor | xargs misspell > "${ERROR_LOG}"
if [[ -s "${ERROR_LOG}" ]]; then
sed 's/^/error: /' "${ERROR_LOG}" # add 'error' to each line to highlight in e2e status
echo "Found spelling errors!"
RES=1
fi
exit "${RES}"


@ -1,5 +1,5 @@
#! /bin/sh -e
#
# Copyright 2019 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");

vendor/github.com/beorn7/perks/LICENSE generated vendored Normal file

@ -0,0 +1,20 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/beorn7/perks/quantile/exampledata.txt generated vendored Normal file
File diff suppressed because it is too large

vendor/github.com/beorn7/perks/quantile/stream.go generated vendored Normal file

@ -0,0 +1,316 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targetMap map[float64]float64) *Stream {
// Convert map to slice to avoid slow iterations on a map.
// ƒ is called on the hot path, so converting the map to a slice
// beforehand results in significant CPU savings.
targets := targetMapToSlice(targetMap)
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for _, t := range targets {
if t.quantile*s.n <= r {
f = (2 * t.epsilon * r) / t.quantile
} else {
f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
type target struct {
quantile float64
epsilon float64
}
func targetMapToSlice(targetMap map[float64]float64) []target {
targets := make([]target, 0, len(targetMap))
for quantile, epsilon := range targetMap {
t := target{
quantile: quantile,
epsilon: epsilon,
}
targets = append(targets, t)
}
return targets
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed qth percentiles value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying streams samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}

vendor/github.com/cespare/xxhash/v2/.travis.yml generated vendored Normal file

@ -0,0 +1,8 @@
language: go
go:
- "1.x"
- master
env:
- TAGS=""
- TAGS="-tags purego"
script: go test $TAGS -v ./...

vendor/github.com/cespare/xxhash/v2/LICENSE.txt generated vendored Normal file

@ -0,0 +1,22 @@
Copyright (c) 2016 Caleb Spare
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/cespare/xxhash/v2/README.md generated vendored Normal file

@ -0,0 +1,67 @@
# xxhash
[![GoDoc](https://godoc.org/github.com/cespare/xxhash?status.svg)](https://godoc.org/github.com/cespare/xxhash)
[![Build Status](https://travis-ci.org/cespare/xxhash.svg?branch=master)](https://travis-ci.org/cespare/xxhash)
xxhash is a Go implementation of the 64-bit
[xxHash](http://cyan4973.github.io/xxHash/) algorithm, XXH64. This is a
high-quality hashing algorithm that is much faster than anything in the Go
standard library.
This package provides a straightforward API:
```
func Sum64(b []byte) uint64
func Sum64String(s string) uint64
type Digest struct{ ... }
func New() *Digest
```
The `Digest` type implements hash.Hash64. Its key methods are:
```
func (*Digest) Write([]byte) (int, error)
func (*Digest) WriteString(string) (int, error)
func (*Digest) Sum64() uint64
```
This implementation provides a fast pure-Go implementation and an even faster
assembly implementation for amd64.
## Compatibility
This package is in a module and the latest code is in version 2 of the module.
You need a version of Go with at least "minimal module compatibility" to use
github.com/cespare/xxhash/v2:
* 1.9.7+ for Go 1.9
* 1.10.3+ for Go 1.10
* Go 1.11 or later
I recommend using the latest release of Go.
## Benchmarks
Here are some quick benchmarks comparing the pure-Go and assembly
implementations of Sum64.
| input size | purego | asm |
| --- | --- | --- |
| 5 B | 979.66 MB/s | 1291.17 MB/s |
| 100 B | 7475.26 MB/s | 7973.40 MB/s |
| 4 KB | 17573.46 MB/s | 17602.65 MB/s |
| 10 MB | 17131.46 MB/s | 17142.16 MB/s |
These numbers were generated on Ubuntu 18.04 with an Intel i7-8700K CPU using
the following commands under Go 1.11.2:
```
$ go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes'
$ go test -benchtime 10s -bench '/xxhash,direct,bytes'
```
## Projects using this package
- [InfluxDB](https://github.com/influxdata/influxdb)
- [Prometheus](https://github.com/prometheus/prometheus)
- [FreeCache](https://github.com/coocood/freecache)

vendor/github.com/cespare/xxhash/v2/xxhash.go generated vendored Normal file

@ -0,0 +1,236 @@
// Package xxhash implements the 64-bit variant of xxHash (XXH64) as described
// at http://cyan4973.github.io/xxHash/.
package xxhash
import (
"encoding/binary"
"errors"
"math/bits"
)
const (
prime1 uint64 = 11400714785074694791
prime2 uint64 = 14029467366897019727
prime3 uint64 = 1609587929392839161
prime4 uint64 = 9650029242287828579
prime5 uint64 = 2870177450012600261
)
// NOTE(caleb): I'm using both consts and vars of the primes. Using consts where
// possible in the Go code is worth a small (but measurable) performance boost
// by avoiding some MOVQs. Vars are needed for the asm and also are useful for
// convenience in the Go code in a few places where we need to intentionally
// avoid constant arithmetic (e.g., v1 := prime1 + prime2 fails because the
// result overflows a uint64).
var (
prime1v = prime1
prime2v = prime2
prime3v = prime3
prime4v = prime4
prime5v = prime5
)
// Digest implements hash.Hash64.
type Digest struct {
v1 uint64
v2 uint64
v3 uint64
v4 uint64
total uint64
mem [32]byte
n int // how much of mem is used
}
// New creates a new Digest that computes the 64-bit xxHash algorithm.
func New() *Digest {
var d Digest
d.Reset()
return &d
}
// Reset clears the Digest's state so that it can be reused.
func (d *Digest) Reset() {
d.v1 = prime1v + prime2
d.v2 = prime2
d.v3 = 0
d.v4 = -prime1v
d.total = 0
d.n = 0
}
// Size always returns 8 bytes.
func (d *Digest) Size() int { return 8 }
// BlockSize always returns 32 bytes.
func (d *Digest) BlockSize() int { return 32 }
// Write adds more data to d. It always returns len(b), nil.
func (d *Digest) Write(b []byte) (n int, err error) {
n = len(b)
d.total += uint64(n)
if d.n+n < 32 {
// This new data doesn't even fill the current block.
copy(d.mem[d.n:], b)
d.n += n
return
}
if d.n > 0 {
// Finish off the partial block.
copy(d.mem[d.n:], b)
d.v1 = round(d.v1, u64(d.mem[0:8]))
d.v2 = round(d.v2, u64(d.mem[8:16]))
d.v3 = round(d.v3, u64(d.mem[16:24]))
d.v4 = round(d.v4, u64(d.mem[24:32]))
b = b[32-d.n:]
d.n = 0
}
if len(b) >= 32 {
// One or more full blocks left.
nw := writeBlocks(d, b)
b = b[nw:]
}
// Store any remaining partial block.
copy(d.mem[:], b)
d.n = len(b)
return
}
// Sum appends the current hash to b and returns the resulting slice.
func (d *Digest) Sum(b []byte) []byte {
s := d.Sum64()
return append(
b,
byte(s>>56),
byte(s>>48),
byte(s>>40),
byte(s>>32),
byte(s>>24),
byte(s>>16),
byte(s>>8),
byte(s),
)
}
// Sum64 returns the current hash.
func (d *Digest) Sum64() uint64 {
var h uint64
if d.total >= 32 {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = d.v3 + prime5
}
h += d.total
i, end := 0, d.n
for ; i+8 <= end; i += 8 {
k1 := round(0, u64(d.mem[i:i+8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if i+4 <= end {
h ^= uint64(u32(d.mem[i:i+4])) * prime1
h = rol23(h)*prime2 + prime3
i += 4
}
for i < end {
h ^= uint64(d.mem[i]) * prime5
h = rol11(h) * prime1
i++
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
const (
magic = "xxh\x06"
marshaledSize = len(magic) + 8*5 + 32
)
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (d *Digest) MarshalBinary() ([]byte, error) {
b := make([]byte, 0, marshaledSize)
b = append(b, magic...)
b = appendUint64(b, d.v1)
b = appendUint64(b, d.v2)
b = appendUint64(b, d.v3)
b = appendUint64(b, d.v4)
b = appendUint64(b, d.total)
b = append(b, d.mem[:d.n]...)
b = b[:len(b)+len(d.mem)-d.n]
return b, nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (d *Digest) UnmarshalBinary(b []byte) error {
if len(b) < len(magic) || string(b[:len(magic)]) != magic {
return errors.New("xxhash: invalid hash state identifier")
}
if len(b) != marshaledSize {
return errors.New("xxhash: invalid hash state size")
}
b = b[len(magic):]
b, d.v1 = consumeUint64(b)
b, d.v2 = consumeUint64(b)
b, d.v3 = consumeUint64(b)
b, d.v4 = consumeUint64(b)
b, d.total = consumeUint64(b)
copy(d.mem[:], b)
b = b[len(d.mem):]
d.n = int(d.total % uint64(len(d.mem)))
return nil
}
func appendUint64(b []byte, x uint64) []byte {
var a [8]byte
binary.LittleEndian.PutUint64(a[:], x)
return append(b, a[:]...)
}
func consumeUint64(b []byte) ([]byte, uint64) {
x := u64(b)
return b[8:], x
}
func u64(b []byte) uint64 { return binary.LittleEndian.Uint64(b) }
func u32(b []byte) uint32 { return binary.LittleEndian.Uint32(b) }
func round(acc, input uint64) uint64 {
acc += input * prime2
acc = rol31(acc)
acc *= prime1
return acc
}
func mergeRound(acc, val uint64) uint64 {
val = round(0, val)
acc ^= val
acc = acc*prime1 + prime4
return acc
}
func rol1(x uint64) uint64 { return bits.RotateLeft64(x, 1) }
func rol7(x uint64) uint64 { return bits.RotateLeft64(x, 7) }
func rol11(x uint64) uint64 { return bits.RotateLeft64(x, 11) }
func rol12(x uint64) uint64 { return bits.RotateLeft64(x, 12) }
func rol18(x uint64) uint64 { return bits.RotateLeft64(x, 18) }
func rol23(x uint64) uint64 { return bits.RotateLeft64(x, 23) }
func rol27(x uint64) uint64 { return bits.RotateLeft64(x, 27) }
func rol31(x uint64) uint64 { return bits.RotateLeft64(x, 31) }

vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go generated vendored Normal file

@ -0,0 +1,13 @@
// +build !appengine
// +build gc
// +build !purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
//
//go:noescape
func Sum64(b []byte) uint64
//go:noescape
func writeBlocks(d *Digest, b []byte) int

vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s generated vendored Normal file

@ -0,0 +1,215 @@
// +build !appengine
// +build gc
// +build !purego
#include "textflag.h"
// Register allocation:
// AX h
// CX pointer to advance through b
// DX n
// BX loop end
// R8 v1, k1
// R9 v2
// R10 v3
// R11 v4
// R12 tmp
// R13 prime1v
// R14 prime2v
// R15 prime4v
// round reads from and advances the buffer pointer in CX.
// It assumes that R13 has prime1v and R14 has prime2v.
#define round(r) \
MOVQ (CX), R12 \
ADDQ $8, CX \
IMULQ R14, R12 \
ADDQ R12, r \
ROLQ $31, r \
IMULQ R13, r
// mergeRound applies a merge round on the two registers acc and val.
// It assumes that R13 has prime1v, R14 has prime2v, and R15 has prime4v.
#define mergeRound(acc, val) \
IMULQ R14, val \
ROLQ $31, val \
IMULQ R13, val \
XORQ val, acc \
IMULQ R13, acc \
ADDQ R15, acc
// func Sum64(b []byte) uint64
TEXT ·Sum64(SB), NOSPLIT, $0-32
// Load fixed primes.
MOVQ ·prime1v(SB), R13
MOVQ ·prime2v(SB), R14
MOVQ ·prime4v(SB), R15
// Load slice.
MOVQ b_base+0(FP), CX
MOVQ b_len+8(FP), DX
LEAQ (CX)(DX*1), BX
// The first loop limit will be len(b)-32.
SUBQ $32, BX
// Check whether we have at least one block.
CMPQ DX, $32
JLT noBlocks
// Set up initial state (v1, v2, v3, v4).
MOVQ R13, R8
ADDQ R14, R8
MOVQ R14, R9
XORQ R10, R10
XORQ R11, R11
SUBQ R13, R11
// Loop until CX > BX.
blockLoop:
round(R8)
round(R9)
round(R10)
round(R11)
CMPQ CX, BX
JLE blockLoop
MOVQ R8, AX
ROLQ $1, AX
MOVQ R9, R12
ROLQ $7, R12
ADDQ R12, AX
MOVQ R10, R12
ROLQ $12, R12
ADDQ R12, AX
MOVQ R11, R12
ROLQ $18, R12
ADDQ R12, AX
mergeRound(AX, R8)
mergeRound(AX, R9)
mergeRound(AX, R10)
mergeRound(AX, R11)
JMP afterBlocks
noBlocks:
MOVQ ·prime5v(SB), AX
afterBlocks:
ADDQ DX, AX
// Right now BX has len(b)-32, and we want to loop until CX > len(b)-8.
ADDQ $24, BX
CMPQ CX, BX
JG fourByte
wordLoop:
// Calculate k1.
MOVQ (CX), R8
ADDQ $8, CX
IMULQ R14, R8
ROLQ $31, R8
IMULQ R13, R8
XORQ R8, AX
ROLQ $27, AX
IMULQ R13, AX
ADDQ R15, AX
CMPQ CX, BX
JLE wordLoop
fourByte:
ADDQ $4, BX
CMPQ CX, BX
JG singles
MOVL (CX), R8
ADDQ $4, CX
IMULQ R13, R8
XORQ R8, AX
ROLQ $23, AX
IMULQ R14, AX
ADDQ ·prime3v(SB), AX
singles:
ADDQ $4, BX
CMPQ CX, BX
JGE finalize
singlesLoop:
MOVBQZX (CX), R12
ADDQ $1, CX
IMULQ ·prime5v(SB), R12
XORQ R12, AX
ROLQ $11, AX
IMULQ R13, AX
CMPQ CX, BX
JL singlesLoop
finalize:
MOVQ AX, R12
SHRQ $33, R12
XORQ R12, AX
IMULQ R14, AX
MOVQ AX, R12
SHRQ $29, R12
XORQ R12, AX
IMULQ ·prime3v(SB), AX
MOVQ AX, R12
SHRQ $32, R12
XORQ R12, AX
MOVQ AX, ret+24(FP)
RET
// writeBlocks uses the same registers as above except that it uses AX to store
// the d pointer.
// func writeBlocks(d *Digest, b []byte) int
TEXT ·writeBlocks(SB), NOSPLIT, $0-40
// Load fixed primes needed for round.
MOVQ ·prime1v(SB), R13
MOVQ ·prime2v(SB), R14
// Load slice.
MOVQ b_base+8(FP), CX
MOVQ b_len+16(FP), DX
LEAQ (CX)(DX*1), BX
SUBQ $32, BX
// Load vN from d.
MOVQ d+0(FP), AX
MOVQ 0(AX), R8 // v1
MOVQ 8(AX), R9 // v2
MOVQ 16(AX), R10 // v3
MOVQ 24(AX), R11 // v4
// We don't need to check the loop condition here; this function is
// always called with at least one block of data to process.
blockLoop:
round(R8)
round(R9)
round(R10)
round(R11)
CMPQ CX, BX
JLE blockLoop
// Copy vN back to d.
MOVQ R8, 0(AX)
MOVQ R9, 8(AX)
MOVQ R10, 16(AX)
MOVQ R11, 24(AX)
// The number of bytes written is CX minus the old base pointer.
SUBQ b_base+8(FP), CX
MOVQ CX, ret+32(FP)
RET

vendor/github.com/cespare/xxhash/v2/xxhash_other.go generated vendored Normal file

@ -0,0 +1,76 @@
// +build !amd64 appengine !gc purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
func Sum64(b []byte) uint64 {
// A simpler version would be
// d := New()
// d.Write(b)
// return d.Sum64()
// but this is faster, particularly for small inputs.
n := len(b)
var h uint64
if n >= 32 {
v1 := prime1v + prime2
v2 := prime2
v3 := uint64(0)
v4 := -prime1v
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = prime5
}
h += uint64(n)
i, end := 0, len(b)
for ; i+8 <= end; i += 8 {
k1 := round(0, u64(b[i:i+8:len(b)]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if i+4 <= end {
h ^= uint64(u32(b[i:i+4:len(b)])) * prime1
h = rol23(h)*prime2 + prime3
i += 4
}
for ; i < end; i++ {
h ^= uint64(b[i]) * prime5
h = rol11(h) * prime1
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
func writeBlocks(d *Digest, b []byte) int {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
n := len(b)
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
d.v1, d.v2, d.v3, d.v4 = v1, v2, v3, v4
return n - len(b)
}

vendor/github.com/cespare/xxhash/v2/xxhash_safe.go generated vendored Normal file

@ -0,0 +1,15 @@
// +build appengine
// This file contains the safe implementations of otherwise unsafe-using code.
package xxhash
// Sum64String computes the 64-bit xxHash digest of s.
func Sum64String(s string) uint64 {
return Sum64([]byte(s))
}
// WriteString adds more data to d. It always returns len(s), nil.
func (d *Digest) WriteString(s string) (n int, err error) {
return d.Write([]byte(s))
}

vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go generated vendored Normal file

@ -0,0 +1,46 @@
// +build !appengine
// This file encapsulates usage of unsafe.
// xxhash_safe.go contains the safe implementations.
package xxhash
import (
"reflect"
"unsafe"
)
// Notes:
//
// See https://groups.google.com/d/msg/golang-nuts/dcjzJy-bSpw/tcZYBzQqAQAJ
// for some discussion about these unsafe conversions.
//
// In the future it's possible that compiler optimizations will make these
// unsafe operations unnecessary: https://golang.org/issue/2205.
//
// Both of these wrapper functions still incur function call overhead since they
// will not be inlined. We could write Go/asm copies of Sum64 and Digest.Write
// for strings to squeeze out a bit more speed. Mid-stack inlining should
// eventually fix this.
// Sum64String computes the 64-bit xxHash digest of s.
// It may be faster than Sum64([]byte(s)) by avoiding a copy.
func Sum64String(s string) uint64 {
var b []byte
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
bh.Len = len(s)
bh.Cap = len(s)
return Sum64(b)
}
// WriteString adds more data to d. It always returns len(s), nil.
// It may be faster than Write([]byte(s)) by avoiding a copy.
func (d *Digest) WriteString(s string) (n int, err error) {
var b []byte
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
bh.Len = len(s)
bh.Cap = len(s)
return d.Write(b)
}
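The unsafe variants exist purely to skip the string-to-[]byte copy. A hedged benchmark sketch of the difference (the input size and names are made up; only Sum64 and Sum64String come from the package):

package xxhash_test

import (
	"strings"
	"testing"

	"github.com/cespare/xxhash/v2"
)

var input = strings.Repeat("x", 1<<10)

func BenchmarkSum64String(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = xxhash.Sum64String(input) // aliases the string bytes, no copy
	}
}

func BenchmarkSum64Bytes(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = xxhash.Sum64([]byte(input)) // conversion allocates and copies
	}
}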

vendor/github.com/davecgh/go-spew/LICENSE generated vendored Normal file

@@ -0,0 +1,15 @@
ISC License
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

vendor/github.com/davecgh/go-spew/spew/bypass.go generated vendored Normal file

@@ -0,0 +1,145 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is not running on Google App Engine, compiled by GopherJS, and
// "-tags safe" is not added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// Go versions prior to 1.4 are disabled because they use a different layout
// for interfaces, which makes the implementation of unsafeReflectValue more complex.
// +build !js,!appengine,!safe,!disableunsafe,go1.4
package spew
import (
"reflect"
"unsafe"
)
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = false
// ptrSize is the size of a pointer on the current arch.
ptrSize = unsafe.Sizeof((*byte)(nil))
)
type flag uintptr
var (
// flagRO indicates whether the value field of a reflect.Value
// is read-only.
flagRO flag
// flagAddr indicates whether the address of the reflect.Value's
// value may be taken.
flagAddr flag
)
// flagKindMask holds the bits that make up the kind
// part of the flags field. In all the supported versions,
// it is in the lower 5 bits.
const flagKindMask = flag(0x1f)
// Different versions of Go have used different
// bit layouts for the flags type. This table
// records the known combinations.
var okFlags = []struct {
ro, addr flag
}{{
// From Go 1.4 to 1.5
ro: 1 << 5,
addr: 1 << 7,
}, {
// Up to Go tip.
ro: 1<<5 | 1<<6,
addr: 1 << 8,
}}
var flagValOffset = func() uintptr {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
}
return field.Offset
}()
// flagField returns a pointer to the flag field of a reflect.Value.
func flagField(v *reflect.Value) *flag {
return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
}
// unsafeReflectValue converts the passed reflect.Value into one that bypasses
// the typical safety restrictions preventing access to unaddressable and
// unexported data. It works by digging the raw pointer to the underlying
// value out of the protected value and generating a new unprotected (unsafe)
// reflect.Value to it.
//
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
func unsafeReflectValue(v reflect.Value) reflect.Value {
if !v.IsValid() || (v.CanInterface() && v.CanAddr()) {
return v
}
flagFieldPtr := flagField(&v)
*flagFieldPtr &^= flagRO
*flagFieldPtr |= flagAddr
return v
}
// Sanity checks against future reflect package changes
// to the type or semantics of the Value.flag field.
func init() {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
}
if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {
panic("reflect.Value flag field has changed kind")
}
type t0 int
var t struct {
A t0
// t0 will have flagEmbedRO set.
t0
// a will have flagStickyRO set
a t0
}
vA := reflect.ValueOf(t).FieldByName("A")
va := reflect.ValueOf(t).FieldByName("a")
vt0 := reflect.ValueOf(t).FieldByName("t0")
// Infer flagRO from the difference between the flags
// for the (otherwise identical) fields in t.
flagPublic := *flagField(&vA)
flagWithRO := *flagField(&va) | *flagField(&vt0)
flagRO = flagPublic ^ flagWithRO
// Infer flagAddr from the difference between a value
// taken from a pointer and not.
vPtrA := reflect.ValueOf(&t).Elem().FieldByName("A")
flagNoPtr := *flagField(&vA)
flagPtr := *flagField(&vPtrA)
flagAddr = flagNoPtr ^ flagPtr
// Check that the inferred flags tally with one of the known versions.
for _, f := range okFlags {
if flagRO == f.ro && flagAddr == f.addr {
return
}
}
panic("reflect.Value read-only flag has changed semantics")
}
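In practice, this file is what lets spew call Stringer/error methods on values that plain reflection refuses to hand out. A small sketch, assuming a normal build where bypass.go compiles (the stamp and wrapper types are hypothetical):

package main

import "github.com/davecgh/go-spew/spew"

// stamp has a String method that fmt-based printers cannot reach when a
// stamp value sits in an unexported field.
type stamp int

func (s stamp) String() string { return "stamp!" }

type wrapper struct {
	hidden stamp // unexported: CanInterface() reports false under reflection
}

func main() {
	// unsafeReflectValue clears flagRO so handleMethods can still invoke
	// String on the hidden field.
	spew.Dump(wrapper{hidden: 7})
}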

vendor/github.com/davecgh/go-spew/spew/bypasssafe.go generated vendored Normal file

@@ -0,0 +1,38 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is running on Google App Engine, compiled by GopherJS, or
// "-tags safe" is added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build js appengine safe disableunsafe !go1.4
package spew
import "reflect"
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = true
)
// unsafeReflectValue typically converts the passed reflect.Value into one
// that bypasses the typical safety restrictions preventing access to
// unaddressable and unexported data. However, doing this relies on access to
// the unsafe package. This is a stub version which simply returns the passed
// reflect.Value when the unsafe package is not available.
func unsafeReflectValue(v reflect.Value) reflect.Value {
return v
}
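Which of the two files is compiled in is decided entirely by the build tags above, and the exported UnsafeDisabled constant reports the outcome at runtime. A trivial check, for illustration:

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	// false on a normal build (bypass.go); true under -tags safe, js,
	// appengine, or Go versions before 1.4 (bypasssafe.go).
	fmt.Println("unsafe access disabled:", spew.UnsafeDisabled)
}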

vendor/github.com/davecgh/go-spew/spew/common.go generated vendored Normal file

@@ -0,0 +1,341 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"reflect"
"sort"
"strconv"
)
// Some constants in the form of bytes to avoid string overhead. This mirrors
// the technique used in the fmt package.
var (
panicBytes = []byte("(PANIC=")
plusBytes = []byte("+")
iBytes = []byte("i")
trueBytes = []byte("true")
falseBytes = []byte("false")
interfaceBytes = []byte("(interface {})")
commaNewlineBytes = []byte(",\n")
newlineBytes = []byte("\n")
openBraceBytes = []byte("{")
openBraceNewlineBytes = []byte("{\n")
closeBraceBytes = []byte("}")
asteriskBytes = []byte("*")
colonBytes = []byte(":")
colonSpaceBytes = []byte(": ")
openParenBytes = []byte("(")
closeParenBytes = []byte(")")
spaceBytes = []byte(" ")
pointerChainBytes = []byte("->")
nilAngleBytes = []byte("<nil>")
maxNewlineBytes = []byte("<max depth reached>\n")
maxShortBytes = []byte("<max>")
circularBytes = []byte("<already shown>")
circularShortBytes = []byte("<shown>")
invalidAngleBytes = []byte("<invalid>")
openBracketBytes = []byte("[")
closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("map[")
closeMapBytes = []byte("]")
lenEqualsBytes = []byte("len=")
capEqualsBytes = []byte("cap=")
)
// hexDigits is used to map a decimal value to a hex digit.
var hexDigits = "0123456789abcdef"
// catchPanic handles any panics that might occur during the handleMethods
// calls.
func catchPanic(w io.Writer, v reflect.Value) {
if err := recover(); err != nil {
w.Write(panicBytes)
fmt.Fprintf(w, "%v", err)
w.Write(closeParenBytes)
}
}
// handleMethods attempts to call the Error and String methods on the underlying
// type the passed reflect.Value represents and outputs the result to Writer w.
//
// It handles panics in any called methods by catching and displaying the error
// as the formatted value.
func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
// We need an interface to check if the type implements the error or
// Stringer interface. However, the reflect package won't give us an
// interface on certain things like unexported struct fields in order
// to enforce visibility rules. We use unsafe, when it's available,
// to bypass these restrictions since this package does not mutate the
// values.
if !v.CanInterface() {
if UnsafeDisabled {
return false
}
v = unsafeReflectValue(v)
}
// Choose whether or not to do error and Stringer interface lookups against
// the base type or a pointer to the base type depending on settings.
// Technically calling one of these methods with a pointer receiver can
// mutate the value, however, types which choose to satisfy an error or
// Stringer interface with a pointer receiver should not be mutating their
// state inside these interface methods.
if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
v = unsafeReflectValue(v)
}
if v.CanAddr() {
v = v.Addr()
}
// Is it an error or Stringer?
switch iface := v.Interface().(type) {
case error:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.Error()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.Error()))
return true
case fmt.Stringer:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.String()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.String()))
return true
}
return false
}
// printBool outputs a boolean value as true or false to Writer w.
func printBool(w io.Writer, val bool) {
if val {
w.Write(trueBytes)
} else {
w.Write(falseBytes)
}
}
// printInt outputs a signed integer value to Writer w.
func printInt(w io.Writer, val int64, base int) {
w.Write([]byte(strconv.FormatInt(val, base)))
}
// printUint outputs an unsigned integer value to Writer w.
func printUint(w io.Writer, val uint64, base int) {
w.Write([]byte(strconv.FormatUint(val, base)))
}
// printFloat outputs a floating point value using the specified precision,
// which is expected to be 32 or 64 bit, to Writer w.
func printFloat(w io.Writer, val float64, precision int) {
w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
}
// printComplex outputs a complex value using the specified float precision
// for the real and imaginary parts to Writer w.
func printComplex(w io.Writer, c complex128, floatPrecision int) {
r := real(c)
w.Write(openParenBytes)
w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
i := imag(c)
if i >= 0 {
w.Write(plusBytes)
}
w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
w.Write(iBytes)
w.Write(closeParenBytes)
}
// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) {
// Null pointer.
num := uint64(p)
if num == 0 {
w.Write(nilAngleBytes)
return
}
// Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix
buf := make([]byte, 18)
// It's simpler to construct the hex string right to left.
base := uint64(16)
i := len(buf) - 1
for num >= base {
buf[i] = hexDigits[num%base]
num /= base
i--
}
buf[i] = hexDigits[num]
// Add '0x' prefix.
i--
buf[i] = 'x'
i--
buf[i] = '0'
// Strip unused leading bytes.
buf = buf[i:]
w.Write(buf)
}
// valuesSorter implements sort.Interface to allow a slice of reflect.Value
// elements to be sorted.
type valuesSorter struct {
values []reflect.Value
strings []string // either nil or the same len as values
cs *ConfigState
}
// newValuesSorter initializes a valuesSorter instance, which holds a set of
// surrogate keys on which the data should be sorted. It uses flags in
// ConfigState to decide if and how to populate those surrogate keys.
func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
vs := &valuesSorter{values: values, cs: cs}
if canSortSimply(vs.values[0].Kind()) {
return vs
}
if !cs.DisableMethods {
vs.strings = make([]string, len(values))
for i := range vs.values {
b := bytes.Buffer{}
if !handleMethods(cs, &b, vs.values[i]) {
vs.strings = nil
break
}
vs.strings[i] = b.String()
}
}
if vs.strings == nil && cs.SpewKeys {
vs.strings = make([]string, len(values))
for i := range vs.values {
vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
}
}
return vs
}
// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
// directly, or whether it should be considered for sorting by surrogate keys
// (if the ConfigState allows it).
func canSortSimply(kind reflect.Kind) bool {
// This switch parallels valueSortLess, except for the default case.
switch kind {
case reflect.Bool:
return true
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return true
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return true
case reflect.Float32, reflect.Float64:
return true
case reflect.String:
return true
case reflect.Uintptr:
return true
case reflect.Array:
return true
}
return false
}
// Len returns the number of values in the slice. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Len() int {
return len(s.values)
}
// Swap swaps the values at the passed indices. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Swap(i, j int) {
s.values[i], s.values[j] = s.values[j], s.values[i]
if s.strings != nil {
s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
}
}
// valueSortLess returns whether the first value should sort before the second
// value. It is used by valueSorter.Less as part of the sort.Interface
// implementation.
func valueSortLess(a, b reflect.Value) bool {
switch a.Kind() {
case reflect.Bool:
return !a.Bool() && b.Bool()
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return a.Int() < b.Int()
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return a.Uint() < b.Uint()
case reflect.Float32, reflect.Float64:
return a.Float() < b.Float()
case reflect.String:
return a.String() < b.String()
case reflect.Uintptr:
return a.Uint() < b.Uint()
case reflect.Array:
// Compare the contents of both arrays.
l := a.Len()
for i := 0; i < l; i++ {
av := a.Index(i)
bv := b.Index(i)
if av.Interface() == bv.Interface() {
continue
}
return valueSortLess(av, bv)
}
}
return a.String() < b.String()
}
// Less returns whether the value at index i should sort before the
// value at index j. It is part of the sort.Interface implementation.
func (s *valuesSorter) Less(i, j int) bool {
if s.strings == nil {
return valueSortLess(s.values[i], s.values[j])
}
return s.strings[i] < s.strings[j]
}
// sortValues is a sort function that handles both native types and any type that
// can be converted to error or Stringer. Other inputs are sorted according to
// their Value.String() value to ensure display stability.
func sortValues(values []reflect.Value, cs *ConfigState) {
if len(values) == 0 {
return
}
sort.Sort(newValuesSorter(values, cs))
}
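The sorting machinery above is what makes SortKeys useful for diffable output. A short sketch (the map contents are arbitrary):

package main

import "github.com/davecgh/go-spew/spew"

func main() {
	cfg := spew.ConfigState{Indent: " ", SortKeys: true}
	m := map[string]int{"b": 2, "c": 3, "a": 1}
	// sortValues orders the keys (strings sort natively in valueSortLess),
	// so repeated dumps of the same map are byte-for-byte identical.
	cfg.Dump(m)
}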

vendor/github.com/davecgh/go-spew/spew/config.go generated vendored Normal file

@@ -0,0 +1,306 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"os"
)
// ConfigState houses the configuration options used by spew to format and
// display values. There is a global instance, Config, that is used to control
// all top-level Formatter and Dump functionality. Each ConfigState instance
// provides methods equivalent to the top-level functions.
//
// The zero value for ConfigState provides no indentation. You would typically
// want to set it to a space or a tab.
//
// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
// with default settings. See the documentation of NewDefaultConfig for default
// values.
type ConfigState struct {
// Indent specifies the string to use for each indentation level. The
// global config instance that all top-level functions use set this to a
// single space by default. If you would like more indentation, you might
// set this to a tab with "\t" or perhaps two spaces with " ".
Indent string
// MaxDepth controls the maximum number of levels to descend into nested
// data structures. The default, 0, means there is no limit.
//
// NOTE: Circular data structures are properly detected, so it is not
// necessary to set this value unless you specifically want to limit deeply
// nested data structures.
MaxDepth int
// DisableMethods specifies whether or not error and Stringer interfaces are
// invoked for types that implement them.
DisableMethods bool
// DisablePointerMethods specifies whether or not to check for and invoke
// error and Stringer interfaces on types which only accept a pointer
// receiver when the current type is not a pointer.
//
// NOTE: This might be an unsafe action since calling one of these methods
// with a pointer receiver could technically mutate the value, however,
// in practice, types which choose to satisfy an error or Stringer
// interface with a pointer receiver should not be mutating their state
// inside these interface methods. As a result, this option relies on
// access to the unsafe package, so it will not have any effect when
// running in environments without access to the unsafe package such as
// Google App Engine or with the "safe" build tag specified.
DisablePointerMethods bool
// DisablePointerAddresses specifies whether to disable the printing of
// pointer addresses. This is useful when diffing data structures in tests.
DisablePointerAddresses bool
// DisableCapacities specifies whether to disable the printing of capacities
// for arrays, slices, maps and channels. This is useful when diffing
// data structures in tests.
DisableCapacities bool
// ContinueOnMethod specifies whether or not recursion should continue once
// a custom error or Stringer interface is invoked. The default, false,
// means it will print the results of invoking the custom error or Stringer
// interface and return immediately instead of continuing to recurse into
// the internals of the data type.
//
// NOTE: This flag does not have any effect if method invocation is disabled
// via the DisableMethods or DisablePointerMethods options.
ContinueOnMethod bool
// SortKeys specifies map keys should be sorted before being printed. Use
// this to have a more deterministic, diffable output. Note that only
// native types (bool, int, uint, floats, uintptr and string) and types
// that support the error or Stringer interfaces (if methods are
// enabled) are supported, with other types sorted according to the
// reflect.Value.String() output which guarantees display stability.
SortKeys bool
// SpewKeys specifies that, as a last resort attempt, map keys should
// be spewed to strings and sorted by those strings. This is only
// considered if SortKeys is true.
SpewKeys bool
}
// Config is the active configuration of the top-level functions.
// The configuration can be changed by modifying the contents of spew.Config.
var Config = ConfigState{Indent: " "}
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the formatted string as a value that satisfies error. See NewFormatter
// for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, c.convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, c.convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, c.convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, c.convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
return fmt.Print(c.convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, c.convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
return fmt.Println(c.convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprint(a ...interface{}) string {
return fmt.Sprint(c.convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, c.convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintln(a ...interface{}) string {
return fmt.Sprintln(c.convertArgs(a)...)
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
c.Printf, c.Println, or c.Sprintf.
*/
func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(c, v)
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
fdump(c, w, a...)
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by modifying the public members
of c. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func (c *ConfigState) Dump(a ...interface{}) {
fdump(c, os.Stdout, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func (c *ConfigState) Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(c, &buf, a...)
return buf.String()
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a spew Formatter interface using
// the ConfigState associated with s.
func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = newFormatter(c, arg)
}
return formatters
}
// NewDefaultConfig returns a ConfigState with the following default settings.
//
// Indent: " "
// MaxDepth: 0
// DisableMethods: false
// DisablePointerMethods: false
// ContinueOnMethod: false
// SortKeys: false
func NewDefaultConfig() *ConfigState {
return &ConfigState{Indent: " "}
}
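A brief sketch of tweaking a ConfigState returned by NewDefaultConfig; the field choices below are just examples of the options documented above:

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	cfg := spew.NewDefaultConfig()
	cfg.MaxDepth = 2                   // stop descending after two levels
	cfg.DisablePointerAddresses = true // stable output across runs

	type inner struct{ N int }
	type outer struct{ In *inner }
	fmt.Print(cfg.Sdump(outer{In: &inner{N: 1}}))
}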

vendor/github.com/davecgh/go-spew/spew/doc.go generated vendored Normal file

@@ -0,0 +1,211 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
Package spew implements a deep pretty printer for Go data structures to aid in
debugging.
A quick overview of the additional features spew provides over the built-in
printing facilities for Go data types is as follows:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output (only when using
Dump style)
There are two different approaches spew allows for dumping Go data structures:
* Dump style which prints with newlines, customizable indentation,
and additional debug information such as types and all pointer addresses
used to indirect to the final value
* A custom Formatter interface that integrates cleanly with the standard fmt
package and replaces %v, %+v, %#v, and %#+v to provide inline printing
similar to the default %v while providing the additional functionality
outlined above and passing unsupported format verbs such as %x and %q
along to fmt
Quick Start
This section demonstrates how to quickly get started with spew. See the
sections below for further details on formatting and configuration options.
To dump a variable with full newlines, indentation, type, and pointer
information use Dump, Fdump, or Sdump:
spew.Dump(myVar1, myVar2, ...)
spew.Fdump(someWriter, myVar1, myVar2, ...)
str := spew.Sdump(myVar1, myVar2, ...)
Alternatively, if you would prefer to use format strings with a compacted inline
printing style, use the convenience wrappers Printf, Fprintf, etc with
%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
%#+v (adds types and pointer addresses):
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
Configuration Options
Configuration of spew is handled by fields in the ConfigState type. For
convenience, all of the top-level functions use a global state available
via the spew.Config global.
It is also possible to create a ConfigState instance that provides methods
equivalent to the top-level functions. This allows concurrent configuration
options. See the ConfigState documentation for more details.
The following configuration options are available:
* Indent
String to use for each indentation level for Dump functions.
It is a single space by default. A popular alternative is "\t".
* MaxDepth
Maximum number of levels to descend into nested data structures.
There is no limit by default.
* DisableMethods
Disables invocation of error and Stringer interface methods.
Method invocation is enabled by default.
* DisablePointerMethods
Disables invocation of error and Stringer interface methods on types
which only accept pointer receivers from non-pointer variables.
Pointer method invocation is enabled by default.
* DisablePointerAddresses
DisablePointerAddresses specifies whether to disable the printing of
pointer addresses. This is useful when diffing data structures in tests.
* DisableCapacities
DisableCapacities specifies whether to disable the printing of
capacities for arrays, slices, maps and channels. This is useful when
diffing data structures in tests.
* ContinueOnMethod
Enables recursion into types after invoking error and Stringer interface
methods. Recursion after method invocation is disabled by default.
* SortKeys
Specifies map keys should be sorted before being printed. Use
this to have a more deterministic, diffable output. Note that
only native types (bool, int, uint, floats, uintptr and string)
and types which implement error or Stringer interfaces are
supported with other types sorted according to the
reflect.Value.String() output which guarantees display
stability. Natural map order is used by default.
* SpewKeys
Specifies that, as a last resort attempt, map keys should be
spewed to strings and sorted by those strings. This is only
considered if SortKeys is true.
Dump Usage
Simply call spew.Dump with a list of variables you want to dump:
spew.Dump(myVar1, myVar2, ...)
You may also call spew.Fdump if you would prefer to output to an arbitrary
io.Writer. For example, to dump to standard error:
spew.Fdump(os.Stderr, myVar1, myVar2, ...)
A third option is to call spew.Sdump to get the formatted output as a string:
str := spew.Sdump(myVar1, myVar2, ...)
Sample Dump Output
See the Dump example for details on the setup of the types and variables being
shown here.
(main.Foo) {
unexportedField: (*main.Bar)(0xf84002e210)({
flag: (main.Flag) flagTwo,
data: (uintptr) <nil>
}),
ExportedField: (map[interface {}]interface {}) (len=1) {
(string) (len=3) "one": (bool) true
}
}
Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C
command as shown.
([]uint8) (len=32 cap=32) {
00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
00000020 31 32 |12|
}
Custom Formatter
Spew provides a custom formatter that implements the fmt.Formatter interface
so that it integrates cleanly with standard fmt package printing functions. The
formatter is useful for inline printing of smaller data types similar to the
standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Custom Formatter Usage
The simplest way to make use of the spew custom formatter is to call one of the
convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
functions have syntax you are most likely already familiar with:
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Println(myVar, myVar2)
spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
See the Index for the full list of convenience functions.
Sample Formatter Output
Double pointer to a uint8:
%v: <**>5
%+v: <**>(0xf8400420d0->0xf8400420c8)5
%#v: (**uint8)5
%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5
Pointer to circular struct with a uint8 field and a pointer to itself:
%v: <*>{1 <*><shown>}
%+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
%#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}
See the Printf example for details on the setup of variables being shown
here.
Errors
Since it is possible for custom Stringer/error interfaces to panic, spew
detects them and handles them internally by printing the panic information
inline with the output. Since spew is intended to provide deep pretty printing
capabilities on structures, it intentionally does not return any errors.
*/
package spew
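A runnable version of the formatter-verb walkthrough above, using the double-pointer example from the sample output (the printed addresses will of course differ from the 0xf84... values shown):

package main

import "github.com/davecgh/go-spew/spew"

func main() {
	n := uint8(5)
	p := &n
	pp := &p
	spew.Printf("%v\n", pp)   // <**>5
	spew.Printf("%+v\n", pp)  // <**>(0x...->0x...)5
	spew.Printf("%#v\n", pp)  // (**uint8)5
	spew.Printf("%#+v\n", pp) // (**uint8)(0x...->0x...)5
}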

vendor/github.com/davecgh/go-spew/spew/dump.go generated vendored Normal file

@@ -0,0 +1,509 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"encoding/hex"
"fmt"
"io"
"os"
"reflect"
"regexp"
"strconv"
"strings"
)
var (
// uint8Type is a reflect.Type representing a uint8. It is used to
// convert cgo types to uint8 slices for hexdumping.
uint8Type = reflect.TypeOf(uint8(0))
// cCharRE is a regular expression that matches a cgo char.
// It is used to detect character arrays to hexdump them.
cCharRE = regexp.MustCompile(`^.*\._Ctype_char$`)
// cUnsignedCharRE is a regular expression that matches a cgo unsigned
// char. It is used to detect unsigned character arrays to hexdump
// them.
cUnsignedCharRE = regexp.MustCompile(`^.*\._Ctype_unsignedchar$`)
// cUint8tCharRE is a regular expression that matches a cgo uint8_t.
// It is used to detect uint8_t arrays to hexdump them.
cUint8tCharRE = regexp.MustCompile(`^.*\._Ctype_uint8_t$`)
)
// dumpState contains information about the state of a dump operation.
type dumpState struct {
w io.Writer
depth int
pointers map[uintptr]int
ignoreNextType bool
ignoreNextIndent bool
cs *ConfigState
}
// indent performs indentation according to the depth level and cs.Indent
// option.
func (d *dumpState) indent() {
if d.ignoreNextIndent {
d.ignoreNextIndent = false
return
}
d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
}
// unpackValue returns values inside of non-nil interfaces when possible.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface && !v.IsNil() {
v = v.Elem()
}
return v
}
// dumpPtr handles formatting of pointers by indirecting them as necessary.
func (d *dumpState) dumpPtr(v reflect.Value) {
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range d.pointers {
if depth >= d.depth {
delete(d.pointers, k)
}
}
// Keep list of all dereferenced pointers to show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := d.pointers[addr]; ok && pd < d.depth {
cycleFound = true
indirects--
break
}
d.pointers[addr] = d.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type information.
d.w.Write(openParenBytes)
d.w.Write(bytes.Repeat(asteriskBytes, indirects))
d.w.Write([]byte(ve.Type().String()))
d.w.Write(closeParenBytes)
// Display pointer information.
if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {
d.w.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
d.w.Write(pointerChainBytes)
}
printHexPtr(d.w, addr)
}
d.w.Write(closeParenBytes)
}
// Display dereferenced value.
d.w.Write(openParenBytes)
switch {
case nilFound:
d.w.Write(nilAngleBytes)
case cycleFound:
d.w.Write(circularBytes)
default:
d.ignoreNextType = true
d.dump(ve)
}
d.w.Write(closeParenBytes)
}
// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
// reflection) arrays and slices are dumped in hexdump -C fashion.
func (d *dumpState) dumpSlice(v reflect.Value) {
// Determine whether this type should be hex dumped or not. Also,
// for types which should be hexdumped, try to use the underlying data
// first, then fall back to trying to convert them to a uint8 slice.
var buf []uint8
doConvert := false
doHexDump := false
numEntries := v.Len()
if numEntries > 0 {
vt := v.Index(0).Type()
vts := vt.String()
switch {
// C types that need to be converted.
case cCharRE.MatchString(vts):
fallthrough
case cUnsignedCharRE.MatchString(vts):
fallthrough
case cUint8tCharRE.MatchString(vts):
doConvert = true
// Try to use existing uint8 slices and fall back to converting
// and copying if that fails.
case vt.Kind() == reflect.Uint8:
// We need an addressable interface to convert the type
// to a byte slice. However, the reflect package won't
// give us an interface on certain things like
// unexported struct fields in order to enforce
// visibility rules. We use unsafe, when available, to
// bypass these restrictions since this package does not
// mutate the values.
vs := v
if !vs.CanInterface() || !vs.CanAddr() {
vs = unsafeReflectValue(vs)
}
if !UnsafeDisabled {
vs = vs.Slice(0, numEntries)
// Use the existing uint8 slice if it can be
// type asserted.
iface := vs.Interface()
if slice, ok := iface.([]uint8); ok {
buf = slice
doHexDump = true
break
}
}
// The underlying data needs to be converted if it can't
// be type asserted to a uint8 slice.
doConvert = true
}
// Copy and convert the underlying type if needed.
if doConvert && vt.ConvertibleTo(uint8Type) {
// Convert and copy each element into a uint8 byte
// slice.
buf = make([]uint8, numEntries)
for i := 0; i < numEntries; i++ {
vv := v.Index(i)
buf[i] = uint8(vv.Convert(uint8Type).Uint())
}
doHexDump = true
}
}
// Hexdump the entire slice as needed.
if doHexDump {
indent := strings.Repeat(d.cs.Indent, d.depth)
str := indent + hex.Dump(buf)
str = strings.Replace(str, "\n", "\n"+indent, -1)
str = strings.TrimRight(str, d.cs.Indent)
d.w.Write([]byte(str))
return
}
// Recursively call dump for each item.
for i := 0; i < numEntries; i++ {
d.dump(d.unpackValue(v.Index(i)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
// dump is the main workhorse for dumping a value. It uses the passed reflect
// value to figure out what kind of object we are dealing with and formats it
// appropriately. It is a recursive function, however circular data structures
// are detected and handled properly.
func (d *dumpState) dump(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
d.w.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
d.indent()
d.dumpPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !d.ignoreNextType {
d.indent()
d.w.Write(openParenBytes)
d.w.Write([]byte(v.Type().String()))
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
d.ignoreNextType = false
// Display length and capacity if the built-in len and cap functions
// work with the value's kind and the len/cap itself is non-zero.
valueLen, valueCap := 0, 0
switch v.Kind() {
case reflect.Array, reflect.Slice, reflect.Chan:
valueLen, valueCap = v.Len(), v.Cap()
case reflect.Map, reflect.String:
valueLen = v.Len()
}
if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {
d.w.Write(openParenBytes)
if valueLen != 0 {
d.w.Write(lenEqualsBytes)
printInt(d.w, int64(valueLen), 10)
}
if !d.cs.DisableCapacities && valueCap != 0 {
if valueLen != 0 {
d.w.Write(spaceBytes)
}
d.w.Write(capEqualsBytes)
printInt(d.w, int64(valueCap), 10)
}
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
// Call Stringer/error interfaces if they exist and the handle methods flag
// is enabled
if !d.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(d.cs, d.w, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(d.w, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(d.w, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(d.w, v.Uint(), 10)
case reflect.Float32:
printFloat(d.w, v.Float(), 32)
case reflect.Float64:
printFloat(d.w, v.Float(), 64)
case reflect.Complex64:
printComplex(d.w, v.Complex(), 32)
case reflect.Complex128:
printComplex(d.w, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
d.dumpSlice(v)
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.String:
d.w.Write([]byte(strconv.Quote(v.String())))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
d.w.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
numEntries := v.Len()
keys := v.MapKeys()
if d.cs.SortKeys {
sortValues(keys, d.cs)
}
for i, key := range keys {
d.dump(d.unpackValue(key))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.MapIndex(key)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Struct:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
vt := v.Type()
numFields := v.NumField()
for i := 0; i < numFields; i++ {
d.indent()
vtf := vt.Field(i)
d.w.Write([]byte(vtf.Name))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.Field(i)))
if i < (numFields - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(d.w, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(d.w, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it in case any new
// types are added.
default:
if v.CanInterface() {
fmt.Fprintf(d.w, "%v", v.Interface())
} else {
fmt.Fprintf(d.w, "%v", v.String())
}
}
}
// fdump is a helper function to consolidate the logic from the various public
// methods which take varying writers and config states.
func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
for _, arg := range a {
if arg == nil {
w.Write(interfaceBytes)
w.Write(spaceBytes)
w.Write(nilAngleBytes)
w.Write(newlineBytes)
continue
}
d := dumpState{w: w, cs: cs}
d.pointers = make(map[uintptr]int)
d.dump(reflect.ValueOf(arg))
d.w.Write(newlineBytes)
}
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func Fdump(w io.Writer, a ...interface{}) {
fdump(&Config, w, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(&Config, &buf, a...)
return buf.String()
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by an exported package global,
spew.Config. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func Dump(a ...interface{}) {
fdump(&Config, os.Stdout, a...)
}
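A quick sketch exercising dumpSlice's hexdump branch via the public entry points (os.Stdout stands in for any io.Writer):

package main

import (
	"os"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	// Byte slices route through the hexdump -C style output shown in doc.go.
	spew.Fdump(os.Stdout, []byte("spew hexdumps byte slices"))
}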

vendor/github.com/davecgh/go-spew/spew/format.go generated vendored Normal file

@@ -0,0 +1,419 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"reflect"
"strconv"
"strings"
)
// supportedFlags is a list of all the character flags supported by fmt package.
const supportedFlags = "0-+# "
// formatState implements the fmt.Formatter interface and contains information
// about the state of a formatting operation. The NewFormatter function can
// be used to get a new Formatter which can be used directly as arguments
// in standard fmt package printing calls.
type formatState struct {
value interface{}
fs fmt.State
depth int
pointers map[uintptr]int
ignoreNextType bool
cs *ConfigState
}
// buildDefaultFormat recreates the original format string without precision
// and width information to pass in to fmt.Sprintf in the case of an
// unrecognized type. Unless new types are added to the language, this
// function won't ever be called.
func (f *formatState) buildDefaultFormat() (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
buf.WriteRune('v')
format = buf.String()
return format
}
// constructOrigFormat recreates the original format string including precision
// and width information to pass along to the standard fmt package. This allows
// automatic deferral of all format strings this package doesn't support.
func (f *formatState) constructOrigFormat(verb rune) (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
if width, ok := f.fs.Width(); ok {
buf.WriteString(strconv.Itoa(width))
}
if precision, ok := f.fs.Precision(); ok {
buf.Write(precisionBytes)
buf.WriteString(strconv.Itoa(precision))
}
buf.WriteRune(verb)
format = buf.String()
return format
}
// unpackValue returns values inside of non-nil interfaces when possible and
// ensures that types for values which have been unpacked from an interface
// are displayed when the show types flag is also set.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface {
f.ignoreNextType = false
if !v.IsNil() {
v = v.Elem()
}
}
return v
}
// formatPtr handles formatting of pointers by indirecting them as necessary.
func (f *formatState) formatPtr(v reflect.Value) {
// Display nil if top level pointer is nil.
showTypes := f.fs.Flag('#')
if v.IsNil() && (!showTypes || f.ignoreNextType) {
f.fs.Write(nilAngleBytes)
return
}
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range f.pointers {
if depth >= f.depth {
delete(f.pointers, k)
}
}
// Keep list of all dereferenced pointers to possibly show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := f.pointers[addr]; ok && pd < f.depth {
cycleFound = true
indirects--
break
}
f.pointers[addr] = f.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type or indirection level depending on flags.
if showTypes && !f.ignoreNextType {
f.fs.Write(openParenBytes)
f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
f.fs.Write([]byte(ve.Type().String()))
f.fs.Write(closeParenBytes)
} else {
if nilFound || cycleFound {
indirects += strings.Count(ve.Type().String(), "*")
}
f.fs.Write(openAngleBytes)
f.fs.Write([]byte(strings.Repeat("*", indirects)))
f.fs.Write(closeAngleBytes)
}
// Display pointer information depending on flags.
if f.fs.Flag('+') && (len(pointerChain) > 0) {
f.fs.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
f.fs.Write(pointerChainBytes)
}
printHexPtr(f.fs, addr)
}
f.fs.Write(closeParenBytes)
}
// Display dereferenced value.
switch {
case nilFound:
f.fs.Write(nilAngleBytes)
case cycleFound:
f.fs.Write(circularShortBytes)
default:
f.ignoreNextType = true
f.format(ve)
}
}
// format is the main workhorse for providing the Formatter interface. It
// uses the passed reflect value to figure out what kind of object we are
// dealing with and formats it appropriately. It is a recursive function,
// however circular data structures are detected and handled properly.
func (f *formatState) format(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
f.fs.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
f.formatPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !f.ignoreNextType && f.fs.Flag('#') {
f.fs.Write(openParenBytes)
f.fs.Write([]byte(v.Type().String()))
f.fs.Write(closeParenBytes)
}
f.ignoreNextType = false
// Call Stringer/error interfaces if they exist and the handle methods
// flag is enabled.
if !f.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(f.cs, f.fs, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(f.fs, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(f.fs, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(f.fs, v.Uint(), 10)
case reflect.Float32:
printFloat(f.fs, v.Float(), 32)
case reflect.Float64:
printFloat(f.fs, v.Float(), 64)
case reflect.Complex64:
printComplex(f.fs, v.Complex(), 32)
case reflect.Complex128:
printComplex(f.fs, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
f.fs.Write(openBracketBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
numEntries := v.Len()
for i := 0; i < numEntries; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(v.Index(i)))
}
}
f.depth--
f.fs.Write(closeBracketBytes)
case reflect.String:
f.fs.Write([]byte(v.String()))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
f.fs.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
f.fs.Write(openMapBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
keys := v.MapKeys()
if f.cs.SortKeys {
sortValues(keys, f.cs)
}
for i, key := range keys {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(key))
f.fs.Write(colonBytes)
f.ignoreNextType = true
f.format(f.unpackValue(v.MapIndex(key)))
}
}
f.depth--
f.fs.Write(closeMapBytes)
case reflect.Struct:
numFields := v.NumField()
f.fs.Write(openBraceBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
vt := v.Type()
for i := 0; i < numFields; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
vtf := vt.Field(i)
if f.fs.Flag('+') || f.fs.Flag('#') {
f.fs.Write([]byte(vtf.Name))
f.fs.Write(colonBytes)
}
f.format(f.unpackValue(v.Field(i)))
}
}
f.depth--
f.fs.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(f.fs, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(f.fs, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it if any get added.
default:
format := f.buildDefaultFormat()
if v.CanInterface() {
fmt.Fprintf(f.fs, format, v.Interface())
} else {
fmt.Fprintf(f.fs, format, v.String())
}
}
}
// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
// details.
func (f *formatState) Format(fs fmt.State, verb rune) {
f.fs = fs
// Use standard formatting for verbs that are not v.
if verb != 'v' {
format := f.constructOrigFormat(verb)
fmt.Fprintf(fs, format, f.value)
return
}
if f.value == nil {
if fs.Flag('#') {
fs.Write(interfaceBytes)
}
fs.Write(nilAngleBytes)
return
}
f.format(reflect.ValueOf(f.value))
}
// newFormatter is a helper function to consolidate the logic from the various
// public methods which take varying config states.
func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
fs := &formatState{value: v, cs: cs}
fs.pointers = make(map[uintptr]int)
return fs
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
Printf, Println, or Fprintf.
*/
func NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(&Config, v)
}
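As a hedged illustration of the verb handling documented above (the `point` type and the outputs hinted in comments are examples, not part of this file):

```
package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

type point struct{ X, Y int }

func main() {
	p := &point{X: 1, Y: 2}
	// Only the %v family is handled by the custom formatter; other verbs
	// such as %q or %x are deferred to the standard fmt package via
	// constructOrigFormat.
	fmt.Printf("%v\n", spew.NewFormatter(p))   // most compact form
	fmt.Printf("%+v\n", spew.NewFormatter(p))  // adds pointer addresses
	fmt.Printf("%#v\n", spew.NewFormatter(p))  // adds types
	fmt.Printf("%#+v\n", spew.NewFormatter(p)) // adds types and addresses
}
```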

148
vendor/github.com/davecgh/go-spew/spew/spew.go generated vendored Normal file

@@ -0,0 +1,148 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"fmt"
"io"
)
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the formatted string as a value that satisfies error. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
func Print(a ...interface{}) (n int, err error) {
return fmt.Print(convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
func Println(a ...interface{}) (n int, err error) {
return fmt.Println(convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprint(a ...interface{}) string {
return fmt.Sprint(convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintln(a ...interface{}) string {
return fmt.Sprintln(convertArgs(a)...)
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a default spew Formatter interface.
func convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = NewFormatter(arg)
}
return formatters
}
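Since convertArgs wraps every argument in a Formatter, the package-level helpers above behave like their fmt counterparts but with spew's rendering. A minimal sketch (the `config` struct and values are illustrative):

```
package main

import "github.com/davecgh/go-spew/spew"

type config struct {
	Name  string
	Count int
}

func main() {
	c := &config{Name: "demo", Count: 3}
	// Each argument passes through convertArgs/NewFormatter, so %v here
	// renders via spew rather than plain fmt.
	spew.Printf("loaded: %v\n", c)
	s := spew.Sprintf("as string: %+v", c)
	spew.Println(s)
}
```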

29
vendor/github.com/go-logr/logr/.golangci.yaml generated vendored Normal file

@@ -0,0 +1,29 @@
run:
timeout: 1m
tests: true
linters:
disable-all: true
enable:
- asciicheck
- deadcode
- errcheck
- forcetypeassert
- gocritic
- gofmt
- goimports
- gosimple
- govet
- ineffassign
- misspell
- revive
- staticcheck
- structcheck
- typecheck
- unused
- varcheck
issues:
exclude-use-default: false
max-issues-per-linter: 0
max-same-issues: 10

6
vendor/github.com/go-logr/logr/CHANGELOG.md generated vendored Normal file

@@ -0,0 +1,6 @@
# CHANGELOG
## v1.0.0-rc1
This is the first logged release. Major changes (including breaking changes)
have occurred since earlier tags.

17
vendor/github.com/go-logr/logr/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,17 @@
# Contributing
Logr is open to pull-requests, provided they fit within the intended scope of
the project. Specifically, this library aims to be VERY small and minimalist,
with no external dependencies.
## Compatibility
This project intends to follow [semantic versioning](http://semver.org) and
is very strict about compatibility. Any proposed changes MUST follow those
rules.
## Performance
As a logging library, logr must be as light-weight as possible. Any proposed
code change must include results of running the [benchmark](./benchmark)
before and after the change.

201
vendor/github.com/go-logr/logr/LICENSE generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

278
vendor/github.com/go-logr/logr/README.md generated vendored Normal file

@@ -0,0 +1,278 @@
# A minimal logging API for Go
[![Go Reference](https://pkg.go.dev/badge/github.com/go-logr/logr.svg)](https://pkg.go.dev/github.com/go-logr/logr)
logr offers an(other) opinion on how Go programs and libraries can do logging
without becoming coupled to a particular logging implementation. This is not
an implementation of logging - it is an API. In fact it is two APIs with two
different sets of users.
The `Logger` type is intended for application and library authors. It provides
a relatively small API which can be used everywhere you want to emit logs. It
defers the actual act of writing logs (to files, to stdout, or whatever) to the
`LogSink` interface.
The `LogSink` interface is intended for logging library implementers. It is a
pure interface which can be implemented by logging frameworks to provide the actual logging
functionality.
This decoupling allows application and library developers to write code in
terms of `logr.Logger` (which has very low dependency fan-out) while the
implementation of logging is managed "up stack" (e.g. in or near `main()`.)
Application developers can then switch out implementations as necessary.
Many people assert that libraries should not be logging, and as such efforts
like this are pointless. Those people are welcome to convince the authors of
the tens-of-thousands of libraries that *DO* write logs that they are all
wrong. In the meantime, logr takes a more practical approach.
## Typical usage
Somewhere, early in an application's life, it will make a decision about which
logging library (implementation) it actually wants to use. Something like:
```
func main() {
// ... other setup code ...
// Create the "root" logger. We have chosen the "logimpl" implementation,
// which takes some initial parameters and returns a logr.Logger.
logger := logimpl.New(param1, param2)
// ... other setup code ...
```
Most apps will call into other libraries, create structures to govern the flow,
etc. The `logr.Logger` object can be passed to these other libraries, stored
in structs, or even used as a package-global variable, if needed. For example:
```
app := createTheAppObject(logger)
app.Run()
```
Outside of this early setup, no other packages need to know about the choice of
implementation. They write logs in terms of the `logr.Logger` that they
received:
```
type appObject struct {
// ... other fields ...
logger logr.Logger
// ... other fields ...
}
func (app *appObject) Run() {
app.logger.Info("starting up", "timestamp", time.Now())
// ... app code ...
```
## Background
If the Go standard library had defined an interface for logging, this project
probably would not be needed. Alas, here we are.
### Inspiration
Before you consider this package, please read [this blog post by the
inimitable Dave Cheney][warning-makes-no-sense]. We really appreciate what
he has to say, and it largely aligns with our own experiences.
### Differences from Dave's ideas
The main differences are:
1. Dave basically proposes doing away with the notion of a logging API in favor
of `fmt.Printf()`. We disagree, especially when you consider things like output
locations, timestamps, file and line decorations, and structured logging. This
package restricts the logging API to just 2 types of logs: info and error.
Info logs are things you want to tell the user which are not errors. Error
logs are, well, errors. If your code receives an `error` from a subordinate
function call and is logging that `error` *and not returning it*, use error
logs.
2. Verbosity-levels on info logs. This gives developers a chance to indicate
arbitrary grades of importance for info logs, without assigning names with
semantic meaning such as "warning", "trace", and "debug." Superficially this
may feel very similar, but the primary difference is the lack of semantics.
Because verbosity is a numerical value, it's safe to assume that an app running
with higher verbosity means more (and less important) logs will be generated.
## Implementations (non-exhaustive)
There are implementations for the following logging libraries:
- **a function** (can bridge to non-structured libraries): [funcr](https://github.com/go-logr/logr/tree/master/funcr)
- **github.com/google/glog**: [glogr](https://github.com/go-logr/glogr)
- **k8s.io/klog** (for Kubernetes): [klogr](https://git.k8s.io/klog/klogr)
- **go.uber.org/zap**: [zapr](https://github.com/go-logr/zapr)
- **log** (the Go standard library logger): [stdr](https://github.com/go-logr/stdr)
- **github.com/sirupsen/logrus**: [logrusr](https://github.com/bombsimon/logrusr)
- **github.com/wojas/genericr**: [genericr](https://github.com/wojas/genericr) (makes it easy to implement your own backend)
- **logfmt** (Heroku style [logging](https://www.brandur.org/logfmt)): [logfmtr](https://github.com/iand/logfmtr)
- **github.com/rs/zerolog**: [zerologr](https://github.com/go-logr/zerologr)
## FAQ
### Conceptual
#### Why structured logging?
- **Structured logs are more easily queryable**: Since you've got
key-value pairs, it's much easier to query your structured logs for
particular values by filtering on the contents of a particular key --
think searching request logs for error codes, Kubernetes reconcilers for
the name and namespace of the reconciled object, etc.
- **Structured logging makes it easier to have cross-referenceable logs**:
Similarly to searchability, if you maintain conventions around your
keys, it becomes easy to gather all log lines related to a particular
concept.
- **Structured logs allow better dimensions of filtering**: if you have
structure to your logs, you've got more precise control over how much
information is logged -- you might choose in a particular configuration
to log certain keys but not others, only log lines where a certain key
matches a certain value, etc., instead of just having v-levels and names
to key off of.
- **Structured logs better represent structured data**: sometimes, the
data that you want to log is inherently structured (think tuple-like
objects.) Structured logs allow you to preserve that structure when
outputting.
#### Why V-levels?
**V-levels give operators an easy way to control the chattiness of log
operations**. V-levels provide a way for a given package to distinguish
the relative importance or verbosity of a given log message. Then, if
a particular logger or package is logging too many messages, the user
of the package can simply change the v-levels for that library.
#### Why not named levels, like Info/Warning/Error?
Read [Dave Cheney's post][warning-makes-no-sense]. Then read [Differences
from Dave's ideas](#differences-from-daves-ideas).
#### Why not allow format strings, too?
**Format strings negate many of the benefits of structured logs**:
- They're not easily searchable without resorting to fuzzy searching,
regular expressions, etc.
- They don't store structured data well, since contents are flattened into
a string.
- They're not cross-referenceable.
- They don't compress easily, since the message is not constant.
(Unless you turn positional parameters into key-value pairs with numerical
keys, at which point you've gotten key-value logging with meaningless
keys.)
### Practical
#### Why key-value pairs, and not a map?
Key-value pairs are *much* easier to optimize, especially around
allocations. Zap (a structured logger that inspired logr's interface) has
[performance measurements](https://github.com/uber-go/zap#performance)
that show this quite nicely.
While the interface ends up being a little less obvious, you get
potentially better performance, plus avoid making users type
`map[string]string{}` every time they want to log.
#### What if my V-levels differ between libraries?
That's fine. Control your V-levels on a per-logger basis, and use the
`WithName` method to pass different loggers to different libraries.
Generally, you should take care to ensure that you have relatively
consistent V-levels within a given logger, however, as this makes deciding
on what verbosity of logs to request easier.
#### But I really want to use a format string!
That's not actually a question. Assuming your question is "how do
I convert my mental model of logging with format strings to logging with
constant messages":
1. Figure out what the error actually is, as you'd write in a TL;DR style,
and use that as a message.
2. For every place you'd write a format specifier, look to the word before
it, and add that as a key value pair.
For instance, consider the following examples (all taken from spots in the
Kubernetes codebase):
- `klog.V(4).Infof("Client is returning errors: code %v, error %v",
responseCode, err)` becomes `logger.Error(err, "client returned an
error", "code", responseCode)`
- `klog.V(4).Infof("Got a Retry-After %ds response for attempt %d to %v",
seconds, retries, url)` becomes `logger.V(4).Info("got a retry-after
response when requesting url", "attempt", retries, "after
seconds", seconds, "url", url)`
If you *really* must use a format string, use it in a key's value, and
call `fmt.Sprintf` yourself. For instance: `log.Printf("unable to
reflect over type %T", obj)` becomes `logger.Info("unable to reflect over
type", "type", fmt.Sprintf("%T", obj))`. In general though, the cases where
this is necessary should be few and far between.
#### How do I choose my V-levels?
This is basically the only hard constraint: increase V-levels to denote
more verbose or more debug-y logs.
Otherwise, you can start out with `0` as "you always want to see this",
`1` as "common logging that you might *possibly* want to turn off", and
`10` as "I would like to performance-test your log collection stack."
Then gradually choose levels in between as you need them, working your way
down from 10 (for debug and trace style logs) and up from 1 (for chattier
info-type logs.)
#### How do I choose my keys?
Keys are fairly flexible, and can hold more or less any string
value. For best compatibility with implementations and consistency
with existing code in other projects, there are a few conventions you
should consider.
- Make your keys human-readable.
- Constant keys are generally a good idea.
- Be consistent across your codebase.
- Keys should naturally match parts of the message string.
- Use lower case for simple keys and
[lowerCamelCase](https://en.wiktionary.org/wiki/lowerCamelCase) for
more complex ones. Kubernetes is one example of a project that has
[adopted that
convention](https://github.com/kubernetes/community/blob/HEAD/contributors/devel/sig-instrumentation/migration-to-structured-logging.md#name-arguments).
While key names are mostly unrestricted (and spaces are acceptable),
it's generally a good idea to stick to printable ascii characters, or at
least match the general character set of your log lines.
#### Why should keys be constant values?
The point of structured logging is to make later log processing easier. Your
keys are, effectively, the schema of each log message. If you use different
keys across instances of the same log line, you will make your structured logs
much harder to use. `Sprintf()` is for values, not for keys!
#### Why is this not a pure interface?
The Logger type is implemented as a struct in order to allow the Go compiler to
optimize things like high-V `Info` logs that are not triggered. Not all of
these implementations are implemented yet, but this structure was suggested as
a way to ensure they *can* be implemented. All of the real work is behind the
`LogSink` interface.
[warning-makes-no-sense]: http://dave.cheney.net/2015/11/05/lets-talk-about-logging
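To make the conversion guidance above concrete, here is a small, self-contained sketch using the stdr implementation listed earlier (the variable names and verbosity threshold are illustrative):

```
package main

import (
	"log"
	"os"

	"github.com/go-logr/stdr"
)

func main() {
	// stdr adapts the standard library logger to logr.
	stdr.SetVerbosity(4) // enable V(4) lines for this example
	logger := stdr.New(log.New(os.Stderr, "", log.LstdFlags))

	requestURL := "https://example.com"
	retries := 2
	// Instead of: log.Printf("retrying %s, attempt %d", requestURL, retries)
	logger.V(4).Info("retrying request", "url", requestURL, "attempt", retries)
}
```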

54
vendor/github.com/go-logr/logr/discard.go generated vendored Normal file

@@ -0,0 +1,54 @@
/*
Copyright 2020 The logr Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package logr
// Discard returns a Logger that discards all messages logged to it. It can be
// used whenever the caller is not interested in the logs. Logger instances
// produced by this function always compare as equal.
func Discard() Logger {
return Logger{
level: 0,
sink: discardLogSink{},
}
}
// discardLogSink is a LogSink that discards all messages.
type discardLogSink struct{}
// Verify that it actually implements the interface
var _ LogSink = discardLogSink{}
func (l discardLogSink) Init(RuntimeInfo) {
}
func (l discardLogSink) Enabled(int) bool {
return false
}
func (l discardLogSink) Info(int, string, ...interface{}) {
}
func (l discardLogSink) Error(error, string, ...interface{}) {
}
func (l discardLogSink) WithValues(...interface{}) LogSink {
return l
}
func (l discardLogSink) WithName(string) LogSink {
return l
}
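A short sketch of where Discard is useful: code under test that requires a logr.Logger can be exercised without emitting any output (the `process` function is illustrative, not part of this package):

```
package mypkg

import (
	"testing"

	"github.com/go-logr/logr"
)

// process stands in for code under test that takes a Logger.
func process(log logr.Logger, n int) int {
	log.Info("processing", "n", n)
	return n * 2
}

func TestProcess(t *testing.T) {
	// Discard satisfies the Logger dependency while dropping all messages.
	if got := process(logr.Discard(), 21); got != 42 {
		t.Fatalf("got %d, want 42", got)
	}
}
```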

496
vendor/github.com/go-logr/logr/logr.go generated vendored Normal file

@@ -0,0 +1,496 @@
/*
Copyright 2019 The logr Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// This design derives from Dave Cheney's blog:
// http://dave.cheney.net/2015/11/05/lets-talk-about-logging
// Package logr defines a general-purpose logging API and abstract interfaces
// to back that API. Packages in the Go ecosystem can depend on this package,
// while callers can implement logging with whatever backend is appropriate.
//
// Usage
//
// Logging is done using a Logger instance. Logger is a concrete type with
// methods, which defers the actual logging to a LogSink interface. The main
// methods of Logger are Info() and Error(). Arguments to Info() and Error()
// are key/value pairs rather than printf-style formatted strings, emphasizing
// "structured logging".
//
// With Go's standard log package, we might write:
// log.Printf("setting target value %s", targetValue)
//
// With logr's structured logging, we'd write:
// logger.Info("setting target", "value", targetValue)
//
// Errors are much the same. Instead of:
// log.Printf("failed to open the pod bay door for user %s: %v", user, err)
//
// We'd write:
// logger.Error(err, "failed to open the pod bay door", "user", user)
//
// Info() and Error() are very similar, but they are separate methods so that
// LogSink implementations can choose to do things like attach additional
// information (such as stack traces) on calls to Error().
//
// Verbosity
//
// Often we want to log information only when the application is in "verbose
// mode". To write log lines that are more verbose, Logger has a V() method.
// The higher the V-level of a log line, the less critical it is considered.
// Log-lines with V-levels that are not enabled (as per the LogSink) will not
// be written. Level V(0) is the default, and logger.V(0).Info() has the same
// meaning as logger.Info(). Negative V-levels have the same meaning as V(0).
//
// Where we might have written:
// if flVerbose >= 2 {
// log.Printf("an unusual thing happened")
// }
//
// We can write:
// logger.V(2).Info("an unusual thing happened")
//
// Logger Names
//
// Logger instances can have name strings so that all messages logged through
// that instance have additional context. For example, you might want to add
// a subsystem name:
//
// logger.WithName("compactor").Info("started", "time", time.Now())
//
// The WithName() method returns a new Logger, which can be passed to
// constructors or other functions for further use. Repeated use of WithName()
// will accumulate name "segments". These name segments will be joined in some
// way by the LogSink implementation. It is strongly recommended that name
// segments contain simple identifiers (letters, digits, and hyphen), and do
// not contain characters that could muddle the log output or confuse the
// joining operation (e.g. whitespace, commas, periods, slashes, brackets,
// quotes, etc).
//
// Saved Values
//
// Logger instances can store any number of key/value pairs, which will be
// logged alongside all messages logged through that instance. For example,
// you might want to create a Logger instance per managed object:
//
// With the standard log package, we might write:
// log.Printf("decided to set field foo to value %q for object %s/%s",
// targetValue, object.Namespace, object.Name)
//
// With logr we'd write:
// // Elsewhere: set up the logger to log the object name.
// obj.logger = mainLogger.WithValues(
// "name", obj.name, "namespace", obj.namespace)
//
// // later on...
// obj.logger.Info("setting foo", "value", targetValue)
//
// Best Practices
//
// Logger has very few hard rules, with the goal that LogSink implementations
// might have a lot of freedom to differentiate. There are, however, some
// things to consider.
//
// The log message consists of a constant message attached to the log line.
// This should generally be a simple description of what's occurring, and should
// never be a format string. Variable information can then be attached using
// named values.
//
// Keys are arbitrary strings, but should generally be constant values. Values
// may be any Go value, but how the value is formatted is determined by the
// LogSink implementation.
//
// Key Naming Conventions
//
// Keys are not strictly required to conform to any specification or regex, but
// it is recommended that they:
// * be human-readable and meaningful (not auto-generated or simple ordinals)
// * be constant (not dependent on input data)
// * contain only printable characters
// * not contain whitespace or punctuation
// * use lower case for simple keys and lowerCamelCase for more complex ones
//
// These guidelines help ensure that log data is processed properly regardless
// of the log implementation. For example, log implementations will try to
// output JSON data or will store data for later database (e.g. SQL) queries.
//
// While users are generally free to use key names of their choice, it's
// generally best to avoid using the following keys, as they're frequently used
// by implementations:
// * "caller": the calling information (file/line) of a particular log line
// * "error": the underlying error value in the `Error` method
// * "level": the log level
// * "logger": the name of the associated logger
// * "msg": the log message
// * "stacktrace": the stack trace associated with a particular log line or
// error (often from the `Error` message)
// * "ts": the timestamp for a log line
//
// Implementations are encouraged to make use of these keys to represent the
// above concepts, when necessary (for example, in a pure-JSON output form, it
// would be necessary to represent at least message and timestamp as ordinary
// named values).
//
// Break Glass
//
// Implementations may choose to give callers access to the underlying
// logging implementation. The recommended pattern for this is:
// // Underlier exposes access to the underlying logging implementation.
// // Since callers only have a logr.Logger, they have to know which
// // implementation is in use, so this interface is less of an abstraction
// and more of a way to test type conversion.
// type Underlier interface {
// GetUnderlying() <underlying-type>
// }
//
// Logger grants access to the sink to enable type assertions like this:
// func DoSomethingWithImpl(log logr.Logger) {
// if underlier, ok := log.GetSink().(impl.Underlier); ok {
// implLogger := underlier.GetUnderlying()
// ...
// }
// }
//
// Custom `With*` functions can be implemented by copying the complete
// Logger struct and replacing the sink in the copy:
// // WithFooBar changes the foobar parameter in the log sink and returns a
// // new logger with that modified sink. It does nothing for loggers where
// // the sink doesn't support that parameter.
// func WithFoobar(log logr.Logger, foobar int) logr.Logger {
// if foobarLogSink, ok := log.GetSink().(FoobarSink); ok {
// log = log.WithSink(foobarLogSink.WithFooBar(foobar))
// }
// return log
// }
//
// Don't use New to construct a new Logger with a LogSink retrieved from an
// existing Logger. Source code attribution might not work correctly and
// unexported fields in Logger get lost.
//
// Beware that the same LogSink instance may be shared by different logger
// instances. Calling functions that modify the LogSink will affect all of
// those.
package logr
import (
"context"
)
// New returns a new Logger instance. This is primarily used by libraries
// implementing LogSink, rather than end users.
func New(sink LogSink) Logger {
logger := Logger{}
logger.setSink(sink)
sink.Init(runtimeInfo)
return logger
}
// setSink stores the sink and updates any related fields. It mutates the
// logger and thus is only safe to use for loggers that are not currently being
// used concurrently.
func (l *Logger) setSink(sink LogSink) {
l.sink = sink
}
// GetSink returns the stored sink.
func (l Logger) GetSink() LogSink {
return l.sink
}
// WithSink returns a copy of the logger with the new sink.
func (l Logger) WithSink(sink LogSink) Logger {
l.setSink(sink)
return l
}
// Logger is an interface to an abstract logging implementation. This is a
// concrete type for performance reasons, but all the real work is passed on to
// a LogSink. Implementations of LogSink should provide their own constructors
// that return Logger, not LogSink.
//
// The underlying sink can be accessed through GetSink and be modified through
// WithSink. This enables the implementation of custom extensions (see "Break
// Glass" in the package documentation). Normally the sink should be used only
// indirectly.
type Logger struct {
sink LogSink
level int
}
// Enabled tests whether this Logger is enabled. For example, commandline
// flags might be used to set the logging verbosity and disable some info logs.
func (l Logger) Enabled() bool {
return l.sink.Enabled(l.level)
}
// Info logs a non-error message with the given key/value pairs as context.
//
// The msg argument should be used to add some constant description to the log
// line. The key/value pairs can then be used to add additional variable
// information. The key/value pairs must alternate string keys and arbitrary
// values.
func (l Logger) Info(msg string, keysAndValues ...interface{}) {
if l.Enabled() {
if withHelper, ok := l.sink.(CallStackHelperLogSink); ok {
withHelper.GetCallStackHelper()()
}
l.sink.Info(l.level, msg, keysAndValues...)
}
}
// Error logs an error, with the given message and key/value pairs as context.
// It functions similarly to Info, but may have unique behavior, and should be
// preferred for logging errors (see the package documentations for more
// information).
//
// The msg argument should be used to add context to any underlying error,
// while the err argument should be used to attach the actual error that
// triggered this log line, if present.
func (l Logger) Error(err error, msg string, keysAndValues ...interface{}) {
if withHelper, ok := l.sink.(CallStackHelperLogSink); ok {
withHelper.GetCallStackHelper()()
}
l.sink.Error(err, msg, keysAndValues...)
}
// V returns a new Logger instance for a specific verbosity level, relative to
// this Logger. In other words, V-levels are additive. A higher verbosity
// level means a log message is less important. Negative V-levels are treated
// as 0.
func (l Logger) V(level int) Logger {
if level < 0 {
level = 0
}
l.level += level
return l
}
// WithValues returns a new Logger instance with additional key/value pairs.
// See Info for documentation on how key/value pairs work.
func (l Logger) WithValues(keysAndValues ...interface{}) Logger {
l.setSink(l.sink.WithValues(keysAndValues...))
return l
}
// WithName returns a new Logger instance with the specified name element added
// to the Logger's name. Successive calls to WithName append additional
// suffixes to the Logger's name. It's strongly recommended that name segments
// contain only letters, digits, and hyphens (see the package documentation for
// more information).
func (l Logger) WithName(name string) Logger {
l.setSink(l.sink.WithName(name))
return l
}
// WithCallDepth returns a Logger instance that offsets the call stack by the
// specified number of frames when logging call site information, if possible.
// This is useful for users who have helper functions between the "real" call
// site and the actual calls to Logger methods. If depth is 0 the attribution
// should be to the direct caller of this function. If depth is 1 the
// attribution should skip 1 call frame, and so on. Successive calls to this
// are additive.
//
// If the underlying log implementation supports a WithCallDepth(int) method,
// it will be called and the result returned. If the implementation does not
// support CallDepthLogSink, the original Logger will be returned.
//
// To skip one level, WithCallStackHelper() should be used instead of
// WithCallDepth(1) because it works with implementations that support the
// CallDepthLogSink and/or CallStackHelperLogSink interfaces.
func (l Logger) WithCallDepth(depth int) Logger {
if withCallDepth, ok := l.sink.(CallDepthLogSink); ok {
l.setSink(withCallDepth.WithCallDepth(depth))
}
return l
}
// WithCallStackHelper returns a new Logger instance that skips the direct
// caller when logging call site information, if possible. This is useful for
// users who have helper functions between the "real" call site and the actual
// calls to Logger methods and want to support loggers which depend on marking
// each individual helper function, like loggers based on testing.T.
//
// In addition to using that new logger instance, callers also must call the
// returned function.
//
// If the underlying log implementation supports a WithCallDepth(int) method,
// WithCallDepth(1) will be called to produce a new logger. If it supports a
// WithCallStackHelper() method, that will be also called. If the
// implementation does not support either of these, the original Logger will be
// returned.
func (l Logger) WithCallStackHelper() (func(), Logger) {
var helper func()
if withCallDepth, ok := l.sink.(CallDepthLogSink); ok {
l.setSink(withCallDepth.WithCallDepth(1))
}
if withHelper, ok := l.sink.(CallStackHelperLogSink); ok {
helper = withHelper.GetCallStackHelper()
} else {
helper = func() {}
}
return helper, l
}
// contextKey is how we find Loggers in a context.Context.
type contextKey struct{}
// FromContext returns a Logger from ctx or an error if no Logger is found.
func FromContext(ctx context.Context) (Logger, error) {
if v, ok := ctx.Value(contextKey{}).(Logger); ok {
return v, nil
}
return Logger{}, notFoundError{}
}
// notFoundError exists to carry an IsNotFound method.
type notFoundError struct{}
func (notFoundError) Error() string {
return "no logr.Logger was present"
}
func (notFoundError) IsNotFound() bool {
return true
}
// FromContextOrDiscard returns a Logger from ctx. If no Logger is found, this
// returns a Logger that discards all log messages.
func FromContextOrDiscard(ctx context.Context) Logger {
if v, ok := ctx.Value(contextKey{}).(Logger); ok {
return v
}
return Discard()
}
// NewContext returns a new Context, derived from ctx, which carries the
// provided Logger.
func NewContext(ctx context.Context, logger Logger) context.Context {
return context.WithValue(ctx, contextKey{}, logger)
}
// RuntimeInfo holds information that the logr "core" library knows and which
// LogSinks might want to know.
type RuntimeInfo struct {
// CallDepth is the number of call frames the logr library adds between the
// end-user and the LogSink. LogSink implementations which choose to print
// the original logging site (e.g. file & line) should climb this many
// additional frames to find it.
CallDepth int
}
// runtimeInfo is a static global. It must not be changed at run time.
var runtimeInfo = RuntimeInfo{
CallDepth: 1,
}
// LogSink represents a logging implementation. End-users will generally not
// interact with this type.
type LogSink interface {
// Init receives optional information about the logr library for LogSink
// implementations that need it.
Init(info RuntimeInfo)
// Enabled tests whether this LogSink is enabled at the specified V-level.
// For example, commandline flags might be used to set the logging
// verbosity and disable some info logs.
Enabled(level int) bool
// Info logs a non-error message with the given key/value pairs as context.
// The level argument is provided for optional logging. This method will
// only be called when Enabled(level) is true. See Logger.Info for more
// details.
Info(level int, msg string, keysAndValues ...interface{})
// Error logs an error, with the given message and key/value pairs as
// context. See Logger.Error for more details.
Error(err error, msg string, keysAndValues ...interface{})
// WithValues returns a new LogSink with additional key/value pairs. See
// Logger.WithValues for more details.
WithValues(keysAndValues ...interface{}) LogSink
// WithName returns a new LogSink with the specified name appended. See
// Logger.WithName for more details.
WithName(name string) LogSink
}
// CallDepthLogSink represents a Logger that knows how to climb the call stack
// to identify the original call site and can offset the depth by a specified
// number of frames. This is useful for users who have helper functions
// between the "real" call site and the actual calls to Logger methods.
// Implementations that log information about the call site (such as file,
// function, or line) would otherwise log information about the intermediate
// helper functions.
//
// This is an optional interface and implementations are not required to
// support it.
type CallDepthLogSink interface {
// WithCallDepth returns a LogSink that will offset the call
// stack by the specified number of frames when logging call
// site information.
//
// If depth is 0, the LogSink should skip exactly the number
// of call frames defined in RuntimeInfo.CallDepth when Info
// or Error are called, i.e. the attribution should be to the
// direct caller of Logger.Info or Logger.Error.
//
// If depth is 1 the attribution should skip 1 call frame, and so on.
// Successive calls to this are additive.
WithCallDepth(depth int) LogSink
}
// CallStackHelperLogSink represents a Logger that knows how to climb
// the call stack to identify the original call site and can skip
// intermediate helper functions if they mark themselves as
// helper. Go's testing package uses that approach.
//
// This is useful for users who have helper functions between the
// "real" call site and the actual calls to Logger methods.
// Implementations that log information about the call site (such as
// file, function, or line) would otherwise log information about the
// intermediate helper functions.
//
// This is an optional interface and implementations are not required
// to support it. Implementations that choose to support this must not
// simply implement it as WithCallDepth(1), because
// Logger.WithCallStackHelper will call both methods if they are
// present. This should only be implemented for LogSinks that actually
// need it, as with testing.T.
type CallStackHelperLogSink interface {
// GetCallStackHelper returns a function that must be called
// to mark the direct caller as helper function when logging
// call site information.
GetCallStackHelper() func()
}
// Marshaler is an optional interface that logged values may choose to
// implement. Loggers with structured output, such as JSON, should
// log the object returned by the MarshalLog method instead of the
// original value.
type Marshaler interface {
// MarshalLog can be used to:
// - ensure that structs are not logged as strings when the original
// value has a String method: return a different type without a
// String method
// - select which fields of a complex type should get logged:
// return a simpler struct with fewer fields
// - log unexported fields: return a different struct
// with exported fields
//
// It may return any value of any type.
MarshalLog() interface{}
}
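A minimal sketch of the context plumbing defined above; any LogSink-backed Logger works in place of Discard, which is used here only to keep the example self-contained:

```
package main

import (
	"context"

	"github.com/go-logr/logr"
)

func handle(ctx context.Context) {
	// FromContextOrDiscard falls back to a no-op Logger when none is attached.
	log := logr.FromContextOrDiscard(ctx)
	log.Info("handling request")
}

func main() {
	base := logr.Discard() // swap in a real implementation here
	ctx := logr.NewContext(context.Background(), base)
	handle(ctx)
}
```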

15
vendor/github.com/gogo/protobuf/AUTHORS generated vendored Normal file

@@ -0,0 +1,15 @@
# This is the official list of GoGo authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS file, which
# lists people. For example, employees are listed in CONTRIBUTORS,
# but not in AUTHORS, because the employer holds the copyright.
# Names should be added to this file as one of
# Organization's name
# Individual's name <submission email address>
# Individual's name <submission email address> <email2> <emailN>
# Please keep the list sorted.
Sendgrid, Inc
Vastech SA (PTY) LTD
Walter Schulze <awalterschulze@gmail.com>

23
vendor/github.com/gogo/protobuf/CONTRIBUTORS generated vendored Normal file

@@ -0,0 +1,23 @@
Anton Povarov <anton.povarov@gmail.com>
Brian Goff <cpuguy83@gmail.com>
Clayton Coleman <ccoleman@redhat.com>
Denis Smirnov <denis.smirnov.91@gmail.com>
DongYun Kang <ceram1000@gmail.com>
Dwayne Schultz <dschultz@pivotal.io>
Georg Apitz <gapitz@pivotal.io>
Gustav Paul <gustav.paul@gmail.com>
Johan Brandhorst <johan.brandhorst@gmail.com>
John Shahid <jvshahid@gmail.com>
John Tuley <john@tuley.org>
Laurent <laurent@adyoulike.com>
Patrick Lee <patrick@dropbox.com>
Peter Edge <peter.edge@gmail.com>
Roger Johansson <rogeralsing@gmail.com>
Sam Nguyen <sam.nguyen@sendgrid.com>
Sergio Arbeo <serabe@gmail.com>
Stephen J Day <stephen.day@docker.com>
Tamir Duberstein <tamird@gmail.com>
Todd Eisenberger <teisenberger@dropbox.com>
Tormod Erevik Lea <tormodlea@gmail.com>
Vyacheslav Kim <kane@sendgrid.com>
Walter Schulze <awalterschulze@gmail.com>

vendor/github.com/gogo/protobuf/LICENSE generated vendored Normal file

@@ -0,0 +1,35 @@
Copyright (c) 2013, The GoGo Authors. All rights reserved.
Protocol Buffers for Go with Gadgets
Go support for Protocol Buffers - Google's data interchange format
Copyright 2010 The Go Authors. All rights reserved.
https://github.com/golang/protobuf
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/gogo/protobuf/proto/Makefile generated vendored Normal file

@@ -0,0 +1,43 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
install:
	go install

test: install generate-test-pbs
	go test

generate-test-pbs:
	make install
	make -C test_proto
	make -C proto3_proto
	make

vendor/github.com/gogo/protobuf/proto/clone.go generated vendored Normal file

@@ -0,0 +1,258 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Protocol buffer deep copy and merge.
// TODO: RawMessage.
package proto
import (
"fmt"
"log"
"reflect"
"strings"
)
// Clone returns a deep copy of a protocol buffer.
func Clone(src Message) Message {
in := reflect.ValueOf(src)
if in.IsNil() {
return src
}
out := reflect.New(in.Type().Elem())
dst := out.Interface().(Message)
Merge(dst, src)
return dst
}
// Merger is the interface representing objects that can merge messages of the same type.
type Merger interface {
// Merge merges src into this message.
// Required and optional fields that are set in src will be set to that value in dst.
// Elements of repeated fields will be appended.
//
// Merge may panic if called with a different argument type than the receiver.
Merge(src Message)
}
// generatedMerger is the custom merge method that generated protos will have.
// We must add this method since a generated Merge method will conflict with
// many existing protos that have a Merge data field already defined.
type generatedMerger interface {
XXX_Merge(src Message)
}
// Merge merges src into dst.
// Required and optional fields that are set in src will be set to that value in dst.
// Elements of repeated fields will be appended.
// Merge panics if src and dst are not the same type, or if dst is nil.
func Merge(dst, src Message) {
if m, ok := dst.(Merger); ok {
m.Merge(src)
return
}
in := reflect.ValueOf(src)
out := reflect.ValueOf(dst)
if out.IsNil() {
panic("proto: nil destination")
}
if in.Type() != out.Type() {
panic(fmt.Sprintf("proto.Merge(%T, %T) type mismatch", dst, src))
}
if in.IsNil() {
return // Merge from nil src is a noop
}
if m, ok := dst.(generatedMerger); ok {
m.XXX_Merge(src)
return
}
mergeStruct(out.Elem(), in.Elem())
}
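As a hedged round-trip sketch of the two entry points above (item is a minimal hand-written message used purely for illustration; protoc-generated types would additionally exercise the Merger/generatedMerger fast paths):

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

// item is a minimal hand-written message, for illustration only.
type item struct {
	Tags []string
}

func (m *item) Reset()         { *m = item{} }
func (m *item) String() string { return fmt.Sprintf("%+v", *m) }
func (*item) ProtoMessage()    {}

func main() {
	src := &item{Tags: []string{"a"}}

	dst := proto.Clone(src).(*item) // deep copy with the same dynamic type
	proto.Merge(dst, src)           // repeated fields from src are appended

	fmt.Println(dst.Tags) // [a a]
}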
func mergeStruct(out, in reflect.Value) {
sprop := GetProperties(in.Type())
for i := 0; i < in.NumField(); i++ {
f := in.Type().Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
mergeAny(out.Field(i), in.Field(i), false, sprop.Prop[i])
}
if emIn, ok := in.Addr().Interface().(extensionsBytes); ok {
emOut := out.Addr().Interface().(extensionsBytes)
bIn := emIn.GetExtensions()
bOut := emOut.GetExtensions()
*bOut = append(*bOut, *bIn...)
} else if emIn, err := extendable(in.Addr().Interface()); err == nil {
emOut, _ := extendable(out.Addr().Interface())
mIn, muIn := emIn.extensionsRead()
if mIn != nil {
mOut := emOut.extensionsWrite()
muIn.Lock()
mergeExtension(mOut, mIn)
muIn.Unlock()
}
}
uf := in.FieldByName("XXX_unrecognized")
if !uf.IsValid() {
return
}
uin := uf.Bytes()
if len(uin) > 0 {
out.FieldByName("XXX_unrecognized").SetBytes(append([]byte(nil), uin...))
}
}
// mergeAny performs a merge between two values of the same type.
// viaPtr indicates whether the values were indirected through a pointer (implying proto2).
// prop is set if this is a struct field (it may be nil).
func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) {
if in.Type() == protoMessageType {
if !in.IsNil() {
if out.IsNil() {
out.Set(reflect.ValueOf(Clone(in.Interface().(Message))))
} else {
Merge(out.Interface().(Message), in.Interface().(Message))
}
}
return
}
switch in.Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
if !viaPtr && isProto3Zero(in) {
return
}
out.Set(in)
case reflect.Interface:
// Probably a oneof field; copy non-nil values.
if in.IsNil() {
return
}
// Allocate destination if it is not set, or set to a different type.
// Otherwise we will merge as normal.
if out.IsNil() || out.Elem().Type() != in.Elem().Type() {
out.Set(reflect.New(in.Elem().Elem().Type())) // interface -> *T -> T -> new(T)
}
mergeAny(out.Elem(), in.Elem(), false, nil)
case reflect.Map:
if in.Len() == 0 {
return
}
if out.IsNil() {
out.Set(reflect.MakeMap(in.Type()))
}
// For maps with value types of *T or []byte we need to deep copy each value.
elemKind := in.Type().Elem().Kind()
for _, key := range in.MapKeys() {
var val reflect.Value
switch elemKind {
case reflect.Ptr:
val = reflect.New(in.Type().Elem().Elem())
mergeAny(val, in.MapIndex(key), false, nil)
case reflect.Slice:
val = in.MapIndex(key)
val = reflect.ValueOf(append([]byte{}, val.Bytes()...))
default:
val = in.MapIndex(key)
}
out.SetMapIndex(key, val)
}
case reflect.Ptr:
if in.IsNil() {
return
}
if out.IsNil() {
out.Set(reflect.New(in.Elem().Type()))
}
mergeAny(out.Elem(), in.Elem(), true, nil)
case reflect.Slice:
if in.IsNil() {
return
}
if in.Type().Elem().Kind() == reflect.Uint8 {
// []byte is a scalar bytes field, not a repeated field.
// Edge case: if this is in a proto3 message, a zero length
// bytes field is considered the zero value, and should not
// be merged.
if prop != nil && prop.proto3 && in.Len() == 0 {
return
}
// Make a deep copy.
// Append to []byte{} instead of []byte(nil) so that we never end up
// with a nil result.
out.SetBytes(append([]byte{}, in.Bytes()...))
return
}
n := in.Len()
if out.IsNil() {
out.Set(reflect.MakeSlice(in.Type(), 0, n))
}
switch in.Type().Elem().Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
out.Set(reflect.AppendSlice(out, in))
default:
for i := 0; i < n; i++ {
x := reflect.Indirect(reflect.New(in.Type().Elem()))
mergeAny(x, in.Index(i), false, nil)
out.Set(reflect.Append(out, x))
}
}
case reflect.Struct:
mergeStruct(out, in)
default:
// unknown type, so not a protocol buffer
log.Printf("proto: don't know how to copy %v", in)
}
}
func mergeExtension(out, in map[int32]Extension) {
for extNum, eIn := range in {
eOut := Extension{desc: eIn.desc}
if eIn.value != nil {
v := reflect.New(reflect.TypeOf(eIn.value)).Elem()
mergeAny(v, reflect.ValueOf(eIn.value), false, nil)
eOut.value = v.Interface()
}
if eIn.enc != nil {
eOut.enc = make([]byte, len(eIn.enc))
copy(eOut.enc, eIn.enc)
}
out[extNum] = eOut
}
}

vendor/github.com/gogo/protobuf/proto/custom_gogo.go generated vendored Normal file

@@ -0,0 +1,39 @@
// Protocol Buffers for Go with Gadgets
//
// Copyright (c) 2018, The GoGo Authors. All rights reserved.
// http://github.com/gogo/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import "reflect"
type custom interface {
Marshal() ([]byte, error)
Unmarshal(data []byte) error
Size() int
}
var customType = reflect.TypeOf((*custom)(nil)).Elem()

vendor/github.com/gogo/protobuf/proto/decode.go generated vendored Normal file

@@ -0,0 +1,427 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Routines for decoding protocol buffer data to construct in-memory representations.
*/
import (
"errors"
"fmt"
"io"
)
// errOverflow is returned when an integer is too large to be represented.
var errOverflow = errors.New("proto: integer overflow")
// ErrInternalBadWireType is returned by generated code when an incorrect
// wire type is encountered. It does not get returned to user code.
var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")
// DecodeVarint reads a varint-encoded integer from the slice.
// It returns the integer and the number of bytes consumed, or
// zero if there is not enough data.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func DecodeVarint(buf []byte) (x uint64, n int) {
for shift := uint(0); shift < 64; shift += 7 {
if n >= len(buf) {
return 0, 0
}
b := uint64(buf[n])
n++
x |= (b & 0x7F) << shift
if (b & 0x80) == 0 {
return x, n
}
}
// The number is too large to represent in a 64-bit value.
return 0, 0
}
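A quick round trip with the package-level helpers (EncodeVarint is the companion encoder defined in encode.go further down this diff): 300 is 0b1_0010_1100, so the low seven bits are emitted first with the continuation bit set.

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

func main() {
	buf := proto.EncodeVarint(300)
	fmt.Printf("% x\n", buf) // ac 02

	x, n := proto.DecodeVarint(buf)
	fmt.Println(x, n) // 300 2
}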
func (p *Buffer) decodeVarintSlow() (x uint64, err error) {
i := p.index
l := len(p.buf)
for shift := uint(0); shift < 64; shift += 7 {
if i >= l {
err = io.ErrUnexpectedEOF
return
}
b := p.buf[i]
i++
x |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
p.index = i
return
}
}
// The number is too large to represent in a 64-bit value.
err = errOverflow
return
}
// DecodeVarint reads a varint-encoded integer from the Buffer.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func (p *Buffer) DecodeVarint() (x uint64, err error) {
i := p.index
buf := p.buf
if i >= len(buf) {
return 0, io.ErrUnexpectedEOF
} else if buf[i] < 0x80 {
p.index++
return uint64(buf[i]), nil
} else if len(buf)-i < 10 {
return p.decodeVarintSlow()
}
var b uint64
// we already checked the first byte
x = uint64(buf[i]) - 0x80
i++
b = uint64(buf[i])
i++
x += b << 7
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 7
b = uint64(buf[i])
i++
x += b << 14
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 14
b = uint64(buf[i])
i++
x += b << 21
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 21
b = uint64(buf[i])
i++
x += b << 28
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 28
b = uint64(buf[i])
i++
x += b << 35
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 35
b = uint64(buf[i])
i++
x += b << 42
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 42
b = uint64(buf[i])
i++
x += b << 49
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 49
b = uint64(buf[i])
i++
x += b << 56
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 56
b = uint64(buf[i])
i++
x += b << 63
if b&0x80 == 0 {
goto done
}
return 0, errOverflow
done:
p.index = i
return x, nil
}
// DecodeFixed64 reads a 64-bit integer from the Buffer.
// This is the format for the
// fixed64, sfixed64, and double protocol buffer types.
func (p *Buffer) DecodeFixed64() (x uint64, err error) {
// x, err already 0
i := p.index + 8
if i < 0 || i > len(p.buf) {
err = io.ErrUnexpectedEOF
return
}
p.index = i
x = uint64(p.buf[i-8])
x |= uint64(p.buf[i-7]) << 8
x |= uint64(p.buf[i-6]) << 16
x |= uint64(p.buf[i-5]) << 24
x |= uint64(p.buf[i-4]) << 32
x |= uint64(p.buf[i-3]) << 40
x |= uint64(p.buf[i-2]) << 48
x |= uint64(p.buf[i-1]) << 56
return
}
// DecodeFixed32 reads a 32-bit integer from the Buffer.
// This is the format for the
// fixed32, sfixed32, and float protocol buffer types.
func (p *Buffer) DecodeFixed32() (x uint64, err error) {
// x, err already 0
i := p.index + 4
if i < 0 || i > len(p.buf) {
err = io.ErrUnexpectedEOF
return
}
p.index = i
x = uint64(p.buf[i-4])
x |= uint64(p.buf[i-3]) << 8
x |= uint64(p.buf[i-2]) << 16
x |= uint64(p.buf[i-1]) << 24
return
}
// DecodeZigzag64 reads a zigzag-encoded 64-bit integer
// from the Buffer.
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) DecodeZigzag64() (x uint64, err error) {
x, err = p.DecodeVarint()
if err != nil {
return
}
x = (x >> 1) ^ uint64((int64(x&1)<<63)>>63)
return
}
// DecodeZigzag32 reads a zigzag-encoded 32-bit integer
// from the Buffer.
// This is the format used for the sint32 protocol buffer type.
func (p *Buffer) DecodeZigzag32() (x uint64, err error) {
x, err = p.DecodeVarint()
if err != nil {
return
}
x = uint64((uint32(x) >> 1) ^ uint32((int32(x&1)<<31)>>31))
return
}
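The zigzag mapping these two decoders invert sends 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ..., so small negative numbers stay short on the wire. A sketch pairing the Buffer's encoder (defined in encode.go below) with the decoder:

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

func main() {
	p := proto.NewBuffer(nil)

	n := int64(-3)
	p.EncodeZigzag64(uint64(n)) // zigzag(-3) = 5, a one-byte varint

	x, _ := p.DecodeZigzag64()
	fmt.Println(int64(x)) // -3
}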
// DecodeRawBytes reads a count-delimited byte buffer from the Buffer.
// This is the format used for the bytes protocol buffer
// type and for embedded messages.
func (p *Buffer) DecodeRawBytes(alloc bool) (buf []byte, err error) {
n, err := p.DecodeVarint()
if err != nil {
return nil, err
}
nb := int(n)
if nb < 0 {
return nil, fmt.Errorf("proto: bad byte length %d", nb)
}
end := p.index + nb
if end < p.index || end > len(p.buf) {
return nil, io.ErrUnexpectedEOF
}
if !alloc {
// todo: check if can get more uses of alloc=false
buf = p.buf[p.index:end]
p.index += nb
return
}
buf = make([]byte, nb)
copy(buf, p.buf[p.index:])
p.index += nb
return
}
// DecodeStringBytes reads an encoded string from the Buffer.
// This is the format used for the proto2 string type.
func (p *Buffer) DecodeStringBytes() (s string, err error) {
buf, err := p.DecodeRawBytes(false)
if err != nil {
return
}
return string(buf), nil
}
// Unmarshaler is the interface representing objects that can
// unmarshal themselves. The argument points to data that may be
// overwritten, so implementations should not keep references to the
// buffer.
// Unmarshal implementations should not clear the receiver.
// Any unmarshaled data should be merged into the receiver.
// Callers of Unmarshal that do not want to retain existing data
// should Reset the receiver before calling Unmarshal.
type Unmarshaler interface {
Unmarshal([]byte) error
}
// newUnmarshaler is the interface representing objects that can
// unmarshal themselves. The semantics are identical to Unmarshaler.
//
// This exists to support protoc-gen-go generated messages.
// The proto package will stop type-asserting to this interface in the future.
//
// DO NOT DEPEND ON THIS.
type newUnmarshaler interface {
XXX_Unmarshal([]byte) error
}
// Unmarshal parses the protocol buffer representation in buf and places the
// decoded result in pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// Unmarshal resets pb before starting to unmarshal, so any
// existing data in pb is always removed. Use UnmarshalMerge
// to preserve and append to existing data.
func Unmarshal(buf []byte, pb Message) error {
pb.Reset()
if u, ok := pb.(newUnmarshaler); ok {
return u.XXX_Unmarshal(buf)
}
if u, ok := pb.(Unmarshaler); ok {
return u.Unmarshal(buf)
}
return NewBuffer(buf).Unmarshal(pb)
}
// UnmarshalMerge parses the protocol buffer representation in buf and
// writes the decoded result to pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// UnmarshalMerge merges into existing data in pb.
// Most code should use Unmarshal instead.
func UnmarshalMerge(buf []byte, pb Message) error {
if u, ok := pb.(newUnmarshaler); ok {
return u.XXX_Unmarshal(buf)
}
if u, ok := pb.(Unmarshaler); ok {
// NOTE: The history of proto has unfortunately been inconsistent
// whether Unmarshaler should or should not implicitly clear itself.
// Some implementations do, most do not.
// Thus, calling this here may or may not do what people want.
//
// See https://github.com/golang/protobuf/issues/424
return u.Unmarshal(buf)
}
return NewBuffer(buf).Unmarshal(pb)
}
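To make the reset-versus-merge distinction concrete, a hedged sketch with a hand-written Unmarshaler (rawLog is illustrative; its Unmarshal appends, honoring the merge contract described above):

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

// rawLog is an illustrative message that accumulates raw bytes.
type rawLog struct{ data []byte }

func (m *rawLog) Reset()         { m.data = nil }
func (m *rawLog) String() string { return string(m.data) }
func (*rawLog) ProtoMessage()    {}

// Unmarshal merges into the receiver, as the contract asks.
func (m *rawLog) Unmarshal(b []byte) error {
	m.data = append(m.data, b...)
	return nil
}

func main() {
	m := &rawLog{data: []byte("old;")}
	_ = proto.Unmarshal([]byte("new"), m) // Reset runs first
	fmt.Println(m) // new

	m = &rawLog{data: []byte("old;")}
	_ = proto.UnmarshalMerge([]byte("new"), m) // existing data is kept
	fmt.Println(m) // old;new
}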
// DecodeMessage reads a count-delimited message from the Buffer.
func (p *Buffer) DecodeMessage(pb Message) error {
enc, err := p.DecodeRawBytes(false)
if err != nil {
return err
}
return NewBuffer(enc).Unmarshal(pb)
}
// DecodeGroup reads a tag-delimited group from the Buffer.
// StartGroup tag is already consumed. This function consumes
// EndGroup tag.
func (p *Buffer) DecodeGroup(pb Message) error {
b := p.buf[p.index:]
x, y := findEndGroup(b)
if x < 0 {
return io.ErrUnexpectedEOF
}
err := Unmarshal(b[:x], pb)
p.index += y
return err
}
// Unmarshal parses the protocol buffer representation in the
// Buffer and places the decoded result in pb. If the struct
// underlying pb does not match the data in the buffer, the results can be
// unpredictable.
//
// Unlike proto.Unmarshal, this does not reset pb before starting to unmarshal.
func (p *Buffer) Unmarshal(pb Message) error {
// If the object can unmarshal itself, let it.
if u, ok := pb.(newUnmarshaler); ok {
err := u.XXX_Unmarshal(p.buf[p.index:])
p.index = len(p.buf)
return err
}
if u, ok := pb.(Unmarshaler); ok {
// NOTE: The history of proto has unfortunately been inconsistent
// whether Unmarshaler should or should not implicitly clear itself.
// Some implementations do, most do not.
// Thus, calling this here may or may not do what people want.
//
// See https://github.com/golang/protobuf/issues/424
err := u.Unmarshal(p.buf[p.index:])
p.index = len(p.buf)
return err
}
// Slow workaround for messages that aren't Unmarshalers.
// This includes some hand-coded .pb.go files and
// bootstrap protos.
// TODO: fix all of those and then add Unmarshal to
// the Message interface. Then:
// The cast above and code below can be deleted.
// The old unmarshaler can be deleted.
// Clients can call Unmarshal directly (can already do that, actually).
var info InternalMessageInfo
err := info.Unmarshal(pb, p.buf[p.index:])
p.index = len(p.buf)
return err
}

vendor/github.com/gogo/protobuf/proto/deprecated.go generated vendored Normal file

@@ -0,0 +1,63 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2018 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import "errors"
// Deprecated: do not use.
type Stats struct{ Emalloc, Dmalloc, Encode, Decode, Chit, Cmiss, Size uint64 }
// Deprecated: do not use.
func GetStats() Stats { return Stats{} }
// Deprecated: do not use.
func MarshalMessageSet(interface{}) ([]byte, error) {
return nil, errors.New("proto: not implemented")
}
// Deprecated: do not use.
func UnmarshalMessageSet([]byte, interface{}) error {
return errors.New("proto: not implemented")
}
// Deprecated: do not use.
func MarshalMessageSetJSON(interface{}) ([]byte, error) {
return nil, errors.New("proto: not implemented")
}
// Deprecated: do not use.
func UnmarshalMessageSetJSON([]byte, interface{}) error {
return errors.New("proto: not implemented")
}
// Deprecated: do not use.
func RegisterMessageSetType(Message, int32, string) {}

vendor/github.com/gogo/protobuf/proto/discard.go generated vendored Normal file

@@ -0,0 +1,350 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2017 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import (
"fmt"
"reflect"
"strings"
"sync"
"sync/atomic"
)
type generatedDiscarder interface {
XXX_DiscardUnknown()
}
// DiscardUnknown recursively discards all unknown fields from this message
// and all embedded messages.
//
// When unmarshaling a message with unrecognized fields, the tags and values
// of such fields are preserved in the Message. This allows a later call to
// marshal to be able to produce a message that continues to have those
// unrecognized fields. To avoid this, DiscardUnknown is used to
// explicitly clear the unknown fields after unmarshaling.
//
// For proto2 messages, the unknown fields of message extensions are only
// discarded from messages that have been accessed via GetExtension.
func DiscardUnknown(m Message) {
if m, ok := m.(generatedDiscarder); ok {
m.XXX_DiscardUnknown()
return
}
// TODO: Dynamically populate an InternalMessageInfo for legacy messages,
// but the master branch has no implementation for InternalMessageInfo,
// so it would be more work to replicate that approach.
discardLegacy(m)
}
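A hedged sketch of the legacy reflective path (msg is a hypothetical hand-written message; generated types would take the XXX_DiscardUnknown branch instead):

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

// msg carries unknown fields in the conventional XXX_unrecognized buffer.
type msg struct {
	Name             string
	XXX_unrecognized []byte
}

func (m *msg) Reset()         { *m = msg{} }
func (m *msg) String() string { return m.Name }
func (*msg) ProtoMessage()    {}

func main() {
	m := &msg{Name: "x", XXX_unrecognized: []byte{0x08, 0x01}}
	proto.DiscardUnknown(m)              // clears the preserved unknown fields
	fmt.Println(len(m.XXX_unrecognized)) // 0
}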
// DiscardUnknown recursively discards all unknown fields.
func (a *InternalMessageInfo) DiscardUnknown(m Message) {
di := atomicLoadDiscardInfo(&a.discard)
if di == nil {
di = getDiscardInfo(reflect.TypeOf(m).Elem())
atomicStoreDiscardInfo(&a.discard, di)
}
di.discard(toPointer(&m))
}
type discardInfo struct {
typ reflect.Type
initialized int32 // 0: only typ is valid, 1: everything is valid
lock sync.Mutex
fields []discardFieldInfo
unrecognized field
}
type discardFieldInfo struct {
field field // Offset of field, guaranteed to be valid
discard func(src pointer)
}
var (
discardInfoMap = map[reflect.Type]*discardInfo{}
discardInfoLock sync.Mutex
)
func getDiscardInfo(t reflect.Type) *discardInfo {
discardInfoLock.Lock()
defer discardInfoLock.Unlock()
di := discardInfoMap[t]
if di == nil {
di = &discardInfo{typ: t}
discardInfoMap[t] = di
}
return di
}
func (di *discardInfo) discard(src pointer) {
if src.isNil() {
return // Nothing to do.
}
if atomic.LoadInt32(&di.initialized) == 0 {
di.computeDiscardInfo()
}
for _, fi := range di.fields {
sfp := src.offset(fi.field)
fi.discard(sfp)
}
// For proto2 messages, only discard unknown fields in message extensions
// that have been accessed via GetExtension.
if em, err := extendable(src.asPointerTo(di.typ).Interface()); err == nil {
// Ignore lock since DiscardUnknown is not concurrency safe.
emm, _ := em.extensionsRead()
for _, mx := range emm {
if m, ok := mx.value.(Message); ok {
DiscardUnknown(m)
}
}
}
if di.unrecognized.IsValid() {
*src.offset(di.unrecognized).toBytes() = nil
}
}
func (di *discardInfo) computeDiscardInfo() {
di.lock.Lock()
defer di.lock.Unlock()
if di.initialized != 0 {
return
}
t := di.typ
n := t.NumField()
for i := 0; i < n; i++ {
f := t.Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
dfi := discardFieldInfo{field: toField(&f)}
tf := f.Type
// Unwrap tf to get its most basic type.
var isPointer, isSlice bool
if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 {
isSlice = true
tf = tf.Elem()
}
if tf.Kind() == reflect.Ptr {
isPointer = true
tf = tf.Elem()
}
if isPointer && isSlice && tf.Kind() != reflect.Struct {
panic(fmt.Sprintf("%v.%s cannot be a slice of pointers to primitive types", t, f.Name))
}
switch tf.Kind() {
case reflect.Struct:
switch {
case !isPointer:
panic(fmt.Sprintf("%v.%s cannot be a direct struct value", t, f.Name))
case isSlice: // E.g., []*pb.T
discardInfo := getDiscardInfo(tf)
dfi.discard = func(src pointer) {
sps := src.getPointerSlice()
for _, sp := range sps {
if !sp.isNil() {
discardInfo.discard(sp)
}
}
}
default: // E.g., *pb.T
discardInfo := getDiscardInfo(tf)
dfi.discard = func(src pointer) {
sp := src.getPointer()
if !sp.isNil() {
discardInfo.discard(sp)
}
}
}
case reflect.Map:
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%v.%s cannot be a pointer to a map or a slice of map values", t, f.Name))
default: // E.g., map[K]V
if tf.Elem().Kind() == reflect.Ptr { // Proto struct (e.g., *T)
dfi.discard = func(src pointer) {
sm := src.asPointerTo(tf).Elem()
if sm.Len() == 0 {
return
}
for _, key := range sm.MapKeys() {
val := sm.MapIndex(key)
DiscardUnknown(val.Interface().(Message))
}
}
} else {
dfi.discard = func(pointer) {} // Noop
}
}
case reflect.Interface:
// Must be oneof field.
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%v.%s cannot be a pointer to a interface or a slice of interface values", t, f.Name))
default: // E.g., interface{}
// TODO: Make this faster?
dfi.discard = func(src pointer) {
su := src.asPointerTo(tf).Elem()
if !su.IsNil() {
sv := su.Elem().Elem().Field(0)
if sv.Kind() == reflect.Ptr && sv.IsNil() {
return
}
switch sv.Type().Kind() {
case reflect.Ptr: // Proto struct (e.g., *T)
DiscardUnknown(sv.Interface().(Message))
}
}
}
}
default:
continue
}
di.fields = append(di.fields, dfi)
}
di.unrecognized = invalidField
if f, ok := t.FieldByName("XXX_unrecognized"); ok {
if f.Type != reflect.TypeOf([]byte{}) {
panic("expected XXX_unrecognized to be of type []byte")
}
di.unrecognized = toField(&f)
}
atomic.StoreInt32(&di.initialized, 1)
}
func discardLegacy(m Message) {
v := reflect.ValueOf(m)
if v.Kind() != reflect.Ptr || v.IsNil() {
return
}
v = v.Elem()
if v.Kind() != reflect.Struct {
return
}
t := v.Type()
for i := 0; i < v.NumField(); i++ {
f := t.Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
vf := v.Field(i)
tf := f.Type
// Unwrap tf to get its most basic type.
var isPointer, isSlice bool
if tf.Kind() == reflect.Slice && tf.Elem().Kind() != reflect.Uint8 {
isSlice = true
tf = tf.Elem()
}
if tf.Kind() == reflect.Ptr {
isPointer = true
tf = tf.Elem()
}
if isPointer && isSlice && tf.Kind() != reflect.Struct {
panic(fmt.Sprintf("%T.%s cannot be a slice of pointers to primitive types", m, f.Name))
}
switch tf.Kind() {
case reflect.Struct:
switch {
case !isPointer:
panic(fmt.Sprintf("%T.%s cannot be a direct struct value", m, f.Name))
case isSlice: // E.g., []*pb.T
for j := 0; j < vf.Len(); j++ {
discardLegacy(vf.Index(j).Interface().(Message))
}
default: // E.g., *pb.T
discardLegacy(vf.Interface().(Message))
}
case reflect.Map:
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%T.%s cannot be a pointer to a map or a slice of map values", m, f.Name))
default: // E.g., map[K]V
tv := vf.Type().Elem()
if tv.Kind() == reflect.Ptr && tv.Implements(protoMessageType) { // Proto struct (e.g., *T)
for _, key := range vf.MapKeys() {
val := vf.MapIndex(key)
discardLegacy(val.Interface().(Message))
}
}
}
case reflect.Interface:
// Must be oneof field.
switch {
case isPointer || isSlice:
panic(fmt.Sprintf("%T.%s cannot be a pointer to a interface or a slice of interface values", m, f.Name))
default: // E.g., test_proto.isCommunique_Union interface
if !vf.IsNil() && f.Tag.Get("protobuf_oneof") != "" {
vf = vf.Elem() // E.g., *test_proto.Communique_Msg
if !vf.IsNil() {
vf = vf.Elem() // E.g., test_proto.Communique_Msg
vf = vf.Field(0) // E.g., Proto struct (e.g., *T) or primitive value
if vf.Kind() == reflect.Ptr {
discardLegacy(vf.Interface().(Message))
}
}
}
}
}
}
if vf := v.FieldByName("XXX_unrecognized"); vf.IsValid() {
if vf.Type() != reflect.TypeOf([]byte{}) {
panic("expected XXX_unrecognized to be of type []byte")
}
vf.Set(reflect.ValueOf([]byte(nil)))
}
// For proto2 messages, only discard unknown fields in message extensions
// that have been accessed via GetExtension.
if em, err := extendable(m); err == nil {
// Ignore lock since discardLegacy is not concurrency safe.
emm, _ := em.extensionsRead()
for _, mx := range emm {
if m, ok := mx.value.(Message); ok {
discardLegacy(m)
}
}
}
}

vendor/github.com/gogo/protobuf/proto/duration.go generated vendored Normal file

@@ -0,0 +1,100 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
// This file implements conversions between google.protobuf.Duration
// and time.Duration.
import (
"errors"
"fmt"
"time"
)
const (
// Range of a Duration in seconds, as specified in
// google/protobuf/duration.proto. This is about 10,000 years in seconds.
maxSeconds = int64(10000 * 365.25 * 24 * 60 * 60)
minSeconds = -maxSeconds
)
// validateDuration determines whether the Duration is valid according to the
// definition in google/protobuf/duration.proto. A valid Duration
// may still be too large to fit into a time.Duration (the range of Duration
is about 10,000 years, and the range of time.Duration is about 290 years).
func validateDuration(d *duration) error {
if d == nil {
return errors.New("duration: nil Duration")
}
if d.Seconds < minSeconds || d.Seconds > maxSeconds {
return fmt.Errorf("duration: %#v: seconds out of range", d)
}
if d.Nanos <= -1e9 || d.Nanos >= 1e9 {
return fmt.Errorf("duration: %#v: nanos out of range", d)
}
// Seconds and Nanos must have the same sign, unless d.Nanos is zero.
if (d.Seconds < 0 && d.Nanos > 0) || (d.Seconds > 0 && d.Nanos < 0) {
return fmt.Errorf("duration: %#v: seconds and nanos have different signs", d)
}
return nil
}
// durationFromProto converts a duration to a time.Duration. durationFromProto
// returns an error if the Duration is invalid or is too large to be
// represented in a time.Duration.
func durationFromProto(p *duration) (time.Duration, error) {
if err := validateDuration(p); err != nil {
return 0, err
}
d := time.Duration(p.Seconds) * time.Second
if int64(d/time.Second) != p.Seconds {
return 0, fmt.Errorf("duration: %#v is out of range for time.Duration", p)
}
if p.Nanos != 0 {
d += time.Duration(p.Nanos)
if (d < 0) != (p.Nanos < 0) {
return 0, fmt.Errorf("duration: %#v is out of range for time.Duration", p)
}
}
return d, nil
}
// durationProto converts a time.Duration to a duration.
func durationProto(d time.Duration) *duration {
nanos := d.Nanoseconds()
secs := nanos / 1e9
nanos -= secs * 1e9
return &duration{
Seconds: secs,
Nanos: int32(nanos),
}
}
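Since duration and these converters are unexported, here is a standalone worked example of the same split, mirroring durationProto's arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	d := 90*time.Second + 500*time.Millisecond

	nanos := d.Nanoseconds() // 90500000000
	secs := nanos / 1e9      // 90
	nanos -= secs * 1e9      // 500000000

	fmt.Println(secs, nanos) // 90 500000000
}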

vendor/github.com/gogo/protobuf/proto/duration_gogo.go generated vendored Normal file

@@ -0,0 +1,49 @@
// Protocol Buffers for Go with Gadgets
//
// Copyright (c) 2016, The GoGo Authors. All rights reserved.
// http://github.com/gogo/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import (
"reflect"
"time"
)
var durationType = reflect.TypeOf((*time.Duration)(nil)).Elem()
type duration struct {
Seconds int64 `protobuf:"varint,1,opt,name=seconds,proto3" json:"seconds,omitempty"`
Nanos int32 `protobuf:"varint,2,opt,name=nanos,proto3" json:"nanos,omitempty"`
}
func (m *duration) Reset() { *m = duration{} }
func (*duration) ProtoMessage() {}
func (*duration) String() string { return "duration<string>" }
func init() {
RegisterType((*duration)(nil), "gogo.protobuf.proto.duration")
}

vendor/github.com/gogo/protobuf/proto/encode.go generated vendored Normal file

@@ -0,0 +1,205 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Routines for encoding data into the wire format for protocol buffers.
*/
import (
"errors"
"reflect"
)
var (
// errRepeatedHasNil is the error returned if Marshal is called with
// a struct with a repeated field containing a nil element.
errRepeatedHasNil = errors.New("proto: repeated field has nil element")
// errOneofHasNil is the error returned if Marshal is called with
// a struct with a oneof field containing a nil element.
errOneofHasNil = errors.New("proto: oneof field has nil value")
// ErrNil is the error returned if Marshal is called with nil.
ErrNil = errors.New("proto: Marshal called with nil")
// ErrTooLarge is the error returned if Marshal is called with a
// message that encodes to >2GB.
ErrTooLarge = errors.New("proto: message encodes to over 2 GB")
)
// The fundamental encoders that put bytes on the wire.
// Those that take integer types all accept uint64 and are
// therefore of type valueEncoder.
const maxVarintBytes = 10 // maximum length of a varint
// EncodeVarint returns the varint encoding of x.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
// Not used by the package itself, but helpful to clients
// wishing to use the same encoding.
func EncodeVarint(x uint64) []byte {
var buf [maxVarintBytes]byte
var n int
for n = 0; x > 127; n++ {
buf[n] = 0x80 | uint8(x&0x7F)
x >>= 7
}
buf[n] = uint8(x)
n++
return buf[0:n]
}
// EncodeVarint writes a varint-encoded integer to the Buffer.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func (p *Buffer) EncodeVarint(x uint64) error {
for x >= 1<<7 {
p.buf = append(p.buf, uint8(x&0x7f|0x80))
x >>= 7
}
p.buf = append(p.buf, uint8(x))
return nil
}
// SizeVarint returns the varint encoding size of an integer.
func SizeVarint(x uint64) int {
switch {
case x < 1<<7:
return 1
case x < 1<<14:
return 2
case x < 1<<21:
return 3
case x < 1<<28:
return 4
case x < 1<<35:
return 5
case x < 1<<42:
return 6
case x < 1<<49:
return 7
case x < 1<<56:
return 8
case x < 1<<63:
return 9
}
return 10
}
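A small consistency check (sketch): SizeVarint agrees with the length of what EncodeVarint actually produces, stepping up one byte at each 7-bit boundary.

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

func main() {
	for _, x := range []uint64{0, 127, 128, 16383, 16384} {
		fmt.Println(x, proto.SizeVarint(x), len(proto.EncodeVarint(x)))
	}
	// 0 and 127 fit in 1 byte; 128..16383 take 2; 16384 takes 3.
}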
// EncodeFixed64 writes a 64-bit integer to the Buffer.
// This is the format for the
// fixed64, sfixed64, and double protocol buffer types.
func (p *Buffer) EncodeFixed64(x uint64) error {
p.buf = append(p.buf,
uint8(x),
uint8(x>>8),
uint8(x>>16),
uint8(x>>24),
uint8(x>>32),
uint8(x>>40),
uint8(x>>48),
uint8(x>>56))
return nil
}
// EncodeFixed32 writes a 32-bit integer to the Buffer.
// This is the format for the
// fixed32, sfixed32, and float protocol buffer types.
func (p *Buffer) EncodeFixed32(x uint64) error {
p.buf = append(p.buf,
uint8(x),
uint8(x>>8),
uint8(x>>16),
uint8(x>>24))
return nil
}
// EncodeZigzag64 writes a zigzag-encoded 64-bit integer
// to the Buffer.
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) EncodeZigzag64(x uint64) error {
// use signed number to get arithmetic right shift.
return p.EncodeVarint(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
// EncodeZigzag32 writes a zigzag-encoded 32-bit integer
// to the Buffer.
// This is the format used for the sint32 protocol buffer type.
func (p *Buffer) EncodeZigzag32(x uint64) error {
// use signed number to get arithmetic right shift.
return p.EncodeVarint(uint64((uint32(x) << 1) ^ uint32((int32(x) >> 31))))
}
// EncodeRawBytes writes a count-delimited byte buffer to the Buffer.
// This is the format used for the bytes protocol buffer
// type and for embedded messages.
func (p *Buffer) EncodeRawBytes(b []byte) error {
p.EncodeVarint(uint64(len(b)))
p.buf = append(p.buf, b...)
return nil
}
// EncodeStringBytes writes an encoded string to the Buffer.
// This is the format used for the proto2 string type.
func (p *Buffer) EncodeStringBytes(s string) error {
p.EncodeVarint(uint64(len(s)))
p.buf = append(p.buf, s...)
return nil
}
// Marshaler is the interface representing objects that can marshal themselves.
type Marshaler interface {
Marshal() ([]byte, error)
}
// EncodeMessage writes the protocol buffer to the Buffer,
// prefixed by a varint-encoded length.
func (p *Buffer) EncodeMessage(pb Message) error {
siz := Size(pb)
sizVar := SizeVarint(uint64(siz))
p.grow(siz + sizVar)
p.EncodeVarint(uint64(siz))
return p.Marshal(pb)
}
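A hedged framing round trip (blob is an illustrative self-marshaling message): EncodeMessage writes a varint length followed by the message bytes, and DecodeMessage, defined in decode.go above, reads one such frame back.

package main

import (
	"fmt"

	"github.com/gogo/protobuf/proto"
)

// blob is an illustrative message whose wire form is its raw bytes.
type blob struct{ b []byte }

func (m *blob) Reset()         { m.b = nil }
func (m *blob) String() string { return string(m.b) }
func (*blob) ProtoMessage()    {}

func (m *blob) Marshal() ([]byte, error) { return m.b, nil }
func (m *blob) Unmarshal(b []byte) error {
	m.b = append([]byte(nil), b...)
	return nil
}

func main() {
	p := proto.NewBuffer(nil)
	_ = p.EncodeMessage(&blob{b: []byte("hi")}) // frame: 0x02 'h' 'i'

	var out blob
	_ = p.DecodeMessage(&out)
	fmt.Println(out.String()) // hi
}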
// All protocol buffer fields are nillable, but be careful.
func isNil(v reflect.Value) bool {
switch v.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return v.IsNil()
}
return false
}

vendor/github.com/gogo/protobuf/proto/encode_gogo.go generated vendored Normal file

@@ -0,0 +1,33 @@
// Protocol Buffers for Go with Gadgets
//
// Copyright (c) 2013, The GoGo Authors. All rights reserved.
// http://github.com/gogo/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
func NewRequiredNotSetError(field string) *RequiredNotSetError {
return &RequiredNotSetError{field}
}

Some files were not shown because too many files have changed in this diff.