Merge branch 'master' into output
This commit is contained in:
commit f206e3f425
@ -7,6 +7,7 @@ about: Report a bug in kaniko
**Actual behavior**
A clear and concise description of what the bug is.

**Expected behavior**
A clear and concise description of what you expected to happen.

@ -21,3 +22,16 @@ Steps to reproduce the behavior:
- Build Context
Please provide or clearly describe any files needed to build the Dockerfile (ADD/COPY commands)
- Kaniko Image (fully qualified with digest)

**Triage Notes for the Maintainers**
<!-- 🎉🎉🎉 Thank you for opening an issue !!! 🎉🎉🎉
We are doing our best to get to this. Please help us prioritize your issue by filling in the section below -->

| **Description** | **Yes/No** |
|----------------|---------------|
| Please check if this is a new feature you are proposing | <ul><li>- [ ] </li></ul>|
| Please check if the build works in docker but not in kaniko | <ul><li>- [ ] </li></ul>|
| Please check if this error is seen when you use `--cache` flag | <ul><li>- [ ] </li></ul>|
| Please check if your dockerfile is a multistage dockerfile | <ul><li>- [ ] </li></ul>|

@ -0,0 +1,41 @@
<!-- 🎉🎉🎉 Thank you for the PR!!! 🎉🎉🎉 -->

Fixes `#<issue number>`. _in case of a bug fix, this should point to a bug and any other related issue(s)_

**Description**

<!-- Describe your changes here - ideally you can get that description straight from
your descriptive commit message(s)! -->

**Submitter Checklist**

These are the criteria that every PR should meet, please check them off as you
review them:

- [ ] Includes [unit tests](../DEVELOPMENT.md#creating-a-pr)
- [ ] Adds integration tests if needed.

_See [the contribution guide](../CONTRIBUTING.md) for more details._

**Reviewer Notes**

- [ ] The code flow looks good.
- [ ] Unit tests and/or integration tests added.

**Release Notes**

Describe any changes here so the maintainers can include them in the release notes, or delete this block.

```
Examples of user facing changes:
- Skaffold config changes like
  e.g. "Add buildArgs to `Kustomize` deployer skaffold config."
- Bug fixes
  e.g. "Improve skaffold init behaviour when tags are used in manifests"
- Any changes in skaffold behavior
  e.g. "Artifact caching is turned on by default."
```
161 CHANGELOG.md
@ -1,11 +1,154 @@
# v0.9.0 Release - 2/8/2019
# v0.12.0 Release - 2019-09-13

## New Features
* Added `--oci-layout-path` flag to save image in OCI layout. [#744](https://github.com/GoogleContainerTools/kaniko/pull/744)
* Add support for S3 custom endpoint [#698](https://github.com/GoogleContainerTools/kaniko/pull/698)

## Bug Fixes
* Setting PATH [#760](https://github.com/GoogleContainerTools/kaniko/pull/760)
* Remove leading slash in layer tarball paths (Closes: #726) [#729](https://github.com/GoogleContainerTools/kaniko/pull/729)

## Updates and Refactors
* Remove cruft [#635](https://github.com/GoogleContainerTools/kaniko/pull/635)
* Add desc for `--skip-tls-verify-pull` to README [#493](https://github.com/GoogleContainerTools/kaniko/pull/493)

Huge thank you for this release towards our contributors:
- Carlos Alexandro Becker
- Carlos Sanchez
- chhsia0
- Deniz Zoeteman
- Luke Wood
- Matthew Dawson
- Niels Denissen
- Priya Wadhwa
- Sharif Elgamal
- Takeaki Matsumoto
- Taylor Barrella
- Tejal Desai
- v.rul
- Warren Seymour
- xanonid
- Xueshan Feng
- Роман Небалуев

# v0.11.0 Release - 2019-08-23

## Bug Fixes
* fix unpacking archives via ADD [#717](https://github.com/GoogleContainerTools/kaniko/pull/717)
* Reverted not including build args in cache key [#739](https://github.com/GoogleContainerTools/kaniko/pull/739)
* Create cache directory if it doesn't already exist [#452](https://github.com/GoogleContainerTools/kaniko/pull/452)

## New Features
* add multiple user agents to kaniko if upstream_client_type value is set [#750](https://github.com/GoogleContainerTools/kaniko/pull/750)
* Make container layers captured using FS snapshots reproducible [#714](https://github.com/GoogleContainerTools/kaniko/pull/714)
* Include warmer in debug image [#497](https://github.com/GoogleContainerTools/kaniko/pull/497)
* Bail out when there are not enough input arguments [#735](https://github.com/GoogleContainerTools/kaniko/pull/735)
* Add checking image presence in cache prior to downloading it [#723](https://github.com/GoogleContainerTools/kaniko/pull/723)

## Additional PRs
* Document how to build from git reference [#730](https://github.com/GoogleContainerTools/kaniko/pull/730)
* Misc. small changes/refactoring [#712](https://github.com/GoogleContainerTools/kaniko/pull/712)
* Update go-containerregistry [#680](https://github.com/GoogleContainerTools/kaniko/pull/680)
* Update version of go-containerregistry [#724](https://github.com/GoogleContainerTools/kaniko/pull/724)
* feat: support specifying branch for cloning [#703](https://github.com/GoogleContainerTools/kaniko/pull/703)

Huge thank you for this release towards our contributors:
- Carlos Alexandro Becker
- Carlos Sanchez
- Deniz Zoeteman
- Luke Wood
- Matthew Dawson
- priyawadhwa
- sharifelgamal
- Sharif Elgamal
- Taylor Barrella
- Tejal Desai
- v.rul
- Warren Seymour
- Xueshan Feng
- Роман Небалуев

# v0.10.0 Release - 2019-06-19

## Bug Fixes
* Fix kaniko caching [#639](https://github.com/GoogleContainerTools/kaniko/pull/639)
* chore: fix typo [#665](https://github.com/GoogleContainerTools/kaniko/pull/665)
* Fix file mode bug [#618](https://github.com/GoogleContainerTools/kaniko/pull/618)
* Fix arg handling for multi-stage images in COPY instructions. [#621](https://github.com/GoogleContainerTools/kaniko/pull/621)
* Fix parent directory permissions [#619](https://github.com/GoogleContainerTools/kaniko/pull/619)
* Environment variables should be replaced in URLs in ADD commands. [#580](https://github.com/GoogleContainerTools/kaniko/pull/580)
* Update the cache warmer to also save manifests. [#576](https://github.com/GoogleContainerTools/kaniko/pull/576)
* Fix typo in error message [#569](https://github.com/GoogleContainerTools/kaniko/pull/569)

## New Features
* Add SkipVerify support to CheckPushPermissions. [#663](https://github.com/GoogleContainerTools/kaniko/pull/663)
* Creating github Build Context [#672](https://github.com/GoogleContainerTools/kaniko/pull/672)
* Add `--digest-file` flag to output built digest to file. [#655](https://github.com/GoogleContainerTools/kaniko/pull/655)
* README.md: update BuildKit/img comparison [#642](https://github.com/GoogleContainerTools/kaniko/pull/642)
* Add documentation for --verbosity flag [#634](https://github.com/GoogleContainerTools/kaniko/pull/634)
* Optimize file copying and stage saving between stages. [#605](https://github.com/GoogleContainerTools/kaniko/pull/605)
* Add an integration test for USER unpacking. [#600](https://github.com/GoogleContainerTools/kaniko/pull/600)
* Added missing documentation for --skip-tls-verify-pull arg [#593](https://github.com/GoogleContainerTools/kaniko/pull/593)
* README.md: update Buildah description [#586](https://github.com/GoogleContainerTools/kaniko/pull/586)
* Add missing tests for bucket util [#565](https://github.com/GoogleContainerTools/kaniko/pull/565)
* Look for manifests in the local cache next to the full images. [#570](https://github.com/GoogleContainerTools/kaniko/pull/570)
* Make the run_in_docker script support caching. [#564](https://github.com/GoogleContainerTools/kaniko/pull/564)
* Refactor snapshotting [#561](https://github.com/GoogleContainerTools/kaniko/pull/561)
* Stop storing a separate cache hash. [#560](https://github.com/GoogleContainerTools/kaniko/pull/560)
* Speed up workdir by always returning an empty filelist (rather than a… [#557](https://github.com/GoogleContainerTools/kaniko/pull/557)
* Refactor whitelist handling. [#559](https://github.com/GoogleContainerTools/kaniko/pull/559)
* Refactor the build loop to fetch stagebuilders earlier. [#558](https://github.com/GoogleContainerTools/kaniko/pull/558)

## Additional PRs
* Improve changelog dates [#657](https://github.com/GoogleContainerTools/kaniko/pull/657)
* Change verbose output from info to debug [#640](https://github.com/GoogleContainerTools/kaniko/pull/640)
* Check push permissions before building images [#622](https://github.com/GoogleContainerTools/kaniko/pull/622)
* Bump go-containerregistry to 8c1640add99804503b4126abc718931a4d93c31a [#609](https://github.com/GoogleContainerTools/kaniko/pull/609)
* Update go-containerregistry [#599](https://github.com/GoogleContainerTools/kaniko/pull/599)
* Log "Skipping paths under..." to debug [#571](https://github.com/GoogleContainerTools/kaniko/pull/571)

Huge thank you for this release towards our contributors:
- Achilleas Pipinellis
- Adrian Duong
- Akihiro Suda
- Andreas Bergmeier
- Andrew Rynhard
- Anthony Weston
- Anurag Goel
- Balint Pato
- Christie Wilson
- Daisuke Taniwaki
- Dan Cecile
- Dirk Gustke
- dlorenc
- Fredrik Lönnegren
- Gijs
- Jake Shadle
- James Rawlings
- Jason Hall
- Johan Hernandez
- Johannes 'fish' Ziemke
- Kartik Verma
- linuxshokunin
- MMeent
- Myers Carpenter
- Nándor István Krácser
- Nao YONASHIRO
- Priya Wadhwa
- Sharif Elgamal
- Shuhei Kitagawa
- Valentin Rothberg
- Vincent Demeester

# v0.9.0 Release - 2019-02-08

## Bug Fixes
* Bug fix with volumes declared in base images during multi-stage builds
* Bug fix during snapshotting multi-stage builds.
* Bug fix for caching with tar output.

# v0.8.0 Release - 1/29/2019
# v0.8.0 Release - 2019-01-29

## New Features
* Even faster snapshotting with godirwalk
@ -20,7 +163,7 @@
* Fix bug with USER command and unpacking base images.
* Added COPY --from=previous stage name/number validation

# v0.7.0 Release - 12/10/2018
# v0.7.0 Release - 2018-12-10

## New Features
* Add support for COPY --from an unrelated image

@ -34,7 +177,7 @@
* Fix bug with call loop
* Fix caching for multi-step builds

# v0.6.0 Release - 11/06/2018
# v0.6.0 Release - 2018-11-06

## New Features
* parse arg commands at the top of dockerfiles [#404](https://github.com/GoogleContainerTools/kaniko/pull/404)

@ -59,7 +202,7 @@
* fix releasing the cache warmer [#418](https://github.com/GoogleContainerTools/kaniko/pull/418)

# v0.5.0 Release - 10/16/2018
# v0.5.0 Release - 2018-10-16

## New Features
* Persistent volume caching for base images [#383](https://github.com/GoogleContainerTools/kaniko/pull/383)

@ -78,7 +221,7 @@
* Don't cut everything after an equals sign [#381](https://github.com/GoogleContainerTools/kaniko/pull/381)

# v0.4.0 Release - 10/01/2018
# v0.4.0 Release - 2018-10-01

## New Features
* Add a benchmark package to store and monitor timings. [#367](https://github.com/GoogleContainerTools/kaniko/pull/367)

@ -137,7 +280,7 @@
* Fix handling of the volume directive [#334](https://github.com/GoogleContainerTools/kaniko/pull/334)

# v0.3.0 Release - 7/31/2018
# v0.3.0 Release - 2018-07-31
New Features
* Local integration testing [#256](https://github.com/GoogleContainerTools/kaniko/pull/256)
* Add --target flag for multistage builds [#255](https://github.com/GoogleContainerTools/kaniko/pull/255)

@ -149,7 +292,7 @@ Bug Fixes
* Multi-stage errors when referencing earlier stages [#233](https://github.com/GoogleContainerTools/kaniko/issues/233)

# v0.2.0 Release - 7/09/2018
# v0.2.0 Release - 2018-07-09

New Features
* Support for adding different source contexts, including Amazon S3 [#195](https://github.com/GoogleContainerTools/kaniko/issues/195)

@ -158,7 +301,7 @@ New Features
* Update go-containerregistry so kaniko works better with Harbor and Gitlab [#227](https://github.com/GoogleContainerTools/kaniko/pull/227)
* Push image to multiple destinations [#184](https://github.com/GoogleContainerTools/kaniko/pull/184)

# v0.1.0 Release - 5/17/2018
# v0.1.0 Release - 2018-05-17

New Features
* The majority of Dockerfile commands are feature complete [#1](https://github.com/GoogleContainerTools/kaniko/issues/1)

@ -339,6 +339,21 @@
  pruneopts = "NUT"
  revision = "7567d47988d82a78b052c4b9ba1e61399e0e7897"

[[projects]]
  digest = "1:de4a74b504df31145ffa8ca0c4edbffa2f3eb7f466753962184611b618fa5981"
  name = "github.com/emirpasic/gods"
  packages = [
    "containers",
    "lists",
    "lists/arraylist",
    "trees",
    "trees/binaryheap",
    "utils",
  ]
  pruneopts = "NUT"
  revision = "f6c17b524822278a87e3b3bd809fec33b51f5b46"
  version = "v1.9.0"

[[projects]]
  digest = "1:1b91ae0dc69a41d4c2ed23ea5cffb721ea63f5037ca4b81e6d6771fbb8f45129"
  name = "github.com/fsnotify/fsnotify"
@ -430,26 +445,30 @@
  version = "v0.2.0"

[[projects]]
  digest = "1:f1b23f53418c1b035a5965ac2600a28b16c08643683d5213fb581ecf4e79a02a"
  digest = "1:5924704ec96f00247784c512cc57f45a595030376a7ff2ff993bf356793a2cb0"
  name = "github.com/google/go-containerregistry"
  packages = [
    "pkg/authn",
    "pkg/authn/k8schain",
    "pkg/internal/retry",
    "pkg/logs",
    "pkg/name",
    "pkg/v1",
    "pkg/v1/daemon",
    "pkg/v1/empty",
    "pkg/v1/layout",
    "pkg/v1/mutate",
    "pkg/v1/partial",
    "pkg/v1/random",
    "pkg/v1/remote",
    "pkg/v1/remote/transport",
    "pkg/v1/stream",
    "pkg/v1/tarball",
    "pkg/v1/types",
    "pkg/v1/v1util",
  ]
  pruneopts = "NUT"
  revision = "88d8d18eb1bde1fcef23c745205c738074290515"
  revision = "31e00cede111067bae48bfc2cbfc522b0b36207f"

[[projects]]
  digest = "1:f4f203acd8b11b8747bdcd91696a01dbc95ccb9e2ca2db6abf81c3a4f5e950ce"
@ -546,6 +565,14 @@
  revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
  version = "v1.0"

[[projects]]
  branch = "master"
  digest = "1:62fe3a7ea2050ecbd753a71889026f83d73329337ada66325cbafd5dea5f713d"
  name = "github.com/jbenet/go-context"
  packages = ["io"]
  pruneopts = "NUT"
  revision = "d14ea06fba99483203c19d92cfcd13ebe73135f4"

[[projects]]
  digest = "1:ac6d01547ec4f7f673311b4663909269bfb8249952de3279799289467837c3cc"
  name = "github.com/jmespath/go-jmespath"
@ -569,6 +596,14 @@
  revision = "cceff240ca8af695e41738831646717e80d2f846"
  version = "v1.7.7"

[[projects]]
  digest = "1:29e44e9481a689be0093a0033299b95741d394a97b28e0273c21afe697873a22"
  name = "github.com/kevinburke/ssh_config"
  packages = ["."]
  pruneopts = "NUT"
  revision = "81db2a75821ed34e682567d48be488a1c3121088"
  version = "0.5"

[[projects]]
  digest = "1:d0164259ed17929689df11205194d80288e8ae25351778f7a3421a24774c36f8"
  name = "github.com/mattn/go-shellwords"
@ -585,6 +620,22 @@
  revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
  version = "v1.0.1"

[[projects]]
  digest = "1:56eaee71300a91f7a2f096b5d1d1d5389ebe8e69c068ec7d84c20459f599ddde"
  name = "github.com/minio/HighwayHash"
  packages = ["."]
  pruneopts = "NUT"
  revision = "02ca4b43caa3297fbb615700d8800acc7933be98"
  version = "v1.0.0"

[[projects]]
  digest = "1:a4df73029d2c42fabcb6b41e327d2f87e685284ec03edf76921c267d9cfc9c23"
  name = "github.com/mitchellh/go-homedir"
  packages = ["."]
  pruneopts = "NUT"
  revision = "ae18d6b8b3205b561c79e8e5f69bff09736185f4"
  version = "v1.0.0"

[[projects]]
  branch = "master"
  digest = "1:e92b50943cf297f740ce62ee527a1044215689f0daea2b51bb9be4265e2b2b36"
@ -679,6 +730,22 @@
  revision = "1949ddbfd147afd4d964a9f00b24eb291e0e7c38"
  version = "v1.0.2"

[[projects]]
  branch = "master"
  digest = "1:15057fc7395024283a7d2639b8afc61c5b6df3fe260ce06ff5834c8464f16b5c"
  name = "github.com/otiai10/copy"
  packages = ["."]
  pruneopts = "NUT"
  revision = "7e9a647135a142c2669943d4a4d29be015ce9392"

[[projects]]
  digest = "1:cf254277d898b713195cc6b4a3fac8bf738b9f1121625df27843b52b267eec6c"
  name = "github.com/pelletier/go-buffruneio"
  packages = ["."]
  pruneopts = "NUT"
  revision = "c37440a7cf42ac63b919c752ca73a85067e05992"
  version = "v0.2.0"

[[projects]]
  branch = "master"
  digest = "1:3bf17a6e6eaa6ad24152148a631d18662f7212e21637c2699bff3369b7f00fa2"
@ -746,6 +813,14 @@
  pruneopts = "NUT"
  revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92"

[[projects]]
  digest = "1:d917313f309bda80d27274d53985bc65651f81a5b66b820749ac7f8ef061fd04"
  name = "github.com/sergi/go-diff"
  packages = ["diffmatchpatch"]
  pruneopts = "NUT"
  revision = "1744e2970ca51c86172c8190fadad617561ed6e7"
  version = "v1.0.0"

[[projects]]
  digest = "1:b2339e83ce9b5c4f79405f949429a7f68a9a904fed903c672aac1e7ceb7f5f02"
  name = "github.com/sirupsen/logrus"
@ -781,6 +856,19 @@
  revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
  version = "v1.0.1"

[[projects]]
  digest = "1:ccca1dcd18bc54e23b517a3c5babeff2e3924a7d8fc1932162225876cfe4bfb0"
  name = "github.com/src-d/gcfg"
  packages = [
    ".",
    "scanner",
    "token",
    "types",
  ]
  pruneopts = "NUT"
  revision = "f187355171c936ac84a82793659ebb4936bc1c23"
  version = "v1.3.0"

[[projects]]
  branch = "master"
  digest = "1:e865a1cd94806d1bb0eaa0bffba2ccb5e25ac42e1f55328c83d5e3399c9961a4"
@ -809,6 +897,14 @@
  revision = "38ec4ddb06dedbea0a895c4848b248eb38af221b"
  version = "v0.10.2"

[[projects]]
  digest = "1:3148cb3478c26a92b4c1a18abb9428234b281e278af6267840721a24b6cbc6a3"
  name = "github.com/xanzy/ssh-agent"
  packages = ["."]
  pruneopts = "NUT"
  revision = "640f0ab560aeb89d523bb6ac322b1244d5c3796c"
  version = "v0.2.0"

[[projects]]
  digest = "1:51db8b83c5320ce0522433046efec5fda0d34a2308ff969627e1ca206de77f35"
  name = "go.opencensus.io"
@ -833,11 +929,27 @@

[[projects]]
  branch = "master"
  digest = "1:670ac353e434afad9b6d96ba9735f243c5808c58ed67fdce9768e160d84389ba"
  digest = "1:32c6e2719119a24b32fd3f4e5680e68f6b29752616441f526d055987c2df9364"
  name = "golang.org/x/crypto"
  packages = [
    "cast5",
    "curve25519",
    "ed25519",
    "ed25519/internal/edwards25519",
    "internal/chacha20",
    "internal/subtle",
    "openpgp",
    "openpgp/armor",
    "openpgp/elgamal",
    "openpgp/errors",
    "openpgp/packet",
    "openpgp/s2k",
    "pkcs12",
    "pkcs12/internal/rc2",
    "poly1305",
    "ssh",
    "ssh/agent",
    "ssh/knownhosts",
    "ssh/terminal",
  ]
  pruneopts = "NUT"
@ -890,9 +1002,10 @@

[[projects]]
  branch = "master"
  digest = "1:eeb413d109f4b2813de0b5b23645d7a503db926cae8f10dfdcf248d15499314f"
  digest = "1:2d5f7cd5c2bc42a4d5b18f711d482f14689a30212bbe0e398e151b3e2147cb86"
  name = "golang.org/x/sys"
  packages = [
    "cpu",
    "unix",
    "windows",
    "windows/registry",
@ -1022,6 +1135,77 @@
  revision = "d2d2541c53f18d2a059457998ce2876cc8e67cbf"
  version = "v0.9.1"

[[projects]]
  digest = "1:47a697b155f5214ff14e68e39ce9c2e8d93e1fb035ae5ba7e247d044e0ce64e3"
  name = "gopkg.in/src-d/go-billy.v4"
  packages = [
    ".",
    "helper/chroot",
    "helper/polyfill",
    "osfs",
    "util",
  ]
  pruneopts = "NUT"
  revision = "83cf655d40b15b427014d7875d10850f96edba14"
  version = "v4.2.0"

[[projects]]
  digest = "1:d1b3d8ed1789f92b3d8192f8822cf422f803cd690139a8537a61313145fcae44"
  name = "gopkg.in/src-d/go-git.v4"
  packages = [
    ".",
    "config",
    "internal/revision",
    "plumbing",
    "plumbing/cache",
    "plumbing/filemode",
    "plumbing/format/config",
    "plumbing/format/diff",
    "plumbing/format/gitignore",
    "plumbing/format/idxfile",
    "plumbing/format/index",
    "plumbing/format/objfile",
    "plumbing/format/packfile",
    "plumbing/format/pktline",
    "plumbing/object",
    "plumbing/protocol/packp",
    "plumbing/protocol/packp/capability",
    "plumbing/protocol/packp/sideband",
    "plumbing/revlist",
    "plumbing/storer",
    "plumbing/transport",
    "plumbing/transport/client",
    "plumbing/transport/file",
    "plumbing/transport/git",
    "plumbing/transport/http",
    "plumbing/transport/internal/common",
    "plumbing/transport/server",
    "plumbing/transport/ssh",
    "storage",
    "storage/filesystem",
    "storage/filesystem/dotgit",
    "storage/memory",
    "utils/binary",
    "utils/diff",
    "utils/ioutil",
    "utils/merkletrie",
    "utils/merkletrie/filesystem",
    "utils/merkletrie/index",
    "utils/merkletrie/internal/frame",
    "utils/merkletrie/noder",
  ]
  pruneopts = "NUT"
  revision = "7b6c1266556f59ac436fada3fa6106d4a84f9b56"
  version = "v4.6.0"

[[projects]]
  digest = "1:b233ad4ec87ac916e7bf5e678e98a2cb9e8b52f6de6ad3e11834fc7a71b8e3bf"
  name = "gopkg.in/warnings.v0"
  packages = ["."]
  pruneopts = "NUT"
  revision = "ec4a0fea49c7b46c2aeb0b51aac55779c607e52b"
  version = "v0.1.2"

[[projects]]
  digest = "1:7c95b35057a0ff2e19f707173cc1a947fa43a6eb5c4d300d196ece0334046082"
  name = "gopkg.in/yaml.v2"
@ -1113,7 +1297,7 @@
  version = "kubernetes-1.11.0"

[[projects]]
  digest = "1:b960fc62d636ccdc3265dd1e190b7f5e7bf5f8d29bf4f02af7f1352768c58f3f"
  digest = "1:2f523dd16b56091fab1f329f772c3540742920e270bf0f9b8451106b7f005a66"
  name = "k8s.io/client-go"
  packages = [
    "discovery",
@ -1165,8 +1349,8 @@
    "util/integer",
  ]
  pruneopts = "NUT"
  revision = "7d04d0e2a0a1a4d4a1cd6baa432a2301492e4e65"
  version = "kubernetes-1.11.0"
  revision = "2cefa64ff137e128daeddbd1775cd775708a05bf"
  version = "kubernetes-1.11.3"

[[projects]]
  digest = "1:e345c95cf277bb7f650306556904df69e0904395c56959a56002d0140747eda0"
@ -1205,6 +1389,7 @@
    "github.com/google/go-containerregistry/pkg/v1",
    "github.com/google/go-containerregistry/pkg/v1/daemon",
    "github.com/google/go-containerregistry/pkg/v1/empty",
    "github.com/google/go-containerregistry/pkg/v1/layout",
    "github.com/google/go-containerregistry/pkg/v1/mutate",
    "github.com/google/go-containerregistry/pkg/v1/partial",
    "github.com/google/go-containerregistry/pkg/v1/random",
@ -1212,9 +1397,11 @@
    "github.com/google/go-containerregistry/pkg/v1/tarball",
    "github.com/google/go-github/github",
    "github.com/karrick/godirwalk",
    "github.com/minio/HighwayHash",
    "github.com/moby/buildkit/frontend/dockerfile/instructions",
    "github.com/moby/buildkit/frontend/dockerfile/parser",
    "github.com/moby/buildkit/frontend/dockerfile/shell",
    "github.com/otiai10/copy",
    "github.com/pkg/errors",
    "github.com/sirupsen/logrus",
    "github.com/spf13/afero",
@ -1223,6 +1410,8 @@
    "golang.org/x/net/context",
    "golang.org/x/oauth2",
    "golang.org/x/sync/errgroup",
    "gopkg.in/src-d/go-git.v4",
    "gopkg.in/src-d/go-git.v4/plumbing",
    "k8s.io/client-go/discovery",
  ]
  solver-name = "gps-cdcl"

12 Gopkg.toml
@ -33,12 +33,20 @@ required = [
[[constraint]]
  name = "k8s.io/client-go"
  version = "kubernetes-1.11.0"
  version = "kubernetes-1.11.3"

[[constraint]]
  name = "github.com/google/go-containerregistry"
  revision = "88d8d18eb1bde1fcef23c745205c738074290515"
  revision = "31e00cede111067bae48bfc2cbfc522b0b36207f"

[[override]]
  name = "k8s.io/apimachinery"
  version = "kubernetes-1.11.0"

[[constraint]]
  name = "gopkg.in/src-d/go-git.v4"
  version = "4.6.0"

[[constraint]]
  name = "github.com/minio/HighwayHash"
  version = "1.0.0"

3 Makefile
@ -14,12 +14,13 @@
# Bump these on release
VERSION_MAJOR ?= 0
VERSION_MINOR ?= 9
VERSION_MINOR ?= 12
VERSION_BUILD ?= 0

VERSION ?= v$(VERSION_MAJOR).$(VERSION_MINOR).$(VERSION_BUILD)
VERSION_PACKAGE = $(REPOPATH)/pkg/version

SHELL := /bin/bash
GOOS ?= $(shell go env GOOS)
GOARCH = amd64
ORG := github.com/GoogleContainerTools

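The version stitching in the Makefile hunk above can be sketched in plain shell (a sketch of the same computation, not part of the Makefile itself):

```shell
# Mirrors the Makefile's VERSION computation: v<major>.<minor>.<build>.
VERSION_MAJOR=0
VERSION_MINOR=12
VERSION_BUILD=0
VERSION="v${VERSION_MAJOR}.${VERSION_MINOR}.${VERSION_BUILD}"
echo "$VERSION"   # prints v0.12.0
```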
86 README.md
@ -21,6 +21,7 @@ _If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPME
- [How does kaniko work?](#how-does-kaniko-work)
- [Known Issues](#known-issues)
- [Demo](#demo)
- [Tutorial](#tutorial)
- [Using kaniko](#using-kaniko)
- [kaniko Build Contexts](#kaniko-build-contexts)
- [Running kaniko](#running-kaniko)
@ -40,9 +41,11 @@ _If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPME
- [--cache-dir](#--cache-dir)
- [--cache-repo](#--cache-repo)
- [--cleanup](#--cleanup)
- [--digest-file](#--digest-file)
- [--insecure](#--insecure)
- [--insecure-pull](#--insecure-pull)
- [--no-push](#--no-push)
- [--oci-layout-path](#--oci-layout-path)
- [--reproducible](#--reproducible)
- [--single-snapshot](#--single-snapshot)
- [--snapshotMode](#--snapshotmode)
@ -50,6 +53,7 @@ _If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPME
- [--skip-tls-verify-pull](#--skip-tls-verify-pull)
- [--target](#--target)
- [--tarPath](#--tarpath)
- [--verbosity](#--verbosity)
- [Debug Image](#debug-image)
- [Security](#security)
- [Comparison with Other Tools](#comparison-with-other-tools)
@ -74,6 +78,10 @@ kaniko does not support building Windows containers.

![Demo](/docs/demo.gif)

## Tutorial

For a detailed example of kaniko with local storage, please refer to the [getting started tutorial](./docs/tutorial.md).

## Using kaniko

To use kaniko to build and push an image for you, you will need:
@ -91,6 +99,7 @@ Right now, kaniko supports these storage solutions:
- GCS Bucket
- S3 Bucket
- Local Directory
- Git Repository

_Note: the local directory option refers to a directory within the kaniko container.
If you wish to use this option, you will need to mount your build context into the container as a directory._
@ -112,15 +121,19 @@ gsutil cp context.tar.gz gs://<bucket name>
When running kaniko, use the `--context` flag with the appropriate prefix to specify the location of your build context:

| Source | Prefix |
|---------|---------|
| Local Directory | dir://[path to a directory in the kaniko container] |
| GCS Bucket | gs://[bucket name]/[path to .tar.gz] |
| S3 Bucket | s3://[bucket name]/[path to .tar.gz] |

| Source | Prefix | Example |
|---------|---------|---------|
| Local Directory | dir://[path to a directory in the kaniko container] | `dir:///workspace` |
| GCS Bucket | gs://[bucket name]/[path to .tar.gz] | `gs://kaniko-bucket/path/to/context.tar.gz` |
| S3 Bucket | s3://[bucket name]/[path to .tar.gz] | `s3://kaniko-bucket/path/to/context.tar.gz` |
| Git Repository | git://[repository url][#reference] | `git://github.com/acme/myproject.git#refs/heads/mybranch` |

If you don't specify a prefix, kaniko will assume a local directory.
For example, to use a GCS bucket called `kaniko-bucket`, you would pass in `--context=gs://kaniko-bucket/path/to/context.tar.gz`.
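As a concrete sketch (the project and image names below are placeholders), a build against a GCS-hosted context might look like:

```shell
# Hypothetical invocation: run the kaniko executor image against a GCS build
# context; gcloud credentials are mounted so kaniko can read the bucket.
docker run \
  -v "$HOME/.config/gcloud:/root/.config/gcloud" \
  gcr.io/kaniko-project/executor:latest \
  --context=gs://kaniko-bucket/path/to/context.tar.gz \
  --destination=gcr.io/my-project/my-image:latest
```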

### Using Private Git Repository
You can use a `Personal Access Token` for build contexts from private repositories on [GitHub](https://blog.github.com/2012-09-21-easier-builds-and-deployments-using-git-over-https-and-oauth/).
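One way to pass such a token (a hypothetical sketch: the repository, image name, and token are placeholders, and the linked GitHub post describes the underlying git-over-HTTPS mechanism) is to embed it in the context URL:

```shell
# Hypothetical: embed a GitHub personal access token in the git build-context
# URL. Treat the token as a secret; anyone who can inspect the container's
# arguments can read it.
GITHUB_TOKEN=<your-personal-access-token>
docker run gcr.io/kaniko-project/executor:latest \
  --context=git://${GITHUB_TOKEN}@github.com/acme/myproject.git#refs/heads/mybranch \
  --destination=gcr.io/my-project/my-image:latest
```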

### Running kaniko

There are several different ways to deploy and run kaniko:

@ -355,14 +368,39 @@ If `--destination=gcr.io/kaniko-project/test`, then cached layers will be stored
_This flag must be used in conjunction with the `--cache=true` flag._

#### --digest-file

Set this flag to specify a file in the container. This file will
receive the digest of a built image. This can be used to
automatically track the exact image built by Kaniko.

For example, setting the flag to `--digest-file=/dev/termination-log`
will write the digest to that file, which is picked up by
Kubernetes automatically as the `{{.state.terminated.message}}`
of the container.
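For instance (context path and image name are placeholders), the digest can be captured alongside a normal build:

```shell
# Hypothetical invocation: push the image and also write its digest to a file.
# In a Kubernetes pod, pointing the flag at /dev/termination-log surfaces the
# digest as the container's termination message.
docker run \
  -v /path/to/context:/workspace \
  gcr.io/kaniko-project/executor:latest \
  --context=dir:///workspace \
  --destination=gcr.io/my-project/my-image:latest \
  --digest-file=/dev/termination-log
```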

#### --oci-layout-path

Set this flag to specify a directory in the container where the OCI image
layout of a built image will be placed. This can be used to automatically
track the exact image built by Kaniko.

For example, to surface the image digest built in a
[Tekton task](https://github.com/tektoncd/pipeline/blob/v0.6.0/docs/resources.md#surfacing-the-image-digest-built-in-a-task),
this flag should be set to match the image resource `outputImageDir`.

_Note: Depending on the built image, the media type of the image manifest might be either
`application/vnd.oci.image.manifest.v1+json` or `application/vnd.docker.distribution.manifest.v2+json``._
|
||||
|
||||
#### --insecure-registry
|
||||
|
||||
Set this flag to use plain HTTP requests when accessing a registry. It is supposed to be useed for testing purposes only and should not be used in production!
|
||||
Set this flag to use plain HTTP requests when accessing a registry. It is supposed to be used for testing purposes only and should not be used in production!
|
||||
You can set it multiple times for multiple registries.
|
||||
|
||||
#### --skip-tls-verify-registry
|
||||
|
||||
Set this flag to skip TLS cerificate validation when accessing a registry. It is supposed to be useed for testing purposes only and should not be used in production!
|
||||
Set this flag to skip TLS cerificate validation when accessing a registry. It is supposed to be used for testing purposes only and should not be used in production!
|
||||
You can set it multiple times for multiple registries.
|
||||
|
||||
#### --cleanup
|
||||
|
|
@ -391,7 +429,11 @@ This flag takes a single snapshot of the filesystem at the end of the build, so

#### --skip-tls-verify

Set this flag to skip TLS certificate validation when pushing to a registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --skip-tls-verify-pull

Set this flag to skip TLS certificate validation when pulling from a registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --snapshotMode
@ -407,6 +449,10 @@ Set this flag to indicate which build stage is the target build stage.

Set this flag as `--tarPath=<path>` to save the image as a tarball at path instead of pushing the image.

#### --verbosity

Set this flag as `--verbosity=<panic|fatal|error|warn|info|debug>` to set the logging level. Defaults to `info`.

### Debug Image

The kaniko executor image is based on scratch and doesn't contain a shell.
@ -441,6 +487,7 @@ You may be able to achieve the same default seccomp profile that Docker uses in

Similar tools include:

- [BuildKit](https://github.com/moby/buildkit)
- [img](https://github.com/genuinetools/img)
- [orca-build](https://github.com/cyphar/orca-build)
- [umoci](https://github.com/openSUSE/umoci)
@ -450,10 +497,10 @@ Similar tools include:

All of these tools build container images with different approaches.

BuildKit (and `img`) can perform as a non root user from within a container, but requires
seccomp and AppArmor to be disabled to create nested containers. `kaniko`
does not actually create nested containers, so it does not require seccomp and AppArmor
to be disabled.

`orca-build` depends on `runc` to build images from Dockerfiles, which can not
run inside a container (for similar reasons to `img` above). `kaniko` doesn't
@ -467,12 +514,15 @@ filesystem is sufficiently complicated). However it has no `Dockerfile`-like
build tooling (it's a slightly lower-level tool that can be used to build such
builders -- such as `orca-build`).

`Buildah` specializes in building OCI images. Buildah's commands replicate all
of the commands that are found in a Dockerfile. This allows building images
with and without Dockerfiles while not requiring any root privileges.
Buildah's ultimate goal is to provide a lower-level coreutils interface to
build images. The flexibility of building images without Dockerfiles allows
for the integration of other scripting languages into the build process.
Buildah follows a simple fork-exec model and does not run as a daemon
but it is based on a comprehensive API in golang, which can be vendored
into other tools.

`FTL` and `Bazel` aim to achieve the fastest possible creation of Docker images
for a subset of images. These can be thought of as a special-case "fast path"
@ -24,12 +24,11 @@ import (
    "strings"
    "time"

    "github.com/GoogleContainerTools/kaniko/pkg/buildcontext"
    "github.com/GoogleContainerTools/kaniko/pkg/config"
    "github.com/GoogleContainerTools/kaniko/pkg/constants"
    "github.com/GoogleContainerTools/kaniko/pkg/executor"
    "github.com/GoogleContainerTools/kaniko/pkg/timing"
    "github.com/GoogleContainerTools/kaniko/pkg/util"
    "github.com/genuinetools/amicontained/container"
    "github.com/pkg/errors"
@ -79,6 +78,12 @@ var RootCmd = &cobra.Command{
        }
        logrus.Warn("kaniko is being run outside of a container. This can have dangerous effects on your system")
    }
    if err := executor.CheckPushPermissions(opts); err != nil {
        exit(errors.Wrap(err, "error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again"))
    }
    if err := resolveRelativePaths(); err != nil {
        exit(errors.Wrap(err, "error resolving relative paths to absolute paths"))
    }
    if err := os.Chdir("/"); err != nil {
        exit(errors.Wrap(err, "error changing to root dir"))
    }
@ -126,6 +131,8 @@ func addKanikoOptionsFlags(cmd *cobra.Command) {
    RootCmd.PersistentFlags().BoolVarP(&opts.NoPush, "no-push", "", false, "Do not push the image to the registry")
    RootCmd.PersistentFlags().StringVarP(&opts.CacheRepo, "cache-repo", "", "", "Specify a repository to use as a cache, otherwise one will be inferred from the destination provided")
    RootCmd.PersistentFlags().StringVarP(&opts.CacheDir, "cache-dir", "", "/cache", "Specify a local directory to use as a cache.")
    RootCmd.PersistentFlags().StringVarP(&opts.DigestFile, "digest-file", "", "", "Specify a file to save the digest of the built image to.")
    RootCmd.PersistentFlags().StringVarP(&opts.OCILayoutPath, "oci-layout-path", "", "", "Path to save the OCI image layout of the built image.")
    RootCmd.PersistentFlags().BoolVarP(&opts.Cache, "cache", "", false, "Use cache when building image")
    RootCmd.PersistentFlags().BoolVarP(&opts.Cleanup, "cleanup", "", false, "Clean the filesystem at the end")
    RootCmd.PersistentFlags().DurationVarP(&opts.CacheTTL, "cache-ttl", "", time.Hour*336, "Cache timeout in hours. Defaults to two weeks.")
@ -205,12 +212,12 @@ func resolveSourceContext() error {
    }
    if opts.Bucket != "" {
        if !strings.Contains(opts.Bucket, "://") {
            // if no prefix use Google Cloud Storage as default for backwards compatibility
            opts.SrcContext = constants.GCSBuildContextPrefix + opts.Bucket
        } else {
            opts.SrcContext = opts.Bucket
        }
    }
    contextExecutor, err := buildcontext.GetBuildContext(opts.SrcContext)
    if err != nil {
        return err
@ -224,6 +231,37 @@ func resolveSourceContext() error {
    return nil
}

func resolveRelativePaths() error {
    optsPaths := []*string{
        &opts.DockerfilePath,
        &opts.SrcContext,
        &opts.CacheDir,
        &opts.TarPath,
        &opts.DigestFile,
    }

    for _, p := range optsPaths {
        // Skip empty path
        if *p == "" {
            continue
        }
        // Skip path that is already absolute
        if filepath.IsAbs(*p) {
            logrus.Debugf("Path %s is absolute, skipping", *p)
            continue
        }

        // Resolve relative path to absolute path
        var err error
        relp := *p // save original relative path
        if *p, err = filepath.Abs(*p); err != nil {
            return errors.Wrapf(err, "Couldn't resolve relative path %s to an absolute path", *p)
        }
        logrus.Debugf("Resolved relative path %s to %s", relp, *p)
    }
    return nil
}

func exit(err error) {
    fmt.Println(err)
    os.Exit(1)
@ -19,6 +19,7 @@ package cmd
import (
    "fmt"
    "os"
    "time"

    "github.com/GoogleContainerTools/kaniko/pkg/cache"
    "github.com/GoogleContainerTools/kaniko/pkg/config"
@ -51,6 +52,12 @@ var RootCmd = &cobra.Command{
        return nil
    },
    Run: func(cmd *cobra.Command, args []string) {
        if _, err := os.Stat(opts.CacheDir); os.IsNotExist(err) {
            err = os.MkdirAll(opts.CacheDir, 0755)
            if err != nil {
                exit(errors.Wrap(err, "Failed to create cache directory"))
            }
        }
        if err := cache.WarmCache(opts); err != nil {
            exit(errors.Wrap(err, "Failed warming cache"))
        }
@ -61,6 +68,8 @@ var RootCmd = &cobra.Command{
func addKanikoOptionsFlags(cmd *cobra.Command) {
    RootCmd.PersistentFlags().VarP(&opts.Images, "image", "i", "Image to cache. Set it repeatedly for multiple images.")
    RootCmd.PersistentFlags().StringVarP(&opts.CacheDir, "cache-dir", "c", "/cache", "Directory of the cache.")
    RootCmd.PersistentFlags().BoolVarP(&opts.Force, "force", "f", false, "Force cache overwriting.")
    RootCmd.PersistentFlags().DurationVarP(&opts.CacheTTL, "cache-ttl", "", time.Hour*336, "Cache timeout in hours. Defaults to two weeks.")
}

// addHiddenFlags marks certain flags as hidden from the executor help text
@ -14,7 +14,7 @@

# Builds the static Go image to execute in a Kubernetes job

FROM golang:1.12
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
# Get GCR credential helper
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.5.0/docker-credential-gcr_linux_amd64-1.5.0.tar.gz /usr/local/bin/
@ -15,7 +15,7 @@
# Builds the static Go image to execute in a Kubernetes job

# Stage 0: Build the executor binary and get credential helpers
FROM golang:1.12
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
# Get GCR credential helper
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.5.0/docker-credential-gcr_linux_amd64-1.5.0.tar.gz /usr/local/bin/
@ -25,7 +25,7 @@ RUN docker-credential-gcr configure-docker
RUN go get -u github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login
RUN make -C /go/src/github.com/awslabs/amazon-ecr-credential-helper linux-amd64
COPY . .
RUN make && make out/warmer

# Stage 1: Get the busybox shell
FROM gcr.io/cloud-builders/bazel:latest
@ -35,7 +35,7 @@ RUN bazel build //experimental/busybox:busybox_tar
RUN tar -C /distroless/bazel-genfiles/experimental/busybox/ -xf /distroless/bazel-genfiles/experimental/busybox/busybox.tar

FROM scratch
COPY --from=0 /go/src/github.com/GoogleContainerTools/kaniko/out/* /kaniko/
COPY --from=0 /usr/local/bin/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=0 /go/src/github.com/awslabs/amazon-ecr-credential-helper/bin/linux-amd64/docker-credential-ecr-login /kaniko/docker-credential-ecr-login
COPY --from=1 /distroless/bazel-genfiles/experimental/busybox/busybox/ /busybox/
@ -14,7 +14,7 @@

# Builds the static Go image to execute in a Kubernetes job

FROM golang:1.12
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
COPY . .
RUN make out/warmer
@ -0,0 +1,125 @@
# Getting Started Tutorial

This tutorial is for beginners who want to start using kaniko and aims to establish a quick start test case.

## Table of Contents

1. [Prerequisites](#Prerequisites)
2. [Prepare config files for kaniko](#Prepare-config-files-for-kaniko)
3. [Prepare the local mounted directory](#Prepare-the-local-mounted-directory)
4. [Create a Secret that holds your authorization token](#Create-a-Secret-that-holds-your-authorization-token)
5. [Create resources in kubernetes](#Create-resources-in-kubernetes)
6. [Pull the image and test](#Pull-the-image-and-test)

## Prerequisites

- A Kubernetes cluster. You can use [Minikube](https://kubernetes.io/docs/setup/minikube/) to deploy kubernetes locally, or use a managed kubernetes service from a cloud provider such as [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/).
- A [dockerhub](https://hub.docker.com/) account to push the built image to a public repository.

## Prepare config files for kaniko

Prepare several config files to create resources in kubernetes, which are:

- [pod.yaml](../examples/pod.yaml) is for starting a kaniko container to build the example image.
- [volume.yaml](../examples/volume.yaml) is for creating a persistent volume used as the kaniko build context.
- [volume-claim.yaml](../examples/volume-claim.yaml) is for creating a persistent volume claim which will be mounted in the kaniko container.

## Prepare the local mounted directory

SSH into the cluster, and create a local directory which will be mounted in the kaniko container as the build context. Create a simple dockerfile there.

> Note: To ssh into the cluster, if you use minikube, you can use the `minikube ssh` command. If you use a cloud service, please refer to the official docs, such as [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/ssh#code-try-0).

```shell
$ mkdir kaniko && cd kaniko
$ echo 'FROM ubuntu' >> dockerfile
$ echo 'ENTRYPOINT ["/bin/bash", "-c", "echo hello"]' >> dockerfile
$ cat dockerfile
FROM ubuntu
ENTRYPOINT ["/bin/bash", "-c", "echo hello"]
$ pwd
/home/<user-name>/kaniko # copy this path in volume.yaml file
```

> Note: The `hostPath` in volume.yaml needs to be replaced with the local directory you created.

## Create a Secret that holds your authorization token

A Kubernetes cluster uses a Secret of docker-registry type to authenticate with a docker registry to push an image.

Create this Secret, naming it regcred:

```shell
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
```

- `<your-registry-server>` is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub)
- `<your-name>` is your Docker username.
- `<your-pword>` is your Docker password.
- `<your-email>` is your Docker email.

This secret will be used in the pod.yaml config.

## Create resources in kubernetes

```shell
# create persistent volume
$ kubectl create -f volume.yaml
persistentvolume/dockerfile created

# create persistent volume claim
$ kubectl create -f volume-claim.yaml
persistentvolumeclaim/dockerfile-claim created

# check whether the volume mounted correctly
$ kubectl get pv dockerfile
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS    REASON   AGE
dockerfile   10Gi       RWO            Retain           Bound    default/dockerfile-claim   local-storage            1m

# create pod
$ kubectl create -f pod.yaml
pod/kaniko created
$ kubectl get pods
NAME     READY   STATUS              RESTARTS   AGE
kaniko   0/1     ContainerCreating   0          7s

# check whether the build is complete and show the build logs
$ kubectl get pods
NAME     READY   STATUS      RESTARTS   AGE
kaniko   0/1     Completed   0          34s
$ kubectl logs kaniko
INFO[0000] Resolved base name ubuntu to ubuntu
INFO[0000] Resolved base name ubuntu to ubuntu
INFO[0000] Downloading base image ubuntu
INFO[0000] Error while retrieving image from cache: getting file info: stat /cache/sha256:1bbdea4846231d91cce6c7ff3907d26fca444fd6b7e3c282b90c7fe4251f9f86: no such file or directory
INFO[0000] Downloading base image ubuntu
INFO[0001] Built cross stage deps: map[]
INFO[0001] Downloading base image ubuntu
INFO[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:1bbdea4846231d91cce6c7ff3907d26fca444fd6b7e3c282b90c7fe4251f9f86: no such file or directory
INFO[0001] Downloading base image ubuntu
INFO[0001] Skipping unpacking as no commands require it.
INFO[0001] Taking snapshot of full filesystem...
INFO[0001] ENTRYPOINT ["/bin/bash", "-c", "echo hello"]
```

> Note: The `destination` in pod.yaml needs to be replaced with your own.
## Pull the image and test

If everything worked as expected, kaniko built the image and pushed it to dockerhub successfully. Pull the image locally and run it to test:

```shell
$ sudo docker run -it <user-name>/<repo-name>
Unable to find image 'debuggy/helloworld:latest' locally
latest: Pulling from debuggy/helloworld
5667fdb72017: Pull complete
d83811f270d5: Pull complete
ee671aafb583: Pull complete
7fc152dfb3a6: Pull complete
Digest: sha256:2707d17754ea99ce0cf15d84a7282ae746a44ff90928c2064755ee3b35c1057b
Status: Downloaded newer image for debuggy/helloworld:latest
hello
```

Congratulations! You have gone through the hello world successfully; please refer to the project for more details.
@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=/workspace/dockerfile",
           "--context=dir://workspace",
           "--destination=<user-name>/<repo>"] # replace with your dockerhub account
    volumeMounts:
      - name: kaniko-secret
        mountPath: /root
      - name: dockerfile-storage
        mountPath: /workspace
  restartPolicy: Never
  volumes:
    - name: kaniko-secret
      secret:
        secretName: regcred
        items:
          - key: .dockerconfigjson
            path: .docker/config.json
    - name: dockerfile-storage
      persistentVolumeClaim:
        claimName: dockerfile-claim
@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dockerfile-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: local-storage
@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dockerfile
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: <local-directory> # replace with local directory, such as "/home/<user-name>/kaniko"
@ -0,0 +1,7 @@
# This is not included in integration tests because docker build does not exploit Dockerfile.dockerignore
# See https://github.com/moby/moby/issues/12886#issuecomment-523706042 for more details
# This dockerfile makes sure Dockerfile.dockerignore is working
# If so, then ignore_relative/foo should copy to /foo
# If not, then this image won't build because it will attempt to copy three files to /foo, which is a file, not a directory
FROM scratch
COPY ignore_relative/* /foo
@ -0,0 +1,3 @@
# A .dockerignore file to make sure dockerignore support works
ignore_relative/**
!ignore_relative/foo
@ -0,0 +1,2 @@
FROM scratch
COPY LICENSE ./LICENSE
@ -24,4 +24,11 @@ COPY $file /arg

# Finally, test adding a remote URL, concurrently with a normal file
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.4.3/docker-credential-gcr_linux_386-1.4.3.tar.gz context/foo /test/all/
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.4.3-static/docker-credential-gcr_linux_amd64-1.4.3.tar.gz /destination

# Test environment replacement in the URL
ENV VERSION=v1.4.3
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/${VERSION}-static/docker-credential-gcr_linux_amd64-1.4.3.tar.gz /destination

# Test full url replacement
ENV URL=https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.4.3/docker-credential-gcr_linux_386-1.4.3.tar.gz
ADD $URL /otherdestination
@ -0,0 +1,12 @@
ARG FILE_NAME=myFile

FROM busybox:latest AS builder
ARG FILE_NAME

RUN echo $FILE_NAME && touch /$FILE_NAME.txt && stat /$FILE_NAME.txt;

FROM busybox:latest
ARG FILE_NAME

RUN echo $FILE_NAME && touch /$FILE_NAME.txt && stat /$FILE_NAME.txt;
COPY --from=builder /$FILE_NAME.txt /
@ -16,15 +16,7 @@
# If the image is built twice, /date should be the same in both images
# if the cache is implemented correctly

FROM gcr.io/google-appengine/debian9@sha256:1d6a9a6d106bd795098f60f4abb7083626354fa6735e81743c7f8cfca11259f0
RUN date > /date
COPY context/foo /foo
RUN echo hey
@ -0,0 +1,8 @@
# Test to make sure the cache works with special file permissions properly.
# If the image is built twice, directory foo should have the sticky bit,
# and file bar should have the setuid and setgid bits.

FROM busybox

RUN mkdir foo && chmod +t foo
RUN touch bar && chmod u+s,g+s bar
@ -0,0 +1,8 @@
FROM busybox

RUN adduser --disabled-password --gecos "" --uid 1000 user
RUN mkdir -p /home/user/foo
RUN chown -R user /home/user
RUN chmod 700 /home/user/foo
ADD https://raw.githubusercontent.com/GoogleContainerTools/kaniko/master/README.md /home/user/foo/README.md
RUN chown -R user /home/user
@ -0,0 +1,17 @@
# Copyright 2018 Google, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM gcr.io/google-appengine/debian9@sha256:1d6a9a6d106bd795098f60f4abb7083626354fa6735e81743c7f8cfca11259f0
USER testuser:testgroup
@ -134,6 +134,7 @@ func NewDockerFileBuilder(dockerfiles []string) *DockerFileBuilder {
    d.TestCacheDockerfiles = map[string]struct{}{
        "Dockerfile_test_cache":         {},
        "Dockerfile_test_cache_install": {},
        "Dockerfile_test_cache_perm":    {},
    }
    return &d
}
@ -177,7 +178,6 @@ func (d *DockerFileBuilder) BuildImage(imageRepo, gcsBucket, dockerfilesPath, do
        if d == dockerfile {
            contextFlag = "-b"
            contextPath = gcsBucket
            break
        }
    }
@ -286,3 +286,33 @@ func (d *DockerFileBuilder) buildCachedImages(imageRepo, cacheRepo, dockerfilesP
    }
    return nil
}

// buildRelativePathsImage builds the images for testing passing relative paths to Kaniko
func (d *DockerFileBuilder) buildRelativePathsImage(imageRepo, dockerfile string) error {
    _, ex, _, _ := runtime.Caller(0)
    cwd := filepath.Dir(ex)

    buildContextPath := "./relative-subdirectory"
    kanikoImage := GetKanikoImage(imageRepo, dockerfile)

    kanikoCmd := exec.Command("docker",
        append([]string{"run",
            "-v", os.Getenv("HOME") + "/.config/gcloud:/root/.config/gcloud",
            "-v", cwd + ":/workspace",
            ExecutorImage,
            "-f", dockerfile,
            "-d", kanikoImage,
            "--digest-file", "./digest",
            "-c", buildContextPath,
        })...,
    )

    timer := timing.Start(dockerfile + "_kaniko_relative_paths")
    _, err := RunCommandWithoutTest(kanikoCmd)
    timing.DefaultRun.Stop(timer)
    if err != nil {
        return fmt.Errorf("Failed to build relative path image %s with kaniko command \"%s\": %s", kanikoImage, kanikoCmd.Args, err)
    }

    return nil
}
@ -30,10 +30,9 @@ import (
    "testing"
    "time"

    "github.com/google/go-containerregistry/pkg/name"
    "github.com/google/go-containerregistry/pkg/v1/daemon"
    "golang.org/x/sync/errgroup"

    "github.com/GoogleContainerTools/kaniko/pkg/timing"
    "github.com/GoogleContainerTools/kaniko/pkg/util"
@ -236,9 +235,95 @@ func TestRun(t *testing.T) {
    }
}

func TestGitBuildcontext(t *testing.T) {
    repo := "github.com/GoogleContainerTools/kaniko"
    dockerfile := "integration/dockerfiles/Dockerfile_test_run_2"

    // Build with docker
    dockerImage := GetDockerImage(config.imageRepo, "Dockerfile_test_git")
    dockerCmd := exec.Command("docker",
        append([]string{"build",
            "-t", dockerImage,
            "-f", dockerfile,
            repo})...)
    out, err := RunCommandWithoutTest(dockerCmd)
    if err != nil {
        t.Errorf("Failed to build image %s with docker command \"%s\": %s %s", dockerImage, dockerCmd.Args, err, string(out))
    }

    // Build with kaniko
    kanikoImage := GetKanikoImage(config.imageRepo, "Dockerfile_test_git")
    kanikoCmd := exec.Command("docker",
        append([]string{"run",
            "-v", os.Getenv("HOME") + "/.config/gcloud:/root/.config/gcloud",
            ExecutorImage,
            "-f", dockerfile,
            "-d", kanikoImage,
            "-c", fmt.Sprintf("git://%s", repo)})...)

    out, err = RunCommandWithoutTest(kanikoCmd)
    if err != nil {
        t.Errorf("Failed to build image %s with kaniko command \"%s\": %v %s", dockerImage, kanikoCmd.Args, err, string(out))
    }

    // container-diff
    daemonDockerImage := daemonPrefix + dockerImage
    containerdiffCmd := exec.Command("container-diff", "diff", "--no-cache",
        daemonDockerImage, kanikoImage,
        "-q", "--type=file", "--type=metadata", "--json")
    diff := RunCommand(containerdiffCmd, t)
    t.Logf("diff = %s", string(diff))

    expected := fmt.Sprintf(emptyContainerDiff, dockerImage, kanikoImage, dockerImage, kanikoImage)
    checkContainerDiffOutput(t, diff, expected)
}

func TestGitBuildContextWithBranch(t *testing.T) {
    repo := "github.com/GoogleContainerTools/kaniko#refs/tags/v0.10.0"
    dockerfile := "integration/dockerfiles/Dockerfile_test_run_2"

    // Build with docker
    dockerImage := GetDockerImage(config.imageRepo, "Dockerfile_test_git")
    dockerCmd := exec.Command("docker",
        append([]string{"build",
            "-t", dockerImage,
            "-f", dockerfile,
            repo})...)
    out, err := RunCommandWithoutTest(dockerCmd)
    if err != nil {
        t.Errorf("Failed to build image %s with docker command \"%s\": %s %s", dockerImage, dockerCmd.Args, err, string(out))
    }

    // Build with kaniko
    kanikoImage := GetKanikoImage(config.imageRepo, "Dockerfile_test_git")
    kanikoCmd := exec.Command("docker",
        append([]string{"run",
            "-v", os.Getenv("HOME") + "/.config/gcloud:/root/.config/gcloud",
            ExecutorImage,
            "-f", dockerfile,
            "-d", kanikoImage,
            "-c", fmt.Sprintf("git://%s", repo)})...)

    out, err = RunCommandWithoutTest(kanikoCmd)
    if err != nil {
        t.Errorf("Failed to build image %s with kaniko command \"%s\": %v %s", dockerImage, kanikoCmd.Args, err, string(out))
    }

    // container-diff
    daemonDockerImage := daemonPrefix + dockerImage
    containerdiffCmd := exec.Command("container-diff", "diff", "--no-cache",
        daemonDockerImage, kanikoImage,
        "-q", "--type=file", "--type=metadata", "--json")
    diff := RunCommand(containerdiffCmd, t)
    t.Logf("diff = %s", string(diff))

    expected := fmt.Sprintf(emptyContainerDiff, dockerImage, kanikoImage, dockerImage, kanikoImage)
    checkContainerDiffOutput(t, diff, expected)
}

func TestLayers(t *testing.T) {
    offset := map[string]int{
        "Dockerfile_test_add":     12,
        "Dockerfile_test_scratch": 3,
    }
    for dockerfile := range imageBuilder.FilesBuilt {
@ -301,6 +386,30 @@ func TestCache(t *testing.T) {
    }
}

func TestRelativePaths(t *testing.T) {

    dockerfile := "Dockerfile_test_copy"

    t.Run("test_relative_"+dockerfile, func(t *testing.T) {
        t.Parallel()
        imageBuilder.buildRelativePathsImage(config.imageRepo, dockerfile)

        dockerImage := GetDockerImage(config.imageRepo, dockerfile)
        kanikoImage := GetKanikoImage(config.imageRepo, dockerfile)

        // container-diff
        daemonDockerImage := daemonPrefix + dockerImage
        containerdiffCmd := exec.Command("container-diff", "diff", "--no-cache",
            daemonDockerImage, kanikoImage,
            "-q", "--type=file", "--type=metadata", "--json")
        diff := RunCommand(containerdiffCmd, t)
        t.Logf("diff = %s", string(diff))

        expected := fmt.Sprintf(emptyContainerDiff, dockerImage, kanikoImage, dockerImage, kanikoImage)
        checkContainerDiffOutput(t, diff, expected)
    })
}

type fileDiff struct {
    Name string
    Size int
@@ -42,6 +42,8 @@ func GetBuildContext(srcContext string) (BuildContext, error) {
		return &S3{context: context}, nil
	case constants.LocalDirBuildContextPrefix:
		return &Dir{context: context}, nil
+	case constants.GitBuildContextPrefix:
+		return &Git{context: context}, nil
	}
-	return nil, errors.New("unknown build context prefix provided, please use one of the following: gs://, dir://, s3://")
+	return nil, errors.New("unknown build context prefix provided, please use one of the following: gs://, dir://, s3://, git://")
}
@@ -0,0 +1,46 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package buildcontext

import (
	"os"
	"strings"

	"github.com/GoogleContainerTools/kaniko/pkg/constants"
	git "gopkg.in/src-d/go-git.v4"
	"gopkg.in/src-d/go-git.v4/plumbing"
)

// Git unifies calls to download and unpack the build context.
type Git struct {
	context string
}

// UnpackTarFromBuildContext clones the Git repository and returns the directory it was cloned into
func (g *Git) UnpackTarFromBuildContext() (string, error) {
	directory := constants.BuildContextDir
	parts := strings.Split(g.context, "#")
	options := git.CloneOptions{
		URL:      "https://" + parts[0],
		Progress: os.Stdout,
	}
	if len(parts) > 1 {
		options.ReferenceName = plumbing.ReferenceName(parts[1])
	}
	_, err := git.PlainClone(directory, false, &options)
	return directory, err
}
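The new `git://` context accepts an optional `#<ref>` suffix naming a branch or tag. A minimal, dependency-free sketch of just that parsing rule (the helper name `parseGitContext` is hypothetical, for illustration only; the real code feeds the parts straight into `git.CloneOptions`):

```go
package main

import (
	"fmt"
	"strings"
)

// parseGitContext mirrors the split logic above: everything before '#'
// becomes an https:// clone URL, everything after it is treated as a
// reference name (branch or tag). Hypothetical helper, not kaniko's API.
func parseGitContext(context string) (url, ref string) {
	parts := strings.Split(context, "#")
	url = "https://" + parts[0]
	if len(parts) > 1 {
		ref = parts[1]
	}
	return url, ref
}

func main() {
	u, r := parseGitContext("github.com/GoogleContainerTools/kaniko#refs/heads/master")
	fmt.Println(u) // https://github.com/GoogleContainerTools/kaniko
	fmt.Println(r) // refs/heads/master
}
```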
@@ -19,6 +19,7 @@ package buildcontext

import (
	"os"
	"path/filepath"
	"strings"

	"github.com/GoogleContainerTools/kaniko/pkg/constants"
	"github.com/GoogleContainerTools/kaniko/pkg/util"

@@ -36,9 +37,21 @@ type S3 struct {

// UnpackTarFromBuildContext download and untar a file from s3
func (s *S3) UnpackTarFromBuildContext() (string, error) {
	bucket, item := util.GetBucketAndItem(s.context)
-	sess, err := session.NewSessionWithOptions(session.Options{
+	option := session.Options{
		SharedConfigState: session.SharedConfigEnable,
-	})
+	}
+	endpoint := os.Getenv(constants.S3EndpointEnv)
+	forcePath := false
+	if strings.ToLower(os.Getenv(constants.S3ForcePathStyle)) == "true" {
+		forcePath = true
+	}
+	if endpoint != "" {
+		option.Config = aws.Config{
+			Endpoint:         aws.String(endpoint),
+			S3ForcePathStyle: aws.Bool(forcePath),
+		}
+	}
+	sess, err := session.NewSessionWithOptions(option)
	if err != nil {
		return bucket, err
	}
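The endpoint override above reduces to a small decision: a custom endpoint is applied only when `S3_ENDPOINT` is non-empty, and path-style addressing only when `S3_FORCE_PATH_STYLE` is `true` (compared case-insensitively). A dependency-free sketch of just that decision (the `endpointOverride` helper is hypothetical, not part of kaniko or the AWS SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// endpointOverride captures the decision above: it reports whether a
// custom S3 endpoint should be configured and whether path-style
// addressing should be forced, given the two raw env-var values.
func endpointOverride(endpoint, forcePathStyle string) (useCustom, forcePath bool) {
	forcePath = strings.ToLower(forcePathStyle) == "true"
	useCustom = endpoint != ""
	return useCustom, forcePath
}

func main() {
	use, force := endpointOverride("http://minio:9000", "TRUE")
	fmt.Println(use, force) // true true
}
```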
@@ -59,8 +59,8 @@ func (rc *RegistryCache) RetrieveLayer(ck string) (v1.Image, error) {
	}

	registryName := cacheRef.Repository.Registry.Name()
-	if rc.Opts.InsecureRegistries.Contains(registryName) {
-		newReg, err := name.NewInsecureRegistry(registryName, name.WeakValidation)
+	if rc.Opts.Insecure || rc.Opts.InsecureRegistries.Contains(registryName) {
+		newReg, err := name.NewRegistry(registryName, name.WeakValidation, name.Insecure)
		if err != nil {
			return nil, err
		}

@@ -88,7 +88,7 @@ func (rc *RegistryCache) RetrieveLayer(ck string) (v1.Image, error) {
	// Layer is stale, rebuild it.
	if expiry.Before(time.Now()) {
		logrus.Infof("Cache entry expired: %s", cache)
-		return nil, errors.New(fmt.Sprintf("Cache entry expired: %s", cache))
+		return nil, fmt.Errorf("Cache entry expired: %s", cache)
	}

	// Force the manifest to be populated
@@ -114,7 +114,7 @@ func Destination(opts *config.KanikoOptions, cacheKey string) (string, error) {
}

// LocalSource retrieves a source image from a local cache given cacheKey
-func LocalSource(opts *config.KanikoOptions, cacheKey string) (v1.Image, error) {
+func LocalSource(opts *config.CacheOptions, cacheKey string) (v1.Image, error) {
	cache := opts.CacheDir
	if cache == "" {
		return nil, nil
@@ -29,6 +29,7 @@ import (
	"github.com/sirupsen/logrus"
)

// WarmCache populates the cache
func WarmCache(opts *config.WarmerOptions) error {
	cacheDir := opts.CacheDir
	images := opts.Images

@@ -41,7 +42,7 @@ func WarmCache(opts *config.WarmerOptions) error {
		return errors.Wrap(err, fmt.Sprintf("Failed to verify image name: %s", image))
	}
	img, err := remote.Image(cacheRef)
-	if err != nil {
+	if err != nil || img == nil {
		return errors.Wrap(err, fmt.Sprintf("Failed to retrieve image: %s", image))
	}

@@ -50,6 +51,14 @@ func WarmCache(opts *config.WarmerOptions) error {
		return errors.Wrap(err, fmt.Sprintf("Failed to retrieve digest: %s", image))
	}
	cachePath := path.Join(cacheDir, digest.String())
+
+	if !opts.Force {
+		_, err := LocalSource(&opts.CacheOptions, digest.String())
+		if err == nil {
+			continue
+		}
+	}
+
	err = tarball.WriteToFile(cachePath, cacheRef, img)
	if err != nil {
		return errors.Wrap(err, fmt.Sprintf("Failed to write %s to cache", image))
@@ -43,11 +43,11 @@ type AddCommand struct {
// - If remote file has HTTP Last-Modified header, we set the mtime of the file to that timestamp
// - If dest doesn't end with a slash, the filepath is inferred to be <dest>/<filename>
// 2. If <src> is a local tar archive:
-// -If <src> is a local tar archive, it is unpacked at the dest, as 'tar -x' would
+// - it is unpacked at the dest, as 'tar -x' would
func (a *AddCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
	replacementEnvs := buildArgs.ReplacementEnvs(config.Env)

-	srcs, dest, err := resolveEnvAndWildcards(a.cmd.SourcesAndDest, a.buildcontext, replacementEnvs)
+	srcs, dest, err := util.ResolveEnvAndWildcards(a.cmd.SourcesAndDest, a.buildcontext, replacementEnvs)
	if err != nil {
		return err
	}

@@ -61,15 +61,22 @@ func (a *AddCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bui
	for _, src := range srcs {
		fullPath := filepath.Join(a.buildcontext, src)
		if util.IsSrcRemoteFileURL(src) {
-			urlDest := util.URLDestinationFilepath(src, dest, config.WorkingDir)
+			urlDest, err := util.URLDestinationFilepath(src, dest, config.WorkingDir, replacementEnvs)
+			if err != nil {
+				return err
+			}
			logrus.Infof("Adding remote URL %s to %s", src, urlDest)
			if err := util.DownloadFileToDest(src, urlDest); err != nil {
				return err
			}
			a.snapshotFiles = append(a.snapshotFiles, urlDest)
		} else if util.IsFileLocalTarArchive(fullPath) {
-			logrus.Infof("Unpacking local tar archive %s to %s", src, dest)
-			extractedFiles, err := util.UnpackLocalTarArchive(fullPath, dest)
+			tarDest, err := util.DestinationFilepath("", dest, config.WorkingDir)
+			if err != nil {
+				return err
+			}
+			logrus.Infof("Unpacking local tar archive %s to %s", src, tarDest)
+			extractedFiles, err := util.UnpackLocalTarArchive(fullPath, tarDest)
			if err != nil {
				return err
			}

@@ -111,7 +118,7 @@ func (a *AddCommand) String() string {
func (a *AddCommand) FilesUsedFromContext(config *v1.Config, buildArgs *dockerfile.BuildArgs) ([]string, error) {
	replacementEnvs := buildArgs.ReplacementEnvs(config.Env)

-	srcs, _, err := resolveEnvAndWildcards(a.cmd.SourcesAndDest, a.buildcontext, replacementEnvs)
+	srcs, _, err := util.ResolveEnvAndWildcards(a.cmd.SourcesAndDest, a.buildcontext, replacementEnvs)
	if err != nil {
		return nil, err
	}
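The remote-URL branch above resolves where a downloaded file lands, following the usual ADD rule for directory-style destinations. A dependency-free sketch of the core of that rule (the `urlDest` helper is hypothetical and deliberately ignores env replacement and working-dir handling, which the real `util.URLDestinationFilepath` performs):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// urlDest is a simplified version of the destination rule: a dest ending
// in '/' is treated as a directory and the URL's base name is appended;
// otherwise dest itself is the target filepath.
func urlDest(src, dest string) string {
	if strings.HasSuffix(dest, "/") {
		return dest + path.Base(src)
	}
	return dest
}

func main() {
	fmt.Println(urlDest("https://example.com/archive.tar.gz", "/app/"))      // /app/archive.tar.gz
	fmt.Println(urlDest("https://example.com/archive.tar.gz", "/app/a.tgz")) // /app/a.tgz
}
```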
@@ -30,27 +30,35 @@ type ArgCommand struct {

// ExecuteCommand only needs to add this ARG key/value as seen
func (r *ArgCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
-	replacementEnvs := buildArgs.ReplacementEnvs(config.Env)
-	resolvedKey, err := util.ResolveEnvironmentReplacement(r.cmd.Key, replacementEnvs, false)
+	key, val, err := ParseArg(r.cmd.Key, r.cmd.Value, config.Env, buildArgs)
	if err != nil {
		return err
	}

+	buildArgs.AddArg(key, val)
+	return nil
+}
+
+func ParseArg(key string, val *string, env []string, ba *dockerfile.BuildArgs) (string, *string, error) {
+	replacementEnvs := ba.ReplacementEnvs(env)
+	resolvedKey, err := util.ResolveEnvironmentReplacement(key, replacementEnvs, false)
+	if err != nil {
+		return "", nil, err
+	}
	var resolvedValue *string
-	if r.cmd.Value != nil {
-		value, err := util.ResolveEnvironmentReplacement(*r.cmd.Value, replacementEnvs, false)
+	if val != nil {
+		value, err := util.ResolveEnvironmentReplacement(*val, replacementEnvs, false)
		if err != nil {
-			return err
+			return "", nil, err
		}
		resolvedValue = &value
	} else {
-		meta := buildArgs.GetAllMeta()
+		meta := ba.GetAllMeta()
		if value, ok := meta[resolvedKey]; ok {
			resolvedValue = &value
		}
	}

-	buildArgs.AddArg(resolvedKey, resolvedValue)
-	return nil
+	return resolvedKey, resolvedValue, nil
}

// String returns some information about the command for the image config history
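The extracted `ParseArg` keeps the same fallback order: an ARG with an explicitly provided value uses it, otherwise the value (if any) comes from the Dockerfile's meta-args. A map-based sketch of that fallback (the `resolveArg` helper and its signature are illustrative stand-ins, not kaniko's API, and env replacement is omitted):

```go
package main

import "fmt"

// resolveArg mimics the fallback in ParseArg: prefer an explicitly
// provided value, else look the key up in the meta-args map; nil means
// the ARG ends up with no value at all.
func resolveArg(key string, val *string, meta map[string]string) *string {
	if val != nil {
		return val
	}
	if v, ok := meta[key]; ok {
		return &v
	}
	return nil
}

func main() {
	meta := map[string]string{"myFile": "foo"}
	explicit := "bar"
	fmt.Println(*resolveArg("myFile", &explicit, meta)) // bar
	fmt.Println(*resolveArg("myFile", nil, meta))       // foo
	fmt.Println(resolveArg("other", nil, meta))         // <nil>
}
```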
@@ -45,7 +45,7 @@ func (c *CopyCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bu

	replacementEnvs := buildArgs.ReplacementEnvs(config.Env)

-	srcs, dest, err := resolveEnvAndWildcards(c.cmd.SourcesAndDest, c.buildcontext, replacementEnvs)
+	srcs, dest, err := util.ResolveEnvAndWildcards(c.cmd.SourcesAndDest, c.buildcontext, replacementEnvs)
	if err != nil {
		return err
	}

@@ -100,18 +100,6 @@ func (c *CopyCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bu
	return nil
}

-func resolveEnvAndWildcards(sd instructions.SourcesAndDest, buildcontext string, envs []string) ([]string, string, error) {
-	// First, resolve any environment replacement
-	resolvedEnvs, err := util.ResolveEnvironmentReplacementList(sd, envs, true)
-	if err != nil {
-		return nil, "", err
-	}
-	dest := resolvedEnvs[len(resolvedEnvs)-1]
-	// Resolve wildcards and get a list of resolved sources
-	srcs, err := util.ResolveSources(resolvedEnvs, buildcontext)
-	return srcs, dest, err
-}
-
// FilesToSnapshot should return an empty array if still nil; no files were changed
func (c *CopyCommand) FilesToSnapshot() []string {
	return c.snapshotFiles

@@ -129,7 +117,7 @@ func (c *CopyCommand) FilesUsedFromContext(config *v1.Config, buildArgs *dockerf
	}

	replacementEnvs := buildArgs.ReplacementEnvs(config.Env)
-	srcs, _, err := resolveEnvAndWildcards(c.cmd.SourcesAndDest, c.buildcontext, replacementEnvs)
+	srcs, _, err := util.ResolveEnvAndWildcards(c.cmd.SourcesAndDest, c.buildcontext, replacementEnvs)
	if err != nil {
		return nil, err
	}
@@ -54,7 +54,7 @@ func (r *ExposeCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.
	}
	protocol := strings.Split(p, "/")[1]
	if !validProtocol(protocol) {
-		return fmt.Errorf("Invalid protocol: %s", protocol)
+		return fmt.Errorf("invalid protocol: %s", protocol)
	}
	logrus.Infof("Adding exposed port: %s", p)
	existingPorts[p] = struct{}{}
@@ -31,10 +31,6 @@ type UserCommand struct {
	cmd *instructions.UserCommand
}

-func (r *UserCommand) RequiresUnpackedFS() bool {
-	return true
-}
-
func (r *UserCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
	logrus.Info("cmd: USER")
	u := r.cmd.User

@@ -52,11 +48,6 @@ func (r *UserCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bu
		}
	}

-	_, _, err = util.GetUserFromUsername(userStr, groupStr)
-	if err != nil {
-		return err
-	}
-
	if groupStr != "" {
		userStr = userStr + ":" + groupStr
	}
@@ -28,57 +28,42 @@ import (
var userTests = []struct {
	user        string
	expectedUID string
-	shouldError bool
}{
	{
		user:        "root",
		expectedUID: "root",
-		shouldError: false,
	},
	{
		user:        "0",
		expectedUID: "0",
-		shouldError: false,
	},
	{
		user:        "fakeUser",
-		expectedUID: "",
-		shouldError: true,
+		expectedUID: "fakeUser",
	},
	{
		user:        "root:root",
		expectedUID: "root:root",
-		shouldError: false,
	},
	{
		user:        "0:root",
		expectedUID: "0:root",
-		shouldError: false,
	},
	{
		user:        "root:0",
		expectedUID: "root:0",
-		shouldError: false,
	},
	{
		user:        "0:0",
		expectedUID: "0:0",
-		shouldError: false,
	},
-	{
-		user:        "root:fakeGroup",
-		expectedUID: "",
-		shouldError: true,
-	},
	{
		user:        "$envuser",
		expectedUID: "root",
-		shouldError: false,
	},
	{
		user:        "root:$envgroup",
		expectedUID: "root:root",
-		shouldError: false,
	},
}

@@ -97,6 +82,6 @@ func TestUpdateUser(t *testing.T) {
		}
		buildArgs := dockerfile.NewBuildArgs([]string{})
		err := cmd.ExecuteCommand(cfg, buildArgs)
-		testutil.CheckErrorAndDeepEqual(t, test.shouldError, err, test.expectedUID, cfg.User)
+		testutil.CheckErrorAndDeepEqual(t, false, err, test.expectedUID, cfg.User)
	}
}
@@ -54,7 +54,7 @@ func (v *VolumeCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.
		if _, err := os.Stat(volume); os.IsNotExist(err) {
			logrus.Infof("Creating directory %s", volume)
			if err := os.MkdirAll(volume, 0755); err != nil {
-				return fmt.Errorf("Could not create directory for volume %s: %s", volume, err)
+				return fmt.Errorf("could not create directory for volume %s: %s", volume, err)
			}
		}
	}
@@ -20,8 +20,15 @@ import (
	"time"
)

+// CacheOptions are base image cache options that are set by command line arguments
+type CacheOptions struct {
+	CacheDir string
+	CacheTTL time.Duration
+}
+
// KanikoOptions are options that are set by command line arguments
type KanikoOptions struct {
+	CacheOptions
	DockerfilePath string
	SrcContext     string
	SnapshotMode   string

@@ -29,7 +36,8 @@ type KanikoOptions struct {
	TarPath       string
	Target        string
	CacheRepo     string
-	CacheDir      string
+	DigestFile    string
+	OCILayoutPath string
	Destinations  multiArg
	BuildArgs     multiArg
	Insecure      bool

@@ -41,13 +49,13 @@ type KanikoOptions struct {
	NoPush                  bool
	Cache                   bool
	Cleanup                 bool
-	CacheTTL                time.Duration
	InsecureRegistries      multiArg
	SkipTLSVerifyRegistries multiArg
}

// WarmerOptions are options that are set by command line arguments to the cache warmer.
type WarmerOptions struct {
-	Images   multiArg
-	CacheDir string
+	CacheOptions
+	Images multiArg
+	Force  bool
}
@@ -26,4 +26,5 @@ type KanikoStage struct {
	BaseImageStoredLocally bool
	SaveStage              bool
	MetaArgs               []instructions.ArgCommand
+	Index                  int
}
@@ -57,6 +57,7 @@ const (
	GCSBuildContextPrefix      = "gs://"
	S3BuildContextPrefix       = "s3://"
	LocalDirBuildContextPrefix = "dir://"
+	GitBuildContextPrefix      = "git://"

	HOME = "HOME"
	// DefaultHOMEValue is the default value Docker sets for $HOME

@@ -69,6 +70,10 @@ const (

	// Name of the .dockerignore file
	Dockerignore = ".dockerignore"
+
+	// S3 Custom endpoint ENV name
+	S3EndpointEnv    = "S3_ENDPOINT"
+	S3ForcePathStyle = "S3_FORCE_PATH_STYLE"
)

// ScratchEnvVars are the default environment variables needed for a scratch image.
@@ -25,6 +25,8 @@ import (
	"strconv"
	"strings"

+	"github.com/sirupsen/logrus"
+
	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/pkg/util"
	"github.com/moby/buildkit/frontend/dockerfile/instructions"

@@ -67,6 +69,7 @@ func Stages(opts *config.KanikoOptions) ([]config.KanikoStage, error) {
			return nil, errors.Wrap(err, "resolving base name")
		}
		stage.Name = resolvedBaseName
+		logrus.Infof("Resolved base name %s to %s", stage.BaseName, stage.Name)
		kanikoStages = append(kanikoStages, config.KanikoStage{
			Stage:          stage,
			BaseImageIndex: baseImageIndex(index, stages),

@@ -74,6 +77,7 @@ func Stages(opts *config.KanikoOptions) ([]config.KanikoStage, error) {
			SaveStage: saveStage(index, stages),
			Final:     index == targetStage,
			MetaArgs:  metaArgs,
+			Index:     index,
		})
		if index == targetStage {
			break

@@ -107,7 +111,7 @@ func Parse(b []byte) ([]instructions.Stage, []instructions.ArgCommand, error) {
	if err != nil {
		return nil, nil, err
	}
-	return stages, metaArgs, err
+	return stages, metaArgs, nil
}

// targetStage returns the index of the target stage kaniko is trying to build

@@ -175,14 +179,6 @@ func saveStage(index int, stages []instructions.Stage) bool {
				return true
			}
		}
-		for _, cmd := range stage.Commands {
-			switch c := cmd.(type) {
-			case *instructions.CopyCommand:
-				if c.From == strconv.Itoa(index) {
-					return true
-				}
-			}
-		}
	}
	return false
}
@@ -114,7 +114,7 @@ func Test_SaveStage(t *testing.T) {
		{
			name:     "reference stage in later copy command",
			index:    0,
-			expected: true,
+			expected: false,
		},
		{
			name: "reference stage in later from command",
@@ -23,6 +23,8 @@ import (
	"strconv"
	"time"

+	"github.com/otiai10/copy"
+
	"github.com/google/go-containerregistry/pkg/v1/partial"

	"github.com/moby/buildkit/frontend/dockerfile/instructions"

@@ -60,10 +62,11 @@ type stageBuilder struct {
	opts            *config.KanikoOptions
	cmds            []commands.DockerCommand
	args            *dockerfile.BuildArgs
+	crossStageDeps  map[int][]string
}

// newStageBuilder returns a new type stageBuilder which contains all the information required to build the stage
-func newStageBuilder(opts *config.KanikoOptions, stage config.KanikoStage) (*stageBuilder, error) {
+func newStageBuilder(opts *config.KanikoOptions, stage config.KanikoStage, crossStageDeps map[int][]string) (*stageBuilder, error) {
	sourceImage, err := util.RetrieveSourceImage(stage, opts)
	if err != nil {
		return nil, err

@@ -96,6 +99,7 @@ func newStageBuilder(opts *config.KanikoOptions, stage config.KanikoStage) (*sta
		snapshotter:     snapshotter,
		baseImageDigest: digest.String(),
		opts:            opts,
+		crossStageDeps:  crossStageDeps,
	}

	for _, cmd := range s.stage.Commands {

@@ -120,7 +124,7 @@ func initializeConfig(img partial.WithConfigFile) (*v1.ConfigFile, error) {
		return nil, err
	}

-	if img == empty.Image {
+	if imageConfig.Config.Env == nil {
		imageConfig.Config.Env = constants.ScratchEnvVars
	}
	return imageConfig, nil

@@ -186,11 +190,7 @@ func (s *stageBuilder) optimize(compositeKey CompositeCache, cfg v1.Config) erro

func (s *stageBuilder) build() error {
	// Set the initial cache key to be the base image digest, the build args and the SrcContext.
-	dgst, err := util.ReproducibleDigest(s.image)
-	if err != nil {
-		return err
-	}
-	compositeKey := NewCompositeCache(dgst)
+	compositeKey := NewCompositeCache(s.baseImageDigest)
	compositeKey.AddKey(s.opts.BuildArgs...)

	// Apply optimizations to the instructions.

@@ -207,6 +207,10 @@ func (s *stageBuilder) build() error {
			break
		}
	}
+	if len(s.crossStageDeps[s.stage.Index]) > 0 {
+		shouldUnpack = true
+	}

	if shouldUnpack {
		t := timing.Start("FS Unpacking")
		if _, err := util.GetFSFromImage(constants.RootDir, s.image); err != nil {
@@ -353,6 +357,68 @@ func (s *stageBuilder) saveSnapshotToImage(createdBy string, tarPath string) err

}

func CalculateDependencies(opts *config.KanikoOptions) (map[int][]string, error) {
	stages, err := dockerfile.Stages(opts)
	if err != nil {
		return nil, err
	}
	images := []v1.Image{}
	depGraph := map[int][]string{}
	for _, s := range stages {
		ba := dockerfile.NewBuildArgs(opts.BuildArgs)
		ba.AddMetaArgs(s.MetaArgs)
		var image v1.Image
		var err error
		if s.BaseImageStoredLocally {
			image = images[s.BaseImageIndex]
		} else if s.Name == constants.NoBaseImage {
			image = empty.Image
		} else {
			image, err = util.RetrieveSourceImage(s, opts)
			if err != nil {
				return nil, err
			}
		}
		cfg, err := initializeConfig(image)
		if err != nil {
			return nil, err
		}
		for _, c := range s.Commands {
			switch cmd := c.(type) {
			case *instructions.CopyCommand:
				if cmd.From != "" {
					i, err := strconv.Atoi(cmd.From)
					if err != nil {
						continue
					}
					resolved, err := util.ResolveEnvironmentReplacementList(cmd.SourcesAndDest, ba.ReplacementEnvs(cfg.Config.Env), true)
					if err != nil {
						return nil, err
					}
					depGraph[i] = append(depGraph[i], resolved[0:len(resolved)-1]...)
				}
			case *instructions.EnvCommand:
				if err := util.UpdateConfigEnv(cmd.Env, &cfg.Config, ba.ReplacementEnvs(cfg.Config.Env)); err != nil {
					return nil, err
				}
				image, err = mutate.Config(image, cfg.Config)
				if err != nil {
					return nil, err
				}
			case *instructions.ArgCommand:
				k, v, err := commands.ParseArg(cmd.Key, cmd.Value, cfg.Config.Env, ba)
				if err != nil {
					return nil, err
				}
				ba.AddArg(k, v)
			}
		}
		images = append(images, image)
	}
	return depGraph, nil
}

// DoBuild executes building the Dockerfile
func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
	t := timing.Start("Total Build Time")
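`CalculateDependencies` reduces every `COPY --from=<index>` to an entry in a `map[int][]string` from source stage index to the paths later stages will need. The accumulation itself is simple; a self-contained sketch of it (the `copyCmd` type and `buildDepGraph` helper are simplified stand-ins, not kaniko's types, and env/arg resolution is omitted):

```go
package main

import (
	"fmt"
	"strconv"
)

// copyCmd is a simplified stand-in for a COPY --from=<stage> instruction:
// From is the source stage index as a string, Srcs the copied paths.
type copyCmd struct {
	From string
	Srcs []string
}

// buildDepGraph mimics the accumulation in CalculateDependencies: every
// COPY --from contributes its source paths to the referenced stage's entry.
func buildDepGraph(stages [][]copyCmd) map[int][]string {
	depGraph := map[int][]string{}
	for _, cmds := range stages {
		for _, cmd := range cmds {
			if cmd.From == "" {
				continue
			}
			i, err := strconv.Atoi(cmd.From)
			if err != nil {
				continue // named stages would need resolution first
			}
			depGraph[i] = append(depGraph[i], cmd.Srcs...)
		}
	}
	return depGraph
}

func main() {
	stages := [][]copyCmd{
		{},                                    // stage 0: no COPY --from
		{{From: "0", Srcs: []string{"/foo"}}}, // stage 1 copies /foo from stage 0
		{{From: "0", Srcs: []string{"/baz"}}}, // stage 2 copies /baz from stage 0
	}
	fmt.Println(buildDepGraph(stages)) // map[0:[/foo /baz]]
}
```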
@@ -361,7 +427,7 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
	if err != nil {
		return nil, err
	}
-	if err := util.GetExcludedFiles(opts.SrcContext); err != nil {
+	if err := util.GetExcludedFiles(opts.DockerfilePath, opts.SrcContext); err != nil {
		return nil, err
	}
	// Some stages may refer to other random images, not previous stages

@@ -369,8 +435,14 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
		return nil, err
	}

+	crossStageDependencies, err := CalculateDependencies(opts)
+	if err != nil {
+		return nil, err
+	}
+	logrus.Infof("Built cross stage deps: %v", crossStageDependencies)
+
	for index, stage := range stages {
-		sb, err := newStageBuilder(opts, stage)
+		sb, err := newStageBuilder(opts, stage, crossStageDependencies)
		if err != nil {
			return nil, err
		}

@@ -405,10 +477,21 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
			if err := saveStageAsTarball(strconv.Itoa(index), sourceImage); err != nil {
				return nil, err
			}
-			if err := extractImageToDependecyDir(strconv.Itoa(index), sourceImage); err != nil {
-				return nil, err
-			}
		}

+		filesToSave, err := filesToSave(crossStageDependencies[index])
+		if err != nil {
+			return nil, err
+		}
+		dstDir := filepath.Join(constants.KanikoDir, strconv.Itoa(index))
+		if err := os.MkdirAll(dstDir, 0644); err != nil {
+			return nil, err
+		}
+		for _, p := range filesToSave {
+			logrus.Infof("Saving file %s for later use.", p)
+			copy.Copy(p, filepath.Join(dstDir, p))
+		}
+
		// Delete the filesystem
		if err := util.DeleteFilesystem(); err != nil {
			return nil, err

@@ -418,6 +501,18 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
		return nil, err
	}

func filesToSave(deps []string) ([]string, error) {
	allFiles := []string{}
	for _, src := range deps {
		srcs, err := filepath.Glob(src)
		if err != nil {
			return nil, err
		}
		allFiles = append(allFiles, srcs...)
	}
	return allFiles, nil
}

func fetchExtraStages(stages []config.KanikoStage, opts *config.KanikoOptions) error {
	t := timing.Start("Fetching Extra Stages")
	defer timing.DefaultRun.Stop(t)

@@ -453,7 +548,7 @@ func fetchExtraStages(stages []config.KanikoStage, opts *config.KanikoOptions) e
			if err := saveStageAsTarball(c.From, sourceImage); err != nil {
				return err
			}
-			if err := extractImageToDependecyDir(c.From, sourceImage); err != nil {
+			if err := extractImageToDependencyDir(c.From, sourceImage); err != nil {
				return err
			}
		}

@@ -464,7 +559,7 @@ func fetchExtraStages(stages []config.KanikoStage, opts *config.KanikoOptions) e
	}
	return nil
}
-func extractImageToDependecyDir(name string, image v1.Image) error {
+func extractImageToDependencyDir(name string, image v1.Image) error {
	t := timing.Start("Extracting Image to Dependency Dir")
	defer timing.DefaultRun.Stop(t)
	dependencyDir := filepath.Join(constants.KanikoDir, name)
@@ -17,14 +17,21 @@ limitations under the License.
package executor

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"reflect"
	"sort"
	"testing"

-	"github.com/moby/buildkit/frontend/dockerfile/instructions"
-
	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
	"github.com/GoogleContainerTools/kaniko/testutil"
-	"github.com/google/go-containerregistry/pkg/v1"
+	"github.com/google/go-cmp/cmp"
+	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
+	"github.com/moby/buildkit/frontend/dockerfile/instructions"
)

func Test_reviewConfig(t *testing.T) {
@@ -180,3 +187,278 @@ func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
		})
	}
}

func TestCalculateDependencies(t *testing.T) {
	type args struct {
		dockerfile string
	}
	tests := []struct {
		name string
		args args
		want map[int][]string
	}{
		{
			name: "no deps",
			args: args{
				dockerfile: `
FROM debian as stage1
RUN foo
FROM stage1
RUN bar
`,
			},
			want: map[int][]string{},
		},
		{
			name: "args",
			args: args{
				dockerfile: `
ARG myFile=foo
FROM debian as stage1
RUN foo
FROM stage1
ARG myFile
COPY --from=stage1 /tmp/$myFile.txt .
RUN bar
`,
			},
			want: map[int][]string{
				0: {"/tmp/foo.txt"},
			},
		},
		{
			name: "simple deps",
			args: args{
				dockerfile: `
FROM debian as stage1
FROM alpine
COPY --from=stage1 /foo /bar
`,
			},
			want: map[int][]string{
				0: {"/foo"},
			},
		},
		{
			name: "two sets deps",
			args: args{
				dockerfile: `
FROM debian as stage1
FROM ubuntu as stage2
RUN foo
COPY --from=stage1 /foo /bar
FROM alpine
COPY --from=stage2 /bar /bat
`,
			},
			want: map[int][]string{
				0: {"/foo"},
				1: {"/bar"},
			},
		},
		{
			name: "double deps",
			args: args{
				dockerfile: `
FROM debian as stage1
FROM ubuntu as stage2
RUN foo
COPY --from=stage1 /foo /bar
FROM alpine
COPY --from=stage1 /baz /bat
`,
			},
			want: map[int][]string{
				0: {"/foo", "/baz"},
			},
		},
		{
			name: "envs in deps",
			args: args{
				dockerfile: `
FROM debian as stage1
FROM ubuntu as stage2
RUN foo
ENV key1 val1
ENV key2 val2
COPY --from=stage1 /foo/$key1 /foo/$key2 /bar
FROM alpine
COPY --from=stage2 /bar /bat
`,
			},
			want: map[int][]string{
				0: {"/foo/val1", "/foo/val2"},
				1: {"/bar"},
			},
		},
		{
			name: "envs from base image in deps",
			args: args{
				dockerfile: `
FROM debian as stage1
ENV key1 baseval1
FROM stage1 as stage2
RUN foo
ENV key2 val2
COPY --from=stage1 /foo/$key1 /foo/$key2 /bar
FROM alpine
COPY --from=stage2 /bar /bat
`,
			},
			want: map[int][]string{
				0: {"/foo/baseval1", "/foo/val2"},
				1: {"/bar"},
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			f, _ := ioutil.TempFile("", "")
			ioutil.WriteFile(f.Name(), []byte(tt.args.dockerfile), 0755)
			opts := &config.KanikoOptions{
				DockerfilePath: f.Name(),
			}

			got, err := CalculateDependencies(opts)
			if err != nil {
				t.Errorf("got error: %s,", err)
			}

			if !reflect.DeepEqual(got, tt.want) {
				diff := cmp.Diff(got, tt.want)
				t.Errorf("CalculateDependencies() = %v, want %v, diff %v", got, tt.want, diff)
			}
		})
	}
}

func Test_filesToSave(t *testing.T) {
	tests := []struct {
		name  string
		args  []string
		want  []string
		files []string
	}{
		{
			name:  "simple",
			args:  []string{"foo"},
			files: []string{"foo"},
			want:  []string{"foo"},
		},
		{
			name:  "glob",
			args:  []string{"foo*"},
			files: []string{"foo", "foo2", "fooooo", "bar"},
			want:  []string{"foo", "foo2", "fooooo"},
		},
		{
			name:  "complex glob",
			args:  []string{"foo*", "bar?"},
			files: []string{"foo", "foo2", "fooooo", "bar", "bar1", "bar2", "bar33"},
			want:  []string{"foo", "foo2", "fooooo", "bar1", "bar2"},
		},
		{
			name:  "dir",
			args:  []string{"foo"},
			files: []string{"foo/bar", "foo/baz", "foo/bat/baz"},
			want:  []string{"foo"},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			tmpDir, err := ioutil.TempDir("", "")
			if err != nil {
				t.Errorf("error creating tmpdir: %s", err)
			}
			defer os.RemoveAll(tmpDir)

			for _, f := range tt.files {
				p := filepath.Join(tmpDir, f)
				dir := filepath.Dir(p)
				if dir != "." {
					if err := os.MkdirAll(dir, 0755); err != nil {
						t.Errorf("error making dir: %s", err)
					}
				}
				fp, err := os.Create(p)
				if err != nil {
					t.Errorf("error making file: %s", err)
				}
				fp.Close()
			}

			args := []string{}
			for _, arg := range tt.args {
				args = append(args, filepath.Join(tmpDir, arg))
			}
			got, err := filesToSave(args)
			if err != nil {
				t.Errorf("got err: %s", err)
			}
			want := []string{}
			for _, w := range tt.want {
				want = append(want, filepath.Join(tmpDir, w))
			}
			sort.Strings(want)
			sort.Strings(got)
			if !reflect.DeepEqual(got, want) {
				t.Errorf("filesToSave() = %v, want %v", got, want)
			}
		})
	}
}

func TestInitializeConfig(t *testing.T) {
	tests := []struct {
|
||||
description string
|
||||
cfg v1.ConfigFile
|
||||
expected v1.Config
|
||||
}{
|
||||
{
|
||||
description: "env is not set in the image",
|
||||
cfg: v1.ConfigFile{
|
||||
Config: v1.Config{
|
||||
Image: "test",
|
||||
},
|
||||
},
|
||||
expected: v1.Config{
|
||||
Image: "test",
|
||||
Env: []string{
|
||||
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
description: "env is set in the image",
|
||||
cfg: v1.ConfigFile{
|
||||
Config: v1.Config{
|
||||
Env: []string{
|
||||
"PATH=/usr/local/something",
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: v1.Config{
|
||||
Env: []string{
|
||||
"PATH=/usr/local/something",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
description: "image is empty",
|
||||
expected: v1.Config{
|
||||
Env: []string{
|
||||
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
img, err := mutate.ConfigFile(empty.Image, &tt.cfg)
|
||||
if err != nil {
|
||||
t.Errorf("error seen when running test %s", err)
|
||||
t.Fail()
|
||||
}
|
||||
actual, _ := initializeConfig(img)
|
||||
testutil.CheckDeepEqual(t, tt.expected, actual.Config)
|
||||
}
|
||||
}

@@ -20,9 +20,11 @@ import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/GoogleContainerTools/kaniko/pkg/cache"

@@ -34,6 +36,7 @@ import (
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"

@@ -46,14 +49,79 @@ type withUserAgent struct {
	t http.RoundTripper
}

const (
	UpstreamClientUaKey = "UPSTREAM_CLIENT_TYPE"
)

func (w *withUserAgent) RoundTrip(r *http.Request) (*http.Response, error) {
	r.Header.Set("User-Agent", fmt.Sprintf("kaniko/%s", version.Version()))
	ua := []string{fmt.Sprintf("kaniko/%s", version.Version())}
	if upstream := os.Getenv(UpstreamClientUaKey); upstream != "" {
		ua = append(ua, upstream)
	}
	r.Header.Set("User-Agent", strings.Join(ua, ","))
	return w.t.RoundTrip(r)
}

// CheckPushPermissions checks that the configured credentials can be used to
// push to every specified destination.
func CheckPushPermissions(opts *config.KanikoOptions) error {
	if opts.NoPush {
		return nil
	}

	checked := map[string]bool{}
	for _, destination := range opts.Destinations {
		destRef, err := name.NewTag(destination, name.WeakValidation)
		if err != nil {
			return errors.Wrap(err, "getting tag for destination")
		}
		if checked[destRef.Context().RepositoryStr()] {
			continue
		}

		registryName := destRef.Repository.Registry.Name()
		if opts.Insecure || opts.InsecureRegistries.Contains(registryName) {
			newReg, err := name.NewRegistry(registryName, name.WeakValidation, name.Insecure)
			if err != nil {
				return errors.Wrap(err, "getting new insecure registry")
			}
			destRef.Repository.Registry = newReg
		}
		tr := makeTransport(opts, registryName)
		if err := remote.CheckPushPermission(destRef, creds.GetKeychain(), tr); err != nil {
			return errors.Wrapf(err, "checking push permission for %q", destRef)
		}
		checked[destRef.Context().RepositoryStr()] = true
	}
	return nil
}

// DoPush is responsible for pushing image to the destinations specified in opts
func DoPush(image v1.Image, opts *config.KanikoOptions) error {
	t := timing.Start("Total Push Time")

	if opts.DigestFile != "" {
		digest, err := image.Digest()
		if err != nil {
			return errors.Wrap(err, "error fetching digest")
		}
		digestByteArray := []byte(digest.String())
		err = ioutil.WriteFile(opts.DigestFile, digestByteArray, 0644)
		if err != nil {
			return errors.Wrap(err, "writing digest to file failed")
		}
	}

	if opts.OCILayoutPath != "" {
		path, err := layout.Write(opts.OCILayoutPath, empty.Index)
		if err != nil {
			return errors.Wrap(err, "writing empty layout")
		}
		if err := path.AppendImage(image); err != nil {
			return errors.Wrap(err, "appending image")
		}
	}

	destRefs := []name.Tag{}
	for _, destination := range opts.Destinations {
		destRef, err := name.NewTag(destination, name.WeakValidation)

@@ -80,7 +148,7 @@ func DoPush(image v1.Image, opts *config.KanikoOptions) error {
	for _, destRef := range destRefs {
		registryName := destRef.Repository.Registry.Name()
		if opts.Insecure || opts.InsecureRegistries.Contains(registryName) {
			newReg, err := name.NewInsecureRegistry(registryName, name.WeakValidation)
			newReg, err := name.NewRegistry(registryName, name.WeakValidation, name.Insecure)
			if err != nil {
				return errors.Wrap(err, "getting new insecure registry")
			}

@@ -92,16 +160,10 @@ func DoPush(image v1.Image, opts *config.KanikoOptions) error {
			return errors.Wrap(err, "resolving pushAuth")
		}

		// Create a transport to set our user-agent.
		tr := http.DefaultTransport
		if opts.SkipTLSVerify || opts.SkipTLSVerifyRegistries.Contains(registryName) {
			tr.(*http.Transport).TLSClientConfig = &tls.Config{
				InsecureSkipVerify: true,
			}
		}
		tr := makeTransport(opts, registryName)
		rt := &withUserAgent{t: tr}

		if err := remote.Write(destRef, image, pushAuth, rt); err != nil {
		if err := remote.Write(destRef, image, remote.WithAuth(pushAuth), remote.WithTransport(rt)); err != nil {
			return errors.Wrap(err, fmt.Sprintf("failed to push to destination %s", destRef))
		}
	}

@@ -142,6 +204,17 @@ func writeImageOutputs(image v1.Image, destRefs []name.Tag) error {
	return nil
}

func makeTransport(opts *config.KanikoOptions, registryName string) http.RoundTripper {
	// Create a transport to set our user-agent.
	tr := http.DefaultTransport
	if opts.SkipTLSVerify || opts.SkipTLSVerifyRegistries.Contains(registryName) {
		tr.(*http.Transport).TLSClientConfig = &tls.Config{
			InsecureSkipVerify: true,
		}
	}
	return tr
}

// pushLayerToCache pushes layer (tagged with cacheKey) to opts.Cache
// if opts.Cache doesn't exist, infer the cache from the given destination
func pushLayerToCache(opts *config.KanikoOptions, cacheKey string, tarPath string, createdBy string) error {

@@ -1,9 +1,12 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

@@ -14,6 +17,7 @@ limitations under the License.
package executor

import (
	"fmt"
	"os"
	"path/filepath"

@@ -91,4 +95,106 @@ func TestWriteImageOutputs(t *testing.T) {
		}
	})
}
	"bytes"
	"io/ioutil"
	"net/http"
	"os"
	"testing"

	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/testutil"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/random"
	"github.com/google/go-containerregistry/pkg/v1/validate"
)

func TestHeaderAdded(t *testing.T) {
	tests := []struct {
		name     string
		upstream string
		expected string
	}{{
		name:     "upstream env variable set",
		upstream: "skaffold-v0.25.45",
		expected: "kaniko/unset,skaffold-v0.25.45",
	}, {
		name:     "upstream env variable not set",
		expected: "kaniko/unset",
	},
	}
	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			rt := &withUserAgent{t: &mockRoundTripper{}}
			if test.upstream != "" {
				os.Setenv("UPSTREAM_CLIENT_TYPE", test.upstream)
				defer func() { os.Unsetenv("UPSTREAM_CLIENT_TYPE") }()
			}
			req, err := http.NewRequest("GET", "dummy", nil)
			if err != nil {
				t.Fatalf("could not create a req due to %s", err)
			}
			resp, err := rt.RoundTrip(req)
			testutil.CheckError(t, false, err)
			defer resp.Body.Close()
			body, err := ioutil.ReadAll(resp.Body)
			testutil.CheckErrorAndDeepEqual(t, false, err, test.expected, string(body))
		})
	}
}

type mockRoundTripper struct {
}

func (m *mockRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
	ua := r.UserAgent()
	return &http.Response{Body: ioutil.NopCloser(bytes.NewBufferString(ua))}, nil
}

func TestOCILayoutPath(t *testing.T) {
	tmpDir, err := ioutil.TempDir("", "")
	if err != nil {
		t.Fatalf("could not create temp dir: %s", err)
	}
	defer os.RemoveAll(tmpDir)

	image, err := random.Image(1024, 4)
	if err != nil {
		t.Fatalf("could not create image: %s", err)
	}

	digest, err := image.Digest()
	if err != nil {
		t.Fatalf("could not get image digest: %s", err)
	}

	want, err := image.Manifest()
	if err != nil {
		t.Fatalf("could not get image manifest: %s", err)
	}

	opts := config.KanikoOptions{
		NoPush:        true,
		OCILayoutPath: tmpDir,
	}

	if err := DoPush(image, &opts); err != nil {
		t.Fatalf("could not push image: %s", err)
	}

	layoutIndex, err := layout.ImageIndexFromPath(tmpDir)
	if err != nil {
		t.Fatalf("could not get index from layout: %s", err)
	}
	testutil.CheckError(t, false, validate.Index(layoutIndex))

	layoutImage, err := layoutIndex.Image(digest)
	if err != nil {
		t.Fatalf("could not get image from layout: %s", err)
	}

	got, err := layoutImage.Manifest()
	testutil.CheckErrorAndDeepEqual(t, false, err, want, got)
}

@@ -103,17 +103,16 @@ func (l *LayeredMap) Add(s string) error {
	// Use hash function and add to layers
	newV, err := l.hasher(s)
	if err != nil {
		return fmt.Errorf("Error creating hash for %s: %v", s, err)
		return fmt.Errorf("error creating hash for %s: %v", s, err)
	}
	l.layers[len(l.layers)-1][s] = newV
	return nil
}

// MaybeAdd will add the specified file s to the layered map if
// the layered map's hashing function determines it has changed. If
// it has not changed, it will not be added. Returns true if the file
// was added.
func (l *LayeredMap) MaybeAdd(s string) (bool, error) {
// CheckFileChange checks whether a given file changed
// from the current layered map by its hashing function.
// Returns true if the file is changed.
func (l *LayeredMap) CheckFileChange(s string) (bool, error) {
	oldV, ok := l.Get(s)
	t := timing.Start("Hashing files")
	defer timing.DefaultRun.Stop(t)

@@ -124,6 +123,5 @@ func (l *LayeredMap) MaybeAdd(s string) (bool, error) {
	if ok && newV == oldV {
		return false, nil
	}
	l.layers[len(l.layers)-1][s] = newV
	return true, nil
}

@@ -20,6 +20,7 @@ import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"sort"
	"syscall"

	"github.com/GoogleContainerTools/kaniko/pkg/timing"

@@ -73,44 +74,15 @@ func (s *Snapshotter) TakeSnapshot(files []string) (string, error) {
	}
	logrus.Info("Taking snapshot of files...")
	logrus.Debugf("Taking snapshot of files %v", files)
	snapshottedFiles := make(map[string]bool)

	// First add to the tar any parent directories that haven't been added
	parentDirs := map[string]struct{}{}
	for _, file := range files {
		for _, p := range util.ParentDirectories(file) {
			parentDirs[p] = struct{}{}
		}
	}
	filesToAdd := []string{}
	for file := range parentDirs {
		file = filepath.Clean(file)
		snapshottedFiles[file] = true

		// The parent directory might already be in a previous layer.
		fileAdded, err := s.l.MaybeAdd(file)
		if err != nil {
			return "", fmt.Errorf("Unable to add parent dir %s to layered map: %s", file, err)
		}

		if fileAdded {
			filesToAdd = append(filesToAdd, file)
		}
	}

	// Next add the files themselves to the tar
	for _, file := range files {
		// We might have already added the file above as a parent directory of another file.
		file = filepath.Clean(file)
		if _, ok := snapshottedFiles[file]; ok {
			continue
		}
		snapshottedFiles[file] = true
	// Also add parent directories to keep their permissions correct.
	filesToAdd := filesWithParentDirs(files)

	// Add files to the layered map
	for _, file := range filesToAdd {
		if err := s.l.Add(file); err != nil {
			return "", fmt.Errorf("Unable to add file %s to layered map: %s", file, err)
			return "", fmt.Errorf("unable to add file %s to layered map: %s", file, err)
		}
		filesToAdd = append(filesToAdd, file)
	}

	t := util.NewTar(f)

@@ -201,16 +173,29 @@ func (s *Snapshotter) scanFullFilesystem() ([]string, []string, error) {
			logrus.Debugf("Not adding %s to layer, as it's whitelisted", path)
			continue
		}
		// Only add to the tar if we add it to the layeredmap.
		maybeAdd, err := s.l.MaybeAdd(path)
		// Only add changed files.
		fileChanged, err := s.l.CheckFileChange(path)
		if err != nil {
			return nil, nil, err
		}
		if maybeAdd {
		if fileChanged {
			logrus.Debugf("Adding %s to layer, because it was changed.", path)
			filesToAdd = append(filesToAdd, path)
		}
	}

	// Also add parent directories to keep their permissions correct.
	filesToAdd = filesWithParentDirs(filesToAdd)

	sort.Strings(filesToAdd)

	// Add files to the layered map
	for _, file := range filesToAdd {
		if err := s.l.Add(file); err != nil {
			return nil, nil, fmt.Errorf("unable to add file %s to layered map: %s", file, err)
		}
	}

	return filesToAdd, filesToWhiteOut, nil
}

@@ -230,3 +215,24 @@ func writeToTar(t util.Tar, files, whiteouts []string) error {
	}
	return nil
}

func filesWithParentDirs(files []string) []string {
	filesSet := map[string]bool{}

	for _, file := range files {
		file = filepath.Clean(file)
		filesSet[file] = true

		for _, dir := range util.ParentDirectories(file) {
			dir = filepath.Clean(dir)
			filesSet[dir] = true
		}
	}

	newFiles := []string{}
	for file := range filesSet {
		newFiles = append(newFiles, file)
	}

	return newFiles
}

@@ -21,6 +21,8 @@ import (
	"io/ioutil"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"testing"

	"github.com/GoogleContainerTools/kaniko/pkg/util"

@@ -30,6 +32,7 @@ import (

func TestSnapshotFSFileChange(t *testing.T) {
	testDir, snapshotter, cleanup, err := setUpTestDir()
	testDirWithoutLeadingSlash := strings.TrimLeft(testDir, "/")
	defer cleanup()
	if err != nil {
		t.Fatal(err)

@@ -54,12 +57,18 @@ func TestSnapshotFSFileChange(t *testing.T) {
	}
	// Check contents of the snapshot, make sure contents is equivalent to snapshotFiles
	tr := tar.NewReader(f)
	fooPath := filepath.Join(testDir, "foo")
	batPath := filepath.Join(testDir, "bar/bat")
	fooPath := filepath.Join(testDirWithoutLeadingSlash, "foo")
	batPath := filepath.Join(testDirWithoutLeadingSlash, "bar/bat")
	snapshotFiles := map[string]string{
		fooPath: "newbaz1",
		batPath: "baz",
	}
	for _, dir := range util.ParentDirectoriesWithoutLeadingSlash(fooPath) {
		snapshotFiles[dir] = ""
	}
	for _, dir := range util.ParentDirectoriesWithoutLeadingSlash(batPath) {
		snapshotFiles[dir] = ""
	}
	numFiles := 0
	for {
		hdr, err := tr.Next()

@@ -75,19 +84,60 @@ func TestSnapshotFSFileChange(t *testing.T) {
			t.Fatalf("Contents of %s incorrect, expected: %s, actual: %s", hdr.Name, snapshotFiles[hdr.Name], string(contents))
		}
	}
	if numFiles != 2 {
	if numFiles != len(snapshotFiles) {
		t.Fatalf("Incorrect number of files were added, expected: %v, actual: %v", len(snapshotFiles), numFiles)
	}
}

func TestSnapshotFSIsReproducible(t *testing.T) {
	testDir, snapshotter, cleanup, err := setUpTestDir()
	defer cleanup()
	if err != nil {
		t.Fatal(err)
	}
	// Make some changes to the filesystem
	newFiles := map[string]string{
		"foo":     "newbaz1",
		"bar/bat": "baz",
	}
	if err := testutil.SetupFiles(testDir, newFiles); err != nil {
		t.Fatalf("Error setting up fs: %s", err)
	}
	// Take another snapshot
	tarPath, err := snapshotter.TakeSnapshotFS()
	if err != nil {
		t.Fatalf("Error taking snapshot of fs: %s", err)
	}

	f, err := os.Open(tarPath)
	if err != nil {
		t.Fatal(err)
	}
	// Check contents of the snapshot, make sure contents are sorted by name
	tr := tar.NewReader(f)
	var filesInTar []string
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		filesInTar = append(filesInTar, hdr.Name)
	}
	if !sort.StringsAreSorted(filesInTar) {
		t.Fatalf("Expected the files in the tar archive to be sorted; actual list was not sorted: %v", filesInTar)
	}
}

func TestSnapshotFSChangePermissions(t *testing.T) {
	testDir, snapshotter, cleanup, err := setUpTestDir()
	testDirWithoutLeadingSlash := strings.TrimLeft(testDir, "/")
	defer cleanup()
	if err != nil {
		t.Fatal(err)
	}
	// Change permissions on a file
	batPath := filepath.Join(testDir, "bar/bat")
	batPathWithoutLeadingSlash := filepath.Join(testDirWithoutLeadingSlash, "bar/bat")
	if err := os.Chmod(batPath, 0600); err != nil {
		t.Fatalf("Error changing permissions on %s: %v", batPath, err)
	}

@@ -103,7 +153,10 @@ func TestSnapshotFSChangePermissions(t *testing.T) {
	// Check contents of the snapshot, make sure contents is equivalent to snapshotFiles
	tr := tar.NewReader(f)
	snapshotFiles := map[string]string{
		batPath: "baz2",
		batPathWithoutLeadingSlash: "baz2",
	}
	for _, dir := range util.ParentDirectoriesWithoutLeadingSlash(batPath) {
		snapshotFiles[dir] = ""
	}
	numFiles := 0
	for {

@@ -111,6 +164,7 @@ func TestSnapshotFSChangePermissions(t *testing.T) {
		if err == io.EOF {
			break
		}
		t.Logf("Info %s in tar", hdr.Name)
		numFiles++
		if _, isFile := snapshotFiles[hdr.Name]; !isFile {
			t.Fatalf("File %s unexpectedly in tar", hdr.Name)

@@ -120,13 +174,14 @@ func TestSnapshotFSChangePermissions(t *testing.T) {
			t.Fatalf("Contents of %s incorrect, expected: %s, actual: %s", hdr.Name, snapshotFiles[hdr.Name], string(contents))
		}
	}
	if numFiles != 1 {
	if numFiles != len(snapshotFiles) {
		t.Fatalf("Incorrect number of files were added, expected: %v, got: %v", len(snapshotFiles), numFiles)
	}
}

func TestSnapshotFiles(t *testing.T) {
	testDir, snapshotter, cleanup, err := setUpTestDir()
	testDirWithoutLeadingSlash := strings.TrimLeft(testDir, "/")
	defer cleanup()
	if err != nil {
		t.Fatal(err)

@@ -147,7 +202,10 @@ func TestSnapshotFiles(t *testing.T) {
	}
	defer os.Remove(tarPath)

	expectedFiles := []string{"/", "/tmp", filepath.Join(testDir, "foo")}
	expectedFiles := []string{
		filepath.Join(testDirWithoutLeadingSlash, "foo"),
	}
	expectedFiles = append(expectedFiles, util.ParentDirectoriesWithoutLeadingSlash(filepath.Join(testDir, "foo"))...)

	f, err := os.Open(tarPath)
	if err != nil {

@@ -166,6 +224,8 @@ func TestSnapshotFiles(t *testing.T) {
		}
		actualFiles = append(actualFiles, hdr.Name)
	}
	sort.Strings(expectedFiles)
	sort.Strings(actualFiles)
	testutil.CheckErrorAndDeepEqual(t, false, nil, expectedFiles, actualFiles)
}

@@ -37,10 +37,6 @@ import (
func ResolveEnvironmentReplacementList(values, envs []string, isFilepath bool) ([]string, error) {
	var resolvedValues []string
	for _, value := range values {
		if IsSrcRemoteFileURL(value) {
			resolvedValues = append(resolvedValues, value)
			continue
		}
		resolved, err := ResolveEnvironmentReplacement(value, envs, isFilepath)
		logrus.Debugf("Resolved %s to %s", value, resolved)
		if err != nil {

@@ -53,7 +49,7 @@ func ResolveEnvironmentReplacementList(values, envs []string, isFilepath bool) (

// ResolveEnvironmentReplacement resolves replacing env variables in some text from envs
// It takes in a string representation of the command, the value to be resolved, and a list of envs (config.Env)
// Ex: fp = $foo/newdir, envs = [foo=/foodir], then this should return /foodir/newdir
// Ex: value = $foo/newdir, envs = [foo=/foodir], then this should return /foodir/newdir
// The dockerfile/shell package handles processing env values
// It handles escape characters and supports expansion from the config.Env array
// Shlex handles some of the following use cases (these and more are tested in integration tests)

@@ -63,7 +59,8 @@ func ResolveEnvironmentReplacementList(values, envs []string, isFilepath bool) (
func ResolveEnvironmentReplacement(value string, envs []string, isFilepath bool) (string, error) {
	shlex := shell.NewLex(parser.DefaultEscapeToken)
	fp, err := shlex.ProcessWord(value, envs)
	if !isFilepath {
	// Check after replacement if value is a remote URL
	if !isFilepath || IsSrcRemoteFileURL(fp) {
		return fp, err
	}
	if err != nil {

@@ -76,6 +73,22 @@ func ResolveEnvironmentReplacement(value string, envs []string, isFilepath bool)
	return fp, nil
}

func ResolveEnvAndWildcards(sd instructions.SourcesAndDest, buildcontext string, envs []string) ([]string, string, error) {
	// First, resolve any environment replacement
	resolvedEnvs, err := ResolveEnvironmentReplacementList(sd, envs, true)
	if err != nil {
		return nil, "", err
	}
	dest := resolvedEnvs[len(resolvedEnvs)-1]
	// Resolve wildcards and get a list of resolved sources
	srcs, err := ResolveSources(resolvedEnvs[0:len(resolvedEnvs)-1], buildcontext)
	if err != nil {
		return nil, "", err
	}
	err = IsSrcsValid(sd, srcs, buildcontext)
	return srcs, dest, err
}

// ContainsWildcards returns true if any entry in paths contains wildcards
func ContainsWildcards(paths []string) bool {
	for _, path := range paths {

@@ -88,23 +101,22 @@ func ContainsWildcards(paths []string) bool {

// ResolveSources resolves the given sources if the sources contains wildcards
// It returns a list of resolved sources
func ResolveSources(srcsAndDest instructions.SourcesAndDest, root string) ([]string, error) {
	srcs := srcsAndDest[:len(srcsAndDest)-1]
func ResolveSources(srcs []string, root string) ([]string, error) {
	// If sources contain wildcards, we first need to resolve them to actual paths
	if ContainsWildcards(srcs) {
		logrus.Debugf("Resolving srcs %v...", srcs)
		files, err := RelativeFiles("", root)
		if err != nil {
			return nil, err
		}
		srcs, err = matchSources(srcs, files)
		if err != nil {
			return nil, err
		}
		logrus.Debugf("Resolved sources to %v", srcs)
	if !ContainsWildcards(srcs) {
		return srcs, nil
	}
	// Check to make sure the sources are valid
	return srcs, IsSrcsValid(srcsAndDest, srcs, root)
	logrus.Infof("Resolving srcs %v...", srcs)
	files, err := RelativeFiles("", root)
	if err != nil {
		return nil, err
	}
	resolved, err := matchSources(srcs, files)
	if err != nil {
		return nil, err
	}
	logrus.Debugf("Resolved sources to %v", resolved)
	return resolved, nil
}

// matchSources returns a list of sources that match wildcards

@@ -165,20 +177,24 @@ func DestinationFilepath(src, dest, cwd string) (string, error) {
}

// URLDestinationFilepath gives the destination a file from a remote URL should be saved to
func URLDestinationFilepath(rawurl, dest, cwd string) string {
func URLDestinationFilepath(rawurl, dest, cwd string, envs []string) (string, error) {
	if !IsDestDir(dest) {
		if !filepath.IsAbs(dest) {
			return filepath.Join(cwd, dest)
			return filepath.Join(cwd, dest), nil
		}
		return dest
		return dest, nil
	}
	urlBase := filepath.Base(rawurl)
	urlBase, err := ResolveEnvironmentReplacement(urlBase, envs, true)
	if err != nil {
		return "", err
	}
	destPath := filepath.Join(dest, urlBase)

	if !filepath.IsAbs(dest) {
		destPath = filepath.Join(cwd, destPath)
	}
	return destPath
	return destPath, nil
}

func IsSrcsValid(srcsAndDest instructions.SourcesAndDest, resolvedSources []string, root string) error {

@@ -304,13 +320,12 @@ func GetUserFromUsername(userStr string, groupStr string) (string, string, error
	// Lookup by username
	userObj, err := user.Lookup(userStr)
	if err != nil {
		if _, ok := err.(user.UnknownUserError); ok {
			// Lookup by id
			userObj, err = user.LookupId(userStr)
			if err != nil {
				return "", "", err
			}
		} else {
		if _, ok := err.(user.UnknownUserError); !ok {
			return "", "", err
		}
		// Lookup by id
		userObj, err = user.LookupId(userStr)
		if err != nil {
			return "", "", err
		}
	}

@@ -320,12 +335,11 @@ func GetUserFromUsername(userStr string, groupStr string) (string, string, error
	if groupStr != "" {
		group, err = user.LookupGroup(groupStr)
		if err != nil {
			if _, ok := err.(user.UnknownGroupError); ok {
				group, err = user.LookupGroupId(groupStr)
				if err != nil {
					return "", "", err
				}
			} else {
			if _, ok := err.(user.UnknownGroupError); !ok {
				return "", "", err
			}
			group, err = user.LookupGroupId(groupStr)
			if err != nil {
				return "", "", err
			}
		}
|
|||
|
|
@@ -17,6 +17,7 @@ limitations under the License.
 package util
 
 import (
+	"reflect"
 	"sort"
 	"testing"
 
@@ -27,14 +28,12 @@ var testURL = "https://github.com/GoogleContainerTools/runtimes-common/blob/mast
 
 var testEnvReplacement = []struct {
 	path         string
-	command      string
 	envs         []string
 	isFilepath   bool
 	expectedPath string
}{
 	{
-		path:    "/simple/path",
-		command: "WORKDIR /simple/path",
+		path: "/simple/path",
 		envs: []string{
 			"simple=/path/",
 		},
@@ -42,8 +41,7 @@ var testEnvReplacement = []struct {
 		expectedPath: "/simple/path",
 	},
 	{
-		path:    "/simple/path/",
-		command: "WORKDIR /simple/path/",
+		path: "/simple/path/",
 		envs: []string{
 			"simple=/path/",
 		},
@@ -51,8 +49,7 @@ var testEnvReplacement = []struct {
 		expectedPath: "/simple/path/",
 	},
 	{
-		path:    "${a}/b",
-		command: "WORKDIR ${a}/b",
+		path: "${a}/b",
 		envs: []string{
 			"a=/path/",
 			"b=/path2/",
@@ -61,8 +58,7 @@ var testEnvReplacement = []struct {
 		expectedPath: "/path/b",
 	},
 	{
-		path:    "/$a/b",
-		command: "COPY ${a}/b /c/",
+		path: "/$a/b",
 		envs: []string{
 			"a=/path/",
 			"b=/path2/",
@@ -71,8 +67,7 @@ var testEnvReplacement = []struct {
 		expectedPath: "/path/b",
 	},
 	{
-		path:    "/$a/b/",
-		command: "COPY /${a}/b /c/",
+		path: "/$a/b/",
 		envs: []string{
 			"a=/path/",
 			"b=/path2/",
@@ -81,8 +76,7 @@ var testEnvReplacement = []struct {
 		expectedPath: "/path/b/",
 	},
 	{
-		path:    "\\$foo",
-		command: "COPY \\$foo /quux",
+		path: "\\$foo",
 		envs: []string{
 			"foo=/path/",
 		},
@@ -90,13 +84,35 @@ var testEnvReplacement = []struct {
 		expectedPath: "$foo",
 	},
 	{
-		path:    "8080/$protocol",
-		command: "EXPOSE 8080/$protocol",
+		path: "8080/$protocol",
 		envs: []string{
 			"protocol=udp",
 		},
 		expectedPath: "8080/udp",
 	},
+	{
+		path: "8080/$protocol",
+		envs: []string{
+			"protocol=udp",
+		},
+		expectedPath: "8080/udp",
+	},
+	{
+		path: "$url",
+		envs: []string{
+			"url=http://example.com",
+		},
+		isFilepath:   true,
+		expectedPath: "http://example.com",
+	},
+	{
+		path: "$url",
+		envs: []string{
+			"url=http://example.com",
+		},
+		isFilepath:   false,
+		expectedPath: "http://example.com",
+	},
 }
 
 func Test_EnvReplacement(t *testing.T) {
@@ -183,6 +199,7 @@ var urlDestFilepathTests = []struct {
 	cwd          string
 	dest         string
 	expectedDest string
+	envs         []string
 }{
 	{
 		url: "https://something/something",
@@ -202,12 +219,19 @@ var urlDestFilepathTests = []struct {
 		dest:         "/dest/",
 		expectedDest: "/dest/something",
 	},
+	{
+		url:          "https://something/$foo.tar.gz",
+		cwd:          "/test",
+		dest:         "/foo/",
+		expectedDest: "/foo/bar.tar.gz",
+		envs:         []string{"foo=bar"},
+	},
 }
 
 func Test_UrlDestFilepath(t *testing.T) {
 	for _, test := range urlDestFilepathTests {
-		actualDest := URLDestinationFilepath(test.url, test.dest, test.cwd)
-		testutil.CheckErrorAndDeepEqual(t, false, nil, test.expectedDest, actualDest)
+		actualDest, err := URLDestinationFilepath(test.url, test.dest, test.cwd, test.envs)
+		testutil.CheckErrorAndDeepEqual(t, false, err, test.expectedDest, actualDest)
 	}
 }
@@ -382,7 +406,7 @@ var isSrcValidTests = []struct {
 func Test_IsSrcsValid(t *testing.T) {
 	for _, test := range isSrcValidTests {
 		t.Run(test.name, func(t *testing.T) {
-			if err := GetExcludedFiles(buildContextPath); err != nil {
+			if err := GetExcludedFiles("", buildContextPath); err != nil {
 				t.Fatalf("error getting excluded files: %v", err)
 			}
 			err := IsSrcsValid(test.srcsAndDest, test.resolvedSources, buildContextPath)
@@ -400,7 +424,6 @@ var testResolveSources = []struct {
 		"context/foo",
 		"context/b*",
-		testURL,
 		"dest/",
 	},
 	expectedList: []string{
 		"context/foo",
@@ -448,3 +471,58 @@ func Test_RemoteUrls(t *testing.T) {
 	}
 }
+
+func TestResolveEnvironmentReplacementList(t *testing.T) {
+	type args struct {
+		values     []string
+		envs       []string
+		isFilepath bool
+	}
+	tests := []struct {
+		name    string
+		args    args
+		want    []string
+		wantErr bool
+	}{
+		{
+			name: "url",
+			args: args{
+				values: []string{
+					"https://google.com/$foo", "$bar", "$url",
+				},
+				envs: []string{
+					"foo=baz",
+					"bar=bat",
+					"url=https://google.com",
+				},
+			},
+			want: []string{"https://google.com/baz", "bat", "https://google.com"},
+		},
+		{
+			name: "mixed",
+			args: args{
+				values: []string{
+					"$foo", "$bar$baz", "baz",
+				},
+				envs: []string{
+					"foo=FOO",
+					"bar=BAR",
+					"baz=BAZ",
+				},
+			},
+			want: []string{"FOO", "BARBAZ", "baz"},
+		},
+	}
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			got, err := ResolveEnvironmentReplacementList(tt.args.values, tt.args.envs, tt.args.isFilepath)
+			if (err != nil) != tt.wantErr {
+				t.Errorf("ResolveEnvironmentReplacementList() error = %v, wantErr %v", err, tt.wantErr)
+				return
+			}
+			if !reflect.DeepEqual(got, tt.want) {
+				t.Errorf("ResolveEnvironmentReplacementList() = %v, want %v", got, tt.want)
+			}
+		})
+	}
+}
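The behavior these tests pin down, substituting `$var` and `${var}` references from a Dockerfile-style `KEY=value` list, can be sketched with the standard library's `os.Expand`. This is an illustrative stand-in, not kaniko's `ResolveEnvironmentReplacementList`, and unlike the tests above it does not honor the `\$` escape:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// expandFromList resolves $var and ${var} references in value using a
// Dockerfile-style KEY=value environment list; unknown variables expand
// to the empty string, as the os.Expand callback here dictates.
func expandFromList(value string, envs []string) string {
	lookup := map[string]string{}
	for _, e := range envs {
		if kv := strings.SplitN(e, "=", 2); len(kv) == 2 {
			lookup[kv[0]] = kv[1]
		}
	}
	return os.Expand(value, func(k string) string { return lookup[k] })
}

func main() {
	envs := []string{"protocol=udp", "url=https://google.com"}
	fmt.Println(expandFromList("8080/$protocol", envs))
	fmt.Println(expandFromList("$url/page", envs))
}
```

kaniko additionally cleans filepath results (the `isFilepath` flag in the tests), which this sketch deliberately omits.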
@@ -79,10 +79,7 @@ func GetFSFromImage(root string, img v1.Image) ([]string, error) {
 	if err != nil {
 		return nil, err
 	}
-
-	// Store a map of files to their mtime. We need to set mtimes in a second pass because creating files
-	// can change the mtime of a directory.
-	extractedFiles := map[string]time.Time{}
+	extractedFiles := []string{}
 
 	for i, l := range layers {
 		logrus.Debugf("Extracting layer %d", i)
@@ -113,17 +110,10 @@ func GetFSFromImage(root string, img v1.Image) ([]string, error) {
 			if err := extractFile(root, hdr, tr); err != nil {
 				return nil, err
 			}
-			extractedFiles[filepath.Join(root, filepath.Clean(hdr.Name))] = hdr.ModTime
+			extractedFiles = append(extractedFiles, filepath.Join(root, filepath.Clean(hdr.Name)))
 		}
 	}
-
-	fileNames := []string{}
-	for f, t := range extractedFiles {
-		fileNames = append(fileNames, f)
-		os.Chtimes(f, time.Time{}, t)
-	}
-
-	return fileNames, nil
+	return extractedFiles, nil
 }
 
 // DeleteFilesystem deletes the extracted image file system
@@ -131,6 +121,10 @@ func DeleteFilesystem() error {
 	logrus.Info("Deleting filesystem...")
 	return filepath.Walk(constants.RootDir, func(path string, info os.FileInfo, _ error) error {
 		if CheckWhitelist(path) {
+			if !isExist(path) {
+				logrus.Debugf("Path %s whitelisted, but not exists", path)
+				return nil
+			}
 			if info.IsDir() {
 				return filepath.SkipDir
 			}
@@ -148,6 +142,14 @@ func DeleteFilesystem() error {
 	})
 }
 
+// isExist returns true if path exists
+func isExist(path string) bool {
+	if _, err := os.Stat(path); err == nil {
+		return true
+	}
+	return false
+}
+
 // ChildDirInWhitelist returns true if there is a child file or directory of the path in the whitelist
 func childDirInWhitelist(path string) bool {
 	for _, d := range whitelist {
@@ -198,7 +200,7 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
 	switch hdr.Typeflag {
 	case tar.TypeReg:
 		logrus.Debugf("creating file %s", path)
-		// It's possible a file is in the tar before it's directory.
+		// It's possible a file is in the tar before its directory.
 		if _, err := os.Stat(dir); os.IsNotExist(err) {
 			logrus.Debugf("base %s for file %s does not exist. Creating.", base, path)
 			if err := os.MkdirAll(dir, 0755); err != nil {
@@ -216,27 +218,16 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
 		if err != nil {
 			return err
 		}
-		// manually set permissions on file, since the default umask (022) will interfere
-		if err = os.Chmod(path, mode); err != nil {
-			return err
-		}
 		if _, err = io.Copy(currFile, tr); err != nil {
 			return err
 		}
-		if err = currFile.Chown(uid, gid); err != nil {
+		if err = setFilePermissions(path, mode, uid, gid); err != nil {
 			return err
 		}
 		currFile.Close()
 	case tar.TypeDir:
 		logrus.Debugf("creating dir %s", path)
-		if err := os.MkdirAll(path, mode); err != nil {
-			return err
-		}
-		// In some cases, MkdirAll doesn't change the permissions, so run Chmod
-		if err := os.Chmod(path, mode); err != nil {
-			return err
-		}
-		if err := os.Chown(path, uid, gid); err != nil {
+		if err := mkdirAllWithPermissions(path, mode, uid, gid); err != nil {
 			return err
 		}
@@ -283,7 +274,6 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
 			return err
 		}
 	}
-
 	return nil
 }
 
@@ -310,12 +300,7 @@ func checkWhitelistRoot(root string) bool {
 	if root == constants.RootDir {
 		return false
 	}
-	for _, wl := range whitelist {
-		if HasFilepathPrefix(root, wl.Path, wl.PrefixMatchOnly) {
-			return true
-		}
-	}
-	return false
+	return CheckWhitelist(root)
 }
 
 // Get whitelist from roots of mounted files
@@ -388,8 +373,7 @@ func RelativeFiles(fp string, root string) ([]string, error) {
 }
 
 // ParentDirectories returns a list of paths to all parent directories
-// Ex. /some/temp/dir -> [/some, /some/temp, /some/temp/dir]
-// This purposefully excludes the /.
+// Ex. /some/temp/dir -> [/, /some, /some/temp, /some/temp/dir]
 func ParentDirectories(path string) []string {
 	path = filepath.Clean(path)
 	dirs := strings.Split(path, "/")
@@ -405,6 +389,24 @@ func ParentDirectories(path string) []string {
 	return paths
 }
 
+// ParentDirectoriesWithoutLeadingSlash returns a list of paths to all parent directories;
+// parent directories other than the root do not contain a leading /
+// Ex. /some/temp/dir -> [/, some, some/temp]
+func ParentDirectoriesWithoutLeadingSlash(path string) []string {
+	path = filepath.Clean(path)
+	dirs := strings.Split(path, "/")
+	dirPath := ""
+	paths := []string{constants.RootDir}
+	for index, dir := range dirs {
+		if dir == "" || index == (len(dirs)-1) {
+			continue
+		}
+		dirPath = filepath.Join(dirPath, dir)
+		paths = append(paths, dirPath)
+	}
+	return paths
+}
+
 // FilepathExists returns true if the path exists
 func FilepathExists(path string) bool {
 	_, err := os.Lstat(path)
@@ -429,10 +431,7 @@ func CreateFile(path string, reader io.Reader, perm os.FileMode, uid uint32, gid
 	if _, err := io.Copy(dest, reader); err != nil {
 		return err
 	}
-	if err := dest.Chmod(perm); err != nil {
-		return err
-	}
-	return dest.Chown(int(uid), int(gid))
+	return setFilePermissions(path, perm, int(uid), int(gid))
 }
 
 // AddVolumePath adds the given path to the volume whitelist.
@@ -492,13 +491,11 @@ func CopyDir(src, dest, buildcontext string) ([]string, error) {
 		if fi.IsDir() {
 			logrus.Debugf("Creating directory %s", destPath)
 
+			mode := fi.Mode()
 			uid := int(fi.Sys().(*syscall.Stat_t).Uid)
 			gid := int(fi.Sys().(*syscall.Stat_t).Gid)
 
-			if err := os.MkdirAll(destPath, fi.Mode()); err != nil {
-				return nil, err
-			}
-			if err := os.Chown(destPath, uid, gid); err != nil {
+			if err := mkdirAllWithPermissions(destPath, mode, uid, gid); err != nil {
 				return nil, err
 			}
 		} else if fi.Mode()&os.ModeSymlink != 0 {
@@ -557,11 +554,15 @@ func CopyFile(src, dest, buildcontext string) (bool, error) {
 }
 
 // GetExcludedFiles gets a list of files to exclude from the .dockerignore
-func GetExcludedFiles(buildcontext string) error {
-	path := filepath.Join(buildcontext, ".dockerignore")
+func GetExcludedFiles(dockerfilepath string, buildcontext string) error {
+	path := dockerfilepath + ".dockerignore"
+	if !FilepathExists(path) {
+		path = filepath.Join(buildcontext, ".dockerignore")
+	}
 	if !FilepathExists(path) {
 		return nil
 	}
+	logrus.Infof("Using dockerignore file: %v", path)
 	contents, err := ioutil.ReadFile(path)
 	if err != nil {
 		return errors.Wrap(err, "parsing .dockerignore")
@@ -591,10 +592,10 @@ func excludeFile(path, buildcontext string) bool {
 
 // HasFilepathPrefix checks if the given file path begins with prefix
 func HasFilepathPrefix(path, prefix string, prefixMatchOnly bool) bool {
-	path = filepath.Clean(path)
 	prefix = filepath.Clean(prefix)
-	pathArray := strings.Split(path, "/")
 	prefixArray := strings.Split(prefix, "/")
+	path = filepath.Clean(path)
+	pathArray := strings.SplitN(path, "/", len(prefixArray)+1)
 
 	if len(pathArray) < len(prefixArray) {
 		return false
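The `HasFilepathPrefix` change is the optimization the benchmark below measures: `strings.SplitN` with a limit of `len(prefixArray)+1` stops tokenizing the path once there are enough segments to compare against the prefix, so deep paths are no longer fully split. A minimal sketch of the whole comparison; the `prefixMatchOnly` handling (a path equal to the prefix does not count) is inferred from the surrounding tests, so treat it as an assumption:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// hasFilepathPrefix sketches the optimized check: the prefix is split
// fully, but the path is split into at most len(prefixArray)+1 chunks.
// Only the first len(prefixArray) chunks are compared, so the trailing
// remainder chunk never matters.
func hasFilepathPrefix(path, prefix string, prefixMatchOnly bool) bool {
	prefix = filepath.Clean(prefix)
	prefixArray := strings.Split(prefix, "/")
	path = filepath.Clean(path)
	pathArray := strings.SplitN(path, "/", len(prefixArray)+1)

	if len(pathArray) < len(prefixArray) {
		return false
	}
	if prefixMatchOnly && len(pathArray) == len(prefixArray) {
		return false
	}
	for i := range prefixArray {
		if prefixArray[i] != pathArray[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(hasFilepathPrefix("/foo/bar/baz", "/foo", true)) // true
	fmt.Println(hasFilepathPrefix("/foo/bar", "/foo/bar", true)) // false: equal, not a proper prefix
	fmt.Println(hasFilepathPrefix("/foobar", "/foo", true))      // false: segment boundary respected
}
```

Splitting on segments rather than using `strings.HasPrefix` is what keeps `/foobar` from matching the prefix `/foo`.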
@@ -614,3 +615,24 @@ func HasFilepathPrefix(path, prefix string, prefixMatchOnly bool) bool {
 func Volumes() []string {
 	return volumes
 }
+
+func mkdirAllWithPermissions(path string, mode os.FileMode, uid, gid int) error {
+	if err := os.MkdirAll(path, mode); err != nil {
+		return err
+	}
+	if err := os.Chown(path, uid, gid); err != nil {
+		return err
+	}
+	// In some cases, MkdirAll doesn't change the permissions, so run Chmod
+	// Must chmod after chown because chown clears the setuid/setgid bits.
+	return os.Chmod(path, mode)
+}
+
+func setFilePermissions(path string, mode os.FileMode, uid, gid int) error {
+	if err := os.Chown(path, uid, gid); err != nil {
+		return err
+	}
+	// manually set permissions on file, since the default umask (022) will interfere
+	// Must chmod after chown because chown clears the setuid/setgid bits.
+	return os.Chmod(path, mode)
+}
@@ -19,11 +19,13 @@ package util
 import (
 	"archive/tar"
 	"bytes"
+	"fmt"
 	"io/ioutil"
 	"os"
 	"path/filepath"
 	"reflect"
 	"sort"
+	"strings"
 	"testing"
 
 	"github.com/GoogleContainerTools/kaniko/testutil"
@@ -177,6 +179,38 @@ func Test_ParentDirectories(t *testing.T) {
 	}
 }
 
+func Test_ParentDirectoriesWithoutLeadingSlash(t *testing.T) {
+	tests := []struct {
+		name     string
+		path     string
+		expected []string
+	}{
+		{
+			name: "regular path",
+			path: "/path/to/dir",
+			expected: []string{
+				"/",
+				"path",
+				"path/to",
+			},
+		},
+		{
+			name: "current directory",
+			path: ".",
+			expected: []string{
+				"/",
+			},
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			actual := ParentDirectoriesWithoutLeadingSlash(tt.path)
+			testutil.CheckErrorAndDeepEqual(t, false, nil, tt.expected, actual)
+		})
+	}
+}
+
 func Test_CheckWhitelist(t *testing.T) {
 	type args struct {
 		path string
@@ -310,6 +344,84 @@ func TestHasFilepathPrefix(t *testing.T) {
 	}
 }
 
+func BenchmarkHasFilepathPrefix(b *testing.B) {
+	tests := []struct {
+		path            string
+		prefix          string
+		prefixMatchOnly bool
+	}{
+		{
+			path:            "/foo/bar",
+			prefix:          "/foo",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz",
+			prefix:          "/foo",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz/foo",
+			prefix:          "/foo",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz/foo/foobar",
+			prefix:          "/foo",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar",
+			prefix:          "/foo/bar",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz",
+			prefix:          "/foo/bar",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz/foo",
+			prefix:          "/foo/bar",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz/foo/foobar",
+			prefix:          "/foo/bar",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar",
+			prefix:          "/foo/bar/baz",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz",
+			prefix:          "/foo/bar/baz",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz/foo",
+			prefix:          "/foo/bar/baz",
+			prefixMatchOnly: true,
+		},
+		{
+			path:            "/foo/bar/baz/foo/foobar",
+			prefix:          "/foo/bar/baz",
+			prefixMatchOnly: true,
+		},
+	}
+	for _, ts := range tests {
+		name := fmt.Sprint("PathDepth=", strings.Count(ts.path, "/"), ",PrefixDepth=", strings.Count(ts.prefix, "/"))
+		b.Run(name, func(b *testing.B) {
+			b.ReportAllocs()
+			for i := 0; i < b.N; i++ {
+				HasFilepathPrefix(ts.path, ts.prefix, ts.prefixMatchOnly)
+			}
+		})
+	}
+}
+
 type checker func(root string, t *testing.T)
 
 func fileExists(p string) checker {
@@ -503,6 +615,30 @@ func TestExtractFile(t *testing.T) {
 				filesAreHardlinks("/bin/uncompress", "/bin/gzip"),
 			},
 		},
+		{
+			name:     "file with setuid bit",
+			contents: []byte("helloworld"),
+			hdrs:     []*tar.Header{fileHeader("./bar", "helloworld", 04644)},
+			checkers: []checker{
+				fileExists("/bar"),
+				fileMatches("/bar", []byte("helloworld")),
+				permissionsMatch("/bar", 0644|os.ModeSetuid),
+			},
+		},
+		{
+			name:     "dir with sticky bit",
+			contents: []byte("helloworld"),
+			hdrs: []*tar.Header{
+				dirHeader("./foo", 01755),
+				fileHeader("./foo/bar", "helloworld", 0644),
+			},
+			checkers: []checker{
+				fileExists("/foo/bar"),
+				fileMatches("/foo/bar", []byte("helloworld")),
+				permissionsMatch("/foo/bar", 0644),
+				permissionsMatch("/foo", 0755|os.ModeDir|os.ModeSticky),
+			},
+		},
 	}
 
 	for _, tc := range tcs {
@@ -638,3 +774,54 @@ func Test_childDirInWhitelist(t *testing.T) {
 		})
 	}
 }
+
+func Test_correctDockerignoreFileIsUsed(t *testing.T) {
+	type args struct {
+		dockerfilepath string
+		buildcontext   string
+		excluded       []string
+		included       []string
+	}
+	tests := []struct {
+		name string
+		args args
+	}{
+		{
+			name: "relative dockerfile used",
+			args: args{
+				dockerfilepath: "../../integration/dockerfiles/Dockerfile_dockerignore_relative",
+				buildcontext:   "../../integration/",
+				excluded:       []string{"ignore_relative/bar"},
+				included:       []string{"ignore_relative/foo", "ignore/bar"},
+			},
+		},
+		{
+			name: "context dockerfile is used",
+			args: args{
+				dockerfilepath: "../../integration/dockerfiles/Dockerfile_test_dockerignore",
+				buildcontext:   "../../integration/",
+				excluded:       []string{"ignore/bar"},
+				included:       []string{"ignore/foo", "ignore_relative/bar"},
+			},
+		},
+	}
+	for _, tt := range tests {
+		if err := GetExcludedFiles(tt.args.dockerfilepath, tt.args.buildcontext); err != nil {
+			t.Fatal(err)
+		}
+		for _, excl := range tt.args.excluded {
+			t.Run(tt.name+" to exclude "+excl, func(t *testing.T) {
+				if !excludeFile(excl, tt.args.buildcontext) {
+					t.Errorf("'%v' not excluded", excl)
+				}
+			})
+		}
+		for _, incl := range tt.args.included {
+			t.Run(tt.name+" to include "+incl, func(t *testing.T) {
+				if excludeFile(incl, tt.args.buildcontext) {
+					t.Errorf("'%v' not included", incl)
				}
+			})
+		}
+	}
+}
@@ -102,7 +102,7 @@ func remoteImage(image string, opts *config.KanikoOptions) (v1.Image, error) {
 
 	registryName := ref.Context().RegistryStr()
 	if opts.InsecurePull || opts.InsecureRegistries.Contains(registryName) {
-		newReg, err := name.NewInsecureRegistry(registryName, name.WeakValidation)
+		newReg, err := name.NewRegistry(registryName, name.WeakValidation, name.Insecure)
 		if err != nil {
 			return nil, err
 		}
@@ -149,5 +149,5 @@ func cachedImage(opts *config.KanikoOptions, image string) (v1.Image, error) {
 		cacheKey = d.String()
 	}
 
-	return cache.LocalSource(opts, cacheKey)
+	return cache.LocalSource(&opts.CacheOptions, cacheKey)
 }
@@ -25,6 +25,7 @@ import (
 	"io/ioutil"
 	"os"
 	"path/filepath"
+	"strings"
 	"syscall"
 
 	"github.com/docker/docker/pkg/archive"
@@ -54,7 +55,6 @@ func (t *Tar) Close() {
 
 // AddFileToTar adds the file at path p to the tar
 func (t *Tar) AddFileToTar(p string) error {
-	logrus.Debugf("Adding file %s to tar", p)
 	i, err := os.Lstat(p)
 	if err != nil {
 		return fmt.Errorf("Failed to get file info for %s: %s", p, err)
@@ -75,7 +75,14 @@ func (t *Tar) AddFileToTar(p string) error {
 	if err != nil {
 		return err
 	}
-	hdr.Name = p
+
+	if p != "/" {
+		// Docker uses no leading / in the tarball
+		hdr.Name = strings.TrimLeft(p, "/")
+	} else {
+		// allow entry for / to preserve permission changes etc. (currently ignored anyway by Docker runtime)
+		hdr.Name = p
+	}
 
 	hardlink, linkDst := t.checkHardlink(p, i)
 	if hardlink {
@@ -105,7 +112,8 @@ func (t *Tar) Whiteout(p string) error {
 	name := ".wh." + filepath.Base(p)
 
 	th := &tar.Header{
-		Name: filepath.Join(dir, name),
+		// Docker uses no leading / in the tarball
+		Name: strings.TrimLeft(filepath.Join(dir, name), "/"),
 		Size: 0,
 	}
 	if err := t.w.WriteHeader(th); err != nil {
@@ -20,14 +20,13 @@ import (
 	"crypto/md5"
 	"crypto/sha256"
 	"encoding/hex"
-	"encoding/json"
 	"io"
 	"os"
 	"strconv"
+	"sync"
 	"syscall"
 
-	"github.com/google/go-containerregistry/pkg/v1"
-	"github.com/google/go-containerregistry/pkg/v1/partial"
+	highwayhash "github.com/minio/HighwayHash"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
 )
@@ -47,8 +46,15 @@ func ConfigureLogging(logLevel string) error {
 
 // Hasher returns a hash function, used in snapshotting to determine if a file has changed
 func Hasher() func(string) (string, error) {
+	pool := sync.Pool{
+		New: func() interface{} {
+			b := make([]byte, highwayhash.Size*10*1024)
+			return &b
+		},
+	}
+	key := make([]byte, highwayhash.Size)
 	hasher := func(p string) (string, error) {
-		h := md5.New()
+		h, _ := highwayhash.New(key)
 		fi, err := os.Lstat(p)
 		if err != nil {
 			return "", err
@@ -66,7 +72,9 @@ func Hasher() func(string) (string, error) {
 			return "", err
 		}
 		defer f.Close()
-		if _, err := io.Copy(h, f); err != nil {
+		buf := pool.Get().(*[]byte)
+		defer pool.Put(buf)
+		if _, err := io.CopyBuffer(h, f, *buf); err != nil {
 			return "", err
 		}
 	}
@@ -130,29 +138,3 @@ func SHA256(r io.Reader) (string, error) {
 	}
 	return hex.EncodeToString(hasher.Sum(make([]byte, 0, hasher.Size()))), nil
 }
-
-type ReproducibleManifest struct {
-	Layers []v1.Descriptor
-	Config v1.Config
-}
-
-func ReproducibleDigest(img partial.WithManifestAndConfigFile) (string, error) {
-	mfst, err := img.Manifest()
-	if err != nil {
-		return "", err
-	}
-	cfg, err := img.ConfigFile()
-	if err != nil {
-		return "", err
-	}
-	rm := ReproducibleManifest{
-		Layers: mfst.Layers,
-		Config: cfg.Config,
-	}
-
-	b, err := json.Marshal(rm)
-	if err != nil {
-		return "", err
-	}
-	return string(b), nil
-}
@@ -15,8 +15,9 @@
 #!/bin/bash
 set -e
 
-if [ $# -lt 3 ];
-  then echo "Usage: run_in_docker.sh <path to Dockerfile> <context directory> <image tag> <cache>"
+if [ $# -lt 3 ]; then
+  echo "Usage: run_in_docker.sh <path to Dockerfile> <context directory> <image tag> <cache>"
+  exit 1
 fi
 
 dockerfile=$1
@@ -0,0 +1,41 @@
+Copyright (c) 2015, Emir Pasic
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+  list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice,
+  this list of conditions and the following disclaimer in the documentation
+  and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+-------------------------------------------------------------------------------
+
+AVL Tree:
+
+Copyright (c) 2017 Benjamin Scher Purcell <benjapurcell@gmail.com>
+
+Permission to use, copy, modify, and distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
@@ -0,0 +1,35 @@
+// Copyright (c) 2015, Emir Pasic. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package containers provides core interfaces and functions for data structures.
+//
+// Container is the base interface for all data structures to implement.
+//
+// Iterators provide stateful iterators.
+//
+// Enumerable provides Ruby inspired (each, select, map, find, any?, etc.) container functions.
+//
+// Serialization provides serializers (marshalers) and deserializers (unmarshalers).
+package containers
+
+import "github.com/emirpasic/gods/utils"
+
+// Container is base interface that all data structures implement.
+type Container interface {
+	Empty() bool
+	Size() int
+	Clear()
+	Values() []interface{}
+}
+
+// GetSortedValues returns sorted container's elements with respect to the passed comparator.
+// Does not effect the ordering of elements within the container.
+func GetSortedValues(container Container, comparator utils.Comparator) []interface{} {
+	values := container.Values()
+	if len(values) < 2 {
+		return values
+	}
+	utils.Sort(values, comparator)
+	return values
+}
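`GetSortedValues` above sorts a container's values with a caller-supplied comparator without mutating the container. The same shape with only the standard library; `Comparator` and `intComparator` are local stand-ins for gods' `utils` package:

```go
package main

import (
	"fmt"
	"sort"
)

// Comparator mirrors gods' utils.Comparator contract: negative if a < b,
// zero if equal, positive if a > b.
type Comparator func(a, b interface{}) int

func intComparator(a, b interface{}) int {
	return a.(int) - b.(int)
}

// getSortedValues returns a sorted copy of values, leaving the input
// untouched, matching GetSortedValues' "does not affect the container"
// contract.
func getSortedValues(values []interface{}, cmp Comparator) []interface{} {
	out := make([]interface{}, len(values))
	copy(out, values)
	if len(out) < 2 {
		return out
	}
	sort.Slice(out, func(i, j int) bool { return cmp(out[i], out[j]) < 0 })
	return out
}

func main() {
	vals := []interface{}{3, 1, 2}
	fmt.Println(getSortedValues(vals, intComparator)) // [1 2 3]
	fmt.Println(vals)                                 // [3 1 2] (unchanged)
}
```

Note that `getSortedValues` copies before sorting; gods' version sorts the slice returned by `Values()`, which is already a copy of the container's contents.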
@@ -0,0 +1,61 @@
+// Copyright (c) 2015, Emir Pasic. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package containers
+
+// EnumerableWithIndex provides functions for ordered containers whose values can be fetched by an index.
+type EnumerableWithIndex interface {
+	// Each calls the given function once for each element, passing that element's index and value.
+	Each(func(index int, value interface{}))
+
+	// Map invokes the given function once for each element and returns a
+	// container containing the values returned by the given function.
+	// TODO need help on how to enforce this in containers (don't want to type assert when chaining)
+	// Map(func(index int, value interface{}) interface{}) Container
+
+	// Select returns a new container containing all elements for which the given function returns a true value.
+	// TODO need help on how to enforce this in containers (don't want to type assert when chaining)
+	// Select(func(index int, value interface{}) bool) Container
+
+	// Any passes each element of the container to the given function and
+	// returns true if the function ever returns true for any element.
+	Any(func(index int, value interface{}) bool) bool
+
+	// All passes each element of the container to the given function and
+	// returns true if the function returns true for all elements.
+	All(func(index int, value interface{}) bool) bool
+
+	// Find passes each element of the container to the given function and returns
+	// the first (index,value) for which the function is true or -1,nil otherwise
+	// if no element matches the criteria.
+	Find(func(index int, value interface{}) bool) (int, interface{})
+}
+
+// EnumerableWithKey provides functions for ordered containers whose elements are key/value pairs.
+type EnumerableWithKey interface {
+	// Each calls the given function once for each element, passing that element's key and value.
+	Each(func(key interface{}, value interface{}))
+
+	// Map invokes the given function once for each element and returns a container
+	// containing the values returned by the given function as key/value pairs.
+	// TODO need help on how to enforce this in containers (don't want to type assert when chaining)
+	// Map(func(key interface{}, value interface{}) (interface{}, interface{})) Container
+
+	// Select returns a new container containing all elements for which the given function returns a true value.
+	// TODO need help on how to enforce this in containers (don't want to type assert when chaining)
+	// Select(func(key interface{}, value interface{}) bool) Container
+
+	// Any passes each element of the container to the given function and
+	// returns true if the function ever returns true for any element.
+	Any(func(key interface{}, value interface{}) bool) bool
+
+	// All passes each element of the container to the given function and
+	// returns true if the function returns true for all elements.
+	All(func(key interface{}, value interface{}) bool) bool
+
+	// Find passes each element of the container to the given function and returns
+	// the first (key,value) for which the function is true or nil,nil otherwise if no element
+	// matches the criteria.
+	Find(func(key interface{}, value interface{}) bool) (interface{}, interface{})
+}
@@ -0,0 +1,109 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package containers

// IteratorWithIndex is a stateful iterator for ordered containers whose values can be fetched by an index.
type IteratorWithIndex interface {
	// Next moves the iterator to the next element and returns true if there was a next element in the container.
	// If Next() returns true, then next element's index and value can be retrieved by Index() and Value().
	// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
	// Modifies the state of the iterator.
	Next() bool

	// Value returns the current element's value.
	// Does not modify the state of the iterator.
	Value() interface{}

	// Index returns the current element's index.
	// Does not modify the state of the iterator.
	Index() int

	// Begin resets the iterator to its initial state (one-before-first).
	// Call Next() to fetch the first element if any.
	Begin()

	// First moves the iterator to the first element and returns true if there was a first element in the container.
	// If First() returns true, then first element's index and value can be retrieved by Index() and Value().
	// Modifies the state of the iterator.
	First() bool
}

// IteratorWithKey is a stateful iterator for ordered containers whose elements are key/value pairs.
type IteratorWithKey interface {
	// Next moves the iterator to the next element and returns true if there was a next element in the container.
	// If Next() returns true, then next element's key and value can be retrieved by Key() and Value().
	// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
	// Modifies the state of the iterator.
	Next() bool

	// Value returns the current element's value.
	// Does not modify the state of the iterator.
	Value() interface{}

	// Key returns the current element's key.
	// Does not modify the state of the iterator.
	Key() interface{}

	// Begin resets the iterator to its initial state (one-before-first).
	// Call Next() to fetch the first element if any.
	Begin()

	// First moves the iterator to the first element and returns true if there was a first element in the container.
	// If First() returns true, then first element's key and value can be retrieved by Key() and Value().
	// Modifies the state of the iterator.
	First() bool
}

// ReverseIteratorWithIndex is a stateful iterator for ordered containers whose values can be fetched by an index.
//
// Essentially it is the same as IteratorWithIndex, but additionally provides:
//
// Prev() function to enable traversal in reverse
//
// Last() function to move the iterator to the last element.
//
// End() function to move the iterator past the last element (one-past-the-end).
type ReverseIteratorWithIndex interface {
	// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
	// If Prev() returns true, then previous element's index and value can be retrieved by Index() and Value().
	// Modifies the state of the iterator.
	Prev() bool

	// End moves the iterator past the last element (one-past-the-end).
	// Call Prev() to fetch the last element if any.
	End()

	// Last moves the iterator to the last element and returns true if there was a last element in the container.
	// If Last() returns true, then last element's index and value can be retrieved by Index() and Value().
	// Modifies the state of the iterator.
	Last() bool

	IteratorWithIndex
}

// ReverseIteratorWithKey is a stateful iterator for ordered containers whose elements are key/value pairs.
//
// Essentially it is the same as IteratorWithKey, but additionally provides:
//
// Prev() function to enable traversal in reverse
//
// Last() function to move the iterator to the last element.
type ReverseIteratorWithKey interface {
	// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
	// If Prev() returns true, then previous element's key and value can be retrieved by Key() and Value().
	// Modifies the state of the iterator.
	Prev() bool

	// End moves the iterator past the last element (one-past-the-end).
	// Call Prev() to fetch the last element if any.
	End()

	// Last moves the iterator to the last element and returns true if there was a last element in the container.
	// If Last() returns true, then last element's key and value can be retrieved by Key() and Value().
	// Modifies the state of the iterator.
	Last() bool

	IteratorWithKey
}

@@ -0,0 +1,17 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package containers

// JSONSerializer provides JSON serialization
type JSONSerializer interface {
	// ToJSON outputs the JSON representation of the container's elements.
	ToJSON() ([]byte, error)
}

// JSONDeserializer provides JSON deserialization
type JSONDeserializer interface {
	// FromJSON populates the container's elements from the input JSON representation.
	FromJSON([]byte) error
}

@@ -0,0 +1,200 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package arraylist implements the array list.
//
// Structure is not thread safe.
//
// Reference: https://en.wikipedia.org/wiki/List_%28abstract_data_type%29
package arraylist

import (
	"fmt"
	"strings"

	"github.com/emirpasic/gods/lists"
	"github.com/emirpasic/gods/utils"
)

func assertListImplementation() {
	var _ lists.List = (*List)(nil)
}

// List holds the elements in a slice
type List struct {
	elements []interface{}
	size     int
}

const (
	growthFactor = float32(2.0)  // growth by 100%
	shrinkFactor = float32(0.25) // shrink when size is 25% of capacity (0 means never shrink)
)

// New instantiates a new empty list
func New() *List {
	return &List{}
}

// Add appends a value at the end of the list
func (list *List) Add(values ...interface{}) {
	list.growBy(len(values))
	for _, value := range values {
		list.elements[list.size] = value
		list.size++
	}
}

// Get returns the element at index.
// Second return parameter is true if index is within bounds of the array and array is not empty, otherwise false.
func (list *List) Get(index int) (interface{}, bool) {
	if !list.withinRange(index) {
		return nil, false
	}
	return list.elements[index], true
}

// Remove removes the element at the given index from the list.
func (list *List) Remove(index int) {
	if !list.withinRange(index) {
		return
	}

	list.elements[index] = nil                                    // cleanup reference
	copy(list.elements[index:], list.elements[index+1:list.size]) // shift to the left by one (slow operation, needs optimization)
	list.size--

	list.shrink()
}

// Contains checks if elements (one or more) are present in the list.
// All elements have to be present in the list for the method to return true.
// Time complexity is O(n*m), where n is the size of the list and m the number of search values.
// Returns true if no arguments are passed at all, i.e. a list is always a superset of the empty set.
func (list *List) Contains(values ...interface{}) bool {
	for _, searchValue := range values {
		found := false
		for _, element := range list.elements {
			if element == searchValue {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

// Values returns all elements in the list.
func (list *List) Values() []interface{} {
	newElements := make([]interface{}, list.size)
	copy(newElements, list.elements[:list.size])
	return newElements
}

// Empty returns true if list does not contain any elements.
func (list *List) Empty() bool {
	return list.size == 0
}

// Size returns number of elements within the list.
func (list *List) Size() int {
	return list.size
}

// Clear removes all elements from the list.
func (list *List) Clear() {
	list.size = 0
	list.elements = []interface{}{}
}

// Sort sorts values in-place using the given comparator.
func (list *List) Sort(comparator utils.Comparator) {
	if len(list.elements) < 2 {
		return
	}
	utils.Sort(list.elements[:list.size], comparator)
}

// Swap swaps the two values at the specified positions.
func (list *List) Swap(i, j int) {
	if list.withinRange(i) && list.withinRange(j) {
		list.elements[i], list.elements[j] = list.elements[j], list.elements[i]
	}
}

// Insert inserts values at the specified index position, shifting the value at that position (if any) and any subsequent elements to the right.
// Does not do anything if position is negative or bigger than list's size.
// Note: position equal to list's size is valid, i.e. append.
func (list *List) Insert(index int, values ...interface{}) {
	if !list.withinRange(index) {
		// Append
		if index == list.size {
			list.Add(values...)
		}
		return
	}

	l := len(values)
	list.growBy(l)
	list.size += l
	// Shift old to right
	for i := list.size - 1; i >= index+l; i-- {
		list.elements[i] = list.elements[i-l]
	}
	// Insert new
	for i, value := range values {
		list.elements[index+i] = value
	}
}

// String returns a string representation of container
func (list *List) String() string {
	str := "ArrayList\n"
	values := []string{}
	for _, value := range list.elements[:list.size] {
		values = append(values, fmt.Sprintf("%v", value))
	}
	str += strings.Join(values, ", ")
	return str
}

// Check that the index is within bounds of the list
func (list *List) withinRange(index int) bool {
	return index >= 0 && index < list.size
}

func (list *List) resize(cap int) {
	newElements := make([]interface{}, cap)
	copy(newElements, list.elements)
	list.elements = newElements
}

// Expand the array if necessary, i.e. capacity will be reached if we add n elements
func (list *List) growBy(n int) {
	// When capacity is reached, grow by a factor of growthFactor and add number of elements
	currentCapacity := cap(list.elements)
	if list.size+n >= currentCapacity {
		newCapacity := int(growthFactor * float32(currentCapacity+n))
		list.resize(newCapacity)
	}
}

// Shrink the array if necessary, i.e. when size is shrinkFactor percent of current capacity
func (list *List) shrink() {
	if shrinkFactor == 0.0 {
		return
	}
	// Shrink when size is at shrinkFactor * capacity
	currentCapacity := cap(list.elements)
	if list.size <= int(float32(currentCapacity)*shrinkFactor) {
		list.resize(list.size)
	}
}

@@ -0,0 +1,79 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package arraylist

import "github.com/emirpasic/gods/containers"

func assertEnumerableImplementation() {
	var _ containers.EnumerableWithIndex = (*List)(nil)
}

// Each calls the given function once for each element, passing that element's index and value.
func (list *List) Each(f func(index int, value interface{})) {
	iterator := list.Iterator()
	for iterator.Next() {
		f(iterator.Index(), iterator.Value())
	}
}

// Map invokes the given function once for each element and returns a
// container containing the values returned by the given function.
func (list *List) Map(f func(index int, value interface{}) interface{}) *List {
	newList := &List{}
	iterator := list.Iterator()
	for iterator.Next() {
		newList.Add(f(iterator.Index(), iterator.Value()))
	}
	return newList
}

// Select returns a new container containing all elements for which the given function returns a true value.
func (list *List) Select(f func(index int, value interface{}) bool) *List {
	newList := &List{}
	iterator := list.Iterator()
	for iterator.Next() {
		if f(iterator.Index(), iterator.Value()) {
			newList.Add(iterator.Value())
		}
	}
	return newList
}

// Any passes each element of the collection to the given function and
// returns true if the function ever returns true for any element.
func (list *List) Any(f func(index int, value interface{}) bool) bool {
	iterator := list.Iterator()
	for iterator.Next() {
		if f(iterator.Index(), iterator.Value()) {
			return true
		}
	}
	return false
}

// All passes each element of the collection to the given function and
// returns true if the function returns true for all elements.
func (list *List) All(f func(index int, value interface{}) bool) bool {
	iterator := list.Iterator()
	for iterator.Next() {
		if !f(iterator.Index(), iterator.Value()) {
			return false
		}
	}
	return true
}

// Find passes each element of the container to the given function and returns
// the first (index,value) for which the function is true, or -1,nil if no
// element matches the criteria.
func (list *List) Find(f func(index int, value interface{}) bool) (int, interface{}) {
	iterator := list.Iterator()
	for iterator.Next() {
		if f(iterator.Index(), iterator.Value()) {
			return iterator.Index(), iterator.Value()
		}
	}
	return -1, nil
}

@@ -0,0 +1,83 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package arraylist

import "github.com/emirpasic/gods/containers"

func assertIteratorImplementation() {
	var _ containers.ReverseIteratorWithIndex = (*Iterator)(nil)
}

// Iterator holding the iterator's state
type Iterator struct {
	list  *List
	index int
}

// Iterator returns a stateful iterator whose values can be fetched by an index.
func (list *List) Iterator() Iterator {
	return Iterator{list: list, index: -1}
}

// Next moves the iterator to the next element and returns true if there was a next element in the container.
// If Next() returns true, then next element's index and value can be retrieved by Index() and Value().
// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
// Modifies the state of the iterator.
func (iterator *Iterator) Next() bool {
	if iterator.index < iterator.list.size {
		iterator.index++
	}
	return iterator.list.withinRange(iterator.index)
}

// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
// If Prev() returns true, then previous element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Prev() bool {
	if iterator.index >= 0 {
		iterator.index--
	}
	return iterator.list.withinRange(iterator.index)
}

// Value returns the current element's value.
// Does not modify the state of the iterator.
func (iterator *Iterator) Value() interface{} {
	return iterator.list.elements[iterator.index]
}

// Index returns the current element's index.
// Does not modify the state of the iterator.
func (iterator *Iterator) Index() int {
	return iterator.index
}

// Begin resets the iterator to its initial state (one-before-first).
// Call Next() to fetch the first element if any.
func (iterator *Iterator) Begin() {
	iterator.index = -1
}

// End moves the iterator past the last element (one-past-the-end).
// Call Prev() to fetch the last element if any.
func (iterator *Iterator) End() {
	iterator.index = iterator.list.size
}

// First moves the iterator to the first element and returns true if there was a first element in the container.
// If First() returns true, then first element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) First() bool {
	iterator.Begin()
	return iterator.Next()
}

// Last moves the iterator to the last element and returns true if there was a last element in the container.
// If Last() returns true, then last element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Last() bool {
	iterator.End()
	return iterator.Prev()
}

@@ -0,0 +1,29 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package arraylist

import (
	"encoding/json"

	"github.com/emirpasic/gods/containers"
)

func assertSerializationImplementation() {
	var _ containers.JSONSerializer = (*List)(nil)
	var _ containers.JSONDeserializer = (*List)(nil)
}

// ToJSON outputs the JSON representation of the list's elements.
func (list *List) ToJSON() ([]byte, error) {
	return json.Marshal(list.elements[:list.size])
}

// FromJSON populates the list's elements from the input JSON representation.
func (list *List) FromJSON(data []byte) error {
	err := json.Unmarshal(data, &list.elements)
	if err == nil {
		list.size = len(list.elements)
	}
	return err
}

@@ -0,0 +1,32 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package lists provides an abstract List interface.
//
// In computer science, a list or sequence is an abstract data type that represents an ordered sequence of values, where the same value may occur more than once. An instance of a list is a computer representation of the mathematical concept of a finite sequence; the (potentially) infinite analog of a list is a stream. Lists are a basic example of containers, as they contain other values. If the same value occurs multiple times, each occurrence is considered a distinct item.
//
// Reference: https://en.wikipedia.org/wiki/List_%28abstract_data_type%29
package lists

import (
	"github.com/emirpasic/gods/containers"
	"github.com/emirpasic/gods/utils"
)

// List interface that all lists implement
type List interface {
	Get(index int) (interface{}, bool)
	Remove(index int)
	Add(values ...interface{})
	Contains(values ...interface{}) bool
	Sort(comparator utils.Comparator)
	Swap(index1, index2 int)
	Insert(index int, values ...interface{})

	containers.Container
	// Empty() bool
	// Size() int
	// Clear()
	// Values() []interface{}
}

@@ -0,0 +1,163 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package binaryheap implements a binary heap backed by an array list.
//
// Comparator defines this heap as either min or max heap.
//
// Structure is not thread safe.
//
// Reference: http://en.wikipedia.org/wiki/Binary_heap
package binaryheap

import (
	"fmt"
	"strings"

	"github.com/emirpasic/gods/lists/arraylist"
	"github.com/emirpasic/gods/trees"
	"github.com/emirpasic/gods/utils"
)

func assertTreeImplementation() {
	var _ trees.Tree = (*Heap)(nil)
}

// Heap holds elements in an array-list
type Heap struct {
	list       *arraylist.List
	Comparator utils.Comparator
}

// NewWith instantiates a new empty heap with the custom comparator.
func NewWith(comparator utils.Comparator) *Heap {
	return &Heap{list: arraylist.New(), Comparator: comparator}
}

// NewWithIntComparator instantiates a new empty heap with the IntComparator, i.e. elements are of type int.
func NewWithIntComparator() *Heap {
	return &Heap{list: arraylist.New(), Comparator: utils.IntComparator}
}

// NewWithStringComparator instantiates a new empty heap with the StringComparator, i.e. elements are of type string.
func NewWithStringComparator() *Heap {
	return &Heap{list: arraylist.New(), Comparator: utils.StringComparator}
}

// Push adds a value onto the heap and bubbles it up accordingly.
func (heap *Heap) Push(values ...interface{}) {
	if len(values) == 1 {
		heap.list.Add(values[0])
		heap.bubbleUp()
	} else {
		// Reference: https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap
		for _, value := range values {
			heap.list.Add(value)
		}
		size := heap.list.Size()/2 + 1
		for i := size; i >= 0; i-- {
			heap.bubbleDownIndex(i)
		}
	}
}

// Pop removes the top element of the heap and returns it, or nil if the heap is empty.
// Second return parameter is true, unless the heap was empty and there was nothing to pop.
func (heap *Heap) Pop() (value interface{}, ok bool) {
	value, ok = heap.list.Get(0)
	if !ok {
		return
	}
	lastIndex := heap.list.Size() - 1
	heap.list.Swap(0, lastIndex)
	heap.list.Remove(lastIndex)
	heap.bubbleDown()
	return
}

// Peek returns the top element of the heap without removing it, or nil if the heap is empty.
// Second return parameter is true, unless the heap was empty and there was nothing to peek.
func (heap *Heap) Peek() (value interface{}, ok bool) {
	return heap.list.Get(0)
}

// Empty returns true if heap does not contain any elements.
func (heap *Heap) Empty() bool {
	return heap.list.Empty()
}

// Size returns number of elements within the heap.
func (heap *Heap) Size() int {
	return heap.list.Size()
}

// Clear removes all elements from the heap.
func (heap *Heap) Clear() {
	heap.list.Clear()
}

// Values returns all elements in the heap.
func (heap *Heap) Values() []interface{} {
	return heap.list.Values()
}

// String returns a string representation of container
func (heap *Heap) String() string {
	str := "BinaryHeap\n"
	values := []string{}
	for _, value := range heap.list.Values() {
		values = append(values, fmt.Sprintf("%v", value))
	}
	str += strings.Join(values, ", ")
	return str
}

// Performs the "bubble down" operation. This is to place the element that is at the root
// of the heap in its correct place so that the heap maintains the min/max-heap order property.
func (heap *Heap) bubbleDown() {
	heap.bubbleDownIndex(0)
}

// Performs the "bubble down" operation. This is to place the element that is at the given index
// of the heap in its correct place so that the heap maintains the min/max-heap order property.
func (heap *Heap) bubbleDownIndex(index int) {
	size := heap.list.Size()
	for leftIndex := index<<1 + 1; leftIndex < size; leftIndex = index<<1 + 1 {
		rightIndex := index<<1 + 2
		smallerIndex := leftIndex
		leftValue, _ := heap.list.Get(leftIndex)
		rightValue, _ := heap.list.Get(rightIndex)
		if rightIndex < size && heap.Comparator(leftValue, rightValue) > 0 {
			smallerIndex = rightIndex
		}
		indexValue, _ := heap.list.Get(index)
		smallerValue, _ := heap.list.Get(smallerIndex)
		if heap.Comparator(indexValue, smallerValue) > 0 {
			heap.list.Swap(index, smallerIndex)
		} else {
			break
		}
		index = smallerIndex
	}
}

// Performs the "bubble up" operation. This is to place a newly inserted
// element (i.e. last element in the list) in its correct place so that
// the heap maintains the min/max-heap order property.
func (heap *Heap) bubbleUp() {
	index := heap.list.Size() - 1
	for parentIndex := (index - 1) >> 1; index > 0; parentIndex = (index - 1) >> 1 {
		indexValue, _ := heap.list.Get(index)
		parentValue, _ := heap.list.Get(parentIndex)
		if heap.Comparator(parentValue, indexValue) <= 0 {
			break
		}
		heap.list.Swap(index, parentIndex)
		index = parentIndex
	}
}

// Check that the index is within bounds of the list
func (heap *Heap) withinRange(index int) bool {
	return index >= 0 && index < heap.list.Size()
}

@@ -0,0 +1,84 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package binaryheap

import "github.com/emirpasic/gods/containers"

func assertIteratorImplementation() {
	var _ containers.ReverseIteratorWithIndex = (*Iterator)(nil)
}

// Iterator holding the iterator's state
type Iterator struct {
	heap  *Heap
	index int
}

// Iterator returns a stateful iterator whose values can be fetched by an index.
func (heap *Heap) Iterator() Iterator {
	return Iterator{heap: heap, index: -1}
}

// Next moves the iterator to the next element and returns true if there was a next element in the container.
// If Next() returns true, then next element's index and value can be retrieved by Index() and Value().
// If Next() was called for the first time, then it will point the iterator to the first element if it exists.
// Modifies the state of the iterator.
func (iterator *Iterator) Next() bool {
	if iterator.index < iterator.heap.Size() {
		iterator.index++
	}
	return iterator.heap.withinRange(iterator.index)
}

// Prev moves the iterator to the previous element and returns true if there was a previous element in the container.
// If Prev() returns true, then previous element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Prev() bool {
	if iterator.index >= 0 {
		iterator.index--
	}
	return iterator.heap.withinRange(iterator.index)
}

// Value returns the current element's value.
// Does not modify the state of the iterator.
func (iterator *Iterator) Value() interface{} {
	value, _ := iterator.heap.list.Get(iterator.index)
	return value
}

// Index returns the current element's index.
// Does not modify the state of the iterator.
func (iterator *Iterator) Index() int {
	return iterator.index
}

// Begin resets the iterator to its initial state (one-before-first).
// Call Next() to fetch the first element if any.
func (iterator *Iterator) Begin() {
	iterator.index = -1
}

// End moves the iterator past the last element (one-past-the-end).
// Call Prev() to fetch the last element if any.
func (iterator *Iterator) End() {
	iterator.index = iterator.heap.Size()
}

// First moves the iterator to the first element and returns true if there was a first element in the container.
// If First() returns true, then first element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) First() bool {
	iterator.Begin()
	return iterator.Next()
}

// Last moves the iterator to the last element and returns true if there was a last element in the container.
// If Last() returns true, then last element's index and value can be retrieved by Index() and Value().
// Modifies the state of the iterator.
func (iterator *Iterator) Last() bool {
	iterator.End()
	return iterator.Prev()
}
22 vendor/github.com/emirpasic/gods/trees/binaryheap/serialization.go generated vendored Normal file
@@ -0,0 +1,22 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package binaryheap

import "github.com/emirpasic/gods/containers"

func assertSerializationImplementation() {
	var _ containers.JSONSerializer = (*Heap)(nil)
	var _ containers.JSONDeserializer = (*Heap)(nil)
}

// ToJSON outputs the JSON representation of the heap's elements.
func (heap *Heap) ToJSON() ([]byte, error) {
	return heap.list.ToJSON()
}

// FromJSON populates the heap's elements from the input JSON representation.
func (heap *Heap) FromJSON(data []byte) error {
	return heap.list.FromJSON(data)
}

@@ -0,0 +1,21 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package trees provides an abstract Tree interface.
//
// In computer science, a tree is a widely used abstract data type (ADT) or data structure implementing this ADT that simulates a hierarchical tree structure, with a root value and subtrees of children with a parent node, represented as a set of linked nodes.
//
// Reference: https://en.wikipedia.org/wiki/Tree_%28data_structure%29
package trees

import "github.com/emirpasic/gods/containers"

// Tree interface that all trees implement
type Tree interface {
	containers.Container
	// Empty() bool
	// Size() int
	// Clear()
	// Values() []interface{}
}

@@ -0,0 +1,251 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package utils

import "time"

// Comparator will make type assertion (see IntComparator for example),
// which will panic if a or b are not of the asserted type.
//
// Should return a number:
//    negative , if a < b
//    zero     , if a == b
//    positive , if a > b
type Comparator func(a, b interface{}) int

// StringComparator provides a fast comparison on strings
func StringComparator(a, b interface{}) int {
	s1 := a.(string)
	s2 := b.(string)
	min := len(s2)
	if len(s1) < len(s2) {
		min = len(s1)
	}
	diff := 0
	for i := 0; i < min && diff == 0; i++ {
		diff = int(s1[i]) - int(s2[i])
	}
	if diff == 0 {
		diff = len(s1) - len(s2)
	}
	if diff < 0 {
		return -1
	}
	if diff > 0 {
		return 1
	}
	return 0
}

// IntComparator provides a basic comparison on int
func IntComparator(a, b interface{}) int {
	aAsserted := a.(int)
	bAsserted := b.(int)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// Int8Comparator provides a basic comparison on int8
func Int8Comparator(a, b interface{}) int {
	aAsserted := a.(int8)
	bAsserted := b.(int8)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// Int16Comparator provides a basic comparison on int16
func Int16Comparator(a, b interface{}) int {
	aAsserted := a.(int16)
	bAsserted := b.(int16)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// Int32Comparator provides a basic comparison on int32
func Int32Comparator(a, b interface{}) int {
	aAsserted := a.(int32)
	bAsserted := b.(int32)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// Int64Comparator provides a basic comparison on int64
func Int64Comparator(a, b interface{}) int {
	aAsserted := a.(int64)
	bAsserted := b.(int64)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// UIntComparator provides a basic comparison on uint
func UIntComparator(a, b interface{}) int {
	aAsserted := a.(uint)
	bAsserted := b.(uint)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// UInt8Comparator provides a basic comparison on uint8
func UInt8Comparator(a, b interface{}) int {
	aAsserted := a.(uint8)
	bAsserted := b.(uint8)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// UInt16Comparator provides a basic comparison on uint16
func UInt16Comparator(a, b interface{}) int {
	aAsserted := a.(uint16)
	bAsserted := b.(uint16)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// UInt32Comparator provides a basic comparison on uint32
func UInt32Comparator(a, b interface{}) int {
	aAsserted := a.(uint32)
	bAsserted := b.(uint32)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// UInt64Comparator provides a basic comparison on uint64
func UInt64Comparator(a, b interface{}) int {
	aAsserted := a.(uint64)
	bAsserted := b.(uint64)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// Float32Comparator provides a basic comparison on float32
func Float32Comparator(a, b interface{}) int {
	aAsserted := a.(float32)
	bAsserted := b.(float32)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// Float64Comparator provides a basic comparison on float64
func Float64Comparator(a, b interface{}) int {
	aAsserted := a.(float64)
	bAsserted := b.(float64)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// ByteComparator provides a basic comparison on byte
func ByteComparator(a, b interface{}) int {
	aAsserted := a.(byte)
	bAsserted := b.(byte)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// RuneComparator provides a basic comparison on rune
func RuneComparator(a, b interface{}) int {
	aAsserted := a.(rune)
	bAsserted := b.(rune)
	switch {
	case aAsserted > bAsserted:
		return 1
	case aAsserted < bAsserted:
		return -1
	default:
		return 0
	}
}

// TimeComparator provides a basic comparison on time.Time
func TimeComparator(a, b interface{}) int {
	aAsserted := a.(time.Time)
	bAsserted := b.(time.Time)

	switch {
	case aAsserted.After(bAsserted):
		return 1
	case aAsserted.Before(bAsserted):
		return -1
	default:
		return 0
	}
}
@@ -0,0 +1,29 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package utils

import "sort"

// Sort sorts values (in-place) with respect to the given comparator.
//
// Uses Go's sort (hybrid of quicksort for large and then insertion sort for smaller slices).
func Sort(values []interface{}, comparator Comparator) {
	sort.Sort(sortable{values, comparator})
}

type sortable struct {
	values     []interface{}
	comparator Comparator
}

func (s sortable) Len() int {
	return len(s.values)
}
func (s sortable) Swap(i, j int) {
	s.values[i], s.values[j] = s.values[j], s.values[i]
}
func (s sortable) Less(i, j int) bool {
	return s.comparator(s.values[i], s.values[j]) < 0
}
@@ -0,0 +1,47 @@
// Copyright (c) 2015, Emir Pasic. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package utils provides common utility functions.
//
// Provided functionalities:
// - sorting
// - comparators
package utils

import (
	"fmt"
	"strconv"
)

// ToString converts a value to string.
func ToString(value interface{}) string {
	switch value.(type) {
	case string:
		return value.(string)
	case int8:
		return strconv.FormatInt(int64(value.(int8)), 10)
	case int16:
		return strconv.FormatInt(int64(value.(int16)), 10)
	case int32:
		return strconv.FormatInt(int64(value.(int32)), 10)
	case int64:
		return strconv.FormatInt(int64(value.(int64)), 10)
	case uint8:
		return strconv.FormatUint(uint64(value.(uint8)), 10)
	case uint16:
		return strconv.FormatUint(uint64(value.(uint16)), 10)
	case uint32:
		return strconv.FormatUint(uint64(value.(uint32)), 10)
	case uint64:
		return strconv.FormatUint(uint64(value.(uint64)), 10)
	case float32:
		return strconv.FormatFloat(float64(value.(float32)), 'g', -1, 64)
	case float64:
		return strconv.FormatFloat(float64(value.(float64)), 'g', -1, 64)
	case bool:
		return strconv.FormatBool(value.(bool))
	default:
		return fmt.Sprintf("%+v", value)
	}
}
@@ -1 +0,0 @@
-../kenobi
@@ -72,7 +72,7 @@ func (h *helper) Authorization() (string, error) {
	var out bytes.Buffer
	cmd.Stdout = &out
-	err := h.r.Run(cmd)
+	cmdErr := h.r.Run(cmd)

	// If we see this specific message, it means the domain wasn't found
	// and we should fall back on anonymous auth.
@@ -81,16 +81,22 @@ func (h *helper) Authorization() (string, error) {
		return Anonymous.Authorization()
	}

-	if err != nil {
-		return "", err
-	}

	// Any other output should be parsed as JSON and the Username / Secret
	// fields used for Basic authentication.
	ho := helperOutput{}
	if err := json.Unmarshal([]byte(output), &ho); err != nil {
+		if cmdErr != nil {
+			// If we failed to parse output, it won't contain Secret, so returning it
+			// in an error should be fine.
+			return "", fmt.Errorf("invoking %s: %v; output: %s", helperName, cmdErr, output)
+		}
		return "", err
	}

+	if cmdErr != nil {
+		return "", fmt.Errorf("invoking %s: %v", helperName, cmdErr)
+	}
+
	b := Basic{Username: ho.Username, Password: ho.Secret}
	return b.Authorization()
}

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.

-// package k8schain exposes an implementation of the authn.Keychain interface
+// Package k8schain exposes an implementation of the authn.Keychain interface
// based on the semantics the Kubelet follows when pulling the images for a
// Pod in Kubernetes.
package k8schain
@@ -17,7 +17,7 @@ package k8schain
import (
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
-	"k8s.io/api/core/v1"
+	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
@@ -19,11 +19,11 @@ import (
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
|
||||
"github.com/google/go-containerregistry/pkg/logs"
|
||||
"github.com/google/go-containerregistry/pkg/name"
|
||||
)
|
||||
|
||||
|
|
@@ -100,19 +100,19 @@ var (
func (dk *defaultKeychain) Resolve(reg name.Registry) (Authenticator, error) {
	dir, err := configDir()
	if err != nil {
-		log.Printf("Unable to determine config dir: %v", err)
+		logs.Warn.Printf("Unable to determine config dir: %v", err)
		return Anonymous, nil
	}
	file := filepath.Join(dir, "config.json")
	content, err := ioutil.ReadFile(file)
	if err != nil {
-		log.Printf("Unable to read %q: %v", file, err)
+		logs.Warn.Printf("Unable to read %q: %v", file, err)
		return Anonymous, nil
	}

	var cf cfg
	if err := json.Unmarshal(content, &cf); err != nil {
-		log.Printf("Unable to parse %q: %v", file, err)
+		logs.Warn.Printf("Unable to parse %q: %v", file, err)
		return Anonymous, nil
	}

vendor/github.com/google/go-containerregistry/pkg/internal/retry/retry.go (generated, vendored, new file, 68 lines)

@@ -0,0 +1,68 @@
// Copyright 2019 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//    http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package retry provides methods for retrying operations. It is a thin wrapper
// around k8s.io/apimachinery/pkg/util/wait to make certain operations easier.
package retry

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/wait"
)

// This is implemented by several errors in the net package as well as our
// transport.Error.
type temporary interface {
	Temporary() bool
}

// IsTemporary returns true if err implements Temporary() and it returns true.
func IsTemporary(err error) bool {
	if te, ok := err.(temporary); ok && te.Temporary() {
		return true
	}
	return false
}

// IsNotNil returns true if err is not nil.
func IsNotNil(err error) bool {
	return err != nil
}

// Predicate determines whether an error should be retried.
type Predicate func(error) (retry bool)

// Retry retries a given function, f, until a predicate is satisfied, using
// exponential backoff. If the predicate is never satisfied, it will return the
// last error returned by f.
func Retry(f func() error, p Predicate, backoff wait.Backoff) (err error) {
	if f == nil {
		return fmt.Errorf("nil f passed to retry")
	}
	if p == nil {
		return fmt.Errorf("nil p passed to retry")
	}

	condition := func() (bool, error) {
		err = f()
		if p(err) {
			return false, nil
		}
		return true, err
	}

	wait.ExponentialBackoff(backoff, condition)
	return
}
@@ -0,0 +1,29 @@
// Copyright 2018 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//    http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package logs exposes the loggers used by this library.
package logs

import (
	"io/ioutil"
	"log"
)

var (
	// Warn is used to log non-fatal errors.
	Warn = log.New(ioutil.Discard, "", log.LstdFlags)

	// Progress is used to log notable, successful events.
	Progress = log.New(ioutil.Discard, "", log.LstdFlags)
)
@@ -19,15 +19,6 @@ import (
"unicode/utf8"
|
||||
)
|
||||
|
||||
// Strictness defines the level of strictness for name validation.
|
||||
type Strictness int
|
||||
|
||||
// Enums for CRUD operations.
|
||||
const (
|
||||
StrictValidation Strictness = iota
|
||||
WeakValidation
|
||||
)
|
||||
|
||||
// stripRunesFn returns a function which returns -1 (i.e. a value which
|
||||
// signals deletion in strings.Map) for runes in 'runes', and the rune otherwise.
|
||||
func stripRunesFn(runes string) func(rune) rune {
|
||||
|
|
|
|||
|
|
@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.

-// Package name defines structured types for representing image references.
package name

import (
@@ -63,8 +62,8 @@ func checkDigest(name string) error {
	return checkElement("digest", name, digestChars, 7+64, 7+64)
}

-// NewDigest returns a new Digest representing the given name, according to the given strictness.
-func NewDigest(name string, strict Strictness) (Digest, error) {
+// NewDigest returns a new Digest representing the given name.
+func NewDigest(name string, opts ...Option) (Digest, error) {
	// Split on "@"
	parts := strings.Split(name, digestDelim)
	if len(parts) != 2 {
@@ -73,17 +72,17 @@ func NewDigest(name string, strict Strictness) (Digest, error) {
	base := parts[0]
	digest := parts[1]

-	// We don't require a digest, but if we get one check it's valid,
-	// even when not being strict.
-	// If we are being strict, we want to validate the digest regardless in case
-	// it's empty.
-	if digest != "" || strict == StrictValidation {
-		if err := checkDigest(digest); err != nil {
-			return Digest{}, err
-		}
+	// Always check that the digest is valid.
+	if err := checkDigest(digest); err != nil {
+		return Digest{}, err
	}

-	repo, err := NewRepository(base, strict)
+	tag, err := NewTag(base, opts...)
+	if err == nil {
+		base = tag.Repository.Name()
+	}
+
+	repo, err := NewRepository(base, opts...)
	if err != nil {
		return Digest{}, err
	}

@@ -0,0 +1,42 @@
// Copyright 2018 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//    http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package name defines structured types for representing image references.
//
// What's in a name? For image references, not nearly enough!
//
// Image references look a lot like URLs, but they differ in that they don't
// contain the scheme (http or https), they can end with a :tag or a @digest
// (the latter being validated), and they perform defaulting for missing
// components.
//
// Since image references don't contain the scheme, we do our best to infer
// if we use http or https from the given hostname. We allow http fallback for
// any host that looks like localhost (localhost, 127.0.0.1, ::1), ends in
// ".local", or is in the "private" address space per RFC 1918. For everything
// else, we assume https only. To override this heuristic, use the Insecure
// option.
//
// Image references with a digest signal to us that we should verify the content
// of the image matches the digest. E.g. when pulling a Digest reference, we'll
// calculate the sha256 of the manifest returned by the registry and error out
// if it doesn't match what we asked for.
//
// For defaulting, we interpret "ubuntu" as
// "index.docker.io/library/ubuntu:latest" because we add the missing repo
// "library", the missing registry "index.docker.io", and the missing tag
// "latest". To disable this defaulting, use the StrictValidation option. This
// is useful e.g. to only allow image references that explicitly set a tag or
// digest, so that you don't accidentally pull "latest".
package name

@@ -0,0 +1,49 @@
// Copyright 2018 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//    http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package name

type options struct {
	strict   bool // weak by default
	insecure bool // secure by default
}

func makeOptions(opts ...Option) options {
	opt := options{}
	for _, o := range opts {
		o(&opt)
	}
	return opt
}

// Option is a functional option for name parsing.
type Option func(*options)

// StrictValidation is an Option that requires image references to be fully
// specified; i.e. no defaulting for registry (dockerhub), repo (library),
// or tag (latest).
func StrictValidation(opts *options) {
	opts.strict = true
}

// WeakValidation is an Option that sets defaults when parsing names, see
// StrictValidation.
func WeakValidation(opts *options) {
	opts.strict = false
}

// Insecure is an Option that allows image references to be fetched without TLS.
func Insecure(opts *options) {
	opts.insecure = true
}
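The `options`/`Option`/`makeOptions` trio in the name package is the standard functional-options pattern. A self-contained sketch (stdlib only, mirroring the names used there) shows how the options compose:

```go
package main

import "fmt"

// Mirrors the name package's functional-option pattern: each Option
// mutates the private options struct, and makeOptions folds them in order.
type options struct {
	strict   bool // weak by default
	insecure bool // secure by default
}

type Option func(*options)

func StrictValidation(o *options) { o.strict = true }
func Insecure(o *options)         { o.insecure = true }

func makeOptions(opts ...Option) options {
	opt := options{}
	for _, o := range opts {
		o(&opt)
	}
	return opt
}

func main() {
	// Callers pass zero or more options; omitted options keep the defaults.
	opt := makeOptions(StrictValidation, Insecure)
	fmt.Println(opt.strict, opt.insecure) // true true
}
```

This is why `NewRegistry(name, opts...)` and friends can grow new behaviors (like `Insecure`) without breaking existing call sites.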
@@ -38,11 +38,11 @@ type Reference interface {
}

// ParseReference parses the string as a reference, either by tag or digest.
-func ParseReference(s string, strict Strictness) (Reference, error) {
-	if t, err := NewTag(s, strict); err == nil {
+func ParseReference(s string, opts ...Option) (Reference, error) {
+	if t, err := NewTag(s, opts...); err == nil {
		return t, nil
	}
-	if d, err := NewDigest(s, strict); err == nil {
+	if d, err := NewDigest(s, opts...); err == nil {
		return d, nil
	}
	// TODO: Combine above errors into something more useful?
@@ -15,12 +15,14 @@
package name

import (
+	"net"
	"net/url"
	"regexp"
	"strings"
)

const (
+	// DefaultRegistry is Docker Hub, assumed when a hostname is omitted.
	DefaultRegistry      = "index.docker.io"
	defaultRegistryAlias = "docker.io"
)
@@ -63,11 +65,29 @@ func (r Registry) Scope(string) string {
return "registry:catalog:*"
|
||||
}
|
||||
|
||||
func (r Registry) isRFC1918() bool {
|
||||
ipStr := strings.Split(r.Name(), ":")[0]
|
||||
ip := net.ParseIP(ipStr)
|
||||
if ip == nil {
|
||||
return false
|
||||
}
|
||||
for _, cidr := range []string{"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"} {
|
||||
_, block, _ := net.ParseCIDR(cidr)
|
||||
if block.Contains(ip) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Scheme returns https scheme for all the endpoints except localhost or when explicitly defined.
|
||||
func (r Registry) Scheme() string {
|
||||
if r.insecure {
|
||||
return "http"
|
||||
}
|
||||
if r.isRFC1918() {
|
||||
return "http"
|
||||
}
|
||||
if strings.HasPrefix(r.Name(), "localhost:") {
|
||||
return "http"
|
||||
}
|
||||
|
|
@@ -94,8 +114,9 @@ func checkRegistry(name string) error {

// NewRegistry returns a Registry based on the given name.
// Strict validation requires explicit, valid RFC 3986 URI authorities to be given.
-func NewRegistry(name string, strict Strictness) (Registry, error) {
-	if strict == StrictValidation && len(name) == 0 {
+func NewRegistry(name string, opts ...Option) (Registry, error) {
+	opt := makeOptions(opts...)
+	if opt.strict && len(name) == 0 {
		return Registry{}, NewErrBadName("strict validation requires the registry to be explicitly defined")
	}
@@ -109,16 +130,13 @@ func NewRegistry(name string, strict Strictness) (Registry, error) {
		name = DefaultRegistry
	}

-	return Registry{registry: name}, nil
+	return Registry{registry: name, insecure: opt.insecure}, nil
}

// NewInsecureRegistry returns an Insecure Registry based on the given name.
-// Strict validation requires explicit, valid RFC 3986 URI authorities to be given.
-func NewInsecureRegistry(name string, strict Strictness) (Registry, error) {
-	reg, err := NewRegistry(name, strict)
-	if err != nil {
-		return Registry{}, err
-	}
-	reg.insecure = true
-	return reg, nil
+//
+// Deprecated: Use the Insecure Option with NewRegistry instead.
+func NewInsecureRegistry(name string, opts ...Option) (Registry, error) {
+	opts = append(opts, Insecure)
+	return NewRegistry(name, opts...)
}

@@ -68,7 +68,8 @@ func checkRepository(repository string) error {
}

// NewRepository returns a new Repository representing the given name, according to the given strictness.
-func NewRepository(name string, strict Strictness) (Repository, error) {
+func NewRepository(name string, opts ...Option) (Repository, error) {
+	opt := makeOptions(opts...)
	if len(name) == 0 {
		return Repository{}, NewErrBadName("a repository name must be specified")
	}
@@ -88,11 +89,11 @@ func NewRepository(name string, strict Strictness) (Repository, error) {
		return Repository{}, err
	}

-	reg, err := NewRegistry(registry, strict)
+	reg, err := NewRegistry(registry, opts...)
	if err != nil {
		return Repository{}, err
	}
-	if hasImplicitNamespace(repo, reg) && strict == StrictValidation {
+	if hasImplicitNamespace(repo, reg) && opt.strict {
		return Repository{}, NewErrBadName("strict validation requires the full repository path (missing 'library')")
	}
	return Repository{reg, repo}, nil

@@ -71,7 +71,8 @@ func checkTag(name string) error {
}

// NewTag returns a new Tag representing the given name, according to the given strictness.
-func NewTag(name string, strict Strictness) (Tag, error) {
+func NewTag(name string, opts ...Option) (Tag, error) {
+	opt := makeOptions(opts...)
	base := name
	tag := ""
@@ -87,13 +88,13 @@ func NewTag(name string, strict Strictness) (Tag, error) {
	// even when not being strict.
	// If we are being strict, we want to validate the tag regardless in case
	// it's empty.
-	if tag != "" || strict == StrictValidation {
+	if tag != "" || opt.strict {
		if err := checkTag(tag); err != nil {
			return Tag{}, err
		}
	}

-	repo, err := NewRepository(base, strict)
+	repo, err := NewRepository(base, opts...)
	if err != nil {
		return Tag{}, err
	}

@@ -21,27 +21,28 @@ import (
)

// ConfigFile is the configuration file that holds the metadata describing
-// how to launch a container. The names of the fields are chosen to reflect
-// the JSON payload of the ConfigFile as defined here: https://git.io/vrAEY
+// how to launch a container. See:
+// https://github.com/opencontainers/image-spec/blob/master/config.md
type ConfigFile struct {
	Architecture    string    `json:"architecture"`
-	Container       string    `json:"container"`
-	Created         Time      `json:"created"`
-	DockerVersion   string    `json:"docker_version"`
-	History         []History `json:"history"`
+	Author          string    `json:"author,omitempty"`
+	Container       string    `json:"container,omitempty"`
+	Created         Time      `json:"created,omitempty"`
+	DockerVersion   string    `json:"docker_version,omitempty"`
+	History         []History `json:"history,omitempty"`
	OS              string    `json:"os"`
	RootFS          RootFS    `json:"rootfs"`
	Config          Config    `json:"config"`
-	ContainerConfig Config    `json:"container_config"`
-	OSVersion       string    `json:"osversion"`
+	ContainerConfig Config    `json:"container_config,omitempty"`
+	OSVersion       string    `json:"osversion,omitempty"`
}

// History is one entry of a list recording how this container image was built.
type History struct {
-	Author    string `json:"author"`
-	Created   Time   `json:"created"`
-	CreatedBy string `json:"created_by"`
-	Comment   string `json:"comment"`
+	Author     string `json:"author,omitempty"`
+	Created    Time   `json:"created,omitempty"`
+	CreatedBy  string `json:"created_by,omitempty"`
+	Comment    string `json:"comment,omitempty"`
	EmptyLayer bool `json:"empty_layer,omitempty"`
}

@@ -20,11 +20,10 @@ import (
"io"
|
||||
"io/ioutil"
|
||||
|
||||
"github.com/google/go-containerregistry/pkg/v1/tarball"
|
||||
|
||||
"github.com/docker/docker/client"
|
||||
"github.com/google/go-containerregistry/pkg/name"
|
||||
"github.com/google/go-containerregistry/pkg/v1"
|
||||
v1 "github.com/google/go-containerregistry/pkg/v1"
|
||||
"github.com/google/go-containerregistry/pkg/v1/tarball"
|
||||
)
|
||||
|
||||
// image accesses an image from a docker daemon
|
||||
|
|
@@ -42,6 +41,7 @@ type imageOpener struct {
	buffered bool
}

+// ImageOption is a functional option for Image.
type ImageOption func(*imageOpener) error

func (i *imageOpener) Open() (v1.Image, error) {
@@ -66,7 +66,7 @@ func (i *imageOpener) Open() (v1.Image, error) {
	return img, nil
}

-// API interface for testing.
+// ImageSaver is an interface for testing.
type ImageSaver interface {
	ImageSave(context.Context, []string) (io.ReadCloser, error)
}

@@ -14,12 +14,14 @@

package daemon

+// WithBufferedOpener buffers the image.
func WithBufferedOpener() ImageOption {
	return func(i *imageOpener) error {
		return i.setBuffered(true)
	}
}

+// WithUnbufferedOpener streams the image to avoid buffering.
func WithUnbufferedOpener() ImageOption {
	return func(i *imageOpener) error {
		return i.setBuffered(false)
|
|||
|
|
@@ -19,22 +19,21 @@ import (
 	"io"
 	"io/ioutil"

-	"github.com/pkg/errors"
-
 	"github.com/docker/docker/api/types"
 	"github.com/docker/docker/client"

 	"github.com/google/go-containerregistry/pkg/name"
-	"github.com/google/go-containerregistry/pkg/v1"
+	v1 "github.com/google/go-containerregistry/pkg/v1"
 	"github.com/google/go-containerregistry/pkg/v1/tarball"
+	"github.com/pkg/errors"
 )

-// API interface for testing.
+// ImageLoader is an interface for testing.
 type ImageLoader interface {
 	ImageLoad(context.Context, io.Reader, bool) (types.ImageLoadResponse, error)
 	ImageTag(context.Context, string, string) error
 }

-// This is a variable so we can override in tests.
+// GetImageLoader is a variable so we can override in tests.
 var GetImageLoader = func() (ImageLoader, error) {
 	cli, err := client.NewEnvClient()
 	if err != nil {
@@ -44,6 +43,16 @@ var GetImageLoader = func() (ImageLoader, error) {
 	return cli, nil
 }

+// Tag adds a tag to an already existent image.
+func Tag(src, dest name.Tag) error {
+	cli, err := GetImageLoader()
+	if err != nil {
+		return err
+	}
+
+	return cli.ImageTag(context.Background(), src.String(), dest.String())
+}
+
 // Write saves the image into the daemon as the given tag.
 func Write(tag name.Tag, img v1.Image) (string, error) {
 	cli, err := GetImageLoader()
@@ -12,8 +12,8 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-// Package v1 defines structured types for OCI v1 images
-//go:generate deepcopy-gen -O zz_deepcopy_generated --go-header-file $BOILER_PLATE_FILE -i .
 // +k8s:deepcopy-gen=package

+//go:generate deepcopy-gen -O zz_deepcopy_generated --go-header-file $BOILER_PLATE_FILE -i .
+// Package v1 defines structured types for OCI v1 images
 package v1
vendor/github.com/google/go-containerregistry/pkg/v1/empty/index.go (generated, vendored, new file, 59 lines)

@@ -0,0 +1,59 @@
+// Copyright 2018 Google LLC All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//    http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package empty
+
+import (
+	"encoding/json"
+	"errors"
+
+	v1 "github.com/google/go-containerregistry/pkg/v1"
+	"github.com/google/go-containerregistry/pkg/v1/partial"
+	"github.com/google/go-containerregistry/pkg/v1/types"
+)
+
+// Index is a singleton empty index, think: FROM scratch.
+var Index = emptyIndex{}
+
+type emptyIndex struct{}
+
+func (i emptyIndex) MediaType() (types.MediaType, error) {
+	return types.OCIImageIndex, nil
+}
+
+func (i emptyIndex) Digest() (v1.Hash, error) {
+	return partial.Digest(i)
+}
+
+func (i emptyIndex) IndexManifest() (*v1.IndexManifest, error) {
+	return &v1.IndexManifest{
+		SchemaVersion: 2,
+	}, nil
+}
+
+func (i emptyIndex) RawManifest() ([]byte, error) {
+	im, err := i.IndexManifest()
+	if err != nil {
+		return nil, err
+	}
+	return json.Marshal(im)
+}
+
+func (i emptyIndex) Image(v1.Hash) (v1.Image, error) {
+	return nil, errors.New("empty index")
+}
+
+func (i emptyIndex) ImageIndex(v1.Hash) (v1.ImageIndex, error) {
+	return nil, errors.New("empty index")
+}
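The new `empty.Index` serializes a bare OCI index: `RawManifest` marshals an `IndexManifest` that carries only `SchemaVersion: 2`. A self-contained sketch of that serialization, using a hypothetical reduced `indexManifest` struct rather than the library's full `v1.IndexManifest`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// indexManifest is a reduced stand-in for v1.IndexManifest,
// keeping only the field the empty index actually sets.
type indexManifest struct {
	SchemaVersion int64 `json:"schemaVersion"`
}

// rawManifest mirrors emptyIndex.RawManifest: build the manifest,
// then JSON-encode it.
func rawManifest() ([]byte, error) {
	return json.Marshal(indexManifest{SchemaVersion: 2})
}

func main() {
	b, _ := rawManifest()
	fmt.Println(string(b)) // {"schemaVersion":2}
}
```

Because the manifest bytes are deterministic, the index's digest (computed from `RawManifest` via `partial.Digest`) is stable, which is what makes a singleton empty index usable as a "FROM scratch" starting point.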
Some files were not shown because too many files have changed in this diff.