- Change integration tests to use Docker Hub instead of GCR due to a bug
in a library that requires authentication with gcr.io even for public
images. See #966 for the bug tracking this.
- Make uploading to the GCS bucket configurable through the --uploadToGCS flag
- Utilize a locally deployed Docker registry in Travis CI to remove the
dependency on authenticating with GCP. This requires host networking so
that we can access the registry on localhost:5000
- Use the commit that's being tested for `TestGitBuildcontext`
- Remove the duplicate GitBuildContext case that's now covered by default
in Travis CI.
According to the Docker docs, ArgsEscaped should only be set in Windows
environments: https://docs.docker.com/engine/api/v1.30/
It was causing an integration test to fail with the following message:
```
FAIL: TestRun/test_Dockerfile_test_metadata (8.48s)
"Diff": {
"Adds": [
"ArgsEscaped: true"
],
"Dels": [
"ArgsEscaped: false"
]
```
However, Docker 18.xx returns ArgsEscaped: true whereas Docker 19.xx
returns ArgsEscaped: false. Hence this patch also adds a Docker version
check to the integration tests to ignore ArgsEscaped being different
when 18.xx is used.
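A rough sketch of the kind of version gate this adds; the helper below is illustrative, not kaniko's actual test code. It shells out to `docker version` and skips the ArgsEscaped comparison on 18.xx daemons.

```go
// Illustrative sketch only: detect a Docker 18.xx daemon and ignore
// ArgsEscaped differences in that case, since 18.xx reports
// ArgsEscaped: true while 19.xx reports ArgsEscaped: false.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerMajorVersion reads the server version (e.g. "18.09.7") from the
// local Docker daemon and returns its major component.
func dockerMajorVersion() (string, error) {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		return "", err
	}
	return strings.SplitN(strings.TrimSpace(string(out)), ".", 2)[0], nil
}

func main() {
	major, err := dockerMajorVersion()
	if err != nil {
		panic(err)
	}
	if major == "18" {
		fmt.Println("Docker 18.xx detected: ignoring ArgsEscaped differences in the diff")
	}
}
```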
Previously it would mount the .config/gcloud directory, which is not
recommended for systems such as CI that authenticate with Google Cloud.
This commit allows you to set the path to a service account instead.
By default the previous behaviour is kept, so this shouldn't break
existing systems that run the integration tests.
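A minimal sketch of the idea, assuming a hypothetical --serviceAccount flag name; the only non-hypothetical piece is GOOGLE_APPLICATION_CREDENTIALS, the standard environment variable the Google Cloud client libraries read credentials from.

```go
// Sketch only: if a service-account key path is given, point the Google
// client libraries at it instead of relying on a mounted .config/gcloud.
// The flag name --serviceAccount is hypothetical.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	serviceAccount := flag.String("serviceAccount", "", "path to a service account JSON key (hypothetical flag)")
	flag.Parse()

	if *serviceAccount != "" {
		// GOOGLE_APPLICATION_CREDENTIALS is the standard variable used by
		// Google Cloud client libraries to locate credentials.
		os.Setenv("GOOGLE_APPLICATION_CREDENTIALS", *serviceAccount)
	}
	fmt.Println("using credentials from:", os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"))
}
```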
The executor accepts a few arguments (dockerfile, context, cache-dir and
digest-file) that all represent paths. This commit allows those paths to
be relative to the working directory of the executor.
Fixes #732 #731 #675
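A minimal sketch of the path handling described above (not the executor's actual code): relative flag values are anchored at the executor's working directory before they are used.

```go
// Sketch only: resolve possibly-relative flag values such as --dockerfile,
// --context, --cache-dir and --digest-file against the working directory.
package main

import (
	"fmt"
	"path/filepath"
)

// resolvePath returns p unchanged if it is already absolute, otherwise it
// joins p with the current working directory and cleans the result.
func resolvePath(p string) (string, error) {
	if filepath.IsAbs(p) {
		return p, nil
	}
	return filepath.Abs(p)
}

func main() {
	for _, p := range []string{"Dockerfile", "./out/digest", "/workspace"} {
		abs, err := resolvePath(p)
		if err != nil {
			panic(err)
		}
		fmt.Println(p, "->", abs)
	}
}
```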
* adding benchmarking code
* enable writing to file
* fix build
* time more stuff
* adding benchmarking to integration tests
* compare docker and kaniko times in integration tests
* Switch to setting benchmark file with an env var (see the sketch after
this list)
* close file at the right time
* fix integration test with environment variables
* fix integration tests
* Adding benchmarking documentation to DEVELOPMENT.md
* human readable benchmarking steps
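A rough sketch of the env-var-driven benchmark output described in the list above; the variable name BENCHMARK_FILE and the line format are assumptions, not necessarily what the integration tests use.

```go
// Rough sketch: time a step and append a human readable line to the file
// named by an env var (BENCHMARK_FILE is an assumed name here).
package main

import (
	"fmt"
	"os"
	"time"
)

// timeStep runs fn and, if benchmarking is enabled, records how long it took.
func timeStep(name string, fn func()) error {
	start := time.Now()
	fn()

	path := os.Getenv("BENCHMARK_FILE")
	if path == "" {
		return nil // benchmarking disabled
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	// Close the file only after the step has been written out.
	defer f.Close()
	_, err = fmt.Fprintf(f, "%s took %s\n", name, time.Since(start))
	return err
}

func main() {
	_ = timeStep("resolve build context", func() { time.Sleep(100 * time.Millisecond) })
}
```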
I improved handling of the .dockerignore file by:
1. Using Docker's parser to parse the .dockerignore and using their
helper functions to determine if a file should be deleted (see the
sketch after this list)
2. Copying the Dockerfile we are building to /kaniko/Dockerfile so that,
if the Dockerfile is specified in the .dockerignore, it won't be deleted
during the build and won't end up in the final image
3. Improving the integration test to create a temp directory with files
to ignore, and updating the .dockerignore to include exclusions (!)
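One possible way to lean on Docker's own .dockerignore parsing and matching helpers, as mentioned in item 1; the exact packages and call sites kaniko wires in may differ from this sketch.

```go
// Sketch: parse .dockerignore with Docker's parser and ask Docker's helper
// whether a path should be deleted; exclusions ("!pattern") are honoured.
package main

import (
	"fmt"
	"os"

	"github.com/docker/docker/builder/dockerignore"
	"github.com/docker/docker/pkg/fileutils"
)

func main() {
	f, err := os.Open(".dockerignore")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// ReadAll returns the ignore patterns, one per line, comments stripped.
	patterns, err := dockerignore.ReadAll(f)
	if err != nil {
		panic(err)
	}

	for _, path := range []string{"ignore/foo.txt", "keep.txt", "Dockerfile"} {
		ignored, err := fileutils.Matches(path, patterns)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s ignored=%v\n", path, ignored)
	}
}
```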
It seems like files listed in the .dockerignore are deleted locally upon
docker build, so I created a temporary test dir to make sure the
--ignore flag works. We make sure it exists before building docker and
kaniko for Dockerfile_test_ignore, and then delete it after the builds
have completed.
Added a --ignore flag to ignore packages and files in the build context.
This should mimic the .dockerignore file. Before starting the build, we
go through and delete ignored files from the build context.
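A sketch of the delete-before-build step described above; the isIgnored callback stands in for the Docker matching helpers shown earlier, and none of this is kaniko's actual implementation.

```go
// Sketch: walk the build context and remove anything the ignore rules match
// before the build starts.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// deleteIgnored removes every file or directory under contextDir whose
// context-relative path is reported as ignored.
func deleteIgnored(contextDir string, isIgnored func(rel string) bool) error {
	return filepath.Walk(contextDir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(contextDir, path)
		if err != nil || rel == "." {
			return err
		}
		if isIgnored(rel) {
			if rmErr := os.RemoveAll(path); rmErr != nil {
				return rmErr
			}
			if info.IsDir() {
				return filepath.SkipDir // its contents are already gone
			}
		}
		return nil
	})
}

func main() {
	err := deleteIgnored("ignore_test_dir", func(rel string) bool {
		return strings.HasPrefix(rel, "ignore") // toy rule for illustration
	})
	fmt.Println("err:", err)
}
```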
* comments
* initial commit for persistent volume caching
* cache warmer works
* general cleanup
* adding some debugging
* adding missing files
* Fixing up cache retrieval and cleanup
* fix tests
* removing auth since we only cache public images
* simplifying the caching logic
* fixing logic
* adding volume cache to integration tests. remove auth from cache warmer image.
* add building warmer to integration-test
* move sample yaml files to examples dir
* small test fix
To add layer caching to kaniko, I added two flags: --cache and
--use-cache.
If --use-cache is set, then the cache will be used, and if --cache is
specified then that repo will be used to store cached layers. If --cache
isn't set, a cache will be inferred from the destination provided.
Currently, caching only works for RUN commands. Before executing the
command, kaniko checks if the cached layer exists. If it does, it pulls
it and extracts it. It then adds those files to the snapshotter and
appends a layer to the config history. If the cached layer does not
exist, kaniko executes the command and pushes the newly created layer to
the cache.
All cached layers are tagged with a stable key, which is built based off
of the following (see the sketch after this list):
1. The base image digest
2. The current state of the filesystem
3. The current command being run
4. The current config file (to account for metadata changes)
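A minimal sketch of how such a composite key could be computed from those four inputs; the helper name and hashing details here are illustrative, not kaniko's actual cache-key code.

```go
// Sketch: derive one stable key from the base image digest, the filesystem
// state hash, the current command, and the serialized config.
package main

import (
	"crypto/sha256"
	"fmt"
)

// cacheKey hashes all four components together so that a change to any one
// of them produces a different cache tag.
func cacheKey(baseDigest, fsHash, command, configJSON string) string {
	h := sha256.New()
	for _, part := range []string{baseDigest, fsHash, command, configJSON} {
		h.Write([]byte(part))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	key := cacheKey(
		"sha256:0d6b1b0e5f8c",             // base image digest (made up)
		"sha256:9a2f44c1d7ab",             // hash of the current filesystem state (made up)
		"RUN apt-get install -y make",     // current command
		`{"Env":["PATH=/usr/local/bin"]}`, // current config file
	)
	fmt.Println(key)
}
```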
I also added two integration tests to make sure caching works:
1. Dockerfile_test_cache runs 'date', whose output should be exactly the
same the second time the image is built
2. Dockerfile_test_cache_install makes sure apt-get install can be
reproduced
In #251 we are investigating test flakes due to layer offsets not
matching. This change will give us a bit more context so we can be sure
which image has which number of layers, and it will also include the
digest of the image; since kaniko always pushes images to a remote repo,
if the test fails we can pull the digest and see what is up.
Also updated the reproducible Dockerfile to be built with the
reproducible flag, which I think was the original intent (without this
change, there is no difference between how
`kaniko-dockerfile_test_copy_reproducible` and
`kaniko-dockerfile_test_copy` are built).
To allow contributors to run the integration tests with their own GCS
buckets and image repos (since not all contributors will have access to
the projects used by the kaniko maintainers), this updates the
integration tests so that these can be provided on the command line.
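A minimal sketch of how such options could be wired into `go test`; the flag names -bucket and -repo are assumptions and may not match the ones the tests define.

```go
// Sketch: let `go test` consumers pass their own GCS bucket and image repo.
package integration

import (
	"flag"
	"os"
	"testing"
)

var (
	gcsBucket = flag.String("bucket", "", "GCS bucket to upload build contexts to (assumed flag name)")
	imageRepo = flag.String("repo", "", "image repo to push test images to (assumed flag name)")
)

// TestMain parses the flags before any test runs.
func TestMain(m *testing.M) {
	flag.Parse()
	os.Exit(m.Run())
}
```

Invoked, for example, as `go test ./integration -args -bucket=gs://my-bucket -repo=gcr.io/my-project` (again, flag names assumed).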
This allows tests to be run individually, without using `make
integration-test`. Previously, part of the test setup was done
in the shell script (creating the context tarball that is required
for the tests that build images with context). Instead it will be
done in the test itself, so we can use `go test` to run tests
individually if we want to.
If we are running only one individual test, we don't want to build
all of the images, so this commit creates a builder which tracks which
images it has built and can be used by a test to check if it should
build an image before running; otherwise it will use the images that
have already been built by a previous test.
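A sketch of that builder idea (not the actual test helper): it remembers which Dockerfiles it has already built so a test triggers a build only when needed.

```go
// Sketch: build each image at most once, on first request.
package main

import (
	"fmt"
	"sync"
)

type imageBuilder struct {
	mu    sync.Mutex
	built map[string]bool // keyed by Dockerfile name
}

// BuildIfMissing runs build the first time a Dockerfile is requested and is
// a no-op for every later request.
func (b *imageBuilder) BuildIfMissing(dockerfile string, build func() error) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.built[dockerfile] {
		return nil // already built by an earlier test
	}
	if err := build(); err != nil {
		return err
	}
	b.built[dockerfile] = true
	return nil
}

func main() {
	b := &imageBuilder{built: map[string]bool{}}
	buildOnce := func() error { fmt.Println("building image"); return nil }
	_ = b.BuildIfMissing("Dockerfile_test_run", buildOnce)
	_ = b.BuildIfMissing("Dockerfile_test_run", buildOnce) // no second build
}
```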
The name of the context tarball has also been made unique (it includes
the unix timestamp) to avoid potential test flakes if two tests using
the same GCS bucket run simultaneously.
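For illustration, a unique tarball name of the kind described above could be built from the unix timestamp like this (the exact name format is an assumption):

```go
// Sketch: a context tarball name that is unique per run.
package main

import (
	"fmt"
	"time"
)

func main() {
	name := fmt.Sprintf("context_%d.tar.gz", time.Now().Unix())
	fmt.Println(name) // e.g. context_1559000000.tar.gz
}
```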