The executor accepts a few arguments (dockerfile, context, cache-dir and
digest-file) that all represent paths. This commit allows those paths to
be relative to the working directory of the executor.
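As a hedged sketch of the mechanism (the helper name is hypothetical, not kaniko's actual code), resolving a relative flag value against the working directory can be as simple as:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolvePath is a hypothetical helper: it leaves absolute paths alone and
// resolves relative ones against the executor's current working directory.
func resolvePath(path string) (string, error) {
	if filepath.IsAbs(path) {
		return path, nil
	}
	// filepath.Abs joins the path with the working directory and cleans it.
	return filepath.Abs(path)
}

func main() {
	for _, p := range []string{"Dockerfile", "/workspace/Dockerfile"} {
		abs, _ := resolvePath(p)
		fmt.Println(p, "->", abs)
	}
}
```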
Fixes #732 #731 #675
Fixes #296.
The output manifests may have `application/vnd.docker.distribution.manifest.v2+json`
as their media types instead of `application/vnd.oci.image.manifest.v1+json`.
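For anyone consuming these manifests, here is a hedged sketch (not kaniko's own code) of checking which media type an image ended up with, using the `types` constants from go-containerregistry, the library kaniko builds on; `img` can be any `v1.Image`:

```go
package manifest

import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/types"
)

// isDockerManifest reports whether the image manifest uses the Docker
// schema 2 media type rather than the OCI image manifest media type.
func isDockerManifest(img v1.Image) (bool, error) {
	mt, err := img.MediaType()
	if err != nil {
		return false, err
	}
	return mt == types.DockerManifestSchema2, nil
}
```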
This change allows kaniko-warmer to be used multiple times without unnecessary docker image downloads.
To check whether an image is present in the cache directory, I use the existing cache function that kaniko-executor already uses.
I considered building a separate function that only checks image presence, but it would have pretty much the same code.
One questionable decision is embedding the CacheOptions type into both KanikoOptions and WarmerOptions. This should probably be resolved by creating an interface that provides the needed options and implementing it on both of those structs, but I struggled to find a meaningful name for it.
To replicate the previous behaviour of downloading regardless of cache state, I added a --force (-f) option.
This change provides a crucial speed-up when downloading images from a remote registry is slow.
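A minimal sketch of the resulting warmer behaviour, assuming hypothetical `isCached` and `downloadToCache` helpers standing in for the existing cache-lookup and pull logic:

```go
package warmer

import "github.com/sirupsen/logrus"

// isCached and downloadToCache are hypothetical stand-ins for kaniko's
// existing cache-lookup and image-download logic.
func isCached(cacheDir, image string) (bool, error) { return false, nil }
func downloadToCache(cacheDir, image string) error  { return nil }

// warmImage skips the pull when the image is already cached, unless
// --force was passed.
func warmImage(image, cacheDir string, force bool) error {
	if !force {
		if cached, err := isCached(cacheDir, image); err == nil && cached {
			logrus.Infof("%s already in cache, skipping", image)
			return nil
		}
	}
	return downloadToCache(cacheDir, image)
}
```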
Closes #722
This flag, when set, takes a path to a file in the container and writes the digest of the built image to it. Surrounding tooling can use this to extract the exact digest of the built image without having to parse the logs from Kaniko, for example by pointing the file to a mounted volume or to a file read on termination, such as with Kubernetes' [Termination message policy](https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/).
When the flag is not set, the digest is not written to a file and the executor behaves as before. The digest is also written to the file when building a tarball or using `--no-push`.
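Conceptually (a sketch using go-containerregistry, not the exact executor code), writing the digest file amounts to:

```go
package digestfile

import (
	"os"

	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// writeDigestFile writes the built image's digest (e.g. "sha256:ab12...")
// to the path given by --digest-file.
func writeDigestFile(img v1.Image, path string) error {
	digest, err := img.Digest()
	if err != nil {
		return err
	}
	return os.WriteFile(path, []byte(digest.String()), 0644)
}
```

Pointing the path at, say, `/dev/termination-log` (Kubernetes' default termination message path) lets the digest surface directly in the pod status.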
Closes #654
This PR adds support for the dockerignore file. Previously when kaniko
had support for the dockerignore file, kaniko first went through the
build context and deleted files that were meant to be ignored. This
resulted in a really bad bug where files in user mounted volumes would
be deleted (my bad).
This time around, instead of modifying the build context at all, kaniko
will check if a file should be excluded when executing ADD/COPY
commands. If a file should be excluded (based on the .dockerignore) it
won't be copied over from the build context and shouldn't end up in the
final image.
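Roughly, the check looks like the following sketch, which leans on Docker's own dockerignore parser and matcher (the wrapper function names here are illustrative):

```go
package ignore

import (
	"os"

	"github.com/docker/docker/builder/dockerignore"
	"github.com/docker/docker/pkg/fileutils"
)

// loadExcludes parses the .dockerignore patterns once, up front.
func loadExcludes(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	return dockerignore.ReadAll(f)
}

// excluded reports whether a file matched by the patterns should be
// skipped while executing ADD/COPY, leaving the build context untouched.
func excluded(file string, patterns []string) bool {
	matches, err := fileutils.Matches(file, patterns)
	return err == nil && matches
}
```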
I also added a .dockerignore file and Dockerfile as an integration test,
which should fail if the dockerignore is not being processed correctly or if files aren't being excluded correctly.
Also, I removed all the integration testing from the previous version of the
dockerignore support.
* adding benchmarking code
* enable writing to file
* fix build
* time more stuff
* adding benchmarking to integration tests
* compare docker and kaniko times in integration tests
* Switch to setting benchmark file with an env var
* close file at the right time
* fix integration test with environment variables
* fix integration tests
* Adding benchmarking documentation to DEVELOPMENT.md
* human readable benchmarking steps
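A rough sketch of the shape of this, assuming the `BENCHMARK_FILE` environment variable mentioned above selects the output file (the step names and struct are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

type step struct {
	name     string
	duration time.Duration
}

func main() {
	var steps []step

	// Time a build phase the same way each step would be timed.
	start := time.Now()
	time.Sleep(10 * time.Millisecond) // stand-in for real work
	steps = append(steps, step{"resolve base image", time.Since(start)})

	// Only write results when benchmarking is enabled via the env var.
	path := os.Getenv("BENCHMARK_FILE")
	if path == "" {
		return
	}
	f, err := os.Create(path)
	if err != nil {
		panic(err)
	}
	// Close the file only after every step has been written.
	defer f.Close()
	for _, s := range steps {
		fmt.Fprintf(f, "%s: %s\n", s.name, s.duration)
	}
}
```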
I improved handling of the .dockerignore file by:
1. Using docker's parser to parse the .dockerignore and using their
helper functions to determine if a file should be deleted
2. Copying the Dockerfile we are building to /kaniko/Dockerfile, so that
if the Dockerfile is specified in the .dockerignore it won't be deleted
and won't end up in the final image
3. I also improved the integration test to create a temp directory with
files to ignore, and updated the .dockerignore to include exclusion (!)
patterns; a short sketch of how exclusions behave follows this list
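As a tiny illustration of the exclusion (!) behaviour with Docker's matcher: the last matching pattern wins, so a negated pattern re-includes a previously ignored file.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/fileutils"
)

func main() {
	// "ignore/*" excludes everything in ignore/, then "!" re-includes one file.
	patterns := []string{"ignore/*", "!ignore/keep.txt"}
	for _, f := range []string{"ignore/drop.txt", "ignore/keep.txt"} {
		m, _ := fileutils.Matches(f, patterns)
		fmt.Printf("%s excluded: %v\n", f, m) // drop.txt: true, keep.txt: false
	}
}
```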
Added a --ignore flag to ignore packages and files in the build context.
This should mimic the .dockerignore file. Before starting the build, we
go through and delete ignored files from the build context.
* comments
* initial commit for persistent volume caching
* cache warmer works
* general cleanup
* adding some debugging
* adding missing files
* Fixing up cache retrieval and cleanup
* fix tests
* removing auth since we only cache public images
* simplifying the caching logic
* fixing logic
* adding volume cache to integration tests. remove auth from cache warmer image.
* add building warmer to integration-test
* move sample yaml files to examples dir
* small test fix
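In outline (a hedged sketch using go-containerregistry; the actual warmer differs in details), warming the volume cache means pulling each image and saving it under a digest-keyed path:

```go
package warmer

import (
	"path/filepath"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// warm pulls each image anonymously (only public images are cached) and
// writes it into the cache volume as a tarball named after its digest.
func warm(cacheDir string, images []string) error {
	for _, image := range images {
		ref, err := name.ParseReference(image)
		if err != nil {
			return err
		}
		img, err := remote.Image(ref) // no auth: public images only
		if err != nil {
			return err
		}
		digest, err := img.Digest()
		if err != nil {
			return err
		}
		path := filepath.Join(cacheDir, digest.String())
		if err := tarball.WriteToFile(path, ref, img); err != nil {
			return err
		}
	}
	return nil
}
```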
As described in #373, kaniko panics when provided with --cache and --no-push, since it tries to infer a cache repo from a destination that doesn't exist.
To fix this, I added a check to make sure --cache-repo is passed in when both of these flags are provided.
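The check itself is a one-liner; here's a sketch with stand-in field names (the real options struct may differ):

```go
package config

import "errors"

// KanikoOptions is a stand-in for kaniko's real options struct.
type KanikoOptions struct {
	Cache     bool
	NoPush    bool
	CacheRepo string
}

// validateFlags rejects the flag combination that previously panicked:
// with --no-push there is no destination to infer a cache repo from.
func validateFlags(opts *KanikoOptions) error {
	if opts.Cache && opts.NoPush && opts.CacheRepo == "" {
		return errors.New("--cache-repo is required when --cache and --no-push are both set")
	}
	return nil
}
```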
Currently, kaniko can only build a single image per container run, because the filesystem is full of the content of the first image.
When running kaniko in Jenkins, where we need to start the container "doing nothing" first (using the debug kaniko container) and then exec /kaniko/executor, this is a limitation: building multiple images means starting multiple containers. See https://groups.google.com/forum/#!topic/kaniko-users/_7LivHdMdy0 for more details.
A solution is to add a new flag that cleans up the filesystem at the end, the same way it is done between stages when building a multi-stage image. This way, the same (debug) container can be used to build multiple images.
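A rough sketch of such a cleanup, assuming an ignore list like the one kaniko already uses between stages (the exact paths are illustrative):

```go
package cleanup

import (
	"os"
	"path/filepath"
)

// ignoredPaths must survive the cleanup; everything else under / goes.
var ignoredPaths = map[string]bool{
	"/kaniko": true, "/proc": true, "/sys": true, "/dev": true,
}

// cleanupFilesystem deletes the previous image's files so the same
// container can run another build.
func cleanupFilesystem() error {
	entries, err := os.ReadDir("/")
	if err != nil {
		return err
	}
	for _, e := range entries {
		path := filepath.Join("/", e.Name())
		if ignoredPaths[path] {
			continue
		}
		if err := os.RemoveAll(path); err != nil {
			return err
		}
	}
	return nil
}
```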
I changed RunE to Run so that usage wouldn't show upon error. Usage will
still show if PersistentPreRunE fails, which makes sense since those
functions check to make sure arguments passed in are valid.
Also changed the logging of multi-arg flags to Debugf so that the output
is cleaner.
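For context, a simplified sketch of the cobra pattern (`validateArgs` and `build` are stand-ins): cobra prints usage when `RunE` returns an error, so handling the error inside `Run` keeps failures down to the error message.

```go
package main

import (
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
)

func validateArgs(args []string) error { return nil } // stand-in
func build() error                     { return nil } // stand-in

var rootCmd = &cobra.Command{
	Use: "executor",
	// Validation stays in PersistentPreRunE: showing usage when the
	// arguments themselves are invalid is the desired behaviour.
	PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
		return validateArgs(args)
	},
	// Run instead of RunE: a failed build logs the error and exits
	// non-zero without dumping the usage text.
	Run: func(cmd *cobra.Command, args []string) {
		if err := build(); err != nil {
			logrus.Fatal(err)
		}
	},
}

func main() {
	if err := rootCmd.Execute(); err != nil {
		logrus.Fatal(err)
	}
}
```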
To add layer caching to kaniko, I added two flags: --cache and
--use-cache.
If --use-cache is set, then the cache will be used, and if --cache is
specified then that repo will be used to store cached layers. If --cache
isn't set, a cache will be inferred from the destination provided.
Currently, caching only works for RUN commands. Before executing the
command, kaniko checks if the cached layer exists. If it does, it pulls
and extracts it, then adds those files to the snapshotter and appends a
layer to the config history. If the cached layer does not exist, kaniko
executes the command and pushes the newly created layer to the cache.
All cached layers are tagged with a stable key (a sketch follows the
list below), which is built from:
1. The base image digest
2. The current state of the filesystem
3. The current command being run
4. The current config file (to account for metadata changes)
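A simplified sketch of such a key; the real implementation hashes kaniko's internal representations, so treat these string inputs as illustrative:

```go
package cache

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// cacheKey combines the four components; changing any one of them yields
// a different key, so a cached layer is only reused when all four match.
func cacheKey(baseDigest, fsState, command, config string) string {
	h := sha256.Sum256([]byte(strings.Join(
		[]string{baseDigest, fsState, command, config}, "\n")))
	return fmt.Sprintf("%x", h)
}
```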
I also added two integration tests to make sure caching works:
1. Dockerfile_test_cache runs 'date', whose output should be exactly the
same the second time the image is built
2. Dockerfile_test_cache_install makes sure apt-get install can be
reproduced
gometalinter is broken at HEAD, and I looked into why. In the process, I
remembered that we took the linting scripts from skaffold, and found that
skaffold had replaced gometalinter with GolangCI-Lint:
https://github.com/GoogleContainerTools/skaffold/pull/619
That change made linting in skaffold faster, so instead of fixing
gometalinter I figured it made more sense to remove it and replace it
with GolangCI-Lint for kaniko as well.