This change allows the kaniko-warmer to be run multiple times without unnecessary Docker image downloads.
To check whether an image is already present in the cache directory, I'm reusing the existing cache function that the kaniko-executor uses.
I considered writing a separate function that only checks image presence, but it would contain pretty much the same code.
One questionable decision is embedding the CacheOptions type in both KanikoOptions and WarmerOptions. This should probably be resolved by creating an interface that provides the needed options and implementing it on both structs, but I struggled to find a meaningful name for it.
To replicate the previous behaviour of downloading regardless of cache state, I've added a --force (-f) option.
This change provides a crucial speed-up when downloading images from a remote registry is slow.
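The resulting flow, sketched with illustrative names (warmImage and fetchAndStore are stand-ins rather than kaniko's actual API, and the cache lookup stands in for the shared executor cache function):

```go
import (
	"github.com/GoogleContainerTools/kaniko/pkg/cache"
	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/sirupsen/logrus"
)

// Sketch only: warmImage and fetchAndStore are illustrative names.
func warmImage(image string, opts *config.WarmerOptions) error {
	if !opts.Force {
		// Reuse the executor's cache lookup; a nil error means the image
		// is already present in the cache directory, so skip the download.
		if _, err := cache.LocalSource(&opts.CacheOptions, image); err == nil {
			logrus.Infof("Image %s found in cache, skipping download", image)
			return nil
		}
	}
	return fetchAndStore(image, opts) // hypothetical download-and-cache helper
}
```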
Closes #722
Resolves #607
* Deleted a duplicate Gopkg.lock block for github.com/otiai10/copy to
prevent `dep ensure` from deleting it from vendor/
* Searched for breaking changes. Only found ones for
remote.Delete/List/Write/WriteIndex. Searched for those and fixed them
* Noticed that NewInsecureRegistry was deprecated and replaced it
* Revert "Change cache key calculation to be more reproducible. (#525)"
This reverts commit 1ffae47fdd.
* Add logging of composition key back
* Do not include build args in cache key
This should be safe, given that the commands will have the args included
when the cache key gets built.
This change calculates the exact files and directories needed by
COPY commands between stages. Instead of saving the entire
stage as a tarball, we now save only the necessary files.
Calculating a manifest from a v1.tarball is very expensive. We can
store those locally as well, and use them if they exist.
This should eventually be replaced with oci layout support once that exists
in ggcr.
This fixes a conflict between volumes and our snapshot optimizations.
If a previous base image has a volume, the directory is added to the
list of files to snapshot, but that directory may not actually exist in the image.
Before we were using the full image digest, but that contains a timestamp. Now
we only use the layers themselves and the image config (env vars, etc.).
Also fix a bug in unpacking the layers themselves: mtimes can change during unpacking,
so we now set them all once at the end.
Right now when we find a v1.Tarball in the local disk cache, we
recompute the digest. This is very expensive and redundant, because
we store tarballs by their digest and use that as a key to look them up.
This PR adds support for the dockerignore file. Previously when kaniko
had support for the dockerignore file, kaniko first went through the
build context and deleted files that were meant to be ignored. This
resulted in a really bad bug where files in user mounted volumes would
be deleted (my bad).
This time around, instead of modifying the build context at all, kaniko
will check if a file should be excluded when executing ADD/COPY
commands. If a file should be excluded (based on the .dockerignore) it
won't be copied over from the buildcontext and shouldn't end up in the
final image.
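For illustration, the check could look like this (a sketch: fileutils.Matches from docker/docker is one way to do the matching; kaniko's actual helper may differ):

```go
import (
	"bufio"
	"os"
	"path/filepath"
	"strings"

	"github.com/docker/docker/pkg/fileutils"
)

// readDockerignore returns the patterns in the build context's .dockerignore,
// skipping blank lines and comments.
func readDockerignore(buildContext string) ([]string, error) {
	f, err := os.Open(filepath.Join(buildContext, ".dockerignore"))
	if os.IsNotExist(err) {
		return nil, nil // no .dockerignore, nothing is excluded
	} else if err != nil {
		return nil, err
	}
	defer f.Close()
	var patterns []string
	s := bufio.NewScanner(f)
	for s.Scan() {
		if line := strings.TrimSpace(s.Text()); line != "" && !strings.HasPrefix(line, "#") {
			patterns = append(patterns, line)
		}
	}
	return patterns, s.Err()
}

// excluded is the check ADD/COPY would consult before copying a file.
func excluded(path string, patterns []string) bool {
	match, err := fileutils.Matches(path, patterns)
	return err == nil && match
}
```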
I also added a .dockerignore file and Dockerfile as an integration test,
which should fail if the dockerignore is not being processed correctly or if files aren't being excluded correctly.
Also, I removed all the integration testing from the previous version of the
dockerignore support.
Right now kaniko only supports COPY --from=<another stage>.
This commit adds support for the case where the referenced image is a remote image
in a registry that has not been used as a stage yet in the build.
Now that hardlink destinations take into account the directory that they are being extracted to, the unit test had to be updated to make sure that two hardlinks were extracted to /tmp/hardlink correctly.
When we execute multistage builds, we store the fs of each intermediate
stage at /kaniko/<stage number> if it's used later in the build. This
created a bug when extracting hardlinks, because we weren't appending
the new directory to the link path.
So, if `/tmp/file1` and `/tmp/file2` were hardlinked, kaniko was trying
to link `/kaniko/0/tmp/file1` to `/tmp/file2` instead of
`/kaniko/0/tmp/file2`. This change will append the correct directory to
`/kaniko/0/tmp/file2`. This change will append the correct directory to
the link, and fixes #437, #362, #352, and #342.
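The fix in miniature (illustrative, not the exact kaniko code): the extraction root has to be prefixed onto the link target as well as the link path:

```go
import (
	"archive/tar"
	"os"
	"path/filepath"
)

// extractHardlink creates hdr's hardlink under dest. Prefixing dest onto
// hdr.Linkname is the fix: without it, a link recorded as /tmp/file2 would
// point outside /kaniko/<stage number>.
func extractHardlink(dest string, hdr *tar.Header) error {
	path := filepath.Join(dest, hdr.Name)
	link := filepath.Join(dest, hdr.Linkname) // previously just hdr.Linkname
	return os.Link(link, path)
}
```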
filepath.Walk has a special error you can return from your walkFn
indicating it should skip directories. This change makes use of that
to skip whitelisted directories.
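For illustration, the pattern looks roughly like this (isWhitelisted stands in for kaniko's real check):

```go
import (
	"os"
	"path/filepath"
)

// walkFS lists files under root, pruning whitelisted directories entirely
// instead of visiting every file inside them.
func walkFS(root string, isWhitelisted func(string) bool) ([]string, error) {
	var files []string
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() && isWhitelisted(path) {
			// Returning SkipDir tells filepath.Walk not to descend here.
			return filepath.SkipDir
		}
		files = append(files, path)
		return nil
	})
	return files, err
}
```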
This is the final part of an optimization that I've been refactoring towards for a while.
If the Dockerfile consists of no RUN commands, or cached RUN commands, followed by metadata-only
operations, we can skip downloading and unpacking the base image.
* parse arg commands at the top of dockerfiles
* fix pointer reference bug and remove debugging
* fixing tests
* account for meta args with no value
* don't take fs snapshot if / is the only changed path
* move metaArgs inside KanikoStage
* removing unused property
* check for any directory instead of just /
* remove unnecessary check
When building Docker images, layers were previously stored in memory.
This caused obvious issues when manipulating large layers, which could
cause Kaniko to crash.
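One way to avoid that, sketched here under the assumption that the layer tarball has already been written to disk, is to hand ggcr a file-backed layer instead of an in-memory one:

```go
import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// layerFromDisk builds a v1.Layer that streams from the file on demand,
// so the layer contents never have to sit in memory all at once.
func layerFromDisk(path string) (v1.Layer, error) {
	return tarball.LayerFromFile(path)
}
```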
Issue #410 reported an error with base image caching where the user was
"Not Authorized" to get information for a remote image, but later was
able to download and extract the base image.
To fix this, we can switch to using the remoteImage function for getting
information about the digest, which is the same function used for
downloading base images. This way we can also take advantage of the
--insecure and --skip-tls-verify flags if users pass those in when
trying to get digests for the cache as well.
* comments
* initial commit for persistent volume caching
* cache warmer works
* general cleanup
* adding some debugging
* adding missing files
* Fixing up cache retrieval and cleanup
* fix tests
* removing auth since we only cache public images
* simplifying the caching logic
* fixing logic
* adding volume cache to integration tests. remove auth from cache warmer image.
* add building warmer to integration-test
* move sample yaml files to examples dir
* small test fix
* Enable overwriting of links (solves #351)
* add integration test to check extraction of images with replaced hardlinks
* Prevent following symlinks during extracting normal files
This fixes #359, #361, and #362.
To add layer caching to kaniko, I added two flags: --cache and
--use-cache.
If --use-cache is set, then the cache will be used, and if --cache is
specified then that repo will be used to store cached layers. If --cache
isn't set, a cache will be inferred from the destination provided.
Currently, caching only works for RUN commands. Before executing the
command, kaniko checks if the cached layer exists. If it does, it pulls
it and extracts it. It then adds those files to the snapshotter and
appends a layer to the config history. If the cached layer does not exist, kaniko executes the command and
pushes the newly created layer to the cache.
All cached layers are tagged with a stable key, which is built based off
of:
1. The base image digest
2. The current state of the filesystem
3. The current command being run
4. The current config file (to account for metadata changes)
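For illustration, a key with those properties could be composed like this (a sketch, not kaniko's exact derivation):

```go
import (
	"crypto/sha256"
	"fmt"
)

// compositionKey combines the four inputs above into one stable digest.
// Any change to the base image, filesystem state, command, or config
// produces a different key and therefore a cache miss.
func compositionKey(baseDigest, fsState, command, config string) string {
	h := sha256.New()
	for _, part := range []string{baseDigest, fsState, command, config} {
		fmt.Fprintf(h, "%s\n", part)
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}
```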
I also added two integration tests to make sure caching works
1. Dockerfile_test_cache runs 'date', which should be exactly the same
the second time the image is built
2. Dockerfile_test_cache_install makes sure apt-get install can be
reproduced
gometalinter is broken @ HEAD, and I looked into why that was. During
that process, I remembered that we took the linting scripts from
skaffold, and found that in skaffold gometalinter was replaced with
GolangCI-Lint:
https://github.com/GoogleContainerTools/skaffold/pull/619
The change made linting in skaffold faster, so I figured instead of
fixing gometalinter it made more sense to remove it and replace it with
GolangCI-Lint for kaniko as well.
While looking into #345, we were seeing the error:
Error: error building image: chmod /etc/mtab: operation not permitted
during extraction of `amazonlinux:1`. I looked into why kaniko couldn't
extract this file properly, and found that it already existed as a
symlink pointing to /proc/mounts, which returned an error when we tried
to run chmod on it.
Confusingly, in the image /etc/mtab is a regular file, not a
symlink.
I can think of two ways to solve this problem:
1. Whitelist /etc/mtab so that whatever already exists in the system
is used
2. Check if a regular file already exists, and hasn't been extracted yet,
before extracting
I went with option 1 because for option 2 we'd have to keep a list of
all files that had been extracted in memory.
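In miniature, option 1 amounts to adding the path to the built-in whitelist (the entries shown here are illustrative):

```go
// Paths on the whitelist are never extracted over, so whatever the
// running system provides at /etc/mtab is left untouched.
var whitelist = []string{
	"/kaniko",
	"/proc",
	"/etc/mtab", // may be a symlink to /proc/mounts on the build machine
}
```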
This will return a string representation of the current filesystem to be
used with caching.
Whenever a file is explicitly added (via ADD or COPY), it will be stored
in "added" in the LayeredMap. The file will map to a hash created by
CacheHasher (which doesn't take mtime into account, since mtime will be
different with every build, making the cache useless).
Key() returns a SHA of the added files, which is used in
determining the overall cache key for a command.
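A sketch of what Key() could look like, assuming "added" is a map from path to CacheHasher digest (field names are illustrative, not kaniko's exact ones):

```go
import (
	"crypto/sha256"
	"fmt"
	"sort"
)

type LayeredMap struct {
	added map[string]string // path -> mtime-insensitive CacheHasher digest
}

// Key hashes the explicitly added files in a deterministic order, yielding
// one SHA that feeds into the overall cache key for a command.
func (l *LayeredMap) Key() string {
	paths := make([]string, 0, len(l.added))
	for p := range l.added {
		paths = append(paths, p)
	}
	sort.Strings(paths) // map iteration order is random; sort for stability
	h := sha256.New()
	for _, p := range paths {
		fmt.Fprintf(h, "%s=%s\n", p, l.added[p])
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}
```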
The bug in #329 occurred because of a bug in matchSources, where the
filepath wasn't absolute, so the source "/kaniko-bug/*" wasn't being
matched to the file "kaniko-bug/test-file"
To fix this, I added logic to make filepaths absolute, and extended
the function's unit test to cover this case.
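The fix in miniature (illustrative of the change rather than the exact code):

```go
import "path/filepath"

// matches normalizes the file to an absolute path before matching, so the
// source pattern "/kaniko-bug/*" matches the file "kaniko-bug/test-file".
func matches(src, file string) (bool, error) {
	if !filepath.IsAbs(file) {
		file = filepath.Join("/", file)
	}
	return filepath.Match(src, file)
}
```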
Extracting the layers of the filesystem in order will make it easier to
extract cached layers and deal with hardlinks.
This PR implements extracting in order and adds an integration test to
make sure hardlinks are extracted properly.
It also fixes two bugs I found when extracting symlinks:
1. We'd get a "file exists" error when trying to symlink to an existing
file with a whiteout later in the layer tarball
2. We'd get a "file exists" error when trying to create a symlink from a
file that was created in a prior layer (perhaps as a regular file or as
a symlink pointing to something else)
To fix both of these, we resolve all symlinks in a layer at the end. I
also added logic to delete any existing paths before creating the
symlink.
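A sketch of that deferral, with illustrative types:

```go
import "os"

// symlink records a link encountered while walking a layer so it can be
// created after all regular files and whiteouts have been processed.
type symlink struct {
	target string // what the link points at
	path   string // where the link lives
}

func resolveSymlinks(links []symlink) error {
	for _, l := range links {
		// Remove whatever occupies the path already, whether a regular
		// file from a prior layer or a path whited-out later in this one.
		if err := os.RemoveAll(l.path); err != nil {
			return err
		}
		if err := os.Symlink(l.target, l.path); err != nil {
			return err
		}
	}
	return nil
}
```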
I added a KanikoStage to hold each stage of the Dockerfile along with
information about each stage that would be useful later on.
The new KanikoStage type holds the stage itself, along with some
additional information:
1. FinalStage -- whether the current stage is the final stage
2. BaseImageStoredLocally/BaseImageIndex -- whether the base image for
this stage is stored locally, and if so what the index of the base image
is
3. SaveStage -- whether this stage needs to be saved for use in a future
stage
This is the first part of a larger refactor for building stages, which
will later make it easier to add layer caching.
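Roughly, the type looks like this (a sketch based on the description above; the actual definition may carry more fields):

```go
import "github.com/docker/docker/builder/dockerfile/instructions"

// KanikoStage wraps a parsed Dockerfile stage with the build-planning
// information listed above.
type KanikoStage struct {
	instructions.Stage            // the parsed stage itself
	FinalStage             bool   // is this the last stage in the Dockerfile?
	BaseImageStoredLocally bool   // is the base image a previously built stage?
	BaseImageIndex         int    // index of that stage, if stored locally
	SaveStage              bool   // must this stage's fs be kept for later stages?
}
```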
I changed UnpackLocalTarArchive to return a list of files that were
extracted, so that the list of snapshotted files for ADD is more
accurate. Previously, we used to add all files in the extracted dir to
be snapshotted, but this could result in preexisting files being
snapshotted again.
Before #289 was merged, when copying over directories for COPY kaniko
would get a list of all files at the destination specified and add them
to the list of files to be snapshotted. If the destination was root it
would add all files. This worked because the snapshotter made sure the
file had been changed before adding it to the layer.
After #289, we changed the logic to add all files snapshotted to a layer
without checking if the files had been changed. This created the bug in
#314: a COPY with root as the destination got all the files at root and added them to the layer without checking
if they had been changed.
This change should fix this bug. Now, the CopyDir function returns a
list of files it copied over and only those files are added to the list
of files to be snapshotted.
Should fix #314
To make the logic a bit more clear, when snapshotting files, the
parent dirs are now snapshotted in a different loop from the files we
are actually trying to snapshot. Unfortunately this loop is nearly
duplicated, but I did manage to group some of the related logic
together:
- A function to check if the file should be snapshotted (e.g. isn't
whitelisted, etc.)
- Created a `Tar` type to handle some of the logic around tar-ing, e.g.
tracking hardlinks and stat-ing files before adding them
One side effect of this is that now when snapshotting the file system,
files will be stat-ed twice.
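A sketch of the hardlink bookkeeping the `Tar` type handles (illustrative, not the exact type): the first path seen for an inode is remembered, and later paths with the same inode become link headers instead of duplicated file content.

```go
import (
	"archive/tar"
	"os"
	"syscall"
)

type Tar struct {
	w         *tar.Writer
	hardlinks map[uint64]string // inode -> first path written
}

func (t *Tar) header(path string, info os.FileInfo) (*tar.Header, error) {
	hdr, err := tar.FileInfoHeader(info, "")
	if err != nil {
		return nil, err
	}
	hdr.Name = path
	if stat, ok := info.Sys().(*syscall.Stat_t); ok && stat.Nlink > 1 && !info.IsDir() {
		if orig, seen := t.hardlinks[stat.Ino]; seen {
			// Later occurrences of this inode are written as hardlinks.
			hdr.Typeflag = tar.TypeLink
			hdr.Linkname = orig
			hdr.Size = 0
		} else {
			t.hardlinks[stat.Ino] = path
		}
	}
	return hdr, nil
}
```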
Kaniko uses mtime (as well as file contents and other attributes) to
determine if files have changed. COPY and ADD commands should _always_
update the mtime, because they actually overwrite the files. However it
turns out that the mtime can lag, so kaniko would sometimes add a new
layer when using COPY or ADD on a file, and sometimes would not. This
leads to a non-deterministic number of layers.
To fix this, we have updated the kaniko commands to be more
authoritative in declaring when they have changed a file (e.g. WORKDIR
will now only create the directory when it doesn't exist) and we will
trust those files and _always_ add them, instead of only adding them if
they haven't changed.
It is possible for RUN commands to also change the filesystem, in which
case kaniko has no choice but to look at the filesystem to determine
what has changed. For this case we have added a call to `sync`; however,
we still cannot guarantee that the mtime will not lag, which could cause the
number of layers to be non-deterministic. That said, when I tried to trigger
this behaviour with the RUN command, I couldn't.
This changes the snapshotting logic a bit; before this change, the last
command of the last stage in a Dockerfile would always scan the whole
file system and ignore the files returned by the kaniko command. Instead
we will now trust those files and assume that the snapshotting
performed by previous commands will be adequate.
Docker itself seems to rely on the storage driver to determine when
files have changed and so doesn't have to deal with these problems
directly.
An alternative implementation would use `inotify` to track which files
have changed. However that would mean watching every file in the
filesystem, and adding new watches as files are added. Not only is there
a limit on the number of files that can be watched, but according to the
man pages a) this can take a significant amount of time b) there is
complication around when events arrive (e.g. by the time they arrive,
the files may have changed) and lastly c) events can be lost, which
would mean we'd run into this non-deterministic behaviour again anyway.
Fixes #251
Issue #291 pointed out that the symlink "../proc/self/mounts" in the fedora image wasn't being extracted properly and kaniko was erroring out.
This is because the file path wasn't absolute, so kaniko wasn't recognizing it as a whitelisted path.
With this change, we first resolve a path to its absolute path before checking the whitelist.
* dep ensure and use k8schain
* checkpoint
* fix vendoring, stuff builds
* Use k8schain for pushes too
* Use NewNoClient
* update ggcr dep
* Move k8schain usage to image_util.go
When this test was originally created, an HTTP get to
`https://url.com/something/not/real` probably failed, but now it
will return a `503`, i.e. the `http.Get` call will succeed.
This test will now use a URL which should not reasonably ever
succeed (famous last words). Alternatively we could use dependency
injection and mock `http.Get` but it doesn't seem worth it.
This commit also updates the test to use `Run` to run each test
in the table test as a separate test so we can get a clear indication
which cases fail and which succeed.
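The subtest pattern, sketched with an illustrative table entry (the test name and URL are stand-ins; the .invalid TLD is reserved and never resolves):

```go
import (
	"net/http"
	"testing"
)

func TestRemoteBuildContext(t *testing.T) { // name illustrative
	tests := []struct {
		name    string
		url     string
		wantErr bool
	}{
		{name: "unresolvable host", url: "http://this.url.is.invalid/", wantErr: true},
	}
	for _, tc := range tests {
		// t.Run reports each table entry as its own named test.
		t.Run(tc.name, func(t *testing.T) {
			resp, err := http.Get(tc.url)
			if err == nil {
				resp.Body.Close()
			}
			if (err != nil) != tc.wantErr {
				t.Errorf("got err %v, wantErr %v", err, tc.wantErr)
			}
		})
	}
}
```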
* added switch to extract different sources as build context
* first rough implementation of aws s3
* added buildcontext package and interface
* added GetBuildContext func to buildcontext.go
added fallback to gcs
renamed GC struct to GCS
* improved the default behavior of build context retrieval
* renamed gc:// to gs:// in order to follow common standards
* renamed struct File to Dir and some cleanup work
* moved context.tar suffix to the buildcontext processors where it is needed
* added buildcontext retrieval as struct variable
added fallback if prefix in bucket specifier is present
* cleanup of structures
* added prefix to s3
* WIP
* Fixed build context bugs
* refactored build context
* adding metadata tests back to integration tests and fixing resulting bugs
* fix onbuild and default env
* removing old test files
* adding the ArgsEscaped boolean on CMD commands
* fix onbuild test
* ignore failing test until container-diff is fixed
* code comments
* adding todo to uncomment the failing test later
* Vendor changes for go-containerregistry switch.
* Manual changes for go-containerregistry switch.
The biggest change is refactoring the tarball unpacking.
* Pull more of container-diff out.
* More vendor removals.
* More unit tests.
* adding VOLUME command
* proper test project
* general fixes
* fixing project name
* fixing volume unit test
* fixing integration test
* adding tests
* adding util test
* fixing test
* actually create the volume mounted directory
* fix test