The zero value of time.Time is not a valid argument to os.Chtimes
because of the underlying syscall that os.Chtimes makes. Instead, we
convert the zero value of time.Time to the Unix epoch.
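A minimal sketch of the substitution, using a hypothetical chtimesSafe helper around os.Chtimes:

```go
package util

import (
	"os"
	"time"
)

// chtimesSafe is a hypothetical helper: os.Chtimes ultimately makes a
// utimes-style syscall that rejects time.Time's zero value, so we
// substitute the Unix epoch before making the call.
func chtimesSafe(path string, atime, mtime time.Time) error {
	if atime.IsZero() {
		atime = time.Unix(0, 0)
	}
	if mtime.IsZero() {
		mtime = time.Unix(0, 0)
	}
	return os.Chtimes(path, atime, mtime)
}
```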
When using the cache, the rootfs may not have been extracted, which prevents
uname/gname from resolving because there is no /etc/passwd or /etc/group.
This makes the layer unnecessarily differ from a cached layer that does
contain this information. Omitting these fields should be consistent with
Docker's behavior.
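As a hedged sketch, the layer writer can simply leave the name fields empty so the header bytes don't depend on whether the rootfs was extracted (kaniko's real header construction carries more state):

```go
package util

import (
	"archive/tar"
	"os"
)

// headerFor builds a tar header for a file, omitting Uname/Gname so the
// resulting layer is byte-identical whether or not /etc/passwd and
// /etc/group were available to resolve names.
func headerFor(path string, info os.FileInfo) (*tar.Header, error) {
	hdr, err := tar.FileInfoHeader(info, "")
	if err != nil {
		return nil, err
	}
	hdr.Name = path
	hdr.Uname = ""
	hdr.Gname = ""
	return hdr, nil
}
```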
Add a new option, additional-whitelist, which defaults
to a single entry, "/var/run". This allows users to
remove "/var/run" from the whitelist, or to retain the current
behavior by changing nothing.
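A minimal sketch of the flag shape, using the standard flag package for illustration (the real wiring in kaniko's CLI is an assumption here):

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

func main() {
	// "additional-whitelist" defaults to /var/run; passing an empty value
	// would drop /var/run from the whitelist entirely.
	wl := flag.String("additional-whitelist", "/var/run",
		"comma-separated paths to whitelist during snapshotting")
	flag.Parse()

	var whitelist []string
	if *wl != "" {
		whitelist = strings.Split(*wl, ",")
	}
	fmt.Println("whitelist:", whitelist)
}
```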
Previously, kaniko computed the cache key for a COPY command from the
combined hash of all files in a directory, even files that were listed
as ignored.
With this change, cache key creation is aware of ignored files.
Related issues:
* https://github.com/GoogleContainerTools/kaniko/issues/594
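A hedged sketch of the idea, where isIgnored stands in for kaniko's ignore matching:

```go
package util

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
	"path/filepath"
)

// copyCacheKey hashes a directory's contents while skipping ignored
// paths, so ignored files no longer change the cache key. isIgnored is
// a hypothetical predicate standing in for kaniko's matcher.
func copyCacheKey(root string, isIgnored func(string) bool) (string, error) {
	h := sha256.New()
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if isIgnored(path) {
			if info.IsDir() {
				return filepath.SkipDir
			}
			return nil
		}
		if info.IsDir() {
			return nil
		}
		h.Write([]byte(path)) // include the name as well as the contents
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(h, f)
		return err
	})
	if err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```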
Certain utilities, like apt, depend on the modtime
of certain files. Kaniko was not setting modtime when
extracting files, which broke tools like apt.
Kaniko now sets a file's mtime to the value from its tar
header.
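Roughly, the extraction path now does something like this after writing each file (a sketch; the zero-value handling echoes the os.Chtimes note above):

```go
package util

import (
	"archive/tar"
	"os"
	"time"
)

// setTimes applies the mtime recorded in the tar header to the
// extracted file. A zero AccessTime is mapped to the Unix epoch, since
// os.Chtimes rejects time.Time's zero value.
func setTimes(dest string, hdr *tar.Header) error {
	atime, mtime := hdr.AccessTime, hdr.ModTime
	if atime.IsZero() {
		atime = time.Unix(0, 0)
	}
	if mtime.IsZero() {
		mtime = time.Unix(0, 0)
	}
	return os.Chtimes(dest, atime, mtime)
}
```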
I needed this for my arm64 k8s cluster. I have zero Go experience but
enough experience with other things to fix the rebase (I think!). This
patch is working fine on my cluster.
Update the caching run and copy commands to use the new
GetFSFromLayers method, and include the whiteout option so that
whiteout files are extracted and included in extractedFiles. (A sketch
of the option shape follows the list below.)
* add util.GetFSFromLayers
* GetFSFromImage delegates to GetFSFromLayers
* add FSOpts and FSConfig for GetFSFromLayers
* add tests for GetFSFromLayers
* add gomock for test support
* add mock_v1 for layers
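A hedged sketch of the option shape these bullets describe; the exact signatures in kaniko may differ:

```go
package util

import v1 "github.com/google/go-containerregistry/pkg/v1"

// FSConfig collects the knobs for GetFSFromLayers.
type FSConfig struct {
	includeWhiteout bool
}

// FSOpt mutates an FSConfig.
type FSOpt func(*FSConfig)

// IncludeWhiteout asks extraction to keep whiteout files instead of
// applying and discarding them.
func IncludeWhiteout() FSOpt {
	return func(c *FSConfig) { c.includeWhiteout = true }
}

// GetFSFromLayers extracts the given layers under root, honoring opts.
// GetFSFromImage would fetch img.Layers() and delegate here.
func GetFSFromLayers(root string, layers []v1.Layer, opts ...FSOpt) ([]string, error) {
	cfg := &FSConfig{}
	for _, o := range opts {
		o(cfg)
	}
	// ... walk each layer's tar stream, extracting files (and, when
	// cfg.includeWhiteout is set, the .wh. entries) under root ...
	return nil, nil
}
```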
When using the COPY command, if the source and destination resolve to the
same file, the copy should be skipped rather than performed. This prevents
the file from being overwritten by itself and therefore producing an empty file.
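A minimal sketch of the check, using os.SameFile:

```go
package util

import "os"

// sameFile reports whether src and dest already refer to the same file,
// in which case the copy should be skipped: opening dest for writing
// would truncate src before it could be read.
func sameFile(src, dest string) (bool, error) {
	si, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	di, err := os.Stat(dest)
	if err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	return os.SameFile(si, di), nil
}
```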
Fixes #904
Previously we returned a low-level file system error when checking for
a cached image. By adding a more human-friendly log message and explicit
error handling, we improve the user experience.
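A hedged sketch of the error handling, with hypothetical names:

```go
package cache

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/sirupsen/logrus"
)

// findCachedImage distinguishes "not cached" from real filesystem
// failures and wraps the latter in a readable message. Names here are
// illustrative, not kaniko's exact API.
func findCachedImage(cacheDir, key string) (string, error) {
	path := filepath.Join(cacheDir, key)
	if _, err := os.Stat(path); err != nil {
		if os.IsNotExist(err) {
			logrus.Infof("No cached image found for key %s, building from scratch", key)
			return "", nil
		}
		return "", fmt.Errorf("checking cache at %s: %v", path, err)
	}
	return path, nil
}
```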
This change allows the kaniko warmer to be run multiple times without unnecessary image downloads.
To check whether an image is present in the cache directory, I reuse the existing cache function used by the kaniko executor. I considered writing a separate function that only checks for image presence, but it would contain pretty much the same code.
One questionable decision is embedding the CacheOptions type in both KanikoOptions and WarmerOptions. This could probably be resolved by creating an interface that provides the needed options and implementing it on both structs, but I struggled to find a meaningful name for it.
To replicate the previous behavior of downloading regardless of cache state, I added a --force (-f) option.
This change provides a crucial speed-up when downloading images from a remote registry is slow.
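A sketch of the embedding described above (field sets are illustrative, not kaniko's actual structs):

```go
package config

import "time"

// CacheOptions holds the cache settings shared by the executor and the
// warmer.
type CacheOptions struct {
	CacheDir string
	CacheTTL time.Duration
}

// KanikoOptions are the executor's options; CacheOptions is embedded
// rather than duplicated.
type KanikoOptions struct {
	CacheOptions
	// ... executor-specific fields ...
}

// WarmerOptions are the warmer's options, sharing the same cache settings.
type WarmerOptions struct {
	CacheOptions
	Images []string
	Force  bool // re-download even when the image is already cached
}
```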
Closes #722
Resolves #607
* Deleted a duplicate Gopkg.lock block for github.com/otiai10/copy to
prevent `dep ensure` from deleting it from vendor/
* Searched for breaking changes. Only found ones for
  remote.Delete/List/Write/WriteIndex. Searched for those and fixed them
* Noticed that NewInsecureRegistry was deprecated and replaced it
* Revert "Change cache key calculation to be more reproducible. (#525)"
This reverts commit 1ffae47fdd.
* Add logging of composition key back
* Do not include build args in cache key
This should be safe, given that the commands will have the args included
when the cache key gets built.
This change calculates the exact files and directories that later stages
need via COPY commands. Instead of saving the entire
stage as a tarball, we now save only the necessary files.
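A hedged sketch of the calculation, where copySources is a hypothetical list of the COPY sources that reference the stage:

```go
package build

import "path/filepath"

// filesToSave resolves the COPY sources that later stages consume into
// concrete paths under the stage's root, so only those files need to be
// persisted instead of the whole filesystem.
func filesToSave(stageRoot string, copySources []string) ([]string, error) {
	var files []string
	for _, src := range copySources {
		matches, err := filepath.Glob(filepath.Join(stageRoot, src))
		if err != nil {
			return nil, err
		}
		files = append(files, matches...)
	}
	return files, nil
}
```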
Calculating a manifest from a v1.Tarball is very expensive. We can
store the manifests locally as well, and use them if they exist.
This should eventually be replaced with OCI layout support once that exists
in ggcr.
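A hedged sketch: persist the manifest bytes next to the cached tarball so later runs can read them back instead of recomputing (the path handling is illustrative):

```go
package cache

import (
	"os"

	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// writeManifest stores the image's raw manifest alongside its tarball;
// a later lookup can read this file instead of paying the cost of
// recomputing the manifest from the v1.Tarball.
func writeManifest(img v1.Image, manifestPath string) error {
	m, err := img.RawManifest()
	if err != nil {
		return err
	}
	return os.WriteFile(manifestPath, m, 0644)
}
```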
If a previous base image has a volume, the directory is added to the
list of files to snapshot. That directory may not actually exist in the
image, which can break extraction and our snapshot optimizations.
Before, we were using the full image digest, but that contains a timestamp. Now
we use only the layers themselves and the image config (env vars, etc.).
Also fix a bug in unpacking the layers themselves: mtimes can change during
unpacking, so we now set them all once at the end.
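A hedged sketch of a timestamp-free key, hashing only the layer digests plus the raw config:

```go
package cache

import (
	"crypto/sha256"
	"encoding/hex"

	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// stableKey derives a cache key from the layer digests and the image
// config only, so the timestamp baked into the full image digest no
// longer invalidates the cache.
func stableKey(img v1.Image) (string, error) {
	h := sha256.New()
	layers, err := img.Layers()
	if err != nil {
		return "", err
	}
	for _, l := range layers {
		d, err := l.Digest()
		if err != nil {
			return "", err
		}
		h.Write([]byte(d.String()))
	}
	cfg, err := img.RawConfigFile()
	if err != nil {
		return "", err
	}
	h.Write(cfg)
	return hex.EncodeToString(h.Sum(nil)), nil
}
```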
Right now, when we find a v1.Tarball in the local disk cache, we
recompute the digest. This is very expensive and redundant, because
we store tarballs by their digest and use that as the key to look them up.
This PR adds support for the .dockerignore file. Previously, when kaniko
had support for the dockerignore file, it first went through the
build context and deleted files that were meant to be ignored. This
resulted in a really bad bug where files in user-mounted volumes would
be deleted (my bad).
This time around, instead of modifying the build context at all, kaniko
will check whether a file should be excluded when executing ADD/COPY
commands. If a file should be excluded (based on the .dockerignore), it
won't be copied over from the build context and shouldn't end up in the
final image.
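A hedged sketch of the copy-time check; excluded stands in for kaniko's .dockerignore matcher:

```go
package commands

import (
	"os"
	"path/filepath"
)

// resolveSources walks a COPY/ADD source inside the build context and
// returns only the paths that are not excluded by the .dockerignore.
// The matcher itself is kaniko's concern; here it is just a predicate.
func resolveSources(contextRoot, src string, excluded func(string) bool) ([]string, error) {
	var files []string
	err := filepath.Walk(filepath.Join(contextRoot, src),
		func(path string, info os.FileInfo, err error) error {
			if err != nil {
				return err
			}
			rel, err := filepath.Rel(contextRoot, path)
			if err != nil {
				return err
			}
			if excluded(rel) {
				return nil // never copied, so never in the final image
			}
			files = append(files, path)
			return nil
		})
	return files, err
}
```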
I also added a .dockerignore file and Dockerfile as an integration test,
which should fail if the dockerignore is not processed correctly or if files
aren't excluded correctly.
Also, I removed all the integration testing from the previous version of the
dockerignore support.
Right now kaniko only supports COPY --from=<another stage>.
This commit adds support for the case where the referenced image is a remote image
in a registry that has not been used as a stage yet in the build.
Now that hardlink destinations take into account the directory they are being extracted to, the unit test had to be updated to make sure that two hardlinks were extracted to /tmp/hardlink correctly.
When we execute multistage builds, we store the fs of each intermediate
stage at /kaniko/<stage number> if it's used later in the build. This
created a bug when extracting hardlinks, because we weren't appending
the new directory to the link path.
So, if `/tmp/file1` and `/tmp/file2` were hardlinked, kaniko was trying
to link `/kaniko/0/tmp/file1` to `/tmp/file2` instead of
`/kaniko/0/tmp/file2`. This change will append the correct directory to
the link, and fixes #437, #362, #352, #342.
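A minimal sketch of the corrected link handling, with extractRoot being e.g. /kaniko/0:

```go
package util

import (
	"archive/tar"
	"os"
	"path/filepath"
)

// extractHardlink joins the extraction root onto both ends of the link,
// so a link recorded as /tmp/file1 -> /tmp/file2 inside stage 0 becomes
// /kaniko/0/tmp/file1 -> /kaniko/0/tmp/file2.
func extractHardlink(extractRoot string, hdr *tar.Header) error {
	target := filepath.Join(extractRoot, hdr.Linkname)
	dest := filepath.Join(extractRoot, hdr.Name)
	return os.Link(target, dest)
}
```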
filepath.Walk has a special error value, filepath.SkipDir, that you can
return from your walkFn to indicate it should skip a directory. This change
makes use of that to skip whitelisted directories.
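The pattern in question (isWhitelisted is a stand-in for kaniko's whitelist check):

```go
package util

import (
	"os"
	"path/filepath"
)

// walkFS walks root and prunes whitelisted directories entirely:
// returning filepath.SkipDir from the walkFn tells filepath.Walk not to
// descend into that directory at all.
func walkFS(root string, isWhitelisted func(string) bool) ([]string, error) {
	var files []string
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() && isWhitelisted(path) {
			return filepath.SkipDir
		}
		files = append(files, path)
		return nil
	})
	return files, err
}
```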
This is the final part of an optimization that I've been refactoring towards for a while.
If the Dockerfile consists of no RUN commands, or only cached RUN commands, followed by metadata-only
operations, we can skip downloading and unpacking the base image.
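A hedged sketch of the decision, over a hypothetical command interface (kaniko's real interfaces differ):

```go
package build

// command is a hypothetical view of a Dockerfile instruction for this
// sketch.
type command interface {
	// RequiresUnpackedFS reports whether executing the command needs the
	// base image's filesystem on disk (e.g. RUN does, ENV does not).
	RequiresUnpackedFS() bool
	// CacheHit reports whether the command will be served from cache.
	CacheHit() bool
}

// needsBaseImage is true only if some command must actually touch the
// filesystem; otherwise downloading and unpacking the base image can be
// skipped and the build is metadata-only.
func needsBaseImage(cmds []command) bool {
	for _, c := range cmds {
		if c.RequiresUnpackedFS() && !c.CacheHit() {
			return true
		}
	}
	return false
}
```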
* parse ARG commands at the top of Dockerfiles
* fix pointer reference bug and remove debugging
* fixing tests
* account for meta args with no value
* don't take fs snapshot if / is the only changed path
* move metaArgs inside KanikoStage
* removing unused property
* check for any directory instead of just /
* remove unnecessary check
When building Docker images, layers were previously stored in memory.
This caused obvious problems when manipulating large layers, and could
cause Kaniko to crash.
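One way to avoid the buffering, sketched here with go-containerregistry's tarball.LayerFromFile, which streams from disk rather than holding the bytes in memory (whether this matches kaniko's exact fix is an assumption):

```go
package build

import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// layerFromDisk creates a v1.Layer backed by a tarball on disk. The
// layer's contents are read from the path on demand, so large layers no
// longer have to fit in memory.
func layerFromDisk(path string) (v1.Layer, error) {
	return tarball.LayerFromFile(path)
}
```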