During a snapshot, when a file changed but not its parent directories,
the parent directories weren't added to the layer. This is inconsistent
with Docker's behavior, which always adds parent directories to the
layer. In some edge cases, this could lead to Docker considering that
the parent directories were owned by root in subsequent layers even
though they shouldn't be (see #1163).
Also, Docker appears to be POSIX compliant regarding the names of
directories in the archive, which always have a slash appended. This
commit also fixes this.
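A minimal sketch of that convention, assuming a hypothetical helper
(this is not kaniko's actual code):

```go
// Hypothetical sketch: directory entries in the archive carry a trailing
// slash, matching the POSIX/Docker convention described above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// tarNameFor appends "/" to the in-archive name when the entry is a directory.
func tarNameFor(path string, fi os.FileInfo) string {
	if fi.IsDir() && !strings.HasSuffix(path, "/") {
		return path + "/"
	}
	return path
}

func main() {
	fi, err := os.Stat("/tmp")
	if err != nil {
		panic(err)
	}
	fmt.Println(tarNameFor("/tmp", fi)) // prints "/tmp/"
}
```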
Fixes #1163
filesToAdd is sorted in TakeSnapshotFS, but not here. That makes the
ordering within the layer's tarball unpredictable, causing the SHA to
differ even when the layer contents haven't changed.
When a Dockerfile command requires using the TakeSnapshotFS function,
the resulting layer has a random ordering of files. This gives the
layer a non-deterministic hash, defeating the reproducible flag.
Issue #710 appears to document this as well.
To fix this, always sort the list of files to be added in
scanFullFilesystem. This avoids trying to sort the file list during
execution, and takes almost no time to complete.
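A rough sketch of the fix, assuming filesToAdd is the slice produced by
the filesystem scan (the function body below is illustrative, not the
actual implementation):

```go
// Illustrative sketch: sort the scanned file list once, so the tarball
// (and therefore the layer SHA) is deterministic across builds.
package main

import (
	"fmt"
	"sort"
)

func scanFullFilesystem(filesToAdd []string) []string {
	// Sorting here gives every consumer a stable ordering, so layers built
	// from identical contents hash identically.
	sort.Strings(filesToAdd)
	return filesToAdd
}

func main() {
	fmt.Println(scanFullFilesystem([]string{"/etc/hosts", "/bin/sh", "/app/main"}))
	// Output: [/app/main /bin/sh /etc/hosts]
}
```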
* Add parent directories of added files
* Add integration Dockerfile to test parent directory permissions
* Remove unnecessary helper method
* Use a file on the internet for integration Dockerfile
From the docs on filepath.SkipDir:
> If the function returns SkipDir when invoked on a non-directory file, Walk skips the remaining files in the containing directory
This was causing the bug in #457: since the file `/etc/hosts` was in the whitelist, returning filepath.SkipDir for it caused the entire /etc directory to be skipped.
This change only returns filepath.SkipDir on directories.
filepath.Walk has a special error you can return from your walkFn
indicating it should skip directories. This change makes use of that
to skip whitelisted directories.
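A sketch of the corrected walkFn, under an illustrative whitelist (the
real whitelist handling in kaniko is more involved):

```go
// Sketch of the fix: only return filepath.SkipDir for directories.
// Returning it for a whitelisted *file* (e.g. /etc/hosts) would make
// filepath.Walk skip the rest of the containing directory.
package main

import (
	"os"
	"path/filepath"
)

var whitelist = map[string]bool{"/etc/hosts": true, "/proc": true}

func walkFn(path string, info os.FileInfo, err error) error {
	if err != nil {
		return err
	}
	if whitelist[path] {
		if info.IsDir() {
			return filepath.SkipDir // safe: skips only this directory
		}
		return nil // skip the file itself, but keep walking its siblings
	}
	// ... snapshot the file ...
	return nil
}

func main() {
	filepath.Walk("/", walkFn)
}
```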
When building Docker images, layers were previously stored in memory.
This caused obvious problems when manipulating large layers, which could
make Kaniko crash.
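As an illustration of the direction (not the actual kaniko code), a
layer tarball can stream to a temporary file instead of accumulating in
an in-memory buffer:

```go
// Illustrative sketch: write the layer tarball to a temp file so large
// layers don't have to fit in memory.
package main

import (
	"archive/tar"
	"os"
)

func main() {
	f, err := os.CreateTemp("", "layer-*.tar")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	tw := tar.NewWriter(f) // entries stream straight to disk
	defer tw.Close()
	// ... add files to tw as they are scanned ...
}
```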
This will return a string representation of the current filesystem to be
used with caching.
Whenever a file is explicitly added (via ADD or COPY), it will be stored
in "added" in the LayeredMap. The file will map to a hash created by
CacheHasher (which doesn't take mtime into account, since mtime differs
with every build and would make the cache useless).
Key() returns a SHA of the added files, which is used in determining
the overall cache key for a command.
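A simplified sketch of that shape; the names follow the description
above, but the hashing details are illustrative:

```go
// Simplified sketch of the "added" bookkeeping described above. The real
// LayeredMap and CacheHasher live in kaniko; the details here are
// illustrative only.
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

type LayeredMap struct {
	added map[string]string // file path -> CacheHasher-style hash (no mtime)
}

// Key returns a SHA over the added files, used as part of a command's
// overall cache key. Sorting keeps the key deterministic.
func (l *LayeredMap) Key() string {
	paths := make([]string, 0, len(l.added))
	for p := range l.added {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	h := sha256.New()
	for _, p := range paths {
		fmt.Fprintf(h, "%s=%s\n", p, l.added[p])
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	l := &LayeredMap{added: map[string]string{"/app/main": "abc123"}}
	fmt.Println(l.Key())
}
```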
To make the logic a bit clearer, when snapshotting files, the
parent dirs are now snapshotted in a separate loop from the files we
are actually trying to snapshot. Unfortunately this loop is nearly
duplicated, but I did manage to group some of the related logic
together:
- A function to check whether a file should be snapshotted (e.g. it isn't
whitelisted, etc.)
- A `Tar` type to handle some of the logic around tar-ing, e.g.
tracking hardlinks and stat-ing files before adding them (see the
sketch below)
One side effect of this is that when snapshotting the file system,
files are now stat-ed twice.
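A stripped-down sketch of such a `Tar` type, assuming Linux (the
hardlink bookkeeping and method names are illustrative):

```go
// Stripped-down sketch: a Tar type that stats files before adding them
// and tracks hardlinks by inode, so the second link becomes a tar
// hardlink entry. Details beyond the commit description are illustrative.
package main

import (
	"archive/tar"
	"io"
	"os"
	"syscall"
)

type Tar struct {
	w         *tar.Writer
	hardlinks map[uint64]string // inode -> first path written
}

func NewTar(w io.Writer) *Tar {
	return &Tar{w: tar.NewWriter(w), hardlinks: map[uint64]string{}}
}

func (t *Tar) AddFileToTar(path string) error {
	fi, err := os.Lstat(path) // stat before adding
	if err != nil {
		return err
	}
	hdr, err := tar.FileInfoHeader(fi, "")
	if err != nil {
		return err
	}
	hdr.Name = path
	if st, ok := fi.Sys().(*syscall.Stat_t); ok && st.Nlink > 1 {
		if orig, seen := t.hardlinks[st.Ino]; seen {
			// Later links become hardlink entries pointing at the first.
			hdr.Typeflag = tar.TypeLink
			hdr.Linkname = orig
			hdr.Size = 0
		} else {
			t.hardlinks[st.Ino] = path
		}
	}
	if err := t.w.WriteHeader(hdr); err != nil {
		return err
	}
	if hdr.Typeflag == tar.TypeReg {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(t.w, f)
		return err
	}
	return nil
}

func main() {
	out, err := os.Create("/tmp/example-layer.tar")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	t := NewTar(out)
	if err := t.AddFileToTar("/etc/hostname"); err != nil {
		panic(err)
	}
	t.w.Close()
}
```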
Kaniko uses mtime (as well as file contents and other attributes) to
determine if files have changed. COPY and ADD commands should _always_
update the mtime, because they actually overwrite the files. However it
turns out that the mtime can lag, so kaniko would sometimes add a new
layer when using COPY or ADD on a file, and sometimes would not. This
leads to a non-deterministic number of layers.
To fix this, we have updated the kaniko commands to be more
authoritative in declaring when they have changed a file (e.g. WORKDIR
will now only create the directory when it doesn't exist), and we will
trust those files and _always_ add them, instead of adding them only
when they appear to have changed.
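A sketch of that pattern for WORKDIR (the function name and return
convention are hypothetical):

```go
// Hypothetical sketch: WORKDIR creates the directory only when it
// doesn't exist, and declares a changed file only in that case, so the
// snapshotter can trust the command's own list instead of comparing
// mtimes.
package main

import (
	"fmt"
	"os"
)

func execWorkdir(dir string) ([]string, error) {
	if _, err := os.Stat(dir); os.IsNotExist(err) {
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return nil, err
		}
		return []string{dir}, nil // we changed the filesystem: declare it
	}
	return nil, nil // directory already existed: nothing changed
}

func main() {
	changed, err := execWorkdir("/tmp/kaniko-workdir-example")
	if err != nil {
		panic(err)
	}
	fmt.Println(changed)
}
```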
It is possible for RUN commands to also change the filesystem, in which
case kaniko has no choice but to look at the filesystem to determine
what has changed. For this case we have added a call to `sync`; however,
we still cannot guarantee that the mtime will not lag, which could cause
the number of layers to be non-deterministic. That said, when I tried to
provoke this behaviour with the RUN command, I couldn't.
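The `sync` call itself is small; a minimal sketch, assuming Linux:

```go
// Minimal sketch: flush pending writes before scanning the filesystem
// after a RUN command, reducing (but not eliminating) the chance of a
// lagging mtime.
package main

import "syscall"

func main() {
	syscall.Sync() // commit filesystem caches to disk (Linux)
	// ... then walk the filesystem to detect changes ...
}
```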
This changes the snapshotting logic a bit; before this change, the last
command of the last stage in a Dockerfile would always scan the whole
file system and ignore the files returned by the kaniko command. Instead
we will now trust those files and assume that the snapshotting
performed by previous commands will be adequate.
Docker itself seems to rely on the storage driver to determine when
files have changed and so doesn't have to deal with these problems
directly.
An alternative implementation would use `inotify` to track which files
have changed. However that would mean watching every file in the
filesystem, and adding new watches as files are added. Not only is there
a limit on the number of files that can be watched, but according to the
man pages: a) this can take a significant amount of time, b) there is
complication around when events arrive (e.g. by the time they arrive,
the files may have changed), and c) events can be lost, which would mean
we'd run into this non-deterministic behaviour again anyway.
Fixes #251
Issue #291 pointed out that the symlink "../proc/self/mounts" in the
fedora image wasn't being extracted properly and kaniko was erroring
out. This is because the file path wasn't absolute, so kaniko wasn't
recognizing it as a whitelisted path. With this change, we first resolve
a path to its absolute path before checking the whitelist.
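A rough sketch of that resolution step (the whitelist contents and
helper name are illustrative):

```go
// Rough sketch: resolve a possibly-relative symlink target (like
// "../proc/self/mounts") against its parent directory before the
// whitelist check. The whitelist here is illustrative.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

var whitelist = []string{"/proc", "/sys"}

func isWhitelisted(linkDir, target string) bool {
	abs := target
	if !filepath.IsAbs(abs) {
		abs = filepath.Join(linkDir, target) // Join also cleans the path
	}
	for _, w := range whitelist {
		if abs == w || strings.HasPrefix(abs, w+"/") {
			return true
		}
	}
	return false
}

func main() {
	// "../proc/self/mounts" relative to /etc resolves to /proc/self/mounts.
	fmt.Println(isWhitelisted("/etc", "../proc/self/mounts")) // true
}
```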
* Vendor changes for go-containerregistry switch.
* Manual changes for go-containerregistry switch.
The biggest change is refactoring the tarball unpacking.
* Pull more of container-diff out.
* More vendor removals.
* More unit tests.