Currently, kaniko can only build a single image per container run, because the filesystem is left full of the content of the first image.
When running kaniko in Jenkins, where we need to start the container "doing nothing" first (using the debug kaniko container) and then exec /kaniko/executor, this is a limitation: building multiple images means starting multiple containers - see https://groups.google.com/forum/#!topic/kaniko-users/_7LivHdMdy0 for more details.
A solution is to add a new flag that cleans up the filesystem at the end - the same way it is done between stages when building a multi-stage image. This way, the same (debug) container can be used to build multiple images.
* set default HOME env properly
* set HOME to / if user is set by uid
* fix test
* continue to skip user_run test
* fix unit test to match new functionality
* Enable overwriting of links (solves #351)
* add integration test to check extraction of images with replaced hardlinks
* Prevent following symlinks during extracting normal files
This fixes #359, #361, and #362.
I changed RunE to Run so that usage wouldn't show upon error. Usage will
still show if PersistentPreRunE fails, which makes sense since those
functions check to make sure arguments passed in are valid.
Also changed logging of multi-arg flags to Debugf so that output would
be cleaner.
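For illustration, the shape of the change on the executor's root command might look like this (a sketch only; execute() is a hypothetical stand-in for the real entrypoint):

```go
package cmd

import (
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
)

// RootCmd uses Run instead of RunE: errors are logged and exited on here,
// so cobra never sees an error and never prints usage for it.
var RootCmd = &cobra.Command{
	Use: "executor",
	Run: func(cmd *cobra.Command, args []string) {
		if err := execute(); err != nil {
			logrus.Fatal(err) // exits non-zero without printing usage
		}
	},
}

// execute is a hypothetical stand-in for the actual build entrypoint.
func execute() error { return nil }
```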
To add layer caching to kaniko, I added two flags: --cache and
--use-cache.
If --use-cache is set, the cache will be used, and if --cache is
specified, that repo will be used to store cached layers. If --cache
isn't set, a cache repo will be inferred from the provided destination.
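A minimal sketch of how the two flags could be wired up with cobra, assuming these variable and function names (they are not kaniko's actual identifiers):

```go
package cmd

import "github.com/spf13/cobra"

var (
	useCache  bool   // --use-cache: opt in to layer caching
	cacheRepo string // --cache: repo in which to store cached layers
)

// addCacheFlags registers the two caching flags described above.
func addCacheFlags(cmd *cobra.Command) {
	cmd.PersistentFlags().BoolVar(&useCache, "use-cache", false,
		"Use cached layers when building")
	cmd.PersistentFlags().StringVar(&cacheRepo, "cache", "",
		"Repo for cached layers; inferred from the destination if unset")
}
```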
Currently, caching only works for RUN commands. Before executing the
command, kaniko checks if the cached layer exists. If it does, it pulls
it and extracts it, then adds those files to the snapshotter and
appends a layer to the config history. If the cached layer does not exist,
kaniko executes the command and pushes the newly created layer to the cache.
All cached layers are tagged with a stable key, which is built based off
of:
1. The base image digest
2. The current state of the filesystem
3. The current command being run
4. The current config file (to account for metadata changes)
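A minimal sketch of how a key could be derived from those four inputs (illustrative, not kaniko's actual implementation):

```go
package cache

import (
	"crypto/sha256"
	"fmt"
)

// compositeKey derives a stable cache key from the four inputs listed
// above; any change to one of them invalidates the cached layer.
func compositeKey(baseDigest, fsState, command, config string) string {
	h := sha256.New()
	for _, part := range []string{baseDigest, fsState, command, config} {
		// Length-prefix each part so different concatenations can't collide.
		fmt.Fprintf(h, "%d:%s\n", len(part), part)
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}
```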
I also added two integration tests to make sure caching works:
1. Dockerfile_test_cache runs 'date', which should be exactly the same
the second time the image is built
2. Dockerfile_test_cache_install makes sure apt-get install can be
reproduced
gometalinter is broken at HEAD, and I looked into why that was. During
that process, I remembered that we took the linting scripts from
skaffold, and found that in skaffold gometalinter was replaced with
GolangCI-Lint:
https://github.com/GoogleContainerTools/skaffold/pull/619
The change made linting in skaffold faster, so I figured instead of
fixing gometalinter it made more sense to remove it and replace it with
GolangCI-Lint for kaniko as well.
As mentioned in #346, if only ENTRYPOINT is set in a stage then any
CMD inherited from a parent should be cleared.
If both ENTRYPOINT and CMD are set then nothing should change.
I added a function (and unit test) that reviews the config file after
building a stage and clears out config.Cmd if ENTRYPOINT was declared
but CMD wasn't.
I also added an integration test to make sure this works; it should be
covered by the preexisting container-diff --metadata test.
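A sketch of what that review step might look like, assuming go-containerregistry's v1.Config; the function signature here is an assumption:

```go
package dockerfile

import v1 "github.com/google/go-containerregistry/pkg/v1"

// reviewConfig clears an inherited CMD when the stage declared ENTRYPOINT
// but not CMD, so the parent image's CMD isn't silently combined with the
// new ENTRYPOINT.
func reviewConfig(entrypointDeclared, cmdDeclared bool, config *v1.Config) {
	if entrypointDeclared && !cmdDeclared {
		config.Cmd = nil
	}
}
```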
While looking into #345, we were seeing the error:
Error: error building image: chmod /etc/mtab: operation not permitted
during extraction of `amazonlinux:1`. I looked into why kaniko couldn't
extract this file properly, and found that it already existed as a
symlink pointing to /proc/mounts, which returned an error when we tried
to run chmod on it.
Confusingly, in the image /etc/mtab is a regular file, not a
symlink.
I can think of two ways to solve this problem:
1. Whitelist /etc/mtab so that whatever already exists in the system
is used
2. Check if a regular file already exists, and hasn't been extracted yet,
before extracting
I went with option 1 because for option 2 we'd have to keep a list of
all files that had been extracted in memory.
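A minimal sketch of option 1, with an illustrative whitelist (kaniko's actual list and helper names differ):

```go
package util

import "strings"

// whitelist holds paths whose existing contents are left untouched during
// extraction; /etc/mtab joins mount-related paths kaniko must not clobber.
var whitelist = []string{"/etc/mtab", "/proc", "/sys", "/dev"}

// isWhitelisted reports whether path is whitelisted or lives under a
// whitelisted directory.
func isWhitelisted(path string) bool {
	for _, w := range whitelist {
		if path == w || strings.HasPrefix(path, w+"/") {
			return true
		}
	}
	return false
}
```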
Refactoring builds by stage will make it easier to generate cache keys
for layers, since the stageBuilder type will contain everything required
to generate the key:
1. Base image with digest
2. Config file
3. Snapshotter (which will provide a key for the filesystem)
4. The current command (which will be passed in)
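Roughly, the stageBuilder might look like this; the field names, types, and the fsKeyer interface are assumptions based on the list above:

```go
package executor

import v1 "github.com/google/go-containerregistry/pkg/v1"

// stageBuilder bundles everything needed to build one stage and, later,
// to derive cache keys for its layers.
type stageBuilder struct {
	image       v1.Image       // 1. base image, resolvable to its digest
	cf          *v1.ConfigFile // 2. config file for the stage
	snapshotter fsKeyer        // 3. snapshotter, which supplies a filesystem key
	// 4. the current command is passed in per command rather than stored
}

// fsKeyer is an illustrative stand-in for the snapshotter's key method.
type fsKeyer interface {
	Key() (string, error)
}
```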
This will return a string representation of the current filesystem to be
used with caching.
Whenever a file is explicitly added (via ADD or COPY), it will be stored
in "added" in the LayeredMap. The file will map to a hash created by
CacheHasher (which doesn't take into account mtime, since that will be
different with every build, making the cache useless).
Key() returns a sha of the added files, which is used in
determining the overall cache key for a command.
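A sketch of the described behavior, assuming this simplified shape for the LayeredMap (the real type tracks more state):

```go
package snapshot

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// LayeredMap tracks files explicitly added via ADD or COPY.
type LayeredMap struct {
	added map[string]string // path -> hash from CacheHasher (mtime excluded)
}

// Key returns a sha over the added files. Paths are sorted first so the
// key is stable regardless of map iteration order.
func (l *LayeredMap) Key() (string, error) {
	paths := make([]string, 0, len(l.added))
	for p := range l.added {
		paths = append(paths, p)
	}
	sort.Strings(paths)
	h := sha256.New()
	for _, p := range paths {
		fmt.Fprintf(h, "%s=%s\n", p, l.added[p])
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}
```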
CacheCommand returns true if the command should be cached. Currently,
it's only true for RUN, but support can be added for ADD/COPY later on
(these are different since the contents of files for ADD/COPY need to be
included in the cache key generation).
I also changed CreatedBy to String so that we can log each command
before cache extraction or regular execution takes place.
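In interface terms, the two methods might look like this; only the method names and semantics come from the description above, the interface shape itself is an assumption:

```go
package commands

// command is an illustrative slice of kaniko's Dockerfile-command
// abstraction, showing just the two methods discussed here.
type command interface {
	// String is a human-readable form of the command, logged before
	// cache extraction or regular execution.
	String() string
	// CacheCommand reports whether the command's layer should be cached.
	// Currently only RUN returns true; ADD/COPY could follow once file
	// contents feed into cache key generation.
	CacheCommand() bool
}
```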
The bug in #329 occurred because of a bug in matchSources, where the
filepath wasn't absolute, so the source "/kaniko-bug/*" wasn't being
matched to the file "kaniko-bug/test-file"
To fix this, I added logic for making filepaths absolute and added to
the unit test for the function to test that it works.
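A sketch of the fix, assuming matchSources matches with filepath.Match; the function body here is illustrative:

```go
package util

import "path/filepath"

// matchSource makes the candidate file absolute before matching, so a
// source pattern like "/kaniko-bug/*" matches "kaniko-bug/test-file".
func matchSource(srcPattern, file string) (bool, error) {
	if !filepath.IsAbs(file) {
		file = filepath.Join("/", file)
	}
	return filepath.Match(srcPattern, file)
}
```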
Extracting the layers of the filesystem in order will make it easier to
extract cached layers and deal with hardlinks.
This PR implements extracting in order and adds an integration test to
make sure hardlinks are extracted properly.
It also fixes two bugs I found when extracting symlinks:
1. We'd get a "file exists" error when trying to symlink to an existing
file with a whiteout later in the layer tarball
2. We'd get a "file exists" error when trying to create a symlink from a
file that was created in a prior layer (perhaps as a regular file or as
a symlink pointing to something else)
To fix both of these, we resolve all symlinks in a layer at the end. I
also added logic to delete any existing paths before creating the
symlink.
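A minimal sketch of that symlink handling (names are illustrative):

```go
package util

import "os"

// createSymlink removes whatever already exists at path (a file from an
// earlier layer, or a stale link) so os.Symlink doesn't fail with
// "file exists". It runs after all regular files in the layer have been
// extracted, since symlinks are resolved at the end.
func createSymlink(target, path string) error {
	if _, err := os.Lstat(path); err == nil {
		if err := os.Remove(path); err != nil {
			return err
		}
	}
	return os.Symlink(target, path)
}
```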
I added a KanikoStage type to hold each stage of the Dockerfile along
with information about the stage that will be useful later on (a rough
sketch of the type follows the list below).
The new KanikoStage type holds the stage itself, along with some
additional information:
1. FinalStage -- whether the current stage is the final stage
2. BaseImageStoredLocally/BaseImageIndex -- whether the base image for
this stage is stored locally, and if so what the index of the base image
is
3. SaveStage -- whether this stage needs to be saved for use in a future
stage
This is the first part of a larger refactor for building stages, which
will later make it easier to add layer caching.
I changed UnpackLocalTarArchive to return a list of files that were
extracted, so that the list of snapshotted files for ADD is more
accurate. Previously, we added all files in the extracted dir to be
snapshotted, but this could result in preexisting files being
snapshotted again.
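A sketch of the changed contract, handling just directories and regular files for brevity (the real extraction covers more entry types):

```go
package util

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// UnpackLocalTarArchive extracts tarPath into dest and returns the paths
// it actually wrote, so only those files are queued for snapshotting.
func UnpackLocalTarArchive(tarPath, dest string) ([]string, error) {
	f, err := os.Open(tarPath)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var extracted []string
	tr := tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		path := filepath.Join(dest, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(path, os.FileMode(hdr.Mode)); err != nil {
				return nil, err
			}
		case tar.TypeReg:
			out, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				return nil, err
			}
			if _, err := io.Copy(out, tr); err != nil {
				out.Close()
				return nil, err
			}
			out.Close()
		}
		extracted = append(extracted, path)
	}
	return extracted, nil
}
```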
Before #289 was merged, when copying over directories for COPY kaniko
would get a list of all files at the destination specified and add them
to the list of files to be snapshotted. If the destination was root it
would add all files. This worked because the snapshotter made sure the
file had been changed before adding it to the layer.
After #289, we changed the logic to add all files snapshotted to a layer
without checking if the files had been changed. This created the bug in
#314: COPY got all the files at root and added them to the layer without
checking if they had been changed.
This change should fix this bug. Now, the CopyDir function returns a
list of files it copied over and only those files are added to the list
of files to be snapshotted.
Should fix #314.
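A sketch of the new CopyDir contract (symlinks and special files are omitted for brevity; the real implementation handles them):

```go
package util

import (
	"io"
	"os"
	"path/filepath"
)

// CopyDir copies src into dest and returns the destination paths it wrote,
// so only those files are added to the list of files to be snapshotted.
func CopyDir(src, dest string) ([]string, error) {
	var copied []string
	err := filepath.Walk(src, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil {
			return walkErr
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		destPath := filepath.Join(dest, rel)
		if info.IsDir() {
			if err := os.MkdirAll(destPath, info.Mode()); err != nil {
				return err
			}
		} else {
			in, err := os.Open(path)
			if err != nil {
				return err
			}
			out, err := os.OpenFile(destPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, info.Mode())
			if err != nil {
				in.Close()
				return err
			}
			_, err = io.Copy(out, in)
			in.Close()
			out.Close()
			if err != nil {
				return err
			}
		}
		copied = append(copied, destPath)
		return nil
	})
	return copied, err
}
```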