* spelling: environment
* spelling: into
* spelling: certificate
* grammar: based on
* spelling: its
* spelling: non-root
* grammar: unnecessary comma in a compound predicate
* grammar: comma after introductory phrase
* grammar: pronoun problem
(per app.grammarly.com)
Links to command-line arguments were all broken on GitHub, since preceding dashes were missing from the links. This commit contains an autogenerated doc after running: doctoc --github README.md (version 1.4.0)
This flag, when set, takes a file in the container and writes the image digest to it. This can be used by surrounding tooling to extract the exact digest of the built image without having to parse the logs from Kaniko, for example by pointing the file to a mounted volume or to a file used to report exit status, such as with Kubernetes' [Termination message policy](https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/).
When the flag is not set, the digest is not written to a file and the executor behaves as before. The digest is also written to the file in the case of a tarball build or `--no-push`.
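A rough sketch of the intended usage (the flag name `--digest-file` and the paths here are illustrative assumptions, not confirmed by this message):

```shell
# Sketch only: flag name and paths are illustrative.
# Build and push, writing the resulting digest to a file that surrounding
# tooling can read instead of parsing kaniko's logs. Pointing it at
# /dev/termination-log lets Kubernetes surface it as the termination message.
/kaniko/executor \
  --context=dir:///workspace \
  --destination=gcr.io/my-project/my-image:latest \
  --digest-file=/dev/termination-log
```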
Closes #654
The latest BuildKit/img no longer necessarily requires procMount to be
unmasked, because it no longer unshares PID namespaces.
The current drawback of BuildKit/img compared to kaniko is that BuildKit/img
requires seccomp and AppArmor to be disabled so as to create nested containers.
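For illustration, disabling both profiles when launching such a builder under Docker might look like the following (the image and build command are placeholder examples; the `--security-opt` flags are standard Docker options):

```shell
# Illustrative only: run img with seccomp and AppArmor disabled so it can
# create nested containers. The image name and build command are examples.
docker run --rm -it \
  --security-opt seccomp=unconfined \
  --security-opt apparmor=unconfined \
  r.j3ss.co/img build -t example/app .
```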
https://github.com/moby/buildkit/pull/768
https://github.com/genuinetools/img/pull/221
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
The description of Buildah is a bit outdated; most importantly, Buildah
no longer requires root privileges. Also provide a more detailed
description copied from github.com/containers/buildah.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Updated README to clarify:
1. What a build context is and how kaniko interacts with it
2. How to set up a Kubernetes secret for auth to push the final image (see the sketch after this list)
Also made some general fixes to make the docs and the run_in_docker
script clearer.
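As a rough sketch of point 2 (the secret name and GCR-style key file are illustrative, not the exact steps from the README):

```shell
# Illustrative: store a registry service-account key in a Kubernetes secret,
# which can then be mounted into the kaniko pod to authenticate the push
# of the final image.
kubectl create secret generic kaniko-secret \
  --from-file=kaniko-secret.json
```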
Currently, kaniko can only build a single image per container run, because the filesystem is already populated with the content of the first image.
When running kaniko in Jenkins, where we need to start the container "doing nothing" first (using the debug kaniko container) and then exec /kaniko/executor, this is a limitation: building multiple images means starting multiple containers - see https://groups.google.com/forum/#!topic/kaniko-users/_7LivHdMdy0 for more details.
A solution is to add a new flag to clean up the filesystem at the end - the same way it is done between stages when building a multi-stage image. This way, the same (debug) container can be used to build multiple images.
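A sketch of the resulting workflow, assuming the new flag is named `--cleanup` (the name and paths are illustrative, not stated in this message):

```shell
# Illustrative: one long-running (debug) kaniko container building two
# images, wiping the filesystem after each build so the next starts clean.
/kaniko/executor --context=dir:///workspace/app1 \
  --destination=gcr.io/my-project/app1 --cleanup
/kaniko/executor --context=dir:///workspace/app2 \
  --destination=gcr.io/my-project/app2 --cleanup
```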
Although we were able to reproduce this with the previous behaviour of
the COPY and ADD commands, we have fixed that issue, and our attempts to
cause the issue to occur with RUN did not succeed, so it may be that in
practice this will never happen.
Kaniko uses mtime (as well as file contents and other attributes) to
determine if files have changed. COPY and ADD commands should _always_
update the mtime, because they actually overwrite the files. However, it
turns out that the mtime can lag, so kaniko would sometimes add a new
layer when using COPY or ADD on a file, and sometimes would not. This
leads to a non-deterministic number of layers.
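The ambiguity can be seen with plain `stat` (illustrative only; the actual timestamp resolution and lag depend on the filesystem):

```shell
# Illustrative: mtime resolution is coarse and filesystem-dependent, so a
# write can leave an mtime identical to the one observed beforehand, and a
# pure mtime comparison cannot tell that the file changed.
stat -c '%Y' app.conf    # mtime (seconds) before the write
echo 'tweak' >> app.conf
stat -c '%Y' app.conf    # may print the same value if both land in one tick
```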
To fix this, we have updated the kaniko commands to be more
authoritative in declaring when they have changed a file (e.g. WORKDIR
will now only create the directory when it doesn't exist) and we will
trust those files and _always_ add them, instead of only adding them if
they appear to have changed.
It is possible for RUN commands to also change the filesystem, in which
case kaniko has no choice but to look at the filesystem to determine
what has changed. For this case we have added a call to `sync`; however,
we still cannot guarantee that the mtime will never lag, which could cause
the number of layers to be non-deterministic. That said, when I tried to
cause this behaviour with the RUN command, I couldn't.
This changes the snapshotting logic a bit; before this change, the last
command of the last stage in a Dockerfile would always scan the whole
file system and ignore the files returned by the kaniko command. Instead,
we will now trust those files and assume that the snapshotting
performed by previous commands will be adequate.
Docker itself seems to rely on the storage driver to determine when
files have changed and so doesn't have to deal with these problems
directly.
An alternative implementation would use `inotify` to track which files
have changed. However, that would mean watching every file in the
filesystem, and adding new watches as files are added. Not only is there
a limit on the number of files that can be watched, but according to the
man pages: a) this can take a significant amount of time, b) there is
complication around when events arrive (e.g. by the time they arrive,
the files may have changed), and lastly c) events can be lost, which
would mean we'd run into this non-deterministic behaviour again anyway.
Fixes #251
In this refactor I:
1. Created KanikoOptions to make it easier to pass around arguments
passed in through the command line
2. Reorganized executor.go by putting the logic for pushing the image in
a new file push.go
3. Made some error messages clearer
4. Fixed a mistake in the README for pushing to AWS
5. Marked the --bucket flag as hidden since we want people to use
--context instead, and marked an aws flag as hidden which is set in a
vendored directory
The flag, `--no-push`, is added to allow building a container image
without pushing to a container registry. It can be common, especially
with multi-stage builds and `--target`, to build enough to run the tests
and then perform a push in a separate CI step. This will facilitate these
workflows.
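For example, a CI test step might look like this (the context path and target name are illustrative):

```shell
# Illustrative: build only up to the test stage, without pushing;
# a separate CI step can then perform the actual push.
/kaniko/executor \
  --context=dir:///workspace \
  --target=test \
  --no-push
```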
This commit adds docs aimed at folks interested in ramping up and
contributing to kaniko.
It starts with setting up a GitHub account and forking to make sure the
barrier to entry is as low as possible.
The previous document did not mention that Docker runs as root, and so
some of the benefits of the tools being compared (such as img,
orca-build, and umoci) were not properly explained. This is quite
important because while Kubernetes users have Docker installed (on most
clusters), on local machines and non-Kubernetes workloads the story is
quite different.
Signed-off-by: Aleksa Sarai <asarai@suse.de>