Merged master

Priya Wadhwa 2018-04-04 10:44:19 -07:00
commit 4e2bf40736
7 changed files with 69 additions and 70 deletions

View File

@ -52,11 +52,3 @@ integration-test: out/executor out/kaniko
.PHONY: images
images: out/executor out/kaniko
	docker build -t $(REGISTRY)/executor:latest -f deploy/Dockerfile .
.PHONY: run-in-docker
run-in-docker: images
	docker run \
		-v $(HOME)/.config/gcloud:/root/.config/gcloud \
		-v $(GOOGLE_APPLICATION_CREDENTIALS):$(GOOGLE_APPLICATION_CREDENTIALS) \
		-e GOOGLE_APPLICATION_CREDENTIALS=$(GOOGLE_APPLICATION_CREDENTIALS) \
		$(REGISTRY)/executor:latest

View File

@ -3,54 +3,68 @@
kaniko is a tool to build unprivileged container images from a Dockerfile. It doesn't depend on a Docker daemon, which enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.
The majority of Dockerfile commands can be executed with kaniko, but we're still working on supporting the following commands:
* ADD
* VOLUME
* SHELL
* HEALTHCHECK
* STOPSIGNAL
* ONBUILD
* ARG
We're currently in the process of building kaniko, so as of now it isn't production ready. Please let us know if you have any feature requests or find any bugs!
## How does it work?
## How does kaniko work?
The kaniko executor image is responsible for building the final image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update image metadata.
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update image metadata.
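As a rough conceptual sketch only (kaniko implements this in Go; the commands and the `run_next_dockerfile_command` placeholder below are purely illustrative), snapshotting the changed files into a layer amounts to something like:
```shell
# Illustration only, not kaniko's actual mechanism.
# Record a point in time, run the next Dockerfile command (placeholder),
# then bundle every file modified since the marker into a layer tarball.
touch /tmp/snapshot-marker
run_next_dockerfile_command
find / -xdev -newer /tmp/snapshot-marker -type f 2>/dev/null \
  | tar -czf layer.tar.gz -T -
```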
## kaniko Build Context
kaniko supports local directories and GCS buckets as build contexts. To specify a local directory, pass in the `--context=<path to build context>` flag as an argument to the executor image. To specify a GCS bucket, pass in the `--bucket=<GCS bucket name>` flag. The GCS bucket should contain a compressed tar of the build context called `context.tar.gz`, which kaniko will unpack and use as the build context.
## kaniko Build Contexts
kaniko supports local directories and GCS buckets as build contexts. To specify a local directory, pass in the
`--context` flag as an argument to the executor image. To specify a GCS bucket, pass in the
`--bucket` flag. The GCS bucket should contain a compressed tar of the build context called `context.tar.gz`, which kaniko will unpack and use as the build context.
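For example (the bucket, paths, and image name are placeholders), running the executor image against a GCS bucket build context might look like:
```shell
docker run \
  -v $HOME/.config/gcloud:/root/.config/gcloud \
  gcr.io/kaniko-project/executor:latest \
  --bucket=<GCS bucket name> \
  --dockerfile=<path to Dockerfile> \
  --destination=<gcr.io/$PROJECT/$IMAGE:$TAG>
```
The gcloud config mount provides push credentials, mirroring the local setup described below.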
To easily create `context.tar.gz`, we can use [skaffold](https://github.com/GoogleCloudPlatform/skaffold).
To easily create `context.tar.gz`, we can use [skaffold](https://github.com/GoogleCloudPlatform/skaffold). Running the following command within the build context will create `context.tar.gz`, which will contain the Dockerfile and any files it depends on.
Running `skaffold docker context` will create `context.tar.gz`, which will contain the Dockerfile and any files it depends on.
```
skaffold docker context
```
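If skaffold isn't available, a plain tarball of the build context directory should work too (a sketch, assuming kaniko only needs the Dockerfile and the files it references to be present in the archive):
```shell
# Run from within the build context directory
tar -czf context.tar.gz .
```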
We can copy over the compressed tar with gsutil:
`gsutil cp context.tar.gz gs://<bucket name>`
We can copy over the compressed tar to a GCS bucket with gsutil:
```
gsutil cp context.tar.gz gs://<bucket name>
```
## Running kaniko locally
Requirements:
* Docker
* gcloud
We can run the kaniko executor image locally in a Docker daemon to build and push an image from a Dockerfile.
First, to build the executor image locally, run `make images`. This will load the executor image into your Docker daemon.
First, we want to load the executor image into the Docker daemon by running
```shell
make images
```
To run kaniko in Docker, run the following command:
`./run_in_docker.sh <path to build context> <destination of final image in the form gcr.io/$PROJECT/$IMAGE:$TAG>`
```shell
./run_in_docker.sh <path to Dockerfile> <path to build context> <destination of final image>
```
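For example, with placeholder values:
```shell
./run_in_docker.sh /path/to/Dockerfile /path/to/build/context gcr.io/my-project/my-image:my-tag
```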
## Running kaniko in a Kubernetes cluster
Requirements:
* Standard Kubernetes cluster
* Kubernetes Secret
To run kaniko in a Kubernetes cluster, you will need a standard running Kubernetes cluster and a Kubernetes secret, which contains the auth required to push the final image.
To create the secret, first you will need to create a service account in the GCP project you want to push the final image to, with `Storage Admin` permissions. You can download a JSON key for this service account and rename it `kaniko-secret.json`. To create the secret, run:
`kubectl create secret generic kaniko-secret --from-file=<path to kaniko-secret.json>`
```shell
kubectl create secret generic kaniko-secret --from-file=<path to kaniko-secret.json>
```
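As a sketch of the service account step above (the account name `kaniko-pusher` and the project ID are placeholders), the account, role binding, and JSON key can be created with gcloud:
```shell
# Placeholder account name and project ID
gcloud iam service-accounts create kaniko-pusher --project <project id>
gcloud projects add-iam-policy-binding <project id> \
  --member serviceAccount:kaniko-pusher@<project id>.iam.gserviceaccount.com \
  --role roles/storage.admin
gcloud iam service-accounts keys create kaniko-secret.json \
  --iam-account kaniko-pusher@<project id>.iam.gserviceaccount.com
```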
The Kubernetes job.yaml should look similar to this, with the args parameters filled in:
@ -65,7 +79,7 @@ spec:
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args: ["--dockerfile=<path to Dockerfile>", "--bucket=<GCS bucket where context.tar.gz lives>", "--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"]
        args: ["--dockerfile=<path to Dockerfile>", "--bucket=<GCS bucket>", "--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"]
        volumeMounts:
        - name: kaniko-secret
          mountPath: /secret
@ -79,22 +93,20 @@ spec:
          secretName: kaniko-secret
```
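Once the args are filled in, the job can be created and its output followed with kubectl (the job name is whatever you set in the job's metadata):
```shell
kubectl apply -f job.yaml
kubectl logs -f job/<job name>
```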
This example pulls the build context from a GCS bucket. To use a local directory build context, you could consider using configMaps to mount in small build context.
This example pulls the build context from a GCS bucket. To use a local directory build context, you could consider using configMaps to mount in small build contexts.
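As a sketch of that ConfigMap approach (the ConfigMap name is a placeholder), a small build context can be stored in a ConfigMap and then mounted into the kaniko container as a volume in the job spec:
```shell
kubectl create configmap build-context --from-file=<path to build context directory>
```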
## Comparison with Other Tools/Solutions
## Comparison with Other Tools
Similar tools include:
* [img](https://github.com/genuinetools/img)
* [orca-build](https://github.com/cyphar/orca-build)
* [buildah](https://github.com/projectatomic/buildah)
* [Bazel](https://github.com/bazelbuild/rules_docker)/[FTL](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl)
* [img](https://github.com/genuinetools/img)
* [orca-build](https://github.com/cyphar/orca-build)
* [buildah](https://github.com/projectatomic/buildah)
* [FTL](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl)
All of these tools build container images; however, the way in which they accomplish this differs from kaniko. Both kaniko and img build unprivileged images, but they interpret “unprivileged” differently. img builds as a non-root user from within the container, while kaniko is run in an unprivileged environment with root access inside the container.
All of these tools build container images with different approaches. Both kaniko and img build unprivileged images, but they interpret “unprivileged” differently. img builds as a non-root user from within the container, while kaniko is run in an unprivileged environment with root access inside the container.
Unlike orca-build, kaniko doesn't use runC to build images. Instead, it runs as a root user within the container.
orca-build depends on runC to build images from Dockerfiles; since kaniko doesn't use runC it doesn't require the use of kernel namespacing techniques.
buildah requires the same root privileges as a Docker daemon does to run, while kaniko runs without any special privileges or permissions.
Bazel/FTL aim to improve DevEx by achieving the fastest possible creation of Docker images, at the expense of build compatibility. By restricting the set of allowed builds to an optimizable subset, we get the nice side effect of being able to run without privileges inside an arbitrary cluster.
These approaches can be thought of as special-case "fast paths" that can be used in conjunction with the support for general Dockerfile kaniko provides.
FTL aims to achieve the fastest possible creation of Docker images for a subset of images. It can be thought of as a special-case "fast path" that can be used in conjunction with the support for general Dockerfiles kaniko provides.

View File

@ -22,3 +22,4 @@ ADD files/config.json /root/.docker/
RUN ["docker-credential-gcr", "config", "--token-source=env"]
ENV HOME /root
ENV PATH /usr/local/bin
ENTRYPOINT ["/kaniko/executor"]

View File

@ -1,22 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: kbuild-demo
spec:
  template:
    spec:
      containers:
      - name: init-static
        image: gcr.io/priya-wadhwa/executor:latest
        command: ["/work-dir/executor", "--context=kbuild-demo", "--name=gcr.io/priya-wadhwa/kbuild:example", "--dockerfile=/workspace/Dockerfile"]
        volumeMounts:
        - name: dockerfile-volume
          mountPath: /workspace/
      restartPolicy: Never
      volumes:
      - name: dockerfile-volume
        configMap:
          name: dockerfile-config
          items:
          - key: Dockerfile
            path: Dockerfile

View File

@ -23,10 +23,12 @@ import (
"github.com/GoogleCloudPlatform/k8s-container-builder/pkg/image"
"github.com/GoogleCloudPlatform/k8s-container-builder/pkg/snapshot"
"github.com/GoogleCloudPlatform/k8s-container-builder/pkg/util"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"io/ioutil"
"os"
"path/filepath"
)
var (
@ -46,7 +48,10 @@ func init() {
var RootCmd = &cobra.Command{
	Use: "executor",
	PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
		return util.SetLogLevel(logLevel)
		if err := util.SetLogLevel(logLevel); err != nil {
			return err
		}
		return checkDockerfilePath()
	},
	Run: func(cmd *cobra.Command, args []string) {
		if err := execute(); err != nil {
@ -56,6 +61,18 @@ var RootCmd = &cobra.Command{
	},
}
func checkDockerfilePath() error {
	if util.FilepathExists(dockerfilePath) {
		return nil
	}
	// Otherwise, check if the path relative to the build context exists
	if util.FilepathExists(filepath.Join(srcContext, dockerfilePath)) {
		dockerfilePath = filepath.Join(srcContext, dockerfilePath)
		return nil
	}
	return errors.New("please provide a valid path to a Dockerfile within the build context")
}
func execute() error {
	// Parse dockerfile and unpack base image to root
	d, err := ioutil.ReadFile(dockerfilePath)

View File

@ -107,7 +107,6 @@ type testyaml struct {
}
var executorImage = "executor-image"
var executorCommand = "/kaniko/executor"
var dockerImage = "gcr.io/cloud-builders/docker"
var ubuntuImage = "ubuntu"
var testRepo = "gcr.io/kaniko-test/"
@ -155,7 +154,7 @@ func main() {
kanikoImage := testRepo + kanikoPrefix + test.repo
kaniko := step{
Name: executorImage,
Args: []string{executorCommand, "--destination", kanikoImage, "--dockerfile", test.dockerfilePath, "--context", test.context},
Args: []string{"--destination", kanikoImage, "--dockerfile", test.dockerfilePath, "--context", test.context},
}
// Pull the kaniko image
@ -199,7 +198,7 @@ func main() {
kanikoImage := testRepo + kanikoPrefix + test.repo
kaniko := step{
Name: executorImage,
Args: []string{executorCommand, "--destination", kanikoImage, "--dockerfile", test.dockerfilePath},
Args: []string{"--destination", kanikoImage, "--dockerfile", test.dockerfilePath},
}
// Pull the kaniko image
pullKanikoImage := step{

View File

@ -32,4 +32,4 @@ docker run \
-v $HOME/.config/gcloud:/root/.config/gcloud \
-v ${context}:/workspace \
gcr.io/kaniko-project/executor:latest \
/kaniko/executor -d ${tag}
-f ${dockerfile} -d ${tag} -c /workspace/