Add DEVELOPMENT.md (#238)

This commit adds docs aimed at folks interested in ramping up and
contributing to kaniko.

It starts with setting up a GitHub account and forking to make sure the
barrier to entry is as low as possible.
Christie Wilson 2018-07-18 10:52:37 -07:00 committed by priyawadhwa
parent 8716936977
commit d293df5c47
2 changed files with 208 additions and 68 deletions

DEVELOPMENT.md (new file, 93 lines added)

@@ -0,0 +1,93 @@
# Development
This doc explains the development workflow so you can get started
[contributing](CONTRIBUTING.md) to kaniko!
## Getting started
First, you will need to set up your GitHub account and create a fork:
1. Create [a GitHub account](https://github.com/join)
1. Set up [GitHub access via
SSH](https://help.github.com/articles/connecting-to-github-with-ssh/)
1. [Create and checkout a repo fork](#checkout-your-fork)
Once you have those, you can iterate on kaniko:
1. [Run your instance of kaniko](README.md#running-kaniko)
1. [Verify kaniko builds](#verifying-kaniko-builds)
1. [Run kaniko tests](#testing-kaniko)
When you're ready, you can [create a PR](#creating-a-pr)!
## Checkout your fork
The Go tools require that you clone the repository to the `src/github.com/GoogleContainerTools/kaniko` directory
in your [`GOPATH`](https://github.com/golang/go/wiki/SettingGOPATH).
To check out this repository:
1. Create your own [fork of this
repo](https://help.github.com/articles/fork-a-repo/)
2. Clone it to your machine:
```shell
mkdir -p ${GOPATH}/src/github.com/GoogleContainerTools
cd ${GOPATH}/src/github.com/GoogleContainerTools
git clone git@github.com:${YOUR_GITHUB_USERNAME}/kaniko.git
cd kaniko
git remote add upstream git@github.com:GoogleContainerTools/kaniko.git
git remote set-url --push upstream no_push
```
_Adding the `upstream` remote sets you up nicely for regularly [syncing your
fork](https://help.github.com/articles/syncing-a-fork/)._
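With the `upstream` remote configured, a typical sync might look like this minimal sketch (assuming the default branch is `master`):
```shell
# Fetch upstream changes and merge them into your local master branch
git fetch upstream
git checkout master
git merge upstream/master
```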
## Verifying kaniko builds
Images built with kaniko should be no different from images built elsewhere.
While you iterate on kaniko, you can verify images built with kaniko by:
1. Building the image with another system, such as `docker build`
2. Using [`container-diff`](https://github.com/GoogleContainerTools/container-diff) to diff the two images
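For example, a comparison might look like the following sketch; the image names are placeholders, and the `daemon://` and `remote://` prefixes tell `container-diff` where to find each image:
```shell
# Build a reference image with Docker from the same Dockerfile
docker build -t gcr.io/<project>/<image>:docker-built .

# Compare the Docker-built image with the kaniko-built image, file by file
container-diff diff \
    daemon://gcr.io/<project>/<image>:docker-built \
    remote://gcr.io/<project>/<image>:kaniko-built \
    --type=file
```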
## Testing kaniko
kaniko has both [unit tests](#unit-tests) and [integration tests](#integration-tests).
### Unit Tests
The unit tests live with the code they test and can be run with:
```shell
make test
```
_These tests will not run correctly unless you have [checked out your fork into your `$GOPATH`](#checkout-your-fork)._
### Integration tests
The integration tests live in [`integration`](./integration) and can be run with:
```shell
make integration-test
```
_These tests require push access to a project in GCP, and so can only be run
by maintainers who have access. These tests will be kicked off by [reviewers](#reviews)
for submitted PRs._
## Creating a PR
When you have changes you would like to propose to kaniko, you will need to:
1. Ensure the commit message(s) describe what issue you are fixing and how you are fixing it
(include references to [issue numbers](https://help.github.com/articles/closing-issues-using-keywords/)
if appropriate)
1. [Create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/)
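As a purely illustrative sketch, a commit message with an issue reference might look like:
```shell
git commit -m "<short summary of the change>

<what was broken, and how this commit fixes it>

Fixes #<issue number>"
```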
### Reviews
Each PR must be reviewed by a maintainer. This maintainer will add the `kokoro:run` label
to a PR to kick off [the integration tests](#integration-tests), which must pass for the PR
to be submitted.

README.md (183 lines changed)

@@ -14,12 +14,13 @@ We do **not** recommend running the kaniko executor binary in another image, as
- [How does kaniko work?](#how-does-kaniko-work)
- [Known Issues](#known-issues)
- [Demo](#demo)
-- [Development](#development)
+- [Using kaniko](#using-kaniko)
- [kaniko Build Contexts](#kaniko-build-contexts)
+- [Running kaniko](#running-kaniko)
- [Running kaniko in a Kubernetes cluster](#running-kaniko-in-a-kubernetes-cluster)
- [Running kaniko in gVisor](#running-kaniko-in-gvisor)
- [Running kaniko in Google Container Builder](#running-kaniko-in-google-container-builder)
- [Running kaniko locally](#running-kaniko-locally)
- [Pushing to Different Registries](#pushing-to-different-registries)
- [Additional Flags](#additional-flags)
- [Debug Image](#debug-image)
@@ -27,6 +28,8 @@ We do **not** recommend running the kaniko executor binary in another image, as
- [Comparison with Other Tools](#comparison-with-other-tools)
- [Community](#community)
+_If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPMENT.md) and [CONTRIBUTING.md](CONTRIBUTING.md)._
### How does kaniko work?
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry.
@@ -41,8 +44,15 @@ kaniko does not support building Windows containers.
![Demo](/docs/demo.gif)
-## Development
+## Using kaniko
+To use kaniko to build and push an image for you, you will need:
+1. A [build context](#kaniko-build-contexts), aka something to build
+2. A [running instance of kaniko](#running-kaniko)
### kaniko Build Contexts
kaniko currently supports local directories, Google Cloud Storage and Amazon S3 as build contexts.
If using a GCS or S3 bucket, the bucket should contain a compressed tar of the build context, which kaniko will unpack and use.
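For instance, a compressed build context for a GCS bucket could be prepared and uploaded roughly like this sketch; the bucket name and paths are placeholders, and a configured `gsutil` is assumed:
```shell
# Package the build context (the Dockerfile plus everything it references)
tar -C <path to build context> -czf context.tar.gz .

# Upload the compressed context to a GCS bucket that kaniko can read
gsutil cp context.tar.gz gs://<bucket name>/path/to/context.tar.gz
```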
@@ -67,11 +77,23 @@ Use the `--context` flag with the appropriate prefix to specify your build conte
If you don't specify a prefix, kaniko will assume a local directory.
For example, to use a GCS bucket called `kaniko-bucket`, you would pass in `--context=gs://kaniko-bucket/path/to/context.tar.gz`.
-### Running kaniko in a Kubernetes cluster
+### Running kaniko
+There are several different ways to deploy and run kaniko:
+- [In a Kubernetes cluster](#running-kaniko-in-a-kubernetes-cluster)
+- [In gVisor](#running-kaniko-in-gvisor)
+- [In Google Container Builder](#running-kaniko-in-google-container-builder)
+- [Locally](#running-kaniko-locally)
+#### Running kaniko in a Kubernetes cluster
Requirements:
-* Standard Kubernetes cluster
-* Kubernetes Secret
+- Standard Kubernetes cluster (e.g. using [GKE](https://cloud.google.com/kubernetes-engine/))
+- [Kubernetes Secret](#kubernetes-secret)
+##### Kubernetes secret
To run kaniko in a Kubernetes cluster, you will need a standard running Kubernetes cluster and a Kubernetes secret, which contains the auth required to push the final image.
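One way to create such a secret is from a GCP service account key that is allowed to push to your registry; this is a minimal sketch, and the secret name and key path are illustrative:
```shell
# Create a secret holding the service account key the kaniko pod will use to push
kubectl create secret generic kaniko-secret --from-file=<path to service account key>
```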
@@ -113,7 +135,7 @@ spec:
This example pulls the build context from a GCS bucket.
To use a local directory build context, you could consider using configMaps to mount in small build contexts.
-### Running kaniko in gVisor
+#### Running kaniko in gVisor
Running kaniko in [gVisor](https://github.com/google/gvisor) provides an additional security boundary.
You will need to add the `--force` flag to run kaniko in gVisor, since currently there isn't a way to determine whether or not a container is running in gVisor.
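A minimal sketch of running the executor image under gVisor, with a placeholder destination, might look like this:
```shell
# Run the executor image under gVisor; --force is required, as described above
docker run --runtime=runsc \
    -v $(pwd):/workspace \
    -v ~/.config:/root/.config \
    gcr.io/kaniko-project/executor:latest \
    --dockerfile=/workspace/Dockerfile \
    --context=dir:///workspace \
    --destination=gcr.io/<project>/<image>:<tag> \
    --force
```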
@@ -128,7 +150,8 @@ gcr.io/kaniko-project/executor:latest \
We pass in `--runtime=runsc` to use gVisor.
This example mounts the current directory to `/workspace` for the build context and the `~/.config` directory for GCR credentials.
-### Running kaniko in Google Container Builder
+#### Running kaniko in Google Container Builder
To run kaniko in GCB, add it to your build config as a build step:
```yaml
@@ -138,25 +161,30 @@ steps:
        "--context=dir://<path to build context>",
        "--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"]
```
kaniko will build and push the final image in this build step.
-### Running kaniko locally
+#### Running kaniko locally
Requirements:
-* Docker
-* gcloud
+- [Docker](https://docs.docker.com/install/)
+- [gcloud](https://cloud.google.com/sdk/install)
We can run the kaniko executor image locally in a Docker daemon to build and push an image from a Dockerfile.
-First, we want to load the executor image into the Docker daemon by running
-```shell
-make images
-```
-To run kaniko in Docker, run the following command:
-```shell
-./run_in_docker.sh <path to Dockerfile> <path to build context> <destination of final image>
-```
+1. Load the executor image into the Docker daemon by running:
+```shell
+make images
+```
+2. Run kaniko in Docker using [`run_in_docker.sh`](./run_in_docker.sh):
+```shell
+./run_in_docker.sh <path to Dockerfile> <path to build context> <destination of final image>
+```
### Pushing to Different Registries
kaniko uses Docker credential helpers to push images to a registry.
@@ -164,71 +192,85 @@ kaniko uses Docker credential helpers to push images to a registry.
kaniko comes with support for GCR and Amazon ECR, but configuring another credential helper should allow pushing to a different registry.
#### Pushing to Amazon ECR
The Amazon ECR [credential helper](https://github.com/awslabs/amazon-ecr-credential-helper) is built in to the kaniko executor image.
To configure credentials, you will need to do the following:
1. Update the `credHelpers` section of [config.json](https://github.com/GoogleContainerTools/kaniko/blob/master/files/config.json) with the specific URI of your ECR registry:
   ```json
   {
     "credHelpers": {
       "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
     }
   }
   ```
   You can mount in the new config as a configMap:
   ```shell
   kubectl create configmap docker-config --from-file=<path to config.json>
   ```
2. Create a Kubernetes secret for your `~/.aws/credentials` file so that credentials can be accessed within the cluster.
   To create the secret, run:
   ```shell
   kubectl create secret generic aws-secret --from-file=<path to .aws/credentials>
   ```
The Kubernetes Pod spec should look similar to this, with the args parameters filled in:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=<path to Dockerfile>",
           "--context=s3://<bucket name>/<path to .tar.gz>",
           "--destination=<aws_account_id.dkr.ecr.region.amazonaws.com/my-repository:my-tag>"]
    volumeMounts:
      - name: aws-secret
        mountPath: /root/.aws/
      - name: docker-config
        mountPath: /root/.docker/
  restartPolicy: Never
  volumes:
    - name: aws-secret
      secret:
        secretName: aws-secret
    - name: docker-config
      configMap:
        name: docker-config
```
### Additional Flags
#### --snapshotMode
You can set the `--snapshotMode=<full (default), time>` flag to set how kaniko will snapshot the filesystem.
If `--snapshotMode=time` is set, only file mtime will be considered when snapshotting.
#### --build-arg
This flag allows you to pass in ARG values at build time, similarly to Docker.
You can set it multiple times for multiple arguments.
#### --single-snapshot
This flag takes a single snapshot of the filesystem at the end of the build, so only one layer will be appended to the base image.
#### --reproducible
Set this flag to strip timestamps out of the built image and make it reproducible.
#### --tarPath
Set this flag as `--tarPath=<path>` to save the image as a tarball at path instead of pushing the image.
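As an illustrative sketch combining several of these flags, assuming `/kaniko/executor` as the executor entrypoint inside the image and placeholders for all other values:
```shell
# Flags are independent; they are shown together here only for illustration
/kaniko/executor \
    --dockerfile=<path to Dockerfile> \
    --context=gs://<bucket name>/<path to context.tar.gz> \
    --destination=<gcr.io/$PROJECT/$IMAGE:$TAG> \
    --snapshotMode=time \
    --build-arg=<ARG_NAME>=<value> \
    --reproducible
```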
### Debug Image
@@ -237,9 +279,11 @@ The kaniko executor image is based off of scratch and doesn't contain a shell.
We provide `gcr.io/kaniko-project/executor:debug`, a debug image which consists of the kaniko executor image along with a busybox shell to enter.
You can launch the debug image with a shell entrypoint:
```shell
docker run -it --entrypoint=/busybox/sh gcr.io/kaniko-project/executor:debug
```
## Security
kaniko by itself **does not** make it safe to run untrusted builds inside your cluster, or anywhere else.
@@ -262,12 +306,13 @@ You may be able to achieve the same default seccomp profile that Docker uses in
## Comparison with Other Tools
Similar tools include:
-* [img](https://github.com/genuinetools/img)
-* [orca-build](https://github.com/cyphar/orca-build)
-* [umoci](https://github.com/openSUSE/umoci)
-* [buildah](https://github.com/projectatomic/buildah)
-* [FTL](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl)
-* [Bazel rules_docker](https://github.com/bazelbuild/rules_docker)
+- [img](https://github.com/genuinetools/img)
+- [orca-build](https://github.com/cyphar/orca-build)
+- [umoci](https://github.com/openSUSE/umoci)
+- [buildah](https://github.com/projectatomic/buildah)
+- [FTL](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl)
+- [Bazel rules_docker](https://github.com/bazelbuild/rules_docker)
All of these tools build container images with different approaches.
@@ -299,3 +344,5 @@ provides.
## Community
[kaniko-users](https://groups.google.com/forum/#!forum/kaniko-users) Google group
+To contribute to kaniko, see [DEVELOPMENT.md](DEVELOPMENT.md) and [CONTRIBUTING.md](CONTRIBUTING.md).