Fixed merge conflict

Priya Wadhwa 2018-04-11 14:58:22 -07:00
commit cf90bd73d4
10 changed files with 242 additions and 37 deletions


@ -1,28 +1,36 @@
# kaniko
kaniko is a tool to build unprivileged container images from a Dockerfile.
kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile completely in userspace.
This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.
The majority of Dockerfile commands can be executed with kaniko, but we're still working on supporting the following commands:
* VOLUME
* SHELL
* HEALTHCHECK
* STOPSIGNAL
* ONBUILD
* ARG
We're currently in the process of building kaniko, so as of now it isn't production ready.
Please let us know if you have any feature requests or find any bugs!
## How does kaniko work?
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry.
Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile).
We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one.
After each command, we append a layer of changed files to the base image (if there are any) and update image metadata.
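As a rough conceptual sketch of that loop (not kaniko's actual implementation; the real interfaces live in `pkg/commands` and the snapshotting code in the executor), the flow can be pictured in Go roughly as follows, where `command`, `runCommand`, and `snapshot` are simplified stand-ins invented for illustration:
```go
package main

import "fmt"

// command is a simplified stand-in for the DockerCommand interface in pkg/commands.
type command interface {
	Execute() error
	CreatedBy() string
}

// runCommand is a toy RUN instruction used only for this illustration.
type runCommand struct{ args string }

func (r runCommand) Execute() error    { fmt.Println("executing: RUN", r.args); return nil }
func (r runCommand) CreatedBy() string { return "RUN " + r.args }

// snapshot stands in for the userspace filesystem snapshotter: it would diff
// the filesystem against the previous snapshot and return the changed files.
func snapshot() []string { return []string{"/example/changed/file"} }

func main() {
	// The base image filesystem (the FROM image) is assumed to already be
	// extracted inside the executor container.
	commands := []command{runCommand{"apt-get update"}, runCommand{"make"}}

	layers := 0
	for _, c := range commands {
		if err := c.Execute(); err != nil {
			panic(err)
		}
		// Snapshot after each command; only append a layer if files changed.
		if changed := snapshot(); len(changed) > 0 {
			layers++
			fmt.Println("appended layer created by:", c.CreatedBy())
		}
	}
	fmt.Println("layers appended:", layers)
}
```
In the real executor, once every command has run, the assembled image and its updated metadata are pushed to the registry.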
## kaniko Build Contexts
kaniko supports local directories and GCS buckets as build contexts. To specify a local directory, pass in the `--context` flag as an argument to the executor image.
To specify a GCS bucket, pass in the `--bucket` flag.
The GCS bucket should contain a compressed tar of the build context called `context.tar.gz`, which kaniko will unpack and use as the build context.
To create `context.tar.gz`, which should contain the Dockerfile and any files it depends on, run the following command:
```shell
tar -C <path to build context> -zcvf context.tar.gz .
```
Or, you can use [skaffold](https://github.com/GoogleCloudPlatform/skaffold) to create `context.tar.gz` by running:
```shell
skaffold docker context
```
@ -59,40 +67,55 @@ Requirements:
To run kaniko in a Kubernetes cluster, you will need a running Kubernetes cluster and a Kubernetes secret containing the credentials required to push the final image.
To create the secret, first you will need to create a service account in the GCP project you want to push the final image to, with `Storage Admin` permissions.
You can download a JSON key for this service account, and rename it `kaniko-secret.json`.
To create the secret, run:
```shell
kubectl create secret generic kaniko-secret --from-file=<path to kaniko-secret.json>
```
The Kubernetes Pod spec should look similar to this, with the args parameters filled in:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=<path to Dockerfile>",
           "--bucket=<GCS bucket>",
           "--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret.json
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret
```
This example pulls the build context from a GCS bucket.
To use a local directory build context instead, you could consider using a ConfigMap to mount small build contexts into the pod.
## Running kaniko in Google Container Builder
To run kaniko in GCB, add it to your build config as a build step:
```yaml
steps:
- name: gcr.io/kaniko-project/executor:latest
args: ["--dockerfile=<path to Dockerfile>",
"--context=<path to build context>",
"--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"]
```
kaniko will build and push the final image in this build step.
## Comparison with Other Tools
@ -102,10 +125,17 @@ Similar tools include:
* [buildah](https://github.com/projectatomic/buildah)
* [FTL](https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl)
All of these tools build container images with different approaches.
Both kaniko and img build unprivileged images, but they interpret “unprivileged” differently.
img builds as a non-root user from within the container, while kaniko runs in an unprivileged environment with root access inside the container.
orca-build depends on runC to build images from Dockerfiles; since kaniko doesn't use runC, it doesn't require the use of kernel namespacing techniques.
buildah requires the same root privileges as a Docker daemon does to run, while kaniko runs without any special privileges or permissions.
FTL aims to achieve the fastest possible creation of Docker images for a subset of images.
It can be thought of as a special-case "fast path" that can be used in conjunction with the support for general Dockerfiles kaniko provides.
## Community
[kaniko-users](https://groups.google.com/forum/#!forum/kaniko-users) Google group


@ -160,6 +160,7 @@ func execute() error {
	if err != nil {
		return err
	}
	util.MoveVolumeWhitelistToWhitelist()
	if contents == nil {
		logrus.Info("No files were changed, appending empty layer to config.")
		sourceImage.AppendConfigHistory(constants.Author, true)


@ -0,0 +1,7 @@
FROM gcr.io/google-appengine/debian9
RUN mkdir /foo
RUN echo "hello" > /foo/hey
VOLUME /foo/bar /tmp
ENV VOL /baz/bat
VOLUME ["${VOL}"]
RUN echo "hello again" > /tmp/hey


@ -14,8 +14,8 @@
},
{
"Name": "/var/log/apt/term.log",
"Size1": 24421,
"Size2": 24421
"Size1": 23671,
"Size2": 23671
},
{
"Name": "/var/cache/ldconfig/aux-cache",
@ -24,8 +24,8 @@
},
{
"Name": "/var/log/apt/history.log",
"Size1": 5415,
"Size2": 5415
"Size1": 5661,
"Size2": 5661
},
{
"Name": "/var/log/alternatives.log",


@ -0,0 +1,12 @@
[
    {
        "Image1": "gcr.io/kaniko-test/docker-test-volume:latest",
        "Image2": "gcr.io/kaniko-test/kaniko-test-volume:latest",
        "DiffType": "File",
        "Diff": {
            "Adds": null,
            "Dels": null,
            "Mods": null
        }
    }
]


@ -95,6 +95,14 @@ var fileTests = []struct {
		kanikoContext:  buildcontextPath,
		repo:           "test-workdir",
	},
	{
		description:    "test volume",
		dockerfilePath: "/workspace/integration_tests/dockerfiles/Dockerfile_test_volume",
		configPath:     "/workspace/integration_tests/dockerfiles/config_test_volume.json",
		dockerContext:  buildcontextPath,
		kanikoContext:  buildcontextPath,
		repo:           "test-volume",
	},
	{
		description:    "test add",
		dockerfilePath: "/workspace/integration_tests/dockerfiles/Dockerfile_test_add",


@ -58,6 +58,8 @@ func GetCommand(cmd instructions.Command, buildcontext string) (DockerCommand, e
		return &UserCommand{cmd: c}, nil
	case *instructions.OnbuildCommand:
		return &OnBuildCommand{cmd: c}, nil
	case *instructions.VolumeCommand:
		return &VolumeCommand{cmd: c}, nil
	}
	return nil, errors.Errorf("%s is not a supported command", cmd.Name())
}

pkg/commands/volume.go

@ -0,0 +1,71 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package commands

import (
	"github.com/GoogleCloudPlatform/k8s-container-builder/pkg/util"
	"github.com/containers/image/manifest"
	"github.com/docker/docker/builder/dockerfile/instructions"
	"github.com/sirupsen/logrus"
	"os"
	"strings"
)

// VolumeCommand implements the Dockerfile VOLUME instruction.
type VolumeCommand struct {
	cmd           *instructions.VolumeCommand
	snapshotFiles []string
}

// ExecuteCommand resolves any environment references in the volume paths,
// records the volumes in the image config, adds them to the volume whitelist,
// and creates the directories so they can be snapshotted.
func (v *VolumeCommand) ExecuteCommand(config *manifest.Schema2Config) error {
	logrus.Info("cmd: VOLUME")
	volumes := v.cmd.Volumes
	resolvedVolumes, err := util.ResolveEnvironmentReplacementList(volumes, config.Env, true)
	if err != nil {
		return err
	}
	existingVolumes := config.Volumes
	if existingVolumes == nil {
		existingVolumes = map[string]struct{}{}
	}
	for _, volume := range resolvedVolumes {
		var x struct{}
		existingVolumes[volume] = x
		err := util.AddPathToVolumeWhitelist(volume)
		if err != nil {
			return err
		}
		logrus.Infof("Creating directory %s", volume)
		if err := os.MkdirAll(volume, 0755); err != nil {
			return err
		}
		// TODO: check whether the directory already exists
		v.snapshotFiles = append(v.snapshotFiles, volume)
	}
	config.Volumes = existingVolumes
	return nil
}

// FilesToSnapshot returns the volume directories created by this command.
func (v *VolumeCommand) FilesToSnapshot() []string {
	return v.snapshotFiles
}

// CreatedBy returns the command name joined with its volumes, for the image history.
func (v *VolumeCommand) CreatedBy() string {
	return strings.Join(append([]string{v.cmd.Name()}, v.cmd.Volumes...), " ")
}


@ -0,0 +1,54 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package commands

import (
	"github.com/GoogleCloudPlatform/k8s-container-builder/testutil"
	"github.com/containers/image/manifest"
	"github.com/docker/docker/builder/dockerfile/instructions"
	"testing"
)

func TestUpdateVolume(t *testing.T) {
	cfg := &manifest.Schema2Config{
		Env: []string{
			"VOLUME=/etc",
		},
		Volumes: map[string]struct{}{},
	}

	volumes := []string{
		"/tmp",
		"/var/lib",
		"$VOLUME",
	}

	volumeCmd := &VolumeCommand{
		cmd: &instructions.VolumeCommand{
			Volumes: volumes,
		},
		snapshotFiles: []string{},
	}

	// $VOLUME should be resolved to /etc via the config's Env.
	expectedVolumes := map[string]struct{}{
		"/tmp":     {},
		"/var/lib": {},
		"/etc":     {},
	}

	err := volumeCmd.ExecuteCommand(cfg)
	testutil.CheckErrorAndDeepEqual(t, false, err, expectedVolumes, cfg.Volumes)
}


@ -31,6 +31,7 @@ import (
)
var whitelist = []string{"/kaniko"}
var volumeWhitelist = []string{}
// ExtractFileSystemFromImage pulls an image and unpacks it to a file system at root
func ExtractFileSystemFromImage(img string) error {
@ -156,6 +157,25 @@ func CreateFile(path string, reader io.Reader, perm os.FileMode) error {
	return dest.Chmod(perm)
}
// AddPathToVolumeWhitelist adds the given path to the volume whitelist.
// It will get snapshotted when the VOLUME command is run, then ignored
// for subsequent commands.
func AddPathToVolumeWhitelist(path string) error {
	logrus.Infof("adding %s to volume whitelist", path)
	volumeWhitelist = append(volumeWhitelist, path)
	return nil
}
// MoveVolumeWhitelistToWhitelist copies over all directories that were volume mounted
// in this step to be whitelisted for all subsequent docker commands.
func MoveVolumeWhitelistToWhitelist() error {
	if len(volumeWhitelist) > 0 {
		whitelist = append(whitelist, volumeWhitelist...)
		volumeWhitelist = []string{}
	}
	return nil
}
// DownloadFileToDest downloads the file at rawurl to the given dest for the ADD command
// From add command docs:
// 1. If <src> is a remote file URL: