Resolved merge in README

Kartik Verma 2018-11-02 19:46:48 +05:30
parent b9c678e6f8
commit fdac2fa94c
No known key found for this signature in database
GPG Key ID: 1AB436B74ED67A64
48 changed files with 1135 additions and 267 deletions


@ -1,3 +1,22 @@
# v0.5.0 Release - 10/16/2018
## New Features
* Persistent volume caching for base images [#383](https://github.com/GoogleContainerTools/kaniko/pull/383)
## Updates
* Use only the necessary files in the cache keys. [#387](https://github.com/GoogleContainerTools/kaniko/pull/387)
* Change loglevel for copying files to debug (#303) [#393](https://github.com/GoogleContainerTools/kaniko/pull/393)
* Improve IsDestDir functionality with filesystem info [#390](https://github.com/GoogleContainerTools/kaniko/pull/390)
* Refactor the build loop. [#385](https://github.com/GoogleContainerTools/kaniko/pull/385)
* Rework cache key generation a bit. [#375](https://github.com/GoogleContainerTools/kaniko/pull/375)
## Bug Fixes
* fix mispell [#396](https://github.com/GoogleContainerTools/kaniko/pull/396)
* Update go-containerregistry dependency [#388](https://github.com/GoogleContainerTools/kaniko/pull/388)
* chore: fix broken markdown (CHANGELOG.md) [#382](https://github.com/GoogleContainerTools/kaniko/pull/382)
* Don't cut everything after an equals sign [#381](https://github.com/GoogleContainerTools/kaniko/pull/381)
# v0.4.0 Release - 10/01/2018
## New Features

Gopkg.lock generated

@ -445,8 +445,7 @@
version = "v0.2.0" version = "v0.2.0"
[[projects]] [[projects]]
branch = "master" digest = "1:f1b23f53418c1b035a5965ac2600a28b16c08643683d5213fb581ecf4e79a02a"
digest = "1:edf64d541c12aaf4f279642ea9939f035dcc9fc2edf649aba295e9cbca2c28d4"
name = "github.com/google/go-containerregistry" name = "github.com/google/go-containerregistry"
packages = [ packages = [
"pkg/authn", "pkg/authn",
@ -465,7 +464,7 @@
"pkg/v1/v1util", "pkg/v1/v1util",
] ]
pruneopts = "NUT" pruneopts = "NUT"
revision = "03167950e20ac82689f50828811e69cdd9e02af2" revision = "88d8d18eb1bde1fcef23c745205c738074290515"
[[projects]] [[projects]]
digest = "1:f4f203acd8b11b8747bdcd91696a01dbc95ccb9e2ca2db6abf81c3a4f5e950ce" digest = "1:f4f203acd8b11b8747bdcd91696a01dbc95ccb9e2ca2db6abf81c3a4f5e950ce"


@ -35,6 +35,10 @@ required = [
name = "k8s.io/client-go" name = "k8s.io/client-go"
version = "kubernetes-1.11.0" version = "kubernetes-1.11.0"
[[constraint]]
name = "github.com/google/go-containerregistry"
revision = "88d8d18eb1bde1fcef23c745205c738074290515"
[[override]]
name = "k8s.io/apimachinery"
version = "kubernetes-1.11.0"


@ -36,11 +36,15 @@ GO_LDFLAGS += -w -s # Drop debugging symbols.
GO_LDFLAGS += '
EXECUTOR_PACKAGE = $(REPOPATH)/cmd/executor
WARMER_PACKAGE = $(REPOPATH)/cmd/warmer
KANIKO_PROJECT = $(REPOPATH)/kaniko
out/executor: $(GO_FILES)
GOARCH=$(GOARCH) GOOS=linux CGO_ENABLED=0 go build -ldflags $(GO_LDFLAGS) -o $@ $(EXECUTOR_PACKAGE)
out/warmer: $(GO_FILES)
GOARCH=$(GOARCH) GOOS=linux CGO_ENABLED=0 go build -ldflags $(GO_LDFLAGS) -o $@ $(WARMER_PACKAGE)
.PHONY: test
test: out/executor
@ ./test.sh
@ -53,3 +57,4 @@ integration-test:
images:
docker build -t $(REGISTRY)/executor:latest -f deploy/Dockerfile .
docker build -t $(REGISTRY)/executor:debug -f deploy/Dockerfile_debug .
docker build -t $(REGISTRY)/warmer:latest -f deploy/Dockerfile_warmer .


@ -21,8 +21,8 @@ We do **not** recommend running the kaniko executor binary in another image, as
- [Running kaniko](#running-kaniko)
- [Running kaniko in a Kubernetes cluster](#running-kaniko-in-a-kubernetes-cluster)
- [Running kaniko in gVisor](#running-kaniko-in-gvisor)
- [Running kaniko in Google Cloud Build](#running-kaniko-in-google-cloud-build)
- [Running kaniko in Docker](#running-kaniko-in-docker)
- [Caching](#caching)
- [Pushing to Different Registries](#pushing-to-different-registries)
- [Additional Flags](#additional-flags)
@ -57,8 +57,20 @@ To use kaniko to build and push an image for you, you will need:
### kaniko Build Contexts
kaniko's build context is very similar to the build context you would send your Docker daemon for an image build; it represents a directory containing a Dockerfile which kaniko will use to build your image.
For example, a `COPY` command in your Dockerfile should refer to a file in the build context.
You will need to store your build context in a place that kaniko can access.
Right now, kaniko supports these storage solutions:
- GCS Bucket
- S3 Bucket
- Local Directory
_Note: the local directory option refers to a directory within the kaniko container.
If you wish to use this option, you will need to mount your build context into the container as a directory._
If using a GCS or S3 bucket, you will first need to create a compressed tar of your build context and upload it to your bucket.
Once running, kaniko will then download and unpack the compressed tar of the build context before starting the image build.
To create a compressed tar, you can run:
```shell
@ -70,11 +82,11 @@ For example, we can copy over the compressed tar to a GCS bucket with gsutil:
gsutil cp context.tar.gz gs://<bucket name>
```
When running kaniko, use the `--context` flag with the appropriate prefix to specify the location of your build context:
| Source | Prefix |
|---------|---------|
| Local Directory | dir://[path to a directory in the kaniko container] |
| GCS Bucket | gs://[bucket name]/[path to .tar.gz] |
| S3 Bucket | s3://[bucket name]/[path to .tar.gz] |
| Git Repository | git://[repository url] |
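For instance (the bucket names, repository, and paths below are placeholders), the flag takes forms like:
```shell
--context=dir:///workspace
--context=gs://my-bucket/path/to/context.tar.gz
--context=s3://my-bucket/path/to/context.tar.gz
--context=git://github.com/my-org/my-repo.git
```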
@ -91,8 +103,8 @@ There are several different ways to deploy and run kaniko:
- [In a Kubernetes cluster](#running-kaniko-in-a-kubernetes-cluster)
- [In gVisor](#running-kaniko-in-gvisor)
- [In Google Cloud Build](#running-kaniko-in-google-cloud-build)
- [In Docker](#running-kaniko-in-docker)
#### Running kaniko in a Kubernetes cluster
@ -100,19 +112,24 @@ Requirements:
- Standard Kubernetes cluster (e.g. using [GKE](https://cloud.google.com/kubernetes-engine/))
- [Kubernetes Secret](#kubernetes-secret)
- A [build context](#kaniko-build-contexts)
##### Kubernetes secret
To run kaniko in a Kubernetes cluster, you will need a standard running Kubernetes cluster and a Kubernetes secret, which contains the auth required to push the final image.
To create a secret to authenticate to Google Cloud Registry, follow these steps:
1. Create a service account in the Google Cloud Console project you want to push the final image to with `Storage Admin` permissions.
2. Download a JSON key for this service account
3. Rename the key to `kaniko-secret.json`
4. To create the secret, run:
```shell
kubectl create secret generic kaniko-secret --from-file=<path to kaniko-secret.json>
```
_Note: If using a GCS bucket in the same GCP project as a build context, this service account should now also have permissions to read from that bucket._
The Kubernetes Pod spec should look similar to this, with the args parameters filled in:
```yaml
@ -124,7 +141,7 @@ spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=<path to Dockerfile within the build context>",
           "--context=gs://<GCS bucket>/<path to .tar.gz>",
           "--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"]
    volumeMounts:
@ -158,21 +175,24 @@ gcr.io/kaniko-project/executor:latest \
We pass in `--runtime=runsc` to use gVisor.
This example mounts the current directory to `/workspace` for the build context and the `~/.config` directory for GCR credentials.
#### Running kaniko in Google Cloud Build
Requirements:
- A [build context](#kaniko-build-contexts)
To run kaniko in GCB, add it to your build config as a build step:
```yaml
steps:
- name: gcr.io/kaniko-project/executor:latest
args: ["--dockerfile=<path to Dockerfile>", args: ["--dockerfile=<path to Dockerfile within the build context>",
"--context=dir://<path to build context>", "--context=dir://<path to build context>",
"--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"] "--destination=<gcr.io/$PROJECT/$IMAGE:$TAG>"]
``` ```
kaniko will build and push the final image in this build step. kaniko will build and push the final image in this build step.
#### Running kaniko locally #### Running kaniko in Docker
Requirements: Requirements:
@ -194,6 +214,8 @@ We can run the kaniko executor image locally in a Docker daemon to build and pus
```
### Caching
#### Caching Layers
kaniko currently can cache layers created by `RUN` commands in a remote repository.
Before executing a command, kaniko checks the cache for the layer.
If it exists, kaniko will pull and extract the cached layer instead of executing the command.
@ -203,6 +225,21 @@ Users can opt in to caching by setting the `--cache=true` flag.
A remote repository for storing cached layers can be provided via the `--cache-repo` flag.
If this flag isn't provided, a cached repo will be inferred from the `--destination` provided.
#### Caching Base Images
kaniko can cache images in a local directory that can be volume mounted into the kaniko image.
To do so, the cache must first be populated, as it is read-only. We provide a kaniko cache warming
image at `gcr.io/kaniko-project/warmer`:
```shell
docker run -v $(pwd):/workspace gcr.io/kaniko-project/warmer:latest --cache-dir=/workspace/cache --image=<image to cache> --image=<another image to cache>
```
`--image` can be specified for any number of desired images.
This command will cache those images by digest in a local directory named `cache`.
Once the cache is populated, caching is opted into with the same `--cache=true` flag as above.
The location of the local cache is provided via the `--cache-dir` flag, defaulting to `/cache`, as with the cache warmer.
See the `examples` directory for how to use this with Kubernetes clusters and persistent cache volumes.
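As a sketch (the mount paths and image names are placeholders), a build that consumes the warmed cache could then mount the `cache` directory:
```shell
docker run \
  -v $(pwd)/cache:/cache \
  -v $(pwd):/workspace \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --destination=gcr.io/my-project/my-image:my-tag \
  --cache=true \
  --cache-dir=/cache
```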
### Pushing to Different Registries
kaniko uses Docker credential helpers to push images to a registry.
@ -249,7 +286,7 @@ To configure credentials, you will need to do the following:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=<path to Dockerfile within the build context>",
           "--context=s3://<bucket name>/<path to .tar.gz>",
           "--destination=<aws_account_id.dkr.ecr.region.amazonaws.com/my-repository:my-tag>"]
    volumeMounts:
@ -302,11 +339,19 @@ Set this flag if you only want to build the image, without pushing to a registry
#### --insecure
Set this flag if you want to push images to a plain HTTP registry. It is supposed to be used for testing purposes only and should not be used in production!
#### --skip-tls-verify
Set this flag to skip TLS certificate validation when pushing images to a registry. It is supposed to be used for testing purposes only and should not be used in production!
#### --insecure-pull
Set this flag if you want to pull images from a plain HTTP registry. It is supposed to be used for testing purposes only and should not be used in production!
#### --skip-tls-verify-pull
Set this flag to skip TLS certificate validation when pulling images from a registry. It is supposed to be used for testing purposes only and should not be used in production!
#### --cache
@ -321,6 +366,12 @@ If `--destination=gcr.io/kaniko-project/test`, then cached layers will be stored
_This flag must be used in conjunction with the `--cache=true` flag._
#### --cache-dir
Set this flag to specify a local directory cache for base images. Defaults to `/cache`.
_This flag must be used in conjunction with the `--cache=true` flag._
#### --cleanup
Set this flag to clean up the filesystem at the end, leaving a clean kaniko container (if you want to build multiple images in the same container, using the debug kaniko image)
@ -417,4 +468,4 @@ file are made and when the `mtime` is updated. This means:
which will still be correct, but it does affect the number of layers.
_Note that these issues are currently theoretical only. If you see this issue occur, please
[open an issue](https://github.com/GoogleContainerTools/kaniko/issues)._


@ -25,8 +25,10 @@ import (
"github.com/GoogleContainerTools/kaniko/pkg/buildcontext" "github.com/GoogleContainerTools/kaniko/pkg/buildcontext"
"github.com/GoogleContainerTools/kaniko/pkg/config" "github.com/GoogleContainerTools/kaniko/pkg/config"
"github.com/GoogleContainerTools/kaniko/pkg/constants" "github.com/GoogleContainerTools/kaniko/pkg/constants"
"github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
"github.com/GoogleContainerTools/kaniko/pkg/executor" "github.com/GoogleContainerTools/kaniko/pkg/executor"
"github.com/GoogleContainerTools/kaniko/pkg/util" "github.com/GoogleContainerTools/kaniko/pkg/util"
"github.com/docker/docker/pkg/fileutils"
"github.com/genuinetools/amicontained/container" "github.com/genuinetools/amicontained/container"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
@ -61,7 +63,10 @@ var RootCmd = &cobra.Command{
if err := resolveSourceContext(); err != nil {
return errors.Wrap(err, "error resolving source context")
}
if err := resolveDockerfilePath(); err != nil {
return errors.Wrap(err, "error resolving dockerfile path")
}
return removeIgnoredFiles()
},
Run: func(cmd *cobra.Command, args []string) {
if !checkContained() {
@ -91,14 +96,17 @@ func addKanikoOptionsFlags(cmd *cobra.Command) {
RootCmd.PersistentFlags().VarP(&opts.Destinations, "destination", "d", "Registry the final image should be pushed to. Set it repeatedly for multiple destinations.")
RootCmd.PersistentFlags().StringVarP(&opts.SnapshotMode, "snapshotMode", "", "full", "Change the file attributes inspected during snapshotting")
RootCmd.PersistentFlags().VarP(&opts.BuildArgs, "build-arg", "", "This flag allows you to pass in ARG values at build time. Set it repeatedly for multiple values.")
RootCmd.PersistentFlags().BoolVarP(&opts.Insecure, "insecure", "", false, "Push to insecure registry using plain HTTP")
RootCmd.PersistentFlags().BoolVarP(&opts.SkipTLSVerify, "skip-tls-verify", "", false, "Push to insecure registry ignoring TLS verify")
RootCmd.PersistentFlags().BoolVarP(&opts.InsecurePull, "insecure-pull", "", false, "Pull from insecure registry using plain HTTP")
RootCmd.PersistentFlags().BoolVarP(&opts.SkipTLSVerifyPull, "skip-tls-verify-pull", "", false, "Pull from insecure registry ignoring TLS verify")
RootCmd.PersistentFlags().StringVarP(&opts.TarPath, "tarPath", "", "", "Path to save the image in as a tarball instead of pushing")
RootCmd.PersistentFlags().BoolVarP(&opts.SingleSnapshot, "single-snapshot", "", false, "Take a single snapshot at the end of the build.")
RootCmd.PersistentFlags().BoolVarP(&opts.Reproducible, "reproducible", "", false, "Strip timestamps out of the image to make it reproducible")
RootCmd.PersistentFlags().StringVarP(&opts.Target, "target", "", "", "Set the target build stage to build")
RootCmd.PersistentFlags().BoolVarP(&opts.NoPush, "no-push", "", false, "Do not push the image to the registry")
RootCmd.PersistentFlags().StringVarP(&opts.CacheRepo, "cache-repo", "", "", "Specify a repository to use as a cache, otherwise one will be inferred from the destination provided")
RootCmd.PersistentFlags().StringVarP(&opts.CacheDir, "cache-dir", "", "/cache", "Specify a local directory to use as a cache.")
RootCmd.PersistentFlags().BoolVarP(&opts.Cache, "cache", "", false, "Use cache when building image")
RootCmd.PersistentFlags().BoolVarP(&opts.Cleanup, "cleanup", "", false, "Clean the filesystem at the end")
}
@ -137,7 +145,7 @@ func resolveDockerfilePath() error {
return errors.Wrap(err, "getting absolute path for dockerfile") return errors.Wrap(err, "getting absolute path for dockerfile")
} }
opts.DockerfilePath = abs opts.DockerfilePath = abs
return nil return copyDockerfile()
} }
// Otherwise, check if the path relative to the build context exists
if util.FilepathExists(filepath.Join(opts.SrcContext, opts.DockerfilePath)) {
@ -146,11 +154,21 @@ func resolveDockerfilePath() error {
return errors.Wrap(err, "getting absolute path for src context/dockerfile path") return errors.Wrap(err, "getting absolute path for src context/dockerfile path")
} }
opts.DockerfilePath = abs opts.DockerfilePath = abs
return nil return copyDockerfile()
} }
return errors.New("please provide a valid path to a Dockerfile within the build context with --dockerfile") return errors.New("please provide a valid path to a Dockerfile within the build context with --dockerfile")
} }
// copy Dockerfile to /kaniko/Dockerfile so that if it's specified in the .dockerignore
// it won't be copied into the image
func copyDockerfile() error {
if err := util.CopyFile(opts.DockerfilePath, constants.DockerfilePath); err != nil {
return errors.Wrap(err, "copying dockerfile")
}
opts.DockerfilePath = constants.DockerfilePath
return nil
}
// resolveSourceContext unpacks the source context if it is a tar in a bucket
// it resets srcContext to be the path to the unpacked build context within the image
func resolveSourceContext() error {
@ -181,6 +199,29 @@ func resolveSourceContext() error {
return nil
}
func removeIgnoredFiles() error {
if !dockerfile.DockerignoreExists(opts) {
return nil
}
ignore, err := dockerfile.ParseDockerignore(opts)
if err != nil {
return err
}
logrus.Infof("Removing ignored files from build context: %s", ignore)
files, err := util.RelativeFiles("", opts.SrcContext)
if err != nil {
return errors.Wrap(err, "getting all files in src context")
}
for _, f := range files {
if rm, _ := fileutils.Matches(f, ignore); rm {
if err := os.RemoveAll(f); err != nil {
logrus.Errorf("Error removing %s from build context", f)
}
}
}
return nil
}
func exit(err error) {
fmt.Println(err)
os.Exit(1)

cmd/warmer/cmd/root.go Normal file

@ -0,0 +1,74 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cmd
import (
"fmt"
"os"
"github.com/GoogleContainerTools/kaniko/pkg/cache"
"github.com/GoogleContainerTools/kaniko/pkg/config"
"github.com/GoogleContainerTools/kaniko/pkg/constants"
"github.com/GoogleContainerTools/kaniko/pkg/util"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
var (
opts = &config.WarmerOptions{}
logLevel string
)
func init() {
RootCmd.PersistentFlags().StringVarP(&logLevel, "verbosity", "v", constants.DefaultLogLevel, "Log level (debug, info, warn, error, fatal, panic)")
addKanikoOptionsFlags(RootCmd)
addHiddenFlags(RootCmd)
}
var RootCmd = &cobra.Command{
Use: "cache warmer",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := util.ConfigureLogging(logLevel); err != nil {
return err
}
if len(opts.Images) == 0 {
return errors.New("You must select at least one image to cache")
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
if err := cache.WarmCache(opts); err != nil {
exit(errors.Wrap(err, "Failed warming cache"))
}
},
}
// addKanikoOptionsFlags configures opts
func addKanikoOptionsFlags(cmd *cobra.Command) {
RootCmd.PersistentFlags().VarP(&opts.Images, "image", "i", "Image to cache. Set it repeatedly for multiple images.")
RootCmd.PersistentFlags().StringVarP(&opts.CacheDir, "cache-dir", "c", "/cache", "Directory of the cache.")
}
// addHiddenFlags marks certain flags as hidden from the warmer help text
func addHiddenFlags(cmd *cobra.Command) {
RootCmd.PersistentFlags().MarkHidden("azure-container-registry-config")
}
func exit(err error) {
fmt.Println(err)
os.Exit(1)
}

cmd/warmer/main.go Normal file

@ -0,0 +1,29 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"os"
"github.com/GoogleContainerTools/kaniko/cmd/warmer/cmd"
)
func main() {
if err := cmd.RootCmd.Execute(); err != nil {
os.Exit(1)
}
}

deploy/Dockerfile_warmer Normal file

@ -0,0 +1,32 @@
# Copyright 2018 Google, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Builds the static Go image to execute in a Kubernetes job
FROM golang:1.10
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
COPY . .
RUN make out/warmer
FROM scratch
COPY --from=0 /go/src/github.com/GoogleContainerTools/kaniko/out/warmer /kaniko/warmer
COPY files/ca-certificates.crt /kaniko/ssl/certs/
COPY files/config.json /kaniko/.docker/
ENV HOME /root
ENV USER /root
ENV PATH /usr/local/bin:/kaniko
ENV SSL_CERT_DIR=/kaniko/ssl/certs
ENV DOCKER_CONFIG /kaniko/.docker/
WORKDIR /workspace
ENTRYPOINT ["/kaniko/warmer"]


@ -14,7 +14,16 @@ steps:
- name: "gcr.io/cloud-builders/docker" - name: "gcr.io/cloud-builders/docker"
args: ["tag", "gcr.io/kaniko-project/executor:debug-$TAG_NAME", args: ["tag", "gcr.io/kaniko-project/executor:debug-$TAG_NAME",
"gcr.io/kaniko-project/executor:debug"] "gcr.io/kaniko-project/executor:debug"]
# Then, we want to build the cache warmer
- name: "gcr.io/cloud-builders/docker"
args: ["build", "-f", "deploy/Dockerfile_warmer",
"-t", "gcr.io/kaniko-project/warmer:$TAG_NAME", "."]
- name: "gcr.io/cloud-builders/docker"
args: ["tag", "gcr.io/kaniko-project/warmer:$TAG_NAME",
"gcr.io/kaniko-project/warmer:latest"]
images: ["gcr.io/kaniko-project/executor:$TAG_NAME", images: ["gcr.io/kaniko-project/executor:$TAG_NAME",
"gcr.io/kaniko-project/executor:latest", "gcr.io/kaniko-project/executor:latest",
"gcr.io/kaniko-project/executor:debug-$TAG_NAME", "gcr.io/kaniko-project/executor:debug-$TAG_NAME",
"gcr.io/kaniko-project/executor:debug"] "gcr.io/kaniko-project/executor:debug",
"gcr.io/kaniko-project/warmer:$TAG_NAME",
"gcr.io/kaniko-project/warmer:latest"]


@ -13,7 +13,16 @@ steps:
- name: "gcr.io/cloud-builders/docker" - name: "gcr.io/cloud-builders/docker"
args: ["build", "-f", "deploy/Dockerfile_debug", args: ["build", "-f", "deploy/Dockerfile_debug",
"-t", "gcr.io/kaniko-project/executor:debug", "."] "-t", "gcr.io/kaniko-project/executor:debug", "."]
# Then, we want to build the cache warmer
- name: "gcr.io/cloud-builders/docker"
args: ["build", "-f", "deploy/Dockerfile_warmer",
"-t", "gcr.io/kaniko-project/warmer:${COMMIT_SHA}", "."]
- name: "gcr.io/cloud-builders/docker"
args: ["build", "-f", "deploy/Dockerfile_warmer",
"-t", "gcr.io/kaniko-project/warmer:latest", "."]
images: ["gcr.io/kaniko-project/executor:${COMMIT_SHA}", images: ["gcr.io/kaniko-project/executor:${COMMIT_SHA}",
"gcr.io/kaniko-project/executor:latest", "gcr.io/kaniko-project/executor:latest",
"gcr.io/kaniko-project/executor:debug-${COMMIT_SHA}", "gcr.io/kaniko-project/executor:debug-${COMMIT_SHA}",
"gcr.io/kaniko-project/executor:debug"] "gcr.io/kaniko-project/executor:debug",
"gcr.io/kaniko-project/warmer:${COMMIT_SHA}",
"gcr.io/kaniko-project/warmer:latest"]


@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kaniko-cache-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 8Gi


@ -0,0 +1,14 @@
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kaniko-cache-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/tmp/kaniko-cache"

examples/kaniko-test.yaml Normal file

@ -0,0 +1,30 @@
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=<dockerfile>",
           "--context=<context>",
           "--destination=<destination>",
           "--cache",
           "--cache-dir=/cache"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
    - name: kaniko-cache
      mountPath: /cache
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret.json
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret
  - name: kaniko-cache
    persistentVolumeClaim:
      claimName: kaniko-cache-claim


@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-warmer
spec:
  containers:
  - name: kaniko-warmer
    image: gcr.io/kaniko-project/warmer:latest
    args: ["--cache-dir=/cache",
           "--image=gcr.io/google-appengine/debian9"]
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
    - name: kaniko-cache
      mountPath: /cache
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret.json
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret
  - name: kaniko-cache
    persistentVolumeClaim:
      claimName: kaniko-cache-claim


@ -34,5 +34,6 @@ fi
echo "Running integration tests..." echo "Running integration tests..."
make out/executor make out/executor
make out/warmer
pushd integration
go test -v --bucket "${GCS_BUCKET}" --repo "${IMAGE_REPO}" --timeout 30m


@ -0,0 +1,2 @@
# Tests extraction of symlink, hardlink and regular files to a path that is a non-empty directory
FROM gcr.io/kaniko-test/extraction-base-image:latest


@ -0,0 +1,2 @@
FROM scratch
COPY . .


@ -21,6 +21,8 @@ USER testuser:1001
RUN echo "hey2" >> /tmp/foo RUN echo "hey2" >> /tmp/foo
USER root USER root
RUN echo "hi" > $HOME/file
COPY context/foo $HOME/foo
RUN useradd -ms /bin/bash newuser
USER newuser

integration/dockerignore.go Normal file

@ -0,0 +1,112 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package integration
import (
"fmt"
"os"
"os/exec"
"path"
"path/filepath"
"runtime"
"strings"
)
var filesToIgnore = []string{"ignore/fo*", "!ignore/foobar", "ignore/Dockerfile_test_ignore"}
const (
ignoreDir = "ignore"
ignoreDockerfile = "Dockerfile_test_ignore"
ignoreDockerfileContents = `FROM scratch
COPY . .`
)
// Set up a test dir to ignore with the structure:
// ignore
// -- Dockerfile_test_ignore
// -- foo
// -- foobar
func setupIgnoreTestDir() error {
if err := os.MkdirAll(ignoreDir, 0750); err != nil {
return err
}
// Create and write contents to dockerfile
path := filepath.Join(ignoreDir, ignoreDockerfile)
f, err := os.Create(path)
if err != nil {
return err
}
defer f.Close()
if _, err := f.Write([]byte(ignoreDockerfileContents)); err != nil {
return err
}
additionalFiles := []string{"ignore/foo", "ignore/foobar"}
for _, add := range additionalFiles {
a, err := os.Create(add)
if err != nil {
return err
}
defer a.Close()
}
return generateDockerIgnore()
}
// generate the .dockerignore file
func generateDockerIgnore() error {
f, err := os.Create(".dockerignore")
if err != nil {
return err
}
defer f.Close()
contents := strings.Join(filesToIgnore, "\n")
if _, err := f.Write([]byte(contents)); err != nil {
return err
}
return nil
}
func generateDockerignoreImages(imageRepo string) error {
dockerfilePath := filepath.Join(ignoreDir, ignoreDockerfile)
dockerImage := strings.ToLower(imageRepo + dockerPrefix + ignoreDockerfile)
dockerCmd := exec.Command("docker", "build",
"-t", dockerImage,
"-f", path.Join(dockerfilePath),
".")
_, err := RunCommandWithoutTest(dockerCmd)
if err != nil {
return fmt.Errorf("Failed to build image %s with docker command \"%s\": %s", dockerImage, dockerCmd.Args, err)
}
_, ex, _, _ := runtime.Caller(0)
cwd := filepath.Dir(ex)
kanikoImage := GetKanikoImage(imageRepo, ignoreDockerfile)
kanikoCmd := exec.Command("docker",
"run",
"-v", os.Getenv("HOME")+"/.config/gcloud:/root/.config/gcloud",
"-v", cwd+":/workspace",
ExecutorImage,
"-f", path.Join(buildContextPath, dockerfilePath),
"-d", kanikoImage,
"-c", buildContextPath)
_, err = RunCommandWithoutTest(kanikoCmd)
return err
}


@ -36,7 +36,7 @@ func CreateIntegrationTarball() (string, error) {
}
tempDir, err := ioutil.TempDir("", "")
if err != nil {
return "", fmt.Errorf("Failed to create temporary directory to hold tarball: %s", err)
}
contextFile := fmt.Sprintf("%s/context_%d.tar.gz", tempDir, time.Now().UnixNano())
cmd := exec.Command("tar", "-C", dir, "-zcvf", contextFile, ".")


@ -30,10 +30,13 @@ import (
const (
// ExecutorImage is the name of the kaniko executor image
ExecutorImage = "executor-image"
WarmerImage = "warmer-image"
dockerPrefix = "docker-"
kanikoPrefix = "kaniko-"
buildContextPath = "/workspace"
cacheDir = "/workspace/cache"
baseImageToCache = "gcr.io/google-appengine/debian9@sha256:1d6a9a6d106bd795098f60f4abb7083626354fa6735e81743c7f8cfca11259f0"
)
// Arguments to build Dockerfiles with, used for both docker and kaniko builds
@ -162,6 +165,7 @@ func (d *DockerFileBuilder) BuildImage(imageRepo, gcsBucket, dockerfilesPath, do
dockerPath},
additionalFlags...)...,
)
_, err := RunCommandWithoutTest(dockerCmd)
if err != nil {
return fmt.Errorf("Failed to build image %s with docker command \"%s\": %s", dockerImage, dockerCmd.Args, err)
@ -208,6 +212,26 @@ func (d *DockerFileBuilder) BuildImage(imageRepo, gcsBucket, dockerfilesPath, do
return nil
}
func populateVolumeCache() error {
_, ex, _, _ := runtime.Caller(0)
cwd := filepath.Dir(ex)
warmerCmd := exec.Command("docker",
append([]string{"run",
"-v", os.Getenv("HOME") + "/.config/gcloud:/root/.config/gcloud",
"-v", cwd + ":/workspace",
WarmerImage,
"-c", cacheDir,
"-i", baseImageToCache},
)...,
)
if _, err := RunCommandWithoutTest(warmerCmd); err != nil {
return fmt.Errorf("Failed to warm kaniko cache: %s", err)
}
return nil
}
// buildCachedImages builds the images for testing caching via kaniko where version is the nth time this image has been built
func (d *DockerFileBuilder) buildCachedImages(imageRepo, cacheRepo, dockerfilesPath string, version int) error {
_, ex, _, _ := runtime.Caller(0)
@ -226,7 +250,8 @@ func (d *DockerFileBuilder) buildCachedImages(imageRepo, cacheRepo, dockerfilesP
"-d", kanikoImage, "-d", kanikoImage,
"-c", buildContextPath, "-c", buildContextPath,
cacheFlag, cacheFlag,
"--cache-repo", cacheRepo})..., "--cache-repo", cacheRepo,
"--cache-dir", cacheDir})...,
) )
if _, err := RunCommandWithoutTest(kanikoCmd); err != nil { if _, err := RunCommandWithoutTest(kanikoCmd); err != nil {


@ -145,6 +145,13 @@ func TestMain(m *testing.M) {
os.Exit(1)
}
fmt.Println("Building cache warmer image")
cmd = exec.Command("docker", "build", "-t", WarmerImage, "-f", "../deploy/Dockerfile_warmer", "..")
if _, err = RunCommandWithoutTest(cmd); err != nil {
fmt.Printf("Building kaniko's cache warmer failed: %s", err)
os.Exit(1)
}
fmt.Println("Building onbuild base image") fmt.Println("Building onbuild base image")
buildOnbuildBase := exec.Command("docker", "build", "-t", config.onbuildBaseImage, "-f", "dockerfiles/Dockerfile_onbuild_base", ".") buildOnbuildBase := exec.Command("docker", "build", "-t", config.onbuildBaseImage, "-f", "dockerfiles/Dockerfile_onbuild_base", ".")
if err := buildOnbuildBase.Run(); err != nil { if err := buildOnbuildBase.Run(); err != nil {
@ -238,6 +245,7 @@ func TestLayers(t *testing.T) {
// Build each image with kaniko twice, and then make sure they're exactly the same
func TestCache(t *testing.T) {
populateVolumeCache()
for dockerfile := range imageBuilder.TestCacheDockerfiles {
t.Run("test_cache_"+dockerfile, func(t *testing.T) {
cache := filepath.Join(config.imageRepo, "cache", fmt.Sprintf("%v", time.Now().UnixNano()))
@ -267,6 +275,31 @@ func TestCache(t *testing.T) {
}
}
func TestDockerignore(t *testing.T) {
t.Run(fmt.Sprintf("test_%s", ignoreDockerfile), func(t *testing.T) {
if err := setupIgnoreTestDir(); err != nil {
t.Fatalf("error setting up ignore test dir: %v", err)
}
if err := generateDockerignoreImages(config.imageRepo); err != nil {
t.Fatalf("error generating dockerignore test images: %v", err)
}
dockerImage := GetDockerImage(config.imageRepo, ignoreDockerfile)
kanikoImage := GetKanikoImage(config.imageRepo, ignoreDockerfile)
// container-diff
daemonDockerImage := daemonPrefix + dockerImage
containerdiffCmd := exec.Command("container-diff", "diff",
daemonDockerImage, kanikoImage,
"-q", "--type=file", "--type=metadata", "--json")
diff := RunCommand(containerdiffCmd, t)
t.Logf("diff = %s", string(diff))
expected := fmt.Sprintf(emptyContainerDiff, dockerImage, kanikoImage, dockerImage, kanikoImage)
checkContainerDiffOutput(t, diff, expected)
})
}
type fileDiff struct {
Name string
Size int

pkg/cache/cache.go vendored

@ -18,6 +18,7 @@ package cache
import (
"fmt"
"path"
"github.com/GoogleContainerTools/kaniko/pkg/config" "github.com/GoogleContainerTools/kaniko/pkg/config"
"github.com/google/go-containerregistry/pkg/authn" "github.com/google/go-containerregistry/pkg/authn"
@ -25,13 +26,21 @@ import (
"github.com/google/go-containerregistry/pkg/name" "github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1" "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/remote" "github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/google/go-containerregistry/pkg/v1/tarball"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
) )
type LayerCache interface {
RetrieveLayer(string) (v1.Image, error)
}
type RegistryCache struct {
Opts *config.KanikoOptions
}
func (rc *RegistryCache) RetrieveLayer(ck string) (v1.Image, error) {
cache, err := Destination(rc.Opts, ck)
if err != nil {
return nil, errors.Wrap(err, "getting cache destination")
}
@ -50,8 +59,11 @@ func RetrieveLayer(opts *config.KanikoOptions, cacheKey string) (v1.Image, error
if err != nil {
return nil, err
}
// Force the manifest to be populated
if _, err := img.RawManifest(); err != nil {
return nil, err
}
return img, nil
}
// Destination returns the repo where the layer should be stored
@ -68,3 +80,20 @@ func Destination(opts *config.KanikoOptions, cacheKey string) (string, error) {
}
return fmt.Sprintf("%s:%s", cache, cacheKey), nil
}
func LocalSource(opts *config.KanikoOptions, cacheKey string) (v1.Image, error) {
cache := opts.CacheDir
if cache == "" {
return nil, nil
}
path := path.Join(cache, cacheKey)
imgTar, err := tarball.ImageFromPath(path, nil)
if err != nil {
return nil, errors.Wrap(err, "getting image from path")
}
logrus.Infof("Found %s in local cache", cacheKey)
return imgTar, nil
}

pkg/cache/warm.go vendored Normal file

@ -0,0 +1,61 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache
import (
"fmt"
"path"
"github.com/GoogleContainerTools/kaniko/pkg/config"
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/google/go-containerregistry/pkg/v1/tarball"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func WarmCache(opts *config.WarmerOptions) error {
cacheDir := opts.CacheDir
images := opts.Images
logrus.Debugf("%s\n", cacheDir)
logrus.Debugf("%s\n", images)
for _, image := range images {
cacheRef, err := name.NewTag(image, name.WeakValidation)
if err != nil {
errors.Wrap(err, fmt.Sprintf("Failed to verify image name: %s", image))
}
img, err := remote.Image(cacheRef)
if err != nil {
errors.Wrap(err, fmt.Sprintf("Failed to retrieve image: %s", image))
}
digest, err := img.Digest()
if err != nil {
errors.Wrap(err, fmt.Sprintf("Failed to retrieve digest: %s", image))
}
cachePath := path.Join(cacheDir, digest.String())
if err := tarball.WriteToFile(cachePath, cacheRef, img); err != nil {
return errors.Wrap(err, fmt.Sprintf("Failed to write %s to cache", image))
}
logrus.Debugf("Wrote %s to cache", image)
}
return nil
}
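For reference, the warmer binary built from this package can be exercised directly; a minimal sketch (the image names are placeholders) mirroring the flags registered in cmd/warmer:

```shell
warmer --cache-dir=/workspace/cache \
  --image=gcr.io/my-project/base-image:latest \
  --image=gcr.io/my-project/another-base:latest
```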


@ -19,12 +19,13 @@ package commands
import (
"path/filepath"
"github.com/moby/buildkit/frontend/dockerfile/instructions"
"github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
"github.com/google/go-containerregistry/pkg/v1"
"github.com/GoogleContainerTools/kaniko/pkg/util"
"github.com/sirupsen/logrus"
)
@ -44,18 +45,13 @@ type AddCommand struct {
// 2. If <src> is a local tar archive:
// -If <src> is a local tar archive, it is unpacked at the dest, as 'tar -x' would
func (a *AddCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
replacementEnvs := buildArgs.ReplacementEnvs(config.Env)
srcs, dest, err := resolveEnvAndWildcards(a.cmd.SourcesAndDest, a.buildcontext, replacementEnvs)
if err != nil {
return err
}
var unresolvedSrcs []string
// If any of the sources are local tar archives:
// 1. Unpack them to the specified destination
@ -94,6 +90,7 @@ func (a *AddCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bui
},
buildcontext: a.buildcontext,
}
if err := copyCmd.ExecuteCommand(config, buildArgs); err != nil {
return err
}
@ -111,6 +108,26 @@ func (a *AddCommand) String() string {
return a.cmd.String()
}
func (a *AddCommand) FilesUsedFromContext(config *v1.Config, buildArgs *dockerfile.BuildArgs) ([]string, error) {
replacementEnvs := buildArgs.ReplacementEnvs(config.Env)
srcs, _, err := resolveEnvAndWildcards(a.cmd.SourcesAndDest, a.buildcontext, replacementEnvs)
if err != nil {
return nil, err
}
files := []string{}
for _, src := range srcs {
if util.IsSrcRemoteFileURL(src) {
continue
}
if util.IsFileLocalTarArchive(src) {
continue
}
fullPath := filepath.Join(a.buildcontext, src)
files = append(files, fullPath)
}
logrus.Infof("Using files from context: %v", files)
return files, nil
}


@ -16,19 +16,22 @@ limitations under the License.
package commands
import (
"github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
"github.com/google/go-containerregistry/pkg/v1"
)
type BaseCommand struct {
}
func (b *BaseCommand) CacheCommand(v1.Image) DockerCommand {
return nil
}
func (b *BaseCommand) FilesToSnapshot() []string {
return []string{}
}
func (b *BaseCommand) FilesUsedFromContext(_ *v1.Config, _ *dockerfile.BuildArgs) ([]string, error) {
return []string{}, nil
}


@ -24,6 +24,8 @@ import (
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
) )
type CurrentCacheKey func() (string, error)
type DockerCommand interface {
// ExecuteCommand is responsible for:
// 1. Making required changes to the filesystem (ex. copying files for ADD/COPY or setting ENV variables)
@ -34,12 +36,11 @@ type DockerCommand interface {
String() string
// A list of files to snapshot, empty for metadata commands or nil if we don't know
FilesToSnapshot() []string
// Return a cache-aware implementation of this command, if it exists.
CacheCommand(v1.Image) DockerCommand
// Return the files from the build context that this command depends on.
FilesUsedFromContext(*v1.Config, *dockerfile.BuildArgs) ([]string, error)
}
func GetCommand(cmd instructions.Command, buildcontext string) (DockerCommand, error) {


@ -20,12 +20,14 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"github.com/moby/buildkit/frontend/dockerfile/instructions"
"github.com/sirupsen/logrus"
"github.com/GoogleContainerTools/kaniko/pkg/constants" "github.com/GoogleContainerTools/kaniko/pkg/constants"
"github.com/GoogleContainerTools/kaniko/pkg/dockerfile" "github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
"github.com/GoogleContainerTools/kaniko/pkg/util" "github.com/GoogleContainerTools/kaniko/pkg/util"
"github.com/google/go-containerregistry/pkg/v1" "github.com/google/go-containerregistry/pkg/v1"
"github.com/moby/buildkit/frontend/dockerfile/instructions"
) )
type CopyCommand struct {
@ -40,18 +42,14 @@ func (c *CopyCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bu
if c.cmd.From != "" { if c.cmd.From != "" {
c.buildcontext = filepath.Join(constants.KanikoDir, c.cmd.From) c.buildcontext = filepath.Join(constants.KanikoDir, c.cmd.From)
} }
replacementEnvs := buildArgs.ReplacementEnvs(config.Env)
srcs, dest, err := resolveEnvAndWildcards(c.cmd.SourcesAndDest, c.buildcontext, replacementEnvs)
if err != nil {
return err
}
// For each source, iterate through and copy it over
for _, src := range srcs {
fullPath := filepath.Join(c.buildcontext, src)
@ -94,6 +92,18 @@ func (c *CopyCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bu
return nil
}
func resolveEnvAndWildcards(sd instructions.SourcesAndDest, buildcontext string, envs []string) ([]string, string, error) {
// First, resolve any environment replacement
resolvedEnvs, err := util.ResolveEnvironmentReplacementList(sd, envs, true)
if err != nil {
return nil, "", err
}
dest := resolvedEnvs[len(resolvedEnvs)-1]
// Resolve wildcards and get a list of resolved sources
srcs, err := util.ResolveSources(resolvedEnvs, buildcontext)
return srcs, dest, err
}
// FilesToSnapshot should return an empty array if still nil; no files were changed
func (c *CopyCommand) FilesToSnapshot() []string {
return c.snapshotFiles
@ -104,6 +114,23 @@ func (c *CopyCommand) String() string {
return c.cmd.String()
}
func (c *CopyCommand) FilesUsedFromContext(config *v1.Config, buildArgs *dockerfile.BuildArgs) ([]string, error) {
// We don't use the context if we're performing a copy --from.
if c.cmd.From != "" {
return nil, nil
}
replacementEnvs := buildArgs.ReplacementEnvs(config.Env)
srcs, _, err := resolveEnvAndWildcards(c.cmd.SourcesAndDest, c.buildcontext, replacementEnvs)
if err != nil {
return nil, err
}
files := []string{}
for _, src := range srcs {
fullPath := filepath.Join(c.buildcontext, src)
files = append(files, fullPath)
}
logrus.Infof("Using files from context: %v", files)
return files, nil
}


@ -127,7 +127,7 @@ func addDefaultHOME(u string, envs []string) []string {
}
// If user isn't set or is root, set default value of HOME
if u == "" || u == constants.RootUser {
return append(envs, fmt.Sprintf("%s=%s", constants.HOME, constants.DefaultHOMEValue))
}
@ -153,6 +153,35 @@ func (r *RunCommand) FilesToSnapshot() []string {
}
// CacheCommand returns a cache-aware implementation of this command
func (r *RunCommand) CacheCommand(img v1.Image) DockerCommand {
return &CachingRunCommand{
img: img,
cmd: r.cmd,
}
}
type CachingRunCommand struct {
BaseCommand
img v1.Image
extractedFiles []string
cmd *instructions.RunCommand
}
func (cr *CachingRunCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
logrus.Infof("Found cached layer, extracting to filesystem")
var err error
cr.extractedFiles, err = util.GetFSFromImage(constants.RootDir, cr.img)
if err != nil {
return errors.Wrap(err, "extracting fs from image")
}
return nil
}
func (cr *CachingRunCommand) FilesToSnapshot() []string {
return cr.extractedFiles
}
func (cr *CachingRunCommand) String() string {
return cr.cmd.String()
} }
View File
@@ -62,6 +62,17 @@ func Test_addDefaultHOME(t *testing.T) {
 				"HOME=/",
 			},
 		},
+		{
+			name: "HOME isn't set, user is set to root",
+			user: "root",
+			initial: []string{
+				"PATH=/something/else",
+			},
+			expected: []string{
+				"PATH=/something/else",
+				"HOME=/root",
+			},
+		},
 	}
 	for _, test := range tests {
 		t.Run(test.name, func(t *testing.T) {
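For reference, a hedged sketch of the behavior the new test case pins down. Only the `u == "" || u == constants.RootUser` branch appears in this commit; the HOME-already-set loop and the non-root fallback are assumptions made to keep the example runnable:

package main

import (
	"fmt"
	"strings"
)

// Sketch of addDefaultHOME's resulting behavior (not the verbatim source).
func addDefaultHOME(u string, envs []string) []string {
	for _, env := range envs {
		if strings.HasPrefix(env, "HOME=") {
			return envs // HOME explicitly set; leave it alone
		}
	}
	// Empty user and the root user both get Docker's default $HOME.
	if u == "" || u == "root" {
		return append(envs, "HOME=/root")
	}
	// Assumed fallback for other users.
	return append(envs, "HOME=/home/"+u)
}

func main() {
	fmt.Println(addDefaultHOME("root", []string{"PATH=/something/else"}))
	// [PATH=/something/else HOME=/root]
}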
View File
@@ -30,8 +30,7 @@ import (
 
 type VolumeCommand struct {
 	BaseCommand
 	cmd           *instructions.VolumeCommand
-	snapshotFiles []string
 }
 
 func (v *VolumeCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
@@ -57,7 +56,6 @@ func (v *VolumeCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
 		// Only create and snapshot the dir if it didn't exist already
 		if _, err := os.Stat(volume); os.IsNotExist(err) {
 			logrus.Infof("Creating directory %s", volume)
-			v.snapshotFiles = append(v.snapshotFiles, volume)
 			if err := os.MkdirAll(volume, 0755); err != nil {
 				return fmt.Errorf("Could not create directory for volume %s: %s", volume, err)
 			}
@@ -69,7 +67,7 @@ func (v *VolumeCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
 }
 
 func (v *VolumeCommand) FilesToSnapshot() []string {
-	return v.snapshotFiles
+	return []string{}
 }
 
 func (v *VolumeCommand) String() string {
View File
@@ -43,7 +43,6 @@ func TestUpdateVolume(t *testing.T) {
 		cmd: &instructions.VolumeCommand{
 			Volumes: volumes,
 		},
-		snapshotFiles: []string{},
 	}
 	expectedVolumes := map[string]struct{}{
View File
@@ -18,20 +18,29 @@ package config
 
 // KanikoOptions are options that are set by command line arguments
 type KanikoOptions struct {
-	DockerfilePath string
-	SrcContext     string
-	SnapshotMode   string
-	Bucket         string
-	TarPath        string
-	Target         string
-	CacheRepo      string
-	Destinations   multiArg
-	BuildArgs      multiArg
-	InsecurePush   bool
-	SkipTLSVerify  bool
-	SingleSnapshot bool
-	Reproducible   bool
-	NoPush         bool
-	Cache          bool
-	Cleanup        bool
+	DockerfilePath    string
+	SrcContext        string
+	SnapshotMode      string
+	Bucket            string
+	TarPath           string
+	Target            string
+	CacheRepo         string
+	CacheDir          string
+	Destinations      multiArg
+	BuildArgs         multiArg
+	Insecure          bool
+	SkipTLSVerify     bool
+	InsecurePull      bool
+	SkipTLSVerifyPull bool
+	SingleSnapshot    bool
+	Reproducible      bool
+	NoPush            bool
+	Cache             bool
+	Cleanup           bool
+}
+
+// WarmerOptions are options that are set by command line arguments to the cache warmer.
+type WarmerOptions struct {
+	Images   multiArg
+	CacheDir string
 }
View File
@@ -33,6 +33,9 @@ const (
 	Author = "kaniko"
 
+	// DockerfilePath is the path the Dockerfile is copied to
+	DockerfilePath = "/kaniko/Dockerfile"
+
 	// ContextTar is the default name of the tar uploaded to GCS buckets
 	ContextTar = "context.tar.gz"
@@ -59,13 +62,11 @@ const (
 	HOME = "HOME"
 	// DefaultHOMEValue is the default value Docker sets for $HOME
 	DefaultHOMEValue = "/root"
+	RootUser         = "root"
 
 	// Docker command names
 	Cmd        = "cmd"
 	Entrypoint = "entrypoint"
-
-	// VolumeCmdName is the name of the volume command
-	VolumeCmdName = "volume"
 )
 
 // KanikoBuildFiles is the list of files required to build kaniko
View File
@@ -20,11 +20,13 @@ import (
 	"bytes"
 	"fmt"
 	"io/ioutil"
+	"path/filepath"
 	"strconv"
 	"strings"
 
 	"github.com/GoogleContainerTools/kaniko/pkg/config"
 	"github.com/GoogleContainerTools/kaniko/pkg/util"
+	"github.com/docker/docker/builder/dockerignore"
 	"github.com/moby/buildkit/frontend/dockerfile/instructions"
 	"github.com/moby/buildkit/frontend/dockerfile/parser"
 	"github.com/pkg/errors"
@@ -168,3 +170,20 @@ func saveStage(index int, stages []instructions.Stage) bool {
 	}
 	return false
 }
+
+// DockerignoreExists returns true if .dockerignore exists in the source context
+func DockerignoreExists(opts *config.KanikoOptions) bool {
+	path := filepath.Join(opts.SrcContext, ".dockerignore")
+	return util.FilepathExists(path)
+}
+
+// ParseDockerignore returns a list of all paths in .dockerignore
+func ParseDockerignore(opts *config.KanikoOptions) ([]string, error) {
+	path := filepath.Join(opts.SrcContext, ".dockerignore")
+	contents, err := ioutil.ReadFile(path)
+	if err != nil {
+		return nil, errors.Wrap(err, "parsing .dockerignore")
+	}
+	reader := bytes.NewBuffer(contents)
+	return dockerignore.ReadAll(reader)
+}
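As an illustration, a caller could combine the two new helpers like this; the package import path `pkg/dockerfile` and the logging are assumptions for the sake of a runnable sketch:

package main

import (
	"log"

	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/pkg/dockerfile" // assumed package path
)

func main() {
	opts := &config.KanikoOptions{SrcContext: "/workspace"}
	// Only parse when the file exists; ParseDockerignore errors otherwise.
	if dockerfile.DockerignoreExists(opts) {
		ignored, err := dockerfile.ParseDockerignore(opts)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("paths excluded from the build context: %v", ignored)
	}
}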
View File
@@ -49,11 +49,12 @@ type stageBuilder struct {
 	cf              *v1.ConfigFile
 	snapshotter     *snapshot.Snapshotter
 	baseImageDigest string
+	opts            *config.KanikoOptions
 }
 
 // newStageBuilder returns a new type stageBuilder which contains all the information required to build the stage
 func newStageBuilder(opts *config.KanikoOptions, stage config.KanikoStage) (*stageBuilder, error) {
-	sourceImage, err := util.RetrieveSourceImage(stage, opts.BuildArgs)
+	sourceImage, err := util.RetrieveSourceImage(stage, opts.BuildArgs, opts)
 	if err != nil {
 		return nil, err
 	}
@@ -81,37 +82,11 @@ func newStageBuilder(opts *config.KanikoOptions, stage config.KanikoStage) (*stageBuilder, error) {
 		cf:              imageConfig,
 		snapshotter:     snapshotter,
 		baseImageDigest: digest.String(),
+		opts:            opts,
 	}, nil
 }
 
-// extractCachedLayer will extract the cached layer and append it to the config file
-func (s *stageBuilder) extractCachedLayer(layer v1.Image, createdBy string) error {
-	logrus.Infof("Found cached layer, extracting to filesystem")
-	extractedFiles, err := util.GetFSFromImage(constants.RootDir, layer)
-	if err != nil {
-		return errors.Wrap(err, "extracting fs from image")
-	}
-	if _, err := s.snapshotter.TakeSnapshot(extractedFiles); err != nil {
-		return err
-	}
-	logrus.Infof("Appending cached layer to base image")
-	l, err := layer.Layers()
-	if err != nil {
-		return errors.Wrap(err, "getting cached layer from image")
-	}
-	s.image, err = mutate.Append(s.image,
-		mutate.Addendum{
-			Layer: l[0],
-			History: v1.History{
-				Author:    constants.Author,
-				CreatedBy: createdBy,
-			},
-		},
-	)
-	return err
-}
-
-func (s *stageBuilder) build(opts *config.KanikoOptions) error {
+func (s *stageBuilder) build() error {
 	// Unpack file system to root
 	if _, err := util.GetFSFromImage(constants.RootDir, s.image); err != nil {
 		return err
@@ -120,127 +95,158 @@ func (s *stageBuilder) build(opts *config.KanikoOptions) error {
 	if err := s.snapshotter.Init(); err != nil {
 		return err
 	}
-	var volumes []string
 	// Set the initial cache key to be the base image digest, the build args and the SrcContext.
 	compositeKey := NewCompositeCache(s.baseImageDigest)
-	contextHash, err := HashDir(opts.SrcContext)
-	if err != nil {
-		return err
-	}
-	compositeKey.AddKey(opts.BuildArgs...)
-	args := dockerfile.NewBuildArgs(opts.BuildArgs)
-	for index, cmd := range s.stage.Commands {
-		finalCmd := index == len(s.stage.Commands)-1
-		command, err := commands.GetCommand(cmd, opts.SrcContext)
+	compositeKey.AddKey(s.opts.BuildArgs...)
+
+	cmds := []commands.DockerCommand{}
+	for _, cmd := range s.stage.Commands {
+		command, err := commands.GetCommand(cmd, s.opts.SrcContext)
 		if err != nil {
 			return err
 		}
+		cmds = append(cmds, command)
+	}
+
+	layerCache := &cache.RegistryCache{
+		Opts: s.opts,
+	}
+	if s.opts.Cache {
+		// Possibly replace commands with their cached implementations.
+		for i, command := range cmds {
+			if command == nil {
+				continue
+			}
+			ck, err := compositeKey.Hash()
+			if err != nil {
+				return err
+			}
+			img, err := layerCache.RetrieveLayer(ck)
+			if err != nil {
+				logrus.Infof("No cached layer found for cmd %s", command.String())
+				break
+			}
+			if cacheCmd := command.CacheCommand(img); cacheCmd != nil {
+				logrus.Infof("Using caching version of cmd: %s", command.String())
+				cmds[i] = cacheCmd
+			}
+		}
+	}
+
+	args := dockerfile.NewBuildArgs(s.opts.BuildArgs)
+	for index, command := range cmds {
 		if command == nil {
 			continue
 		}
 		// Add the next command to the cache key.
 		compositeKey.AddKey(command.String())
-		if command.UsesContext() {
-			compositeKey.AddKey(contextHash)
+
+		// If the command uses files from the context, add them.
+		files, err := command.FilesUsedFromContext(&s.cf.Config, args)
+		if err != nil {
+			return err
+		}
+		for _, f := range files {
+			if err := compositeKey.AddPath(f); err != nil {
+				return err
+			}
 		}
 		logrus.Info(command.String())
+		if err := command.ExecuteCommand(&s.cf.Config, args); err != nil {
+			return err
+		}
+		files = command.FilesToSnapshot()
+		var contents []byte
+
+		if !s.shouldTakeSnapshot(index, files) {
+			continue
+		}
+
+		if files == nil || s.opts.SingleSnapshot {
+			contents, err = s.snapshotter.TakeSnapshotFS()
+		} else {
+			// Volumes are very weird. They get created in their command, but snapshotted in the next one.
+			// Add them to the list of files to snapshot.
+			for v := range s.cf.Config.Volumes {
+				files = append(files, v)
+			}
+			contents, err = s.snapshotter.TakeSnapshot(files)
+		}
+		if err != nil {
+			return err
+		}
+
 		ck, err := compositeKey.Hash()
 		if err != nil {
 			return err
 		}
-		if command.CacheCommand() && opts.Cache {
-			image, err := cache.RetrieveLayer(opts, ck)
-			if err == nil {
-				if err := s.extractCachedLayer(image, command.String()); err != nil {
-					return errors.Wrap(err, "extracting cached layer")
-				}
-				continue
-			}
-			logrus.Info("No cached layer found, executing command...")
-		}
-		if err := command.ExecuteCommand(&s.cf.Config, args); err != nil {
-			return err
-		}
-		files := command.FilesToSnapshot()
-		if cmd.Name() == constants.VolumeCmdName {
-			volumes = append(volumes, files...)
-			continue
-		}
-		var contents []byte
-		// If this is an intermediate stage, we only snapshot for the last command and we
-		// want to snapshot the entire filesystem since we aren't tracking what was changed
-		// by previous commands.
-		if !s.stage.Final {
-			if finalCmd {
-				contents, err = s.snapshotter.TakeSnapshotFS()
-			}
-		} else {
-			// If we are in single snapshot mode, we only take a snapshot once, after all
-			// commands have completed.
-			if opts.SingleSnapshot {
-				if finalCmd {
-					contents, err = s.snapshotter.TakeSnapshotFS()
-				}
-			} else {
-				// Otherwise, in the final stage we take a snapshot at each command. If we know
-				// the files that were changed, we'll snapshot those explicitly, otherwise we'll
-				// check if anything in the filesystem changed.
-				if files != nil {
-					if len(files) > 0 {
-						files = append(files, volumes...)
-						volumes = []string{}
-					}
-					contents, err = s.snapshotter.TakeSnapshot(files)
-				} else {
-					contents, err = s.snapshotter.TakeSnapshotFS()
-					volumes = []string{}
-				}
-			}
-		}
-		if err != nil {
-			return fmt.Errorf("Error taking snapshot of files for command %s: %s", command, err)
-		}
-		if contents == nil {
-			logrus.Info("No files were changed, appending empty layer to config. No layer added to image.")
-			continue
-		}
-		// Append the layer to the image
-		opener := func() (io.ReadCloser, error) {
-			return ioutil.NopCloser(bytes.NewReader(contents)), nil
-		}
-		layer, err := tarball.LayerFromOpener(opener)
-		if err != nil {
-			return err
-		}
-		// Push layer to cache now along with new config file
-		if command.CacheCommand() && opts.Cache {
-			if err := pushLayerToCache(opts, ck, layer, command.String()); err != nil {
-				return err
-			}
-		}
-		s.image, err = mutate.Append(s.image,
-			mutate.Addendum{
-				Layer: layer,
-				History: v1.History{
-					Author:    constants.Author,
-					CreatedBy: command.String(),
-				},
-			},
-		)
-		if err != nil {
+		if err := s.saveSnapshot(command.String(), ck, contents); err != nil {
 			return err
 		}
 	}
 	return nil
 }
+
+func (s *stageBuilder) shouldTakeSnapshot(index int, files []string) bool {
+	isLastCommand := index == len(s.stage.Commands)-1
+
+	// We only snapshot the very end of intermediate stages.
+	if !s.stage.Final {
+		return isLastCommand
+	}
+
+	// We only snapshot the very end with single snapshot mode on.
+	if s.opts.SingleSnapshot {
+		return isLastCommand
+	}
+
+	// nil means snapshot everything.
+	if files == nil {
+		return true
+	}
+
+	// Don't snapshot an empty list.
+	if len(files) == 0 {
+		return false
+	}
+
+	return true
+}
+
+func (s *stageBuilder) saveSnapshot(createdBy string, ck string, contents []byte) error {
+	if contents == nil {
+		logrus.Info("No files were changed, appending empty layer to config. No layer added to image.")
+		return nil
+	}
+	// Append the layer to the image
+	opener := func() (io.ReadCloser, error) {
+		return ioutil.NopCloser(bytes.NewReader(contents)), nil
+	}
+	layer, err := tarball.LayerFromOpener(opener)
+	if err != nil {
+		return err
+	}
+	// Push layer to cache now along with new config file
+	if s.opts.Cache {
+		if err := pushLayerToCache(s.opts, ck, layer, createdBy); err != nil {
+			return err
+		}
+	}
+	s.image, err = mutate.Append(s.image,
+		mutate.Addendum{
+			Layer: layer,
+			History: v1.History{
+				Author:    constants.Author,
+				CreatedBy: createdBy,
+			},
+		},
+	)
+	return err
+}
 
 // DoBuild executes building the Dockerfile
 func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
 	// Parse dockerfile and unpack base image to root
@@ -253,7 +259,7 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
 		if err != nil {
 			return nil, errors.Wrap(err, fmt.Sprintf("getting stage builder for stage %d", index))
 		}
-		if err := sb.build(opts); err != nil {
+		if err := sb.build(); err != nil {
 			return nil, errors.Wrap(err, "error building stage")
 		}
 		reviewConfig(stage, &sb.cf.Config)
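The snapshot decision above collapses to a small truth table. Here it is restated as a standalone, runnable sketch with the struct's inputs flattened into plain parameters (illustration only):

package main

import "fmt"

// shouldSnapshot restates shouldTakeSnapshot's logic: intermediate stages
// and single-snapshot mode only snapshot after the last command; otherwise
// a nil file list means "scan everything" and an empty list means "skip".
func shouldSnapshot(finalStage, singleSnapshot, lastCommand bool, files []string) bool {
	if !finalStage || singleSnapshot {
		return lastCommand
	}
	if files == nil {
		return true
	}
	return len(files) > 0
}

func main() {
	fmt.Println(shouldSnapshot(true, false, false, nil))                // true: full filesystem scan
	fmt.Println(shouldSnapshot(true, false, false, []string{}))         // false: nothing changed
	fmt.Println(shouldSnapshot(true, false, false, []string{"/a"}))     // true: snapshot listed files
	fmt.Println(shouldSnapshot(false, false, true, []string{}))         // true: end of intermediate stage
}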
View File
@@ -53,6 +53,32 @@ func (s *CompositeCache) Hash() (string, error) {
 	return util.SHA256(strings.NewReader(s.Key()))
 }
 
+func (s *CompositeCache) AddPath(p string) error {
+	sha := sha256.New()
+	fi, err := os.Lstat(p)
+	if err != nil {
+		return err
+	}
+
+	if fi.Mode().IsDir() {
+		k, err := HashDir(p)
+		if err != nil {
+			return err
+		}
+		s.keys = append(s.keys, k)
+		return nil
+	}
+	fh, err := util.CacheHasher()(p)
+	if err != nil {
+		return err
+	}
+	if _, err := sha.Write([]byte(fh)); err != nil {
+		return err
+	}
+
+	s.keys = append(s.keys, string(sha.Sum(nil)))
+	return nil
+}
+
 // HashDir returns a hash of the directory.
 func HashDir(p string) (string, error) {
 	sha := sha256.New()
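To make the caching idea concrete, here is a self-contained analogue of CompositeCache with the hashing details simplified: every key, whether a command string or a context-file hash, feeds one rolling identity, so a change early in the Dockerfile invalidates every later layer's cache key.

package main

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// compositeKey is a stand-in for the CompositeCache above: the final cache
// key is a hash over all accumulated sub-keys, in order.
type compositeKey struct{ keys []string }

func (c *compositeKey) addKey(k ...string) { c.keys = append(c.keys, k...) }

func (c *compositeKey) hash() string {
	sum := sha256.Sum256([]byte(strings.Join(c.keys, "")))
	return fmt.Sprintf("%x", sum)
}

func main() {
	k := &compositeKey{}
	k.addKey("sha256:base-image-digest") // base image
	k.addKey("RUN make")                 // command string
	k.addKey("deadbeef")                 // stand-in for a context file's hash
	fmt.Println(k.hash())
}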
View File
@@ -71,7 +71,7 @@ func DoPush(image v1.Image, opts *config.KanikoOptions) error {
 	// continue pushing unless an error occurs
 	for _, destRef := range destRefs {
-		if opts.InsecurePush {
+		if opts.Insecure {
 			newReg, err := name.NewInsecureRegistry(destRef.Repository.Registry.Name(), name.WeakValidation)
 			if err != nil {
 				return errors.Wrap(err, "getting new insecure registry")
View File
@@ -133,7 +133,14 @@ func matchSources(srcs, files []string) ([]string, error) {
 }
 
 func IsDestDir(path string) bool {
-	return strings.HasSuffix(path, "/") || path == "."
+	// try to stat the path
+	fileInfo, err := os.Stat(path)
+	if err != nil {
+		// fall back to string-based determination
+		return strings.HasSuffix(path, "/") || path == "."
+	}
+	// if it's a real path, check the fs response
+	return fileInfo.IsDir()
 }
 
 // DestinationFilepath returns the destination filepath from the build context to the image filesystem
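Expected behavior of the new IsDestDir, sketched with a local copy of the function and hypothetical paths:

package main

import (
	"fmt"
	"os"
	"strings"
)

// isDestDir is a local copy of the new IsDestDir, for demonstration only.
func isDestDir(path string) bool {
	fileInfo, err := os.Stat(path)
	if err != nil {
		// path doesn't exist (yet): fall back to the old string heuristic
		return strings.HasSuffix(path, "/") || path == "."
	}
	return fileInfo.IsDir()
}

func main() {
	fmt.Println(isDestDir("/tmp/"))        // true: trailing slash
	fmt.Println(isDestDir("."))            // true
	fmt.Println(isDestDir(os.TempDir()))   // true: stat reports a directory
	fmt.Println(isDestDir("no/such/path")) // false: falls back to the string check
}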
View File
@@ -193,7 +193,7 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
 		// Check if something already exists at path (symlinks etc.)
 		// If so, delete it
 		if FilepathExists(path) {
-			if err := os.Remove(path); err != nil {
+			if err := os.RemoveAll(path); err != nil {
 				return errors.Wrapf(err, "error removing %s to make way for new file.", path)
 			}
 		}
@@ -242,7 +242,7 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
 		// Check if something already exists at path
 		// If so, delete it
 		if FilepathExists(path) {
-			if err := os.Remove(path); err != nil {
+			if err := os.RemoveAll(path); err != nil {
 				return errors.Wrapf(err, "error removing %s to make way for new link", hdr.Name)
 			}
 		}
@@ -260,7 +260,7 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
 		// Check if something already exists at path
 		// If so, delete it
 		if FilepathExists(path) {
-			if err := os.Remove(path); err != nil {
+			if err := os.RemoveAll(path); err != nil {
 				return errors.Wrapf(err, "error removing %s to make way for new symlink", hdr.Name)
 			}
 		}
@@ -468,7 +468,7 @@ func CopyDir(src, dest string) ([]string, error) {
 		}
 		destPath := filepath.Join(dest, file)
 		if fi.IsDir() {
-			logrus.Infof("Creating directory %s", destPath)
+			logrus.Debugf("Creating directory %s", destPath)
 			uid := int(fi.Sys().(*syscall.Stat_t).Uid)
 			gid := int(fi.Sys().(*syscall.Stat_t).Gid)
@@ -511,7 +511,7 @@ func CopyFile(src, dest string) error {
 	if err != nil {
 		return err
 	}
-	logrus.Infof("Copying file %s to %s", src, dest)
+	logrus.Debugf("Copying file %s to %s", src, dest)
 	srcFile, err := os.Open(src)
 	if err != nil {
 		return err
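Why os.RemoveAll here: os.Remove fails on a non-empty directory, so extracting a tar entry over a directory left behind by an earlier layer would error out. A minimal demonstration, using only temp paths:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

func main() {
	// A non-empty directory standing where a tar entry wants to put a file.
	dir := filepath.Join(os.TempDir(), "kaniko-removeall-demo")
	os.MkdirAll(dir, 0755)
	ioutil.WriteFile(filepath.Join(dir, "child"), []byte("x"), 0644)

	fmt.Println(os.Remove(dir))    // error: directory not empty
	fmt.Println(os.RemoveAll(dir)) // nil: contents removed too
}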
View File
@@ -17,6 +17,8 @@ limitations under the License.
 package util
 
 import (
+	"crypto/tls"
+	"net/http"
 	"path/filepath"
 	"strconv"
@@ -30,6 +32,7 @@ import (
 	"github.com/google/go-containerregistry/pkg/v1/tarball"
 	"github.com/sirupsen/logrus"
 
+	"github.com/GoogleContainerTools/kaniko/pkg/cache"
 	"github.com/GoogleContainerTools/kaniko/pkg/config"
 	"github.com/GoogleContainerTools/kaniko/pkg/constants"
 )
@@ -41,7 +44,7 @@ var (
 )
 
 // RetrieveSourceImage returns the base image of the stage at index
-func RetrieveSourceImage(stage config.KanikoStage, buildArgs []string) (v1.Image, error) {
+func RetrieveSourceImage(stage config.KanikoStage, buildArgs []string, opts *config.KanikoOptions) (v1.Image, error) {
 	currentBaseName, err := ResolveEnvironmentReplacement(stage.BaseName, buildArgs, false)
 	if err != nil {
 		return nil, err
@@ -57,8 +60,21 @@ func RetrieveSourceImage(stage config.KanikoStage, buildArgs []string) (v1.Image, error) {
 		return retrieveTarImage(stage.BaseImageIndex)
 	}
 
+	// Next, check if local caching is enabled
+	// If so, look in the local cache before trying the remote registry
+	if opts.Cache && opts.CacheDir != "" {
+		cachedImage, err := cachedImage(opts, currentBaseName)
+		if cachedImage != nil {
+			return cachedImage, nil
+		}
+		if err != nil {
+			logrus.Warnf("Error while retrieving image from cache: %v", err)
+		}
+	}
+
 	// Otherwise, initialize image as usual
-	return retrieveRemoteImage(currentBaseName)
+	return retrieveRemoteImage(currentBaseName, opts)
 }
 
 // RetrieveConfigFile returns the config file for an image
@@ -79,16 +95,65 @@ func tarballImage(index int) (v1.Image, error) {
 	return tarball.ImageFromPath(tarPath, nil)
 }
 
-func remoteImage(image string) (v1.Image, error) {
+func remoteImage(image string, opts *config.KanikoOptions) (v1.Image, error) {
 	logrus.Infof("Downloading base image %s", image)
 	ref, err := name.ParseReference(image, name.WeakValidation)
 	if err != nil {
 		return nil, err
 	}
+
+	if opts.InsecurePull {
+		newReg, err := name.NewInsecureRegistry(ref.Context().RegistryStr(), name.WeakValidation)
+		if err != nil {
+			return nil, err
+		}
+		if tag, ok := ref.(name.Tag); ok {
+			tag.Repository.Registry = newReg
+			ref = tag
+		}
+		if digest, ok := ref.(name.Digest); ok {
+			digest.Repository.Registry = newReg
+			ref = digest
+		}
+	}
+
+	tr := http.DefaultTransport.(*http.Transport)
+	if opts.SkipTLSVerifyPull {
+		tr.TLSClientConfig = &tls.Config{
+			InsecureSkipVerify: true,
+		}
+	}
+
 	k8sc, err := k8schain.NewNoClient()
 	if err != nil {
 		return nil, err
 	}
 	kc := authn.NewMultiKeychain(authn.DefaultKeychain, k8sc)
-	return remote.Image(ref, remote.WithAuthFromKeychain(kc))
+	return remote.Image(ref, remote.WithTransport(tr), remote.WithAuthFromKeychain(kc))
+}
+
+func cachedImage(opts *config.KanikoOptions, image string) (v1.Image, error) {
+	ref, err := name.ParseReference(image, name.WeakValidation)
+	if err != nil {
+		return nil, err
+	}
+
+	var cacheKey string
+	if d, ok := ref.(name.Digest); ok {
+		cacheKey = d.DigestStr()
+	} else {
+		img, err := remoteImage(image, opts)
+		if err != nil {
+			return nil, err
+		}
+		d, err := img.Digest()
+		if err != nil {
+			return nil, err
+		}
+		cacheKey = d.String()
+	}
+	return cache.LocalSource(opts, cacheKey)
}
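Putting the new pull options together, a caller might look like the following; the import paths appear elsewhere in this commit, but treat the whole snippet as a sketch rather than a stable API:

package demo

import (
	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/pkg/util"
	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// pullBase resolves a stage's base image with the new knobs enabled.
func pullBase(stage config.KanikoStage) (v1.Image, error) {
	opts := &config.KanikoOptions{
		InsecurePull:      true,     // allow a plain-HTTP registry on pull
		SkipTLSVerifyPull: true,     // skip TLS verification on pull only
		Cache:             true,     // enable caching
		CacheDir:          "/cache", // consult the warmed local cache first
	}
	return util.RetrieveSourceImage(stage, opts.BuildArgs, opts)
}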
View File
@@ -32,11 +32,11 @@ var (
 	dockerfile = `
 	FROM gcr.io/distroless/base:latest as base
 	COPY . .
 
 	FROM scratch as second
 	ENV foopath context/foo
 	COPY --from=0 $foopath context/b* /foo/
 
 	FROM base
 	ARG file
 	COPY --from=second /foo $file`
@@ -51,13 +51,13 @@ func Test_StandardImage(t *testing.T) {
 	defer func() {
 		retrieveRemoteImage = original
 	}()
-	mock := func(image string) (v1.Image, error) {
+	mock := func(image string, opts *config.KanikoOptions) (v1.Image, error) {
 		return nil, nil
 	}
 	retrieveRemoteImage = mock
 	actual, err := RetrieveSourceImage(config.KanikoStage{
 		Stage: stages[0],
-	}, nil)
+	}, nil, &config.KanikoOptions{})
 	testutil.CheckErrorAndDeepEqual(t, false, err, nil, actual)
 }
 
 func Test_ScratchImage(t *testing.T) {
@@ -67,7 +67,7 @@ func Test_ScratchImage(t *testing.T) {
 	}
 	actual, err := RetrieveSourceImage(config.KanikoStage{
 		Stage: stages[1],
-	}, nil)
+	}, nil, &config.KanikoOptions{})
 	expected := empty.Image
 	testutil.CheckErrorAndDeepEqual(t, false, err, expected, actual)
 }
@@ -89,7 +89,7 @@ func Test_TarImage(t *testing.T) {
 		BaseImageStoredLocally: true,
 		BaseImageIndex:         0,
 		Stage:                  stages[2],
-	}, nil)
+	}, nil, &config.KanikoOptions{})
 	testutil.CheckErrorAndDeepEqual(t, false, err, nil, actual)
 }
View File
@@ -17,11 +17,12 @@ set -e
 if [ $# -ne 3 ];
 	then echo "Usage: run_in_docker.sh <path to Dockerfile> <context directory> <image tag>"
+	exit 1
 fi
 
 dockerfile=$1
 context=$2
-tag=$3
+destination=$3
 
 if [[ ! -e $HOME/.config/gcloud/application_default_credentials.json ]]; then
 	echo "Application Default Credentials do not exist. Run [gcloud auth application-default login] to configure them"
@@ -32,4 +33,4 @@ docker run \
 	-v $HOME/.config/gcloud:/root/.config/gcloud \
 	-v ${context}:/workspace \
 	gcr.io/kaniko-project/executor:latest \
-	-f ${dockerfile} -d ${tag} -c /workspace/
+	--dockerfile ${dockerfile} --destination ${destination} --context dir:///workspace/
View File
@@ -125,6 +125,11 @@ func (i *compressedImageExtender) Layers() ([]v1.Layer, error) {
 
 // LayerByDigest implements v1.Image
 func (i *compressedImageExtender) LayerByDigest(h v1.Hash) (v1.Layer, error) {
+	if cfgName, err := i.ConfigName(); err != nil {
+		return nil, err
+	} else if cfgName == h {
+		return ConfigLayer(i)
+	}
 	cl, err := i.CompressedImageCore.LayerByDigest(h)
 	if err != nil {
 		return nil, err
View File
@@ -37,8 +37,12 @@ type UncompressedLayer interface {
 // uncompressedLayerExtender implements v1.Image using the uncompressed base properties.
 type uncompressedLayerExtender struct {
 	UncompressedLayer
-	// TODO(mattmoor): Memoize size/hash so that the methods aren't twice as
+	// Memoize size/hash so that the methods aren't twice as
 	// expensive as doing this manually.
+	hash          v1.Hash
+	size          int64
+	hashSizeError error
+	once          sync.Once
 }
 
 // Compressed implements v1.Layer
@@ -52,29 +56,31 @@ func (ule *uncompressedLayerExtender) Compressed() (io.ReadCloser, error) {
 
 // Digest implements v1.Layer
 func (ule *uncompressedLayerExtender) Digest() (v1.Hash, error) {
-	r, err := ule.Compressed()
-	if err != nil {
-		return v1.Hash{}, err
-	}
-	defer r.Close()
-	h, _, err := v1.SHA256(r)
-	return h, err
+	ule.calcSizeHash()
+	return ule.hash, ule.hashSizeError
 }
 
 // Size implements v1.Layer
 func (ule *uncompressedLayerExtender) Size() (int64, error) {
-	r, err := ule.Compressed()
-	if err != nil {
-		return -1, err
-	}
-	defer r.Close()
-	_, i, err := v1.SHA256(r)
-	return i, err
+	ule.calcSizeHash()
+	return ule.size, ule.hashSizeError
+}
+
+func (ule *uncompressedLayerExtender) calcSizeHash() {
+	ule.once.Do(func() {
+		var r io.ReadCloser
+		r, ule.hashSizeError = ule.Compressed()
+		if ule.hashSizeError != nil {
+			return
+		}
+		defer r.Close()
+		ule.hash, ule.size, ule.hashSizeError = v1.SHA256(r)
+	})
 }
 
 // UncompressedToLayer fills in the missing methods from an UncompressedLayer so that it implements v1.Layer
 func UncompressedToLayer(ul UncompressedLayer) (v1.Layer, error) {
-	return &uncompressedLayerExtender{ul}, nil
+	return &uncompressedLayerExtender{UncompressedLayer: ul}, nil
 }
 
 // UncompressedImageCore represents the bare minimum interface a natively
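The memoization above is the standard sync.Once pattern: the expensive computation runs at most once, even under concurrent callers, and later calls just read the stored result. A generic, runnable sketch:

package main

import (
	"fmt"
	"sync"
)

// lazyDigest computes a (fake) digest exactly once; subsequent calls reuse
// the memoized value and error, mirroring calcSizeHash above.
type lazyDigest struct {
	once   sync.Once
	digest string
	err    error
}

func (l *lazyDigest) get() (string, error) {
	l.once.Do(func() {
		fmt.Println("computing once...")
		l.digest, l.err = "sha256:abc123", nil // stand-in for hashing a stream
	})
	return l.digest, l.err
}

func main() {
	d := &lazyDigest{}
	d.get()
	d.get() // no recomputation the second time
}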
View File
@@ -39,7 +39,8 @@ func (bt *basicTransport) RoundTrip(in *http.Request) (*http.Response, error) {
 	// abstraction, so to avoid forwarding Authorization headers to places
 	// we are redirected, only set it when the authorization header matches
 	// the host with which we are interacting.
-	if in.Host == bt.target {
+	// In case of redirect http.Client can use an empty Host, check URL too.
+	if in.Host == bt.target || in.URL.Host == bt.target {
 		in.Header.Set("Authorization", hdr)
 	}
 	in.Header.Set("User-Agent", transportName)
View File
@@ -46,22 +46,38 @@ var _ http.RoundTripper = (*bearerTransport)(nil)
 
 // RoundTrip implements http.RoundTripper
 func (bt *bearerTransport) RoundTrip(in *http.Request) (*http.Response, error) {
-	hdr, err := bt.bearer.Authorization()
+	sendRequest := func() (*http.Response, error) {
+		hdr, err := bt.bearer.Authorization()
+		if err != nil {
+			return nil, err
+		}
+
+		// http.Client handles redirects at a layer above the http.RoundTripper
+		// abstraction, so to avoid forwarding Authorization headers to places
+		// we are redirected, only set it when the authorization header matches
+		// the registry with which we are interacting.
+		// In case of redirect http.Client can use an empty Host, check URL too.
+		if in.Host == bt.registry.RegistryStr() || in.URL.Host == bt.registry.RegistryStr() {
+			in.Header.Set("Authorization", hdr)
+		}
+		in.Header.Set("User-Agent", transportName)
+		return bt.inner.RoundTrip(in)
+	}
+
+	res, err := sendRequest()
 	if err != nil {
 		return nil, err
 	}
-	// http.Client handles redirects at a layer above the http.RoundTripper
-	// abstraction, so to avoid forwarding Authorization headers to places
-	// we are redirected, only set it when the authorization header matches
-	// the registry with which we are interacting.
-	if in.Host == bt.registry.RegistryStr() {
-		in.Header.Set("Authorization", hdr)
-	}
-	in.Header.Set("User-Agent", transportName)
-	// TODO(mattmoor): On 401s perform a single refresh() and retry.
-	return bt.inner.RoundTrip(in)
+
+	// Perform a token refresh() and retry the request in case the token has expired
+	if res.StatusCode == http.StatusUnauthorized {
+		if err = bt.refresh(); err != nil {
+			return nil, err
+		}
+		return sendRequest()
+	}
+
+	return res, err
 }
 
 func (bt *bearerTransport) refresh() error {
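The same refresh-and-retry shape, extracted into a generic helper for clarity; this is an illustration of the pattern, not part of go-containerregistry's API:

package main

import (
	"fmt"
	"net/http"
)

// doWithRefresh sends a request once and, on a 401, runs refresh and sends
// exactly one more time, mirroring the RoundTrip change above.
func doWithRefresh(send func() (*http.Response, error), refresh func() error) (*http.Response, error) {
	res, err := send()
	if err != nil {
		return nil, err
	}
	if res.StatusCode == http.StatusUnauthorized {
		if err := refresh(); err != nil {
			return nil, err
		}
		return send()
	}
	return res, nil
}

func main() {
	calls := 0
	send := func() (*http.Response, error) {
		calls++
		code := http.StatusUnauthorized
		if calls > 1 {
			code = http.StatusOK // token refreshed; second attempt succeeds
		}
		return &http.Response{StatusCode: code}, nil
	}
	refresh := func() error { return nil }
	res, _ := doWithRefresh(send, refresh)
	fmt.Println(res.StatusCode, calls) // 200 2
}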