Merge branch 'master' into experiment

commit d37896b94f
Tejal Desai, 2020-05-03 21:02:41 -07:00 (committed by GitHub)
28 changed files with 732 additions and 84 deletions

@@ -4,7 +4,7 @@
![kaniko logo](logo/Kaniko-Logo.png)
kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile completely in userspace.
This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.
@@ -15,7 +15,7 @@ We'd love to hear from you! Join us on [#kaniko Kubernetes Slack](https://kuber
:mega: **Please fill out our [quick 5-question survey](https://forms.gle/HhZGEM33x4FUz9Qa6)** so that we can learn how satisfied you are with Kaniko, and what improvements we should make. Thank you! :dancers:
Kaniko is not an officially supported Google project.
_If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPMENT.md) and [CONTRIBUTING.md](CONTRIBUTING.md)._
@@ -50,6 +50,7 @@ _If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPME
- [--cache](#--cache)
- [--cache-dir](#--cache-dir)
- [--cache-repo](#--cache-repo)
- [--context-sub-path](#--context-sub-path)
- [--digest-file](#--digest-file)
- [--oci-layout-path](#--oci-layout-path)
- [--insecure-registry](#--insecure-registry)
@@ -69,6 +70,7 @@ _If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPME
- [--verbosity](#--verbosity)
- [--whitelist-var-run](#--whitelist-var-run)
- [--label](#--label)
- [--skip-unused-stages](#--skip-unused-stages)
- [Debug Image](#debug-image)
- [Security](#security)
- [Comparison with Other Tools](#comparison-with-other-tools)
@@ -121,11 +123,18 @@ Right now, kaniko supports these storage solutions:
- S3 Bucket
- Azure Blob Storage
- Local Directory
- Local Tar
- Standard Input
- Git Repository
_Note: the local directory option refers to a directory within the kaniko container.
_Note about Local Directory: this option refers to a directory within the kaniko container.
If you wish to use this option, you will need to mount your build context into the container as a directory._
_Note about Local Tar: this option refers to a `.tar.gz` file within the kaniko container.
If you wish to use this option, you will need to mount your build context into the container as a file._
_Note about Standard Input: the only Standard Input allowed by kaniko is in `.tar.gz` format._
If using a GCS or S3 bucket, you will first need to create a compressed tar of your build context and upload it to your bucket.
Once running, kaniko will then download and unpack the compressed tar of the build context before starting the image build.
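For example, a minimal sketch of preparing a GCS build context (bucket name and paths are placeholders):

```shell
# compress the build context and upload it to the bucket
tar -C /path/to/build/context -zcf context.tar.gz .
gsutil cp context.tar.gz gs://kaniko-bucket/path/to/context.tar.gz
```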
@@ -147,6 +156,7 @@ When running kaniko, use the `--context` flag with the appropriate prefix to spe
|---------|---------|---------|
| Local Directory | dir://[path to a directory in the kaniko container] | `dir:///workspace` |
| Local Tar Gz | tar://[path to a .tar.gz in the kaniko container] | `tar://path/to/context.tar.gz` |
| Standard Input | tar://[stdin] | `tar://stdin` |
| GCS Bucket | gs://[bucket name]/[path to .tar.gz] | `gs://kaniko-bucket/path/to/context.tar.gz` |
| S3 Bucket | s3://[bucket name]/[path to .tar.gz] | `s3://kaniko-bucket/path/to/context.tar.gz` |
| Azure Blob Storage| https://[account].[azureblobhostsuffix]/[container]/[path to .tar.gz] | `https://myaccount.blob.core.windows.net/container/path/to/context.tar.gz` |
@@ -161,6 +171,20 @@ If you are using Azure Blob Storage for context file, you will need to pass [Azu
### Using Private Git Repository
You can use `Personal Access Tokens` for build contexts from private repositories on [GitHub](https://blog.github.com/2012-09-21-easier-builds-and-deployments-using-git-over-https-and-oauth/).
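A hypothetical sketch (token, org, and repository are placeholders; this assumes the token is accepted as URL userinfo when kaniko turns the `git://` context into an HTTPS clone):

```shell
docker run \
    gcr.io/kaniko-project/executor:latest \
    --context git://$GITHUB_TOKEN@github.com/<org>/<private-repo>.git \
    --destination=<gcr.io/$project/$image:$tag>
```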
### Using Standard Input
If you run kaniko with a Standard Input build context, you will need to add the docker or kubernetes `-i, --interactive` flag.
Once running, kaniko will read the data from `STDIN` and create the build context as a compressed tar.
It will then unpack the compressed tar of the build context before starting the image build.
If no data is piped during the interactive run, you will need to send EOF yourself by pressing `Ctrl+D`.
A complete example of interactively running kaniko with `.tar.gz` Standard Input data, using docker:
```shell
echo -e 'FROM alpine \nRUN echo "created from standard input"' > Dockerfile && tar -cf - Dockerfile | gzip -9 | docker run \
--interactive -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest \
--context tar://stdin \
--destination=<gcr.io/$project/$image:$tag>
```
### Running kaniko
There are several different ways to deploy and run kaniko:
@@ -270,9 +294,9 @@ docker run \
-v "$HOME"/.config/gcloud:/root/.config/gcloud \
-v /path/to/context:/workspace \
gcr.io/kaniko-project/executor:latest \
--dockerfile /workspace/Dockerfile
--destination "gcr.io/$PROJECT_ID/$IMAGE_NAME:$TAG"
--context dir:///workspace/"
--dockerfile /workspace/Dockerfile \
--destination "gcr.io/$PROJECT_ID/$IMAGE_NAME:$TAG" \
--context dir:///workspace/
```
There is also a utility script [`run_in_docker.sh`](./run_in_docker.sh) that can be used as follows:
@@ -280,7 +304,7 @@ There is also a utility script [`run_in_docker.sh`](./run_in_docker.sh) that can
./run_in_docker.sh <path to Dockerfile> <path to build context> <destination of final image>
```
_NOTE: `run_in_docker.sh` expects a path to a
Dockerfile relative to the absolute path of the build context._
An example run, specifying the Dockerfile in the container directory `/workspace`, the build
@@ -336,7 +360,7 @@ Create a `config.json` file with your Docker registry url and the previous gener
```
{
"auths": {
"https://index.docker.io/v1/": {
"https://index.docker.io/v2/": {
"auth": "xxxxxxxxxxxxxxx"
}
}
@@ -526,7 +550,11 @@ You need to set `--destination` as well (for example `--destination=image`).
#### --verbosity
Set this flag as `--verbosity=<panic|fatal|error|warn|info|debug>` to set the logging level. Defaults to `info`.
Set this flag as `--verbosity=<panic|fatal|error|warn|info|debug|trace>` to set the logging level. Defaults to `info`.
#### --log-format
Set this flag as `--log-format=<text|color|json>` to set the log format. Defaults to `color`.
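For example, a sketch combining the two flags for machine-readable debug output (image and destination are placeholders):

```shell
docker run \
    -v /path/to/context:/workspace \
    gcr.io/kaniko-project/executor:latest \
    --context dir:///workspace \
    --destination=<gcr.io/$project/$image:$tag> \
    --verbosity=trace \
    --log-format=json
```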
#### --whitelist-var-run
@@ -536,6 +564,11 @@ Ignore /var/run when taking image snapshot. Set it to false to preserve /var/run
Set this flag as `--label key=value` to set metadata on the final image. This is equivalent to using the `LABEL` directive within the Dockerfile.
#### --skip-unused-stages
Set this flag to `true` to build only the used stages.
Otherwise, by default, kaniko builds all stages, even unnecessary ones, until it reaches the target stage / end of the Dockerfile.
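For example, a sketch that builds only the stages the hypothetical `builder` target depends on:

```shell
docker run \
    -v /path/to/context:/workspace \
    gcr.io/kaniko-project/executor:latest \
    --context dir:///workspace \
    --destination=<gcr.io/$project/$image:$tag> \
    --target=builder \
    --skip-unused-stages
```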
### Debug Image
The kaniko executor image is based on scratch and doesn't contain a shell.

@@ -47,9 +47,8 @@
)
func init() {
RootCmd.PersistentFlags().StringVarP(&logLevel, "verbosity", "v", logging.DefaultLevel, "Log level (debug, info, warn, error, fatal, panic")
RootCmd.PersistentFlags().StringVarP(&logLevel, "verbosity", "v", logging.DefaultLevel, "Log level (trace, debug, info, warn, error, fatal, panic)")
RootCmd.PersistentFlags().StringVar(&logFormat, "log-format", logging.FormatColor, "Log format (text, color, json)")
RootCmd.PersistentFlags().BoolVarP(&force, "force", "", false, "Force building outside of a container")
addKanikoOptionsFlags()
@@ -161,6 +160,7 @@ func addKanikoOptionsFlags() {
RootCmd.PersistentFlags().StringVarP(&opts.RegistryMirror, "registry-mirror", "", "", "Registry mirror to use as pull-through cache instead of docker.io.")
RootCmd.PersistentFlags().BoolVarP(&opts.WhitelistVarRun, "whitelist-var-run", "", true, "Ignore /var/run directory when taking image snapshot. Set it to false to preserve /var/run/ in destination image. (Default true).")
RootCmd.PersistentFlags().VarP(&opts.Labels, "label", "", "Set metadata for an image. Set it repeatedly for multiple labels.")
RootCmd.PersistentFlags().BoolVarP(&opts.SkipUnusedStages, "skip-unused-stages", "", false, "Build only used stages if set to true. Otherwise it builds all stages by default, even unnecessary ones, until it reaches the target stage / end of the Dockerfile")
}
// addHiddenFlags marks certain flags as hidden from the executor help text

@@ -35,7 +35,7 @@ var (
)
func init() {
RootCmd.PersistentFlags().StringVarP(&logLevel, "verbosity", "v", logging.DefaultLevel, "Log level (debug, info, warn, error, fatal, panic")
RootCmd.PersistentFlags().StringVarP(&logLevel, "verbosity", "v", logging.DefaultLevel, "Log level (trace, debug, info, warn, error, fatal, panic)")
RootCmd.PersistentFlags().StringVar(&logFormat, "log-format", logging.FormatColor, "Log format (text, color, json)")
addKanikoOptionsFlags()

@@ -0,0 +1,2 @@
FROM docker.io/library/busybox:latest@sha256:afe605d272837ce1732f390966166c2afff5391208ddd57de10942748694049d
RUN echo ${s%s}

@@ -0,0 +1,2 @@
FROM busybox:latest@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135
RUN ["/bin/ln", "-s", "nowhere", "/link"]

@@ -0,0 +1,4 @@
FROM scratch
MAINTAINER nobody@domain.test
# Add a file to the image to work around https://github.com/moby/moby/issues/38039
COPY context/foo /foo

@@ -73,9 +73,10 @@ var additionalDockerFlagsMap = map[string][]string{
// Arguments to build Dockerfiles with when building with kaniko
var additionalKanikoFlagsMap = map[string][]string{
"Dockerfile_test_add": {"--single-snapshot"},
"Dockerfile_test_scratch": {"--single-snapshot"},
"Dockerfile_test_target": {"--target=second"},
"Dockerfile_test_add": {"--single-snapshot"},
"Dockerfile_test_scratch": {"--single-snapshot"},
"Dockerfile_test_maintainer": {"--single-snapshot"},
"Dockerfile_test_target": {"--target=second"},
}
// output check to do when building with kaniko

@@ -32,11 +32,11 @@ import (
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1/daemon"
"github.com/pkg/errors"
"github.com/GoogleContainerTools/kaniko/pkg/timing"
"github.com/GoogleContainerTools/kaniko/pkg/util"
"github.com/GoogleContainerTools/kaniko/testutil"
"github.com/pkg/errors"
)
var config *integrationTestConfig

@@ -0,0 +1,151 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package integration
import (
"compress/gzip"
"fmt"
"os"
"os/exec"
"path/filepath"
"runtime"
"sync"
"testing"
"github.com/GoogleContainerTools/kaniko/pkg/util"
"github.com/GoogleContainerTools/kaniko/testutil"
)
func TestBuildWithStdin(t *testing.T) {
_, ex, _, _ := runtime.Caller(0)
cwd := filepath.Dir(ex)
testDir := "test_dir"
testDirLongPath := filepath.Join(cwd, testDir)
if err := os.MkdirAll(testDirLongPath, 0750); err != nil {
t.Errorf("Failed to create dir_where_to_extract: %v", err)
}
dockerfile := "Dockerfile_test_stdin"
files := map[string]string{
dockerfile: "FROM debian:9.11\nRUN echo \"hey\"",
}
if err := testutil.SetupFiles(testDir, files); err != nil {
t.Errorf("Failed to setup files %v on %s: %v", files, testDir, err)
}
if err := os.Chdir(testDir); err != nil {
t.Fatalf("Failed to Chdir on %s: %v", testDir, err)
}
tarPath := fmt.Sprintf("%s.tar.gz", dockerfile)
var wg sync.WaitGroup
wg.Add(1)
// Create Tar Gz File with dockerfile inside
go func(wg *sync.WaitGroup) {
defer wg.Done()
tarFile, err := os.Create(tarPath)
if err != nil {
t.Errorf("Failed to create %s: %v", tarPath, err)
}
defer tarFile.Close()
gw := gzip.NewWriter(tarFile)
defer gw.Close()
tw := util.NewTar(gw)
defer tw.Close()
if err := tw.AddFileToTar(dockerfile); err != nil {
t.Errorf("Failed to add %s to %s: %v", dockerfile, tarPath, err)
}
}(&wg)
// Waiting for the Tar Gz file creation to be done before moving on
wg.Wait()
// Build with docker
dockerImage := GetDockerImage(config.imageRepo, dockerfile)
dockerCmd := exec.Command("docker",
append([]string{"build",
"-t", dockerImage,
"-f", dockerfile,
"."})...)
_, err := RunCommandWithoutTest(dockerCmd)
if err != nil {
t.Fatalf("can't run %s: %v", dockerCmd.String(), err)
}
// Build with kaniko using Stdin
kanikoImageStdin := GetKanikoImage(config.imageRepo, dockerfile)
tarCmd := exec.Command("tar", "-cf", "-", dockerfile)
gzCmd := exec.Command("gzip", "-9")
dockerRunFlags := []string{"run", "--interactive", "--net=host", "-v", cwd + ":/workspace"}
dockerRunFlags = addServiceAccountFlags(dockerRunFlags, config.serviceAccount)
dockerRunFlags = append(dockerRunFlags,
ExecutorImage,
"-f", dockerfile,
"-c", "tar://stdin",
"-d", kanikoImageStdin)
kanikoCmdStdin := exec.Command("docker", dockerRunFlags...)
gzCmd.Stdin, err = tarCmd.StdoutPipe()
if err != nil {
t.Fatalf("can't set gzCmd stdin: %v", err)
}
kanikoCmdStdin.Stdin, err = gzCmd.StdoutPipe()
if err != nil {
t.Fatalf("can't set kanikoCmd stdin: %v", err)
}
if err := kanikoCmdStdin.Start(); err != nil {
t.Fatalf("can't start %s: %v", kanikoCmdStdin.String(), err)
}
if err := gzCmd.Start(); err != nil {
t.Fatalf("can't start %s: %v", gzCmd.String(), err)
}
if err := tarCmd.Run(); err != nil {
t.Fatalf("can't start %s: %v", tarCmd.String(), err)
}
if err := gzCmd.Wait(); err != nil {
t.Fatalf("can't wait %s: %v", gzCmd.String(), err)
}
if err := kanikoCmdStdin.Wait(); err != nil {
t.Fatalf("can't wait %s: %v", kanikoCmdStdin.String(), err)
}
diff := containerDiff(t, daemonPrefix+dockerImage, kanikoImageStdin, "--no-cache")
expected := fmt.Sprintf(emptyContainerDiff, dockerImage, kanikoImageStdin, dockerImage, kanikoImageStdin)
checkContainerDiffOutput(t, diff, expected)
if err := os.RemoveAll(testDirLongPath); err != nil {
t.Errorf("Failed to remove %s: %v", testDirLongPath, err)
}
}

@@ -38,25 +38,27 @@ type BuildContext interface {
// parser
func GetBuildContext(srcContext string) (BuildContext, error) {
split := strings.SplitAfter(srcContext, "://")
prefix := split[0]
context := split[1]
if len(split) > 1 {
prefix := split[0]
context := split[1]
switch prefix {
case constants.GCSBuildContextPrefix:
return &GCS{context: context}, nil
case constants.S3BuildContextPrefix:
return &S3{context: context}, nil
case constants.LocalDirBuildContextPrefix:
return &Dir{context: context}, nil
case constants.GitBuildContextPrefix:
return &Git{context: context}, nil
case constants.HTTPSBuildContextPrefix:
if util.ValidAzureBlobStorageHost(srcContext) {
return &AzureBlob{context: srcContext}, nil
switch prefix {
case constants.GCSBuildContextPrefix:
return &GCS{context: context}, nil
case constants.S3BuildContextPrefix:
return &S3{context: context}, nil
case constants.LocalDirBuildContextPrefix:
return &Dir{context: context}, nil
case constants.GitBuildContextPrefix:
return &Git{context: context}, nil
case constants.HTTPSBuildContextPrefix:
if util.ValidAzureBlobStorageHost(srcContext) {
return &AzureBlob{context: srcContext}, nil
}
return nil, errors.New("url provided for https context is not in a supported format, please use the https url for Azure Blob Storage")
case TarBuildContextPrefix:
return &Tar{context: context}, nil
}
return nil, errors.New("url provided for https context is not in a supported format, please use the https url for Azure Blob Storage")
case TarBuildContextPrefix:
return &Tar{context: context}, nil
}
return nil, errors.New("unknown build context prefix provided, please use one of the following: gs://, dir://, tar://, s3://, git://, https://")
}

@@ -25,6 +25,16 @@ import (
"gopkg.in/src-d/go-git.v4/plumbing"
)
const (
gitPullMethodEnvKey = "GIT_PULL_METHOD"
gitPullMethodHTTPS = "https"
gitPullMethodHTTP = "http"
)
var (
supportedGitPullMethods = map[string]bool{gitPullMethodHTTPS: true, gitPullMethodHTTP: true}
)
// Git unifies calls to download and unpack the build context.
type Git struct {
context string
@@ -35,7 +45,7 @@ func (g *Git) UnpackTarFromBuildContext() (string, error) {
directory := constants.BuildContextDir
parts := strings.Split(g.context, "#")
options := git.CloneOptions{
URL: "https://" + parts[0],
URL: getGitPullMethod() + "://" + parts[0],
Progress: os.Stdout,
}
if len(parts) > 1 {
@@ -44,3 +54,11 @@ func (g *Git) UnpackTarFromBuildContext() (string, error) {
_, err := git.PlainClone(directory, false, &options)
return directory, err
}
func getGitPullMethod() string {
gitPullMethod := os.Getenv(gitPullMethodEnvKey)
if ok := supportedGitPullMethods[gitPullMethod]; !ok {
gitPullMethod = gitPullMethodHTTPS
}
return gitPullMethod
}
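The new `GIT_PULL_METHOD` environment variable lets kaniko clone a `git://` build context over plain HTTP instead of the default HTTPS; any other value falls back to HTTPS. A minimal sketch (the internal git host is hypothetical):

```shell
docker run \
    -e GIT_PULL_METHOD=http \
    gcr.io/kaniko-project/executor:latest \
    --context git://git.internal.example/team/repo.git \
    --destination=<gcr.io/$project/$image:$tag>
```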

@@ -0,0 +1,82 @@
/*
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package buildcontext
import (
"os"
"testing"
"github.com/GoogleContainerTools/kaniko/testutil"
)
func TestGetGitPullMethod(t *testing.T) {
tests := []struct {
testName string
setEnv func() (expectedValue string)
}{
{
testName: "noEnv",
setEnv: func() (expectedValue string) {
expectedValue = gitPullMethodHTTPS
return
},
},
{
testName: "emptyEnv",
setEnv: func() (expectedValue string) {
_ = os.Setenv(gitPullMethodEnvKey, "")
expectedValue = gitPullMethodHTTPS
return
},
},
{
testName: "httpEnv",
setEnv: func() (expectedValue string) {
err := os.Setenv(gitPullMethodEnvKey, gitPullMethodHTTP)
if err != nil {
expectedValue = gitPullMethodHTTPS
} else {
expectedValue = gitPullMethodHTTP
}
return
},
},
{
testName: "httpsEnv",
setEnv: func() (expectedValue string) {
_ = os.Setenv(gitPullMethodEnvKey, gitPullMethodHTTPS)
expectedValue = gitPullMethodHTTPS
return
},
},
{
testName: "unknownEnv",
setEnv: func() (expectedValue string) {
_ = os.Setenv(gitPullMethodEnvKey, "unknown")
expectedValue = gitPullMethodHTTPS
return
},
},
}
for _, tt := range tests {
t.Run(tt.testName, func(t *testing.T) {
expectedValue := tt.setEnv()
testutil.CheckDeepEqual(t, expectedValue, getGitPullMethod())
})
}
}

@@ -17,11 +17,15 @@ limitations under the License.
package buildcontext
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"github.com/GoogleContainerTools/kaniko/pkg/constants"
"github.com/GoogleContainerTools/kaniko/pkg/util"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// Tar unifies calls to download and unpack the build context.
@@ -35,6 +39,23 @@ func (t *Tar) UnpackTarFromBuildContext() (string, error) {
if err := os.MkdirAll(directory, 0750); err != nil {
return "", errors.Wrap(err, "unpacking tar from build context")
}
if t.context == "stdin" {
fi, _ := os.Stdin.Stat()
if (fi.Mode() & os.ModeCharDevice) != 0 {
return "", fmt.Errorf("no data found.. don't forget to add the '--interactive, -i' flag")
}
logrus.Infof("To simulate EOF and exit, press 'Ctrl+D'")
// if launched through docker in interactive mode and without piped data
// process will be stuck here until EOF is sent
data, err := util.GetInputFrom(os.Stdin)
if err != nil {
return "", errors.Wrap(err, "fail to get standard input")
}
t.context = filepath.Join(directory, constants.ContextTar)
if err := ioutil.WriteFile(t.context, data, 0644); err != nil {
return "", errors.Wrap(err, "fail to redirect standard input into compressed tar file")
}
}
return directory, util.UnpackCompressedTar(t.context, directory)
}
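Since any compressed tar on stdin works, a pre-built context archive can be piped straight in without a volume mount; a sketch (paths and destination are placeholders):

```shell
gzip -c context.tar | docker run --interactive \
    gcr.io/kaniko-project/executor:latest \
    --context tar://stdin \
    --destination=<gcr.io/$project/$image:$tag>
```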

@@ -205,7 +205,8 @@ func (cr *CachingCopyCommand) FilesUsedFromContext(config *v1.Config, buildArgs
func (cr *CachingCopyCommand) FilesToSnapshot() []string {
f := cr.extractedFiles
logrus.Debugf("files extracted by caching copy command %s", f)
logrus.Debugf("%d files extracted by caching copy command", len(f))
logrus.Tracef("Extracted files: %s", f)
return f
}

@@ -220,7 +220,8 @@ func (cr *CachingRunCommand) ExecuteCommand(config *v1.Config, buildArgs *docker
func (cr *CachingRunCommand) FilesToSnapshot() []string {
f := cr.extractedFiles
logrus.Debugf("files extracted from caching run command %s", f)
logrus.Debugf("%d files extracted by caching run command", len(f))
logrus.Tracef("Extracted files: %s", f)
return f
}

@@ -56,6 +56,7 @@ type KanikoOptions struct {
Cache bool
Cleanup bool
WhitelistVarRun bool
SkipUnusedStages bool
}
// WarmerOptions are options that are set by command line arguments to the cache warmer.

@@ -22,6 +22,7 @@ import (
"io/ioutil"
"net/http"
"regexp"
"strconv"
"strings"
v1 "github.com/google/go-containerregistry/pkg/v1"
@@ -253,6 +254,9 @@ func MakeKanikoStages(opts *config.KanikoOptions, stages []instructions.Stage, m
if err := resolveStagesArgs(stages, args); err != nil {
return nil, errors.Wrap(err, "resolving args")
}
if opts.SkipUnusedStages {
stages = skipUnusedStages(stages, &targetStage, opts.Target)
}
var kanikoStages []config.KanikoStage
for index, stage := range stages {
if len(stage.Name) > 0 {
@@ -312,3 +316,53 @@ func unifyArgs(metaArgs []instructions.ArgCommand, buildArgs []string) []string
}
return args
}
// skipUnusedStages returns the list of used stages without the unnecessary ones
func skipUnusedStages(stages []instructions.Stage, lastStageIndex *int, target string) []instructions.Stage {
stagesDependencies := make(map[string]bool)
var onlyUsedStages []instructions.Stage
idx := *lastStageIndex
lastStageBaseName := stages[idx].BaseName
for i := idx; i >= 0; i-- {
s := stages[i]
if (s.Name != "" && stagesDependencies[s.Name]) || s.Name == lastStageBaseName || i == idx {
for _, c := range s.Commands {
switch cmd := c.(type) {
case *instructions.CopyCommand:
stageName := cmd.From
if copyFromIndex, err := strconv.Atoi(stageName); err == nil {
stageName = stages[copyFromIndex].Name
}
if !stagesDependencies[stageName] {
stagesDependencies[stageName] = true
}
}
}
if i != idx {
stagesDependencies[s.BaseName] = true
}
}
}
dependenciesLen := len(stagesDependencies)
if target == "" && dependenciesLen == 0 {
return stages
} else if dependenciesLen > 0 {
for i := 0; i < idx; i++ {
if stages[i].Name == "" {
continue
}
s := stages[i]
if stagesDependencies[s.Name] || s.Name == lastStageBaseName {
onlyUsedStages = append(onlyUsedStages, s)
}
}
}
onlyUsedStages = append(onlyUsedStages, stages[idx])
if idx > len(onlyUsedStages)-1 {
*lastStageIndex = len(onlyUsedStages) - 1
}
return onlyUsedStages
}

@@ -456,3 +456,193 @@ func Test_ResolveStagesArgs(t *testing.T) {
}
}
}
func Test_SkippingUnusedStages(t *testing.T) {
tests := []struct {
description string
dockerfile string
targets []string
expectedSourceCodes map[string][]string
expectedTargetIndexBeforeSkip map[string]int
expectedTargetIndexAfterSkip map[string]int
}{
{
description: "dockerfile_without_copyFrom",
dockerfile: `
FROM alpine:3.11 AS base-dev
RUN echo dev > /hi
FROM alpine:3.11 AS base-prod
RUN echo prod > /hi
FROM base-dev as final-stage
RUN cat /hi
`,
targets: []string{"base-dev", "base-prod", ""},
expectedSourceCodes: map[string][]string{
"base-dev": {"FROM alpine:3.11 AS base-dev"},
"base-prod": {"FROM alpine:3.11 AS base-prod"},
"": {"FROM alpine:3.11 AS base-dev", "FROM base-dev as final-stage"},
},
expectedTargetIndexBeforeSkip: map[string]int{
"base-dev": 0,
"base-prod": 1,
"": 2,
},
expectedTargetIndexAfterSkip: map[string]int{
"base-dev": 0,
"base-prod": 0,
"": 1,
},
},
{
description: "dockerfile_with_copyFrom",
dockerfile: `
FROM alpine:3.11 AS base-dev
RUN echo dev > /hi
FROM alpine:3.11 AS base-prod
RUN echo prod > /hi
FROM alpine:3.11
COPY --from=base-prod /hi /finalhi
RUN cat /finalhi
`,
targets: []string{"base-dev", "base-prod", ""},
expectedSourceCodes: map[string][]string{
"base-dev": {"FROM alpine:3.11 AS base-dev"},
"base-prod": {"FROM alpine:3.11 AS base-prod"},
"": {"FROM alpine:3.11 AS base-prod", "FROM alpine:3.11"},
},
expectedTargetIndexBeforeSkip: map[string]int{
"base-dev": 0,
"base-prod": 1,
"": 2,
},
expectedTargetIndexAfterSkip: map[string]int{
"base-dev": 0,
"base-prod": 0,
"": 1,
},
},
{
description: "dockerfile_with_two_copyFrom",
dockerfile: `
FROM alpine:3.11 AS base-dev
RUN echo dev > /hi
FROM alpine:3.11 AS base-prod
RUN echo prod > /hi
FROM alpine:3.11
COPY --from=base-dev /hi /finalhidev
COPY --from=base-prod /hi /finalhiprod
RUN cat /finalhidev
RUN cat /finalhiprod
`,
targets: []string{"base-dev", "base-prod", ""},
expectedSourceCodes: map[string][]string{
"base-dev": {"FROM alpine:3.11 AS base-dev"},
"base-prod": {"FROM alpine:3.11 AS base-prod"},
"": {"FROM alpine:3.11 AS base-dev", "FROM alpine:3.11 AS base-prod", "FROM alpine:3.11"},
},
expectedTargetIndexBeforeSkip: map[string]int{
"base-dev": 0,
"base-prod": 1,
"": 2,
},
expectedTargetIndexAfterSkip: map[string]int{
"base-dev": 0,
"base-prod": 0,
"": 2,
},
},
{
description: "dockerfile_with_two_copyFrom_and_arg",
dockerfile: `
FROM debian:9.11 as base
COPY . .
FROM scratch as second
ENV foopath context/foo
COPY --from=0 $foopath context/b* /foo/
FROM second as third
COPY --from=base /context/foo /new/foo
FROM base as fourth
# Make sure that we snapshot intermediate images correctly
RUN date > /date
ENV foo bar
# This base image contains symlinks with relative paths to whitelisted directories
# We need to test they're extracted correctly
FROM fedora@sha256:c4cc32b09c6ae3f1353e7e33a8dda93dc41676b923d6d89afa996b421cc5aa48
FROM fourth
ARG file=/foo2
COPY --from=second /foo ${file}
COPY --from=debian:9.11 /etc/os-release /new
`,
targets: []string{"base", ""},
expectedSourceCodes: map[string][]string{
"base": {"FROM debian:9.11 as base"},
"second": {"FROM debian:9.11 as base", "FROM scratch as second"},
"": {"FROM debian:9.11 as base", "FROM scratch as second", "FROM base as fourth", "FROM fourth"},
},
expectedTargetIndexBeforeSkip: map[string]int{
"base": 0,
"second": 1,
"": 5,
},
expectedTargetIndexAfterSkip: map[string]int{
"base": 0,
"second": 1,
"": 3,
},
},
{
description: "dockerfile_without_final_dependencies",
dockerfile: `
FROM alpine:3.11
FROM debian:9.11 as base
RUN echo foo > /foo
FROM debian:9.11 as fizz
RUN echo fizz >> /fizz
COPY --from=base /foo /fizz
FROM alpine:3.11 as buzz
RUN echo buzz > /buzz
FROM alpine:3.11 as final
RUN echo bar > /bar
`,
targets: []string{"final", "buzz", "fizz", ""},
expectedSourceCodes: map[string][]string{
"final": {"FROM alpine:3.11 as final"},
"buzz": {"FROM alpine:3.11 as buzz"},
"fizz": {"FROM debian:9.11 as base", "FROM debian:9.11 as fizz"},
"": {"FROM alpine:3.11", "FROM debian:9.11 as base", "FROM debian:9.11 as fizz", "FROM alpine:3.11 as buzz", "FROM alpine:3.11 as final"},
},
expectedTargetIndexBeforeSkip: map[string]int{
"final": 4,
"buzz": 3,
"fizz": 2,
"": 4,
},
expectedTargetIndexAfterSkip: map[string]int{
"final": 0,
"buzz": 0,
"fizz": 1,
"": 4,
},
},
}
for _, test := range tests {
stages, _, err := Parse([]byte(test.dockerfile))
testutil.CheckError(t, false, err)
actualSourceCodes := make(map[string][]string)
for _, target := range test.targets {
targetIndex, err := targetStage(stages, target)
testutil.CheckError(t, false, err)
targetIndexBeforeSkip := targetIndex
onlyUsedStages := skipUnusedStages(stages, &targetIndex, target)
for _, s := range onlyUsedStages {
actualSourceCodes[target] = append(actualSourceCodes[target], s.SourceCode)
}
t.Run(test.description, func(t *testing.T) {
testutil.CheckDeepEqual(t, test.expectedSourceCodes[target], actualSourceCodes[target])
testutil.CheckDeepEqual(t, test.expectedTargetIndexBeforeSkip[target], targetIndexBeforeSkip)
testutil.CheckDeepEqual(t, test.expectedTargetIndexAfterSkip[target], targetIndex)
})
}
}
}

@@ -339,9 +339,11 @@ func (s *stageBuilder) build() error {
return errors.Wrap(err, "failed to get files used from context")
}
*compositeKey, err = s.populateCompositeKey(command, files, *compositeKey, s.args, s.cf.Config.Env)
if err != nil {
return err
if s.opts.Cache {
*compositeKey, err = s.populateCompositeKey(command, files, *compositeKey, s.args, s.cf.Config.Env)
if err != nil && s.opts.Cache {
return err
}
}
logrus.Info(command.String())
@@ -384,19 +386,21 @@ return errors.Wrap(err, "failed to take snapshot")
return errors.Wrap(err, "failed to take snapshot")
}
logrus.Debugf("build: composite key for command %v %v", command.String(), compositeKey)
ck, err := compositeKey.Hash()
if err != nil {
return errors.Wrap(err, "failed to hash composite key")
}
if s.opts.Cache {
logrus.Debugf("build: composite key for command %v %v", command.String(), compositeKey)
ck, err := compositeKey.Hash()
if err != nil {
return errors.Wrap(err, "failed to hash composite key")
}
logrus.Debugf("build: cache key for command %v %v", command.String(), ck)
logrus.Debugf("build: cache key for command %v %v", command.String(), ck)
// Push layer to cache (in parallel) now along with new config file
if s.opts.Cache && command.ShouldCacheOutput() {
cacheGroup.Go(func() error {
return s.pushLayerToCache(s.opts, ck, tarPath, command.String())
})
// Push layer to cache (in parallel) now along with new config file
if command.ShouldCacheOutput() {
cacheGroup.Go(func() error {
return s.pushLayerToCache(s.opts, ck, tarPath, command.String())
})
}
}
if err := s.saveSnapshotToImage(command.String(), tarPath); err != nil {
return errors.Wrap(err, "failed to save snapshot to image")
@@ -428,7 +432,7 @@ func (s *stageBuilder) takeSnapshot(files []string) (string, error) {
}
func (s *stageBuilder) shouldTakeSnapshot(index int, files []string, provideFiles bool) bool {
isLastCommand := index == len(s.stage.Commands)-1
isLastCommand := index == len(s.cmds)-1
// We only snapshot the very end with single snapshot mode on.
if s.opts.SingleSnapshot {
@@ -713,7 +717,7 @@ func fetchExtraStages(stages []config.KanikoStage, opts *config.KanikoOptions) e
t := timing.Start("Fetching Extra Stages")
defer timing.DefaultRun.Stop(t)
var names = []string{}
var names []string
for stageIndex, s := range stages {
for _, cmd := range s.Commands {
@@ -730,11 +734,10 @@ func fetchExtraStages(stages []config.KanikoStage, opts *config.KanikoOptions) e
continue
}
// Check if the name is the alias of a previous stage
for _, name := range names {
if name == c.From {
continue
}
if fromPreviousStage(c, names) {
continue
}
// This must be an image name, fetch it.
logrus.Debugf("Found extra base image stage %s", c.From)
sourceImage, err := util.RetrieveRemoteImage(c.From, opts)
@@ -755,6 +758,16 @@ func fetchExtraStages(stages []config.KanikoStage, opts *config.KanikoOptions) e
}
return nil
}
func fromPreviousStage(copyCommand *instructions.CopyCommand, previousStageNames []string) bool {
for _, previousStageName := range previousStageNames {
if previousStageName == copyCommand.From {
return true
}
}
return false
}
func extractImageToDependencyDir(name string, image v1.Image) error {
t := timing.Start("Extracting Image to Dependency Dir")
defer timing.DefaultRun.Stop(t)

@@ -90,28 +90,17 @@ func stage(t *testing.T, d string) config.KanikoStage {
}
}
type MockCommand struct {
name string
}
func (m *MockCommand) Name() string {
return m.name
}
func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
commands := []instructions.Command{
&MockCommand{name: "command1"},
&MockCommand{name: "command2"},
&MockCommand{name: "command3"},
}
stage := instructions.Stage{
Commands: commands,
cmds := []commands.DockerCommand{
&MockDockerCommand{command: "command1"},
&MockDockerCommand{command: "command2"},
&MockDockerCommand{command: "command3"},
}
type fields struct {
stage config.KanikoStage
opts *config.KanikoOptions
cmds []commands.DockerCommand
}
type args struct {
index int
@@ -129,8 +118,8 @@ func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
fields: fields{
stage: config.KanikoStage{
Final: true,
Stage: stage,
},
cmds: cmds,
},
args: args{
index: 1,
@@ -142,11 +131,11 @@ func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
fields: fields{
stage: config.KanikoStage{
Final: false,
Stage: stage,
},
cmds: cmds,
},
args: args{
index: len(commands) - 1,
index: len(cmds) - 1,
},
want: true,
},
@@ -155,8 +144,8 @@ func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
fields: fields{
stage: config.KanikoStage{
Final: false,
Stage: stage,
},
cmds: cmds,
},
args: args{
index: 0,
@@ -198,9 +187,9 @@ func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
fields: fields{
stage: config.KanikoStage{
Final: false,
Stage: stage,
},
opts: &config.KanikoOptions{Cache: true},
cmds: cmds,
},
args: args{
index: 0,
@@ -217,6 +206,7 @@ func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
s := &stageBuilder{
stage: tt.fields.stage,
opts: tt.fields.opts,
cmds: tt.fields.cmds,
}
if got := s.shouldTakeSnapshot(tt.args.index, tt.args.files, tt.args.hasFiles); got != tt.want {
t.Errorf("stageBuilder.shouldTakeSnapshot() = %v, want %v", got, tt.want)

@@ -35,8 +35,8 @@ import (
// output set.
// * Add all ancestors of each path to the output set.
func ResolvePaths(paths []string, wl []util.WhitelistEntry) (pathsToAdd []string, err error) {
logrus.Info("Resolving paths")
logrus.Debugf("Resolving paths %s", paths)
logrus.Infof("Resolving %d paths", len(paths))
logrus.Tracef("Resolving paths %s", paths)
fileSet := make(map[string]bool)
@@ -73,6 +73,7 @@ func ResolvePaths(paths []string, wl []util.WhitelistEntry) (pathsToAdd []string
}
logrus.Debugf("symlink path %s, target does not exist", f)
continue
}
// If the given path is a symlink and the target is part of the whitelist

@@ -226,10 +226,28 @@ func writeToTar(t util.Tar, files, whiteouts []string) error {
return err
}
}
addedPaths := make(map[string]bool)
for _, path := range files {
if _, fileExists := addedPaths[path]; fileExists {
continue
}
for _, parentPath := range util.ParentDirectories(path) {
if parentPath == "/" {
continue
}
if _, dirExists := addedPaths[parentPath]; dirExists {
continue
}
if err := t.AddFileToTar(parentPath); err != nil {
return err
}
addedPaths[parentPath] = true
}
if err := t.AddFileToTar(path); err != nil {
return err
}
addedPaths[path] = true
}
return nil
}

@@ -64,6 +64,12 @@ func TestSnapshotFSFileChange(t *testing.T) {
fooPath: "newbaz1",
batPath: "baz",
}
for _, path := range util.ParentDirectoriesWithoutLeadingSlash(batPath) {
if path == "/" {
continue
}
snapshotFiles[path+"/"] = ""
}
actualFiles := []string{}
for {
@@ -77,6 +83,9 @@ func TestSnapshotFSFileChange(t *testing.T) {
if _, isFile := snapshotFiles[hdr.Name]; !isFile {
t.Fatalf("File %s unexpectedly in tar", hdr.Name)
}
if hdr.Typeflag == tar.TypeDir {
continue
}
contents, _ := ioutil.ReadAll(tr)
if string(contents) != snapshotFiles[hdr.Name] {
t.Fatalf("Contents of %s incorrect, expected: %s, actual: %s", hdr.Name, snapshotFiles[hdr.Name], string(contents))
@@ -153,6 +162,12 @@ func TestSnapshotFSChangePermissions(t *testing.T) {
snapshotFiles := map[string]string{
batPathWithoutLeadingSlash: "baz2",
}
for _, path := range util.ParentDirectoriesWithoutLeadingSlash(batPathWithoutLeadingSlash) {
if path == "/" {
continue
}
snapshotFiles[path+"/"] = ""
}
foundFiles := []string{}
for {
@@ -164,6 +179,9 @@ func TestSnapshotFSChangePermissions(t *testing.T) {
if _, isFile := snapshotFiles[hdr.Name]; !isFile {
t.Fatalf("File %s unexpectedly in tar", hdr.Name)
}
if hdr.Typeflag == tar.TypeDir {
continue
}
contents, _ := ioutil.ReadAll(tr)
if string(contents) != snapshotFiles[hdr.Name] {
t.Fatalf("Contents of %s incorrect, expected: %s, actual: %s", hdr.Name, snapshotFiles[hdr.Name], string(contents))
@@ -203,7 +221,9 @@ func TestSnapshotFiles(t *testing.T) {
expectedFiles := []string{
filepath.Join(testDirWithoutLeadingSlash, "foo"),
}
expectedFiles = append(expectedFiles, util.ParentDirectoriesWithoutLeadingSlash(filepath.Join(testDir, "foo"))...)
for _, path := range util.ParentDirectoriesWithoutLeadingSlash(filepath.Join(testDir, "foo")) {
expectedFiles = append(expectedFiles, strings.TrimRight(path, "/")+"/")
}
f, err := os.Open(tarPath)
if err != nil {

@@ -468,10 +468,10 @@ func ParentDirectories(path string) []string {
}
dir, _ = filepath.Split(dir)
dir = filepath.Clean(dir)
paths = append(paths, dir)
paths = append([]string{dir}, paths...)
}
if len(paths) == 0 {
paths = append(paths, config.RootDir)
paths = []string{config.RootDir}
}
return paths
}

@@ -213,8 +213,6 @@ func Test_ParentDirectories(t *testing.T) {
defer func() { config.RootDir = original }()
config.RootDir = tt.rootDir
actual := ParentDirectories(tt.path)
sort.Strings(actual)
sort.Strings(tt.expected)
testutil.CheckErrorAndDeepEqual(t, false, nil, tt.expected, actual)
})

@@ -85,6 +85,9 @@ func (t *Tar) AddFileToTar(p string) error {
hdr.Name = strings.TrimPrefix(p, config.RootDir)
hdr.Name = strings.TrimLeft(hdr.Name, "/")
}
if hdr.Typeflag == tar.TypeDir && !strings.HasSuffix(hdr.Name, "/") {
hdr.Name = hdr.Name + "/"
}
// rootfs may not have been extracted when using cache, preventing uname/gname from resolving
// this makes this layer unnecessarily differ from a cached layer which does contain this information
hdr.Uname = ""
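The convention being matched here: `tar` lists directory entries with a trailing slash, so appending it keeps kaniko's layer tarballs consistent with archives produced by other tools. A quick sketch against a hypothetical layer tarball:

```shell
tar tf layer.tar
# workspace/        <- directory entries end with a slash
# workspace/foo     <- regular files do not
```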

@@ -21,6 +21,7 @@ import (
"crypto/sha256"
"encoding/hex"
"io"
"io/ioutil"
"os"
"runtime"
"strconv"
@@ -134,3 +135,12 @@ func currentPlatform() v1.Platform {
Architecture: runtime.GOARCH,
}
}
// GetInputFrom reads and returns all content from the given Reader
func GetInputFrom(r io.Reader) ([]byte, error) {
output, err := ioutil.ReadAll(r)
if err != nil {
return nil, err
}
return output, nil
}

pkg/util/util_test.go

@@ -0,0 +1,32 @@
/*
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package util
import (
"bufio"
"bytes"
"testing"
"github.com/GoogleContainerTools/kaniko/testutil"
)
func TestGetInputFrom(t *testing.T) {
validInput := []byte("Valid\n")
validReader := bufio.NewReader(bytes.NewReader(validInput))
validValue, err := GetInputFrom(validReader)
testutil.CheckErrorAndDeepEqual(t, false, err, validInput, validValue)
}