Resolved Merge Conflicts
commit 2f8bdd99b7
@@ -2,7 +2,7 @@ language: go
os: linux

go:
- 1.10.x
- "1.10"
go_import_path: github.com/GoogleContainerTools/kaniko

script:
CHANGELOG.md (29 lines changed)
@@ -1,3 +1,32 @@
# v0.8.0 Release - 1/29/2019

## New Features
* Even faster snapshotting with godirwalk
* Added TTL for caching

## Updates
* Change cache key calculation to be more reproducible.
* Make the Digest calculation faster for locally-cached images.
* Simplify snapshotting.

## Bug Fixes
* Fix bug with USER command and unpacking base images.
* Added COPY --from=previous stage name/number validation

# v0.7.0 Release - 12/10/2018

## New Features
* Add support for COPY --from an unrelated image

## Updates
* Speed up snapshotting by using filepath.SkipDir
* Improve layer cache upload performance
* Skip unpacking the base image in certain cases

## Bug Fixes
* Fix bug with call loop
* Fix caching for multi-step builds

# v0.6.0 Release - 11/06/2018

## New Features
@@ -1,7 +1,15 @@
# How to Contribute
# Contributing to Kaniko

We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.
We'd love to accept your patches and contributions to this project!

To get started developing, see our [DEVELOPMENT.md](./DEVELOPMENT.md).

In this file you'll find info on:

- [The CLA](#contributor-license-agreement)
- [The code review process](#code-reviews)
- [Standards](#standards) around [commit messages](#commit-messages) and [code](#coding-standards)
- [Finding something to work on](#finding-something-to-work-on)

## Contributor License Agreement
@@ -20,4 +28,43 @@ again.
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
information on using pull requests.

## Standards

This section describes the standards we will try to maintain in this repo.

### Commit Messages

All commit messages should follow [these best practices](https://chris.beams.io/posts/git-commit/),
specifically:

- Start with a subject line
- Contain a body that explains _why_ you're making the change you're making
- Reference an issue number if one exists, closing it if applicable (with text such as
  ["Fixes #245" or "Closes #111"](https://help.github.com/articles/closing-issues-using-keywords/))

Aim for [2 paragraphs in the body](https://www.youtube.com/watch?v=PJjmw9TRB7s).
Not sure what to put? Include:

- What is the problem being solved?
- Why is this the best approach?
- What other approaches did you consider?
- What side effects will this approach have?
- What future work remains to be done?

### Coding standards

The code in this repo should follow best practices, specifically:

- [Go code review comments](https://github.com/golang/go/wiki/CodeReviewComments)

## Finding something to work on

Thanks so much for considering contributing to our project! We hope very much you can find something
interesting to work on:

- To find issues that we particularly would like contributors to tackle, look for
  [issues with the "help wanted" label](https://github.com/GoogleContainerTools/kaniko/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22).
- Issues that are good for new folks will additionally be marked with
  ["good first issue"](https://github.com/GoogleContainerTools/kaniko/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22).
@@ -97,6 +97,27 @@ Requirements:

These tests will be kicked off by [reviewers](#reviews) for submitted PRs.

### Benchmarking

The goal is for kaniko to be at least as fast at building Dockerfiles as Docker is, and to that end, we've built
in benchmarking to check the speed of not only each full run, but also how long each step of each run takes. To turn
on benchmarking, just set the `BENCHMARK_FILE` environment variable, and kaniko will output all the benchmark info
of each run to that file location.

```shell
docker run -v $(pwd):/workspace -v ~/.config:/root/.config \
  -e BENCHMARK_FILE=/workspace/benchmark_file \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=<path to Dockerfile> --context=/workspace \
  --destination=gcr.io/my-repo/my-image
```

Additionally, the integration tests can output benchmarking information to a `benchmarks` directory under the
`integration` directory if the `BENCHMARK` environment variable is set to `true`.

```shell
BENCHMARK=true go test -v --bucket $GCS_BUCKET --repo $IMAGE_REPO
```
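For orientation, here is a minimal, self-contained sketch of the pattern the executor uses to honor `BENCHMARK_FILE` (compare the `cmd/executor/cmd/root.go` hunk later in this diff; the JSON payload below is illustrative, not the real `pkg/timing` format):

```go
package main

import (
    "log"
    "os"
)

// writeBenchmark persists a timing report to the path named by
// BENCHMARK_FILE, if set. Failures are only logged, so a broken
// benchmark write never fails the build itself.
func writeBenchmark(report string) {
    path := os.Getenv("BENCHMARK_FILE")
    if path == "" || path == "false" { // "false" disables benchmarking in integration tests
        return
    }
    f, err := os.Create(path)
    if err != nil {
        log.Printf("unable to create benchmark file %s: %s", path, err)
        return
    }
    defer f.Close()
    if _, err := f.WriteString(report); err != nil {
        log.Printf("unable to write benchmark file: %s", err)
    }
}

func main() {
    // Hypothetical payload; the real executor writes timing.JSON() here.
    writeBenchmark(`{"steps":[{"label":"RUN make","seconds":1.2}]}`)
}
```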

## Creating a PR

When you have changes you would like to propose to kaniko, you will need to:
@@ -591,6 +591,12 @@
  pruneopts = "NUT"
  revision = "81db2a75821ed34e682567d48be488a1c3121088"
  version = "0.5"

  digest = "1:75b5172f534d05a5abff749f1cf002351f834a2a5592812210aca5e139f9ddad"
  name = "github.com/karrick/godirwalk"
  packages = ["."]
  pruneopts = "NUT"
  revision = "cceff240ca8af695e41738831646717e80d2f846"
  version = "v1.7.7"

[[projects]]
  digest = "1:d0164259ed17929689df11205194d80288e8ae25351778f7a3421a24774c36f8"
@@ -1337,7 +1343,9 @@
    "github.com/aws/aws-sdk-go/service/s3",
    "github.com/aws/aws-sdk-go/service/s3/s3manager",
    "github.com/docker/docker/builder/dockerfile",
    "github.com/docker/docker/builder/dockerignore",
    "github.com/docker/docker/pkg/archive",
    "github.com/docker/docker/pkg/fileutils",
    "github.com/docker/docker/pkg/signal",
    "github.com/genuinetools/amicontained/container",
    "github.com/google/go-cmp/cmp",
@@ -1352,15 +1360,18 @@
    "github.com/google/go-containerregistry/pkg/v1/remote",
    "github.com/google/go-containerregistry/pkg/v1/tarball",
    "github.com/google/go-github/github",
    "github.com/karrick/godirwalk",
    "github.com/moby/buildkit/frontend/dockerfile/instructions",
    "github.com/moby/buildkit/frontend/dockerfile/parser",
    "github.com/moby/buildkit/frontend/dockerfile/shell",
    "github.com/pkg/errors",
    "github.com/sirupsen/logrus",
    "github.com/spf13/cobra",
    "github.com/spf13/pflag",
    "golang.org/x/net/context",
    "golang.org/x/oauth2",
    "gopkg.in/src-d/go-git.v4",
    "golang.org/x/sync/errgroup",
    "k8s.io/client-go/discovery",
  ]
  solver-name = "gps-cdcl"
Makefile (2 lines changed)
@@ -14,7 +14,7 @@

# Bump these on release
VERSION_MAJOR ?= 0
VERSION_MINOR ?= 6
VERSION_MINOR ?= 7
VERSION_BUILD ?= 0

VERSION ?= v$(VERSION_MAJOR).$(VERSION_MINOR).$(VERSION_BUILD)
README.md (169 lines changed)
@@ -12,36 +12,62 @@ This enables building container images in environments that can't easily or securely
kaniko is meant to be run as an image, `gcr.io/kaniko-project/executor`.
We do **not** recommend running the kaniko executor binary in another image, as it might not work.

- [Kaniko](#kaniko)
- [How does kaniko work?](#how-does-kaniko-work)
- [Known Issues](#known-issues)
_If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPMENT.md) and [CONTRIBUTING.md](CONTRIBUTING.md)._

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [How does kaniko work?](#how-does-kaniko-work)
- [Known Issues](#known-issues)
- [Demo](#demo)
- [Using kaniko](#using-kaniko)
- [kaniko Build Contexts](#kaniko-build-contexts)
- [Running kaniko](#running-kaniko)
- [Running kaniko in a Kubernetes cluster](#running-kaniko-in-a-kubernetes-cluster)
- [Kubernetes secret](#kubernetes-secret)
- [Running kaniko in gVisor](#running-kaniko-in-gvisor)
- [Running kaniko in Google Cloud Build](#running-kaniko-in-google-cloud-build)
- [Running kaniko in Docker](#running-kaniko-in-Docker)
- [Running kaniko in Docker](#running-kaniko-in-docker)
- [Caching](#caching)
- [Caching Layers](#caching-layers)
- [Caching Base Images](#caching-base-images)
- [Pushing to Different Registries](#pushing-to-different-registries)
- [Pushing to Amazon ECR](#pushing-to-amazon-ecr)
- [Additional Flags](#additional-flags)
- [--build-arg](#--build-arg)
- [--cache](#--cache)
- [--cache-dir](#--cache-dir)
- [--cache-repo](#--cache-repo)
- [--cleanup](#--cleanup)
- [--insecure](#--insecure)
- [--insecure-pull](#--insecure-pull)
- [--no-push](#--no-push)
- [--reproducible](#--reproducible)
- [--single-snapshot](#--single-snapshot)
- [--snapshotMode](#--snapshotmode)
- [--skip-tls-verify](#--skip-tls-verify)
- [--skip-tls-verify-pull](#--skip-tls-verify-pull)
- [--target](#--target)
- [--tarPath](#--tarpath)
- [Debug Image](#debug-image)
- [Security](#security)
- [Comparison with Other Tools](#comparison-with-other-tools)
- [Community](#community)
- [Limitations](#limitations)
- [mtime and snapshotting](#mtime-and-snapshotting)

_If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPMENT.md) and [CONTRIBUTING.md](CONTRIBUTING.md)._
<!-- END doctoc generated TOC please keep comment here to allow auto update -->

### How does kaniko work?
## How does kaniko work?

The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry.
Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile).
We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one.
After each command, we append a layer of changed files to the base image (if there are any) and update image metadata.
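The loop described in that paragraph can be sketched in a few lines of Go (a simplified illustration, not the executor's real API; `command` and the snapshot callback are stand-ins):

```go
package main

import "fmt"

// command stands in for one Dockerfile instruction (RUN, COPY, ...).
type command struct {
    name string
    run  func() error
}

// build sketches kaniko's core loop: execute each command in userspace,
// snapshot the filesystem, and append a layer whenever files changed.
// snapshot returns nil when nothing changed, so no empty layer is added.
func build(cmds []command, snapshot func() ([]byte, error)) ([][]byte, error) {
    var layers [][]byte
    for _, c := range cmds {
        if err := c.run(); err != nil {
            return nil, fmt.Errorf("executing %s: %v", c.name, err)
        }
        layer, err := snapshot()
        if err != nil {
            return nil, err
        }
        if layer != nil {
            layers = append(layers, layer) // becomes one tarball layer; image metadata is updated too
        }
    }
    return layers, nil
}

func main() {
    cmds := []command{{name: "RUN echo hello", run: func() error { return nil }}}
    snapshot := func() ([]byte, error) { return []byte("layer.tar"), nil }
    layers, _ := build(cmds, snapshot)
    fmt.Printf("appended %d layer(s) to the base image\n", len(layers))
}
```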
### Known Issues
## Known Issues

kaniko does not support building Windows containers.

## Demo
@@ -60,7 +86,7 @@ To use kaniko to build and push an image for you, you will need:
kaniko's build context is very similar to the build context you would send your Docker daemon for an image build; it represents a directory containing a Dockerfile which kaniko will use to build your image.
For example, a `COPY` command in your Dockerfile should refer to a file in the build context.

You will need to store your build context in a place that kaniko can access.
Right now, kaniko supports these storage solutions:
- GCS Bucket
- S3 Bucket
@@ -69,16 +95,18 @@ Right now, kaniko supports these storage solutions:
_Note: the local directory option refers to a directory within the kaniko container.
If you wish to use this option, you will need to mount your build context into the container as a directory._

If using a GCS or S3 bucket, you will first need to create a compressed tar of your build context and upload it to your bucket.
Once running, kaniko will then download and unpack the compressed tar of the build context before starting the image build.

To create a compressed tar, you can run:

```shell
tar -C <path to build context> -zcvf context.tar.gz .
```

Then, copy over the compressed tar into your bucket.
For example, we can copy over the compressed tar to a GCS bucket with gsutil:

```shell
gsutil cp context.tar.gz gs://<bucket name>
```
@@ -87,12 +115,12 @@ When running kaniko, use the `--context` flag with the appropriate prefix to specify
| Source | Prefix |
|---------|---------|
| Local Directory | dir://[path to a directory in the kaniko container] |
| GCS Bucket | gs://[bucket name]/[path to .tar.gz] |
| S3 Bucket | s3://[bucket name]/[path to .tar.gz] |
| Git Repository | git://[repository url] |

If you don't specify a prefix, kaniko will assume a local directory.
For example, to use a GCS bucket called `kaniko-bucket`, you would pass in `--context=gs://kaniko-bucket/path/to/context.tar.gz`.

### Using Private Git Repository
You can use `Personal Access Tokens` for Build Contexts from Private Repositories from [GitHub](https://blog.github.com/2012-09-21-easier-builds-and-deployments-using-git-over-https-and-oauth/).
@@ -128,7 +156,7 @@ To create a secret to authenticate to Google Cloud Registry, follow these steps:
kubectl create secret generic kaniko-secret --from-file=<path to kaniko-secret.json>
```

_Note: If using a GCS bucket in the same GCP project as a build context, this service account should now also have permissions to read from that bucket._

The Kubernetes Pod spec should look similar to this, with the args parameters filled in:
@@ -216,28 +244,29 @@ We can run the kaniko executor image locally in a Docker daemon to build and push
### Caching

#### Caching Layers
kaniko can currently cache layers created by `RUN` commands in a remote repository.
Before executing a command, kaniko checks the cache for the layer.
If it exists, kaniko will pull and extract the cached layer instead of executing the command.
If not, kaniko will execute the command and then push the newly created layer to the cache.

Users can opt in to caching by setting the `--cache=true` flag.
A remote repository for storing cached layers can be provided via the `--cache-repo` flag.
If this flag isn't provided, a cached repo will be inferred from the `--destination` provided.
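That check-then-populate flow reads roughly like this (a sketch under stated assumptions: `pull`, `push`, and the key derivation are illustrative stand-ins, not kaniko's real cache code, which also folds the base image and preceding commands into the key):

```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// runWithCache executes one RUN command with layer caching: on a cache hit
// the stored layer is reused; on a miss the command runs and its layer is
// pushed so the next build can skip the work.
func runWithCache(cmd string, execute func() ([]byte, error),
    pull func(key string) ([]byte, bool), push func(key string, layer []byte)) ([]byte, error) {
    key := fmt.Sprintf("%x", sha256.Sum256([]byte(cmd))) // simplified cache key
    if layer, ok := pull(key); ok {
        return layer, nil // hit: extract the cached layer instead of executing
    }
    layer, err := execute()
    if err != nil {
        return nil, err
    }
    push(key, layer) // miss: store the new layer in the cache repo
    return layer, nil
}

func main() {
    repo := map[string][]byte{} // stands in for the remote --cache-repo
    pull := func(k string) ([]byte, bool) { l, ok := repo[k]; return l, ok }
    push := func(k string, l []byte) { repo[k] = l }
    execute := func() ([]byte, error) { return []byte("layer"), nil }

    runWithCache("RUN make", execute, pull, push)         // first build: miss, executes
    l, _ := runWithCache("RUN make", execute, pull, push) // second build: hit
    fmt.Println(string(l))
}
```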
#### Caching Base Images

kaniko can cache images in a local directory that can be volume mounted into the kaniko image.
To do so, the cache must first be populated, as it is read-only. We provide a kaniko cache warming
image at `gcr.io/kaniko-project/warmer`:

```shell
docker run -v $(pwd):/workspace gcr.io/kaniko-project/warmer:latest --cache-dir=/workspace/cache --image=<image to cache> --image=<another image to cache>
```

`--image` can be specified for any number of desired images.
This command will cache those images by digest in a local directory named `cache`.
Once the cache is populated, caching is opted into with the same `--cache=true` flag as above.
The location of the local cache is provided via the `--cache-dir` flag, defaulting to `/cache` as with the cache warmer.
See the `examples` directory for how to use kaniko with Kubernetes clusters and persistent cache volumes.

### Pushing to Different Registries
@@ -306,57 +335,21 @@ To configure credentials, you will need to do the following:

### Additional Flags

#### --snapshotMode

You can set the `--snapshotMode=<full (default), time>` flag to set how kaniko will snapshot the filesystem.
If `--snapshotMode=time` is set, only file mtime will be considered when snapshotting (see
[limitations related to mtime](#mtime-and-snapshotting)).

#### --build-arg

This flag allows you to pass in ARG values at build time, similarly to Docker.
You can set it multiple times for multiple arguments.

#### --single-snapshot

This flag takes a single snapshot of the filesystem at the end of the build, so only one layer will be appended to the base image.

#### --reproducible

Set this flag to strip timestamps out of the built image and make it reproducible.

#### --tarPath

Set this flag as `--tarPath=<path>` to save the image as a tarball at path instead of pushing the image.

#### --target

Set this flag to indicate which build stage is the target build stage.

#### --no-push

Set this flag if you only want to build the image, without pushing to a registry.

#### --insecure

Set this flag if you want to push images to a plain HTTP registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --skip-tls-verify

Set this flag to skip TLS certificate validation when pushing images to a registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --insecure-pull

Set this flag if you want to pull images from a plain HTTP registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --skip-tls-verify-pull

Set this flag to skip TLS certificate validation when pulling images from a registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --cache

Set this flag as `--cache=true` to opt in to caching with kaniko.

#### --cache-dir

Set this flag to specify a local directory cache for base images. Defaults to `/cache`.

_This flag must be used in conjunction with the `--cache=true` flag._

#### --cache-repo

Set this flag to specify a remote repository which will be used to store cached layers.
@@ -366,15 +359,57 @@ If `--destination=gcr.io/kaniko-project/test`, then cached layers will be stored

_This flag must be used in conjunction with the `--cache=true` flag._

#### --cache-dir
#### --insecure-registry

Set this flag to specify a local directory cache for base images. Defaults to `/cache`.
Set this flag to use plain HTTP requests when accessing a registry. It is supposed to be used for testing purposes only and should not be used in production!
You can set it multiple times for multiple registries.

_This flag must be used in conjunction with the `--cache=true` flag._
#### --skip-tls-verify-registry

Set this flag to skip TLS certificate validation when accessing a registry. It is supposed to be used for testing purposes only and should not be used in production!
You can set it multiple times for multiple registries.

#### --cleanup

Set this flag to cleanup the filesystem at the end, leaving a clean kaniko container (if you want to build multiple images in the same container, using the debug kaniko image)
Set this flag to clean the filesystem at the end of the build.

#### --insecure

Set this flag if you want to push images to a plain HTTP registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --insecure-pull

Set this flag if you want to pull images from a plain HTTP registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --no-push

Set this flag if you only want to build the image, without pushing to a registry.

#### --reproducible

Set this flag to strip timestamps out of the built image and make it reproducible.

#### --single-snapshot

This flag takes a single snapshot of the filesystem at the end of the build, so only one layer will be appended to the base image.

#### --skip-tls-verify

Set this flag to skip TLS certificate validation when connecting to a registry. It is supposed to be used for testing purposes only and should not be used in production!

#### --snapshotMode

You can set the `--snapshotMode=<full (default), time>` flag to set how kaniko will snapshot the filesystem.
If `--snapshotMode=time` is set, only file mtime will be considered when snapshotting (see
[limitations related to mtime](#mtime-and-snapshotting)).

#### --target

Set this flag to indicate which build stage is the target build stage.

#### --tarPath

Set this flag as `--tarPath=<path>` to save the image as a tarball at path instead of pushing the image.

### Debug Image
@@ -0,0 +1,21 @@
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#!/bin/bash
set -ex
script_name=$0
script_full_path=$(dirname "$0")
export BENCHMARK=true
export IMAGE_REPO="gcr.io/kaniko-test/benchmarks"
./${script_full_path}/integration-test.sh
@@ -20,15 +20,17 @@ import (
    "fmt"
    "os"
    "path/filepath"
    "regexp"
    "strings"
    "time"

    "github.com/GoogleContainerTools/kaniko/pkg/timing"

    "github.com/GoogleContainerTools/kaniko/pkg/buildcontext"
    "github.com/GoogleContainerTools/kaniko/pkg/config"
    "github.com/GoogleContainerTools/kaniko/pkg/constants"
    "github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
    "github.com/GoogleContainerTools/kaniko/pkg/executor"
    "github.com/GoogleContainerTools/kaniko/pkg/util"
    "github.com/docker/docker/pkg/fileutils"
    "github.com/genuinetools/amicontained/container"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
@@ -49,6 +51,7 @@ func init() {
    addHiddenFlags(RootCmd)
}

// RootCmd is the kaniko command that is run
var RootCmd = &cobra.Command{
    Use: "executor",
    PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
@@ -67,7 +70,7 @@ var RootCmd = &cobra.Command{
        if err := resolveDockerfilePath(); err != nil {
            return errors.Wrap(err, "error resolving dockerfile path")
        }
        return removeIgnoredFiles()
        return nil
    },
    Run: func(cmd *cobra.Command, args []string) {
        if !checkContained() {
@@ -86,6 +89,21 @@ var RootCmd = &cobra.Command{
        if err := executor.DoPush(image, opts); err != nil {
            exit(errors.Wrap(err, "error pushing image"))
        }

        benchmarkFile := os.Getenv("BENCHMARK_FILE")
        // false is a keyword for integration tests to turn off benchmarking
        if benchmarkFile != "" && benchmarkFile != "false" {
            f, err := os.Create(benchmarkFile)
            if err != nil {
                logrus.Warnf("Unable to create benchmarking file %s: %s", benchmarkFile, err)
            }
            defer f.Close()
            s, err := timing.JSON()
            if err != nil {
                logrus.Warnf("Unable to write benchmark file: %s", err)
            }
            f.WriteString(s)
        }
    },
}
@@ -110,6 +128,9 @@ func addKanikoOptionsFlags(cmd *cobra.Command) {
    RootCmd.PersistentFlags().StringVarP(&opts.CacheDir, "cache-dir", "", "/cache", "Specify a local directory to use as a cache.")
    RootCmd.PersistentFlags().BoolVarP(&opts.Cache, "cache", "", false, "Use cache when building image")
    RootCmd.PersistentFlags().BoolVarP(&opts.Cleanup, "cleanup", "", false, "Clean the filesystem at the end")
    RootCmd.PersistentFlags().DurationVarP(&opts.CacheTTL, "cache-ttl", "", time.Hour*336, "Cache timeout in hours. Defaults to two weeks.")
    RootCmd.PersistentFlags().VarP(&opts.InsecureRegistries, "insecure-registry", "", "Insecure registry using plain HTTP to push and pull. Set it repeatedly for multiple registries.")
    RootCmd.PersistentFlags().VarP(&opts.SkipTLSVerifyRegistries, "skip-tls-verify-registry", "", "Insecure registry ignoring TLS verify to push and pull. Set it repeatedly for multiple registries.")
}

// addHiddenFlags marks certain flags as hidden from the executor help text
@@ -140,6 +161,9 @@ func cacheFlagsValid() error {

// resolveDockerfilePath resolves the Dockerfile path to an absolute path
func resolveDockerfilePath() error {
    if match, _ := regexp.MatchString("^https?://", opts.DockerfilePath); match {
        return nil
    }
    if util.FilepathExists(opts.DockerfilePath) {
        abs, err := filepath.Abs(opts.DockerfilePath)
        if err != nil {
@@ -163,7 +187,7 @@ func resolveDockerfilePath() error {
// copy Dockerfile to /kaniko/Dockerfile so that if it's specified in the .dockerignore
// it won't be copied into the image
func copyDockerfile() error {
    if err := util.CopyFile(opts.DockerfilePath, constants.DockerfilePath); err != nil {
    if _, err := util.CopyFile(opts.DockerfilePath, constants.DockerfilePath, ""); err != nil {
        return errors.Wrap(err, "copying dockerfile")
    }
    opts.DockerfilePath = constants.DockerfilePath
@@ -200,29 +224,6 @@ func resolveSourceContext() error {
    return nil
}

func removeIgnoredFiles() error {
    if !dockerfile.DockerignoreExists(opts) {
        return nil
    }
    ignore, err := dockerfile.ParseDockerignore(opts)
    if err != nil {
        return err
    }
    logrus.Infof("Removing ignored files from build context: %s", ignore)
    files, err := util.RelativeFiles("", opts.SrcContext)
    if err != nil {
        return errors.Wrap(err, "getting all files in src context")
    }
    for _, f := range files {
        if rm, _ := fileutils.Matches(f, ignore); rm {
            if err := os.RemoveAll(f); err != nil {
                logrus.Errorf("Error removing %s from build context", f)
            }
        }
    }
    return nil
}

func exit(err error) {
    fmt.Println(err)
    os.Exit(1)
@@ -16,8 +16,6 @@

FROM golang:1.10
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
COPY . .
RUN make
# Get GCR credential helper
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.4.3-static/docker-credential-gcr_linux_amd64-1.4.3.tar.gz /usr/local/bin/
RUN tar -C /usr/local/bin/ -xvzf /usr/local/bin/docker-credential-gcr_linux_amd64-1.4.3.tar.gz
@@ -25,6 +23,9 @@ RUN tar -C /usr/local/bin/ -xvzf /usr/local/bin/docker-credential-gcr_linux_amd64-1.4.3.tar.gz
RUN go get -u github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login
RUN make -C /go/src/github.com/awslabs/amazon-ecr-credential-helper linux-amd64

COPY . .
RUN make

FROM scratch
COPY --from=0 /go/src/github.com/GoogleContainerTools/kaniko/out/executor /kaniko/executor
COPY --from=0 /usr/local/bin/docker-credential-gcr /kaniko/docker-credential-gcr
@@ -17,14 +17,14 @@
# Stage 0: Build the executor binary and get credential helpers
FROM golang:1.10
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
COPY . .
RUN make
# Get GCR credential helper
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.4.3-static/docker-credential-gcr_linux_amd64-1.4.3.tar.gz /usr/local/bin/
RUN tar -C /usr/local/bin/ -xvzf /usr/local/bin/docker-credential-gcr_linux_amd64-1.4.3.tar.gz
# Get Amazon ECR credential helper
RUN go get -u github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login
RUN make -C /go/src/github.com/awslabs/amazon-ecr-credential-helper linux-amd64
COPY . .
RUN make

# Stage 1: Get the busybox shell
FROM gcr.io/cloud-builders/bazel:latest
@@ -3,9 +3,6 @@ steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-f", "deploy/Dockerfile",
         "-t", "gcr.io/kaniko-project/executor:${COMMIT_SHA}", "."]
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-f", "deploy/Dockerfile",
         "-t", "gcr.io/kaniko-project/executor:latest", "."]
# Then, we want to build kaniko:debug
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-f", "deploy/Dockerfile_debug",
@@ -17,12 +14,7 @@ steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-f", "deploy/Dockerfile_warmer",
         "-t", "gcr.io/kaniko-project/warmer:${COMMIT_SHA}", "."]
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-f", "deploy/Dockerfile_warmer",
         "-t", "gcr.io/kaniko-project/warmer:latest", "."]
images: ["gcr.io/kaniko-project/executor:${COMMIT_SHA}",
         "gcr.io/kaniko-project/executor:latest",
         "gcr.io/kaniko-project/executor:debug-${COMMIT_SHA}",
         "gcr.io/kaniko-project/executor:debug",
         "gcr.io/kaniko-project/warmer:${COMMIT_SHA}",
         "gcr.io/kaniko-project/warmer:latest"]
         "gcr.io/kaniko-project/warmer:${COMMIT_SHA}"]
@@ -0,0 +1,3 @@
# A .dockerignore file to make sure dockerignore support works
ignore/**
!ignore/foo
@@ -18,7 +18,6 @@ package integration

import (
    "bytes"
    "fmt"
    "os/exec"
    "testing"
)
@@ -26,14 +25,8 @@
// RunCommandWithoutTest will run cmd and if it fails will output relevant info
// for debugging before returning an error. It can be run outside the context of a test.
func RunCommandWithoutTest(cmd *exec.Cmd) ([]byte, error) {
    var stderr bytes.Buffer
    cmd.Stderr = &stderr
    output, err := cmd.Output()
    if err != nil {
        fmt.Println(cmd.Args)
        fmt.Println(stderr.String())
        fmt.Println(string(output))
    }
    output, err := cmd.CombinedOutput()

    return output, err
}
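The switch to `cmd.CombinedOutput()` above drops the separate stderr buffer: stdout and stderr come back merged, in the order the process produced them. A standalone illustration:

```go
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // CombinedOutput runs the command and returns stdout and stderr
    // interleaved in a single byte slice.
    out, err := exec.Command("sh", "-c", "echo to-stdout; echo to-stderr 1>&2").CombinedOutput()
    fmt.Printf("err: %v\n%s", err, out) // both lines appear in out
}
```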
@@ -16,7 +16,15 @@
# If the image is built twice, /date should be the same in both images
# if the cache is implemented correctly

FROM gcr.io/google-appengine/debian9@sha256:1d6a9a6d106bd795098f60f4abb7083626354fa6735e81743c7f8cfca11259f0
FROM alpine as base_stage

RUN mkdir foo && echo base_stage > foo/base

FROM base_stage as cached_stage

RUN echo cached_stage > foo/cache

FROM cached_stage as bug_stage

RUN echo bug_stage > foo/bug
RUN date > /date
COPY context/foo /foo
RUN echo hey
@@ -1,3 +1,3 @@
FROM busybox
RUN while true; do dd if=/dev/zero of=file`date +%s`.txt count=16000 bs=256 > /dev/null 2>&1; done &
RUN (while true; do sleep 10; dd if=/dev/zero of=file`date +%s`.txt count=16000 bs=256 > /dev/null 2>&1; done &); sleep 1
RUN echo "wait a second..." && sleep 2 && ls -lrat file*.txt || echo "test passed."
@@ -0,0 +1,5 @@
# This dockerfile makes sure the .dockerignore is working
# If so then ignore/foo should copy to /foo
# If not, then this image won't build because it will attempt to copy three files to /foo, which is a file not a directory
FROM scratch
COPY ignore/* /foo
@@ -1,3 +1,10 @@
FROM composer@sha256:4598feb4b58b4370893a29cbc654afa9420b4debed1d574531514b78a24cd608 AS composer
FROM php@sha256:13813f20fec7ded7bf3a4305ea0ccd4df3cea900e263f7f86c3d5737f86669eb
COPY --from=composer /usr/bin/composer /usr/bin/composer

# make sure hardlink extracts correctly
FROM jboss/base-jdk@sha256:138591422fdab93a5844c13f6cbcc685631b37a16503675e9f340d2503617a41

FROM gcr.io/kaniko-test/hardlink-base:latest
RUN ls -al /usr/libexec/git-core/git /usr/bin/git /usr/libexec/git-core/git-diff
RUN stat /usr/bin/git
@@ -14,4 +14,5 @@ FROM ${REGISTRY}/${REPO}/debian9
COPY --from=stage1 /hello /tmp

# /tmp/hey should not get created without the ARG statement
RUN touch /tmp/${WORD2}
# Use -d 0 to force a time change because of stat resolution
RUN touch -d 0 /tmp/${WORD2}
@@ -1,17 +1,23 @@
FROM gcr.io/distroless/base@sha256:628939ac8bf3f49571d05c6c76b8688cb4a851af6c7088e599388259875bde20 as base
FROM gcr.io/google-appengine/debian9@sha256:f0159d14385afcb58a9b2fa8955c0cb64bd3abc365e8589f8c2dd38150fbfdbe as base
COPY . .

FROM scratch as second
ENV foopath context/foo
COPY --from=0 $foopath context/b* /foo/

FROM second
FROM second as third
COPY --from=base /context/foo /new/foo

FROM base as fourth
# Make sure that we snapshot intermediate images correctly
RUN date > /date
ENV foo bar

# This base image contains symlinks with relative paths to whitelisted directories
# We need to test they're extracted correctly
FROM fedora@sha256:c4cc32b09c6ae3f1353e7e33a8dda93dc41676b923d6d89afa996b421cc5aa48

FROM base
FROM fourth
ARG file
COPY --from=second /foo ${file}
COPY --from=gcr.io/google-appengine/debian9@sha256:00109fa40230a081f5ecffe0e814725042ff62a03e2d1eae0563f1f82eaeae9b /etc/os-release /new
@@ -0,0 +1,11 @@
# Make sure that whitelisting (specifically, filepath.SkipDir) works correctly, and that /var/test/testfile and
# /etc/test/testfile end up in the final image

FROM debian@sha256:38236c068c393272ad02db100e09cac36a5465149e2924a035ee60d6c60c38fe

RUN mkdir -p /var/test \
    && mkdir -p /etc/test \
    && touch /var/test/testfile \
    && touch /etc/test/testfile \
    && ls -lah /var/test \
    && ls -lah /etc/test;
@@ -1,112 +0,0 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package integration

import (
    "fmt"
    "os"
    "os/exec"
    "path"
    "path/filepath"
    "runtime"
    "strings"
)

var filesToIgnore = []string{"ignore/fo*", "!ignore/foobar", "ignore/Dockerfile_test_ignore"}

const (
    ignoreDir                = "ignore"
    ignoreDockerfile         = "Dockerfile_test_ignore"
    ignoreDockerfileContents = `FROM scratch
COPY . .`
)

// Set up a test dir to ignore with the structure:
// ignore
// -- Dockerfile_test_ignore
// -- foo
// -- foobar
func setupIgnoreTestDir() error {
    if err := os.MkdirAll(ignoreDir, 0750); err != nil {
        return err
    }
    // Create and write contents to dockerfile
    path := filepath.Join(ignoreDir, ignoreDockerfile)
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()
    if _, err := f.Write([]byte(ignoreDockerfileContents)); err != nil {
        return err
    }

    additionalFiles := []string{"ignore/foo", "ignore/foobar"}
    for _, add := range additionalFiles {
        a, err := os.Create(add)
        if err != nil {
            return err
        }
        defer a.Close()
    }
    return generateDockerIgnore()
}

// generate the .dockerignore file
func generateDockerIgnore() error {
    f, err := os.Create(".dockerignore")
    if err != nil {
        return err
    }
    defer f.Close()
    contents := strings.Join(filesToIgnore, "\n")
    if _, err := f.Write([]byte(contents)); err != nil {
        return err
    }
    return nil
}

func generateDockerignoreImages(imageRepo string) error {

    dockerfilePath := filepath.Join(ignoreDir, ignoreDockerfile)

    dockerImage := strings.ToLower(imageRepo + dockerPrefix + ignoreDockerfile)
    dockerCmd := exec.Command("docker", "build",
        "-t", dockerImage,
        "-f", path.Join(dockerfilePath),
        ".")
    _, err := RunCommandWithoutTest(dockerCmd)
    if err != nil {
        return fmt.Errorf("Failed to build image %s with docker command \"%s\": %s", dockerImage, dockerCmd.Args, err)
    }

    _, ex, _, _ := runtime.Caller(0)
    cwd := filepath.Dir(ex)
    kanikoImage := GetKanikoImage(imageRepo, ignoreDockerfile)
    kanikoCmd := exec.Command("docker",
        "run",
        "-v", os.Getenv("HOME")+"/.config/gcloud:/root/.config/gcloud",
        "-v", cwd+":/workspace",
        ExecutorImage,
        "-f", path.Join(buildContextPath, dockerfilePath),
        "-d", kanikoImage,
        "-c", buildContextPath)

    _, err = RunCommandWithoutTest(kanikoCmd)
    return err
}
@@ -22,7 +22,6 @@ import (
    "log"
    "os"
    "os/exec"
    "path/filepath"
    "time"
)
@@ -49,16 +48,19 @@ func CreateIntegrationTarball() (string, error) {

// UploadFileToBucket will upload the file at filePath to gcsBucket. It will return the path
// of the file in gcsBucket.
func UploadFileToBucket(gcsBucket string, filePath string) (string, error) {
    log.Printf("Uploading file at %s to GCS bucket at %s\n", filePath, gcsBucket)
func UploadFileToBucket(gcsBucket string, filePath string, gcsPath string) (string, error) {
    dst := fmt.Sprintf("%s/%s", gcsBucket, gcsPath)
    log.Printf("Uploading file at %s to GCS bucket at %s\n", filePath, dst)

    cmd := exec.Command("gsutil", "cp", filePath, gcsBucket)
    _, err := RunCommandWithoutTest(cmd)
    cmd := exec.Command("gsutil", "cp", filePath, dst)
    out, err := RunCommandWithoutTest(cmd)
    if err != nil {
        log.Printf("Error uploading file %s to GCS at %s: %s", filePath, dst, err)
        log.Println(string(out))
        return "", fmt.Errorf("Failed to copy tarball to GCS bucket %s: %s", gcsBucket, err)
    }

    return filepath.Join(gcsBucket, filePath), err
    return dst, nil
}

// DeleteFromBucket will remove the content at path. path should be the full path
@@ -18,6 +18,7 @@ package integration

import (
    "fmt"
    "io/ioutil"
    "os"
    "os/exec"
    "path"
@@ -25,12 +26,16 @@
    "runtime"
    "strconv"
    "strings"
    "time"

    "github.com/GoogleContainerTools/kaniko/pkg/timing"
)

const (
    // ExecutorImage is the name of the kaniko executor image
    ExecutorImage = "executor-image"
    WarmerImage = "warmer-image"
    // WarmerImage is the name of the kaniko cache warmer image
    WarmerImage = "warmer-image"

    dockerPrefix = "docker-"
    kanikoPrefix = "kaniko-"
@@ -164,7 +169,9 @@ func (d *DockerFileBuilder) BuildImage(imageRepo, gcsBucket, dockerfilesPath, dockerfile
        additionalFlags...)...,
    )

    timer := timing.Start(dockerfile + "_docker")
    _, err := RunCommandWithoutTest(dockerCmd)
    timing.DefaultRun.Stop(timer)
    if err != nil {
        return fmt.Errorf("Failed to build image %s with docker command \"%s\": %s", dockerImage, dockerCmd.Args, err)
    }
@@ -192,13 +199,28 @@
        }
    }

    benchmarkEnv := "BENCHMARK_FILE=false"
    benchmarkDir, err := ioutil.TempDir("", "")
    if err != nil {
        return err
    }
    if b, err := strconv.ParseBool(os.Getenv("BENCHMARK")); err == nil && b {
        benchmarkEnv = "BENCHMARK_FILE=/kaniko/benchmarks/" + dockerfile
        benchmarkFile := path.Join(benchmarkDir, dockerfile)
        fileName := fmt.Sprintf("run_%s_%s", time.Now().Format("2006-01-02-15:04"), dockerfile)
        dst := path.Join("benchmarks", fileName)
        defer UploadFileToBucket(gcsBucket, benchmarkFile, dst)
    }

    // build kaniko image
    additionalFlags = append(buildArgs, additionalKanikoFlagsMap[dockerfile]...)
    kanikoImage := GetKanikoImage(imageRepo, dockerfile)
    kanikoCmd := exec.Command("docker",
        append([]string{"run",
            "-v", os.Getenv("HOME") + "/.config/gcloud:/root/.config/gcloud",
            "-v", benchmarkDir + ":/kaniko/benchmarks",
            "-v", cwd + ":/workspace",
            "-e", benchmarkEnv,
            ExecutorImage,
            "-f", path.Join(buildContextPath, dockerfilesPath, dockerfile),
            "-d", kanikoImage, reproducibleFlag,
@@ -206,7 +228,9 @@
        additionalFlags...)...,
    )

    timer = timing.Start(dockerfile + "_kaniko")
    _, err = RunCommandWithoutTest(kanikoCmd)
    timing.DefaultRun.Stop(timer)
    if err != nil {
        return fmt.Errorf("Failed to build image %s with kaniko command \"%s\": %s", dockerImage, kanikoCmd.Args, err)
    }
@@ -243,11 +267,17 @@ func (d *DockerFileBuilder) buildCachedImages(imageRepo, cacheRepo, dockerfilesPath
    cacheFlag := "--cache=true"

    for dockerfile := range d.TestCacheDockerfiles {
        benchmarkEnv := "BENCHMARK_FILE=false"
        if b, err := strconv.ParseBool(os.Getenv("BENCHMARK")); err == nil && b {
            os.Mkdir("benchmarks", 0755)
            benchmarkEnv = "BENCHMARK_FILE=/workspace/benchmarks/" + dockerfile
        }
        kanikoImage := GetVersionedKanikoImage(imageRepo, dockerfile, version)
        kanikoCmd := exec.Command("docker",
            append([]string{"run",
                "-v", os.Getenv("HOME") + "/.config/gcloud:/root/.config/gcloud",
                "-v", cwd + ":/workspace",
                "-e", benchmarkEnv,
                ExecutorImage,
                "-f", path.Join(buildContextPath, dockerfilesPath, dockerfile),
                "-d", kanikoImage,
@@ -257,7 +287,10 @@
                "--cache-dir", cacheDir})...,
        )

        if _, err := RunCommandWithoutTest(kanikoCmd); err != nil {
        timer := timing.Start(dockerfile + "_kaniko_cached_" + strconv.Itoa(version))
        _, err := RunCommandWithoutTest(kanikoCmd)
        timing.DefaultRun.Stop(timer)
        if err != nil {
            return fmt.Errorf("Failed to build cached image %s with kaniko command \"%s\": %s", kanikoImage, kanikoCmd.Args, err)
        }
    }
@@ -25,13 +25,17 @@ import (
    "os"
    "os/exec"
    "path/filepath"
    "strconv"
    "strings"
    "testing"
    "time"

    "golang.org/x/sync/errgroup"

    "github.com/google/go-containerregistry/pkg/name"
    "github.com/google/go-containerregistry/pkg/v1/daemon"

    "github.com/GoogleContainerTools/kaniko/pkg/timing"
    "github.com/GoogleContainerTools/kaniko/pkg/util"
    "github.com/GoogleContainerTools/kaniko/testutil"
)
@@ -123,7 +127,7 @@ func TestMain(m *testing.M) {
        os.Exit(1)
    }

    fileInBucket, err := UploadFileToBucket(config.gcsBucket, contextFile)
    fileInBucket, err := UploadFileToBucket(config.gcsBucket, contextFile, contextFile)
    if err != nil {
        fmt.Println("Failed to upload build context", err)
        os.Exit(1)
@@ -138,67 +142,76 @@ func TestMain(m *testing.M) {
    RunOnInterrupt(func() { DeleteFromBucket(fileInBucket) })
    defer DeleteFromBucket(fileInBucket)

    fmt.Println("Building kaniko image")
    cmd := exec.Command("docker", "build", "-t", ExecutorImage, "-f", "../deploy/Dockerfile", "..")
    if _, err = RunCommandWithoutTest(cmd); err != nil {
        fmt.Printf("Building kaniko failed: %s", err)
        os.Exit(1)
    setupCommands := []struct {
        name    string
        command []string
    }{
        {
            name:    "Building kaniko image",
            command: []string{"docker", "build", "-t", ExecutorImage, "-f", "../deploy/Dockerfile", ".."},
        },
        {
            name:    "Building cache warmer image",
            command: []string{"docker", "build", "-t", WarmerImage, "-f", "../deploy/Dockerfile_warmer", ".."},
        },
        {
            name:    "Building onbuild base image",
            command: []string{"docker", "build", "-t", config.onbuildBaseImage, "-f", "dockerfiles/Dockerfile_onbuild_base", "."},
        },
        {
            name:    "Pushing onbuild base image",
            command: []string{"docker", "push", config.onbuildBaseImage},
        },
        {
            name:    "Building hardlink base image",
            command: []string{"docker", "build", "-t", config.hardlinkBaseImage, "-f", "dockerfiles/Dockerfile_hardlink_base", "."},
        },
        {
            name:    "Pushing hardlink base image",
            command: []string{"docker", "push", config.hardlinkBaseImage},
        },
    }

    fmt.Println("Building cache warmer image")
    cmd = exec.Command("docker", "build", "-t", WarmerImage, "-f", "../deploy/Dockerfile_warmer", "..")
    if _, err = RunCommandWithoutTest(cmd); err != nil {
        fmt.Printf("Building kaniko's cache warmer failed: %s", err)
        os.Exit(1)
    for _, setupCmd := range setupCommands {
        fmt.Println(setupCmd.name)
        cmd := exec.Command(setupCmd.command[0], setupCmd.command[1:]...)
        if _, err := RunCommandWithoutTest(cmd); err != nil {
            fmt.Printf("%s failed: %s", setupCmd.name, err)
            os.Exit(1)
        }
    }

    fmt.Println("Building onbuild base image")
    buildOnbuildBase := exec.Command("docker", "build", "-t", config.onbuildBaseImage, "-f", "dockerfiles/Dockerfile_onbuild_base", ".")
    if err := buildOnbuildBase.Run(); err != nil {
        fmt.Printf("error building onbuild base: %v", err)
        os.Exit(1)
    }

    pushOnbuildBase := exec.Command("docker", "push", config.onbuildBaseImage)
    if err := pushOnbuildBase.Run(); err != nil {
        fmt.Printf("error pushing onbuild base %s: %v", config.onbuildBaseImage, err)
        os.Exit(1)
    }

    fmt.Println("Building hardlink base image")
    buildHardlinkBase := exec.Command("docker", "build", "-t", config.hardlinkBaseImage, "-f", "dockerfiles/Dockerfile_hardlink_base", ".")
    if err := buildHardlinkBase.Run(); err != nil {
        fmt.Printf("error building hardlink base: %v", err)
        os.Exit(1)
    }
    pushHardlinkBase := exec.Command("docker", "push", config.hardlinkBaseImage)
    if err := pushHardlinkBase.Run(); err != nil {
        fmt.Printf("error pushing hardlink base %s: %v", config.hardlinkBaseImage, err)
        os.Exit(1)
    }
    dockerfiles, err := FindDockerFiles(dockerfilesPath)
    if err != nil {
        fmt.Printf("Couldn't create map of dockerfiles: %s", err)
        os.Exit(1)
    }
    imageBuilder = NewDockerFileBuilder(dockerfiles)

    g := errgroup.Group{}
    for dockerfile := range imageBuilder.FilesBuilt {
        df := dockerfile
        g.Go(func() error {
            return imageBuilder.BuildImage(config.imageRepo, config.gcsBucket, dockerfilesPath, df)
        })
    }
    if err := g.Wait(); err != nil {
        fmt.Printf("Error building images: %s", err)
        os.Exit(1)
    }
    os.Exit(m.Run())
}
func TestRun(t *testing.T) {
    for dockerfile, built := range imageBuilder.FilesBuilt {
    for dockerfile := range imageBuilder.FilesBuilt {
        t.Run("test_"+dockerfile, func(t *testing.T) {
            dockerfile := dockerfile
            t.Parallel()
            if _, ok := imageBuilder.DockerfilesToIgnore[dockerfile]; ok {
                t.SkipNow()
            }
            if _, ok := imageBuilder.TestCacheDockerfiles[dockerfile]; ok {
                t.SkipNow()
            }
            if !built {
                err := imageBuilder.BuildImage(config.imageRepo, config.gcsBucket, dockerfilesPath, dockerfile)
                if err != nil {
                    t.Fatalf("Failed to build kaniko and docker images for %s: %s", dockerfile, err)
                }
            }
            dockerImage := GetDockerImage(config.imageRepo, dockerfile)
            kanikoImage := GetKanikoImage(config.imageRepo, dockerfile)
@@ -215,6 +228,11 @@ func TestRun(t *testing.T) {

        })
    }

    err := logBenchmarks("benchmark")
    if err != nil {
        t.Logf("Failed to create benchmark file: %v", err)
    }
}

func TestLayers(t *testing.T) {
@@ -222,17 +240,13 @@ func TestLayers(t *testing.T) {
        "Dockerfile_test_add":     11,
        "Dockerfile_test_scratch": 3,
    }
    for dockerfile, built := range imageBuilder.FilesBuilt {
    for dockerfile := range imageBuilder.FilesBuilt {
        t.Run("test_layer_"+dockerfile, func(t *testing.T) {
            dockerfile := dockerfile
            t.Parallel()
            if _, ok := imageBuilder.DockerfilesToIgnore[dockerfile]; ok {
                t.SkipNow()
            }
            if !built {
                err := imageBuilder.BuildImage(config.imageRepo, config.gcsBucket, dockerfilesPath, dockerfile)
                if err != nil {
                    t.Fatalf("Failed to build kaniko and docker images for %s: %s", dockerfile, err)
                }
            }
            // Pull the kaniko image
            dockerImage := GetDockerImage(config.imageRepo, dockerfile)
            kanikoImage := GetKanikoImage(config.imageRepo, dockerfile)
@@ -241,6 +255,11 @@ func TestLayers(t *testing.T) {
            checkLayers(t, dockerImage, kanikoImage, offset[dockerfile])
        })
    }

    err := logBenchmarks("benchmark_layers")
    if err != nil {
        t.Logf("Failed to create benchmark file: %v", err)
    }
}

// Build each image with kaniko twice, and then make sure they're exactly the same
@@ -248,6 +267,8 @@ func TestCache(t *testing.T) {
    populateVolumeCache()
    for dockerfile := range imageBuilder.TestCacheDockerfiles {
        t.Run("test_cache_"+dockerfile, func(t *testing.T) {
            dockerfile := dockerfile
            t.Parallel()
            cache := filepath.Join(config.imageRepo, "cache", fmt.Sprintf("%v", time.Now().UnixNano()))
            // Build the initial image which will cache layers
            if err := imageBuilder.buildCachedImages(config.imageRepo, cache, dockerfilesPath, 0); err != nil {
@@ -273,31 +294,10 @@ func TestCache(t *testing.T) {
            checkContainerDiffOutput(t, diff, expected)
        })
    }
}

func TestDockerignore(t *testing.T) {
    t.Run(fmt.Sprintf("test_%s", ignoreDockerfile), func(t *testing.T) {
        if err := setupIgnoreTestDir(); err != nil {
            t.Fatalf("error setting up ignore test dir: %v", err)
        }
        if err := generateDockerignoreImages(config.imageRepo); err != nil {
            t.Fatalf("error generating dockerignore test images: %v", err)
        }

        dockerImage := GetDockerImage(config.imageRepo, ignoreDockerfile)
        kanikoImage := GetKanikoImage(config.imageRepo, ignoreDockerfile)

        // container-diff
        daemonDockerImage := daemonPrefix + dockerImage
        containerdiffCmd := exec.Command("container-diff", "diff",
            daemonDockerImage, kanikoImage,
            "-q", "--type=file", "--type=metadata", "--json")
        diff := RunCommand(containerdiffCmd, t)
        t.Logf("diff = %s", string(diff))

        expected := fmt.Sprintf(emptyContainerDiff, dockerImage, kanikoImage, dockerImage, kanikoImage)
        checkContainerDiffOutput(t, diff, expected)
    })
    if err := logBenchmarks("benchmark_cache"); err != nil {
        t.Logf("Failed to create benchmark file: %v", err)
    }
}

type fileDiff struct {
@@ -400,3 +400,15 @@ func getImageDetails(image string) (*imageDetails, error) {
        digest: digest.Hex,
    }, nil
}

func logBenchmarks(benchmark string) error {
    if b, err := strconv.ParseBool(os.Getenv("BENCHMARK")); err == nil && b {
        f, err := os.Create(benchmark)
        if err != nil {
            return err
        }
        f.WriteString(timing.Summary())
        defer f.Close()
    }
    return nil
}
@@ -17,12 +17,16 @@ limitations under the License.
package cache

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "os"
    "path"
    "path/filepath"
    "time"

    "github.com/GoogleContainerTools/kaniko/pkg/config"
    "github.com/google/go-containerregistry/pkg/authn"
    "github.com/google/go-containerregistry/pkg/authn/k8schain"
    "github.com/GoogleContainerTools/kaniko/pkg/creds"
    "github.com/google/go-containerregistry/pkg/name"
    "github.com/google/go-containerregistry/pkg/v1"
    "github.com/google/go-containerregistry/pkg/v1/remote"
@@ -31,14 +35,17 @@
    "github.com/sirupsen/logrus"
)

// LayerCache is the layer cache
type LayerCache interface {
    RetrieveLayer(string) (v1.Image, error)
}

// RegistryCache is the registry cache
type RegistryCache struct {
    Opts *config.KanikoOptions
}

// RetrieveLayer retrieves a layer from the cache given the cache key ck.
func (rc *RegistryCache) RetrieveLayer(ck string) (v1.Image, error) {
    cache, err := Destination(rc.Opts, ck)
    if err != nil {
@ -50,15 +57,40 @@ func (rc *RegistryCache) RetrieveLayer(ck string) (v1.Image, error) {
|
|||
if err != nil {
|
||||
return nil, errors.Wrap(err, fmt.Sprintf("getting reference for %s", cache))
|
||||
}
|
||||
k8sc, err := k8schain.NewNoClient()
|
||||
|
||||
registryName := cacheRef.Repository.Registry.Name()
|
||||
if rc.Opts.InsecureRegistries.Contains(registryName) {
|
||||
newReg, err := name.NewInsecureRegistry(registryName, name.WeakValidation)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cacheRef.Repository.Registry = newReg
|
||||
}
|
||||
|
||||
tr := http.DefaultTransport.(*http.Transport)
|
||||
if rc.Opts.SkipTLSVerifyRegistries.Contains(registryName) {
|
||||
tr.TLSClientConfig = &tls.Config{
|
||||
InsecureSkipVerify: true,
|
||||
}
|
||||
}
|
||||
|
||||
img, err := remote.Image(cacheRef, remote.WithTransport(tr), remote.WithAuthFromKeychain(creds.GetKeychain()))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
kc := authn.NewMultiKeychain(authn.DefaultKeychain, k8sc)
|
||||
img, err := remote.Image(cacheRef, remote.WithAuthFromKeychain(kc))
|
||||
|
||||
cf, err := img.ConfigFile()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.Wrap(err, fmt.Sprintf("retrieving config file for %s", cache))
|
||||
}
|
||||
|
||||
expiry := cf.Created.Add(rc.Opts.CacheTTL)
|
||||
// Layer is stale, rebuild it.
|
||||
if expiry.Before(time.Now()) {
|
||||
logrus.Infof("Cache entry expired: %s", cache)
|
||||
return nil, errors.New(fmt.Sprintf("Cache entry expired: %s", cache))
|
||||
}
|
||||
|
||||
// Force the manifest to be populated
|
||||
if _, err := img.RawManifest(); err != nil {
|
||||
return nil, err
|
||||
|
|
@ -81,6 +113,7 @@ func Destination(opts *config.KanikoOptions, cacheKey string) (string, error) {
|
|||
return fmt.Sprintf("%s:%s", cache, cacheKey), nil
|
||||
}
|
||||
|
||||
// LocalSource retieves a source image from a local cache given cacheKey
|
||||
func LocalSource(opts *config.KanikoOptions, cacheKey string) (v1.Image, error) {
|
||||
cache := opts.CacheDir
|
||||
if cache == "" {
|
||||
|
|
@ -89,11 +122,43 @@ func LocalSource(opts *config.KanikoOptions, cacheKey string) (v1.Image, error)
|
|||
|
||||
path := path.Join(cache, cacheKey)
|
||||
|
||||
imgTar, err := tarball.ImageFromPath(path, nil)
|
||||
fi, err := os.Stat(path)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "geting file info")
|
||||
}
|
||||
|
||||
// A stale cache is a bad cache
|
||||
expiry := fi.ModTime().Add(opts.CacheTTL)
|
||||
if expiry.Before(time.Now()) {
|
||||
logrus.Debugf("Cached image is too old: %v", fi.ModTime())
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
logrus.Infof("Found %s in local cache", cacheKey)
|
||||
return cachedImageFromPath(path)
|
||||
}
|
||||
|
||||
// cachedImage represents a v1.Tarball that is cached locally in a CAS.
|
||||
// Computing the digest for a v1.Tarball is very expensive. If the tarball
|
||||
// is named with the digest we can store this and return it directly rather
|
||||
// than recompute it.
|
||||
type cachedImage struct {
|
||||
digest string
|
||||
v1.Image
|
||||
}
|
||||
|
||||
func (c *cachedImage) Digest() (v1.Hash, error) {
|
||||
return v1.NewHash(c.digest)
|
||||
}
|
||||
|
||||
func cachedImageFromPath(p string) (v1.Image, error) {
|
||||
imgTar, err := tarball.ImageFromPath(p, nil)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "getting image from path")
|
||||
}
|
||||
|
||||
logrus.Infof("Found %s in local cache", cacheKey)
|
||||
return imgTar, nil
|
||||
return &cachedImage{
|
||||
digest: filepath.Base(p),
|
||||
Image: imgTar,
|
||||
}, nil
|
||||
}
|
||||
|
|
|
|||
|
|
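A quick sketch of the content-addressed layout assumed by cachedImageFromPath above: tarballs are stored at <cacheDir>/<digest>, so the digest can be read back from the filename instead of re-hashing the whole tarball. The path below is hypothetical:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	p := "/cache/sha256:93fd1844cd4ba2c87b2187140102bb21a5e8a7e6ccbb7f14e0a32e43a39813d2"
	fmt.Println(filepath.Base(p)) // the digest, recovered without recomputation
}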
@@ -70,22 +70,30 @@ func (c *CopyCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.Bu
			// we need to add '/' to the end to indicate the destination is a directory
			dest = filepath.Join(cwd, dest) + "/"
		}
		copiedFiles, err := util.CopyDir(fullPath, dest)
		copiedFiles, err := util.CopyDir(fullPath, dest, c.buildcontext)
		if err != nil {
			return err
		}
		c.snapshotFiles = append(c.snapshotFiles, copiedFiles...)
	} else if fi.Mode()&os.ModeSymlink != 0 {
		// If file is a symlink, we want to create the same relative symlink
		if err := util.CopySymlink(fullPath, destPath); err != nil {
		exclude, err := util.CopySymlink(fullPath, destPath, c.buildcontext)
		if err != nil {
			return err
		}
		if exclude {
			continue
		}
		c.snapshotFiles = append(c.snapshotFiles, destPath)
	} else {
		// ... Else, we want to copy over a file
		if err := util.CopyFile(fullPath, destPath); err != nil {
		exclude, err := util.CopyFile(fullPath, destPath, c.buildcontext)
		if err != nil {
			return err
		}
		if exclude {
			continue
		}
		c.snapshotFiles = append(c.snapshotFiles, destPath)
	}
}
@@ -31,6 +31,10 @@ type UserCommand struct {
	cmd *instructions.UserCommand
}

func (r *UserCommand) RequiresUnpackedFS() bool {
	return true
}

func (r *UserCommand) ExecuteCommand(config *v1.Config, buildArgs *dockerfile.BuildArgs) error {
	logrus.Info("cmd: USER")
	u := r.cmd.User

@@ -42,3 +42,12 @@ func (b *multiArg) Set(value string) error {
func (b *multiArg) Type() string {
	return "multi-arg type"
}

func (b *multiArg) Contains(v string) bool {
	for _, s := range *b {
		if s == v {
			return true
		}
	}
	return false
}
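For context, here is a minimal sketch of how a repeatable flag like multiArg is typically consumed. The flag name --insecure-registry matches the new KanikoOptions fields below, but the registration shown here uses the standard library and is an assumption, not the actual cmd wiring:

package main

import (
	"flag"
	"fmt"
	"strings"
)

// multiArg collects a repeatable string flag, mirroring the type in the diff.
type multiArg []string

func (b *multiArg) String() string     { return strings.Join(*b, ",") }
func (b *multiArg) Set(v string) error { *b = append(*b, v); return nil }

func (b *multiArg) Contains(v string) bool {
	for _, s := range *b {
		if s == v {
			return true
		}
	}
	return false
}

func main() {
	var regs multiArg
	flag.Var(&regs, "insecure-registry", "registry to contact over plain HTTP (repeatable)")
	flag.Parse()
	fmt.Println(regs.Contains("my-registry.local:5000")) // hypothetical registry name
}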
@@ -16,27 +16,34 @@ limitations under the License.

package config

import (
	"time"
)

// KanikoOptions are options that are set by command line arguments
type KanikoOptions struct {
	DockerfilePath          string
	SrcContext              string
	SnapshotMode            string
	Bucket                  string
	TarPath                 string
	Target                  string
	CacheRepo               string
	CacheDir                string
	Destinations            multiArg
	BuildArgs               multiArg
	Insecure                bool
	SkipTLSVerify           bool
	InsecurePull            bool
	SkipTLSVerifyPull       bool
	SingleSnapshot          bool
	Reproducible            bool
	NoPush                  bool
	Cache                   bool
	Cleanup                 bool
	CacheTTL                time.Duration
	InsecureRegistries      multiArg
	SkipTLSVerifyRegistries multiArg
}

// WarmerOptions are options that are set by command line arguments to the cache warmer.
@@ -67,6 +67,9 @@ const (
	// Docker command names
	Cmd        = "cmd"
	Entrypoint = "entrypoint"

	// Name of the .dockerignore file
	Dockerignore = ".dockerignore"
)

// KanikoBuildFiles is the list of files required to build kaniko
@@ -0,0 +1,36 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package creds

import (
	"sync"

	"github.com/google/go-containerregistry/pkg/authn"
)

var (
	setupKeyChainOnce sync.Once
	keyChain          authn.Keychain
)

// GetKeychain returns a keychain for accessing container registries.
func GetKeychain() authn.Keychain {
	setupKeyChainOnce.Do(func() {
		keyChain = authn.NewMultiKeychain(authn.DefaultKeychain)
	})
	return keyChain
}
@@ -0,0 +1,54 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package creds

import (
	"sync"

	"github.com/genuinetools/amicontained/container"
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/authn/k8schain"
	"github.com/sirupsen/logrus"
)

var (
	setupKeyChainOnce sync.Once
	keyChain          authn.Keychain
)

// GetKeychain returns a keychain for accessing container registries.
func GetKeychain() authn.Keychain {
	setupKeyChainOnce.Do(func() {
		keyChain = authn.NewMultiKeychain(authn.DefaultKeychain)

		// Add the Kubernetes keychain if we're on Kubernetes
		r, err := container.DetectRuntime()
		if err != nil {
			logrus.Warnf("Error detecting container runtime. Using default keychain: %s", err)
			return
		}
		if r == container.RuntimeKubernetes {
			k8sc, err := k8schain.NewNoClient()
			if err != nil {
				logrus.Warnf("Error setting up k8schain. Using default keychain %s", err)
				return
			}
			keyChain = authn.NewMultiKeychain(keyChain, k8sc)
		}
	})
	return keyChain
}
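A hedged sketch of consuming this keychain to resolve credentials for a registry, using the go-containerregistry API as vendored at the time (Resolve taking a name.Registry); the registry name is just an example:

package main

import (
	"fmt"

	"github.com/GoogleContainerTools/kaniko/pkg/creds"
	"github.com/google/go-containerregistry/pkg/name"
)

func main() {
	reg, err := name.NewRegistry("gcr.io", name.WeakValidation)
	if err != nil {
		panic(err)
	}
	// Resolve auth for the registry via the process-wide keychain.
	auth, err := creds.GetKeychain().Resolve(reg)
	if err != nil {
		panic(err)
	}
	fmt.Println(auth.Authorization())
}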
@@ -20,13 +20,13 @@ import (
	"bytes"
	"fmt"
	"io/ioutil"
	"path/filepath"
	"net/http"
	"regexp"
	"strconv"
	"strings"

	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/pkg/util"
	"github.com/docker/docker/builder/dockerignore"
	"github.com/moby/buildkit/frontend/dockerfile/instructions"
	"github.com/moby/buildkit/frontend/dockerfile/parser"
	"github.com/pkg/errors"

@@ -34,10 +34,23 @@ import (

// Stages parses a Dockerfile and returns an array of KanikoStage
func Stages(opts *config.KanikoOptions) ([]config.KanikoStage, error) {
	d, err := ioutil.ReadFile(opts.DockerfilePath)
	var err error
	var d []uint8
	match, _ := regexp.MatchString("^https?://", opts.DockerfilePath)
	if match {
		response, e := http.Get(opts.DockerfilePath)
		if e != nil {
			return nil, e
		}
		d, err = ioutil.ReadAll(response.Body)
	} else {
		d, err = ioutil.ReadFile(opts.DockerfilePath)
	}

	if err != nil {
		return nil, errors.Wrap(err, fmt.Sprintf("reading dockerfile at path %s", opts.DockerfilePath))
	}

	stages, metaArgs, err := Parse(d)
	if err != nil {
		return nil, errors.Wrap(err, "parsing dockerfile")

@@ -126,6 +139,7 @@ func resolveStages(stages []instructions.Stage) {
			if val, ok := nameToIndex[c.From]; ok {
				c.From = val
			}

		}
	}
}

@@ -172,20 +186,3 @@ func saveStage(index int, stages []instructions.Stage) bool {
	}
	return false
}

// DockerignoreExists returns true if .dockerignore exists in the source context
func DockerignoreExists(opts *config.KanikoOptions) bool {
	path := filepath.Join(opts.SrcContext, ".dockerignore")
	return util.FilepathExists(path)
}

// ParseDockerignore returns a list of all paths in .dockerignore
func ParseDockerignore(opts *config.KanikoOptions) ([]string, error) {
	path := filepath.Join(opts.SrcContext, ".dockerignore")
	contents, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, errors.Wrap(err, "parsing .dockerignore")
	}
	reader := bytes.NewBuffer(contents)
	return dockerignore.ReadAll(reader)
}
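The scheme check in Stages above is just a regexp on the Dockerfile path. A compact, self-contained sketch of the same pattern, with a hypothetical URL:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"regexp"
)

// readDockerfile fetches the Dockerfile over HTTP(S) when the path looks
// like a URL, and falls back to the local filesystem otherwise.
func readDockerfile(path string) ([]byte, error) {
	if ok, _ := regexp.MatchString("^https?://", path); ok {
		resp, err := http.Get(path)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return ioutil.ReadAll(resp.Body)
	}
	return ioutil.ReadFile(path)
}

func main() {
	d, err := readDockerfile("https://example.com/Dockerfile") // hypothetical URL
	fmt.Println(len(d), err)
}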
@@ -23,10 +23,15 @@ import (
	"strconv"
	"time"

	"github.com/google/go-containerregistry/pkg/v1/partial"

	"github.com/moby/buildkit/frontend/dockerfile/instructions"

	"golang.org/x/sync/errgroup"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
	"github.com/pkg/errors"

@@ -38,6 +43,7 @@ import (
	"github.com/GoogleContainerTools/kaniko/pkg/constants"
	"github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
	"github.com/GoogleContainerTools/kaniko/pkg/snapshot"
	"github.com/GoogleContainerTools/kaniko/pkg/timing"
	"github.com/GoogleContainerTools/kaniko/pkg/util"
)

@@ -60,13 +66,16 @@ func newStageBuilder(opts *config.KanikoOptions, stage config.KanikoStage) (*sta
	if err != nil {
		return nil, err
	}
	imageConfig, err := util.RetrieveConfigFile(sourceImage)

	imageConfig, err := initializeConfig(sourceImage)
	if err != nil {
		return nil, err
	}

	if err := resolveOnBuild(&stage, &imageConfig.Config); err != nil {
		return nil, err
	}

	hasher, err := getHasher(opts.SnapshotMode)
	if err != nil {
		return nil, err

@@ -88,6 +97,18 @@ func newStageBuilder(opts *config.KanikoOptions, stage config.KanikoStage) (*sta
	}, nil
}

func initializeConfig(img partial.WithConfigFile) (*v1.ConfigFile, error) {
	imageConfig, err := img.ConfigFile()
	if err != nil {
		return nil, err
	}

	if img == empty.Image {
		imageConfig.Config.Env = constants.ScratchEnvVars
	}
	return imageConfig, nil
}

func (s *stageBuilder) optimize(compositeKey CompositeCache, cfg v1.Config, cmds []commands.DockerCommand, args *dockerfile.BuildArgs) error {
	if !s.opts.Cache {
		return nil

@@ -124,7 +145,9 @@ func (s *stageBuilder) optimize(compositeKey CompositeCache, cfg v1.Config, cmds
		if command.ShouldCacheOutput() {
			img, err := layerCache.RetrieveLayer(ck)
			if err != nil {
				logrus.Debugf("Failed to retrieve layer: %s", err)
				logrus.Infof("No cached layer found for cmd %s", command.String())
				logrus.Debugf("Key missing was: %s", compositeKey.Key())
				break
			}

@@ -146,7 +169,11 @@ func (s *stageBuilder) optimize(compositeKey CompositeCache, cfg v1.Config, cmds

func (s *stageBuilder) build() error {
	// Set the initial cache key to be the base image digest, the build args and the SrcContext.
	compositeKey := NewCompositeCache(s.baseImageDigest)
	dgst, err := util.ReproducibleDigest(s.image)
	if err != nil {
		return err
	}
	compositeKey := NewCompositeCache(dgst)
	compositeKey.AddKey(s.opts.BuildArgs...)

	cmds := []commands.DockerCommand{}

@@ -179,17 +206,21 @@ func (s *stageBuilder) build() error {
		}
	}
	if shouldUnpack {
		t := timing.Start("FS Unpacking")
		if _, err := util.GetFSFromImage(constants.RootDir, s.image); err != nil {
			return err
		}
		timing.DefaultRun.Stop(t)
	}
	if err := util.DetectFilesystemWhitelist(constants.WhitelistPath); err != nil {
		return err
	}
	// Take initial snapshot
	t := timing.Start("Initial FS snapshot")
	if err := s.snapshotter.Init(); err != nil {
		return err
	}
	timing.DefaultRun.Stop(t)

	cacheGroup := errgroup.Group{}
	for index, command := range cmds {

@@ -199,7 +230,7 @@ func (s *stageBuilder) build() error {

		// Add the next command to the cache key.
		compositeKey.AddKey(command.String())

		t := timing.Start("Command: " + command.String())
		// If the command uses files from the context, add them.
		files, err := command.FilesUsedFromContext(&s.cf.Config, args)
		if err != nil {

@@ -216,6 +247,7 @@ func (s *stageBuilder) build() error {
			return err
		}
		files = command.FilesToSnapshot()
		timing.DefaultRun.Stop(t)

		if !s.shouldTakeSnapshot(index, files) {
			continue

@@ -247,30 +279,36 @@ func (s *stageBuilder) build() error {
}

func (s *stageBuilder) takeSnapshot(files []string) (string, error) {
	var snapshot string
	var err error
	t := timing.Start("Snapshotting FS")
	if files == nil || s.opts.SingleSnapshot {
		return s.snapshotter.TakeSnapshotFS()
		snapshot, err = s.snapshotter.TakeSnapshotFS()
	} else {
		// Volumes are very weird. They get created in their command, but snapshotted in the next one.
		// Add them to the list of files to snapshot.
		for v := range s.cf.Config.Volumes {
			files = append(files, v)
		}
		snapshot, err = s.snapshotter.TakeSnapshot(files)
	}
	// Volumes are very weird. They get created in their command, but snapshotted in the next one.
	// Add them to the list of files to snapshot.
	for v := range s.cf.Config.Volumes {
		files = append(files, v)
	}
	return s.snapshotter.TakeSnapshot(files)
	timing.DefaultRun.Stop(t)
	return snapshot, err
}

func (s *stageBuilder) shouldTakeSnapshot(index int, files []string) bool {
	isLastCommand := index == len(s.stage.Commands)-1

	// We only snapshot the very end of intermediate stages.
	if !s.stage.Final {
		return isLastCommand
	}

	// We only snapshot the very end with single snapshot mode on.
	if s.opts.SingleSnapshot {
		return isLastCommand
	}

	// Always take snapshots if we're using the cache.
	if s.opts.Cache {
		return true
	}

	// nil means snapshot everything.
	if files == nil {
		return true

@@ -280,6 +318,7 @@ func (s *stageBuilder) shouldTakeSnapshot(index int, files []string) bool {
	if len(files) == 0 {
		return false
	}

	return true
}

@@ -315,11 +354,19 @@ func (s *stageBuilder) saveSnapshotToImage(createdBy string, tarPath string) err

// DoBuild executes building the Dockerfile
func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
	t := timing.Start("Total Build Time")
	// Parse dockerfile and unpack base image to root
	stages, err := dockerfile.Stages(opts)
	if err != nil {
		return nil, err
	}
	if err := util.GetExcludedFiles(opts.SrcContext); err != nil {
		return nil, err
	}
	// Some stages may refer to other random images, not previous stages
	if err := fetchExtraStages(stages, opts); err != nil {
		return nil, err
	}
	for index, stage := range stages {
		sb, err := newStageBuilder(opts, stage)
		if err != nil {

@@ -349,13 +396,14 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
				return nil, err
			}
		}
		timing.DefaultRun.Stop(t)
		return sourceImage, nil
	}
	if stage.SaveStage {
		if err := saveStageAsTarball(index, sourceImage); err != nil {
		if err := saveStageAsTarball(strconv.Itoa(index), sourceImage); err != nil {
			return nil, err
		}
		if err := extractImageToDependecyDir(index, sourceImage); err != nil {
		if err := extractImageToDependecyDir(strconv.Itoa(index), sourceImage); err != nil {
			return nil, err
		}
	}

@@ -364,11 +412,60 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
		return nil, err
	}
}

	return nil, err
}

func extractImageToDependecyDir(index int, image v1.Image) error {
	dependencyDir := filepath.Join(constants.KanikoDir, strconv.Itoa(index))
func fetchExtraStages(stages []config.KanikoStage, opts *config.KanikoOptions) error {
	t := timing.Start("Fetching Extra Stages")
	defer timing.DefaultRun.Stop(t)

	var names = []string{}

	for stageIndex, s := range stages {
		for _, cmd := range s.Commands {
			c, ok := cmd.(*instructions.CopyCommand)
			if !ok || c.From == "" {
				continue
			}

			// FROMs at this point are guaranteed to be either an integer referring to a previous stage,
			// the name of a previous stage, or a name of a remote image.

			// If it is an integer stage index, validate that it is actually a previous index
			if fromIndex, err := strconv.Atoi(c.From); err == nil && stageIndex > fromIndex && fromIndex >= 0 {
				continue
			}
			// Check if the name is the alias of a previous stage
			for _, name := range names {
				if name == c.From {
					continue
				}
			}
			// This must be an image name, fetch it.
			logrus.Debugf("Found extra base image stage %s", c.From)
			sourceImage, err := util.RetrieveRemoteImage(c.From, opts)
			if err != nil {
				return err
			}
			if err := saveStageAsTarball(c.From, sourceImage); err != nil {
				return err
			}
			if err := extractImageToDependecyDir(c.From, sourceImage); err != nil {
				return err
			}
		}
		// Store the name of the current stage in the list with names, if applicable.
		if s.Name != "" {
			names = append(names, s.Name)
		}
	}
	return nil
}
func extractImageToDependecyDir(name string, image v1.Image) error {
	t := timing.Start("Extracting Image to Dependency Dir")
	defer timing.DefaultRun.Stop(t)
	dependencyDir := filepath.Join(constants.KanikoDir, name)
	if err := os.MkdirAll(dependencyDir, 0755); err != nil {
		return err
	}

@@ -377,16 +474,18 @@ func extractImageToDependecyDir(index int, image v1.Image) error {
	return err
}

func saveStageAsTarball(stageIndex int, image v1.Image) error {
func saveStageAsTarball(path string, image v1.Image) error {
	t := timing.Start("Saving stage as tarball")
	defer timing.DefaultRun.Stop(t)
	destRef, err := name.NewTag("temp/tag", name.WeakValidation)
	if err != nil {
		return err
	}
	if err := os.MkdirAll(constants.KanikoIntermediateStagesDir, 0750); err != nil {
	tarPath := filepath.Join(constants.KanikoIntermediateStagesDir, path)
	logrus.Infof("Storing source image from stage %s at path %s", path, tarPath)
	if err := os.MkdirAll(filepath.Dir(tarPath), 0750); err != nil {
		return err
	}
	tarPath := filepath.Join(constants.KanikoIntermediateStagesDir, strconv.Itoa(stageIndex))
	logrus.Infof("Storing source image from stage %d at path %s", stageIndex, tarPath)
	return tarball.WriteToFile(tarPath, destRef, image)
}
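The cache-key logic in build() seeds one key from the reproducible base-image digest plus the build args, then folds in each command string. A toy illustration of that composite-key idea (this is not the real CompositeCache, just a sketch of the invalidation property):

package main

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// compositeKey folds every build arg and command string into one hash,
// so any change upstream invalidates every later layer key.
type compositeKey struct{ keys []string }

func (c *compositeKey) AddKey(ks ...string) { c.keys = append(c.keys, ks...) }

func (c *compositeKey) Key() string {
	sum := sha256.Sum256([]byte(strings.Join(c.keys, "")))
	return fmt.Sprintf("%x", sum)
}

func main() {
	k := &compositeKey{}
	k.AddKey("sha256:basedigest") // hypothetical base image digest
	k.AddKey("ARG=1")             // build args
	k.AddKey("RUN apt-get update")
	fmt.Println(k.Key())
}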
|
@ -19,6 +19,8 @@ package executor
|
|||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/moby/buildkit/frontend/dockerfile/instructions"
|
||||
|
||||
"github.com/GoogleContainerTools/kaniko/pkg/config"
|
||||
"github.com/GoogleContainerTools/kaniko/pkg/dockerfile"
|
||||
"github.com/GoogleContainerTools/kaniko/testutil"
|
||||
|
|
@ -74,3 +76,107 @@ func stage(t *testing.T, d string) config.KanikoStage {
|
|||
Stage: stages[0],
|
||||
}
|
||||
}
|
||||
|
||||
type MockCommand struct {
|
||||
name string
|
||||
}
|
||||
|
||||
func (m *MockCommand) Name() string {
|
||||
return m.name
|
||||
}
|
||||
|
||||
func Test_stageBuilder_shouldTakeSnapshot(t *testing.T) {
|
||||
commands := []instructions.Command{
|
||||
&MockCommand{name: "command1"},
|
||||
&MockCommand{name: "command2"},
|
||||
&MockCommand{name: "command3"},
|
||||
}
|
||||
|
||||
stage := instructions.Stage{
|
||||
Commands: commands,
|
||||
}
|
||||
|
||||
type fields struct {
|
||||
stage config.KanikoStage
|
||||
opts *config.KanikoOptions
|
||||
}
|
||||
type args struct {
|
||||
index int
|
||||
files []string
|
||||
}
|
||||
tests := []struct {
|
||||
name string
|
||||
fields fields
|
||||
args args
|
||||
want bool
|
||||
}{
|
||||
{
|
||||
name: "final stage not last command",
|
||||
fields: fields{
|
||||
stage: config.KanikoStage{
|
||||
Final: true,
|
||||
Stage: stage,
|
||||
},
|
||||
},
|
||||
args: args{
|
||||
index: 1,
|
||||
},
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "not final stage last command",
|
||||
fields: fields{
|
||||
stage: config.KanikoStage{
|
||||
Final: false,
|
||||
Stage: stage,
|
||||
},
|
||||
},
|
||||
args: args{
|
||||
index: len(commands) - 1,
|
||||
},
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "not final stage not last command",
|
||||
fields: fields{
|
||||
stage: config.KanikoStage{
|
||||
Final: false,
|
||||
Stage: stage,
|
||||
},
|
||||
},
|
||||
args: args{
|
||||
index: 0,
|
||||
},
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "caching enabled intermediate container",
|
||||
fields: fields{
|
||||
stage: config.KanikoStage{
|
||||
Final: false,
|
||||
Stage: stage,
|
||||
},
|
||||
opts: &config.KanikoOptions{Cache: true},
|
||||
},
|
||||
args: args{
|
||||
index: 0,
|
||||
},
|
||||
want: true,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
|
||||
if tt.fields.opts == nil {
|
||||
tt.fields.opts = &config.KanikoOptions{}
|
||||
}
|
||||
s := &stageBuilder{
|
||||
stage: tt.fields.stage,
|
||||
opts: tt.fields.opts,
|
||||
}
|
||||
if got := s.shouldTakeSnapshot(tt.args.index, tt.args.files); got != tt.want {
|
||||
t.Errorf("stageBuilder.shouldTakeSnapshot() = %v, want %v", got, tt.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -20,13 +20,14 @@ import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"

	"github.com/GoogleContainerTools/kaniko/pkg/cache"
	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/pkg/constants"
	"github.com/GoogleContainerTools/kaniko/pkg/creds"
	"github.com/GoogleContainerTools/kaniko/pkg/timing"
	"github.com/GoogleContainerTools/kaniko/pkg/version"
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/authn/k8schain"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"

@@ -52,6 +53,7 @@ func DoPush(image v1.Image, opts *config.KanikoOptions) error {
		logrus.Info("Skipping push to container registry due to --no-push flag")
		return nil
	}
	t := timing.Start("Total Push Time")
	destRefs := []name.Tag{}
	for _, destination := range opts.Destinations {
		destRef, err := name.NewTag(destination, name.WeakValidation)

@@ -71,27 +73,23 @@ func DoPush(image v1.Image, opts *config.KanikoOptions) error {

	// continue pushing unless an error occurs
	for _, destRef := range destRefs {
		if opts.Insecure {
			newReg, err := name.NewInsecureRegistry(destRef.Repository.Registry.Name(), name.WeakValidation)
		registryName := destRef.Repository.Registry.Name()
		if opts.Insecure || opts.InsecureRegistries.Contains(registryName) {
			newReg, err := name.NewInsecureRegistry(registryName, name.WeakValidation)
			if err != nil {
				return errors.Wrap(err, "getting new insecure registry")
			}
			destRef.Repository.Registry = newReg
		}

		k8sc, err := k8schain.NewNoClient()
		if err != nil {
			return errors.Wrap(err, "getting k8schain client")
		}
		kc := authn.NewMultiKeychain(authn.DefaultKeychain, k8sc)
		pushAuth, err := kc.Resolve(destRef.Context().Registry)
		pushAuth, err := creds.GetKeychain().Resolve(destRef.Context().Registry)
		if err != nil {
			return errors.Wrap(err, "resolving pushAuth")
		}

		// Create a transport to set our user-agent.
		tr := http.DefaultTransport
		if opts.SkipTLSVerify {
		if opts.SkipTLSVerify || opts.SkipTLSVerifyRegistries.Contains(registryName) {
			tr.(*http.Transport).TLSClientConfig = &tls.Config{
				InsecureSkipVerify: true,
			}

@@ -102,6 +100,7 @@ func DoPush(image v1.Image, opts *config.KanikoOptions) error {
			return errors.Wrap(err, fmt.Sprintf("failed to push to destination %s", destRef))
		}
	}
	timing.DefaultRun.Stop(t)
	return nil
}

@@ -118,6 +117,11 @@ func pushLayerToCache(opts *config.KanikoOptions, cacheKey string, tarPath strin
	}
	logrus.Infof("Pushing layer %s to cache now", cache)
	empty := empty.Image
	empty, err = mutate.CreatedAt(empty, v1.Time{Time: time.Now()})
	if err != nil {
		return errors.Wrap(err, "setting empty image created time")
	}

	empty, err = mutate.Append(empty,
		mutate.Addendum{
			Layer: layer,

@@ -132,5 +136,7 @@ func pushLayerToCache(opts *config.KanikoOptions, cacheKey string, tarPath strin
	}
	cacheOpts := *opts
	cacheOpts.Destinations = []string{cache}
	cacheOpts.InsecureRegistries = opts.InsecureRegistries
	cacheOpts.SkipTLSVerifyRegistries = opts.SkipTLSVerifyRegistries
	return DoPush(empty, &cacheOpts)
}
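The per-registry decision in DoPush, where either the global flag or a per-registry entry disables certificate verification, boils down to a small transport helper. A minimal sketch of that logic; the registry name and option values are hypothetical:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// transportFor disables TLS verification when the global flag is set or
// when the registry appears in the skip list, mirroring the DoPush check.
func transportFor(registry string, skipAll bool, skipRegistries []string) *http.Transport {
	tr := http.DefaultTransport.(*http.Transport)
	skip := skipAll
	for _, r := range skipRegistries {
		if r == registry {
			skip = true
		}
	}
	if skip {
		tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
	}
	return tr
}

func main() {
	tr := transportFor("my-registry.local:5000", false, []string{"my-registry.local:5000"})
	fmt.Println(tr.TLSClientConfig != nil) // true: verification skipped for this registry
}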
@@ -23,6 +23,7 @@ import (
	"path/filepath"
	"strings"

	"github.com/GoogleContainerTools/kaniko/pkg/timing"
	"github.com/GoogleContainerTools/kaniko/pkg/util"
)

@@ -124,6 +125,8 @@ func (l *LayeredMap) Add(s string) error {
// was added.
func (l *LayeredMap) MaybeAdd(s string) (bool, error) {
	oldV, ok := l.Get(s)
	t := timing.Start("Hashing files")
	defer timing.DefaultRun.Stop(t)
	newV, err := l.hasher(s)
	if err != nil {
		return false, err
@@ -19,10 +19,13 @@ package snapshot
import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"syscall"

	"github.com/GoogleContainerTools/kaniko/pkg/timing"

	"github.com/karrick/godirwalk"

	"github.com/GoogleContainerTools/kaniko/pkg/constants"

	"github.com/GoogleContainerTools/kaniko/pkg/util"

@@ -76,34 +79,34 @@ func (s *Snapshotter) TakeSnapshot(files []string) (string, error) {
	defer t.Close()

	// First add to the tar any parent directories that haven't been added
	parentDirs := []string{}
	parentDirs := map[string]struct{}{}
	for _, file := range files {
		parents := util.ParentDirectories(file)
		parentDirs = append(parentDirs, parents...)
	}
	for _, file := range parentDirs {
		file = filepath.Clean(file)
		if val, ok := snapshottedFiles[file]; ok && val {
			continue
		for _, p := range util.ParentDirectories(file) {
			parentDirs[p] = struct{}{}
		}
	}
	for file := range parentDirs {
		file = filepath.Clean(file)
		snapshottedFiles[file] = true

		// The parent directory might already be in a previous layer.
		fileAdded, err := s.l.MaybeAdd(file)
		if err != nil {
			return "", fmt.Errorf("Unable to add parent dir %s to layered map: %s", file, err)
		}

		if fileAdded {
			err = t.AddFileToTar(file)
			if err != nil {
			if err = t.AddFileToTar(file); err != nil {
				return "", fmt.Errorf("Error adding parent dir %s to tar: %s", file, err)
			}
		}
	}

	// Next add the files themselves to the tar
	for _, file := range files {
		// We might have already added the file above as a parent directory of another file.
		file = filepath.Clean(file)
		if val, ok := snapshottedFiles[file]; ok && val {
		if _, ok := snapshottedFiles[file]; ok {
			continue
		}
		snapshottedFiles[file] = true

@@ -140,12 +143,25 @@ func (s *Snapshotter) TakeSnapshotFS() (string, error) {
	t := util.NewTar(f)
	defer t.Close()

	timer := timing.Start("Walking filesystem")
	// Save the fs state in a map to iterate over later.
	memFs := map[string]os.FileInfo{}
	filepath.Walk(s.directory, func(path string, info os.FileInfo, err error) error {
		memFs[path] = info
		return nil
	})
	memFs := map[string]*godirwalk.Dirent{}
	godirwalk.Walk(s.directory, &godirwalk.Options{
		Callback: func(path string, ent *godirwalk.Dirent) error {
			if util.IsInWhitelist(path) {
				if util.IsDestDir(path) {
					logrus.Infof("Skipping paths under %s, as it is a whitelisted directory", path)
					return filepath.SkipDir
				}
				return nil
			}
			memFs[path] = ent
			return nil
		},
		Unsorted: true,
	},
	)
	timing.DefaultRun.Stop(timer)

	// First handle whiteouts
	for p := range memFs {

@@ -164,6 +180,7 @@ func (s *Snapshotter) TakeSnapshotFS() (string, error) {
		}
	}

	timer = timing.Start("Writing tar file")
	// Now create the tar.
	for path := range memFs {
		whitelisted, err := util.CheckWhitelist(path)

@@ -174,7 +191,6 @@ func (s *Snapshotter) TakeSnapshotFS() (string, error) {
			logrus.Debugf("Not adding %s to layer, as it's whitelisted", path)
			continue
		}

		// Only add to the tar if we add it to the layeredmap.
		maybeAdd, err := s.l.MaybeAdd(path)
		if err != nil {

@@ -187,6 +203,7 @@ func (s *Snapshotter) TakeSnapshotFS() (string, error) {
			}
		}
	}
	timing.DefaultRun.Stop(timer)

	return f.Name(), nil
}
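The walk-and-skip pattern above is what makes the godirwalk snapshotter fast: returning filepath.SkipDir from the callback prunes an entire whitelisted subtree instead of visiting every file under it. A minimal sketch, with a hypothetical skipped path:

package main

import (
	"fmt"
	"path/filepath"

	"github.com/karrick/godirwalk"
)

func main() {
	files := 0
	godirwalk.Walk("/tmp", &godirwalk.Options{
		Callback: func(path string, ent *godirwalk.Dirent) error {
			if path == "/tmp/skipme" { // hypothetical whitelisted directory
				return filepath.SkipDir // prune the whole subtree
			}
			files++
			return nil
		},
		Unsorted: true, // faster; ordering is not needed for a set of paths
	})
	fmt.Println(files)
}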
@@ -18,7 +18,7 @@ package timing

import (
	"bytes"
	"fmt"
	"encoding/json"
	"sync"
	"text/template"
	"time"

@@ -39,12 +39,11 @@ type TimedRun struct {
// Stop stops the specified timer and increments the time spent in that category.
func (tr *TimedRun) Stop(t *Timer) {
	stop := currentTimeFunc()
	tr.cl.Lock()
	defer tr.cl.Unlock()
	if _, ok := tr.categories[t.category]; !ok {
		tr.categories[t.category] = 0
	}
	fmt.Println(stop)
	tr.cl.Lock()
	defer tr.cl.Unlock()
	tr.categories[t.category] += stop.Sub(t.startTime)
}

@@ -79,6 +78,10 @@ func Summary() string {
	return DefaultRun.Summary()
}

func JSON() (string, error) {
	return DefaultRun.JSON()
}

// Summary outputs a summary of the specified TimedRun.
func (tr *TimedRun) Summary() string {
	b := bytes.Buffer{}

@@ -88,3 +91,11 @@ func (tr *TimedRun) Summary() string {
	DefaultFormat.Execute(&b, tr.categories)
	return b.String()
}

func (tr *TimedRun) JSON() (string, error) {
	b, err := json.Marshal(tr.categories)
	if err != nil {
		return "", err
	}
	return string(b), nil
}
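A minimal sketch of the timing API as used throughout this diff: Start returns a timer, Stop folds the elapsed time into a per-category total, and JSON/Summary render the totals. The category name here is hypothetical:

package main

import (
	"fmt"

	"github.com/GoogleContainerTools/kaniko/pkg/timing"
)

func main() {
	t := timing.Start("Example Work") // hypothetical category
	// ... do work ...
	timing.DefaultRun.Stop(t)

	out, err := timing.JSON()
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"Example Work": <nanoseconds>}
}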
@@ -186,7 +186,14 @@ func IsSrcsValid(srcsAndDest instructions.SourcesAndDest, resolvedSources []stri
	dest := srcsAndDest[len(srcsAndDest)-1]

	if !ContainsWildcards(srcs) {
		if len(srcs) > 1 && !IsDestDir(dest) {
		totalSrcs := 0
		for _, src := range srcs {
			if excludeFile(src, root) {
				continue
			}
			totalSrcs++
		}
		if totalSrcs > 1 && !IsDestDir(dest) {
			return errors.New("when specifying multiple sources in a COPY command, destination must be a directory and end in '/'")
		}
	}

@@ -216,7 +223,12 @@ func IsSrcsValid(srcsAndDest instructions.SourcesAndDest, resolvedSources []stri
		if err != nil {
			return err
		}
		totalFiles += len(files)
		for _, file := range files {
			if excludeFile(file, root) {
				continue
			}
			totalFiles++
		}
	}
	if totalFiles == 0 {
		return errors.New("copy failed: no source files specified")
@@ -249,11 +249,13 @@ func Test_MatchSources(t *testing.T) {
}

var isSrcValidTests = []struct {
	name            string
	srcsAndDest     []string
	resolvedSources []string
	shouldErr       bool
}{
	{
		name: "dest isn't directory",
		srcsAndDest: []string{
			"context/foo",
			"context/bar",

@@ -266,6 +268,7 @@ var isSrcValidTests = []struct {
		shouldErr: true,
	},
	{
		name: "dest is directory",
		srcsAndDest: []string{
			"context/foo",
			"context/bar",

@@ -278,6 +281,7 @@ var isSrcValidTests = []struct {
		shouldErr: false,
	},
	{
		name: "copy file to file",
		srcsAndDest: []string{
			"context/bar/bam",
			"dest",

@@ -288,16 +292,7 @@ var isSrcValidTests = []struct {
		shouldErr: false,
	},
	{
		srcsAndDest: []string{
			"context/foo",
			"dest",
		},
		resolvedSources: []string{
			"context/foo",
		},
		shouldErr: false,
	},
	{
		name: "copy files with wildcards to dir",
		srcsAndDest: []string{
			"context/foo",
			"context/b*",

@@ -310,6 +305,7 @@ var isSrcValidTests = []struct {
		shouldErr: false,
	},
	{
		name: "copy multiple files with wildcards to file",
		srcsAndDest: []string{
			"context/foo",
			"context/b*",

@@ -322,6 +318,7 @@ var isSrcValidTests = []struct {
		shouldErr: true,
	},
	{
		name: "copy two files to file, one of which doesn't exist",
		srcsAndDest: []string{
			"context/foo",
			"context/doesntexist*",

@@ -333,6 +330,7 @@ var isSrcValidTests = []struct {
		shouldErr: false,
	},
	{
		name: "copy dir to dest not specified as dir",
		srcsAndDest: []string{
			"context/",
			"dest",

@@ -343,6 +341,7 @@ var isSrcValidTests = []struct {
		shouldErr: false,
	},
	{
		name: "copy url to file",
		srcsAndDest: []string{
			testURL,
			"dest",

@@ -352,12 +351,43 @@ var isSrcValidTests = []struct {
		},
		shouldErr: false,
	},
	{
		name: "copy two srcs, one excluded, to file",
		srcsAndDest: []string{
			"ignore/foo",
			"ignore/bar",
			"dest",
		},
		resolvedSources: []string{
			"ignore/foo",
			"ignore/bar",
		},
		shouldErr: false,
	},
	{
		name: "copy two srcs, both excluded, to file",
		srcsAndDest: []string{
			"ignore/baz",
			"ignore/bar",
			"dest",
		},
		resolvedSources: []string{
			"ignore/baz",
			"ignore/bar",
		},
		shouldErr: true,
	},
}

func Test_IsSrcsValid(t *testing.T) {
	for _, test := range isSrcValidTests {
		err := IsSrcsValid(test.srcsAndDest, test.resolvedSources, buildContextPath)
		testutil.CheckError(t, test.shouldErr, err)
		t.Run(test.name, func(t *testing.T) {
			if err := GetExcludedFiles(buildContextPath); err != nil {
				t.Fatalf("error getting excluded files: %v", err)
			}
			err := IsSrcsValid(test.srcsAndDest, test.resolvedSources, buildContextPath)
			testutil.CheckError(t, test.shouldErr, err)
		})
	}
}
@@ -19,7 +19,9 @@ package util
import (
	"archive/tar"
	"bufio"
	"bytes"
	"io"
	"io/ioutil"
	"net/http"
	"os"
	"path/filepath"

@@ -27,10 +29,11 @@ import (
	"syscall"
	"time"

	"github.com/GoogleContainerTools/kaniko/pkg/constants"
	"github.com/docker/docker/builder/dockerignore"
	"github.com/docker/docker/pkg/fileutils"
	"github.com/google/go-containerregistry/pkg/v1"
	"github.com/pkg/errors"

	"github.com/GoogleContainerTools/kaniko/pkg/constants"
	"github.com/sirupsen/logrus"
)

@@ -59,6 +62,8 @@ var whitelist = []WhitelistEntry{
	},
}

var excluded []string

// GetFSFromImage extracts the layers of img to root
// It returns a list of all files extracted
func GetFSFromImage(root string, img v1.Image) ([]string, error) {

@@ -70,7 +75,10 @@ func GetFSFromImage(root string, img v1.Image) ([]string, error) {
	if err != nil {
		return nil, err
	}
	extractedFiles := []string{}

	// Store a map of files to their mtime. We need to set mtimes in a second pass because creating files
	// can change the mtime of a directory.
	extractedFiles := map[string]time.Time{}

	for i, l := range layers {
		logrus.Debugf("Extracting layer %d", i)

@@ -101,10 +109,17 @@ func GetFSFromImage(root string, img v1.Image) ([]string, error) {
			if err := extractFile(root, hdr, tr); err != nil {
				return nil, err
			}
			extractedFiles = append(extractedFiles, filepath.Join(root, filepath.Clean(hdr.Name)))
			extractedFiles[filepath.Join(root, filepath.Clean(hdr.Name))] = hdr.ModTime
		}
	}
	return extractedFiles, nil

	fileNames := []string{}
	for f, t := range extractedFiles {
		fileNames = append(fileNames, f)
		os.Chtimes(f, time.Time{}, t)
	}

	return fileNames, nil
}

// DeleteFilesystem deletes the extracted image file system

@@ -245,8 +260,8 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
				return errors.Wrapf(err, "error removing %s to make way for new link", hdr.Name)
			}
		}

		if err := os.Link(filepath.Clean(filepath.Join("/", hdr.Linkname)), path); err != nil {
		link := filepath.Clean(filepath.Join(dest, hdr.Linkname))
		if err := os.Link(link, path); err != nil {
			return err
		}

@@ -267,9 +282,19 @@ func extractFile(dest string, hdr *tar.Header, tr io.Reader) error {
			return err
		}
	}

	return nil
}

func IsInWhitelist(path string) bool {
	for _, wl := range whitelist {
		if !wl.PrefixMatchOnly && path == wl.Path {
			return true
		}
	}
	return false
}

func CheckWhitelist(path string) (bool, error) {
	abs, err := filepath.Abs(path)
	if err != nil {

@@ -368,7 +393,8 @@ func RelativeFiles(fp string, root string) ([]string, error) {
}

// ParentDirectories returns a list of paths to all parent directories
// Ex. /some/temp/dir -> [/, /some, /some/temp, /some/temp/dir]
// Ex. /some/temp/dir -> [/some, /some/temp, /some/temp/dir]
// This purposefully excludes the /.
func ParentDirectories(path string) []string {
	path = filepath.Clean(path)
	dirs := strings.Split(path, "/")

@@ -453,7 +479,7 @@ func DownloadFileToDest(rawurl, dest string) error {

// CopyDir copies the file or directory at src to dest
// It returns a list of files it copied over
func CopyDir(src, dest string) ([]string, error) {
func CopyDir(src, dest, buildcontext string) ([]string, error) {
	files, err := RelativeFiles("", src)
	if err != nil {
		return nil, err

@@ -465,6 +491,10 @@ func CopyDir(src, dest, buildcontext string) ([]string, error) {
		if err != nil {
			return nil, err
		}
		if excludeFile(fullPath, buildcontext) {
			logrus.Debugf("%s found in .dockerignore, ignoring", src)
			continue
		}
		destPath := filepath.Join(dest, file)
		if fi.IsDir() {
			logrus.Debugf("Creating directory %s", destPath)

@@ -480,12 +510,12 @@ func CopyDir(src, dest, buildcontext string) ([]string, error) {
			}
		} else if fi.Mode()&os.ModeSymlink != 0 {
			// If file is a symlink, we want to create the same relative symlink
			if err := CopySymlink(fullPath, destPath); err != nil {
			if _, err := CopySymlink(fullPath, destPath, buildcontext); err != nil {
				return nil, err
			}
		} else {
			// ... Else, we want to copy over a file
			if err := CopyFile(fullPath, destPath); err != nil {
			if _, err := CopyFile(fullPath, destPath, buildcontext); err != nil {
				return nil, err
			}
		}

@@ -495,33 +525,78 @@ func CopyDir(src, dest, buildcontext string) ([]string, error) {
}

// CopySymlink copies the symlink at src to dest
func CopySymlink(src, dest string) error {
func CopySymlink(src, dest, buildcontext string) (bool, error) {
	if excludeFile(src, buildcontext) {
		logrus.Debugf("%s found in .dockerignore, ignoring", src)
		return true, nil
	}
	link, err := os.Readlink(src)
	if err != nil {
		return err
		return false, err
	}
	linkDst := filepath.Join(dest, link)
	return os.Symlink(linkDst, dest)
	if FilepathExists(dest) {
		if err := os.RemoveAll(dest); err != nil {
			return false, err
		}
	}
	return false, os.Symlink(link, dest)
}

// CopyFile copies the file at src to dest
func CopyFile(src, dest string) error {
func CopyFile(src, dest, buildcontext string) (bool, error) {
	if excludeFile(src, buildcontext) {
		logrus.Debugf("%s found in .dockerignore, ignoring", src)
		return true, nil
	}
	fi, err := os.Stat(src)
	if err != nil {
		return err
		return false, err
	}
	logrus.Debugf("Copying file %s to %s", src, dest)
	srcFile, err := os.Open(src)
	if err != nil {
		return err
		return false, err
	}
	defer srcFile.Close()
	uid := fi.Sys().(*syscall.Stat_t).Uid
	gid := fi.Sys().(*syscall.Stat_t).Gid
	return CreateFile(dest, srcFile, fi.Mode(), uid, gid)
	return false, CreateFile(dest, srcFile, fi.Mode(), uid, gid)
}

// HasFilepathPrefix checks if the given file path begins with prefix
// GetExcludedFiles gets a list of files to exclude from the .dockerignore
func GetExcludedFiles(buildcontext string) error {
	path := filepath.Join(buildcontext, ".dockerignore")
	if !FilepathExists(path) {
		return nil
	}
	contents, err := ioutil.ReadFile(path)
	if err != nil {
		return errors.Wrap(err, "parsing .dockerignore")
	}
	reader := bytes.NewBuffer(contents)
	excluded, err = dockerignore.ReadAll(reader)
	return err
}

// excludeFile returns true if the .dockerignore specified this file should be ignored
func excludeFile(path, buildcontext string) bool {
	if HasFilepathPrefix(path, buildcontext, false) {
		var err error
		path, err = filepath.Rel(buildcontext, path)
		if err != nil {
			logrus.Errorf("unable to get relative path, including %s in build: %v", path, err)
			return false
		}
	}
	match, err := fileutils.Matches(path, excluded)
	if err != nil {
		logrus.Errorf("error matching, including %s in build: %v", path, err)
		return false
	}
	return match
}

// HasFilepathPrefix checks if the given file path begins with prefix
func HasFilepathPrefix(path, prefix string, prefixMatchOnly bool) bool {
	path = filepath.Clean(path)
	prefix = filepath.Clean(prefix)
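The .dockerignore matching done by excludeFile above reduces to making the path relative to the build context and checking it against the parsed patterns with fileutils.Matches. A minimal sketch; the patterns and path are hypothetical:

package main

import (
	"fmt"

	"github.com/docker/docker/pkg/fileutils"
)

func main() {
	// Patterns as they would come out of dockerignore.ReadAll.
	excluded := []string{"vendor/**", "*.log"}
	match, err := fileutils.Matches("vendor/github.com/foo/bar.go", excluded)
	if err != nil {
		panic(err)
	}
	fmt.Println(match) // true: this path would be ignored during COPY
}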
@@ -367,7 +367,7 @@ func filesAreHardlinks(first, second string) checker {
		if err != nil {
			t.Fatalf("error getting file %s", first)
		}
		fi2, err := os.Stat(filepath.Join(second))
		fi2, err := os.Stat(filepath.Join(root, second))
		if err != nil {
			t.Fatalf("error getting file %s", second)
		}

@@ -499,11 +499,11 @@ func TestExtractFile(t *testing.T) {
			tmpdir: "/tmp/hardlink",
			hdrs: []*tar.Header{
				fileHeader("/bin/gzip", "gzip-binary", 0751),
				hardlinkHeader("/bin/uncompress", "/tmp/hardlink/bin/gzip"),
				hardlinkHeader("/bin/uncompress", "/bin/gzip"),
			},
			checkers: []checker{
				fileExists("/bin/gzip"),
				filesAreHardlinks("/bin/uncompress", "/tmp/hardlink/bin/gzip"),
				filesAreHardlinks("/bin/uncompress", "/bin/gzip"),
			},
		},
	}

@@ -536,3 +536,63 @@ func TestExtractFile(t *testing.T) {
		})
	}
}

func TestCopySymlink(t *testing.T) {
	type tc struct {
		name       string
		linkTarget string
		dest       string
		beforeLink func(r string) error
	}

	tcs := []tc{{
		name:       "absolute symlink",
		linkTarget: "/abs/dest",
	}, {
		name:       "relative symlink",
		linkTarget: "rel",
	}, {
		name:       "symlink copy overwrites existing file",
		linkTarget: "/abs/dest",
		dest:       "overwrite_me",
		beforeLink: func(r string) error {
			return ioutil.WriteFile(filepath.Join(r, "overwrite_me"), nil, 0644)
		},
	}}

	for _, tc := range tcs {
		t.Run(tc.name, func(t *testing.T) {
			tc := tc
			t.Parallel()
			r, err := ioutil.TempDir("", "")
			if err != nil {
				t.Fatal(err)
			}
			defer os.RemoveAll(r)

			if tc.beforeLink != nil {
				if err := tc.beforeLink(r); err != nil {
					t.Fatal(err)
				}
			}
			link := filepath.Join(r, "link")
			dest := filepath.Join(r, "copy")
			if tc.dest != "" {
				dest = filepath.Join(r, tc.dest)
			}
			if err := os.Symlink(tc.linkTarget, link); err != nil {
				t.Fatal(err)
			}
			if _, err := CopySymlink(link, dest, ""); err != nil {
				t.Fatal(err)
			}
			got, err := os.Readlink(dest)
			if err != nil {
				t.Fatalf("error reading link %s: %s", link, err)
			}
			if got != tc.linkTarget {
				t.Errorf("link target does not match: %s != %s", got, tc.linkTarget)
			}
		})
	}
}
@@ -23,12 +23,13 @@ import (
	"path/filepath"
	"strconv"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/authn/k8schain"
	"github.com/GoogleContainerTools/kaniko/pkg/timing"

	"github.com/GoogleContainerTools/kaniko/pkg/creds"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/partial"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
	"github.com/sirupsen/logrus"

@@ -39,13 +40,15 @@ import (
)

var (
	// For testing
	retrieveRemoteImage = remoteImage
	// RetrieveRemoteImage downloads an image from a remote location
	RetrieveRemoteImage = remoteImage
	retrieveTarImage    = tarballImage
)

// RetrieveSourceImage returns the base image of the stage at index
func RetrieveSourceImage(stage config.KanikoStage, opts *config.KanikoOptions) (v1.Image, error) {
	t := timing.Start("Retrieving Source Image")
	defer timing.DefaultRun.Stop(t)
	buildArgs := opts.BuildArgs
	var metaArgsString []string
	for _, arg := range stage.MetaArgs {

@@ -67,7 +70,7 @@ func RetrieveSourceImage(stage config.KanikoStage, opts *config.KanikoOptions) (
		return retrieveTarImage(stage.BaseImageIndex)
	}

	// Next, check if local caching is enabled
	// Finally, check if local caching is enabled
	// If so, look in the local cache before trying the remote registry
	if opts.Cache && opts.CacheDir != "" {
		cachedImage, err := cachedImage(opts, currentBaseName)

@@ -76,24 +79,12 @@ func RetrieveSourceImage(stage config.KanikoStage, opts *config.KanikoOptions) (
		}

		if err != nil {
			logrus.Warnf("Error while retrieving image from cache: %v", err)
			logrus.Infof("Error while retrieving image from cache: %v", err)
		}
	}

	// Otherwise, initialize image as usual
	return retrieveRemoteImage(currentBaseName, opts)
}

// RetrieveConfigFile returns the config file for an image
func RetrieveConfigFile(sourceImage partial.WithConfigFile) (*v1.ConfigFile, error) {
	imageConfig, err := sourceImage.ConfigFile()
	if err != nil {
		return nil, err
	}
	if sourceImage == empty.Image {
		imageConfig.Config.Env = constants.ScratchEnvVars
	}
	return imageConfig, nil
	return RetrieveRemoteImage(currentBaseName, opts)
}

func tarballImage(index int) (v1.Image, error) {

@@ -109,8 +100,9 @@ func remoteImage(image string, opts *config.KanikoOptions) (v1.Image, error) {
		return nil, err
	}

	if opts.InsecurePull {
		newReg, err := name.NewInsecureRegistry(ref.Context().RegistryStr(), name.WeakValidation)
	registryName := ref.Context().RegistryStr()
	if opts.InsecurePull || opts.InsecureRegistries.Contains(registryName) {
		newReg, err := name.NewInsecureRegistry(registryName, name.WeakValidation)
		if err != nil {
			return nil, err
		}

@@ -125,18 +117,13 @@ func remoteImage(image string, opts *config.KanikoOptions) (v1.Image, error) {
	}

	tr := http.DefaultTransport.(*http.Transport)
	if opts.SkipTLSVerifyPull {
	if opts.SkipTLSVerifyPull || opts.SkipTLSVerifyRegistries.Contains(registryName) {
		tr.TLSClientConfig = &tls.Config{
			InsecureSkipVerify: true,
		}
	}

	k8sc, err := k8schain.NewNoClient()
	if err != nil {
		return nil, err
	}
	kc := authn.NewMultiKeychain(authn.DefaultKeychain, k8sc)
	return remote.Image(ref, remote.WithTransport(tr), remote.WithAuthFromKeychain(kc))
	return remote.Image(ref, remote.WithTransport(tr), remote.WithAuthFromKeychain(creds.GetKeychain()))
}

func cachedImage(opts *config.KanikoOptions, image string) (v1.Image, error) {
@@ -47,14 +47,14 @@ func Test_StandardImage(t *testing.T) {
	if err != nil {
		t.Error(err)
	}
	original := retrieveRemoteImage
	original := RetrieveRemoteImage
	defer func() {
		retrieveRemoteImage = original
		RetrieveRemoteImage = original
	}()
	mock := func(image string, opts *config.KanikoOptions) (v1.Image, error) {
		return nil, nil
	}
	retrieveRemoteImage = mock
	RetrieveRemoteImage = mock
	actual, err := RetrieveSourceImage(config.KanikoStage{
		Stage: stages[0],
	}, &config.KanikoOptions{})
@@ -54,6 +54,7 @@ func (t *Tar) Close() {

// AddFileToTar adds the file at path p to the tar
func (t *Tar) AddFileToTar(p string) error {
	logrus.Debugf("Adding file %s to tar", p)
	i, err := os.Lstat(p)
	if err != nil {
		return fmt.Errorf("Failed to get file info for %s: %s", p, err)
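For context, a minimal standard-library sketch of what a method like AddFileToTar does: Lstat the path (so symlinks are not followed), emit a tar header derived from the FileInfo, then stream the contents of regular files. This is an illustration, not kaniko's implementation; link-target handling is elided:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
)

// addFileToTar writes one file system entry into the given tar writer.
func addFileToTar(w *tar.Writer, p string) error {
	fi, err := os.Lstat(p) // Lstat: describe the link itself, not its target
	if err != nil {
		return fmt.Errorf("failed to get file info for %s: %w", p, err)
	}
	hdr, err := tar.FileInfoHeader(fi, "") // link target handling elided
	if err != nil {
		return err
	}
	hdr.Name = p
	if err := w.WriteHeader(hdr); err != nil {
		return err
	}
	if !fi.Mode().IsRegular() {
		return nil // only regular files carry a data payload
	}
	f, err := os.Open(p)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(w, f)
	return err
}

func main() {
	f, err := os.Create("example.tar")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	if err := addFileToTar(tw, "main.go"); err != nil {
		fmt.Println(err)
	}
}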
@@ -20,11 +20,14 @@ import (
	"crypto/md5"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"io"
	"os"
	"strconv"
	"syscall"

	"github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/partial"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)
@@ -127,3 +130,29 @@ func SHA256(r io.Reader) (string, error) {
	}
	return hex.EncodeToString(hasher.Sum(make([]byte, 0, hasher.Size()))), nil
}

type ReproducibleManifest struct {
	Layers []v1.Descriptor
	Config v1.Config
}

func ReproducibleDigest(img partial.WithManifestAndConfigFile) (string, error) {
	mfst, err := img.Manifest()
	if err != nil {
		return "", err
	}
	cfg, err := img.ConfigFile()
	if err != nil {
		return "", err
	}
	rm := ReproducibleManifest{
		Layers: mfst.Layers,
		Config: cfg.Config,
	}

	b, err := json.Marshal(rm)
	if err != nil {
		return "", err
	}
	return string(b), nil
}
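ReproducibleDigest serializes only the layer descriptors and the container config, so two images that differ merely in, say, creation timestamps in the full config file produce the same string. One plausible use is as a cache key; a hypothetical helper, assuming it lives alongside ReproducibleDigest in the same util package (sha256 and hex are already imported per the hunk above):

package util

import (
	"crypto/sha256"
	"encoding/hex"

	"github.com/google/go-containerregistry/pkg/v1/partial"
)

// cacheKey is a hypothetical helper: hash the reproducible-manifest JSON
// into a fixed-size, registry-independent cache key.
func cacheKey(img partial.WithManifestAndConfigFile) (string, error) {
	j, err := ReproducibleDigest(img)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256([]byte(j))
	return hex.EncodeToString(sum[:]), nil
}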
@@ -0,0 +1,25 @@
BSD 2-Clause License

Copyright (c) 2017, Karrick McDermott
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,74 @@
package godirwalk

import (
	"os"
	"path/filepath"

	"github.com/pkg/errors"
)

// Dirent stores the name and file system mode type of discovered file system
// entries.
type Dirent struct {
	name     string
	modeType os.FileMode
}

// NewDirent returns a newly initialized Dirent structure, or an error. This
// function does not follow symbolic links.
//
// This function is rarely used, as Dirent structures are provided by other
// functions in this library that read and walk directories.
func NewDirent(osPathname string) (*Dirent, error) {
	fi, err := os.Lstat(osPathname)
	if err != nil {
		return nil, errors.Wrap(err, "cannot lstat")
	}
	return &Dirent{
		name:     filepath.Base(osPathname),
		modeType: fi.Mode() & os.ModeType,
	}, nil
}

// Name returns the basename of the file system entry.
func (de Dirent) Name() string { return de.name }

// ModeType returns the mode bits that specify the file system node type. We
// could make our own enum-like data type for encoding the file type, but Go's
// runtime already gives us architecture independent file modes, as discussed in
// `os/types.go`:
//
//    Go's runtime FileMode type has same definition on all systems, so that
//    information about files can be moved from one system to another portably.
func (de Dirent) ModeType() os.FileMode { return de.modeType }

// IsDir returns true if and only if the Dirent represents a file system
// directory. Note that on some operating systems, more than one file mode bit
// may be set for a node. For instance, on Windows, a symbolic link that points
// to a directory will have both the directory and the symbolic link bits set.
func (de Dirent) IsDir() bool { return de.modeType&os.ModeDir != 0 }

// IsRegular returns true if and only if the Dirent represents a regular
// file. That is, it ensures that no mode type bits are set.
func (de Dirent) IsRegular() bool { return de.modeType&os.ModeType == 0 }

// IsSymlink returns true if and only if the Dirent represents a file system
// symbolic link. Note that on some operating systems, more than one file mode
// bit may be set for a node. For instance, on Windows, a symbolic link that
// points to a directory will have both the directory and the symbolic link bits
// set.
func (de Dirent) IsSymlink() bool { return de.modeType&os.ModeSymlink != 0 }

// Dirents represents a slice of Dirent pointers, which are sortable by
// name. This type satisfies the `sort.Interface` interface.
type Dirents []*Dirent

// Len returns the count of Dirent structures in the slice.
func (l Dirents) Len() int { return len(l) }

// Less returns true if and only if the Name of the element specified by the
// first index is lexicographically less than that of the second index.
func (l Dirents) Less(i, j int) bool { return l[i].name < l[j].name }

// Swap exchanges the two Dirent entries specified by the two provided indexes.
func (l Dirents) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
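Since Dirents satisfies sort.Interface, the standard sort package can order entries by name directly. A small usage sketch, assuming the upstream import path github.com/karrick/godirwalk:

package main

import (
	"fmt"
	"sort"

	"github.com/karrick/godirwalk"
)

func main() {
	// ReadDirents returns a Dirents slice, which satisfies sort.Interface,
	// so sort.Sort orders the entries lexicographically by name.
	children, err := godirwalk.ReadDirents(".", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	sort.Sort(children)
	for _, child := range children {
		fmt.Printf("%s %s\n", child.ModeType(), child.Name())
	}
}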
@@ -0,0 +1,34 @@
/*
Package godirwalk provides functions to read and traverse directory trees.

In short, why do I use this library?

* It's faster than `filepath.Walk`.

* It's more correct on Windows than `filepath.Walk`.

* It's easier to use than `filepath.Walk`.

* It's more flexible than `filepath.Walk`.

USAGE

This library will normalize the provided top level directory name based on the
os-specific path separator by calling `filepath.Clean` on its first
argument. However it always provides the pathname created by using the correct
os-specific path separator when invoking the provided callback function.

    dirname := "some/directory/root"
    err := godirwalk.Walk(dirname, &godirwalk.Options{
        Callback: func(osPathname string, de *godirwalk.Dirent) error {
            fmt.Printf("%s %s\n", de.ModeType(), osPathname)
            return nil
        },
    })

This library not only provides functions for traversing a file system directory
tree, but also for obtaining a list of immediate descendants of a particular
directory, typically much more quickly than using `os.File`'s `Readdir` or
`Readdirnames` methods.
*/
package godirwalk
@@ -0,0 +1,47 @@
package godirwalk

// ReadDirents returns a sortable slice of pointers to Dirent structures, each
// representing the file system name and mode type for one of the immediate
// descendants of the specified directory. If the specified directory is a
// symbolic link, it will be resolved.
//
// If an optional scratch buffer is provided that is at least one page of
// memory, it will be used when reading directory entries from the file system.
//
//    children, err := godirwalk.ReadDirents(osDirname, nil)
//    if err != nil {
//        return nil, errors.Wrap(err, "cannot get list of directory children")
//    }
//    sort.Sort(children)
//    for _, child := range children {
//        fmt.Printf("%s %s\n", child.ModeType, child.Name)
//    }
func ReadDirents(osDirname string, scratchBuffer []byte) (Dirents, error) {
	return readdirents(osDirname, scratchBuffer)
}

// ReadDirnames returns a slice of strings, representing the immediate
// descendants of the specified directory. If the specified directory is a
// symbolic link, it will be resolved.
//
// If an optional scratch buffer is provided that is at least one page of
// memory, it will be used when reading directory entries from the file system.
//
// Note that this function, depending on operating system, may or may not invoke
// the ReadDirents function, in order to prepare the list of immediate
// descendants. Therefore, if your program needs both the names and the file
// system mode types of descendants, it will always be faster to invoke
// ReadDirents directly, rather than calling this function, then looping over
// the results and calling os.Stat for each child.
//
//    children, err := godirwalk.ReadDirnames(osDirname, nil)
//    if err != nil {
//        return nil, errors.Wrap(err, "cannot get list of directory children")
//    }
//    sort.Strings(children)
//    for _, child := range children {
//        fmt.Printf("%s\n", child)
//    }
func ReadDirnames(osDirname string, scratchBuffer []byte) ([]string, error) {
	return readdirnames(osDirname, scratchBuffer)
}
@@ -0,0 +1,109 @@
// +build darwin freebsd linux netbsd openbsd

package godirwalk

import (
	"os"
	"path/filepath"
	"syscall"
	"unsafe"

	"github.com/pkg/errors"
)

func readdirents(osDirname string, scratchBuffer []byte) (Dirents, error) {
	dh, err := os.Open(osDirname)
	if err != nil {
		return nil, errors.Wrap(err, "cannot Open")
	}

	var entries Dirents

	fd := int(dh.Fd())

	if len(scratchBuffer) < MinimumScratchBufferSize {
		scratchBuffer = make([]byte, DefaultScratchBufferSize)
	}

	var de *syscall.Dirent

	for {
		n, err := syscall.ReadDirent(fd, scratchBuffer)
		if err != nil {
			_ = dh.Close() // ignore potential error returned by Close
			return nil, errors.Wrap(err, "cannot ReadDirent")
		}
		if n <= 0 {
			break // end of directory reached
		}
		// Loop over the bytes returned by reading the directory entries.
		buf := scratchBuffer[:n]
		for len(buf) > 0 {
			de = (*syscall.Dirent)(unsafe.Pointer(&buf[0])) // point entry to first syscall.Dirent in buffer
			buf = buf[de.Reclen:]                           // advance buffer

			if inoFromDirent(de) == 0 {
				continue // this item has been deleted, but not yet removed from directory
			}

			nameSlice := nameFromDirent(de)
			namlen := len(nameSlice)
			if (namlen == 0) || (namlen == 1 && nameSlice[0] == '.') || (namlen == 2 && nameSlice[0] == '.' && nameSlice[1] == '.') {
				continue // skip unimportant entries
			}
			osChildname := string(nameSlice)

			// Convert syscall constant, which is in purview of OS, to a
			// constant defined by Go, assumed by this project to be stable.
			var mode os.FileMode
			switch de.Type {
			case syscall.DT_REG:
				// regular file
			case syscall.DT_DIR:
				mode = os.ModeDir
			case syscall.DT_LNK:
				mode = os.ModeSymlink
			case syscall.DT_CHR:
				mode = os.ModeDevice | os.ModeCharDevice
			case syscall.DT_BLK:
				mode = os.ModeDevice
			case syscall.DT_FIFO:
				mode = os.ModeNamedPipe
			case syscall.DT_SOCK:
				mode = os.ModeSocket
			default:
				// If syscall returned unknown type (e.g., DT_UNKNOWN, DT_WHT),
				// then resolve actual mode by getting stat.
				fi, err := os.Lstat(filepath.Join(osDirname, osChildname))
				if err != nil {
					_ = dh.Close() // ignore potential error returned by Close
					return nil, errors.Wrap(err, "cannot Stat")
				}
				// We only care about the bits that identify the type of a file
				// system node, and can ignore append, exclusive, temporary,
				// setuid, setgid, permission bits, and sticky bits, which are
				// coincident to the bits that declare type of the file system
				// node.
				mode = fi.Mode() & os.ModeType
			}

			entries = append(entries, &Dirent{name: osChildname, modeType: mode})
		}
	}
	if err = dh.Close(); err != nil {
		return nil, err
	}
	return entries, nil
}

func readdirnames(osDirname string, scratchBuffer []byte) ([]string, error) {
	des, err := readdirents(osDirname, scratchBuffer)
	if err != nil {
		return nil, err
	}
	names := make([]string, len(des))
	for i, v := range des {
		names[i] = v.name
	}
	return names, nil
}
@@ -0,0 +1,54 @@
package godirwalk

import (
	"os"

	"github.com/pkg/errors"
)

// The functions in this file are mere wrappers of what is already provided by
// the standard library, in order to provide the same API as this library
// provides.
//
// The scratch buffer argument is ignored by this architecture.
//
// Please send a PR or a link to an article if you know of a more performant
// way of enumerating directory contents and mode types on Windows.

func readdirents(osDirname string, _ []byte) (Dirents, error) {
	dh, err := os.Open(osDirname)
	if err != nil {
		return nil, errors.Wrap(err, "cannot Open")
	}

	fileinfos, err := dh.Readdir(0)
	if er := dh.Close(); err == nil {
		err = er
	}
	if err != nil {
		return nil, errors.Wrap(err, "cannot Readdir")
	}

	entries := make(Dirents, len(fileinfos))
	for i, info := range fileinfos {
		entries[i] = &Dirent{name: info.Name(), modeType: info.Mode() & os.ModeType}
	}

	return entries, nil
}

func readdirnames(osDirname string, _ []byte) ([]string, error) {
	dh, err := os.Open(osDirname)
	if err != nil {
		return nil, errors.Wrap(err, "cannot Open")
	}

	entries, err := dh.Readdirnames(0)
	if er := dh.Close(); err == nil {
		err = er
	}
	if err != nil {
		return nil, errors.Wrap(err, "cannot Readdirnames")
	}

	return entries, nil
}
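Both wrappers above use the same Close-error idiom: keep the first error that occurred, but still surface a Close failure when the read itself succeeded. A minimal standalone sketch of that idiom:

package main

import (
	"fmt"
	"os"
)

// readAllNames keeps the first error that occurred, but still reports a
// Close error when the read itself succeeded.
func readAllNames(dir string) (names []string, err error) {
	dh, err := os.Open(dir)
	if err != nil {
		return nil, err
	}
	names, err = dh.Readdirnames(0)
	if er := dh.Close(); err == nil {
		err = er
	}
	return names, err
}

func main() {
	names, err := readAllNames(".")
	fmt.Println(names, err)
}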
@@ -0,0 +1,367 @@
package godirwalk

import (
	"os"
	"path/filepath"
	"sort"

	"github.com/pkg/errors"
)

// DefaultScratchBufferSize specifies the size of the scratch buffer that will
// be allocated by Walk, ReadDirents, or ReadDirnames when a scratch buffer is
// not provided or the scratch buffer that is provided is smaller than
// MinimumScratchBufferSize bytes. This may seem like a large value; however,
// when a program intends to enumerate large directories, having a larger
// scratch buffer results in fewer operating system calls.
const DefaultScratchBufferSize = 64 * 1024

// MinimumScratchBufferSize specifies the minimum size of the scratch buffer
// that Walk, ReadDirents, and ReadDirnames will use when reading file entries
// from the operating system. It is initialized to the result from calling
// `os.Getpagesize()` during program startup.
var MinimumScratchBufferSize int

func init() {
	MinimumScratchBufferSize = os.Getpagesize()
}

// Options provide parameters for how the Walk function operates.
type Options struct {
	// ErrorCallback specifies a function to be invoked in the case of an error
	// that could potentially be ignored while walking a file system
	// hierarchy. When set to nil or left as its zero-value, any error condition
	// causes Walk to immediately return the error describing what took
	// place. When non-nil, this user supplied function is invoked with the OS
	// pathname of the file system object that caused the error along with the
	// error that took place. The return value of the supplied ErrorCallback
	// function determines whether the error will cause Walk to halt immediately
	// as it would were no ErrorCallback value provided, or skip this file
	// system node yet continue on with the remaining nodes in the file system
	// hierarchy.
	//
	// ErrorCallback is invoked both for errors that are returned by the
	// runtime, and for errors returned by other user supplied callback
	// functions.
	ErrorCallback func(string, error) ErrorAction

	// FollowSymbolicLinks specifies whether Walk will follow symbolic links
	// that refer to directories. When set to false or left as its zero-value,
	// Walk will still invoke the callback function with symbolic link nodes,
	// but if the symbolic link refers to a directory, it will not recurse on
	// that directory. When set to true, Walk will recurse on symbolic links
	// that refer to a directory.
	FollowSymbolicLinks bool

	// Unsorted controls whether or not Walk will sort the immediate descendants
	// of a directory by their relative names prior to visiting each of those
	// entries.
	//
	// When set to false or left at its zero-value, Walk will get the list of
	// immediate descendants of a particular directory, sort that list by
	// lexical order of their names, and then visit each node in the list in
	// sorted order. This will cause Walk to always traverse the same directory
	// tree in the same order; however, it may be inefficient for directories
	// with many immediate descendants.
	//
	// When set to true, Walk skips sorting the list of immediate descendants
	// for a directory, and simply visits each node in the order the operating
	// system enumerated them. This is faster, but with the side effect that
	// the traversal order may be different from one invocation to the next.
	Unsorted bool

	// Callback is a required function that Walk will invoke for every file
	// system node it encounters.
	Callback WalkFunc

	// PostChildrenCallback is an optional function that Walk will invoke for
	// every file system directory it encounters after its children have been
	// processed.
	PostChildrenCallback WalkFunc

	// ScratchBuffer is an optional byte slice to use as a scratch buffer for
	// Walk to use when reading directory entries, to reduce the amount of
	// garbage generation. Not all architectures take advantage of the scratch
	// buffer. If omitted or the provided buffer has fewer bytes than
	// MinimumScratchBufferSize, then a buffer with DefaultScratchBufferSize
	// bytes will be created and used once per Walk invocation.
	ScratchBuffer []byte
}

// ErrorAction defines a set of actions the Walk function could take based on
// the occurrence of an error while walking the file system. See the
// documentation for the ErrorCallback field of the Options structure for more
// information.
type ErrorAction int

const (
	// Halt is the ErrorAction return value when the upstream code wants to halt
	// the walk process when a runtime error takes place. It matches the default
	// action the Walk function would take were no ErrorCallback provided.
	Halt ErrorAction = iota

	// SkipNode is the ErrorAction return value when the upstream code wants to
	// ignore the runtime error for the current file system node, skip
	// processing of the node that caused the error, and continue walking the
	// file system hierarchy with the remaining nodes.
	SkipNode
)

// WalkFunc is the type of the function called for each file system node visited
// by Walk. The pathname argument will contain the argument to Walk as a prefix;
// that is, if Walk is called with "dir", which is a directory containing the
// file "a", the provided WalkFunc will be invoked with the argument "dir/a",
// using the correct os.PathSeparator for the Go Operating System architecture,
// GOOS. The directory entry argument is a pointer to a Dirent for the node,
// providing access to both the basename and the mode type of the file system
// node.
//
// If an error is returned by the Callback or PostChildrenCallback functions,
// and no ErrorCallback function is provided, processing stops. If an
// ErrorCallback function is provided, then it is invoked with the OS pathname
// of the node that caused the error along with the error. The return value of
// the ErrorCallback function determines whether to halt processing, or skip
// this node and continue processing remaining file system nodes.
//
// The exception is when the function returns the special value
// filepath.SkipDir. If the function returns filepath.SkipDir when invoked on a
// directory, Walk skips the directory's contents entirely. If the function
// returns filepath.SkipDir when invoked on a non-directory file system node,
// Walk skips the remaining files in the containing directory. Note that any
// supplied ErrorCallback function is not invoked with filepath.SkipDir when the
// Callback or PostChildrenCallback functions return that special value.
type WalkFunc func(osPathname string, directoryEntry *Dirent) error

// Walk walks the file tree rooted at the specified directory, calling the
// specified callback function for each file system node in the tree, including
// root, symbolic links, and other node types. The nodes are walked in lexical
// order, which makes the output deterministic but means that for very large
// directories this function can be inefficient.
//
// This function is often much faster than filepath.Walk because it does not
// invoke os.Stat for every node it encounters, but rather obtains the file
// system node type when it reads the parent directory.
//
// If a runtime error occurs, either from the operating system or from the
// upstream Callback or PostChildrenCallback functions, processing typically
// halts. However, when an ErrorCallback function is provided in the provided
// Options structure, that function is invoked with the error along with the OS
// pathname of the file system node that caused the error. The ErrorCallback
// function's return value determines the action that Walk will then take.
//
//    func main() {
//        dirname := "."
//        if len(os.Args) > 1 {
//            dirname = os.Args[1]
//        }
//        err := godirwalk.Walk(dirname, &godirwalk.Options{
//            Callback: func(osPathname string, de *godirwalk.Dirent) error {
//                fmt.Printf("%s %s\n", de.ModeType(), osPathname)
//                return nil
//            },
//            ErrorCallback: func(osPathname string, err error) godirwalk.ErrorAction {
//                // Your program may want to log the error somehow.
//                fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
//
//                // For the purposes of this example, a simple SkipNode will suffice,
//                // although in reality perhaps additional logic might be called for.
//                return godirwalk.SkipNode
//            },
//        })
//        if err != nil {
//            fmt.Fprintf(os.Stderr, "%s\n", err)
//            os.Exit(1)
//        }
//    }
func Walk(pathname string, options *Options) error {
	pathname = filepath.Clean(pathname)

	var fi os.FileInfo
	var err error

	if options.FollowSymbolicLinks {
		fi, err = os.Stat(pathname)
		if err != nil {
			return errors.Wrap(err, "cannot Stat")
		}
	} else {
		fi, err = os.Lstat(pathname)
		if err != nil {
			return errors.Wrap(err, "cannot Lstat")
		}
	}

	mode := fi.Mode()
	if mode&os.ModeDir == 0 {
		return errors.Errorf("cannot Walk non-directory: %s", pathname)
	}

	dirent := &Dirent{
		name:     filepath.Base(pathname),
		modeType: mode & os.ModeType,
	}

	// If ErrorCallback is nil, set to a default value that halts the walk
	// process on all operating system errors. This is done to allow error
	// handling to be more succinct in the walk code.
	if options.ErrorCallback == nil {
		options.ErrorCallback = defaultErrorCallback
	}

	if len(options.ScratchBuffer) < MinimumScratchBufferSize {
		options.ScratchBuffer = make([]byte, DefaultScratchBufferSize)
	}

	err = walk(pathname, dirent, options)
	if err == filepath.SkipDir {
		return nil // silence SkipDir for top level
	}
	return err
}

// defaultErrorCallback always returns Halt because if the upstream code did not
// provide an ErrorCallback function, walking the file system hierarchy ought to
// halt upon any operating system error.
func defaultErrorCallback(_ string, _ error) ErrorAction { return Halt }

// walk recursively traverses the file system node specified by pathname and the
// Dirent.
func walk(osPathname string, dirent *Dirent, options *Options) error {
	err := options.Callback(osPathname, dirent)
	if err != nil {
		if err == filepath.SkipDir {
			return err
		}
		err = errors.Wrap(err, "Callback") // wrap potential errors returned by callback
		if action := options.ErrorCallback(osPathname, err); action == SkipNode {
			return nil
		}
		return err
	}

	// On some platforms, an entry can have more than one mode type bit set.
	// For instance, it could have both the symlink bit and the directory bit
	// set indicating it's a symlink to a directory.
	if dirent.IsSymlink() {
		if !options.FollowSymbolicLinks {
			return nil
		}
		// Only need to Stat entry if platform did not already have os.ModeDir
		// set, such as would be the case for unix like operating systems. (This
		// guard eliminates extra os.Stat check on Windows.)
		if !dirent.IsDir() {
			referent, err := os.Readlink(osPathname)
			if err != nil {
				err = errors.Wrap(err, "cannot Readlink")
				if action := options.ErrorCallback(osPathname, err); action == SkipNode {
					return nil
				}
				return err
			}

			var osp string
			if filepath.IsAbs(referent) {
				osp = referent
			} else {
				osp = filepath.Join(filepath.Dir(osPathname), referent)
			}

			fi, err := os.Stat(osp)
			if err != nil {
				err = errors.Wrap(err, "cannot Stat")
				if action := options.ErrorCallback(osp, err); action == SkipNode {
					return nil
				}
				return err
			}
			dirent.modeType = fi.Mode() & os.ModeType
		}
	}

	if !dirent.IsDir() {
		return nil
	}

	// If we get here, then the specified pathname refers to a directory.
	deChildren, err := ReadDirents(osPathname, options.ScratchBuffer)
	if err != nil {
		err = errors.Wrap(err, "cannot ReadDirents")
		if action := options.ErrorCallback(osPathname, err); action == SkipNode {
			return nil
		}
		return err
	}

	if !options.Unsorted {
		sort.Sort(deChildren) // sort children entries unless upstream says to leave unsorted
	}

	for _, deChild := range deChildren {
		osChildname := filepath.Join(osPathname, deChild.name)
		err = walk(osChildname, deChild, options)
		if err != nil {
			if err != filepath.SkipDir {
				return err
			}
			// If we received SkipDir on a directory, stop processing that
			// directory, but continue to its siblings. If we received SkipDir
			// on a non-directory, stop processing the remaining siblings.
			if deChild.IsSymlink() {
				// Only need to Stat entry if platform did not already have
				// os.ModeDir set, such as would be the case for unix like
				// operating systems. (This guard eliminates extra os.Stat check
				// on Windows.)
				if !deChild.IsDir() {
					// Resolve symbolic link referent to determine whether node
					// is directory or not.
					referent, err := os.Readlink(osChildname)
					if err != nil {
						err = errors.Wrap(err, "cannot Readlink")
						if action := options.ErrorCallback(osChildname, err); action == SkipNode {
							continue // with next child
						}
						return err
					}

					var osp string
					if filepath.IsAbs(referent) {
						osp = referent
					} else {
						osp = filepath.Join(osPathname, referent)
					}

					fi, err := os.Stat(osp)
					if err != nil {
						err = errors.Wrap(err, "cannot Stat")
						if action := options.ErrorCallback(osp, err); action == SkipNode {
							continue // with next child
						}
						return err
					}
					deChild.modeType = fi.Mode() & os.ModeType
				}
			}
			if !deChild.IsDir() {
				// If not directory, return immediately, thus skipping remainder
				// of siblings.
				return nil
			}
		}
	}

	if options.PostChildrenCallback == nil {
		return nil
	}

	err = options.PostChildrenCallback(osPathname, dirent)
	if err == nil || err == filepath.SkipDir {
		return err
	}

	err = errors.Wrap(err, "PostChildrenCallback") // wrap potential errors returned by callback
	if action := options.ErrorCallback(osPathname, err); action == SkipNode {
		return nil
	}
	return err
}
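The Options above also enable a post-order pass: PostChildrenCallback fires for each directory only after its subtree has been visited, which is the natural place to aggregate per-directory results. A usage sketch, assuming the upstream import path github.com/karrick/godirwalk:

package main

import (
	"fmt"

	"github.com/karrick/godirwalk"
)

func main() {
	var files int
	err := godirwalk.Walk(".", &godirwalk.Options{
		Unsorted: true, // order does not matter for counting, so skip sorting
		Callback: func(osPathname string, de *godirwalk.Dirent) error {
			if de.IsRegular() {
				files++
			}
			return nil
		},
		// PostChildrenCallback runs after a directory's subtree was visited.
		PostChildrenCallback: func(osPathname string, de *godirwalk.Dirent) error {
			fmt.Printf("done with %s (files so far: %d)\n", osPathname, files)
			return nil
		},
	})
	if err != nil {
		fmt.Println(err)
	}
}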
@@ -0,0 +1,9 @@
// +build dragonfly freebsd openbsd netbsd

package godirwalk

import "syscall"

func inoFromDirent(de *syscall.Dirent) uint64 {
	return uint64(de.Fileno)
}
@@ -0,0 +1,9 @@
// +build darwin linux

package godirwalk

import "syscall"

func inoFromDirent(de *syscall.Dirent) uint64 {
	return de.Ino
}
@@ -0,0 +1,29 @@
// +build darwin dragonfly freebsd netbsd openbsd

package godirwalk

import (
	"reflect"
	"syscall"
	"unsafe"
)

func nameFromDirent(de *syscall.Dirent) []byte {
	// Because this GOOS' syscall.Dirent provides a Namlen field that says how
	// long the name is, this function does not need to search for the NULL
	// byte.
	ml := int(de.Namlen)

	// Convert syscall.Dirent.Name, which is array of int8, to []byte, by
	// overwriting Cap, Len, and Data slice header fields to values from
	// syscall.Dirent fields. Setting the Cap, Len, and Data field values for
	// the slice header modifies what the slice header points to, and in this
	// case, the name buffer.
	var name []byte
	sh := (*reflect.SliceHeader)(unsafe.Pointer(&name))
	sh.Cap = ml
	sh.Len = ml
	sh.Data = uintptr(unsafe.Pointer(&de.Name[0]))

	return name
}
@@ -0,0 +1,36 @@
// +build nacl linux solaris

package godirwalk

import (
	"bytes"
	"reflect"
	"syscall"
	"unsafe"
)

func nameFromDirent(de *syscall.Dirent) []byte {
	// Because this GOOS' syscall.Dirent does not provide a field that specifies
	// the name length, this function must first calculate the max possible name
	// length, and then search for the NULL byte.
	ml := int(uint64(de.Reclen) - uint64(unsafe.Offsetof(syscall.Dirent{}.Name)))

	// Convert syscall.Dirent.Name, which is array of int8, to []byte, by
	// overwriting Cap, Len, and Data slice header fields to values from
	// syscall.Dirent fields. Setting the Cap, Len, and Data field values for
	// the slice header modifies what the slice header points to, and in this
	// case, the name buffer.
	var name []byte
	sh := (*reflect.SliceHeader)(unsafe.Pointer(&name))
	sh.Cap = ml
	sh.Len = ml
	sh.Data = uintptr(unsafe.Pointer(&de.Name[0]))

	if index := bytes.IndexByte(name, 0); index >= 0 {
		// Found NULL byte; set slice's cap and len accordingly.
		sh.Cap = index
		sh.Len = index
	}

	return name
}
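The slice-header trick above is zero-copy but relies on unsafe. For illustration only, a copying, unsafe-free alternative that performs the same NUL-terminated conversion on a plain int8 slice (this is not how the library does it):

package main

import "fmt"

// direntName copies bytes out of a NUL-terminated int8 buffer (the shape of
// syscall.Dirent.Name) until the terminator, trading a copy for safety.
func direntName(raw []int8) string {
	buf := make([]byte, 0, len(raw))
	for _, c := range raw {
		if c == 0 {
			break
		}
		buf = append(buf, byte(c))
	}
	return string(buf)
}

func main() {
	raw := []int8{'f', 'i', 'l', 'e', 0, 'x'}
	fmt.Println(direntName(raw)) // prints "file"
}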