Merge remote-tracking branch 'upstream/master'
This commit is contained in:
commit cca8814bed
@ -7,6 +7,7 @@ about: Report a bug in kaniko
**Actual behavior**
A clear and concise description of what the bug is.

**Expected behavior**
A clear and concise description of what you expected to happen.
@ -21,3 +22,16 @@ Steps to reproduce the behavior:
- Build Context
Please provide or clearly describe any files needed to build the Dockerfile (ADD/COPY commands)
- Kaniko Image (fully qualified with digest)

**Triage Notes for the Maintainers**
<!-- 🎉🎉🎉 Thank you for opening an issue !!! 🎉🎉🎉
We are doing our best to get to this. Please help us prioritize your issue by filling in the section below -->

| **Description** | **Yes/No** |
|----------------|---------------|
| Please check if this is a new feature you are proposing | <ul><li>- [ ] </li></ul>|
| Please check if the build works in docker but not in kaniko | <ul><li>- [ ] </li></ul>|
| Please check if this error is seen when you use `--cache` flag | <ul><li>- [ ] </li></ul>|
| Please check if your dockerfile is a multistage dockerfile | <ul><li>- [ ] </li></ul>|
@ -0,0 +1,41 @@
<!-- 🎉🎉🎉 Thank you for the PR!!! 🎉🎉🎉 -->

Fixes `#<issue number>`. _In case of a bug fix, this should point to a bug and any other related issue(s)._

**Description**

<!-- Describe your changes here- ideally you can get that description straight from
your descriptive commit message(s)! -->

**Submitter Checklist**

These are the criteria that every PR should meet, please check them off as you
review them:

- [ ] Includes [unit tests](../DEVELOPMENT.md#creating-a-pr)
- [ ] Adds integration tests if needed.

_See [the contribution guide](../CONTRIBUTING.md) for more details._

**Reviewer Notes**

- [ ] The code flow looks good.
- [ ] Unit tests and/or integration tests added.

**Release Notes**

Describe any changes here so the maintainer can include them in the release notes, or delete this block.

```
Examples of user facing changes:
- Skaffold config changes like
  e.g. "Add buildArgs to `Kustomize` deployer skaffold config."
- Bug fixes
  e.g. "Improve skaffold init behaviour when tags are used in manifests"
- Any changes in skaffold behavior
  e.g. "Artifact caching is turned on by default."
```
70 CHANGELOG.md
@ -1,3 +1,73 @@
# v0.13.0 Release - 2019-10-04

## New Features
* Add `kaniko version` command [#796](https://github.com/GoogleContainerTools/kaniko/pull/796)
* Write data about pushed images for GCB kaniko build step if env var `BUILDER_OUTPUT` is set [#602](https://github.com/GoogleContainerTools/kaniko/pull/602)
* Support `Dockerfile.dockerignore` relative to `Dockerfile` [#801](https://github.com/GoogleContainerTools/kaniko/pull/801)

## Bug Fixes
* Fix creating absolute paths for URLs [#804](https://github.com/GoogleContainerTools/kaniko/pull/804)
* Fix #691 - ADD does not understand ENV variables [#768](https://github.com/GoogleContainerTools/kaniko/pull/768)
* Resolve relative paths to absolute paths in command line arguments [#736](https://github.com/GoogleContainerTools/kaniko/pull/736)
* The insecure flag is now honored with the `--cache` flag [#685](https://github.com/GoogleContainerTools/kaniko/pull/685)
* Reduce log level for the "adding file" message [#624](https://github.com/GoogleContainerTools/kaniko/pull/624)
* Fix SIGSEGV on file system deletion while building [#765](https://github.com/GoogleContainerTools/kaniko/pull/765)

## Updates and Refactors
* Add debug-level info about the layer type [#805](https://github.com/GoogleContainerTools/kaniko/pull/805)
* Update base image to golang:1.12 [#648](https://github.com/GoogleContainerTools/kaniko/pull/648)
* Add some triage notes to the issue template [#794](https://github.com/GoogleContainerTools/kaniko/pull/794)
* double help text about skip-verify-tls [#782](https://github.com/GoogleContainerTools/kaniko/pull/782)
* Add a pull request template [#795](https://github.com/GoogleContainerTools/kaniko/pull/795)
* Correct CheckPushPermission comment [#671](https://github.com/GoogleContainerTools/kaniko/pull/671)

## Documentation
* Use kaniko with docker config.json password [#129](https://github.com/GoogleContainerTools/kaniko/pull/129)
* Add getting started tutorial [#790](https://github.com/GoogleContainerTools/kaniko/pull/790)

## Performance
* feat: optimize build [#694](https://github.com/GoogleContainerTools/kaniko/pull/694)

Huge thank you for this release towards our contributors:
- alexa
- Andreas Bergmeier
- Carlos Alexandro Becker
- Carlos Sanchez
- chhsia0
- debuggy
- Deniz Zoeteman
- Don McCasland
- Fred Cox
- Herrmann Hinz
- Hugues Alary
- Jason Hall
- Johannes 'fish' Ziemke
- jonjohnsonjr
- Luke Wood
- Matthew Dawson
- Mingliang Tao
- Monard Vong
- Nao YONASHIRO
- Niels Denissen
- Prashant
- priyawadhwa
- Priya Wadhwa
- Sascha Askani
- sharifelgamal
- Sharif Elgamal
- Takeaki Matsumoto
- Taylor Barrella
- Tejal Desai
- Thao-Nguyen Do
- tralexa
- Victor Noel
- v.rul
- Warren Seymour
- xanonid
- Xueshan Feng
- Антон Костенко
- Роман Небалуев

# v0.12.0 Release - 2019-09-13

## New Features
@ -620,6 +620,14 @@
  revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
  version = "v1.0.1"

[[projects]]
  digest = "1:56eaee71300a91f7a2f096b5d1d1d5389ebe8e69c068ec7d84c20459f599ddde"
  name = "github.com/minio/HighwayHash"
  packages = ["."]
  pruneopts = "NUT"
  revision = "02ca4b43caa3297fbb615700d8800acc7933be98"
  version = "v1.0.0"

[[projects]]
  digest = "1:a4df73029d2c42fabcb6b41e327d2f87e685284ec03edf76921c267d9cfc9c23"
  name = "github.com/mitchellh/go-homedir"
@ -821,6 +829,17 @@
  revision = "3e01752db0189b9157070a0e1668a620f9a85da2"
  version = "v1.0.6"

[[projects]]
  digest = "1:90cf76d709ce9b057e7d75bd245bf7c1242d21ba4f908fb22c7a2a96d1dcc0ca"
  name = "github.com/spf13/afero"
  packages = [
    ".",
    "mem",
  ]
  pruneopts = "NUT"
  revision = "f4711e4db9e9a1d3887343acb72b2bbfc2f686f5"
  version = "v1.2.1"

[[projects]]
  digest = "1:343d44e06621142ab09ae0c76c1799104cdfddd3ffb445d78b1adf8dc3ffaf3d"
  name = "github.com/spf13/cobra"
@ -983,9 +1002,10 @@
[[projects]]
  branch = "master"
  digest = "1:eeb413d109f4b2813de0b5b23645d7a503db926cae8f10dfdcf248d15499314f"
  digest = "1:2d5f7cd5c2bc42a4d5b18f711d482f14689a30212bbe0e398e151b3e2147cb86"
  name = "golang.org/x/sys"
  packages = [
    "cpu",
    "unix",
    "windows",
    "windows/registry",
@ -1372,16 +1392,19 @@
    "github.com/google/go-containerregistry/pkg/v1/layout",
    "github.com/google/go-containerregistry/pkg/v1/mutate",
    "github.com/google/go-containerregistry/pkg/v1/partial",
    "github.com/google/go-containerregistry/pkg/v1/random",
    "github.com/google/go-containerregistry/pkg/v1/remote",
    "github.com/google/go-containerregistry/pkg/v1/tarball",
    "github.com/google/go-github/github",
    "github.com/karrick/godirwalk",
    "github.com/minio/HighwayHash",
    "github.com/moby/buildkit/frontend/dockerfile/instructions",
    "github.com/moby/buildkit/frontend/dockerfile/parser",
    "github.com/moby/buildkit/frontend/dockerfile/shell",
    "github.com/otiai10/copy",
    "github.com/pkg/errors",
    "github.com/sirupsen/logrus",
    "github.com/spf13/afero",
    "github.com/spf13/cobra",
    "github.com/spf13/pflag",
    "golang.org/x/net/context",
@ -46,3 +46,7 @@ required = [
[[constraint]]
  name = "gopkg.in/src-d/go-git.v4"
  version = "4.6.0"

[[constraint]]
  name = "github.com/minio/HighwayHash"
  version = "1.0.0"

4 Makefile
@ -14,11 +14,10 @@
# Bump these on release
VERSION_MAJOR ?= 0
VERSION_MINOR ?= 12
VERSION_MINOR ?= 13
VERSION_BUILD ?= 0

VERSION ?= v$(VERSION_MAJOR).$(VERSION_MINOR).$(VERSION_BUILD)
VERSION_PACKAGE = $(REPOPATH)/pkg/version

SHELL := /bin/bash
GOOS ?= $(shell go env GOOS)
@ -28,6 +27,7 @@ PROJECT := kaniko
REGISTRY?=gcr.io/kaniko-project

REPOPATH ?= $(ORG)/$(PROJECT)
VERSION_PACKAGE = $(REPOPATH)/pkg/version

GO_FILES := $(shell find . -type f -name '*.go' -not -path "./vendor/*")
GO_LDFLAGS := '-extldflags "-static"

29 README.md
@ -21,6 +21,7 @@ _If you are interested in contributing to kaniko, see [DEVELOPMENT.md](DEVELOPME
- [How does kaniko work?](#how-does-kaniko-work)
- [Known Issues](#known-issues)
- [Demo](#demo)
- [Tutorial](#tutorial)
- [Using kaniko](#using-kaniko)
  - [kaniko Build Contexts](#kaniko-build-contexts)
  - [Running kaniko](#running-kaniko)
@ -77,6 +78,10 @@ kaniko does not support building Windows containers.

## Tutorial

For a detailed example of kaniko with local storage, please refer to the [getting started tutorial](./docs/tutorial.md).

## Using kaniko

To use kaniko to build and push an image for you, you will need:
@ -277,7 +282,29 @@ See the `examples` directory for how to use with kubernetes clusters and persist
kaniko uses Docker credential helpers to push images to a registry.

kaniko comes with support for GCR and Amazon ECR, but configuring another credential helper should allow pushing to a different registry.
kaniko comes with support for GCR, Docker `config.json` and Amazon ECR, but configuring another credential helper should allow pushing to a different registry.

#### Pushing to Docker Hub

Get your Docker registry user and password encoded in base64

    echo -n USER:PASSWORD | base64

Create a `config.json` file with your Docker registry URL and the previously generated base64 string

```
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "xxxxxxxxxxxxxxx"
    }
  }
}
```

Run kaniko with the `config.json` mounted at `/kaniko/.docker/config.json`

    docker run -ti --rm -v `pwd`:/workspace -v `pwd`/config.json:/kaniko/.docker/config.json:ro gcr.io/kaniko-project/executor:latest --dockerfile=Dockerfile --destination=yourimagename

#### Pushing to Amazon ECR
@ -54,20 +54,22 @@ func init() {
var RootCmd = &cobra.Command{
	Use: "executor",
	PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
		if err := util.ConfigureLogging(logLevel); err != nil {
			return err
		}
		if !opts.NoPush && len(opts.Destinations) == 0 {
			return errors.New("You must provide --destination, or use --no-push")
		}
		if err := cacheFlagsValid(); err != nil {
			return errors.Wrap(err, "cache flags invalid")
		}
		if err := resolveSourceContext(); err != nil {
			return errors.Wrap(err, "error resolving source context")
		}
		if err := resolveDockerfilePath(); err != nil {
			return errors.Wrap(err, "error resolving dockerfile path")
		if cmd.Use == "executor" {
			if err := util.ConfigureLogging(logLevel); err != nil {
				return err
			}
			if !opts.NoPush && len(opts.Destinations) == 0 {
				return errors.New("You must provide --destination, or use --no-push")
			}
			if err := cacheFlagsValid(); err != nil {
				return errors.Wrap(err, "cache flags invalid")
			}
			if err := resolveSourceContext(); err != nil {
				return errors.Wrap(err, "error resolving source context")
			}
			if err := resolveDockerfilePath(); err != nil {
				return errors.Wrap(err, "error resolving dockerfile path")
			}
		}
		return nil
	},
@ -81,6 +83,9 @@ var RootCmd = &cobra.Command{
		if err := executor.CheckPushPermissions(opts); err != nil {
			exit(errors.Wrap(err, "error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again"))
		}
		if err := resolveRelativePaths(); err != nil {
			exit(errors.Wrap(err, "error resolving relative paths to absolute paths"))
		}
		if err := os.Chdir("/"); err != nil {
			exit(errors.Wrap(err, "error changing to root dir"))
		}
@ -165,7 +170,7 @@ func cacheFlagsValid() error {
// resolveDockerfilePath resolves the Dockerfile path to an absolute path
func resolveDockerfilePath() error {
	if match, _ := regexp.MatchString("^https?://", opts.DockerfilePath); match {
	if isURL(opts.DockerfilePath) {
		return nil
	}
	if util.FilepathExists(opts.DockerfilePath) {
@ -228,7 +233,44 @@ func resolveSourceContext() error {
	return nil
}

func resolveRelativePaths() error {
	optsPaths := []*string{
		&opts.DockerfilePath,
		&opts.SrcContext,
		&opts.CacheDir,
		&opts.TarPath,
		&opts.DigestFile,
	}

	for _, p := range optsPaths {
		if path := *p; shdSkip(path) {
			logrus.Debugf("Skip resolving path %s", path)
			continue
		}

		// Resolve relative path to absolute path
		var err error
		relp := *p // save original relative path
		if *p, err = filepath.Abs(*p); err != nil {
			return errors.Wrapf(err, "Couldn't resolve relative path %s to an absolute path", *p)
		}
		logrus.Debugf("Resolved relative path %s to %s", relp, *p)
	}
	return nil
}

func exit(err error) {
	fmt.Println(err)
	os.Exit(1)
}

func isURL(path string) bool {
	if match, _ := regexp.MatchString("^https?://", path); match {
		return true
	}
	return false
}

func shdSkip(path string) bool {
	return path == "" || isURL(path) || filepath.IsAbs(path)
}
@ -0,0 +1,99 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"testing"

	"github.com/GoogleContainerTools/kaniko/testutil"
)

func TestSkipPath(t *testing.T) {
	tests := []struct {
		description string
		path        string
		expected    bool
	}{
		{
			description: "path is a http url",
			path:        "http://test",
			expected:    true,
		},
		{
			description: "path is a https url",
			path:        "https://test",
			expected:    true,
		},
		{
			description: "path is empty",
			path:        "",
			expected:    true,
		},
		{
			description: "path is already abs",
			path:        "/tmp/test",
			expected:    true,
		},
		{
			description: "path is relative",
			path:        ".././test",
		},
	}

	for _, tt := range tests {
		t.Run(tt.description, func(t *testing.T) {
			testutil.CheckDeepEqual(t, tt.expected, shdSkip(tt.path))
		})
	}
}

func TestIsUrl(t *testing.T) {
	tests := []struct {
		description string
		path        string
		expected    bool
	}{
		{
			description: "path is a http url",
			path:        "http://test",
			expected:    true,
		},
		{
			description: "path is a https url",
			path:        "https://test",
			expected:    true,
		},
		{
			description: "path is empty",
			path:        "",
		},
		{
			description: "path is already abs",
			path:        "/tmp/test",
		},
		{
			description: "path is relative",
			path:        ".././test",
		},
	}

	for _, tt := range tests {
		t.Run(tt.description, func(t *testing.T) {
			testutil.CheckDeepEqual(t, tt.expected, isURL(tt.path))
		})
	}
}
@ -0,0 +1,36 @@
/*
Copyright 2018 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cmd

import (
	"fmt"

	"github.com/GoogleContainerTools/kaniko/pkg/version"
	"github.com/spf13/cobra"
)

func init() {
	RootCmd.AddCommand(versionCmd)
}

var versionCmd = &cobra.Command{
	Use:   "version",
	Short: "Print the version number of kaniko",
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Println("Kaniko version : ", version.Version())
	},
}
@ -14,7 +14,7 @@

# Builds the static Go image to execute in a Kubernetes job

FROM golang:1.10
FROM golang:1.12
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
# Get GCR credential helper
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.5.0/docker-credential-gcr_linux_amd64-1.5.0.tar.gz /usr/local/bin/
@ -15,7 +15,7 @@
# Builds the static Go image to execute in a Kubernetes job

# Stage 0: Build the executor binary and get credential helpers
FROM golang:1.10
FROM golang:1.12
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
# Get GCR credential helper
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.5.0/docker-credential-gcr_linux_amd64-1.5.0.tar.gz /usr/local/bin/
@ -14,7 +14,7 @@

# Builds the static Go image to execute in a Kubernetes job

FROM golang:1.10
FROM golang:1.12
WORKDIR /go/src/github.com/GoogleContainerTools/kaniko
COPY . .
RUN make out/warmer
@ -0,0 +1,125 @@
# Getting Started Tutorial

This tutorial is for beginners who want to start using kaniko, and it aims to establish a quick-start test case.

## Table of Contents

1. [Prerequisites](#Prerequisites)
2. [Prepare config files for kaniko](#Prepare-config-files-for-kaniko)
3. [Prepare the local mounted directory](#Prepare-the-local-mounted-directory)
4. [Create a Secret that holds your authorization token](#Create-a-Secret-that-holds-your-authorization-token)
5. [Create resources in kubernetes](#Create-resources-in-kubernetes)
6. [Pull the image and test](#Pull-the-image-and-test)

## Prerequisites

- A Kubernetes cluster. You could use [Minikube](https://kubernetes.io/docs/setup/minikube/) to deploy Kubernetes locally, or use a managed Kubernetes service from a cloud provider such as [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/).
- A [Docker Hub](https://hub.docker.com/) account to push the built image to a public repository.

## Prepare config files for kaniko

Prepare several config files to create resources in Kubernetes:

- [pod.yaml](../examples/pod.yaml) starts a kaniko container to build the example image.
- [volume.yaml](../examples/volume.yaml) creates a persistent volume used as the kaniko build context.
- [volume-claim.yaml](../examples/volume-claim.yaml) creates a persistent volume claim which will be mounted in the kaniko container.

## Prepare the local mounted directory

SSH into the cluster, and create a local directory which will be mounted into the kaniko container as the build context. Create a simple dockerfile there.

> Note: If you use minikube, you can SSH into the cluster with the `minikube ssh` command. If you use a cloud service, please refer to the official docs, such as [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/ssh#code-try-0).

```shell
$ mkdir kaniko && cd kaniko
$ echo 'FROM ubuntu' >> dockerfile
$ echo 'ENTRYPOINT ["/bin/bash", "-c", "echo hello"]' >> dockerfile
$ cat dockerfile
FROM ubuntu
ENTRYPOINT ["/bin/bash", "-c", "echo hello"]
$ pwd
/home/<user-name>/kaniko # copy this path in volume.yaml file
```

> Note: The `hostPath` in volume.yaml needs to be replaced with the local directory you created.

## Create a Secret that holds your authorization token

A Kubernetes cluster uses a Secret of type docker-registry to authenticate with a docker registry to push an image.

Create this Secret, naming it regcred:

```shell
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
```

- `<your-registry-server>` is your private Docker registry FQDN (https://index.docker.io/v1/ for Docker Hub).
- `<your-name>` is your Docker username.
- `<your-pword>` is your Docker password.
- `<your-email>` is your Docker email.

This secret will be used in the pod.yaml config.

## Create resources in kubernetes

```shell
# create persistent volume
$ kubectl create -f volume.yaml
persistentvolume/dockerfile created

# create persistent volume claim
$ kubectl create -f volume-claim.yaml
persistentvolumeclaim/dockerfile-claim created

# check whether the volume mounted correctly
$ kubectl get pv dockerfile
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS    REASON   AGE
dockerfile   10Gi       RWO            Retain           Bound    default/dockerfile-claim   local-storage            1m

# create pod
$ kubectl create -f pod.yaml
pod/kaniko created
$ kubectl get pods
NAME     READY   STATUS              RESTARTS   AGE
kaniko   0/1     ContainerCreating   0          7s

# check whether the build completed and show the build logs
$ kubectl get pods
NAME     READY   STATUS      RESTARTS   AGE
kaniko   0/1     Completed   0          34s
$ kubectl logs kaniko
INFO[0000] Resolved base name ubuntu to ubuntu
INFO[0000] Resolved base name ubuntu to ubuntu
INFO[0000] Downloading base image ubuntu
INFO[0000] Error while retrieving image from cache: getting file info: stat /cache/sha256:1bbdea4846231d91cce6c7ff3907d26fca444fd6b7e3c282b90c7fe4251f9f86: no such file or directory
INFO[0000] Downloading base image ubuntu
INFO[0001] Built cross stage deps: map[]
INFO[0001] Downloading base image ubuntu
INFO[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:1bbdea4846231d91cce6c7ff3907d26fca444fd6b7e3c282b90c7fe4251f9f86: no such file or directory
INFO[0001] Downloading base image ubuntu
INFO[0001] Skipping unpacking as no commands require it.
INFO[0001] Taking snapshot of full filesystem...
INFO[0001] ENTRYPOINT ["/bin/bash", "-c", "echo hello"]
```

> Note: The `destination` in pod.yaml needs to be replaced with your own.

## Pull the image and test

If everything went as expected, kaniko will build the image and push it to Docker Hub successfully. Pull the image locally and run it to test:

```shell
$ sudo docker run -it <user-name>/<repo-name>
Unable to find image 'debuggy/helloworld:latest' locally
latest: Pulling from debuggy/helloworld
5667fdb72017: Pull complete
d83811f270d5: Pull complete
ee671aafb583: Pull complete
7fc152dfb3a6: Pull complete
Digest: sha256:2707d17754ea99ce0cf15d84a7282ae746a44ff90928c2064755ee3b35c1057b
Status: Downloaded newer image for debuggy/helloworld:latest
hello
```

Congratulations! You have completed the hello world example; please refer to the project docs for more details.
@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=/workspace/dockerfile",
           "--context=dir://workspace",
           "--destination=<user-name>/<repo>"] # replace with your dockerhub account
    volumeMounts:
    - name: kaniko-secret
      mountPath: /root
    - name: dockerfile-storage
      mountPath: /workspace
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: regcred
      items:
      - key: .dockerconfigjson
        path: .docker/config.json
  - name: dockerfile-storage
    persistentVolumeClaim:
      claimName: dockerfile-claim
@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dockerfile-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: local-storage
@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dockerfile
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: <local-directory> # replace with local directory, such as "/home/<user-name>/kaniko"
@ -0,0 +1,7 @@
# This is not included in integration tests because docker build does not exploit Dockerfile.dockerignore
# See https://github.com/moby/moby/issues/12886#issuecomment-523706042 for more details
# This dockerfile makes sure Dockerfile.dockerignore is working
# If so then ignore_relative/foo should copy to /foo
# If not, then this image won't build because it will attempt to copy three files to /foo, which is a file not a directory
FROM scratch
COPY ignore_relative/* /foo
@ -0,0 +1,3 @@
# A .dockerignore file to make sure dockerignore support works
ignore_relative/**
!ignore_relative/foo
@ -28,3 +28,7 @@ ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/downlo
# Test environment replacement in the URL
ENV VERSION=v1.4.3
ADD https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/${VERSION}-static/docker-credential-gcr_linux_amd64-1.4.3.tar.gz /destination

# Test full url replacement
ENV URL=https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v1.4.3/docker-credential-gcr_linux_386-1.4.3.tar.gz
ADD $URL /otherdestination
@ -286,3 +286,33 @@ func (d *DockerFileBuilder) buildCachedImages(imageRepo, cacheRepo, dockerfilesP
	}
	return nil
}

// buildRelativePathsImage builds the images for testing passing relative paths to Kaniko
func (d *DockerFileBuilder) buildRelativePathsImage(imageRepo, dockerfile string) error {
	_, ex, _, _ := runtime.Caller(0)
	cwd := filepath.Dir(ex)

	buildContextPath := "./relative-subdirectory"
	kanikoImage := GetKanikoImage(imageRepo, dockerfile)

	kanikoCmd := exec.Command("docker",
		append([]string{"run",
			"-v", os.Getenv("HOME") + "/.config/gcloud:/root/.config/gcloud",
			"-v", cwd + ":/workspace",
			ExecutorImage,
			"-f", dockerfile,
			"-d", kanikoImage,
			"--digest-file", "./digest",
			"-c", buildContextPath,
		})...,
	)

	timer := timing.Start(dockerfile + "_kaniko_relative_paths")
	_, err := RunCommandWithoutTest(kanikoCmd)
	timing.DefaultRun.Stop(timer)
	if err != nil {
		return fmt.Errorf("Failed to build relative path image %s with kaniko command \"%s\": %s", kanikoImage, kanikoCmd.Args, err)
	}

	return nil
}
@ -323,7 +323,7 @@ func TestGitBuildContextWithBranch(t *testing.T) {

func TestLayers(t *testing.T) {
	offset := map[string]int{
		"Dockerfile_test_add":     11,
		"Dockerfile_test_add":     12,
		"Dockerfile_test_scratch": 3,
	}
	for dockerfile := range imageBuilder.FilesBuilt {
@ -386,6 +386,30 @@ func TestCache(t *testing.T) {
	}
}

func TestRelativePaths(t *testing.T) {

	dockerfile := "Dockerfile_test_copy"

	t.Run("test_relative_"+dockerfile, func(t *testing.T) {
		t.Parallel()
		imageBuilder.buildRelativePathsImage(config.imageRepo, dockerfile)

		dockerImage := GetDockerImage(config.imageRepo, dockerfile)
		kanikoImage := GetKanikoImage(config.imageRepo, dockerfile)

		// container-diff
		daemonDockerImage := daemonPrefix + dockerImage
		containerdiffCmd := exec.Command("container-diff", "diff", "--no-cache",
			daemonDockerImage, kanikoImage,
			"-q", "--type=file", "--type=metadata", "--json")
		diff := RunCommand(containerdiffCmd, t)
		t.Logf("diff = %s", string(diff))

		expected := fmt.Sprintf(emptyContainerDiff, dockerImage, kanikoImage, dockerImage, kanikoImage)
		checkContainerDiffOutput(t, diff, expected)
	})
}

type fileDiff struct {
	Name string
	Size int
@ -427,7 +427,7 @@ func DoBuild(opts *config.KanikoOptions) (v1.Image, error) {
	if err != nil {
		return nil, err
	}
	if err := util.GetExcludedFiles(opts.SrcContext); err != nil {
	if err := util.GetExcludedFiles(opts.DockerfilePath, opts.SrcContext); err != nil {
		return nil, err
	}
	// Some stages may refer to other random images, not previous stages
@@ -18,10 +18,12 @@ package executor

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"path/filepath"
	"strings"
	"time"

@@ -32,7 +34,7 @@ import (
	"github.com/GoogleContainerTools/kaniko/pkg/timing"
	"github.com/GoogleContainerTools/kaniko/pkg/version"
	"github.com/google/go-containerregistry/pkg/name"
-	"github.com/google/go-containerregistry/pkg/v1"
+	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/mutate"

@@ -40,6 +42,7 @@ import (
	"github.com/google/go-containerregistry/pkg/v1/tarball"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"github.com/spf13/afero"
)

type withUserAgent struct {
@@ -165,6 +168,39 @@ func DoPush(image v1.Image, opts *config.KanikoOptions) error {
		}
	}
	timing.DefaultRun.Stop(t)
	return writeImageOutputs(image, destRefs)
}

var fs = afero.NewOsFs()

func writeImageOutputs(image v1.Image, destRefs []name.Tag) error {
	dir := os.Getenv("BUILDER_OUTPUT")
	if dir == "" {
		return nil
	}
	f, err := fs.Create(filepath.Join(dir, "images"))
	if err != nil {
		return err
	}
	defer f.Close()

	d, err := image.Digest()
	if err != nil {
		return err
	}

	type imageOutput struct {
		Name   string `json:"name"`
		Digest string `json:"digest"`
	}
	for _, r := range destRefs {
		if err := json.NewEncoder(f).Encode(imageOutput{
			Name:   r.String(),
			Digest: d.String(),
		}); err != nil {
			return err
		}
	}
	return nil
}
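The `writeImageOutputs` helper above emits one JSON object per destination tag, newline-delimited, into `$BUILDER_OUTPUT/images`. A minimal stdlib-only sketch of that encoding (the `encodeOutputs` helper and sample digest are illustrative, not part of kaniko):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// imageOutput mirrors the struct used in writeImageOutputs: one JSON
// object per line, so downstream tools can stream-parse the file.
type imageOutput struct {
	Name   string `json:"name"`
	Digest string `json:"digest"`
}

// encodeOutputs writes one JSON line per (name, digest) pair; note that
// json.Encoder.Encode appends the trailing newline itself.
func encodeOutputs(names []string, digest string) (string, error) {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	for _, n := range names {
		if err := enc.Encode(imageOutput{Name: n, Digest: digest}); err != nil {
			return "", err
		}
	}
	return buf.String(), nil
}

func main() {
	out, _ := encodeOutputs([]string{"gcr.io/foo/bar:latest"}, "sha256:abc")
	fmt.Print(out)
}
```

The newline-delimited format is what lets the test below compare against multi-line `want` strings directly.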
@@ -18,18 +18,91 @@ package executor

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
	"path/filepath"
	"testing"

	"github.com/GoogleContainerTools/kaniko/pkg/config"
	"github.com/GoogleContainerTools/kaniko/testutil"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/random"
	"github.com/google/go-containerregistry/pkg/v1/validate"
	"github.com/spf13/afero"
)

func mustTag(t *testing.T, s string) name.Tag {
	tag, err := name.NewTag(s, name.StrictValidation)
	if err != nil {
		t.Fatalf("NewTag: %v", err)
	}
	return tag
}

func TestWriteImageOutputs(t *testing.T) {
	img, err := random.Image(1024, 3)
	if err != nil {
		t.Fatalf("random.Image: %v", err)
	}
	d, err := img.Digest()
	if err != nil {
		t.Fatalf("Digest: %v", err)
	}

	for _, c := range []struct {
		desc, env string
		tags      []name.Tag
		want      string
	}{{
		desc: "env unset, no output",
		env:  "",
	}, {
		desc: "env set, one tag",
		env:  "/foo",
		tags: []name.Tag{mustTag(t, "gcr.io/foo/bar:latest")},
		want: fmt.Sprintf(`{"name":"gcr.io/foo/bar:latest","digest":%q}
`, d),
	}, {
		desc: "env set, two tags",
		env:  "/foo",
		tags: []name.Tag{
			mustTag(t, "gcr.io/foo/bar:latest"),
			mustTag(t, "gcr.io/baz/qux:latest"),
		},
		want: fmt.Sprintf(`{"name":"gcr.io/foo/bar:latest","digest":%q}
{"name":"gcr.io/baz/qux:latest","digest":%q}
`, d, d),
	}} {
		t.Run(c.desc, func(t *testing.T) {
			fs = afero.NewMemMapFs()
			if c.want == "" {
				fs = afero.NewReadOnlyFs(fs) // No files should be written.
			}

			os.Setenv("BUILDER_OUTPUT", c.env)
			if err := writeImageOutputs(img, c.tags); err != nil {
				t.Fatalf("writeImageOutputs: %v", err)
			}

			if c.want == "" {
				return
			}

			b, err := afero.ReadFile(fs, filepath.Join(c.env, "images"))
			if err != nil {
				t.Fatalf("ReadFile: %v", err)
			}

			if got := string(b); got != c.want {
				t.Fatalf(" got: %s\nwant: %s", got, c.want)
			}
		})
	}
}

func TestHeaderAdded(t *testing.T) {
	tests := []struct {
		name string
@@ -37,13 +37,7 @@ import (
func ResolveEnvironmentReplacementList(values, envs []string, isFilepath bool) ([]string, error) {
	var resolvedValues []string
	for _, value := range values {
-		var resolved string
-		var err error
-		if IsSrcRemoteFileURL(value) {
-			resolved, err = ResolveEnvironmentReplacement(value, envs, false)
-		} else {
-			resolved, err = ResolveEnvironmentReplacement(value, envs, isFilepath)
-		}
+		resolved, err := ResolveEnvironmentReplacement(value, envs, isFilepath)
		logrus.Debugf("Resolved %s to %s", value, resolved)
		if err != nil {
			return nil, err
@@ -65,7 +59,8 @@ func ResolveEnvironmentReplacementList(values, envs []string, isFilepath bool) (
func ResolveEnvironmentReplacement(value string, envs []string, isFilepath bool) (string, error) {
	shlex := shell.NewLex(parser.DefaultEscapeToken)
	fp, err := shlex.ProcessWord(value, envs)
-	if !isFilepath {
+	// Check after replacement if value is a remote URL
+	if !isFilepath || IsSrcRemoteFileURL(fp) {
		return fp, err
	}
	if err != nil {
@@ -97,6 +97,22 @@ var testEnvReplacement = []struct {
		},
		expectedPath: "8080/udp",
	},
	{
		path: "$url",
		envs: []string{
			"url=http://example.com",
		},
		isFilepath:   true,
		expectedPath: "http://example.com",
	},
	{
		path: "$url",
		envs: []string{
			"url=http://example.com",
		},
		isFilepath:   false,
		expectedPath: "http://example.com",
	},
}

func Test_EnvReplacement(t *testing.T) {
@@ -390,7 +406,7 @@ var isSrcValidTests = []struct {
func Test_IsSrcsValid(t *testing.T) {
	for _, test := range isSrcValidTests {
		t.Run(test.name, func(t *testing.T) {
-			if err := GetExcludedFiles(buildContextPath); err != nil {
+			if err := GetExcludedFiles("", buildContextPath); err != nil {
				t.Fatalf("error getting excluded files: %v", err)
			}
			err := IsSrcsValid(test.srcsAndDest, test.resolvedSources, buildContextPath)
@@ -472,14 +488,15 @@ func TestResolveEnvironmentReplacementList(t *testing.T) {
			name: "url",
			args: args{
				values: []string{
-					"https://google.com/$foo", "$bar",
+					"https://google.com/$foo", "$bar", "$url",
				},
				envs: []string{
					"foo=baz",
					"bar=bat",
					"url=https://google.com",
				},
			},
-			want: []string{"https://google.com/baz", "bat"},
+			want: []string{"https://google.com/baz", "bat", "https://google.com"},
		},
		{
			name: "mixed",
@@ -20,6 +20,7 @@ import (
	"archive/tar"
+	"bufio"
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
@@ -82,11 +83,17 @@ func GetFSFromImage(root string, img v1.Image) ([]string, error) {
	extractedFiles := []string{}

	for i, l := range layers {
-		logrus.Debugf("Extracting layer %d", i)
+		if mediaType, err := l.MediaType(); err == nil {
+			logrus.Debugf("Extracting layer %d of media type %s", i, mediaType)
+		} else {
+			logrus.Debugf("Extracting layer %d", i)
+		}

		r, err := l.Uncompressed()
		if err != nil {
			return nil, err
		}
		defer r.Close()
		tr := tar.NewReader(r)
		for {
			hdr, err := tr.Next()
@@ -94,7 +101,7 @@ func GetFSFromImage(root string, img v1.Image) ([]string, error) {
			break
		}
		if err != nil {
-			return nil, err
+			return nil, errors.Wrap(err, fmt.Sprintf("error reading tar %d", i))
		}
		path := filepath.Join(root, filepath.Clean(hdr.Name))
		base := filepath.Base(path)
@@ -359,9 +366,6 @@ func RelativeFiles(fp string, root string) ([]string, error) {
		if CheckWhitelist(path) && !HasFilepathPrefix(path, root, false) {
			return nil
		}
		if err != nil {
			return err
		}
		relPath, err := filepath.Rel(root, path)
		if err != nil {
			return err
@@ -554,11 +558,15 @@ func CopyFile(src, dest, buildcontext string) (bool, error) {
}

// GetExcludedFiles gets a list of files to exclude from the .dockerignore
-func GetExcludedFiles(buildcontext string) error {
-	path := filepath.Join(buildcontext, ".dockerignore")
+func GetExcludedFiles(dockerfilepath string, buildcontext string) error {
+	path := dockerfilepath + ".dockerignore"
+	if !FilepathExists(path) {
+		path = filepath.Join(buildcontext, ".dockerignore")
+	}
	if !FilepathExists(path) {
		return nil
	}
	logrus.Infof("Using dockerignore file: %v", path)
	contents, err := ioutil.ReadFile(path)
	if err != nil {
		return errors.Wrap(err, "parsing .dockerignore")
@@ -588,10 +596,10 @@ func excludeFile(path, buildcontext string) bool {

// HasFilepathPrefix checks if the given file path begins with prefix
func HasFilepathPrefix(path, prefix string, prefixMatchOnly bool) bool {
-	path = filepath.Clean(path)
	prefix = filepath.Clean(prefix)
-	pathArray := strings.Split(path, "/")
	prefixArray := strings.Split(prefix, "/")
+	path = filepath.Clean(path)
+	pathArray := strings.SplitN(path, "/", len(prefixArray)+1)

	if len(pathArray) < len(prefixArray) {
		return false
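The change above stops splitting the full path: `strings.SplitN` with limit `len(prefixArray)+1` produces just enough components to compare against the prefix, so a deeply nested path under a short prefix no longer allocates one slice entry per directory. A stdlib-only sketch of the comparison (the `hasPrefixComponents` name is illustrative, and the final loop stands in for code outside this hunk):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// hasPrefixComponents reports whether path begins with prefix,
// compared component by component. Splitting the prefix first lets
// SplitN bound how much of the path is ever split.
func hasPrefixComponents(path, prefix string) bool {
	prefix = filepath.Clean(prefix)
	prefixArray := strings.Split(prefix, "/")
	path = filepath.Clean(path)
	pathArray := strings.SplitN(path, "/", len(prefixArray)+1)

	if len(pathArray) < len(prefixArray) {
		return false
	}
	for i := range prefixArray {
		if prefixArray[i] != pathArray[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(hasPrefixComponents("/foo/bar/baz", "/foo/bar")) // true
}
```

Comparing whole components (rather than `strings.HasPrefix` on the raw string) is what keeps `/foobar` from matching the prefix `/foo`.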
@@ -19,11 +19,13 @@ package util

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"reflect"
	"sort"
	"strings"
	"testing"

	"github.com/GoogleContainerTools/kaniko/testutil"
@@ -342,6 +344,84 @@ func TestHasFilepathPrefix(t *testing.T) {
	}
}

func BenchmarkHasFilepathPrefix(b *testing.B) {
	tests := []struct {
		path            string
		prefix          string
		prefixMatchOnly bool
	}{
		{path: "/foo/bar", prefix: "/foo", prefixMatchOnly: true},
		{path: "/foo/bar/baz", prefix: "/foo", prefixMatchOnly: true},
		{path: "/foo/bar/baz/foo", prefix: "/foo", prefixMatchOnly: true},
		{path: "/foo/bar/baz/foo/foobar", prefix: "/foo", prefixMatchOnly: true},
		{path: "/foo/bar", prefix: "/foo/bar", prefixMatchOnly: true},
		{path: "/foo/bar/baz", prefix: "/foo/bar", prefixMatchOnly: true},
		{path: "/foo/bar/baz/foo", prefix: "/foo/bar", prefixMatchOnly: true},
		{path: "/foo/bar/baz/foo/foobar", prefix: "/foo/bar", prefixMatchOnly: true},
		{path: "/foo/bar", prefix: "/foo/bar/baz", prefixMatchOnly: true},
		{path: "/foo/bar/baz", prefix: "/foo/bar/baz", prefixMatchOnly: true},
		{path: "/foo/bar/baz/foo", prefix: "/foo/bar/baz", prefixMatchOnly: true},
		{path: "/foo/bar/baz/foo/foobar", prefix: "/foo/bar/baz", prefixMatchOnly: true},
	}
	for _, ts := range tests {
		name := fmt.Sprint("PathDepth=", strings.Count(ts.path, "/"), ",PrefixDepth=", strings.Count(ts.prefix, "/"))
		b.Run(name, func(b *testing.B) {
			b.ReportAllocs()
			for i := 0; i < b.N; i++ {
				HasFilepathPrefix(ts.path, ts.prefix, ts.prefixMatchOnly)
			}
		})
	}
}

type checker func(root string, t *testing.T)

func fileExists(p string) checker {
@@ -694,3 +774,54 @@ func Test_childDirInWhitelist(t *testing.T) {
		})
	}
}

func Test_correctDockerignoreFileIsUsed(t *testing.T) {
	type args struct {
		dockerfilepath string
		buildcontext   string
		excluded       []string
		included       []string
	}
	tests := []struct {
		name string
		args args
	}{
		{
			name: "relative dockerfile used",
			args: args{
				dockerfilepath: "../../integration/dockerfiles/Dockerfile_dockerignore_relative",
				buildcontext:   "../../integration/",
				excluded:       []string{"ignore_relative/bar"},
				included:       []string{"ignore_relative/foo", "ignore/bar"},
			},
		},
		{
			name: "context dockerfile is used",
			args: args{
				dockerfilepath: "../../integration/dockerfiles/Dockerfile_test_dockerignore",
				buildcontext:   "../../integration/",
				excluded:       []string{"ignore/bar"},
				included:       []string{"ignore/foo", "ignore_relative/bar"},
			},
		},
	}
	for _, tt := range tests {
		if err := GetExcludedFiles(tt.args.dockerfilepath, tt.args.buildcontext); err != nil {
			t.Fatal(err)
		}
		for _, excl := range tt.args.excluded {
			t.Run(tt.name+" to exclude "+excl, func(t *testing.T) {
				if !excludeFile(excl, tt.args.buildcontext) {
					t.Errorf("'%v' not excluded", excl)
				}
			})
		}
		for _, incl := range tt.args.included {
			t.Run(tt.name+" to include "+incl, func(t *testing.T) {
				if excludeFile(incl, tt.args.buildcontext) {
					t.Errorf("'%v' not included", incl)
				}
			})
		}
	}
}
@@ -23,8 +23,10 @@ import (
	"io"
	"os"
	"strconv"
	"sync"
	"syscall"

	highwayhash "github.com/minio/HighwayHash"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)
@@ -44,8 +46,15 @@ func ConfigureLogging(logLevel string) error {

// Hasher returns a hash function, used in snapshotting to determine if a file has changed
func Hasher() func(string) (string, error) {
	pool := sync.Pool{
		New: func() interface{} {
			b := make([]byte, highwayhash.Size*10*1024)
			return &b
		},
	}
	key := make([]byte, highwayhash.Size)
	hasher := func(p string) (string, error) {
-		h := md5.New()
+		h, _ := highwayhash.New(key)
		fi, err := os.Lstat(p)
		if err != nil {
			return "", err
@@ -63,7 +72,9 @@ func Hasher() func(string) (string, error) {
			return "", err
		}
		defer f.Close()
-		if _, err := io.Copy(h, f); err != nil {
+		buf := pool.Get().(*[]byte)
+		defer pool.Put(buf)
+		if _, err := io.CopyBuffer(h, f, *buf); err != nil {
			return "", err
		}
	}
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2017 Minio Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,225 @@
// Copyright (c) 2017 Minio Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.

// Package highwayhash implements the pseudo-random-function (PRF) HighwayHash.
// HighwayHash is a fast hash function designed to defend hash-flooding attacks
// or to authenticate short-lived messages.
//
// HighwayHash is not a general purpose cryptographic hash function and does not
// provide (strong) collision resistance.
package highwayhash

import (
	"encoding/binary"
	"errors"
	"hash"
)

const (
	// Size is the size of HighwayHash-256 checksum in bytes.
	Size = 32
	// Size128 is the size of HighwayHash-128 checksum in bytes.
	Size128 = 16
	// Size64 is the size of HighwayHash-64 checksum in bytes.
	Size64 = 8
)

var errKeySize = errors.New("highwayhash: invalid key size")

// New returns a hash.Hash computing the HighwayHash-256 checksum.
// It returns a non-nil error if the key is not 32 bytes long.
func New(key []byte) (hash.Hash, error) {
	if len(key) != Size {
		return nil, errKeySize
	}
	h := &digest{size: Size}
	copy(h.key[:], key)
	h.Reset()
	return h, nil
}

// New128 returns a hash.Hash computing the HighwayHash-128 checksum.
// It returns a non-nil error if the key is not 32 bytes long.
func New128(key []byte) (hash.Hash, error) {
	if len(key) != Size {
		return nil, errKeySize
	}
	h := &digest{size: Size128}
	copy(h.key[:], key)
	h.Reset()
	return h, nil
}

// New64 returns a hash.Hash computing the HighwayHash-64 checksum.
// It returns a non-nil error if the key is not 32 bytes long.
func New64(key []byte) (hash.Hash64, error) {
	if len(key) != Size {
		return nil, errKeySize
	}
	h := new(digest64)
	h.size = Size64
	copy(h.key[:], key)
	h.Reset()
	return h, nil
}

// Sum computes the HighwayHash-256 checksum of data.
// It panics if the key is not 32 bytes long.
func Sum(data, key []byte) [Size]byte {
	if len(key) != Size {
		panic(errKeySize)
	}
	var state [16]uint64
	initialize(&state, key)
	if n := len(data) & (^(Size - 1)); n > 0 {
		update(&state, data[:n])
		data = data[n:]
	}
	if len(data) > 0 {
		var block [Size]byte
		offset := copy(block[:], data)
		hashBuffer(&state, &block, offset)
	}
	var hash [Size]byte
	finalize(hash[:], &state)
	return hash
}

// Sum128 computes the HighwayHash-128 checksum of data.
// It panics if the key is not 32 bytes long.
func Sum128(data, key []byte) [Size128]byte {
	if len(key) != Size {
		panic(errKeySize)
	}
	var state [16]uint64
	initialize(&state, key)
	if n := len(data) & (^(Size - 1)); n > 0 {
		update(&state, data[:n])
		data = data[n:]
	}
	if len(data) > 0 {
		var block [Size]byte
		offset := copy(block[:], data)
		hashBuffer(&state, &block, offset)
	}
	var hash [Size128]byte
	finalize(hash[:], &state)
	return hash
}

// Sum64 computes the HighwayHash-64 checksum of data.
// It panics if the key is not 32 bytes long.
func Sum64(data, key []byte) uint64 {
	if len(key) != Size {
		panic(errKeySize)
	}
	var state [16]uint64
	initialize(&state, key)
	if n := len(data) & (^(Size - 1)); n > 0 {
		update(&state, data[:n])
		data = data[n:]
	}
	if len(data) > 0 {
		var block [Size]byte
		offset := copy(block[:], data)
		hashBuffer(&state, &block, offset)
	}
	var hash [Size64]byte
	finalize(hash[:], &state)
	return binary.LittleEndian.Uint64(hash[:])
}

type digest64 struct{ digest }

func (d *digest64) Sum64() uint64 {
	state := d.state
	if d.offset > 0 {
		hashBuffer(&state, &d.buffer, d.offset)
	}
	var hash [8]byte
	finalize(hash[:], &state)
	return binary.LittleEndian.Uint64(hash[:])
}

type digest struct {
	state [16]uint64 // v0 | v1 | mul0 | mul1

	key, buffer [Size]byte
	offset      int

	size int
}

func (d *digest) Size() int { return d.size }

func (d *digest) BlockSize() int { return Size }

func (d *digest) Reset() {
	initialize(&d.state, d.key[:])
	d.offset = 0
}

func (d *digest) Write(p []byte) (n int, err error) {
	n = len(p)
	if d.offset > 0 {
		remaining := Size - d.offset
		if n < remaining {
			d.offset += copy(d.buffer[d.offset:], p)
			return
		}
		copy(d.buffer[d.offset:], p[:remaining])
		update(&d.state, d.buffer[:])
		p = p[remaining:]
		d.offset = 0
	}
	if nn := len(p) & (^(Size - 1)); nn > 0 {
		update(&d.state, p[:nn])
		p = p[nn:]
	}
	if len(p) > 0 {
		d.offset = copy(d.buffer[d.offset:], p)
	}
	return
}

func (d *digest) Sum(b []byte) []byte {
	state := d.state
	if d.offset > 0 {
		hashBuffer(&state, &d.buffer, d.offset)
	}
	var hash [Size]byte
	finalize(hash[:d.size], &state)
	return append(b, hash[:d.size]...)
}

func hashBuffer(state *[16]uint64, buffer *[32]byte, offset int) {
	var block [Size]byte
	mod32 := (uint64(offset) << 32) + uint64(offset)
	for i := range state[:4] {
		state[i] += mod32
	}
	for i := range state[4:8] {
		t0 := uint32(state[i+4])
		t0 = (t0 << uint(offset)) | (t0 >> uint(32-offset))

		t1 := uint32(state[i+4] >> 32)
		t1 = (t1 << uint(offset)) | (t1 >> uint(32-offset))

		state[i+4] = (uint64(t1) << 32) | uint64(t0)
	}

	mod4 := offset & 3
	remain := offset - mod4

	copy(block[:], buffer[:remain])
	if offset >= 16 {
		copy(block[28:], buffer[offset-4:])
	} else if mod4 != 0 {
		last := uint32(buffer[remain])
		last += uint32(buffer[remain+mod4>>1]) << 8
		last += uint32(buffer[offset-1]) << 16
		binary.LittleEndian.PutUint32(block[16:], last)
	}
	update(state, block[:])
}
@@ -0,0 +1,68 @@
// Copyright (c) 2017 Minio Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.

// +build go1.8
// +build amd64 !gccgo !appengine !nacl

package highwayhash

import "golang.org/x/sys/cpu"

var (
	useSSE4 = cpu.X86.HasSSE41
	useAVX2 = cpu.X86.HasAVX2
	useNEON = false
	useVMX  = false
)

//go:noescape
func initializeSSE4(state *[16]uint64, key []byte)

//go:noescape
func initializeAVX2(state *[16]uint64, key []byte)

//go:noescape
func updateSSE4(state *[16]uint64, msg []byte)

//go:noescape
func updateAVX2(state *[16]uint64, msg []byte)

//go:noescape
func finalizeSSE4(out []byte, state *[16]uint64)

//go:noescape
func finalizeAVX2(out []byte, state *[16]uint64)

func initialize(state *[16]uint64, key []byte) {
	switch {
	case useAVX2:
		initializeAVX2(state, key)
	case useSSE4:
		initializeSSE4(state, key)
	default:
		initializeGeneric(state, key)
	}
}

func update(state *[16]uint64, msg []byte) {
	switch {
	case useAVX2:
		updateAVX2(state, msg)
	case useSSE4:
		updateSSE4(state, msg)
	default:
		updateGeneric(state, msg)
	}
}

func finalize(out []byte, state *[16]uint64) {
	switch {
	case useAVX2:
		finalizeAVX2(out, state)
	case useSSE4:
		finalizeSSE4(out, state)
	default:
		finalizeGeneric(out, state)
	}
}
@@ -0,0 +1,249 @@
// Copyright (c) 2017 Minio Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.

// +build go1.8
// +build amd64 !gccgo !appengine !nacl

#include "textflag.h"

DATA ·consAVX2<>+0x00(SB)/8, $0xdbe6d5d5fe4cce2f
DATA ·consAVX2<>+0x08(SB)/8, $0xa4093822299f31d0
DATA ·consAVX2<>+0x10(SB)/8, $0x13198a2e03707344
DATA ·consAVX2<>+0x18(SB)/8, $0x243f6a8885a308d3
DATA ·consAVX2<>+0x20(SB)/8, $0x3bd39e10cb0ef593
DATA ·consAVX2<>+0x28(SB)/8, $0xc0acf169b5f18a8c
DATA ·consAVX2<>+0x30(SB)/8, $0xbe5466cf34e90c6c
DATA ·consAVX2<>+0x38(SB)/8, $0x452821e638d01377
GLOBL ·consAVX2<>(SB), (NOPTR+RODATA), $64

DATA ·zipperMergeAVX2<>+0x00(SB)/8, $0xf010e05020c03
DATA ·zipperMergeAVX2<>+0x08(SB)/8, $0x70806090d0a040b
DATA ·zipperMergeAVX2<>+0x10(SB)/8, $0xf010e05020c03
DATA ·zipperMergeAVX2<>+0x18(SB)/8, $0x70806090d0a040b
GLOBL ·zipperMergeAVX2<>(SB), (NOPTR+RODATA), $32

#define REDUCE_MOD(x0, x1, x2, x3, tmp0, tmp1, y0, y1) \
	MOVQ $0x3FFFFFFFFFFFFFFF, tmp0 \
	ANDQ tmp0, x3 \
	MOVQ x2, y0 \
	MOVQ x3, y1 \
	\
	MOVQ x2, tmp0 \
	MOVQ x3, tmp1 \
	SHLQ $1, tmp1 \
	SHRQ $63, tmp0 \
	MOVQ tmp1, x3 \
	ORQ tmp0, x3 \
	\
	SHLQ $1, x2 \
	\
	MOVQ y0, tmp0 \
	MOVQ y1, tmp1 \
	SHLQ $2, tmp1 \
	SHRQ $62, tmp0 \
	MOVQ tmp1, y1 \
	ORQ tmp0, y1 \
	\
	SHLQ $2, y0 \
	\
	XORQ x0, y0 \
	XORQ x2, y0 \
	XORQ x1, y1 \
	XORQ x3, y1

#define UPDATE(msg) \
	VPADDQ msg, Y2, Y2 \
	VPADDQ Y3, Y2, Y2 \
	\
	VPSRLQ $32, Y1, Y0 \
	BYTE $0xC5; BYTE $0xFD; BYTE $0xF4; BYTE $0xC2 \ // VPMULUDQ Y2, Y0, Y0
	VPXOR Y0, Y3, Y3 \
	\
	VPADDQ Y4, Y1, Y1 \
	\
	VPSRLQ $32, Y2, Y0 \
	BYTE $0xC5; BYTE $0xFD; BYTE $0xF4; BYTE $0xC1 \ // VPMULUDQ Y1, Y0, Y0
	VPXOR Y0, Y4, Y4 \
	\
	VPSHUFB Y5, Y2, Y0 \
	VPADDQ Y0, Y1, Y1 \
	\
	VPSHUFB Y5, Y1, Y0 \
	VPADDQ Y0, Y2, Y2

// func initializeAVX2(state *[16]uint64, key []byte)
TEXT ·initializeAVX2(SB), 4, $0-32
	MOVQ state+0(FP), AX
	MOVQ key_base+8(FP), BX
	MOVQ $·consAVX2<>(SB), CX

	VMOVDQU 0(BX), Y1
	VPSHUFD $177, Y1, Y2

	VMOVDQU 0(CX), Y3
	VMOVDQU 32(CX), Y4

	VPXOR Y3, Y1, Y1
	VPXOR Y4, Y2, Y2

	VMOVDQU Y1, 0(AX)
	VMOVDQU Y2, 32(AX)
	VMOVDQU Y3, 64(AX)
	VMOVDQU Y4, 96(AX)
	VZEROUPPER
	RET

// func updateAVX2(state *[16]uint64, msg []byte)
TEXT ·updateAVX2(SB), 4, $0-32
	MOVQ state+0(FP), AX
	MOVQ msg_base+8(FP), BX
	MOVQ msg_len+16(FP), CX

	CMPQ CX, $32
	JB   DONE

	VMOVDQU 0(AX), Y1
	VMOVDQU 32(AX), Y2
	VMOVDQU 64(AX), Y3
	VMOVDQU 96(AX), Y4

	VMOVDQU ·zipperMergeAVX2<>(SB), Y5

LOOP:
	VMOVDQU 0(BX), Y0
	UPDATE(Y0)

	ADDQ $32, BX
	SUBQ $32, CX
	JA   LOOP

	VMOVDQU Y1, 0(AX)
	VMOVDQU Y2, 32(AX)
	VMOVDQU Y3, 64(AX)
	VMOVDQU Y4, 96(AX)
	VZEROUPPER

DONE:
	RET

// func finalizeAVX2(out []byte, state *[16]uint64)
TEXT ·finalizeAVX2(SB), 4, $0-32
	MOVQ state+24(FP), AX
	MOVQ out_base+0(FP), BX
	MOVQ out_len+8(FP), CX

	VMOVDQU 0(AX), Y1
	VMOVDQU 32(AX), Y2
	VMOVDQU 64(AX), Y3
	VMOVDQU 96(AX), Y4

	VMOVDQU ·zipperMergeAVX2<>(SB), Y5

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	CMPQ CX, $8
	JE   skipUpdate // Just 4 rounds for 64-bit checksum

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	CMPQ CX, $16
	JE   skipUpdate // 6 rounds for 128-bit checksum

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

	VPERM2I128 $1, Y1, Y1, Y0
	VPSHUFD $177, Y0, Y0
	UPDATE(Y0)

skipUpdate:
	VMOVDQU Y1, 0(AX)
	VMOVDQU Y2, 32(AX)
	VMOVDQU Y3, 64(AX)
	VMOVDQU Y4, 96(AX)
	VZEROUPPER

	CMPQ CX, $8
	JE   hash64
	CMPQ CX, $16
	JE   hash128

	// 256-bit checksum
	MOVQ 0*8(AX), R8
	MOVQ 1*8(AX), R9
	MOVQ 4*8(AX), R10
	MOVQ 5*8(AX), R11
	ADDQ 8*8(AX), R8
	ADDQ 9*8(AX), R9
	ADDQ 12*8(AX), R10
	ADDQ 13*8(AX), R11

	REDUCE_MOD(R8, R9, R10, R11, R12, R13, R14, R15)
	MOVQ R14, 0(BX)
	MOVQ R15, 8(BX)

	MOVQ 2*8(AX), R8
	MOVQ 3*8(AX), R9
	MOVQ 6*8(AX), R10
	MOVQ 7*8(AX), R11
	ADDQ 10*8(AX), R8
	ADDQ 11*8(AX), R9
	ADDQ 14*8(AX), R10
	ADDQ 15*8(AX), R11

	REDUCE_MOD(R8, R9, R10, R11, R12, R13, R14, R15)
	MOVQ R14, 16(BX)
	MOVQ R15, 24(BX)
	RET

hash128:
	MOVQ 0*8(AX), R8
	MOVQ 1*8(AX), R9
	ADDQ 6*8(AX), R8
	ADDQ 7*8(AX), R9
	ADDQ 8*8(AX), R8
	ADDQ 9*8(AX), R9
	ADDQ 14*8(AX), R8
	ADDQ 15*8(AX), R9
	MOVQ R8, 0(BX)
	MOVQ R9, 8(BX)
	RET

hash64:
	MOVQ 0*8(AX), DX
	ADDQ 4*8(AX), DX
	ADDQ 8*8(AX), DX
	ADDQ 12*8(AX), DX
	MOVQ DX, 0(BX)
	RET
@ -0,0 +1,50 @@
|
|||
// Copyright (c) 2017 Minio Inc. All rights reserved.
|
||||
// Use of this source code is governed by a license that can be
|
||||
// found in the LICENSE file.
|
||||
|
||||
// +build !go1.8
|
||||
// +build amd64 !gccgo !appengine !nacl
|
||||
|
||||
package highwayhash
|
||||
|
||||
import "golang.org/x/sys/cpu"
|
||||
|
||||
var (
|
||||
useSSE4 = cpu.X86.HasSSE41
|
||||
useAVX2 = false
|
||||
useNEON = false
|
||||
useVMX = false
|
||||
)
|
||||
|
||||
//go:noescape
|
||||
func initializeSSE4(state *[16]uint64, key []byte)
|
||||
|
||||
//go:noescape
|
||||
func updateSSE4(state *[16]uint64, msg []byte)
|
||||
|
||||
//go:noescape
|
||||
func finalizeSSE4(out []byte, state *[16]uint64)
|
||||
|
||||
func initialize(state *[16]uint64, key []byte) {
|
||||
if useSSE4 {
|
||||
initializeSSE4(state, key)
|
||||
} else {
|
||||
initializeGeneric(state, key)
|
||||
}
|
||||
}
|
||||
|
||||
func update(state *[16]uint64, msg []byte) {
|
||||
if useSSE4 {
|
||||
updateSSE4(state, msg)
|
||||
} else {
|
||||
updateGeneric(state, msg)
|
||||
}
|
||||
}
|
||||
|
||||
func finalize(out []byte, state *[16]uint64) {
|
||||
if useSSE4 {
|
||||
finalizeSSE4(out, state)
|
||||
} else {
|
||||
finalizeGeneric(out, state)
|
||||
}
|
||||
}
|
||||
|
|
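The file above shows the dispatch pattern used across all the platform-specific files in this commit: a package-level feature flag, set from CPU detection at init time, selects between the assembly routine and the portable generic fallback. A self-contained sketch of that pattern (names `useFast`/`updateFast` are illustrative placeholders, not the library's identifiers):

```go
package main

import "fmt"

// useFast stands in for a CPU-feature flag such as cpu.X86.HasSSE41;
// here it is hardwired so the sketch is runnable anywhere.
var useFast = false

// updateFast would be the //go:noescape assembly stub in the real package.
func updateFast(state *[4]uint64, msg []byte) { /* assembly would go here */ }

// updateGeneric is the portable fallback; this toy version just counts bytes.
func updateGeneric(state *[4]uint64, msg []byte) { state[0] += uint64(len(msg)) }

// update dispatches once per call on the feature flag, exactly like the
// update/initialize/finalize wrappers above.
func update(state *[4]uint64, msg []byte) {
	if useFast {
		updateFast(state, msg)
	} else {
		updateGeneric(state, msg)
	}
}

func main() {
	var s [4]uint64
	update(&s, make([]byte, 32))
	fmt.Println(s[0]) // generic path consumed 32 bytes
}
```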
@ -0,0 +1,294 @@
|
|||
// Copyright (c) 2017 Minio Inc. All rights reserved.
|
||||
// Use of this source code is governed by a license that can be
|
||||
// found in the LICENSE file.
|
||||
|
||||
// +build amd64 !gccgo !appengine !nacl
|
||||
|
||||
#include "textflag.h"
|
||||
|
||||
DATA ·cons<>+0x00(SB)/8, $0xdbe6d5d5fe4cce2f
|
||||
DATA ·cons<>+0x08(SB)/8, $0xa4093822299f31d0
|
||||
DATA ·cons<>+0x10(SB)/8, $0x13198a2e03707344
|
||||
DATA ·cons<>+0x18(SB)/8, $0x243f6a8885a308d3
|
||||
DATA ·cons<>+0x20(SB)/8, $0x3bd39e10cb0ef593
|
||||
DATA ·cons<>+0x28(SB)/8, $0xc0acf169b5f18a8c
|
||||
DATA ·cons<>+0x30(SB)/8, $0xbe5466cf34e90c6c
|
||||
DATA ·cons<>+0x38(SB)/8, $0x452821e638d01377
|
||||
GLOBL ·cons<>(SB), (NOPTR+RODATA), $64
|
||||
|
||||
DATA ·zipperMerge<>+0x00(SB)/8, $0xf010e05020c03
|
||||
DATA ·zipperMerge<>+0x08(SB)/8, $0x70806090d0a040b
|
||||
GLOBL ·zipperMerge<>(SB), (NOPTR+RODATA), $16
|
||||
|
||||
#define v00 X0
|
||||
#define v01 X1
|
||||
#define v10 X2
|
||||
#define v11 X3
|
||||
#define m00 X4
|
||||
#define m01 X5
|
||||
#define m10 X6
|
||||
#define m11 X7
|
||||
|
||||
#define t0 X8
|
||||
#define t1 X9
|
||||
#define t2 X10
|
||||
|
||||
#define REDUCE_MOD(x0, x1, x2, x3, tmp0, tmp1, y0, y1) \
|
||||
MOVQ $0x3FFFFFFFFFFFFFFF, tmp0 \
|
||||
ANDQ tmp0, x3 \
|
||||
MOVQ x2, y0 \
|
||||
MOVQ x3, y1 \
|
||||
\
|
||||
MOVQ x2, tmp0 \
|
||||
MOVQ x3, tmp1 \
|
||||
SHLQ $1, tmp1 \
|
||||
SHRQ $63, tmp0 \
|
||||
MOVQ tmp1, x3 \
|
||||
ORQ tmp0, x3 \
|
||||
\
|
||||
SHLQ $1, x2 \
|
||||
\
|
||||
MOVQ y0, tmp0 \
|
||||
MOVQ y1, tmp1 \
|
||||
SHLQ $2, tmp1 \
|
||||
SHRQ $62, tmp0 \
|
||||
MOVQ tmp1, y1 \
|
||||
ORQ tmp0, y1 \
|
||||
\
|
||||
SHLQ $2, y0 \
|
||||
\
|
||||
XORQ x0, y0 \
|
||||
XORQ x2, y0 \
|
||||
XORQ x1, y1 \
|
||||
XORQ x3, y1
|
||||
|
||||
#define UPDATE(msg0, msg1) \
|
||||
PADDQ msg0, v10 \
|
||||
PADDQ m00, v10 \
|
||||
PADDQ msg1, v11 \
|
||||
PADDQ m01, v11 \
|
||||
\
|
||||
MOVO v00, t0 \
|
||||
MOVO v01, t1 \
|
||||
PSRLQ $32, t0 \
|
||||
PSRLQ $32, t1 \
|
||||
PMULULQ v10, t0 \
|
||||
PMULULQ v11, t1 \
|
||||
PXOR t0, m00 \
|
||||
PXOR t1, m01 \
|
||||
\
|
||||
PADDQ m10, v00 \
|
||||
PADDQ m11, v01 \
|
||||
\
|
||||
MOVO v10, t0 \
|
||||
MOVO v11, t1 \
|
||||
PSRLQ $32, t0 \
|
||||
PSRLQ $32, t1 \
|
||||
PMULULQ v00, t0 \
|
||||
PMULULQ v01, t1 \
|
||||
PXOR t0, m10 \
|
||||
PXOR t1, m11 \
|
||||
\
|
||||
MOVO v10, t0 \
|
||||
PSHUFB t2, t0 \
|
||||
MOVO v11, t1 \
|
||||
PSHUFB t2, t1 \
|
||||
PADDQ t0, v00 \
|
||||
PADDQ t1, v01 \
|
||||
\
|
||||
MOVO v00, t0 \
|
||||
PSHUFB t2, t0 \
|
||||
MOVO v01, t1 \
|
||||
PSHUFB t2, t1 \
|
||||
PADDQ t0, v10 \
|
||||
PADDQ t1, v11
|
||||
|
||||
// func initializeSSE4(state *[16]uint64, key []byte)
|
||||
TEXT ·initializeSSE4(SB), 4, $0-32
|
||||
MOVQ state+0(FP), AX
|
||||
MOVQ key_base+8(FP), BX
|
||||
MOVQ $·cons<>(SB), CX
|
||||
|
||||
MOVOU 0(BX), v00
|
||||
MOVOU 16(BX), v01
|
||||
|
||||
PSHUFD $177, v00, v10
|
||||
PSHUFD $177, v01, v11
|
||||
|
||||
MOVOU 0(CX), m00
|
||||
MOVOU 16(CX), m01
|
||||
MOVOU 32(CX), m10
|
||||
MOVOU 48(CX), m11
|
||||
|
||||
PXOR m00, v00
|
||||
PXOR m01, v01
|
||||
PXOR m10, v10
|
||||
PXOR m11, v11
|
||||
|
||||
MOVOU v00, 0(AX)
|
||||
MOVOU v01, 16(AX)
|
||||
MOVOU v10, 32(AX)
|
||||
MOVOU v11, 48(AX)
|
||||
MOVOU m00, 64(AX)
|
||||
MOVOU m01, 80(AX)
|
||||
MOVOU m10, 96(AX)
|
||||
MOVOU m11, 112(AX)
|
||||
RET
|
||||
|
||||
// func updateSSE4(state *[16]uint64, msg []byte)
|
||||
TEXT ·updateSSE4(SB), 4, $0-32
|
||||
MOVQ state+0(FP), AX
|
||||
MOVQ msg_base+8(FP), BX
|
||||
MOVQ msg_len+16(FP), CX
|
||||
|
||||
CMPQ CX, $32
|
||||
JB DONE
|
||||
|
||||
MOVOU 0(AX), v00
|
||||
MOVOU 16(AX), v01
|
||||
MOVOU 32(AX), v10
|
||||
MOVOU 48(AX), v11
|
||||
MOVOU 64(AX), m00
|
||||
MOVOU 80(AX), m01
|
||||
MOVOU 96(AX), m10
|
||||
MOVOU 112(AX), m11
|
||||
|
||||
MOVOU ·zipperMerge<>(SB), t2
|
||||
|
||||
LOOP:
|
||||
MOVOU 0(BX), t0
|
||||
MOVOU 16(BX), t1
|
||||
|
||||
UPDATE(t0, t1)
|
||||
|
||||
ADDQ $32, BX
|
||||
SUBQ $32, CX
|
||||
JA LOOP
|
||||
|
||||
MOVOU v00, 0(AX)
|
||||
MOVOU v01, 16(AX)
|
||||
MOVOU v10, 32(AX)
|
||||
MOVOU v11, 48(AX)
|
||||
MOVOU m00, 64(AX)
|
||||
MOVOU m01, 80(AX)
|
||||
MOVOU m10, 96(AX)
|
||||
MOVOU m11, 112(AX)
|
||||
|
||||
DONE:
|
||||
RET
|
||||
|
||||
// func finalizeSSE4(out []byte, state *[16]uint64)
|
||||
TEXT ·finalizeSSE4(SB), 4, $0-32
|
||||
MOVQ state+24(FP), AX
|
||||
MOVQ out_base+0(FP), BX
|
||||
MOVQ out_len+8(FP), CX
|
||||
|
||||
MOVOU 0(AX), v00
|
||||
MOVOU 16(AX), v01
|
||||
MOVOU 32(AX), v10
|
||||
MOVOU 48(AX), v11
|
||||
MOVOU 64(AX), m00
|
||||
MOVOU 80(AX), m01
|
||||
MOVOU 96(AX), m10
|
||||
MOVOU 112(AX), m11
|
||||
|
||||
MOVOU ·zipperMerge<>(SB), t2
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
CMPQ CX, $8
|
||||
JE skipUpdate // Just 4 rounds for 64-bit checksum
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
CMPQ CX, $16
|
||||
JE skipUpdate // 6 rounds for 128-bit checksum
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
PSHUFD $177, v01, t0
|
||||
PSHUFD $177, v00, t1
|
||||
UPDATE(t0, t1)
|
||||
|
||||
skipUpdate:
|
||||
MOVOU v00, 0(AX)
|
||||
MOVOU v01, 16(AX)
|
||||
MOVOU v10, 32(AX)
|
||||
MOVOU v11, 48(AX)
|
||||
MOVOU m00, 64(AX)
|
||||
MOVOU m01, 80(AX)
|
||||
MOVOU m10, 96(AX)
|
||||
MOVOU m11, 112(AX)
|
||||
|
||||
CMPQ CX, $8
|
||||
JE hash64
|
||||
CMPQ CX, $16
|
||||
JE hash128
|
||||
|
||||
// 256-bit checksum
|
||||
PADDQ v00, m00
|
||||
PADDQ v10, m10
|
||||
PADDQ v01, m01
|
||||
PADDQ v11, m11
|
||||
|
||||
MOVQ m00, R8
|
||||
PEXTRQ $1, m00, R9
|
||||
MOVQ m10, R10
|
||||
PEXTRQ $1, m10, R11
|
||||
REDUCE_MOD(R8, R9, R10, R11, R12, R13, R14, R15)
|
||||
MOVQ R14, 0(BX)
|
||||
MOVQ R15, 8(BX)
|
||||
|
||||
MOVQ m01, R8
|
||||
PEXTRQ $1, m01, R9
|
||||
MOVQ m11, R10
|
||||
PEXTRQ $1, m11, R11
|
||||
REDUCE_MOD(R8, R9, R10, R11, R12, R13, R14, R15)
|
||||
MOVQ R14, 16(BX)
|
||||
MOVQ R15, 24(BX)
|
||||
RET
|
||||
|
||||
hash128:
|
||||
PADDQ v00, v11
|
||||
PADDQ m00, m11
|
||||
PADDQ v11, m11
|
||||
MOVOU m11, 0(BX)
|
||||
RET
|
||||
|
||||
hash64:
|
||||
PADDQ v00, v10
|
||||
PADDQ m00, m10
|
||||
PADDQ v10, m10
|
||||
MOVQ m10, DX
|
||||
MOVQ DX, 0(BX)
|
||||
RET
|
||||
|
|
@ -0,0 +1,33 @@
|
|||
//+build !noasm
|
||||
|
||||
// Copyright (c) 2017 Minio Inc. All rights reserved.
|
||||
// Use of this source code is governed by a license that can be
|
||||
// found in the LICENSE file.
|
||||
|
||||
package highwayhash
|
||||
|
||||
var (
|
||||
useSSE4 = false
|
||||
useAVX2 = false
|
||||
useNEON = true
|
||||
useVMX = false
|
||||
)
|
||||
|
||||
//go:noescape
|
||||
func updateArm64(state *[16]uint64, msg []byte)
|
||||
|
||||
func initialize(state *[16]uint64, key []byte) {
|
||||
initializeGeneric(state, key)
|
||||
}
|
||||
|
||||
func update(state *[16]uint64, msg []byte) {
|
||||
if useNEON {
|
||||
updateArm64(state, msg)
|
||||
} else {
|
||||
updateGeneric(state, msg)
|
||||
}
|
||||
}
|
||||
|
||||
func finalize(out []byte, state *[16]uint64) {
|
||||
finalizeGeneric(out, state)
|
||||
}
|
||||
|
|
@ -0,0 +1,116 @@
|
|||
//+build !noasm !appengine
|
||||
|
||||
//
|
||||
// Minio Cloud Storage, (C) 2017 Minio, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
//
|
||||
|
||||
// Use github.com/minio/asm2plan9s on this file to assemble ARM instructions to
|
||||
// the opcodes of their Plan9 equivalents
|
||||
|
||||
TEXT ·updateArm64(SB), 7, $0
|
||||
MOVD state+0(FP), R0
|
||||
MOVD msg_base+8(FP), R1
|
||||
MOVD msg_len+16(FP), R2 // length of message
|
||||
SUBS $32, R2
|
||||
BMI complete
|
||||
|
||||
// Definition of registers
|
||||
// v0 = v0.lo
|
||||
// v1 = v0.hi
|
||||
// v2 = v1.lo
|
||||
// v3 = v1.hi
|
||||
// v4 = mul0.lo
|
||||
// v5 = mul0.hi
|
||||
// v6 = mul1.lo
|
||||
// v7 = mul1.hi
|
||||
|
||||
// Load constants table pointer
|
||||
MOVD $·constants(SB), R3
|
||||
|
||||
// and load constants into v28, v29, and v30
|
||||
WORD $0x4c40607c // ld1 {v28.16b-v30.16b}, [x3]
|
||||
|
||||
WORD $0x4cdf2c00 // ld1 {v0.2d-v3.2d}, [x0], #64
|
||||
WORD $0x4c402c04 // ld1 {v4.2d-v7.2d}, [x0]
|
||||
SUBS $64, R0
|
||||
|
||||
loop:
|
||||
// Main loop
|
||||
WORD $0x4cdfa83a // ld1 {v26.4s-v27.4s}, [x1], #32
|
||||
|
||||
// Add message
|
||||
WORD $0x4efa8442 // add v2.2d, v2.2d, v26.2d
|
||||
WORD $0x4efb8463 // add v3.2d, v3.2d, v27.2d
|
||||
|
||||
// v1 += mul0
|
||||
WORD $0x4ee48442 // add v2.2d, v2.2d, v4.2d
|
||||
WORD $0x4ee58463 // add v3.2d, v3.2d, v5.2d
|
||||
|
||||
// First pair of multiplies
|
||||
WORD $0x4e1d200a // tbl v10.16b,{v0.16b,v1.16b},v29.16b
|
||||
WORD $0x4e1e204b // tbl v11.16b,{v2.16b,v3.16b},v30.16b
|
||||
WORD $0x2eaac16c // umull v12.2d, v11.2s, v10.2s
|
||||
WORD $0x6eaac16d // umull2 v13.2d, v11.4s, v10.4s
|
||||
|
||||
// v0 += mul1
|
||||
WORD $0x4ee68400 // add v0.2d, v0.2d, v6.2d
|
||||
WORD $0x4ee78421 // add v1.2d, v1.2d, v7.2d
|
||||
|
||||
// Second pair of multiplies
|
||||
WORD $0x4e1d204f // tbl v15.16b,{v2.16b,v3.16b},v29.16b
|
||||
WORD $0x4e1e200e // tbl v14.16b,{v0.16b,v1.16b},v30.16b
|
||||
|
||||
// EOR multiplication result in
|
||||
WORD $0x6e2c1c84 // eor v4.16b,v4.16b,v12.16b
|
||||
WORD $0x6e2d1ca5 // eor v5.16b,v5.16b,v13.16b
|
||||
|
||||
WORD $0x2eaec1f0 // umull v16.2d, v15.2s, v14.2s
|
||||
WORD $0x6eaec1f1 // umull2 v17.2d, v15.4s, v14.4s
|
||||
|
||||
// First pair of zipper-merges
|
||||
WORD $0x4e1c0052 // tbl v18.16b,{v2.16b},v28.16b
|
||||
WORD $0x4ef28400 // add v0.2d, v0.2d, v18.2d
|
||||
WORD $0x4e1c0073 // tbl v19.16b,{v3.16b},v28.16b
|
||||
WORD $0x4ef38421 // add v1.2d, v1.2d, v19.2d
|
||||
|
||||
// Second pair of zipper-merges
|
||||
WORD $0x4e1c0014 // tbl v20.16b,{v0.16b},v28.16b
|
||||
WORD $0x4ef48442 // add v2.2d, v2.2d, v20.2d
|
||||
WORD $0x4e1c0035 // tbl v21.16b,{v1.16b},v28.16b
|
||||
WORD $0x4ef58463 // add v3.2d, v3.2d, v21.2d
|
||||
|
||||
// EOR multiplication result in
|
||||
WORD $0x6e301cc6 // eor v6.16b,v6.16b,v16.16b
|
||||
WORD $0x6e311ce7 // eor v7.16b,v7.16b,v17.16b
|
||||
|
||||
SUBS $32, R2
|
||||
BPL loop
|
||||
|
||||
// Store result
|
||||
WORD $0x4c9f2c00 // st1 {v0.2d-v3.2d}, [x0], #64
|
||||
WORD $0x4c002c04 // st1 {v4.2d-v7.2d}, [x0]
|
||||
|
||||
complete:
|
||||
RET
|
||||
|
||||
// Constants for TBL instructions
|
||||
DATA ·constants+0x0(SB)/8, $0x000f010e05020c03 // zipper merge constant
|
||||
DATA ·constants+0x8(SB)/8, $0x070806090d0a040b
|
||||
DATA ·constants+0x10(SB)/8, $0x0f0e0d0c07060504 // setup first register for multiply
|
||||
DATA ·constants+0x18(SB)/8, $0x1f1e1d1c17161514
|
||||
DATA ·constants+0x20(SB)/8, $0x0b0a090803020100 // setup second register for multiply
|
||||
DATA ·constants+0x28(SB)/8, $0x1b1a191813121110
|
||||
|
||||
GLOBL ·constants(SB), 8, $48
|
||||
|
|
@ -0,0 +1,161 @@
|
|||
// Copyright (c) 2017 Minio Inc. All rights reserved.
|
||||
// Use of this source code is governed by a license that can be
|
||||
// found in the LICENSE file.
|
||||
|
||||
package highwayhash
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
)
|
||||
|
||||
const (
|
||||
v0 = 0
|
||||
v1 = 4
|
||||
mul0 = 8
|
||||
mul1 = 12
|
||||
)
|
||||
|
||||
var (
|
||||
init0 = [4]uint64{0xdbe6d5d5fe4cce2f, 0xa4093822299f31d0, 0x13198a2e03707344, 0x243f6a8885a308d3}
|
||||
init1 = [4]uint64{0x3bd39e10cb0ef593, 0xc0acf169b5f18a8c, 0xbe5466cf34e90c6c, 0x452821e638d01377}
|
||||
)
|
||||
|
||||
func initializeGeneric(state *[16]uint64, k []byte) {
|
||||
var key [4]uint64
|
||||
|
||||
key[0] = binary.LittleEndian.Uint64(k[0:])
|
||||
key[1] = binary.LittleEndian.Uint64(k[8:])
|
||||
key[2] = binary.LittleEndian.Uint64(k[16:])
|
||||
key[3] = binary.LittleEndian.Uint64(k[24:])
|
||||
|
||||
copy(state[mul0:], init0[:])
|
||||
copy(state[mul1:], init1[:])
|
||||
|
||||
for i, k := range key {
|
||||
state[v0+i] = init0[i] ^ k
|
||||
}
|
||||
|
||||
key[0] = key[0]>>32 | key[0]<<32
|
||||
key[1] = key[1]>>32 | key[1]<<32
|
||||
key[2] = key[2]>>32 | key[2]<<32
|
||||
key[3] = key[3]>>32 | key[3]<<32
|
||||
|
||||
for i, k := range key {
|
||||
state[v1+i] = init1[i] ^ k
|
||||
}
|
||||
}
|
||||
|
||||
func updateGeneric(state *[16]uint64, msg []byte) {
|
||||
for len(msg) > 0 {
|
||||
// add message
|
||||
state[v1+0] += binary.LittleEndian.Uint64(msg)
|
||||
state[v1+1] += binary.LittleEndian.Uint64(msg[8:])
|
||||
state[v1+2] += binary.LittleEndian.Uint64(msg[16:])
|
||||
state[v1+3] += binary.LittleEndian.Uint64(msg[24:])
|
||||
|
||||
// v1 += mul0
|
||||
state[v1+0] += state[mul0+0]
|
||||
state[v1+1] += state[mul0+1]
|
||||
state[v1+2] += state[mul0+2]
|
||||
state[v1+3] += state[mul0+3]
|
||||
|
||||
state[mul0+0] ^= uint64(uint32(state[v1+0])) * (state[v0+0] >> 32)
|
||||
state[mul0+1] ^= uint64(uint32(state[v1+1])) * (state[v0+1] >> 32)
|
||||
state[mul0+2] ^= uint64(uint32(state[v1+2])) * (state[v0+2] >> 32)
|
||||
state[mul0+3] ^= uint64(uint32(state[v1+3])) * (state[v0+3] >> 32)
|
||||
|
||||
// v0 += mul1
|
||||
state[v0+0] += state[mul1+0]
|
||||
state[v0+1] += state[mul1+1]
|
||||
state[v0+2] += state[mul1+2]
|
||||
state[v0+3] += state[mul1+3]
|
||||
|
||||
state[mul1+0] ^= uint64(uint32(state[v0+0])) * (state[v1+0] >> 32)
|
||||
state[mul1+1] ^= uint64(uint32(state[v0+1])) * (state[v1+1] >> 32)
|
||||
state[mul1+2] ^= uint64(uint32(state[v0+2])) * (state[v1+2] >> 32)
|
||||
state[mul1+3] ^= uint64(uint32(state[v0+3])) * (state[v1+3] >> 32)
|
||||
|
||||
zipperMerge(state[v1+0], state[v1+1], &state[v0+0], &state[v0+1])
|
||||
zipperMerge(state[v1+2], state[v1+3], &state[v0+2], &state[v0+3])
|
||||
|
||||
zipperMerge(state[v0+0], state[v0+1], &state[v1+0], &state[v1+1])
|
||||
zipperMerge(state[v0+2], state[v0+3], &state[v1+2], &state[v1+3])
|
||||
msg = msg[32:]
|
||||
}
|
||||
}
|
||||
|
||||
func finalizeGeneric(out []byte, state *[16]uint64) {
|
||||
var perm [4]uint64
|
||||
var tmp [32]byte
|
||||
runs := 4
|
||||
if len(out) == 16 {
|
||||
runs = 6
|
||||
} else if len(out) == 32 {
|
||||
runs = 10
|
||||
}
|
||||
for i := 0; i < runs; i++ {
|
||||
perm[0] = state[v0+2]>>32 | state[v0+2]<<32
|
||||
perm[1] = state[v0+3]>>32 | state[v0+3]<<32
|
||||
perm[2] = state[v0+0]>>32 | state[v0+0]<<32
|
||||
perm[3] = state[v0+1]>>32 | state[v0+1]<<32
|
||||
|
||||
binary.LittleEndian.PutUint64(tmp[0:], perm[0])
|
||||
binary.LittleEndian.PutUint64(tmp[8:], perm[1])
|
||||
binary.LittleEndian.PutUint64(tmp[16:], perm[2])
|
||||
binary.LittleEndian.PutUint64(tmp[24:], perm[3])
|
||||
|
||||
update(state, tmp[:])
|
||||
}
|
||||
|
||||
switch len(out) {
|
||||
case 8:
|
||||
binary.LittleEndian.PutUint64(out, state[v0+0]+state[v1+0]+state[mul0+0]+state[mul1+0])
|
||||
case 16:
|
||||
binary.LittleEndian.PutUint64(out, state[v0+0]+state[v1+2]+state[mul0+0]+state[mul1+2])
|
||||
binary.LittleEndian.PutUint64(out[8:], state[v0+1]+state[v1+3]+state[mul0+1]+state[mul1+3])
|
||||
case 32:
|
||||
h0, h1 := reduceMod(state[v0+0]+state[mul0+0], state[v0+1]+state[mul0+1], state[v1+0]+state[mul1+0], state[v1+1]+state[mul1+1])
|
||||
binary.LittleEndian.PutUint64(out[0:], h0)
|
||||
binary.LittleEndian.PutUint64(out[8:], h1)
|
||||
|
||||
h0, h1 = reduceMod(state[v0+2]+state[mul0+2], state[v0+3]+state[mul0+3], state[v1+2]+state[mul1+2], state[v1+3]+state[mul1+3])
|
||||
binary.LittleEndian.PutUint64(out[16:], h0)
|
||||
binary.LittleEndian.PutUint64(out[24:], h1)
|
||||
}
|
||||
}
|
||||
|
||||
func zipperMerge(v0, v1 uint64, d0, d1 *uint64) {
|
||||
m0 := v0 & (0xFF << (2 * 8))
|
||||
m1 := (v1 & (0xFF << (7 * 8))) >> 8
|
||||
m2 := ((v0 & (0xFF << (5 * 8))) + (v1 & (0xFF << (6 * 8)))) >> 16
|
||||
m3 := ((v0 & (0xFF << (3 * 8))) + (v1 & (0xFF << (4 * 8)))) >> 24
|
||||
m4 := (v0 & (0xFF << (1 * 8))) << 32
|
||||
m5 := v0 << 56
|
||||
|
||||
*d0 += m0 + m1 + m2 + m3 + m4 + m5
|
||||
|
||||
m0 = (v0 & (0xFF << (7 * 8))) + (v1 & (0xFF << (2 * 8)))
|
||||
m1 = (v0 & (0xFF << (6 * 8))) >> 8
|
||||
m2 = (v1 & (0xFF << (5 * 8))) >> 16
|
||||
m3 = ((v1 & (0xFF << (3 * 8))) + (v0 & (0xFF << (4 * 8)))) >> 24
|
||||
m4 = (v1 & 0xFF) << 48
|
||||
m5 = (v1 & (0xFF << (1 * 8))) << 24
|
||||
|
||||
*d1 += m3 + m2 + m5 + m1 + m4 + m0
|
||||
}
|
||||
|
||||
// reduce v = [v0, v1, v2, v3] mod the irreducible polynomial x^128 + x^2 + x
|
||||
func reduceMod(v0, v1, v2, v3 uint64) (r0, r1 uint64) {
|
||||
v3 &= 0x3FFFFFFFFFFFFFFF
|
||||
|
||||
r0, r1 = v2, v3
|
||||
|
||||
v3 = (v3 << 1) | (v2 >> (64 - 1))
|
||||
v2 <<= 1
|
||||
r1 = (r1 << 2) | (r0 >> (64 - 2))
|
||||
r0 <<= 2
|
||||
|
||||
r0 ^= v0 ^ v2
|
||||
r1 ^= v1 ^ v3
|
||||
return
|
||||
}
|
||||
|
|
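The `reduceMod` routine above folds a 256-bit value into 128 bits using the relation x^128 ≡ x^2 + x over GF(2): the high 128 bits are shifted left by 1 and by 2 and XORed into the low half. A standalone copy of the same function makes this easy to check; for instance, the input representing exactly 2^128 (v2 = 1, everything else 0) must reduce to x^2 + x = 6:

```go
package main

import "fmt"

// reduceMod, copied from the generic implementation: reduce the 256-bit
// value v0 + v1·2^64 + v2·2^128 + v3·2^192 modulo x^128 + x^2 + x over
// GF(2), after clearing the top two bits of v3.
func reduceMod(v0, v1, v2, v3 uint64) (r0, r1 uint64) {
	v3 &= 0x3FFFFFFFFFFFFFFF
	r0, r1 = v2, v3
	v3 = (v3 << 1) | (v2 >> 63) // high half times x
	v2 <<= 1
	r1 = (r1 << 2) | (r0 >> 62) // high half times x^2
	r0 <<= 2
	r0 ^= v0 ^ v2 // fold both shifted copies into the low half
	r1 ^= v1 ^ v3
	return
}

func main() {
	// x^128 is congruent to x^2 + x, i.e. binary 110 = 6.
	r0, r1 := reduceMod(0, 0, 1, 0)
	fmt.Println(r0, r1) // 6 0

	// A value already below 2^128 passes through unchanged.
	r0, r1 = reduceMod(1, 0, 0, 0)
	fmt.Println(r0, r1) // 1 0
}
```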
@ -0,0 +1,33 @@
|
|||
//+build !noasm
|
||||
|
||||
// Copyright (c) 2017 Minio Inc. All rights reserved.
|
||||
// Use of this source code is governed by a license that can be
|
||||
// found in the LICENSE file.
|
||||
|
||||
package highwayhash
|
||||
|
||||
var (
|
||||
useSSE4 = false
|
||||
useAVX2 = false
|
||||
useNEON = false
|
||||
useVMX = true
|
||||
)
|
||||
|
||||
//go:noescape
|
||||
func updatePpc64Le(state *[16]uint64, msg []byte)
|
||||
|
||||
func initialize(state *[16]uint64, key []byte) {
|
||||
initializeGeneric(state, key)
|
||||
}
|
||||
|
||||
func update(state *[16]uint64, msg []byte) {
|
||||
if useVMX {
|
||||
updatePpc64Le(state, msg)
|
||||
} else {
|
||||
updateGeneric(state, msg)
|
||||
}
|
||||
}
|
||||
|
||||
func finalize(out []byte, state *[16]uint64) {
|
||||
finalizeGeneric(out, state)
|
||||
}
|
||||
|
|
@ -0,0 +1,182 @@
|
|||
//+build !noasm !appengine
|
||||
|
||||
//
|
||||
// Minio Cloud Storage, (C) 2018 Minio, Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
//
|
||||
|
||||
#include "textflag.h"
|
||||
|
||||
// Definition of registers
|
||||
#define V0_LO VS32
|
||||
#define V0_LO_ V0
|
||||
#define V0_HI VS33
|
||||
#define V0_HI_ V1
|
||||
#define V1_LO VS34
|
||||
#define V1_LO_ V2
|
||||
#define V1_HI VS35
|
||||
#define V1_HI_ V3
|
||||
#define MUL0_LO VS36
|
||||
#define MUL0_LO_ V4
|
||||
#define MUL0_HI VS37
|
||||
#define MUL0_HI_ V5
|
||||
#define MUL1_LO VS38
|
||||
#define MUL1_LO_ V6
|
||||
#define MUL1_HI VS39
|
||||
#define MUL1_HI_ V7
|
||||
|
||||
// Message
|
||||
#define MSG_LO VS40
|
||||
#define MSG_LO_ V8
|
||||
#define MSG_HI VS41
|
||||
|
||||
// Constants
|
||||
#define ROTATE VS42
|
||||
#define ROTATE_ V10
|
||||
#define MASK VS43
|
||||
#define MASK_ V11
|
||||
|
||||
// Temps
|
||||
#define TEMP1 VS44
|
||||
#define TEMP1_ V12
|
||||
#define TEMP2 VS45
|
||||
#define TEMP2_ V13
|
||||
#define TEMP3 VS46
|
||||
#define TEMP3_ V14
|
||||
#define TEMP4_ V15
|
||||
#define TEMP5_ V16
|
||||
#define TEMP6_ V17
|
||||
#define TEMP7_ V18
|
||||
|
||||
// Regular registers
|
||||
#define STATE R3
|
||||
#define MSG_BASE R4
|
||||
#define MSG_LEN R5
|
||||
#define CONSTANTS R6
|
||||
#define P1 R7
|
||||
#define P2 R8
|
||||
#define P3 R9
|
||||
#define P4 R10
|
||||
#define P5 R11
|
||||
#define P6 R12
|
||||
#define P7 R14 // avoid using R13
|
||||
|
||||
TEXT ·updatePpc64Le(SB), NOFRAME|NOSPLIT, $0-32
|
||||
MOVD state+0(FP), STATE
|
||||
MOVD msg_base+8(FP), MSG_BASE
|
||||
MOVD msg_len+16(FP), MSG_LEN // length of message
|
||||
|
||||
// Sanity check for length
|
||||
CMPU MSG_LEN, $31
|
||||
BLE complete
|
||||
|
||||
// Setup offsets
|
||||
MOVD $16, P1
|
||||
MOVD $32, P2
|
||||
MOVD $48, P3
|
||||
MOVD $64, P4
|
||||
MOVD $80, P5
|
||||
MOVD $96, P6
|
||||
MOVD $112, P7
|
||||
|
||||
// Load state
|
||||
LXVD2X (STATE)(R0), V0_LO
|
||||
LXVD2X (STATE)(P1), V0_HI
|
||||
LXVD2X (STATE)(P2), V1_LO
|
||||
LXVD2X (STATE)(P3), V1_HI
|
||||
LXVD2X (STATE)(P4), MUL0_LO
|
||||
LXVD2X (STATE)(P5), MUL0_HI
|
||||
LXVD2X (STATE)(P6), MUL1_LO
|
||||
LXVD2X (STATE)(P7), MUL1_HI
|
||||
XXPERMDI V0_LO, V0_LO, $2, V0_LO
|
||||
XXPERMDI V0_HI, V0_HI, $2, V0_HI
|
||||
XXPERMDI V1_LO, V1_LO, $2, V1_LO
|
||||
XXPERMDI V1_HI, V1_HI, $2, V1_HI
|
||||
XXPERMDI MUL0_LO, MUL0_LO, $2, MUL0_LO
|
||||
XXPERMDI MUL0_HI, MUL0_HI, $2, MUL0_HI
|
||||
XXPERMDI MUL1_LO, MUL1_LO, $2, MUL1_LO
|
||||
XXPERMDI MUL1_HI, MUL1_HI, $2, MUL1_HI
|
||||
|
||||
// Load constants table pointer
|
||||
MOVD $·constants(SB), CONSTANTS
|
||||
LXVD2X (CONSTANTS)(R0), ROTATE
|
||||
LXVD2X (CONSTANTS)(P1), MASK
|
||||
XXLNAND MASK, MASK, MASK
|
||||
|
||||
loop:
|
||||
// Main highwayhash update loop
|
||||
LXVD2X (MSG_BASE)(R0), MSG_LO
|
||||
VADDUDM V0_LO_, MUL1_LO_, TEMP1_
|
||||
VRLD V0_LO_, ROTATE_, TEMP2_
|
||||
VADDUDM MUL1_HI_, V0_HI_, TEMP3_
|
||||
LXVD2X (MSG_BASE)(P1), MSG_HI
|
||||
ADD $32, MSG_BASE, MSG_BASE
|
||||
XXPERMDI MSG_LO, MSG_LO, $2, MSG_LO
|
||||
XXPERMDI MSG_HI, MSG_HI, $2, V0_LO
|
||||
VADDUDM MSG_LO_, MUL0_LO_, MSG_LO_
|
||||
VADDUDM V0_LO_, MUL0_HI_, V0_LO_
|
||||
VADDUDM MSG_LO_, V1_LO_, V1_LO_
|
||||
VSRD V0_HI_, ROTATE_, MSG_LO_
|
||||
VADDUDM V0_LO_, V1_HI_, V1_HI_
|
||||
VPERM V1_LO_, V1_LO_, MASK_, V0_LO_
|
||||
VMULOUW V1_LO_, TEMP2_, TEMP2_
|
||||
VPERM V1_HI_, V1_HI_, MASK_, TEMP7_
|
||||
VADDUDM V0_LO_, TEMP1_, V0_LO_
|
||||
VMULOUW V1_HI_, MSG_LO_, MSG_LO_
|
||||
VADDUDM TEMP7_, TEMP3_, V0_HI_
|
||||
VPERM V0_LO_, V0_LO_, MASK_, TEMP6_
|
||||
VRLD V1_LO_, ROTATE_, TEMP4_
|
||||
VSRD V1_HI_, ROTATE_, TEMP5_
|
||||
VPERM V0_HI_, V0_HI_, MASK_, TEMP7_
|
||||
XXLXOR MUL0_LO, TEMP2, MUL0_LO
|
||||
VMULOUW TEMP1_, TEMP4_, TEMP1_
|
||||
VMULOUW TEMP3_, TEMP5_, TEMP3_
|
||||
XXLXOR MUL0_HI, MSG_LO, MUL0_HI
|
||||
XXLXOR MUL1_LO, TEMP1, MUL1_LO
|
||||
XXLXOR MUL1_HI, TEMP3, MUL1_HI
|
||||
VADDUDM TEMP6_, V1_LO_, V1_LO_
|
||||
VADDUDM TEMP7_, V1_HI_, V1_HI_
|
||||
|
||||
SUB $32, MSG_LEN, MSG_LEN
|
||||
CMPU MSG_LEN, $32
|
||||
BGE loop
|
||||
|
||||
// Save state
|
||||
XXPERMDI V0_LO, V0_LO, $2, V0_LO
|
||||
XXPERMDI V0_HI, V0_HI, $2, V0_HI
|
||||
XXPERMDI V1_LO, V1_LO, $2, V1_LO
|
||||
XXPERMDI V1_HI, V1_HI, $2, V1_HI
|
||||
XXPERMDI MUL0_LO, MUL0_LO, $2, MUL0_LO
|
||||
XXPERMDI MUL0_HI, MUL0_HI, $2, MUL0_HI
|
||||
XXPERMDI MUL1_LO, MUL1_LO, $2, MUL1_LO
|
||||
XXPERMDI MUL1_HI, MUL1_HI, $2, MUL1_HI
|
||||
STXVD2X V0_LO, (STATE)(R0)
|
||||
STXVD2X V0_HI, (STATE)(P1)
|
||||
STXVD2X V1_LO, (STATE)(P2)
|
||||
STXVD2X V1_HI, (STATE)(P3)
|
||||
STXVD2X MUL0_LO, (STATE)(P4)
|
||||
STXVD2X MUL0_HI, (STATE)(P5)
|
||||
STXVD2X MUL1_LO, (STATE)(P6)
|
||||
STXVD2X MUL1_HI, (STATE)(P7)
|
||||
|
||||
complete:
|
||||
RET
|
||||
|
||||
// Constants table
|
||||
DATA ·constants+0x0(SB)/8, $0x0000000000000020
|
||||
DATA ·constants+0x8(SB)/8, $0x0000000000000020
|
||||
DATA ·constants+0x10(SB)/8, $0x070806090d0a040b // zipper merge constant
|
||||
DATA ·constants+0x18(SB)/8, $0x000f010e05020c03 // zipper merge constant
|
||||
|
||||
GLOBL ·constants(SB), 8, $32
|
||||
|
|
@ -0,0 +1,28 @@
|
|||
// Copyright (c) 2017 Minio Inc. All rights reserved.
|
||||
// Use of this source code is governed by a license that can be
|
||||
// found in the LICENSE file.
|
||||
|
||||
// +build !amd64
|
||||
// +build !arm64
|
||||
// +build !ppc64le
|
||||
|
||||
package highwayhash
|
||||
|
||||
var (
|
||||
useSSE4 = false
|
||||
useAVX2 = false
|
||||
useNEON = false
|
||||
useVMX = false
|
||||
)
|
||||
|
||||
func initialize(state *[16]uint64, k []byte) {
|
||||
initializeGeneric(state, k)
|
||||
}
|
||||
|
||||
func update(state *[16]uint64, msg []byte) {
|
||||
updateGeneric(state, msg)
|
||||
}
|
||||
|
||||
func finalize(out []byte, state *[16]uint64) {
|
||||
finalizeGeneric(out, state)
|
||||
}
|
||||
|
|
@ -0,0 +1,174 @@
|
|||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
|
@@ -0,0 +1,108 @@
// Copyright © 2014 Steve Francia <spf@spf13.com>.
// Copyright 2013 tsuru authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package afero provides types and methods for interacting with the filesystem,
// as an abstraction layer.

// Afero also provides a few implementations that are mostly interoperable. One that
// uses the operating system filesystem, one that uses memory to store files
// (cross platform) and an interface that should be implemented if you want to
// provide your own filesystem.

package afero

import (
	"errors"
	"io"
	"os"
	"time"
)

type Afero struct {
	Fs
}

// File represents a file in the filesystem.
type File interface {
	io.Closer
	io.Reader
	io.ReaderAt
	io.Seeker
	io.Writer
	io.WriterAt

	Name() string
	Readdir(count int) ([]os.FileInfo, error)
	Readdirnames(n int) ([]string, error)
	Stat() (os.FileInfo, error)
	Sync() error
	Truncate(size int64) error
	WriteString(s string) (ret int, err error)
}

// Fs is the filesystem interface.
//
// Any simulated or real filesystem should implement this interface.
type Fs interface {
	// Create creates a file in the filesystem, returning the file and an
	// error, if any happens.
	Create(name string) (File, error)

	// Mkdir creates a directory in the filesystem, returning an error if any
	// happens.
	Mkdir(name string, perm os.FileMode) error

	// MkdirAll creates a directory path and all parents that do not exist
	// yet.
	MkdirAll(path string, perm os.FileMode) error

	// Open opens a file, returning it or an error, if any happens.
	Open(name string) (File, error)

	// OpenFile opens a file using the given flags and the given mode.
	OpenFile(name string, flag int, perm os.FileMode) (File, error)

	// Remove removes a file identified by name, returning an error, if any
	// happens.
	Remove(name string) error

	// RemoveAll removes a directory path and any children it contains. It
	// does not fail if the path does not exist (returns nil).
	RemoveAll(path string) error

	// Rename renames a file.
	Rename(oldname, newname string) error

	// Stat returns a FileInfo describing the named file, or an error, if any
	// happens.
	Stat(name string) (os.FileInfo, error)

	// The name of this FileSystem
	Name() string

	// Chmod changes the mode of the named file to mode.
	Chmod(name string, mode os.FileMode) error

	// Chtimes changes the access and modification times of the named file.
	Chtimes(name string, atime time.Time, mtime time.Time) error
}

var (
	ErrFileClosed        = errors.New("File is closed")
	ErrOutOfRange        = errors.New("Out of range")
	ErrTooLarge          = errors.New("Too large")
	ErrFileNotFound      = os.ErrNotExist
	ErrFileExists        = os.ErrExist
	ErrDestinationExists = os.ErrExist
)
@@ -0,0 +1,180 @@
package afero

import (
	"os"
	"path/filepath"
	"runtime"
	"strings"
	"time"
)

var _ Lstater = (*BasePathFs)(nil)

// The BasePathFs restricts all operations to a given path within an Fs.
// The given file name to the operations on this Fs will be prepended with
// the base path before calling the base Fs.
// Any file name (after filepath.Clean()) outside this base path will be
// treated as a non-existing file.
//
// Note that it does not clean the error messages on return, so you may
// reveal the real path on errors.
type BasePathFs struct {
	source Fs
	path   string
}

type BasePathFile struct {
	File
	path string
}

func (f *BasePathFile) Name() string {
	sourcename := f.File.Name()
	return strings.TrimPrefix(sourcename, filepath.Clean(f.path))
}

func NewBasePathFs(source Fs, path string) Fs {
	return &BasePathFs{source: source, path: path}
}

// RealPath returns the given file name with the base path prepended;
// on a file outside the base path it returns the given file name and an error.
func (b *BasePathFs) RealPath(name string) (path string, err error) {
	if err := validateBasePathName(name); err != nil {
		return name, err
	}

	bpath := filepath.Clean(b.path)
	path = filepath.Clean(filepath.Join(bpath, name))
	if !strings.HasPrefix(path, bpath) {
		return name, os.ErrNotExist
	}

	return path, nil
}

func validateBasePathName(name string) error {
	if runtime.GOOS != "windows" {
		// Not much to do here;
		// the virtual file paths all look absolute on *nix.
		return nil
	}

	// On Windows a common mistake would be to provide an absolute OS path.
	// We could strip out the base part, but that would not be very portable.
	if filepath.IsAbs(name) {
		return os.ErrNotExist
	}

	return nil
}

func (b *BasePathFs) Chtimes(name string, atime, mtime time.Time) (err error) {
	if name, err = b.RealPath(name); err != nil {
		return &os.PathError{Op: "chtimes", Path: name, Err: err}
	}
	return b.source.Chtimes(name, atime, mtime)
}

func (b *BasePathFs) Chmod(name string, mode os.FileMode) (err error) {
	if name, err = b.RealPath(name); err != nil {
		return &os.PathError{Op: "chmod", Path: name, Err: err}
	}
	return b.source.Chmod(name, mode)
}

func (b *BasePathFs) Name() string {
	return "BasePathFs"
}

func (b *BasePathFs) Stat(name string) (fi os.FileInfo, err error) {
	if name, err = b.RealPath(name); err != nil {
		return nil, &os.PathError{Op: "stat", Path: name, Err: err}
	}
	return b.source.Stat(name)
}

func (b *BasePathFs) Rename(oldname, newname string) (err error) {
	if oldname, err = b.RealPath(oldname); err != nil {
		return &os.PathError{Op: "rename", Path: oldname, Err: err}
	}
	if newname, err = b.RealPath(newname); err != nil {
		return &os.PathError{Op: "rename", Path: newname, Err: err}
	}
	return b.source.Rename(oldname, newname)
}

func (b *BasePathFs) RemoveAll(name string) (err error) {
	if name, err = b.RealPath(name); err != nil {
		return &os.PathError{Op: "remove_all", Path: name, Err: err}
	}
	return b.source.RemoveAll(name)
}

func (b *BasePathFs) Remove(name string) (err error) {
	if name, err = b.RealPath(name); err != nil {
		return &os.PathError{Op: "remove", Path: name, Err: err}
	}
	return b.source.Remove(name)
}

func (b *BasePathFs) OpenFile(name string, flag int, mode os.FileMode) (f File, err error) {
	if name, err = b.RealPath(name); err != nil {
		return nil, &os.PathError{Op: "openfile", Path: name, Err: err}
	}
	sourcef, err := b.source.OpenFile(name, flag, mode)
	if err != nil {
		return nil, err
	}
	return &BasePathFile{File: sourcef, path: b.path}, nil
}

func (b *BasePathFs) Open(name string) (f File, err error) {
	if name, err = b.RealPath(name); err != nil {
		return nil, &os.PathError{Op: "open", Path: name, Err: err}
	}
	sourcef, err := b.source.Open(name)
	if err != nil {
		return nil, err
	}
	return &BasePathFile{File: sourcef, path: b.path}, nil
}

func (b *BasePathFs) Mkdir(name string, mode os.FileMode) (err error) {
	if name, err = b.RealPath(name); err != nil {
		return &os.PathError{Op: "mkdir", Path: name, Err: err}
	}
	return b.source.Mkdir(name, mode)
}

func (b *BasePathFs) MkdirAll(name string, mode os.FileMode) (err error) {
	if name, err = b.RealPath(name); err != nil {
		return &os.PathError{Op: "mkdir", Path: name, Err: err}
	}
	return b.source.MkdirAll(name, mode)
}

func (b *BasePathFs) Create(name string) (f File, err error) {
	if name, err = b.RealPath(name); err != nil {
		return nil, &os.PathError{Op: "create", Path: name, Err: err}
	}
	sourcef, err := b.source.Create(name)
	if err != nil {
		return nil, err
	}
	return &BasePathFile{File: sourcef, path: b.path}, nil
}

func (b *BasePathFs) LstatIfPossible(name string) (os.FileInfo, bool, error) {
	name, err := b.RealPath(name)
	if err != nil {
		return nil, false, &os.PathError{Op: "lstat", Path: name, Err: err}
	}
	if lstater, ok := b.source.(Lstater); ok {
		return lstater.LstatIfPossible(name)
	}
	fi, err := b.source.Stat(name)
	return fi, false, err
}

// vim: ts=4 sw=4 noexpandtab nolist syn=go
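The heart of `BasePathFs.RealPath` above is the clean-join-and-prefix-check that keeps every name inside the sandbox. A minimal stdlib-only sketch of that check follows; `realPath` here is a hypothetical free function for illustration, not part of the afero API, and the `/srv/data` paths are made up:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// realPath mirrors the prefix check in BasePathFs.RealPath: join the
// name onto the base, clean the result, and reject anything that
// escapes the base directory.
func realPath(base, name string) (string, error) {
	bpath := filepath.Clean(base)
	path := filepath.Clean(filepath.Join(bpath, name))
	if !strings.HasPrefix(path, bpath) {
		return name, os.ErrNotExist
	}
	return path, nil
}

func main() {
	p, err := realPath("/srv/data", "logs/app.log")
	fmt.Println(p, err) // /srv/data/logs/app.log <nil>

	// "../../etc/passwd" cleans to /etc/passwd, which escapes the base.
	_, err = realPath("/srv/data", "../../etc/passwd")
	fmt.Println(err != nil) // true
}
```

Note that, as in the vendored code, a plain prefix check can still be confused by sibling directories such as `/srv/data2`; the upstream implementation accepts that trade-off.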
@@ -0,0 +1,290 @@
package afero

import (
	"os"
	"syscall"
	"time"
)

// If the cache duration is 0, cache time will be unlimited, i.e. once
// a file is in the layer, the base will never be read again for this file.
//
// For cache times greater than 0, the modification time of a file is
// checked. Note that a lot of file system implementations only allow a
// resolution of a second for timestamps... or as the godoc for os.Chtimes()
// states: "The underlying filesystem may truncate or round the values to a
// less precise time unit."
//
// This caching union will also forward all write calls to the base file
// system first. To prevent writing to the base Fs, wrap it in a read-only
// filter. Note: this will also make the overlay read-only; for writing files
// in the overlay, use the overlay Fs directly, not via the union Fs.
type CacheOnReadFs struct {
	base      Fs
	layer     Fs
	cacheTime time.Duration
}

func NewCacheOnReadFs(base Fs, layer Fs, cacheTime time.Duration) Fs {
	return &CacheOnReadFs{base: base, layer: layer, cacheTime: cacheTime}
}

type cacheState int

const (
	// not present in the overlay, unknown if it exists in the base:
	cacheMiss cacheState = iota
	// present in the overlay and in the base, base file is newer:
	cacheStale
	// present in the overlay - with cacheTime == 0 it may exist in the base,
	// with cacheTime > 0 it exists in the base and is the same age or newer
	// in the overlay
	cacheHit
	// happens if someone writes directly to the overlay without
	// going through this union
	cacheLocal
)

func (u *CacheOnReadFs) cacheStatus(name string) (state cacheState, fi os.FileInfo, err error) {
	var lfi, bfi os.FileInfo
	lfi, err = u.layer.Stat(name)
	if err == nil {
		if u.cacheTime == 0 {
			return cacheHit, lfi, nil
		}
		if lfi.ModTime().Add(u.cacheTime).Before(time.Now()) {
			bfi, err = u.base.Stat(name)
			if err != nil {
				return cacheLocal, lfi, nil
			}
			if bfi.ModTime().After(lfi.ModTime()) {
				return cacheStale, bfi, nil
			}
		}
		return cacheHit, lfi, nil
	}

	if err == syscall.ENOENT || os.IsNotExist(err) {
		return cacheMiss, nil, nil
	}

	return cacheMiss, nil, err
}

func (u *CacheOnReadFs) copyToLayer(name string) error {
	return copyToLayer(u.base, u.layer, name)
}

func (u *CacheOnReadFs) Chtimes(name string, atime, mtime time.Time) error {
	st, _, err := u.cacheStatus(name)
	if err != nil {
		return err
	}
	switch st {
	case cacheLocal:
	case cacheHit:
		err = u.base.Chtimes(name, atime, mtime)
	case cacheStale, cacheMiss:
		if err := u.copyToLayer(name); err != nil {
			return err
		}
		err = u.base.Chtimes(name, atime, mtime)
	}
	if err != nil {
		return err
	}
	return u.layer.Chtimes(name, atime, mtime)
}

func (u *CacheOnReadFs) Chmod(name string, mode os.FileMode) error {
	st, _, err := u.cacheStatus(name)
	if err != nil {
		return err
	}
	switch st {
	case cacheLocal:
	case cacheHit:
		err = u.base.Chmod(name, mode)
	case cacheStale, cacheMiss:
		if err := u.copyToLayer(name); err != nil {
			return err
		}
		err = u.base.Chmod(name, mode)
	}
	if err != nil {
		return err
	}
	return u.layer.Chmod(name, mode)
}

func (u *CacheOnReadFs) Stat(name string) (os.FileInfo, error) {
	st, fi, err := u.cacheStatus(name)
	if err != nil {
		return nil, err
	}
	switch st {
	case cacheMiss:
		return u.base.Stat(name)
	default: // cacheStale has the base os.FileInfo, cacheHit and cacheLocal the layer's
		return fi, nil
	}
}

func (u *CacheOnReadFs) Rename(oldname, newname string) error {
	st, _, err := u.cacheStatus(oldname)
	if err != nil {
		return err
	}
	switch st {
	case cacheLocal:
	case cacheHit:
		err = u.base.Rename(oldname, newname)
	case cacheStale, cacheMiss:
		if err := u.copyToLayer(oldname); err != nil {
			return err
		}
		err = u.base.Rename(oldname, newname)
	}
	if err != nil {
		return err
	}
	return u.layer.Rename(oldname, newname)
}

func (u *CacheOnReadFs) Remove(name string) error {
	st, _, err := u.cacheStatus(name)
	if err != nil {
		return err
	}
	switch st {
	case cacheLocal:
	case cacheHit, cacheStale, cacheMiss:
		err = u.base.Remove(name)
	}
	if err != nil {
		return err
	}
	return u.layer.Remove(name)
}

func (u *CacheOnReadFs) RemoveAll(name string) error {
	st, _, err := u.cacheStatus(name)
	if err != nil {
		return err
	}
	switch st {
	case cacheLocal:
	case cacheHit, cacheStale, cacheMiss:
		err = u.base.RemoveAll(name)
	}
	if err != nil {
		return err
	}
	return u.layer.RemoveAll(name)
}

func (u *CacheOnReadFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
	st, _, err := u.cacheStatus(name)
	if err != nil {
		return nil, err
	}
	switch st {
	case cacheLocal, cacheHit:
	default:
		if err := u.copyToLayer(name); err != nil {
			return nil, err
		}
	}
	if flag&(os.O_WRONLY|syscall.O_RDWR|os.O_APPEND|os.O_CREATE|os.O_TRUNC) != 0 {
		bfi, err := u.base.OpenFile(name, flag, perm)
		if err != nil {
			return nil, err
		}
		lfi, err := u.layer.OpenFile(name, flag, perm)
		if err != nil {
			bfi.Close() // oops, what if O_TRUNC was set and file opening in the layer failed...?
			return nil, err
		}
		return &UnionFile{Base: bfi, Layer: lfi}, nil
	}
	return u.layer.OpenFile(name, flag, perm)
}

func (u *CacheOnReadFs) Open(name string) (File, error) {
	st, fi, err := u.cacheStatus(name)
	if err != nil {
		return nil, err
	}

	switch st {
	case cacheLocal:
		return u.layer.Open(name)

	case cacheMiss:
		bfi, err := u.base.Stat(name)
		if err != nil {
			return nil, err
		}
		if bfi.IsDir() {
			return u.base.Open(name)
		}
		if err := u.copyToLayer(name); err != nil {
			return nil, err
		}
		return u.layer.Open(name)

	case cacheStale:
		if !fi.IsDir() {
			if err := u.copyToLayer(name); err != nil {
				return nil, err
			}
			return u.layer.Open(name)
		}
	case cacheHit:
		if !fi.IsDir() {
			return u.layer.Open(name)
		}
	}
	// the dirs from cacheHit, cacheStale fall down here:
	bfile, _ := u.base.Open(name)
	lfile, err := u.layer.Open(name)
	if err != nil && bfile == nil {
		return nil, err
	}
	return &UnionFile{Base: bfile, Layer: lfile}, nil
}

func (u *CacheOnReadFs) Mkdir(name string, perm os.FileMode) error {
	err := u.base.Mkdir(name, perm)
	if err != nil {
		return err
	}
	return u.layer.MkdirAll(name, perm) // yes, MkdirAll... we cannot assume it exists in the cache
}

func (u *CacheOnReadFs) Name() string {
	return "CacheOnReadFs"
}

func (u *CacheOnReadFs) MkdirAll(name string, perm os.FileMode) error {
	err := u.base.MkdirAll(name, perm)
	if err != nil {
		return err
	}
	return u.layer.MkdirAll(name, perm)
}

func (u *CacheOnReadFs) Create(name string) (File, error) {
	bfh, err := u.base.Create(name)
	if err != nil {
		return nil, err
	}
	lfh, err := u.layer.Create(name)
	if err != nil {
		// oops, see comment about O_TRUNC above; should we remove? Then we
		// have to remember if the file did not exist before.
		bfh.Close()
		return nil, err
	}
	return &UnionFile{Base: bfh, Layer: lfh}, nil
}
@@ -0,0 +1,22 @@
// Copyright © 2016 Steve Francia <spf@spf13.com>.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// +build darwin openbsd freebsd netbsd dragonfly

package afero

import (
	"syscall"
)

const BADFD = syscall.EBADF
@@ -0,0 +1,25 @@
// Copyright © 2016 Steve Francia <spf@spf13.com>.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// +build !darwin
// +build !openbsd
// +build !freebsd
// +build !dragonfly
// +build !netbsd

package afero

import (
	"syscall"
)

const BADFD = syscall.EBADFD
@ -0,0 +1,293 @@
|
|||
package afero
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"syscall"
|
||||
"time"
|
||||
)
|
||||
|
||||
var _ Lstater = (*CopyOnWriteFs)(nil)
|
||||
|
||||
// The CopyOnWriteFs is a union filesystem: a read only base file system with
|
||||
// a possibly writeable layer on top. Changes to the file system will only
|
||||
// be made in the overlay: Changing an existing file in the base layer which
|
||||
// is not present in the overlay will copy the file to the overlay ("changing"
|
||||
// includes also calls to e.g. Chtimes() and Chmod()).
|
||||
//
|
||||
// Reading directories is currently only supported via Open(), not OpenFile().
|
||||
type CopyOnWriteFs struct {
|
||||
base Fs
|
||||
layer Fs
|
||||
}
|
||||
|
||||
func NewCopyOnWriteFs(base Fs, layer Fs) Fs {
|
||||
return &CopyOnWriteFs{base: base, layer: layer}
|
||||
}
|
||||
|
||||
// Returns true if the file is not in the overlay
|
||||
func (u *CopyOnWriteFs) isBaseFile(name string) (bool, error) {
|
||||
if _, err := u.layer.Stat(name); err == nil {
|
||||
return false, nil
|
||||
}
|
||||
_, err := u.base.Stat(name)
|
||||
if err != nil {
|
||||
if oerr, ok := err.(*os.PathError); ok {
|
||||
if oerr.Err == os.ErrNotExist || oerr.Err == syscall.ENOENT || oerr.Err == syscall.ENOTDIR {
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
if err == syscall.ENOENT {
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
return true, err
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) copyToLayer(name string) error {
|
||||
return copyToLayer(u.base, u.layer, name)
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) Chtimes(name string, atime, mtime time.Time) error {
|
||||
b, err := u.isBaseFile(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if b {
|
||||
if err := u.copyToLayer(name); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return u.layer.Chtimes(name, atime, mtime)
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) Chmod(name string, mode os.FileMode) error {
|
||||
b, err := u.isBaseFile(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if b {
|
||||
if err := u.copyToLayer(name); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return u.layer.Chmod(name, mode)
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) Stat(name string) (os.FileInfo, error) {
|
||||
fi, err := u.layer.Stat(name)
|
||||
if err != nil {
|
||||
isNotExist := u.isNotExist(err)
|
||||
if isNotExist {
|
||||
return u.base.Stat(name)
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
return fi, nil
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) LstatIfPossible(name string) (os.FileInfo, bool, error) {
|
||||
llayer, ok1 := u.layer.(Lstater)
|
||||
lbase, ok2 := u.base.(Lstater)
|
||||
|
||||
if ok1 {
|
||||
fi, b, err := llayer.LstatIfPossible(name)
|
||||
if err == nil {
|
||||
return fi, b, nil
|
||||
}
|
||||
|
||||
if !u.isNotExist(err) {
|
||||
return nil, b, err
|
||||
}
|
||||
}
|
||||
|
||||
if ok2 {
|
||||
fi, b, err := lbase.LstatIfPossible(name)
|
||||
if err == nil {
|
||||
return fi, b, nil
|
||||
}
|
||||
if !u.isNotExist(err) {
|
||||
return nil, b, err
|
||||
}
|
||||
}
|
||||
|
||||
fi, err := u.Stat(name)
|
||||
|
||||
return fi, false, err
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) isNotExist(err error) bool {
|
||||
if e, ok := err.(*os.PathError); ok {
|
||||
err = e.Err
|
||||
}
|
||||
if err == os.ErrNotExist || err == syscall.ENOENT || err == syscall.ENOTDIR {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Renaming files present only in the base layer is not permitted
|
||||
func (u *CopyOnWriteFs) Rename(oldname, newname string) error {
|
||||
b, err := u.isBaseFile(oldname)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if b {
|
||||
return syscall.EPERM
|
||||
}
|
||||
return u.layer.Rename(oldname, newname)
|
||||
}
|
||||
|
||||
// Removing files present only in the base layer is not permitted. If
|
||||
// a file is present in the base layer and the overlay, only the overlay
|
||||
// will be removed.
|
||||
func (u *CopyOnWriteFs) Remove(name string) error {
|
||||
err := u.layer.Remove(name)
|
||||
switch err {
|
||||
case syscall.ENOENT:
|
||||
_, err = u.base.Stat(name)
|
||||
if err == nil {
|
||||
return syscall.EPERM
|
||||
}
|
||||
return syscall.ENOENT
|
||||
default:
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) RemoveAll(name string) error {
|
||||
err := u.layer.RemoveAll(name)
|
||||
switch err {
|
||||
case syscall.ENOENT:
|
||||
_, err = u.base.Stat(name)
|
||||
if err == nil {
|
||||
return syscall.EPERM
|
||||
}
|
||||
return syscall.ENOENT
|
||||
default:
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
func (u *CopyOnWriteFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
|
||||
b, err := u.isBaseFile(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if flag&(os.O_WRONLY|os.O_RDWR|os.O_APPEND|os.O_CREATE|os.O_TRUNC) != 0 {
|
||||
if b {
|
||||
if err = u.copyToLayer(name); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return u.layer.OpenFile(name, flag, perm)
|
||||
}
|
||||
|
||||
dir := filepath.Dir(name)
|
||||
isaDir, err := IsDir(u.base, dir)
|
||||
if err != nil && !os.IsNotExist(err) {
|
||||
return nil, err
|
||||
}
|
||||
if isaDir {
|
||||
			if err = u.layer.MkdirAll(dir, 0777); err != nil {
				return nil, err
			}
			return u.layer.OpenFile(name, flag, perm)
		}

		isaDir, err = IsDir(u.layer, dir)
		if err != nil {
			return nil, err
		}
		if isaDir {
			return u.layer.OpenFile(name, flag, perm)
		}

		return nil, &os.PathError{Op: "open", Path: name, Err: syscall.ENOTDIR} // ...or os.ErrNotExist?
	}
	if b {
		return u.base.OpenFile(name, flag, perm)
	}
	return u.layer.OpenFile(name, flag, perm)
}

// This function handles the 9 different possibilities caused
// by the union which are the intersection of the following...
//  layer: doesn't exist, exists as a file, and exists as a directory
//  base: doesn't exist, exists as a file, and exists as a directory
func (u *CopyOnWriteFs) Open(name string) (File, error) {
	// Since the overlay overrides the base we check that first
	b, err := u.isBaseFile(name)
	if err != nil {
		return nil, err
	}

	// If overlay doesn't exist, return the base (base state irrelevant)
	if b {
		return u.base.Open(name)
	}

	// If overlay is a file, return it (base state irrelevant)
	dir, err := IsDir(u.layer, name)
	if err != nil {
		return nil, err
	}
	if !dir {
		return u.layer.Open(name)
	}

	// Overlay is a directory, base state now matters.
	// Base state has 3 states to check but 2 outcomes:
	// A. It's a file or non-readable in the base (return just the overlay)
	// B. It's an accessible directory in the base (return a UnionFile)

	// If base is file or nonreadable, return overlay
	dir, err = IsDir(u.base, name)
	if !dir || err != nil {
		return u.layer.Open(name)
	}

	// Both base & layer are directories
	// Return union file (if opens are without error)
	bfile, bErr := u.base.Open(name)
	lfile, lErr := u.layer.Open(name)

	// If either have errors at this point something is very wrong. Return nil and the errors
	if bErr != nil || lErr != nil {
		return nil, fmt.Errorf("BaseErr: %v\nOverlayErr: %v", bErr, lErr)
	}

	return &UnionFile{Base: bfile, Layer: lfile}, nil
}

func (u *CopyOnWriteFs) Mkdir(name string, perm os.FileMode) error {
	dir, err := IsDir(u.base, name)
	if err != nil {
		return u.layer.MkdirAll(name, perm)
	}
	if dir {
		return ErrFileExists
	}
	return u.layer.MkdirAll(name, perm)
}

func (u *CopyOnWriteFs) Name() string {
	return "CopyOnWriteFs"
}

func (u *CopyOnWriteFs) MkdirAll(name string, perm os.FileMode) error {
	dir, err := IsDir(u.base, name)
	if err != nil {
		return u.layer.MkdirAll(name, perm)
	}
	if dir {
		// This is in line with how os.MkdirAll behaves.
		return nil
	}
	return u.layer.MkdirAll(name, perm)
}

func (u *CopyOnWriteFs) Create(name string) (File, error) {
	return u.OpenFile(name, os.O_CREATE|os.O_TRUNC|os.O_RDWR, 0666)
}

@@ -0,0 +1,110 @@
// Copyright © 2014 Steve Francia <spf@spf13.com>.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"errors"
	"net/http"
	"os"
	"path"
	"path/filepath"
	"strings"
	"time"
)

type httpDir struct {
	basePath string
	fs       HttpFs
}

func (d httpDir) Open(name string) (http.File, error) {
	if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 ||
		strings.Contains(name, "\x00") {
		return nil, errors.New("http: invalid character in file path")
	}
	dir := string(d.basePath)
	if dir == "" {
		dir = "."
	}

	f, err := d.fs.Open(filepath.Join(dir, filepath.FromSlash(path.Clean("/"+name))))
	if err != nil {
		return nil, err
	}
	return f, nil
}

type HttpFs struct {
	source Fs
}

func NewHttpFs(source Fs) *HttpFs {
	return &HttpFs{source: source}
}

func (h HttpFs) Dir(s string) *httpDir {
	return &httpDir{basePath: s, fs: h}
}

func (h HttpFs) Name() string { return "h HttpFs" }

func (h HttpFs) Create(name string) (File, error) {
	return h.source.Create(name)
}

func (h HttpFs) Chmod(name string, mode os.FileMode) error {
	return h.source.Chmod(name, mode)
}

func (h HttpFs) Chtimes(name string, atime time.Time, mtime time.Time) error {
	return h.source.Chtimes(name, atime, mtime)
}

func (h HttpFs) Mkdir(name string, perm os.FileMode) error {
	return h.source.Mkdir(name, perm)
}

func (h HttpFs) MkdirAll(path string, perm os.FileMode) error {
	return h.source.MkdirAll(path, perm)
}

func (h HttpFs) Open(name string) (http.File, error) {
	f, err := h.source.Open(name)
	if err == nil {
		if httpfile, ok := f.(http.File); ok {
			return httpfile, nil
		}
	}
	return nil, err
}

func (h HttpFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
	return h.source.OpenFile(name, flag, perm)
}

func (h HttpFs) Remove(name string) error {
	return h.source.Remove(name)
}

func (h HttpFs) RemoveAll(path string) error {
	return h.source.RemoveAll(path)
}

func (h HttpFs) Rename(oldname, newname string) error {
	return h.source.Rename(oldname, newname)
}

func (h HttpFs) Stat(name string) (os.FileInfo, error) {
	return h.source.Stat(name)
}
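The interesting line in `httpDir.Open` is the path handling: rooting the request with `path.Clean("/"+name)` neutralizes `..` traversal before joining onto the base directory. A standalone sketch of just that expression (the `sanitize` helper is my own illustrative name; results assume a slash-separated platform):

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

// sanitize reproduces the path expression used by httpDir.Open:
// the request path is rooted via path.Clean("/"+name), so ".."
// segments cannot climb above the base directory.
func sanitize(base, name string) string {
	if base == "" {
		base = "."
	}
	return filepath.Join(base, filepath.FromSlash(path.Clean("/"+name)))
}

func main() {
	fmt.Println(sanitize(".", "../../etc/passwd"))       // etc/passwd — traversal neutralized
	fmt.Println(sanitize("/srv/www", "img/../logo.png")) // /srv/www/logo.png
}
```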

@@ -0,0 +1,230 @@
// Copyright ©2015 The Go Authors
// Copyright ©2015 Steve Francia <spf@spf13.com>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"bytes"
	"io"
	"os"
	"path/filepath"
	"sort"
	"strconv"
	"sync"
	"time"
)

// byName implements sort.Interface.
type byName []os.FileInfo

func (f byName) Len() int           { return len(f) }
func (f byName) Less(i, j int) bool { return f[i].Name() < f[j].Name() }
func (f byName) Swap(i, j int)      { f[i], f[j] = f[j], f[i] }

// ReadDir reads the directory named by dirname and returns
// a list of sorted directory entries.
func (a Afero) ReadDir(dirname string) ([]os.FileInfo, error) {
	return ReadDir(a.Fs, dirname)
}

func ReadDir(fs Fs, dirname string) ([]os.FileInfo, error) {
	f, err := fs.Open(dirname)
	if err != nil {
		return nil, err
	}
	list, err := f.Readdir(-1)
	f.Close()
	if err != nil {
		return nil, err
	}
	sort.Sort(byName(list))
	return list, nil
}

// ReadFile reads the file named by filename and returns the contents.
// A successful call returns err == nil, not err == EOF. Because ReadFile
// reads the whole file, it does not treat an EOF from Read as an error
// to be reported.
func (a Afero) ReadFile(filename string) ([]byte, error) {
	return ReadFile(a.Fs, filename)
}

func ReadFile(fs Fs, filename string) ([]byte, error) {
	f, err := fs.Open(filename)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	// It's a good but not certain bet that FileInfo will tell us exactly how much to
	// read, so let's try it but be prepared for the answer to be wrong.
	var n int64

	if fi, err := f.Stat(); err == nil {
		// Don't preallocate a huge buffer, just in case.
		if size := fi.Size(); size < 1e9 {
			n = size
		}
	}
	// As initial capacity for readAll, use n + a little extra in case Size is zero,
	// and to avoid another allocation after Read has filled the buffer. The readAll
	// call will read into its allocated internal buffer cheaply. If the size was
	// wrong, we'll either waste some space off the end or reallocate as needed, but
	// in the overwhelmingly common case we'll get it just right.
	return readAll(f, n+bytes.MinRead)
}

// readAll reads from r until an error or EOF and returns the data it read
// from the internal buffer allocated with a specified capacity.
func readAll(r io.Reader, capacity int64) (b []byte, err error) {
	buf := bytes.NewBuffer(make([]byte, 0, capacity))
	// If the buffer overflows, we will get bytes.ErrTooLarge.
	// Return that as an error. Any other panic remains.
	defer func() {
		e := recover()
		if e == nil {
			return
		}
		if panicErr, ok := e.(error); ok && panicErr == bytes.ErrTooLarge {
			err = panicErr
		} else {
			panic(e)
		}
	}()
	_, err = buf.ReadFrom(r)
	return buf.Bytes(), err
}

// ReadAll reads from r until an error or EOF and returns the data it read.
// A successful call returns err == nil, not err == EOF. Because ReadAll is
// defined to read from src until EOF, it does not treat an EOF from Read
// as an error to be reported.
func ReadAll(r io.Reader) ([]byte, error) {
	return readAll(r, bytes.MinRead)
}

// WriteFile writes data to a file named by filename.
// If the file does not exist, WriteFile creates it with permissions perm;
// otherwise WriteFile truncates it before writing.
func (a Afero) WriteFile(filename string, data []byte, perm os.FileMode) error {
	return WriteFile(a.Fs, filename, data, perm)
}

func WriteFile(fs Fs, filename string, data []byte, perm os.FileMode) error {
	f, err := fs.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
	if err != nil {
		return err
	}
	n, err := f.Write(data)
	if err == nil && n < len(data) {
		err = io.ErrShortWrite
	}
	if err1 := f.Close(); err == nil {
		err = err1
	}
	return err
}

// Random number state.
// We generate random temporary file names so that there's a good
// chance the file doesn't exist yet - keeps the number of tries in
// TempFile to a minimum.
var rand uint32
var randmu sync.Mutex

func reseed() uint32 {
	return uint32(time.Now().UnixNano() + int64(os.Getpid()))
}

func nextSuffix() string {
	randmu.Lock()
	r := rand
	if r == 0 {
		r = reseed()
	}
	r = r*1664525 + 1013904223 // constants from Numerical Recipes
	rand = r
	randmu.Unlock()
	return strconv.Itoa(int(1e9 + r%1e9))[1:]
}

// TempFile creates a new temporary file in the directory dir
// with a name beginning with prefix, opens the file for reading
// and writing, and returns the resulting *File.
// If dir is the empty string, TempFile uses the default directory
// for temporary files (see os.TempDir).
// Multiple programs calling TempFile simultaneously
// will not choose the same file. The caller can use f.Name()
// to find the pathname of the file. It is the caller's responsibility
// to remove the file when no longer needed.
func (a Afero) TempFile(dir, prefix string) (f File, err error) {
	return TempFile(a.Fs, dir, prefix)
}

func TempFile(fs Fs, dir, prefix string) (f File, err error) {
	if dir == "" {
		dir = os.TempDir()
	}

	nconflict := 0
	for i := 0; i < 10000; i++ {
		name := filepath.Join(dir, prefix+nextSuffix())
		f, err = fs.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
		if os.IsExist(err) {
			if nconflict++; nconflict > 10 {
				randmu.Lock()
				rand = reseed()
				randmu.Unlock()
			}
			continue
		}
		break
	}
	return
}

// TempDir creates a new temporary directory in the directory dir
// with a name beginning with prefix and returns the path of the
// new directory. If dir is the empty string, TempDir uses the
// default directory for temporary files (see os.TempDir).
// Multiple programs calling TempDir simultaneously
// will not choose the same directory. It is the caller's responsibility
// to remove the directory when no longer needed.
func (a Afero) TempDir(dir, prefix string) (name string, err error) {
	return TempDir(a.Fs, dir, prefix)
}

func TempDir(fs Fs, dir, prefix string) (name string, err error) {
	if dir == "" {
		dir = os.TempDir()
	}

	nconflict := 0
	for i := 0; i < 10000; i++ {
		try := filepath.Join(dir, prefix+nextSuffix())
		err = fs.Mkdir(try, 0700)
		if os.IsExist(err) {
			if nconflict++; nconflict > 10 {
				randmu.Lock()
				rand = reseed()
				randmu.Unlock()
			}
			continue
		}
		if err == nil {
			name = try
		}
		break
	}
	return
}
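The temp-name generator above is just a linear congruential step plus a zero-padding trick: `1e9 + r%1e9` always has ten digits, so slicing off the leading `1` yields nine zero-padded digits. A standalone sketch of one step (`suffix` is my own illustrative name; the real code keeps `r` in a mutex-guarded package variable):

```go
package main

import (
	"fmt"
	"strconv"
)

// suffix performs one step of nextSuffix's arithmetic for a given state r:
// an LCG update (constants from Numerical Recipes), then the low nine
// decimal digits, zero-padded via the 1e9+… / [1:] trick.
func suffix(r uint32) (uint32, string) {
	r = r*1664525 + 1013904223
	return r, strconv.Itoa(int(1e9+r%1e9))[1:]
}

func main() {
	r, s := suffix(1)
	fmt.Println(r, s) // 1015568748 015568748
	_, s = suffix(r)
	fmt.Println(len(s)) // always 9 digits, even with leading zeros
}
```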

@@ -0,0 +1,27 @@
// Copyright © 2018 Steve Francia <spf@spf13.com>.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"os"
)

// Lstater is an optional interface in Afero. It is only implemented by the
// filesystems saying so.
// It will call Lstat if the filesystem itself is, or it delegates to, the os filesystem.
// Else it will call Stat.
// In addition to the FileInfo, it will return a boolean telling whether Lstat was called or not.
type Lstater interface {
	LstatIfPossible(name string) (os.FileInfo, bool, error)
}

@@ -0,0 +1,110 @@
// Copyright © 2014 Steve Francia <spf@spf13.com>.
// Copyright 2009 The Go Authors. All rights reserved.

// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"path/filepath"
	"sort"
	"strings"
)

// Glob returns the names of all files matching pattern or nil
// if there is no matching file. The syntax of patterns is the same
// as in Match. The pattern may describe hierarchical names such as
// /usr/*/bin/ed (assuming the Separator is '/').
//
// Glob ignores file system errors such as I/O errors reading directories.
// The only possible returned error is ErrBadPattern, when pattern
// is malformed.
//
// This was adapted from (http://golang.org/pkg/path/filepath) and uses several
// built-ins from that package.
func Glob(fs Fs, pattern string) (matches []string, err error) {
	if !hasMeta(pattern) {
		// Lstat not supported by all filesystems.
		if _, err = lstatIfPossible(fs, pattern); err != nil {
			return nil, nil
		}
		return []string{pattern}, nil
	}

	dir, file := filepath.Split(pattern)
	switch dir {
	case "":
		dir = "."
	case string(filepath.Separator):
		// nothing
	default:
		dir = dir[0 : len(dir)-1] // chop off trailing separator
	}

	if !hasMeta(dir) {
		return glob(fs, dir, file, nil)
	}

	var m []string
	m, err = Glob(fs, dir)
	if err != nil {
		return
	}
	for _, d := range m {
		matches, err = glob(fs, d, file, matches)
		if err != nil {
			return
		}
	}
	return
}

// glob searches for files matching pattern in the directory dir
// and appends them to matches. If the directory cannot be
// opened, it returns the existing matches. New matches are
// added in lexicographical order.
func glob(fs Fs, dir, pattern string, matches []string) (m []string, e error) {
	m = matches
	fi, err := fs.Stat(dir)
	if err != nil {
		return
	}
	if !fi.IsDir() {
		return
	}
	d, err := fs.Open(dir)
	if err != nil {
		return
	}
	defer d.Close()

	names, _ := d.Readdirnames(-1)
	sort.Strings(names)

	for _, n := range names {
		matched, err := filepath.Match(pattern, n)
		if err != nil {
			return m, err
		}
		if matched {
			m = append(m, filepath.Join(dir, n))
		}
	}
	return
}

// hasMeta reports whether path contains any of the magic characters
// recognized by Match.
func hasMeta(path string) bool {
	// TODO(niemeyer): Should other magic characters be added here?
	return strings.IndexAny(path, "*?[") >= 0
}

@@ -0,0 +1,37 @@
// Copyright © 2014 Steve Francia <spf@spf13.com>.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mem

type Dir interface {
	Len() int
	Names() []string
	Files() []*FileData
	Add(*FileData)
	Remove(*FileData)
}

func RemoveFromMemDir(dir *FileData, f *FileData) {
	dir.memDir.Remove(f)
}

func AddToMemDir(dir *FileData, f *FileData) {
	dir.memDir.Add(f)
}

func InitializeDir(d *FileData) {
	if d.memDir == nil {
		d.dir = true
		d.memDir = &DirMap{}
	}
}

@@ -0,0 +1,43 @@
// Copyright © 2015 Steve Francia <spf@spf13.com>.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mem

import "sort"

type DirMap map[string]*FileData

func (m DirMap) Len() int           { return len(m) }
func (m DirMap) Add(f *FileData)    { m[f.name] = f }
func (m DirMap) Remove(f *FileData) { delete(m, f.name) }
func (m DirMap) Files() (files []*FileData) {
	for _, f := range m {
		files = append(files, f)
	}
	sort.Sort(filesSorter(files))
	return files
}

// implement sort.Interface for []*FileData
type filesSorter []*FileData

func (s filesSorter) Len() int           { return len(s) }
func (s filesSorter) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s filesSorter) Less(i, j int) bool { return s[i].name < s[j].name }

func (m DirMap) Names() (names []string) {
	for x := range m {
		names = append(names, x)
	}
	return names
}

@@ -0,0 +1,317 @@
// Copyright © 2015 Steve Francia <spf@spf13.com>.
// Copyright 2013 tsuru authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package mem

import (
	"bytes"
	"errors"
	"io"
	"os"
	"path/filepath"
	"sync"
	"sync/atomic"
)

import "time"

const FilePathSeparator = string(filepath.Separator)

type File struct {
	// atomic requires 64-bit alignment for struct field access
	at           int64
	readDirCount int64
	closed       bool
	readOnly     bool
	fileData     *FileData
}

func NewFileHandle(data *FileData) *File {
	return &File{fileData: data}
}

func NewReadOnlyFileHandle(data *FileData) *File {
	return &File{fileData: data, readOnly: true}
}

func (f File) Data() *FileData {
	return f.fileData
}

type FileData struct {
	sync.Mutex
	name    string
	data    []byte
	memDir  Dir
	dir     bool
	mode    os.FileMode
	modtime time.Time
}

func (d *FileData) Name() string {
	d.Lock()
	defer d.Unlock()
	return d.name
}

func CreateFile(name string) *FileData {
	return &FileData{name: name, mode: os.ModeTemporary, modtime: time.Now()}
}

func CreateDir(name string) *FileData {
	return &FileData{name: name, memDir: &DirMap{}, dir: true}
}

func ChangeFileName(f *FileData, newname string) {
	f.Lock()
	f.name = newname
	f.Unlock()
}

func SetMode(f *FileData, mode os.FileMode) {
	f.Lock()
	f.mode = mode
	f.Unlock()
}

func SetModTime(f *FileData, mtime time.Time) {
	f.Lock()
	setModTime(f, mtime)
	f.Unlock()
}

func setModTime(f *FileData, mtime time.Time) {
	f.modtime = mtime
}

func GetFileInfo(f *FileData) *FileInfo {
	return &FileInfo{f}
}

func (f *File) Open() error {
	atomic.StoreInt64(&f.at, 0)
	atomic.StoreInt64(&f.readDirCount, 0)
	f.fileData.Lock()
	f.closed = false
	f.fileData.Unlock()
	return nil
}

func (f *File) Close() error {
	f.fileData.Lock()
	f.closed = true
	if !f.readOnly {
		setModTime(f.fileData, time.Now())
	}
	f.fileData.Unlock()
	return nil
}

func (f *File) Name() string {
	return f.fileData.Name()
}

func (f *File) Stat() (os.FileInfo, error) {
	return &FileInfo{f.fileData}, nil
}

func (f *File) Sync() error {
	return nil
}

func (f *File) Readdir(count int) (res []os.FileInfo, err error) {
	if !f.fileData.dir {
		return nil, &os.PathError{Op: "readdir", Path: f.fileData.name, Err: errors.New("not a dir")}
	}
	var outLength int64

	f.fileData.Lock()
	files := f.fileData.memDir.Files()[f.readDirCount:]
	if count > 0 {
		if len(files) < count {
			outLength = int64(len(files))
		} else {
			outLength = int64(count)
		}
		if len(files) == 0 {
			err = io.EOF
		}
	} else {
		outLength = int64(len(files))
	}
	f.readDirCount += outLength
	f.fileData.Unlock()

	res = make([]os.FileInfo, outLength)
	for i := range res {
		res[i] = &FileInfo{files[i]}
	}

	return res, err
}

func (f *File) Readdirnames(n int) (names []string, err error) {
	fi, err := f.Readdir(n)
	names = make([]string, len(fi))
	for i, f := range fi {
		_, names[i] = filepath.Split(f.Name())
	}
	return names, err
}

func (f *File) Read(b []byte) (n int, err error) {
	f.fileData.Lock()
	defer f.fileData.Unlock()
	if f.closed == true {
		return 0, ErrFileClosed
	}
	if len(b) > 0 && int(f.at) == len(f.fileData.data) {
		return 0, io.EOF
	}
	if int(f.at) > len(f.fileData.data) {
		return 0, io.ErrUnexpectedEOF
	}
	if len(f.fileData.data)-int(f.at) >= len(b) {
		n = len(b)
	} else {
		n = len(f.fileData.data) - int(f.at)
	}
	copy(b, f.fileData.data[f.at:f.at+int64(n)])
	atomic.AddInt64(&f.at, int64(n))
	return
}

func (f *File) ReadAt(b []byte, off int64) (n int, err error) {
	atomic.StoreInt64(&f.at, off)
	return f.Read(b)
}

func (f *File) Truncate(size int64) error {
	if f.closed == true {
		return ErrFileClosed
	}
	if f.readOnly {
		return &os.PathError{Op: "truncate", Path: f.fileData.name, Err: errors.New("file handle is read only")}
	}
	if size < 0 {
		return ErrOutOfRange
	}
	if size > int64(len(f.fileData.data)) {
		diff := size - int64(len(f.fileData.data))
		f.fileData.data = append(f.fileData.data, bytes.Repeat([]byte{00}, int(diff))...)
	} else {
		f.fileData.data = f.fileData.data[0:size]
	}
	setModTime(f.fileData, time.Now())
	return nil
}

func (f *File) Seek(offset int64, whence int) (int64, error) {
	if f.closed == true {
		return 0, ErrFileClosed
	}
	switch whence {
	case 0:
		atomic.StoreInt64(&f.at, offset)
	case 1:
		atomic.AddInt64(&f.at, int64(offset))
	case 2:
		atomic.StoreInt64(&f.at, int64(len(f.fileData.data))+offset)
	}
	return f.at, nil
}

func (f *File) Write(b []byte) (n int, err error) {
	if f.readOnly {
		return 0, &os.PathError{Op: "write", Path: f.fileData.name, Err: errors.New("file handle is read only")}
	}
	n = len(b)
	cur := atomic.LoadInt64(&f.at)
	f.fileData.Lock()
	defer f.fileData.Unlock()
	diff := cur - int64(len(f.fileData.data))
	var tail []byte
	if n+int(cur) < len(f.fileData.data) {
		tail = f.fileData.data[n+int(cur):]
	}
	if diff > 0 {
		f.fileData.data = append(bytes.Repeat([]byte{00}, int(diff)), b...)
		f.fileData.data = append(f.fileData.data, tail...)
	} else {
		f.fileData.data = append(f.fileData.data[:cur], b...)
		f.fileData.data = append(f.fileData.data, tail...)
	}
	setModTime(f.fileData, time.Now())

	atomic.StoreInt64(&f.at, int64(len(f.fileData.data)))
	return
}

func (f *File) WriteAt(b []byte, off int64) (n int, err error) {
	atomic.StoreInt64(&f.at, off)
	return f.Write(b)
}

func (f *File) WriteString(s string) (ret int, err error) {
	return f.Write([]byte(s))
}

func (f *File) Info() *FileInfo {
	return &FileInfo{f.fileData}
}

type FileInfo struct {
	*FileData
}

// Implements os.FileInfo
func (s *FileInfo) Name() string {
	s.Lock()
	_, name := filepath.Split(s.name)
	s.Unlock()
	return name
}
func (s *FileInfo) Mode() os.FileMode {
	s.Lock()
	defer s.Unlock()
	return s.mode
}
func (s *FileInfo) ModTime() time.Time {
	s.Lock()
	defer s.Unlock()
	return s.modtime
}
func (s *FileInfo) IsDir() bool {
	s.Lock()
	defer s.Unlock()
	return s.dir
}
func (s *FileInfo) Sys() interface{} { return nil }
func (s *FileInfo) Size() int64 {
	if s.IsDir() {
		return int64(42)
	}
	s.Lock()
	defer s.Unlock()
	return int64(len(s.data))
}

var (
	ErrFileClosed        = errors.New("File is closed")
	ErrOutOfRange        = errors.New("Out of range")
	ErrTooLarge          = errors.New("Too large")
	ErrFileNotFound      = os.ErrNotExist
	ErrFileExists        = os.ErrExist
	ErrDestinationExists = os.ErrExist
)

@@ -0,0 +1,365 @@
// Copyright © 2014 Steve Francia <spf@spf13.com>.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"

	"github.com/spf13/afero/mem"
)

type MemMapFs struct {
	mu   sync.RWMutex
	data map[string]*mem.FileData
	init sync.Once
}

func NewMemMapFs() Fs {
	return &MemMapFs{}
}

func (m *MemMapFs) getData() map[string]*mem.FileData {
	m.init.Do(func() {
		m.data = make(map[string]*mem.FileData)
		// Root should always exist, right?
		// TODO: what about windows?
		m.data[FilePathSeparator] = mem.CreateDir(FilePathSeparator)
	})
	return m.data
}

func (*MemMapFs) Name() string { return "MemMapFS" }

func (m *MemMapFs) Create(name string) (File, error) {
	name = normalizePath(name)
	m.mu.Lock()
	file := mem.CreateFile(name)
	m.getData()[name] = file
	m.registerWithParent(file)
	m.mu.Unlock()
	return mem.NewFileHandle(file), nil
}

func (m *MemMapFs) unRegisterWithParent(fileName string) error {
	f, err := m.lockfreeOpen(fileName)
	if err != nil {
		return err
	}
	parent := m.findParent(f)
	if parent == nil {
		log.Panic("parent of ", f.Name(), " is nil")
	}

	parent.Lock()
	mem.RemoveFromMemDir(parent, f)
	parent.Unlock()
	return nil
}

func (m *MemMapFs) findParent(f *mem.FileData) *mem.FileData {
	pdir, _ := filepath.Split(f.Name())
	pdir = filepath.Clean(pdir)
	pfile, err := m.lockfreeOpen(pdir)
	if err != nil {
		return nil
	}
	return pfile
}

func (m *MemMapFs) registerWithParent(f *mem.FileData) {
	if f == nil {
		return
	}
	parent := m.findParent(f)
	if parent == nil {
		pdir := filepath.Dir(filepath.Clean(f.Name()))
		err := m.lockfreeMkdir(pdir, 0777)
		if err != nil {
			//log.Println("Mkdir error:", err)
			return
		}
		parent, err = m.lockfreeOpen(pdir)
		if err != nil {
			//log.Println("Open after Mkdir error:", err)
			return
		}
	}

	parent.Lock()
	mem.InitializeDir(parent)
	mem.AddToMemDir(parent, f)
	parent.Unlock()
}

func (m *MemMapFs) lockfreeMkdir(name string, perm os.FileMode) error {
	name = normalizePath(name)
	x, ok := m.getData()[name]
	if ok {
		// Only return ErrFileExists if it's a file, not a directory.
		i := mem.FileInfo{FileData: x}
		if !i.IsDir() {
			return ErrFileExists
		}
	} else {
		item := mem.CreateDir(name)
		m.getData()[name] = item
		m.registerWithParent(item)
	}
	return nil
}

func (m *MemMapFs) Mkdir(name string, perm os.FileMode) error {
	name = normalizePath(name)

	m.mu.RLock()
	_, ok := m.getData()[name]
	m.mu.RUnlock()
	if ok {
		return &os.PathError{Op: "mkdir", Path: name, Err: ErrFileExists}
	}

	m.mu.Lock()
	item := mem.CreateDir(name)
	m.getData()[name] = item
	m.registerWithParent(item)
	m.mu.Unlock()

	m.Chmod(name, perm|os.ModeDir)

	return nil
}

func (m *MemMapFs) MkdirAll(path string, perm os.FileMode) error {
	err := m.Mkdir(path, perm)
	if err != nil {
		if err.(*os.PathError).Err == ErrFileExists {
			return nil
		}
		return err
	}
	return nil
}

// Handle some relative paths
func normalizePath(path string) string {
	path = filepath.Clean(path)

	switch path {
	case ".":
		return FilePathSeparator
	case "..":
		return FilePathSeparator
	default:
		return path
	}
}
|
||||
|
||||
func (m *MemMapFs) Open(name string) (File, error) {
|
||||
f, err := m.open(name)
|
||||
if f != nil {
|
||||
return mem.NewReadOnlyFileHandle(f), err
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
|
||||
func (m *MemMapFs) openWrite(name string) (File, error) {
|
||||
f, err := m.open(name)
|
||||
if f != nil {
|
||||
return mem.NewFileHandle(f), err
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
|
||||
func (m *MemMapFs) open(name string) (*mem.FileData, error) {
|
||||
name = normalizePath(name)
|
||||
|
||||
m.mu.RLock()
|
||||
f, ok := m.getData()[name]
|
||||
m.mu.RUnlock()
|
||||
if !ok {
|
||||
return nil, &os.PathError{Op: "open", Path: name, Err: ErrFileNotFound}
|
||||
}
|
||||
return f, nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) lockfreeOpen(name string) (*mem.FileData, error) {
|
||||
name = normalizePath(name)
|
||||
f, ok := m.getData()[name]
|
||||
if ok {
|
||||
return f, nil
|
||||
} else {
|
||||
return nil, ErrFileNotFound
|
||||
}
|
||||
}
|
||||
|
||||
func (m *MemMapFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
|
||||
chmod := false
|
||||
file, err := m.openWrite(name)
|
||||
if os.IsNotExist(err) && (flag&os.O_CREATE > 0) {
|
||||
file, err = m.Create(name)
|
||||
chmod = true
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if flag == os.O_RDONLY {
|
||||
file = mem.NewReadOnlyFileHandle(file.(*mem.File).Data())
|
||||
}
|
||||
if flag&os.O_APPEND > 0 {
|
||||
_, err = file.Seek(0, os.SEEK_END)
|
||||
if err != nil {
|
||||
file.Close()
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if flag&os.O_TRUNC > 0 && flag&(os.O_RDWR|os.O_WRONLY) > 0 {
|
||||
err = file.Truncate(0)
|
||||
if err != nil {
|
||||
file.Close()
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if chmod {
|
||||
m.Chmod(name, perm)
|
||||
}
|
||||
return file, nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) Remove(name string) error {
|
||||
name = normalizePath(name)
|
||||
|
||||
m.mu.Lock()
|
||||
defer m.mu.Unlock()
|
||||
|
||||
if _, ok := m.getData()[name]; ok {
|
||||
err := m.unRegisterWithParent(name)
|
||||
if err != nil {
|
||||
return &os.PathError{Op: "remove", Path: name, Err: err}
|
||||
}
|
||||
delete(m.getData(), name)
|
||||
} else {
|
||||
return &os.PathError{Op: "remove", Path: name, Err: os.ErrNotExist}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) RemoveAll(path string) error {
|
||||
path = normalizePath(path)
|
||||
m.mu.Lock()
|
||||
m.unRegisterWithParent(path)
|
||||
m.mu.Unlock()
|
||||
|
||||
m.mu.RLock()
|
||||
defer m.mu.RUnlock()
|
||||
|
||||
for p, _ := range m.getData() {
|
||||
if strings.HasPrefix(p, path) {
|
||||
m.mu.RUnlock()
|
||||
m.mu.Lock()
|
||||
delete(m.getData(), p)
|
||||
m.mu.Unlock()
|
||||
m.mu.RLock()
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) Rename(oldname, newname string) error {
|
||||
oldname = normalizePath(oldname)
|
||||
newname = normalizePath(newname)
|
||||
|
||||
if oldname == newname {
|
||||
return nil
|
||||
}
|
||||
|
||||
m.mu.RLock()
|
||||
defer m.mu.RUnlock()
|
||||
if _, ok := m.getData()[oldname]; ok {
|
||||
m.mu.RUnlock()
|
||||
m.mu.Lock()
|
||||
m.unRegisterWithParent(oldname)
|
||||
fileData := m.getData()[oldname]
|
||||
delete(m.getData(), oldname)
|
||||
mem.ChangeFileName(fileData, newname)
|
||||
m.getData()[newname] = fileData
|
||||
m.registerWithParent(fileData)
|
||||
m.mu.Unlock()
|
||||
m.mu.RLock()
|
||||
} else {
|
||||
return &os.PathError{Op: "rename", Path: oldname, Err: ErrFileNotFound}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) Stat(name string) (os.FileInfo, error) {
|
||||
f, err := m.Open(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fi := mem.GetFileInfo(f.(*mem.File).Data())
|
||||
return fi, nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) Chmod(name string, mode os.FileMode) error {
|
||||
name = normalizePath(name)
|
||||
|
||||
m.mu.RLock()
|
||||
f, ok := m.getData()[name]
|
||||
m.mu.RUnlock()
|
||||
if !ok {
|
||||
return &os.PathError{Op: "chmod", Path: name, Err: ErrFileNotFound}
|
||||
}
|
||||
|
||||
m.mu.Lock()
|
||||
mem.SetMode(f, mode)
|
||||
m.mu.Unlock()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) Chtimes(name string, atime time.Time, mtime time.Time) error {
|
||||
name = normalizePath(name)
|
||||
|
||||
m.mu.RLock()
|
||||
f, ok := m.getData()[name]
|
||||
m.mu.RUnlock()
|
||||
if !ok {
|
||||
return &os.PathError{Op: "chtimes", Path: name, Err: ErrFileNotFound}
|
||||
}
|
||||
|
||||
m.mu.Lock()
|
||||
mem.SetModTime(f, mtime)
|
||||
m.mu.Unlock()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *MemMapFs) List() {
|
||||
for _, x := range m.data {
|
||||
y := mem.FileInfo{FileData: x}
|
||||
fmt.Println(x.Name(), y.Size())
|
||||
}
|
||||
}
|
||||
|
||||
// func debugMemMapList(fs Fs) {
|
||||
// if x, ok := fs.(*MemMapFs); ok {
|
||||
// x.List()
|
||||
// }
|
||||
// }
|
||||
|
|
@ -0,0 +1,101 @@
|
|||
// Copyright © 2014 Steve Francia <spf@spf13.com>.
// Copyright 2013 tsuru authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"os"
	"time"
)

var _ Lstater = (*OsFs)(nil)

// OsFs is a Fs implementation that uses functions provided by the os package.
//
// For details in any method, check the documentation of the os package
// (http://golang.org/pkg/os/).
type OsFs struct{}

func NewOsFs() Fs {
	return &OsFs{}
}

func (OsFs) Name() string { return "OsFs" }

func (OsFs) Create(name string) (File, error) {
	f, e := os.Create(name)
	if f == nil {
		// while this looks strange, we need to return a bare nil (of type nil) not
		// a nil value of type *os.File or nil won't be nil
		return nil, e
	}
	return f, e
}

func (OsFs) Mkdir(name string, perm os.FileMode) error {
	return os.Mkdir(name, perm)
}

func (OsFs) MkdirAll(path string, perm os.FileMode) error {
	return os.MkdirAll(path, perm)
}

func (OsFs) Open(name string) (File, error) {
	f, e := os.Open(name)
	if f == nil {
		// while this looks strange, we need to return a bare nil (of type nil) not
		// a nil value of type *os.File or nil won't be nil
		return nil, e
	}
	return f, e
}

func (OsFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
	f, e := os.OpenFile(name, flag, perm)
	if f == nil {
		// while this looks strange, we need to return a bare nil (of type nil) not
		// a nil value of type *os.File or nil won't be nil
		return nil, e
	}
	return f, e
}

func (OsFs) Remove(name string) error {
	return os.Remove(name)
}

func (OsFs) RemoveAll(path string) error {
	return os.RemoveAll(path)
}

func (OsFs) Rename(oldname, newname string) error {
	return os.Rename(oldname, newname)
}

func (OsFs) Stat(name string) (os.FileInfo, error) {
	return os.Stat(name)
}

func (OsFs) Chmod(name string, mode os.FileMode) error {
	return os.Chmod(name, mode)
}

func (OsFs) Chtimes(name string, atime time.Time, mtime time.Time) error {
	return os.Chtimes(name, atime, mtime)
}

func (OsFs) LstatIfPossible(name string) (os.FileInfo, bool, error) {
	fi, err := os.Lstat(name)
	return fi, true, err
}
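The "bare nil" comments above guard against Go's typed-nil gotcha: a nil `*os.File` stored in an interface value compares as non-nil. A standalone stdlib sketch of exactly that pitfall, with hypothetical helpers `open`/`openBareNil` standing in for the wrapper methods:

```go
package main

import (
	"fmt"
	"os"
)

// file is a minimal interface, like afero's File.
type file interface{ Close() error }

// open returns the pointer directly, so on failure the interface
// holds a typed nil (*os.File)(nil) and compares non-nil.
func open(name string) (file, error) {
	f, err := os.Open(name)
	return f, err
}

// openBareNil mirrors the pattern used in the methods above: return
// an untyped nil when the concrete pointer is nil.
func openBareNil(name string) (file, error) {
	f, err := os.Open(name)
	if f == nil {
		return nil, err
	}
	return f, err
}

func main() {
	f1, _ := open("/no/such/path")
	f2, _ := openBareNil("/no/such/path")
	fmt.Println(f1 == nil) // false: interface holds a typed nil pointer
	fmt.Println(f2 == nil) // true
}
```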
// Copyright ©2015 The Go Authors
// Copyright ©2015 Steve Francia <spf@spf13.com>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"os"
	"path/filepath"
	"sort"
)

// readDirNames reads the directory named by dirname and returns
// a sorted list of directory entries.
// adapted from https://golang.org/src/path/filepath/path.go
func readDirNames(fs Fs, dirname string) ([]string, error) {
	f, err := fs.Open(dirname)
	if err != nil {
		return nil, err
	}
	names, err := f.Readdirnames(-1)
	f.Close()
	if err != nil {
		return nil, err
	}
	sort.Strings(names)
	return names, nil
}

// walk recursively descends path, calling walkFn.
// adapted from https://golang.org/src/path/filepath/path.go
func walk(fs Fs, path string, info os.FileInfo, walkFn filepath.WalkFunc) error {
	err := walkFn(path, info, nil)
	if err != nil {
		if info.IsDir() && err == filepath.SkipDir {
			return nil
		}
		return err
	}

	if !info.IsDir() {
		return nil
	}

	names, err := readDirNames(fs, path)
	if err != nil {
		return walkFn(path, info, err)
	}

	for _, name := range names {
		filename := filepath.Join(path, name)
		fileInfo, err := lstatIfPossible(fs, filename)
		if err != nil {
			if err := walkFn(filename, fileInfo, err); err != nil && err != filepath.SkipDir {
				return err
			}
		} else {
			err = walk(fs, filename, fileInfo, walkFn)
			if err != nil {
				if !fileInfo.IsDir() || err != filepath.SkipDir {
					return err
				}
			}
		}
	}
	return nil
}

// lstatIfPossible uses Lstat if the filesystem supports it, else fs.Stat.
func lstatIfPossible(fs Fs, path string) (os.FileInfo, error) {
	if lfs, ok := fs.(Lstater); ok {
		fi, _, err := lfs.LstatIfPossible(path)
		return fi, err
	}
	return fs.Stat(path)
}

// Walk walks the file tree rooted at root, calling walkFn for each file or
// directory in the tree, including root. All errors that arise visiting files
// and directories are filtered by walkFn. The files are walked in lexical
// order, which makes the output deterministic but means that for very
// large directories Walk can be inefficient.
// Walk does not follow symbolic links.
func (a Afero) Walk(root string, walkFn filepath.WalkFunc) error {
	return Walk(a.Fs, root, walkFn)
}

func Walk(fs Fs, root string, walkFn filepath.WalkFunc) error {
	info, err := lstatIfPossible(fs, root)
	if err != nil {
		return walkFn(root, nil, err)
	}
	return walk(fs, root, info, walkFn)
}
package afero

import (
	"os"
	"syscall"
	"time"
)

var _ Lstater = (*ReadOnlyFs)(nil)

type ReadOnlyFs struct {
	source Fs
}

func NewReadOnlyFs(source Fs) Fs {
	return &ReadOnlyFs{source: source}
}

func (r *ReadOnlyFs) ReadDir(name string) ([]os.FileInfo, error) {
	return ReadDir(r.source, name)
}

func (r *ReadOnlyFs) Chtimes(n string, a, m time.Time) error {
	return syscall.EPERM
}

func (r *ReadOnlyFs) Chmod(n string, m os.FileMode) error {
	return syscall.EPERM
}

func (r *ReadOnlyFs) Name() string {
	return "ReadOnlyFilter"
}

func (r *ReadOnlyFs) Stat(name string) (os.FileInfo, error) {
	return r.source.Stat(name)
}

func (r *ReadOnlyFs) LstatIfPossible(name string) (os.FileInfo, bool, error) {
	if lsf, ok := r.source.(Lstater); ok {
		return lsf.LstatIfPossible(name)
	}
	fi, err := r.Stat(name)
	return fi, false, err
}

func (r *ReadOnlyFs) Rename(o, n string) error {
	return syscall.EPERM
}

func (r *ReadOnlyFs) RemoveAll(p string) error {
	return syscall.EPERM
}

func (r *ReadOnlyFs) Remove(n string) error {
	return syscall.EPERM
}

func (r *ReadOnlyFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
	if flag&(os.O_WRONLY|syscall.O_RDWR|os.O_APPEND|os.O_CREATE|os.O_TRUNC) != 0 {
		return nil, syscall.EPERM
	}
	return r.source.OpenFile(name, flag, perm)
}

func (r *ReadOnlyFs) Open(n string) (File, error) {
	return r.source.Open(n)
}

func (r *ReadOnlyFs) Mkdir(n string, p os.FileMode) error {
	return syscall.EPERM
}

func (r *ReadOnlyFs) MkdirAll(n string, p os.FileMode) error {
	return syscall.EPERM
}

func (r *ReadOnlyFs) Create(n string) (File, error) {
	return nil, syscall.EPERM
}
package afero

import (
	"os"
	"regexp"
	"syscall"
	"time"
)

// RegexpFs filters files (not directories) by regular expression. Only
// files matching the given regexp will be allowed; all others get an
// ENOENT error ("no such file or directory").
type RegexpFs struct {
	re     *regexp.Regexp
	source Fs
}

func NewRegexpFs(source Fs, re *regexp.Regexp) Fs {
	return &RegexpFs{source: source, re: re}
}

type RegexpFile struct {
	f  File
	re *regexp.Regexp
}

func (r *RegexpFs) matchesName(name string) error {
	if r.re == nil {
		return nil
	}
	if r.re.MatchString(name) {
		return nil
	}
	return syscall.ENOENT
}

func (r *RegexpFs) dirOrMatches(name string) error {
	dir, err := IsDir(r.source, name)
	if err != nil {
		return err
	}
	if dir {
		return nil
	}
	return r.matchesName(name)
}

func (r *RegexpFs) Chtimes(name string, a, m time.Time) error {
	if err := r.dirOrMatches(name); err != nil {
		return err
	}
	return r.source.Chtimes(name, a, m)
}

func (r *RegexpFs) Chmod(name string, mode os.FileMode) error {
	if err := r.dirOrMatches(name); err != nil {
		return err
	}
	return r.source.Chmod(name, mode)
}

func (r *RegexpFs) Name() string {
	return "RegexpFs"
}

func (r *RegexpFs) Stat(name string) (os.FileInfo, error) {
	if err := r.dirOrMatches(name); err != nil {
		return nil, err
	}
	return r.source.Stat(name)
}

func (r *RegexpFs) Rename(oldname, newname string) error {
	dir, err := IsDir(r.source, oldname)
	if err != nil {
		return err
	}
	if dir {
		return nil
	}
	if err := r.matchesName(oldname); err != nil {
		return err
	}
	if err := r.matchesName(newname); err != nil {
		return err
	}
	return r.source.Rename(oldname, newname)
}

func (r *RegexpFs) RemoveAll(p string) error {
	dir, err := IsDir(r.source, p)
	if err != nil {
		return err
	}
	if !dir {
		if err := r.matchesName(p); err != nil {
			return err
		}
	}
	return r.source.RemoveAll(p)
}

func (r *RegexpFs) Remove(name string) error {
	if err := r.dirOrMatches(name); err != nil {
		return err
	}
	return r.source.Remove(name)
}

func (r *RegexpFs) OpenFile(name string, flag int, perm os.FileMode) (File, error) {
	if err := r.dirOrMatches(name); err != nil {
		return nil, err
	}
	return r.source.OpenFile(name, flag, perm)
}

func (r *RegexpFs) Open(name string) (File, error) {
	dir, err := IsDir(r.source, name)
	if err != nil {
		return nil, err
	}
	if !dir {
		if err := r.matchesName(name); err != nil {
			return nil, err
		}
	}
	f, err := r.source.Open(name)
	if err != nil {
		// propagate the underlying open error instead of wrapping a nil file
		return nil, err
	}
	return &RegexpFile{f: f, re: r.re}, nil
}

func (r *RegexpFs) Mkdir(n string, p os.FileMode) error {
	return r.source.Mkdir(n, p)
}

func (r *RegexpFs) MkdirAll(n string, p os.FileMode) error {
	return r.source.MkdirAll(n, p)
}

func (r *RegexpFs) Create(name string) (File, error) {
	if err := r.matchesName(name); err != nil {
		return nil, err
	}
	return r.source.Create(name)
}

func (f *RegexpFile) Close() error {
	return f.f.Close()
}

func (f *RegexpFile) Read(s []byte) (int, error) {
	return f.f.Read(s)
}

func (f *RegexpFile) ReadAt(s []byte, o int64) (int, error) {
	return f.f.ReadAt(s, o)
}

func (f *RegexpFile) Seek(o int64, w int) (int64, error) {
	return f.f.Seek(o, w)
}

func (f *RegexpFile) Write(s []byte) (int, error) {
	return f.f.Write(s)
}

func (f *RegexpFile) WriteAt(s []byte, o int64) (int, error) {
	return f.f.WriteAt(s, o)
}

func (f *RegexpFile) Name() string {
	return f.f.Name()
}

func (f *RegexpFile) Readdir(c int) (fi []os.FileInfo, err error) {
	var rfi []os.FileInfo
	rfi, err = f.f.Readdir(c)
	if err != nil {
		return nil, err
	}
	for _, i := range rfi {
		if i.IsDir() || f.re.MatchString(i.Name()) {
			fi = append(fi, i)
		}
	}
	return fi, nil
}

func (f *RegexpFile) Readdirnames(c int) (n []string, err error) {
	fi, err := f.Readdir(c)
	if err != nil {
		return nil, err
	}
	for _, s := range fi {
		n = append(n, s.Name())
	}
	return n, nil
}

func (f *RegexpFile) Stat() (os.FileInfo, error) {
	return f.f.Stat()
}

func (f *RegexpFile) Sync() error {
	return f.f.Sync()
}

func (f *RegexpFile) Truncate(s int64) error {
	return f.f.Truncate(s)
}

func (f *RegexpFile) WriteString(s string) (int, error) {
	return f.f.WriteString(s)
}
package afero

import (
	"io"
	"os"
	"path/filepath"
	"syscall"
)

// The UnionFile implements the afero.File interface and will be returned
// when reading a directory present at least in the overlay, or when opening
// a file for writing.
//
// The calls to Readdir() and Readdirnames() merge the file os.FileInfo /
// names from the base and the overlay - for files present in both layers,
// only those from the overlay will be used.
//
// When opening files for writing (Create() / OpenFile() with the right flags)
// the operations will be done in both layers, starting with the overlay. A
// successful read in the overlay will move the cursor position in the base
// layer by the number of bytes read.
type UnionFile struct {
	Base   File
	Layer  File
	Merger DirsMerger
	off    int
	files  []os.FileInfo
}

func (f *UnionFile) Close() error {
	// first close base, so we have a newer timestamp in the overlay. If we'd close
	// the overlay first, we'd get a cacheStale the next time we access this file
	// -> cache would be useless ;-)
	if f.Base != nil {
		f.Base.Close()
	}
	if f.Layer != nil {
		return f.Layer.Close()
	}
	return BADFD
}

func (f *UnionFile) Read(s []byte) (int, error) {
	if f.Layer != nil {
		n, err := f.Layer.Read(s)
		if (err == nil || err == io.EOF) && f.Base != nil {
			// advance the file position also in the base file, the next
			// call may be a write at this position (or a seek with SEEK_CUR)
			if _, seekErr := f.Base.Seek(int64(n), os.SEEK_CUR); seekErr != nil {
				// only overwrite err in case the seek fails: we need to
				// report an eventual io.EOF to the caller
				err = seekErr
			}
		}
		return n, err
	}
	if f.Base != nil {
		return f.Base.Read(s)
	}
	return 0, BADFD
}

func (f *UnionFile) ReadAt(s []byte, o int64) (int, error) {
	if f.Layer != nil {
		n, err := f.Layer.ReadAt(s, o)
		if (err == nil || err == io.EOF) && f.Base != nil {
			_, err = f.Base.Seek(o+int64(n), os.SEEK_SET)
		}
		return n, err
	}
	if f.Base != nil {
		return f.Base.ReadAt(s, o)
	}
	return 0, BADFD
}

func (f *UnionFile) Seek(o int64, w int) (pos int64, err error) {
	if f.Layer != nil {
		pos, err = f.Layer.Seek(o, w)
		if (err == nil || err == io.EOF) && f.Base != nil {
			_, err = f.Base.Seek(o, w)
		}
		return pos, err
	}
	if f.Base != nil {
		return f.Base.Seek(o, w)
	}
	return 0, BADFD
}

func (f *UnionFile) Write(s []byte) (n int, err error) {
	if f.Layer != nil {
		n, err = f.Layer.Write(s)
		if err == nil && f.Base != nil { // hmm, do we have fixed size files where a write may hit the EOF mark?
			_, err = f.Base.Write(s)
		}
		return n, err
	}
	if f.Base != nil {
		return f.Base.Write(s)
	}
	return 0, BADFD
}

func (f *UnionFile) WriteAt(s []byte, o int64) (n int, err error) {
	if f.Layer != nil {
		n, err = f.Layer.WriteAt(s, o)
		if err == nil && f.Base != nil {
			_, err = f.Base.WriteAt(s, o)
		}
		return n, err
	}
	if f.Base != nil {
		return f.Base.WriteAt(s, o)
	}
	return 0, BADFD
}

func (f *UnionFile) Name() string {
	if f.Layer != nil {
		return f.Layer.Name()
	}
	return f.Base.Name()
}

// DirsMerger is how UnionFile weaves two directories together.
// It takes the FileInfo slices from the layer and the base and returns a
// single view.
type DirsMerger func(lofi, bofi []os.FileInfo) ([]os.FileInfo, error)

var defaultUnionMergeDirsFn = func(lofi, bofi []os.FileInfo) ([]os.FileInfo, error) {
	files := make(map[string]os.FileInfo)

	for _, fi := range lofi {
		files[fi.Name()] = fi
	}

	for _, fi := range bofi {
		if _, exists := files[fi.Name()]; !exists {
			files[fi.Name()] = fi
		}
	}

	rfi := make([]os.FileInfo, len(files))

	i := 0
	for _, fi := range files {
		rfi[i] = fi
		i++
	}

	return rfi, nil
}

// Readdir will weave the two directories together and
// return a single view of the overlaid directories.
// At the end of the directory view, the error is io.EOF.
func (f *UnionFile) Readdir(c int) (ofi []os.FileInfo, err error) {
	var merge DirsMerger = f.Merger
	if merge == nil {
		merge = defaultUnionMergeDirsFn
	}

	if f.off == 0 {
		var lfi []os.FileInfo
		if f.Layer != nil {
			lfi, err = f.Layer.Readdir(-1)
			if err != nil {
				return nil, err
			}
		}

		var bfi []os.FileInfo
		if f.Base != nil {
			bfi, err = f.Base.Readdir(-1)
			if err != nil {
				return nil, err
			}
		}
		merged, err := merge(lfi, bfi)
		if err != nil {
			return nil, err
		}
		f.files = append(f.files, merged...)
	}

	if f.off >= len(f.files) {
		return nil, io.EOF
	}

	if c == -1 {
		return f.files[f.off:], nil
	}

	// Clamp the batch to the remaining entries and slice relative to
	// the current offset, so repeated partial reads page correctly.
	if f.off+c > len(f.files) {
		c = len(f.files) - f.off
	}

	defer func() { f.off += c }()
	return f.files[f.off : f.off+c], nil
}

func (f *UnionFile) Readdirnames(c int) ([]string, error) {
	rfi, err := f.Readdir(c)
	if err != nil {
		return nil, err
	}
	var names []string
	for _, fi := range rfi {
		names = append(names, fi.Name())
	}
	return names, nil
}

func (f *UnionFile) Stat() (os.FileInfo, error) {
	if f.Layer != nil {
		return f.Layer.Stat()
	}
	if f.Base != nil {
		return f.Base.Stat()
	}
	return nil, BADFD
}

func (f *UnionFile) Sync() (err error) {
	if f.Layer != nil {
		err = f.Layer.Sync()
		if err == nil && f.Base != nil {
			err = f.Base.Sync()
		}
		return err
	}
	if f.Base != nil {
		return f.Base.Sync()
	}
	return BADFD
}

func (f *UnionFile) Truncate(s int64) (err error) {
	if f.Layer != nil {
		err = f.Layer.Truncate(s)
		if err == nil && f.Base != nil {
			err = f.Base.Truncate(s)
		}
		return err
	}
	if f.Base != nil {
		return f.Base.Truncate(s)
	}
	return BADFD
}

func (f *UnionFile) WriteString(s string) (n int, err error) {
	if f.Layer != nil {
		n, err = f.Layer.WriteString(s)
		if err == nil && f.Base != nil {
			_, err = f.Base.WriteString(s)
		}
		return n, err
	}
	if f.Base != nil {
		return f.Base.WriteString(s)
	}
	return 0, BADFD
}

func copyToLayer(base Fs, layer Fs, name string) error {
	bfh, err := base.Open(name)
	if err != nil {
		return err
	}
	defer bfh.Close()

	// First make sure the directory exists
	exists, err := Exists(layer, filepath.Dir(name))
	if err != nil {
		return err
	}
	if !exists {
		err = layer.MkdirAll(filepath.Dir(name), 0777) // FIXME?
		if err != nil {
			return err
		}
	}

	// Create the file on the overlay
	lfh, err := layer.Create(name)
	if err != nil {
		return err
	}
	n, err := io.Copy(lfh, bfh)
	if err != nil {
		// If anything fails, clean up the file
		layer.Remove(name)
		lfh.Close()
		return err
	}

	bfi, err := bfh.Stat()
	if err != nil || bfi.Size() != n {
		layer.Remove(name)
		lfh.Close()
		return syscall.EIO
	}

	err = lfh.Close()
	if err != nil {
		layer.Remove(name)
		lfh.Close()
		return err
	}
	return layer.Chtimes(name, bfi.ModTime(), bfi.ModTime())
}
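The merge rule in `defaultUnionMergeDirsFn` is "overlay wins": layer entries are taken first, base entries are added only when no entry of the same name exists. A standalone stdlib sketch of the same rule over plain names (`mergeNames` is illustrative; the real function merges `os.FileInfo` values and map iteration order is not deterministic, so this sketch sorts for stable output):

```go
package main

import (
	"fmt"
	"sort"
)

// mergeNames mirrors defaultUnionMergeDirsFn over plain names:
// overlay (layer) entries win; base entries are kept only when the
// overlay has no entry of the same name.
func mergeNames(layer, base []string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, n := range layer {
		if !seen[n] {
			seen[n] = true
			out = append(out, n)
		}
	}
	for _, n := range base {
		if !seen[n] {
			seen[n] = true
			out = append(out, n)
		}
	}
	sort.Strings(out) // stable output for the demo
	return out
}

func main() {
	layer := []string{"config.yaml", "notes.txt"}
	base := []string{"config.yaml", "data.bin"}
	fmt.Println(mergeNames(layer, base)) // [config.yaml data.bin notes.txt]
}
```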
// Copyright ©2015 Steve Francia <spf@spf13.com>
// Portions Copyright ©2015 The Hugo Authors
// Portions Copyright 2016-present Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package afero

import (
	"bytes"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
	"unicode"

	"golang.org/x/text/transform"
	"golang.org/x/text/unicode/norm"
)

// FilePathSeparator is the filepath separator defined by filepath.Separator.
const FilePathSeparator = string(filepath.Separator)

// WriteReader takes a reader and a path and writes the content.
func (a Afero) WriteReader(path string, r io.Reader) (err error) {
	return WriteReader(a.Fs, path, r)
}

func WriteReader(fs Fs, path string, r io.Reader) (err error) {
	dir, _ := filepath.Split(path)
	ospath := filepath.FromSlash(dir)

	if ospath != "" {
		err = fs.MkdirAll(ospath, 0777) // rwx, rwx, rwx
		if err != nil {
			if err != os.ErrExist {
				return err
			}
		}
	}

	file, err := fs.Create(path)
	if err != nil {
		return
	}
	defer file.Close()

	_, err = io.Copy(file, r)
	return
}

// SafeWriteReader is the same as WriteReader but checks to see if the
// file/directory already exists.
func (a Afero) SafeWriteReader(path string, r io.Reader) (err error) {
	return SafeWriteReader(a.Fs, path, r)
}

func SafeWriteReader(fs Fs, path string, r io.Reader) (err error) {
	dir, _ := filepath.Split(path)
	ospath := filepath.FromSlash(dir)

	if ospath != "" {
		err = fs.MkdirAll(ospath, 0777) // rwx, rwx, rwx
		if err != nil {
			return
		}
	}

	exists, err := Exists(fs, path)
	if err != nil {
		return
	}
	if exists {
		return fmt.Errorf("%v already exists", path)
	}

	file, err := fs.Create(path)
	if err != nil {
		return
	}
	defer file.Close()

	_, err = io.Copy(file, r)
	return
}

func (a Afero) GetTempDir(subPath string) string {
	return GetTempDir(a.Fs, subPath)
}

// GetTempDir returns the default temp directory with a trailing slash.
// If subPath is not empty, it will be created recursively with mode 777 (rwx rwx rwx).
func GetTempDir(fs Fs, subPath string) string {
	addSlash := func(p string) string {
		if FilePathSeparator != p[len(p)-1:] {
			p = p + FilePathSeparator
		}
		return p
	}
	dir := addSlash(os.TempDir())

	if subPath != "" {
		// preserve windows backslash :-(
		if FilePathSeparator == "\\" {
			subPath = strings.Replace(subPath, "\\", "____", -1)
		}
		dir = dir + UnicodeSanitize(subPath)
		if FilePathSeparator == "\\" {
			dir = strings.Replace(dir, "____", "\\", -1)
		}

		if exists, _ := Exists(fs, dir); exists {
			return addSlash(dir)
		}

		err := fs.MkdirAll(dir, 0777)
		if err != nil {
			panic(err)
		}
		dir = addSlash(dir)
	}
	return dir
}
}
|
||||
|
||||
// Rewrite string to remove non-standard path characters
|
||||
func UnicodeSanitize(s string) string {
|
||||
source := []rune(s)
|
||||
target := make([]rune, 0, len(source))
|
||||
|
||||
for _, r := range source {
|
||||
if unicode.IsLetter(r) ||
|
||||
unicode.IsDigit(r) ||
|
||||
unicode.IsMark(r) ||
|
||||
r == '.' ||
|
||||
r == '/' ||
|
||||
r == '\\' ||
|
||||
r == '_' ||
|
||||
r == '-' ||
|
||||
r == '%' ||
|
||||
r == ' ' ||
|
||||
r == '#' {
|
||||
target = append(target, r)
|
||||
}
|
||||
}
|
||||
|
||||
return string(target)
|
||||
}
|
||||
|
||||
// Transform characters with accents into plain forms.
|
||||
func NeuterAccents(s string) string {
|
||||
t := transform.Chain(norm.NFD, transform.RemoveFunc(isMn), norm.NFC)
|
||||
result, _, _ := transform.String(t, string(s))
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
func isMn(r rune) bool {
|
||||
return unicode.Is(unicode.Mn, r) // Mn: nonspacing marks
|
||||
}
|
||||
|
||||
func (a Afero) FileContainsBytes(filename string, subslice []byte) (bool, error) {
|
||||
return FileContainsBytes(a.Fs, filename, subslice)
|
||||
}
|
||||
|
||||
// Check if a file contains a specified byte slice.
|
||||
func FileContainsBytes(fs Fs, filename string, subslice []byte) (bool, error) {
|
||||
f, err := fs.Open(filename)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
return readerContainsAny(f, subslice), nil
|
||||
}
|
||||
|
||||
func (a Afero) FileContainsAnyBytes(filename string, subslices [][]byte) (bool, error) {
|
||||
return FileContainsAnyBytes(a.Fs, filename, subslices)
|
||||
}
|
||||
|
||||
// Check if a file contains any of the specified byte slices.
|
||||
func FileContainsAnyBytes(fs Fs, filename string, subslices [][]byte) (bool, error) {
|
||||
f, err := fs.Open(filename)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
return readerContainsAny(f, subslices...), nil
|
||||
}
|
||||
|
||||
// readerContains reports whether any of the subslices is within r.
|
||||
func readerContainsAny(r io.Reader, subslices ...[]byte) bool {
|
||||
|
||||
if r == nil || len(subslices) == 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
largestSlice := 0
|
||||
|
||||
for _, sl := range subslices {
|
||||
if len(sl) > largestSlice {
|
||||
largestSlice = len(sl)
|
||||
}
|
||||
}
|
||||
|
||||
if largestSlice == 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
bufflen := largestSlice * 4
|
||||
halflen := bufflen / 2
|
||||
buff := make([]byte, bufflen)
|
||||
var err error
|
||||
var n, i int
|
||||
|
||||
for {
|
||||
i++
|
||||
if i == 1 {
|
||||
n, err = io.ReadAtLeast(r, buff[:halflen], halflen)
|
||||
} else {
|
||||
if i != 2 {
|
||||
// shift left to catch overlapping matches
|
||||
copy(buff[:], buff[halflen:])
|
||||
}
|
||||
n, err = io.ReadAtLeast(r, buff[halflen:], halflen)
|
||||
}
|
||||
|
||||
if n > 0 {
|
||||
for _, sl := range subslices {
|
||||
if bytes.Contains(buff, sl) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (a Afero) DirExists(path string) (bool, error) {
|
||||
return DirExists(a.Fs, path)
|
||||
}
|
||||
|
||||
// DirExists checks if a path exists and is a directory.
|
||||
func DirExists(fs Fs, path string) (bool, error) {
|
||||
fi, err := fs.Stat(path)
|
||||
if err == nil && fi.IsDir() {
|
||||
return true, nil
|
||||
}
|
||||
if os.IsNotExist(err) {
|
||||
return false, nil
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
|
||||
func (a Afero) IsDir(path string) (bool, error) {
|
||||
return IsDir(a.Fs, path)
|
||||
}
|
||||
|
||||
// IsDir checks if a given path is a directory.
|
||||
func IsDir(fs Fs, path string) (bool, error) {
|
||||
fi, err := fs.Stat(path)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
return fi.IsDir(), nil
|
||||
}
|
||||
|
||||
func (a Afero) IsEmpty(path string) (bool, error) {
|
||||
return IsEmpty(a.Fs, path)
|
||||
}
|
||||
|
||||
// IsEmpty checks if a given file or directory is empty.
|
||||
func IsEmpty(fs Fs, path string) (bool, error) {
|
||||
if b, _ := Exists(fs, path); !b {
|
||||
return false, fmt.Errorf("%q path does not exist", path)
|
||||
}
|
||||
fi, err := fs.Stat(path)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if fi.IsDir() {
|
||||
f, err := fs.Open(path)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
defer f.Close()
|
||||
list, err := f.Readdir(-1)
|
||||
return len(list) == 0, nil
|
||||
}
|
||||
return fi.Size() == 0, nil
|
||||
}
|
||||
|
||||
func (a Afero) Exists(path string) (bool, error) {
|
||||
return Exists(a.Fs, path)
|
||||
}
|
||||
|
||||
// Check if a file or directory exists.
|
||||
func Exists(fs Fs, path string) (bool, error) {
|
||||
_, err := fs.Stat(path)
|
||||
if err == nil {
|
||||
return true, nil
|
||||
}
|
||||
if os.IsNotExist(err) {
|
||||
return false, nil
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
|
||||
func FullBaseFsPath(basePathFs *BasePathFs, relativePath string) string {
|
||||
combinedPath := filepath.Join(basePathFs.path, relativePath)
|
||||
if parent, ok := basePathFs.source.(*BasePathFs); ok {
|
||||
return FullBaseFsPath(parent, combinedPath)
|
||||
}
|
||||
|
||||
return combinedPath
|
||||
}
|
||||
|
|
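For illustration, the character whitelist in `UnicodeSanitize` above can be sketched as a standalone snippet. The function and sample strings here are hypothetical stand-ins, not part of the vendored package:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// unicodeSanitize mirrors the filter above: keep letters, digits, Unicode
// marks, and a small whitelist of path-safe punctuation; drop everything else.
func unicodeSanitize(s string) string {
	target := make([]rune, 0, len(s))
	for _, r := range s {
		if unicode.IsLetter(r) || unicode.IsDigit(r) || unicode.IsMark(r) ||
			strings.ContainsRune(`./\_-% #`, r) {
			target = append(target, r)
		}
	}
	return string(target)
}

func main() {
	// '?' and '*' are not in the whitelist and are dropped.
	fmt.Println(unicodeSanitize("tmp/sub?dir*")) // tmp/subdir
}
```

Note that slashes and backslashes survive the filter, which is why `GetTempDir` temporarily replaces Windows backslashes before sanitizing.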
@ -0,0 +1,38 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package cpu implements processor feature detection for
// various CPU architectures.
package cpu

// CacheLinePad is used to pad structs to avoid false sharing.
type CacheLinePad struct{ _ [cacheLineSize]byte }

// X86 contains the supported CPU features of the
// current X86/AMD64 platform. If the current platform
// is not X86/AMD64 then all feature flags are false.
//
// X86 is padded to avoid false sharing. Further the HasAVX
// and HasAVX2 are only set if the OS supports XMM and YMM
// registers in addition to the CPUID feature bit being set.
var X86 struct {
	_            CacheLinePad
	HasAES       bool // AES hardware implementation (AES NI)
	HasADX       bool // Multi-precision add-carry instruction extensions
	HasAVX       bool // Advanced vector extension
	HasAVX2      bool // Advanced vector extension 2
	HasBMI1      bool // Bit manipulation instruction set 1
	HasBMI2      bool // Bit manipulation instruction set 2
	HasERMS      bool // Enhanced REP for MOVSB and STOSB
	HasFMA       bool // Fused-multiply-add instructions
	HasOSXSAVE   bool // OS supports XSAVE/XRESTOR for saving/restoring XMM registers.
	HasPCLMULQDQ bool // PCLMULQDQ instruction - most often used for AES-GCM
	HasPOPCNT    bool // Hamming weight instruction POPCNT.
	HasSSE2      bool // Streaming SIMD extension 2 (always available on amd64)
	HasSSE3      bool // Streaming SIMD extension 3
	HasSSSE3     bool // Supplemental streaming SIMD extension 3
	HasSSE41     bool // Streaming SIMD extension 4 and 4.1
	HasSSE42     bool // Streaming SIMD extension 4 and 4.2
	_            CacheLinePad
}
@ -0,0 +1,7 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package cpu

const cacheLineSize = 32
@ -0,0 +1,7 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package cpu

const cacheLineSize = 64
@ -0,0 +1,16 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build 386 amd64 amd64p32
// +build !gccgo

package cpu

// cpuid is implemented in cpu_x86.s for gc compiler
// and in cpu_gccgo.c for gccgo.
func cpuid(eaxArg, ecxArg uint32) (eax, ebx, ecx, edx uint32)

// xgetbv with ecx = 0 is implemented in cpu_x86.s for gc compiler
// and in cpu_gccgo.c for gccgo.
func xgetbv() (eax, edx uint32)
@ -0,0 +1,43 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build 386 amd64 amd64p32
// +build gccgo

#include <cpuid.h>
#include <stdint.h>

// Need to wrap __get_cpuid_count because it's declared as static.
int
gccgoGetCpuidCount(uint32_t leaf, uint32_t subleaf,
                   uint32_t *eax, uint32_t *ebx,
                   uint32_t *ecx, uint32_t *edx)
{
	return __get_cpuid_count(leaf, subleaf, eax, ebx, ecx, edx);
}

// xgetbv reads the contents of an XCR (Extended Control Register)
// specified in the ECX register into registers EDX:EAX.
// Currently, the only supported value for XCR is 0.
//
// TODO: Replace with a better alternative:
//
//     #include <xsaveintrin.h>
//
//     #pragma GCC target("xsave")
//
//     void gccgoXgetbv(uint32_t *eax, uint32_t *edx) {
//       unsigned long long x = _xgetbv(0);
//       *eax = x & 0xffffffff;
//       *edx = (x >> 32) & 0xffffffff;
//     }
//
// Note that _xgetbv is defined starting with GCC 8.
void
gccgoXgetbv(uint32_t *eax, uint32_t *edx)
{
	__asm(" xorl %%ecx, %%ecx\n"
	      " xgetbv"
	    : "=a"(*eax), "=d"(*edx));
}
@ -0,0 +1,26 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build 386 amd64 amd64p32
// +build gccgo

package cpu

//extern gccgoGetCpuidCount
func gccgoGetCpuidCount(eaxArg, ecxArg uint32, eax, ebx, ecx, edx *uint32)

func cpuid(eaxArg, ecxArg uint32) (eax, ebx, ecx, edx uint32) {
	var a, b, c, d uint32
	gccgoGetCpuidCount(eaxArg, ecxArg, &a, &b, &c, &d)
	return a, b, c, d
}

//extern gccgoXgetbv
func gccgoXgetbv(eax, edx *uint32)

func xgetbv() (eax, edx uint32) {
	var a, d uint32
	gccgoXgetbv(&a, &d)
	return a, d
}
@ -0,0 +1,9 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build mips64 mips64le

package cpu

const cacheLineSize = 32
@ -0,0 +1,9 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build mips mipsle

package cpu

const cacheLineSize = 32
@ -0,0 +1,9 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build ppc64 ppc64le

package cpu

const cacheLineSize = 128
@ -0,0 +1,7 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package cpu

const cacheLineSize = 256
@ -0,0 +1,55 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build 386 amd64 amd64p32

package cpu

const cacheLineSize = 64

func init() {
	maxID, _, _, _ := cpuid(0, 0)

	if maxID < 1 {
		return
	}

	_, _, ecx1, edx1 := cpuid(1, 0)
	X86.HasSSE2 = isSet(26, edx1)

	X86.HasSSE3 = isSet(0, ecx1)
	X86.HasPCLMULQDQ = isSet(1, ecx1)
	X86.HasSSSE3 = isSet(9, ecx1)
	X86.HasFMA = isSet(12, ecx1)
	X86.HasSSE41 = isSet(19, ecx1)
	X86.HasSSE42 = isSet(20, ecx1)
	X86.HasPOPCNT = isSet(23, ecx1)
	X86.HasAES = isSet(25, ecx1)
	X86.HasOSXSAVE = isSet(27, ecx1)

	osSupportsAVX := false
	// For XGETBV, OSXSAVE bit is required and sufficient.
	if X86.HasOSXSAVE {
		eax, _ := xgetbv()
		// Check if XMM and YMM registers have OS support.
		osSupportsAVX = isSet(1, eax) && isSet(2, eax)
	}

	X86.HasAVX = isSet(28, ecx1) && osSupportsAVX

	if maxID < 7 {
		return
	}

	_, ebx7, _, _ := cpuid(7, 0)
	X86.HasBMI1 = isSet(3, ebx7)
	X86.HasAVX2 = isSet(5, ebx7) && osSupportsAVX
	X86.HasBMI2 = isSet(8, ebx7)
	X86.HasERMS = isSet(9, ebx7)
	X86.HasADX = isSet(19, ebx7)
}

func isSet(bitpos uint, value uint32) bool {
	return value&(1<<bitpos) != 0
}
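As a quick illustration of the `isSet` bit test used throughout the `init` function above, here is a standalone sketch; the sample register value is made up, not real CPUID output:

```go
package main

import "fmt"

// isSet reports whether bit bitpos is set in value, matching the helper above.
func isSet(bitpos uint, value uint32) bool {
	return value&(1<<bitpos) != 0
}

func main() {
	// Bit 26 of EDX from CPUID leaf 1 indicates SSE2; test with a made-up value.
	edx1 := uint32(1 << 26)
	fmt.Println(isSet(26, edx1)) // true
	fmt.Println(isSet(25, edx1)) // false
}
```

Each feature flag in the `X86` struct is just one such bit test against the ECX/EDX/EBX words returned by `cpuid`.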
@ -0,0 +1,27 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build 386 amd64 amd64p32
// +build !gccgo

#include "textflag.h"

// func cpuid(eaxArg, ecxArg uint32) (eax, ebx, ecx, edx uint32)
TEXT ·cpuid(SB), NOSPLIT, $0-24
	MOVL eaxArg+0(FP), AX
	MOVL ecxArg+4(FP), CX
	CPUID
	MOVL AX, eax+8(FP)
	MOVL BX, ebx+12(FP)
	MOVL CX, ecx+16(FP)
	MOVL DX, edx+20(FP)
	RET

// func xgetbv() (eax, edx uint32)
TEXT ·xgetbv(SB),NOSPLIT,$0-8
	MOVL $0, CX
	XGETBV
	MOVL AX, eax+0(FP)
	MOVL DX, edx+4(FP)
	RET