Merge pull request #7 from democratic-csi/next

Next
Travis Glenn Hansen 2020-07-08 17:31:19 -06:00 committed by GitHub
commit 55b25e9772
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
12 changed files with 1297 additions and 744 deletions


@ -1,3 +1,4 @@
# https://medium.com/@quentin.mcgaw/cross-architecture-docker-builds-with-travis-ci-arm-s390x-etc-8f754e20aaef
dist: bionic dist: bionic
sudo: required sudo: required
@ -14,19 +15,25 @@ env:
# - DOCKER_BUILD_PLATFORM=linux/arm64 # - DOCKER_BUILD_PLATFORM=linux/arm64
# - FOO=bar BAR=baz # - FOO=bar BAR=baz
#addons:
# apt:
# packages:
# - docker-ce
# uname -m # uname -m
# aarch64 # aarch64
# x86_64 # x86_64
# armv7l # armv7l
before_install: before_install:
- uname -a - uname -a
- sudo cat /etc/docker/daemon.json
- sudo systemctl status docker.service
- sudo service docker restart
- export ARCH=$([ $(uname -m) = "x86_64" ] && echo "amd64" || echo "arm64") - export ARCH=$([ $(uname -m) = "x86_64" ] && echo "amd64" || echo "arm64")
- mkdir -p ~/.docker/cli-plugins/ - mkdir -p ~/.docker/cli-plugins/
- wget -O ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-${ARCH} - wget -O ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-${ARCH}
- chmod a+x ~/.docker/cli-plugins/docker-buildx - chmod a+x ~/.docker/cli-plugins/docker-buildx
- echo '{"experimental":"enabled"}' | sudo tee /etc/docker/daemon.json - sudo cat /etc/docker/daemon.json
- mkdir -p $HOME/.docker
- echo '{"experimental":"enabled"}' | sudo tee $HOME/.docker/config.json
- docker info - docker info
- docker buildx version - docker buildx version
install: install:

21
LICENSE Normal file

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2019 Travis Glenn Hansen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

67
README.md Normal file

@ -0,0 +1,67 @@
# Introduction
`democratic-csi` implements the `csi` (container storage interface) spec
providing storage for various container orchestration systems (ie: Kubernetes).
The current focus is providing storage via iscsi/nfs from zfs-based storage
systems, predominantly `FreeNAS / TrueNAS` and `ZoL` on `Ubuntu`.
The current drivers implement the depth and breadth of the `csi` spec, so you
have access to resizing, snapshots, etc.
`democratic-csi` is 2 things:
- several implementations of `csi` drivers
- freenas-nfs (manages zfs datasets to share over nfs)
- freenas-iscsi (manages zfs zvols to share over iscsi)
- zfs-generic-nfs (works with any ZoL installation...ie: Ubuntu)
- zfs-generic-iscsi (works with any ZoL installation...ie: Ubuntu)
- zfs-local-ephemeral-inline (provisions node-local zfs datasets)
- framework for developing `csi` drivers
If you have any interest in providing a `csi` driver, simply open an issue to
discuss. The project provides an extensive framework to build from, making it
relatively easy to implement new drivers.
# Installation
Predominantly 2 things are needed:
- node prep: https://netapp-trident.readthedocs.io/en/stable-v20.04/kubernetes/operations/tasks/worker.html
- deploy the driver into the cluster (`helm` chart provided with sample
`values.yaml`)
You should install/configure the requirements for both nfs and iscsi.
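On Ubuntu/Debian nodes, prep generally amounts to installing the iscsi and nfs
client tooling, roughly as follows (a minimal sketch; package and service names
are assumptions, verify against the node prep document above):
```
# iscsi client tooling (assumed package names for Ubuntu/Debian)
sudo apt-get install -y open-iscsi lsscsi sg3-utils multipath-tools scsitools
sudo systemctl enable --now iscsid multipathd

# nfs client tooling
sudo apt-get install -y nfs-common
```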
## Helm Installation
```
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
helm search democratic-csi/
# copy proper values file from https://github.com/democratic-csi/charts/tree/master/stable/democratic-csi/examples
# edit as appropriate
# examples are from helm v2, alter as appropriate for v3
helm upgrade \
--install \
--values freenas-iscsi.yaml \
--namespace democratic-csi \
zfs-iscsi democratic-csi/democratic-csi
helm upgrade \
--install \
--values freenas-nfs.yaml \
--namespace democratic-csi \
zfs-nfs democratic-csi/democratic-csi
```
## Multiple Deployments
You may install multiple deployments of each/any driver. It requires the following (see the sketch below):
- Use a new helm release name for each deployment
- Make sure you have a unique `csiDriver.name` in the values file
- Use unique names for your storage classes (per cluster)
- Use a unique parent dataset (ie: don't try to use the same parent across deployments or clusters)
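For example (a minimal sketch; the release names and values files below are
illustrative), two iscsi deployments backed by different parent datasets could
be installed like so:
```
# each values file sets its own csiDriver.name, storage class names,
# and parent dataset
helm upgrade \
--install \
--values freenas-iscsi-tank1.yaml \
--namespace democratic-csi \
zfs-iscsi-tank1 democratic-csi/democratic-csi

helm upgrade \
--install \
--values freenas-iscsi-tank2.yaml \
--namespace democratic-csi \
zfs-iscsi-tank2 democratic-csi/democratic-csi
```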


@ -11,7 +11,7 @@ const args = require("yargs")
.option("driver-config-file", { .option("driver-config-file", {
describe: "provide a path to driver config file", describe: "provide a path to driver config file",
config: true, config: true,
configParser: path => { configParser: (path) => {
try { try {
options = JSON.parse(fs.readFileSync(path, "utf-8")); options = JSON.parse(fs.readFileSync(path, "utf-8"));
return true; return true;
@ -23,40 +23,40 @@ const args = require("yargs")
} catch (e) {} } catch (e) {}
throw new Error("failed parsing config file: " + path); throw new Error("failed parsing config file: " + path);
} },
}) })
.demandOption(["driver-config-file"], "driver-config-file is required") .demandOption(["driver-config-file"], "driver-config-file is required")
.option("log-level", { .option("log-level", {
describe: "log level", describe: "log level",
choices: ["error", "warn", "info", "verbose", "debug", "silly"] choices: ["error", "warn", "info", "verbose", "debug", "silly"],
}) })
.option("csi-version", { .option("csi-version", {
describe: "versin of the csi spec to load", describe: "versin of the csi spec to load",
choices: ["0.2.0", "0.3.0", "1.0.0", "1.1.0", "1.2.0"] choices: ["0.2.0", "0.3.0", "1.0.0", "1.1.0", "1.2.0"],
}) })
.demandOption(["csi-version"], "csi-version is required") .demandOption(["csi-version"], "csi-version is required")
.option("csi-name", { .option("csi-name", {
describe: "name to use for driver registration" describe: "name to use for driver registration",
}) })
.demandOption(["csi-name"], "csi-name is required") .demandOption(["csi-name"], "csi-name is required")
.option("csi-mode", { .option("csi-mode", {
describe: "mode of the controller", describe: "mode of the controller",
choices: ["controller", "node"], choices: ["controller", "node"],
type: "array", type: "array",
default: ["controller", "node"] default: ["controller", "node"],
}) })
.demandOption(["csi-mode"], "csi-mode is required") .demandOption(["csi-mode"], "csi-mode is required")
.option("server-address", { .option("server-address", {
describe: "listen address for the server", describe: "listen address for the server",
type: "string" type: "string",
}) })
.option("server-port", { .option("server-port", {
describe: "listen port for the server", describe: "listen port for the server",
type: "number" type: "number",
}) })
.option("server-socket", { .option("server-socket", {
describe: "listen socket for the server", describe: "listen socket for the server",
type: "string" type: "string",
}) })
.version() .version()
.help().argv; .help().argv;
@ -87,7 +87,7 @@ const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
longs: String, longs: String,
enums: String, enums: String,
defaults: true, defaults: true,
oneofs: true oneofs: true,
}); });
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition); const protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
@ -97,7 +97,10 @@ logger.info("initializing csi driver: %s", options.driver);
let driver; let driver;
try { try {
driver = require("../src/driver/factory").factory({ logger, args, cache, package }, options); driver = require("../src/driver/factory").factory(
{ logger, args, cache, package },
options
);
} catch (err) { } catch (err) {
logger.error(err.toString()); logger.error(err.toString());
process.exit(1); process.exit(1);
@ -127,20 +130,26 @@ async function requestHandlerProxy(call, callback, serviceMethodName) {
); );
callback(null, response); callback(null, response);
} catch (e) { } catch (e) {
let message;
if (e instanceof Error) {
message = e.toString();
} else {
message = JSON.stringify(e);
}
logger.error( logger.error(
"handler error - driver: %s method: %s error: %s", "handler error - driver: %s method: %s error: %s",
driver.constructor.name, driver.constructor.name,
serviceMethodName, serviceMethodName,
JSON.stringify(e) message
); );
if (e.name == "GrpcError") { if (e.name == "GrpcError") {
callback(e); callback(e);
} else { } else {
// TODO: only show real error string in development mode // TODO: only show real error string in development mode
const message = true message = true ? message : "unknown error, please inspect service logs";
? e.toString()
: "unknown error, please inspect service logs";
callback({ code: grpc.status.INTERNAL, message }); callback({ code: grpc.status.INTERNAL, message });
} }
} }
@ -159,7 +168,7 @@ function getServer() {
}, },
async Probe(call, callback) { async Probe(call, callback) {
requestHandlerProxy(call, callback, arguments.callee.name); requestHandlerProxy(call, callback, arguments.callee.name);
} },
}); });
// Controller Service // Controller Service
@ -200,7 +209,7 @@ function getServer() {
}, },
async ControllerExpandVolume(call, callback) { async ControllerExpandVolume(call, callback) {
requestHandlerProxy(call, callback, arguments.callee.name); requestHandlerProxy(call, callback, arguments.callee.name);
} },
}); });
} }
@ -230,7 +239,7 @@ function getServer() {
}, },
async NodeGetInfo(call, callback) { async NodeGetInfo(call, callback) {
requestHandlerProxy(call, callback, arguments.callee.name); requestHandlerProxy(call, callback, arguments.callee.name);
} },
}); });
} }
@ -274,8 +283,8 @@ if (bindSocket) {
csiServer.start(); csiServer.start();
[`SIGINT`, `SIGUSR1`, `SIGUSR2`, `uncaughtException`, `SIGTERM`].forEach( [`SIGINT`, `SIGUSR1`, `SIGUSR2`, `uncaughtException`, `SIGTERM`].forEach(
eventType => { (eventType) => {
process.on(eventType, code => { process.on(eventType, (code) => {
console.log(`running server shutdown, exit code: ${code}`); console.log(`running server shutdown, exit code: ${code}`);
let socketPath = args.serverSocket || ""; let socketPath = args.serverSocket || "";
socketPath = socketPath.replace(/^unix:\/\//g, ""); socketPath = socketPath.replace(/^unix:\/\//g, "");


@ -0,0 +1,16 @@
driver: zfs-local-ephemeral-inline
service:
identity: {}
controller: {}
node: {}
zfs:
#chroot: "/host"
datasetParentName: tank/k8s/inline
properties:
# add any arbitrary properties you want here
#refquota:
# value: 10M
# allowOverride: false # default is to allow inline settings to override
#refreservation:
# value: 5M
# ...

1041
package-lock.json generated

File diff suppressed because it is too large


@ -18,17 +18,17 @@
"url": "https://github.com/democratic-csi/democratic-csi.git" "url": "https://github.com/democratic-csi/democratic-csi.git"
}, },
"dependencies": { "dependencies": {
"@grpc/proto-loader": "^0.5.3", "@grpc/proto-loader": "^0.5.4",
"bunyan": "^1.8.12", "bunyan": "^1.8.14",
"eslint": "^6.6.0", "eslint": "^7.4.0",
"grpc-uds": "^0.1.4", "grpc-uds": "^0.1.4",
"js-yaml": "^3.13.1", "js-yaml": "^3.14.0",
"lru-cache": "^5.1.1", "lru-cache": "^5.1.1",
"request": "^2.88.0", "request": "^2.88.2",
"ssh2": "^0.8.6", "ssh2": "^0.8.9",
"uri-js": "^4.2.2", "uri-js": "^4.2.2",
"uuid": "^3.3.3", "uuid": "^8.2.0",
"winston": "^3.2.1", "winston": "^3.3.3",
"yargs": "^15.0.2" "yargs": "^15.4.0"
} }
} }


@ -3,6 +3,7 @@ const SshClient = require("../../utils/ssh").SshClient;
const { GrpcError, grpc } = require("../../utils/grpc"); const { GrpcError, grpc } = require("../../utils/grpc");
const { Zetabyte, ZfsSshProcessManager } = require("../../utils/zfs"); const { Zetabyte, ZfsSshProcessManager } = require("../../utils/zfs");
const uuidv4 = require("uuid").v4;
// zfs common properties // zfs common properties
const MANAGED_PROPERTY_NAME = "democratic-csi:managed_resource"; const MANAGED_PROPERTY_NAME = "democratic-csi:managed_resource";
@ -56,7 +57,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
options.service.identity.capabilities.service = [ options.service.identity.capabilities.service = [
//"UNKNOWN", //"UNKNOWN",
"CONTROLLER_SERVICE" "CONTROLLER_SERVICE",
//"VOLUME_ACCESSIBILITY_CONSTRAINTS" //"VOLUME_ACCESSIBILITY_CONSTRAINTS"
]; ];
} }
@ -66,7 +67,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
options.service.identity.capabilities.volume_expansion = [ options.service.identity.capabilities.volume_expansion = [
//"UNKNOWN", //"UNKNOWN",
"ONLINE" "ONLINE",
//"OFFLINE" //"OFFLINE"
]; ];
} }
@ -84,7 +85,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
"LIST_SNAPSHOTS", "LIST_SNAPSHOTS",
"CLONE_VOLUME", "CLONE_VOLUME",
//"PUBLISH_READONLY", //"PUBLISH_READONLY",
"EXPAND_VOLUME" "EXPAND_VOLUME",
]; ];
} }
@ -96,7 +97,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
options.service.node.capabilities.rpc = [ options.service.node.capabilities.rpc = [
//"UNKNOWN", //"UNKNOWN",
"STAGE_UNSTAGE_VOLUME", "STAGE_UNSTAGE_VOLUME",
"GET_VOLUME_STATS" "GET_VOLUME_STATS",
//"EXPAND_VOLUME" //"EXPAND_VOLUME"
]; ];
break; break;
@ -105,7 +106,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
//"UNKNOWN", //"UNKNOWN",
"STAGE_UNSTAGE_VOLUME", "STAGE_UNSTAGE_VOLUME",
"GET_VOLUME_STATS", "GET_VOLUME_STATS",
"EXPAND_VOLUME" "EXPAND_VOLUME",
]; ];
break; break;
} }
@ -115,7 +116,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
getSshClient() { getSshClient() {
return new SshClient({ return new SshClient({
logger: this.ctx.logger, logger: this.ctx.logger,
connection: this.options.sshConnection connection: this.options.sshConnection,
}); });
} }
@ -123,7 +124,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
const sshClient = this.getSshClient(); const sshClient = this.getSshClient();
return new Zetabyte({ return new Zetabyte({
executor: new ZfsSshProcessManager(sshClient), executor: new ZfsSshProcessManager(sshClient),
idempotent: true idempotent: true,
}); });
} }
@ -160,7 +161,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
let message = null; let message = null;
//[{"access_mode":{"mode":"SINGLE_NODE_WRITER"},"mount":{"mount_flags":["noatime","_netdev"],"fs_type":"nfs"},"access_type":"mount"}] //[{"access_mode":{"mode":"SINGLE_NODE_WRITER"},"mount":{"mount_flags":["noatime","_netdev"],"fs_type":"nfs"},"access_type":"mount"}]
const valid = capabilities.every(capability => { const valid = capabilities.every((capability) => {
switch (driverZfsResourceType) { switch (driverZfsResourceType) {
case "filesystem": case "filesystem":
if (capability.access_type != "mount") { if (capability.access_type != "mount") {
@ -183,7 +184,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
"SINGLE_NODE_READER_ONLY", "SINGLE_NODE_READER_ONLY",
"MULTI_NODE_READER_ONLY", "MULTI_NODE_READER_ONLY",
"MULTI_NODE_SINGLE_WRITER", "MULTI_NODE_SINGLE_WRITER",
"MULTI_NODE_MULTI_WRITER" "MULTI_NODE_MULTI_WRITER",
].includes(capability.access_mode.mode) ].includes(capability.access_mode.mode)
) { ) {
message = `invalid access_mode, ${capability.access_mode.mode}`; message = `invalid access_mode, ${capability.access_mode.mode}`;
@ -210,7 +211,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
"SINGLE_NODE_WRITER", "SINGLE_NODE_WRITER",
"SINGLE_NODE_READER_ONLY", "SINGLE_NODE_READER_ONLY",
"MULTI_NODE_READER_ONLY", "MULTI_NODE_READER_ONLY",
"MULTI_NODE_SINGLE_WRITER" "MULTI_NODE_SINGLE_WRITER",
].includes(capability.access_mode.mode) ].includes(capability.access_mode.mode)
) { ) {
message = `invalid access_mode, ${capability.access_mode.mode}`; message = `invalid access_mode, ${capability.access_mode.mode}`;
@ -436,12 +437,12 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
// remove snapshots from target // remove snapshots from target
await this.removeSnapshotsFromDatatset(datasetName, { await this.removeSnapshotsFromDatatset(datasetName, {
force: true force: true,
}); });
} else { } else {
try { try {
response = await zb.zfs.clone(fullSnapshotName, datasetName, { response = await zb.zfs.clone(fullSnapshotName, datasetName, {
properties: volumeProperties properties: volumeProperties,
}); });
} catch (err) { } catch (err) {
if (err.toString().includes("dataset does not exist")) { if (err.toString().includes("dataset does not exist")) {
@ -461,7 +462,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
await zb.zfs.destroy(fullSnapshotName, { await zb.zfs.destroy(fullSnapshotName, {
recurse: true, recurse: true,
force: true, force: true,
defer: true defer: true,
}); });
} catch (err) { } catch (err) {
if (err.toString().includes("dataset does not exist")) { if (err.toString().includes("dataset does not exist")) {
@ -543,21 +544,21 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
// remove snapshots from target // remove snapshots from target
await this.removeSnapshotsFromDatatset(datasetName, { await this.removeSnapshotsFromDatatset(datasetName, {
force: true force: true,
}); });
// remove snapshot from source // remove snapshot from source
await zb.zfs.destroy(fullSnapshotName, { await zb.zfs.destroy(fullSnapshotName, {
recurse: true, recurse: true,
force: true, force: true,
defer: true defer: true,
}); });
} else { } else {
// create clone // create clone
// zfs origin property contains parent info, ie: pool0/k8s/test/PVC-111@clone-test // zfs origin property contains parent info, ie: pool0/k8s/test/PVC-111@clone-test
try { try {
response = await zb.zfs.clone(fullSnapshotName, datasetName, { response = await zb.zfs.clone(fullSnapshotName, datasetName, {
properties: volumeProperties properties: volumeProperties,
}); });
} catch (err) { } catch (err) {
if (err.toString().includes("dataset does not exist")) { if (err.toString().includes("dataset does not exist")) {
@ -587,7 +588,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
await zb.zfs.create(datasetName, { await zb.zfs.create(datasetName, {
parents: true, parents: true,
properties: volumeProperties, properties: volumeProperties,
size: driverZfsResourceType == "volume" ? capacity_bytes : false size: driverZfsResourceType == "volume" ? capacity_bytes : false,
}); });
} }
@ -632,7 +633,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
"compression", "compression",
VOLUME_CSI_NAME_PROPERTY_NAME, VOLUME_CSI_NAME_PROPERTY_NAME,
VOLUME_CONTENT_SOURCE_TYPE_PROPERTY_NAME, VOLUME_CONTENT_SOURCE_TYPE_PROPERTY_NAME,
VOLUME_CONTENT_SOURCE_ID_PROPERTY_NAME VOLUME_CONTENT_SOURCE_ID_PROPERTY_NAME,
]); ]);
properties = properties[datasetName]; properties = properties[datasetName];
driver.ctx.logger.debug("zfs props data: %j", properties); driver.ctx.logger.debug("zfs props data: %j", properties);
@ -641,7 +642,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
if (this.options.zfs.datasetPermissionsMode) { if (this.options.zfs.datasetPermissionsMode) {
command = sshClient.buildCommand("chmod", [ command = sshClient.buildCommand("chmod", [
this.options.zfs.datasetPermissionsMode, this.options.zfs.datasetPermissionsMode,
properties.mountpoint.value properties.mountpoint.value,
]); ]);
driver.ctx.logger.verbose("set permission command: %s", command); driver.ctx.logger.verbose("set permission command: %s", command);
response = await sshClient.exec(command); response = await sshClient.exec(command);
@ -660,7 +661,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
(this.options.zfs.datasetPermissionsGroup (this.options.zfs.datasetPermissionsGroup
? this.options.zfs.datasetPermissionsGroup ? this.options.zfs.datasetPermissionsGroup
: ""), : ""),
properties.mountpoint.value properties.mountpoint.value,
]); ]);
driver.ctx.logger.verbose("set ownership command: %s", command); driver.ctx.logger.verbose("set ownership command: %s", command);
response = await sshClient.exec(command); response = await sshClient.exec(command);
@ -691,7 +692,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
volume_context = await this.createShare(call, datasetName); volume_context = await this.createShare(call, datasetName);
await zb.zfs.set(datasetName, { await zb.zfs.set(datasetName, {
[SHARE_VOLUME_CONTEXT_PROPERTY_NAME]: [SHARE_VOLUME_CONTEXT_PROPERTY_NAME]:
"'" + JSON.stringify(volume_context) + "'" "'" + JSON.stringify(volume_context) + "'",
}); });
volume_context["provisioner_driver"] = driver.options.driver; volume_context["provisioner_driver"] = driver.options.driver;
@ -714,8 +715,8 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
? capacity_bytes ? capacity_bytes
: 0, : 0,
content_source: volume_content_source, content_source: volume_content_source,
volume_context volume_context,
} },
}; };
return res; return res;
@ -761,7 +762,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
"origin", "origin",
"refquota", "refquota",
"compression", "compression",
VOLUME_CSI_NAME_PROPERTY_NAME VOLUME_CSI_NAME_PROPERTY_NAME,
]); ]);
properties = properties[datasetName]; properties = properties[datasetName];
} catch (err) { } catch (err) {
@ -798,7 +799,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
await zb.zfs.destroy(properties.origin.value, { await zb.zfs.destroy(properties.origin.value, {
recurse: true, recurse: true,
force: true, force: true,
defer: true defer: true,
}); });
} catch (err) { } catch (err) {
if (err.toString().includes("snapshot has dependent clones")) { if (err.toString().includes("snapshot has dependent clones")) {
@ -939,7 +940,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
return { return {
capacity_bytes: this.options.zfs.datasetEnableQuotas ? capacity_bytes : 0, capacity_bytes: this.options.zfs.datasetEnableQuotas ? capacity_bytes : 0,
node_expansion_required: driverZfsResourceType == "volume" ? true : false node_expansion_required: driverZfsResourceType == "volume" ? true : false,
}; };
} }
@ -1017,7 +1018,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
} }
const data = { const data = {
entries: entries, entries: entries,
next_token: next_token next_token: next_token,
}; };
return data; return data;
@ -1061,7 +1062,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
SHARE_VOLUME_CONTEXT_PROPERTY_NAME, SHARE_VOLUME_CONTEXT_PROPERTY_NAME,
SUCCESS_PROPERTY_NAME, SUCCESS_PROPERTY_NAME,
VOLUME_CONTEXT_PROVISIONER_INSTANCE_ID_PROPERTY_NAME, VOLUME_CONTEXT_PROVISIONER_INSTANCE_ID_PROPERTY_NAME,
VOLUME_CONTEXT_PROVISIONER_DRIVER_PROPERTY_NAME VOLUME_CONTEXT_PROVISIONER_DRIVER_PROPERTY_NAME,
], ],
{ types, recurse: true } { types, recurse: true }
); );
@ -1069,7 +1070,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
if (err.toString().includes("dataset does not exist")) { if (err.toString().includes("dataset does not exist")) {
return { return {
entries: [], entries: [],
next_token: null next_token: null,
}; };
} }
@ -1084,7 +1085,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
} }
entries = []; entries = [];
response.indexed.forEach(row => { response.indexed.forEach((row) => {
// ignore rows where csi_name is empty // ignore rows where csi_name is empty
if (row[MANAGED_PROPERTY_NAME] != "true") { if (row[MANAGED_PROPERTY_NAME] != "true") {
return; return;
@ -1142,8 +1143,8 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
? row["refquota"] ? row["refquota"]
: row["volsize"], : row["volsize"],
content_source: volume_content_source, content_source: volume_content_source,
volume_context volume_context,
} },
}); });
}); });
@ -1159,7 +1160,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
const data = { const data = {
entries: entries, entries: entries,
next_token: next_token next_token: next_token,
}; };
return data; return data;
@ -1205,7 +1206,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
} }
const data = { const data = {
entries: entries, entries: entries,
next_token: next_token next_token: next_token,
}; };
return data; return data;
@ -1290,7 +1291,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
"used", "used",
VOLUME_CSI_NAME_PROPERTY_NAME, VOLUME_CSI_NAME_PROPERTY_NAME,
SNAPSHOT_CSI_NAME_PROPERTY_NAME, SNAPSHOT_CSI_NAME_PROPERTY_NAME,
MANAGED_PROPERTY_NAME MANAGED_PROPERTY_NAME,
], ],
{ types, recurse: true } { types, recurse: true }
); );
@ -1314,7 +1315,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
throw new GrpcError(grpc.status.FAILED_PRECONDITION, e.toString()); throw new GrpcError(grpc.status.FAILED_PRECONDITION, e.toString());
} }
response.indexed.forEach(row => { response.indexed.forEach((row) => {
// skip any snapshots not explicitly created by CO // skip any snapshots not explicitly created by CO
if (row[MANAGED_PROPERTY_NAME] != "true") { if (row[MANAGED_PROPERTY_NAME] != "true") {
return; return;
@ -1371,10 +1372,10 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
//https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/timestamp.proto //https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/timestamp.proto
creation_time: { creation_time: {
seconds: row["creation"], seconds: row["creation"],
nanos: 0 nanos: 0,
}, },
ready_to_use: true ready_to_use: true,
} },
}); });
}); });
} }
@ -1391,7 +1392,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
const data = { const data = {
entries: entries, entries: entries,
next_token: next_token next_token: next_token,
}; };
return data; return data;
@ -1552,7 +1553,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
{ {
recurse: true, recurse: true,
force: true, force: true,
defer: true defer: true,
} }
); );
@ -1560,12 +1561,12 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
await zb.zfs.destroy(tmpSnapshotName, { await zb.zfs.destroy(tmpSnapshotName, {
recurse: true, recurse: true,
force: true, force: true,
defer: true defer: true,
}); });
} else { } else {
try { try {
await zb.zfs.snapshot(fullSnapshotName, { await zb.zfs.snapshot(fullSnapshotName, {
properties: snapshotProperties properties: snapshotProperties,
}); });
} catch (err) { } catch (err) {
if (err.toString().includes("dataset does not exist")) { if (err.toString().includes("dataset does not exist")) {
@ -1592,7 +1593,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
VOLUME_CSI_NAME_PROPERTY_NAME, VOLUME_CSI_NAME_PROPERTY_NAME,
SNAPSHOT_CSI_NAME_PROPERTY_NAME, SNAPSHOT_CSI_NAME_PROPERTY_NAME,
SNAPSHOT_CSI_SOURCE_VOLUME_ID_PROPERTY_NAME, SNAPSHOT_CSI_SOURCE_VOLUME_ID_PROPERTY_NAME,
MANAGED_PROPERTY_NAME MANAGED_PROPERTY_NAME,
], ],
{ types } { types }
); );
@ -1623,10 +1624,10 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
//https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/timestamp.proto //https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/timestamp.proto
creation_time: { creation_time: {
seconds: properties.creation.value, seconds: properties.creation.value,
nanos: 0 nanos: 0,
}, },
ready_to_use: true ready_to_use: true,
} },
}; };
} }
@ -1673,7 +1674,7 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
await zb.zfs.destroy(fullSnapshotName, { await zb.zfs.destroy(fullSnapshotName, {
recurse: true, recurse: true,
force: true, force: true,
defer: zb.helpers.isZfsSnapshot(snapshot_id) // only defer when snapshot defer: zb.helpers.isZfsSnapshot(snapshot_id), // only defer when snapshot
}); });
} catch (err) { } catch (err) {
if (err.toString().includes("snapshot has dependent clones")) { if (err.toString().includes("snapshot has dependent clones")) {
@ -1720,8 +1721,8 @@ class ControllerZfsSshBaseDriver extends CsiBaseDriver {
confirmed: { confirmed: {
volume_context: call.request.volume_context, volume_context: call.request.volume_context,
volume_capabilities: call.request.volume_capabilities, // TODO: this is a bit crude, should return *ALL* capabilities, not just what was requested volume_capabilities: call.request.volume_capabilities, // TODO: this is a bit crude, should return *ALL* capabilities, not just what was requested
parameters: call.request.parameters parameters: call.request.parameters,
} },
}; };
} }
} }


@ -1,14 +1,21 @@
const { FreeNASDriver } = require("./freenas"); const { FreeNASDriver } = require("./freenas");
const { ControllerZfsGenericDriver } = require("./controller-zfs-generic"); const { ControllerZfsGenericDriver } = require("./controller-zfs-generic");
const {
ZfsLocalEphemeralInlineDriver,
} = require("./zfs-local-ephemeral-inline");
function factory(ctx, options) { function factory(ctx, options) {
switch (options.driver) { switch (options.driver) {
case "freenas-nfs": case "freenas-nfs":
case "freenas-iscsi": case "freenas-iscsi":
case "truenas-nfs":
case "truenas-iscsi":
return new FreeNASDriver(ctx, options); return new FreeNASDriver(ctx, options);
case "zfs-generic-nfs": case "zfs-generic-nfs":
case "zfs-generic-iscsi": case "zfs-generic-iscsi":
return new ControllerZfsGenericDriver(ctx, options); return new ControllerZfsGenericDriver(ctx, options);
case "zfs-local-ephemeral-inline":
return new ZfsLocalEphemeralInlineDriver(ctx, options);
default: default:
throw new Error("invalid csi driver: " + options.driver); throw new Error("invalid csi driver: " + options.driver);
} }


@ -298,7 +298,11 @@ class CsiBaseDriver {
break; break;
case "iscsi": case "iscsi":
// create DB entry // create DB entry
let nodeDB = {}; // https://library.netapp.com/ecmdocs/ECMP1654943/html/GUID-8EC685B4-8CB6-40D8-A8D5-031A3899BCDC.html
// put these options in place to force targets managed by csi to be explicitly attached (in the case of unclean shutdown etc)
let nodeDB = {
"node.startup": "manual"
};
const nodeDBKeyPrefix = "node-db."; const nodeDBKeyPrefix = "node-db.";
const normalizedSecrets = this.getNormalizedParameters( const normalizedSecrets = this.getNormalizedParameters(
call.request.secrets, call.request.secrets,


@ -0,0 +1,471 @@
const fs = require("fs");
const { CsiBaseDriver } = require("../index");
const { GrpcError, grpc } = require("../../utils/grpc");
const { Filesystem } = require("../../utils/filesystem");
const SshClient = require("../../utils/ssh").SshClient;
const { Zetabyte, ZfsSshProcessManager } = require("../../utils/zfs");
// zfs common properties
const MANAGED_PROPERTY_NAME = "democratic-csi:managed_resource";
const SUCCESS_PROPERTY_NAME = "democratic-csi:provision_success";
const VOLUME_CSI_NAME_PROPERTY_NAME = "democratic-csi:csi_volume_name";
const VOLUME_CONTEXT_PROVISIONER_DRIVER_PROPERTY_NAME =
"democratic-csi:volume_context_provisioner_driver";
const VOLUME_CONTEXT_PROVISIONER_INSTANCE_ID_PROPERTY_NAME =
"democratic-csi:volume_context_provisioner_instance_id";
/**
* https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/20190122-csi-inline-volumes.md
* https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html
*
* Sample calls:
* - https://gcsweb.k8s.io/gcs/kubernetes-jenkins/pr-logs/pull/92387/pull-kubernetes-e2e-gce/1280784994997899264/artifacts/_sig-storage_CSI_Volumes/_Driver_csi-hostpath_/_Testpattern_inline_ephemeral_CSI_volume_ephemeral/should_create_read_write_inline_ephemeral_volume/
* - https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/92387/pull-kubernetes-e2e-gce/1280784994997899264/artifacts/_sig-storage_CSI_Volumes/_Driver_csi-hostpath_/_Testpattern_inline_ephemeral_CSI_volume_ephemeral/should_create_read-only_inline_ephemeral_volume/csi-hostpathplugin-0-hostpath.log
*
* inline drivers are assumed to be mount only (no block support)
* purposely there is no native support for size constraints
*
* TODO: support creating zvols and formatting and mounting locally instead of using zfs dataset?
*
*/
class ZfsLocalEphemeralInlineDriver extends CsiBaseDriver {
constructor(ctx, options) {
super(...arguments);
options = options || {};
options.service = options.service || {};
options.service.identity = options.service.identity || {};
options.service.controller = options.service.controller || {};
options.service.node = options.service.node || {};
options.service.identity.capabilities =
options.service.identity.capabilities || {};
options.service.controller.capabilities =
options.service.controller.capabilities || {};
options.service.node.capabilities = options.service.node.capabilities || {};
if (!("service" in options.service.identity.capabilities)) {
this.ctx.logger.debug("setting default identity service caps");
options.service.identity.capabilities.service = [
"UNKNOWN",
//"CONTROLLER_SERVICE"
//"VOLUME_ACCESSIBILITY_CONSTRAINTS"
];
}
if (!("volume_expansion" in options.service.identity.capabilities)) {
this.ctx.logger.debug("setting default identity volume_expansion caps");
options.service.identity.capabilities.volume_expansion = [
"UNKNOWN",
//"ONLINE",
//"OFFLINE"
];
}
if (!("rpc" in options.service.controller.capabilities)) {
this.ctx.logger.debug("setting default controller caps");
options.service.controller.capabilities.rpc = [
//"UNKNOWN",
//"CREATE_DELETE_VOLUME",
//"PUBLISH_UNPUBLISH_VOLUME",
//"LIST_VOLUMES",
//"GET_CAPACITY",
//"CREATE_DELETE_SNAPSHOT",
//"LIST_SNAPSHOTS",
//"CLONE_VOLUME",
//"PUBLISH_READONLY",
//"EXPAND_VOLUME"
];
}
if (!("rpc" in options.service.node.capabilities)) {
this.ctx.logger.debug("setting default node caps");
options.service.node.capabilities.rpc = [
//"UNKNOWN",
//"STAGE_UNSTAGE_VOLUME",
"GET_VOLUME_STATS",
//"EXPAND_VOLUME",
];
}
}
getSshClient() {
return new SshClient({
logger: this.ctx.logger,
connection: this.options.sshConnection,
});
}
getZetabyte() {
let sshClient;
let executor;
if (this.options.sshConnection) {
sshClient = this.getSshClient();
executor = new ZfsSshProcessManager(sshClient);
}
return new Zetabyte({
executor,
idempotent: true,
chroot: this.options.zfs.chroot,
paths: {
zpool: "/usr/sbin/zpool",
zfs: "/usr/sbin/zfs",
},
});
}
getDatasetParentName() {
let datasetParentName = this.options.zfs.datasetParentName;
datasetParentName = datasetParentName.replace(/\/$/, "");
return datasetParentName;
}
getVolumeParentDatasetName() {
let datasetParentName = this.getDatasetParentName();
datasetParentName += "/v";
datasetParentName = datasetParentName.replace(/\/$/, "");
return datasetParentName;
}
assertCapabilities(capabilities) {
// hard code this for now
const driverZfsResourceType = "filesystem";
this.ctx.logger.verbose("validating capabilities: %j", capabilities);
let message = null;
//[{"access_mode":{"mode":"SINGLE_NODE_WRITER"},"mount":{"mount_flags":["noatime","_netdev"],"fs_type":"nfs"},"access_type":"mount"}]
const valid = capabilities.every((capability) => {
switch (driverZfsResourceType) {
case "filesystem":
if (capability.access_type != "mount") {
message = `invalid access_type ${capability.access_type}`;
return false;
}
if (
capability.mount.fs_type &&
!["zfs"].includes(capability.mount.fs_type)
) {
message = `invalid fs_type ${capability.mount.fs_type}`;
return false;
}
if (
capability.mount.mount_flags &&
capability.mount.mount_flags.length > 0
) {
message = `invalid mount_flags ${capability.mount.mount_flags}`;
return false;
}
if (
![
"UNKNOWN",
"SINGLE_NODE_WRITER",
"SINGLE_NODE_READER_ONLY",
].includes(capability.access_mode.mode)
) {
message = `invalid access_mode, ${capability.access_mode.mode}`;
return false;
}
return true;
case "volume":
if (capability.access_type == "mount") {
if (
capability.mount.fs_type &&
!["ext3", "ext4", "ext4dev", "xfs"].includes(
capability.mount.fs_type
)
) {
message = `invalid fs_type ${capability.mount.fs_type}`;
return false;
}
}
if (
![
"UNKNOWN",
"SINGLE_NODE_WRITER",
"SINGLE_NODE_READER_ONLY",
].includes(capability.access_mode.mode)
) {
message = `invalid access_mode, ${capability.access_mode.mode}`;
return false;
}
return true;
}
});
return { valid, message };
}
/**
* This should create a dataset with appropriate volume properties, ensuring
* the mountpoint is the target_path
*
* Any volume_context attributes starting with property.<name> will be set as zfs properties
*
* {
"target_path": "/var/lib/kubelet/pods/f8b237db-19e8-44ae-b1d2-740c9aeea702/volumes/kubernetes.io~csi/my-volume-0/mount",
"volume_capability": {
"AccessType": {
"Mount": {}
},
"access_mode": {
"mode": 1
}
},
"volume_context": {
"csi.storage.k8s.io/ephemeral": "true",
"csi.storage.k8s.io/pod.name": "inline-volume-tester-2ptb7",
"csi.storage.k8s.io/pod.namespace": "ephemeral-468",
"csi.storage.k8s.io/pod.uid": "f8b237db-19e8-44ae-b1d2-740c9aeea702",
"csi.storage.k8s.io/serviceAccount.name": "default",
"foo": "bar"
},
"volume_id": "csi-8228252978a824126924de00126e6aec7c989a48a39d577bd3ab718647df5555"
}
*
* @param {*} call
*/
async NodePublishVolume(call) {
const driver = this;
const zb = this.getZetabyte();
const volume_id = call.request.volume_id;
const staging_target_path = call.request.staging_target_path || "";
const target_path = call.request.target_path;
const capability = call.request.volume_capability;
const access_type = capability.access_type || "mount";
const readonly = call.request.readonly;
const volume_context = call.request.volume_context;
let datasetParentName = this.getVolumeParentDatasetName();
let name = volume_id;
if (!datasetParentName) {
throw new GrpcError(
grpc.status.FAILED_PRECONDITION,
`invalid configuration: missing datasetParentName`
);
}
if (!name) {
throw new GrpcError(
grpc.status.INVALID_ARGUMENT,
`volume_id is required`
);
}
if (!target_path) {
throw new GrpcError(
grpc.status.INVALID_ARGUMENT,
`target_path is required`
);
}
if (capability) {
const result = this.assertCapabilities([capability]);
if (result.valid !== true) {
throw new GrpcError(grpc.status.INVALID_ARGUMENT, result.message);
}
}
const datasetName = datasetParentName + "/" + name;
// TODO: support arbitrary values from config
// TODO: support arbitrary props from volume_context
let volumeProperties = {};
// set user-supplied properties
// these come from volume_context keys starting with property.<foo>
const base_key = "property.";
const prefixLength = `${base_key}`.length;
Object.keys(volume_context).forEach(function (key) {
if (key.startsWith(base_key)) {
let normalizedKey = key.slice(prefixLength);
volumeProperties[normalizedKey] = volume_context[key];
}
});
// set standard properties
volumeProperties[VOLUME_CSI_NAME_PROPERTY_NAME] = name;
volumeProperties[MANAGED_PROPERTY_NAME] = "true";
volumeProperties[VOLUME_CONTEXT_PROVISIONER_DRIVER_PROPERTY_NAME] =
driver.options.driver;
if (driver.options.instance_id) {
volumeProperties[VOLUME_CONTEXT_PROVISIONER_INSTANCE_ID_PROPERTY_NAME] =
driver.options.instance_id;
}
volumeProperties[SUCCESS_PROPERTY_NAME] = "true";
// NOTE: setting mountpoint will automatically create the full path as necessary so no need for mkdir etc
volumeProperties["mountpoint"] = target_path;
// does not really make sense for ephemeral volumes...but we'll put it here just in case
if (readonly) {
volumeProperties["readonly"] = "on";
}
// set driver config properties
if (this.options.zfs.properties) {
Object.keys(driver.options.zfs.properties).forEach(function (key) {
const value = driver.options.zfs.properties[key]["value"];
const allowOverride =
"allowOverride" in driver.options.zfs.properties[key]
? driver.options.zfs.properties[key]["allowOverride"]
: true;
if (!allowOverride || !(key in volumeProperties)) {
volumeProperties[key] = value;
}
});
}
await zb.zfs.create(datasetName, {
parents: true,
properties: volumeProperties,
});
return {};
}
/**
* This should destroy the dataset and remove target_path as appropriate
*
*{
"target_path": "/var/lib/kubelet/pods/f8b237db-19e8-44ae-b1d2-740c9aeea702/volumes/kubernetes.io~csi/my-volume-0/mount",
"volume_id": "csi-8228252978a824126924de00126e6aec7c989a48a39d577bd3ab718647df5555"
}
*
* @param {*} call
*/
async NodeUnpublishVolume(call) {
const zb = this.getZetabyte();
const filesystem = new Filesystem();
let result;
const volume_id = call.request.volume_id;
const target_path = call.request.target_path;
let datasetParentName = this.getVolumeParentDatasetName();
let name = volume_id;
if (!datasetParentName) {
throw new GrpcError(
grpc.status.FAILED_PRECONDITION,
`invalid configuration: missing datasetParentName`
);
}
if (!name) {
throw new GrpcError(
grpc.status.INVALID_ARGUMENT,
`volume_id is required`
);
}
if (!target_path) {
throw new GrpcError(
grpc.status.INVALID_ARGUMENT,
`target_path is required`
);
}
const datasetName = datasetParentName + "/" + name;
// NOTE: -f does NOT allow deletes if dependent filesets exist
// NOTE: -R will recursively delete items + dependent filesets
// delete dataset
try {
await zb.zfs.destroy(datasetName, { recurse: true, force: true });
} catch (err) {
if (err.toString().includes("filesystem has dependent clones")) {
throw new GrpcError(
grpc.status.FAILED_PRECONDITION,
"filesystem has dependent clones"
);
}
throw err;
}
// cleanup publish directory
result = await filesystem.pathExists(target_path);
if (result) {
if (fs.lstatSync(target_path).isDirectory()) {
result = await filesystem.rmdir(target_path);
} else {
result = await filesystem.rm([target_path]);
}
}
return {};
}
/**
* TODO: consider volume_capabilities?
*
* @param {*} call
*/
async GetCapacity(call) {
const driver = this;
const zb = this.getZetabyte();
let datasetParentName = this.getVolumeParentDatasetName();
if (!datasetParentName) {
throw new GrpcError(
grpc.status.FAILED_PRECONDITION,
`invalid configuration: missing datasetParentName`
);
}
if (call.request.volume_capabilities) {
const result = this.assertCapabilities(call.request.volume_capabilities);
if (result.valid !== true) {
return { available_capacity: 0 };
}
}
const datasetName = datasetParentName;
let properties;
properties = await zb.zfs.get(datasetName, ["avail"]);
properties = properties[datasetName];
return { available_capacity: properties.available.value };
}
/**
*
* @param {*} call
*/
async ValidateVolumeCapabilities(call) {
const driver = this;
const result = this.assertCapabilities(call.request.volume_capabilities);
if (result.valid !== true) {
return { message: result.message };
}
return {
confirmed: {
volume_context: call.request.volume_context,
volume_capabilities: call.request.volume_capabilities, // TODO: this is a bit crude, should return *ALL* capabilities, not just what was requested
parameters: call.request.parameters,
},
};
}
}
module.exports.ZfsLocalEphemeralInlineDriver = ZfsLocalEphemeralInlineDriver;


@ -19,13 +19,17 @@ class Zetabyte {
options.paths.sudo = "/usr/bin/sudo"; options.paths.sudo = "/usr/bin/sudo";
} }
if (!options.paths.chroot) {
options.paths.chroot = "/usr/sbin/chroot";
}
if (!options.timeout) { if (!options.timeout) {
options.timeout = 10 * 60 * 1000; options.timeout = 10 * 60 * 1000;
} }
if (!options.executor) { if (!options.executor) {
options.executor = { options.executor = {
spawn: cp.spawn spawn: cp.spawn,
}; };
} }
@ -36,7 +40,7 @@ class Zetabyte {
"free", "free",
"cap", "cap",
"health", "health",
"altroot" "altroot",
]; ];
zb.DEFAULT_ZFS_LIST_PROPERTIES = [ zb.DEFAULT_ZFS_LIST_PROPERTIES = [
@ -45,11 +49,11 @@ class Zetabyte {
"avail", "avail",
"refer", "refer",
"type", "type",
"mountpoint" "mountpoint",
]; ];
zb.helpers = { zb.helpers = {
zfsErrorStr: function(error, stderr) { zfsErrorStr: function (error, stderr) {
if (!error) return null; if (!error) return null;
if (error.killed) return "Process killed due to timeout."; if (error.killed) return "Process killed due to timeout.";
@ -57,11 +61,11 @@ class Zetabyte {
return error.message || (stderr ? stderr.toString() : ""); return error.message || (stderr ? stderr.toString() : "");
}, },
zfsError: function(error, stderr) { zfsError: function (error, stderr) {
return new Error(zb.helpers.zfsErrorStr(error, stderr)); return new Error(zb.helpers.zfsErrorStr(error, stderr));
}, },
parseTabSeperatedTable: function(data) { parseTabSeperatedTable: function (data) {
if (!data) { if (!data) {
return []; return [];
} }
@ -86,7 +90,7 @@ class Zetabyte {
* *
* and those fields are tab-separated. * and those fields are tab-separated.
*/ */
parsePropertyList: function(data) { parsePropertyList: function (data) {
if (!data) { if (!data) {
return {}; return {};
} }
@ -94,22 +98,22 @@ class Zetabyte {
const lines = data.trim().split("\n"); const lines = data.trim().split("\n");
const properties = {}; const properties = {};
lines.forEach(function(line) { lines.forEach(function (line) {
const fields = line.split("\t"); const fields = line.split("\t");
if (!properties[fields[0]]) properties[fields[0]] = {}; if (!properties[fields[0]]) properties[fields[0]] = {};
properties[fields[0]][fields[1]] = { properties[fields[0]][fields[1]] = {
value: fields[2], value: fields[2],
received: fields[3], received: fields[3],
source: fields[4] source: fields[4],
}; };
}); });
return properties; return properties;
}, },
listTableToPropertyList: function(properties, data) { listTableToPropertyList: function (properties, data) {
const entries = []; const entries = [];
data.forEach(row => { data.forEach((row) => {
let entry = {}; let entry = {};
properties.forEach((value, index) => { properties.forEach((value, index) => {
entry[value] = row[index]; entry[value] = row[index];
@ -120,11 +124,11 @@ class Zetabyte {
return entries; return entries;
}, },
extractSnapshotName: function(datasetName) { extractSnapshotName: function (datasetName) {
return datasetName.substring(datasetName.indexOf("@") + 1); return datasetName.substring(datasetName.indexOf("@") + 1);
}, },
extractDatasetName: function(datasetName) { extractDatasetName: function (datasetName) {
if (datasetName.includes("@")) { if (datasetName.includes("@")) {
return datasetName.substring(0, datasetName.indexOf("@")); return datasetName.substring(0, datasetName.indexOf("@"));
} }
@ -132,26 +136,26 @@ class Zetabyte {
return datasetName; return datasetName;
}, },
isZfsSnapshot: function(snapshotName) { isZfsSnapshot: function (snapshotName) {
return snapshotName.includes("@"); return snapshotName.includes("@");
}, },
extractPool: function(datasetName) { extractPool: function (datasetName) {
const parts = datasetName.split("/"); const parts = datasetName.split("/");
return parts[0]; return parts[0];
}, },
extractParentDatasetName: function(datasetName) { extractParentDatasetName: function (datasetName) {
const parts = datasetName.split("/"); const parts = datasetName.split("/");
parts.pop(); parts.pop();
return parts.join("/"); return parts.join("/");
}, },
extractLeafName: function(datasetName) { extractLeafName: function (datasetName) {
return datasetName.split("/").pop(); return datasetName.split("/").pop();
}, },
isPropertyValueSet: function(value) { isPropertyValueSet: function (value) {
if ( if (
value === undefined || value === undefined ||
value === null || value === null ||
@ -164,7 +168,7 @@ class Zetabyte {
return true; return true;
}, },
generateZvolSize: function(capacity_bytes, block_size) { generateZvolSize: function (capacity_bytes, block_size) {
block_size = "" + block_size; block_size = "" + block_size;
block_size = block_size.toLowerCase(); block_size = block_size.toLowerCase();
switch (block_size) { switch (block_size) {
@ -211,7 +215,7 @@ class Zetabyte {
result = Number(result) + Number(block_size); result = Number(result) + Number(block_size);
return result; return result;
} },
}; };
zb.zpool = { zb.zpool = {
@ -221,7 +225,7 @@ class Zetabyte {
* @param {*} pool * @param {*} pool
* @param {*} vdevs * @param {*} vdevs
*/ */
add: function(pool, vdevs) { add: function (pool, vdevs) {
// -f force // -f force
// -n noop // -n noop
}, },
@ -233,7 +237,7 @@ class Zetabyte {
* @param {*} device * @param {*} device
* @param {*} new_device * @param {*} new_device
*/ */
attach: function(pool, device, new_device) { attach: function (pool, device, new_device) {
// -f Forces use of new_device, even if its appears to be in use. // -f Forces use of new_device, even if its appears to be in use.
}, },
@ -242,7 +246,7 @@ class Zetabyte {
* *
* @param {*} pool * @param {*} pool
*/ */
checkpoint: function(pool) {}, checkpoint: function (pool) {},
/** /**
* zpool clear [-F [-n]] pool [device] * zpool clear [-F [-n]] pool [device]
@ -250,7 +254,7 @@ class Zetabyte {
* @param {*} pool * @param {*} pool
* @param {*} device * @param {*} device
*/ */
clear: function(pool, device) {}, clear: function (pool, device) {},
/** /**
* zpool create [-fnd] [-o property=value] ... [-O * zpool create [-fnd] [-o property=value] ... [-O
@ -261,7 +265,7 @@ class Zetabyte {
* zpool create command, including log devices, cache devices, and hot spares. * zpool create command, including log devices, cache devices, and hot spares.
* The input is an object of the form produced by the disklayout library. * The input is an object of the form produced by the disklayout library.
*/ */
create: function(pool, options) { create: function (pool, options) {
if (arguments.length != 2) if (arguments.length != 2)
throw Error("Invalid arguments, 2 arguments required"); throw Error("Invalid arguments, 2 arguments required");
@ -290,10 +294,10 @@ class Zetabyte {
if (options.tempname) args = args.concat(["-t", options.tempname]); if (options.tempname) args = args.concat(["-t", options.tempname]);
args.push(pool); args.push(pool);
options.vdevs.forEach(function(vdev) { options.vdevs.forEach(function (vdev) {
if (vdev.type) args.push(vdev.type); if (vdev.type) args.push(vdev.type);
if (vdev.devices) { if (vdev.devices) {
vdev.devices.forEach(function(dev) { vdev.devices.forEach(function (dev) {
args.push(dev.name); args.push(dev.name);
}); });
} else { } else {
@ -303,21 +307,21 @@ class Zetabyte {
if (options.spares) { if (options.spares) {
args.push("spare"); args.push("spare");
options.spares.forEach(function(dev) { options.spares.forEach(function (dev) {
args.push(dev.name); args.push(dev.name);
}); });
} }
if (options.logs) { if (options.logs) {
args.push("log"); args.push("log");
options.logs.forEach(function(dev) { options.logs.forEach(function (dev) {
args.push(dev.name); args.push(dev.name);
}); });
} }
if (options.cache) { if (options.cache) {
args.push("cache"); args.push("cache");
options.cache.forEach(function(dev) { options.cache.forEach(function (dev) {
args.push(dev.name); args.push(dev.name);
}); });
} }
@ -326,7 +330,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -339,7 +343,7 @@ class Zetabyte {
* *
* @param {*} pool * @param {*} pool
*/ */
destroy: function(pool) { destroy: function (pool) {
if (arguments.length != 1) throw Error("Invalid arguments"); if (arguments.length != 1) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -352,7 +356,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -366,7 +370,7 @@ class Zetabyte {
* @param {*} pool * @param {*} pool
* @param {*} device * @param {*} device
*/ */
detach: function(pool, device) { detach: function (pool, device) {
if (arguments.length != 2) throw Error("Invalid arguments"); if (arguments.length != 2) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -379,7 +383,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -392,7 +396,7 @@ class Zetabyte {
* *
* @param {*} pool * @param {*} pool
*/ */
export: function(pool) { export: function (pool) {
if (arguments.length != 2) throw Error("Invalid arguments"); if (arguments.length != 2) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -400,7 +404,7 @@ class Zetabyte {
args.push("export"); args.push("export");
if (options.force) args.push("-f"); if (options.force) args.push("-f");
if (Array.isArray(pool)) { if (Array.isArray(pool)) {
pool.forEach(item => { pool.forEach((item) => {
args.push(item); args.push(item);
}); });
} else { } else {
@ -411,7 +415,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -422,21 +426,21 @@ class Zetabyte {
/** /**
* zpool get [-Hp] [-o field[,...]] all | property[,...] pool ... * zpool get [-Hp] [-o field[,...]] all | property[,...] pool ...
*/ */
get: function() {}, get: function () {},
/** /**
* zpool history [-il] [pool] ... * zpool history [-il] [pool] ...
* *
* @param {*} pool * @param {*} pool
*/ */
history: function(pool) { history: function (pool) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("history"); args.push("history");
if (options.internal) args.push("-i"); if (options.internal) args.push("-i");
if (options.longFormat) args.push("-l"); if (options.longFormat) args.push("-l");
if (Array.isArray(pool)) { if (Array.isArray(pool)) {
pool.forEach(item => { pool.forEach((item) => {
args.push(item); args.push(item);
}); });
} else { } else {
@ -447,7 +451,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -468,7 +472,7 @@ class Zetabyte {
* *
* @param {*} options * @param {*} options
*/ */
import: function(options = {}) { import: function (options = {}) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("import"); args.push("import");
@ -480,7 +484,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -493,14 +497,14 @@ class Zetabyte {
* *
* @param {*} options * @param {*} options
*/ */
iostat: function(options = {}) {}, iostat: function (options = {}) {},
/** /**
* zpool labelclear [-f] device * zpool labelclear [-f] device
* *
* @param {*} device * @param {*} device
*/ */
labelclear: function(device) {}, labelclear: function (device) {},
/** /**
* zpool list [-Hpv] [-o property[,...]] [-T d|u] [pool] ... [interval * zpool list [-Hpv] [-o property[,...]] [-T d|u] [pool] ... [interval
@ -509,7 +513,7 @@ class Zetabyte {
* @param {*} pool * @param {*} pool
* @param {*} options * @param {*} options
*/ */
list: function(pool, properties, options = {}) { list: function (pool, properties, options = {}) {
if (!(arguments.length >= 1)) throw Error("Invalid arguments"); if (!(arguments.length >= 1)) throw Error("Invalid arguments");
if (!properties) properties = zb.DEFAULT_ZPOOL_LIST_PROPERTIES; if (!properties) properties = zb.DEFAULT_ZPOOL_LIST_PROPERTIES;
@ -535,7 +539,7 @@ class Zetabyte {
if (options.timestamp) args = args.concat(["-T", options.timestamp]); if (options.timestamp) args = args.concat(["-T", options.timestamp]);
if (pool) { if (pool) {
if (Array.isArray(pool)) { if (Array.isArray(pool)) {
pool.forEach(item => { pool.forEach((item) => {
args.push(item); args.push(item);
}); });
} else { } else {
@ -549,7 +553,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
if (options.parse) { if (options.parse) {
let data = zb.helpers.parseTabSeperatedTable(stdout); let data = zb.helpers.parseTabSeperatedTable(stdout);
@ -560,7 +564,7 @@ class Zetabyte {
return resolve({ return resolve({
properties, properties,
data, data,
indexed indexed,
}); });
} }
return resolve({ properties, data: stdout }); return resolve({ properties, data: stdout });
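// Illustrative sketch of the parsed result: with options.parse, list() resolves
// with the parsed table instead of raw stdout, where `data` comes from
// zb.helpers.parseTabSeperatedTable() and `indexed` keys each row by the
// requested property names. The exact shapes shown in the comments are
// assumptions based on the surrounding code, for illustration only.
zb.zpool
  .list("tank", ["name", "size", "health"], { parse: true })
  .then(({ properties, data, indexed }) => {
    // properties: ["name", "size", "health"]
    // data:       e.g. [["tank", "10G", "ONLINE"]]                        (assumed)
    // indexed:    e.g. [{ name: "tank", size: "10G", health: "ONLINE" }]  (assumed)
    console.log(indexed);
  });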
@ -576,7 +580,7 @@ class Zetabyte {
* @param {*} device * @param {*} device
* @param {*} options * @param {*} options
*/ */
offline: function(pool, device, options = {}) { offline: function (pool, device, options = {}) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("offline"); args.push("offline");
@ -588,7 +592,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -603,7 +607,7 @@ class Zetabyte {
* @param {*} device * @param {*} device
* @param {*} options * @param {*} options
*/ */
online: function(pool, device, options = {}) { online: function (pool, device, options = {}) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("online"); args.push("online");
@ -615,7 +619,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -628,7 +632,7 @@ class Zetabyte {
* *
* @param {*} pool * @param {*} pool
*/ */
reguid: function(pool) { reguid: function (pool) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("reguid"); args.push("reguid");
@ -638,7 +642,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -654,7 +658,7 @@ class Zetabyte {
* @param {*} pool * @param {*} pool
* @param {*} device * @param {*} device
*/ */
remove: function(pool, device, options = {}) { remove: function (pool, device, options = {}) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("remove"); args.push("remove");
@ -670,7 +674,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -683,7 +687,7 @@ class Zetabyte {
* *
* @param {*} pool * @param {*} pool
*/ */
reopen: function(pool) { reopen: function (pool) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("reopen"); args.push("reopen");
@ -693,7 +697,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -708,7 +712,7 @@ class Zetabyte {
* @param {*} device * @param {*} device
* @param {*} new_device * @param {*} new_device
*/ */
replace: function(pool, device, new_device) { replace: function (pool, device, new_device) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("replace"); args.push("replace");
@ -723,7 +727,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -736,14 +740,14 @@ class Zetabyte {
* *
* @param {*} pool * @param {*} pool
*/ */
scrub: function(pool, options = {}) { scrub: function (pool, options = {}) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("scrub"); args.push("scrub");
if (options.stop) args.push("-s"); if (options.stop) args.push("-s");
if (options.pause) args.push("-p"); if (options.pause) args.push("-p");
if (Array.isArray(pool)) { if (Array.isArray(pool)) {
pool.forEach(item => { pool.forEach((item) => {
args.push(item); args.push(item);
}); });
} else { } else {
@ -754,7 +758,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -769,7 +773,7 @@ class Zetabyte {
* @param {*} property * @param {*} property
* @param {*} value * @param {*} value
*/ */
set: function(pool, property, value) { set: function (pool, property, value) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("set"); args.push("set");
@ -780,7 +784,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
@ -796,12 +800,12 @@ class Zetabyte {
* @param {*} newpool * @param {*} newpool
* @param {*} device * @param {*} device
*/ */
split: function(pool, newpool, device) {}, split: function (pool, newpool, device) {},
/** /**
* zpool status [-vx] [-T d|u] [pool] ... [interval [count]] * zpool status [-vx] [-T d|u] [pool] ... [interval [count]]
*/ */
status: function(pool, options = {}) { status: function (pool, options = {}) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
if (!("parse" in options)) options.parse = true; if (!("parse" in options)) options.parse = true;
@ -811,7 +815,7 @@ class Zetabyte {
if (options.timestamp) args = args.concat(["-T", options.timestamp]); if (options.timestamp) args = args.concat(["-T", options.timestamp]);
if (pool) { if (pool) {
if (Array.isArray(pool)) { if (Array.isArray(pool)) {
pool.forEach(item => { pool.forEach((item) => {
args.push(item); args.push(item);
}); });
} else { } else {
@ -825,7 +829,7 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (options.parse) { if (options.parse) {
stdout = stdout.trim(); stdout = stdout.trim();
if (error || stdout == "no pools available") { if (error || stdout == "no pools available") {
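// Illustrative usage sketch: status() enables options.parse by default and
// special-cases the "no pools available" output so an empty system is handled
// without a hard failure. The parsed result shape is built further down in
// this method and is not shown in this hunk, so it is left opaque here.
zb.zpool
  .status("tank")
  .then((status) => console.log("pool status:", status))
  .catch((stderr) => console.error("zpool status failed:", stderr));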
@ -855,7 +859,7 @@ class Zetabyte {
* *
* @param {*} pool * @param {*} pool
*/ */
upgrade: function(pool, options = {}) { upgrade: function (pool, options = {}) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("upgrade"); args.push("upgrade");
@ -863,7 +867,7 @@ class Zetabyte {
if (options.all) args.push("-a"); if (options.all) args.push("-a");
if (pool) { if (pool) {
if (Array.isArray(pool)) { if (Array.isArray(pool)) {
pool.forEach(item => { pool.forEach((item) => {
args.push(item); args.push(item);
}); });
} else { } else {
@ -875,13 +879,13 @@ class Zetabyte {
zb.options.paths.zpool, zb.options.paths.zpool,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(stderr); if (error) return reject(stderr);
return resolve(stdout); return resolve(stdout);
} }
); );
}); });
} },
}; };
zb.zfs = { zb.zfs = {
@ -892,7 +896,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} options * @param {*} options
*/ */
create: function(dataset, options = {}) { create: function (dataset, options = {}) {
if (!(arguments.length >= 1)) throw Error("Invalid arguments"); if (!(arguments.length >= 1)) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -921,7 +925,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if ( if (
error && error &&
!(idempotent && stderr.includes("dataset already exists")) !(idempotent && stderr.includes("dataset already exists"))
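// Illustrative usage sketch: when the idempotent flag is in effect, a
// "dataset already exists" message on stderr is tolerated instead of rejecting,
// so create() can be retried safely by the controller. Option names below that
// are not visible in this hunk are assumptions for illustration only.
zb.zfs
  .create("tank/k8s/vol-1", {
    idempotent: true, // assumed option driving the `idempotent` check above
    properties: { volsize: "1G" }, // hypothetical property map
  })
  .then(() => console.log("dataset present (created or already existed)"))
  .catch((stderr) => console.error("zfs create failed:", stderr));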
@ -942,7 +946,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} options * @param {*} options
*/ */
destroy: function(dataset, options = {}) { destroy: function (dataset, options = {}) {
if (!(arguments.length >= 1)) throw Error("Invalid arguments"); if (!(arguments.length >= 1)) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -969,7 +973,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if ( if (
error && error &&
!( !(
@ -993,7 +997,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} options * @param {*} options
*/ */
snapshot: function(dataset, options = {}) { snapshot: function (dataset, options = {}) {
if (!(arguments.length >= 1)) throw Error("Invalid arguments"); if (!(arguments.length >= 1)) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1022,7 +1026,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if ( if (
error && error &&
!(idempotent && stderr.includes("dataset already exists")) !(idempotent && stderr.includes("dataset already exists"))
@ -1040,7 +1044,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} options * @param {*} options
*/ */
rollback: function(dataset, options = {}) { rollback: function (dataset, options = {}) {
if (!(arguments.length >= 1)) throw Error("Invalid arguments"); if (!(arguments.length >= 1)) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1055,7 +1059,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
/** /**
* cannot rollback to 'foo/bar/baz@foobar': more recent snapshots or bookmarks exist * cannot rollback to 'foo/bar/baz@foobar': more recent snapshots or bookmarks exist
* use '-r' to force deletion of the following snapshots and bookmarks: * use '-r' to force deletion of the following snapshots and bookmarks:
@ -1074,7 +1078,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} options * @param {*} options
*/ */
clone: function(snapshot, dataset, options = {}) { clone: function (snapshot, dataset, options = {}) {
if (!(arguments.length >= 2)) throw Error("Invalid arguments"); if (!(arguments.length >= 2)) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1101,7 +1105,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if ( if (
error && error &&
!(idempotent && stderr.includes("dataset already exists")) !(idempotent && stderr.includes("dataset already exists"))
@ -1139,7 +1143,7 @@ class Zetabyte {
args.push("'" + command.join(" ") + "'"); args.push("'" + command.join(" ") + "'");
zb.exec("/bin/sh", args, { timeout: zb.options.timeout }, function( zb.exec("/bin/sh", args, { timeout: zb.options.timeout }, function (
error, error,
stdout, stdout,
stderr stderr
@ -1155,7 +1159,7 @@ class Zetabyte {
* *
* @param {*} dataset * @param {*} dataset
*/ */
promote: function(dataset) { promote: function (dataset) {
if (arguments.length != 1) throw Error("Invalid arguments"); if (arguments.length != 1) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1167,7 +1171,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
return resolve(stdout); return resolve(stdout);
} }
@ -1185,7 +1189,7 @@ class Zetabyte {
* @param {*} target * @param {*} target
* @param {*} options * @param {*} options
*/ */
rename: function(source, target, options = {}) { rename: function (source, target, options = {}) {
if (!(arguments.length >= 2)) throw Error("Invalid arguments"); if (!(arguments.length >= 2)) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1202,7 +1206,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
return resolve(stdout); return resolve(stdout);
} }
@ -1218,7 +1222,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} options * @param {*} options
*/ */
list: function(dataset, properties, options = {}) { list: function (dataset, properties, options = {}) {
if (!(arguments.length >= 1)) throw Error("Invalid arguments"); if (!(arguments.length >= 1)) throw Error("Invalid arguments");
if (!properties) properties = zb.DEFAULT_ZFS_LIST_PROPERTIES; if (!properties) properties = zb.DEFAULT_ZFS_LIST_PROPERTIES;
@ -1258,7 +1262,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
if (options.parse) { if (options.parse) {
let data = zb.helpers.parseTabSeperatedTable(stdout); let data = zb.helpers.parseTabSeperatedTable(stdout);
@ -1269,7 +1273,7 @@ class Zetabyte {
return resolve({ return resolve({
properties, properties,
data, data,
indexed indexed,
}); });
} }
return resolve({ properties, data: stdout }); return resolve({ properties, data: stdout });
@ -1284,7 +1288,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} properties * @param {*} properties
*/ */
set: function(dataset, properties) { set: function (dataset, properties) {
if (arguments.length != 2) throw Error("Invalid arguments"); if (arguments.length != 2) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1307,7 +1311,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
return resolve(stdout); return resolve(stdout);
} }
@ -1327,7 +1331,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} properties * @param {*} properties
*/ */
get: function(dataset, properties = "all", options = {}) { get: function (dataset, properties = "all", options = {}) {
if (!(arguments.length >= 2)) throw Error("Invalid arguments"); if (!(arguments.length >= 2)) throw Error("Invalid arguments");
if (!properties) properties = "all"; if (!properties) properties = "all";
if (Array.isArray(properties) && properties.length === 0) if (Array.isArray(properties) && properties.length === 0)
@ -1344,7 +1348,7 @@ class Zetabyte {
if (options.parse) if (options.parse)
args = args.concat([ args = args.concat([
"-o", "-o",
["name", "property", "value", "received", "source"] ["name", "property", "value", "received", "source"],
]); ]);
if (options.fields && !options.parse) { if (options.fields && !options.parse) {
let fields; let fields;
@ -1394,7 +1398,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
if (options.parse) { if (options.parse) {
return resolve(zb.helpers.parsePropertyList(stdout)); return resolve(zb.helpers.parsePropertyList(stdout));
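// Illustrative usage sketch: with options.parse, get() pins the output columns
// to name/property/value/received/source and resolves with
// zb.helpers.parsePropertyList(stdout) rather than raw text. The result shape
// noted in the comment is an assumption for illustration only.
zb.zfs
  .get("tank/k8s/vol-1", ["compression", "quota"], { parse: true })
  .then((props) => {
    // assumed shape: props["tank/k8s/vol-1"].compression.value, .source, ...
    console.log(props);
  });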
@ -1411,7 +1415,7 @@ class Zetabyte {
* @param {*} dataset * @param {*} dataset
* @param {*} property * @param {*} property
*/ */
inherit: function(dataset, property) { inherit: function (dataset, property) {
if (arguments.length != 2) throw Error("Invalid arguments"); if (arguments.length != 2) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1426,7 +1430,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
return resolve(stdout); return resolve(stdout);
} }
@ -1439,7 +1443,7 @@ class Zetabyte {
* *
* @param {*} dataset * @param {*} dataset
*/ */
remap: function(dataset) { remap: function (dataset) {
if (arguments.length != 1) throw Error("Invalid arguments"); if (arguments.length != 1) throw Error("Invalid arguments");
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
@ -1451,7 +1455,7 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
return resolve(stdout); return resolve(stdout);
} }
@ -1465,7 +1469,7 @@ class Zetabyte {
* *
* @param {*} dataset * @param {*} dataset
*/ */
upgrade: function(options = {}, dataset) { upgrade: function (options = {}, dataset) {
return new Promise((resolve, reject) => { return new Promise((resolve, reject) => {
let args = []; let args = [];
args.push("upgrade"); args.push("upgrade");
@ -1481,13 +1485,13 @@ class Zetabyte {
zb.options.paths.zfs, zb.options.paths.zfs,
args, args,
{ timeout: zb.options.timeout }, { timeout: zb.options.timeout },
function(error, stdout, stderr) { function (error, stdout, stderr) {
if (error) return reject(zb.helpers.zfsError(error, stderr)); if (error) return reject(zb.helpers.zfsError(error, stderr));
return resolve(stdout); return resolve(stdout);
} }
); );
}); });
} },
}; };
} }
@ -1518,6 +1522,13 @@ class Zetabyte {
break; break;
} }
if (zb.options.chroot) {
args = args || [];
args.unshift(command);
args.unshift(zb.options.chroot);
command = zb.options.paths.chroot;
}
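// Illustrative note on the new chroot branch above: it prepends the chroot
// target directory and the original command to args and swaps the command for
// zb.options.paths.chroot, mirroring the existing sudo branch below, so the two
// wrappers stack. Assuming options { chroot: "/host", sudo: true } and stock
// binary paths, a call like zb.exec("zfs", ["list"]) would end up spawning
// roughly:
//
//   sudo chroot /host zfs list
//
// (the concrete paths and option values here are assumptions for illustration
// only; the exact argv follows the unshift order shown above).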
if (zb.options.sudo) { if (zb.options.sudo) {
args = args || []; args = args || [];
args.unshift(command); args.unshift(command);
@ -1535,15 +1546,15 @@ class Zetabyte {
} }
if (callback) { if (callback) {
child.stdout.on("data", function(data) { child.stdout.on("data", function (data) {
stdout = stdout + data; stdout = stdout + data;
}); });
child.stderr.on("data", function(data) { child.stderr.on("data", function (data) {
stderr = stderr + data; stderr = stderr + data;
}); });
child.on("close", function(error) { child.on("close", function (error) {
if (timeout) { if (timeout) {
clearTimeout(timeout); clearTimeout(timeout);
} }
@ -1600,7 +1611,7 @@ class ZfsSshProcessManager {
proxy.stdout = stdout; proxy.stdout = stdout;
proxy.stderr = stderr; proxy.stderr = stderr;
proxy.kill = function(signal = "SIGTERM") { proxy.kill = function (signal = "SIGTERM") {
proxy.emit("kill", signal); proxy.emit("kill", signal);
}; };
@ -1609,7 +1620,7 @@ class ZfsSshProcessManager {
client.debug("ZfsProcessManager arguments: " + JSON.stringify(arguments)); client.debug("ZfsProcessManager arguments: " + JSON.stringify(arguments));
client.logger.verbose("ZfsProcessManager command: " + command); client.logger.verbose("ZfsProcessManager command: " + command);
client.exec(command, {}, proxy).catch(err => { client.exec(command, {}, proxy).catch((err) => {
proxy.stderr.emit("data", err.message); proxy.stderr.emit("data", err.message);
proxy.emit("close", 1, "SIGQUIT"); proxy.emit("close", 1, "SIGQUIT");
}); });
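// Illustrative wiring sketch: ZfsSshProcessManager lets Zetabyte "spawn"
// zfs/zpool on a remote host. The proxy it returns emulates a child process
// (stdout/stderr emitters, kill(), close events) while client.exec() runs the
// command over SSH, and transport errors are surfaced as stderr data followed
// by a close event. The SshClient require path/constructor and the `executor`
// option name below are assumptions for illustration only.
const SshClient = require("./src/utils/ssh"); // hypothetical path/export
const sshClient = new SshClient({ host: "truenas.local", username: "root" }); // hypothetical options
const remoteZb = new Zetabyte({
  executor: new ZfsSshProcessManager(sshClient), // assumed option name
  sudo: true,
});

remoteZb.zfs.list("tank").then((res) => console.log(res));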