Improved README.md and moved freenas examples

This commit is contained in:
D1StrX 2022-06-02 21:16:55 +02:00
parent 7041b7fe45
commit 2b3491bf35
7 changed files with 342 additions and 327 deletions

README.md
# Introduction

## What is Democratic-CSI?

`Democratic-CSI` implements the `CSI` (Container Storage Interface) specification, providing storage for various container orchestration systems (_ie: Kubernetes, Nomad, OpenShift_).

The current _focus_ is providing storage via iSCSI or NFS from ZFS-based storage systems, predominantly `TrueNAS / FreeNAS` and `ZoL on Ubuntu`.

The current _drivers_ implement the depth and breadth of the `CSI` specification, so you have access to resizing, snapshots, clones, etc.
## What can Democratic-CSI offer?

**Several implementations of `CSI` drivers**

:arrow_forward: `freenas-nfs` (manages zfs datasets to share over nfs)

:arrow_forward: `freenas-iscsi` (manages zfs zvols to share over iscsi)

:arrow_forward: `freenas-smb` (manages zfs datasets to share over smb)

:arrow_forward: `freenas-api-nfs` experimental use with SCALE only (manages zfs datasets to share over nfs)

:arrow_forward: `freenas-api-iscsi` experimental use with SCALE only (manages zfs zvols to share over iscsi)

:arrow_forward: `freenas-api-smb` experimental use with SCALE only (manages zfs datasets to share over smb)

:arrow_forward: `zfs-generic-nfs` (works with any ZoL installation... ie: Ubuntu)

:arrow_forward: `zfs-generic-iscsi` (works with any ZoL installation... ie: Ubuntu)

:arrow_forward: `zfs-local-ephemeral-inline` (provisions node-local zfs datasets)

:arrow_forward: `synology-iscsi` experimental (manages volumes to share over iscsi)

:arrow_forward: `lustre-client` (crudely provisions storage using a shared lustre share/directory for all volumes)

:arrow_forward: `nfs-client` (crudely provisions storage using a shared nfs share/directory for all volumes)

:arrow_forward: `smb-client` (crudely provisions storage using a shared smb share/directory for all volumes)

:arrow_forward: `node-manual` (allows connecting to manually created smb, nfs, lustre, and iscsi volumes, see sample PVs in the `examples` directory)

**Development**

:arrow_forward: Framework for developing `CSI` drivers
If you have any interest in providing a `CSI` driver, simply open an issue to discuss. The project provides an extensive framework to build from, making it relatively easy to implement new drivers.
## Community Guides

[Using TrueNAS to provide persistent storage for Kubernetes](https://jonathangazeley.com/2021/01/05/using-truenas-to-provide-persistent-storage-for-kubernetes/)

[Moving to TrueNAS and democratic-csi for Kubernetes persistent storage](https://www.lisenet.com/2021/moving-to-truenas-and-democratic-csi-for-kubernetes-persistent-storage/)

[Migrating from `nfs-client-provisioner` to `democratic-csi`](https://gist.github.com/admun/4372899f20421a947b7544e5fc9f9117)

[Migrating between storage classes using `Velero`](https://gist.github.com/deefdragon/d58a4210622ff64088bd62a5d8a4e8cc)
# Installation

Predominantly 3 prerequisites are needed:

- Node preparation (ie: your Kubernetes cluster nodes)
- Storage server preparation
- Deployment of the driver into the cluster (`helm` chart provided with sample `values.yaml`)

## :wrench: Node preparation

Alright, you have chosen your driver. Let's start by configuring the prerequisites for your node.
You can choose to use either **NFS** or **iSCSI**, or both.

### NFS configuration

---

#### RHEL / CentOS

```bash
sudo yum install -y nfs-utils
```

#### Ubuntu / Debian

```bash
sudo apt-get install -y nfs-common
```
### iSCSI configuration

---

**RHEL / CentOS**

Install the following system packages:

```bash
sudo yum install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
```

Enable multipathing:

```bash
sudo mpathconf --enable --with_multipathd y
```

Ensure that `iscsid` and `multipathd` are running:

```bash
sudo systemctl enable iscsid multipathd && sudo systemctl start iscsid multipathd
```

Start and enable iSCSI:

```bash
sudo systemctl enable iscsi && sudo systemctl start iscsi
```

**Ubuntu / Debian**

Install the following system packages:

```bash
sudo apt-get install -y open-iscsi lsscsi sg3-utils multipath-tools scsitools
```

#### Multipathing

`Multipath` is supported for the `iSCSI`-based drivers. Simply set up multipath to your liking and set multiple portals in the config as appropriate.

_NOTE:_ If you are running Kubernetes with Rancher/RKE please see the following:
[Support host iscsi simultaneously with kubelet iscsi (pvc)](https://github.com/rancher/rke/issues/1846)

Add the multipath configuration:

```bash
sudo tee /etc/multipath.conf <<-'EOF'
defaults {
    user_friendly_names yes
    find_multipaths yes
}
EOF
```

Enable the `multipath-tools` service and restart it to load the configuration:

```bash
sudo systemctl enable multipath-tools && sudo service multipath-tools restart
```

Ensure that `open-iscsi` and `multipath-tools` are enabled and running:

```bash
sudo systemctl status multipath-tools
sudo systemctl enable open-iscsi.service
sudo service open-iscsi start
sudo systemctl status open-iscsi
```
### FreeNAS-SMB

---

If using with Windows-based machines you may need to enable guest access (even if you are connecting with credentials):

```powershell
Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters AllowInsecureGuestAuth -Value 1
Restart-Service LanmanWorkstation -Force
```
### ZFS-local-ephemeral-inline

---

This `driver` provisions node-local ephemeral storage on a per-pod basis. Each node should have an identically named ZFS pool created and available to the `driver`.

_NOTE:_ This is _NOT_ the same thing as using the Docker ZFS storage driver (although the same pool could be used). No other requirements are necessary. More regarding this can be found here: [Pod Inline Volume Support](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html)
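With this driver a pod requests an inline ephemeral volume directly in its spec rather than through a PVC. A minimal sketch (the `driver` name and the `size` attribute below are placeholders, not values from this project; check your deployment's actual CSI driver name and supported attributes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      csi:
        # placeholder: must match the name of your deployed CSI driver
        driver: org.democratic-csi.zfs-local-ephemeral-inline
        volumeAttributes:
          # placeholder attribute; sizing support depends on the driver
          size: 1Gi
```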
## :wrench: Storage server preparation

Storage server preparation depends slightly on which `driver` you are using.

The recommended version of FreeNAS / TrueNAS is 12.0-U2 or higher, however the driver should work with much older versions as well.

### TrueNAS / FreeNAS (freenas-nfs, freenas-iscsi, freenas-smb, freenas-api-nfs, freenas-api-iscsi, freenas-api-smb)

#### API without SSH

---

Configuration templates can be found [HERE](https://github.com/D1StrX/democratic-csi/blob/667354978e497fb4624d52e909609ca278e4bd25/examples/api-without-ssh)

The various `freenas-api-*` drivers are currently EXPERIMENTAL and can only be used with SCALE 21.08+. Fundamentally these drivers remove the need for `ssh` connections and do all operations entirely with the TrueNAS API. With that in mind, any `ssh/shell/etc` requirements below can be safely ignored. Also note the following known issues:

[Additional middleware changes to support Democratic CSI use of native API](https://jira.ixsystems.com/browse/NAS-111870)

[TrueNAS Scale 21.08 - Could not log into all portals](https://github.com/democratic-csi/democratic-csi/issues/112)

[Pure api based truenas driver (ssh dependency removed)](https://github.com/democratic-csi/democratic-csi/issues/101)

[Continue configuration](#service-configuration)

#### API with SSH

---

Configuration templates can be found [HERE](https://github.com/D1StrX/democratic-csi/blob/667354978e497fb4624d52e909609ca278e4bd25/examples/api-with-ssh)

[Continue configuration](#service-configuration)

### Service configuration

Ensure the following services are _configured_, _running_ and starting automatically:

#### SSH configuration

---

When creating a custom user (e.g., `CSI`):

- Ensure `ZSH`, `BASH`, or `SH` is set as the `shell`; `CSH` gives false errors due to quoting (also applicable when using `root`)

  &emsp;![image](https://user-images.githubusercontent.com/40062371/147365044-007b2657-30f9-428b-ae12-7622a572866d.png)

- Ensure that the user has passwordless `sudo` privileges:

  _NOTE:_ This could get reset by FreeNAS if you alter the user via the GUI later

  - On TrueNAS CORE 12.0-u3 or higher, open the Shell:

    ```bash
    cli
    ```

    After you enter the TrueNAS CLI and are at its prompt, list the users:

    ```bash
    account user query select=id,username,uid,sudo_nopasswd
    ```

    Find the `id` of the user you want to update (note, this is distinct from the `uid`), then:

    ```bash
    account user update id=<id> sudo=true
    account user update id=<id> sudo_nopasswd=true
    ```

    (Optional) If you want to disable the user's password:

    ```bash
    account user update id=<id> password_disabled=true
    ```

    Exit the CLI by pressing `ctrl-d`

  - On other versions add the user to the sudoers file:

    ```bash
    visudo
    ```

    ```
    <username> ALL=(ALL) NOPASSWD:ALL
    ```

    Confirm the sudoers file is appropriate:

    ```bash
    cat /usr/local/etc/sudoers
    ```

- Ensure `CSI` has a home folder; this is used to store its SSH public key

  &emsp;![image](https://user-images.githubusercontent.com/40062371/147370105-6030b22e-ceb3-4768-b4a0-8e55fafe7f0f.png)

- Add the user to `wheel` or create/use a group that will be used for permissions later on
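A key-based login for that user can be prepared from your workstation. A sketch (the key path and comment are illustrative; the public key still has to be added to the user's `SSH Public Key` field or `~/.ssh/authorized_keys` on the storage server, and the private key referenced in the driver's SSH connection settings):

```shell
# Generate a dedicated key pair for the csi user (example path, no passphrase)
ssh-keygen -t ed25519 -N "" -f ./csi_id_ed25519 -C "democratic-csi"

# Print the public key so it can be pasted into the csi user's
# "SSH Public Key" field (or ~/.ssh/authorized_keys) on the storage server
cat ./csi_id_ed25519.pub
```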
#### **NFS configuration**

---

- Bind the network interface to the NFS service
- It is recommended to use NFS 3

#### **iSCSI configuration**

---

- Bind the network interface to the iSCSI service

  _NOTE:_ (Fixed in 12.0-U2+) when using the FreeNAS API concurrently, the `/etc/ctl.conf` file on the server can become invalid. Some sample scripts are provided in the `contrib` directory to clean things up, ie: copy the script to the server and run it directly - `./ctld-config-watchdog-db.sh | logger -t ctld-config-watchdog-db.sh &`
  Please read the scripts and set the variables as appropriate for your server.

- Ensure you have pre-emptively created portals, initiator groups, and auths
- Make note of the respective IDs (the true ID may not reflect what is visible in the UI)
- IDs can be made visible by clicking the `Edit` link and finding the ID in the browser address bar
- Optionally you may use the following to retrieve the appropriate IDs:
  - `curl --header "Accept: application/json" --user root:<password> 'http(s)://<ip>/api/v2.0/iscsi/portal'`
  - `curl --header "Accept: application/json" --user root:<password> 'http(s)://<ip>/api/v2.0/iscsi/initiator'`
  - `curl --header "Accept: application/json" --user root:<password> 'http(s)://<ip>/api/v2.0/iscsi/auth'`
- The maximum number of volumes is limited to 255 by default on FreeBSD (physical devices such as disks and CD-ROM drives count against this value). Be sure to properly adjust both [tunables](https://www.freebsd.org/cgi/man.cgi?query=ctl&sektion=4#end) `kern.cam.ctl.max_ports` and `kern.cam.ctl.max_luns` to avoid running out of resources when dynamically provisioning iSCSI volumes on FreeNAS or TrueNAS Core.
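On FreeBSD-based CORE systems, `kern.cam.ctl.max_ports` and `kern.cam.ctl.max_luns` are loader tunables (see `ctl(4)`), so they can be raised via System -> Tunables in the UI or in `/boot/loader.conf`. A sketch with illustrative values only (size them for your environment):

```
# /boot/loader.conf (FreeNAS / TrueNAS Core) - example values, not recommendations
kern.cam.ctl.max_ports=256
kern.cam.ctl.max_luns=1024
```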
#### **SMB configuration**

---

- Bind the network interface to the SMB service
### ZoL (zfs-generic-nfs, zfs-generic-iscsi)

---

Ensure ssh and zfs are installed on the nfs/iscsi server and that you have installed `targetcli`.

The driver executes many commands over an ssh connection. You may consider disabling all the `motd` details for the ssh user as it can spike the cpu unnecessarily:

- [How can I remove the landscape.canonical.com greeting from motd?](https://askubuntu.com/questions/318592/how-can-i-remove-the-landscape-canonical-com-greeting-from-motd)
- [Disable dynamic motd and news on Ubuntu 20.04 Focal Fossa Linux](https://linuxconfig.org/disable-dynamic-motd-and-news-on-ubuntu-20-04-focal-fossa-linux)

RHEL / CentOS:

```bash
sudo yum install targetcli -y
```

Ubuntu / Debian:

```bash
sudo apt-get -y install targetcli-fb
```
### Synology (synology-iscsi)

---

Ensure iSCSI Manager has been installed and is generally setup/configured. DSM 6.3+ is supported.

### :wrench: YAML Values configuration

---

Instruct `Democratic-CSI` to use `sudo` by uncommenting the following in your configuration template:

```yaml
zfs:
  cli:
    sudoEnabled: true
```

As you read in the [Storage server preparation](#wrench-storage-server-preparation), YAML configuration comes in two flavours: _with_ and _without_ API configuration.

Starting with TrueNAS CORE 12 it is also possible to use an `apiKey` instead of the user/root password for the HTTP connection. The `apiKey` can be generated by clicking on the `Settings icon` -> `API Keys` -> `ADD`

![image](https://user-images.githubusercontent.com/40062371/147371451-ff712de3-cce0-448e-b59f-29269179d2d6.png)

Issues to review:

[ixsystems NAS-108519](https://jira.ixsystems.com/browse/NAS-108519)

[ixsystems NAS-108520](https://jira.ixsystems.com/browse/NAS-108520)

[ixsystems NAS-108521](https://jira.ixsystems.com/browse/NAS-108521)

[ixsystems NAS-108522](https://jira.ixsystems.com/browse/NAS-108522)

[ixsystems NAS-107219](https://jira.ixsystems.com/browse/NAS-107219)
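As a sketch, the HTTP connection portion of a driver config using an `apiKey` could look like the following (field names follow the sample configuration templates; the host and key values are placeholders, verify against your template before use):

```yaml
httpConnection:
  protocol: https
  host: truenas.example.com # placeholder
  port: 443
  # generated via Settings icon -> API Keys -> ADD
  apiKey: 1-xxxxxxxxxxxxxxxx # placeholder
  allowInsecure: false
```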
## :wrench: Helm Installation

---

Copy the proper example values file from the examples:

[API without SSH](https://github.com/D1StrX/democratic-csi/blob/667354978e497fb4624d52e909609ca278e4bd25/examples/api-without-ssh)

[API with SSH](https://github.com/D1StrX/democratic-csi/blob/667354978e497fb4624d52e909609ca278e4bd25/examples/api-with-ssh)

Add the `Democratic-CSI` Helm repository:

```bash
helm repo add democratic-csi https://democratic-csi.github.io/charts/
```

Update your Helm repositories to get the latest charts:

```bash
helm repo update
```

Search the repository for the available charts:

```bash
helm search repo democratic-csi/
```

### Helm V3

---

Install `Democratic-CSI` with your configured values. Helm V3 requires that you pass `--create-namespace`:

```bash
helm install zfs-nfs democratic-csi/democratic-csi --values truenas-iscsi.yaml --namespace democratic-csi --create-namespace
```

Update/upgrade values:

```bash
helm upgrade <name> democratic-csi/democratic-csi --values <freenas-*>.yaml --namespace <namespace>
```
### Helm V2

---

Install `Democratic-CSI` with your configured values:

```bash
helm upgrade \
  --install \
  --values freenas-nfs.yaml \
  --namespace democratic-csi \
  zfs-nfs democratic-csi/democratic-csi
```
### On non standard Kubelet paths

Some distributions, such as `minikube` and `microk8s`, use a non-standard kubelet path. In such cases it is necessary to provide a new kubelet host path, microk8s example below:

```bash
microk8s helm upgrade \
  --install \
  --values freenas-nfs.yaml \
  --set node.kubeletHostPath="/var/snap/microk8s/common/var/lib/kubelet" \
  zfs-nfs democratic-csi/democratic-csi
```

- microk8s - `/var/snap/microk8s/common/var/lib/kubelet`
- pivotal - `/var/vcap/data/kubelet`
### OpenShift

`Democratic-CSI` generally works fine with OpenShift. Some special parameters need to be set with helm (support added in chart version `0.6.1`):

_NOTE:_ Required:

```bash
--set node.rbac.openshift.privileged=true
--set node.driver.localtimeHostPath=false
```

_NOTE:_ Unlikely, but in special circumstances may be required:

```bash
--set controller.rbac.openshift.privileged=true
```

### **Nomad**

`Democratic-CSI` works with Nomad in a functioning but limited capacity. See the [Nomad docs](docs/nomad.md) for details.
## :wrench: **Multiple Deployments**

You may install multiple deployments of each/any driver. It requires the following:

- Use unique names for your storage classes (per cluster)
- Use a unique parent dataset (ie: don't try to use the same parent across deployments or clusters)
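For example, two deployments could be kept apart along these lines (a sketch; key names such as `csiDriver.name` follow the chart's sample values files, verify against your chart version):

```yaml
# values-a.yaml - first deployment
csiDriver:
  name: "org.democratic-csi.nfs-a" # unique per cluster
storageClasses:
  - name: freenas-nfs-a # unique storage class name per cluster
driver:
  config:
    zfs:
      # unique parent dataset; do not share across deployments or clusters
      datasetParentName: tank/k8s/a/vols
```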
## :wrench: **Snapshot Support**

[**External-snapshotter CRD**](https://github.com/kubernetes-csi/external-snapshotter/tree/master/client/config/crd)

Install beta (v1.17+) CRDs (once per cluster):

```bash
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f snapshot.storage.k8s.io_volumesnapshots.yaml
```

[**External-snapshotter Controller**](https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/snapshot-controller)

Install the snapshot controller (once per cluster), adjusting the `--namespace` references to your needs:

```bash
kubectl apply -f rbac-snapshot-controller.yaml
kubectl apply -f setup-snapshot-controller.yaml
```

Install `Democratic-CSI` as usual with `volumeSnapshotClasses` defined as appropriate.

[Volume snapshots documentation](https://kubernetes.io/docs/concepts/storage/volume-snapshots/)

[External-snapshotter usage](https://github.com/kubernetes-csi/external-snapshotter#usage)

[democratic-csi issue #129 (volumeSnapshotClasses example)](https://github.com/democratic-csi/democratic-csi/issues/129#issuecomment-961489810)
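As a sketch, a snapshot class plus a snapshot of an existing PVC could look like this (the names are placeholders; `driver` must match the name of your deployed CSI driver):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: freenas-nfs-snapshots # placeholder
driver: org.democratic-csi.nfs # placeholder: your csiDriver.name
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-data-snapshot
spec:
  volumeSnapshotClassName: freenas-nfs-snapshots
  source:
    persistentVolumeClaimName: my-data # existing PVC to snapshot
```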
## :wrench: Migrating from freenas-provisioner and freenas-iscsi-provisioner

It is possible to migrate all volumes from the non-csi freenas provisioners to `Democratic-CSI`.

Copy the `contrib/freenas-provisioner-to-democratic-csi.sh` script from the project to your workstation. Read the script in detail, and edit the variables to your needs to start migrating!
# :trophy: **Sponsors**

A special shout out to the wonderful sponsors of this project!

[![ixSystems](https://www.ixsystems.com/wp-content/uploads/2021/06/ix_logo_200x47.png "ixSystems")](http://ixsystems.com/)

## **Related**

[freenas-provisioner](https://github.com/nmaupu/freenas-provisioner)

[freenas-iscsi-provisioner](https://github.com/travisghansen/freenas-iscsi-provisioner)

[Welcome TrueNAS Core container storage provider](https://datamattsson.tumblr.com/post/624751011659202560/welcome-truenas-core-container-storage-provider)

[truenas-csi](https://github.com/dravanet/truenas-csi)

[synology-csi](https://github.com/SynologyOpenSource/synology-csi)

[zfs-localpv](https://github.com/openebs/zfs-localpv)
@ -72,7 +72,6 @@ iscsi:
#nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
namePrefix: csi-
nameSuffix: "-clustera"
targetGroups:
# get the correct ID from the "portal" section in the UI
@ -85,7 +84,6 @@ iscsi:
# only required if using Chap # only required if using Chap
targetGroupAuthGroup: targetGroupAuthGroup:
#extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
extentInsecureTpc: true extentInsecureTpc: true
extentXenCompat: false extentXenCompat: false
extentDisablePhysicalBlocksize: true extentDisablePhysicalBlocksize: true


@@ -51,8 +51,8 @@ zfs:
 datasetEnableQuotas: true
 datasetEnableReservation: false
 datasetPermissionsMode: "0777"
-datasetPermissionsUser: 0
-datasetPermissionsGroup: 0
+datasetPermissionsUser: root
+datasetPermissionsGroup: wheel
 #datasetPermissionsAcls:
 #- "-m everyone@:full_set:allow"
 #- "-m u:kube:full_set:allow"


@@ -46,9 +46,7 @@ zfs:
 datasetProperties:
 aclmode: restricted
-aclinherit: passthrough
-acltype: nfsv4
-casesensitivity: insensitive
+casesensitivity: mixed
 datasetParentName: tank/k8s/a/vols
 # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
@@ -56,41 +54,12 @@ zfs:
 detachedSnapshotsDatasetParentName: tank/k8s/a/snaps
 datasetEnableQuotas: true
 datasetEnableReservation: false
-datasetPermissionsMode: "0770"
-# as appropriate create a dedicated user for smb connections
-# and set this
-datasetPermissionsUser: 65534
-datasetPermissionsGroup: 65534
-# CORE
-#datasetPermissionsAclsBinary: setfacl
-# SCALE
-#datasetPermissionsAclsBinary: nfs4xdr_setfacl
-# if using a user other than guest/nobody comment the 'everyone@' acl
-# and uncomment the appropriate block below
+datasetPermissionsMode: "0777"
+datasetPermissionsUser: nobody
+datasetPermissionsGroup: nobody
 datasetPermissionsAcls:
-- "-m everyone@:full_set:fd:allow"
+- "-m everyone@:full_set:allow"
-#- "-m u:kube:full_set:allow"
-# CORE
-# in CORE you cannot have multiple entries for the same principle
-# or said differently, they are declarative so using -m will replace
-# whatever the current value is for the principle rather than adding a
-# entry in the acl list
-#- "-m g:builtin_users:full_set:fd:allow"
-#- "-m group@:modify_set:fd:allow"
-#- "-m owner@:full_set:fd:allow"
-# SCALE
-# https://www.truenas.com/community/threads/get-setfacl-on-scale-with-nfsv4-acls.95231/
-# -s replaces everything
-# so we put this in specific order to mimic the defaults of SCALE when using the api
-#- -s group:builtin_users:full_set:fd:allow
-#- -a group:builtin_users:modify_set:fd:allow
-#- -a group@:modify_set:fd:allow
-#- -a owner@:full_set:fd:allow
 smb:
 shareHost: server address
@@ -108,7 +77,7 @@ smb:
 shareAllowedHosts: []
 shareDeniedHosts: []
 #shareDefaultPermissions: true
-shareGuestOk: false
+shareGuestOk: true
 #shareGuestOnly: true
 #shareShowHiddenFiles: true
 shareRecycleBin: true


@@ -37,8 +37,7 @@ zfs:
 # total volume name (zvol/<datasetParentName>/<pvc name>) length cannot exceed 63 chars
 # https://www.ixsystems.com/documentation/freenas/11.2-U5/storage.html#zfs-zvol-config-opts-tab
 # standard volume naming overhead is 46 chars
-# datasetParentName should therefore be 17 chars or less when using TrueNAS 12 or below (SCALE and 13+ do not have the same limits)
-# for work-arounds see https://github.com/democratic-csi/democratic-csi/issues/54
+# datasetParentName should therefore be 17 chars or less when using TrueNAS 12 or below
 datasetParentName: tank/k8s/b/vols
 # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
 # they may be siblings, but neither should be nested in the other
@@ -63,7 +62,6 @@ iscsi:
 #nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
 namePrefix: csi-
 nameSuffix: "-clustera"
 # add as many as needed
 targetGroups:
 # get the correct ID from the "portal" section in the UI
@@ -76,7 +74,6 @@ iscsi:
 # only required if using Chap
 targetGroupAuthGroup:
-#extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
 extentInsecureTpc: true
 extentXenCompat: false
 extentDisablePhysicalBlocksize: true


@@ -43,8 +43,6 @@ zfs:
 datasetPermissionsMode: "0777"
 datasetPermissionsUser: 0
 datasetPermissionsGroup: 0
-# not supported yet
 #datasetPermissionsAcls:
 #- "-m everyone@:full_set:allow"
 #- "-m u:kube:full_set:allow"


@@ -34,10 +34,9 @@ zfs:
 # "org.freenas:test": "{{ parameters.foo }}"
 # "org.freenas:test2": "some value"
-# these are managed automatically via the volume creation process when flagged as an smb volume
-#datasetProperties:
-# aclmode: restricted
-# casesensitivity: mixed
+datasetProperties:
+ aclmode: restricted
+ casesensitivity: mixed
 datasetParentName: tank/k8s/a/vols
 # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
@@ -48,10 +47,8 @@ zfs:
 datasetPermissionsMode: "0777"
 datasetPermissionsUser: 0
 datasetPermissionsGroup: 0
-# not supported yet in api
-#datasetPermissionsAcls:
-#- "-m everyone@:full_set:allow"
+datasetPermissionsAcls:
+- "-m everyone@:full_set:allow"
 #- "-m u:kube:full_set:allow"
 smb: