Add README and other bits

Liam Young 2020-04-02 13:49:37 +00:00
parent 8accdc1553
commit 8e1eba48f7
9 changed files with 186 additions and 35 deletions

143
README.md
View File

@@ -1,21 +1,138 @@
# Overview
The charm provides the Ceph iSCSI gateway service. It is intended to be used
in conjunction with the ceph-osd and ceph-mon charms.
> **Warning**: This charm is in a preview state for testing and should not
be used outside of the lab.
# Usage
## Deployment
When deploying ceph-iscsi, ensure that exactly two units of the charm are
deployed; this will provide multiple data paths to clients.
> **Note**: Deploying four units is also theoretically possible, but this has
not been tested.
A sample `bundle.yaml` file's contents:
```yaml
series: focal
applications:
ceph-iscsi:
charm: cs:ceph-iscsi
num_units: 2
ceph-osd:
charm: cs:ceph-osd
num_units: 3
storage:
osd-devices: /dev/vdb
options:
source: cloud:bionic-train
ceph-mon:
charm: cs:ceph-mon
num_units: 3
options:
monitor-count: '3'
source: cloud:bionic-train
relations:
- - ceph-mon:client
- ceph-iscsi:ceph-client
- - ceph-osd:mon
- ceph-mon:osd
```
> **Important**: Make sure the designated block device passed to the ceph-osd
charms exists and is not currently in use.
Deploy the bundle:

    juju deploy ./bundle.yaml
## Managing Targets
The charm provides an action for creating a simple target. If more complex
management of targets is required then the `gwcli` tool should be used. `gwcli`
is available from the root account on the gateway nodes.
```bash
$ juju ssh ceph-iscsi/1
$ sudo gwcli
/> ls
```
## Actions
This section covers Juju [actions][juju-docs-actions] supported by the charm.
Actions allow specific operations to be performed on a per-unit basis.
### create-target
Run this action to create an iSCSI target.
```bash
$ juju run-action ceph-iscsi/0 create-target \
image-size=2G \
image-name=bob \
client-initiatorname=iqn.1993-08.org.debian:01:aaa2299be916 \
client-username=usera \
client-password=testpass
Action queued with id: "28"
```
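The `client-initiatorname` parameter must be a well-formed iSCSI Qualified Name. As a minimal sketch (a hypothetical helper, not part of the charm), a pre-flight check for the RFC 3720 `iqn.<yyyy-mm>.<reversed-domain>[:<identifier>]` shape could look like:

```python
import re

# Hypothetical validator for the client-initiatorname parameter;
# matches the iqn.<yyyy-mm>.<reversed-domain>[:<identifier>] shape.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def is_valid_iqn(name: str) -> bool:
    """Return True if name looks like a valid iSCSI Qualified Name."""
    return bool(IQN_RE.match(name))
```

For example, `is_valid_iqn("iqn.1993-08.org.debian:01:aaa2299be916")` accepts the initiator name used above.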
The iqn of the created target is returned in the output from the action:
```bash
$ juju show-action-output 28
UnitId: ceph-iscsi/0
results:
iqn: iqn.2003-01.com.ubuntu.iscsi-gw:iscsi-igw
status: completed
timing:
completed: 2020-04-02 13:32:02 +0000 UTC
enqueued: 2020-04-02 13:18:42 +0000 UTC
started: 2020-04-02 13:18:45 +0000 UTC
```
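In automation it can be handy to pull the iqn out of that output without a full YAML parser. A minimal, hypothetical sketch (assumes the `iqn:` line appears exactly once, as in the output above):

```python
def parse_action_iqn(output: str) -> str:
    """Extract the value of the 'iqn:' line from show-action-output text."""
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith("iqn:"):
            # Split only on the first colon; the iqn itself contains colons.
            return stripped.split(":", 1)[1].strip()
    raise ValueError("no iqn found in action output")
```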
### pause
Pause the ceph-iscsi unit. This action will stop the rbd services.
### resume
Resume the ceph-iscsi unit. This action will start the rbd services if paused.
## Network spaces
This charm supports the use of Juju [network spaces][juju-docs-spaces] (Juju
`v.2.0`). This feature optionally allows specific types of the application's
network traffic to be bound to subnets that the underlying hardware is
connected to.
> **Note**: Spaces must be configured in the backing cloud prior to deployment.
The ceph-iscsi charm exposes the following traffic types (bindings):
- 'public' (front-side)
- 'cluster' (back-side)
For example, providing that spaces 'data-space' and 'cluster-space' exist, the
deploy command above could look like this:
juju deploy --config ceph-iscsi.yaml -n 2 ceph-iscsi \
--bind "public=data-space cluster=cluster-space"
Alternatively, configuration can be provided as part of a bundle:
```yaml
ceph-iscsi:
charm: cs:ceph-iscsi
num_units: 2
bindings:
public: data-space
cluster: cluster-space
```
To run the charm tests (tested on OpenStack provider):
tox -e func-smoke

View File

@@ -11,20 +11,24 @@ resume:
corresponding hacluster unit on the node must be resumed as well.
security-checklist:
description: Validate the running configuration against the OpenStack security guides checklist
add-trusted-ip:
description: "Add IP address that is permitted to talk to API"
params:
ips:
type: string
default: ''
description: "Space separated list of trusted IPs"
create-target:
description: "Create a new iSCSI target"
params:
gateway-units:
type: string
default: writeback
description: "Space separated list of gateway units, e.g. 'ceph-iscsi/0 ceph-iscsi/1'"
iqn:
type: string
default: writeback
description: "iSCSI Qualified Name"
image-size:
type: string
default: 1G
description: "Target size"
image-name:
type: string
@@ -32,7 +36,6 @@ create-target:
description: "Image name "
client-initiatorname:
type: string
default: 1G
description: "The initiator name of the client that will mount the target"
client-username:
type: string
@@ -41,8 +44,6 @@ create-target:
type: string
description: "The CHAPs password to be created for the client"
required:
- gateway-units
- iqn
- image-size
- image-name
- client-initiatorname

View File

@@ -12,6 +12,9 @@ tags:
series:
- focal
subordinate: false
extra-bindings:
public:
cluster:
requires:
ceph-client:
interface: ceph-client

@@ -1 +1 @@
Subproject commit cb3557ba8aa2997936e19ed876d2b2b962a75868
Subproject commit 6a99f92ae090aad224044d0862a3da78c7a04a55

@@ -1 +1 @@
Subproject commit fabb0335c4d02c915dd93f754c38c78685ed54b6
Subproject commit d259e0919fc19075b1e3636a5dd3c94ab81fd416

@@ -1 +1 @@
Subproject commit f203311d4664fb68871b1d4a2367f6588fb1af29
Subproject commit bef6f2161be12eeb3385aac113a738aecc85d807

View File

@@ -92,7 +92,7 @@ class CephISCSIGatewayCharmBase(ops_openstack.OSBaseCharm):
"mgr", "allow r"]
RESTART_MAP = {
'/etc/ceph/ceph.conf': ['rbd-target-api'],
'/etc/ceph/ceph.conf': ['rbd-target-api', 'rbd-target-gw'],
'/etc/ceph/iscsi-gateway.cfg': ['rbd-target-api'],
'/etc/ceph/ceph.client.ceph-iscsi.keyring': ['rbd-target-api']}
@@ -104,6 +104,7 @@ class CephISCSIGatewayCharmBase(ops_openstack.OSBaseCharm):
logging.info("Using {} class".format(self.release))
self.state.set_default(target_created=False)
self.state.set_default(enable_tls=False)
self.state.set_default(additional_trusted_ips=[])
self.ceph_client = interface_ceph_client.CephClientRequires(
self,
'ceph-client')
@@ -119,17 +120,28 @@ class CephISCSIGatewayCharmBase(ops_openstack.OSBaseCharm):
self.framework.observe(self.peers.on.has_peers, self)
self.framework.observe(self.peers.on.ready_peers, self)
self.framework.observe(self.on.create_target_action, self)
self.framework.observe(self.on.add_trusted_ip_action, self)
self.framework.observe(self.on.certificates_relation_joined, self)
self.framework.observe(self.on.certificates_relation_changed, self)
self.framework.observe(self.on.config_changed, self)
self.framework.observe(self.on.upgrade_charm, self)
def on_add_trusted_ip_action(self, event):
self.state.additional_trusted_ips.extend(event.params['ips'].split())
logging.info(self.state.additional_trusted_ips)
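The action takes `ips` as a single space separated string; the intended parsing can be sketched standalone as (a hypothetical helper, not part of the charm):

```python
def parse_trusted_ips(ips: str) -> list:
    """Split a space separated list of trusted IPs into individual entries."""
    # str.split() with no argument collapses repeated whitespace and
    # returns an empty list for the action's default empty string.
    return ips.split()
```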
def on_create_target_action(self, event):
gw_client = gwcli_client.GatewayClient()
gw_client.create_target(event.params['iqn'])
target = event.params.get('iqn', self.DEFAULT_TARGET)
gateway_units = event.params.get(
'gateway-units',
[u for u in self.peers.ready_peer_details.keys()])
gw_client.create_target(target)
for gw_unit, gw_config in self.peers.ready_peer_details.items():
added_gateways = []
if gw_unit in event.params['gateway-units']:
if gw_unit in gateway_units:
gw_client.add_gateway_to_target(
event.params['iqn'],
target,
gw_config['ip'],
gw_config['fqdn'])
added_gateways.append(gw_unit)
@@ -138,18 +150,19 @@ class CephISCSIGatewayCharmBase(ops_openstack.OSBaseCharm):
event.params['image-name'],
event.params['image-size'])
gw_client.add_client_to_target(
event.params['iqn'],
target,
event.params['client-initiatorname'])
gw_client.add_client_auth(
event.params['iqn'],
target,
event.params['client-initiatorname'],
event.params['client-username'],
event.params['client-password'])
gw_client.add_disk_to_client(
event.params['iqn'],
target,
event.params['client-initiatorname'],
self.model.config['rbd-pool'],
event.params['image-name'])
event.set_results({'iqn': target})
def setup_default_target(self):
gw_client = gwcli_client.GatewayClient()
@@ -173,7 +186,11 @@ class CephISCSIGatewayCharmBase(ops_openstack.OSBaseCharm):
logging.info("Initial target setup already complete")
return
else:
self.setup_default_target()
# This appears to race and sometimes runs before the
# peer is 100% ready. There is probably little value
# in this anyway so may just remove it.
# self.setup_default_target()
return
def on_has_peers(self, event):
logging.info("Unit has peers")
@@ -191,6 +208,19 @@ class CephISCSIGatewayCharmBase(ops_openstack.OSBaseCharm):
self.ceph_client.request_ceph_permissions(
'ceph-iscsi',
self.CEPH_CAPABILITIES)
self.ceph_client.request_osd_settings({
'osd heartbeat grace': 20,
'osd heartbeat interval': 5})
def on_config_changed(self, event):
if self.state.is_started:
self.on_pools_available(event)
self.on_ceph_client_relation_joined(event)
def on_upgrade_charm(self, event):
if self.state.is_started:
self.on_pools_available(event)
self.on_ceph_client_relation_joined(event)
def on_pools_available(self, event):
logging.info("on_pools_available")

View File

@@ -6,7 +6,7 @@ import socket
from ops.framework import (
StoredState,
EventBase,
EventsBase,
EventSetBase,
EventSource,
Object)
@@ -19,7 +19,7 @@ class ReadyPeersEvent(EventBase):
pass
class CephISCSIGatewayPeerEvents(EventsBase):
class CephISCSIGatewayPeerEvents(EventSetBase):
has_peers = EventSource(HasPeersEvent)
ready_peers = EventSource(ReadyPeersEvent)

View File

@@ -10,7 +10,7 @@ applications:
options:
rbd-pool: tmbtil
ceph-osd:
charm: cs:~openstack-charmers-next/ceph-osd
charm: cs:~gnuoy/ceph-osd-5
num_units: 3
storage:
osd-devices: 'cinder,10G'
@@ -18,7 +18,7 @@ applications:
osd-devices: '/dev/test-non-existent'
source: cloud:bionic-train
ceph-mon:
charm: cs:~openstack-charmers-next/ceph-mon
charm: cs:~gnuoy/ceph-mon-6
num_units: 3
options:
monitor-count: '3'
@@ -26,7 +26,7 @@ applications:
vault:
num_units: 1
# charm: cs:~openstack-charmers-next/vault
charm: cs:~gnuoy/vault-28
charm: cs:~gnuoy/vault-29
mysql:
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1