Node storage configuration support

- Refactor YAML schema for storage specification
- Add Drydock models for HostVolume/List, HostVolumeGroup/List,
  HostStorageDevice/List, HostPartition/List
- Add MAAS API models for block device, partition, volume group
- Add implementation of ApplyNodeStorage driver task
- Add documentation for authoring storage configuration
- Add unit tests for YAML parsing
- Add unit tests for size calculation

Change-Id: I94fa00b2f2bcaff1607b645a421f7e54e6d1f11e
This commit is contained in:
Scott Hussey 2017-09-08 16:26:25 -05:00
parent 907d08699d
commit 689445280e
25 changed files with 2270 additions and 579 deletions

docs/topology.rst Normal file

@ -0,0 +1,132 @@
=======================
Authoring Site Topology
=======================
Drydock uses a YAML-formatted site topology definition to configure
downstream drivers to provision baremetal nodes. This topology describes
the networking configuration of a site as well as the set of node configurations
that will be deployed. A node configuration consists of network attachment,
network addressing, local storage, kernel selection and configuration, and
metadata.
The best source for a sample of the YAML schema for a topology is the unit
test input in `tests/yaml_samples/fullsite.yaml`.

Defining Node Storage
=====================
Storage can be defined in the `storage` stanza of either a HostProfile or BaremetalNode
document. The storage configuration can describe creation of partitions on physical disks,
the assignment of physical disks and/or partitions to volume groups, and the creation of
logical volumes. Drydock will make a best effort to parse out system-level storage such
as the root filesystem or boot filesystem and take appropriate steps to configure them in
the active node provisioning driver.
Example YAML schema of the `storage` stanza::

    storage:
      physical_devices:
        sda:
          labels:
            bootdrive: true
          partitions:
            - name: 'root'
              size: '10g'
              bootable: true
              filesystem:
                mountpoint: '/'
                fstype: 'ext4'
                mount_options: 'defaults'
            - name: 'boot'
              size: '1g'
              filesystem:
                mountpoint: '/boot'
                fstype: 'ext4'
                mount_options: 'defaults'
        sdb:
          volume_group: 'log_vg'
      volume_groups:
        log_vg:
          logical_volumes:
            - name: 'log_lv'
              size: '500m'
              filesystem:
                mountpoint: '/var/log'
                fstype: 'xfs'
                mount_options: 'defaults'

Schema
------
The `storage` stanza can contain two top-level keys: `physical_devices` and
`volume_groups`. The latter is optional.
Physical Devices and Partitions
-------------------------------
A physical device can either be carved up in partitions (including a single partition
consuming the entire device) or added to a volume group as a physical volume. Each
key in the `physical_devices` mapping represents a device on a node. The key should either
be a device alias defined in the HardwareProfile or the name of the device published
by the OS. The value of each key must be a mapping with the following keys:

* `labels`: A mapping of key/value strings providing generic labels for the device
* `partitions`: A sequence of mappings listing the partitions to be created on the device.
  The mapping is described below. Incompatible with the `volume_group` specification.
* `volume_group`: The name of a volume group the device is added to as a physical volume.
  Incompatible with the `partitions` specification.
Partition
~~~~~~~~~
A partition mapping describes a GPT partition on a physical disk. It can be left as a raw
block device or formatted and mounted as a filesystem.

* `name`: Metadata describing the partition in the topology
* `size`: The size of the partition. See the *Size Format* section below
* `bootable`: Boolean indicating whether this partition should be the bootable device
* `part_uuid`: A UUID4-formatted UUID to assign to the partition. If not specified, one will be generated
* `filesystem`: An optional mapping describing how the partition should be formatted and mounted

  * `mountpoint`: Where the filesystem should be mounted. If not specified, the partition will be left as a raw block device
  * `fstype`: The format of the filesystem. Defaults to ext4
  * `mount_options`: fstab-style mount options. Default is 'defaults'
  * `fs_uuid`: A UUID4-formatted UUID to assign to the filesystem. If not specified, one will be generated
  * `fs_label`: An optional filesystem label to assign to the filesystem
Size Format
~~~~~~~~~~~
The size specification for a partition or logical volume is formed from three parts:

* The first character can optionally be `>`, indicating that the specified size is a
  minimum; the calculated size will be at least the minimum and will take the rest of
  the available space on the physical device or volume group.
* The second part is the numeric portion and must be an integer.
* The third part is a label:

  * `m`\|`M`\|`mb`\|`MB`: Megabytes, or 10^6 * the numeric
  * `g`\|`G`\|`gb`\|`GB`: Gigabytes, or 10^9 * the numeric
  * `t`\|`T`\|`tb`\|`TB`: Terabytes, or 10^12 * the numeric
  * `%`: The percentage of total device or volume group space
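Under these rules, a size string can be resolved to a byte count roughly as follows. This is a minimal illustrative sketch, not Drydock's actual implementation; the function and parameter names are hypothetical, with `total_bytes`/`available_bytes` standing in for the size and free space of the containing device or volume group::

```python
import math
import re


def parse_size(size_str, total_bytes=None, available_bytes=None):
    """Resolve a Drydock-style size string to bytes (illustrative only)."""
    match = re.match(r'^(>?)(\d+)(mb?|gb?|tb?|%)$', size_str, re.IGNORECASE)
    if not match:
        raise ValueError("Invalid size string: %s" % size_str)
    minimum = match.group(1) == '>'
    number = int(match.group(2))
    unit = match.group(3).lower()

    if unit == '%':
        if total_bytes is None:
            raise ValueError("'%' sizes require a device or VG context")
        size = math.floor((number / 100) * total_bytes)
    else:
        # Decimal multipliers, matching the 10^6 / 10^9 / 10^12 labels above
        decimal = {'m': 10**6, 'g': 10**9, 't': 10**12}
        size = number * decimal[unit[0]]

    if minimum:
        if available_bytes is None:
            raise ValueError("'>' sizes require a device or VG context")
        if size > available_bytes:
            raise ValueError("Not enough storage for minimum size")
        size = available_bytes  # '>' claims all remaining space
    return size
```

For example, `parse_size('10g')` yields 10^10 bytes, while `parse_size('>10m', available_bytes=2 * 10**8)` expands past the 10 MB minimum to the full 200 MB still available.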
Volume Groups and Logical Volumes
---------------------------------
Logical volumes can be used to create RAID-0 volumes spanning multiple physical disks or partitions.
Each key in the `volume_groups` mapping is a name assigned to a volume group. This name must be specified
as the `volume_group` attribute on one or more physical devices or partitions, or the configuration is invalid.
Each mapping value is another mapping describing the volume group:

* `vg_uuid`: A UUID4-formatted UUID applied to the volume group. If not specified, one is generated
* `logical_volumes`: A sequence of mappings listing the logical volumes to be created in the volume group
Logical Volume
~~~~~~~~~~~~~~
A logical volume is a RAID-0 volume. Using logical volumes for `/` and `/boot` is supported.

* `name`: Required field. Used as the logical volume name.
* `size`: The logical volume size. See *Size Format* above for details.
* `lv_uuid`: A UUID4-formatted UUID applied to the logical volume. If not specified, one is generated.
* `filesystem`: A mapping specifying how the logical volume should be formatted and mounted. See the
  *Partition* section above for filesystem details.


@ -128,6 +128,10 @@ class DrydockConfig(object):
'apply_node_networking',
default=5,
help='Timeout in minutes for configuring node networking'),
cfg.IntOpt(
'apply_node_storage',
default=5,
help='Timeout in minutes for configuring node storage'),
cfg.IntOpt(
'apply_node_platform',
default=5,


@ -90,7 +90,7 @@ class BootdataResource(StatefulResource):
r"""[Unit]
Description=Promenade Initialization Service
Documentation=http://github.com/att-comdev/drydock
After=network.target local-fs.target
After=network-online.target local-fs.target
ConditionPathExists=!/var/lib/prom.done
[Service]


@ -52,5 +52,5 @@ class NodeDriver(ProviderDriver):
if task_action in self.supported_actions:
return
else:
raise DriverError("Unsupported action %s for driver %s" %
(task_action, self.driver_desc))
raise errors.DriverError("Unsupported action %s for driver %s" %
(task_action, self.driver_desc))


@ -18,6 +18,8 @@ import requests
import requests.auth as req_auth
import base64
import drydock_provisioner.error as errors
class MaasOauth(req_auth.AuthBase):
def __init__(self, apikey):
@ -74,7 +76,7 @@ class MaasRequestFactory(object):
def test_connectivity(self):
try:
resp = self.get('version/')
except requests.Timeout(ex):
except requests.Timeout as ex:
raise errors.TransientDriverError("Timeout connection to MaaS")
if resp.status_code in [500, 503]:
@ -89,10 +91,11 @@ class MaasRequestFactory(object):
def test_authentication(self):
try:
resp = self.get('account/', op='list_authorisation_tokens')
except requests.Timeout(ex):
except requests.Timeout as ex:
raise errors.TransientDriverError("Timeout connection to MaaS")
except:
raise errors.PersistentDriverError("Error accessing MaaS")
except Exception as ex:
raise errors.PersistentDriverError(
"Error accessing MaaS: %s" % str(ex))
if resp.status_code in [401, 403]:
raise errors.PersistentDriverError(
@ -172,4 +175,6 @@ class MaasRequestFactory(object):
% (prepared_req.method, prepared_req.url,
str(prepared_req.body).replace('\\r\\n', '\n'),
resp.status_code, resp.text))
raise errors.DriverError("MAAS Error: %s - %s" % (resp.status_code,
resp.text))
return resp


@ -18,6 +18,8 @@ import logging
import traceback
import sys
import uuid
import re
import math
from oslo_config import cfg
@ -25,6 +27,7 @@ import drydock_provisioner.error as errors
import drydock_provisioner.drivers as drivers
import drydock_provisioner.objects.fields as hd_fields
import drydock_provisioner.objects.task as task_model
import drydock_provisioner.objects.hostprofile as hostprofile
from drydock_provisioner.drivers.node import NodeDriver
from drydock_provisioner.drivers.node.maasdriver.api_client import MaasRequestFactory
@ -37,6 +40,8 @@ import drydock_provisioner.drivers.node.maasdriver.models.tag as maas_tag
import drydock_provisioner.drivers.node.maasdriver.models.sshkey as maas_keys
import drydock_provisioner.drivers.node.maasdriver.models.boot_resource as maas_boot_res
import drydock_provisioner.drivers.node.maasdriver.models.rack_controller as maas_rack
import drydock_provisioner.drivers.node.maasdriver.models.partition as maas_partition
import drydock_provisioner.drivers.node.maasdriver.models.volumegroup as maas_vg
class MaasNodeDriver(NodeDriver):
@ -168,8 +173,6 @@ class MaasNodeDriver(NodeDriver):
self.orchestrator.task_field_update(
task.get_id(), status=hd_fields.TaskStatus.Running)
site_design = self.orchestrator.get_effective_site(design_id)
if task.action == hd_fields.OrchestratorAction.CreateNetworkTemplate:
self.orchestrator.task_field_update(
@ -529,6 +532,99 @@ class MaasNodeDriver(NodeDriver):
else:
result = hd_fields.ActionResult.Failure
self.orchestrator.task_field_update(
task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=result,
result_detail=result_detail)
elif task.action == hd_fields.OrchestratorAction.ApplyNodeStorage:
self.orchestrator.task_field_update(
task.get_id(), status=hd_fields.TaskStatus.Running)
self.logger.debug(
"Starting subtask to configure the storage on %s nodes." %
(len(task.node_list)))
subtasks = []
result_detail = {
'detail': [],
'failed_nodes': [],
'successful_nodes': [],
}
for n in task.node_list:
subtask = self.orchestrator.create_task(
task_model.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodeStorage,
task_scope={'node_names': [n]})
runner = MaasTaskRunner(
state_manager=self.state_manager,
orchestrator=self.orchestrator,
task_id=subtask.get_id())
self.logger.info(
"Starting thread for task %s to config node %s storage" %
(subtask.get_id(), n))
runner.start()
subtasks.append(subtask.get_id())
cleaned_subtasks = []
attempts = 0
max_attempts = cfg.CONF.timeouts.apply_node_storage * (
60 // cfg.CONF.poll_interval)
worked = failed = False
self.logger.debug(
"Polling for subtask completion every %d seconds, a max of %d polls."
% (cfg.CONF.poll_interval, max_attempts))
while len(cleaned_subtasks) < len(
subtasks) and attempts < max_attempts:
for t in subtasks:
if t in cleaned_subtasks:
continue
subtask = self.state_manager.get_task(t)
if subtask.status == hd_fields.TaskStatus.Complete:
self.logger.info(
"Task %s to configure node storage complete - status %s"
% (subtask.get_id(), subtask.get_result()))
cleaned_subtasks.append(t)
if subtask.result == hd_fields.ActionResult.Success:
result_detail['successful_nodes'].extend(
subtask.node_list)
worked = True
elif subtask.result == hd_fields.ActionResult.Failure:
result_detail['failed_nodes'].extend(
subtask.node_list)
failed = True
elif subtask.result == hd_fields.ActionResult.PartialSuccess:
worked = failed = True
time.sleep(cfg.CONF.poll_interval)
attempts = attempts + 1
if len(cleaned_subtasks) < len(subtasks):
self.logger.warning(
"Time out for task %s before all subtask threads complete"
% (task.get_id()))
result = hd_fields.ActionResult.DependentFailure
result_detail['detail'].append(
'Some subtasks did not complete before the timeout threshold'
)
elif worked and failed:
result = hd_fields.ActionResult.PartialSuccess
elif worked:
result = hd_fields.ActionResult.Success
else:
result = hd_fields.ActionResult.Failure
self.orchestrator.task_field_update(
task.get_id(),
status=hd_fields.TaskStatus.Complete,
@ -719,260 +815,6 @@ class MaasNodeDriver(NodeDriver):
status=hd_fields.TaskStatus.Complete,
result=result,
result_detail=result_detail)
elif task.action == hd_fields.OrchestratorAction.ApplyNodeNetworking:
self.orchestrator.task_field_update(
task.get_id(), status=hd_fields.TaskStatus.Running)
self.logger.debug(
"Starting subtask to configure networking on %s nodes." %
(len(task.node_list)))
subtasks = []
result_detail = {
'detail': [],
'failed_nodes': [],
'successful_nodes': [],
}
for n in task.node_list:
subtask = self.orchestrator.create_task(
task_model.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodeNetworking,
site_name=task.site_name,
task_scope={'site': task.site_name,
'node_names': [n]})
runner = MaasTaskRunner(
state_manager=self.state_manager,
orchestrator=self.orchestrator,
task_id=subtask.get_id())
self.logger.info(
"Starting thread for task %s to configure networking on node %s"
% (subtask.get_id(), n))
runner.start()
subtasks.append(subtask.get_id())
running_subtasks = len(subtasks)
attempts = 0
worked = failed = False
while running_subtasks > 0 and attempts < cfg.CONF.timeouts.apply_node_networking:
for t in subtasks:
subtask = self.state_manager.get_task(t)
if subtask.status == hd_fields.TaskStatus.Complete:
self.logger.info(
"Task %s to apply networking on node %s complete - status %s"
% (subtask.get_id(), n, subtask.get_result()))
running_subtasks = running_subtasks - 1
if subtask.result == hd_fields.ActionResult.Success:
result_detail['successful_nodes'].extend(
subtask.node_list)
worked = True
elif subtask.result == hd_fields.ActionResult.Failure:
result_detail['failed_nodes'].extend(
subtask.node_list)
failed = True
elif subtask.result == hd_fields.ActionResult.PartialSuccess:
worked = failed = True
time.sleep(1 * 60)
attempts = attempts + 1
if running_subtasks > 0:
self.logger.warning(
"Time out for task %s before all subtask threads complete"
% (task.get_id()))
result = hd_fields.ActionResult.DependentFailure
result_detail['detail'].append(
'Some subtasks did not complete before the timeout threshold'
)
elif worked and failed:
result = hd_fields.ActionResult.PartialSuccess
elif worked:
result = hd_fields.ActionResult.Success
else:
result = hd_fields.ActionResult.Failure
self.orchestrator.task_field_update(
task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=result,
result_detail=result_detail)
elif task.action == hd_fields.OrchestratorAction.ApplyNodePlatform:
self.orchestrator.task_field_update(
task.get_id(), status=hd_fields.TaskStatus.Running)
self.logger.debug(
"Starting subtask to configure the platform on %s nodes." %
(len(task.node_list)))
subtasks = []
result_detail = {
'detail': [],
'failed_nodes': [],
'successful_nodes': [],
}
for n in task.node_list:
subtask = self.orchestrator.create_task(
task_model.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodePlatform,
site_name=task.site_name,
task_scope={'site': task.site_name,
'node_names': [n]})
runner = MaasTaskRunner(
state_manager=self.state_manager,
orchestrator=self.orchestrator,
task_id=subtask.get_id())
self.logger.info(
"Starting thread for task %s to config node %s platform" %
(subtask.get_id(), n))
runner.start()
subtasks.append(subtask.get_id())
running_subtasks = len(subtasks)
attempts = 0
worked = failed = False
while running_subtasks > 0 and attempts < cfg.CONF.timeouts.apply_node_platform:
for t in subtasks:
subtask = self.state_manager.get_task(t)
if subtask.status == hd_fields.TaskStatus.Complete:
self.logger.info(
"Task %s to configure node %s platform complete - status %s"
% (subtask.get_id(), n, subtask.get_result()))
running_subtasks = running_subtasks - 1
if subtask.result == hd_fields.ActionResult.Success:
result_detail['successful_nodes'].extend(
subtask.node_list)
worked = True
elif subtask.result == hd_fields.ActionResult.Failure:
result_detail['failed_nodes'].extend(
subtask.node_list)
failed = True
elif subtask.result == hd_fields.ActionResult.PartialSuccess:
worked = failed = True
time.sleep(1 * 60)
attempts = attempts + 1
if running_subtasks > 0:
self.logger.warning(
"Time out for task %s before all subtask threads complete"
% (task.get_id()))
result = hd_fields.ActionResult.DependentFailure
result_detail['detail'].append(
'Some subtasks did not complete before the timeout threshold'
)
elif worked and failed:
result = hd_fields.ActionResult.PartialSuccess
elif worked:
result = hd_fields.ActionResult.Success
else:
result = hd_fields.ActionResult.Failure
self.orchestrator.task_field_update(
task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=result,
result_detail=result_detail)
elif task.action == hd_fields.OrchestratorAction.DeployNode:
self.orchestrator.task_field_update(
task.get_id(), status=hd_fields.TaskStatus.Running)
self.logger.debug("Starting subtask to deploy %s nodes." %
(len(task.node_list)))
subtasks = []
result_detail = {
'detail': [],
'failed_nodes': [],
'successful_nodes': [],
}
for n in task.node_list:
subtask = self.orchestrator.create_task(
task_model.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.DeployNode,
site_name=task.site_name,
task_scope={'site': task.site_name,
'node_names': [n]})
runner = MaasTaskRunner(
state_manager=self.state_manager,
orchestrator=self.orchestrator,
task_id=subtask.get_id())
self.logger.info(
"Starting thread for task %s to deploy node %s" %
(subtask.get_id(), n))
runner.start()
subtasks.append(subtask.get_id())
running_subtasks = len(subtasks)
attempts = 0
worked = failed = False
while running_subtasks > 0 and attempts < cfg.CONF.timeouts.deploy_node:
for t in subtasks:
subtask = self.state_manager.get_task(t)
if subtask.status == hd_fields.TaskStatus.Complete:
self.logger.info(
"Task %s to deploy node %s complete - status %s" %
(subtask.get_id(), n, subtask.get_result()))
running_subtasks = running_subtasks - 1
if subtask.result == hd_fields.ActionResult.Success:
result_detail['successful_nodes'].extend(
subtask.node_list)
worked = True
elif subtask.result == hd_fields.ActionResult.Failure:
result_detail['failed_nodes'].extend(
subtask.node_list)
failed = True
elif subtask.result == hd_fields.ActionResult.PartialSuccess:
worked = failed = True
time.sleep(1 * 60)
attempts = attempts + 1
if running_subtasks > 0:
self.logger.warning(
"Time out for task %s before all subtask threads complete"
% (task.get_id()))
result = hd_fields.ActionResult.DependentFailure
result_detail['detail'].append(
'Some subtasks did not complete before the timeout threshold'
)
elif worked and failed:
result = hd_fields.ActionResult.PartialSuccess
elif worked:
result = hd_fields.ActionResult.Success
else:
result = hd_fields.ActionResult.Failure
self.orchestrator.task_field_update(
task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=result,
result_detail=result_detail)
class MaasTaskRunner(drivers.DriverTaskRunner):
@ -1060,7 +902,8 @@ class MaasTaskRunner(drivers.DriverTaskRunner):
# Ensure that the MTU of the untagged VLAN on the fabric
# matches that on the NetworkLink config
vlan_list = maas_vlan.Vlans(self.maas_client, fabric_id=link_fabric.resource_id)
vlan_list = maas_vlan.Vlans(
self.maas_client, fabric_id=link_fabric.resource_id)
vlan_list.refresh()
vlan = vlan_list.singleton({'vid': 0})
vlan.mtu = l.mtu
@ -1126,7 +969,7 @@ class MaasTaskRunner(drivers.DriverTaskRunner):
self.maas_client,
name=n.name,
cidr=n.cidr,
dns_servers = n.dns_servers,
dns_servers=n.dns_servers,
fabric=fabric.resource_id,
vlan=vlan.resource_id,
gateway_ip=n.get_default_gateway())
@ -1202,53 +1045,62 @@ class MaasTaskRunner(drivers.DriverTaskRunner):
"DHCP enabled for subnet %s, activating in MaaS"
% (subnet.name))
rack_ctlrs = maas_rack.RackControllers(self.maas_client)
rack_ctlrs = maas_rack.RackControllers(
self.maas_client)
rack_ctlrs.refresh()
dhcp_config_set=False
dhcp_config_set = False
for r in rack_ctlrs:
if n.dhcp_relay_upstream_target is not None:
if r.interface_for_ip(n.dhcp_relay_upstream_target):
iface = r.interface_for_ip(n.dhcp_relay_upstream_target)
if r.interface_for_ip(
n.dhcp_relay_upstream_target):
iface = r.interface_for_ip(
n.dhcp_relay_upstream_target)
vlan.relay_vlan = iface.vlan
self.logger.debug(
"Relaying DHCP on vlan %s to vlan %s" % (vlan.resource_id, vlan.relay_vlan)
)
"Relaying DHCP on vlan %s to vlan %s"
% (vlan.resource_id,
vlan.relay_vlan))
result_detail['detail'].append(
"Relaying DHCP on vlan %s to vlan %s" % (vlan.resource_id, vlan.relay_vlan))
"Relaying DHCP on vlan %s to vlan %s"
% (vlan.resource_id,
vlan.relay_vlan))
vlan.update()
dhcp_config_set=True
dhcp_config_set = True
break
else:
for i in r.interfaces:
if i.vlan == vlan.resource_id:
self.logger.debug(
"Rack controller %s has interface on vlan %s" %
(r.resource_id, vlan.resource_id))
"Rack controller %s has interface on vlan %s"
% (r.resource_id,
vlan.resource_id))
rackctl_id = r.resource_id
vlan.dhcp_on = True
vlan.primary_rack = rackctl_id
self.logger.debug(
"Enabling DHCP on VLAN %s managed by rack ctlr %s"
% (vlan.resource_id, rackctl_id))
% (vlan.resource_id,
rackctl_id))
result_detail['detail'].append(
"Enabling DHCP on VLAN %s managed by rack ctlr %s"
% (vlan.resource_id, rackctl_id))
% (vlan.resource_id,
rackctl_id))
vlan.update()
dhcp_config_set=True
dhcp_config_set = True
break
if dhcp_config_set:
break
if not dhcp_config_set:
self.logger.error(
"Network %s requires DHCP, but could not locate a rack controller to serve it." %
(n.name))
"Network %s requires DHCP, but could not locate a rack controller to serve it."
% (n.name))
result_detail['detail'].append(
"Network %s requires DHCP, but could not locate a rack controller to serve it." %
(n.name))
"Network %s requires DHCP, but could not locate a rack controller to serve it."
% (n.name))
elif dhcp_on and vlan.dhcp_on:
self.logger.info(
@ -1465,7 +1317,8 @@ class MaasTaskRunner(drivers.DriverTaskRunner):
except:
self.logger.warning(
"Error updating node %s status during commissioning, will re-attempt."
% (n))
% (n),
exc_info=True)
if machine.status_name == 'Ready':
self.logger.info("Node %s commissioned." % (n))
result_detail['detail'].append(
@ -1611,8 +1464,8 @@ class MaasTaskRunner(drivers.DriverTaskRunner):
if iface.effective_mtu != nl.mtu:
self.logger.debug(
"Updating interface %s MTU to %s"
% (i.device_name, nl.mtu))
"Updating interface %s MTU to %s" %
(i.device_name, nl.mtu))
iface.set_mtu(nl.mtu)
for iface_net in getattr(i, 'networks', []):
@ -1886,6 +1739,247 @@ class MaasTaskRunner(drivers.DriverTaskRunner):
else:
final_result = hd_fields.ActionResult.Success
self.orchestrator.task_field_update(
self.task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=final_result,
result_detail=result_detail)
elif task_action == hd_fields.OrchestratorAction.ApplyNodeStorage:
try:
machine_list = maas_machine.Machines(self.maas_client)
machine_list.refresh()
except Exception as ex:
self.logger.error(
"Error configuring node storage, cannot access MaaS: %s" %
str(ex))
traceback.print_tb(ex.__traceback__)
self.orchestrator.task_field_update(
self.task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=hd_fields.ActionResult.Failure,
result_detail={
'detail': 'Error accessing MaaS API',
'retry': True
})
return
nodes = self.task.node_list
result_detail = {'detail': []}
worked = failed = False
for n in nodes:
try:
self.logger.debug(
"Locating node %s for storage configuration" % (n))
node = site_design.get_baremetal_node(n)
machine = machine_list.identify_baremetal_node(
node, update_name=False)
if machine is None:
self.logger.warning(
"Could not locate machine for node %s" % n)
result_detail['detail'].append(
"Could not locate machine for node %s" % n)
failed = True
continue
except Exception as ex1:
failed = True
self.logger.error(
"Error locating machine for node %s: %s" % (n,
str(ex1)))
result_detail['detail'].append(
"Error locating machine for node %s" % (n))
continue
try:
"""
1. Clear VGs
2. Clear partitions
3. Apply partitioning
4. Create VGs
5. Create logical volumes
"""
self.logger.debug(
"Clearing current storage layout on node %s." %
node.name)
machine.reset_storage_config()
(root_dev, root_block) = node.find_fs_block_device('/')
(boot_dev, boot_block) = node.find_fs_block_device('/boot')
storage_layout = dict()
if isinstance(root_block, hostprofile.HostPartition):
storage_layout['layout_type'] = 'flat'
storage_layout['root_device'] = root_dev.name
storage_layout['root_size'] = root_block.size
elif isinstance(root_block, hostprofile.HostVolume):
storage_layout['layout_type'] = 'lvm'
if len(root_dev.physical_devices) != 1:
msg = "Root LV in VG with multiple physical devices on node %s" % (
node.name)
self.logger.error(msg)
result_detail['detail'].append(msg)
failed = True
continue
storage_layout[
'root_device'] = root_dev.physical_devices[0]
storage_layout['root_lv_size'] = root_block.size
storage_layout['root_lv_name'] = root_block.name
storage_layout['root_vg_name'] = root_dev.name
if boot_block is not None:
storage_layout['boot_size'] = boot_block.size
self.logger.debug(
"Setting node %s root storage layout: %s" %
(node.name, str(storage_layout)))
machine.set_storage_layout(**storage_layout)
vg_devs = {}
for d in node.storage_devices:
maas_dev = machine.block_devices.singleton({
'name':
d.name
})
if maas_dev is None:
self.logger.warning("Dev %s not found on node %s" %
(d.name, node.name))
continue
if d.volume_group is not None:
self.logger.debug(
"Adding dev %s to volume group %s" %
(d.name, d.volume_group))
if d.volume_group not in vg_devs:
vg_devs[d.volume_group] = {'b': [], 'p': []}
vg_devs[d.volume_group]['b'].append(
maas_dev.resource_id)
continue
self.logger.debug("Partitioning dev %s on node %s" %
(d.name, node.name))
for p in d.partitions:
if p.is_sys():
self.logger.debug(
"Skipping manually configuring a system partition."
)
continue
maas_dev.refresh()
size = MaasTaskRunner.calculate_bytes(
size_str=p.size, context=maas_dev)
part = maas_partition.Partition(
self.maas_client,
size=size,
bootable=p.bootable)
if p.part_uuid is not None:
part.uuid = p.part_uuid
self.logger.debug(
"Creating partition %s on dev %s" % (p.name,
d.name))
part = maas_dev.create_partition(part)
if p.volume_group is not None:
self.logger.debug(
"Adding partition %s to volume group %s" %
(p.name, p.volume_group))
if p.volume_group not in vg_devs:
vg_devs[p.volume_group] = {
'b': [],
'p': []
}
vg_devs[p.volume_group]['p'].append(
part.resource_id)
if p.mountpoint is not None:
format_opts = {'fstype': p.fstype}
if p.fs_uuid is not None:
format_opts['uuid'] = str(p.fs_uuid)
if p.fs_label is not None:
format_opts['label'] = p.fs_label
self.logger.debug(
"Formatting partition %s as %s" %
(p.name, p.fstype))
part.format(**format_opts)
mount_opts = {
'mount_point': p.mountpoint,
'mount_options': p.mount_options,
}
self.logger.debug(
"Mounting partition %s on %s" % (p.name,
p.mountpoint))
part.mount(**mount_opts)
self.logger.debug(
"Finished configuring node %s partitions" % node.name)
for v in node.volume_groups:
if v.is_sys():
self.logger.debug(
"Skipping manually configuring system VG.")
continue
if v.name not in vg_devs:
self.logger.warning(
"No physical volumes defined for VG %s, skipping."
% (v.name))
continue
maas_volgroup = maas_vg.VolumeGroup(
self.maas_client, name=v.name)
if v.vg_uuid is not None:
maas_volgroup.uuid = v.vg_uuid
if len(vg_devs[v.name]['b']) > 0:
maas_volgroup.block_devices = ','.join(
[str(x) for x in vg_devs[v.name]['b']])
if len(vg_devs[v.name]['p']) > 0:
maas_volgroup.partitions = ','.join(
[str(x) for x in vg_devs[v.name]['p']])
self.logger.debug(
"Creating volume group %s on node %s" %
(v.name, node.name))
maas_volgroup = machine.volume_groups.add(
maas_volgroup)
maas_volgroup.refresh()
for lv in v.logical_volumes:
calc_size = MaasTaskRunner.calculate_bytes(size_str=lv.size, context=maas_volgroup)
bd_id = maas_volgroup.create_lv(
name=lv.name,
uuid_str=lv.lv_uuid,
size=calc_size)
if lv.mountpoint is not None:
machine.refresh()
maas_lv = machine.block_devices.select(bd_id)
self.logger.debug(
"Formatting LV %s as filesystem on node %s."
% (lv.name, node.name))
maas_lv.format(
fstype=lv.fstype, uuid_str=lv.fs_uuid)
self.logger.debug(
"Mounting LV %s at %s on node %s." %
(lv.name, lv.mountpoint, node.name))
maas_lv.mount(
mount_point=lv.mountpoint,
mount_options=lv.mount_options)
except Exception as ex:
raise errors.DriverError(str(ex))
if worked and failed:
final_result = hd_fields.ActionResult.PartialSuccess
elif failed:
final_result = hd_fields.ActionResult.Failure
else:
final_result = hd_fields.ActionResult.Success
self.orchestrator.task_field_update(
self.task.get_id(),
status=hd_fields.TaskStatus.Complete,
@ -2018,6 +2112,62 @@ class MaasTaskRunner(drivers.DriverTaskRunner):
result=final_result,
result_detail=result_detail)
@classmethod
def calculate_bytes(cls, size_str=None, context=None):
"""Calculate the size on bytes of a size_str.
Calculate the size as specified in size_str in the context of the provided
blockdev or vg. Valid size_str format below.
#m or #M or #mb or #MB = # * 1000 * 1000
#g or #G or #gb or #GB = # * 1000 * 1000 * 1000
#t or #T or #tb or #TB = # * 1000 * 1000 * 1000 * 1000
#% = Percentage of the total storage in the context
Prepend '>' to the above to note the size as a minimum and the calculated size being the
remaining storage available above the minimum
If the calculated size is not available in the context, a NotEnoughStorage exception is
raised.
:param size_str: A string representing the desired size
:param context: An instance of maasdriver.models.blockdev.BlockDevice or
instance of maasdriver.models.volumegroup.VolumeGroup. The
size_str is interpreted in the context of this device
:return size: The calculated size in bytes
"""
pattern = r'(>?)(\d+)([mMbBgGtT%]{1,2})'
regex = re.compile(pattern)
match = regex.match(size_str)
if not match:
raise errors.InvalidSizeFormat(
"Invalid size string format: %s" % size_str)
if ((match.group(1) == '>' or match.group(3) == '%') and not context):
raise errors.InvalidSizeFormat(
'Sizes using the ">" or "%" format must specify a '
'block device or volume group context')
base_size = int(match.group(2))
if match.group(3) in ['m', 'M', 'mb', 'MB']:
computed_size = base_size * (1000 * 1000)
elif match.group(3) in ['g', 'G', 'gb', 'GB']:
computed_size = base_size * (1000 * 1000 * 1000)
elif match.group(3) in ['t', 'T', 'tb', 'TB']:
computed_size = base_size * (1000 * 1000 * 1000 * 1000)
elif match.group(3) == '%':
computed_size = math.floor((base_size / 100) * int(context.size))
if computed_size > int(context.available_size):
raise errors.NotEnoughStorage()
if match.group(1) == '>':
computed_size = int(context.available_size)
return computed_size
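Assuming the behavior above, the context-dependent size forms work out as plain arithmetic; the values below are chosen purely for illustration, with `total_size` and `available_size` standing in for `context.size` and `context.available_size`:

```python
import math

# A hypothetical volume group: 500 GB total, 200 GB still unallocated.
total_size = 500 * 10**9       # context.size
available_size = 200 * 10**9   # context.available_size

# '30%' -> 30 percent of the context's *total* size.
pct_size = math.floor((30 / 100) * total_size)
assert pct_size == 150 * 10**9

# '>100g' -> at least 100 GB; since that fits, the size expands to all
# remaining space in the volume group rather than just the minimum.
min_size = 100 * 10**9
assert min_size <= available_size   # would raise NotEnoughStorage otherwise
computed = available_size
assert computed == 200 * 10**9

# '>300g' would raise NotEnoughStorage: the 300 GB minimum exceeds the
# 200 GB still available in the volume group.
```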
def list_opts():
return {MaasNodeDriver.driver_key: MaasNodeDriver.maasdriver_options}


@ -11,16 +11,17 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A representation of a MaaS REST resource.
Should be subclassed for different resources and
augmented with operations specific to those resources
"""
import json
import re
import logging
import drydock_provisioner.error as errors
"""
A representation of a MaaS REST resource. Should be subclassed
for different resources and augmented with operations specific
to those resources
"""
class ResourceBase(object):
@ -46,10 +47,16 @@ class ResourceBase(object):
resp = self.api_client.get(url)
updated_fields = resp.json()
updated_model = self.from_dict(self.api_client, updated_fields)
for f in self.fields:
if f in updated_fields.keys():
setattr(self, f, updated_fields.get(f))
if hasattr(updated_model, f):
setattr(self, f, getattr(updated_model, f))
def delete(self):
"""Delete this resource in MaaS."""
url = self.interpolate_url()
resp = self.api_client.delete(url)
"""
Parse URL for placeholders and replace them with current
@ -157,8 +164,7 @@ class ResourceBase(object):
class ResourceCollectionBase(object):
"""
A collection of MaaS resources.
"""A collection of MaaS resources.
Rather than a simple list, we will key the collection on resource
ID for more efficient access.
@ -175,10 +181,7 @@ class ResourceCollectionBase(object):
self.logger = logging.getLogger('drydock.nodedriver.maasdriver')
def interpolate_url(self):
"""
Parse URL for placeholders and replace them with current
instance values
"""
"""Parse URL for placeholders and replace them with current instance values."""
pattern = r'\{([a-z_]+)\}'
regex = re.compile(pattern)
start = 0
@ -273,8 +276,7 @@ class ResourceCollectionBase(object):
return result
def singleton(self, query):
"""
A query that requires a single item response
"""A query that requires a single item response.
:param query: A dict of k:v pairs defining the query parameters
"""
@ -298,11 +300,8 @@ class ResourceCollectionBase(object):
else:
return None
"""
Iterate over the resources in the collection
"""
def __iter__(self):
"""Iterate over the resources in the collection."""
return iter(self.resources.values())
"""


@ -0,0 +1,270 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""API model for MaaS node block device resource."""
import uuid
from . import base as model_base
from . import partition as maas_partition
import drydock_provisioner.error as errors
class BlockDevice(model_base.ResourceBase):
resource_url = 'nodes/{system_id}/blockdevices/{resource_id}/'
fields = [
'resource_id',
'system_id',
'name',
'path',
'size',
'type',
'path',
'partitions',
'uuid',
'filesystem',
'tags',
'serial',
'model',
'id_path',
'bootable',
'available_size',
]
json_fields = [
'name',
]
"""Filesystem dictionary fields:
mount_point: the mount point on the system directory hierarchy
fstype: The filesystem format, defaults to ext4
mount_options: The mount options specified in /etc/fstab, defaults to 'defaults'
label: The filesystem label
uuid: The filesystem uuid
"""
def __init__(self, api_client, **kwargs):
super().__init__(api_client, **kwargs)
if getattr(self, 'resource_id', None) is not None:
try:
self.partitions = maas_partition.Partitions(
api_client,
system_id=self.system_id,
device_id=self.resource_id)
self.partitions.refresh()
except Exception as ex:
self.logger.warning(
"Could not load partitions on node %s block device %s" %
(self.system_id, self.resource_id))
else:
self.partitions = None
def format(self, fstype='ext4', uuid_str=None, label=None):
"""Format this block device with a filesystem.
:param fstype: String of the filesystem format to use, defaults to ext4
:param uuid_str: String of the UUID to assign to the filesystem. One will be
generated if this is left as None
"""
try:
data = {'fstype': fstype}
if uuid_str:
data['uuid'] = str(uuid_str)
else:
data['uuid'] = str(uuid.uuid4())
if label is not None:
data['label'] = label
url = self.interpolate_url()
self.logger.debug(
"Formatting device %s on node %s as filesystem: fstype=%s, uuid=%s"
% (self.name, self.system_id, fstype, data['uuid']))
resp = self.api_client.post(url, op='format', files=data)
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: format of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def unformat(self):
"""Unformat this block device.
Will attempt to unmount the device first.
"""
try:
self.refresh()
if self.filesystem is None:
self.logger.debug(
"Device %s not currently formatted, skipping unformat." %
(self.name))
return
if self.filesystem.get('mount_point', None) is not None:
self.unmount()
url = self.interpolate_url()
self.logger.debug("Unformatting device %s on node %s" %
(self.name, self.system_id))
resp = self.api_client.post(url, op='unformat')
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: unformat of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def mount(self, mount_point=None, mount_options='defaults'):
"""Mount this block device with a filesystem.
:param mount_point: The mountpoint on the system
:param mount_options: fstab style mount options, defaults to 'defaults'
"""
try:
if mount_point is None:
raise errors.DriverError(
"Cannot mount a block device on an empty mount point.")
data = {'mount_point': mount_point, 'mount_options': mount_options}
url = self.interpolate_url()
self.logger.debug(
"Mounting device %s on node %s at mount point %s" %
(self.resource_id, self.system_id, mount_point))
resp = self.api_client.post(url, op='mount', files=data)
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: mount of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def unmount(self):
"""Unmount this block device."""
try:
self.refresh()
if self.filesystem is None or self.filesystem.get(
'mount_point', None) is None:
self.logger.debug(
"Device %s not currently mounted, skipping unmount." %
(self.name))
return
url = self.interpolate_url()
self.logger.debug("Unmounting device %s on node %s" %
(self.name, self.system_id))
resp = self.api_client.post(url, op='unmount')
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: unmount of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def set_bootable(self):
"""Set this disk as the system bootdisk."""
try:
url = self.interpolate_url()
self.logger.debug("Setting device %s on node %s as bootable." %
(self.resource_id, self.system_id))
resp = self.api_client.post(url, op='set_boot_disk')
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: setting device %s on node %s to boot failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def create_partition(self, partition):
"""Create a partition on this block device.
:param partition: Instance of models.partition.Partition to be carved out of this block device
"""
if self.type == 'physical':
if self.partitions is not None:
partition = self.partitions.add(partition)
self.partitions.refresh()
return self.partitions.select(partition.resource_id)
else:
msg = "Error: could not access device %s partition list" % self.name
self.logger.error(msg)
raise errors.DriverError(msg)
else:
msg = "Error: cannot partition non-physical device %s." % (
self.name)
self.logger.error(msg)
raise errors.DriverError(msg)
def delete_partition(self, partition_id):
if self.partitions is not None:
part = self.partitions.select(partition_id)
if part is not None:
part.delete()
self.refresh()
def clear_partitions(self):
for p in getattr(self, 'partitions', []):
p.delete()
self.refresh()
@classmethod
def from_dict(cls, api_client, obj_dict):
"""Instantiate this model from a dictionary.
Because MaaS decides to replace the resource ids with the
representation of the resource, we must reverse it for a true
representation of the block device
"""
refined_dict = {k: obj_dict.get(k, None) for k in cls.fields}
if 'id' in obj_dict.keys():
refined_dict['resource_id'] = obj_dict.get('id')
i = cls(api_client, **refined_dict)
return i
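The `from_dict` remapping above can be shown standalone; this hypothetical MAAS response fragment uses invented values:

```python
# MAAS returns the primary key as 'id'; the model stores it as
# 'resource_id', so from_dict() copies it across before instantiation.
fields = ['resource_id', 'name', 'size']
maas_response = {'id': 17, 'name': 'sda', 'size': 512110190592}

refined = {k: maas_response.get(k, None) for k in fields}
if 'id' in maas_response:
    refined['resource_id'] = maas_response['id']

print(refined)  # {'resource_id': 17, 'name': 'sda', 'size': 512110190592}
```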
class BlockDevices(model_base.ResourceCollectionBase):
collection_url = 'nodes/{system_id}/blockdevices/'
collection_resource = BlockDevice
def __init__(self, api_client, **kwargs):
super().__init__(api_client)
self.system_id = kwargs.get('system_id', None)


@ -27,11 +27,24 @@ class Interface(model_base.ResourceBase):
resource_url = 'nodes/{system_id}/interfaces/{resource_id}/'
fields = [
'resource_id', 'system_id', 'name', 'type', 'mac_address', 'vlan',
'links', 'effective_mtu', 'fabric_id', 'mtu',
'resource_id',
'system_id',
'name',
'type',
'mac_address',
'vlan',
'links',
'effective_mtu',
'fabric_id',
'mtu',
]
json_fields = [
'name', 'type', 'mac_address', 'vlan', 'links', 'mtu',
'name',
'type',
'mac_address',
'vlan',
'links',
'mtu',
]
def __init__(self, api_client, **kwargs):


@ -11,22 +11,36 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Model representing MAAS node/machine resource."""
import drydock_provisioner.error as errors
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
import drydock_provisioner.drivers.node.maasdriver.models.interface as maas_interface
import drydock_provisioner.drivers.node.maasdriver.models.blockdev as maas_blockdev
import drydock_provisioner.drivers.node.maasdriver.models.volumegroup as maas_vg
import bson
import yaml
class Machine(model_base.ResourceBase):
resource_url = 'machines/{resource_id}/'
fields = [
'resource_id', 'hostname', 'power_type', 'power_state',
'power_parameters', 'interfaces', 'boot_interface', 'memory',
'cpu_count', 'tag_names', 'status_name', 'boot_mac', 'owner_data'
'resource_id',
'hostname',
'power_type',
'power_state',
'power_parameters',
'interfaces',
'boot_interface',
'memory',
'cpu_count',
'tag_names',
'status_name',
'boot_mac',
'owner_data',
'block_devices',
'volume_groups',
]
json_fields = ['hostname', 'power_type']
@ -38,8 +52,24 @@ class Machine(model_base.ResourceBase):
self.interfaces = maas_interface.Interfaces(
api_client, system_id=self.resource_id)
self.interfaces.refresh()
try:
self.block_devices = maas_blockdev.BlockDevices(
api_client, system_id=self.resource_id)
self.block_devices.refresh()
except Exception as ex:
self.logger.warning("Failed loading node %s block devices." %
(self.resource_id))
try:
self.volume_groups = maas_vg.VolumeGroups(
api_client, system_id=self.resource_id)
self.volume_groups.refresh()
except Exception as ex:
self.logger.warning("Failed load node %s volume groups." %
(self.resource_id))
else:
self.interfaces = None
self.block_devices = None
self.volume_groups = None
def interface_for_ip(self, ip_address):
"""Find the machine interface that will respond to ip_address.
@ -61,6 +91,100 @@ class Machine(model_base.ResourceBase):
if resp.status_code == 200:
self.power_parameters = resp.json()
def reset_storage_config(self):
"""Reset storage config on this machine.
Removes all the volume groups/logical volumes and all the physical
device partitions on this machine.
"""
self.logger.info("Resetting storage configuration on node %s" %
(self.resource_id))
if self.volume_groups is not None and self.volume_groups.len() > 0:
for vg in self.volume_groups:
self.logger.debug("Removing VG %s" % vg.name)
vg.delete()
else:
self.logger.debug("No VGs configured on node %s" %
(self.resource_id))
if self.block_devices is not None:
for d in self.block_devices:
if d.partitions is not None and d.partitions.len() > 0:
self.logger.debug(
"Clearing partitions on device %s" % d.name)
d.clear_partitions()
else:
self.logger.debug(
"No partitions found on device %s" % d.name)
else:
self.logger.debug("No block devices found on node %s" %
(self.resource_id))
def set_storage_layout(self,
layout_type='flat',
root_device=None,
root_size=None,
boot_size=None,
root_lv_size=None,
root_vg_name=None,
root_lv_name=None):
"""Set machine storage layout for the root disk.
:param layout_type: Whether to use 'flat' (partitions) or 'lvm' for the root filesystem
:param root_device: Name of the block device to place the root partition on
:param root_size: Size of the root partition in bytes
:param boot_size: Size of the boot partition in bytes
:param root_lv_size: Size of the root logical volume in bytes for LVM layout
:param root_vg_name: Name of the volume group with root LV
:param root_lv_name: Name of the root LV
"""
try:
url = self.interpolate_url()
self.block_devices.refresh()
root_dev = self.block_devices.singleton({'name': root_device})
if root_dev is None:
msg = "Error: cannot find storage device %s to set as root device" % root_device
self.logger.error(msg)
raise errors.DriverError(msg)
root_dev.set_bootable()
data = {
'storage_layout': layout_type,
'root_device': root_dev.resource_id,
}
self.logger.debug("Setting node %s storage layout to %s" %
(self.hostname, layout_type))
if root_size:
data['root_size'] = root_size
if boot_size:
data['boot_size'] = boot_size
if layout_type == 'lvm':
if root_lv_size:
data['lv_size'] = root_lv_size
if root_vg_name:
data['vg_name'] = root_vg_name
if root_lv_name:
data['lv_name'] = root_lv_name
resp = self.api_client.post(
url, op='set_storage_layout', files=data)
if not resp.ok:
raise Exception("MAAS Error: %s - %s" % (resp.status_code,
resp.text))
except Exception as ex:
msg = "Error: failed configuring node %s storage layout: %s" % (
self.resource_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
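The payload assembled for the `set_storage_layout` operation above can be sketched in isolation; `build_layout_payload` is an invented helper, and the field names mirror the MAAS API parameters used in the method.

```python
# Build the 'set_storage_layout' request body: flat layouts carry only
# the layout type and root device; 'lvm' layouts may add LV/VG names
# and an LV size.
def build_layout_payload(layout_type, root_device_id, root_lv_size=None,
                         root_vg_name=None, root_lv_name=None):
    data = {'storage_layout': layout_type, 'root_device': root_device_id}
    if layout_type == 'lvm':
        if root_lv_size:
            data['lv_size'] = root_lv_size
        if root_vg_name:
            data['vg_name'] = root_vg_name
        if root_lv_name:
            data['lv_name'] = root_lv_name
    return data

payload = build_layout_payload('lvm', 42, root_lv_size=20 * 1000 ** 3,
                               root_vg_name='vg00', root_lv_name='root')
print(payload['vg_name'])  # vg00
```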
def commission(self, debug=False):
url = self.interpolate_url()


@ -0,0 +1,216 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""API model for MaaS node storage partition resource."""
import uuid
from . import base as model_base
import drydock_provisioner.error as errors
class Partition(model_base.ResourceBase):
resource_url = 'nodes/{system_id}/blockdevices/{device_id}/partition/{resource_id}'
fields = [
'resource_id',
'system_id',
'device_id',
'name',
'path',
'size',
'type',
'uuid',
'filesystem',
'bootable',
]
json_fields = [
'size',
'uuid',
'bootable',
]
"""Filesystem dictionary fields:
mount_point: the mount point on the system directory hierarchy
fstype: The filesystem format, defaults to ext4
mount_options: The mount options specified in /etc/fstab, defaults to 'defaults'
label: The filesystem label
uuid: The filesystem uuid
"""
def __init__(self, api_client, **kwargs):
super().__init__(api_client, **kwargs)
def format(self, fstype='ext4', uuid_str=None, fs_label=None):
"""Format this partition with a filesystem.
:param fstype: String of the filesystem format to use, defaults to ext4
:param uuid_str: String of the UUID to assign to the filesystem. One will be
generated if this is left as None
:param fs_label: Optional label to assign to the filesystem
"""
try:
data = {'fstype': fstype}
if uuid_str:
data['uuid'] = str(uuid_str)
else:
data['uuid'] = str(uuid.uuid4())
if fs_label is not None:
data['label'] = fs_label
url = self.interpolate_url()
self.logger.debug(
"Formatting device %s on node %s as filesystem: %s" %
(self.name, self.system_id, data))
resp = self.api_client.post(url, op='format', files=data)
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: format of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def unformat(self):
"""Unformat this block device.
Will attempt to unmount the device first.
"""
try:
self.refresh()
if self.filesystem is None:
self.logger.debug(
"Device %s not currently formatted, skipping unformat." %
(self.name))
return
if self.filesystem.get('mount_point', None) is not None:
self.unmount()
url = self.interpolate_url()
self.logger.debug("Unformatting device %s on node %s" %
(self.name, self.system_id))
resp = self.api_client.post(url, op='unformat')
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: unformat of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def mount(self, mount_point=None, mount_options='defaults'):
"""Mount this block device with a filesystem.
:param mount_point: The mountpoint on the system
:param mount_options: fstab style mount options, defaults to 'defaults'
"""
try:
if mount_point is None:
raise errors.DriverError(
"Cannot mount a block device on an empty mount point.")
data = {'mount_point': mount_point, 'mount_options': mount_options}
url = self.interpolate_url()
self.logger.debug(
"Mounting device %s on node %s at mount point %s" %
(self.resource_id, self.system_id, mount_point))
resp = self.api_client.post(url, op='mount', files=data)
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: mount of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def unmount(self):
"""Unmount this block device."""
try:
self.refresh()
if self.filesystem is None or self.filesystem.get(
'mount_point', None) is None:
self.logger.debug(
"Device %s not currently mounted, skipping unmount." %
(self.name))
return
url = self.interpolate_url()
self.logger.debug("Unmounting device %s on node %s" %
(self.name, self.system_id))
resp = self.api_client.post(url, op='unmount')
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: unmount of device %s on node %s failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
def set_bootable(self):
"""Set this disk as the system bootdisk."""
try:
url = self.interpolate_url()
self.logger.debug("Setting device %s on node %s as bootable." %
(self.resource_id, self.system_id))
resp = self.api_client.post(url, op='set_boot_disk')
if not resp.ok:
raise Exception("MAAS error: %s - %s" % (resp.status_code,
resp.text))
self.refresh()
except Exception as ex:
msg = "Error: setting device %s on node %s to boot failed: %s" \
% (self.name, self.system_id, str(ex))
self.logger.error(msg)
raise errors.DriverError(msg)
@classmethod
def from_dict(cls, api_client, obj_dict):
"""Instantiate this model from a dictionary.
Because MaaS decides to replace the resource ids with the
representation of the resource, we must reverse it for a true
representation of the block device
"""
refined_dict = {k: obj_dict.get(k, None) for k in cls.fields}
if 'id' in obj_dict.keys():
refined_dict['resource_id'] = obj_dict.get('id')
i = cls(api_client, **refined_dict)
return i
class Partitions(model_base.ResourceCollectionBase):
collection_url = 'nodes/{system_id}/blockdevices/{device_id}/partitions/'
collection_resource = Partition
def __init__(self, api_client, **kwargs):
super().__init__(api_client)
self.system_id = kwargs.get('system_id', None)
self.device_id = kwargs.get('device_id', None)


@ -21,12 +21,26 @@ class Vlan(model_base.ResourceBase):
resource_url = 'fabrics/{fabric_id}/vlans/{api_id}/'
fields = [
'resource_id', 'name', 'description', 'vid', 'fabric_id', 'dhcp_on',
'mtu', 'primary_rack', 'secondary_rack', 'relay_vlan',
'resource_id',
'name',
'description',
'vid',
'fabric_id',
'dhcp_on',
'mtu',
'primary_rack',
'secondary_rack',
'relay_vlan',
]
json_fields = [
'name', 'description', 'vid', 'dhcp_on', 'mtu', 'primary_rack',
'secondary_rack', 'relay_vlan',
'name',
'description',
'vid',
'dhcp_on',
'mtu',
'primary_rack',
'secondary_rack',
'relay_vlan',
]
def __init__(self, api_client, **kwargs):


@ -0,0 +1,153 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""API model for MaaS node volume group resource."""
import uuid
from . import base as model_base
import drydock_provisioner.error as errors
class VolumeGroup(model_base.ResourceBase):
resource_url = 'nodes/{system_id}/volume-group/{resource_id}/'
fields = [
'resource_id',
'system_id',
'name',
'size',
'available_size',
'uuid',
'logical_volumes',
'block_devices',
'partitions',
]
json_fields = [
'name',
'size',
'uuid',
'block_devices',
'partitions',
]
def create_lv(self, name=None, uuid_str=None, size=None):
"""Create a logical volume in this volume group.
:param name: Name of the logical volume
:param uuid_str: A UUID4-format string specifying the LV uuid. Will be generated if left as None
:param size: The size of the logical volume
"""
try:
if name is None or size is None:
raise Exception(
"Cannot create logical volume without specified name and size"
)
if uuid_str is None:
uuid_str = str(uuid.uuid4())
data = {'name': name, 'uuid': uuid_str, 'size': size}
self.logger.debug(
"Creating logical volume %s in VG %s on node %s" %
(name, self.name, self.system_id))
url = self.interpolate_url()
resp = self.api_client.post(
url, op='create_logical_volume', files=data)
if not resp.ok:
raise Exception("MAAS error - %s - %s" % (resp.status_code,
resp.text))
res = resp.json()
if 'id' in res:
return res['id']
except Exception as ex:
msg = "Error: Could not create logical volume: %s" % str(ex)
self.logger.error(msg)
raise errors.DriverError(msg)
def delete_lv(self, lv_id=None, lv_name=None):
"""Delete a logical volume from this volume group.
:param lv_id: Resource ID of the logical volume
:param lv_name: Name of the logical volume, only referenced if no lv_id is specified
"""
try:
self.refresh()
if self.logical_volumes is not None:
if lv_id and lv_id in self.logical_volumes.values():
target_lv = lv_id
elif lv_name and lv_name in self.logical_volumes:
target_lv = self.logical_volumes[lv_name]
else:
raise Exception(
"lv_id %s and lv_name %s not found in VG %s" %
(lv_id, lv_name, self.name))
url = self.interpolate_url()
resp = self.api_client.post(
url, op='delete_logical_volume', files={'id': target_lv})
if not resp.ok:
raise Exception("MAAS error - %s - %s" % (resp.status_code,
resp.text))
else:
raise Exception("VG %s has no logical volumes" % self.name)
except Exception as ex:
msg = "Error: Could not delete logical volume: %s" % str(ex)
self.logger.error(msg)
raise errors.DriverError(msg)
@classmethod
def from_dict(cls, api_client, obj_dict):
"""Instantiate this model from a dictionary.
Because MaaS decides to replace the resource ids with the
representation of the resource, we must reverse it for a true
representation of the block device
"""
refined_dict = {k: obj_dict.get(k, None) for k in cls.fields}
if 'id' in obj_dict:
refined_dict['resource_id'] = obj_dict.get('id')
if 'logical_volumes' in refined_dict and isinstance(
refined_dict.get('logical_volumes'), list):
lvs = {}
for v in refined_dict.get('logical_volumes'):
lvs[v.get('name')] = v.get('id')
refined_dict['logical_volumes'] = lvs
i = cls(api_client, **refined_dict)
return i
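The list-to-dict flattening of `logical_volumes` performed above can be shown as a standalone sketch with invented values:

```python
# MAAS returns logical volumes as a list of LV objects; from_dict()
# reduces them to a name -> resource id map so delete_lv() can
# resolve an LV by name.
lv_list = [{'name': 'root', 'id': 3}, {'name': 'logs', 'id': 7}]
lvs = {v.get('name'): v.get('id') for v in lv_list}
print(lvs)  # {'root': 3, 'logs': 7}
```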
class VolumeGroups(model_base.ResourceCollectionBase):
collection_url = 'nodes/{system_id}/volume-groups/'
collection_resource = VolumeGroup
def __init__(self, api_client, **kwargs):
super().__init__(api_client)
self.system_id = kwargs.get('system_id', None)
def add(self, res):
res = super().add(res)
res.system_id = self.system_id
return res


@ -46,6 +46,14 @@ class PersistentDriverError(DriverError):
pass
class NotEnoughStorage(DriverError):
pass
class InvalidSizeFormat(DriverError):
pass
class ApiError(Exception):
def __init__(self, msg, code=500):
super().__init__(msg)
@ -53,7 +61,7 @@ class ApiError(Exception):
self.status_code = code
def to_json(self):
err_dict = {'error': msg, 'type': self.__class__.__name__}
err_dict = {'error': self.message, 'type': self.__class__.__name__}
return json.dumps(err_dict)


@ -11,8 +11,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""YAML Ingester.
This data ingester will consume YAML site topology documents."""
"""This data ingester will consume YAML site topology documents."""
import yaml
import logging
import base64
@ -336,36 +336,83 @@ class YamlIngester(IngesterPlugin):
model.oob_parameters[k] = v
storage = spec.get('storage', {})
model.storage_layout = storage.get('layout', 'lvm')
bootdisk = storage.get('bootdisk', {})
model.bootdisk_device = bootdisk.get(
'device', None)
model.bootdisk_root_size = bootdisk.get(
'root_size', None)
model.bootdisk_boot_size = bootdisk.get(
'boot_size', None)
phys_devs = storage.get('physical_devices', {})
partitions = storage.get('partitions', [])
model.partitions = objects.HostPartitionList()
model.storage_devices = objects.HostStorageDeviceList(
)
for p in partitions:
part_model = objects.HostPartition()
for k, v in phys_devs.items():
sd = objects.HostStorageDevice(name=k)
sd.source = hd_fields.ModelSource.Designed
part_model.name = p.get('name', None)
part_model.source = hd_fields.ModelSource.Designed
part_model.device = p.get('device', None)
part_model.part_uuid = p.get('part_uuid', None)
part_model.size = p.get('size', None)
part_model.mountpoint = p.get(
'mountpoint', None)
part_model.fstype = p.get('fstype', 'ext4')
part_model.mount_options = p.get(
'mount_options', 'defaults')
part_model.fs_uuid = p.get('fs_uuid', None)
part_model.fs_label = p.get('fs_label', None)
if 'labels' in v:
sd.labels = v.get('labels').copy()
model.partitions.append(part_model)
if 'volume_group' in v:
vg = v.get('volume_group')
sd.volume_group = vg
elif 'partitions' in v:
sd.partitions = objects.HostPartitionList()
for vv in v.get('partitions', []):
part_model = objects.HostPartition()
part_model.name = vv.get('name')
part_model.source = hd_fields.ModelSource.Designed
part_model.part_uuid = vv.get(
'part_uuid', None)
part_model.size = vv.get('size', None)
if 'labels' in vv:
part_model.labels = vv.get(
'labels').copy()
if 'volume_group' in vv:
part_model.volume_group = vv.get(
'volume_group')
elif 'filesystem' in vv:
fs_info = vv.get('filesystem', {})
part_model.mountpoint = fs_info.get(
'mountpoint', None)
part_model.fstype = fs_info.get(
'fstype', 'ext4')
part_model.mount_options = fs_info.get(
'mount_options', 'defaults')
part_model.fs_uuid = fs_info.get(
'fs_uuid', None)
part_model.fs_label = fs_info.get(
'fs_label', None)
sd.partitions.append(part_model)
model.storage_devices.append(sd)
model.volume_groups = objects.HostVolumeGroupList()
vol_groups = storage.get('volume_groups', {})
for k, v in vol_groups.items():
vg = objects.HostVolumeGroup(name=k)
vg.vg_uuid = v.get('vg_uuid', None)
vg.logical_volumes = objects.HostVolumeList()
model.volume_groups.append(vg)
for vv in v.get('logical_volumes', []):
lv = objects.HostVolume(
name=vv.get('name'))
lv.size = vv.get('size', None)
lv.lv_uuid = vv.get('lv_uuid', None)
if 'filesystem' in vv:
fs_info = vv.get('filesystem', {})
lv.mountpoint = fs_info.get(
'mountpoint', None)
lv.fstype = fs_info.get(
'fstype', 'ext4')
lv.mount_options = fs_info.get(
'mount_options', 'defaults')
lv.fs_uuid = fs_info.get(
'fs_uuid', None)
lv.fs_label = fs_info.get(
'fs_label', None)
vg.logical_volumes.append(lv)
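The parsing branches above consume a `storage` stanza shaped like the following hypothetical example, assembled from the keys referenced in the code (`volume_group` on a device or partition assigns it as a physical volume of that VG; a `filesystem` block and `volume_group` are mutually exclusive on a partition):

```yaml
storage:
  physical_devices:
    sda:
      labels:
        bootdrive: true
      partitions:
        - name: root
          size: 20g
          filesystem:
            mountpoint: /
            fstype: ext4
            mount_options: defaults
    sdb:
      volume_group: vg00
  volume_groups:
    vg00:
      logical_volumes:
        - name: logs
          size: '>10g'
          filesystem:
            mountpoint: /var/log
            fstype: ext4
```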
interfaces = spec.get('interfaces', [])
model.interfaces = objects.HostInterfaceList()


@ -89,6 +89,11 @@ class Utils(object):
@staticmethod
def merge_lists(child_list, parent_list):
if child_list is None:
return parent_list
if parent_list is None:
return child_list
effective_list = []
@ -123,6 +128,11 @@ class Utils(object):
@staticmethod
def merge_dicts(child_dict, parent_dict):
if child_dict is None:
return parent_dict
if parent_dict is None:
return child_dict
effective_dict = {}


@ -104,6 +104,9 @@ class DrydockObjectListBase(base.ObjectListBase):
@classmethod
def from_basic_list(cls, obj_list):
if obj_list is None:
return None
model_list = cls()
for o in obj_list:


@ -11,7 +11,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Models representing host profiles and constituent parts."""
from copy import deepcopy
import oslo_versionedobjects.fields as obj_fields
@ -27,30 +28,42 @@ class HostProfile(base.DrydockPersistentObject, base.DrydockObject):
VERSION = '1.0'
fields = {
'name': obj_fields.StringField(nullable=False),
'site': obj_fields.StringField(nullable=False),
'source': hd_fields.ModelSourceField(nullable=False),
'parent_profile': obj_fields.StringField(nullable=True),
'hardware_profile': obj_fields.StringField(nullable=True),
'oob_type': obj_fields.StringField(nullable=True),
'oob_parameters': obj_fields.DictOfStringsField(nullable=True),
'storage_layout': obj_fields.StringField(nullable=True),
'bootdisk_device': obj_fields.StringField(nullable=True),
# Consider a custom field for storage size
'bootdisk_root_size': obj_fields.StringField(nullable=True),
'bootdisk_boot_size': obj_fields.StringField(nullable=True),
'partitions': obj_fields.ObjectField(
'HostPartitionList', nullable=True),
'interfaces': obj_fields.ObjectField(
'HostInterfaceList', nullable=True),
'tags': obj_fields.ListOfStringsField(nullable=True),
'owner_data': obj_fields.DictOfStringsField(nullable=True),
'rack': obj_fields.StringField(nullable=True),
'base_os': obj_fields.StringField(nullable=True),
'image': obj_fields.StringField(nullable=True),
'kernel': obj_fields.StringField(nullable=True),
'kernel_params': obj_fields.DictOfStringsField(nullable=True),
'primary_network': obj_fields.StringField(nullable=True),
'name':
obj_fields.StringField(nullable=False),
'site':
obj_fields.StringField(nullable=False),
'source':
hd_fields.ModelSourceField(nullable=False),
'parent_profile':
obj_fields.StringField(nullable=True),
'hardware_profile':
obj_fields.StringField(nullable=True),
'oob_type':
obj_fields.StringField(nullable=True),
'oob_parameters':
obj_fields.DictOfStringsField(nullable=True),
'storage_devices':
obj_fields.ObjectField('HostStorageDeviceList', nullable=True),
'volume_groups':
obj_fields.ObjectField('HostVolumeGroupList', nullable=True),
'interfaces':
obj_fields.ObjectField('HostInterfaceList', nullable=True),
'tags':
obj_fields.ListOfStringsField(nullable=True),
'owner_data':
obj_fields.DictOfStringsField(nullable=True),
'rack':
obj_fields.StringField(nullable=True),
'base_os':
obj_fields.StringField(nullable=True),
'image':
obj_fields.StringField(nullable=True),
'kernel':
obj_fields.StringField(nullable=True),
'kernel_params':
obj_fields.DictOfStringsField(nullable=True),
'primary_network':
obj_fields.StringField(nullable=True),
}
def __init__(self, **kwargs):
@ -114,12 +127,17 @@ class HostProfile(base.DrydockPersistentObject, base.DrydockObject):
self.kernel_params = objects.Utils.merge_dicts(self.kernel_params,
parent.kernel_params)
self.storage_devices = HostStorageDeviceList.from_basic_list(
HostStorageDevice.merge_lists(self.storage_devices,
parent.storage_devices))
self.volume_groups = HostVolumeGroupList.from_basic_list(
HostVolumeGroup.merge_lists(self.volume_groups,
parent.volume_groups))
self.interfaces = HostInterfaceList.from_basic_list(
HostInterface.merge_lists(self.interfaces, parent.interfaces))
self.partitions = HostPartitionList.from_basic_list(
HostPartition.merge_lists(self.partitions, parent.partitions))
self.source = hd_fields.ModelSource.Compiled
return
@ -194,6 +212,12 @@ class HostInterface(base.DrydockObject):
@staticmethod
def merge_lists(child_list, parent_list):
if child_list is None:
return parent_list
if parent_list is None:
return child_list
effective_list = []
if len(child_list) == 0 and len(parent_list) > 0:
@ -281,8 +305,236 @@ class HostInterfaceList(base.DrydockObjectListBase, base.DrydockObject):
fields = {'objects': obj_fields.ListOfObjectsField('HostInterface')}
@base.DrydockObjectRegistry.register
class HostVolumeGroup(base.DrydockObject):
"""Model representing a host volume group."""
VERSION = '1.0'
fields = {
'name': obj_fields.StringField(),
'vg_uuid': obj_fields.StringField(nullable=True),
'logical_volumes': obj_fields.ObjectField(
'HostVolumeList', nullable=True),
}
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.physical_devices = []
def get_name(self):
return self.name
def get_id(self):
return self.name
def add_pv(self, pv):
self.physical_devices.append(pv)
def is_sys(self):
"""Is this the VG for root and/or boot?"""
        for lv in getattr(self, 'logical_volumes', None) or []:
if lv.is_sys():
return True
return False
@staticmethod
def merge_lists(child_list, parent_list):
if child_list is None:
return parent_list
if parent_list is None:
return child_list
effective_list = []
if len(child_list) == 0 and len(parent_list) > 0:
for p in parent_list:
pp = deepcopy(p)
pp.source = hd_fields.ModelSource.Compiled
effective_list.append(pp)
elif len(parent_list) == 0 and len(child_list) > 0:
for i in child_list:
if i.get_name().startswith('!'):
continue
else:
ii = deepcopy(i)
ii.source = hd_fields.ModelSource.Compiled
effective_list.append(ii)
elif len(parent_list) > 0 and len(child_list) > 0:
parent_devs = []
for i in parent_list:
parent_name = i.get_name()
parent_devs.append(parent_name)
add = True
for j in child_list:
if j.get_name() == ("!" + parent_name):
add = False
break
elif j.get_name() == parent_name:
p = objects.HostVolumeGroup()
p.name = j.get_name()
inheritable_field_list = ['vg_uuid']
for f in inheritable_field_list:
setattr(p, f,
objects.Utils.apply_field_inheritance(
getattr(j, f, None),
getattr(i, f, None)))
                        p.logical_volumes = HostVolumeList.from_basic_list(
                            HostVolume.merge_lists(
                                getattr(j, 'logical_volumes', None),
                                getattr(i, 'logical_volumes', None)))
add = False
p.source = hd_fields.ModelSource.Compiled
effective_list.append(p)
if add:
ii = deepcopy(i)
ii.source = hd_fields.ModelSource.Compiled
effective_list.append(ii)
for j in child_list:
if (j.get_name() not in parent_devs
and not j.get_name().startswith("!")):
jj = deepcopy(j)
jj.source = hd_fields.ModelSource.Compiled
effective_list.append(jj)
return effective_list
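The three-way branching in `merge_lists` repeats for each storage model type. A compact sketch of the core override semantics, using plain dicts as hypothetical stand-ins for the Drydock objects (the per-field `apply_field_inheritance` step is omitted here):

```python
from copy import deepcopy

def merge_named_lists(child_list, parent_list):
    """Child entries override parent entries by name; a child name
    prefixed with '!' drops the matching parent entry entirely."""
    if child_list is None:
        return parent_list
    if parent_list is None:
        return child_list
    merged = []
    child_by_name = {c['name']: c for c in child_list}
    parent_names = {p['name'] for p in parent_list}
    for p in parent_list:
        if '!' + p['name'] in child_by_name:
            continue  # explicitly removed by the child
        merged.append(deepcopy(child_by_name.get(p['name'], p)))
    for c in child_list:
        # Children not present in the parent are appended as-is
        if c['name'] not in parent_names and not c['name'].startswith('!'):
            merged.append(deepcopy(c))
    return merged
```

This mirrors why an empty child list inherits the parent wholesale, while a child-only list simply filters out any '!'-prefixed removal markers.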
@base.DrydockObjectRegistry.register
class HostVolumeGroupList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {'objects': obj_fields.ListOfObjectsField('HostVolumeGroup')}
def add_device_to_vg(self, vg_name, device_name):
for vg in self.objects:
if vg.name == vg_name:
vg.add_pv(device_name)
return
vg = objects.HostVolumeGroup(name=vg_name)
vg.add_pv(device_name)
self.objects.append(vg)
return
@base.DrydockObjectRegistry.register
class HostStorageDevice(base.DrydockObject):
"""Model representing a host physical storage device."""
VERSION = '1.0'
fields = {
'name': obj_fields.StringField(),
'volume_group': obj_fields.StringField(nullable=True),
'labels': obj_fields.DictOfStringsField(nullable=True),
'partitions': obj_fields.ObjectField(
'HostPartitionList', nullable=True),
}
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.physical_devices = []
def get_name(self):
return self.name
def get_id(self):
return self.name
def add_partition(self, partition):
self.partitions.append(partition)
@staticmethod
def merge_lists(child_list, parent_list):
if child_list is None:
return parent_list
if parent_list is None:
return child_list
effective_list = []
if len(child_list) == 0 and len(parent_list) > 0:
for p in parent_list:
pp = deepcopy(p)
pp.source = hd_fields.ModelSource.Compiled
effective_list.append(pp)
elif len(parent_list) == 0 and len(child_list) > 0:
for i in child_list:
if i.get_name().startswith('!'):
continue
else:
ii = deepcopy(i)
ii.source = hd_fields.ModelSource.Compiled
effective_list.append(ii)
elif len(parent_list) > 0 and len(child_list) > 0:
parent_devs = []
for i in parent_list:
parent_name = i.get_name()
parent_devs.append(parent_name)
add = True
for j in child_list:
if j.get_name() == ("!" + parent_name):
add = False
break
elif j.get_name() == parent_name:
p = objects.HostStorageDevice()
p.name = j.get_name()
inherit_field_list = ['volume_group']
for f in inherit_field_list:
setattr(p, f,
objects.Utils.apply_field_inheritance(
getattr(j, f, None),
getattr(i, f, None)))
p.labels = objects.Utils.merge_dicts(
getattr(j, 'labels', None),
getattr(i, 'labels', None))
p.partitions = HostPartitionList.from_basic_list(
HostPartition.merge_lists(
getattr(j, 'partitions', None),
getattr(i, 'partitions', None)))
add = False
p.source = hd_fields.ModelSource.Compiled
effective_list.append(p)
if add:
ii = deepcopy(i)
ii.source = hd_fields.ModelSource.Compiled
effective_list.append(ii)
for j in child_list:
if (j.get_name() not in parent_devs
and not j.get_name().startswith("!")):
jj = deepcopy(j)
jj.source = hd_fields.ModelSource.Compiled
effective_list.append(jj)
return effective_list
@base.DrydockObjectRegistry.register
class HostStorageDeviceList(base.DrydockObjectListBase, base.DrydockObject):
"""Model representing a list of host physical storage devices."""
VERSION = '1.0'
fields = {'objects': obj_fields.ListOfObjectsField('HostStorageDevice')}
@base.DrydockObjectRegistry.register
class HostPartition(base.DrydockObject):
"""Model representing a host GPT partition."""
VERSION = '1.0'
@ -291,7 +543,9 @@ class HostPartition(base.DrydockObject):
obj_fields.StringField(),
'source':
hd_fields.ModelSourceField(),
'device':
'bootable':
obj_fields.BooleanField(default=False),
'volume_group':
obj_fields.StringField(nullable=True),
'part_uuid':
obj_fields.UUIDField(nullable=True),
@ -307,12 +561,10 @@ class HostPartition(base.DrydockObject):
obj_fields.UUIDField(nullable=True),
'fs_label':
obj_fields.StringField(nullable=True),
'selector':
obj_fields.ObjectField('HardwareDeviceSelector', nullable=True),
}
def __init__(self, **kwargs):
super(HostPartition, self).__init__(**kwargs)
super().__init__(**kwargs)
def get_device(self):
return self.device
@ -324,17 +576,11 @@ class HostPartition(base.DrydockObject):
def get_name(self):
return self.name
# The device attribute may be hardware alias that translates to a
# physical device address. If the device attribute does not match an
    # alias, we assume it directly identifies an OS device name. When the
# apply_hardware_profile method is called on the parent Node of this
# device, the selector will be decided and applied
def set_selector(self, selector):
self.selector = selector
def get_selector(self):
return self.selector
def is_sys(self):
"""Is this partition for root and/or boot?"""
if self.mountpoint is not None and self.mountpoint in ['/', '/boot']:
return True
return False
"""
Merge two lists of HostPartition models with child_list taking
@ -345,6 +591,12 @@ class HostPartition(base.DrydockObject):
@staticmethod
def merge_lists(child_list, parent_list):
if child_list is None:
return parent_list
if parent_list is None:
return child_list
effective_list = []
if len(child_list) == 0 and len(parent_list) > 0:
@ -362,8 +614,16 @@ class HostPartition(base.DrydockObject):
effective_list.append(ii)
elif len(parent_list) > 0 and len(child_list) > 0:
inherit_field_list = [
"device", "part_uuid", "size", "mountpoint", "fstype",
"mount_options", "fs_uuid", "fs_label"
"device",
"part_uuid",
"size",
"mountpoint",
"fstype",
"mount_options",
"fs_uuid",
"fs_label",
"volume_group",
"bootable",
]
parent_partitions = []
for i in parent_list:
@ -392,7 +652,7 @@ class HostPartition(base.DrydockObject):
effective_list.append(ii)
for j in child_list:
if (j.get_name() not in parent_list
if (j.get_name() not in parent_partitions
and not j.get_name().startswith("!")):
jj = deepcopy(j)
jj.source = hd_fields.ModelSource.Compiled
@ -407,3 +667,130 @@ class HostPartitionList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {'objects': obj_fields.ListOfObjectsField('HostPartition')}
@base.DrydockObjectRegistry.register
class HostVolume(base.DrydockObject):
"""Model representing a host logical volume."""
VERSION = '1.0'
fields = {
'name':
obj_fields.StringField(),
'source':
hd_fields.ModelSourceField(),
'lv_uuid':
obj_fields.UUIDField(nullable=True),
'size':
obj_fields.StringField(nullable=True),
'mountpoint':
obj_fields.StringField(nullable=True),
'fstype':
obj_fields.StringField(nullable=True, default='ext4'),
'mount_options':
obj_fields.StringField(nullable=True, default='defaults'),
'fs_uuid':
obj_fields.UUIDField(nullable=True),
'fs_label':
obj_fields.StringField(nullable=True),
}
def __init__(self, **kwargs):
super().__init__(**kwargs)
# HostVolume keyed by name
def get_id(self):
return self.get_name()
def get_name(self):
return self.name
def is_sys(self):
"""Is this LV for root and/or boot?"""
if self.mountpoint is not None and self.mountpoint in ['/', '/boot']:
return True
return False
"""
    Merge two lists of HostVolume models with child_list taking
    priority when conflicts occur. If a member of child_list has a name
    beginning with '!', it indicates that HostVolume should be
    removed from the merged list.
"""
@staticmethod
def merge_lists(child_list, parent_list):
if child_list is None:
return parent_list
if parent_list is None:
return child_list
effective_list = []
if len(child_list) == 0 and len(parent_list) > 0:
for p in parent_list:
pp = deepcopy(p)
pp.source = hd_fields.ModelSource.Compiled
effective_list.append(pp)
elif len(parent_list) == 0 and len(child_list) > 0:
for i in child_list:
if i.get_name().startswith('!'):
continue
else:
ii = deepcopy(i)
ii.source = hd_fields.ModelSource.Compiled
effective_list.append(ii)
elif len(parent_list) > 0 and len(child_list) > 0:
inherit_field_list = [
"lv_uuid",
"size",
"mountpoint",
"fstype",
"mount_options",
"fs_uuid",
"fs_label",
]
parent_volumes = []
for i in parent_list:
parent_name = i.get_name()
parent_volumes.append(parent_name)
add = True
for j in child_list:
if j.get_name() == ("!" + parent_name):
add = False
break
elif j.get_name() == parent_name:
                        p = objects.HostVolume()
p.name = j.get_name()
for f in inherit_field_list:
setattr(p, f,
objects.Utils.apply_field_inheritance(
getattr(j, f, None),
getattr(i, f, None)))
add = False
p.source = hd_fields.ModelSource.Compiled
effective_list.append(p)
if add:
ii = deepcopy(i)
ii.source = hd_fields.ModelSource.Compiled
effective_list.append(ii)
for j in child_list:
if (j.get_name() not in parent_volumes
and not j.get_name().startswith("!")):
jj = deepcopy(j)
jj.source = hd_fields.ModelSource.Compiled
effective_list.append(jj)
return effective_list
@base.DrydockObjectRegistry.register
class HostVolumeList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {'objects': obj_fields.ListOfObjectsField('HostVolume')}

View File

@ -14,9 +14,7 @@
#
# Models for drydock_provisioner
#
import logging
from copy import deepcopy
"""Drydock model of a baremetal node."""
from oslo_versionedobjects import fields as ovo_fields
@ -96,6 +94,24 @@ class BaremetalNode(drydock_provisioner.objects.hostprofile.HostProfile):
return None
def find_fs_block_device(self, fs_mount=None):
if not fs_mount:
return (None, None)
if self.volume_groups is not None:
for vg in self.volume_groups:
if vg.logical_volumes is not None:
for lv in vg.logical_volumes:
if lv.mountpoint is not None and lv.mountpoint == fs_mount:
return (vg, lv)
if self.storage_devices is not None:
for sd in self.storage_devices:
if sd.partitions is not None:
for p in sd.partitions:
if p.mountpoint is not None and p.mountpoint == fs_mount:
return (sd, p)
return (None, None)
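`find_fs_block_device` resolves a mountpoint back to the block device backing it, searching LVM logical volumes before raw partitions and returning the containing VG or disk alongside the match. A standalone sketch of the same lookup order, with dicts standing in for the node's model objects:

```python
def find_fs_block_device(volume_groups, storage_devices, fs_mount=None):
    """Return (container, volume-or-partition) for a mountpoint,
    or (None, None) when nothing matches."""
    if not fs_mount:
        return (None, None)
    # LVM-backed filesystems are checked first
    for vg in volume_groups or []:
        for lv in vg.get('logical_volumes') or []:
            if lv.get('mountpoint') == fs_mount:
                return (vg, lv)
    # Fall back to plain partitions on physical devices
    for sd in storage_devices or []:
        for p in sd.get('partitions') or []:
            if p.get('mountpoint') == fs_mount:
                return (sd, p)
    return (None, None)
```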
@base.DrydockObjectRegistry.register
class BaremetalNodeList(base.DrydockObjectListBase, base.DrydockObject):

View File

@ -464,7 +464,6 @@ class Orchestrator(object):
hd_fields.ActionResult.PartialSuccess,
hd_fields.ActionResult.Failure
]:
# TODO(sh8121att) This threshold should be a configurable default and tunable by task API
if node_identify_attempts > max_attempts:
failed = True
break
@ -580,12 +579,55 @@ class Orchestrator(object):
]:
failed = True
node_storage_task = None
if len(node_networking_task.result_detail['successful_nodes']) > 0:
self.logger.info(
"Found %s successfully networked nodes, configuring platform."
"Found %s successfully networked nodes, configuring storage."
% (len(node_networking_task.result_detail[
'successful_nodes'])))
node_storage_task = self.create_task(
tasks.DriverTask,
                    parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodeStorage,
task_scope={
'node_names':
node_networking_task.result_detail['successful_nodes']
})
self.logger.info(
"Starting node driver task %s to configure node storage." %
(node_storage_task.get_id()))
node_driver.execute_task(node_storage_task.get_id())
node_storage_task = self.state_manager.get_task(
node_storage_task.get_id())
if node_storage_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
elif node_storage_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
else:
self.logger.warning(
"No nodes successfully networked, skipping storage configuration subtask."
)
node_platform_task = None
if (node_storage_task is not None and
len(node_storage_task.result_detail['successful_nodes']) >
0):
self.logger.info(
"Configured storage on %s nodes, configuring platform." %
(len(node_storage_task.result_detail['successful_nodes'])))
node_platform_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
@ -593,7 +635,7 @@ class Orchestrator(object):
action=hd_fields.OrchestratorAction.ApplyNodePlatform,
task_scope={
'node_names':
node_networking_task.result_detail['successful_nodes']
node_storage_task.result_detail['successful_nodes']
})
self.logger.info(
"Starting node driver task %s to configure node platform."
@ -614,49 +656,49 @@ class Orchestrator(object):
hd_fields.ActionResult.PartialSuccess
]:
failed = True
if len(node_platform_task.result_detail['successful_nodes']
) > 0:
self.logger.info(
"Configured platform on %s nodes, starting deployment."
% (len(node_platform_task.result_detail[
'successful_nodes'])))
node_deploy_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.DeployNode,
task_scope={
'node_names':
node_platform_task.result_detail[
'successful_nodes']
})
self.logger.info(
"Starting node driver task %s to deploy nodes." %
(node_deploy_task.get_id()))
node_driver.execute_task(node_deploy_task.get_id())
node_deploy_task = self.state_manager.get_task(
node_deploy_task.get_id())
if node_deploy_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
elif node_deploy_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
else:
self.logger.warning(
"Unable to configure platform on any nodes, skipping deploy subtask"
)
else:
self.logger.warning(
"No nodes successfully networked, skipping platform configuration subtask"
"No nodes with storage configuration, skipping platform configuration subtask."
)
node_deploy_task = None
if node_platform_task is not None and len(
node_platform_task.result_detail['successful_nodes']) > 0:
self.logger.info(
"Configured platform on %s nodes, starting deployment." %
(len(node_platform_task.result_detail['successful_nodes'])
))
node_deploy_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.DeployNode,
task_scope={
'node_names':
node_platform_task.result_detail['successful_nodes']
})
self.logger.info(
"Starting node driver task %s to deploy nodes." %
(node_deploy_task.get_id()))
node_driver.execute_task(node_deploy_task.get_id())
node_deploy_task = self.state_manager.get_task(
node_deploy_task.get_id())
if node_deploy_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
elif node_deploy_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
else:
self.logger.warning(
"Unable to configure platform on any nodes, skipping deploy subtask"
)
final_result = None

View File

@ -24,13 +24,19 @@ is compatible with the physical state of the site.
#### Validations ####
* All baremetal nodes have an address, either static or DHCP, for all networks they are attached to.
* No static IP assignments are duplicated
* No static IP assignments are outside of the network they are targeted for
* All IP assignments are within declared ranges on the network
* Networks assigned to each node's interface are within the set of the attached link's allowed_networks
* No network is allowed on multiple network links
* Boot drive is above minimum size
* Networking
** No static IP assignments are duplicated
** No static IP assignments are outside of the network they are targeted for
** All IP assignments are within declared ranges on the network
** Networks assigned to each node's interface are within the set of the attached link's allowed\_networks
** No network is allowed on multiple network links
** Network MTU is equal or less than NetworkLink MTU
** MTU values are sane
* Storage
** Boot drive is above minimum size
** Root drive is above minimum size
** No physical device specifies a target VG and a partition list
** No partition specifies a target VG and a filesystem
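The last two storage validations are structural checks over the `storage` stanza. A hedged sketch of such a check (a hypothetical helper, taking the dict form produced by YAML parsing; the real validation lives in the orchestrator):

```python
def validate_storage(storage):
    """Flag devices that are both an LVM physical volume and partitioned,
    and partitions that are both a PV and formatted with a filesystem."""
    errors = []
    for name, dev in (storage.get('physical_devices') or {}).items():
        if dev.get('volume_group') and dev.get('partitions'):
            errors.append(
                '%s: volume_group and partitions are mutually exclusive' % name)
        for part in dev.get('partitions') or []:
            if part.get('volume_group') and part.get('filesystem'):
                errors.append(
                    '%s/%s: volume_group and filesystem are mutually exclusive'
                    % (name, part.get('name')))
    return errors
```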
### VerifySite ###
@ -102,4 +108,4 @@ Destroy current node configuration and rebootstrap from scratch
Based on the requested task and the current known state of a node
the orchestrator will call the enabled downstream drivers with one
or more tasks. Each call will provide the driver with the desired
state (the applied model) and current known state (the build model).

View File

@ -0,0 +1,196 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''Tests for the maasdriver calculate_bytes routine.'''
import pytest
import math
from drydock_provisioner import error
from drydock_provisioner.drivers.node.maasdriver.driver import MaasTaskRunner
from drydock_provisioner.drivers.node.maasdriver.models.blockdev import BlockDevice
from drydock_provisioner.drivers.node.maasdriver.models.volumegroup import VolumeGroup
class TestCalculateBytes():
def test_calculate_m_label(self):
'''Convert megabyte labels to x * 10^6 bytes.'''
size_str = '15m'
drive_size = 20 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000
def test_calculate_mb_label(self):
'''Convert megabyte labels to x * 10^6 bytes.'''
size_str = '15mb'
drive_size = 20 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000
def test_calculate_M_label(self):
'''Convert megabyte labels to x * 10^6 bytes.'''
size_str = '15M'
drive_size = 20 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000
def test_calculate_MB_label(self):
'''Convert megabyte labels to x * 10^6 bytes.'''
size_str = '15MB'
drive_size = 20 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000
def test_calculate_g_label(self):
'''Convert gigabyte labels to x * 10^9 bytes.'''
size_str = '15g'
drive_size = 20 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000
def test_calculate_gb_label(self):
'''Convert gigabyte labels to x * 10^9 bytes.'''
size_str = '15gb'
drive_size = 20 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000
def test_calculate_G_label(self):
'''Convert gigabyte labels to x * 10^9 bytes.'''
size_str = '15G'
drive_size = 20 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000
def test_calculate_GB_label(self):
'''Convert gigabyte labels to x * 10^9 bytes.'''
size_str = '15GB'
drive_size = 20 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000
def test_calculate_t_label(self):
'''Convert terabyte labels to x * 10^12 bytes.'''
size_str = '15t'
drive_size = 20 * 1000 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000 * 1000
def test_calculate_tb_label(self):
'''Convert terabyte labels to x * 10^12 bytes.'''
size_str = '15tb'
drive_size = 20 * 1000 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000 * 1000
def test_calculate_T_label(self):
'''Convert terabyte labels to x * 10^12 bytes.'''
size_str = '15T'
drive_size = 20 * 1000 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000 * 1000
def test_calculate_TB_label(self):
'''Convert terabyte labels to x * 10^12 bytes.'''
size_str = '15TB'
drive_size = 20 * 1000 * 1000 * 1000 * 1000
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == 15 * 1000 * 1000 * 1000 * 1000
def test_calculate_percent_blockdev(self):
'''Convert a percent of total blockdev space to explicit byte count.'''
drive_size = 20 * 1000 * 1000 # 20 mb drive
part_size = math.floor(.2 * drive_size) # calculate 20% of drive size
size_str = '20%'
drive = BlockDevice(None, size=drive_size, available_size=drive_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=drive)
assert calc_size == part_size
def test_calculate_percent_vg(self):
        '''Convert a percent of total volume group space to explicit byte count.'''
vg_size = 20 * 1000 * 1000 # 20 mb drive
lv_size = math.floor(.2 * vg_size) # calculate 20% of drive size
size_str = '20%'
vg = VolumeGroup(None, size=vg_size, available_size=vg_size)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=vg)
assert calc_size == lv_size
def test_calculate_overprovision(self):
'''When calculated space is higher than available space, raise an exception.'''
vg_size = 20 * 1000 * 1000 # 20 mb drive
vg_available = 10 # 10 bytes available
lv_size = math.floor(.8 * vg_size) # calculate 80% of drive size
size_str = '80%'
vg = VolumeGroup(None, size=vg_size, available_size=vg_available)
with pytest.raises(error.NotEnoughStorage):
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=vg)
def test_calculate_min_label(self):
'''Adding the min marker '>' should provision all available space.'''
vg_size = 20 * 1000 * 1000 # 20 mb drive
vg_available = 15 * 1000 * 1000
        lv_size = math.floor(.1 * vg_size)  # calculate 10% of vg size
size_str = '>10%'
vg = VolumeGroup(None, size=vg_size, available_size=vg_available)
calc_size = MaasTaskRunner.calculate_bytes(size_str=size_str, context=vg)
assert calc_size == vg_available
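The behavior these tests pin down (decimal m/g/t suffixes, percentages of the context's capacity, a '>' minimum marker that claims all remaining space, and an overprovision error) can be sketched as a single parsing routine. This is a hypothetical stand-in for `MaasTaskRunner.calculate_bytes`; the real method takes a BlockDevice or VolumeGroup context model rather than raw byte counts:

```python
import math

class NotEnoughStorage(Exception):
    """Raised when a size spec exceeds the space left on the context."""

_MULTIPLIERS = {'m': 10**6, 'mb': 10**6, 'g': 10**9, 'gb': 10**9,
                't': 10**12, 'tb': 10**12}

def calculate_bytes(size_str, capacity, available):
    # '>' marks a minimum: satisfy it by taking all remaining space
    min_marker = size_str.startswith('>')
    if min_marker:
        size_str = size_str[1:]
    if size_str.endswith('%'):
        # Percentages are of total capacity, rounded down
        computed = math.floor(float(size_str[:-1]) / 100 * capacity)
    else:
        i = 0
        while i < len(size_str) and (size_str[i].isdigit() or size_str[i] == '.'):
            i += 1
        number, suffix = size_str[:i], size_str[i:].lower()
        computed = int(float(number) * _MULTIPLIERS.get(suffix, 1))
    if computed > available:
        raise NotEnoughStorage()
    return available if min_marker else computed
```

Suffix matching is case-insensitive, matching the m/M/mb/MB test variants above.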

View File

@ -1,107 +0,0 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Generic testing for the orchestrator
#
import pytest
#from pytest_mock import mocker
#import mock
import os
import shutil
import uuid
from drydock_provisioner.ingester import Ingester
import drydock_provisioner.orchestrator as orch
import drydock_provisioner.objects.fields as hd_fields
import drydock_provisioner.statemgmt as statemgmt
import drydock_provisioner.objects as objects
import drydock_provisioner.objects.task as task
import drydock_provisioner.drivers as drivers
import drydock_provisioner.ingester.plugins.yaml as yaml_ingester
class TestClass(object):
design_id = str(uuid.uuid4())
# sthussey None of these work right until I figure out correct
# mocking of pyghmi
def test_oob_verify_all_node(self, loaded_design):
#mocker.patch('pyghmi.ipmi.private.session.Session')
#mocker.patch.object('pyghmi.ipmi.command.Command','get_asset_tag')
orchestrator = orch.Orchestrator(state_manager=loaded_design,
enabled_drivers={'oob': 'drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver'})
orch_task = orchestrator.create_task(task.OrchestratorTask,
site='sitename',
design_id=self.design_id,
action=hd_fields.OrchestratorAction.VerifyNode)
orchestrator.execute_task(orch_task.get_id())
orch_task = loaded_design.get_task(orch_task.get_id())
assert True
"""
def test_oob_prepare_all_nodes(self, loaded_design):
#mocker.patch('pyghmi.ipmi.private.session.Session')
#mocker.patch.object('pyghmi.ipmi.command.Command','set_bootdev')
orchestrator = orch.Orchestrator(state_manager=loaded_design,
enabled_drivers={'oob': 'drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver'})
orch_task = orchestrator.create_task(task.OrchestratorTask,
site='sitename',
action=enum.OrchestratorAction.PrepareNode)
orchestrator.execute_task(orch_task.get_id())
#assert pyghmi.ipmi.command.Command.set_bootdev.call_count == 3
#assert pyghmi.ipmi.command.Command.set_power.call_count == 6
"""
@pytest.fixture(scope='module')
def loaded_design(self, input_files):
objects.register_all()
input_file = input_files.join("oob.yaml")
design_state = statemgmt.DesignState()
design_data = objects.SiteDesign(id=self.design_id)
design_state.post_design(design_data)
ingester = Ingester()
ingester.enable_plugins([yaml_ingester.YamlIngester])
ingester.ingest_data(plugin_name='yaml', design_state=design_state,
design_id=self.design_id, filenames=[str(input_file)])
return design_state
@pytest.fixture(scope='module')
def input_files(self, tmpdir_factory, request):
tmpdir = tmpdir_factory.mktemp('data')
samples_dir = os.path.dirname(str(request.fspath)) + "../yaml_samples"
samples = os.listdir(samples_dir)
for f in samples:
src_file = samples_dir + "/" + f
dst_file = str(tmpdir) + "/" + f
shutil.copyfile(src_file, dst_file)
return tmpdir

View File

@ -299,33 +299,36 @@ spec:
credential: admin
# Specify storage layout of base OS. Ceph out of scope
storage:
# How storage should be carved up: lvm (logical volumes), flat
# (single partition)
layout: lvm
# Info specific to the boot and root disk/partitions
bootdisk:
# Device will specify an alias defined in hwdefinition.yaml
device: primary_boot
# For LVM, the size of the partition added to VG as a PV
# For flat, the size of the partition formatted as ext4
root_size: 50g
      # The /boot partition. If not specified, /boot will be in root
boot_size: 2g
# Info for additional partitions. Need to balance between
# flexibility and complexity
partitions:
- name: logs
device: primary_boot
# Partition uuid if needed
part_uuid: 84db9664-f45e-11e6-823d-080027ef795a
size: 10g
# Optional, can carve up unformatted block devices
mountpoint: /var/log
fstype: ext4
mount_options: defaults
# Filesystem UUID or label can be specified. UUID recommended
fs_uuid: cdb74f1c-9e50-4e51-be1d-068b0e9ff69e
fs_label: logs
physical_devices:
sda:
labels:
role: rootdisk
partitions:
- name: root
size: 20g
bootable: true
filesystem:
mountpoint: '/'
fstype: 'ext4'
mount_options: 'defaults'
- name: boot
size: 1g
bootable: false
filesystem:
mountpoint: '/boot'
fstype: 'ext4'
mount_options: 'defaults'
sdb:
volume_group: 'log_vg'
volume_groups:
log_vg:
logical_volumes:
- name: 'log_lv'
size: '500m'
filesystem:
mountpoint: '/var/log'
fstype: 'xfs'
mount_options: 'defaults'
# Platform (Operating System) settings
platform:
image: ubuntu_16.04

View File

@ -33,6 +33,6 @@ commands = flake8 \
{posargs}
[flake8]
ignore=E302,H306,D101,D102,D103,D104
ignore=E302,H306,H304,D101,D102,D103,D104
exclude= venv,.venv,.git,.idea,.tox,*.egg-info,*.eggs,bin,dist,./build/
max-line-length=119