Fix issues failing CI pipeline

- Run codebase through YAPF for formatting
- Add tox configuration for yapf and pep8 (a sample stanza is sketched below)
- Fix some non-YAPF pep8 failures
- Enhance verify_site for better MaaS-integration testing
- Create initial basic functional test
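
A hypothetical tox stanza consistent with the bullets above and the
.style.yapf file below; the env names, deps, and commands here are
illustrative assumptions, since the actual tox.ini is not shown in this view::

    # Illustrative only -- not the commit's actual tox.ini content
    [testenv:yapf]
    deps = yapf
    commands = yapf -ir --style=.style.yapf drydock_provisioner

    [testenv:pep8]
    deps = flake8
    commands = flake8 drydock_provisioner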

Change-Id: Ie5b5275d7795693a6551764362aee916b99b3e56
Scott Hussey 2017-08-15 14:33:43 -05:00
parent c71e76aac2
commit e892df58dc
86 changed files with 4860 additions and 2324 deletions

.style.yapf (new file)

@ -0,0 +1,3 @@
[style]
based_on_style = pep8
column_limit = 119
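
With this file at the repository root, YAPF picks the style up automatically;
a typical invocation (target path assumed) would be::

    # Reformat the tree in place using the [style] section above
    yapf --in-place --recursive drydock_provisioner/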


@ -32,8 +32,8 @@ RUN apt -qq update && \
libssl-dev --no-install-recommends
# Copy direct dependency requirements only to build a dependency layer
COPY ./requirements-direct.txt /tmp/drydock/
RUN pip3 install -r /tmp/drydock/requirements-direct.txt
COPY ./requirements-lock.txt /tmp/drydock/
RUN pip3 install -r /tmp/drydock/requirements-lock.txt
COPY . /tmp/drydock


@ -28,8 +28,6 @@ The service account must then be included in the drydock.conf::
delay_auth_decision = true
auth_type = password
auth_section = keystone_authtoken_password
[keystone_authtoken_password]
auth_url = http://<keystone_ip>:5000
project_name = service
project_domain_name = ucp


@ -2,78 +2,58 @@
Installing Drydock in a Dev Environment
=======================================
Drydock runs in Python 3.x only and is tested on Ubuntu 16.04 standard
images. It is recommended that your development environment be an Ubuntu
16.04 virtual machine.
Bootstrap Kubernetes
--------------------
MaaS
----
You can bootstrap your Helm-enabled Kubernetes cluster via the Openstack-Helm
AIO_ http://openstack-helm.readthedocs.io/en/latest/install/developer/all-in-one.html
process or with the UCP Promenade_ https://github.com/att-comdev/promenade tool.
Drydock requires a downstream node provisioning service, and currently the
only implemented driver is for Canonical MaaS. To begin, install MaaS
following their instructions_ https://docs.ubuntu.com/maas/2.2/en/installconfig-package-install.
The MaaS region and rack controllers can be installed in the same VM
as Drydock or a separate VM.
Deploy Drydock and Dependencies
-------------------------------
On the VM that MaaS is installed on, create an admin user:
Drydock is most easily deployed using Armada to deploy the Drydock
container into a Kubernetes cluster via Helm charts. The Drydock chart
is in aic-helm_ https://github.com/att-comdev/aic-helm. It depends on
the deployments of the MaaS_ https://github.com/openstack/openstack-helm-addons chart
and the Keystone_ https://github.com/openstack/openstack-helm chart.
An integrated deployment of these charts can be accomplished using the
Armada_ https://github.com/att-comdev/armada tool. An example integration
chart can be found in the UCP integrations_ https://github.com/att-comdev/ucp-integration
repo in the manifests/basic_ucp directory.
::
$ git clone https://github.com/att-comdev/ucp-integration
$ sudo docker run -ti -v $(pwd):/target -v ~/.kube:/armada/.kube quay.io/attcomdev/armada:master apply --tiller-host <host_ip> --tiller-port 44134 /target/manifests/basic_ucp/ucp-armada.yaml
$ # wait until all pods in 'kubectl get pods -n ucp' are 'Running'
$ KS_POD=$(kubectl get pods -n ucp | grep keystone | cut -d' ' -f1)
$ TOKEN=$(docker run --rm --net=host -e 'OS_AUTH_URL=http://keystone-api.ucp.svc.cluster.local:80/v3' -e 'OS_PASSWORD=password' -e 'OS_PROJECT_DOMAIN_NAME=default' -e 'OS_PROJECT_NAME=service' -e 'OS_REGION_NAME=RegionOne' -e 'OS_USERNAME=drydock' -e 'OS_USER_DOMAIN_NAME=default' -e 'OS_IDENTITY_API_VERSION=3' kolla/ubuntu-source-keystone:3.0.3 openstack token issue -f shell | grep ^id | cut -d'=' -f2 | tr -d '"')
$ docker run --rm -ti --net=host -e "DD_TOKEN=$TOKEN" -e "DD_URL=http://drydock-api.ucp.svc.cluster.local:9000" -e "LC_ALL=C.UTF-8" -e "LANG=C.UTF-8" $DRYDOCK_IMAGE /bin/bash
$ sudo maas createadmin --username=admin --email=admin@example.com
You can now access the MaaS UI by pointing a browser at http://maas_vm_ip:5240/MAAS
and follow the configuration journey_ https://docs.ubuntu.com/maas/2.2/en/installconfig-webui-conf-journey
to finish getting MaaS ready for use.
Drydock Configuration
---------------------
Clone the git repo and customize your configuration file
::
git clone https://github.com/att-comdev/drydock
cd drydock
tox -e genconfig
cp -r etc /tmp/drydock-etc
In `/tmp/drydock-etc/drydock/drydock.conf`, set maas_api_url to the URL you
used when opening the web UI, and set maas_api_key to a valid MaaS API key.
When starting the Drydock container, /tmp/drydock-etc/drydock will be
mounted as /etc/drydock with your customized configuration.
Drydock
-------
Drydock is easily installed via the Docker image at quay.io/attcomdev/drydock:latest.
You will need to customize and mount your configuration file
::
$ sudo docker run -v /tmp/drydock-etc/drydock:/etc/drydock -P -d drydock:latest
Configure Site
--------------
Load Site
---------
To use Drydock for site configuration, you must craft and load a site topology
YAML. An example of this is in examples/designparts_v1.0.yaml.
Load Site
---------
Documentation on building your topology document is under construction
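In the meantime, a minimal hypothetical part is sketched below. The kind
values (Region, Network, NetworkLink, HostProfile, HardwareProfile,
BaremetalNode) are taken from the part catalog elsewhere in this commit;
the remaining fields are illustrative only::

    ---
    apiVersion: 'v1.0'
    kind: Region
    metadata:
      name: example-site    # hypothetical name
    spec: {}
    ...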
Use the Drydock CLI to create a design and load the configuration
::
$ drydock --token <token> --url <drydock_url> design create
$ drydock --token <token> --url <drydock_url> part create -d <design_id> -f <yaml_file>
# drydock design create
# drydock part create -d <design_id> -f <yaml_file>
Use the CLI to create tasks to deploy your site
::
$ drydock --token <token> --url <drydock_url> task create -d <design_id> -a verify_site
$ drydock --token <token> --url <drydock_url> task create -d <design_id> -a prepare_site
$ drydock --token <token> --url <drydock_url> task create -d <design_id> -a prepare_node
$ drydock --token <token> --url <drydock_url> task create -d <design_id> -a deploy_node
# drydock task create -d <design_id> -a verify_site
# drydock task create -d <design_id> -a prepare_site
# drydock task create -d <design_id> -a prepare_node
# drydock task create -d <design_id> -a deploy_node
A demo of this process is available at https://asciinema.org/a/133906


@ -15,13 +15,16 @@
"""
import logging
class CliAction: # pylint: disable=too-few-public-methods
class CliAction: # pylint: disable=too-few-public-methods
""" Action base for CliActions
"""
def __init__(self, api_client):
self.logger = logging.getLogger('drydock_cli')
self.api_client = api_client
self.logger.debug("Action initialized with client %s", self.api_client.session.host)
self.logger.debug("Action initialized with client %s",
self.api_client.session.host)
def invoke(self):
""" The action to be taken. By default, this is not implemented


@ -24,18 +24,20 @@ from .design import commands as design
from .part import commands as part
from .task import commands as task
@click.group()
@click.option('--debug/--no-debug',
help='Enable or disable debugging',
default=False)
@click.option('--token',
'-t',
help='The auth token to be used',
default=lambda: os.environ.get('DD_TOKEN', ''))
@click.option('--url',
'-u',
help='The url of the running drydock instance',
default=lambda: os.environ.get('DD_URL', ''))
@click.option(
'--debug/--no-debug', help='Enable or disable debugging', default=False)
@click.option(
'--token',
'-t',
help='The auth token to be used',
default=lambda: os.environ.get('DD_TOKEN', ''))
@click.option(
'--url',
'-u',
help='The url of the running drydock instance',
default=lambda: os.environ.get('DD_URL', ''))
@click.pass_context
def drydock(ctx, debug, token, url):
""" Drydock CLI to invoke the running instance of the drydock API
@ -70,9 +72,12 @@ def drydock(ctx, debug, token, url):
logger.debug(url_parse_result)
if not url_parse_result.scheme:
ctx.fail('URL must specify a scheme and hostname, optionally a port')
ctx.obj['CLIENT'] = DrydockClient(DrydockSession(scheme=url_parse_result.scheme,
host=url_parse_result.netloc,
token=token))
ctx.obj['CLIENT'] = DrydockClient(
DrydockSession(
scheme=url_parse_result.scheme,
host=url_parse_result.netloc,
token=token))
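
Outside of click, the same wiring can be reproduced directly; the import
paths below are assumptions inferred from the class names used here::

    # Hypothetical direct construction mirroring the CLI context setup
    from drydock_provisioner.drydock_client.session import DrydockSession
    from drydock_provisioner.drydock_client.client import DrydockClient

    client = DrydockClient(
        DrydockSession(scheme='http', host='drydock-api:9000', token='<token>'))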
drydock.add_command(design.design)
drydock.add_command(part.part)


@ -11,13 +11,13 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Actions related to design
"""
"""Actions related to design."""
from drydock_provisioner.cli.action import CliAction
class DesignList(CliAction): # pylint: disable=too-few-public-methods
""" Action to list designs
"""
class DesignList(CliAction): # pylint: disable=too-few-public-methods
"""Action to list designs."""
def __init__(self, api_client):
super().__init__(api_client)
self.logger.debug("DesignList action initialized")
@ -25,32 +25,39 @@ class DesignList(CliAction): # pylint: disable=too-few-public-methods
def invoke(self):
return self.api_client.get_design_ids()
class DesignCreate(CliAction): # pylint: disable=too-few-public-methods
""" Action to create designs
"""
class DesignCreate(CliAction): # pylint: disable=too-few-public-methods
"""Action to create designs."""
def __init__(self, api_client, base_design=None):
"""
:param string base_design: A UUID of the base design to model after
"""Constructor.
:param string base_design: A UUID of the base design to model after
"""
super().__init__(api_client)
self.logger.debug("DesignCreate action initialized with base_design=%s", base_design)
self.logger.debug(
"DesignCreate action initialized with base_design=%s", base_design)
self.base_design = base_design
def invoke(self):
return self.api_client.create_design(base_design=self.base_design)
class DesignShow(CliAction): # pylint: disable=too-few-public-methods
""" Action to show a design.
:param string design_id: A UUID design_id
:param string source: (Optional) The model source to return. 'designed' is as input,
'compiled' is after merging
class DesignShow(CliAction): # pylint: disable=too-few-public-methods
"""Action to show a design.
:param string design_id: A UUID design_id
:param string source: (Optional) The model source to return. 'designed' is as input,
'compiled' is after merging
"""
def __init__(self, api_client, design_id, source='designed'):
super().__init__(api_client)
self.design_id = design_id
self.source = source
self.logger.debug("DesignShow action initialized for design_id = %s", design_id)
self.logger.debug("DesignShow action initialized for design_id = %s",
design_id)
def invoke(self):
return self.api_client.get_design(design_id=self.design_id, source=self.source)
return self.api_client.get_design(
design_id=self.design_id, source=self.source)
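
Given a client like the one constructed for the CLI above, the action maps
one-to-one onto the API call; a hypothetical invocation::

    from drydock_provisioner.cli.design.actions import DesignShow

    # 'compiled' returns the design after merging, per the docstring above
    print(DesignShow(client, design_id='<uuid>', source='compiled').invoke())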


@ -11,8 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" cli.design.commands
Contains commands related to designs
"""cli.design.commands.
Contains commands related to designs
"""
import click
@ -20,37 +21,36 @@ from drydock_provisioner.cli.design.actions import DesignList
from drydock_provisioner.cli.design.actions import DesignShow
from drydock_provisioner.cli.design.actions import DesignCreate
@click.group()
def design():
""" Drydock design commands
"""
"""Drydock design commands."""
pass
@design.command(name='create')
@click.option('--base-design',
'-b',
help='The base design to model this new design after')
@click.option(
'--base-design',
'-b',
help='The base design to model this new design after')
@click.pass_context
def design_create(ctx, base_design=None):
""" Create a design
"""
"""Create a design."""
click.echo(DesignCreate(ctx.obj['CLIENT'], base_design).invoke())
@design.command(name='list')
@click.pass_context
def design_list(ctx):
""" List designs
"""
"""List designs."""
click.echo(DesignList(ctx.obj['CLIENT']).invoke())
@design.command(name='show')
@click.option('--design-id',
'-i',
help='The design id to show')
@click.option('--design-id', '-i', help='The design id to show')
@click.pass_context
def design_show(ctx, design_id):
""" show designs
"""
"""show designs."""
if not design_id:
ctx.fail('The design id must be specified by --design-id')


@ -11,76 +11,82 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Actions related to part command
"""
"""Actions related to part command."""
from drydock_provisioner.cli.action import CliAction
class PartBase(CliAction): # pylint: disable=too-few-public-methods
""" base class to set up part actions requiring a design_id
"""
class PartBase(CliAction): # pylint: disable=too-few-public-methods
"""base class to set up part actions requiring a design_id."""
def __init__(self, api_client, design_id):
super().__init__(api_client)
self.design_id = design_id
self.logger.debug('Initializing a Part action with design_id=%s', design_id)
self.logger.debug('Initializing a Part action with design_id=%s',
design_id)
class PartList(PartBase): # pylint: disable=too-few-public-methods
"""Action to list parts of a design."""
class PartList(PartBase): # pylint: disable=too-few-public-methods
""" Action to list parts of a design
"""
def __init__(self, api_client, design_id):
"""
:param DrydockClient api_client: The api client used for invocation.
:param string design_id: The UUID of the design for which to list parts
"""Constructor.
:param DrydockClient api_client: The api client used for invocation.
:param string design_id: The UUID of the design for which to list parts
"""
super().__init__(api_client, design_id)
self.logger.debug('PartList action initialized')
def invoke(self):
#TODO: change the api call
# TODO(sh8121att): change the api call
return 'This function does not yet have an implementation to support the request'
class PartCreate(PartBase): # pylint: disable=too-few-public-methods
""" Action to create parts of a design
"""
class PartCreate(PartBase): # pylint: disable=too-few-public-methods
"""Action to create parts of a design."""
def __init__(self, api_client, design_id, in_file):
"""
:param DrydockClient api_client: The api client used for invocation.
:param string design_id: The UUID of the design for which to create a part
:param in_file: The file containing the specification of the part
"""Constructor.
:param DrydockClient api_client: The api client used for invocation.
:param string design_id: The UUID of the design for which to create a part
:param in_file: The file containing the specification of the part
"""
super().__init__(api_client, design_id)
self.in_file = in_file
self.logger.debug('PartCreate action init. Input file (trunc to 100 chars)=%s', in_file[:100])
self.logger.debug(
'PartCreate action init. Input file (trunc to 100 chars)=%s',
in_file[:100])
def invoke(self):
return self.api_client.load_parts(self.design_id, self.in_file)
class PartShow(PartBase): # pylint: disable=too-few-public-methods
""" Action to show a part of a design.
"""
class PartShow(PartBase): # pylint: disable=too-few-public-methods
"""Action to show a part of a design."""
def __init__(self, api_client, design_id, kind, key, source='designed'):
"""
:param DrydockClient api_client: The api client used for invocation.
:param string design_id: the UUID of the design containing this part
:param string kind: the string representing the 'kind' of the document to return
:param string key: the string representing the key of the document to return.
:param string source: 'designed' (default) if this is the designed version,
'compiled' if the compiled version (after merging)
"""Constructor.
:param DrydockClient api_client: The api client used for invocation.
:param string design_id: the UUID of the design containing this part
:param string kind: the string representing the 'kind' of the document to return
:param string key: the string representing the key of the document to return.
:param string source: 'designed' (default) if this is the designed version,
'compiled' if the compiled version (after merging)
"""
super().__init__(api_client, design_id)
self.kind = kind
self.key = key
self.source = source
self.logger.debug('PartShow action initialized for design_id=%s,'
' kind=%s, key=%s, source=%s',
design_id,
kind,
key,
' kind=%s, key=%s, source=%s', design_id, kind, key,
source)
def invoke(self):
return self.api_client.get_part(design_id=self.design_id,
kind=self.kind,
key=self.key,
source=self.source)
return self.api_client.get_part(
design_id=self.design_id,
kind=self.kind,
key=self.key,
source=self.source)


@ -11,75 +11,76 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" cli.part.commands
Contains commands related to parts of designs
"""cli.part.commands.
Contains commands related to parts of designs.
"""
import click
from drydock_provisioner.cli.part.actions import PartList
from drydock_provisioner.cli.part.actions import PartShow
from drydock_provisioner.cli.part.actions import PartCreate
@click.group()
@click.option('--design-id',
'-d',
help='The id of the design containing the target parts')
@click.option(
'--design-id',
'-d',
help='The id of the design containing the target parts')
@click.pass_context
def part(ctx, design_id=None):
""" Drydock part commands
"""
"""Drydock part commands."""
if not design_id:
ctx.fail('Error: Design id must be specified using --design-id')
ctx.obj['DESIGN_ID'] = design_id
@part.command(name='create')
@click.option('--file',
'-f',
help='The file name containing the part to create')
@click.option(
'--file', '-f', help='The file name containing the part to create')
@click.pass_context
def part_create(ctx, file=None):
""" Create a part
"""
"""Create a part."""
if not file:
ctx.fail('A file to create a part is required using --file')
with open(file, 'r') as file_input:
file_contents = file_input.read()
# here is where some potential validation could be done on the input file
click.echo(PartCreate(ctx.obj['CLIENT'],
design_id=ctx.obj['DESIGN_ID'],
in_file=file_contents).invoke())
click.echo(
PartCreate(
ctx.obj['CLIENT'],
design_id=ctx.obj['DESIGN_ID'],
in_file=file_contents).invoke())
@part.command(name='list')
@click.pass_context
def part_list(ctx):
""" List parts of a design
"""
click.echo(PartList(ctx.obj['CLIENT'], design_id=ctx.obj['DESIGN_ID']).invoke())
"""List parts of a design."""
click.echo(
PartList(ctx.obj['CLIENT'], design_id=ctx.obj['DESIGN_ID']).invoke())
@part.command(name='show')
@click.option('--source',
'-s',
help='designed | compiled')
@click.option('--kind',
'-k',
help='The kind value of the document to show')
@click.option('--key',
'-i',
help='The key value of the document to show')
@click.option('--source', '-s', help='designed | compiled')
@click.option('--kind', '-k', help='The kind value of the document to show')
@click.option('--key', '-i', help='The key value of the document to show')
@click.pass_context
def part_show(ctx, source, kind, key):
""" show a part of a design
"""
"""show a part of a design."""
if not kind:
ctx.fail('The kind must be specified by --kind')
if not key:
ctx.fail('The key must be specified by --key')
click.echo(PartShow(ctx.obj['CLIENT'],
design_id=ctx.obj['DESIGN_ID'],
kind=kind,
key=key,
source=source).invoke())
click.echo(
PartShow(
ctx.obj['CLIENT'],
design_id=ctx.obj['DESIGN_ID'],
kind=kind,
key=key,
source=source).invoke())


@ -16,9 +16,11 @@
from drydock_provisioner.cli.action import CliAction
class TaskList(CliAction): # pylint: disable=too-few-public-methods
class TaskList(CliAction): # pylint: disable=too-few-public-methods
""" Action to list tasks
"""
def __init__(self, api_client):
"""
:param DrydockClient api_client: The api client used for invocation.
@ -29,10 +31,18 @@ class TaskList(CliAction): # pylint: disable=too-few-public-methods
def invoke(self):
return self.api_client.get_tasks()
class TaskCreate(CliAction): # pylint: disable=too-few-public-methods
class TaskCreate(CliAction): # pylint: disable=too-few-public-methods
""" Action to create tasks against a design
"""
def __init__(self, api_client, design_id, action_name=None, node_names=None, rack_names=None, node_tags=None):
def __init__(self,
api_client,
design_id,
action_name=None,
node_names=None,
rack_names=None,
node_tags=None):
"""
:param DrydockClient api_client: The api client used for invocation.
:param string design_id: The UUID of the design for which to create a task
@ -44,7 +54,8 @@ class TaskCreate(CliAction): # pylint: disable=too-few-public-methods
super().__init__(api_client)
self.design_id = design_id
self.action_name = action_name
self.logger.debug('TaskCreate action initialized for design=%s', design_id)
self.logger.debug('TaskCreate action initialized for design=%s',
design_id)
self.logger.debug('Action is %s', action_name)
if node_names is None:
node_names = []
@ -57,19 +68,23 @@ class TaskCreate(CliAction): # pylint: disable=too-few-public-methods
self.logger.debug("Rack names = %s", rack_names)
self.logger.debug("Node tags = %s", node_tags)
self.node_filter = {'node_names' : node_names,
'rack_names' : rack_names,
'node_tags' : node_tags
}
self.node_filter = {
'node_names': node_names,
'rack_names': rack_names,
'node_tags': node_tags
}
def invoke(self):
return self.api_client.create_task(design_id=self.design_id,
task_action=self.action_name,
node_filter=self.node_filter)
return self.api_client.create_task(
design_id=self.design_id,
task_action=self.action_name,
node_filter=self.node_filter)
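
The node_filter dict above is assembled from three optional lists; a
hypothetical direct use of the action (client assumed as before)::

    from drydock_provisioner.cli.task.actions import TaskCreate

    task = TaskCreate(client,
                      design_id='<uuid>',
                      action_name='verify_site',
                      node_names=['node01', 'node02'],
                      rack_names=[],
                      node_tags=[]).invoke()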
class TaskShow(CliAction): # pylint: disable=too-few-public-methods
class TaskShow(CliAction): # pylint: disable=too-few-public-methods
""" Action to show a task's detial.
"""
def __init__(self, api_client, task_id):
"""
:param DrydockClient api_client: The api client used for invocation.
@ -77,7 +92,8 @@ class TaskShow(CliAction): # pylint: disable=too-few-public-methods
"""
super().__init__(api_client)
self.task_id = task_id
self.logger.debug('TaskShow action initialized for task_id=%s,', task_id)
self.logger.debug('TaskShow action initialized for task_id=%s,',
task_id)
def invoke(self):
return self.api_client.get_task(task_id=self.task_id)


@ -20,29 +20,35 @@ from drydock_provisioner.cli.task.actions import TaskList
from drydock_provisioner.cli.task.actions import TaskShow
from drydock_provisioner.cli.task.actions import TaskCreate
@click.group()
def task():
""" Drydock task commands
"""
@task.command(name='create')
@click.option('--design-id',
'-d',
help='The design id for this action')
@click.option('--action',
'-a',
help='The action to perform')
@click.option('--node-names',
'-n',
help='The nodes targeted by this action, comma separated')
@click.option('--rack-names',
'-r',
help='The racks targeted by this action, comma separated')
@click.option('--node-tags',
'-t',
help='The nodes by tag name targeted by this action, comma separated')
@click.option('--design-id', '-d', help='The design id for this action')
@click.option('--action', '-a', help='The action to perform')
@click.option(
'--node-names',
'-n',
help='The nodes targeted by this action, comma separated')
@click.option(
'--rack-names',
'-r',
help='The racks targeted by this action, comma separated')
@click.option(
'--node-tags',
'-t',
help='The nodes by tag name targeted by this action, comma separated')
@click.pass_context
def task_create(ctx, design_id=None, action=None, node_names=None, rack_names=None, node_tags=None):
def task_create(ctx,
design_id=None,
action=None,
node_names=None,
rack_names=None,
node_tags=None):
""" Create a task
"""
if not design_id:
@ -51,13 +57,18 @@ def task_create(ctx, design_id=None, action=None, node_names=None, rack_names=No
if not action:
ctx.fail('Error: Action must be specified using --action')
click.echo(TaskCreate(ctx.obj['CLIENT'],
design_id=design_id,
action_name=action,
node_names=[x.strip() for x in node_names.split(',')] if node_names else [],
rack_names=[x.strip() for x in rack_names.split(',')] if rack_names else [],
node_tags=[x.strip() for x in node_tags.split(',')] if node_tags else []
).invoke())
click.echo(
TaskCreate(
ctx.obj['CLIENT'],
design_id=design_id,
action_name=action,
node_names=[x.strip() for x in node_names.split(',')]
if node_names else [],
rack_names=[x.strip() for x in rack_names.split(',')]
if rack_names else [],
node_tags=[x.strip() for x in node_tags.split(',')]
if node_tags else []).invoke())
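
From the shell, the comma-separated options above are split into those
lists; a hypothetical invocation, with the token and URL supplied via the
DD_TOKEN and DD_URL environment variables supported by the group options::

    $ drydock task create -d <design_id> -a prepare_node -n node01,node02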
@task.command(name='list')
@click.pass_context
@ -66,10 +77,9 @@ def task_list(ctx):
"""
click.echo(TaskList(ctx.obj['CLIENT']).invoke())
@task.command(name='show')
@click.option('--task-id',
'-t',
help='The required task id')
@click.option('--task-id', '-t', help='The required task id')
@click.pass_context
def task_show(ctx, task_id=None):
""" show a task's details
@ -77,5 +87,4 @@ def task_show(ctx, task_id=None):
if not task_id:
ctx.fail('The task id must be specified by --task-id')
click.echo(TaskShow(ctx.obj['CLIENT'],
task_id=task_id).invoke())
click.echo(TaskShow(ctx.obj['CLIENT'], task_id=task_id).invoke())


@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Single point of entry to generate the sample configuration file.
This module collects all the necessary info from the other modules in this
@ -40,51 +39,103 @@ import keystoneauth1.loading as loading
IGNORED_MODULES = ('drydock', 'config')
class DrydockConfig(object):
"""
Initialize all the core options
"""
# Default options
options = [
cfg.IntOpt('poll_interval', default=10, help='Polling interval in seconds for checking subtask or downstream status'),
cfg.IntOpt(
'poll_interval',
default=10,
help=
'Polling interval in seconds for checking subtask or downstream status'
),
]
# Logging options
logging_options = [
cfg.StrOpt('log_level', default='INFO', help='Global log level for Drydock'),
cfg.StrOpt('global_logger_name', default='drydock', help='Logger name for the top-level logger'),
cfg.StrOpt('oobdriver_logger_name', default='${global_logger_name}.oobdriver', help='Logger name for OOB driver logging'),
cfg.StrOpt('nodedriver_logger_name', default='${global_logger_name}.nodedriver', help='Logger name for Node driver logging'),
cfg.StrOpt('control_logger_name', default='${global_logger_name}.control', help='Logger name for API server logging'),
cfg.StrOpt(
'log_level', default='INFO', help='Global log level for Drydock'),
cfg.StrOpt(
'global_logger_name',
default='drydock',
help='Logger name for the top-level logger'),
cfg.StrOpt(
'oobdriver_logger_name',
default='${global_logger_name}.oobdriver',
help='Logger name for OOB driver logging'),
cfg.StrOpt(
'nodedriver_logger_name',
default='${global_logger_name}.nodedriver',
help='Logger name for Node driver logging'),
cfg.StrOpt(
'control_logger_name',
default='${global_logger_name}.control',
help='Logger name for API server logging'),
]
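
The ${global_logger_name} defaults above rely on oslo.config value
interpolation, so overriding one option cascades to the derived logger
names; a hypothetical drydock.conf fragment::

    [logging]
    log_level = DEBUG
    global_logger_name = drydock
    # oobdriver_logger_name then resolves to drydock.oobdriver by default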
# Enabled plugins
plugin_options = [
cfg.MultiStrOpt('ingester',
default=['drydock_provisioner.ingester.plugins.yaml.YamlIngester'],
help='Module path string of an input ingester to enable'),
cfg.MultiStrOpt('oob_driver',
default=['drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver'],
help='Module path string of an OOB driver to enable'),
cfg.StrOpt('node_driver',
default='drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver',
help='Module path string of the Node driver to enable'),
cfg.MultiStrOpt(
'ingester',
default=['drydock_provisioner.ingester.plugins.yaml.YamlIngester'],
help='Module path string of an input ingester to enable'),
cfg.MultiStrOpt(
'oob_driver',
default=[
'drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver'
],
help='Module path string of an OOB driver to enable'),
cfg.StrOpt(
'node_driver',
default=
'drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver',
help='Module path string of the Node driver to enable'),
# TODO Network driver not yet implemented
cfg.StrOpt('network_driver',
default=None,
help='Module path string of the Network driver to enable'),
cfg.StrOpt(
'network_driver',
default=None,
help='Module path string of the Network driver to enable'),
]
# Timeouts for various tasks specified in minutes
timeout_options = [
cfg.IntOpt('drydock_timeout', default=5, help='Fallback timeout when a specific one is not configured'),
cfg.IntOpt('create_network_template', default=2, help='Timeout in minutes for creating site network templates'),
cfg.IntOpt('configure_user_credentials', default=2, help='Timeout in minutes for creating user credentials'),
cfg.IntOpt('identify_node', default=10, help='Timeout in minutes for initial node identification'),
cfg.IntOpt('configure_hardware', default=30, help='Timeout in minutes for node commissioning and hardware configuration'),
cfg.IntOpt('apply_node_networking', default=5, help='Timeout in minutes for configuring node networking'),
cfg.IntOpt('apply_node_platform', default=5, help='Timeout in minutes for configuring node platform'),
cfg.IntOpt('deploy_node', default=45, help='Timeout in minutes for deploying a node'),
cfg.IntOpt(
'drydock_timeout',
default=5,
help='Fallback timeout when a specific one is not configured'),
cfg.IntOpt(
'create_network_template',
default=2,
help='Timeout in minutes for creating site network templates'),
cfg.IntOpt(
'configure_user_credentials',
default=2,
help='Timeout in minutes for creating user credentials'),
cfg.IntOpt(
'identify_node',
default=10,
help='Timeout in minutes for initial node identification'),
cfg.IntOpt(
'configure_hardware',
default=30,
help=
'Timeout in minutes for node commissioning and hardware configuration'
),
cfg.IntOpt(
'apply_node_networking',
default=5,
help='Timeout in minutes for configuring node networking'),
cfg.IntOpt(
'apply_node_platform',
default=5,
help='Timeout in minutes for configuring node platform'),
cfg.IntOpt(
'deploy_node',
default=45,
help='Timeout in minutes for deploying a node'),
]
def __init__(self):
@ -94,17 +145,23 @@ class DrydockConfig(object):
self.conf.register_opts(DrydockConfig.options)
self.conf.register_opts(DrydockConfig.logging_options, group='logging')
self.conf.register_opts(DrydockConfig.plugin_options, group='plugins')
self.conf.register_opts(DrydockConfig.timeout_options, group='timeouts')
self.conf.register_opts(loading.get_auth_plugin_conf_options('password'), group='keystone_authtoken')
self.conf.register_opts(
DrydockConfig.timeout_options, group='timeouts')
self.conf.register_opts(
loading.get_auth_plugin_conf_options('password'),
group='keystone_authtoken')
config_mgr = DrydockConfig()
def list_opts():
opts = {'DEFAULT': DrydockConfig.options,
'logging': DrydockConfig.logging_options,
'plugins': DrydockConfig.plugin_options,
'timeouts': DrydockConfig.timeout_options
}
opts = {
'DEFAULT': DrydockConfig.options,
'logging': DrydockConfig.logging_options,
'plugins': DrydockConfig.plugin_options,
'timeouts': DrydockConfig.timeout_options
}
package_path = os.path.dirname(os.path.abspath(__file__))
parent_module = ".".join(__name__.split('.')[:-1])
@ -112,13 +169,16 @@ def list_opts():
imported_modules = _import_modules(module_names)
_append_config_options(imported_modules, opts)
# Assume we'll use the password plugin, so include those options in the configuration template
opts['keystone_authtoken'] = loading.get_auth_plugin_conf_options('password')
opts['keystone_authtoken'] = loading.get_auth_plugin_conf_options(
'password')
return _tupleize(opts)
def _tupleize(d):
"""Convert a dict of options to the 2-tuple format."""
return [(key, value) for key, value in d.items()]
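
The 2-tuple format is what oslo-config-generator consumes. Wiring list_opts
in as an entry point would look roughly like this in setup.cfg; the exact
placement is an assumption, as the file is not shown in this view::

    [entry_points]
    oslo.config.opts =
        drydock_provisioner = drydock_provisioner.config:list_opts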
def _list_module_names(pkg_path, parent_module):
module_names = []
for _, module_name, ispkg in pkgutil.iter_modules(path=[pkg_path]):
@ -126,11 +186,14 @@ def _list_module_names(pkg_path, parent_module):
# Skip this module.
continue
elif ispkg:
module_names.extend(_list_module_names(pkg_path + "/" + module_name, parent_module + "." + module_name))
module_names.extend(
_list_module_names(pkg_path + "/" + module_name, parent_module
+ "." + module_name))
else:
module_names.append(parent_module + "." + module_name)
return module_names
def _import_modules(module_names):
imported_modules = []
for module_name in module_names:
@ -140,6 +203,7 @@ def _import_modules(module_names):
imported_modules.append(module)
return imported_modules
def _append_config_options(imported_modules, config_options):
for module in imported_modules:
configs = module.list_opts()


@ -20,6 +20,7 @@ from .bootdata import *
from .base import DrydockRequest
from .middleware import AuthMiddleware, ContextMiddleware, LoggingMiddleware
def start_api(state_manager=None, ingester=None, orchestrator=None):
"""
Start the Drydock API service
@ -30,24 +31,35 @@ def start_api(state_manager=None, ingester=None, orchestrator=None):
part input
:param orchestrator: Instance of drydock_provisioner.orchestrator.Orchestrator for managing tasks
"""
control_api = falcon.API(request_type=DrydockRequest,
middleware=[AuthMiddleware(), ContextMiddleware(), LoggingMiddleware()])
control_api = falcon.API(
request_type=DrydockRequest,
middleware=[
AuthMiddleware(),
ContextMiddleware(),
LoggingMiddleware()
])
# v1.0 of Drydock API
v1_0_routes = [
# API for managing orchestrator tasks
('/tasks', TasksResource(state_manager=state_manager, orchestrator=orchestrator)),
# API for managing orchestrator tasks
('/tasks', TasksResource(
state_manager=state_manager, orchestrator=orchestrator)),
('/tasks/{task_id}', TaskResource(state_manager=state_manager)),
# API for managing site design data
# API for managing site design data
('/designs', DesignsResource(state_manager=state_manager)),
('/designs/{design_id}', DesignResource(state_manager=state_manager, orchestrator=orchestrator)),
('/designs/{design_id}/parts', DesignsPartsResource(state_manager=state_manager, ingester=ingester)),
('/designs/{design_id}/parts/{kind}', DesignsPartsKindsResource(state_manager=state_manager)),
('/designs/{design_id}/parts/{kind}/{name}', DesignsPartResource(state_manager=state_manager, orchestrator=orchestrator)),
('/designs/{design_id}', DesignResource(
state_manager=state_manager, orchestrator=orchestrator)),
('/designs/{design_id}/parts', DesignsPartsResource(
state_manager=state_manager, ingester=ingester)),
('/designs/{design_id}/parts/{kind}', DesignsPartsKindsResource(
state_manager=state_manager)),
('/designs/{design_id}/parts/{kind}/{name}', DesignsPartResource(
state_manager=state_manager, orchestrator=orchestrator)),
# API for nodes to discover their bootdata during curtin install
('/bootdata/{hostname}/{data_key}', BootdataResource(state_manager=state_manager, orchestrator=orchestrator))
# API for nodes to discover their bootdata during curtin install
('/bootdata/{hostname}/{data_key}', BootdataResource(
state_manager=state_manager, orchestrator=orchestrator))
]
for path, res in v1_0_routes:
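
The loop body is cut off in this view; with Falcon's routing API it
presumably mounts each resource under the version prefix, along the lines
of::

    for path, res in v1_0_routes:
        control_api.add_route('/api/v1.0' + path, res)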


@ -20,8 +20,8 @@ import falcon.request
import drydock_provisioner.error as errors
class BaseResource(object):
class BaseResource(object):
def __init__(self):
self.logger = logging.getLogger('control')
@ -41,7 +41,8 @@ class BaseResource(object):
if req.content_length is None or req.content_length == 0:
return None
if req.content_type is not None and req.content_type.lower() == 'application/json':
if req.content_type is not None and req.content_type.lower(
) == 'application/json':
raw_body = req.stream.read(req.content_length or 0)
if raw_body is None:
@ -51,20 +52,23 @@ class BaseResource(object):
json_body = json.loads(raw_body.decode('utf-8'))
return json_body
except json.JSONDecodeError as jex:
raise errors.InvalidFormat("%s: Invalid JSON in body: %s" % (req.path, jex))
print("Invalid JSON in request: \n%s" % raw_body.decode('utf-8'))
self.error(req.context, "Invalid JSON in request: \n%s" % raw_body.decode('utf-8'))
raise errors.InvalidFormat("%s: Invalid JSON in body: %s" %
(req.path, jex))
else:
raise errors.InvalidFormat("Requires application/json payload")
def return_error(self, resp, status_code, message="", retry=False):
resp.body = json.dumps({'type': 'error', 'message': message, 'retry': retry})
resp.body = json.dumps({
'type': 'error',
'message': message,
'retry': retry
})
resp.status = status_code
def log_error(self, ctx, level, msg):
extra = {
'user': 'N/A',
'req_id': 'N/A',
'external_ctx': 'N/A'
}
extra = {'user': 'N/A', 'req_id': 'N/A', 'external_ctx': 'N/A'}
if ctx is not None:
extra = {
@ -89,27 +93,29 @@ class BaseResource(object):
class StatefulResource(BaseResource):
def __init__(self, state_manager=None, **kwargs):
super(StatefulResource, self).__init__(**kwargs)
if state_manager is None:
self.error(None, "StatefulResource:init - StatefulResources require a state manager be set")
raise ValueError("StatefulResources require a state manager be set")
self.error(
None,
"StatefulResource:init - StatefulResources require a state manager be set"
)
raise ValueError(
"StatefulResources require a state manager be set")
self.state_manager = state_manager
class DrydockRequestContext(object):
def __init__(self):
self.log_level = 'ERROR'
self.user = None # Username
self.user_id = None # User ID (UUID)
self.user_domain_id = None # Domain owning user
self.user = None # Username
self.user_id = None # User ID (UUID)
self.user_domain_id = None # Domain owning user
self.roles = []
self.project_id = None
self.project_domain_id = None # Domain owning project
self.project_domain_id = None # Domain owning project
self.is_admin_project = False
self.authenticated = False
self.request_id = str(uuid.uuid4())
@ -133,8 +139,7 @@ class DrydockRequestContext(object):
self.roles.extend(roles)
def remove_role(self, role):
self.roles = [x for x in self.roles
if x != role]
self.roles = [x for x in self.roles if x != role]
def set_external_marker(self, marker):
self.external_marker = marker


@ -20,10 +20,14 @@ from oslo_config import cfg
from .base import StatefulResource
class BootdataResource(StatefulResource):
bootdata_options = [
cfg.StrOpt('prom_init', default='/etc/drydock/bootdata/join.sh', help='Path to file to distribute for prom_init.sh')
cfg.StrOpt(
'prom_init',
default='/etc/drydock/bootdata/join.sh',
help='Path to file to distribute for prom_init.sh')
]
def __init__(self, orchestrator=None, **kwargs):
@ -31,7 +35,8 @@ class BootdataResource(StatefulResource):
self.authorized_roles = ['anyone']
self.orchestrator = orchestrator
cfg.CONF.register_opts(BootdataResource.bootdata_options, group='bootdata')
cfg.CONF.register_opts(
BootdataResource.bootdata_options, group='bootdata')
init_file = open(cfg.CONF.bootdata.prom_init, 'r')
self.prom_init = init_file.read()
@ -39,7 +44,7 @@ class BootdataResource(StatefulResource):
def on_get(self, req, resp, hostname, data_key):
if data_key == 'promservice':
resp.body = BootdataResource.prom_init_service
resp.body = BootdataResource.prom_init_service
resp.content_type = 'text/plain'
return
elif data_key == 'vfservice':
@ -60,7 +65,8 @@ class BootdataResource(StatefulResource):
resp.content_type = 'text/plain'
host_design_id = bootdata.get('design_id', None)
host_design = self.orchestrator.get_effective_site(host_design_id)
host_design = self.orchestrator.get_effective_site(
host_design_id)
host_model = host_design.get_baremetal_node(hostname)
@ -71,9 +77,12 @@ class BootdataResource(StatefulResource):
all_configs = host_design.get_promenade_config(part_selectors)
part_list = [i.document for i in all_configs]
part_list = [i.document for i in all_configs]
resp.body = "---\n" + "---\n".join([base64.b64decode(i.encode()).decode('utf-8') for i in part_list]) + "\n..."
resp.body = "---\n" + "---\n".join([
base64.b64decode(i.encode()).decode('utf-8')
for i in part_list
]) + "\n..."
return
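
Nodes consume this endpoint as plain text during install; a hypothetical
fetch, with the service address taken from the dev-environment guide above::

    $ curl http://drydock-api.ucp.svc.cluster.local:9000/api/v1.0/bootdata/node01/promservice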
@ -106,5 +115,6 @@ ExecStart=/bin/sh -c '/bin/echo 4 >/sys/class/net/ens3f0/device/sriov_numvfs'
WantedBy=multi-user.target
"""
def list_opts():
return {'bootdata': BootdataResource.bootdata_options}


@ -21,8 +21,8 @@ import drydock_provisioner.error as errors
from .base import StatefulResource
class DesignsResource(StatefulResource):
class DesignsResource(StatefulResource):
def __init__(self, **kwargs):
super(DesignsResource, self).__init__(**kwargs)
@ -38,7 +38,11 @@ class DesignsResource(StatefulResource):
resp.status = falcon.HTTP_200
except Exception as ex:
self.error(req.context, "Exception raised: %s" % str(ex))
self.return_error(resp, falcon.HTTP_500, message="Error accessing design list", retry=True)
self.return_error(
resp,
falcon.HTTP_500,
message="Error accessing design list",
retry=True)
@policy.ApiEnforcer('physical_provisioner:ingest_data')
def on_post(self, req, resp):
@ -52,7 +56,8 @@ class DesignsResource(StatefulResource):
if base_design is not None:
base_design = uuid.UUID(base_design)
design = hd_objects.SiteDesign(base_design_id=base_design_uuid)
design = hd_objects.SiteDesign(
base_design_id=base_design)
else:
design = hd_objects.SiteDesign()
design.assign_id()
@ -62,14 +67,18 @@ class DesignsResource(StatefulResource):
resp.status = falcon.HTTP_201
except errors.StateError as stex:
self.error(req.context, "Error updating persistence")
self.return_error(resp, falcon.HTTP_500, message="Error updating persistence", retry=True)
self.return_error(
resp,
falcon.HTTP_500,
message="Error updating persistence",
retry=True)
except errors.InvalidFormat as fex:
self.error(req.context, str(fex))
self.return_error(resp, falcon.HTTP_400, message=str(fex), retry=False)
self.return_error(
resp, falcon.HTTP_400, message=str(fex), retry=False)
class DesignResource(StatefulResource):
def __init__(self, orchestrator=None, **kwargs):
super(DesignResource, self).__init__(**kwargs)
self.authorized_roles = ['user']
@ -90,47 +99,81 @@ class DesignResource(StatefulResource):
resp.body = json.dumps(design.obj_to_simple())
except errors.DesignError:
self.error(req.context, "Design %s not found" % design_id)
self.return_error(resp, falcon.HTTP_404, message="Design %s not found" % design_id, retry=False)
self.return_error(
resp,
falcon.HTTP_404,
message="Design %s not found" % design_id,
retry=False)
class DesignsPartsResource(StatefulResource):
def __init__(self, ingester=None, **kwargs):
super(DesignsPartsResource, self).__init__(**kwargs)
self.ingester = ingester
self.authorized_roles = ['user']
if ingester is None:
self.error(None, "DesignsPartsResource requires a configured Ingester instance")
raise ValueError("DesignsPartsResource requires a configured Ingester instance")
self.error(
None,
"DesignsPartsResource requires a configured Ingester instance")
raise ValueError(
"DesignsPartsResource requires a configured Ingester instance")
@policy.ApiEnforcer('physical_provisioner:ingest_data')
def on_post(self, req, resp, design_id):
ingester_name = req.params.get('ingester', None)
if ingester_name is None:
self.error(None, "DesignsPartsResource POST requires parameter 'ingester'")
self.return_error(resp, falcon.HTTP_400, message="POST requires parameter 'ingester'", retry=False)
self.error(
None,
"DesignsPartsResource POST requires parameter 'ingester'")
self.return_error(
resp,
falcon.HTTP_400,
message="POST requires parameter 'ingester'",
retry=False)
else:
try:
raw_body = req.stream.read(req.content_length or 0)
if raw_body is not None and len(raw_body) > 0:
parsed_items = self.ingester.ingest_data(plugin_name=ingester_name, design_state=self.state_manager,
content=raw_body, design_id=design_id, context=req.context)
parsed_items = self.ingester.ingest_data(
plugin_name=ingester_name,
design_state=self.state_manager,
content=raw_body,
design_id=design_id,
context=req.context)
resp.status = falcon.HTTP_201
resp.body = json.dumps([x.obj_to_simple() for x in parsed_items])
resp.body = json.dumps(
[x.obj_to_simple() for x in parsed_items])
else:
self.return_error(resp, falcon.HTTP_400, message="Empty body not supported", retry=False)
self.return_error(
resp,
falcon.HTTP_400,
message="Empty body not supported",
retry=False)
except ValueError:
self.return_error(resp, falcon.HTTP_500, message="Error processing input", retry=False)
self.return_error(
resp,
falcon.HTTP_500,
message="Error processing input",
retry=False)
except LookupError:
self.return_error(resp, falcon.HTTP_400, message="Ingester %s not registered" % ingester_name, retry=False)
self.return_error(
resp,
falcon.HTTP_400,
message="Ingester %s not registered" % ingester_name,
retry=False)
@policy.ApiEnforcer('physical_provisioner:ingest_data')
def on_get(self, req, resp, design_id):
try:
design = self.state_manager.get_design(design_id)
except DesignError:
self.return_error(resp, falcon.HTTP_404, message="Design %s not found" % design_id, retry=False)
self.return_error(
resp,
falcon.HTTP_404,
message="Design %s nout found" % design_id,
retry=False)
part_catalog = []
@ -138,15 +181,30 @@ class DesignsPartsResource(StatefulResource):
part_catalog.append({'kind': 'Region', 'key': site.get_id()})
part_catalog.extend([{'kind': 'Network', 'key': n.get_id()} for n in design.networks])
part_catalog.extend([{
'kind': 'Network',
'key': n.get_id()
} for n in design.networks])
part_catalog.extend([{'kind': 'NetworkLink', 'key': l.get_id()} for l in design.network_links])
part_catalog.extend([{
'kind': 'NetworkLink',
'key': l.get_id()
} for l in design.network_links])
part_catalog.extend([{'kind': 'HostProfile', 'key': p.get_id()} for p in design.host_profiles])
part_catalog.extend([{
'kind': 'HostProfile',
'key': p.get_id()
} for p in design.host_profiles])
part_catalog.extend([{'kind': 'HardwareProfile', 'key': p.get_id()} for p in design.hardware_profiles])
part_catalog.extend([{
'kind': 'HardwareProfile',
'key': p.get_id()
} for p in design.hardware_profiles])
part_catalog.extend([{'kind': 'BaremetalNode', 'key': n.get_id()} for n in design.baremetal_nodes])
part_catalog.extend([{
'kind': 'BaremetalNode',
'key': n.get_id()
} for n in design.baremetal_nodes])
resp.body = json.dumps(part_catalog)
resp.status = falcon.HTTP_200
@ -154,7 +212,6 @@ class DesignsPartsResource(StatefulResource):
class DesignsPartsKindsResource(StatefulResource):
def __init__(self, **kwargs):
super(DesignsPartsKindsResource, self).__init__(**kwargs)
self.authorized_roles = ['user']
@ -165,15 +222,15 @@ class DesignsPartsKindsResource(StatefulResource):
resp.status = falcon.HTTP_200
class DesignsPartResource(StatefulResource):
class DesignsPartResource(StatefulResource):
def __init__(self, orchestrator=None, **kwargs):
super(DesignsPartResource, self).__init__(**kwargs)
self.authorized_roles = ['user']
self.orchestrator = orchestrator
@policy.ApiEnforcer('physical_provisioner:read_data')
def on_get(self, req , resp, design_id, kind, name):
def on_get(self, req, resp, design_id, kind, name):
ctx = req.context
source = req.params.get('source', 'designed')
@ -199,13 +256,19 @@ class DesignsPartResource(StatefulResource):
part = design.get_baremetal_node(name)
else:
self.error(req.context, "Kind %s unknown" % kind)
self.return_error(resp, falcon.HTTP_404, message="Kind %s unknown" % kind, retry=False)
self.return_error(
resp,
falcon.HTTP_404,
message="Kind %s unknown" % kind,
retry=False)
return
resp.body = json.dumps(part.obj_to_simple())
except errors.DesignError as dex:
self.error(req.context, str(dex))
self.return_error(resp, falcon.HTTP_404, message=str(dex), retry=False)
self.return_error(
resp, falcon.HTTP_404, message=str(dex), retry=False)
except Exception as exc:
self.error(req.context, str(exc))
self.return_error(resp. falcon.HTTP_500, message=str(exc), retry=False)
self.return_error(
resp, falcon.HTTP_500, message=str(exc), retry=False)


@ -20,8 +20,8 @@ from oslo_config import cfg
from drydock_provisioner import policy
class AuthMiddleware(object):
class AuthMiddleware(object):
def __init__(self):
self.logger = logging.getLogger('drydock')
@ -44,11 +44,21 @@ class AuthMiddleware(object):
if auth_status == 'Confirmed':
# Process account and roles
ctx.authenticated = True
ctx.user = req.get_header('X-SERVICE-USER-NAME') if service else req.get_header('X-USER-NAME')
ctx.user_id = req.get_header('X-SERVICE-USER-ID') if service else req.get_header('X-USER-ID')
ctx.user_domain_id = req.get_header('X-SERVICE-USER-DOMAIN-ID') if service else req.get_header('X-USER-DOMAIN-ID')
ctx.project_id = req.get_header('X-SERVICE-PROJECT-ID') if service else req.get_header('X-PROJECT-ID')
ctx.project_domain_id = req.get_header('X-SERVICE-PROJECT-DOMAIN-ID') if service else req.get_header('X-PROJECT-DOMAIN-NAME')
ctx.user = req.get_header(
'X-SERVICE-USER-NAME') if service else req.get_header(
'X-USER-NAME')
ctx.user_id = req.get_header(
'X-SERVICE-USER-ID') if service else req.get_header(
'X-USER-ID')
ctx.user_domain_id = req.get_header(
'X-SERVICE-USER-DOMAIN-ID') if service else req.get_header(
'X-USER-DOMAIN-ID')
ctx.project_id = req.get_header(
'X-SERVICE-PROJECT-ID') if service else req.get_header(
'X-PROJECT-ID')
ctx.project_domain_id = req.get_header(
'X-SERVICE-PROJECT-DOMAIN-ID') if service else req.get_header(
'X-PROJECT-DOMAIN-NAME')
if service:
ctx.add_roles(req.get_header('X-SERVICE-ROLES').split(','))
else:
@ -59,16 +69,17 @@ class AuthMiddleware(object):
else:
ctx.is_admin_project = False
self.logger.debug('Request from authenticated user %s with roles %s' % (ctx.user, ','.join(ctx.roles)))
self.logger.debug(
'Request from authenticated user %s with roles %s' %
(ctx.user, ','.join(ctx.roles)))
else:
ctx.authenticated = False
class ContextMiddleware(object):
def __init__(self):
# Setup validation pattern for external marker
UUIDv4_pattern = '^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$';
UUIDv4_pattern = '^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$'
self.marker_re = re.compile(UUIDv4_pattern, re.I)
def process_request(self, req, resp):
@ -81,7 +92,6 @@ class ContextMiddleware(object):
class LoggingMiddleware(object):
def __init__(self):
self.logger = logging.getLogger(cfg.CONF.logging.control_logger_name)


@ -22,8 +22,8 @@ from drydock_provisioner import error as errors
import drydock_provisioner.objects.task as obj_task
from .base import StatefulResource
class TasksResource(StatefulResource):
class TasksResource(StatefulResource):
def __init__(self, orchestrator=None, **kwargs):
super(TasksResource, self).__init__(**kwargs)
self.orchestrator = orchestrator
@ -35,162 +35,204 @@ class TasksResource(StatefulResource):
resp.body = json.dumps(task_id_list)
resp.status = falcon.HTTP_200
except Exception as ex:
self.error(req.context, "Unknown error: %s\n%s" % (str(ex), traceback.format_exc()))
self.return_error(resp, falcon.HTTP_500, message="Unknown error", retry=False)
self.error(req.context, "Unknown error: %s\n%s" %
(str(ex), traceback.format_exc()))
self.return_error(
resp, falcon.HTTP_500, message="Unknown error", retry=False)
@policy.ApiEnforcer('physical_provisioner:create_task')
def on_post(self, req, resp):
# A map of supported actions to the handlers for tasks for those actions
supported_actions = {
'validate_design': TasksResource.task_validate_design,
'verify_site': TasksResource.task_verify_site,
'prepare_site': TasksResource.task_prepare_site,
'verify_node': TasksResource.task_verify_node,
'prepare_node': TasksResource.task_prepare_node,
'deploy_node': TasksResource.task_deploy_node,
'destroy_node': TasksResource.task_destroy_node,
}
'validate_design': TasksResource.task_validate_design,
'verify_site': TasksResource.task_verify_site,
'prepare_site': TasksResource.task_prepare_site,
'verify_node': TasksResource.task_verify_node,
'prepare_node': TasksResource.task_prepare_node,
'deploy_node': TasksResource.task_deploy_node,
'destroy_node': TasksResource.task_destroy_node,
}
try:
ctx = req.context
json_data = self.req_json(req)
action = json_data.get('action', None)
if action not in supported_actions:
self.error(req,context, "Unsupported action %s" % action)
self.return_error(resp, falcon.HTTP_400, message="Unsupported action %s" % action, retry=False)
if supported_actions.get(action, None) is None:
self.error(req.context, "Unsupported action %s" % action)
self.return_error(
resp,
falcon.HTTP_400,
message="Unsupported action %s" % action,
retry=False)
else:
supported_actions.get(action)(self, req, resp)
supported_actions.get(action)(self, req, resp, json_data)
except Exception as ex:
self.error(req.context, "Unknown error: %s\n%s" % (str(ex), traceback.format_exc()))
self.return_error(resp, falcon.HTTP_500, message="Unknown error", retry=False)
self.error(req.context, "Unknown error: %s\n%s" %
(str(ex), traceback.format_exc()))
self.return_error(
resp, falcon.HTTP_500, message="Unknown error", retry=False)
@policy.ApiEnforcer('physical_provisioner:validate_design')
def task_validate_design(self, req, resp):
json_data = self.req_json(req)
def task_validate_design(self, req, resp, json_data):
action = json_data.get('action', None)
if action != 'validate_design':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_validate_design" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
self.error(
req.context,
"Task body ended up in wrong handler: action %s in task_validate_design"
% action)
self.return_error(
resp, falcon.HTTP_500, message="Error", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.append_header('Location',
"/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
self.return_error(
resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:verify_site')
def task_verify_site(self, req, resp):
json_data = self.req_json(req)
def task_verify_site(self, req, resp, json_data):
action = json_data.get('action', None)
if action != 'verify_site':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_verify_site" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
self.error(
req.context,
"Task body ended up in wrong handler: action %s in task_verify_site"
% action)
self.return_error(
resp, falcon.HTTP_500, message="Error", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.append_header('Location',
"/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
self.return_error(
resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:prepare_site')
def task_prepare_site(self, req, resp):
json_data = self.req_json(req)
def task_prepare_site(self, req, resp, json_data):
action = json_data.get('action', None)
if action != 'prepare_site':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_prepare_site" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
self.error(
req.context,
"Task body ended up in wrong handler: action %s in task_prepare_site"
% action)
self.return_error(
resp, falcon.HTTP_500, message="Error", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.append_header('Location',
"/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
self.return_error(
resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:verify_node')
def task_verify_node(self, req, resp, json_data):
action = json_data.get('action', None)
if action != 'verify_node':
self.error(
req.context,
"Task body ended up in wrong handler: action %s in task_verify_node"
% action)
self.return_error(
resp, falcon.HTTP_500, message="Error", retry=False)
return
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location',
"/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(
resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:prepare_node')
def task_prepare_node(self, req, resp, json_data):
action = json_data.get('action', None)
if action != 'prepare_node':
self.error(
req.context,
"Task body ended up in wrong handler: action %s in task_prepare_node"
% action)
self.return_error(
resp, falcon.HTTP_500, message="Error", retry=False)
return
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location',
"/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(
resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:deploy_node')
def task_deploy_node(self, req, resp, json_data):
action = json_data.get('action', None)
if action != 'deploy_node':
self.error(
req.context,
"Task body ended up in wrong handler: action %s in task_deploy_node"
% action)
self.return_error(
resp, falcon.HTTP_500, message="Error", retry=False)
return
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location',
"/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(
resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:destroy_node')
def task_destroy_node(self, req, resp, json_data):
action = json_data.get('action', None)
if action != 'destroy_node':
self.error(
req.context,
"Task body ended up in wrong handler: action %s in task_destroy_node"
% action)
self.return_error(
resp, falcon.HTTP_500, message="Error", retry=False)
return
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location',
"/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(
resp, falcon.HTTP_400, message=ex.msg, retry=False)
def create_task(self, task_body):
"""
@ -214,41 +256,47 @@ class TasksResource(StatefulResource):
action = task_body.get('action', None)
if design_id is None or action is None:
raise errors.InvalidFormat(
'Task creation requires fields design_id, action')
task = self.orchestrator.create_task(
obj_task.OrchestratorTask,
design_id=design_id,
action=action,
node_filter=node_filter)
task_thread = threading.Thread(
target=self.orchestrator.execute_task, args=[task.get_id()])
task_thread.start()
return task
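Each handler above funnels into create_task(), which persists the task and hands execution to a background thread before the HTTP response returns. A minimal sketch of the resulting client flow, with a hypothetical endpoint and design ID::

    import requests

    DRYDOCK = 'http://localhost:9000/api/v1.0'  # hypothetical endpoint
    body = {'action': 'verify_site', 'design_id': 'design-1234'}

    resp = requests.post(DRYDOCK + '/tasks', json=body)
    assert resp.status_code == 201  # task accepted, not yet complete
    task_url = resp.headers['Location']  # '/api/v1.0/tasks/<task_id>'
    # Poll the task resource until the background thread finishes it
    task = requests.get('http://localhost:9000' + task_url).json()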
class TaskResource(StatefulResource):
def __init__(self, orchestrator=None, **kwargs):
super(TaskResource, self).__init__(**kwargs)
self.authorized_roles = ['user']
self.orchestrator = orchestrator
@policy.ApiEnforcer('physical_provisioner:read_task')
def on_get(self, req, resp, task_id):
try:
task = self.state_manager.get_task(task_id)
if task is None:
self.info(req.context, "Task %s does not exist" % task_id)
self.return_error(
resp,
falcon.HTTP_404,
message="Task %s does not exist" % task_id,
retry=False)
return
resp.body = json.dumps(task.to_dict())
resp.status = falcon.HTTP_200
except Exception as ex:
self.error(req.context, "Unknown error: %s" % (str(ex)))
self.return_error(
resp, falcon.HTTP_500, message="Unknown error", retry=False)


@ -20,6 +20,7 @@ import drydock_provisioner.statemgmt as statemgmt
import drydock_provisioner.objects.task as tasks
import drydock_provisioner.error as errors
# This is the interface for the orchestrator to access a driver
# TODO Need to have each driver spin up a separate thread to manage
# driver tasks and feed them via queue
@ -43,28 +44,26 @@ class ProviderDriver(object):
# These are the actions that this driver supports
self.supported_actions = [hd_fields.OrchestratorAction.Noop]
def execute_task(self, task_id):
task = self.state_manager.get_task(task_id)
task_action = task.action
if task_action in self.supported_actions:
task_runner = DriverTaskRunner(task_id, self.state_manager,
self.orchestrator)
task_runner.start()
while task_runner.is_alive():
time.sleep(1)
return
else:
raise errors.DriverError("Unsupported action %s for driver %s" %
(task_action, self.driver_desc))
# Execute a single task in a separate thread
class DriverTaskRunner(Thread):
def __init__(self, task_id, state_manager=None, orchestrator=None):
super(DriverTaskRunner, self).__init__()
@ -84,21 +83,21 @@ class DriverTaskRunner(Thread):
def execute_task(self):
if self.task.action == hd_fields.OrchestratorAction.Noop:
self.orchestrator.task_field_update(
self.task.get_id(), status=hd_fields.TaskStatus.Running)
i = 0
while i < 5:
self.task = self.state_manager.get_task(self.task.get_id())
i = i + 1
if self.task.terminate:
self.orchestrator.task_field_update(
self.task.get_id(),
status=hd_fields.TaskStatus.Terminated)
return
else:
time.sleep(1)
self.orchestrator.task_field_update(
self.task.get_id(), status=hd_fields.TaskStatus.Complete)
return
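The Noop action illustrates the runner contract: mark the task Running, re-read persisted state each second so an API-initiated terminate can interrupt, then mark Complete. The same pattern, sketched with hypothetical get_task/update callables::

    import time

    def run_noop(get_task, update, task_id, iterations=5):
        update(task_id, status='Running')
        for _ in range(iterations):
            task = get_task(task_id)  # re-read persisted state
            if task.terminate:        # flag set externally via the API
                update(task_id, status='Terminated')
                return
            time.sleep(1)
        update(task_id, status='Complete')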


@ -18,6 +18,7 @@ import drydock_provisioner.error as errors
from drydock_provisioner.drivers import ProviderDriver
class NodeDriver(ProviderDriver):
driver_name = "node_generic"
@ -27,20 +28,22 @@ class NodeDriver(ProviderDriver):
def __init__(self, **kwargs):
super(NodeDriver, self).__init__(**kwargs)
self.supported_actions = [
hd_fields.OrchestratorAction.ValidateNodeServices,
hd_fields.OrchestratorAction.CreateNetworkTemplate,
hd_fields.OrchestratorAction.CreateStorageTemplate,
hd_fields.OrchestratorAction.CreateBootMedia,
hd_fields.OrchestratorAction.PrepareHardwareConfig,
hd_fields.OrchestratorAction.IdentifyNode,
hd_fields.OrchestratorAction.ConfigureHardware,
hd_fields.OrchestratorAction.InterrogateNode,
hd_fields.OrchestratorAction.ApplyNodeNetworking,
hd_fields.OrchestratorAction.ApplyNodeStorage,
hd_fields.OrchestratorAction.ApplyNodePlatform,
hd_fields.OrchestratorAction.DeployNode,
hd_fields.OrchestratorAction.DestroyNode,
hd_fields.OrchestratorAction.ConfigureUserCredentials
]
def execute_task(self, task_id):
task = self.state_manager.get_task(task_id)
@ -50,10 +53,4 @@ class NodeDriver(ProviderDriver):
return
else:
raise errors.DriverError("Unsupported action %s for driver %s" %
(task_action, self.driver_desc))


@ -18,14 +18,20 @@ import requests
import requests.auth as req_auth
import base64
class MaasOauth(req_auth.AuthBase):
def __init__(self, apikey):
self.consumer_key, self.token_key, self.token_secret = apikey.split(
':')
self.consumer_secret = ""
self.realm = "OAuth"
self.oauth_client = oauth1.Client(
self.consumer_key,
self.consumer_secret,
self.token_key,
self.token_secret,
signature_method=oauth1.SIGNATURE_PLAINTEXT,
realm=self.realm)
def __call__(self, req):
@ -34,14 +40,15 @@ class MaasOauth(req_auth.AuthBase):
method = req.method
body = None if req.body is None or len(req.body) == 0 else req.body
new_url, signed_headers, new_body = self.oauth_client.sign(
url, method, body, headers)
req.headers['Authorization'] = signed_headers['Authorization']
return req
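MaasOauth plugs into requests as an AuthBase, so every call made through a session is signed with the OAuth1 PLAINTEXT scheme MaaS expects. A usage sketch with placeholder credentials and URL::

    import requests

    api_key = 'consumer:token:secret'  # placeholder MaaS API key
    session = requests.Session()
    session.auth = MaasOauth(api_key)  # __call__ signs each request
    resp = session.get('http://maas.local:5240/MAAS/api/2.0/version/')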
class MaasRequestFactory(object):
def __init__(self, base_url, apikey):
self.base_url = base_url
self.apikey = apikey
@ -63,7 +70,7 @@ class MaasRequestFactory(object):
def put(self, endpoint, **kwargs):
return self._send_request('PUT', endpoint, **kwargs)
def test_connectivity(self):
try:
resp = self.get('version/')
@ -74,10 +81,11 @@ class MaasRequestFactory(object):
raise errors.TransientDriverError("Received 50x error from MaaS")
if resp.status_code != 200:
raise errors.PersistentDriverError(
"Received unexpected error from MaaS")
return True
def test_authentication(self):
try:
resp = self.get('account/', op='list_authorisation_tokens')
@ -86,15 +94,17 @@ class MaasRequestFactory(object):
except:
raise errors.PersistentDriverError("Error accessing MaaS")
if resp.status_code in [401, 403]:
raise errors.PersistentDriverError(
"MaaS API Authentication Failed")
if resp.status_code in [500, 503]:
raise errors.TransientDriverError("Received 50x error from MaaS")
if resp.status_code != 200:
raise errors.PersistentDriverError(
"Received unexpected error from MaaS")
return True
def _send_request(self, method, endpoint, **kwargs):
@ -114,7 +124,13 @@ class MaasRequestFactory(object):
for (k, v) in files.items():
if v is None:
continue
files_tuples[k] = (
None,
base64.b64encode(str(v).encode('utf-8')).decode('utf-8'),
'text/plain; charset="utf-8"', {
'Content-Transfer-Encoding': 'base64'
})
# elif isinstance(v, str):
# files_tuples[k] = (None, base64.b64encode(v.encode('utf-8')).decode('utf-8'), 'text/plain; charset="utf-8"', {'Content-Transfer-Encoding': 'base64'})
# elif isinstance(v, int) or isinstance(v, bool):
@ -122,7 +138,6 @@ class MaasRequestFactory(object):
# v = int(v)
# files_tuples[k] = (None, base64.b64encode(v.to_bytes(2, byteorder='big')), 'application/octet-stream', {'Content-Transfer-Encoding': 'base64'})
kwargs['files'] = files_tuples
params = kwargs.get('params', None)
@ -139,15 +154,22 @@ class MaasRequestFactory(object):
if timeout is None:
timeout = (2, 30)
request = requests.Request(
method=method,
url=self.base_url + endpoint,
auth=self.signer,
headers=headers,
params=params,
**kwargs)
prepared_req = self.http_session.prepare_request(request)
resp = self.http_session.send(prepared_req, timeout=timeout)
if resp.status_code >= 400:
self.logger.debug(
"FAILED API CALL:\nURL: %s %s\nBODY:\n%s\nRESPONSE: %s\nBODY:\n%s"
% (prepared_req.method, prepared_req.url,
str(prepared_req.body).replace('\\r\\n', '\n'),
resp.status_code, resp.text))
return resp
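Each simple form value is sent as a multipart part with no filename, a base64 body, and a Content-Transfer-Encoding part header, which is the framing MaaS expects. The transformation in isolation::

    import base64

    v = 'untagged'  # any simple form value
    part = (None,
            base64.b64encode(str(v).encode('utf-8')).decode('utf-8'),
            'text/plain; charset="utf-8"',
            {'Content-Transfer-Encoding': 'base64'})
    # requests renders this tuple as one multipart/form-data part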


@ -21,6 +21,8 @@ A representation of a MaaS REST resource. Should be subclassed
for different resources and augmented with operations specific
to those resources
"""
class ResourceBase(object):
resource_url = '/{id}'
@ -38,6 +40,7 @@ class ResourceBase(object):
"""
Update resource attributes from MaaS
"""
def refresh(self):
url = self.interpolate_url()
resp = self.api_client.get(url)
@ -52,13 +55,14 @@ class ResourceBase(object):
Parse URL for placeholders and replace them with current
instance values
"""
def interpolate_url(self):
pattern = r'\{([a-z_]+)\}'
regex = re.compile(pattern)
start = 0
new_url = self.resource_url
while (start + 1) < len(self.resource_url):
match = regex.search(self.resource_url, start)
if match is None:
return new_url
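interpolate_url walks the template, replacing each {name} placeholder with the matching instance attribute. An equivalent one-pass sketch using re.sub, shown only for clarity::

    import re

    def interpolate(template, **values):
        # Replace every '{name}' token from the supplied values
        return re.sub(r'\{([a-z_]+)\}',
                      lambda m: str(values[m.group(1)]), template)

    interpolate('machines/{resource_id}/', resource_id='abc123')
    # -> 'machines/abc123/'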
@ -75,6 +79,7 @@ class ResourceBase(object):
"""
Update MaaS with current resource attributes
"""
def update(self):
data_dict = self.to_dict()
url = self.interpolate_url()
@ -83,15 +88,17 @@ class ResourceBase(object):
if resp.status_code == 200:
return True
raise errors.DriverError(
"Failed updating MAAS url %s - return code %s\n%s" %
(url, resp.status_code, resp.text))
"""
Set the resource_id for this instance
Should only be called when creating new instances and MAAS has assigned
an id
"""
def set_resource_id(self, res_id):
self.resource_id = res_id
@ -99,6 +106,7 @@ class ResourceBase(object):
Serialize this resource instance into JSON matching the
MaaS representation of this resource
"""
def to_json(self):
return json.dumps(self.to_dict())
@ -106,6 +114,7 @@ class ResourceBase(object):
Serialize this resource instance into a dict matching the
MAAS representation of the resource
"""
def to_dict(self):
data_dict = {}
@ -122,6 +131,7 @@ class ResourceBase(object):
Create an instance of this resource class based on the MaaS
representation of this resource type
"""
@classmethod
def from_json(cls, api_client, json_string):
parsed = json.loads(json_string)
@ -135,6 +145,7 @@ class ResourceBase(object):
Create an instance of this resource class based on a dict
of MaaS type attributes
"""
@classmethod
def from_dict(cls, api_client, obj_dict):
refined_dict = {k: obj_dict.get(k, None) for k in cls.fields}
@ -173,7 +184,7 @@ class ResourceCollectionBase(object):
start = 0
new_url = self.collection_url
while (start + 1) < len(self.collection_url):
match = regex.search(self.collection_url, start)
if match is None:
return new_url
@ -190,23 +201,26 @@ class ResourceCollectionBase(object):
"""
Create a new resource in this collection in MaaS
"""
def add(self, res):
data_dict = res.to_dict()
url = self.interpolate_url()
resp = self.api_client.post(url, files=data_dict)
if resp.status_code in [200, 201]:
resp_json = resp.json()
res.set_resource_id(resp_json.get('id'))
return res
raise errors.DriverError(
"Failed updating MAAS url %s - return code %s" %
(url, resp.status_code))
"""
Append a resource instance to the list locally only
"""
def append(self, res):
if isinstance(res, self.collection_resource):
self.resources[res.resource_id] = res
@ -214,6 +228,7 @@ class ResourceCollectionBase(object):
"""
Initialize or refresh the collection list from MaaS
"""
def refresh(self):
url = self.interpolate_url()
resp = self.api_client.get(url)
@ -232,6 +247,7 @@ class ResourceCollectionBase(object):
"""
Check if resource id is in this collection
"""
def contains(self, res_id):
if res_id in self.resources.keys():
return True
@ -241,17 +257,18 @@ class ResourceCollectionBase(object):
"""
Select a resource based on ID or None if not found
"""
def select(self, res_id):
return self.resources.get(res_id, None)
"""
Query the collection based on a resource attribute other than primary id
"""
def query(self, query):
result = list(self.resources.values())
for (k, v) in query.items():
result = [i for i in result if str(getattr(i, k, None)) == str(v)]
return result
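query() string-compares one or more attributes across the locally cached resources, so a collection must be refreshed first. Illustrative usage against any collection subclass, assuming maas_client is an existing MaasRequestFactory::

    fabrics = Fabrics(maas_client)
    fabrics.refresh()  # populate the local cache from MaaS
    matches = fabrics.query({'name': 'fabric-0'})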
@ -269,10 +286,11 @@ class ResourceCollectionBase(object):
return result[0]
return None
"""
If the collection contains a single item, return it
"""
def single(self):
if self.len() == 1:
for v in self.resources.values():
@ -283,11 +301,13 @@ class ResourceCollectionBase(object):
"""
Iterate over the resources in the collection
"""
def __iter__(self):
return iter(self.resources.values())
"""
Resource count
"""
def len(self):
return len(self.resources)


@ -0,0 +1,57 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Model for MaaS API boot_resource type."""
import drydock_provisioner.error as errors
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
class BootResource(model_base.ResourceBase):
resource_url = 'boot-resources/{resource_id}/'
fields = [
'resource_id', 'name', 'type', 'subarches', 'architecture',
]
json_fields = [
'name', 'type', 'subarches', 'architecture',
]
def __init__(self, api_client, **kwargs):
super().__init__(api_client, **kwargs)
class BootResources(model_base.ResourceCollectionBase):
collection_url = 'boot-resources/'
collection_resource = BootResource
def __init__(self, api_client, **kwargs):
super().__init__(api_client)
def is_importing(self):
"""Check if boot resources are importing."""
url = self.interpolate_url()
self.logger.debug(
"Checking if boot resources are importing.")
resp = self.api_client.get(url, op='is_importing')
if resp.status_code == 200:
resp_json = resp.json()
self.logger.debug("Boot resource importing status: %s" % resp_json)
return resp_json
else:
msg = "Error checking import status of boot resources: %s - %s" % (resp.status_code, resp.text)
self.logger.error(msg)
raise errors.DriverError(msg)
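A caller can use is_importing() to block until MaaS has finished downloading boot images before scheduling deployments. A polling sketch, assuming an already-constructed API client::

    import time

    boot_res = BootResources(maas_client)
    while boot_res.is_importing():  # MaaS reports the import status
        time.sleep(10)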


@ -16,6 +16,7 @@ import json
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
import drydock_provisioner.drivers.node.maasdriver.models.vlan as model_vlan
class Fabric(model_base.ResourceBase):
resource_url = 'fabrics/{resource_id}/'
@ -30,24 +31,25 @@ class Fabric(model_base.ResourceBase):
def refresh(self):
super(Fabric, self).refresh()
self.refresh_vlans()
return
def refresh_vlans(self):
self.vlans = model_vlan.Vlans(
self.api_client, fabric_id=self.resource_id)
self.vlans.refresh()
def set_resource_id(self, res_id):
self.resource_id = res_id
self.refresh_vlans()
class Fabrics(model_base.ResourceCollectionBase):
collection_url = 'fabrics/'
collection_resource = Fabric
def __init__(self, api_client):
super(Fabrics, self).__init__(api_client)


@ -20,12 +20,17 @@ import drydock_provisioner.drivers.node.maasdriver.models.vlan as maas_vlan
import drydock_provisioner.error as errors
class Interface(model_base.ResourceBase):
resource_url = 'nodes/{system_id}/interfaces/{resource_id}/'
fields = [
'resource_id', 'system_id', 'name', 'type', 'mac_address', 'vlan',
'links', 'effective_mtu', 'fabric_id'
]
json_fields = [
'name', 'type', 'mac_address', 'vlan', 'links', 'effective_mtu'
]
def __init__(self, api_client, **kwargs):
super(Interface, self).__init__(api_client, **kwargs)
@ -41,7 +46,7 @@ class Interface(model_base.ResourceBase):
"""
fabric = None
fabrics = maas_fabric.Fabrics(self.api_client)
fabrics.refresh()
@ -54,21 +59,27 @@ class Interface(model_base.ResourceBase):
raise ValueError("Must specify fabric_id or fabric_name")
if fabric is None:
self.logger.warning(
"Fabric not found in MaaS for fabric_id %s, fabric_name %s" %
(fabric_id, fabric_name))
raise errors.DriverError(
"Fabric not found in MaaS for fabric_id %s, fabric_name %s" %
(fabric_id, fabric_name))
# Locate the untagged VLAN for this fabric.
fabric_vlan = fabric.vlans.singleton({'vid': 0})
if fabric_vlan is None:
self.logger.warning("Cannot locate untagged VLAN on fabric %s" %
(fabric_id))
raise errors.DriverError(
"Cannot locate untagged VLAN on fabric %s" % (fabric_id))
self.vlan = fabric_vlan.resource_id
self.logger.info(
"Attaching interface %s on system %s to VLAN %s on fabric %s" %
(self.resource_id, self.system_id, fabric_vlan.resource_id,
fabric.resource_id))
self.update()
def is_linked(self, subnet_id):
@ -83,16 +94,25 @@ class Interface(model_base.ResourceBase):
if l.get('subnet_id', None) == subnet_id:
url = self.interpolate_url()
resp = self.api_client.post(
url,
op='unlink_subnet',
files={'id': l.get('resource_id')})
if not resp.ok:
raise errors.DriverError("Error unlinking subnet")
else:
return
raise errors.DriverError(
"Error unlinking interface, Link to subnet_id %s not found." %
subnet_id)
def link_subnet(self,
subnet_id=None,
subnet_cidr=None,
ip_address=None,
primary=False):
"""
Link this interface to a MaaS subnet. One of subnet_id or subnet_cidr
should be specified. If both are, subnet_id rules.
@ -119,23 +139,26 @@ class Interface(model_base.ResourceBase):
raise ValueError("Must specify subnet_id or subnet_cidr")
if subnet is None:
self.logger.warning(
"Subnet not found in MaaS for subnet_id %s, subnet_cidr %s" %
(subnet_id, subnet_cidr))
raise errors.DriverError(
"Subnet not found in MaaS for subnet_id %s, subnet_cidr %s" %
(subnet_id, subnet_cidr))
url = self.interpolate_url()
if self.is_linked(subnet.resource_id):
self.logger.info(
"Interface %s already linked to subnet %s, unlinking." %
(self.resource_id, subnet.resource_id))
self.unlink_subnet(subnet.resource_id)
# TODO Probably need to enumerate link mode
options = {
'subnet': subnet.resource_id,
'default_gateway': primary,
}
if ip_address == 'dhcp':
options['mode'] = 'dhcp'
@ -145,16 +168,21 @@ class Interface(model_base.ResourceBase):
else:
options['mode'] = 'link_up'
self.logger.debug(
"Linking interface %s to subnet: subnet=%s, mode=%s, address=%s, primary=%s"
% (self.resource_id, subnet.resource_id, options['mode'],
ip_address, primary))
resp = self.api_client.post(url, op='link_subnet', files=options)
if not resp.ok:
self.logger.error(
"Error linking interface %s to subnet %s - MaaS response %s: %s"
% (self.resource_id, subnet.resource_id, resp.status_code,
resp.text))
raise errors.DriverError(
"Error linking interface %s to subnet %s - MaaS response %s" %
(self.resource_id, subnet.resource_id, resp.status_code))
self.refresh()
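Per the docstring, link_subnet accepts either a subnet_id or a subnet_cidr, with subnet_id winning when both are given. A sketch of attaching an interface with a static address (interface name and addresses are illustrative)::

    iface = machine.get_network_interface('eth0')
    iface.link_subnet(subnet_cidr='172.16.1.0/24',
                      ip_address='172.16.1.20',
                      primary=True)  # also marks the default gateway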
@ -174,14 +202,12 @@ class Interface(model_base.ResourceBase):
if isinstance(refined_dict.get('vlan', None), dict):
refined_dict['fabric_id'] = refined_dict['vlan']['fabric_id']
refined_dict['vlan'] = refined_dict['vlan']['id']
link_list = []
if isinstance(refined_dict.get('links', None), list):
for l in refined_dict['links']:
if isinstance(l, dict):
link = {'resource_id': l['id'], 'mode': l['mode']}
if l.get('subnet', None) is not None:
link['subnet_id'] = l['subnet']['id']
@ -194,6 +220,7 @@ class Interface(model_base.ResourceBase):
i = cls(api_client, **refined_dict)
return i
class Interfaces(model_base.ResourceCollectionBase):
collection_url = 'nodes/{system_id}/interfaces/'
@ -218,60 +245,76 @@ class Interfaces(model_base.ResourceCollectionBase):
parent_iface = self.singleton({'name': parent_name})
if parent_iface is None:
self.logger.error("Cannot locate parent interface %s" %
(parent_name))
raise errors.DriverError("Cannot locate parent interface %s" %
(parent_name))
if parent_iface.type != 'physical':
self.logger.error(
"Cannot create VLAN interface on parent of type %s" %
(parent_iface.type))
raise errors.DriverError(
"Cannot create VLAN interface on parent of type %s" %
(parent_iface.type))
if parent_iface.vlan is None:
self.logger.error(
"Cannot create VLAN interface on disconnected parent %s" %
(parent_iface.resource_id))
raise errors.DriverError(
"Cannot create VLAN interface on disconnected parent %s" %
(parent_iface.resource_id))
vlans = maas_vlan.Vlans(
self.api_client, fabric_id=parent_iface.fabric_id)
vlans.refresh()
vlan = vlans.singleton({'vid': vlan_tag})
if vlan is None:
self.logger.error(
"Cannot locate VLAN %s on fabric %s to attach interface" %
(vlan_tag, parent_iface.fabric_id))
raise errors.DriverError(
"Cannot locate VLAN %s on fabric %s to attach interface" %
(vlan_tag, parent_iface.fabric_id))
exists = self.singleton({'vlan': vlan.resource_id})
if exists is not None:
self.logger.info(
"Interface for VLAN %s already exists on node %s, skipping" %
(vlan_tag, self.system_id))
return exists
url = self.interpolate_url()
options = {
'tags': ','.join(tags),
'vlan': vlan.resource_id,
'parent': parent_iface.resource_id,
}
if mtu is not None:
options['mtu'] = mtu
resp = self.api_client.post(url, op='create_vlan', files=options)
if resp.status_code == 200:
resp_json = resp.json()
vlan_iface = Interface.from_dict(self.api_client, resp_json)
self.logger.debug(
"Created VLAN interface %s for parent %s attached to VLAN %s" %
(vlan_iface.resource_id, parent_iface.resource_id,
vlan.resource_id))
return vlan_iface
else:
self.logger.error(
"Error creating VLAN interface to VLAN %s on system %s - MaaS response %s: %s"
% (vlan.resource_id, self.system_id, resp.status_code,
resp.text))
raise errors.DriverError(
"Error creating VLAN interface to VLAN %s on system %s - MaaS response %s"
% (vlan.resource_id, self.system_id, resp.status_code))
self.refresh()
return


@ -12,13 +12,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import drydock_provisioner.error as errors
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
class IpRange(model_base.ResourceBase):
resource_url = 'iprange/{resource_id}/'
fields = ['resource_id', 'comment', 'subnet', 'type', 'start_ip', 'end_ip']
json_fields = ['comment', 'start_ip', 'end_ip']
def __init__(self, api_client, **kwargs):
super(IpRange, self).__init__(api_client, **kwargs)
@ -31,10 +33,11 @@ class IpRange(model_base.ResourceBase):
if isinstance(refined_dict.get('subnet', None), dict):
refined_dict['subnet'] = refined_dict['subnet']['id']
i = cls(api_client, **refined_dict)
return i
class IpRanges(model_base.ResourceCollectionBase):
collection_url = 'ipranges/'
@ -59,7 +62,7 @@ class IpRanges(model_base.ResourceCollectionBase):
if range_type is not None:
data_dict['type'] = range_type
url = self.interpolate_url()
resp = self.api_client.post(url, files=data_dict)
@ -68,6 +71,7 @@ class IpRanges(model_base.ResourceCollectionBase):
resp_json = resp.json()
res.set_resource_id(resp_json.get('id'))
return res
raise errors.DriverError(
"Failed updating MAAS url %s - return code %s" %
(url, resp.status_code))


@ -19,11 +19,15 @@ import drydock_provisioner.drivers.node.maasdriver.models.interface as maas_inte
import bson
import yaml
class Machine(model_base.ResourceBase):
resource_url = 'machines/{resource_id}/'
fields = [
'resource_id', 'hostname', 'power_type', 'power_state',
'power_parameters', 'interfaces', 'boot_interface', 'memory',
'cpu_count', 'tag_names', 'status_name', 'boot_mac', 'owner_data'
]
json_fields = ['hostname', 'power_type']
def __init__(self, api_client, **kwargs):
@ -31,7 +35,8 @@ class Machine(model_base.ResourceBase):
# Replace generic dicts with interface collection model
if getattr(self, 'resource_id', None) is not None:
self.interfaces = maas_interface.Interfaces(
api_client, system_id=self.resource_id)
self.interfaces.refresh()
else:
self.interfaces = None
@ -56,9 +61,13 @@ class Machine(model_base.ResourceBase):
# Need to sort out how to handle exceptions
if not resp.ok:
self.logger.error(
"Error commissioning node, received HTTP %s from MaaS" %
resp.status_code)
self.logger.debug("MaaS response: %s" % resp.text)
raise errors.DriverError(
"Error commissioning node, received HTTP %s from MaaS" %
resp.status_code)
def deploy(self, user_data=None, platform=None, kernel=None):
deploy_options = {}
@ -73,13 +82,19 @@ class Machine(model_base.ResourceBase):
deploy_options['hwe_kernel'] = kernel
url = self.interpolate_url()
resp = self.api_client.post(
url,
op='deploy',
files=deploy_options if len(deploy_options) > 0 else None)
if not resp.ok:
self.logger.error(
"Error deploying node, received HTTP %s from MaaS" %
resp.status_code)
self.logger.debug("MaaS response: %s" % resp.text)
raise errors.DriverError(
"Error deploying node, received HTTP %s from MaaS" %
resp.status_code)
def get_network_interface(self, iface_name):
if self.interfaces is not None:
@ -106,13 +121,17 @@ class Machine(model_base.ResourceBase):
url = self.interpolate_url()
resp = self.api_client.post(
url, op='set_owner_data', files={key: value})
if resp.status_code != 200:
self.logger.error(
"Error setting node metadata, received HTTP %s from MaaS" %
resp.status_code)
self.logger.debug("MaaS response: %s" % resp.text)
raise errors.DriverError(
"Error setting node metadata, received HTTP %s from MaaS" %
resp.status_code)
def to_dict(self):
"""
@ -151,11 +170,13 @@ class Machine(model_base.ResourceBase):
# Capture the boot interface MAC to allow for node id of VMs
if 'boot_interface' in obj_dict.keys():
if isinstance(obj_dict['boot_interface'], dict):
refined_dict['boot_mac'] = obj_dict['boot_interface'][
'mac_address']
i = cls(api_client, **refined_dict)
return i
class Machines(model_base.ResourceCollectionBase):
collection_url = 'machines/'
@ -185,22 +206,27 @@ class Machines(model_base.ResourceCollectionBase):
raise errors.DriverError("Node %s not found" % (node_name))
if node.status_name != 'Ready':
self.logger.info(
"Node %s status '%s' does not allow deployment, should be 'Ready'."
% (node_name, node.status_name))
raise errors.DriverError(
"Node %s status '%s' does not allow deployment, should be 'Ready'."
% (node_name, node.status_name))
url = self.interpolate_url()
resp = self.api_client.post(
url, op='allocate', files={'system_id': node.resource_id})
if not resp.ok:
self.logger.error(
"Error acquiring node, MaaS returned %s" % resp.status_code)
self.logger.debug("MaaS response: %s" % resp.text)
raise errors.DriverError(
"Error acquiring node, MaaS returned %s" % resp.status_code)
return node
def identify_baremetal_node(self, node_model, update_name=True):
"""
Search all the defined MaaS Machines and attempt to match
@ -210,7 +236,7 @@ class Machines(model_base.ResourceCollectionBase):
:param node_model: Instance of objects.node.BaremetalNode to search MaaS for matching resource
:param update_name: Whether Drydock should update the MaaS resource name to match the Drydock design
"""
maas_node = None
if node_model.oob_type == 'ipmi':
@ -224,9 +250,14 @@ class Machines(model_base.ResourceCollectionBase):
try:
self.collect_power_params()
maas_node = self.singleton({
'power_params.power_address':
node_oob_ip
})
except ValueError as ve:
self.logger.warn(
"Error locating matching MaaS resource for OOB IP %s" %
(node_oob_ip))
return None
else:
# Use boot_mac for node's not using IPMI
@ -236,15 +267,18 @@ class Machines(model_base.ResourceCollectionBase):
maas_node = self.singleton({'boot_mac': node_model.boot_mac})
if maas_node is None:
self.logger.info(
"Could not locate node %s in MaaS" % node_model.name)
return None
self.logger.debug("Found MaaS resource %s matching Node %s" %
(maas_node.resource_id, node_model.get_id()))
if maas_node.hostname != node_model.name and update_name:
maas_node.hostname = node_model.name
maas_node.update()
self.logger.debug("Updated MaaS resource %s hostname to %s" %
(maas_node.resource_id, node_model.name))
return maas_node
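identify_baremetal_node matches a design node to a MaaS machine by its IPMI address (or boot MAC for non-IPMI nodes) and optionally renames the MaaS resource. Typical usage, assuming node_model comes from the effective site design and maas_client is an existing API client::

    machines = Machines(maas_client)
    machines.refresh()
    maas_node = machines.identify_baremetal_node(node_model)
    if maas_node is None:
        print("Node not yet enlisted in MaaS")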
@ -256,15 +290,19 @@ class Machines(model_base.ResourceCollectionBase):
for (k, v) in query.items():
if k.startswith('power_params.'):
field = k[13:]
result = [
i for i in result
if str(
getattr(i, 'power_parameters', {}).get(field, None)) ==
str(v)
]
else:
result = [
i for i in result if str(getattr(i, k, None)) == str(v)
]
return result
def add(self, res):
"""
Create a new resource in this collection in MaaS
@ -280,6 +318,7 @@ class Machines(model_base.ResourceCollectionBase):
resp_json = resp.json()
res.set_resource_id(resp_json.get('system_id'))
return res
raise errors.DriverError(
"Failed updating MAAS url %s - return code %s" %
(url, resp.status_code))


@ -0,0 +1,163 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Model for MaaS rack-controller API resource."""
import bson
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
import drydock_provisioner.drivers.node.maasdriver.models.interface as maas_interface
class RackController(model_base.ResourceBase):
"""Model for a rack controller singleton."""
# These are the services that must be 'running'
# to consider a rack controller healthy
REQUIRED_SERVICES = ['http', 'tgt', 'dhcpd', 'ntp_rack', 'rackd',
'tftp']
resource_url = 'rackcontrollers/{resource_id}/'
fields = [
'resource_id', 'hostname', 'power_type', 'power_state',
'power_parameters', 'interfaces', 'boot_interface', 'memory',
'cpu_count', 'tag_names', 'status_name', 'boot_mac', 'owner_data',
'service_set',
]
json_fields = ['hostname', 'power_type']
def __init__(self, api_client, **kwargs):
super().__init__(api_client, **kwargs)
# Replace generic dicts with interface collection model
if getattr(self, 'resource_id', None) is not None:
self.interfaces = maas_interface.Interfaces(
api_client, system_id=self.resource_id)
self.interfaces.refresh()
else:
self.interfaces = None
def get_power_params(self):
"""Get parameters for managing server power."""
url = self.interpolate_url()
resp = self.api_client.get(url, op='power_parameters')
if resp.status_code == 200:
self.power_parameters = resp.json()
def get_network_interface(self, iface_name):
"""Retrieve network interface on this machine."""
if self.interfaces is not None:
iface = self.interfaces.singleton({'name': iface_name})
return iface
def get_services(self):
"""Get status of required services on this rack controller."""
self.refresh()
svc_status = {svc: None for svc in RackController.REQUIRED_SERVICES}
self.logger.debug("Checking service status on rack controller %s" % (self.resource_id))
for s in getattr(self, 'service_set', []):
svc = s.get('name')
status = s.get('status')
if svc in svc_status:
self.logger.debug("Service %s on rack controller %s is %s" %
(svc, self.resource_id, status))
svc_status[svc] = status
return svc_status
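get_services() gives verify_site a per-service health map; anything in REQUIRED_SERVICES that is not 'running' marks the rack controller unhealthy. A sketch assuming a single rack controller and an existing maas_client::

    rackd = RackControllers(maas_client)
    rackd.refresh()
    ctlr = rackd.single()  # None unless exactly one controller exists
    status = ctlr.get_services()
    failed = [s for (s, state) in status.items() if state != 'running']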
def get_details(self):
url = self.interpolate_url()
resp = self.api_client.get(url, op='details')
if resp.status_code == 200:
detail_config = bson.loads(resp.text)
return detail_config
def to_dict(self):
"""Serialize this resource instance.
Serialize into a dict matching the MAAS representation of the resource
"""
data_dict = {}
for f in self.json_fields:
if getattr(self, f, None) is not None:
if f == 'resource_id':
data_dict['system_id'] = getattr(self, f)
else:
data_dict[f] = getattr(self, f)
return data_dict
@classmethod
def from_dict(cls, api_client, obj_dict):
Create an instance of this resource class based on a dict of MaaS type attributes.
Customized for Machine due to use of system_id instead of id
as resource key
:param api_client: Instance of api_client.MaasRequestFactory for accessing MaaS API
:param obj_dict: Python dict as parsed from MaaS API JSON representing this resource type
"""
refined_dict = {k: obj_dict.get(k, None) for k in cls.fields}
if 'system_id' in obj_dict.keys():
refined_dict['resource_id'] = obj_dict.get('system_id')
# Capture the boot interface MAC to allow for node id of VMs
if 'boot_interface' in obj_dict.keys():
if isinstance(obj_dict['boot_interface'], dict):
refined_dict['boot_mac'] = obj_dict['boot_interface'][
'mac_address']
i = cls(api_client, **refined_dict)
return i
class RackControllers(model_base.ResourceCollectionBase):
"""Model for a collection of rack controllers."""
collection_url = 'rackcontrollers/'
collection_resource = RackController
def __init__(self, api_client, **kwargs):
super().__init__(api_client)
# Add the OOB power parameters to each machine instance
def collect_power_params(self):
for k, v in self.resources.items():
v.get_power_params()
def query(self, query):
"""Custom query method to deal with complex fields."""
result = list(self.resources.values())
for (k, v) in query.items():
if k.startswith('power_params.'):
field = k[13:]
result = [
i for i in result
if str(
getattr(i, 'power_parameters', {}).get(field, None)) ==
str(v)
]
else:
result = [
i for i in result if str(getattr(i, k, None)) == str(v)
]
return result


@ -15,6 +15,7 @@
import drydock_provisioner.error as errors
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
class SshKey(model_base.ResourceBase):
resource_url = 'account/prefs/sshkeys/{resource_id}/'
@ -25,7 +26,8 @@ class SshKey(model_base.ResourceBase):
super(SshKey, self).__init__(api_client, **kwargs)
# Keys should never have newlines, but sometimes they get added
self.key = self.key.replace("\n", "")
class SshKeys(model_base.ResourceCollectionBase):
@ -34,4 +36,3 @@ class SshKeys(model_base.ResourceCollectionBase):
def __init__(self, api_client, **kwargs):
super(SshKeys, self).__init__(api_client)


@ -14,13 +14,18 @@
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
import drydock_provisioner.drivers.node.maasdriver.models.iprange as maas_iprange
class Subnet(model_base.ResourceBase):
resource_url = 'subnets/{resource_id}/'
fields = [
'resource_id', 'name', 'description', 'fabric', 'vlan', 'vid', 'cidr',
'gateway_ip', 'rdns_mode', 'allow_proxy', 'dns_servers'
]
json_fields = [
'name', 'description', 'vlan', 'cidr', 'gateway_ip', 'rdns_mode',
'allow_proxy', 'dns_servers'
]
def __init__(self, api_client, **kwargs):
super(Subnet, self).__init__(api_client, **kwargs)
@ -36,29 +41,37 @@ class Subnet(model_base.ResourceBase):
current_ranges = maas_iprange.IpRanges(self.api_client)
current_ranges.refresh()
exists = current_ranges.query({
'start_ip':
addr_range.get('start', None),
'end_ip':
addr_range.get('end', None)
})
if len(exists) > 0:
self.logger.info(
'Address range from %s to %s already exists, skipping.' %
(addr_range.get('start', None), addr_range.get('end', None)))
return
# Static ranges are what is left after reserved (not assigned by MaaS)
# and DHCP ranges are removed from a subnet
if addr_range.get('type', None) in ['reserved', 'dhcp']:
range_type = addr_range.get('type', None)
if range_type == 'dhcp':
range_type = 'dynamic'
maas_range = maas_iprange.IpRange(
self.api_client,
comment="Configured by Drydock",
subnet=self.resource_id,
type=range_type,
start_ip=addr_range.get('start', None),
end_ip=addr_range.get('end', None))
maas_ranges = maas_iprange.IpRanges(self.api_client)
maas_ranges.add(maas_range)
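Drydock's 'dhcp' ranges map to MaaS 'dynamic' ranges, while 'reserved' passes through unchanged; everything left over is treated by MaaS as static space. The mapping in isolation::

    def maas_range_type(drydock_type):
        # 'dhcp' -> 'dynamic'; 'reserved' stays as-is
        return 'dynamic' if drydock_type == 'dhcp' else drydock_type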
@classmethod
def from_dict(cls, api_client, obj_dict):
"""
@ -73,10 +86,11 @@ class Subnet(model_base.ResourceBase):
if isinstance(refined_dict.get('vlan', None), dict):
refined_dict['fabric'] = refined_dict['vlan']['fabric_id']
refined_dict['vlan'] = refined_dict['vlan']['id']
i = cls(api_client, **refined_dict)
return i
class Subnets(model_base.ResourceCollectionBase):
collection_url = 'subnets/'


@ -17,11 +17,12 @@ import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
import yaml
class Tag(model_base.ResourceBase):
resource_url = 'tags/{resource_id}/'
fields = ['resource_id', 'name', 'definition', 'kernel_opts']
json_fields = ['name', 'kernel_opts', 'comment', 'definition']
def __init__(self, api_client, **kwargs):
super(Tag, self).__init__(api_client, **kwargs)
@ -48,9 +49,13 @@ class Tag(model_base.ResourceBase):
return system_id_list
else:
self.logger.error(
"Error retrieving node/tag pairs, received HTTP %s from MaaS" %
resp.status_code)
self.logger.debug("MaaS response: %s" % resp.text)
raise errors.DriverError(
"Error retrieving node/tag pairs, received HTTP %s from MaaS" %
resp.status_code)
def apply_to_node(self, system_id):
"""
@ -60,16 +65,22 @@ class Tag(model_base.ResourceBase):
"""
if system_id in self.get_applied_nodes():
self.logger.debug("Tag %s already applied to node %s" %
(self.name, system_id))
else:
url = self.interpolate_url()
resp = self.api_client.post(
url, op='update_nodes', files={'add': system_id})
if not resp.ok:
self.logger.error(
"Error applying tag to node, received HTTP %s from MaaS" %
resp.status_code)
self.logger.debug("MaaS response: %s" % resp.text)
raise errors.DriverError(
"Error applying tag to node, received HTTP %s from MaaS" %
resp.status_code)
def to_dict(self):
"""
@ -108,6 +119,7 @@ class Tag(model_base.ResourceBase):
i = cls(api_client, **refined_dict)
return i
class Tags(model_base.ResourceCollectionBase):
collection_url = 'tags/'
@ -133,9 +145,10 @@ class Tags(model_base.ResourceCollectionBase):
resp_json = resp.json()
res.set_resource_id(resp_json.get('name'))
return res
elif resp.status_code == 400 and resp.text.find(
'Tag with this Name already exists.') != -1:
raise errors.DriverError("Tag %s already exists" % res.name)
else:
raise errors.DriverError(
"Failed updating MAAS url %s - return code %s" %
(url, resp.status_code))


@ -16,12 +16,18 @@ import json
import drydock_provisioner.error as errors
import drydock_provisioner.drivers.node.maasdriver.models.base as model_base
class Vlan(model_base.ResourceBase):
resource_url = 'fabrics/{fabric_id}/vlans/{api_id}/'
fields = [
'resource_id', 'name', 'description', 'vid', 'fabric_id', 'dhcp_on',
'mtu', 'primary_rack', 'secondary_rack'
]
json_fields = [
'name', 'description', 'vid', 'dhcp_on', 'mtu', 'primary_rack',
'secondary_rack'
]
def __init__(self, api_client, **kwargs):
super(Vlan, self).__init__(api_client, **kwargs)
@ -30,7 +36,7 @@ class Vlan(model_base.ResourceBase):
self.vid = 0
# the MaaS API decided that the URL endpoint for VLANs should use
# the VLAN tag (vid) rather than the resource ID. So to update the
# vid, we have to keep two copies so that the resource_url
# is accurate for updates
self.api_id = self.vid
@ -46,6 +52,7 @@ class Vlan(model_base.ResourceBase):
else:
self.vid = int(new_vid)
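Because the VLAN endpoint is keyed by tag rather than resource id, api_id preserves the URL key while vid carries the pending change; update() then PUTs the new tag to the old URL. Hypothetical usage::

    vlan = fabric.vlans.singleton({'vid': 0})  # untagged VLAN
    vlan.set_vid(100)  # changes vid; api_id still holds 0
    vlan.update()      # PUT to fabrics/<fid>/vlans/0/ carrying vid=100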
class Vlans(model_base.ResourceCollectionBase):
collection_url = 'fabrics/{fabric_id}/vlans/'
@ -55,6 +62,7 @@ class Vlans(model_base.ResourceCollectionBase):
super(Vlans, self).__init__(api_client)
self.fabric_id = kwargs.get('fabric_id', None)
"""
Create a new resource in this collection in MaaS
def add(self, res):
@ -84,4 +92,4 @@ class Vlans(model_base.ResourceCollectionBase):
raise errors.DriverError("Failed updating MAAS url %s - return code %s\n%s"
% (url, resp.status_code, resp.text))
"""
"""


@ -17,6 +17,7 @@ import drydock_provisioner.error as errors
from drydock_provisioner.drivers import ProviderDriver
class OobDriver(ProviderDriver):
oob_types_supported = ['']
@ -24,13 +25,15 @@ class OobDriver(ProviderDriver):
def __init__(self, **kwargs):
super(OobDriver, self).__init__(**kwargs)
self.supported_actions = [
hd_fields.OrchestratorAction.ValidateOobServices,
hd_fields.OrchestratorAction.ConfigNodePxe,
hd_fields.OrchestratorAction.SetNodeBoot,
hd_fields.OrchestratorAction.PowerOffNode,
hd_fields.OrchestratorAction.PowerOnNode,
hd_fields.OrchestratorAction.PowerCycleNode,
hd_fields.OrchestratorAction.InterrogateOob
]
self.driver_name = "oob_generic"
self.driver_key = "oob_generic"
@ -44,7 +47,7 @@ class OobDriver(ProviderDriver):
return
else:
raise errors.DriverError("Unsupported action %s for driver %s" %
(task_action, self.driver_desc))
@classmethod
def oob_type_support(cls, type_string):
@ -57,4 +60,4 @@ class OobDriver(ProviderDriver):
if type_string in cls.oob_types_supported:
return True
return False


@ -46,10 +46,11 @@ class ManualDriver(oob.OobDriver):
raise errors.DriverError("Invalid task %s" % (task_id))
if task.action not in self.supported_actions:
self.logger.error("Driver %s doesn't support task action %s" %
(self.driver_desc, task.action))
raise errors.DriverError(
"Driver %s doesn't support task action %s" % (self.driver_desc,
task.action))
design_id = getattr(task, 'design_id', None)
@ -57,13 +58,15 @@ class ManualDriver(oob.OobDriver):
raise errors.DriverError("No design ID specified in task %s" %
(task_id))
self.orchestrator.task_field_update(
task.get_id(), status=hd_fields.TaskStatus.Running)
self.logger.info("Sleeping 60s to allow time for manual OOB %s action"
% task.action)
time.sleep(60)
self.orchestrator.task_field_update(
task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=hd_fields.ActionResult.Success)


@ -30,7 +30,10 @@ import drydock_provisioner.drivers as drivers
class PyghmiDriver(oob.OobDriver):
pyghmi_driver_options = [
cfg.IntOpt(
'poll_interval',
default=10,
help='Polling interval in seconds for querying IPMI status'),
]
oob_types_supported = ['ipmi']
@ -44,7 +47,8 @@ class PyghmiDriver(oob.OobDriver):
def __init__(self, **kwargs):
super(PyghmiDriver, self).__init__(**kwargs)
cfg.CONF.register_opts(
PyghmiDriver.pyghmi_driver_options, group=PyghmiDriver.driver_key)
self.logger = logging.getLogger(cfg.CONF.logging.oobdriver_logger_name)
@ -56,10 +60,11 @@ class PyghmiDriver(oob.OobDriver):
raise errors.DriverError("Invalid task %s" % (task_id))
if task.action not in self.supported_actions:
self.logger.error("Driver %s doesn't support task action %s" %
(self.driver_desc, task.action))
raise errors.DriverError(
"Driver %s doesn't support task action %s" % (self.driver_desc,
task.action))
design_id = getattr(task, 'design_id', None)
@ -67,48 +72,58 @@ class PyghmiDriver(oob.OobDriver):
raise errors.DriverError("No design ID specified in task %s" %
(task_id))
self.orchestrator.task_field_update(task.get_id(),
status=hd_fields.TaskStatus.Running)
self.orchestrator.task_field_update(
task.get_id(), status=hd_fields.TaskStatus.Running)
if task.action == hd_fields.OrchestratorAction.ValidateOobServices:
self.orchestrator.task_field_update(task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=hd_fields.ActionResult.Success)
self.orchestrator.task_field_update(
task.get_id(),
status=hd_fields.TaskStatus.Complete,
result=hd_fields.ActionResult.Success)
return
site_design = self.orchestrator.get_effective_site(design_id)
target_nodes = []
if len(task.node_list) > 0:
target_nodes.extend([x
for x in site_design.baremetal_nodes
if x.get_name() in task.node_list])
target_nodes.extend([
x for x in site_design.baremetal_nodes
if x.get_name() in task.node_list
])
else:
target_nodes.extend(site_design.baremetal_nodes)
incomplete_subtasks = []
# For each target node, create a subtask and kick off a runner
for n in target_nodes:
subtask = self.orchestrator.create_task(task_model.DriverTask,
parent_task_id=task.get_id(), design_id=design_id,
action=task.action,
task_scope={'node_names': [n.get_name()]})
subtask = self.orchestrator.create_task(
task_model.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=task.action,
task_scope={'node_names': [n.get_name()]})
incomplete_subtasks.append(subtask.get_id())
runner = PyghmiTaskRunner(state_manager=self.state_manager,
orchestrator=self.orchestrator,
task_id=subtask.get_id(), node=n)
runner = PyghmiTaskRunner(
state_manager=self.state_manager,
orchestrator=self.orchestrator,
task_id=subtask.get_id(),
node=n)
runner.start()
attempts = 0
max_attempts = getattr(cfg.CONF.timeouts, task.action, cfg.CONF.timeouts.drydock_timeout) * (60 / cfg.CONF.pyghmi_driver.poll_interval)
max_attempts = getattr(cfg.CONF.timeouts, task.action,
cfg.CONF.timeouts.drydock_timeout) * (
60 / cfg.CONF.pyghmi_driver.poll_interval)
while (len(incomplete_subtasks) > 0 and attempts <= max_attempts):
for n in incomplete_subtasks:
t = self.state_manager.get_task(n)
if t.get_status() in [hd_fields.TaskStatus.Terminated,
hd_fields.TaskStatus.Complete,
hd_fields.TaskStatus.Errored]:
if t.get_status() in [
hd_fields.TaskStatus.Terminated,
hd_fields.TaskStatus.Complete,
hd_fields.TaskStatus.Errored
]:
incomplete_subtasks.remove(n)
time.sleep(cfg.CONF.pyghmi_driver.poll_interval)
attempts = attempts + 1
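The attempt budget above turns a per-action timeout in minutes into a poll count. A worked example, assuming the default 10-second poll_interval and a 5-minute timeout (both values are illustrative)::

    poll_interval = 10                # pyghmi_driver.poll_interval, in seconds
    timeout_minutes = 5               # per-action timeout or drydock_timeout
    max_attempts = timeout_minutes * (60 / poll_interval)
    print(max_attempts)               # 30.0 polls before the driver gives up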
@ -116,13 +131,17 @@ class PyghmiDriver(oob.OobDriver):
task = self.state_manager.get_task(task.get_id())
subtasks = map(self.state_manager.get_task, task.get_subtasks())
success_subtasks = [x
for x in subtasks
if x.get_result() == hd_fields.ActionResult.Success]
nosuccess_subtasks = [x
for x in subtasks
if x.get_result() in [hd_fields.ActionResult.PartialSuccess,
hd_fields.ActionResult.Failure]]
success_subtasks = [
x for x in subtasks
if x.get_result() == hd_fields.ActionResult.Success
]
nosuccess_subtasks = [
x for x in subtasks
if x.get_result() in [
hd_fields.ActionResult.PartialSuccess,
hd_fields.ActionResult.Failure
]
]
task_result = None
if len(success_subtasks) > 0 and len(nosuccess_subtasks) > 0:
@ -134,13 +153,14 @@ class PyghmiDriver(oob.OobDriver):
else:
task_result = hd_fields.ActionResult.Incomplete
self.orchestrator.task_field_update(task.get_id(),
result=task_result,
status=hd_fields.TaskStatus.Complete)
self.orchestrator.task_field_update(
task.get_id(),
result=task_result,
status=hd_fields.TaskStatus.Complete)
return
class PyghmiTaskRunner(drivers.DriverTaskRunner):
def __init__(self, node=None, **kwargs):
super(PyghmiTaskRunner, self).__init__(**kwargs)
@ -157,59 +177,71 @@ class PyghmiTaskRunner(drivers.DriverTaskRunner):
task_action = self.task.action
if len(self.task.node_list) != 1:
self.orchestrator.task_field_update(self.task.get_id(),
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Incomplete,
status=hd_fields.TaskStatus.Errored)
raise errors.DriverError("Multiple names (%s) in task %s node_list"
% (len(self.task.node_list), self.task.get_id()))
raise errors.DriverError(
"Multiple names (%s) in task %s node_list" %
(len(self.task.node_list), self.task.get_id()))
target_node_name = self.task.node_list[0]
if self.node.get_name() != target_node_name:
self.orchestrator.task_field_update(self.task.get_id(),
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Incomplete,
status=hd_fields.TaskStatus.Errored)
raise errors.DriverError("Runner node does not match " \
"task node scope")
self.orchestrator.task_field_update(self.task.get_id(),
status=hd_fields.TaskStatus.Running)
self.orchestrator.task_field_update(
self.task.get_id(), status=hd_fields.TaskStatus.Running)
if task_action == hd_fields.OrchestratorAction.ConfigNodePxe:
self.orchestrator.task_field_update(self.task.get_id(),
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Failure,
status=hd_fields.TaskStatus.Complete)
return
elif task_action == hd_fields.OrchestratorAction.SetNodeBoot:
worked = False
self.logger.debug("Setting bootdev to PXE for %s" % self.node.name)
self.exec_ipmi_command(Command.set_bootdev, 'pxe')
time.sleep(3)
bootdev = self.exec_ipmi_command(Command.get_bootdev)
if bootdev.get('bootdev', '') == 'network':
self.logger.debug("%s reports bootdev of network" % self.node.name)
self.orchestrator.task_field_update(self.task.get_id(),
self.logger.debug(
"%s reports bootdev of network" % self.node.name)
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Success,
status=hd_fields.TaskStatus.Complete)
return
else:
self.logger.warning("%s reports bootdev of %s" % (ipmi_address, bootdev.get('bootdev', None)))
self.logger.warning("%s reports bootdev of %s" %
(self.node.name,
bootdev.get('bootdev', None)))
worked = False
self.logger.error("Giving up on IPMI command to %s after 3 attempts" % self.node.name)
self.orchestrator.task_field_update(self.task.get_id(),
result=hd_fields.ActionResult.Failure,
status=hd_fields.TaskStatus.Complete)
self.logger.error(
"Giving up on IPMI command to %s after 3 attempts" %
self.node.name)
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Failure,
status=hd_fields.TaskStatus.Complete)
return
elif task_action == hd_fields.OrchestratorAction.PowerOffNode:
worked = False
self.logger.debug("Sending set_power = off command to %s" % self.node.name)
self.logger.debug(
"Sending set_power = off command to %s" % self.node.name)
self.exec_ipmi_command(Command.set_power, 'off')
i = 18
@ -225,19 +257,23 @@ class PyghmiTaskRunner(drivers.DriverTaskRunner):
i = i - 1
if worked:
self.orchestrator.task_field_update(self.task.get_id(),
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Success,
status=hd_fields.TaskStatus.Complete)
else:
self.logger.error("Giving up on IPMI command to %s" % self.node.name)
self.orchestrator.task_field_update(self.task.get_id(),
self.logger.error(
"Giving up on IPMI command to %s" % self.node.name)
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Failure,
status=hd_fields.TaskStatus.Complete)
return
elif task_action == hd_fields.OrchestratorAction.PowerOnNode:
worked = False
self.logger.debug("Sending set_power = off command to %s" % self.node.name)
self.logger.debug(
"Sending set_power = off command to %s" % self.node.name)
self.exec_ipmi_command(Command.set_power, 'off')
i = 18
@ -253,17 +289,21 @@ class PyghmiTaskRunner(drivers.DriverTaskRunner):
i = i - 1
if worked:
self.orchestrator.task_field_update(self.task.get_id(),
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Success,
status=hd_fields.TaskStatus.Complete)
else:
self.logger.error("Giving up on IPMI command to %s" % self.node.name)
self.orchestrator.task_field_update(self.task.get_id(),
self.logger.error(
"Giving up on IPMI command to %s" % self.node.name)
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Failure,
status=hd_fields.TaskStatus.Complete)
return
elif task_action == hd_fields.OrchestratorAction.PowerCycleNode:
self.logger.debug("Sending set_power = off command to %s" % self.node.name)
self.logger.debug(
"Sending set_power = off command to %s" % self.node.name)
self.exec_ipmi_command(Command.set_power, 'off')
# Wait for power state of off before booting back up
@ -272,50 +312,65 @@ class PyghmiTaskRunner(drivers.DriverTaskRunner):
while i > 0:
power_state = self.exec_ipmi_command(Command.get_power)
if power_state is not None and power_state.get('powerstate', '') == 'off':
self.logger.debug("%s reports powerstate of off" % self.node.name)
if power_state is not None and power_state.get(
'powerstate', '') == 'off':
self.logger.debug(
"%s reports powerstate of off" % self.node.name)
break
elif power_state is None:
self.logger.debug("None response on IPMI power query to %s" % self.node.name)
self.logger.debug("None response on IPMI power query to %s"
% self.node.name)
time.sleep(10)
i = i - 1
if power_state.get('powerstate', '') == 'on':
self.logger.warning("Failed powering down node %s during power cycle task" % self.node.name)
self.orchestrator.task_field_update(self.task.get_id(),
self.logger.warning(
"Failed powering down node %s during power cycle task" %
self.node.name)
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Failure,
status=hd_fields.TaskStatus.Complete)
return
self.logger.debug("Sending set_power = on command to %s" % self.node.name)
self.logger.debug(
"Sending set_power = on command to %s" % self.node.name)
self.exec_ipmi_command(Command.set_power, 'on')
i = 18
while i > 0:
power_state = self.exec_ipmi_command(Command.get_power)
if power_state is not None and power_state.get('powerstate', '') == 'on':
self.logger.debug("%s reports powerstate of on" % self.node.name)
if power_state is not None and power_state.get(
'powerstate', '') == 'on':
self.logger.debug(
"%s reports powerstate of on" % self.node.name)
break
elif power_state is None:
self.logger.debug("None response on IPMI power query to %s" % self.node.name)
self.logger.debug("None response on IPMI power query to %s"
% self.node.name)
time.sleep(10)
i = i - 1
if power_state.get('powerstate', '') == 'on':
self.orchestrator.task_field_update(self.task.get_id(),
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Success,
status=hd_fields.TaskStatus.Complete)
else:
self.logger.warning("Failed powering up node %s during power cycle task" % self.node.name)
self.orchestrator.task_field_update(self.task.get_id(),
self.logger.warning(
"Failed powering up node %s during power cycle task" %
self.node.name)
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Failure,
status=hd_fields.TaskStatus.Complete)
return
elif task_action == hd_fields.OrchestratorAction.InterrogateOob:
mci_id = ipmi_session.get_mci()
mci_id = self.exec_ipmi_command(Command.get_mci)
self.orchestrator.task_field_update(self.task.get_id(),
self.orchestrator.task_field_update(
self.task.get_id(),
result=hd_fields.ActionResult.Success,
status=hd_fields.TaskStatus.Complete,
result_detail=mci_id)
@ -338,16 +393,15 @@ class PyghmiTaskRunner(drivers.DriverTaskRunner):
if ipmi_address is None:
raise errors.DriverError("Node %s has no IPMI address" %
(node.name))
ipmi_account = self.node.oob_parameters['account']
ipmi_credential = self.node.oob_parameters['credential']
self.logger.debug("Starting IPMI session to %s with %s/%s" %
(ipmi_address, ipmi_account, ipmi_credential[:1]))
ipmi_session = Command(bmc=ipmi_address, userid=ipmi_account,
password=ipmi_credential)
ipmi_session = Command(
bmc=ipmi_address, userid=ipmi_account, password=ipmi_credential)
return ipmi_session
@ -364,23 +418,28 @@ class PyghmiTaskRunner(drivers.DriverTaskRunner):
self.logger.debug("Initializing IPMI session")
ipmi_session = self.get_ipmi_session()
except IpmiException as iex:
self.logger.error("Error initializing IPMI session for node %s" % self.node.name)
self.logger.error("Error initializing IPMI session for node %s"
% self.node.name)
self.logger.debug("IPMI Exception: %s" % str(iex))
self.logger.warning("IPMI command failed, retrying after 15 seconds...")
self.logger.warning(
"IPMI command failed, retrying after 15 seconds...")
time.sleep(15)
attempts = attempts + 1
continue
try:
self.logger.debug("Calling IPMI command %s on %s" % (callable.__name__, self.node.name))
self.logger.debug("Calling IPMI command %s on %s" %
(callable.__name__, self.node.name))
response = callable(ipmi_session, *args)
ipmi_session.ipmi_session.logout()
return response
except IpmiException as iex:
self.logger.error("Error sending command: %s" % str(iex))
self.logger.warning("IPMI command failed, retrying after 15 seconds...")
self.logger.warning(
"IPMI command failed, retrying after 15 seconds...")
time.sleep(15)
attempts = attempts + 1
def list_opts():
return {PyghmiDriver.driver_key: PyghmiDriver.pyghmi_driver_options}
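list_opts is the hook oslo-config-generator consumes to emit sample configuration. A minimal sketch of registering and reading a grouped option with oslo.config (standard library behavior, not Drydock-specific)::

    from oslo_config import cfg

    opts = [cfg.IntOpt('poll_interval', default=10,
                       help='Polling interval in seconds')]
    cfg.CONF.register_opts(opts, group='pyghmi_driver')
    cfg.CONF([])                                   # parse an empty argv
    print(cfg.CONF.pyghmi_driver.poll_interval)    # -> 10

    def list_opts():
        return {'pyghmi_driver': opts}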


@ -25,12 +25,14 @@ import drydock_provisioner.statemgmt as statemgmt
import drydock_provisioner.orchestrator as orch
import drydock_provisioner.control.api as api
def start_drydock():
objects.register_all()
# Setup configuration parsing
cli_options = [
cfg.BoolOpt('debug', short='d', default=False, help='Enable debug logging'),
cfg.BoolOpt(
'debug', short='d', default=False, help='Enable debug logging'),
]
cfg.CONF.register_cli_opts(cli_options)
@ -38,21 +40,26 @@ def start_drydock():
cfg.CONF(sys.argv[1:])
if cfg.CONF.debug:
cfg.CONF.set_override(name='log_level', override='DEBUG', group='logging')
cfg.CONF.set_override(
name='log_level', override='DEBUG', group='logging')
# Setup root logger
logger = logging.getLogger(cfg.CONF.logging.global_logger_name)
logger.setLevel(cfg.CONF.logging.log_level)
ch = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(filename)s:%(funcName)s - %(message)s')
formatter = logging.Formatter(
'%(asctime)s - %(levelname)s - %(filename)s:%(funcName)s - %(message)s'
)
ch.setFormatter(formatter)
logger.addHandler(ch)
# Specalized format for API logging
logger = logging.getLogger(cfg.CONF.logging.control_logger_name)
logger.propagate = False
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(user)s - %(req_id)s - %(external_ctx)s - %(message)s')
formatter = logging.Formatter(
'%(asctime)s - %(levelname)s - %(user)s - %(req_id)s - %(external_ctx)s - %(message)s'
)
ch = logging.StreamHandler()
ch.setFormatter(formatter)
@ -67,25 +74,32 @@ def start_drydock():
# Check if we have an API key in the environment
# Hack around until we move MaaS configs to the YAML schema
if 'MAAS_API_KEY' in os.environ:
cfg.CONF.set_override(name='maas_api_key', override=os.environ['MAAS_API_KEY'], group='maasdriver')
cfg.CONF.set_override(
name='maas_api_key',
override=os.environ['MAAS_API_KEY'],
group='maasdriver')
# Setup the RBAC policy enforcer
policy.policy_engine = policy.DrydockPolicy()
policy.policy_engine.register_policy()
# Ensure that the policy_engine is initialized before starting the API
wsgi_callable = api.start_api(state_manager=state, ingester=input_ingester,
orchestrator=orchestrator)
wsgi_callable = api.start_api(
state_manager=state,
ingester=input_ingester,
orchestrator=orchestrator)
# Now that loggers are configured, log the effective config
cfg.CONF.log_opt_values(logging.getLogger(cfg.CONF.logging.global_logger_name), logging.DEBUG)
cfg.CONF.log_opt_values(
logging.getLogger(cfg.CONF.logging.global_logger_name), logging.DEBUG)
return wsgi_callable
# Initialization compatible with PasteDeploy
def paste_start_drydock(global_conf, **kwargs):
# At this time just ignore everything in the paste configuration and rely on oslo_config
return drydock
drydock = start_drydock()
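start_drydock() returns a WSGI callable for an external WSGI server (or PasteDeploy) to host. A standalone sketch of exercising such a callable without a server, using wsgiref's test defaults::

    from wsgiref.util import setup_testing_defaults

    def app(environ, start_response):   # stand-in for the start_drydock() result
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'drydock alive']

    environ = {}
    setup_testing_defaults(environ)
    print(app(environ, lambda status, headers: None))   # [b'drydock alive']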


@ -13,9 +13,11 @@
# limitations under the License.
import json
import requests
import logging
from drydock_provisioner import error as errors
class DrydockClient(object):
""""
A client for the Drydock API
@ -25,6 +27,7 @@ class DrydockClient(object):
def __init__(self, session):
self.session = session
self.logger = logging.getLogger(__name__)
def get_design_ids(self):
"""
@ -50,7 +53,6 @@ class DrydockClient(object):
"""
endpoint = "v1.0/designs/%s" % design_id
resp = self.session.get(endpoint, query={'source': source})
self._check_response(resp)
@ -67,7 +69,8 @@ class DrydockClient(object):
endpoint = 'v1.0/designs'
if base_design is not None:
resp = self.session.post(endpoint, data={'base_design_id': base_design})
resp = self.session.post(
endpoint, data={'base_design_id': base_design})
else:
resp = self.session.post(endpoint)
@ -106,7 +109,8 @@ class DrydockClient(object):
endpoint = "v1.0/designs/%s/parts" % (design_id)
resp = self.session.post(endpoint, query={'ingester': 'yaml'}, body=yaml_string)
resp = self.session.post(
endpoint, query={'ingester': 'yaml'}, body=yaml_string)
self._check_response(resp)
@ -157,11 +161,13 @@ class DrydockClient(object):
endpoint = 'v1.0/tasks'
task_dict = {
'action': task_action,
'design_id': design_id,
'node_filter': node_filter
'action': task_action,
'design_id': design_id,
'node_filter': node_filter,
}
self.logger.debug("drydock_client is calling %s API: body is %s" % (endpoint, str(task_dict)))
resp = self.session.post(endpoint, data=task_dict)
self._check_response(resp)
@ -170,8 +176,12 @@ class DrydockClient(object):
def _check_response(self, resp):
if resp.status_code == 401:
raise errors.ClientUnauthorizedError("Unauthorized access to %s, include valid token." % resp.url)
raise errors.ClientUnauthorizedError(
"Unauthorized access to %s, include valid token." % resp.url)
elif resp.status_code == 403:
raise errors.ClientForbiddenError("Forbidden access to %s" % resp.url)
raise errors.ClientForbiddenError(
"Forbidden access to %s" % resp.url)
elif not resp.ok:
raise errors.ClientError("Error - received %d: %s" % (resp.status_code, resp.text), code=resp.status_code)
raise errors.ClientError(
"Error - received %d: %s" % (resp.status_code, resp.text),
code=resp.status_code)
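A usage sketch pairing DrydockClient with the DrydockSession defined in the next file; the module paths, port, and token value are assumptions for illustration::

    from drydock_provisioner.drydock_client.session import DrydockSession
    from drydock_provisioner.drydock_client.client import DrydockClient

    session = DrydockSession('drydock-api.ucp.svc.cluster.local',
                             port=9000, token='<keystone_token>')
    client = DrydockClient(session)
    print(client.get_design_ids())      # lists design IDs from the API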


@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import requests
import logging
class DrydockSession(object):
"""
@ -23,15 +24,25 @@ class DrydockSession(object):
:param string marker: (optional) external context marker
"""
def __init__(self, host, *, port=None, scheme='http', token=None, marker=None):
def __init__(self,
host,
*,
port=None,
scheme='http',
token=None,
marker=None):
self.__session = requests.Session()
self.__session.headers.update({'X-Auth-Token': token, 'X-Context-Marker': marker})
self.__session.headers.update({
'X-Auth-Token': token,
'X-Context-Marker': marker
})
self.host = host
self.scheme = scheme
if port:
self.port = port
self.base_url = "%s://%s:%s/api/" % (self.scheme, self.host, self.port)
self.base_url = "%s://%s:%s/api/" % (self.scheme, self.host,
self.port)
else:
# assume default port for scheme
self.base_url = "%s://%s/api/" % (self.scheme, self.host)
@ -39,6 +50,8 @@ class DrydockSession(object):
self.token = token
self.marker = marker
self.logger = logging.getLogger(__name__)
# TODO Add keystone authentication to produce a token for this session
def get(self, endpoint, query=None):
"""
@ -48,7 +61,8 @@ class DrydockSession(object):
:param dict query: A dict of k, v pairs to add to the query string
:return: A requests.Response object
"""
resp = self.__session.get(self.base_url + endpoint, params=query, timeout=10)
resp = self.__session.get(
self.base_url + endpoint, params=query, timeout=10)
return resp
@ -64,10 +78,14 @@ class DrydockSession(object):
:return: A requests.Response object
"""
self.logger.debug("Sending POST with drydock_client session")
if body is not None:
resp = self.__session.post(self.base_url + endpoint, params=query, data=body, timeout=10)
self.logger.debug("Sending POST with explicit body: \n%s" % body)
resp = self.__session.post(
self.base_url + endpoint, params=query, data=body, timeout=10)
else:
resp = self.__session.post(self.base_url + endpoint, params=query, json=data, timeout=10)
self.logger.debug("Sending POST with JSON body: \n%s" % str(data))
resp = self.__session.post(
self.base_url + endpoint, params=query, json=data, timeout=10)
return resp
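The session wrapper relies on requests.Session header persistence: headers set once in __init__ ride along on every later call, so the get/post helpers stay thin. A standalone sketch::

    import requests

    s = requests.Session()
    s.headers.update({'X-Auth-Token': '<token>', 'X-Context-Marker': '<uuid>'})
    base_url = "%s://%s:%s/api/" % ('http', 'drydock-api', 9000)
    # s.get(base_url + 'v1.0/designs', timeout=10) would carry both headers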


@ -13,6 +13,7 @@
# limitations under the License.
import json
class DesignError(Exception):
pass
@ -66,6 +67,7 @@ class ClientError(ApiError):
super().__init__(msg)
class ClientUnauthorizedError(ClientError):
def __init__(self, msg):
super().__init__(msg, code=401)
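Because the 401/403 variants subclass ClientError, callers can catch narrowly or fall through to the base class and inspect the code attribute. A standalone sketch of that pattern::

    class ClientError(Exception):
        def __init__(self, msg, code=500):
            super().__init__(msg)
            self.code = code

    class ClientUnauthorizedError(ClientError):
        def __init__(self, msg):
            super().__init__(msg, code=401)

    try:
        raise ClientUnauthorizedError("token expired")
    except ClientError as ce:
        print(ce.code)                  # 401 - subclass caught by base handler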


@ -17,7 +17,7 @@
import logging
import yaml
import uuid
import importlib
import drydock_provisioner.objects as objects
@ -30,8 +30,8 @@ import drydock_provisioner.objects.promenade as prom
from drydock_provisioner.statemgmt import DesignState
class Ingester(object):
def __init__(self):
self.logger = logging.getLogger("drydock.ingester")
self.registered_plugins = {}
@ -62,14 +62,19 @@ class Ingester(object):
plugin_name = new_plugin.get_name()
self.registered_plugins[plugin_name] = new_plugin
except Exception as ex:
self.logger.error("Could not enable plugin %s - %s" % (plugin, str(ex)))
self.logger.error("Could not enable plugin %s - %s" %
(plugin, str(ex)))
if len(self.registered_plugins) == 0:
self.logger.error("Could not enable at least one plugin")
raise Exception("Could not enable at least one plugin")
def ingest_data(self, plugin_name='', design_state=None, design_id=None, context=None, **kwargs):
def ingest_data(self,
plugin_name='',
design_state=None,
design_id=None,
context=None,
**kwargs):
"""
ingest_data - Execute a data ingestion using the named plugin (assuming it is enabled)
@ -80,25 +85,35 @@ class Ingester(object):
:param kwargs: - Keywork arguments to pass to the ingester plugin
"""
if design_state is None:
self.logger.error("Ingester:ingest_data called without valid DesignState handler")
self.logger.error(
"Ingester:ingest_data called without valid DesignState handler"
)
raise ValueError("Invalid design_state handler")
# A design_id is required; fail if it was not provided
if design_id is None:
self.logger.error("Ingester:ingest_data required kwarg 'design_id' missing")
raise ValueError("Ingester:ingest_data required kwarg 'design_id' missing")
self.logger.error(
"Ingester:ingest_data required kwarg 'design_id' missing")
raise ValueError(
"Ingester:ingest_data required kwarg 'design_id' missing")
design_data = design_state.get_design(design_id)
self.logger.debug("Ingester:ingest_data ingesting design parts for design %s" % design_id)
self.logger.debug(
"Ingester:ingest_data ingesting design parts for design %s" %
design_id)
if plugin_name in self.registered_plugins:
try:
design_items = self.registered_plugins[plugin_name].ingest_data(**kwargs)
design_items = self.registered_plugins[
plugin_name].ingest_data(**kwargs)
except ValueError as vex:
self.logger.warn("Ingester:ingest_data - Error process data - %s" % (str(vex)))
self.logger.warn(
"Ingester:ingest_data - Error process data - %s" %
(str(vex)))
return None
self.logger.debug("Ingester:ingest_data parsed %s design parts" % str(len(design_items)))
self.logger.debug("Ingester:ingest_data parsed %s design parts" %
str(len(design_items)))
for m in design_items:
if context is not None:
m.set_create_fields(context)
@ -119,7 +134,6 @@ class Ingester(object):
design_state.put_design(design_data)
return design_items
else:
self.logger.error("Could not find plugin %s to ingest data." % (plugin_name))
self.logger.error("Could not find plugin %s to ingest data." %
(plugin_name))
raise LookupError("Could not find plugin %s" % plugin_name)


@ -17,8 +17,8 @@
import logging
class IngesterPlugin(object):
class IngesterPlugin(object):
def __init__(self):
self.log = logging.Logger('ingester')
return


@ -14,8 +14,8 @@
#
# AIC YAML Ingester - This data ingester will consume a AIC YAML design
# file
#
import yaml
import logging
import base64
@ -25,8 +25,8 @@ import drydock_provisioner.objects.fields as hd_fields
from drydock_provisioner import objects
from drydock_provisioner.ingester.plugins import IngesterPlugin
class YamlIngester(IngesterPlugin):
def __init__(self):
super(YamlIngester, self).__init__()
self.logger = logging.getLogger('drydock.ingester.yaml')
@ -42,39 +42,43 @@ class YamlIngester(IngesterPlugin):
returns an array of objects from drydock_provisioner.model
"""
def ingest_data(self, **kwargs):
models = []
if 'filenames' in kwargs:
# TODO validate filenames is array
# TODO(sh8121att): validate filenames is array
for f in kwargs.get('filenames'):
try:
file = open(f,'rt')
file = open(f, 'rt')
contents = file.read()
file.close()
models.extend(self.parse_docs(contents))
except OSError as err:
self.logger.error(
"Error opening input file %s for ingestion: %s"
% (filename, err))
"Error opening input file %s for ingestion: %s" %
(f, err))
continue
elif 'content' in kwargs:
models.extend(self.parse_docs(kwargs.get('content')))
else:
raise ValueError('Missing parameter "filename"')
return models
"""
Translate a YAML string into the internal Drydock model
"""
def parse_docs(self, yaml_string):
models = []
self.logger.debug("yamlingester:parse_docs - Parsing YAML string \n%s" % (yaml_string))
self.logger.debug(
"yamlingester:parse_docs - Parsing YAML string \n%s" %
(yaml_string))
try:
parsed_data = yaml.load_all(yaml_string)
except yaml.YAMLError as err:
raise ValueError("Error parsing YAML in %s: %s" % (f,err))
raise ValueError("Error parsing YAML: %s" % (err))
for d in parsed_data:
kind = d.get('kind', '')
@ -96,7 +100,8 @@ class YamlIngester(IngesterPlugin):
spec = d.get('spec', {})
model.tag_definitions = objects.NodeTagDefinitionList()
model.tag_definitions = objects.NodeTagDefinitionList(
)
tag_defs = spec.get('tag_definitions', [])
@ -107,8 +112,8 @@ class YamlIngester(IngesterPlugin):
tag_model.definition = t.get('definition', '')
if tag_model.type not in ['lshw_xpath']:
raise ValueError('Unknown definition type in ' \
'NodeTagDefinition: %s' % (self.definition_type))
raise ValueError('Unknown definition type in '
'NodeTagDefinition: %s' % (t.definition_type))
model.tag_definitions.append(tag_model)
auth_keys = spec.get('authorized_keys', [])
@ -117,7 +122,9 @@ class YamlIngester(IngesterPlugin):
models.append(model)
else:
raise ValueError('Unknown API version %s of Region kind' %s (api_version))
raise ValueError(
'Unknown API version %s of Region kind' %
(api_version))
elif kind == 'NetworkLink':
if api_version == "v1":
model = objects.NetworkLink()
@ -136,27 +143,36 @@ class YamlIngester(IngesterPlugin):
else:
model.metalabels.append(l)
bonding = spec.get('bonding', {})
model.bonding_mode = bonding.get('mode',
hd_fields.NetworkLinkBondingMode.Disabled)
model.bonding_mode = bonding.get(
'mode',
hd_fields.NetworkLinkBondingMode.Disabled)
# How should we define defaults for CIs not in the input?
if model.bonding_mode == hd_fields.NetworkLinkBondingMode.LACP:
model.bonding_xmit_hash = bonding.get('hash', 'layer3+4')
model.bonding_peer_rate = bonding.get('peer_rate', 'fast')
model.bonding_mon_rate = bonding.get('mon_rate', '100')
model.bonding_up_delay = bonding.get('up_delay', '200')
model.bonding_down_delay = bonding.get('down_delay', '200')
model.bonding_xmit_hash = bonding.get(
'hash', 'layer3+4')
model.bonding_peer_rate = bonding.get(
'peer_rate', 'fast')
model.bonding_mon_rate = bonding.get(
'mon_rate', '100')
model.bonding_up_delay = bonding.get(
'up_delay', '200')
model.bonding_down_delay = bonding.get(
'down_delay', '200')
model.mtu = spec.get('mtu', None)
model.linkspeed = spec.get('linkspeed', None)
trunking = spec.get('trunking', {})
model.trunk_mode = trunking.get('mode', hd_fields.NetworkLinkTrunkingMode.Disabled)
model.native_network = trunking.get('default_network', None)
model.trunk_mode = trunking.get(
'mode',
hd_fields.NetworkLinkTrunkingMode.Disabled)
model.native_network = trunking.get(
'default_network', None)
model.allowed_networks = spec.get('allowed_networks', None)
model.allowed_networks = spec.get(
'allowed_networks', None)
models.append(model)
else:
@ -178,9 +194,10 @@ class YamlIngester(IngesterPlugin):
model.metalabels = [l]
else:
model.metalabels.append(l)
model.cidr = spec.get('cidr', None)
model.allocation_strategy = spec.get('allocation', 'static')
model.allocation_strategy = spec.get(
'allocation', 'static')
model.vlan_id = spec.get('vlan', None)
model.mtu = spec.get('mtu', None)
@ -192,19 +209,27 @@ class YamlIngester(IngesterPlugin):
model.ranges = []
for r in ranges:
model.ranges.append({'type': r.get('type', None),
'start': r.get('start', None),
'end': r.get('end', None),
})
model.ranges.append({
'type':
r.get('type', None),
'start':
r.get('start', None),
'end':
r.get('end', None),
})
routes = spec.get('routes', [])
model.routes = []
for r in routes:
model.routes.append({'subnet': r.get('subnet', None),
'gateway': r.get('gateway', None),
'metric': r.get('metric', None),
})
model.routes.append({
'subnet':
r.get('subnet', None),
'gateway':
r.get('gateway', None),
'metric':
r.get('metric', None),
})
models.append(model)
elif kind == 'HardwareProfile':
if api_version == 'v1':
@ -224,9 +249,11 @@ class YamlIngester(IngesterPlugin):
model.hw_version = spec.get('hw_version', None)
model.bios_version = spec.get('bios_version', None)
model.boot_mode = spec.get('boot_mode', None)
model.bootstrap_protocol = spec.get('bootstrap_protocol', None)
model.pxe_interface = spec.get('pxe_interface', None)
model.bootstrap_protocol = spec.get(
'bootstrap_protocol', None)
model.pxe_interface = spec.get(
'pxe_interface', None)
model.devices = objects.HardwareDeviceAliasList()
device_aliases = spec.get('device_aliases', {})
@ -257,13 +284,15 @@ class YamlIngester(IngesterPlugin):
model.site = metadata.get('region', '')
model.source = hd_fields.ModelSource.Designed
model.parent_profile = spec.get('host_profile', None)
model.hardware_profile = spec.get('hardware_profile', None)
model.parent_profile = spec.get(
'host_profile', None)
model.hardware_profile = spec.get(
'hardware_profile', None)
oob = spec.get('oob', {})
model.oob_parameters = {}
for k,v in oob.items():
for k, v in oob.items():
if k == 'type':
model.oob_type = oob.get('type', None)
else:
@ -273,9 +302,12 @@ class YamlIngester(IngesterPlugin):
model.storage_layout = storage.get('layout', 'lvm')
bootdisk = storage.get('bootdisk', {})
model.bootdisk_device = bootdisk.get('device', None)
model.bootdisk_root_size = bootdisk.get('root_size', None)
model.bootdisk_boot_size = bootdisk.get('boot_size', None)
model.bootdisk_device = bootdisk.get(
'device', None)
model.bootdisk_root_size = bootdisk.get(
'root_size', None)
model.bootdisk_boot_size = bootdisk.get(
'boot_size', None)
partitions = storage.get('partitions', [])
model.partitions = objects.HostPartitionList()
@ -288,9 +320,11 @@ class YamlIngester(IngesterPlugin):
part_model.device = p.get('device', None)
part_model.part_uuid = p.get('part_uuid', None)
part_model.size = p.get('size', None)
part_model.mountpoint = p.get('mountpoint', None)
part_model.mountpoint = p.get(
'mountpoint', None)
part_model.fstype = p.get('fstype', 'ext4')
part_model.mount_options = p.get('mount_options', 'defaults')
part_model.mount_options = p.get(
'mount_options', 'defaults')
part_model.fs_uuid = p.get('fs_uuid', None)
part_model.fs_label = p.get('fs_label', None)
@ -302,8 +336,10 @@ class YamlIngester(IngesterPlugin):
for i in interfaces:
int_model = objects.HostInterface()
int_model.device_name = i.get('device_name', None)
int_model.network_link = i.get('device_link', None)
int_model.device_name = i.get(
'device_name', None)
int_model.network_link = i.get(
'device_link', None)
int_model.hardware_slaves = []
slaves = i.get('slaves', [])
@ -316,7 +352,7 @@ class YamlIngester(IngesterPlugin):
for n in networks:
int_model.networks.append(n)
model.interfaces.append(int_model)
platform = spec.get('platform', {})
@ -325,11 +361,13 @@ class YamlIngester(IngesterPlugin):
model.kernel = platform.get('kernel', None)
model.kernel_params = {}
for k,v in platform.get('kernel_params', {}).items():
for k, v in platform.get('kernel_params',
{}).items():
model.kernel_params[k] = v
model.primary_network = spec.get('primary_network', None)
model.primary_network = spec.get(
'primary_network', None)
node_metadata = spec.get('metadata', {})
metadata_tags = node_metadata.get('tags', [])
@ -344,16 +382,18 @@ class YamlIngester(IngesterPlugin):
model.rack = node_metadata.get('rack', None)
if kind == 'BaremetalNode':
model.boot_mac = node_metadata.get('boot_mac', None)
model.boot_mac = node_metadata.get(
'boot_mac', None)
addresses = spec.get('addressing', [])
if len(addresses) == 0:
raise ValueError('BaremetalNode needs at least' \
raise ValueError('BaremetalNode needs at least'
' 1 assigned address')
model.addressing = objects.IpAddressAssignmentList()
model.addressing = objects.IpAddressAssignmentList(
)
for a in addresses:
assignment = objects.IpAddressAssignment()
@ -371,15 +411,17 @@ class YamlIngester(IngesterPlugin):
model.addressing.append(assignment)
else:
self.log.error("Invalid address assignment %s on Node %s"
% (address, self.name))
self.log.error(
"Invalid address assignment %s on Node %s"
% (address, self.name))
models.append(model)
else:
raise ValueError('Unknown API version %s of Kind HostProfile' % (api_version))
raise ValueError(
'Unknown API version %s of Kind HostProfile' %
(api_version))
else:
self.log.error(
"Error processing document in %s, no kind field"
% (f))
"Error processing document, no kind field")
continue
elif api.startswith('promenade/'):
(foo, api_version) = api.split('/')
@ -389,7 +431,12 @@ class YamlIngester(IngesterPlugin):
target = metadata.get('target', 'all')
name = metadata.get('name', None)
model = objects.PromenadeConfig(target=target, name=name, kind=kind,
document=base64.b64encode(bytearray(yaml.dump(d), encoding='utf-8')).decode('ascii'))
model = objects.PromenadeConfig(
target=target,
name=name,
kind=kind,
document=base64.b64encode(
bytearray(yaml.dump(d), encoding='utf-8')).decode(
'ascii'))
models.append(model)
return models
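parse_docs walks every document in a multi-document YAML stream and dispatches on kind and apiVersion. A standalone sketch using yaml.safe_load_all; the apiVersion/kind schema keys are assumed from the parsing above::

    import yaml

    raw = ("---\napiVersion: drydock/v1\nkind: Region\n"
           "---\napiVersion: drydock/v1\nkind: NetworkLink\n")
    for d in yaml.safe_load_all(raw):
        if not d:
            continue
        (prefix, api_version) = d.get('apiVersion', '/').split('/')
        print(d.get('kind', ''), api_version)   # Region v1, then NetworkLink v1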


@ -31,10 +31,11 @@ def register_all():
importlib.import_module('drydock_provisioner.objects.site')
importlib.import_module('drydock_provisioner.objects.promenade')
# Utility class for calculating inheritance
class Utils(object):
"""
apply_field_inheritance - apply inheritance rules to a single field value
@ -84,6 +85,7 @@ class Utils(object):
3. All remaining members of the parent list
"""
@staticmethod
def merge_lists(child_list, parent_list):
@ -117,6 +119,7 @@ class Utils(object):
3. All remaining members of the parent dict
"""
@staticmethod
def merge_dicts(child_dict, parent_dict):
@ -136,5 +139,5 @@ class Utils(object):
effective_dict[k] = deepcopy(child_dict[k])
except TypeError:
raise TypeError("Error iterating dict argument")
return effective_dict
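A worked example of the three merge rules, assuming the "!" prefix marks members the child removes from the parent (the HostInterface and HostPartition merges below use the same convention)::

    child = ['mgmt', '!pxe']
    parent = ['pxe', 'storage']

    effective = [m for m in child if not m.startswith('!')]
    effective += [m for m in parent
                  if m not in child and ('!' + m) not in child]
    print(effective)                    # ['mgmt', 'storage']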


@ -18,14 +18,16 @@ from oslo_versionedobjects import fields as obj_fields
import drydock_provisioner.objects as objects
class DrydockObjectRegistry(base.VersionedObjectRegistry):
# Steal this from Cinder to bring all registered objects
# into the drydock_provisioner.objects namespace
def registration_hook(self, cls, index):
setattr(objects, cls.obj_name(), cls)
class DrydockObject(base.VersionedObject):
VERSION = '1.0'
@ -54,8 +56,8 @@ class DrydockObject(base.VersionedObject):
for name, field in self.fields.items():
if self.obj_attr_is_set(name):
value = getattr(self, name)
if (hasattr(value, 'obj_to_simple') and
callable(value.obj_to_simple)):
if (hasattr(value, 'obj_to_simple')
and callable(value.obj_to_simple)):
primitive[name] = value.obj_to_simple()
else:
value = field.to_primitive(self, name, value)
@ -84,7 +86,6 @@ class DrydockPersistentObject(base.VersionedObject):
class DrydockObjectListBase(base.ObjectListBase):
def __init__(self, **kwargs):
super(DrydockObjectListBase, self).__init__(**kwargs)
@ -92,7 +93,7 @@ class DrydockObjectListBase(base.ObjectListBase):
self.objects.append(obj)
def replace_by_id(self, obj):
i = 0;
i = 0
while i < len(self.objects):
if self.objects[i].get_id() == obj.get_id():
self.objects[i] = obj


@ -14,10 +14,12 @@
from oslo_versionedobjects import fields
class BaseDrydockEnum(fields.Enum):
def __init__(self):
super(BaseDrydockEnum, self).__init__(valid_values=self.__class__.ALL)
class OrchestratorAction(BaseDrydockEnum):
# Orchestrator actions
Noop = 'noop'
@ -61,16 +63,18 @@ class OrchestratorAction(BaseDrydockEnum):
ConfigurePortProduction = 'config_port_production'
ALL = (Noop, ValidateDesign, VerifySite, PrepareSite, VerifyNode,
PrepareNode, DeployNode, DestroyNode, ConfigNodePxe,
SetNodeBoot, PowerOffNode, PowerOnNode, PowerCycleNode,
InterrogateOob, CreateNetworkTemplate, CreateStorageTemplate,
CreateBootMedia, PrepareHardwareConfig, ConfigureHardware,
InterrogateNode, ApplyNodeNetworking, ApplyNodeStorage,
ApplyNodePlatform, DeployNode, DestroyNode)
PrepareNode, DeployNode, DestroyNode, ConfigNodePxe, SetNodeBoot,
PowerOffNode, PowerOnNode, PowerCycleNode, InterrogateOob,
CreateNetworkTemplate, CreateStorageTemplate, CreateBootMedia,
PrepareHardwareConfig, ConfigureHardware, InterrogateNode,
ApplyNodeNetworking, ApplyNodeStorage, ApplyNodePlatform,
DeployNode, DestroyNode)
class OrchestratorActionField(fields.BaseEnumField):
AUTO_TYPE = OrchestratorAction()
class ActionResult(BaseDrydockEnum):
Incomplete = 'incomplete'
Success = 'success'
@ -80,9 +84,11 @@ class ActionResult(BaseDrydockEnum):
ALL = (Incomplete, Success, PartialSuccess, Failure, DependentFailure)
class ActionResultField(fields.BaseEnumField):
AUTO_TYPE = ActionResult()
class TaskStatus(BaseDrydockEnum):
Created = 'created'
Waiting = 'waiting'
@ -93,12 +99,14 @@ class TaskStatus(BaseDrydockEnum):
Complete = 'complete'
Stopped = 'stopped'
ALL = (Created, Waiting, Running, Stopping, Terminated,
Errored, Complete, Stopped)
ALL = (Created, Waiting, Running, Stopping, Terminated, Errored, Complete,
Stopped)
class TaskStatusField(fields.BaseEnumField):
AUTO_TYPE = TaskStatus()
class ModelSource(BaseDrydockEnum):
Designed = 'designed'
Compiled = 'compiled'
@ -106,9 +114,11 @@ class ModelSource(BaseDrydockEnum):
ALL = (Designed, Compiled, Build)
class ModelSourceField(fields.BaseEnumField):
AUTO_TYPE = ModelSource()
class SiteStatus(BaseDrydockEnum):
Unknown = 'unknown'
DesignStarted = 'design_started'
@ -120,40 +130,44 @@ class SiteStatus(BaseDrydockEnum):
ALL = (Unknown, Deploying, Deployed)
class SiteStatusField(fields.BaseEnumField):
AUTO_TYPE = SiteStatus()
class NodeStatus(BaseDrydockEnum):
Unknown = 'unknown'
Designed = 'designed'
Compiled = 'compiled' # Node attributes represent effective config after inheritance/merge
Present = 'present' # IPMI access verified
BasicVerifying = 'basic_verifying' # Base node verification in process
FailedBasicVerify = 'failed_basic_verify' # Base node verification failed
BasicVerified = 'basic_verified' # Base node verification successful
Preparing = 'preparing' # Node preparation in progress
FailedPrepare = 'failed_prepare' # Node preparation failed
Prepared = 'prepared' # Node preparation complete
FullyVerifying = 'fully_verifying' # Node full verification in progress
FailedFullVerify = 'failed_full_verify' # Node full verification failed
FullyVerified = 'fully_verified' # Deeper verification successful
Deploying = 'deploy' # Node deployment in progress
FailedDeploy = 'failed_deploy' # Node deployment failed
Deployed = 'deployed' # Node deployed successfully
Bootstrapping = 'bootstrapping' # Node bootstrapping
FailedBootstrap = 'failed_bootstrap' # Node bootstrapping failed
Bootstrapped = 'bootstrapped' # Node fully bootstrapped
Complete = 'complete' # Node is complete
ALL = (Unknown, Designed, Compiled, Present, BasicVerifying, FailedBasicVerify,
BasicVerified, Preparing, FailedPrepare, Prepared, FullyVerifying,
FailedFullVerify, FullyVerified, Deploying, FailedDeploy, Deployed,
Bootstrapping, FailedBootstrap, Bootstrapped, Complete)
ALL = (Unknown, Designed, Compiled, Present, BasicVerifying,
FailedBasicVerify, BasicVerified, Preparing, FailedPrepare,
Prepared, FullyVerifying, FailedFullVerify, FullyVerified,
Deploying, FailedDeploy, Deployed, Bootstrapping, FailedBootstrap,
Bootstrapped, Complete)
class NodeStatusField(fields.BaseEnumField):
AUTO_TYPE = NodeStatus()
class NetworkLinkBondingMode(BaseDrydockEnum):
Disabled = 'disabled'
LACP = '802.3ad'
@ -162,14 +176,17 @@ class NetworkLinkBondingMode(BaseDrydockEnum):
ALL = (Disabled, LACP, RoundRobin, Standby)
class NetworkLinkBondingModeField(fields.BaseEnumField):
AUTO_TYPE = NetworkLinkBondingMode()
class NetworkLinkTrunkingMode(BaseDrydockEnum):
Disabled = 'disabled'
Tagged = '802.1q'
ALL = (Disabled, Tagged)
class NetworkLinkTrunkingModeField(fields.BaseEnumField):
AUTO_TYPE = NetworkLinkTrunkingMode()
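Each enum pairs with a BaseEnumField so oslo.versionedobjects rejects values outside ALL at assignment time. A minimal sketch, assuming oslo.versionedobjects is installed and using that library's coerce signature::

    from oslo_versionedobjects import fields

    class TrunkMode(fields.Enum):
        ALL = ('disabled', '802.1q')

        def __init__(self):
            super(TrunkMode, self).__init__(valid_values=TrunkMode.ALL)

    class TrunkModeField(fields.BaseEnumField):
        AUTO_TYPE = TrunkMode()

    f = TrunkModeField()
    print(f.coerce(None, 'trunk_mode', '802.1q'))   # valid value passes through
    # f.coerce(None, 'trunk_mode', 'bogus') raises ValueError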


@ -39,14 +39,14 @@ class HostProfile(base.DrydockPersistentObject, base.DrydockObject):
# Consider a custom field for storage size
'bootdisk_root_size': obj_fields.StringField(nullable=True),
'bootdisk_boot_size': obj_fields.StringField(nullable=True),
'partitions': obj_fields.ObjectField('HostPartitionList',
nullable=True),
'interfaces': obj_fields.ObjectField('HostInterfaceList',
nullable=True),
'partitions': obj_fields.ObjectField(
'HostPartitionList', nullable=True),
'interfaces': obj_fields.ObjectField(
'HostInterfaceList', nullable=True),
'tags': obj_fields.ListOfStringsField(nullable=True),
'owner_data': obj_fields.DictOfStringsField(nullable=True),
'rack': obj_fields.StringField(nullable=True),
'base_os': obj_fields.StringField(nullable=True),
'image': obj_fields.StringField(nullable=True),
'kernel': obj_fields.StringField(nullable=True),
'kernel_params': obj_fields.DictOfStringsField(nullable=True),
@ -56,7 +56,6 @@ class HostProfile(base.DrydockPersistentObject, base.DrydockObject):
def __init__(self, **kwargs):
super(HostProfile, self).__init__(**kwargs)
def get_rack(self):
return self.rack
@ -70,7 +69,7 @@ class HostProfile(base.DrydockPersistentObject, base.DrydockObject):
def has_tag(self, tag):
if tag in self.tags:
return True
return False
def apply_inheritance(self, site_design):
@ -83,8 +82,8 @@ class HostProfile(base.DrydockPersistentObject, base.DrydockObject):
parent = site_design.get_host_profile(self.parent_profile)
if parent is None:
raise NameError("Cannot find parent profile %s for %s"
% (self.design['parent_profile'], self.name))
raise NameError("Cannot find parent profile %s for %s" %
(self.design['parent_profile'], self.name))
parent.apply_inheritance(site_design)
@ -92,43 +91,47 @@ class HostProfile(base.DrydockPersistentObject, base.DrydockObject):
inheritable_field_list = [
'hardware_profile', 'oob_type', 'storage_layout',
'bootdisk_device', 'bootdisk_root_size', 'bootdisk_boot_size',
'rack', 'base_os', 'image', 'kernel', 'primary_network']
'rack', 'base_os', 'image', 'kernel', 'primary_network'
]
# Create applied data from self design values and parent
# applied values
for f in inheritable_field_list:
setattr(self, f, objects.Utils.apply_field_inheritance(
getattr(self, f, None),
getattr(parent, f, None)))
setattr(self, f,
objects.Utils.apply_field_inheritance(
getattr(self, f, None), getattr(parent, f, None)))
# Now compute inheritance for complex types
self.oob_parameters = objects.Utils.merge_dicts(self.oob_parameters, parent.oob_parameters)
self.oob_parameters = objects.Utils.merge_dicts(
self.oob_parameters, parent.oob_parameters)
self.tags = objects.Utils.merge_lists(self.tags, parent.tags)
self.owner_data = objects.Utils.merge_dicts(self.owner_data, parent.owner_data)
self.owner_data = objects.Utils.merge_dicts(self.owner_data,
parent.owner_data)
self.kernel_params = objects.Utils.merge_dicts(self.kernel_params, parent.kernel_params)
self.kernel_params = objects.Utils.merge_dicts(self.kernel_params,
parent.kernel_params)
self.interfaces = HostInterfaceList.from_basic_list(
HostInterface.merge_lists(self.interfaces, parent.interfaces))
self.partitions = HostPartitionList.from_basic_list(
HostPartition.merge_lists(self.partitions, parent.partitions))
self.source = hd_fields.ModelSource.Compiled
return
@base.DrydockObjectRegistry.register
class HostProfileList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': obj_fields.ListOfObjectsField('HostProfile')
}
fields = {'objects': obj_fields.ListOfObjectsField('HostProfile')}
@base.DrydockObjectRegistry.register
class HostInterface(base.DrydockObject):
@ -136,13 +139,18 @@ class HostInterface(base.DrydockObject):
VERSION = '1.0'
fields = {
'device_name': obj_fields.StringField(),
'source': hd_fields.ModelSourceField(),
'network_link': obj_fields.StringField(nullable=True),
'hardware_slaves': obj_fields.ListOfStringsField(nullable=True),
'slave_selectors': obj_fields.ObjectField('HardwareDeviceSelectorList',
nullable=True),
'networks': obj_fields.ListOfStringsField(nullable=True),
'device_name':
obj_fields.StringField(),
'source':
hd_fields.ModelSourceField(),
'network_link':
obj_fields.StringField(nullable=True),
'hardware_slaves':
obj_fields.ListOfStringsField(nullable=True),
'slave_selectors':
obj_fields.ObjectField('HardwareDeviceSelectorList', nullable=True),
'networks':
obj_fields.ListOfStringsField(nullable=True),
}
def __init__(self, **kwargs):
@ -214,31 +222,34 @@ class HostInterface(base.DrydockObject):
elif j.get_name() == parent_name:
m = objects.HostInterface()
m.device_name = j.get_name()
m.network_link = \
objects.Utils.apply_field_inheritance(
getattr(j, 'network_link', None),
getattr(i, 'network_link', None))
s = [x for x
in getattr(i, 'hardware_slaves', [])
if ("!" + x) not in getattr(j, 'hardware_slaves', [])]
s = [
x for x in getattr(i, 'hardware_slaves', [])
if ("!" + x
) not in getattr(j, 'hardware_slaves', [])
]
s.extend(
[x for x
in getattr(j, 'hardware_slaves', [])
if not x.startswith("!")])
s.extend([
x for x in getattr(j, 'hardware_slaves', [])
if not x.startswith("!")
])
m.hardware_slaves = s
n = [x for x
in getattr(i, 'networks',[])
if ("!" + x) not in getattr(j, 'networks', [])]
n = [
x for x in getattr(i, 'networks', [])
if ("!" + x) not in getattr(j, 'networks', [])
]
n.extend(
[x for x
in getattr(j, 'networks', [])
if not x.startswith("!")])
n.extend([
x for x in getattr(j, 'networks', [])
if not x.startswith("!")
])
m.networks = n
m.source = hd_fields.ModelSource.Compiled
@ -254,21 +265,21 @@ class HostInterface(base.DrydockObject):
for j in child_list:
if (j.device_name not in parent_interfaces
and not j.get_name().startswith("!")):
and not j.get_name().startswith("!")):
jj = deepcopy(j)
jj.source = hd_fields.ModelSource.Compiled
effective_list.append(jj)
return effective_list
@base.DrydockObjectRegistry.register
class HostInterfaceList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': obj_fields.ListOfObjectsField('HostInterface')
}
fields = {'objects': obj_fields.ListOfObjectsField('HostInterface')}
@base.DrydockObjectRegistry.register
class HostPartition(base.DrydockObject):
@ -276,18 +287,28 @@ class HostPartition(base.DrydockObject):
VERSION = '1.0'
fields = {
'name': obj_fields.StringField(),
'source': hd_fields.ModelSourceField(),
'device': obj_fields.StringField(nullable=True),
'part_uuid': obj_fields.UUIDField(nullable=True),
'size': obj_fields.StringField(nullable=True),
'mountpoint': obj_fields.StringField(nullable=True),
'fstype': obj_fields.StringField(nullable=True, default='ext4'),
'mount_options': obj_fields.StringField(nullable=True, default='defaults'),
'fs_uuid': obj_fields.UUIDField(nullable=True),
'fs_label': obj_fields.StringField(nullable=True),
'selector': obj_fields.ObjectField('HardwareDeviceSelector',
nullable=True),
'name':
obj_fields.StringField(),
'source':
hd_fields.ModelSourceField(),
'device':
obj_fields.StringField(nullable=True),
'part_uuid':
obj_fields.UUIDField(nullable=True),
'size':
obj_fields.StringField(nullable=True),
'mountpoint':
obj_fields.StringField(nullable=True),
'fstype':
obj_fields.StringField(nullable=True, default='ext4'),
'mount_options':
obj_fields.StringField(nullable=True, default='defaults'),
'fs_uuid':
obj_fields.UUIDField(nullable=True),
'fs_label':
obj_fields.StringField(nullable=True),
'selector':
obj_fields.ObjectField('HardwareDeviceSelector', nullable=True),
}
def __init__(self, **kwargs):
@ -299,7 +320,7 @@ class HostPartition(base.DrydockObject):
# HostPartition keyed by name
def get_id(self):
return self.get_name()
def get_name(self):
return self.name
@ -340,9 +361,10 @@ class HostPartition(base.DrydockObject):
ii.source = hd_fields.ModelSource.Compiled
effective_list.append(ii)
elif len(parent_list) > 0 and len(child_list) > 0:
inherit_field_list = ["device", "part_uuid", "size",
"mountpoint", "fstype", "mount_options",
"fs_uuid", "fs_label"]
inherit_field_list = [
"device", "part_uuid", "size", "mountpoint", "fstype",
"mount_options", "fs_uuid", "fs_label"
]
parent_partitions = []
for i in parent_list:
parent_name = i.get_name()
@ -358,8 +380,9 @@ class HostPartition(base.DrydockObject):
for f in inherit_field_list:
setattr(p, f,
objects.Utils.apply_field_inheritance(getattr(j, f, None),
getattr(i, f, None)))
objects.Utils.apply_field_inheritance(
getattr(j, f, None),
getattr(i, f, None)))
add = False
p.source = hd_fields.ModelSource.Compiled
effective_list.append(p)
@ -369,8 +392,8 @@ class HostPartition(base.DrydockObject):
effective_list.append(ii)
for j in child_list:
if (j.get_name() not in parent_list and
not j.get_name().startswith("!")):
if (j.get_name() not in parent_list
and not j.get_name().startswith("!")):
jj = deepcopy(j)
jj.source = hd_fields.ModelSource.Compiled
effective_list.append(jj)
@ -383,6 +406,4 @@ class HostPartitionList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': obj_fields.ListOfObjectsField('HostPartition')
}
fields = {'objects': obj_fields.ListOfObjectsField('HostPartition')}


@ -20,24 +20,35 @@ import drydock_provisioner.objects as objects
import drydock_provisioner.objects.base as base
import drydock_provisioner.objects.fields as hd_fields
@base.DrydockObjectRegistry.register
class HardwareProfile(base.DrydockPersistentObject, base.DrydockObject):
VERSION = '1.0'
fields = {
'name': ovo_fields.StringField(),
'source': hd_fields.ModelSourceField(),
'site': ovo_fields.StringField(),
'vendor': ovo_fields.StringField(nullable=True),
'generation': ovo_fields.StringField(nullable=True),
'hw_version': ovo_fields.StringField(nullable=True),
'bios_version': ovo_fields.StringField(nullable=True),
'boot_mode': ovo_fields.StringField(nullable=True),
'bootstrap_protocol': ovo_fields.StringField(nullable=True),
'pxe_interface': ovo_fields.StringField(nullable=True),
'devices': ovo_fields.ObjectField('HardwareDeviceAliasList',
nullable=True),
'name':
ovo_fields.StringField(),
'source':
hd_fields.ModelSourceField(),
'site':
ovo_fields.StringField(),
'vendor':
ovo_fields.StringField(nullable=True),
'generation':
ovo_fields.StringField(nullable=True),
'hw_version':
ovo_fields.StringField(nullable=True),
'bios_version':
ovo_fields.StringField(nullable=True),
'boot_mode':
ovo_fields.StringField(nullable=True),
'bootstrap_protocol':
ovo_fields.StringField(nullable=True),
'pxe_interface':
ovo_fields.StringField(nullable=True),
'devices':
ovo_fields.ObjectField('HardwareDeviceAliasList', nullable=True),
}
def __init__(self, **kwargs):
@ -51,7 +62,7 @@ class HardwareProfile(base.DrydockPersistentObject, base.DrydockObject):
def get_name(self):
return self.name
def resolve_alias(self, alias_type, alias):
for d in self.devices:
if d.alias == alias and d.bus_type == alias_type:
@ -63,14 +74,14 @@ class HardwareProfile(base.DrydockPersistentObject, base.DrydockObject):
return None
@base.DrydockObjectRegistry.register
class HardwareProfileList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('HardwareProfile')
}
fields = {'objects': ovo_fields.ListOfObjectsField('HardwareProfile')}
@base.DrydockObjectRegistry.register
class HardwareDeviceAlias(base.DrydockObject):
@ -78,9 +89,9 @@ class HardwareDeviceAlias(base.DrydockObject):
VERSION = '1.0'
fields = {
'alias': ovo_fields.StringField(),
'source': hd_fields.ModelSourceField(),
'address': ovo_fields.StringField(),
'bus_type': ovo_fields.StringField(),
'dev_type': ovo_fields.StringField(nullable=True),
}
@ -91,15 +102,15 @@ class HardwareDeviceAlias(base.DrydockObject):
# HardwareDeviceAlias keyed on alias
def get_id(self):
return self.alias
@base.DrydockObjectRegistry.register
class HardwareDeviceAliasList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('HardwareDeviceAlias')
}
fields = {'objects': ovo_fields.ListOfObjectsField('HardwareDeviceAlias')}
@base.DrydockObjectRegistry.register
class HardwareDeviceSelector(base.DrydockObject):
@ -107,19 +118,21 @@ class HardwareDeviceSelector(base.DrydockObject):
VERSION = '1.0'
fields = {
'selector_type': ovo_fields.StringField(),
'address': ovo_fields.StringField(),
'device_type': ovo_fields.StringField()
}
def __init__(self, **kwargs):
super(HardwareDeviceSelector, self).__init__(**kwargs)
@base.DrydockObjectRegistry.register
class HardwareDeviceSelectorList(base.DrydockObjectListBase, base.DrydockObject):
class HardwareDeviceSelectorList(base.DrydockObjectListBase,
base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('HardwareDeviceSelector')
}


@ -24,28 +24,43 @@ import drydock_provisioner.objects as objects
import drydock_provisioner.objects.base as base
import drydock_provisioner.objects.fields as hd_fields
@base.DrydockObjectRegistry.register
class NetworkLink(base.DrydockPersistentObject, base.DrydockObject):
VERSION = '1.0'
fields = {
'name': ovo_fields.StringField(),
'site': ovo_fields.StringField(),
'metalabels': ovo_fields.ListOfStringsField(nullable=True),
'bonding_mode': hd_fields.NetworkLinkBondingModeField(
default=hd_fields.NetworkLinkBondingMode.Disabled),
'bonding_xmit_hash': ovo_fields.StringField(nullable=True, default='layer3+4'),
'bonding_peer_rate': ovo_fields.StringField(nullable=True, default='slow'),
'bonding_mon_rate': ovo_fields.IntegerField(nullable=True, default=100),
'bonding_up_delay': ovo_fields.IntegerField(nullable=True, default=200),
'bonding_down_delay': ovo_fields.IntegerField(nullable=True, default=200),
'mtu': ovo_fields.IntegerField(default=1500),
'linkspeed': ovo_fields.StringField(default='auto'),
'trunk_mode': hd_fields.NetworkLinkTrunkingModeField(
default=hd_fields.NetworkLinkTrunkingMode.Disabled),
'native_network': ovo_fields.StringField(nullable=True),
'allowed_networks': ovo_fields.ListOfStringsField(),
'name':
ovo_fields.StringField(),
'site':
ovo_fields.StringField(),
'metalabels':
ovo_fields.ListOfStringsField(nullable=True),
'bonding_mode':
hd_fields.NetworkLinkBondingModeField(
default=hd_fields.NetworkLinkBondingMode.Disabled),
'bonding_xmit_hash':
ovo_fields.StringField(nullable=True, default='layer3+4'),
'bonding_peer_rate':
ovo_fields.StringField(nullable=True, default='slow'),
'bonding_mon_rate':
ovo_fields.IntegerField(nullable=True, default=100),
'bonding_up_delay':
ovo_fields.IntegerField(nullable=True, default=200),
'bonding_down_delay':
ovo_fields.IntegerField(nullable=True, default=200),
'mtu':
ovo_fields.IntegerField(default=1500),
'linkspeed':
ovo_fields.StringField(default='auto'),
'trunk_mode':
hd_fields.NetworkLinkTrunkingModeField(
default=hd_fields.NetworkLinkTrunkingMode.Disabled),
'native_network':
ovo_fields.StringField(nullable=True),
'allowed_networks':
ovo_fields.ListOfStringsField(),
}
def __init__(self, **kwargs):
@ -65,7 +80,7 @@ class NetworkLinkList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('NetworkLink'),
'objects': ovo_fields.ListOfObjectsField('NetworkLink'),
}
@ -75,19 +90,19 @@ class Network(base.DrydockPersistentObject, base.DrydockObject):
VERSION = '1.0'
fields = {
'name': ovo_fields.StringField(),
'site': ovo_fields.StringField(),
'metalabels': ovo_fields.ListOfStringsField(nullable=True),
'cidr': ovo_fields.StringField(),
'allocation_strategy': ovo_fields.StringField(),
'vlan_id': ovo_fields.StringField(nullable=True),
'mtu': ovo_fields.IntegerField(nullable=True),
'dns_domain': ovo_fields.StringField(nullable=True),
'dns_servers': ovo_fields.StringField(nullable=True),
'name': ovo_fields.StringField(),
'site': ovo_fields.StringField(),
'metalabels': ovo_fields.ListOfStringsField(nullable=True),
'cidr': ovo_fields.StringField(),
'allocation_strategy': ovo_fields.StringField(),
'vlan_id': ovo_fields.StringField(nullable=True),
'mtu': ovo_fields.IntegerField(nullable=True),
'dns_domain': ovo_fields.StringField(nullable=True),
'dns_servers': ovo_fields.StringField(nullable=True),
# Keys of ranges are 'type', 'start', 'end'
'ranges': ovo_fields.ListOfDictOfNullableStringsField(),
'ranges': ovo_fields.ListOfDictOfNullableStringsField(),
# Keys of routes are 'subnet', 'gateway', 'metric'
'routes': ovo_fields.ListOfDictOfNullableStringsField(),
'routes': ovo_fields.ListOfDictOfNullableStringsField(),
}
def __init__(self, **kwargs):
@ -96,25 +111,26 @@ class Network(base.DrydockPersistentObject, base.DrydockObject):
# Network keyed on name
def get_id(self):
return self.get_name()
def get_name(self):
return self.name
def get_default_gateway(self):
for r in getattr(self,'routes', []):
for r in getattr(self, 'routes', []):
if r.get('subnet', '') == '0.0.0.0/0':
return r.get('gateway', None)
return None
@base.DrydockObjectRegistry.register
class NetworkList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('Network'),
'objects': ovo_fields.ListOfObjectsField('Network'),
}
def __init__(self, **kwargs):
super(NetworkList, self).__init__(**kwargs)
super(NetworkList, self).__init__(**kwargs)
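
get_default_gateway above works because the default route is, by convention, the route whose subnet is 0.0.0.0/0. A standalone sketch with plain dicts, using the route keys documented in the fields ('subnet', 'gateway', 'metric'):

def get_default_gateway(routes):
    # Same scan as Network.get_default_gateway above.
    for r in routes or []:
        if r.get('subnet', '') == '0.0.0.0/0':
            return r.get('gateway', None)
    return None

routes = [
    {'subnet': '10.0.0.0/24', 'gateway': '10.0.0.1', 'metric': '100'},
    {'subnet': '0.0.0.0/0', 'gateway': '10.0.0.254', 'metric': '10'},
]
assert get_default_gateway(routes) == '10.0.0.254'
assert get_default_gateway([]) is None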

View File

@ -25,14 +25,15 @@ import drydock_provisioner.objects.hostprofile
import drydock_provisioner.objects.base as base
import drydock_provisioner.objects.fields as hd_fields
@base.DrydockObjectRegistry.register
class BaremetalNode(drydock_provisioner.objects.hostprofile.HostProfile):
VERSION = '1.0'
fields = {
'addressing': ovo_fields.ObjectField('IpAddressAssignmentList'),
'boot_mac': ovo_fields.StringField(nullable=True),
'addressing': ovo_fields.ObjectField('IpAddressAssignmentList'),
'boot_mac': ovo_fields.StringField(nullable=True),
}
# A BaremetalNode is really nothing more than a physical
@ -76,7 +77,7 @@ class BaremetalNode(drydock_provisioner.objects.hostprofile.HostProfile):
if selector is None:
selector = objects.HardwareDeviceSelector()
selector.selector_type = 'name'
selector.address = p.get_device()
selector.address = p.get_device()
p.set_selector(selector)
return
@ -88,10 +89,9 @@ class BaremetalNode(drydock_provisioner.objects.hostprofile.HostProfile):
return None
def get_network_address(self, network_name):
for a in getattr(self, 'addressing', []):
if a.network == network_name:
if a.network == network_name:
return a.address
return None
@ -102,9 +102,7 @@ class BaremetalNodeList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('BaremetalNode')
}
fields = {'objects': ovo_fields.ListOfObjectsField('BaremetalNode')}
@base.DrydockObjectRegistry.register
@ -113,9 +111,9 @@ class IpAddressAssignment(base.DrydockObject):
VERSION = '1.0'
fields = {
'type': ovo_fields.StringField(),
'address': ovo_fields.StringField(nullable=True),
'network': ovo_fields.StringField(),
'type': ovo_fields.StringField(),
'address': ovo_fields.StringField(nullable=True),
'network': ovo_fields.StringField(),
}
def __init__(self, **kwargs):
@ -125,11 +123,10 @@ class IpAddressAssignment(base.DrydockObject):
def get_id(self):
return self.network
@base.DrydockObjectRegistry.register
class IpAddressAssignmentList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('IpAddressAssignment')
}
fields = {'objects': ovo_fields.ListOfObjectsField('IpAddressAssignment')}
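
get_network_address above resolves which IP a node holds on a named network by scanning its addressing list. A sketch with dicts standing in for IpAddressAssignment objects, using that model's 'type', 'network', and 'address' fields:

def get_network_address(addressing, network_name):
    # First assignment matching the named network wins, as in the method above.
    for a in addressing:
        if a['network'] == network_name:
            return a['address']
    return None

addressing = [{'type': 'static', 'network': 'oob', 'address': '172.16.100.11'}]
assert get_network_address(addressing, 'oob') == '172.16.100.11'
assert get_network_address(addressing, 'pxe') is None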

View File

@ -18,6 +18,7 @@ import drydock_provisioner.objects as objects
import drydock_provisioner.objects.base as base
import drydock_provisioner.objects.fields as hd_fields
@base.DrydockObjectRegistry.register
class PromenadeConfig(base.DrydockPersistentObject, base.DrydockObject):
@ -42,14 +43,15 @@ class PromenadeConfig(base.DrydockPersistentObject, base.DrydockObject):
def get_name(self):
return self.name
@base.DrydockObjectRegistry.register
class PromenadeConfigList(base.DrydockObjectListBase, base.DrydockObject):
VERSION = '1.0'
fields = {
'objects': ovo_fields.ListOfObjectsField('PromenadeConfig'),
}
'objects': ovo_fields.ListOfObjectsField('PromenadeConfig'),
}
def select_for_target(self, target):
"""
@ -59,4 +61,3 @@ class PromenadeConfigList(base.DrydockObjectListBase, base.DrydockObject):
"""
return [x for x in self.objects if x.target == target]
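
select_for_target above is a straight filter on each config's target attribute. A sketch with a namedtuple stand-in for PromenadeConfig; the 'kubelet' and 'node01' values are illustrative only:

from collections import namedtuple

PromConfig = namedtuple('PromConfig', ['name', 'target'])

def select_for_target(configs, target):
    # Same comprehension as PromenadeConfigList.select_for_target.
    return [x for x in configs if x.target == target]

configs = [PromConfig('kubelet', 'node01'), PromConfig('kubelet', 'node02')]
assert [c.target for c in select_for_target(configs, 'node01')] == ['node01']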

View File

@ -20,6 +20,7 @@ import datetime
import oslo_versionedobjects.fields as ovo_fields
import drydock_provisioner.error as errors
import drydock_provisioner.objects as objects
import drydock_provisioner.objects.base as base
import drydock_provisioner.objects.fields as hd_fields
@ -31,13 +32,18 @@ class Site(base.DrydockPersistentObject, base.DrydockObject):
VERSION = '1.0'
fields = {
'name': ovo_fields.StringField(),
'status': hd_fields.SiteStatusField(default=hd_fields.SiteStatus.Unknown),
'source': hd_fields.ModelSourceField(),
'tag_definitions': ovo_fields.ObjectField('NodeTagDefinitionList',
nullable=True),
'repositories': ovo_fields.ObjectField('RepositoryList', nullable=True),
'authorized_keys': ovo_fields.ListOfStringsField(nullable=True),
'name':
ovo_fields.StringField(),
'status':
hd_fields.SiteStatusField(default=hd_fields.SiteStatus.Unknown),
'source':
hd_fields.ModelSourceField(),
'tag_definitions':
ovo_fields.ObjectField('NodeTagDefinitionList', nullable=True),
'repositories':
ovo_fields.ObjectField('RepositoryList', nullable=True),
'authorized_keys':
ovo_fields.ListOfStringsField(nullable=True),
}
def __init__(self, **kwargs):
@ -55,6 +61,7 @@ class Site(base.DrydockPersistentObject, base.DrydockObject):
def add_key(self, key_string):
self.authorized_keys.append(key_string)
@base.DrydockObjectRegistry.register
class NodeTagDefinition(base.DrydockObject):
@ -64,7 +71,7 @@ class NodeTagDefinition(base.DrydockObject):
'tag': ovo_fields.StringField(),
'type': ovo_fields.StringField(),
'definition': ovo_fields.StringField(),
'source': hd_fields.ModelSourceField(),
'source': hd_fields.ModelSourceField(),
}
def __init__(self, **kwargs):
@ -74,6 +81,7 @@ class NodeTagDefinition(base.DrydockObject):
def get_id(self):
return self.tag
@base.DrydockObjectRegistry.register
class NodeTagDefinitionList(base.DrydockObjectListBase, base.DrydockObject):
@ -83,6 +91,7 @@ class NodeTagDefinitionList(base.DrydockObjectListBase, base.DrydockObject):
'objects': ovo_fields.ListOfObjectsField('NodeTagDefinition'),
}
# Need to determine how best to define a repository that can encompass
# all repositories needed
@base.DrydockObjectRegistry.register
@ -101,6 +110,7 @@ class Repository(base.DrydockObject):
def get_id(self):
return self.name
@base.DrydockObjectRegistry.register
class RepositoryList(base.DrydockObjectListBase, base.DrydockObject):
@ -110,23 +120,34 @@ class RepositoryList(base.DrydockObjectListBase, base.DrydockObject):
'objects': ovo_fields.ListOfObjectsField('Repository'),
}
@base.DrydockObjectRegistry.register
class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
VERSION = '1.0'
fields = {
'id': ovo_fields.UUIDField(),
'id':
ovo_fields.UUIDField(),
# if null, indicates this is the site base design
'base_design_id': ovo_fields.UUIDField(nullable=True),
'source': hd_fields.ModelSourceField(),
'site': ovo_fields.ObjectField('Site', nullable=True),
'networks': ovo_fields.ObjectField('NetworkList', nullable=True),
'network_links': ovo_fields.ObjectField('NetworkLinkList', nullable=True),
'host_profiles': ovo_fields.ObjectField('HostProfileList', nullable=True),
'hardware_profiles': ovo_fields.ObjectField('HardwareProfileList', nullable=True),
'baremetal_nodes': ovo_fields.ObjectField('BaremetalNodeList', nullable=True),
'prom_configs': ovo_fields.ObjectField('PromenadeConfigList', nullable=True),
'base_design_id':
ovo_fields.UUIDField(nullable=True),
'source':
hd_fields.ModelSourceField(),
'site':
ovo_fields.ObjectField('Site', nullable=True),
'networks':
ovo_fields.ObjectField('NetworkList', nullable=True),
'network_links':
ovo_fields.ObjectField('NetworkLinkList', nullable=True),
'host_profiles':
ovo_fields.ObjectField('HostProfileList', nullable=True),
'hardware_profiles':
ovo_fields.ObjectField('HardwareProfileList', nullable=True),
'baremetal_nodes':
ovo_fields.ObjectField('BaremetalNodeList', nullable=True),
'prom_configs':
ovo_fields.ObjectField('PromenadeConfigList', nullable=True),
}
def __init__(self, **kwargs):
@ -143,13 +164,13 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
def get_site(self):
return self.site
def set_site(self, site):
self.site = site
def add_network(self, new_network):
if new_network is None:
raise DesignError("Invalid Network model")
raise errors.DesignError("Invalid Network model")
if self.networks is None:
self.networks = objects.NetworkList()
@ -161,12 +182,11 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
if n.get_id() == network_key:
return n
raise DesignError("Network %s not found in design state"
% network_key)
raise errors.DesignError("Network %s not found in design state" % network_key)
def add_network_link(self, new_network_link):
if new_network_link is None:
raise DesignError("Invalid NetworkLink model")
raise errors.DesignError("Invalid NetworkLink model")
if self.network_links is None:
self.network_links = objects.NetworkLinkList()
@ -178,12 +198,12 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
if l.get_id() == link_key:
return l
raise DesignError("NetworkLink %s not found in design state"
% link_key)
raise errors.DesignError(
"NetworkLink %s not found in design state" % link_key)
def add_host_profile(self, new_host_profile):
if new_host_profile is None:
raise DesignError("Invalid HostProfile model")
raise errors.DesignError("Invalid HostProfile model")
if self.host_profiles is None:
self.host_profiles = objects.HostProfileList()
@ -195,12 +215,12 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
if p.get_id() == profile_key:
return p
raise DesignError("HostProfile %s not found in design state"
% profile_key)
raise errors.DesignError(
"HostProfile %s not found in design state" % profile_key)
def add_hardware_profile(self, new_hardware_profile):
if new_hardware_profile is None:
raise DesignError("Invalid HardwareProfile model")
raise errors.DesignError("Invalid HardwareProfile model")
if self.hardware_profiles is None:
self.hardware_profiles = objects.HardwareProfileList()
@ -212,12 +232,12 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
if p.get_id() == profile_key:
return p
raise DesignError("HardwareProfile %s not found in design state"
% profile_key)
raise errors.DesignError(
"HardwareProfile %s not found in design state" % profile_key)
def add_baremetal_node(self, new_baremetal_node):
if new_baremetal_node is None:
raise DesignError("Invalid BaremetalNode model")
raise errors.DesignError("Invalid BaremetalNode model")
if self.baremetal_nodes is None:
self.baremetal_nodes = objects.BaremetalNodeList()
@ -229,8 +249,8 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
if n.get_id() == node_key:
return n
raise DesignError("BaremetalNode %s not found in design state"
% node_key)
raise errors.DesignError(
"BaremetalNode %s not found in design state" % node_key)
def add_promenade_config(self, prom_conf):
if self.prom_configs is None:
@ -270,6 +290,7 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
values. The final result is an intersection of all the
filters
"""
def get_filtered_nodes(self, node_filter):
effective_nodes = self.baremetal_nodes
@ -278,26 +299,24 @@ class SiteDesign(base.DrydockPersistentObject, base.DrydockObject):
if rack_filter is not None:
rack_list = rack_filter.split(',')
effective_nodes = [x
for x in effective_nodes
if x.get_rack() in rack_list]
effective_nodes = [
x for x in effective_nodes if x.get_rack() in rack_list
]
# filter by name
name_filter = node_filter.get('nodename', None)
if name_filter is not None:
name_list = name_filter.split(',')
effective_nodes = [x
for x in effective_nodes
if x.get_name() in name_list]
effective_nodes = [
x for x in effective_nodes if x.get_name() in name_list
]
# filter by tag
tag_filter = node_filter.get('tags', None)
if tag_filter is not None:
tag_list = tag_filter.split(',')
effective_nodes = [x
for x in effective_nodes
for t in tag_list
if x.has_tag(t)]
effective_nodes = [
x for x in effective_nodes for t in tag_list if x.has_tag(t)
]
return effective_nodes
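
get_filtered_nodes above applies each supplied filter in turn, so the result is the intersection of the rack, name, and tag filters. A self-contained sketch; the 'rackname' key is an assumption (its lookup is elided in the hunk), while 'nodename' and 'tags' appear above:

class Node:
    def __init__(self, name, rack, tags):
        self.name, self.rack, self.tags = name, rack, set(tags)

    def get_name(self):
        return self.name

    def get_rack(self):
        return self.rack

    def has_tag(self, tag):
        return tag in self.tags

def get_filtered_nodes(nodes, node_filter):
    effective_nodes = nodes
    rack_filter = node_filter.get('rackname', None)  # assumed key
    if rack_filter is not None:
        rack_list = rack_filter.split(',')
        effective_nodes = [x for x in effective_nodes if x.get_rack() in rack_list]
    name_filter = node_filter.get('nodename', None)
    if name_filter is not None:
        name_list = name_filter.split(',')
        effective_nodes = [x for x in effective_nodes if x.get_name() in name_list]
    tag_filter = node_filter.get('tags', None)
    if tag_filter is not None:
        tag_list = tag_filter.split(',')
        # Same double comprehension as above; a node can appear once per matching tag.
        effective_nodes = [x for x in effective_nodes for t in tag_list if x.has_tag(t)]
    return effective_nodes

nodes = [Node('n1', 'r1', ['compute']), Node('n2', 'r2', ['compute'])]
assert [n.get_name() for n in get_filtered_nodes(nodes, {'rackname': 'r1'})] == ['n1']
assert len(get_filtered_nodes(nodes, {'tags': 'compute'})) == 2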

View File

@ -19,8 +19,8 @@ import drydock_provisioner.error as errors
import drydock_provisioner.objects.fields as hd_fields
class Task(object):
class Task(object):
def __init__(self, **kwargs):
self.task_id = uuid.uuid4()
self.status = hd_fields.TaskStatus.Created
@ -31,7 +31,7 @@ class Task(object):
self.result_detail = None
self.action = kwargs.get('action', hd_fields.OrchestratorAction.Noop)
self.parent_task_id = kwargs.get('parent_task_id','')
self.parent_task_id = kwargs.get('parent_task_id', '')
def get_id(self):
return self.task_id
@ -68,26 +68,28 @@ class Task(object):
def to_dict(self):
return {
'task_id': str(self.task_id),
'action': self.action,
'task_id': str(self.task_id),
'action': self.action,
'parent_task': str(self.parent_task_id),
'status': self.status,
'result': self.result,
'status': self.status,
'result': self.result,
'result_detail': self.result_detail,
'subtasks': [str(x) for x in self.subtasks],
}
class OrchestratorTask(Task):
class OrchestratorTask(Task):
def __init__(self, design_id=None, **kwargs):
super(OrchestratorTask, self).__init__(**kwargs)
self.design_id = design_id
if self.action in [hd_fields.OrchestratorAction.VerifyNode,
hd_fields.OrchestratorAction.PrepareNode,
hd_fields.OrchestratorAction.DeployNode,
hd_fields.OrchestratorAction.DestroyNode]:
if self.action in [
hd_fields.OrchestratorAction.VerifyNode,
hd_fields.OrchestratorAction.PrepareNode,
hd_fields.OrchestratorAction.DeployNode,
hd_fields.OrchestratorAction.DestroyNode
]:
self.node_filter = kwargs.get('node_filter', None)
def to_dict(self):
@ -98,6 +100,7 @@ class OrchestratorTask(Task):
return _dict
class DriverTask(Task):
def __init__(self, task_scope={}, **kwargs):
super(DriverTask, self).__init__(**kwargs)
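
The OrchestratorTask constructor above only honors a node_filter for the four node-scoped actions. A sketch of that gating, with plain strings standing in for the hd_fields.OrchestratorAction values:

NODE_SCOPED_ACTIONS = {'VerifyNode', 'PrepareNode', 'DeployNode', 'DestroyNode'}

class SketchTask:
    def __init__(self, action, **kwargs):
        self.action = action
        self.node_filter = None
        # Only node-scoped actions carry a node filter, as above.
        if action in NODE_SCOPED_ACTIONS:
            self.node_filter = kwargs.get('node_filter', None)

assert SketchTask('PrepareNode', node_filter={'node_names': ['n1']}).node_filter == {'node_names': ['n1']}
assert SketchTask('Noop', node_filter={'node_names': ['n1']}).node_filter is None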

View File

@ -1,4 +1,3 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -12,13 +11,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
import time
import threading
import importlib
import logging
from copy import deepcopy
from oslo_config import cfg
import drydock_provisioner.drivers as drivers
@ -26,6 +22,7 @@ import drydock_provisioner.objects.task as tasks
import drydock_provisioner.error as errors
import drydock_provisioner.objects.fields as hd_fields
class Orchestrator(object):
# enabled_drivers is a map of which provider drivers
@ -52,8 +49,10 @@ class Orchestrator(object):
if oob_driver_class is not None:
if self.enabled_drivers.get('oob', None) is None:
self.enabled_drivers['oob'] = []
self.enabled_drivers['oob'].append(oob_driver_class(state_manager=state_manager,
orchestrator=self))
self.enabled_drivers['oob'].append(
oob_driver_class(
state_manager=state_manager,
orchestrator=self))
node_driver_name = enabled_drivers.node_driver
if node_driver_name is not None:
@ -61,18 +60,17 @@ class Orchestrator(object):
node_driver_class = \
getattr(importlib.import_module(m), c, None)
if node_driver_class is not None:
self.enabled_drivers['node'] = node_driver_class(state_manager=state_manager,
orchestrator=self)
self.enabled_drivers['node'] = node_driver_class(
state_manager=state_manager, orchestrator=self)
network_driver_name = enabled_drivers.network_driver
if network_driver_name is not None:
m, c = network_driver_name.rsplit('.', 1)
network_driver_class = \
getattr(importlib.import_module(m), c, None)
if network_driver_class is not None:
self.enabled_drivers['network'] = network_driver_class(state_manager=state_manager,
orchestrator=self)
self.enabled_drivers['network'] = network_driver_class(
state_manager=state_manager, orchestrator=self)
"""
execute_task
@ -82,120 +80,144 @@ class Orchestrator(object):
the current designed state and current built state from the statemgmt
module. Based on those 3 inputs, we'll decide what is needed next.
"""
def execute_task(self, task_id):
if self.state_manager is None:
raise errors.OrchestratorError("Cannot execute task without" \
raise errors.OrchestratorError("Cannot execute task without"
" initialized state manager")
task = self.state_manager.get_task(task_id)
if task is None:
raise errors.OrchestratorError("Task %s not found."
% (task_id))
raise errors.OrchestratorError("Task %s not found." % (task_id))
design_id = task.design_id
# Just for testing now, need to implement with enabled_drivers
# logic
if task.action == hd_fields.OrchestratorAction.Noop:
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Running)
self.task_field_update(
task_id, status=hd_fields.TaskStatus.Running)
driver_task = self.create_task(tasks.DriverTask,
design_id=0,
action=hd_fields.OrchestratorAction.Noop,
parent_task_id=task.get_id())
driver_task = self.create_task(
tasks.DriverTask,
design_id=0,
action=hd_fields.OrchestratorAction.Noop,
parent_task_id=task.get_id())
driver = drivers.ProviderDriver(state_manager=self.state_manager,
orchestrator=self)
driver = drivers.ProviderDriver(
state_manager=self.state_manager, orchestrator=self)
driver.execute_task(driver_task.get_id())
driver_task = self.state_manager.get_task(driver_task.get_id())
self.task_field_update(task_id, status=driver_task.get_status())
return
elif task.action == hd_fields.OrchestratorAction.ValidateDesign:
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Running)
self.task_field_update(
task_id, status=hd_fields.TaskStatus.Running)
try:
site_design = self.get_effective_site(design_id)
self.task_field_update(task_id,
result=hd_fields.ActionResult.Success)
except:
self.task_field_update(task_id,
result=hd_fields.ActionResult.Failure)
self.task_field_update(task_id, status=hd_fields.TaskStatus.Complete)
self.task_field_update(
task_id, result=hd_fields.ActionResult.Success)
except Exception:
self.task_field_update(
task_id, result=hd_fields.ActionResult.Failure)
self.task_field_update(
task_id, status=hd_fields.TaskStatus.Complete)
return
elif task.action == hd_fields.OrchestratorAction.VerifySite:
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Running)
self.task_field_update(
task_id, status=hd_fields.TaskStatus.Running)
node_driver = self.enabled_drivers['node']
if node_driver is not None:
node_driver_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ValidateNodeServices)
node_driver_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ValidateNodeServices)
node_driver.execute_task(node_driver_task.get_id())
node_driver_task = self.state_manager.get_task(node_driver_task.get_id())
node_driver_task = self.state_manager.get_task(
node_driver_task.get_id())
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Complete,
result=node_driver_task.get_result())
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Complete,
result=node_driver_task.get_result())
return
elif task.action == hd_fields.OrchestratorAction.PrepareSite:
driver = self.enabled_drivers['node']
if driver is None:
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Errored,
result=hd_fields.ActionResult.Failure)
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Errored,
result=hd_fields.ActionResult.Failure)
return
worked = failed = False
site_network_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.CreateNetworkTemplate)
site_network_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.CreateNetworkTemplate)
self.logger.info("Starting node driver task %s to create network templates" % (site_network_task.get_id()))
self.logger.info(
"Starting node driver task %s to create network templates" %
(site_network_task.get_id()))
driver.execute_task(site_network_task.get_id())
site_network_task = self.state_manager.get_task(site_network_task.get_id())
site_network_task = self.state_manager.get_task(
site_network_task.get_id())
if site_network_task.get_result() in [hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess]:
if site_network_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
if site_network_task.get_result() in [hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess]:
if site_network_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
self.logger.info("Node driver task %s complete" % (site_network_task.get_id()))
self.logger.info("Node driver task %s complete" %
(site_network_task.get_id()))
user_creds_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ConfigureUserCredentials)
user_creds_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ConfigureUserCredentials)
self.logger.info("Starting node driver task %s to configure user credentials" % (user_creds_task.get_id()))
self.logger.info(
"Starting node driver task %s to configure user credentials" %
(user_creds_task.get_id()))
driver.execute_task(user_creds_task.get_id())
self.logger.info("Node driver task %s complete" % (site_network_task.get_id()))
self.logger.info("Node driver task %s complete" %
(user_creds_task.get_id()))
user_creds_task = self.state_manager.get_task(site_network_task.get_id())
user_creds_task = self.state_manager.get_task(
user_creds_task.get_id())
if user_creds_task.get_result() in [hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess]:
if user_creds_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
if user_creds_task.get_result() in [hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess]:
if user_creds_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
if worked and failed:
@ -205,12 +227,14 @@ class Orchestrator(object):
else:
final_result = hd_fields.ActionResult.Failure
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result)
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result)
return
elif task.action == hd_fields.OrchestratorAction.VerifyNode:
self.task_field_update(task_id, status=hd_fields.TaskStatus.Running)
self.task_field_update(
task_id, status=hd_fields.TaskStatus.Running)
site_design = self.get_effective_site(design_id)
@ -229,7 +253,7 @@ class Orchestrator(object):
result_detail = {'detail': []}
worked = failed = False
# TODO Need to multithread tasks for different OOB types
# TODO(sh8121att) Need to multithread tasks for different OOB types
for oob_type, oob_nodes in oob_type_partition.items():
oob_driver = None
for d in self.enabled_drivers['oob']:
@ -238,33 +262,42 @@ class Orchestrator(object):
break
if oob_driver is None:
self.logger.warning("Node OOB type %s has no enabled driver." % oob_type)
result_detail['detail'].append("Error: No oob driver configured for type %s" % oob_type)
self.logger.warning(
"Node OOB type %s has no enabled driver." % oob_type)
result_detail['detail'].append(
"Error: No oob driver configured for type %s" %
oob_type)
continue
target_names = [x.get_name() for x in oob_nodes]
task_scope = {'node_names' : target_names}
task_scope = {'node_names': target_names}
oob_driver_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.InterrogateOob,
task_scope=task_scope)
oob_driver_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.InterrogateOob,
task_scope=task_scope)
self.logger.info("Starting task %s for node verification via OOB type %s" %
(oob_driver_task.get_id(), oob_type))
self.logger.info(
"Starting task %s for node verification via OOB type %s" %
(oob_driver_task.get_id(), oob_type))
oob_driver.execute_task(oob_driver_task.get_id())
oob_driver_task = self.state_manager.get_task(oob_driver_task.get_id())
oob_driver_task = self.state_manager.get_task(
oob_driver_task.get_id())
if oob_driver_task.get_result() in [hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess]:
if oob_driver_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
if oob_driver_task.get_result() in [hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess]:
if oob_driver_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
final_result = None
@ -276,27 +309,33 @@ class Orchestrator(object):
else:
final_result = hd_fields.ActionResult.Failure
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result,
result_detail=result_detail)
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result,
result_detail=result_detail)
return
elif task.action == hd_fields.OrchestratorAction.PrepareNode:
failed = worked = False
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Running)
self.task_field_update(
task_id, status=hd_fields.TaskStatus.Running)
# NOTE Should we attempt to interrogate the node via Node Driver to see if
# it is in a deployed state before we start rebooting? Or do we just leverage
# NOTE Should we attempt to interrogate the node via Node
# Driver to see if it is in a deployed state before we
# start rebooting? Or do we just leverage
# Drydock internal state via site build data (when implemented)?
node_driver = self.enabled_drivers['node']
if node_driver is None:
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Errored,
result=hd_fields.ActionResult.Failure,
result_detail={'detail': 'Error: No node driver configured', 'retry': False})
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Errored,
result=hd_fields.ActionResult.Failure,
result_detail={
'detail': 'Error: No node driver configured',
'retry': False
})
return
site_design = self.get_effective_site(design_id)
@ -316,7 +355,7 @@ class Orchestrator(object):
result_detail = {'detail': []}
worked = failed = False
# TODO Need to multithread tasks for different OOB types
# TODO(sh8121att) Need to multithread tasks for different OOB types
for oob_type, oob_nodes in oob_type_partition.items():
oob_driver = None
for d in self.enabled_drivers['oob']:
@ -325,54 +364,66 @@ class Orchestrator(object):
break
if oob_driver is None:
self.logger.warning("Node OOB type %s has no enabled driver." % oob_type)
result_detail['detail'].append("Error: No oob driver configured for type %s" % oob_type)
self.logger.warning(
"Node OOB type %s has no enabled driver." % oob_type)
result_detail['detail'].append(
"Error: No oob driver configured for type %s" %
oob_type)
continue
target_names = [x.get_name() for x in oob_nodes]
task_scope = {'node_names' : target_names}
task_scope = {'node_names': target_names}
setboot_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.SetNodeBoot,
task_scope=task_scope)
self.logger.info("Starting OOB driver task %s to set PXE boot for OOB type %s" %
(setboot_task.get_id(), oob_type))
setboot_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.SetNodeBoot,
task_scope=task_scope)
self.logger.info(
"Starting OOB driver task %s to set PXE boot for OOB type %s"
% (setboot_task.get_id(), oob_type))
oob_driver.execute_task(setboot_task.get_id())
self.logger.info("OOB driver task %s complete" % (setboot_task.get_id()))
self.logger.info("OOB driver task %s complete" %
(setboot_task.get_id()))
setboot_task = self.state_manager.get_task(setboot_task.get_id())
setboot_task = self.state_manager.get_task(
setboot_task.get_id())
if setboot_task.get_result() == hd_fields.ActionResult.Success:
worked = True
elif setboot_task.get_result() == hd_fields.ActionResult.PartialSuccess:
elif setboot_task.get_result(
) == hd_fields.ActionResult.PartialSuccess:
worked = failed = True
elif setboot_task.get_result() == hd_fields.ActionResult.Failure:
elif setboot_task.get_result(
) == hd_fields.ActionResult.Failure:
failed = True
cycle_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.PowerCycleNode,
task_scope=task_scope)
cycle_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.PowerCycleNode,
task_scope=task_scope)
self.logger.info("Starting OOB driver task %s to power cycle nodes for OOB type %s" %
(cycle_task.get_id(), oob_type))
self.logger.info(
"Starting OOB driver task %s to power cycle nodes for OOB type %s"
% (cycle_task.get_id(), oob_type))
oob_driver.execute_task(cycle_task.get_id())
self.logger.info("OOB driver task %s complete" % (cycle_task.get_id()))
self.logger.info("OOB driver task %s complete" %
(cycle_task.get_id()))
cycle_task = self.state_manager.get_task(cycle_task.get_id())
if cycle_task.get_result() == hd_fields.ActionResult.Success:
worked = True
elif cycle_task.get_result() == hd_fields.ActionResult.PartialSuccess:
elif cycle_task.get_result(
) == hd_fields.ActionResult.PartialSuccess:
worked = failed = True
elif cycle_task.get_result() == hd_fields.ActionResult.Failure:
failed = True
@ -382,30 +433,38 @@ class Orchestrator(object):
# Each attempt is a new task which might make the final task tree a bit confusing
node_identify_attempts = 0
max_attempts = cfg.CONF.timeouts.identify_node * (60 / cfg.CONF.poll_interval)
max_attempts = cfg.CONF.timeouts.identify_node * (
60 / cfg.CONF.poll_interval)
while True:
node_identify_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.IdentifyNode,
task_scope=task_scope)
node_identify_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.IdentifyNode,
task_scope=task_scope)
self.logger.info("Starting node driver task %s to identify node - attempt %s" %
(node_identify_task.get_id(), node_identify_attempts+1))
self.logger.info(
"Starting node driver task %s to identify node - attempt %s"
% (node_identify_task.get_id(),
node_identify_attempts + 1))
node_driver.execute_task(node_identify_task.get_id())
node_identify_attempts = node_identify_attempts + 1
node_identify_task = self.state_manager.get_task(node_identify_task.get_id())
node_identify_task = self.state_manager.get_task(
node_identify_task.get_id())
if node_identify_task.get_result() == hd_fields.ActionResult.Success:
if node_identify_task.get_result(
) == hd_fields.ActionResult.Success:
worked = True
break
elif node_identify_task.get_result() in [hd_fields.ActionResult.PartialSuccess,
hd_fields.ActionResult.Failure]:
# TODO This threshold should be a configurable default and tunable by task API
elif node_identify_task.get_result() in [
hd_fields.ActionResult.PartialSuccess,
hd_fields.ActionResult.Failure
]:
# TODO(sh8121att) This threshold should be a configurable default and tunable by task API
if node_identify_attempts > max_attempts:
failed = True
break
@ -414,26 +473,43 @@ class Orchestrator(object):
# We can only commission nodes that were successfully identified in the provisioner
if len(node_identify_task.result_detail['successful_nodes']) > 0:
self.logger.info("Found %s successfully identified nodes, starting commissioning." %
(len(node_identify_task.result_detail['successful_nodes'])))
node_commission_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(), design_id=design_id,
action=hd_fields.OrchestratorAction.ConfigureHardware,
task_scope={'node_names': node_identify_task.result_detail['successful_nodes']})
self.logger.info(
"Found %s successfully identified nodes, starting commissioning."
%
(len(node_identify_task.result_detail['successful_nodes'])
))
node_commission_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ConfigureHardware,
task_scope={
'node_names':
node_identify_task.result_detail['successful_nodes']
})
self.logger.info("Starting node driver task %s to commission nodes." % (node_commission_task.get_id()))
self.logger.info(
"Starting node driver task %s to commission nodes." %
(node_commission_task.get_id()))
node_driver.execute_task(node_commission_task.get_id())
node_commission_task = self.state_manager.get_task(node_commission_task.get_id())
node_commission_task = self.state_manager.get_task(
node_commission_task.get_id())
if node_commission_task.get_result() in [hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess]:
if node_commission_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
elif node_commission_task.get_result() in [hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess]:
elif node_commission_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
else:
self.logger.warning("No nodes successfully identified, skipping commissioning subtask")
self.logger.warning(
"No nodes successfully identified, skipping commissioning subtask"
)
final_result = None
if worked and failed:
@ -443,24 +519,29 @@ class Orchestrator(object):
else:
final_result = hd_fields.ActionResult.Failure
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result)
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result)
return
elif task.action == hd_fields.OrchestratorAction.DeployNode:
failed = worked = False
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Running)
self.task_field_update(
task_id, status=hd_fields.TaskStatus.Running)
node_driver = self.enabled_drivers['node']
if node_driver is None:
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Errored,
result=hd_fields.ActionResult.Failure,
result_detail={'detail': 'Error: No node driver configured', 'retry': False})
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Errored,
result=hd_fields.ActionResult.Failure,
result_detail={
'detail': 'Error: No node driver configured',
'retry': False
})
return
site_design = self.get_effective_site(design_id)
@ -471,71 +552,112 @@ class Orchestrator(object):
target_names = [x.get_name() for x in target_nodes]
task_scope = {'node_names' : target_names}
task_scope = {'node_names': target_names}
node_networking_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(), design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodeNetworking,
task_scope=task_scope)
node_networking_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodeNetworking,
task_scope=task_scope)
self.logger.info("Starting node driver task %s to apply networking on nodes." % (node_networking_task.get_id()))
self.logger.info(
"Starting node driver task %s to apply networking on nodes." %
(node_networking_task.get_id()))
node_driver.execute_task(node_networking_task.get_id())
node_networking_task = self.state_manager.get_task(node_networking_task.get_id())
node_networking_task = self.state_manager.get_task(
node_networking_task.get_id())
if node_networking_task.get_result() in [hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess]:
if node_networking_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
if node_networking_task.get_result() in [hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess]:
if node_networking_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
if len(node_networking_task.result_detail['successful_nodes']) > 0:
self.logger.info("Found %s successfully networked nodes, configuring platform." %
(len(node_networking_task.result_detail['successful_nodes'])))
self.logger.info(
"Found %s successfully networked nodes, configuring platform."
% (len(node_networking_task.result_detail[
'successful_nodes'])))
node_platform_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(), design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodePlatform,
task_scope={'node_names': node_networking_task.result_detail['successful_nodes']})
self.logger.info("Starting node driver task %s to configure node platform." % (node_platform_task.get_id()))
node_platform_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.ApplyNodePlatform,
task_scope={
'node_names':
node_networking_task.result_detail['successful_nodes']
})
self.logger.info(
"Starting node driver task %s to configure node platform."
% (node_platform_task.get_id()))
node_driver.execute_task(node_platform_task.get_id())
node_platform_task = self.state_manager.get_task(node_platform_task.get_id())
node_platform_task = self.state_manager.get_task(
node_platform_task.get_id())
if node_platform_task.get_result() in [hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess]:
if node_platform_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
elif node_platform_task.get_result() in [hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess]:
elif node_platform_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
if len(node_platform_task.result_detail['successful_nodes']) > 0:
self.logger.info("Configured platform on %s nodes, starting deployment." %
(len(node_platform_task.result_detail['successful_nodes'])))
node_deploy_task = self.create_task(tasks.DriverTask,
parent_task_id=task.get_id(), design_id=design_id,
action=hd_fields.OrchestratorAction.DeployNode,
task_scope={'node_names': node_platform_task.result_detail['successful_nodes']})
if len(node_platform_task.result_detail['successful_nodes']
) > 0:
self.logger.info(
"Configured platform on %s nodes, starting deployment."
% (len(node_platform_task.result_detail[
'successful_nodes'])))
node_deploy_task = self.create_task(
tasks.DriverTask,
parent_task_id=task.get_id(),
design_id=design_id,
action=hd_fields.OrchestratorAction.DeployNode,
task_scope={
'node_names':
node_platform_task.result_detail[
'successful_nodes']
})
self.logger.info("Starting node driver task %s to deploy nodes." % (node_deploy_task.get_id()))
self.logger.info(
"Starting node driver task %s to deploy nodes." %
(node_deploy_task.get_id()))
node_driver.execute_task(node_deploy_task.get_id())
node_deploy_task = self.state_manager.get_task(node_deploy_task.get_id())
node_deploy_task = self.state_manager.get_task(
node_deploy_task.get_id())
if node_deploy_task.get_result() in [hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess]:
if node_deploy_task.get_result() in [
hd_fields.ActionResult.Success,
hd_fields.ActionResult.PartialSuccess
]:
worked = True
elif node_deploy_task.get_result() in [hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess]:
elif node_deploy_task.get_result() in [
hd_fields.ActionResult.Failure,
hd_fields.ActionResult.PartialSuccess
]:
failed = True
else:
self.logger.warning("Unable to configure platform on any nodes, skipping deploy subtask")
self.logger.warning(
"Unable to configure platform on any nodes, skipping deploy subtask"
)
else:
self.logger.warning("No nodes successfully networked, skipping platform configuration subtask")
self.logger.warning(
"No nodes successfully networked, skipping platform configuration subtask"
)
final_result = None
if worked and failed:
@ -545,13 +667,14 @@ class Orchestrator(object):
else:
final_result = hd_fields.ActionResult.Failure
self.task_field_update(task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result)
self.task_field_update(
task_id,
status=hd_fields.TaskStatus.Complete,
result=final_result)
else:
raise errors.OrchestratorError("Action %s not supported"
% (task.action))
raise errors.OrchestratorError("Action %s not supported" %
(task.action))
"""
terminate_task
@ -559,6 +682,7 @@ class Orchestrator(object):
Mark a task for termination and optionally propagate the termination
recursively to all subtasks
"""
def terminate_task(self, task_id, propagate=True):
task = self.state_manager.get_task(task_id)
@ -572,7 +696,7 @@ class Orchestrator(object):
if propagate:
# Get subtasks list
subtasks = task.get_subtasks()
for st in subtasks:
self.terminate_task(st, propagate=True)
else:
@ -593,8 +717,8 @@ class Orchestrator(object):
lock_id = self.state_manager.lock_task(task_id)
if lock_id is not None:
task = self.state_manager.get_task(task_id)
for k,v in kwargs.items():
for k, v in kwargs.items():
setattr(task, k, v)
self.state_manager.put_task(task, lock_id=lock_id)
@ -615,7 +739,7 @@ class Orchestrator(object):
return False
def compute_model_inheritance(self, site_design):
# For now the only thing that really incorporates inheritance is
# host profiles and baremetal nodes. So we'll just resolve it for
# the baremetal nodes which recursively resolves it for host profiles
@ -623,8 +747,9 @@ class Orchestrator(object):
for n in getattr(site_design, 'baremetal_nodes', []):
n.compile_applied_model(site_design)
return
"""
compute_model_inheritance - given a fully populated Site model,
compute the effective design by applying inheritance and references
@ -634,7 +759,7 @@ class Orchestrator(object):
def get_described_site(self, design_id):
site_design = self.state_manager.get_design(design_id)
return site_design
def get_effective_site(self, design_id):
@ -649,25 +774,24 @@ class Orchestrator(object):
if node_filter is None:
return target_nodes
node_names = node_filter.get('node_names', [])
node_racks = node_filter.get('rack_names', [])
node_tags = node_filter.get('node_tags', [])
if len(node_names) > 0:
target_nodes = [x
for x in target_nodes
if x.get_name() in node_names]
target_nodes = [
x for x in target_nodes if x.get_name() in node_names
]
if len(node_racks) > 0:
target_nodes = [x
for x in target_nodes
if x.get_rack() in node_racks]
target_nodes = [
x for x in target_nodes if x.get_rack() in node_racks
]
if len(node_tags) > 0:
target_nodes = [x
for x in target_nodes
for t in node_tags
if x.has_tag(t)]
target_nodes = [
x for x in target_nodes for t in node_tags if x.has_tag(t)
]
return target_nodes
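
Two patterns recur throughout execute_task above. First, retry bounds are derived from what is presumably a minutes-based timeout and a seconds-based poll interval: max_attempts = cfg.CONF.timeouts.identify_node * (60 / cfg.CONF.poll_interval), so a 10-minute identify timeout with a 15-second poll interval allows 10 * (60 / 15) = 40 attempts. Second, each phase folds subtask outcomes into worked/failed flags and then collapses them to a single result. A sketch of that collapse, with strings standing in for the hd_fields.ActionResult values:

def final_result(worked, failed):
    # Mirrors the if/elif ladder repeated after each subtask group above.
    if worked and failed:
        return 'partial_success'
    elif worked:
        return 'success'
    else:
        return 'failure'

assert final_result(True, False) == 'success'
assert final_result(True, True) == 'partial_success'
assert final_result(False, True) == 'failure'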

View File

@ -14,6 +14,7 @@
#
import logging
import functools
import falcon
from oslo_config import cfg
from oslo_policy import policy
@ -21,6 +22,7 @@ from oslo_policy import policy
# Global reference to an instantiated DrydockPolicy. Will be initialized by drydock.py
policy_engine = None
class DrydockPolicy(object):
"""
Initialize policy defaults
@ -28,39 +30,107 @@ class DrydockPolicy(object):
# Base Policy
base_rules = [
policy.RuleDefault('admin_required', 'role:admin or is_admin:1', description='Actions requiring admin authority'),
policy.RuleDefault(
'admin_required',
'role:admin or is_admin:1',
description='Actions requiring admin authority'),
]
# Orchestrator Policy
task_rules = [
policy.DocumentedRuleDefault('physical_provisioner:read_task', 'role:admin', 'Get task status',
[{'path': '/api/v1.0/tasks', 'method': 'GET'},
{'path': '/api/v1.0/tasks/{task_id}', 'method': 'GET'}]),
policy.DocumentedRuleDefault('physical_provisioner:validate_design', 'role:admin', 'Create validate_design task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:verify_site', 'role:admin', 'Create verify_site task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:prepare_site', 'role:admin', 'Create prepare_site task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:verify_node', 'role:admin', 'Create verify_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:prepare_node', 'role:admin', 'Create prepare_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:deploy_node', 'role:admin', 'Create deploy_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:destroy_node', 'role:admin', 'Create destroy_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:read_task',
'role:admin', 'Get task status', [{
'path':
'/api/v1.0/tasks',
'method':
'GET'
}, {
'path':
'/api/v1.0/tasks/{task_id}',
'method':
'GET'
}]),
policy.DocumentedRuleDefault('physical_provisioner:create_task',
'role:admin',
'Create a task', [{
'path':
'/api/v1.0/tasks',
'method':
'POST'
}]),
policy.DocumentedRuleDefault('physical_provisioner:validate_design',
'role:admin',
'Create validate_design task', [{
'path':
'/api/v1.0/tasks',
'method':
'POST'
}]),
policy.DocumentedRuleDefault('physical_provisioner:verify_site',
'role:admin', 'Create verify_site task',
[{
'path': '/api/v1.0/tasks',
'method': 'POST'
}]),
policy.DocumentedRuleDefault('physical_provisioner:prepare_site',
'role:admin', 'Create prepare_site task',
[{
'path': '/api/v1.0/tasks',
'method': 'POST'
}]),
policy.DocumentedRuleDefault('physical_provisioner:verify_node',
'role:admin', 'Create verify_node task',
[{
'path': '/api/v1.0/tasks',
'method': 'POST'
}]),
policy.DocumentedRuleDefault('physical_provisioner:prepare_node',
'role:admin', 'Create prepare_node task',
[{
'path': '/api/v1.0/tasks',
'method': 'POST'
}]),
policy.DocumentedRuleDefault('physical_provisioner:deploy_node',
'role:admin', 'Create deploy_node task',
[{
'path': '/api/v1.0/tasks',
'method': 'POST'
}]),
policy.DocumentedRuleDefault('physical_provisioner:destroy_node',
'role:admin', 'Create destroy_node task',
[{
'path': '/api/v1.0/tasks',
'method': 'POST'
}]),
]
# Data Management Policy
data_rules = [
policy.DocumentedRuleDefault('physical_provisioner:read_data', 'role:admin', 'Read loaded design data',
[{'path': '/api/v1.0/designs', 'method': 'GET'},
{'path': '/api/v1.0/designs/{design_id}', 'method': 'GET'}]),
policy.DocumentedRuleDefault('physical_provisioner:ingest_data', 'role:admin', 'Load design data',
[{'path': '/api/v1.0/designs', 'method': 'POST'},
{'path': '/api/v1.0/designs/{design_id}/parts', 'method': 'POST'}])
policy.DocumentedRuleDefault('physical_provisioner:read_data',
'role:admin',
'Read loaded design data', [{
'path':
'/api/v1.0/designs',
'method':
'GET'
}, {
'path':
'/api/v1.0/designs/{design_id}',
'method':
'GET'
}]),
policy.DocumentedRuleDefault('physical_provisioner:ingest_data',
'role:admin', 'Load design data', [{
'path':
'/api/v1.0/designs',
'method':
'POST'
}, {
'path':
'/api/v1.0/designs/{design_id}/parts',
'method':
'POST'
}])
]
def __init__(self):
@ -76,6 +146,7 @@ class DrydockPolicy(object):
target = {'project_id': ctx.project_id, 'user_id': ctx.user_id}
return self.enforcer.authorize(action, target, ctx.to_policy_view())
class ApiEnforcer(object):
"""
A decorator class for enforcing RBAC policies
@ -87,24 +158,38 @@ class ApiEnforcer(object):
def __call__(self, f):
@functools.wraps(f)
def secure_handler(slf, req, resp, *args):
def secure_handler(slf, req, resp, *args, **kwargs):
ctx = req.context
policy_engine = ctx.policy_engine
self.logger.debug("Enforcing policy %s on request %s" % (self.action, ctx.request_id))
self.logger.debug("Enforcing policy %s on request %s" %
(self.action, ctx.request_id))
if policy_engine is not None and policy_engine.authorize(self.action, ctx):
return f(slf, req, resp, *args)
if policy_engine is not None and policy_engine.authorize(
self.action, ctx):
return f(slf, req, resp, *args, **kwargs)
else:
if ctx.authenticated:
slf.info(ctx, "Error - Forbidden access - action: %s" % self.action)
slf.return_error(resp, falcon.HTTP_403, message="Forbidden", retry=False)
slf.info(
ctx,
"Error - Forbidden access - action: %s" % self.action)
slf.return_error(
resp,
falcon.HTTP_403,
message="Forbidden",
retry=False)
else:
slf.info(ctx, "Error - Unauthenticated access")
slf.return_error(resp, falcon.HTTP_401, message="Unauthenticated", retry=False)
slf.return_error(
resp,
falcon.HTTP_401,
message="Unauthenticated",
retry=False)
return secure_handler
def list_policies():
default_policy = []
default_policy.extend(DrydockPolicy.base_rules)
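
ApiEnforcer above decorates Falcon handlers so that the policy check runs before the wrapped method, returning 403 for authenticated-but-forbidden callers and 401 for unauthenticated ones. A stripped-down sketch; FakeEngine and the dict-based ctx are stand-ins for DrydockPolicy and the real request context:

import functools

class FakeEngine:
    def authorize(self, action, ctx):
        return action in ctx.get('allowed', set())

def api_enforcer(action, engine):
    def wrapper(f):
        @functools.wraps(f)
        def secure_handler(ctx, *args, **kwargs):
            if engine.authorize(action, ctx):
                return f(ctx, *args, **kwargs)
            # 403 when authenticated but denied, 401 otherwise.
            return '403 Forbidden' if ctx.get('authenticated') else '401 Unauthenticated'
        return secure_handler
    return wrapper

@api_enforcer('physical_provisioner:read_task', FakeEngine())
def on_get(ctx):
    return '200 OK'

assert on_get({'authenticated': True, 'allowed': {'physical_provisioner:read_task'}}) == '200 OK'
assert on_get({'authenticated': True, 'allowed': set()}) == '403 Forbidden'
assert on_get({'authenticated': False}) == '401 Unauthenticated'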

View File

@ -23,8 +23,8 @@ import drydock_provisioner.objects.task as tasks
from drydock_provisioner.error import DesignError, StateError
class DesignState(object):
class DesignState(object):
def __init__(self):
self.designs = {}
self.designs_lock = Lock()
@ -54,8 +54,7 @@ class DesignState(object):
def post_design(self, site_design):
if site_design is not None:
my_lock = self.designs_lock.acquire(blocking=True,
timeout=10)
my_lock = self.designs_lock.acquire(blocking=True, timeout=10)
if my_lock:
design_id = site_design.id
if design_id not in self.designs.keys():
@ -71,8 +70,7 @@ class DesignState(object):
def put_design(self, site_design):
if site_design is not None:
my_lock = self.designs_lock.acquire(blocking=True,
timeout=10)
my_lock = self.designs_lock.acquire(blocking=True, timeout=10)
if my_lock:
design_id = site_design.id
if design_id not in self.designs.keys():
@ -108,13 +106,14 @@ class DesignState(object):
if site_build is not None and isinstance(site_build, SiteBuild):
my_lock = self.builds_lock.acquire(block=True, timeout=10)
if my_lock:
exists = [b for b in self.builds
if b.build_id == site_build.build_id]
exists = [
b for b in self.builds if b.build_id == site_build.build_id
]
if len(exists) > 0:
self.builds_lock.release()
raise DesignError("Already a site build with ID %s" %
(str(site_build.build_id)))
(str(site_build.build_id)))
self.builds.append(deepcopy(site_build))
self.builds_lock.release()
return True
@ -149,8 +148,9 @@ class DesignState(object):
my_lock = self.tasks_lock.acquire(blocking=True, timeout=10)
if my_lock:
task_id = task.get_id()
matching_tasks = [t for t in self.tasks
if t.get_id() == task_id]
matching_tasks = [
t for t in self.tasks if t.get_id() == task_id
]
if len(matching_tasks) > 0:
self.tasks_lock.release()
raise StateError("Task %s already created" % task_id)
@ -174,10 +174,10 @@ class DesignState(object):
raise StateError("Task locked for updates")
task.lock_id = lock_id
self.tasks = [i
if i.get_id() != task_id
else deepcopy(task)
for i in self.tasks]
self.tasks = [
i if i.get_id() != task_id else deepcopy(task)
for i in self.tasks
]
self.tasks_lock.release()
return True
@ -223,13 +223,15 @@ class DesignState(object):
self.promenade_lock.release()
return None
else:
raise StateError("Could not acquire lock")
raise StateError("Could not acquire lock")
def get_promenade_parts(self, target):
parts = self.promenade.get(target, None)
if parts is not None:
return [objects.PromenadeConfig.obj_from_primitive(p) for p in parts]
return [
objects.PromenadeConfig.obj_from_primitive(p) for p in parts
]
else:
# Return an empty list just to play nice with extend
return []
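
The DesignState methods above all follow the same concurrency pattern: acquire the relevant Lock with a 10-second timeout, raise rather than block forever when the lock cannot be had, and deepcopy objects on the way in so callers cannot mutate shared state. A sketch of that pattern; KeyError and RuntimeError stand in for DesignError and StateError:

from copy import deepcopy
from threading import Lock

class SketchState:
    def __init__(self):
        self.designs = {}
        self.designs_lock = Lock()

    def post_design(self, design_id, design):
        my_lock = self.designs_lock.acquire(blocking=True, timeout=10)
        if my_lock:
            try:
                if design_id in self.designs:
                    raise KeyError("Design %s already exists" % design_id)
                # Store a copy so the caller's reference stays isolated.
                self.designs[design_id] = deepcopy(design)
                return True
            finally:
                self.designs_lock.release()
        raise RuntimeError("Could not acquire lock")

state = SketchState()
assert state.post_design('d1', {'site': 'demo'}) is True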

View File

@ -7,7 +7,7 @@ requests
oauthlib
uwsgi===2.0.15
bson===0.4.7
oslo.config
oslo.config===3.16.0
click===6.7
PasteDeploy==1.5.2
keystonemiddleware===4.9.1

View File

@ -5,3 +5,5 @@ mock
tox
oslo.versionedobjects[fixtures]>=1.23.0
oslo.config[fixtures]
yapf
flake8

View File

@ -16,40 +16,36 @@
# and monitor the provisioning of those hosts and execution of bootstrap
# scripts
from setuptools import setup
setup(name='drydock_provisioner',
version='0.1a1',
description='Bootstrapper for Kubernetes infrastructure',
url='http://github.com/att-comdev/drydock',
author='Scott Hussey - AT&T',
author_email='sh8121@att.com',
license='Apache 2.0',
packages=['drydock_provisioner',
'drydock_provisioner.objects',
'drydock_provisioner.ingester',
'drydock_provisioner.ingester.plugins',
'drydock_provisioner.statemgmt',
'drydock_provisioner.orchestrator',
'drydock_provisioner.control',
'drydock_provisioner.drivers',
'drydock_provisioner.drivers.oob',
'drydock_provisioner.drivers.oob.pyghmi_driver',
'drydock_provisioner.drivers.oob.manual_driver',
'drydock_provisioner.drivers.node',
'drydock_provisioner.drivers.node.maasdriver',
'drydock_provisioner.drivers.node.maasdriver.models',
'drydock_provisioner.control',
'drydock_provisioner.cli',
'drydock_provisioner.cli.design',
'drydock_provisioner.cli.part',
'drydock_provisioner.cli.task',
'drydock_provisioner.drydock_client'],
entry_points={
'oslo.config.opts': 'drydock_provisioner = drydock_provisioner.config:list_opts',
'oslo.policy.policies': 'drydock_provisioner = drydock_provisioner.policy:list_policies',
'console_scripts': 'drydock = drydock_provisioner.cli.commands:drydock'
}
)
setup(
name='drydock_provisioner',
version='0.1a1',
description='Bootstrapper for Kubernetes infrastructure',
url='http://github.com/att-comdev/drydock',
author='Scott Hussey - AT&T',
author_email='sh8121@att.com',
license='Apache 2.0',
packages=[
'drydock_provisioner', 'drydock_provisioner.objects',
'drydock_provisioner.ingester', 'drydock_provisioner.ingester.plugins',
'drydock_provisioner.statemgmt', 'drydock_provisioner.orchestrator',
'drydock_provisioner.control', 'drydock_provisioner.drivers',
'drydock_provisioner.drivers.oob',
'drydock_provisioner.drivers.oob.pyghmi_driver',
'drydock_provisioner.drivers.oob.manual_driver',
'drydock_provisioner.drivers.node',
'drydock_provisioner.drivers.node.maasdriver',
'drydock_provisioner.drivers.node.maasdriver.models',
'drydock_provisioner.control', 'drydock_provisioner.cli',
'drydock_provisioner.cli.design', 'drydock_provisioner.cli.part',
'drydock_provisioner.cli.task', 'drydock_provisioner.drydock_client'
],
entry_points={
'oslo.config.opts':
'drydock_provisioner = drydock_provisioner.config:list_opts',
'oslo.policy.policies':
'drydock_provisioner = drydock_provisioner.policy:list_policies',
'console_scripts':
'drydock = drydock_provisioner.cli.commands:drydock'
})
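
The entry_points block above is what lets the oslo tooling discover Drydock's config options and policy defaults without hardcoded imports. A sketch of how such entry points are consumed at runtime, assuming the package is installed and that list_policies returns the assembled list; pkg_resources was the idiomatic discovery API for this era of setuptools:

import pkg_resources

# Iterate the registered 'oslo.policy.policies' entry points and call each
# listed function (e.g. drydock_provisioner.policy:list_policies above).
for ep in pkg_resources.iter_entry_points('oslo.policy.policies'):
    list_fn = ep.load()
    print(ep.name, len(list_fn()))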

View File

@ -0,0 +1,354 @@
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: helm-toolkit
data:
chart_name: helm-toolkit
release: helm-toolkit
namespace: helm-toolkit
timeout: 100
values: {}
source:
type: git
location: https://git.openstack.org/openstack/openstack-helm
subpath: helm-toolkit
reference: master
dependencies: []
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ceph
data:
chart_name: ceph
release: ceph
namespace: ceph
timeout: 3600
install:
no_hooks: false
upgrade:
no_hooks: false
values:
manifests_enabled:
client_secrets: false
bootstrap:
enabled: true
network:
public: ${CEPH_PUBLIC_NET}
cluster: ${CEPH_CLUSTER_NET}
endpoints:
fqdn: ceph.svc.cluster.local
conf:
ceph:
config:
global:
mon_host: ceph-mon.ceph.svc.cluster.local
source:
type: git
location: ${CEPH_CHART_REPO}
subpath: ceph
reference: ${CEPH_CHART_BRANCH}
dependencies:
- helm-toolkit
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-ceph-config
data:
chart_name: ucp-ceph-config
release: ucp-ceph-config
namespace: ucp
timeout: 3600
install:
no_hooks: false
upgrade:
no_hooks: false
values:
ceph:
namespace: ceph
manifests_enabled:
deployment: False
storage_secrets: False
rbd_provisioner: False
network:
public: ${CEPH_PUBLIC_NET}
cluster: ${CEPH_CLUSTER_NET}
endpoints:
fqdn: ceph.svc.cluster.local
conf:
ceph:
config:
global:
mon_host: ceph-mon.ceph.svc.cluster.local
source:
type: git
location: ${CEPH_CHART_REPO}
subpath: ceph
reference: ${CEPH_CHART_BRANCH}
dependencies:
- helm-toolkit
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-mariadb
data:
chart_name: ucp-mariadb
release: ucp-mariadb
namespace: ucp
install:
no_hooks: false
upgrade:
no_hooks: false
values:
labels:
node_selector_key: ucp-control-plane
node_selector_value: enabled
source:
type: git
location: https://git.openstack.org/openstack/openstack-helm
subpath: mariadb
dependencies:
- helm-toolkit
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-memcached
data:
chart_name: ucp-memcached
release: ucp-memcached
namespace: ucp
install:
no_hooks: false
upgrade:
no_hooks: false
values:
labels:
node_selector_key: ucp-control-plane
node_selector_value: enabled
source:
type: git
location: https://git.openstack.org/openstack/openstack-helm
subpath: memcached
dependencies:
- helm-toolkit
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-keystone
data:
chart_name: ucp-keystone
release: keystone
namespace: ucp
install:
no_hooks: false
upgrade:
no_hooks: false
pre:
delete:
- name: keystone-db-sync
type: job
labels:
- job-name: keystone-db-sync
- name: keystone-db-init
type: job
labels:
- job-name: keystone-db-init
post:
delete: []
create: []
values:
conf:
keystone:
override:
paste:
override:
replicas: 2
labels:
node_selector_key: ucp-control-plane
node_selector_value: enabled
source:
type: git
location: https://git.openstack.org/openstack/openstack-helm
subpath: keystone
dependencies:
- helm-toolkit
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: maas-postgresql
data:
chart_name: maas-postgresql
release: maas-postgresql
namespace: ucp
install:
no_hooks: false
upgrade:
no_hooks: false
pre:
delete: []
create: []
post:
delete: []
create: []
values:
development:
enabled: false
labels:
node_selector_key: ucp-control-plane
node_selector_value: enabled
source:
type: git
location: https://git.openstack.org/openstack/openstack-helm-addons
subpath: postgresql
reference: master
dependencies: []
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: maas
data:
chart_name: maas
release: maas
namespace: ucp
install:
no_hooks: false
upgrade:
no_hooks: false
values:
bootdata_url: http://${DRYDOCK_NODE_IP}:${DRYDOCK_NODE_PORT}/api/v1.0/bootdata/
labels:
rack:
node_selector_key: ucp-control-plane
node_selector_value: enabled
region:
node_selector_key: ucp-control-plane
node_selector_value: enabled
network:
proxy:
node_port:
enabled: true
port: 31800
gui:
node_port:
enabled: true
port: 31900
conf:
maas:
credentials:
secret:
namespace: ucp
url:
maas_url: http://${MAAS_NODE_IP}:${MAAS_NODE_PORT}/MAAS
proxy:
enabled: '${PROXY_ENABLED}'
server: ${PROXY_ADDRESS}
ntp:
servers: ntp.ubuntu.com
dns:
upstream_servers: 8.8.8.8
secrets:
maas_region:
value: 3858a12230ac3c915f300c664f12063f
source:
type: git
location: ${MAAS_CHART_REPO}
subpath: maas
reference: ${MAAS_CHART_BRANCH}
dependencies:
- helm-toolkit
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: drydock
data:
chart_name: drydock
release: drydock
namespace: ucp
install:
no_hooks: false
upgrade:
no_hooks: false
values:
images:
drydock: ${DRYDOCK_IMAGE}
labels:
node_selector_key: ucp-control-plane
node_selector_value: enabled
network:
drydock:
node_port:
enabled: true
port: ${DRYDOCK_NODE_PORT}
conf:
drydock:
maasdriver:
drydock_provisioner:
maas_api_url: http://${MAAS_NODE_IP}:${MAAS_NODE_PORT}/MAAS/api/2.0/
source:
type: git
location: ${DRYDOCK_CHART_REPO}
subpath: drydock
reference: ${DRYDOCK_CHART_BRANCH}
dependencies:
- helm-toolkit
---
schema: armada/Manifest/v1
metadata:
schema: metadata/Document/v1
name: ucp-basic
data:
release_prefix: armada-ucp
chart_groups:
- ceph
- ceph-bootstrap
- ucp-infra
- ucp-services
---
schema: armada/ChartGroup/v1
metadata:
schema: metadata/Document/v1
name: ceph
data:
description: 'Storage Backend'
chart_group:
- ceph
---
schema: armada/ChartGroup/v1
metadata:
schema: metadata/Document/v1
name: ceph-bootstrap
data:
description: 'Storage Backend Config'
chart_group:
- ucp-ceph-config
---
schema: armada/ChartGroup/v1
metadata:
schema: metadata/Document/v1
name: ucp-infra
data:
description: 'UCP Infrastructure'
chart_group:
- ucp-mariadb
- ucp-memcached
- maas-postgresql
---
schema: armada/ChartGroup/v1
metadata:
schema: metadata/Document/v1
name: ucp-services
data:
description: 'UCP Services'
chart_group:
- maas
- drydock
- ucp-keystone
...
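
The ${...} tokens in this manifest are not resolved by Armada itself; they are shell-substituted before the manifest is applied. The deployment script later in this commit renders the .sub template with envsubst, roughly:

$ export CEPH_PUBLIC_NET=172.24.1.0/24 CEPH_CLUSTER_NET=172.24.1.0/24
$ envsubst < armada.yaml.sub > armada.yaml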

View File

@ -0,0 +1,349 @@
#Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
# Site/Region wide definitions. Each design part will be a constituent
# of the design for exactly one Region
apiVersion: 'drydock/v1'
kind: Region
metadata:
name: atl_foundry
date: 17-FEB-2017
description: Sample site design
author: sh8121@att.com
spec:
# List of query-based definitions for applying tags to deployed nodes
tag_definitions:
- tag: 'high_memory'
# Tag to apply to nodes that qualify for the query
definition_type: 'lshw_xpath'
# Only one type supported for now - 'lshw_xpath', used by MaaS
definition: //node[@id="memory"]/'size units="bytes"' > 137438953472
# an xpath query that is run against the output of 'lshw -xml' from the node
# Image and package repositories needed by Drydock drivers. This section needs to be defined.
repositories:
- name: 'ubuntu-main'
authorized_keys:
- |
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAgqUTJwZEMjZCWOnXQw+FFdvnf/lYrGXm01
rf/ZYUanoymkMWIK1/c8a3Ez9/HY3dyfWBcuzlIV4bNCvJcMg4UPuh6NQBJWAlfp7wfW9O
8ZyDE3x1FYno5u3OB4rRDcvKe6J0ygPcu4Uec5ASsd58yGnE4zTl1D/J30rNa00si+s= r
sa-key-20120124
---
apiVersion: 'drydock/v1'
kind: NetworkLink
metadata:
name: oob
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 1 attributes. Primary key is 'name'. These settings will generally be things the switch and server have to agree on
labels:
- 'noconfig'
spec:
bonding:
# Mode can be 'disabled', '802.3ad', 'balanced-rr', 'active-backup'. Defaults to disabled
mode: 'disabled'
# Physical link default MTU size. No default
mtu: 1500
# Physical link speed. Supports 'auto', '100full'. Gigabit+ speeds require auto. No default
linkspeed: 'auto'
# Settings for using a link for multiple L2 networks
trunking:
# Trunking mode. Supports 'disabled', '802.1q'. Defaults to disabled
mode: disabled
# If disabled, which network this port is on. If '802.1q', the default network for the port. No default.
default_network: oob
allowed_networks:
- 'oob'
---
apiVersion: 'drydock/v1'
kind: NetworkLink
metadata:
name: pxe
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 1 attributes. Primary key is 'name'. These settings will generally be things the switch and server have to agree on
spec:
bonding:
# Mode can be 'disabled', '802.3ad', 'balanced-rr', 'active-backup'. Defaults to disabled
mode: 'disabled'
# Physical link default MTU size. No default
mtu: 1500
# Physical link speed. Supports 'auto', '100full'. Gigabit+ speeds require auto. No default
linkspeed: 'auto'
# Settings for using a link for multiple L2 networks
trunking:
# Trunking mode. Supports 'disabled', '802.1q'. Defaults to disabled
mode: disabled
# If disabled, which network this port is on. If '802.1q', the default network for the port. No default.
default_network: pxe
allowed_networks:
- 'pxe'
---
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: oob
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 2 and 3 attributes. Primary key is 'name'.
labels:
- 'noconfig'
spec:
# CIDR representation of network number and netmask
cidr: '172.24.10.0/24'
# How addresses are allocated on the network. Supports 'static', 'dhcp'. Defaults to 'static'
allocation: 'static'
---
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: pxe-rack1
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 2 and 3 attributes. Primary key is 'name'.
spec:
# CIDR representation of network number and netmask
cidr: '172.24.1.0/24'
# How addresses are allocated on the network. Supports 'static', 'dhcp'. Defaults to 'static'
allocation: 'static'
routes:
# The network being routed to in CIDR notation. Default gateway is 0.0.0.0/0.
- subnet: '0.0.0.0/0'
# Next hop for traffic using this route
gateway: '172.24.1.1'
# Selection metric for the host selecting this route. No default
metric: 100
ranges:
# Type of range. Supports 'reserved', 'static' or 'dhcp'. No default
- type: 'reserved'
# Start of the address range, inclusive. No default
start: '172.24.1.1'
# End of the address range, inclusive. No default
end: '172.24.1.100'
- type: 'dhcp'
start: '172.24.1.200'
end: '172.24.1.250'
---
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: pxe-rack2
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 2 and 3 attributes. Primary key is 'name'.
spec:
# CIDR representation of network number and netmask
cidr: '172.24.2.0/24'
# How addresses are allocated on the network. Supports 'static', 'dhcp'. Defaults to 'static'
allocation: 'static'
routes:
# The network being routed to in CIDR notation. Default gateway is 0.0.0.0/0.
- subnet: '0.0.0.0/0'
# Next hop for traffic using this route
gateway: '172.24.2.1'
# Selection metric for the host selecting this route. No default
metric: 100
ranges:
# Type of range. Supports 'reserved', 'static' or 'dhcp'. No default
- type: 'reserved'
# Start of the address range, inclusive. No default
start: '172.24.2.1'
# End of the address range, inclusive. No default
end: '172.24.2.100'
- type: 'dhcp'
start: '172.24.2.200'
end: '172.24.2.250'
---
apiVersion: 'drydock/v1'
kind: HardwareProfile
metadata:
name: DellR820v1
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Describe server hardware attributes. Not a specific server, but a profile adopted by a server definition.
spec:
# Chassis vendor
vendor: 'Dell'
# Chassis model generation
generation: '1'
# Chassis model version
hw_version: '2'
# Certified BIOS version for this chassis
bios_version: '2.2.3'
# Boot mode. Supports 'bios' or 'uefi'
boot_mode: 'bios'
# How the node should be initially bootstrapped. Supports 'pxe'
bootstrap_protocol: 'pxe'
# What network interface to use for PXE booting
# for chassis that support selection
pxe_interface: '0'
# Mapping of hardware alias/role to physical address
device_aliases:
# the device alias that will be referenced in HostProfile or BaremetalNode design parts
- alias: 'pnic01'
# The hardware bus the device resides on. Supports 'pci' and 'scsi'. No default
bus_type: 'pci'
# The type of device as reported by lshw. Can be used to validate hardware manifest. No default
dev_type: 'Intel 10Gbps NIC'
# Physical address on the bus
address: '0000:00:03.0'
---
apiVersion: 'drydock/v1'
kind: HostProfile
metadata:
name: defaults
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Specify a physical server.
spec:
# The HardwareProfile describing the node hardware. No default.
hardware_profile: 'DellR820v1'
primary_network: 'pxe'
# OOB access to node
oob:
# Type of OOB access. Supports 'ipmi'
type: 'ipmi'
# Which network - as defined in a Network design part - to access the OOB interface on
network: 'oob'
# Account name for authenticating on the OOB interface
account: 'root'
# Credential for authentication on the OOB interface. The OOB driver will interpret this.
credential: 'calvin'
# How local node storage is configured
storage:
# How storage is laid out. Supports 'lvm' and 'flat'. Defaults to 'lvm'
layout: 'lvm'
# Configuration for the boot disk
bootdisk:
# Hardware disk (or hardware RAID device) used for booting. Can refer to a
# HardwareProfile device alias or an explicit device name
device: 'bootdisk'
# Size of the root volume. Can be specified by percentage or explicit size in
# megabytes or gigabytes. Defaults to 100% of boot device.
root_size: '100g'
# If a separate boot volume is needed, specify size. Defaults to 0 where /boot goes on root.
boot_size: '0'
# Non-boot volumes that should be carved out of local storage
partitions:
# Name of the volume. Doesn't translate to any operating system config
- name: 'logs'
# Hardware device the volume should go on
device: 'bootdisk'
# Partition UUID. Defaults to None. A value of 'generate' means Drydock will generate a UUID
part_uuid:
# Size of the volume in megabytes or gigabytes
size: '10g'
# Filesystem mountpoint if volume should be a filesystem
mountpoint: '/var/logs'
# The below are ignored if mountpoint is None
# Format of filesystem. Defaults to ext4
fstype: 'ext4'
# Mount options of the file system as used in /etc/fstab. Defaults to 'defaults'
mount_options: 'defaults'
# Filesystem UUID. Defaults to None. A value of 'generate' means Drydock will generate a UUID
fs_uuid:
# A filesystem label. Defaults to None
fs_label:
# Physical and logical network interfaces
interfaces:
# What the interface should be named in the operating system. May not match a hardware device name
- device_name: 'eno1'
# The NetworkLink connected to this interface. Must be the name of a NetworkLink design part
device_link: 'pxe'
# Hardware devices that support this interface. For configuring a physical device, this would be a list of one
# For bonds, this would be a list of all the physical devices in the bond. These can refer to HardwareProfile device aliases
# or explicit device names
slaves:
- 'eno1'
# Networks that will be accessed on this interface. These should each be the name of a Network design part
# Multiple networks listed here assume that this interface is attached to a NetworkLink supporting trunking
networks:
- 'pxe'
platform:
# Which image to deploy on the node, must be available in the provisioner. Defaults to 'ubuntu/xenial'
image: 'ubuntu/xenial'
# Which kernel to enable. Defaults to generic, can also be hwe (hardware enablement)
kernel: 'generic'
# K/V list of kernel parameters to configure on boot. No default. Use value of true for params that are just flags
metadata:
# Explicit tags to propagate to Kubernetes. Simple strings of any value
rack: cab23
---
apiVersion: 'drydock/v1'
kind: BaremetalNode
metadata:
name: cab23-r720-16
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Specify a physical server.
spec:
host_profile: defaults
addressing:
# The name of a defined Network design part also listed in the 'networks' section of an interface definition
- network: 'pxe'
# Address should be an explicit IP address assignment or 'dhcp'
address: '10.23.19.116'
- network: 'oob'
address: '10.23.104.16'
metadata:
tags:
- 'masters'
---
apiVersion: 'drydock/v1'
kind: BaremetalNode
metadata:
name: cab23-r720-17
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Specify a physical server.
spec:
host_profile: defaults
addressing:
# The name of a defined Network design part also listed in the 'networks' section of an interface definition
- network: 'pxe'
# Address should be an explicit IP address assignment or 'dhcp'
address: '10.23.19.117'
- network: 'oob'
address: '10.23.104.17'
metadata:
tags:
- 'masters'
---
apiVersion: 'drydock/v1'
kind: BaremetalNode
metadata:
name: cab23-r720-19
region: atl_foundry
date: 17-FEB-2017
author: sh8121@att.com
description: Specify a physical server.
spec:
host_profile: defaults
addressing:
# The name of a defined Network design part also listed in the 'networks' section of an interface definition
- network: 'pxe'
# Address should be an explicit IP address assignment or 'dhcp'
address: '10.23.19.119'
- network: 'oob'
address: '10.23.104.19'
...
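
This design is the kind of input the functional test below feeds to Drydock; once a design exists, the CLI drives validation (these commands mirror the deploy script's verify_site flow later in this commit):

$ DESIGN_ID=$(drydock design create)
$ drydock task create -d $DESIGN_ID -a verify_site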

View File

@ -0,0 +1,62 @@
# Setup fake IPMI network
ip link add oob-br type bridge
ip link set dev oob-br up
# Setup rack 1 PXE network
ip link add pxe1-br type bridge
ip link set dev pxe1-br up
# Setup rack 2 PXE network
ip link add pxe2-br type bridge
ip link set dev pxe2-br up
# Setup interface to hold all IP addresses for vbmc instances
ip link add dev oob-if type veth peer name oob-ifp
ip link set dev oob-ifp up master oob-br
ip link set dev oob-if up arp on
# Setup rack 1 PXE gateway
ip link add dev pxe1-if type veth peer name pxe1-ifp
ip link set dev pxe1-ifp up master pxe1-br
ip link set dev pxe1-if up arp on
ip addr add 172.24.1.1/24 dev pxe1-if
# Setup rack 2 PXE gateway
ip link add dev pxe2-if type veth peer name pxe2-ifp
ip link set dev pxe2-ifp up master pxe2-br
ip link set dev pxe2-if up arp on
ip addr add 172.24.2.1/24 dev pxe2-if
# Setup fake IPMI interfaces and vbmc instances
ip addr add 172.24.10.101/24 dev oob-if
vbmc add --address 172.24.10.101 node2
ip addr add 172.24.10.102/24 dev oob-if
vbmc add --address 172.24.10.102 node3
vbmc start node2
vbmc start node3
# Setup rules for IP forwarding on PXE networks
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o extbr -j MASQUERADE
iptables -A FORWARD -i extbr -o pxe1-if -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i pxe1-if -o extbr -j ACCEPT
iptables -A FORWARD -i extbr -o pxe2-if -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i pxe2-if -o extbr -j ACCEPT
# Setup external ssh access to genesis VM
iptables -t nat -A PREROUTING -p tcp -d 10.23.19.16 --dport 2222 -j DNAT --to-destination 172.24.1.100:22
# Node1 - Genesis
# PXE1 - 172.24.1.100/24
# OOB - 172.24.10.100/24
# Node2 - Master
# PXE1 - 172.24.1.101/24
# vbmc - 172.24.10.101/24
# Node3 - Master
# PXE2 - 172.24.2.101/24
# vbmc - 172.24.10.102/24
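
Once the bridges and vbmc instances are up, the fake BMCs can be sanity-checked from the host (assuming virtualbmc's default admin/password credentials on port 623):

$ vbmc list
$ ipmitool -I lanplus -H 172.24.10.101 -U admin -P password power status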

View File

@ -0,0 +1,82 @@
---
apiVersion: promenade/v1
kind: Cluster
metadata:
name: example
target: none
spec:
nodes:
${GENESIS_NODE_NAME}:
ip: ${GENESIS_NODE_IP}
roles:
- master
- genesis
additional_labels:
- beta.kubernetes.io/arch=amd64
- ucp-control-plane=enabled
- ceph-mon=enabled
- ceph-osd=enabled
- ceph-mds=enabled
${MASTER_NODE_NAME}:
ip: ${MASTER_NODE_IP}
roles:
- master
additional_labels:
- beta.kubernetes.io/arch=amd64
- ucp-control-plane=enabled
- ceph-mon=enabled
- ceph-osd=enabled
- ceph-mds=enabled
---
apiVersion: promenade/v1
kind: Network
metadata:
cluster: example
name: example
target: all
spec:
cluster_domain: cluster.local
cluster_dns: 10.96.0.10
kube_service_ip: 10.96.0.1
pod_ip_cidr: 10.97.0.0/16
service_ip_cidr: 10.96.0.0/16
calico_etcd_service_ip: 10.96.232.136
calico_interface: ${NODE_NET_IFACE}
dns_servers:
- 8.8.8.8
- 8.8.4.4
---
apiVersion: promenade/v1
kind: Versions
metadata:
cluster: example
name: example
target: all
spec:
images:
armada: ${ARMADA_IMAGE}
calico:
cni: quay.io/calico/cni:v1.9.1
etcd: quay.io/coreos/etcd:v3.2.1
node: quay.io/calico/node:v1.3.0
policy-controller: quay.io/calico/kube-policy-controller:v0.6.0
kubernetes:
apiserver: gcr.io/google_containers/hyperkube-amd64:v1.6.7
controller-manager: quay.io/attcomdev/kube-controller-manager:v1.6.7
dns:
dnsmasq: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
kubedns: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.2
sidecar: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2
etcd: quay.io/coreos/etcd:v3.2.1
kubectl: gcr.io/google_containers/hyperkube-amd64:v1.6.7
proxy: gcr.io/google_containers/hyperkube-amd64:v1.6.7
scheduler: gcr.io/google_containers/hyperkube-amd64:v1.6.7
promenade: ${PROMENADE_IMAGE}
tiller: gcr.io/kubernetes-helm/tiller:v2.5.0
packages:
docker: docker.io=1.12.6-0ubuntu1~16.04.1
dnsmasq: dnsmasq=2.75-1ubuntu0.16.04.2
socat: socat=1.7.3.1-1
additional_packages:
- ceph-common=10.2.7-0ubuntu0.16.04.1
...

View File

@ -0,0 +1,16 @@
---
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
name: generous-permissions
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: Group
name: system:masters
- kind: Group
name: system:authenticated
- kind: Group
name: system:unauthenticated
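
This binding hands cluster-admin to every authenticated and unauthenticated request, which the Openstack-Helm-based charts deployed here expect. A quick check that it took effect (a sketch, assuming kubectl 1.6+ and impersonation rights in the admin kubeconfig):

$ kubectl auth can-i '*' '*' --as=system:anonymous   # expect "yes"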

View File

@ -0,0 +1,9 @@
export CEPH_CLUSTER_NET=172.24.1.0/24
export CEPH_PUBLIC_NET=172.24.1.0/24
export GENESIS_NODE_IP=172.24.1.100
export MASTER_NODE_IP=172.24.1.101
export NODE_NET_IFACE=ens3
export CEPH_CHART_REPO=https://github.com/sh8121att/helm_charts
export DRYDOCK_CHART_REPO=https://github.com/sh8121att/helm_charts
export MAAS_CHART_REPO=https://github.com/sh8121att/helm_charts
export DRYDOCK_IMAGE=docker.io/sthussey/drydock:latest
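
These overrides are intended to be sourced into the shell before running the deployment script that follows, so its ${VAR:-default} expansions pick them up; for example (both filenames here are illustrative):

$ source ./env-local.sh
$ sudo -E bash ./deploy.sh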

View File

@ -0,0 +1,128 @@
#!/bin/bash
set -x
# Check that we are root
if [[ $(whoami) != "root" ]]
then
echo "Must be root to run $0"
exit 1
fi
# Install docker
apt -qq update
apt -y install docker.io jq
# Setup environmental variables
# with stable defaults
# Network
export CEPH_CLUSTER_NET=${CEPH_CLUSTER_NET:-"NA"}
export CEPH_PUBLIC_NET=${CEPH_PUBLIC_NET:-"NA"}
export GENESIS_NODE_IP=${GENESIS_NODE_IP:-"NA"}
export DRYDOCK_NODE_IP=${DRYDOCK_NODE_IP:-${GENESIS_NODE_IP}}
export DRYDOCK_NODE_PORT=${DRYDOCK_NODE_PORT:-31000}
export MAAS_NODE_IP=${MAAS_NODE_IP:-${GENESIS_NODE_IP}}
export MAAS_NODE_PORT=${MAAS_NODE_PORT:-31900}
export MASTER_NODE_IP=${MASTER_NODE_IP:-"NA"}
export NODE_NET_IFACE=${NODE_NET_IFACE:-"eth0"}
export PROXY_ADDRESS=${PROXY_ADDRESS:-"http://one.proxy.att.com:8080"}
export PROXY_ENABLED=${PROXY_ENABLED:-"false"}
# Hostnames
export GENESIS_NODE_NAME=${GENESIS_NODE_NAME:-"node1"}
export MASTER_NODE_NAME=${MASTER_NODE_NAME:-"node2"}
# Charts
export CEPH_CHART_REPO=${CEPH_CHART_REPO:-"https://github.com/openstack/openstack-helm"}
export CEPH_CHART_BRANCH=${CEPH_CHART_BRANCH:-"master"}
export DRYDOCK_CHART_REPO=${DRYDOCK_CHART_REPO:-"https://github.com/att-comdev/aic-helm"}
export DRYDOCK_CHART_BRANCH=${DRYDOCK_CHART_BRANCH:-"master"}
export MAAS_CHART_REPO=${MAAS_CHART_REPO:-"https://github.com/openstack/openstack-helm-addons"}
export MAAS_CHART_BRANCH=${MAAS_CHART_BRANCH:-"master"}
# Images
export DRYDOCK_IMAGE=${DRYDOCK_IMAGE:-"quay.io/attcomdev/drydock:0.2.0-a1"}
export ARMADA_IMAGE=${ARMADA_IMAGE:-"quay.io/attcomdev/armada:v0.6.0"}
export PROMENADE_IMAGE=${PROMENADE_IMAGE:-"quay.io/attcomdev/promenade:master"}
# Filenames
export ARMADA_CONFIG=${ARMADA_CONFIG:-"armada.yaml"}
export PROMENADE_CONFIG=${PROMENADE_CONFIG:-"promenade.yaml"}
export UP_SCRIPT_FILE=${UP_SCRIPT_FILE:-"up.sh"}
# Validate environment
if [[ $GENESIS_NODE_IP == "NA" || $MASTER_NODE_IP == "NA" ]]
then
echo "GENESIS_NODE_IP and MASTER_NODE_IP env vars must be set to correct IP addresses."
exit 1
fi
if [[ $CEPH_CLUSTER_NET == "NA" || $CEPH_PUBLIC_NET == "NA" ]]
then
echo "CEPH_CLUSTER_NET and CEPH_PUBLIC_NET en vars must be set to correct IP subnet CIDRs."
exit 1
fi
# Required inputs
# Promenade input-config.yaml
# Armada Manifest for integrated UCP services
envsubst < promenade.yaml.sub > ${PROMENADE_CONFIG}
envsubst < armada.yaml.sub > ${ARMADA_CONFIG}
rm -rf configs
mkdir configs
# Generate Promenade configuration
docker run -t -v $(pwd):/target ${PROMENADE_IMAGE} promenade generate -c /target/${PROMENADE_CONFIG} -o /target/configs
# Do Promenade genesis process
cd configs
sudo bash ${UP_SCRIPT_FILE} ./${GENESIS_NODE_NAME}.yaml
cd ..
# Setup kubeconfig
mkdir -p ~/.kube
cp -r /etc/kubernetes/admin/pki ~/.kube/pki
sed -e 's/\/etc\/kubernetes\/admin/./' /etc/kubernetes/admin/kubeconfig.yaml > ~/.kube/config
# Polling to ensure genesis is complete
while [[ -z $(kubectl get pods -n kube-system | grep 'kube-dns' | grep -e '3/3') ]]
do
sleep 5
done
# Squash Kubernetes RBAC to be compatible w/ OSH
kubectl apply -f ./rbac-generous-permissions.yaml
# Do Armada deployment of UCP integrated services
docker run -t -v ~/.kube:/root/.kube -v $(pwd):/target --net=host \
${ARMADA_IMAGE} apply --debug-logging /target/${ARMADA_CONFIG} --tiller-host=${GENESIS_NODE_IP} --tiller-port=44134
# Polling for UCP service deployment
while [[ -z $(kubectl get pods -n ucp | grep drydock | grep Running) ]]
do
sleep 5
done
# Run Gabbi tests
TOKEN=$(docker run --rm --net=host -e 'OS_AUTH_URL=http://keystone-api.ucp.svc.cluster.local:80/v3' -e 'OS_PASSWORD=password' -e 'OS_PROJECT_DOMAIN_NAME=default' -e 'OS_PROJECT_NAME=service' -e 'OS_REGION_NAME=RegionOne' -e 'OS_USERNAME=drydock' -e 'OS_USER_DOMAIN_NAME=default' -e 'OS_IDENTITY_API_VERSION=3' kolla/ubuntu-source-keystone:3.0.3 openstack token issue -f shell | grep ^id | cut -d'=' -f2 | tr -d '"')
DESIGN_ID=$(docker run --rm --net=host -e "DD_TOKEN=$TOKEN" -e "DD_URL=http://drydock-api.ucp.svc.cluster.local:9000" -e "LC_ALL=C.UTF-8" -e "LANG=C.UTF-8" --entrypoint /usr/local/bin/drydock $DRYDOCK_IMAGE design create)
TASK_ID=$(docker run --rm --net=host -e "DD_TOKEN=$TOKEN" -e "DD_URL=http://drydock-api.ucp.svc.cluster.local:9000" -e "LC_ALL=C.UTF-8" -e "LANG=C.UTF-8" --entrypoint /usr/local/bin/drydock $DRYDOCK_IMAGE task create -d $DESIGN_ID -a verify_site)
sleep 15
TASK_STATUS=$(docker run --rm --net=host -e "DD_TOKEN=$TOKEN" -e "DD_URL=http://drydock-api.ucp.svc.cluster.local:9000" -e "LC_ALL=C.UTF-8" -e "LANG=C.UTF-8" --entrypoint /usr/local/bin/drydock $DRYDOCK_IMAGE task show -t $TASK_ID | tr "'" '"' | sed -e 's/None/null/g')
if [[ $(echo $TASK_STATUS | jq -r .result) == "success" ]]
then
echo "Action verify_site successful."
exit 0
else
echo "Action verify_site failed."
echo $TASK_STATUS
exit 1
fi
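
For reference, TASK_STATUS is a JSON document shaped like the task fixture in the client tests below, which is why jq can pull the overall result directly:

$ echo $TASK_STATUS | jq -r .result
success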

View File

@ -16,15 +16,17 @@ import json
import drydock_provisioner.config as config
import drydock_provisioner.drivers.node.maasdriver.api_client as client
class TestClass(object):
class TestClass(object):
def test_client_authenticate(self):
client_config = config.DrydockConfig.node_driver['maasdriver']
maas_client = client.MaasRequestFactory(client_config['api_url'], client_config['api_key'])
maas_client = client.MaasRequestFactory(client_config['api_url'],
client_config['api_key'])
resp = maas_client.get('account/', params={'op': 'list_authorisation_tokens'})
resp = maas_client.get(
'account/', params={'op': 'list_authorisation_tokens'})
parsed = resp.json()
assert len(parsed) > 0
assert len(parsed) > 0

View File

@ -19,33 +19,37 @@ import drydock_provisioner.drivers.node.maasdriver.api_client as client
import drydock_provisioner.drivers.node.maasdriver.models.fabric as maas_fabric
import drydock_provisioner.drivers.node.maasdriver.models.subnet as maas_subnet
class TestClass(object):
def test_maas_fabric(self):
client_config = config.DrydockConfig.node_driver['maasdriver']
client_config = config.DrydockConfig.node_driver['maasdriver']
maas_client = client.MaasRequestFactory(client_config['api_url'], client_config['api_key'])
maas_client = client.MaasRequestFactory(client_config['api_url'],
client_config['api_key'])
fabric_name = str(uuid.uuid4())
fabric_name = str(uuid.uuid4())
fabric_list = maas_fabric.Fabrics(maas_client)
fabric_list.refresh()
fabric_list = maas_fabric.Fabrics(maas_client)
fabric_list.refresh()
test_fabric = maas_fabric.Fabric(maas_client, name=fabric_name, description='Test Fabric')
test_fabric = fabric_list.add(test_fabric)
test_fabric = maas_fabric.Fabric(
maas_client, name=fabric_name, description='Test Fabric')
test_fabric = fabric_list.add(test_fabric)
assert test_fabric.name == fabric_name
assert test_fabric.resource_id is not None
assert test_fabric.name == fabric_name
assert test_fabric.resource_id is not None
query_fabric = maas_fabric.Fabric(maas_client, resource_id=test_fabric.resource_id)
query_fabric.refresh()
query_fabric = maas_fabric.Fabric(
maas_client, resource_id=test_fabric.resource_id)
query_fabric.refresh()
assert query_fabric.name == test_fabric.name
assert query_fabric.name == test_fabric.name
def test_maas_subnet(self):
client_config = config.DrydockConfig.node_driver['maasdriver']
maas_client = client.MaasRequestFactory(client_config['api_url'], client_config['api_key'])
maas_client = client.MaasRequestFactory(client_config['api_url'],
client_config['api_key'])
subnet_list = maas_subnet.Subnets(maas_client)
subnet_list.refresh()
@ -53,6 +57,3 @@ class TestClass(object):
for s in subnet_list:
print(s.to_dict())
assert False

View File

@ -28,17 +28,22 @@ import drydock_provisioner.objects.task as task
import drydock_provisioner.drivers as drivers
from drydock_provisioner.ingester import Ingester
class TestClass(object):
class TestClass(object):
def test_client_verify(self):
design_state = statemgmt.DesignState()
orchestrator = orch.Orchestrator(state_manager=design_state,
enabled_drivers={'node': 'drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver'})
orchestrator = orch.Orchestrator(
state_manager=design_state,
enabled_drivers={
'node':
'drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver'
})
orch_task = orchestrator.create_task(task.OrchestratorTask,
site='sitename',
design_id=None,
action=hd_fields.OrchestratorAction.VerifySite)
orch_task = orchestrator.create_task(
task.OrchestratorTask,
site='sitename',
design_id=None,
action=hd_fields.OrchestratorAction.VerifySite)
orchestrator.execute_task(orch_task.get_id())
@ -57,19 +62,28 @@ class TestClass(object):
design_state.post_design(design_data)
ingester = Ingester()
ingester.enable_plugins([drydock_provisioner.ingester.plugins.yaml.YamlIngester])
ingester.ingest_data(plugin_name='yaml', design_state=design_state,
filenames=[str(input_file)], design_id=design_id)
ingester.enable_plugins(
[drydock_provisioner.ingester.plugins.yaml.YamlIngester])
ingester.ingest_data(
plugin_name='yaml',
design_state=design_state,
filenames=[str(input_file)],
design_id=design_id)
design_data = design_state.get_design(design_id)
orchestrator = orch.Orchestrator(state_manager=design_state,
enabled_drivers={'node': 'drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver'})
orchestrator = orch.Orchestrator(
state_manager=design_state,
enabled_drivers={
'node':
'drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver'
})
orch_task = orchestrator.create_task(task.OrchestratorTask,
site='sitename',
design_id=design_id,
action=hd_fields.OrchestratorAction.PrepareSite)
orch_task = orchestrator.create_task(
task.OrchestratorTask,
site='sitename',
design_id=design_id,
action=hd_fields.OrchestratorAction.PrepareSite)
orchestrator.execute_task(orch_task.get_id())
@ -77,9 +91,6 @@ class TestClass(object):
assert orch_task.result == hd_fields.ActionResult.Success
@pytest.fixture(scope='module')
def input_files(self, tmpdir_factory, request):
tmpdir = tmpdir_factory.mktemp('data')
@ -91,4 +102,4 @@ class TestClass(object):
dst_file = str(tmpdir) + "/" + f
shutil.copyfile(src_file, dst_file)
return tmpdir
return tmpdir

View File

@ -26,8 +26,8 @@ import falcon
logging.basicConfig(level=logging.DEBUG)
class TestTasksApi():
class TestTasksApi():
def test_read_tasks(self, mocker):
''' DrydockPolicy.authorized() should correctly use oslo_policy to enforce
RBAC policy based on a DrydockRequestContext instance
@ -70,17 +70,18 @@ class TestTasksApi():
mocker.patch('oslo_policy.policy.Enforcer')
state = mocker.MagicMock()
orch = mocker.MagicMock(spec=Orchestrator, wraps=Orchestrator(state_manager=state))
orch_mock_config = {'execute_task.return_value': True}
orch = mocker.MagicMock(
spec=Orchestrator, wraps=Orchestrator(state_manager=state))
orch_mock_config = {'execute_task.return_value': True}
orch.configure_mock(**orch_mock_config)
ctx = DrydockRequestContext()
policy_engine = policy.DrydockPolicy()
json_body = json.dumps({
'action': 'verify_site',
'design_id': 'foo',
}).encode('utf-8')
json_body = json.dumps({
'action': 'verify_site',
'design_id': 'foo',
}).encode('utf-8')
# Mock policy enforcement
policy_mock_config = {'authorize.return_value': True}

View File

@ -21,9 +21,9 @@ import pytest
logging.basicConfig(level=logging.DEBUG)
class TestEnforcerDecorator():
def test_apienforcer_decorator(self,mocker):
class TestEnforcerDecorator():
def test_apienforcer_decorator(self, mocker):
''' DrydockPolicy.authorized() should correctly use oslo_policy to enforce
RBAC policy based on a DrydockRequestContext instance. authorized() is
called via the policy.ApiEnforcer decorator.
@ -49,8 +49,12 @@ class TestEnforcerDecorator():
self.target_function(req, resp)
expected_calls = [mocker.call.authorize('physical_provisioner:read_task', {'project_id': project_id, 'user_id': user_id},
ctx.to_policy_view())]
expected_calls = [
mocker.call.authorize('physical_provisioner:read_task', {
'project_id': project_id,
'user_id': user_id
}, ctx.to_policy_view())
]
policy_engine.enforcer.assert_has_calls(expected_calls)

View File

@ -20,57 +20,60 @@ from drydock_provisioner.control.middleware import AuthMiddleware
import pytest
class TestAuthMiddleware():
# the WSGI env for a request processed by keystone middleware
# with user token
ks_user_env = { 'REQUEST_METHOD': 'GET',
'SCRIPT_NAME': '/foo',
'PATH_INFO': '',
'QUERY_STRING': '',
'CONTENT_TYPE': '',
'CONTENT_LENGTH': 0,
'SERVER_NAME': 'localhost',
'SERVER_PORT': '9000',
'SERVER_PROTOCOL': 'HTTP/1.1',
'HTTP_X_IDENTITY_STATUS': 'Confirmed',
'HTTP_X_PROJECT_ID': '',
'HTTP_X_USER_ID': '',
'HTTP_X_AUTH_TOKEN': '',
'HTTP_X_ROLES': '',
'wsgi.version': (1,0),
'wsgi.url_scheme': 'http',
'wsgi.input': sys.stdin,
'wsgi.errors': sys.stderr,
'wsgi.multithread': False,
'wsgi.multiprocess': False,
'wsgi.run_once': False,
}
ks_user_env = {
'REQUEST_METHOD': 'GET',
'SCRIPT_NAME': '/foo',
'PATH_INFO': '',
'QUERY_STRING': '',
'CONTENT_TYPE': '',
'CONTENT_LENGTH': 0,
'SERVER_NAME': 'localhost',
'SERVER_PORT': '9000',
'SERVER_PROTOCOL': 'HTTP/1.1',
'HTTP_X_IDENTITY_STATUS': 'Confirmed',
'HTTP_X_PROJECT_ID': '',
'HTTP_X_USER_ID': '',
'HTTP_X_AUTH_TOKEN': '',
'HTTP_X_ROLES': '',
'wsgi.version': (1, 0),
'wsgi.url_scheme': 'http',
'wsgi.input': sys.stdin,
'wsgi.errors': sys.stderr,
'wsgi.multithread': False,
'wsgi.multiprocess': False,
'wsgi.run_once': False,
}
# the WSGI env for a request processed by keystone middleware
# with service token
ks_service_env = { 'REQUEST_METHOD': 'GET',
'SCRIPT_NAME': '/foo',
'PATH_INFO': '',
'QUERY_STRING': '',
'CONTENT_TYPE': '',
'CONTENT_LENGTH': 0,
'SERVER_NAME': 'localhost',
'SERVER_PORT': '9000',
'SERVER_PROTOCOL': 'HTTP/1.1',
'HTTP_X_SERVICE_IDENTITY_STATUS': 'Confirmed',
'HTTP_X_SERVICE_PROJECT_ID': '',
'HTTP_X_SERVICE_USER_ID': '',
'HTTP_X_SERVICE_TOKEN': '',
'HTTP_X_ROLES': '',
'wsgi.version': (1,0),
'wsgi.url_scheme': 'http',
'wsgi.input': sys.stdin,
'wsgi.errors': sys.stderr,
'wsgi.multithread': False,
'wsgi.multiprocess': False,
'wsgi.run_once': False,
}
ks_service_env = {
'REQUEST_METHOD': 'GET',
'SCRIPT_NAME': '/foo',
'PATH_INFO': '',
'QUERY_STRING': '',
'CONTENT_TYPE': '',
'CONTENT_LENGTH': 0,
'SERVER_NAME': 'localhost',
'SERVER_PORT': '9000',
'SERVER_PROTOCOL': 'HTTP/1.1',
'HTTP_X_SERVICE_IDENTITY_STATUS': 'Confirmed',
'HTTP_X_SERVICE_PROJECT_ID': '',
'HTTP_X_SERVICE_USER_ID': '',
'HTTP_X_SERVICE_TOKEN': '',
'HTTP_X_ROLES': '',
'wsgi.version': (1, 0),
'wsgi.url_scheme': 'http',
'wsgi.input': sys.stdin,
'wsgi.errors': sys.stderr,
'wsgi.multithread': False,
'wsgi.multiprocess': False,
'wsgi.run_once': False,
}
def test_process_request_user(self):
''' AuthMiddleware is expected to correctly identify the headers

View File

@ -11,11 +11,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from drydock_provisioner.ingester import Ingester
from drydock_provisioner.statemgmt import DesignState
from drydock_provisioner.orchestrator import Orchestrator
from copy import deepcopy
import pytest
@ -23,50 +18,57 @@ import shutil
import os
import drydock_provisioner.ingester.plugins.yaml
import yaml
import logging
from drydock_provisioner.ingester import Ingester
from drydock_provisioner.statemgmt import DesignState
from drydock_provisioner.orchestrator import Orchestrator
from drydock_provisioner.objects.site import SiteDesign
logging.basicConfig(level=logging.DEBUG)
class TestClass(object):
def test_design_inheritance(self, loaded_design):
orchestrator = Orchestrator(state_manager=loaded_design,
enabled_drivers={'oob': 'drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver'})
design_data = orchestrator.load_design_data("sitename")
assert len(design_data.baremetal_nodes) == 2
design_data = orchestrator.compute_model_inheritance(design_data)
node = design_data.get_baremetal_node("controller01")
assert node.applied.get('hardware_profile') == 'HPGen9v3'
iface = node.get_applied_interface('bond0')
assert iface.get_applied_slave_count() == 2
iface = node.get_applied_interface('pxe')
assert iface.get_applied_slave_count() == 1
@pytest.fixture(scope='module')
def loaded_design(self, input_files):
def test_design_inheritance(self, input_files):
input_file = input_files.join("fullsite.yaml")
design_state = DesignState()
design_data = SiteDesign()
design_state.post_design_base(design_data)
design_id = design_data.assign_id()
design_state.post_design(design_data)
ingester = Ingester()
ingester.enable_plugins([drydock_provisioner.ingester.plugins.yaml.YamlIngester])
ingester.ingest_data(plugin_name='yaml', design_state=design_state, filenames=[str(input_file)])
ingester.enable_plugins(
['drydock_provisioner.ingester.plugins.yaml.YamlIngester'])
ingester.ingest_data(
plugin_name='yaml',
design_state=design_state,
design_id=str(design_id),
filenames=[str(input_file)])
return design_state
orchestrator = Orchestrator(state_manager=design_state)
design_data = orchestrator.get_effective_site(design_id)
assert len(design_data.baremetal_nodes) == 2
node = design_data.get_baremetal_node("controller01")
assert node.hardware_profile == 'HPGen9v3'
iface = node.get_applied_interface('bond0')
assert len(iface.get_hw_slaves()) == 2
iface = node.get_applied_interface('pxe')
assert len(iface.get_hw_slaves()) == 1
@pytest.fixture(scope='module')
def input_files(self, tmpdir_factory, request):
tmpdir = tmpdir_factory.mktemp('data')
samples_dir = os.path.dirname(str(request.fspath)) + "../yaml_samples"
samples_dir = os.path.dirname(
str(request.fspath)) + "/" + "../yaml_samples"
samples = os.listdir(samples_dir)
for f in samples:

View File

@ -17,10 +17,12 @@ import responses
import drydock_provisioner.drydock_client.session as dc_session
import drydock_provisioner.drydock_client.client as dc_client
def test_blank_session_error():
with pytest.raises(Exception):
dd_ses = dc_session.DrydockSession()
def test_session_init_minimal():
port = 9000
host = 'foo.bar.baz'
@ -29,6 +31,7 @@ def test_session_init_minimal():
assert dd_ses.base_url == "http://%s:%d/api/" % (host, port)
def test_session_init_minimal_no_port():
host = 'foo.bar.baz'
@ -36,6 +39,7 @@ def test_session_init_minimal_no_port():
assert dd_ses.base_url == "http://%s/api/" % (host)
def test_session_init_uuid_token():
host = 'foo.bar.baz'
token = '5f1e08b6-38ec-4a99-9d0f-00d29c4e325b'
@ -45,15 +49,17 @@ def test_session_init_uuid_token():
assert dd_ses.base_url == "http://%s/api/" % (host)
assert dd_ses.token == token
def test_session_init_fernet_token():
host = 'foo.bar.baz'
token = 'gAAAAABU7roWGiCuOvgFcckec-0ytpGnMZDBLG9hA7Hr9qfvdZDHjsak39YN98HXxoYLIqVm19Egku5YR3wyI7heVrOmPNEtmr-fIM1rtahudEdEAPM4HCiMrBmiA1Lw6SU8jc2rPLC7FK7nBCia_BGhG17NVHuQu0S7waA306jyKNhHwUnpsBQ'
dd_ses = dc_session.DrydockSession(host, token=token)
assert dd_ses.base_url == "http://%s/api/" % (host)
assert dd_ses.token == token
def test_session_init_marker():
host = 'foo.bar.baz'
marker = '5f1e08b6-38ec-4a99-9d0f-00d29c4e325b'
@ -63,10 +69,14 @@ def test_session_init_marker():
assert dd_ses.base_url == "http://%s/api/" % (host)
assert dd_ses.marker == marker
@responses.activate
def test_session_get():
responses.add(responses.GET, 'http://foo.bar.baz/api/v1.0/test', body='okay',
status=200)
responses.add(
responses.GET,
'http://foo.bar.baz/api/v1.0/test',
body='okay',
status=200)
host = 'foo.bar.baz'
token = '5f1e08b6-38ec-4a99-9d0f-00d29c4e325b'
marker = '40c3eaf6-6a8a-11e7-a4bd-080027ef795a'
@ -79,11 +89,15 @@ def test_session_get():
assert req.headers.get('X-Auth-Token', None) == token
assert req.headers.get('X-Context-Marker', None) == marker
@responses.activate
def test_client_designs_get():
design_id = '828e88dc-6a8b-11e7-97ae-080027ef795a'
responses.add(responses.GET, 'http://foo.bar.baz/api/v1.0/designs',
json=[design_id], status=200)
responses.add(
responses.GET,
'http://foo.bar.baz/api/v1.0/designs',
json=[design_id],
status=200)
host = 'foo.bar.baz'
token = '5f1e08b6-38ec-4a99-9d0f-00d29c4e325b'
@ -92,19 +106,24 @@ def test_client_designs_get():
dd_client = dc_client.DrydockClient(dd_ses)
design_list = dd_client.get_design_ids()
assert design_id in design_list
assert design_id in design_list
@responses.activate
def test_client_design_get():
design = { 'id': '828e88dc-6a8b-11e7-97ae-080027ef795a',
'model_type': 'SiteDesign'
}
design = {
'id': '828e88dc-6a8b-11e7-97ae-080027ef795a',
'model_type': 'SiteDesign'
}
responses.add(responses.GET, 'http://foo.bar.baz/api/v1.0/designs/828e88dc-6a8b-11e7-97ae-080027ef795a',
json=design, status=200)
responses.add(
responses.GET,
'http://foo.bar.baz/api/v1.0/designs/828e88dc-6a8b-11e7-97ae-080027ef795a',
json=design,
status=200)
host = 'foo.bar.baz'
dd_ses = dc_session.DrydockSession(host)
dd_client = dc_client.DrydockClient(dd_ses)
@ -113,29 +132,36 @@ def test_client_design_get():
assert design_resp['id'] == design['id']
assert design_resp['model_type'] == design['model_type']
@responses.activate
def test_client_task_get():
task = {'action': 'deploy_node',
'result': 'success',
'parent_task': '444a1a40-7b5b-4b80-8265-cadbb783fa82',
'subtasks': [],
'status': 'complete',
'result_detail': {
'detail': ['Node cab23-r720-17 deployed']
},
'site_name': 'mec_demo',
'task_id': '1476902c-758b-49c0-b618-79ff3fd15166',
'node_list': ['cab23-r720-17'],
'design_id': 'fcf37ba1-4cde-48e5-a713-57439fc6e526'}
task = {
'action': 'deploy_node',
'result': 'success',
'parent_task': '444a1a40-7b5b-4b80-8265-cadbb783fa82',
'subtasks': [],
'status': 'complete',
'result_detail': {
'detail': ['Node cab23-r720-17 deployed']
},
'site_name': 'mec_demo',
'task_id': '1476902c-758b-49c0-b618-79ff3fd15166',
'node_list': ['cab23-r720-17'],
'design_id': 'fcf37ba1-4cde-48e5-a713-57439fc6e526'
}
host = 'foo.bar.baz'
responses.add(responses.GET, "http://%s/api/v1.0/tasks/1476902c-758b-49c0-b618-79ff3fd15166" % (host),
json=task, status=200)
responses.add(
responses.GET,
"http://%s/api/v1.0/tasks/1476902c-758b-49c0-b618-79ff3fd15166" %
(host),
json=task,
status=200)
dd_ses = dc_session.DrydockSession(host)
dd_client = dc_client.DrydockClient(dd_ses)
task_resp = dd_client.get_task('1476902c-758b-49c0-b618-79ff3fd15166')
assert task_resp['status'] == task['status']

View File

@ -21,11 +21,8 @@ import shutil
import os
import drydock_provisioner.ingester.plugins.yaml
class TestClass(object):
def setup_method(self, method):
print("Running test {0}".format(method.__name__))
def test_ingest_full_site(self, input_files):
objects.register_all()
@ -37,13 +34,17 @@ class TestClass(object):
design_state.post_design(design_data)
ingester = Ingester()
ingester.enable_plugins([drydock_provisioner.ingester.plugins.yaml.YamlIngester])
ingester.ingest_data(plugin_name='yaml', design_state=design_state,
filenames=[str(input_file)], design_id=design_id)
ingester.enable_plugins(
['drydock_provisioner.ingester.plugins.yaml.YamlIngester'])
ingester.ingest_data(
plugin_name='yaml',
design_state=design_state,
filenames=[str(input_file)],
design_id=design_id)
design_data = design_state.get_design(design_id)
assert len(design_data.host_profiles) == 3
assert len(design_data.host_profiles) == 2
assert len(design_data.baremetal_nodes) == 2
def test_ingest_federated_design(self, input_files):
@ -59,18 +60,27 @@ class TestClass(object):
design_state.post_design(design_data)
ingester = Ingester()
ingester.enable_plugins([drydock_provisioner.ingester.plugins.yaml.YamlIngester])
ingester.ingest_data(plugin_name='yaml', design_state=design_state, design_id=design_id,
filenames=[str(profiles_file), str(networks_file), str(nodes_file)])
ingester.enable_plugins(
['drydock_provisioner.ingester.plugins.yaml.YamlIngester'])
ingester.ingest_data(
plugin_name='yaml',
design_state=design_state,
design_id=design_id,
filenames=[
str(profiles_file),
str(networks_file),
str(nodes_file)
])
design_data = design_state.get_design(design_id)
assert len(design_data.host_profiles) == 3
assert len(design_data.host_profiles) == 2
@pytest.fixture(scope='module')
def input_files(self, tmpdir_factory, request):
tmpdir = tmpdir_factory.mktemp('data')
samples_dir = os.path.dirname(str(request.fspath)) + "../yaml_samples"
samples_dir = os.path.dirname(
str(request.fspath)) + "/" + "../yaml_samples"
samples = os.listdir(samples_dir)
for f in samples:

View File

@ -15,14 +15,14 @@ import pytest
import shutil
import os
import uuid
import logging
from drydock_provisioner.ingester.plugins.yaml import YamlIngester
logging.basicConfig(level=logging.DEBUG)
class TestClass(object):
def setup_method(self, method):
print("Running test {0}".format(method.__name__))
def test_ingest_singledoc(self, input_files):
input_file = input_files.join("singledoc.yaml")
@ -44,7 +44,8 @@ class TestClass(object):
@pytest.fixture(scope='module')
def input_files(self, tmpdir_factory, request):
tmpdir = tmpdir_factory.mktemp('data')
samples_dir = os.path.dirname(str(request.fspath)) + "../yaml_samples"
samples_dir = os.path.dirname(
str(request.fspath)) + "/" + "../yaml_samples"
samples = os.listdir(samples_dir)
for f in samples:

View File

@ -17,58 +17,69 @@ import pytest
import drydock_provisioner.objects as objects
from drydock_provisioner.objects import fields
class TestClass(object):
class TestClass(object):
def test_hardwareprofile(self):
objects.register_all()
model_attr = {
'versioned_object.namespace': 'drydock_provisioner.objects',
'versioned_object.name': 'HardwareProfile',
'versioned_object.version': '1.0',
'versioned_object.namespace': 'drydock_provisioner.objects',
'versioned_object.name': 'HardwareProfile',
'versioned_object.version': '1.0',
'versioned_object.data': {
'name': 'server',
'source': fields.ModelSource.Designed,
'site': 'test_site',
'vendor': 'Acme',
'generation': '9',
'hw_version': '3',
'bios_version': '2.1.1',
'boot_mode': 'bios',
'bootstrap_protocol': 'pxe',
'pxe_interface': '0',
'devices': {
'versioned_object.namespace': 'drydock_provisioner.objects',
'versioned_object.name': 'HardwareDeviceAliasList',
'versioned_object.version': '1.0',
'name': 'server',
'source': fields.ModelSource.Designed,
'site': 'test_site',
'vendor': 'Acme',
'generation': '9',
'hw_version': '3',
'bios_version': '2.1.1',
'boot_mode': 'bios',
'bootstrap_protocol': 'pxe',
'pxe_interface': '0',
'devices': {
'versioned_object.namespace':
'drydock_provisioner.objects',
'versioned_object.name': 'HardwareDeviceAliasList',
'versioned_object.version': '1.0',
'versioned_object.data': {
'objects': [
{
'versioned_object.namespace': 'drydock_provisioner.objects',
'versioned_object.name': 'HardwareDeviceAlias',
'versioned_object.version': '1.0',
'versioned_object.namespace':
'drydock_provisioner.objects',
'versioned_object.name':
'HardwareDeviceAlias',
'versioned_object.version':
'1.0',
'versioned_object.data': {
'alias': 'nic',
'source': fields.ModelSource.Designed,
'address': '0000:00:03.0',
'bus_type': 'pci',
'dev_type': '82540EM Gigabit Ethernet Controller',
'alias':
'nic',
'source':
fields.ModelSource.Designed,
'address':
'0000:00:03.0',
'bus_type':
'pci',
'dev_type':
'82540EM Gigabit Ethernet Controller',
}
},
{
'versioned_object.namespace': 'drydock_provisioner.objects',
'versioned_object.name': 'HardwareDeviceAlias',
'versioned_object.version': '1.0',
'versioned_object.namespace':
'drydock_provisioner.objects',
'versioned_object.name':
'HardwareDeviceAlias',
'versioned_object.version':
'1.0',
'versioned_object.data': {
'alias': 'bootdisk',
'source': fields.ModelSource.Designed,
'address': '2:0.0.0',
'alias': 'bootdisk',
'source': fields.ModelSource.Designed,
'address': '2:0.0.0',
'bus_type': 'scsi',
'dev_type': 'SSD',
}
},
]
}
}
}
@ -77,9 +88,8 @@ class TestClass(object):
hwprofile = objects.HardwareProfile.obj_from_primitive(model_attr)
assert getattr(hwprofile, 'bootstrap_protocol') == 'pxe'
hwprofile.bootstrap_protocol = 'network'
assert 'bootstrap_protocol' in hwprofile.obj_what_changed()
assert 'bios_version' not in hwprofile.obj_what_changed()

View File

@ -26,13 +26,13 @@ import drydock_provisioner.drivers as drivers
class TestClass(object):
def test_task_complete(self):
state_mgr = statemgmt.DesignState()
orchestrator = orch.Orchestrator(state_manager=state_mgr)
orch_task = orchestrator.create_task(task.OrchestratorTask,
site='default',
action=hd_fields.OrchestratorAction.Noop)
orch_task = orchestrator.create_task(
task.OrchestratorTask,
site='default',
action=hd_fields.OrchestratorAction.Noop)
orchestrator.execute_task(orch_task.get_id())
@ -47,12 +47,13 @@ class TestClass(object):
def test_task_termination(self):
state_mgr = statemgmt.DesignState()
orchestrator = orch.Orchestrator(state_manager=state_mgr)
orch_task = orchestrator.create_task(task.OrchestratorTask,
site='default',
action=hd_fields.OrchestratorAction.Noop)
orch_task = orchestrator.create_task(
task.OrchestratorTask,
site='default',
action=hd_fields.OrchestratorAction.Noop)
orch_thread = threading.Thread(target=orchestrator.execute_task,
args=(orch_task.get_id(),))
orch_thread = threading.Thread(
target=orchestrator.execute_task, args=(orch_task.get_id(), ))
orch_thread.start()
time.sleep(1)
@ -66,4 +67,4 @@ class TestClass(object):
for t_id in orch_task.subtasks:
t = state_mgr.get_task(t_id)
assert t.get_status() == hd_fields.TaskStatus.Terminated
assert t.get_status() == hd_fields.TaskStatus.Terminated

View File

@ -17,8 +17,8 @@ from drydock_provisioner.control.base import DrydockRequestContext
import pytest
class TestDefaultRules():
class TestDefaultRules():
def test_register_policy(self, mocker):
''' DrydockPolicy.register_policy() should correctly register all default
policy rules
@ -28,14 +28,16 @@ class TestDefaultRules():
policy_engine = DrydockPolicy()
policy_engine.register_policy()
expected_calls = [mocker.call.register_defaults(DrydockPolicy.base_rules),
mocker.call.register_defaults(DrydockPolicy.task_rules),
mocker.call.register_defaults(DrydockPolicy.data_rules)]
expected_calls = [
mocker.call.register_defaults(DrydockPolicy.base_rules),
mocker.call.register_defaults(DrydockPolicy.task_rules),
mocker.call.register_defaults(DrydockPolicy.data_rules)
]
# Validate the oslo_policy Enforcer was loaded with expected default policy rules
policy_engine.enforcer.assert_has_calls(expected_calls, any_order=True)
def test_authorize_context(self,mocker):
def test_authorize_context(self, mocker):
''' DrydockPolicy.authorized() should correctly use oslo_policy to enforce
RBAC policy based on a DrydockRequestContext instance
'''
@ -56,8 +58,10 @@ class TestDefaultRules():
policy_engine = DrydockPolicy()
policy_engine.authorize(policy_action, ctx)
expected_calls = [mocker.call.authorize(policy_action, {'project_id': project_id, 'user_id': user_id},
ctx.to_policy_view())]
expected_calls = [
mocker.call.authorize(
policy_action, {'project_id': project_id,
'user_id': user_id}, ctx.to_policy_view())
]
policy_engine.enforcer.assert_has_calls(expected_calls)

View File

@ -14,15 +14,11 @@
import pytest
import shutil
import drydock_provisioner.objects as objects
import drydock_provisioner.statemgmt as statemgmt
class TestClass(object):
def setup_method(self, method):
print("Running test {0}".format(method.__name__))
def test_sitedesign_post(self):
objects.register_all()
@ -45,4 +41,4 @@ class TestClass(object):
my_design = state_manager.get_design(design_id)
assert design_data.obj_to_primitive() == my_design.obj_to_primitive()
assert design_data.obj_to_primitive() == my_design.obj_to_primitive()

View File

@ -1,4 +1,4 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -18,16 +18,27 @@
####################
# version the schema in this file so consumers can rationally parse it
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Region
metadata:
name: sitename
date: 17-FEB-2017
description: Sample site design
author: sh8121@att.com
# Not sure if we have site wide data that doesn't fall into another 'Kind'
spec:
tag_definitions:
- tag: test
definition_type: lshw_xpath
definition: "//node[@id=\"display\"]/'clock units=\"Hz\"' > 1000000000"
authorized_keys:
- |
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDENeyO5hLPbLLQRZ0oafTYWs1ieo5Q+XgyZQs51Ju
jDGc8lKlWsg1/6yei2JewKMgcwG2Buu1eqU92Xn1SvMZLyt9GZURuBkyjcfVc/8GiU5QP1Of8B7CV0c
kfUpHWYJ17olTzT61Hgz10ioicBF6cjgQrLNcyn05xoaJHD2Vpf8Unxzi0YzA2e77yRqBo9jJVRaX2q
wUJuZrzb62x3zw8Knz6GGSZBn8xRKLaw1SKFpd1hwvL62GfqX5ZBAT1AYTZP1j8GcAoK8AFVn193SEU
vjSdUFa+RNWuJhkjBRfylJczIjTIFb5ls0jpbA3bMA9DE7lFKVQl6vVwFmiIVBI1 samplekey
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: NetworkLink
metadata:
name: oob
@ -43,11 +54,13 @@ spec:
trunking:
mode: disabled
default_network: oob
allowed_networks:
- oob
---
# pxe is a bit of 'magic' indicating the link config used when PXE booting
# a node. All other links indicate network configs applied when the node
# is deployed.
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: NetworkLink
metadata:
name: pxe
@ -67,8 +80,10 @@ spec:
mode: disabled
# use name, will translate to VLAN ID
default_network: pxe
allowed_networks:
- pxe
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: NetworkLink
metadata:
name: gp
@ -97,8 +112,11 @@ spec:
trunking:
mode: 802.1q
default_network: mgmt
allowed_networks:
- public
- mgmt
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: oob
@ -117,7 +135,7 @@ spec:
domain: ilo.sitename.att.com
servers: 172.16.100.10
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: pxe
@ -146,7 +164,7 @@ spec:
# DNS servers that a server using this network as its default gateway should use
servers: 172.16.0.10
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: mgmt
@ -181,7 +199,7 @@ spec:
# DNS servers that a server using this network as its default gateway should use
servers: 172.16.1.9,172.16.1.10
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: private
@ -205,7 +223,7 @@ spec:
domain: priv.sitename.example.com
servers: 172.16.2.9,172.16.2.10
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: public
@ -228,12 +246,12 @@ spec:
routes:
- subnet: 0.0.0.0/0
gateway: 172.16.3.1
metric: 9
metric: 10
dns:
domain: sitename.example.com
servers: 8.8.8.8
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: HostProfile
metadata:
name: defaults
@ -285,14 +303,20 @@ spec:
fs_label: logs
# Platform (Operating System) settings
platform:
image: ubuntu_16.04_hwe
kernel_params: default
image: ubuntu_16.04
kernel: generic
kernel_params:
quiet: true
console: ttyS2
# Additional metadata to apply to a node
metadata:
# Base URL of the introspection service - may go in curtin data
introspection_url: http://172.16.1.10:9090
# Freeform tags to be applied to the host
tags:
- deployment=initial
owner_data:
foo: bar
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: HostProfile
metadata:
name: k8-node
@ -314,56 +338,37 @@ spec:
# settings of the host_profile
hardware_profile: HPGen9v3
# Network interfaces.
primary_network: mgmt
interfaces:
# Keyed on device_name
# pxe is a special marker indicating which device should be used for pxe boot
- device_name: pxe
# The network link attached to this
network_link: pxe
device_link: pxe
# Slaves will specify aliases from hwdefinition.yaml
slaves:
- prim_nic01
- prim_nic01
# Which networks will be configured on this interface
networks:
- pxe
- pxe
- device_name: bond0
network_link: gp
# If multiple slaves are specified, but no bonding config
# is applied to the link, design validation will fail
slaves:
- prim_nic01
- prim_nic02
- prim_nic01
- prim_nic02
# If multiple networks are specified, but no trunking
# config is applied to the link, design validation will fail
networks:
- mgmt
- private
- mgmt
- private
metadata:
# Explicit tag assignment
tags:
- 'test'
# MaaS supports arbitrary key/value pairs; their use is not yet defined
owner_data:
foo: bar
---
apiVersion: 'v1.0'
kind: HostProfile
metadata:
name: k8-node-public
region: sitename
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 2/3 attributes. Primarily CIs used for configuring server interfaces
spec:
host_profile: k8-node
interfaces:
- device_name: bond0
networks:
# This is additive, so adds a network to those defined in the host_profile
# inheritance chain
- public
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: BaremetalNode
metadata:
name: controller01
@ -372,7 +377,7 @@ metadata:
author: sh8121@att.com
description: Describe layer 2/3 attributes. Primarily CIs used for configuring server interfaces
spec:
host_profile: k8-node-public
host_profile: k8-node
# the hostname for a server, could be used in multiple DNS domains to
# represent different interfaces
interfaces:
@ -395,10 +400,9 @@ spec:
- network: oob
address: 172.16.100.20
metadata:
roles: os_ctl
rack: rack01
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: BaremetalNode
metadata:
name: compute01
@ -417,8 +421,10 @@ spec:
address: 172.16.2.21
- network: oob
address: 172.16.100.21
metadata:
rack: rack02
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: HardwareProfile
metadata:
name: HPGen9v3
@ -456,4 +462,4 @@ spec:
alias: primary_boot
dev_type: 'VBOX HARDDISK'
bus_type: 'scsi'
...
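
The change running through this sample is the schema bump from 'v1.0' to 'drydock/v1' in every document's apiVersion. As the comment at the top of the file says, the version exists so consumers can rationally parse the stream; a minimal sketch of such a consumer, assuming PyYAML (the function name and return shape are illustrative, not Drydock's actual ingester API)::

    # Minimal sketch of a consumer that splits the multi-document stream
    # and dispatches on apiVersion/kind. Assumes PyYAML; illustrative only.
    import yaml

    SUPPORTED_VERSION = 'drydock/v1'

    def parse_design(stream):
        by_kind = {}
        for doc in yaml.safe_load_all(stream):
            if doc is None:
                continue
            version = doc.get('apiVersion')
            if version != SUPPORTED_VERSION:
                raise ValueError('Unsupported schema version: %s' % version)
            by_kind.setdefault(doc['kind'], []).append(doc)
        return by_kind

    with open('fullsite.yaml') as f:
        design = parse_design(f)
    print(sorted(design))  # e.g. ['BaremetalNode', 'HardwareProfile', ...]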

View File

@ -1,4 +1,4 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -18,16 +18,27 @@
####################
# version the schema in this file so consumers can rationally parse it
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Region
metadata:
name: sitename
date: 17-FEB-2017
description: Sample site design
author: sh8121@att.com
# Not sure if we have site-wide data that doesn't fall into another 'Kind'
spec:
tag_definitions:
- tag: test
definition_type: lshw_xpath
definition: "//node[@id=\"display\"]/'clock units=\"Hz\"' > 1000000000"
authorized_keys:
- |
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDENeyO5hLPbLLQRZ0oafTYWs1ieo5Q+XgyZQs51Ju
jDGc8lKlWsg1/6yei2JewKMgcwG2Buu1eqU92Xn1SvMZLyt9GZURuBkyjcfVc/8GiU5QP1Of8B7CV0c
kfUpHWYJ17olTzT61Hgz10ioicBF6cjgQrLNcyn05xoaJHD2Vpf8Unxzi0YzA2e77yRqBo9jJVRaX2q
wUJuZrzb62x3zw8Knz6GGSZBn8xRKLaw1SKFpd1hwvL62GfqX5ZBAT1AYTZP1j8GcAoK8AFVn193SEU
vjSdUFa+RNWuJhkjBRfylJczIjTIFb5ls0jpbA3bMA9DE7lFKVQl6vVwFmiIVBI1 samplekey
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: HostProfile
metadata:
name: defaults
@ -79,14 +90,20 @@ spec:
fs_label: logs
# Platform (Operating System) settings
platform:
image: ubuntu_16.04_hwe
kernel_params: default
image: ubuntu_16.04
kernel: generic
kernel_params:
quiet: true
console: ttyS2
# Additional metadata to apply to a node
metadata:
# Base URL of the introspection service - may go in curtin data
introspection_url: http://172.16.1.10:9090
# Freeform tags to be applied to the host
tags:
- deployment=initial
owner_data:
foo: bar
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: HostProfile
metadata:
name: k8-node
@ -108,90 +125,33 @@ spec:
# settings of the host_profile
hardware_profile: HPGen9v3
# Network interfaces.
primary_network: mgmt
interfaces:
# Keyed on device_name
# pxe is a special marker indicating which device should be used for pxe boot
- device_name: pxe
# The network link attached to this
network_link: pxe
device_link: pxe
# Slaves will specify aliases from hwdefinition.yaml
slaves:
- prim_nic01
- prim_nic01
# Which networks will be configured on this interface
networks:
- pxe
- pxe
- device_name: bond0
network_link: gp
# If multiple slaves are specified, but no bonding config
# is applied to the link, design validation will fail
slaves:
- prim_nic01
- prim_nic02
- prim_nic01
- prim_nic02
# If multiple networks are specified, but no trunking
# config is applied to the link, design validation will fail
networks:
- mgmt
- private
- mgmt
- private
metadata:
# Explicit tag assignment
tags:
- 'test'
# MaaS supports arbitrary key/value pairs; their use is not yet defined
owner_data:
foo: bar
---
apiVersion: 'v1.0'
kind: HostProfile
metadata:
name: k8-node-public
region: sitename
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 2/3 attributes. Primarily CIs used for configuring server interfaces
spec:
host_profile: k8-node
interfaces:
- device_name: bond0
networks:
# This is additive, so adds a network to those defined in the host_profile
# inheritance chain
- public
---
apiVersion: 'v1.0'
kind: HardwareProfile
metadata:
name: HPGen9v3
region: sitename
date: 17-FEB-2017
author: Scott Hussey
spec:
# Vendor of the server chassis
vendor: HP
# Generation of the chassis model
generation: '8'
# Version of the chassis model within its generation - not version of the hardware definition
hw_version: '3'
# The certified version of the chassis BIOS
bios_version: '2.2.3'
# Mode of the default boot of hardware - bios, uefi
boot_mode: bios
# Protocol of boot of the hardware - pxe, usb, hdd
bootstrap_protocol: pxe
# Which interface to use for network booting within the OOB manager, not OS device
pxe_interface: 0
# Map hardware addresses to aliases/roles to allow a mix of hardware configs
# in a site to result in a consistent configuration
device_aliases:
- address: 0000:00:03.0
alias: prim_nic01
# type could identify expected hardware - used for hardware manifest validation
dev_type: '82540EM Gigabit Ethernet Controller'
bus_type: 'pci'
- address: 0000:00:04.0
alias: prim_nic02
dev_type: '82540EM Gigabit Ethernet Controller'
bus_type: 'pci'
- address: 2:0.0.0
alias: primary_boot
dev_type: 'VBOX HARDDISK'
bus_type: 'scsi'
...
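
Two comments in the k8-node profile spell out validation rules: multiple slaves on an interface fail design validation unless the link has a bonding config, and multiple networks fail unless the link is trunked. A hedged sketch of what those checks amount to, with dict shapes mirroring the YAML above rather than Drydock's real validator::

    # Sketch of the two validation rules called out in the comments:
    # multiple slaves require a bonding mode on the link, multiple
    # networks require trunking. Illustrative only.
    def validate_interface(iface, link):
        errors = []
        bonding = link.get('bonding', {}).get('mode', 'disabled')
        if len(iface.get('slaves', [])) > 1 and bonding == 'disabled':
            errors.append('%s: multiple slaves but link has no bonding config'
                          % iface['device_name'])
        trunking = link.get('trunking', {}).get('mode', 'disabled')
        if len(iface.get('networks', [])) > 1 and trunking == 'disabled':
            errors.append('%s: multiple networks but link is not trunked'
                          % iface['device_name'])
        return errors

    bond0 = {'device_name': 'bond0',
             'slaves': ['prim_nic01', 'prim_nic02'],
             'networks': ['mgmt', 'private']}
    gp_link = {'bonding': {'mode': '802.3ad'}, 'trunking': {'mode': '802.1q'}}
    assert validate_interface(bond0, gp_link) == []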

View File

@ -1,11 +1,30 @@
---
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: Region
metadata:
name: sitename
date: 17-FEB-2017
description: Sample site design
author: sh8121@att.com
spec:
tag_definitions:
- tag: test
definition_type: lshw_xpath
definition: "//node[@id=\"display\"]/'clock units=\"Hz\"' > 1000000000"
authorized_keys:
- |
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDENeyO5hLPbLLQRZ0oafTYWs1ieo5Q+XgyZQs51Ju
jDGc8lKlWsg1/6yei2JewKMgcwG2Buu1eqU92Xn1SvMZLyt9GZURuBkyjcfVc/8GiU5QP1Of8B7CV0c
kfUpHWYJ17olTzT61Hgz10ioicBF6cjgQrLNcyn05xoaJHD2Vpf8Unxzi0YzA2e77yRqBo9jJVRaX2q
wUJuZrzb62x3zw8Knz6GGSZBn8xRKLaw1SKFpd1hwvL62GfqX5ZBAT1AYTZP1j8GcAoK8AFVn193SEU
vjSdUFa+RNWuJhkjBRfylJczIjTIFb5ls0jpbA3bMA9DE7lFKVQl6vVwFmiIVBI1 samplekey
---
apiVersion: 'drydock/v1'
kind: NetworkLink
metadata:
name: oob
region: sitename
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 1 attributes. Primary key is 'name'. These settings will generally be things the switch and server have to agree on
spec:
@ -16,17 +35,18 @@ spec:
trunking:
mode: disabled
default_network: oob
allowed_networks:
- oob
---
# pxe is a bit of 'magic' indicating the link config used when PXE booting
# a node. All other links indicate network configs applied when the node
# is deployed.
apiVersion: 'v1.0'
apiVersion: 'drydock/v1'
kind: NetworkLink
metadata:
name: pxe
region: sitename
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 1 attributes. Primary key is 'name'. These settings will generally be things the switch and server have to agree on
spec:
@ -41,34 +61,6 @@ spec:
mode: disabled
# use name, will translate to VLAN ID
default_network: pxe
---
apiVersion: 'v1.0'
kind: NetworkLink
metadata:
name: gp
region: sitename
date: 17-FEB-2017
author: sh8121@att.com
description: Describe layer 1 attributes. These CIs will generally be things the switch and server have to agree on
# pxe is a bit of 'magic' indicating the link config used when PXE booting
# a node. All other links indicate network configs applied when the node
# is deployed.
spec:
# If this link is a bond of physical links, how is it configured
# 802.3ad
# active-backup
# balance-rr
# Can add support for others down the road
bonding:
mode: '802.3ad'
# For LACP (802.3ad) xmit hashing policy: layer2, layer2+3, layer3+4, encap3+4
hash: layer3+4
# 802.3ad specific options
peer_rate: slow
mtu: 9000
linkspeed: auto
# Is this link supporting multiple layer 2 networks?
trunking:
mode: '802.1q'
default_network: mgmt
allowed_networks:
- pxe
...
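
Besides the version bump, each NetworkLink in these samples gains an allowed_networks list naming the Networks permitted to ride the link. A sketch of the constraint that list implies, offered as an assumption since the actual enforcement lives in Drydock's validation code::

    # Sketch of the constraint allowed_networks implies: every network an
    # interface carries over a link must appear in that link's list.
    def check_allowed_networks(iface, link):
        allowed = set(link.get('allowed_networks', []))
        requested = set(iface.get('networks', []))
        disallowed = requested - allowed
        if disallowed:
            raise ValueError('Networks %s not allowed on link'
                             % sorted(disallowed))

    # Passes: both requested networks are allowed on the link.
    check_allowed_networks({'networks': ['mgmt', 'public']},
                           {'allowed_networks': ['mgmt', 'public']})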

View File

@ -1,40 +1,19 @@
---
apiVersion: 'v1.0'
kind: HardwareProfile
apiVersion: 'drydock/v1'
kind: Network
metadata:
name: HPGen8v3
name: oob
region: sitename
date: 17-FEB-2017
name: Sample hardware definition
author: Scott Hussey
author: sh8121@att.com
description: Describe layer 2/3 attributes. Primarily CIs used for configuring server interfaces
spec:
# Vendor of the server chassis
vendor: HP
# Generation of the chassis model
generation: '8'
# Version of the chassis model within its generation - not version of the hardware definition
hw_version: '3'
# The certified version of the chassis BIOS
bios_version: '2.2.3'
# Mode of the default boot of hardware - bios, uefi
boot_mode: bios
# Protocol of boot of the hardware - pxe, usb, hdd
bootstrap_protocol: pxe
# Which interface to use for network booting within the OOB manager, not OS device
pxe_interface: 0
# Map hardware addresses to aliases/roles to allow a mix of hardware configs
# in a site to result in a consistent configuration
device_aliases:
- address: 0000:00:03.0
alias: prim_nic01
# type could identify expected hardware - used for hardware manifest validation
dev_type: '82540EM Gigabit Ethernet Controller'
bus_type: 'pci'
- address: 0000:00:04.0
alias: prim_nic02
dev_type: '82540EM Gigabit Ethernet Controller'
bus_type: 'pci'
- address: 2:0.0.0
alias: primary_boot
dev_type: 'VBOX HARDDISK'
bus_type: 'scsi'
allocation: static
cidr: 172.16.100.0/24
ranges:
- type: static
start: 172.16.100.15
end: 172.16.100.254
dns:
domain: ilo.sitename.att.com
servers: 172.16.100.10
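
This Network hands out oob addresses statically: allocation: static with a single static range from .15 to .254. A toy allocator for such a range using only the stdlib ipaddress module (persistence and collision handling are omitted; the real assignment happens downstream)::

    # Toy allocator: return the next free address from a network's static
    # ranges, skipping anything already assigned. Illustrative only.
    import ipaddress

    def next_static_address(network, assigned):
        for r in network['ranges']:
            if r['type'] != 'static':
                continue
            addr = ipaddress.ip_address(r['start'])
            end = ipaddress.ip_address(r['end'])
            while addr <= end:
                if str(addr) not in assigned:
                    return str(addr)
                addr += 1
        return None

    oob = {'ranges': [{'type': 'static',
                       'start': '172.16.100.15',
                       'end': '172.16.100.254'}]}
    print(next_static_address(oob, {'172.16.100.15'}))  # 172.16.100.16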

21
tox.ini
View File

@ -2,9 +2,20 @@
envlist = py35
[testenv]
basepython=python3.5
deps=
-rrequirements-direct.txt
-rrequirements-test.txt
[testenv:yapf]
whitelist_externals=find
commands=
yapf -i -r --style=pep8 {toxinidir}/setup.py
yapf -i -r --style=pep8 {toxinidir}/drydock_provisioner
yapf -i -r --style=pep8 {toxinidir}/tests
find {toxinidir}/drydock_provisioner -name '__init__.py' -exec yapf -i --style=pep8 \{\} ;
[testenv:unit]
setenv=
PYTHONWARNINGS=all
commands=
@ -12,12 +23,16 @@ commands=
{posargs}
[testenv:genconfig]
basepython=python3.5
commands = oslo-config-generator --config-file=etc/drydock/drydock-config-generator.conf
[testenv:genpolicy]
basepython=python3.5
commands = oslopolicy-sample-generator --config-file etc/drydock/drydock-policy-generator.conf
[testenv:pep8]
commands = flake8 \
{posargs}
[flake8]
ignore=E302,H306
ignore=E302,H306,D101,D102,D103,D104
exclude= venv,.venv,.git,.idea,.tox,*.egg-info,*.eggs,bin,dist,./build/
max-line-length=119
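
With these additions, tox -e yapf rewrites setup.py, drydock_provisioner and tests in place (the trailing find invocation catches the package __init__.py files), tox -e pep8 runs flake8 with a 119-character line limit and the D1xx missing-docstring checks ignored, and tox -e genconfig / tox -e genpolicy regenerate the sample configuration and policy files with the oslo generators.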