Keep task stdout/stderr separate in result object
Combining stdout/stderr in the result can lead to problems when e.g. the stdout of a task is used as input for another task. This also differs from the normal Ansible behavior and can be surprising and hard to debug for users. The new behavior is configurable and off by default to retain backward compatibility.

Change-Id: Icaced970650913f9632a8db75a5970a38d3a6bc4
Co-Authored-By: James E. Blair <jim@acmegating.com>
This commit is contained in:
parent
ce37097e11
commit
3b011296e6
@@ -645,6 +645,21 @@ Here is an example of two job definitions:

       - name: playbooks/job-playbook.yaml
         semaphores: playbook-semaphore

   .. attr:: ansible-split-streams
      :default: False

      Keep stdout/stderr of command and shell tasks separate (the Ansible
      default behavior) instead of merging stdout and stderr.

      Since version 3, Zuul has combined the stdout and stderr streams
      in Ansible command tasks, but will soon switch to using the
      normal Ansible behavior. In an upcoming release of Zuul, this
      default will change to `True`, and in a later release, this
      option will be removed altogether.

      This option may be used in the interim to verify playbook
      compatibility and facilitate upgrading to the new behavior.

   .. attr:: ansible-version

      The ansible version to use for all playbooks of the job. This can be
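The behavioral difference this attribute controls can be sketched with plain `subprocess` calls (an illustrative sketch, not Zuul's actual module code): with combined streams, stderr is folded into stdout; with split streams, each is captured on its own.

```python
import subprocess

cmd = 'echo out; >&2 echo err'

# Historical Zuul behavior (ansible-split-streams: false): stderr is
# redirected into stdout, so the task result's stdout holds both lines.
combined = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)

# Standard Ansible behavior (ansible-split-streams: true): each stream
# is captured separately, so stdout can be reused as task input cleanly.
split = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
```

This is why merged streams break pipelines that feed one task's stdout into another: the error text contaminates the data.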
@@ -125,3 +125,9 @@ Version 14

:Prior Zuul version: 8.2.0
:Description: Adds the pre_fail attribute to builds.
              Affects schedulers.

Version 15
----------
:Prior Zuul version: 9.0.0
:Description: Adds ansible_split_streams to FrozenJob.
              Affects schedulers and executors.
@@ -18,6 +18,7 @@

        ZUUL_CONSOLE_PORT: 19887
        ZUUL_JOB_LOG_CONFIG: "{{ ansible_user_dir}}/logging.json"
        ZUUL_JOBDIR: "{{ ansible_user_dir}}"
        ZUUL_ANSIBLE_SPLIT_STREAMS: False
        PYTHONPATH: "{{ python_path }}"
    register: _success_output

@@ -59,6 +60,7 @@

      environment:
        ZUUL_JOB_LOG_CONFIG: "{{ ansible_user_dir}}/logging.json"
        ZUUL_JOBDIR: "{{ ansible_user_dir}}"
        ZUUL_ANSIBLE_SPLIT_STREAMS: False
        PYTHONPATH: "{{ python_path }}"
    register: _success_output

@@ -101,6 +103,7 @@

        ZUUL_CONSOLE_PORT: 19887
        ZUUL_JOB_LOG_CONFIG: "{{ ansible_user_dir}}/logging.json"
        ZUUL_JOBDIR: "{{ ansible_user_dir}}"
        ZUUL_ANSIBLE_SPLIT_STREAMS: False
        PYTHONPATH: "{{ python_path }}"

  - name: Save output
@@ -0,0 +1,17 @@

---
features:
  - |
    A new job attribute, `ansible-split-streams` has been added to
    instruct Zuul to keep stdout and stderr separate when running
    Ansible command tasks.
deprecations:
  - |
    Zuul is changing the behavior of its customized Ansible command
    module to match the standard Ansible behavior.

    The previous behavior and current default is to combine stdout and
    stderr streams. In a later Zuul release, the
    `ansible-split-streams` default value will change from `false` to
    `true` in order to match the standard Ansible behavior and keep
    the streams separate. A further Zuul release will remove the
    option altogether.
@@ -1,4 +1,6 @@

- hosts: all
  tasks:
    - shell: echo "Failure test {{ zuul.executor.src_root }}"
    - shell: |
        echo "Failure stdout test {{ zuul.executor.src_root }}"
        >&2 echo "Failure stderr test {{ zuul.executor.src_root }}"
    - shell: exit 1
@@ -3,7 +3,9 @@

    - name: handler
      shell: echo "This is a handler"
  tasks:
    - shell: echo "Standard output test {{ zuul.executor.src_root }}"
    - shell: |
        echo "Standard output test {{ zuul.executor.src_root }}"
        >&2 echo "Standard error test {{ zuul.executor.src_root }}"
      notify: handler
    - shell: echo "This should be skipped"
      when: false
@@ -25,11 +25,21 @@

    name: job-output
    run: playbooks/job-output.yaml

- job:
    parent: job-output
    name: job-output-split-streams
    ansible-split-streams: true

- job:
    name: job-output-failure
    run: playbooks/job-output.yaml
    post-run: playbooks/job-output-failure-post.yaml

- job:
    name: job-output-failure-split-streams
    parent: job-output-failure
    ansible-split-streams: true

- job:
    name: job-output-missing-role
    run: playbooks/job-output-missing-role.yaml

@@ -56,3 +66,15 @@

    jobs:
      - job-output-missing-role
      - job-output-missing-role-include

- project:
    name: org/project4
    check:
      jobs:
        - job-output-split-streams

- project:
    name: org/project5
    check:
      jobs:
        - job-output-failure-split-streams
@@ -0,0 +1 @@
test

@@ -0,0 +1 @@
test
@@ -8,3 +8,5 @@

      - org/project
      - org/project2
      - org/project3
      - org/project4
      - org/project5
@@ -20,6 +20,12 @@

      zuul_console:
        port: "{{ test_console_port }}"

    # This is checked in indexed JSON output so order is important
    - name: Output stream test
      shell: |
        echo "Standard output test {{ zuul.executor.src_root }}"
        >&2 echo "Standard error test {{ zuul.executor.src_root }}"

    - name: Create first file
      copy:
        content: "command test one\n"
@@ -13,6 +13,7 @@

# under the License.

import io
import json
import logging
import os
import re

@@ -43,7 +44,7 @@ class FunctionalZuulStreamMixIn:

        ansible_remote = os.environ.get('ZUUL_REMOTE_IPV4')
        self.assertIsNotNone(ansible_remote)

    def _run_job(self, job_name, create=True):
    def _run_job(self, job_name, create=True, split='false'):
        # Keep the jobdir around so we can inspect contents if an
        # assert fails. It will be cleaned up anyway as it is contained
        # in a tmp dir which gets cleaned up after the test.

@@ -58,6 +59,7 @@ class FunctionalZuulStreamMixIn:

                    name: {job_name}
                    run: playbooks/{job_name}.yaml
                    ansible-version: {version}
                    ansible-split-streams: {split}
                    vars:
                      test_console_port: {console_port}
                    roles:

@@ -76,6 +78,7 @@ class FunctionalZuulStreamMixIn:

                """.format(
                    job_name=job_name,
                    version=self.ansible_version,
                    split=split,
                    console_port=self.log_console_port))
        else:
            conf = textwrap.dedent(

@@ -100,6 +103,12 @@ class FunctionalZuulStreamMixIn:

        with open(path) as f:
            return f.read()

    def _get_job_json(self, build):
        path = os.path.join(self.jobdir_root, build.uuid,
                            'work', 'logs', 'job-output.json')
        with open(path) as f:
            return json.loads(f.read())

    def _assertLogLine(self, line, log, full_match=True):
        pattern = (r'^\d\d\d\d-\d\d-\d\d \d\d:\d\d\:\d\d\.\d\d\d\d\d\d \| %s%s'
                   % (line, '$' if full_match else ''))
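The test fixture above templates a per-test job config with the new `split` parameter. A standalone sketch of that templating step (the job name and values here are illustrative, not the full fixture):

```python
import textwrap

# Hypothetical stand-ins for the test fixture's parameters.
job_name = 'command'
split = 'true'

# Mirror how the mixin builds a job config with the new attribute.
conf = textwrap.dedent(
    """
    - job:
        name: {job_name}
        run: playbooks/{job_name}.yaml
        ansible-split-streams: {split}
    """.format(job_name=job_name, split=split))
```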
@@ -148,6 +157,123 @@ class FunctionalZuulStreamMixIn:

            self.assertNotIn('[WARNING]: Failure using method', console_output)

            text = self._get_job_output(build)
            data = self._get_job_json(build)

            token_stdout = "Standard output test {}".format(
                self.history[0].jobdir.src_root)
            token_stderr = "Standard error test {}".format(
                self.history[0].jobdir.src_root)
            result = data[0]['plays'][1]['tasks'][2]['hosts']['compute1']
            self.assertEqual("\n".join((token_stdout, token_stderr)),
                             result['stdout'])
            self.assertEqual("", result['stderr'])

            self.assertLogLine(
                r'RUN START: \[untrusted : review.example.com/org/project/'
                r'playbooks/command.yaml@master\]', text)
            self.assertLogLine(r'PLAY \[all\]', text)
            self.assertLogLine(
                r'Ansible version={}'.format(self.ansible_core_version), text)
            self.assertLogLine(r'TASK \[Show contents of first file\]', text)
            self.assertLogLine(r'controller \| command test one', text)
            self.assertLogLine(
                r'controller \| ok: Runtime: \d:\d\d:\d\d\.\d\d\d\d\d\d', text)
            self.assertLogLine(r'TASK \[Show contents of second file\]', text)
            self.assertLogLine(r'compute1 \| command test two', text)
            self.assertLogLine(r'controller \| command test two', text)
            self.assertLogLine(r'compute1 \| This is a rescue task', text)
            self.assertLogLine(r'controller \| This is a rescue task', text)
            self.assertLogLine(r'compute1 \| This is an always task', text)
            self.assertLogLine(r'controller \| This is an always task', text)
            self.assertLogLine(r'compute1 \| This is a handler', text)
            self.assertLogLine(r'controller \| This is a handler', text)
            self.assertLogLine(r'controller \| First free task', text)
            self.assertLogLine(r'controller \| Second free task', text)
            self.assertLogLine(r'controller \| This is a shell task after an '
                               'included role', text)
            self.assertLogLine(r'compute1 \| This is a shell task after an '
                               'included role', text)
            self.assertLogLine(r'controller \| This is a command task after '
                               'an included role', text)
            self.assertLogLine(r'compute1 \| This is a command task after an '
                               'included role', text)
            self.assertLogLine(r'controller \| This is a shell task with '
                               'delegate compute1', text)
            self.assertLogLine(r'controller \| This is a shell task with '
                               'delegate controller', text)
            self.assertLogLine(r'compute1 \| item_in_loop1', text)
            self.assertLogLine(r'compute1 \| ok: Item: item_in_loop1 '
                               r'Runtime: \d:\d\d:\d\d\.\d\d\d\d\d\d', text)
            self.assertLogLine(r'compute1 \| item_in_loop2', text)
            self.assertLogLine(r'compute1 \| ok: Item: item_in_loop2 '
                               r'Runtime: \d:\d\d:\d\d\.\d\d\d\d\d\d', text)
            self.assertLogLine(r'compute1 \| failed_in_loop1', text)
            self.assertLogLine(r'compute1 \| ok: Item: failed_in_loop1 '
                               r'Result: 1', text)
            self.assertLogLine(r'compute1 \| failed_in_loop2', text)
            self.assertLogLine(r'compute1 \| ok: Item: failed_in_loop2 '
                               r'Result: 1', text)
            self.assertLogLine(r'compute1 \| transitive-one', text)
            self.assertLogLine(r'compute1 \| transitive-two', text)
            self.assertLogLine(r'compute1 \| transitive-three', text)
            self.assertLogLine(r'compute1 \| transitive-four', text)
            self.assertLogLine(
                r'controller \| ok: Runtime: \d:\d\d:\d\d\.\d\d\d\d\d\d', text)
            self.assertLogLine('PLAY RECAP', text)
            self.assertLogLine(
                r'controller \| ok: \d+ changed: \d+ unreachable: 0 failed: 0 '
                'skipped: 0 rescued: 1 ignored: 0', text)
            self.assertLogLine(
                r'RUN END RESULT_NORMAL: \[untrusted : review.example.com/'
                r'org/project/playbooks/command.yaml@master]', text)
            time1, time2 = self._getLogTime(r'TASK \[Command Not Found\]',
                                            text)
            self.assertLess((time2 - time1) / timedelta(milliseconds=1),
                            9000)

            # This is from the debug: msg='{{ ansible_version }}'
            # testing raw variable output. To make it version
            # agnostic, match just the start of
            #   compute1 | ok: {'string': '2.9.27'...
            # NOTE(ianw) 2022-08-24 : I don't know why the callback
            # for debug: msg= doesn't put the hostname first like
            # other output. Undetermined if bug or feature.
            self.assertLogLineStartsWith(
                r"""\{'string': '\d.""", text)
            # ... handling loops is a different path, and that does
            self.assertLogLineStartsWith(
                r"""compute1 \| ok: \{'string': '\d.""", text)

    def test_command_split_streams(self):
        job = self._run_job('command', split='true')
        with self.jobLog(job):
            build = self.history[-1]
            self.assertEqual(build.result, 'SUCCESS')

            console_output = self.console_output.getvalue()
            # This should be generic enough to match any callback
            # plugin failures, which look something like
            #
            #  [WARNING]: Failure using method (v2_runner_on_ok) in \
            #    callback plugin
            #  (<ansible.plugins.callback.zuul_stream.CallbackModule object at'
            #  0x7f89f72a20b0>): 'dict' object has no attribute 'startswith'"
            #  Callback Exception:
            #  ...
            #
            self.assertNotIn('[WARNING]: Failure using method', console_output)

            text = self._get_job_output(build)
            data = self._get_job_json(build)

            token_stdout = "Standard output test {}".format(
                self.history[0].jobdir.src_root)
            token_stderr = "Standard error test {}".format(
                self.history[0].jobdir.src_root)
            result = data[0]['plays'][1]['tasks'][2]['hosts']['compute1']
            self.assertEqual(token_stdout, result['stdout'])
            self.assertEqual(token_stderr, result['stderr'])

            self.assertLogLine(
                r'RUN START: \[untrusted : review.example.com/org/project/'
@@ -7422,6 +7422,39 @@ class TestJobOutput(AnsibleZuulTestCase):

        with open(p) as f:
            return f.read()

    def test_job_output_split_streams(self):
        # Verify that command standard output appears in the job output,
        # and that failures in the final playbook get logged.

        # This currently only verifies we receive output from
        # localhost. Notably, it does not verify we receive output
        # via zuul_console streaming.
        self.executor_server.keep_jobdir = True
        A = self.fake_gerrit.addFakeChange('org/project4', 'master', 'A')
        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
        self.waitUntilSettled()
        self.assertHistory([
            dict(name='job-output-split-streams',
                 result='SUCCESS', changes='1,1'),
        ], ordered=False)

        token_stdout = "Standard output test {}".format(
            self.history[0].jobdir.src_root)
        token_stderr = "Standard error test {}".format(
            self.history[0].jobdir.src_root)

        j = json.loads(self._get_file(self.history[0],
                                      'work/logs/job-output.json'))
        result = j[0]['plays'][0]['tasks'][0]['hosts']['test_node']
        self.assertEqual(token_stdout, result['stdout'])
        self.assertEqual(token_stderr, result['stderr'])

        job_output = self._get_file(self.history[0],
                                    'work/logs/job-output.txt')
        self.log.info(job_output)
        self.assertIn(token_stdout, job_output)
        self.assertIn(token_stderr, job_output)

    def test_job_output(self):
        # Verify that command standard output appears in the job output,
        # and that failures in the final playbook get logged.

@@ -7438,12 +7471,18 @@ class TestJobOutput(AnsibleZuulTestCase):

            dict(name='job-output', result='SUCCESS', changes='1,1'),
        ], ordered=False)

        token = 'Standard output test %s' % (self.history[0].jobdir.src_root)
        token_stdout = "Standard output test {}".format(
            self.history[0].jobdir.src_root)
        token_stderr = "Standard error test {}".format(
            self.history[0].jobdir.src_root)

        j = json.loads(self._get_file(self.history[0],
                                      'work/logs/job-output.json'))
        self.assertEqual(token,
                         j[0]['plays'][0]['tasks'][0]
                         ['hosts']['test_node']['stdout'])

        result = j[0]['plays'][0]['tasks'][0]['hosts']['test_node']
        self.assertEqual("\n".join((token_stdout, token_stderr)),
                         result['stdout'])
        self.assertEqual("", result['stderr'])
        self.assertTrue(j[0]['plays'][0]['tasks'][1]
                        ['hosts']['test_node']['skipped'])
        self.assertTrue(j[0]['plays'][0]['tasks'][2]

@@ -7453,11 +7492,11 @@ class TestJobOutput(AnsibleZuulTestCase):

                         j[0]['plays'][0]['tasks'][3]
                         ['hosts']['test_node']['stdout'])

        self.log.info(self._get_file(self.history[0],
                                     'work/logs/job-output.txt'))
        self.assertIn(token,
                      self._get_file(self.history[0],
                                     'work/logs/job-output.txt'))
        job_output = self._get_file(self.history[0],
                                    'work/logs/job-output.txt')
        self.log.info(job_output)
        self.assertIn(token_stdout, job_output)
        self.assertIn(token_stderr, job_output)

    def test_job_output_missing_role(self):
        # Verify that ansible errors such as missing roles are part of the
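The legacy (combined-stream) assertions above expect the stderr line appended to stdout with a newline and an empty `stderr` field. A quick sketch of that expected result shape (the paths are illustrative; the real tests format in the job's `src_root`):

```python
# Illustrative tokens; the real tests interpolate the jobdir src_root.
token_stdout = "Standard output test /tmp/src"
token_stderr = "Standard error test /tmp/src"

# Combined-stream task result: both tokens land in 'stdout',
# and 'stderr' is empty.
merged_result = {
    'stdout': "\n".join((token_stdout, token_stderr)),
    'stderr': "",
}
```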
@@ -7480,6 +7519,46 @@ class TestJobOutput(AnsibleZuulTestCase):

        self.assertIn('the role \'not_existing\' was not found',
                      job_output)

    def test_job_output_failure_log_split_streams(self):
        logger = logging.getLogger('zuul.AnsibleJob')
        output = io.StringIO()
        logger.addHandler(logging.StreamHandler(output))

        # Verify that a failure in the last post playbook emits the contents
        # of the json output to the log
        self.executor_server.keep_jobdir = True
        A = self.fake_gerrit.addFakeChange('org/project5', 'master', 'A')
        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
        self.waitUntilSettled()
        self.assertHistory([
            dict(name='job-output-failure-split-streams',
                 result='POST_FAILURE', changes='1,1'),
        ], ordered=False)

        token_stdout = "Standard output test {}".format(
            self.history[0].jobdir.src_root)
        token_stderr = "Standard error test {}".format(
            self.history[0].jobdir.src_root)

        json_output = self._get_file(self.history[0],
                                     'work/logs/job-output.json')
        self.log.info(json_output)
        j = json.loads(json_output)
        result = j[0]['plays'][0]['tasks'][0]['hosts']['test_node']
        self.assertEqual(token_stdout, result['stdout'])
        self.assertEqual(token_stderr, result['stderr'])

        job_output = self._get_file(self.history[0],
                                    'work/logs/job-output.txt')
        self.log.info(job_output)
        self.assertIn(token_stdout, job_output)
        self.assertIn(token_stderr, job_output)

        log_output = output.getvalue()
        self.assertIn('Final playbook failed', log_output)
        self.assertIn('Failure stdout test', log_output)
        self.assertIn('Failure stderr test', log_output)

    def test_job_output_failure_log(self):
        logger = logging.getLogger('zuul.AnsibleJob')
        output = io.StringIO()
@@ -7496,22 +7575,30 @@ class TestJobOutput(AnsibleZuulTestCase):

                 result='POST_FAILURE', changes='1,1'),
        ], ordered=False)

        token = 'Standard output test %s' % (self.history[0].jobdir.src_root)
        j = json.loads(self._get_file(self.history[0],
                                      'work/logs/job-output.json'))
        self.assertEqual(token,
                         j[0]['plays'][0]['tasks'][0]
                         ['hosts']['test_node']['stdout'])
        token_stdout = "Standard output test {}".format(
            self.history[0].jobdir.src_root)
        token_stderr = "Standard error test {}".format(
            self.history[0].jobdir.src_root)

        self.log.info(self._get_file(self.history[0],
                                     'work/logs/job-output.json'))
        self.assertIn(token,
                      self._get_file(self.history[0],
                                     'work/logs/job-output.txt'))
        json_output = self._get_file(self.history[0],
                                     'work/logs/job-output.json')
        self.log.info(json_output)
        j = json.loads(json_output)
        result = j[0]['plays'][0]['tasks'][0]['hosts']['test_node']
        self.assertEqual("\n".join((token_stdout, token_stderr)),
                         result['stdout'])
        self.assertEqual("", result['stderr'])

        job_output = self._get_file(self.history[0],
                                    'work/logs/job-output.txt')
        self.log.info(job_output)
        self.assertIn(token_stdout, job_output)
        self.assertIn(token_stderr, job_output)

        log_output = output.getvalue()
        self.assertIn('Final playbook failed', log_output)
        self.assertIn('Failure test', log_output)
        self.assertIn('Failure stdout test', log_output)
        self.assertIn('Failure stderr test', log_output)

    def test_job_POST_FAILURE_reports_statsd(self):
        """Test that POST_FAILURES output job stats."""
@@ -348,6 +348,7 @@ class TestWeb(BaseTestWeb):

        self.assertEqual([
            {'abstract': False,
             'ansible_split_streams': None,
             'ansible_version': None,
             'attempts': 3,
             'branches': [],

@@ -449,6 +450,7 @@ class TestWeb(BaseTestWeb):

            {
                'name': 'project-test1',
                'abstract': False,
                'ansible_split_streams': None,
                'ansible_version': None,
                'attempts': 4,
                'branches': [],

@@ -501,6 +503,7 @@ class TestWeb(BaseTestWeb):

            }, {
                'name': 'project-test1',
                'abstract': False,
                'ansible_split_streams': None,
                'ansible_version': None,
                'attempts': 3,
                'branches': ['stable'],

@@ -557,6 +560,7 @@ class TestWeb(BaseTestWeb):

        self.assertEqual([
            {
                'abstract': False,
                'ansible_split_streams': None,
                'ansible_version': None,
                'attempts': 3,
                'branches': [],

@@ -682,6 +686,7 @@ class TestWeb(BaseTestWeb):

            'api/tenant/tenant-one/project/org/project1').json()

        jobs = [[{'abstract': False,
                  'ansible_split_streams': None,
                  'ansible_version': None,
                  'attempts': 3,
                  'branches': [],

@@ -721,6 +726,7 @@ class TestWeb(BaseTestWeb):

                  'voting': True,
                  'workspace_scheme': 'golang'}],
                [{'abstract': False,
                  'ansible_split_streams': None,
                  'ansible_version': None,
                  'attempts': 3,
                  'branches': [],

@@ -761,6 +767,7 @@ class TestWeb(BaseTestWeb):

                  'voting': True,
                  'workspace_scheme': 'golang'}],
                [{'abstract': False,
                  'ansible_split_streams': None,
                  'ansible_version': None,
                  'attempts': 3,
                  'branches': [],

@@ -801,6 +808,7 @@ class TestWeb(BaseTestWeb):

                  'voting': True,
                  'workspace_scheme': 'golang'}],
                [{'abstract': False,
                  'ansible_split_streams': None,
                  'ansible_version': None,
                  'attempts': 3,
                  'branches': [],

@@ -870,6 +878,7 @@ class TestWeb(BaseTestWeb):

            }, {'name': 'post',
                'jobs': [[
                    {'abstract': False,
                     'ansible_split_streams': None,
                     'ansible_version': None,
                     'attempts': 3,
                     'branches': [],

@@ -1150,6 +1159,7 @@ class TestWeb(BaseTestWeb):

        job_params = {
            'job': 'project-test1',
            'ansible_split_streams': None,
            'ansible_version': '6',
            'timeout': None,
            'post_timeout': None,

@@ -1250,6 +1260,7 @@ class TestWeb(BaseTestWeb):

                         "noop")

        job_params = {
            'ansible_split_streams': None,
            'ansible_version': '6',
            'branch': 'master',
            'extra_vars': {},
@@ -13,6 +13,8 @@

# You should have received a copy of the GNU General Public License
# along with this software. If not, see <http://www.gnu.org/licenses/>.

import os

from zuul.ansible import paths

from ansible.module_utils.parsing.convert_bool import boolean

@@ -51,4 +53,6 @@ class ActionModule(command.ActionModule):

        count = paths.ZUUL_LOG_ID_MAP.get(key, 0)
        self._task.args['zuul_log_id'] = "%s-%s-%s" % (
            self._task._uuid, count, log_host)
        self._task.args["zuul_ansible_split_streams"] = (
            os.environ["ZUUL_ANSIBLE_SPLIT_STREAMS"] == "True")
        return super(ActionModule, self).run(tmp, task_vars)
@@ -60,10 +60,10 @@ LOG_STREAM_PORT = int(os.environ.get("ZUUL_CONSOLE_PORT", 19885))

def zuul_filter_result(result):
    """Remove keys from shell/command output.

    Zuul streams stdout into the log above, so including stdout and stderr
    in the result dict that ansible displays in the logs is duplicate
    noise. We keep stdout in the result dict so that other callback plugins
    like ARA could also have access to it. But drop them here.
    Zuul streams stdout/stderr into the log above, so including stdout and
    stderr in the result dict that ansible displays in the logs is duplicate
    noise. We keep stdout/stderr in the result dict so that other callback
    plugins like ARA could also have access to it. But drop them here.

    Remove changed so that we don't show a bunch of "changed" titles
    on successful shell tasks, since that doesn't make sense from a Zuul

@@ -81,10 +81,16 @@ def zuul_filter_result(result):

    if not stdout_lines and stdout:
        stdout_lines = stdout.split('\n')

    for key in ('changed', 'cmd', 'zuul_log_id', 'invocation',
                'stderr', 'stderr_lines'):
    stderr = result.pop('stderr', '')
    stderr_lines = result.pop('stderr_lines', [])
    if not stderr_lines and stderr:
        stderr_lines = stderr.split('\n')

    for key in ('changed', 'cmd', 'zuul_log_id', 'invocation'):
        result.pop(key, None)
    return stdout_lines

    # Combine stdout / stderr
    return stdout_lines + stderr_lines


class CallbackModule(default.CallbackModule):
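The patched filtering logic can be exercised standalone. This is a simplified re-implementation of the new `zuul_filter_result` for illustration; it mirrors the diff above but is not the callback module itself:

```python
def filter_result(result):
    # Simplified sketch of the patched zuul_filter_result: pop the
    # stream fields, derive line lists from raw strings if needed,
    # drop noisy keys, and return the combined stdout + stderr lines
    # for streaming into the console log.
    stdout = result.pop('stdout', '')
    stdout_lines = result.pop('stdout_lines', [])
    if not stdout_lines and stdout:
        stdout_lines = stdout.split('\n')

    stderr = result.pop('stderr', '')
    stderr_lines = result.pop('stderr_lines', [])
    if not stderr_lines and stderr:
        stderr_lines = stderr.split('\n')

    for key in ('changed', 'cmd', 'zuul_log_id', 'invocation'):
        result.pop(key, None)

    # Combine stdout / stderr for display
    return stdout_lines + stderr_lines
```

Note that unlike the pre-patch version, `stderr`/`stderr_lines` are no longer blindly dropped; they are popped and appended to the displayed lines.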
@@ -233,6 +233,7 @@

import datetime
import glob
import os
import shlex
import select

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native, to_bytes, to_text

@@ -257,8 +258,9 @@ from ansible.module_utils.six.moves import shlex_quote

LOG_STREAM_FILE = '/tmp/console-{log_uuid}.log'
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# List to save stdout log lines in as we collect them
# Lists to save stdout/stderr log lines in as we collect them
_log_lines = []
_stderr_log_lines = []


class Console(object):
@@ -297,25 +299,34 @@ class Console(object):

        self.logfile.write(outln)


def follow(fd, log_uuid):
def follow(stdout, stderr, log_uuid):
    newline_warning = False
    with Console(log_uuid) as console:
        rselect = list(s for s in (stdout, stderr) if s is not None)
        while True:
            line = fd.readline()
            if not line:
            if not rselect:
                break
            _log_lines.append(line)
            if not line.endswith(b'\n'):
                line += b'\n'
                newline_warning = True
            console.addLine(line)
            rready, _, __ = select.select(rselect, [], [])
            for fd in rready:
                line = fd.readline()
                if not line:
                    rselect.remove(fd)
                    continue
                if fd == stdout:
                    _log_lines.append(line)
                else:
                    _stderr_log_lines.append(line)
                if not line.endswith(b'\n'):
                    line += b'\n'
                    newline_warning = True
                console.addLine(line)
        if newline_warning:
            console.addLine('[Zuul] No trailing newline\n')


# Taken from ansible/module_utils/basic.py ... forking the method for now
# so that we can dive in and figure out how to make appropriate hook points
def zuul_run_command(self, args, zuul_log_id, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
def zuul_run_command(self, args, zuul_log_id, zuul_ansible_split_streams, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
                     use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
                     expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True):
    '''
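The reworked `follow` uses `select` to drain two pipes without blocking on either one. A self-contained sketch of the same pattern (plain `subprocess`, no `Console` wrapper; Unix-only, since `select` on Windows does not support pipes):

```python
import select
import subprocess

proc = subprocess.Popen('echo out; >&2 echo err', shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout_lines, stderr_lines = [], []

# Wait on whichever pipe has data; drop a pipe from the set once it
# reaches EOF (readline returns b''), and stop when both are drained.
rselect = [proc.stdout, proc.stderr]
while rselect:
    rready, _, _ = select.select(rselect, [], [])
    for fd in rready:
        line = fd.readline()
        if not line:
            rselect.remove(fd)
            continue
        if fd is proc.stdout:
            stdout_lines.append(line)
        else:
            stderr_lines.append(line)
proc.wait()
```

This is why the function now takes both pipe objects instead of a single `fd`: with split streams there are two file descriptors to follow concurrently, and a blocking `readline` loop over one would deadlock if the process filled the other pipe's buffer.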
@@ -467,14 +478,15 @@ def zuul_run_command(self, args, zuul_log_id, check_rc=False, close_fds=True, ex

            if umask:
                os.umask(umask)

        # ZUUL: changed stderr to follow stdout
        # ZUUL: merge stdout/stderr depending on config
        stderr = subprocess.PIPE if zuul_ansible_split_streams else subprocess.STDOUT
        kwargs = dict(
            executable=executable,
            shell=shell,
            close_fds=close_fds,
            stdin=st_in,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            stderr=stderr,
            preexec_fn=preexec,
            env=env,
        )
@@ -508,7 +520,7 @@ def zuul_run_command(self, args, zuul_log_id, check_rc=False, close_fds=True, ex

        if self.no_log:
            t = None
        else:
            t = threading.Thread(target=follow, args=(cmd.stdout, zuul_log_id))
            t = threading.Thread(target=follow, args=(cmd.stdout, cmd.stderr, zuul_log_id))
        t.daemon = True
        t.start()

@@ -543,9 +555,10 @@ def zuul_run_command(self, args, zuul_log_id, check_rc=False, close_fds=True, ex

            # ZUUL: stdout and stderr are in the console log file
            # ZUUL: return the saved log lines so we can ship them back
            stdout = b('').join(_log_lines)
            stderr = b('').join(_stderr_log_lines)
        else:
            stdout = b('')
            stderr = b('')

    except (OSError, IOError) as e:
        self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))

@@ -637,6 +650,7 @@ def main():

            stdin_add_newline=dict(type='bool', default=True),
            strip_empty_ends=dict(type='bool', default=True),
            zuul_log_id=dict(type='str'),
            zuul_ansible_split_streams=dict(type='bool'),
        ),
        supports_check_mode=True,
    )

@@ -652,6 +666,7 @@ def main():

    stdin_add_newline = module.params['stdin_add_newline']
    strip = module.params['strip_empty_ends']
    zuul_log_id = module.params['zuul_log_id']
    zuul_ansible_split_streams = module.params["zuul_ansible_split_streams"]

    # we promissed these in 'always' ( _lines get autoaded on action plugin)
    r = {'changed': False, 'stdout': '', 'stderr': '', 'rc': None, 'cmd': None, 'start': None, 'end': None, 'delta': None, 'msg': ''}

@@ -722,7 +737,7 @@ def main():

    # actually executes command (or not ...)
    if not module.check_mode:
        r['start'] = datetime.datetime.now()
        r['rc'], r['stdout'], r['stderr'] = zuul_run_command(module, args, zuul_log_id, executable=executable, use_unsafe_shell=shell, encoding=None,
        r['rc'], r['stdout'], r['stderr'] = zuul_run_command(module, args, zuul_log_id, zuul_ansible_split_streams, executable=executable, use_unsafe_shell=shell, encoding=None,
                                                             data=stdin, binary_data=(not stdin_add_newline))
        r['end'] = datetime.datetime.now()
    else:
@@ -818,6 +818,7 @@ class JobParser(object):

        'post-run': playbook_def,
        'run': playbook_def,
        'cleanup-run': playbook_def,
        'ansible-split-streams': bool,
        'ansible-version': vs.Any(str, float, int),
        '_source_context': model.SourceContext,
        '_start_mark': model.ZuulMark,

@@ -957,6 +958,8 @@ class JobParser(object):

                raise Exception("Once set, the post-review attribute "
                                "may not be unset")

        job.ansible_split_streams = conf.get('ansible-split-streams')

        # Configure and validate ansible version
        if 'ansible-version' in conf:
            # The ansible-version can be treated by yaml as a float or
|
@@ -982,6 +982,9 @@ class AnsibleJob(object):

        self.zuul_event_id = self.arguments["zuul_event_id"]
        # Record ansible version being used for the cleanup phase
        self.ansible_version = self.job.ansible_version
        self.ansible_split_streams = self.job.ansible_split_streams
        if self.ansible_split_streams is None:
            self.ansible_split_streams = False
        # TODO(corvus): Remove default setting after 4.3.0; this is to handle
        # scheduler/executor version skew.
        self.scheme = self.job.workspace_scheme or zuul.model.SCHEME_GOLANG

@@ -2746,6 +2749,8 @@ class AnsibleJob(object):

        env_copy['ZUUL_CONSOLE_PORT'] = str(
            self.executor_server.log_console_port)
        env_copy['TMP'] = self.jobdir.local_tmp
        env_copy['ZUUL_ANSIBLE_SPLIT_STREAMS'] = str(
            self.ansible_split_streams)
        pythonpath = env_copy.get('PYTHONPATH')
        if pythonpath:
            pythonpath = [pythonpath]
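The executor hands the flag to the Ansible process through an environment variable, and environments only carry strings, which is why the action plugin compares against the literal `"True"`. A sketch of that round-trip (the variable name mirrors the patch; the helper functions are illustrative, not Zuul API):

```python
def to_env(split_streams):
    # Executor side: None (attribute unset on the job) falls back to
    # False, then the bool is stringified as "True"/"False" for the
    # child process environment.
    if split_streams is None:
        split_streams = False
    return {'ZUUL_ANSIBLE_SPLIT_STREAMS': str(split_streams)}


def from_env(env):
    # Action-plugin side: only the exact string "True" enables the
    # split-stream behavior.
    return env['ZUUL_ANSIBLE_SPLIT_STREAMS'] == 'True'
```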
@@ -2292,6 +2292,7 @@ class FrozenJob(zkobject.ZKObject):

    MAX_DATA_LEN = 10 * 1024

    attributes = ('ansible_version',
                  'ansible_split_streams',
                  'dependencies',
                  'inheritance_path',
                  'name',

@@ -2402,7 +2403,7 @@ class FrozenJob(zkobject.ZKObject):

    def serialize(self, context):
        # Ensure that any special handling in this method is matched
        # in Job.freezeJob so that FrozenJobs are identical regardless
        # of whether they have been desiraliazed.
        # of whether they have been deserialized.
        data = {}
        for k in self.attributes:
            # TODO: Backwards compat handling, remove after 5.0

@@ -2456,6 +2457,10 @@ class FrozenJob(zkobject.ZKObject):

            data['nodeset_index'] = 0
            del data['nodeset']

        # MODEL_API < 15
        if 'ansible_split_streams' not in data:
            data['ansible_split_streams'] = None

        if hasattr(self, 'nodeset_alternatives'):
            alts = self.nodeset_alternatives
        else:

@@ -2752,6 +2757,7 @@ class Job(ConfigObject):

            d['nodeset'] = alts[0].toDict()
        elif len(alts) > 1:
            d['nodeset_alternatives'] = [x.toDict() for x in alts]
        d['ansible_split_streams'] = self.ansible_split_streams
        if self.ansible_version:
            d['ansible_version'] = self.ansible_version
        else:

@@ -2804,6 +2810,7 @@ class Job(ConfigObject):

            post_run=(),
            cleanup_run=(),
            run=(),
            ansible_split_streams=None,
            ansible_version=None,
            semaphores=(),
            attempts=3,
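Deserialization tolerates FrozenJob data written by older schedulers (MODEL_API < 15) by defaulting the missing key, which is what keeps the schedulers and executors compatible during a rolling upgrade. A minimal sketch of that guard (the function name is illustrative; the real code lives inside FrozenJob):

```python
def upgrade_frozen_job_data(data):
    # Sketch: FrozenJob data written before MODEL_API 15 lacks the
    # key; default it to None ("unset"), which the executor later
    # treats as False (combined streams).
    if 'ansible_split_streams' not in data:
        data['ansible_split_streams'] = None
    return data
```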
@@ -1689,6 +1689,7 @@ class ZuulWebAPI(object):

        params['override_branch'] = job.override_branch
        params['override_checkout'] = job.override_checkout
        params['ansible_version'] = job.ansible_version
        params['ansible_split_streams'] = job.ansible_split_streams
        params['workspace_scheme'] = job.workspace_scheme
        if job.name != 'noop':
            params['playbooks'] = job.run