Pytest error output

I am relatively new to pytest hooks and plugins, and I am unable to figure out how to get pytest to give me a test execution summary with the reason for each failure.

Consider the code:

class Foo:
    def __init__(self, val):
        self.val = val

def test_compare12():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2, "F2 does not match F1"

def test_compare34():
    f3 = Foo(3)
    f4 = Foo(4)
    assert f3 == f4, "F4 does not match F3"

When I run pytest on this file with the -v option, it gives me the following result on the console:

========================= test session starts=================================
platform darwin -- Python 2.7.5 -- py-1.4.26 -- pytest-2.7.0 --    /Users/nehau/src/QA/bin/python
rootdir: /Users/nehau/src/QA/test, inifile: 
plugins: capturelog
collected 2 items 

test_foocompare.py::test_compare12 FAILED
test_foocompare.py::test_compare34 FAILED

================================ FAILURES ===============================
_______________________________ test_compare12 _________________________

def test_compare12():
    f1 = Foo(1)
    f2 = Foo(2)
>       assert f1 == f2, "F2 does not match F1"
E       AssertionError: F2 does not match F1
E       assert <test.test_foocompare.Foo instance at 0x107640368> == <test.test_foocompare.Foo instance at 0x107640488>

test_foocompare.py:11: AssertionError
_____________________________ test_compare34______________________________

def test_compare34():
    f3 = Foo(3)
    f4 = Foo(4)
>       assert f3 == f4, "F4 does not match F3"
E       AssertionError: F4 does not match F3
E       assert <test.test_foocompare.Foo instance at 0x107640248> == <test.test_foocompare.Foo instance at 0x10761fe60>

test_foocompare.py:16: AssertionError

=============================== 2 failed in 0.01 seconds ==========================

I am running close to 2000 test cases, so it would be really helpful if I could have pytest display output in the following format:

::
test_foocompare.py::test_compare12 FAILED AssertionError:F2 does not match F1
test_foocompare.py::test_compare34 FAILED AssertionError:F4 does not match F3
::

I have looked at the pytest_runtest_makereport hook, but I can’t seem to get it working. Does anyone have any other ideas?
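For reference, here is a minimal conftest.py sketch of the kind of hook I have been experimenting with (written against a recent pytest; the hook, the TestReport attributes and the "terminalreporter" plugin name are standard, but the one-line format is only my approximation of the output above):

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield                      # let pytest build the TestReport first
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        # keep only the last line of the failure text, e.g.
        # "E       AssertionError: F2 does not match F1"
        reason = report.longreprtext.strip().splitlines()[-1]
        terminal = item.config.pluginmanager.get_plugin("terminalreporter")
        terminal.write_line("%s FAILED %s" % (report.nodeid, reason))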

Thanks

Starting from version 3.1, pytest now automatically catches warnings during test execution
and displays them at the end of the session:

# content of test_show_warnings.py
import warnings


def api_v1():
    warnings.warn(UserWarning("api v1, should use functions from v2"))
    return 1


def test_one():
    assert api_v1() == 1

Running pytest now produces this output:

$ pytest test_show_warnings.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 1 item

test_show_warnings.py .                                              [100%]

============================= warnings summary =============================
test_show_warnings.py::test_one
  /home/sweet/project/test_show_warnings.py:5: UserWarning: api v1, should use functions from v2
    warnings.warn(UserWarning("api v1, should use functions from v2"))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================= 1 passed, 1 warning in 0.12s =======================

Controlling warnings

Similar to Python’s warning filter and -W option flag, pytest provides
its own -W flag to control which warnings are ignored, displayed, or turned into
errors. See the warning filter documentation for more
advanced use-cases.

This code sample shows how to treat any UserWarning category class of warning
as an error:

$ pytest -q test_show_warnings.py -W error::UserWarning
F                                                                    [100%]
================================= FAILURES =================================
_________________________________ test_one _________________________________

    def test_one():
>       assert api_v1() == 1

test_show_warnings.py:10:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def api_v1():
>       warnings.warn(UserWarning("api v1, should use functions from v2"))
E       UserWarning: api v1, should use functions from v2

test_show_warnings.py:5: UserWarning
========================= short test summary info ==========================
FAILED test_show_warnings.py::test_one - UserWarning: api v1, should use ...
1 failed in 0.12s

The same option can be set in the pytest.ini or pyproject.toml file using the
filterwarnings ini option. For example, the configuration below will ignore all
user warnings and specific deprecation warnings matching a regex, but will transform
all other warnings into errors.

# pytest.ini
[pytest]
filterwarnings =
    error
    ignore::UserWarning
    ignore:function ham\(\) is deprecated:DeprecationWarning
# pyproject.toml
[tool.pytest.ini_options]
filterwarnings = [
    "error",
    "ignore::UserWarning",
    # note the use of single quote below to denote "raw" strings in TOML
    'ignore:function ham\(\) is deprecated:DeprecationWarning',
]

When a warning matches more than one option in the list, the action for the last matching option
is performed.

Note

The -W flag and the filterwarnings ini option use warning filters that are
similar in structure, but each configuration option interprets its filter
differently. For example, message in filterwarnings is a string containing a
regular expression that the start of the warning message must match,
case-insensitively, while message in -W is a literal string that the start of
the warning message must contain (case-insensitively), ignoring any whitespace at
the start or end of message. Consult the warning filter documentation for more
details.
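As a rough illustration of the difference (reusing the ham() example above; these two filters are intended to match the same warning and are not taken verbatim from the docs):

# filterwarnings ini option: the message is a case-insensitive regular expression
# matched against the start of the warning text, so the parentheses are escaped
[pytest]
filterwarnings =
    ignore:function ham\(\) is deprecated:DeprecationWarning

# -W flag: the message is a literal, case-insensitive prefix, so no escaping is needed
$ pytest -W "ignore:function ham() is deprecated:DeprecationWarning"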

@pytest.mark.filterwarnings

You can use the @pytest.mark.filterwarnings mark to add warning filters to specific test items,
allowing you to have finer control of which warnings should be captured at test, class or
even module level:

import warnings

import pytest


def api_v1():
    warnings.warn(UserWarning("api v1, should use functions from v2"))
    return 1


@pytest.mark.filterwarnings("ignore:api v1")
def test_one():
    assert api_v1() == 1

Filters applied using a mark take precedence over filters passed on the command line or configured
by the filterwarnings ini option.

You may apply a filter to all tests of a class by using the filterwarnings mark as a class
decorator or to all tests in a module by setting the pytestmark variable:

# turns all warnings into errors for this module
pytestmark = pytest.mark.filterwarnings("error")
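The class-level form might look like this (the class and the warning below are made up purely for illustration):

import warnings

import pytest


# ignore DeprecationWarning for every test in this class
@pytest.mark.filterwarnings("ignore::DeprecationWarning")
class TestLegacyApi:
    def test_old_call(self):
        warnings.warn("old call", DeprecationWarning)
        assert True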

Credits go to Florian Schulze for the reference implementation in the pytest-warnings
plugin.

Disabling warnings summary

Although not recommended, you can use the --disable-warnings command-line option to suppress the
warning summary entirely from the test run output.

Disabling warning capture entirely

This plugin is enabled by default but can be disabled entirely in your pytest.ini file with:

[pytest]
addopts = -p no:warnings

Or pass -p no:warnings on the command line. This might be useful if your test suite handles warnings
using an external system.

DeprecationWarning and PendingDeprecationWarning

By default pytest will display DeprecationWarning and PendingDeprecationWarning warnings from
user code and third-party libraries, as recommended by PEP 565.
This helps users keep their code modern and avoid breakages when deprecated warnings are effectively removed.

However, in the specific case where users capture any type of warnings in their test, either with
pytest.warns(), pytest.deprecated_call() or using the recwarn fixture,
no warning will be displayed at all.

Sometimes it is useful to hide some specific deprecation warnings that happen in code that you have no control over
(such as third-party libraries), in which case you might use the warning filters options (ini or marks) to ignore
those warnings.

For example:

[pytest]
filterwarnings =
    ignore:.*U.*mode is deprecated:DeprecationWarning

This will ignore all warnings of type DeprecationWarning where the start of the message matches
the regular expression ".*U.*mode is deprecated".

See @pytest.mark.filterwarnings and
Controlling warnings for more examples.

Note

If warnings are configured at the interpreter level, using
the PYTHONWARNINGS environment variable or the
-W command-line option, pytest will not configure any filters by default.

Also pytest doesn’t follow PEP 506 suggestion of resetting all warning filters because
it might break test suites that configure warning filters themselves
by calling warnings.simplefilter() (see issue #2430 for an example of that).
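For example (a sketch; either invocation counts as configuring warnings at the interpreter level, so pytest adds no filters of its own):

$ PYTHONWARNINGS="ignore::DeprecationWarning" pytest
$ python -W ignore::DeprecationWarning -m pytest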

Ensuring code triggers a deprecation warning

You can also use pytest.deprecated_call() for checking
that a certain function call triggers a DeprecationWarning or
PendingDeprecationWarning:

import pytest


def test_myfunction_deprecated():
    with pytest.deprecated_call():
        myfunction(17)

This test will fail if myfunction does not issue a deprecation warning
when called with a 17 argument.
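For instance, a hypothetical myfunction that would make this test pass could look like:

import warnings


def myfunction(value):
    # hypothetical deprecated function, shown only to make the example concrete
    warnings.warn("myfunction is deprecated, use mynewfunction instead", DeprecationWarning)
    return value * 2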

Asserting warnings with the warns function

You can check that code raises a particular warning using pytest.warns(),
which works in a similar manner to raises (except that
raises does not capture all exceptions, only the
expected_exception):

import warnings

import pytest


def test_warning():
    with pytest.warns(UserWarning):
        warnings.warn("my warning", UserWarning)

The test will fail if the warning in question is not raised. Use the keyword
argument match to assert that the warning matches a text or regex.
To match a literal string that may contain regular expression metacharacters like ( or ., the pattern can
first be escaped with re.escape.

Some examples:

>>> with warns(UserWarning, match="must be 0 or None"):
...     warnings.warn("value must be 0 or None", UserWarning)
...

>>> with warns(UserWarning, match=r"must be \d+$"):
...     warnings.warn("value must be 42", UserWarning)
...

>>> with warns(UserWarning, match=r"must be \d+$"):
...     warnings.warn("this is not here", UserWarning)
...
Traceback (most recent call last):
  ...
Failed: DID NOT WARN. No warnings of type ...UserWarning... were emitted...

>>> with warns(UserWarning, match=re.escape("issue with foo() func")):
...     warnings.warn("issue with foo() func")
...

You can also call pytest.warns() on a function or code string:

pytest.warns(expected_warning, func, *args, **kwargs)
pytest.warns(expected_warning, "func(*args, **kwargs)")

The function also returns a list of all raised warnings (as
warnings.WarningMessage objects), which you can query for
additional information:

with pytest.warns(RuntimeWarning) as record:
    warnings.warn("another warning", RuntimeWarning)

# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert record[0].message.args[0] == "another warning"

Alternatively, you can examine raised warnings in detail using the
recwarn fixture (see below).

The recwarn fixture automatically ensures to reset the warnings
filter at the end of the test, so no global state is leaked.

Recording warnings

You can record raised warnings either using pytest.warns() or with
the recwarn fixture.

To record with pytest.warns() without asserting anything about the warnings,
pass no arguments as the expected warning type and it will default to a generic Warning:

with pytest.warns() as record:
    warnings.warn("user", UserWarning)
    warnings.warn("runtime", RuntimeWarning)

assert len(record) == 2
assert str(record[0].message) == "user"
assert str(record[1].message) == "runtime"

The recwarn fixture will record warnings for the whole function:

import warnings


def test_hello(recwarn):
    warnings.warn("hello", UserWarning)
    assert len(recwarn) == 1
    w = recwarn.pop(UserWarning)
    assert issubclass(w.category, UserWarning)
    assert str(w.message) == "hello"
    assert w.filename
    assert w.lineno

Both recwarn and pytest.warns() return the same interface for recorded
warnings: a WarningsRecorder instance. To view the recorded warnings, you can
iterate over this instance, call len on it to get the number of recorded
warnings, or index into it to get a particular recorded warning.

Full API: WarningsRecorder.

Additional use cases of warnings in tests

Here are some use cases involving warnings that often come up in tests, and suggestions on how to deal with them:

  • To ensure that at least one of the indicated warnings is issued, use:

def test_warning():
    with pytest.warns((RuntimeWarning, UserWarning)):
        ...
  • To ensure that only certain warnings are issued, use:

def test_warning(recwarn):
    ...
    assert len(recwarn) == 1
    user_warning = recwarn.pop(UserWarning)
    assert issubclass(user_warning.category, UserWarning)
  • To ensure that no warnings are emitted, use:

def test_warning():
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        ...
  • To suppress warnings, use:

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    ...

Custom failure messages

Recording warnings provides an opportunity to produce custom test
failure messages for when no warnings are issued or other conditions
are met.

def test():
    with pytest.warns(Warning) as record:
        f()
        if not record:
            pytest.fail("Expected a warning!")

If no warnings are issued when calling f, then not record will
evaluate to True. You can then call pytest.fail() with a
custom error message.

Internal pytest warnings

pytest may generate its own warnings in some situations, such as improper usage or deprecated features.

For example, pytest will emit a warning if it encounters a class that matches python_classes but also
defines an __init__ constructor, as this prevents the class from being instantiated:

# content of test_pytest_warnings.py
class Test:
    def __init__(self):
        pass

    def test_foo(self):
        assert 1 == 1
$ pytest test_pytest_warnings.py -q

============================= warnings summary =============================
test_pytest_warnings.py:1
  /home/sweet/project/test_pytest_warnings.py:1: PytestCollectionWarning: cannot collect test class 'Test' because it has a __init__ constructor (from: test_pytest_warnings.py)
    class Test:

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
1 warning in 0.12s

These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.
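For instance, a sketch of silencing just the warning shown above with the ini option (PytestCollectionWarning is the category printed in the summary):

[pytest]
filterwarnings =
    ignore::pytest.PytestCollectionWarning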

Please read our Backwards Compatibility Policy to learn how we proceed about deprecating and eventually removing
features.

The full list of warnings is listed in the reference documentation.

Resource Warnings

Additional information of the source of a ResourceWarning can be obtained when captured by pytest if
tracemalloc module is enabled.

One convenient way to enable tracemalloc when running tests is to set the PYTHONTRACEMALLOC environment variable to a large
enough number of frames (say 20, but that number is application dependent).
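For example (POSIX shell syntax; 20 frames is just the figure suggested above):

$ PYTHONTRACEMALLOC=20 pytest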

For more information, consult the Python Development Mode
section in the Python documentation.


Pytest is able to provide nice traceback errors for the failed tests, but it does this only after all the tests have been executed, and I am interested in displaying the errors progressively.

I know that one workaround would be to make it fail fast at the first error, but I do not want this; I want it to continue.

  • python
  • pytest

asked Jan 8, 2015 at 12:47 by sorin

2 Answers

Check out pytest-instafail:

pytest-instafail is a plugin for py.test that shows failures and errors instantly instead of waiting until the end of test session.
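A minimal usage sketch (--instafail is the option the plugin adds, per its README):

$ pip install pytest-instafail
$ pytest --instafail test_foocompare.py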

answered Jan 8, 2015 at 14:15 by Bruno Oliveira

  • Alternatively, pytest-sugar also behaves like this

    Feb 11, 2015 at 13:28

Use the f ("failed") report flag:

py.test -r f

pytest will then report just the names of the failing tests, the line numbers where the failures occurred, and the type of error that caused each failure.
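For example (a sketch; --tb=no is optional and simply suppresses the long tracebacks so that only the one-line summary remains):

$ pytest -r f --tb=no test_foocompare.py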

answered Jan 8, 2015 at 14:27 by hariK

  • This still waits until all tests are complete before returning anything, so it doesn’t answer the question.

    Jan 3, 2019 at 17:40

One of the advantages of Pytest over the unittest module is that we don’t need to use different
assert methods on different data structures. Pytest, by way of magic (also known as introspection),
can infer the actual value, the expected value, and the operation used in a plain old assert statement, and can
provide a rather nice error message.

Let’s see a few of those error messages:

In these examples I’ll keep both the code under test and the testing function in the same file. You’ve already seen how it would look normally if we imported the functions under test from another module; if not, check out the getting started with pytest article.

Also, in order to make the results clear, I’ve removed the summary of the test runs and kept only the actual
error reporting.

Comparing numbers for equality in Pytest

Probably the most basic thing to test is whether a function given some input returns an expected number.

examples/python/pt3/test_number_equal.py

def double(n):
    #return 2*n
    return 2+n

def test_string_equal():
    assert double(2) == 4
    assert double(21) == 42

In the above function double someone has mistakenly used + instead of *. The result of the test looks like this:

$ pytest test_number_equal.py

    def test_string_equal():
        assert double(2) == 4
>       assert double(21) == 42
E       assert 23 == 42
E        +  where 23 = double(21)

The line starting with the > sign indicates the assert line that failed. The lines starting with E are the details.

Compare numbers relatively

In certain cases we cannot test for equality. For example if we would like to test if some process finishes within a given time, or whether a timeout is triggered at the right time. In such cases we need to compare if a number is less-than or greater-than some other number.

examples/python/pt3/test_number_less_than.py

def get_number():
    return 23

def test_string_equal():
    assert get_number() < 0 

Running the test will provide the following error message:

$ pytest test_number_less_than.py

    def test_string_equal():
>       assert get_number() < 0
E       assert 23 < 0
E        +  where 23 = get_number()

The error report looks quite similar to what we had above, but in this case too it is clear which comparison operation failed.

Comparing strings

Similar to numbers we might want to know if a string received from some function is the same as we expect it to be.

examples/python/pt3/test_string_equal.py

def get_string():
    return "abc"

def test_string_equal():
    assert get_string() == "abd"

The result looks familiar:

$ pytest test_string_equal.py

    def test_string_equal():
>       assert get_string() == "abd"
E       AssertionError: assert 'abc' == 'abd'
E         - abc
E         + abd

For such short strings, seeing both the expected string and the actual string is fine.
We can look at the strings and compare them character by character to see the actual difference.

Compare long strings

If the strings are much longer, however, it would be really hard for us to pinpoint the specific location of the character (or characters) that differ. Luckily, the authors of Pytest have thought about this problem as well:

examples/python/pt3/test_long_strings.py

import string

def get_string(s):
    return string.printable + s + string.printable

def test_long_strings():
    assert get_string('a') == get_string('b')

string.printable is a string containing all the printable ASCII characters:
0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c

Our brilliant get_string function will return it twice with an additional character between them.
We use this to create two nasty and long strings that differ by a single character.

The output looks like this:

$ pytest test_long_strings.py

    def test_long_strings():
>       assert get_string('a') == get_string('b')
E       AssertionError: assert '0123456789ab...t\n\r\x0b\x0c' == '0123456789abc...t\n\r\x0b\x0c'
E         Skipping 90 identical leading characters in diff, use -v to show
E         Skipping 91 identical trailing characters in diff, use -v to show
E           {|}~
E
E         - a012345678
E         ? ^
E         + b012345678
E         ? ^

I think this explains quite nicely where the two strings differ, and if you really, really want to see the
whole strings you can use the -v flag.

Is string in longer string

If we need to check whether a string is part of a larger string, we can use the regular in operator.

examples/python/pt3/test_substring.py

import string

def get_string():
    return string.printable * 30

def test_long_strings():
    assert 'hello' in get_string()

In case of failure the result will include only the beginning and the end of the "long string".
This can be very useful if you need to test whether a certain string appears in an HTML page.

examples/python/pt3/test_substring.txt

    def test_long_strings():
>       assert 'hello' in get_string()
E       assert 'hello' in '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c012345...x0b\x0c0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c'
E        +  where '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c012345...x0b\x0c0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c' = get_string()

Testing any expression

Instead of calling a function, we might have an expression on one side of the comparison. (Actually, I am not sure how often this happens in the real world;
maybe we only see these in examples of how pytest works.)

examples/python/pt3/test_expression_equal.py

def test_expression_equal():
    a = 3
    assert a % 2 == 0

The test result:

$ pytest test_expression_equal.py

    def test_expression_equal():
        a = 3
>       assert a % 2 == 0
E       assert (3 % 2) == 0

Is element in a list?

Besides comparing individual values, we might also want to compare more complex data. First, let’s see what happens when our test must ensure that a value can be found in a list.

examples/python/pt3/test_in_list.py

def get_list():
    return ["monkey", "cat"]

def test_in_list():
    assert "dog" in get_list()

We can use the in operator of Python. The result will look like this:

$ pytest test_in_list.py

    def test_in_list():
>       assert "dog" in get_list()
E       AssertionError: assert 'dog' in ['monkey', 'cat']
E        +  where ['monkey', 'cat'] = get_list()

Pytest will conveniently show us the list that did not contain the expected value.

Compare lists in Pytest

A more interesting case might be testing whether the returned list is the same as the expected list. Using the == operator can tell us if the two lists are equal or not, but if we need to understand what went wrong, we’d better know where the lists differ,
or at least where they start to differ.

examples/python/pt3/test_lists.py

import string
import re

def get_list(s):
    return list(string.printable + s + string.printable)

def test_long_lists():
    assert get_list('a') == get_list('b')

The result:

$ pytest test_lists.py

    def test_long_lists():
>       assert get_list('a') == get_list('b')
E       AssertionError: assert ['0', '1', '2...'4', '5', ...] == ['0', '1', '2'...'4', '5', ...]
E         At index 100 diff: 'a' != 'b'
E         Use -v to get the full diff

We could further explore the output for the cases when multiple elements differ and when one list is a sublist of the other.
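For instance, a quick sketch of the sublist case (not one of the article’s example files; output omitted):

def get_list():
    return ["monkey", "cat"]

def test_sublist():
    assert get_list() == ["monkey", "cat", "dog"]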

Compare dictionaries in Pytest

Dictionaries can differ in a number of ways. The keys might be identical, but some values might differ.
Some keys might be missing in the actual result or there might be some extra keys.

In this example we test all of these:

Using string.printable we create a dictionary where the keys are the printable characters and the values
are their respective ASCII values as returned by the ord function. Then we add (or replace) one key-value pair.

examples/python/pt3/test_dictionaries.py

import string
import re

def get_dictionary(k, v):
    d = dict([x, ord(x)] for x in  string.printable)
    d[k] = v
    return d

def test_big_dictionary_different_value():
    assert get_dictionary('a', 'def') == get_dictionary('a', 'abc')

def test_big_dictionary_differnt_keys():
    assert get_dictionary('abc', 1) == get_dictionary('def', 2)

The result looks like this:

$ pytest test_dictionaries.py

______________ test_big_dictionary_different_value _______________

    def test_big_dictionary_different_value():
>       assert get_dictionary('a', 'def') == get_dictionary('a', 'abc')
E       AssertionError: assert {'\t': 9, '\n...x0c': 12, ...} == {'\t': 9, '\n'...x0c': 12, ...}
E         Omitting 99 identical items, use -v to show
E         Differing items:
E         {'a': 'def'} != {'a': 'abc'}
E         Use -v to get the full diff

_______________ test_big_dictionary_differnt_keys ________________

    def test_big_dictionary_differnt_keys():
>       assert get_dictionary('abc', 1) == get_dictionary('def', 2)
E       AssertionError: assert {'\t': 9, '\n...x0c': 12, ...} == {'\t': 9, '\n'...x0c': 12, ...}
E         Omitting 100 identical items, use -v to show
E         Left contains more items:
E         {'abc': 1}
E         Right contains more items:
E         {'def': 2}
E         Use -v to get the full diff

The first test function got two dictionaries where the value of a single key differed.

The second test function had an extra key in both dictionaries.

Testing for expected exceptions in Pytest

Finally let’s look at exceptions!

A good test suite will test the expected behaviour both when the input is fine and
also when the input triggers some exception. Without testing the exceptions we cannot
be sure that they will really be raised when necessary. An incorrect refactoring
might eliminate the error checking of our code, thereby letting through invalid data
and either triggering a different exception, as in our example, or not generating any exception at all
and just silently doing the wrong thing.

In this brilliant example the divide function checks if the divisor is 0 and raises its
own type of exception instead of letting Python raise its own. If this is the defined behavior,
someone using our module will probably wrap our code in a try statement and expect
a ValueError. If someone changes our divide function and removes our
special exception, then we have basically broken the exception handling of our user.

examples/python/pt3/test_exceptions.py

import pytest

def divide(a, b):
    if b == 0:
        raise ValueError('Cannot divide by Zero')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

The test actually has two parts. The first part:

    with pytest.raises(ValueError) as e:
        divide(1, 0)

checks if a ValueError was raised during our call to divide(1, 0)
and will assign the exception object to the arbitrarily named variable e.

The second part is a plain assert that checks if the text of the exception is what
we expect it to be.

This is now the expected behaviour. Our test passes:

$ pytest test_exceptions.py

test_exceptions.py .

What if someone changes the error message in our exception from Zero to Null?

examples/python/pt3/test_exceptions_text_changed.py

import pytest

def divide(a, b):
    if b == 0:
        raise ValueError('Cannot divide by Null')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

The assert in the test will fail, indicating the change in the text.
This is actually a plain string comparison.

$ pytest test_exceptions_text_changed.py


    def test_zero_division():
        with pytest.raises(ValueError) as e:
            divide(1, 0)
>       assert str(e.value) == 'Cannot divide by Zero'
E       AssertionError: assert 'Cannot divide by Null' == 'Cannot divide by Zero'
E         - Cannot divide by Null
E         ?                  ^^^^
E         + Cannot divide by Zero
E         ?                  ^^^^

In the second example we show the case when the special exception raising is gone,
either by mistake or because someone decided that it should not be there.
In this case the first part of our test function will catch a different exception.

examples/python/pt3/test_exceptions_failing.py

import pytest

def divide(a, b):
#    if b == 0:
#        raise ValueError('Cannot divide by Zero')
    return a / b

def test_zero_division():
    with pytest.raises(ValueError) as e:
        divide(1, 0)
    assert str(e.value) == 'Cannot divide by Zero' 

The report will look like this:

$ pytest test_exceptions_failing.py

    def test_zero_division():
        with pytest.raises(ValueError) as e:
>           divide(1, 0)

test_exceptions_failing.py:10:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 1, b = 0

    def divide(a, b):
    #    if b == 0:
    #        raise ValueError('Cannot divide by Zero')
>       return a / b
E       ZeroDivisionError: division by zero

Exception depositing money to the bank

Another case when checking for proper exceptions might be important is when
we want to avoid silently incorrect behavior.

For example, in this code we have a function called deposit that expects
a non-negative number. We added input validation that will raise an exception, protecting
the balance of our bank account. (In our example we only indicate the location of the code
that actually changes the balance.)

examples/python/pt3/test_bank.py

import pytest

def deposit(money):
    if money < 0:
        raise ValueError('Cannot deposit negative sum')

    # balance += money

def test_negative_deposit():
    with pytest.raises(ValueError) as e:
        deposit(-1)
    assert str(e.value) == 'Cannot deposit negative sum' 

We have also created a test-case that will ensure that the protection is there,
or at least that the function raises an exception if -1 was passed to it.

Conclusion

Pytest and its automatic error reporting is awesome.
