
Adding functional tests for MySQL integration #526


Merged

Conversation

hectorhdzg
Member

E2E verification for span creation using mysql and dbapi integrations

@hectorhdzg hectorhdzg requested a review from a team March 24, 2020 22:34
@hectorhdzg hectorhdzg added the ext, needs reviewers, and tests labels Mar 24, 2020
        break
    except mysql.connector.Error as err:
        print(err)
        time.sleep(5)
Contributor

If all 5 attempts fail, test execution will continue anyway and the test cases will fail, probably with non-obvious errors such as an AttributeError because self._cursor never got defined. That is inconvenient and a bit hard to debug.

This is a situation that calls for a clear separation between test case setup and test case execution. Setup covers all the operations needed to prepare the test case to run; execution is the running of the test case itself, and it is the part that returns an actual test result, since that is where the testing is done. If the setup fails, the execution should not be performed, because it makes no sense to run a test case when its setup is not ready.

As far as I know, the Python unittest package does not support this. For example:

from unittest import TestCase


class TestCase(TestCase):

    def setUp(self):
        1 / 0

    def test_0(self):
        assert True
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/ocelotl/sandbox/tast
collected 1 item                                                               

test_unittest.py F                                                       [100%]

=================================== FAILURES ===================================
_______________________________ TestCase.test_0 ________________________________

self = <test_unittest.TestCase testMethod=test_0>

    def setUp(self):
>       1 / 0
E       ZeroDivisionError: division by zero

test_unittest.py:7: ZeroDivisionError
=========================== short test summary info ============================
FAILED test_unittest.py::TestCase::test_0 - ZeroDivisionError: division by zero
============================== 1 failed in 0.08s ===============================

Here a failure is reported, but that is not right: a failure means there is something wrong with the code under test, and here the code never got tested because the test case never got executed.

This is why pytest supports the concept of an error when reporting test results. For example:

from pytest import fixture


@fixture
def the_fixture():
    1 / 0


def test_0(the_fixture):
    assert True
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/ocelotl/sandbox/tast
collected 1 item                                                               

test_pytest.py E                                                         [100%]

==================================== ERRORS ====================================
___________________________ ERROR at setup of test_0 ___________________________

    @fixture
    def the_fixture():
>       1 / 0
E       ZeroDivisionError: division by zero

test_pytest.py:6: ZeroDivisionError
=========================== short test summary info ============================
ERROR test_pytest.py::test_0 - ZeroDivisionError: division by zero
=============================== 1 error in 0.08s ===============================

This is much more accurate: no failure is reported, but an error is, meaning that something went wrong while preparing the test to be executed. The concept of an error is very useful because it also applies to the teardown of test cases:

from pytest import fixture


@fixture
def the_fixture():
    # A pytest fixture uses yield to separate the setup from the teardown
    pass  # setup
    yield
    1 / 0  # teardown


def test_0(the_fixture):
    assert True
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/ocelotl/sandbox/tast
collected 1 item

test_pytest.py .E                                                        [100%]

==================================== ERRORS ====================================
_________________________ ERROR at teardown of test_0 __________________________

    @fixture
    def the_fixture():
        # A pytest fixture uses yield to separate the setup from the teardown
        pass  # setup
        yield
>       1 / 0  # teardown
E       ZeroDivisionError: division by zero

test_pytest.py:9: ZeroDivisionError
=========================== short test summary info ============================
ERROR test_pytest.py::test_0 - ZeroDivisionError: division by zero
========================== 1 passed, 1 error in 0.02s ==========================

This is very accurate test reporting: the test case was executed, the code under test was tested, and it passed, which is reported as 1 passed. Nevertheless, something went wrong when running the test teardown, and that is also accurately reported as 1 error.

I realize that we have been using the unittest.TestCase style of test writing so far, but I think we have now hit a limitation of that style: a test case that requires this kind of setup, where an external component may fail and render the rest of the testing useless. It would be ideal, of course, to be able to use fixtures with our current unittest.TestCase test cases in order to stay consistent with the rest of the existing tests, but sadly this does not seem to be possible:

The following pytest features do not work, and probably never will due to different design philosophies:
Fixtures (except for autouse fixtures, see below);

Autouse fixtures are fixtures that are automatically applied to a whole set of test cases. That may be a solution for this specific situation, but it will not always work, precisely because they are applied to every test case in the set. A minimal sketch of that approach follows.
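
As a rough illustration only: this is a minimal sketch of an autouse fixture feeding a unittest.TestCase-style class. Every name and connection parameter here (mysql_connection, RETRY_COUNT, TestFunctionalMysql, the host/user/password/database values) is a placeholder made up for the example, not code from this PR:

import time
from unittest import TestCase

import mysql.connector
import pytest

RETRY_COUNT = 5  # placeholder retry count


@pytest.fixture(scope="class", autouse=True)
def mysql_connection(request):
    # Setup: retry the connection; raising here makes pytest report an
    # ERROR (not a FAILURE) for every test case in the class.
    for _ in range(RETRY_COUNT):
        try:
            connection = mysql.connector.connect(
                user="testuser", password="testpassword",
                host="localhost", database="opentelemetry-tests",
            )
            break
        except mysql.connector.Error:
            time.sleep(5)
    else:
        raise Exception(
            "Unable to connect to MySQL after {} attempts".format(RETRY_COUNT)
        )
    request.cls._connection = connection
    yield
    # Teardown: a failure here is reported as an ERROR as well.
    connection.close()


class TestFunctionalMysql(TestCase):
    def test_connection_is_available(self):
        self.assertIsNotNone(self._connection)

Because the fixture is autouse, it runs for every test class collected in this module, which is exactly the caveat mentioned above.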

Please reach out to the rest of the team if you are unsure about my suggestions here; I understand that I am asking to use pytest fixtures and non-unittest.TestCase test cases, which can be a bit controversial.

Thanks and sorry for the long post!

Member Author

Added a pytest fixture for the DB connection while keeping the unittest.TestCase structure. As long as we don't need to refactor all the tests we currently have, I believe it is OK to use pytest features.

Member

Thanks for the explanation @ocelotl, very helpful.

Some reasons not to use fixtures:

  • they make it impossible to run tests without pytest
  • they're magical -- non-autouse fixtures add a phantom arg to test methods and make it hard to reason about the behavior of tests
  • autouse fixtures tend to be used for heavyweight background work -- like starting a docker instance -- that can make tests slow and unreliable

The bad may not outweigh the good here, and this does look like a good case for a fixture. I know it's possible to make a mess of tests even without fixtures; it's just that fixtures make that especially easy.

Member

Maybe we can talk more during the SIG about this? I would say a mixed unittest / pytest fixture pattern is the most confusing, as it's the two patterns combined and requires an understanding of how both work.

The ERROR / FAILURE distinction, albeit semantically clearer, doesn't seem that important IMO. Either way I need to go look at the code that broke, so I'm not sure what that distinction changes about my actions.

def connect_to_db(self):
    if self._connection is None:
        # Try to connect to DB
        for i in range(5):
Contributor

This for needs an else clause. If no connection has been achieved after 5 attempts, an exception needs to be raised so that pytest does not run the test cases that depend on this setup; see the sketch below.
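
For illustration, assuming the time and mysql.connector imports already in this file, and with placeholder connection arguments (not the actual values in this PR), the else clause would hang off the for loop roughly like this:

def connect_to_db(self):
    if self._connection is None:
        # Try to connect to DB
        for i in range(5):
            try:
                self._connection = mysql.connector.connect(
                    user="testuser", password="testpassword",
                    host="localhost", database="opentelemetry-tests",
                )
                break
            except mysql.connector.Error as err:
                print(err)
                time.sleep(5)
        else:
            # Reached only if no break happened, i.e. every attempt failed.
            raise Exception("Unable to connect to MySQL after 5 attempts")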

Member

I've left a similar comment in the postgres tests. I'm voting for fewer retries (if we actually need them), and just re-raising the exception on the last try.

Contributor

@ocelotl ocelotl left a comment

Requesting an else clause

Member

@toumorokoshi toumorokoshi left a comment

Requesting similar changes as #528. Thanks!

@c24t
Member

c24t commented Apr 1, 2020

FYI for anyone testing this on OSX in a virtualenv: https://stackoverflow.com/a/56228387.

Member

@c24t c24t left a comment

Minor comments about the autouse fixture and a lingering print, otherwise it LGTM following the structure in #340.

The fact that these tests don't run outside tox and that they might block for 30 seconds if the user doesn't have the right image cached doesn't seem great, but I don't see a better easy alternative.

Member

@toumorokoshi toumorokoshi left a comment

LGTM, thanks!

        break
    except Exception as ex:
        if i == RETRY_COUNT - 1:
            raise (ex)
Contributor

I see that this is being run outside of the standard testing process, so I'll just add a suggestion here. I'm marking this approved to remove my blocking request for changes.

I don't find this to be very accurate. What we are doing here is printing every exception traceback but the last one, and then raising the last one. Nevertheless, the last exception should not be treated differently from the rest, because it consumed an attempt just like the others did.

If what we want is to inform the user of every exception that was raised, we should use logger.exception instead of just printing it to the console with traceback.print_exc. That way, the logging of these exceptions is handled in the standard way, like the rest of the logs.

Once the attempts are exhausted, we should raise an exception that informs the user precisely about this, something like:

raise Exception("Unable to connect to database at {} after {} attempts".format(...))

In summary, something like this:

from logging import getLogger

logger = getLogger(__name__)

attempts = 3

for attempt in range(attempts):
    try:
        1 / 0
        print("connected")
        break
    except Exception as error:
        logger.exception(error)
else:
    raise Exception(
        "Unable to print \"connected\" after {} attempts".format(attempts)
    )
division by zero
Traceback (most recent call last):
  File "example.py", line 9, in <module>
    1 / 0
ZeroDivisionError: division by zero
division by zero
Traceback (most recent call last):
  File "example.py", line 9, in <module>
    1 / 0
ZeroDivisionError: division by zero
division by zero
Traceback (most recent call last):
  File "example.py", line 9, in <module>
    1 / 0
ZeroDivisionError: division by zero
Traceback (most recent call last):
  File "example.py", line 16, in <module>
    "Unable to print \"connected\" after {} attempts".format(attempts)
Exception: Unable to print "connected" after 3 attempts

Member Author

Thanks for reviewing, @ocelotl. I updated the code to use the logger instead of just printing the error. @toumorokoshi had some strong opinions about throwing our own exception instead of re-raising the last one, so I will continue with the current change to move this PR forward; this could be addressed in a future PR.
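
For reference, a hedged sketch of what combining the two suggestions might look like, logging each failed attempt and still re-raising the last exception. The function name connect_with_retries, the RETRY_COUNT value, and the keyword-argument passthrough are assumptions for illustration, not the actual code in this PR:

import logging
import time

import mysql.connector

logger = logging.getLogger(__name__)

RETRY_COUNT = 5  # placeholder value


def connect_with_retries(**connection_kwargs):
    # Retry the connection, logging each failed attempt through the
    # standard logging machinery instead of printing it.
    for attempt in range(RETRY_COUNT):
        try:
            return mysql.connector.connect(**connection_kwargs)
        except mysql.connector.Error as error:
            logger.exception(error)
            if attempt == RETRY_COUNT - 1:
                raise  # final attempt: propagate the original exception
            time.sleep(5)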

Contributor

@ocelotl ocelotl left a comment

Please take a look at the suggestion, @hectorhdzg 👍

@hectorhdzg hectorhdzg added the PR:please merge label and removed the needs reviewers label Apr 10, 2020
@toumorokoshi toumorokoshi merged commit b44bf41 into open-telemetry:master Apr 13, 2020
srikanthccv pushed a commit to srikanthccv/opentelemetry-python that referenced this pull request Nov 1, 2020