
WebDriver and Py.test parametrize

On the Web QA team at Mozilla we have been using the py.test unit testing framework for a while now and find it very useful. We can extend and customise it using plugins, and have done so to great effect with a plugin that combines py.test’s unit testing capabilities with the demands of Selenium and Selenium Grid.

However, for those of us who have migrated from JUnit, TestNG and similar frameworks, there was one piece of functionality missing: data-driven tests. The py.test team have recently added this functionality, so let’s have a look at how we can use it with WebDriver.

A data-driven unit test is a test method configured to be fed input parameters and expected data by the unit testing package. The data fed into the method might be as simple as an array or list, but each item in it becomes a separate test case with its own independent result.

In the case of py.test this is done using a decorator, which you may also know as an annotation. Time for a basic example of using it in conjunction with Selenium:

import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

# Each (search_term, expected_results) pair below runs as a separate test case.
@pytest.mark.parametrize(('search_term', 'expected_results'),
                         [('Firefox', 1000), ('Foxkeh', 1)])
def test_search_results(search_term, expected_results):
    selenium = webdriver.Firefox()
    try:
        selenium.get('http://support.mozilla.org/en-US/home')
        search_box = selenium.find_element(By.CSS_SELECTOR, 'form > input.text')
        search_box.send_keys(search_term)
        selenium.find_element(By.CSS_SELECTOR, 'input.btn-important').click()
        actual_results = selenium.find_element(
            By.CSS_SELECTOR, 'div.search-count > strong:nth-of-type(1)').text
        assert int(actual_results) == expected_results
    finally:
        # Close the browser even if the assertion fails.
        selenium.quit()

As you can see there is very little more to it than a typical Selenium test. We have defined ‘search_term’ and ‘expected_results’ in the parametrize decorator and supplied a list of tuples pairing Firefox and Foxkeh with the expected number of results for each. When py.test runs the test, each tuple is passed into the function through its arguments. From there we can use the values in the test, in this case as the string we will search on and the number of results we assert for that search term.
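
If the decorator is new to you, the same pattern in miniature, away from Selenium, may make the mechanics clearer. The function name and values below are purely illustrative:

import pytest

@pytest.mark.parametrize(('a', 'b', 'expected'),
                         [(2, 3, 5), (10, 5, 15)])
def test_addition(a, b, expected):
    # py.test runs this twice, once per tuple, and reports each as its own
    # test with an identifier built from the values,
    # e.g. test_addition[2-3-5] and test_addition[10-5-15].
    assert a + b == expected

Running py.test against a file containing this function collects two tests, and a failure in one parameter set leaves the other unaffected.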

The holy grail of programming, simplicity and reduced code duplication, is achieved in one move.

Now that you can effortlessly add test cases for an existing set of steps you may be tempted to do just that. But be careful, adding test cases does not always add benefit and it’s still important to be intelligent about your test coverage.

To see a further example of how the Web QA team are using this feature you can look at these tests:
https://github.com/mozilla/Socorro-Tests/blob/master/tests/test_crash_reports.py
For more advanced users there is also the ability to derive test cases from an external data source, such as a database or XML file. I won’t go into detail here, but the sketch below gives a taste of the idea.
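
As a rough sketch only, assuming a conftest.py and a hypothetical search_data.json file, the pytest_generate_tests hook can build the parameter list at collection time:

import json

def pytest_generate_tests(metafunc):
    # Only parametrize tests that actually ask for these arguments.
    if 'search_term' in metafunc.fixturenames:
        with open('search_data.json') as f:
            data = json.load(f)
        metafunc.parametrize(('search_term', 'expected_results'),
                             [(item['term'], item['count']) for item in data])

The test function itself stays exactly as written above; only the source of its parameters changes.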

The py.test documentation has further parametrize examples: http://pytest.org/latest/example/parametrize.html

Good luck.

Thanks to the py.test team of contributors for introducing this excellent feature.
