Category Archives: WebDriver

Selenium Conference 2015 Wrap-up Notes

This year, I set out to attend Selenium Conference 2015 with some specific goals:

  1. See which JavaScript framework(s)/librar(y|ies) are being used and recommended for “full-stack” testing (as a potential complement or replacement to our Python stack).  Check out an earlier post on InternJS for our goals and challenges.
  2. See which mobile-testing strategies + tooling are being used.
  3. Look into what folks are using Docker for, and how we might apply it.
  4. Cloud, cloud, cloud (and, more specifically for us, AWS) – what are companies and organizations of all sizes and needs using it for, testing-wise? What are the considerations and gotchas? And how do testing considerations change in the DevOps model that often accompanies the cloud?
  5. Which open-source tools can we use + hopefully integrate, to get performance + accessibility testing, along with our functional tests?  (Particularly, as close to Real User Monitoring as we can get, without a third-party, expensive, hosted solution – or in addition.)
  6. Catch up on what the core committers of the Selenium project are doing, thinking, and where the project is next headed.
  7. General information-seeking + evangelizing, particularly around open-source offerings, both in terms of what we could potentially use, but also sharing what we’ve done, are working on, and can share with the community.

Talks I attended and particularly enjoyed were:

  1. Simon Stewart’s keynote, entitled: Selenium: State of the Union – video (no slides available)
  2. Curing Imposter Syndrome – video, slides
  3. Mobile End-to-End Testing at Scale: Stable, Useful, Easy – Pick Three – video, slides
  4. Distributed Automation using Selenium Grid/AWS/Auto-scaling – video, slides
  5. Docker Selenium: Getting Started – video, slides

A few key takeaways:

  1. Tools very commonly used across companies for monitoring and continuous deployment were statsd, Graphite, Jenkins CI, and Travis CI.
  2. If not optimized, AWS can be _very_ expensive, and you can very easily end up wasting paid-for computing cycles/resources.
  3. Docker looks very useful if you only need to test (using Selenium Grid) on Linux, primarily – it’s less clear with Windows + Mac.  The particular talk on this used Ruby as its language, although it shouldn’t be too much work to do in Python.
  4. All the core committers I spoke with, and other industry peers as well, recommended “just using” WebDriverJS if we’re going to continue experimenting with JavaScript-based, developer-integration level testing.  Yahoo! has also used Protractor (for Angular apps) + CasperJS, along with WebDriverJS as its core, with good results.
  5. Lots of discussion around mobile testing, and while most of the folks testing mobile do so using Appium (as do we), the needs vary widely, as with good monitoring and responsive design, there isn’t always a need to do full end-to-end tests across all devices.  This also depends on whether you’re testing a mobile website or a native app on Android or iOS, of course.  Appium 1.5 is coming soon, we learned, and will be a nice, clean release with a lot of fixes + features.
  6. Yahoo! has used to drive; it looks really useful, and thankfully seems kept up-to-date.

Action Items:

  1. Catch up with the remainder of the Selenium Conference 2015 talks I wasn’t able to attend.
  2. Look further into the potential use-case(s) for Docker, test-infrastructure wise.
  3. Investigate integrating a Selenium proxy (such as browsermob-proxy) with HAR files and potentially Apache/Nginx server logs, to augment our HTML Reports with richer, more timely information to help better-report and debug problems (heavily re-inspired by the Facebook talk!)
  4. Look into the possibilities of scaling our tests + increasing performance using AWS/cloud infra, while balancing cost considerations.
  5. Look into evaluating and potentially integrating visual-comparison testing tools, such as Applitools Eyes.
  6. Evangelize – through blog posts, meetups, etc., more of what Web QA and Mozilla are up to – particularly for our peers and community’s benefit.

For more wrap-up posts on last week’s separate, but follow-on, Web QA team meetings, look here as well.

Keep an eye on this blog for future posts; we’ll be posting a little more often than we have in recent months!



Make the intern do it


Hi, I’m Matt Brandt [:mbrandt in irc]. I’m a core contributor on Mozilla’s Web QA team. I work for the dino.

A shared experiment

As a QA team we had a difficult quarter to end 2014 on; that said, the work with the MDN team was a personal highlight for me and a fulfilling way to cap off the year.

The goal was to implement an experiment: convert several Python-based tests into JavaScript, place these tests into the build process that the developers use, and see what it nets us.

David Walsh and I evaluated Intern as an alternative to our Python-based suite of tests that uses WebDriver, pytest, and pytest-mozwebqa. The developers very much want to be involved in end-to-end testing using WebDriver as a browser automation tool, but found the context switching between JavaScript and Python a distraction. Web QA wanted our test cases to be more visible to the developers and to block deployment if regressions were introduced into the system.

The problem(s) we wanted to address:

  • Scalability of test coverage – joint ownership of creation and maintenance of test cases.
  • Moving test runs further upstream in the build process – providing quicker feedback.
  • Test runs and reports are hooked into a build process that the dev team uses.


Presently, Web QA’s MDN tests are a Python-based suite of tests that runs on a Jenkins instance which is firewalled from the public. Both of these factors make accessing and monitoring our test runs complex, and an event that is separate from the developer workflow.

Developer-level unit tests run on a Travis CI instance, and presently there is no hook that runs the Python-based end-to-end tests if their build passes. The end-to-end tests run on a cron job and can also be started by an IRC bot.

Additionally, our Python-based tests live outside of the developers’ project, adding an additional hurdle to maintainability and visibility.

The New

Working closely with the MDN team I helped identify areas that an end-to-end testing framework needed to take into account and solve. Between David’s and my experiences with testing we were able to narrow our search to a concise set of requirements.

Needs included:

  • Uses real browsers — both on a test developer’s computer as well as the ability to use external services such as Sauce Labs.
  • Generate actionable reports that quickly identify why and where a regression occurred
  • An automatic mechanism that re-runs a test that fails due to a hiccup; such as network problems, etc.
  • Choose an open source project that has a vibrant and active community
  • Choose a framework and harness that is fun to use and provides quick results
  • Hook the end-to-end tests into the build process
  • Ability to run a single test case when developing a new testcase
  • Ability to multithread and run n tests concurrently

David Walsh suggested Intern, and after evaluating it we quickly realized that it hit all of our prerequisites. I was able to configure the MDN Travis CI instance to use Sauce Labs (configuring travis-ci), and using the included Dig Dug library for tunneling, tests were quickly running on Sauce Labs’ infrastructure.

Outstanding issues

  • The important isDisplayed method of Intern’s Leadfoot API, paired with the polling function, returns inconsistent results. The cause may be as simple as our misusing isDisplayed() or polling, or isDisplayed() may have a bug. This will take more investigation.
  • Due to security restrictions, Travis CI cannot run end-to-end WebDriver-based tests on a per-pull-request basis. Test jobs are limited to merges to master only. We need to update the .travis.yml file to exclude the end-to-end tests when a pull request is made.
  • Refactor the current tests to use the Page Object Model. This was a fairly large deliverable to piece together and get working, thus David and I decided to not worry about best coding practices when trying to get a minimally viable example working.
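
The Travis exclusion mentioned in the second issue above could look roughly like the following sketch of a `.travis.yml` fragment. The script names here are hypothetical, but `TRAVIS_PULL_REQUEST` is a real Travis CI environment variable: it is the string "false" for branch builds and the pull-request number otherwise.

```yaml
# .travis.yml (sketch only; the npm script names are hypothetical)
script:
  # unit tests run for every build, including pull requests
  - npm test
  # end-to-end tests only run when this build is not for a pull request,
  # since PR builds lack the secure credentials the e2e suite needs
  - 'if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then npm run test-e2e; fi'
```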

We’re well-positioned to see if JavaScript-based end-to-end tests that block a build will provide positive benefits for a project.

Next Steps

Before the team is ready to trust Leadfoot and Dig Dug, we need to solve the outstanding issues. If you are interested in getting involved, please let us know.

  • Remove the sleep statements from the code base — this likely will involve better understanding how to use Promises.
  • Get an understanding of why some tests fail when run against Sauce Labs — the failures may be legitimate, we may be misusing the API, or there is a defect somewhere within the test harness.
  • Refactor tests to use the Page Object Model.

After these few problems are solved, the team can include the end-to-end tests in their build process. This will allow timely and relevant per-feature-merge feedback to developers on the project.

Writing reliable locators for Selenium and WebDriver tests

By Zac Campbell

If you’ve come here looking for the perfect, unbreakable locator, then I’m afraid to tell you that there is no perfect locator. That HTML changes and locators become incompatible are realities of writing automated UI tests. You’ll have to get used to updating them as the development teams experiment with design, streamline HTML and fix bugs as long as your web app is evolving. Maintaining locators must be calculated as part of the cost of test maintenance.

However, the good news is that there is a difference between a good and poorly written locator. That means if you’re smart about your locators you can reduce the cost of maintenance and focus your time on more important tasks than debugging false negative results.

On the other hand, a failing locator is a good thing so don’t be afraid of it. The trusty NoSuchElementException, rather than an assertion failure, is often your first sign that there is a regression in your software.

In this guide I will assume that you know how to write a locator already and are familiar with the construction and syntax of CSS and Xpath locators. From here on we’ll dig into the differences between a good and bad locator in the context of writing Selenium tests.

IDs are king!

IDs are the safest locator option and should always be your first choice. By W3C standards, an ID should be unique within the page, meaning you will never have a problem with finding more than one element matching the locator.

The ID is also independent of the element type and location in the tree and thus if the developer moves the element or changes its type WebDriver can still locate it.

IDs are often also used in the web page’s JavaScript, so a developer will be reluctant to change an element’s ID, since that would mean changing their JavaScript too. That’s great for us testers!

If you have flexible developers or even an eye for the app source code you can always try and get extra IDs added into the code by buying them a beer on Friday evening, taking their sister on a date or just plain begging. However, sometimes adding IDs everywhere is impractical or not viable so we need to use CSS or Xpath locators.

CSS and Xpath locators

CSS and Xpath locators are conceptually very similar so I’ve put them together for this discussion.

These types of locators use combinations of tag name, descendant elements, CSS classes or element attributes to make the matching pattern strict or loose: strict meaning that small HTML changes will invalidate it, and loose meaning that it might match more than one HTML element.

When writing a CSS or Xpath locator it’s all about finding the balance between strict and loose; durable enough to work with HTML changes and strict enough to fail when the app fails.

Find an anchoring element

A good way to write a CSS or Xpath locator is to start from an element that you know is not likely to change much and use it as an ‘anchor’ in your locator. The anchor may have an ID or a stable location; it is not the element you need to locate, but it is a reliable position to search from. Your anchoring element can be above or below the target element in the HTML tree, but most often it’s above.

<div id="main-section">
    <p>Introduction</p>
    <ul>
        <li>Option 1</li>
    </ul>
</div>

In this example the <li> element that we want to locate does not have an ID or a CSS class, making it difficult to locate. There is also the chance that there is more than one list in the HTML. The div with the id “main-section” makes a good anchoring element from which to find the <li> element. It narrows down the HTML the locator is searching in.

When to use ‘index’ locators like nth-child() and [x]

nth-child(), first-child, [1] and other index-type locators should only be used against a list itself. In that case the test explicitly wants to pick the item at that index from the list, for example validating the first item of search results. Using an index-type locator to find an element that is not index-placed is likely to cause you problems when the order of the elements changes, and thus should be avoided!

  <menu>
    <button>Option 1</button>
    <button>Option 2</button>
    <button>Option 3</button>
  </menu>

//menu/button[1] is a suitable locator only when you know you want to interact with the first menu item regardless of how the list of buttons is sorted. If instead you want a specific button, the order of the buttons may change and cause your test to fail. Would this be a legitimate failure, or one that requires you to re-write the locator?

Depending upon the objective of your test, a non-index-based locator like //menu/*[text()='Option 1'] might be more suitable. <menu> is the ideal anchoring element.

CSS class names often tell their purpose

Front end designers are actually humans too and will often give CSS classes names that represent their purpose.

We can take advantage of this and choose locators that are dependent upon the functionality rather than the styling because styling often changes.

<footer class="form-footer buttons">
  <div class="column-1">
      <a class="alt button cancel" href="#">Cancel</a>
  </div>
  <div class="column-2">
      <a class="alt button ok" href="#">Accept</a>
  </div>
</footer>

In this example, ignore the classes “column-1” and “column-2”. They refer to the layout and thus might be susceptible to change if the development team decide to adjust the design. It will be more reliable to target the button directly. However, “button.ok” would be quite a ‘loose’ locator; there could be more than one on the page. You can use the footer as your anchoring element, making “footer .ok” a good locator in this example.

Spotting future fragility

By observing the HTML you can spot potential future fragility. I’ve intentionally left 3 more things out of the locator in the previous example: the <a>’s tag name, the <a>’s text content, and any direct descendants ( > ) between footer and a. In the HTML it looks like the dev team have already changed the text label (the “ok” class sits on a link whose text is “Accept”) and the tag (the “button” class is on an <a> element). The class, text content and tag names are all mismatched!

If the dev team are indecisive or experimenting with UX and performance improvements, these might still change again so we will err on a slightly “looser” locator that will tolerate some changes in the HTML.

Direct descendants

CSS example: div > div > ul > li > span
Xpath example: //div/div/ul/li/span

A direct descendant refers to the parent-to-child relationship of HTML elements. A good example is the first <li> element inside a <ul>.

A long chain of direct descendants, as in the locator examples above, might help you find an element where there are no classes or IDs, but it is sure to be unreliable in the long term. A large block of content without IDs or classes is likely to be very dynamic too, and will probably move around and change HTML structure often. It only takes one element in the chain to change for your locator to come tumbling down.

If you absolutely must use direct descendants in your locators, then try to use a maximum of one per locator.

Adjust it for purpose

<section id="snippet">
    <div>Some introductory text</div>
</section>

Only use as much of a locator as you need. Less is more! If you are only capturing text then using a locator like “#snippet div” is unnecessary. WebDriver will return the same text content for the locators ‘#snippet’ and ‘#snippet > div’, but the latter would break if the div element were changed to a <p> or <span>.

Locating on element attributes

Locating on attributes is a lot like locating by CSS class – the attribute can be unique, but it can also be re-used across many items, so tread carefully. You’ll have to decide on a case-by-case basis.

Generally, it is best to avoid using attributes and focus on tags, CSS classes and IDs but modern HTML5 “data-” attributes are stable attributes to use because they are tightly integrated with the functionality of the web application.

Tag name, link text, name locating strategies

These locating strategies are just shortcuts to locating by attribute or by text string (Xpath: text()). The rules for using these apply to tag name, link text and name too.

In summary: when you are composing a locator look for an ID first and failing that, the next nearest element with an ID (to use as an anchoring element). From there, look at descendents and element attributes to narrow down to the element you want to locate.

Understand exactly the purpose of the locator – is it simply to navigate through the site or is it asserting the order of an element? Should the locator be able to cope if the element moves or should the test fail?

The purpose of the locator will decide how strict, loose and which techniques you will need to use in the locator.

Good luck, and code wisely to save yourself future test maintenance and false negative hassles!

Setting Up PyCharm to Run MozWebQA Tests

(The following is a guest post by Bob Silverberg, one of our awesome contributors in Web QA.)

PyCharm is a Python IDE, released by JetBrains. I quite enjoy using it and, as I've recently started contributing some tests to Mozilla's Web QA department (mozwebqa), I wanted to use it to interactively debug some mozwebqa tests. This turned out to be a lot trickier than I had imagined, so I am documenting the steps to do so via this post.

This post assumes you have a project open in PyCharm for one of the mozwebqa projects. I am going to use marketplace-tests for this example. As I am on a Mac, the instructions and screen shots will be OS X-specific, but I imagine they will translate pretty closely to Windows.

Step 1 – Configure py.test as your default test runner

  1. Open up the Preferences… dialog and choose Python Integrated Tools under Project Settings.
  2. Choose py.test as the Default test runner.
  3. Click OK.

Step 2 – Set default run configuration parameters for your project

  1. Type ctrl + alt + R to open the Run configuration selector.
  2. Type 0, or choose the first option (Edit configurations…) from the select list.
  3. In the Run dialog, expand Defaults > Python tests and click on py.test.
  4. Make sure the Python interpreter that you want to use for this project is selected for Python interpreter. If you are using virtualenv you may have to configure a new Python interpreter for your virtualenv. More on that in a separate post.
  5. Choose the root of your project for Working directory.
  6. Click Apply, then Close.

Step 3 – Create a pytest.ini file in the root of your project

This is necessary to pass command line options to py.test. It would be nice if there were a way to do this via the IDE, but I was not able to do it. If anyone knows how, or can figure out how, to do that I'd love to know. I was able to pass a single option to the command line from PyCharm, but could not get it to work with multiple options.

  1. Create a file called pytest.ini in the root of your project.
  2. Add the command line options you need into that file under the [pytest] section. See below for an example from marketplace-tests.
  3. Save the file.

[pytest]
addopts = --driver=firefox --credentials=mine/credentials.yaml --destructive

Step 4 – Create a copy of credentials.yaml in a personal folder

You are going to have to edit credentials.yaml to place some actual credentials in it, so in order to not have it overwritten each time you do a pull, you should put the copy that you are going to use somewhere else. I tend to create a /mine folder under the project root and place it there, but you can put it anywhere you like. You will notice that the command line option above uses that /mine folder to locate the credentials file.

Step 5 – Run your tests

With a file that contains tests open in an editor window, type ctrl + shift + R and PyCharm will run all of the tests in the file. If you wish to run just one test, type ctrl + alt + R, followed by 0 to open the Edit configurations… dialog and then place the name of your test in the Keywords input. Click Run.

WebDriver’s implicit wait and deleting elements

WebDriver has introduced implicit waits on functions like find_element. That means that when WebDriver cannot find an element, it will automatically wait for a defined amount of time for it to appear; in our framework, 10 seconds is the default duration.

This is very useful behaviour and makes dealing with ambiguous page load events and dynamic elements far more tolerable; where it seems obvious to a human that the test should wait briefly, WebDriver does exactly that. Tests are now more patient and durable on dynamic webpages.

However with Selenium RC and Selenium IDE we became quite used to using methods like is_element_present and wait_for_element_present to deal with Ajax and page loading events. With WebDriver we have to write them ourselves.

Checking an element is not present
When dealing with Ajax or javascript we might want to wait until an element is not present, when an element is being deleted, for example. Trying to find an element after it has been deleted will cause WebDriver to implicitly wait for the element to appear. This is not WebDriver’s fault. It’s doing the right thing, but we need to tell it not to implicitly wait just for this moment otherwise we will waste time waiting when we don’t need to. This time can really add up if you check a few times in each test.

The Web QA team has written its own is_element_present method for WebDriver:

def is_element_present(self, *locator):
    self.selenium.implicitly_wait(0)
    try:
        self.selenium.find_element(*locator)
        return True
    except NoSuchElementException:
        return False
    finally:
        # set back to where you once belonged
        self.selenium.implicitly_wait(self.default_implicit_wait)

There are 4 important things going on here. In order:

  1. Setting implicitly_wait to 0 so that WebDriver does not implicitly wait.
  2. Returning True when the element is found.
  3. Catching the NoSuchElementException and returning False when we discover that the element is not present instead of stopping the test with an exception.
  4. Setting implicitly_wait back to 10 after the action is complete so that WebDriver will implicitly wait in future.

(Note that we have previously stored the default implicit wait value in the default_implicit_wait variable)

You may use this in your own logic, but we mostly use it in WebDriverWait. It is important for bypassing WebDriverWait’s suppression of the NoSuchElementException:
WebDriverWait(self.selenium, 10).until(lambda s: not self.is_element_present(By.ID, 'delete-me'))

This method works well and most importantly the implicit wait is not triggered meaning your test does not needlessly wait!
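
The implicit-wait toggling above can also be factored into a context manager, so any block of code can temporarily opt out of the implicit wait. This is a sketch (the helper name is ours, not part of the WebDriver API), assuming a driver object with the standard implicitly_wait method:

```python
from contextlib import contextmanager

@contextmanager
def no_implicit_wait(driver, default_implicit_wait=10):
    """Temporarily set the implicit wait to 0, restoring the default
    afterwards even if the body raises (hypothetical helper)."""
    driver.implicitly_wait(0)
    try:
        yield driver
    finally:
        # set back to where you once belonged
        driver.implicitly_wait(default_implicit_wait)
```

With this in place, is_element_present reduces to a try/except around find_element inside a `with no_implicit_wait(self.selenium):` block.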

How to WebDriverWait

As WebDriver moves more towards being more of an API and less of a testing tool, functions that contained the logic to wait for pages such as wait_for_page_to_load() are being removed. The reason for this is that it is difficult to maintain consistent behaviour across all of the browsers that WebDriver supports on modern, dynamic webpages.

That leaves the onus on the people writing the framework and tests (that’s you and me!) to write the logic. This is both good and bad. The bad side is that it adds a lot of extra work for us to do and a lot of extra things for us to think about. Your tests might be frail if you don’t get your head around how to wait properly. But the good side is that we can control WebDriver and make our tests more stable so let’s get on with learning about it!

The first issue to understand is that detecting when a click is just a click and when a click loads a page is difficult for WebDriver. There are just too many things going on on modern webpages with Ajax, Javascript, CSS animations and so forth. So let’s forget all about that and just think about what we need on the page to be ready before the test can proceed.

What we are looking for is a good signal. The signal can be an element appearing, disappearing, being created, being deleted or something else altogether! However what is important is that it’s relevant to the action you are performing. For example if you are scrolling through pages of search results and waiting for the page of results to change then you should instruct WebDriver to wait for something in the new set of search results. Waiting for something outside of that area can be an unreliable signal.

At this point it’s a good start to step through the test manually or if you’re debugging, watch the test run on your computer. Watch for elements appearing, javascript, ajax, css animations. Narrow your target on the page down to the area that is changing dynamically or even better the specific element that you want to interact with in the next step of the test. Firebug and Firediff are very useful for this task.

WebDriver’s aim is to replicate the user’s action and as such if an element is not displayed then you can’t click it. This is where a lot of tests come unstuck. By stepping through manually or watching the test run we are looking from the user’s perspective. WebDriver can’t see elements changing so we need to see them with our own eye before we can tell WebDriver to check on them.

Waiting for element visibility
In WebDriver an element can be present but not visible – be wary of this! If an element is not visible we can’t click, type or interact with it, so the test is not ready to proceed. It’s hard to judge whether you will be checking for an element’s presence or its visibility; every case might be different. But generally, when dealing with CSS animation or Ajax transitions, we check visibility. In this example we’ve just clicked on a button that causes the loginbox to be displayed:
WebDriverWait(self.selenium, 10).until(lambda s: s.find_element(By.ID, 'loginbox').is_displayed())

Waiting for elements to be deleted
When dealing with elements being deleted from the page, we check that there are 0 on the page (WebDriverWait will suppress the NoSuchElementException). This example is checking that all items in a list have been deleted:
WebDriverWait(self.selenium, 10).until(lambda s: len(s.find_elements(By.CSS_SELECTOR, '.list-item')) == 0)

You may have noticed in the example of waiting for elements to have been deleted that I used find_elements instead of find_element. This is because WebDriverWait’s until is written to wait for elements to appear and as such suppresses the NoSuchElementException.
If you try to use the following code, WebDriverWait will time out and fail your test even when the element is no longer present, because the suppressed exception prevents the lambda from ever returning:
WebDriverWait(self.selenium, 10).until(lambda s: not s.find_element(By.ID, 'delete-me'))

Waiting for attributes: avoid this!
Waiting for attributes (class, text, etc.) of an element can be unreliable as it relies on the element remaining stable inside WebDriver’s element cache. In you-and-me terms, that means that waiting for a new node to be present is safer than waiting for an existing one to have changed.
WebDriverWait(self.selenium, 10).until(lambda s: s.find_element(By.ID, 'label').text == "Finished")

When performing an action that requires a wait, you can always log a value before (for example, the page number of the search results), perform the action, and wait for that value to have changed:
page = page_object.page_number
self.selenium.find_element(By.ID, 'next-page').click()
WebDriverWait(self.selenium, 10).until(lambda s: page_object.page_number == page + 1)

Reporting failures upon timeout
Reporting to the user a clear reason for a timeout failure is very valuable. In cases where the user has no knowledge of the steps of the test or the workflow of the AUT, it saves the time of having to re-run and debug the test to investigate the failure.
As much as we try to make locators and variable names readable, sometimes a complex explicit wait is not clear. Treat it like an inline code comment where you want to communicate to the user, but keep the message brief.
To add a failure message simply add the message to the ‘until’ method:
WebDriverWait(self.selenium, 10).until(lambda s: len(s.find_elements(By.CSS_SELECTOR, '.list-item')) == 0, "The list items were not deleted before the timeout")

Tracking DOM attributes
Occasionally a JavaScript package like jQuery is used to manipulate the contents of the page. In that case you can look at DOM attributes to see when Ajax actions are occurring. Use Firebug’s DOM panel to inspect values, or set a breakpoint, then replicate the action and watch the value change. This is a very stable option because it bypasses WebDriver’s element cache. jQuery has an attribute called ‘active’ that is easily watchable using this code:
WebDriverWait(self.selenium, 10).until(lambda s: s.execute_script("return jQuery.active == 0"))

Dealing with loading spinners and animations
Catching spinners or loading animations that come and go can be tricky! If you wait for the spinner to be absent, the check might resolve to true before the spinner has even appeared! Occasionally it’s more reliable to ignore the spinner altogether and just focus on waiting for an element on the page that the user will be waiting for. If you’re really struggling you can use a combination of the spinner and the dynamic element. Here is an example of both catching the spinner being deleted and a new element arriving:
WebDriverWait(self.selenium, 10).until(lambda s: s.find_element(By.ID, 'new-element') and len(s.find_elements(By.ID, 'spinner')) == 0)

The order of WebDriverWait’s polling
While dealing with Ajax and WebDriverWait it is helpful to know a bit about how the internals of WebDriverWait work. In simplified terms, it will check the until condition, sleep, then check the condition again until the timeout is reached. The default polling frequency (that is, how long it sleeps between each check of the until condition) is 0.5 seconds, and can be changed via WebDriverWait’s poll_frequency argument.
The tricky part, however, is that WebDriverWait checks the until condition before it performs the first sleep. Thus, if your Ajax has a slight delay, the very first poll of WebDriverWait might resolve to true before the Ajax has even started. In effect, the wait will not really have occurred at all, because the first sleep was never reached.
There is no workaround for this; the only way to avoid it is to change the way you wait, or which element you are waiting for.
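
That polling behaviour can be sketched in plain Python. This is an illustrative re-implementation, not WebDriverWait’s actual source; note that the condition is evaluated once before the first sleep ever happens:

```python
import time

def simple_wait(condition, timeout=10, poll_frequency=0.5):
    """Illustrative sketch of WebDriverWait's polling loop (not the real code).
    condition is a zero-argument callable; its first truthy value is returned."""
    end_time = time.monotonic() + timeout
    while True:
        value = condition()  # checked BEFORE the first sleep
        if value:
            return value
        if time.monotonic() > end_time:
            raise TimeoutError("condition not met within %s seconds" % timeout)
        time.sleep(poll_frequency)
```

If condition() is already truthy on that very first check (for example, before a slow Ajax call has even started), simple_wait returns immediately and no real waiting happens.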

The StaleElementReferenceException during Waits
A StaleElementReferenceException may occur if javascript or Ajax is reloading the page during your explicit wait. The exception is thrown because, while WebDriver can find the locator before and after the page reload, it can also see that the element is different and it deems it untrustworthy (or stale). This relates to the previous section about WebDriverWait’s polling order.
If the developers are changing the classes of an element before and after then one effective way to wait is to use two locators to locate a single element in each of its states. This is slightly more verbose but the trade-off is a reliable test.

Before login: (By.CSS_SELECTOR, 'div#user.not_authenticated')
After login: (By.CSS_SELECTOR, 'div#user.authenticated')
WebDriverWait(self.selenium, 10).until(lambda s: s.find_element(By.CSS_SELECTOR, 'div#user.authenticated').is_displayed())

In this case, even though the HTML element is the same, WebDriver will consider the two locators to match different elements, and hence the second will only be found after the page refresh, once the authenticated class is set.