Previously, NUnit was used as the testing framework. One attribute I find particularly useful, especially when tests fail for no evident reason or simply because of timeouts, is Retry.
I think NUnit was replaced with MSTest because of pipeline issues that are beyond my understanding. At the time, when I tried integrating the test suites into an Azure Pipeline, among other problems, I couldn't get the tests to run on an Ubuntu machine.
To replace this functionality, I consider writing a Retry decorator myself, something I am proficient at in TypeScript. I'm therefore confident about trying it in C#, but it will take time.
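A retry decorator in MSTest can be sketched by subclassing `TestMethodAttribute` and overriding its `Execute` method, which is the framework's supported extension point for wrapping test invocation. This is a minimal illustration, not the author's actual attempt; the attribute name and the retry count are assumptions.

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical retry attribute: MSTest allows subclassing TestMethodAttribute
// and overriding Execute to wrap each test invocation.
public class RetryTestMethodAttribute : TestMethodAttribute
{
    private readonly int _maxAttempts;

    public RetryTestMethodAttribute(int maxAttempts = 3) => _maxAttempts = maxAttempts;

    public override TestResult[] Execute(ITestMethod testMethod)
    {
        TestResult[] results = null;
        for (var attempt = 1; attempt <= _maxAttempts; attempt++)
        {
            results = base.Execute(testMethod);
            // Stop retrying as soon as every result of this attempt passed.
            if (results.All(r => r.Outcome == UnitTestOutcome.Passed))
                return results;
        }
        // All attempts failed: report the last attempt's results.
        return results;
    }
}
```

A test would then use `[RetryTestMethod(3)]` in place of `[TestMethod]`.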
So I mash something up quickly, but it does not work as I expect: I get a "session terminated or not started" error from the Selenium WebDriver. I continue researching until I find a relevant library, namely MSTestEx. Before adding it, I check whether it's included in my organisation's official repos. Sure enough, it's present.
There is a caveat, however: the test is still marked as a failure even if it passes on the second attempt. Going by memory, NUnit behaves differently: if the test succeeds on the second (or a later) attempt, it's marked as a success.
Ignore a test
Whenever I'm adding or modifying a test, I occasionally like running the whole suite in case something has broken or the web application has changed. But I also don't want the test I'm currently working on to be included in the run. This is where ignoring a test is important.
Moreover, as the design for the web application often changes, especially concerning responsive layouts, I often ignore the tests for smaller screens until they're fully implemented.
I do that using the Ignore attribute in MSTest, but most test runners and frameworks have this capability built in.
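In MSTest, the `[Ignore]` attribute takes an optional message, which is a convenient place to record why the test is skipped. A minimal sketch (the class and test names are invented for illustration):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MobileLayoutTests
{
    // The optional message documents why this test is currently skipped.
    [Ignore("Responsive layout for small screens is not fully implemented yet")]
    [TestMethod]
    public void NavigationMenu_CollapsesOnSmallScreens()
    {
        // Test body runs again once the attribute is removed.
    }
}
```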
Parallel vs Sequence
Running your tests in parallel drastically reduces the overall run time. This is beneficial when you include the automation tests in your builds before releasing a new version. The less time they take, the quicker you get feedback. From there, you can either correct any issue or proceed quickly to release the software.
However, one issue I've noticed when running tests in parallel on my machine is that many of them fail because of timeout exceptions. When I launch around 100 tests, with 20 of them running in parallel, the testing environment, usually configured with basic resources, is slow.
I've therefore settled on a middle ground: running classes in parallel, but no more than 5 tests at a time. This gives a better overall success rate.
Delay between Tests
Another improvement that addresses the timeout issue is adding a delay between tests. This might increase the total suite time, but it also increases the success rate.
I notice that MSTest doesn't really isolate end-to-end tests; it's perhaps better suited to unit and integration tests. When running tests in parallel, I often get an object reference exception. With the delay added, this error drastically decreases.
Instead of Thread.Sleep, I use an asynchronous delay, as it's better for performance.
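The original snippet isn't shown here, but a common non-blocking alternative to `Thread.Sleep` is an awaited `Task.Delay` in a cleanup hook. This is an assumed reconstruction; the 2-second gap and the `BaseTest` class name are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BaseTest
{
    // Runs after each test: pause briefly so the environment can recover.
    // An awaited Task.Delay yields the thread instead of blocking it,
    // unlike Thread.Sleep.
    [TestCleanup]
    public async Task PauseBetweenTests()
    {
        await Task.Delay(TimeSpan.FromSeconds(2)); // assumed 2-second gap
    }
}
```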
A better way to handle failing tests
Previously, apart from Asserts, I didn't know why a test failed, especially for technical reasons such as Element Not Found. I therefore added try/catch blocks to each test, resulting in a lot of boilerplate code.
However, on further research, I find that tests should cater for Asserts (or Shoulds) rather than exceptions, which can occur for reasons outside our control. An example is the NoSuchElement exception, which happens when the server takes too long to load a page.
I therefore remove each try/catch block surrounding the tests and instead print the errors in the TearDown method, which runs at the end of each test in the base class. The pass/fail logic resides in only one place. Libraries such as NUnit also expose the message explaining why a test failed.
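In MSTest, the framework-injected `TestContext` property exposes the outcome and name of the test that just ran, so a single `[TestCleanup]` method in the base class can report failures. A sketch, assuming a base class along these lines (the Selenium cleanup line is a placeholder):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BaseTest
{
    // Injected by MSTest before each test runs.
    public TestContext TestContext { get; set; }

    [TestCleanup]
    public void TearDown()
    {
        // Single reporting point: no try/catch needed in individual tests.
        if (TestContext.CurrentTestOutcome != UnitTestOutcome.Passed)
        {
            Console.WriteLine(
                $"Test {TestContext.TestName} finished with outcome: " +
                $"{TestContext.CurrentTestOutcome}");
        }
        // driver?.Quit();  // hypothetical Selenium WebDriver cleanup
    }
}
```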