
Video Recordings of Browser Tests

David Santiago, Senior Test Engineer

As part of the automated test suite we run for every release candidate here at Tuenti, there are over 300 webdriver tests. They verify that the site behaves as expected from the user’s point of view.

If you have ever dealt with these kinds of end-to-end tests, you know they are more prone to nondeterministic behavior than smaller, more focused integration tests. That’s expected, given the higher number of moving parts involved. Our Jenkins continuous integration setup deals with this by retrying failed tests before considering them actual errors, which helps reduce the noise they cause.
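The post doesn’t show how the retry is wired into our Jenkins jobs; purely as an illustration of the same retry-before-fail idea, a JUnit 4 rule like the following (class name and retry count are hypothetical) can re-run a flaky test a few times before reporting a real failure:

```java
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Hypothetical sketch: retry a failing test a few times before marking it as a real failure.
public class RetryRule implements TestRule {
    private final int maxAttempts;

    public RetryRule(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                Throwable lastFailure = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        base.evaluate(); // run the test body
                        return;          // passed, no retry needed
                    } catch (Throwable t) {
                        lastFailure = t;
                        System.err.println(description.getDisplayName()
                                + " failed on attempt " + attempt);
                    }
                }
                throw lastFailure;       // all attempts failed: report the error
            }
        };
    }
}
```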

Retrying a failed test that later passes wastes resources in our build pipelines, but there is an even larger cost associated with these flaky failures: tracking down the root cause of the unexpected behavior.

We already store screenshots for failed tests alongside their stack traces to help diagnose problems, but for some corner cases that is not enough, and it’s not uncommon for those tests to pass perfectly when debugged locally. In those cases, the developer or test engineer investigating the failure would have to be watching the test as it ran in our CI environment, which is obviously not possible.
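The screenshot code itself isn’t shown in the post; as a rough sketch of how such a screenshot is typically captured through the WebDriver API (the helper class and target directory are assumptions), it boils down to:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public final class FailureScreenshots {

    // Capture the current browser viewport and store it under the failed test's name.
    // The artifacts directory is a placeholder; the real pipeline archives it with the build.
    public static void saveFor(WebDriver driver, String testName) throws Exception {
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        Files.copy(shot.toPath(),
                   Paths.get("/var/ci/artifacts", testName + ".png"),
                   StandardCopyOption.REPLACE_EXISTING);
    }
}
```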

Therefore, we extended our webdriver grid infrastructure to record and store videos of the execution of such tests. Here is a diagram of the overall architecture, with the additions to the previous one:



We implemented a video recording web service that runs on each of the servers (actually virtual machines) hosting the webdriver nodes of our grid. The service lets us remotely start and stop the recording in a VM and store the resulting video under a given name. For the recording itself we used the Monte Media Java library, extending it a bit to fit our needs.
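Our service itself isn’t open sourced in this post; the snippet below is only a minimal sketch of how a recording can be started and stopped with Monte Media’s ScreenRecorder, roughly following the library’s published examples. The formats, frame rate, and output handling are assumptions, and our extension adds the naming and storage logic on top of this:

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import org.monte.media.Format;
import org.monte.media.FormatKeys.MediaType;
import org.monte.media.math.Rational;
import org.monte.screenrecorder.ScreenRecorder;

import static org.monte.media.FormatKeys.*;
import static org.monte.media.VideoFormatKeys.*;

public final class RecorderSketch {

    public static ScreenRecorder startRecording() throws Exception {
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();

        // Record the whole screen of the VM to an AVI file using the
        // TechSmith screen-capture codec, at 15 frames per second.
        ScreenRecorder recorder = new ScreenRecorder(gc,
                new Format(MediaTypeKey, MediaType.FILE, MimeTypeKey, MIME_AVI),
                new Format(MediaTypeKey, MediaType.VIDEO,
                        EncodingKey, ENCODING_AVI_TECHSMITH_SCREEN_CAPTURE,
                        CompressorNameKey, ENCODING_AVI_TECHSMITH_SCREEN_CAPTURE,
                        DepthKey, 24, FrameRateKey, Rational.valueOf(15),
                        QualityKey, 1.0f, KeyFrameIntervalKey, 15 * 60),
                new Format(MediaTypeKey, MediaType.VIDEO, EncodingKey, "black",
                        FrameRateKey, Rational.valueOf(30)),   // mouse cursor track
                null);                                         // no audio
        recorder.start();
        return recorder;
    }

    public static void stopRecording(ScreenRecorder recorder) throws Exception {
        recorder.stop();  // the movie ends up in ScreenRecorder's default output folder
    }
}
```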

By taking advantage of the hooks available in the webdriver grid code, we transparently start recording whenever a new browser session is requested. From that moment on, the video is recorded while the test runs. If the test fails, our webdriver client API lets us request that the video be stored under a proper name, which is afterwards used to access it. Another extension, a servlet registered in the webdriver grid server, handles that request: it knows on which node the given test ran and asks the video service running there to stop and save the video.
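The exact endpoint exposed by that servlet isn’t given in the post; purely as an illustration, the client side of such a request could be a plain HTTP call like the following, where the servlet path, parameters, and hub host are hypothetical:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public final class VideoStorageClient {

    // Hypothetical sketch: ask the grid hub's video servlet to stop the recording
    // for the given session and store it under the failed test's name.
    public static void requestVideoStorage(String hubHost, String sessionId, String testName)
            throws IOException {
        String url = "http://" + hubHost + ":4444/grid/admin/VideoServlet"
                + "?session=" + URLEncoder.encode(sessionId, StandardCharsets.UTF_8.name())
                + "&name=" + URLEncoder.encode(testName, StandardCharsets.UTF_8.name());

        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setRequestMethod("GET");
        int status = connection.getResponseCode();
        if (status != HttpURLConnection.HTTP_OK) {
            throw new IOException("Video storage request failed with HTTP " + status);
        }
        connection.disconnect();
    }
}
```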

Now, whenever a browser test fails in a build, a link to the video recorded during its execution is provided as part of the test’s error information, alongside the failed assertion message and the stack trace.

We’d like to thank Kevin Menard for his support in the Selenium users group and for being available to help whenever someone needed it.
