
Do Not Automate Everything


No, we cannot automate every test.

In an ideal world, we would be able to automate every type of test. We would be able to cover every edge scenario, all functionality, everything. However, it may actually be in your interest not to automate everything. Not convinced? Read on.

Why Not Automate Everything?

Firstly, let’s take a second to understand why you would want to write an automated test in the first place. We write tests so that we can execute them as many times as we want. This gives us confidence that our functionality has not changed and is still working. If we take this as the core reason for writing an automated test, we can draw some clear conclusions.

Let’s consider the questions below:
Should we write tests for functionality that will change a lot?
Should we write tests for mechanics that will only run once?
Should we write tests for features which have a lot of dependencies?

How about these questions:
Should we write tests for UI?
Should we write tests for external dependencies?
Should we write tests which take a long time to run?

That’s a lot of questions. Let’s try to answer them.

Should we write tests for functionality that will change a lot?

The stability of a test is one way of measuring its success. Let’s expand on what this means. When a test fails due to an actual deviation from expected behaviour, the test is stable and of high value. However, if the same test fails due to environment issues, dependencies and so on, one may conclude that the test creates more problems than it solves. With this in mind, how do we handle a test that fails because the functionality changed but the test was never updated to reflect it?

Well, this is actually a good thing. If a test fails after functionality has changed, it means our test is robust enough to detect that. So yes, we should absolutely write tests for functionality that will change. Why? How else are you supposed to track a change in functionality if you don’t write a test for it?

Should we write tests for mechanics that will only run once?

No, no, absolutely not. Let’s assume there is some function, some feature that will only run once (a DB migration, for instance) – should we automate this? Well, one of the biggest values we get from automating a test is that it will run many times, so why write a test for something that is designed to run only once? One may argue that writing a test means it no longer needs to be done manually. On the flip side of the coin, one may argue that if you do write a test for this, you have to make sure it only runs once. This can go either way, but in my head it goes against the core reason for writing a test.

Should we write tests for features which have a lot of dependencies?

Measuring the success of an automated test is difficult. There are no clear metrics we can capture that identify the success of an automated test. We can, however, use data gathered over time to decide whether a test should be kept or deleted. For instance, if a test has caught actual bugs, is fast, can be extended and so on, these all add to the value of keeping it. Now, why does this matter when writing a test with a lot of dependencies?

If we are dependent on something, an API endpoint for instance, and that endpoint goes down, then our test will fail. However, the failure will not reach the assertion level; it will fail before that. This introduces instability into our tests. In other words, when a test fails, people may assume it would have passed had the endpoint been up. This sort of ‘thinking’ leads to a drop in confidence in the test. In short, no, we should not write tests which have dependencies. However, we may be able to do something about this.

What if we write a hook in the test which checks whether the endpoint is up? We can then exit early and skip the test if the endpoint is down. This means that when the test does fail, there is a greater chance the failure is functionality related as opposed to dependency related. Therefore, we should write ‘tests with dependencies’, but only if we are able to extract the dependency out, as in the sketch below.
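As a rough illustration, here is a minimal Python/pytest sketch of that idea. The health-check URL, endpoint and assertion are all hypothetical; the point is that the dependency check happens before the test, so a real failure is more likely to be functional.

```python
import pytest
import requests

# Hypothetical health-check URL for the API this test depends on.
ORDERS_API_HEALTH = "https://api.example.com/health"


def endpoint_is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the dependency responds with a 2xx status."""
    try:
        return requests.get(url, timeout=timeout).ok
    except requests.RequestException:
        return False


# Skip (rather than fail) when the dependency is unreachable, so any
# remaining failures are more likely to be genuine functional regressions.
@pytest.mark.skipif(
    not endpoint_is_up(ORDERS_API_HEALTH),
    reason="Orders API is down - skipping dependent test",
)
def test_order_total_is_calculated():
    response = requests.get("https://api.example.com/orders/123")
    assert response.json()["total"] == 42.50
```

A skipped test is reported as skipped rather than failed, which keeps the dependency outage visible without eroding confidence in the test itself.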

Should we write tests for UI?

Traditionally a test is written to check functionality, not aesthetics or the look and feel of an application. Why? Because a computer is not able to judge whether something looks right. It is also worth pointing out that UI can change, and it can change a lot. This would add instability to your tests.

In this instance, it would be a firm no. However, we could write UI tests which serve as warnings when the test detects an unexpected change in the UI. The test could then take a screenshot and attach it to its test report. This would mean a person could go over the UI results and check whether anything has ‘broken’.
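A small sketch of that approach using Selenium in Python might look like the following. The URL, screenshot file name and title check are made up for the example; the test only asserts on things a machine can judge and leaves the visual check to a human.

```python
import pytest
from selenium import webdriver

# Hypothetical page under test and screenshot location.
HOME_URL = "https://example.com"
SCREENSHOT_PATH = "home_page.png"


@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()


def test_home_page_snapshot(browser):
    browser.get(HOME_URL)
    # The screenshot is collected for a human to review later;
    # the test itself does not try to judge whether the page 'looks right'.
    browser.save_screenshot(SCREENSHOT_PATH)
    assert "Example" in browser.title
```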

Should we write tests for external dependencies?

We have spoken about dependencies, but what about external dependencies over which you have no control? If our own dependencies can create instability in our tests, imagine the level of instability you would have with external dependencies. No, we should not write tests that rely on dependencies we have no control over.

However, when considering external dependencies, we may decide to use mocks. If we mock an external dependency, we can automate that test because the status of the external dependency no longer matters.
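For instance, a minimal Python sketch using unittest.mock, with a made-up exchange-rate service standing in for the external dependency:

```python
from unittest.mock import MagicMock, patch

import requests


def get_exchange_rate(currency: str) -> float:
    """Production code that calls a third-party API we do not control."""
    response = requests.get(f"https://rates.example.com/{currency}")
    return response.json()["rate"]


# requests.get is replaced with a mock, so the real external service is
# never hit and the result does not depend on its availability.
@patch("requests.get")
def test_get_exchange_rate_reads_rate_field(mock_get):
    mock_get.return_value = MagicMock(json=lambda: {"rate": 1.25})
    assert get_exchange_rate("GBP") == 1.25
    mock_get.assert_called_once_with("https://rates.example.com/GBP")
```

The trade-off is that a mocked test no longer proves the integration works, only that our own code handles the expected response correctly.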

Should we write tests which take a long time to run?

Speed is a very important factor for automated tests. Do not automate a scenario that takes a very long time to run. Why? Going back to the earlier point, if a test takes too long to run, one may ask how it serves the goal of ‘running many tests, many times’. Speed is not the primary consideration when talking about running many tests, many times, but it is self-defeating if running those tests takes substantially long.

Let’s also assume we must automate a scenario which will most likely involve ‘waits’ and ‘visible until’ methods – most likely UI tests. How can we get around this? We may decide to break the big, slow test into smaller tests. We could then run these tests in parallel to help reduce the overall run time, as sketched below.
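As a rough sketch, a long journey could be split into independent test functions and run in parallel with something like the pytest-xdist plugin. The helpers below are hypothetical stand-ins for the individual slices of the original slow test.

```python
import pytest

# Hypothetical helpers, each exercising one independent slice of the journey
# instead of one long browser session full of waits.
def search_for_product(name: str) -> bool:
    return bool(name)


def add_to_basket(product_id: int) -> bool:
    return product_id > 0


# Smaller, independent tests instead of one big end-to-end test.
# With the pytest-xdist plugin they can run in parallel, e.g.:
#   pytest -n 4 test_checkout.py
def test_search_returns_results():
    assert search_for_product("laptop")


def test_product_can_be_added_to_basket():
    assert add_to_basket(123)
```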

In Summary

An automated test is not written to lessen the burden of manual testing; it is written so a test can be run many times. We do not automate to replace manual testing. Tests need to be clean and dependency free. They should be stable and should test functionality which may change. UI tests may need a person to validate the results, since look and feel are difficult to judge from an automated test. Mocking out dependencies where possible helps abstract away responsibilities we do not care about. The speed of a test is also very important.

In this post we went through some (of many) conditions where you would not automate – though with some tweaks, perhaps you could. What do you think, do you agree or disagree with any of my points? Let me know in the comment section below.

Mo

I am a passionate tester, father, husband and gamer. I love to write blogs about Software Testing and generally contribute back to the Software Testing world.
