Bad Software Testers

Not all testers are the same; some are really bad software testers.

By profession, I am a Software Tester, QA and Automation Engineer, and I have met some really bad ones.

With this said, what constitutes a bad tester? The answer is a combination of poor traits and attitudes.

This post is not meant to discourage or dishonour Testers. It is meant to highlight my experience with poor testers and how I approached the problem.

Bad Software Testers

In this post, let’s look at some of the key traits which sadly define a bad Software Tester, and the solutions that I tried to implement.

The ‘Forgetful’ One

It’s common for a person to forget things; that is normal. In a professional environment, however, forgetfulness cannot always be ignored.

In this instance, the Tester will very commonly forget to make backups of test plans, test data and reports. They will also raise issues and bugs without supplying any steps to reproduce. When asked what the steps are, sometimes their response is that they don’t remember.

Their approach to reporting is usually very junior. This can lead to wasted time in investigations, spikes and other time-sensitive work.

Fortunately, this trait is common among Junior Testers and usually remedies itself over time. The best approach here is to guide and train the Tester.

The ‘I Don’t Want To’ One

We have all encountered someone with an ‘I don’t want to’ mentality; sadly, this can be a common trait amongst Testers of all levels.

Anyone who broadcasts this type of attitude usually defers any work given to them and is not a strong team player.

That said, they also favor tasks which appear to be easy, straightforward, or where someone else has done the legwork.

Resolving this can be challenging. The approach which I have seen work best is to simply talk to the Tester and try to find out why they are not happy to pick up tasks.

Most of the time it is due to a lack of confidence, a lack of domain knowledge or an unwillingness to work with people who seem domineering. For me, talking has always helped to resolve this attitude.

The ‘I Did This, I Did That’ One

The ‘I did this, I did that’ trait belongs to someone who is more concerned with their own work and does not take the time to care about or appreciate the effort that others are putting in. This particular trait makes the Tester difficult to work with.

Testers with this view are usually found in Waterfall environments, because Waterfall promotes working in batches and phases rather than in sync.

To resolve this, I usually approach the Tester and try to understand why they appear to have strong feelings about what they and others did. Usually it comes down to their view on ‘what is fair’.

I try to resolve this by simply listing out the tasks that they and others have done. The focus here is not to compare; instead, I do this in the hope of building a bridge and showing that no one task can be sufficiently completed by only one person.

With time, this leads to a greater understanding of ‘US vs ME’.

The ‘My Way Or Highway’ One

It becomes very difficult to work with someone who is strongly passionate and adamant about their views and beliefs, especially when they are not open to new ideas.

Sadly, this is usually found amongst the more senior members of a team. Leaning on experience, senior Testers can sometimes ignore simple solutions and instead employ more difficult and complex ones.

Working with someone like this can almost always feel like a battle. It feels like a fight of diplomacy, ideas and sacrifice.

Passion plays a strong part in almost all discussions with someone who holds very strong views. When dealing with someone like this, I try to push my own ideas in only some discussions and concede the many others.

The idea here is to slowly win the confidence of the Senior Tester, which in time becomes a strong relationship. With some luck, this leads to more comfortable, open-minded discussions from both parties.

The ‘Whatever’ One

There is always someone who is so relaxed and chilled that they do not care unless their salary is on the line.

Usually this means that the person does not take an active role in trying to resolve issues, take part in conversations or proactively investigate better solutions to existing problems.

They are very content with their current tasks and are more than happy to just coast along.

I have always found it to be a challenge when working with someone like this. My approach has been to try and give the Tester some small responsibilities. I have found that this helps to motivate the Tester to think about the problem and be more active in other roles.

The ‘Whatever’ attitude seems to disappear when Testers see that their contributions and efforts are valued.

The ‘I’m Not A Developer’ One

Having to look at code is becoming more and more common for Testers; however, this becomes a problem when the Tester is simply not happy to look at or read code. The Tester in this instance is more than happy to get someone a little more intimate with coding to look at it instead.

This means the task takes longer and this could potentially hold up progress on other tasks.

I have found that pairing, training and teaching are the best tools to help someone get up to speed with coding.

Software Testing is Tough

It’s long, it’s hard but it’s worth it

Software testing is tough. Software Testers find bugs, write tests, firefight issues and teach, and are expected to learn quickly; as a result, it can become a difficult experience.

A Software Tester constantly faces many hardships; let’s have a look at some of them.

Hardships of a Tester

On a day-to-day basis, here are some of the key struggles that a Tester may face:

Testers VS Developers

At times, it may feel like a Tester is constantly having to stand their ground, make their point and convince Developers about bugs and non-functional issues.

First of all, let’s make one point clear: Testers and Developers are important for different things. The real benefit comes from the synergy of both roles.

Sadly, this silly idea of Testers VS Developers is simply unhealthy. This sort of thinking breeds segregation and isolation, and it encourages a ‘blame’ and ‘fault’ culture.

Unfortunately I have been exposed to this, most noticeably in a Waterfall environment. In a Waterfall environment, since the whole development is done in stages, there is little to no day-to-day interaction between Developers and Testers. This can lead to a very us vs them mentality.

Fortunately, Agile actively promotes mixing Developers and Testers, somewhat mitigating the idea of Testers VS Developers. In an Agile environment, Testers VS Developers becomes Testers AND Developers.

Regression: The Silver Bullet

What is Regression?

For starters, it is not there to ensure that an application is bug-proof. Regression means one single, simple thing: is everything still working, or has a new issue been introduced? Sadly, not everyone understands this.

In some instances, Regression is seen as the ‘Application God Process’ which will catch all the bugs, find all the holes and make everyone super happy.

What do I mean by this?

Simply put, a misunderstanding in Regression stems from people, not the process.

In an ideal world, Regression should not be a laborious, long or a difficult task. It should be quick, painless and swift. People on the other hand have a magical gift to turn Regression into a nightmare.

At a moment’s notice, Testers are asked to consider new browsers and new devices, add a hundred more test scenarios, take on extra testing to cover for other teams, and so on. It feels like people forget how draining Regression can be.

Performing Regression is an important aspect of a Tester’s life. However, I have seen first hand how difficult it can be when last-minute requirements, favors and the need to test everything make things difficult.

Proxy PO and BA

Over time, since Testers go through the application more times than even their unstable UI tests fail, Testers become domain knowledge experts.

Initially, a Tester mostly needs to work with Developers (and other Testers) to test an application and ensure that it works. Over time, the role can become more challenging.

It becomes more and more difficult to devote time towards actual testing. Instead you may find yourself in meetings, talking about plans, strategies and much more. On the surface this is a great thing. This gives the ability to influence decisions, technology and perhaps a promotion.

However, it also drags the Tester away from their actual work, i.e. testing.

Software Testing is Tough! Well, Is It?

Most certainly, yes, Software Testing is tough. It also seems clear that Testers are team players; how could they not be?

Developers, Testers, POs and BAs all need to work together to convert a well-thought-out plan into a delivered product. If the processes are taken care of, the team will be happy.

Testers are part of the backbone of any software development team and should be treasured.

I Pushed a Bug in Production

It’s inevitable: at some point you will push a bug to production

Statistically speaking, at some point you will push a bug to production. It may be something small, it may be something big. It might be a small UI issue, it may be a rather large functional issue. Sadly, at some point it will happen.

So, how can we stop this, how can this be mitigated?

Stop Bugs going in Production

Unknowingly, I have pushed bugs to production. Not my favorite confession, but it has happened. It wasn’t the most joyful of feelings, but you live and learn.

Here is a list of actions I live by to help reduce the instances of bugs getting past me and into production.

Play ‘The 5 WHY’ Game

This is perhaps one of my most powerful techniques. The ‘5 WHY’ approach is simple: you ask yourself five WHY-based questions which help to diagnose the root cause of a problem.

For instance, let’s assume a bug around user login made it into production.

  1. WHY was the user not logged in?
  2. WHY was the user account not recognized?
  3. WHY did the user account have a duplicate user ID?
  4. WHY was the database able to store multiple user accounts with the same ID?
  5. WHY was the ID code generator not producing unique IDs?

Following the above line of questioning helps to identify a flaw in the process. Perhaps if there had been an automated test for ID generation, the bug may not have made it into production.
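To make that last point concrete, here is a sketch of what such an automated uniqueness check might look like. Everything here is hypothetical: `IdGeneratorCheck` and its UUID-based `nextId` simply stand in for whatever the real ID generator was.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

class IdGeneratorCheck {

    // Hypothetical stand-in for the real ID generator from the 5 WHY example.
    static String nextId() {
        return UUID.randomUUID().toString();
    }

    // Returns true only if 'count' freshly generated IDs are all distinct.
    static boolean generatesUniqueIds(int count) {
        Set<String> seen = new HashSet<>();
        for (int i = 0; i < count; i++) {
            if (!seen.add(nextId())) {
                return false; // a duplicate ID: the exact bug from the example
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(generatesUniqueIds(10_000) ? "all unique" : "duplicate found");
    }
}
```

Had a check like this existed in CI, WHY number five would have been caught before the login bug ever reached production.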

The ‘5 WHY’ game has helped me to think about bug prevention in production. It has also helped me to triage situations where a bug has gone through.

Catch It, Automate It

I am not a super fan of this particular approach, but one of the best ways to ensure that the same bug does not make it through again is to write an automated test for it. This approach may, however, unnecessarily balloon the number of tests that you have. You may end up with many tests which run every time but add little value, since they each capture something very specific.

On the other hand, if we don’t write a test for it, it may slip through again. That’s something no one would want on their conscience.

Big Bang Integration

Test your code with everyone’s code. Do not merge directly to some master code base, merge to a release code base and test it. If your release code works, put it in master.

A very simple approach to limiting bugs in production is to be able to act as if a bad push never happened; one way to achieve this is with multiple code bases. At any point in time, the production code that is running will be the latest version of master. In an ideal world, the master version is deemed stable and functionally working. When pushing new code to production, it is better to push a release version instead.

Pushing a release version somewhat reduces bugs in production. If a bug is found, you can quickly roll back to a version of your code which is deemed better.

Process Failure, Not People

The biggest and most important thing to identify is this:

Bugs in production are not the result of a person’s failure; they are the fault of a process

Sadly, at some point, most likely once in a blue moon, you will push a bug. You are human; you will make mistakes or, more likely, you will miss things.

It is far better (and perhaps healthier) to remedy ‘preventing bugs in production’ by putting in place a process which can help to prevent it.

Processes may include:

  • Writing automated tests
  • CI jobs and pipelines
  • Release processes

In closing, in an ever-growing Software Industry, trying to prevent bugs in production is pretty much impossible.

But we can give it our best.

Let’s TDD and BDD

What is the best approach of writing tests?

TDD vs BDD. Have you heard of either Test Driven Development (TDD) or Behaviour Driven Development (BDD)? These are perhaps the most common and widely used practices when it comes to writing tests. Firstly, let’s discuss why anyone would want to follow a testing practice at all. Why not just write tests in whatever manner you want?

To Practice or Not To Practice

Let’s assume you do not follow any practice or principle. This means you are probably very happy with your tests, assuming they run and pass. Let’s also assume no one else will ever have to maintain your tests.

Now let’s assume you don’t touch your tests for six months; there is a good chance you will not be able to understand your own code. This is because the approach you took was very ‘in the moment’. This sort of approach can be very fragile.

On the other hand, let’s assume you did follow some principle or structure that is well known in the software industry. Since you’re following a universal language, there is a good chance that you and others will be able to read and understand the structure and purpose of your tests.

Following a test principle means that we can maintain structure, formula and purpose of a test. It means we can give reason and purpose to a test.

What is TDD?

TDD is the practice of writing a test before writing the implementation code. We write the test and run it, expecting the test to fail. The test fails because the requirement it checks for does not exist yet. Once the implementation has been written, re-running the test should result in it passing.

Enforcing TDD in your testing process makes it easy to establish what the requirements are. You can easily capture requirements in the form of tests, and the tests can also act as exit criteria for any given task. Also, if TDD was followed throughout a project, a failing test is the same as saying that a requirement has failed.
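The red-green cycle described above can be sketched in a few lines. The requirement here (`normalizeUsername` trims and lower-cases a username) is invented purely for illustration:

```java
class TddCycleSketch {

    // Step 2 ("green"): the implementation, written only AFTER the test below
    // existed and failed. The requirement itself is a made-up example.
    static String normalizeUsername(String raw) {
        return raw.trim().toLowerCase();
    }

    // Step 1 ("red"): the test is written first. Run against an empty stub it
    // fails, which proves the requirement is not yet met; once the
    // implementation above exists, re-running it passes.
    static boolean requirementIsMet() {
        return "alice".equals(normalizeUsername("  Alice "));
    }

    public static void main(String[] args) {
        System.out.println(requirementIsMet() ? "requirement met" : "requirement failed");
    }
}
```

The test doubles as documentation: anyone reading `requirementIsMet` can see exactly what behaviour the task demanded.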

What is BDD?

TDD was invented to help capture and manage requirements; however, TDD tests were designed to be written as low-level tests, such as Unit and Integration tests. This still did not help anyone other than developers understand what the captured requirements are, because low-level tests are understood by developers, not Product Owners (PO) or Business Analysts (BA). This is where BDD steps in.

BDD tests are essentially ‘English’ friendly scripts which can be written by anyone using Gherkin. Gherkin is a very simple language which can be used to express a scenario. For instance:

Given <a condition>
When <an action or event happens>
Then <expect a response>

The above is a simple view of what a Gherkin scenario looks like; scenarios such as this can be written by anyone, with any level of technical ability. Let’s enhance the above scenario a little:

Given I am on the login page
When I enter valid login credentials and press the login button
Then I should land on the accounts page

It should be clear that the above scenario can be written and understood by anyone. The requirements captured in the scenario above can also be shared with anyone.

The biggest benefit of BDD is that it helps to bridge the gap in technical ability between a BA, PO and Developer.


TDD vs BDD. Clearly, TDD and BDD are both great techniques to incorporate into your testing process, as they bring different benefits. One allows you to capture requirements in tests before writing any code; the other helps to universally broadcast understanding. They both help to maintain tests and testing requirements. The biggest benefit one can gain from these practices is to use them both together.

In a given project, the BA would write stories or tasks in a task-tracking tool for teams of developers to pick up. These usually translate into tests written by developers. If, on the other hand, BDD tests were written by a BA before any code was implemented, this would follow both BDD and TDD. Ideally, writing high-level tests before writing any implementation code should clearly give all team members the expected behaviour to be implemented.

TDD vs BDD is a very wrong framing. These two concepts should not be seen as competing; instead, they should be seen as complementing each other.

E2E Code Coverage

E2E Code Coverage, is that even possible?

The concept of code coverage is used to ensure that the main application code has been tested through Unit Tests. End to End (E2E) tests do not run code in isolation, they run on the UI. If E2E tests run on the UI level and have no view on the application code, is it possible to see how much code is covered by E2E tests? Also, should E2E tests have any level of code coverage? Is E2E code coverage even possible?

E2E Code Coverage – Should UI Cover Code?

Why do we use Unit Tests to cover code? We do it because it is easy, it gives us confidence in our code, and it allows us to make changes knowing that if anything breaks, our tests will catch it. An E2E test, however, cannot do this since it cannot see the code. An E2E test only sees the outer shell of the application; it can only see the UI.

This almost sounds like E2E tests are not designed to see the code! Is this a good thing or bad?

Being able to see the code from an automation perspective is great. We can be very careful, delicate and, more importantly, precise about what we test. However, Unit Tests cannot test an application at a larger integration level. This means that although Unit Tests can see the code, they cannot ensure that two or more components are working together. E2E tests, on the other hand, are able to test the interactivity of all components of an application in one go.

With this said, should an E2E test provide code coverage? No, it should not. E2E tests should sit away from the code. This is not a disadvantage; it is an advantage. E2E tests have the flexibility of knowing how an application works in its entirety. To do this, E2E tests need to sit outside the application and observe it (just like a real customer/client).

E2E Tests Can Do Functionality Coverage

Let’s assume for a moment that your E2E tests need to provide some level of coverage for your main code. If an E2E test cannot see your main code base, how can it provide anything like code coverage?

Well, we may not be able to do it directly, but we may be able to do it indirectly.

Let’s assume we have captured all the functionality of an app through the medium of documentation. Let’s also assume that the functionality captured is 100% of the functionality of the app. Finally, let’s assume that we have written a test covering each of the functional points in our documentation. Does this now mean that we have 100% indirect code coverage of our application through E2E tests?

You may not agree; code coverage dictates that we actually touch the code, and in this instance we are only touching it indirectly. But if you agree that indirectly covering the code is as good as touching the code itself, then perhaps we can consider functionality coverage via E2E tests to be the equivalent of code coverage through Unit Tests.
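One hedged way to picture ‘functionality coverage’ is as a simple ratio: documented functional points versus the ones that have at least one E2E test. The functional points below are invented for illustration:

```java
import java.util.Map;

class FunctionalityCoverage {

    // Percentage of documented functional points that have at least one E2E test.
    static double coverage(Map<String, Boolean> pointHasTest) {
        if (pointHasTest.isEmpty()) {
            return 100.0; // nothing documented, nothing to cover
        }
        long covered = pointHasTest.values().stream().filter(has -> has).count();
        return 100.0 * covered / pointHasTest.size();
    }

    public static void main(String[] args) {
        // Hypothetical functional points pulled from the documentation.
        Map<String, Boolean> points = Map.of(
                "login", true,
                "subscribe", true,
                "logout", false,
                "reset password", false);
        System.out.println(coverage(points) + "% of functionality covered"); // 50.0%
    }
}
```

The measure is only as good as the documentation behind it: if the documented points are not really 100% of the app’s behaviour, the percentage overstates the coverage.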

What do you think?

Do Not Automate Everything

No, we cannot automate every test.

In an ideal world, we should be able to automate every type of test. We should be able to cover every edge scenario, all functionality, everything. However, it may be in your interest not to automate everything. Why not? Read on.

Not Automate Everything – Why

Firstly, let’s take a second to understand why you would want to write an automated test. We write tests so that we can execute them as many times as we want. This gives us confidence that our functionality has not changed, it is still working. If we assume this to be the base reason for writing an automated test, we can make some clear assumptions.

Let’s consider the questions below:

  • Should we write tests for functionality that will change a lot?
  • Should we write tests for mechanics that will only run once?
  • Should we write tests for features which have a lot of dependencies?

How about these questions:

  • Should we write tests for UI?
  • Should we write tests for external dependencies?
  • Should we write tests which take a long time to run?

That’s a lot of questions. Let’s try to answer them.

Should we write tests for functionality that will change a lot?

The stability of a test is one way of measuring its success. Let’s expand on what this means. When a test fails due to an actual deviation from expected behaviour, the test is stable and of high value. However, if the same test fails due to environment issues, dependencies and so on, one may conclude that the test is creating more problems than it solves. With this in mind, how do we tackle a test where functionality has changed but the change was not captured in the test?

Well, this is actually a good thing. If a test fails after functionality has changed, it means that our test is robust enough to identify the change. So yes, we should absolutely write tests for functionality that will change. Why? Well, how are you supposed to track a change in functionality if you don’t write a test for it?

Should we write tests for mechanics that will only run once?

No, no, absolutely not. Let’s assume that there is some function, some feature, that will only run once (a DB migration, for instance); should we automate this? Well, one of the biggest values we get from automating a test is that it will run many times. So why write a test for something that is designed to run only once? One may argue that writing a test means it no longer needs to be done manually. On the flip side, one may argue that if you do write a test for this, you have to make sure it only runs once. This can go either way. In my head, it goes against the core reason for writing a test.

Should we write tests for features which have a lot of dependencies?

Measuring the success of an automated test is difficult. There are no clear metrics we can capture which identify the success of an automated test. We can, however, use data gathered over time to decide whether a test should be kept or deleted. For instance, if a test has caught actual bugs, is fast, can be extended and so on, these all add to the value of keeping it, and can also be used to measure success. Now, why is this important when writing a test with a lot of dependencies?

If we depend on something, an API endpoint for instance, and that endpoint goes down, then our test will fail. However, the failure will not reach the assertion; the test will fail before that. This introduces instability into our tests. In other words, when a test fails, people may assume that the test would have passed but the endpoint was down. This sort of thinking leads to a drop in confidence in the test. In short, no, we should not write tests which have dependencies. However, we may be able to do something about this.

What if we write a hook in the test which checks to see if the endpoint is up? We can then prematurely exit and ignore the test if the endpoint is down. This means that if the test fails, there is a greater chance the failure is functionality-related as opposed to dependency-related. Therefore, we should write ‘tests with dependencies’, but only if we are able to factor the dependency check out.
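The ‘hook’ idea can be sketched as a guard that runs before the real test body. The names here are mine, and a real suite would use its framework’s native skip mechanism (JUnit 4’s `Assume.assumeTrue`, for instance) rather than a hand-rolled one:

```java
import java.util.function.BooleanSupplier;

class DependencyAwareTest {

    enum Outcome { SKIPPED, PASSED, FAILED }

    // Runs 'test' only when 'endpointIsUp' reports the dependency as available;
    // otherwise the test is skipped rather than counted as a failure.
    static Outcome runIfDependencyUp(BooleanSupplier endpointIsUp, BooleanSupplier test) {
        if (!endpointIsUp.getAsBoolean()) {
            return Outcome.SKIPPED; // dependency down: not a functional failure
        }
        return test.getAsBoolean() ? Outcome.PASSED : Outcome.FAILED;
    }

    public static void main(String[] args) {
        // Stubbed checks; a real hook might issue an HTTP HEAD request to the endpoint.
        System.out.println(runIfDependencyUp(() -> false, () -> true)); // SKIPPED
        System.out.println(runIfDependencyUp(() -> true, () -> true));  // PASSED
    }
}
```

Separating SKIPPED from FAILED is the whole point: a skipped test keeps its credibility, while a test that fails for dependency reasons erodes it.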

Should we write tests for UI?

Traditionally, a test is written to check functionality, not the aesthetics or the look and feel of an application. Why? Because a computer is not able to judge whether something looks right. It is also worth pointing out that the UI can change, and change a lot; this adds instability to your tests.

In this instance, it would be a firm no. However, we could write UI tests which serve as warnings when the test detects an unexpected change in the UI. The test could then take a screenshot and collect it as part of its test report. This would mean a person could actually go over the UI results and check whether anything has ‘broken’.

Should we write tests for external dependencies?

We have spoken about dependencies, but what about external dependencies which you have no control over? If our own dependencies can create instability in our tests, imagine the level of instability you would have with external ones. No, we should not write tests which rely on dependencies we have no control over.

However, when considering external dependencies, we may decide to use mocks. If we mock an external dependency, we can automate the test, as the status of the external dependency no longer matters.
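Here is a sketch of the idea with no mocking library at all: put the external dependency behind an interface and hand the test a fixed, controllable implementation. `ExchangeRateService` is an invented example, not anything from the post:

```java
class MockedDependencySketch {

    // The external dependency, hidden behind an interface.
    interface ExchangeRateService {
        double rateFor(String currency);
    }

    // Production code under test: converts an amount using the service.
    static double convert(double amount, String currency, ExchangeRateService service) {
        return amount * service.rateFor(currency);
    }

    public static void main(String[] args) {
        // The mock: a fixed rate, so the test no longer cares whether the
        // real external service is up, slow or returning changing data.
        ExchangeRateService mock = currency -> 2.0;
        System.out.println(convert(10.0, "USD", mock)); // 20.0
    }
}
```

A dedicated library such as Mockito can generate these stand-ins for you, but the principle is the same: the test controls the dependency instead of depending on it.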

Should we write tests which take a long time to run?

Speed is a very important factor for automated tests. Do not automate a scenario which will take a practically long time to run. Why? Going back to the stability point from earlier, if a test takes too long to run, one may ask how it aids the goal of ‘running many tests, many times’. Speed is not the main consideration when running many tests, many times, but it would be very self-defeating if running them took substantially long.

Let’s also assume we must automate a scenario which will most likely involve ‘waits’ and ‘visible until’ methods, most likely a UI test. How can we get around this? We may decide to break the big, slow test into smaller tests. We could then run these tests in parallel to help reduce the overall run time.
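The split-and-parallelise idea can be sketched with a plain thread pool. The three check methods are trivial stand-ins for the smaller tests carved out of one big slow test; a real suite would let its runner (e.g. a parallel-enabled build configuration) do this instead:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelTestsSketch {

    // Stand-ins for the smaller tests carved out of one big slow test.
    static boolean checkLogin()     { return true; }
    static boolean checkSubscribe() { return true; }
    static boolean checkLogout()    { return true; }

    // Runs all checks concurrently; overall time approaches the slowest
    // single check instead of the sum of all of them.
    static boolean runAllInParallel() {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            List<Callable<Boolean>> tasks = List.of(
                    () -> checkLogin(),
                    () -> checkSubscribe(),
                    () -> checkLogout());
            for (Future<Boolean> result : pool.invokeAll(tasks)) {
                if (!result.get()) {
                    return false;
                }
            }
            return true;
        } catch (Exception e) {
            return false; // interrupted or a check threw: treat as failure
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runAllInParallel() ? "all passed" : "a test failed");
    }
}
```

The caveat is that the smaller tests must be independent of each other; shared state between them reintroduces the instability the split was meant to remove.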

In Summary

An automated test is not written to lessen the burden of manual testing; it is written so a test can be run many times. We do not automate to replace manual testing. Tests need to be clean and dependency-free. They should be stable and should test functionality which may change. To validate UI tests, we may need a person to review the results, since aesthetics are difficult to judge from an automated test. Mocking out dependencies where possible helps to abstract away responsibilities we do not care about. The speed of a test is also a very important consideration.

In this post we went through some (of many) conditions where you would not automate; however, with some tweaks, perhaps you could. What do you think? Do you agree or disagree with any of my points? Let me know in the comment section below.

Writing E2E UI Tests when UI is not ready

Writing E2E UI tests when the UI is not ready is not an easy problem to solve. I’ll get right to the point: it is tough writing a UI-driven test for an application before the UI is ready. When a new application is in development, the first types of tests written are unit tests. Trying to write a UI-driven test, however, can be difficult. How do you write an E2E test for an application where the UI, functionality and requirements may change every time new code is committed? Well, allow me to tell you how I approached this problem.

E2E UI Tests for New Applications – Let’s Understand the Problem

Let’s imagine for a second that you are just about to start a new project for which there is no UI, no backend, no tests, nothing. Simply said, you are currently in the planning stage of the project. You are then tasked with writing Acceptance Tests for the project. In the absence of any UI, how do you write any UI driven tests?

Or how about the situation where the UI is mature. However, the functionality for which you have been asked to write a test for does not exist yet. Again, in both this and the previous instance, how do you approach writing UI driven tests?

Should We Just Wait?

One approach I had considered was to simply wait until the UI was ready. This felt like the perfect solution to a problem which I, as an Automation QA, had no real control over. Over time I noticed that this was not a sensible solution, on the grounds that if the functionality changed, there was no test to capture it. Naturally, a change in priorities in an Agile environment also meant that writing the UI test just kept getting pushed back, since the requirements had already been delivered.

After considering the ‘wait until the UI is ready’ method, it turned out that ideally I should have either written a failing Acceptance Test (used TDD) or written the test right after the functionality was delivered.

In hindsight, waiting did not look like the right solution.

What Can We Do Then?

So, if waiting was not the right solution, and it was not possible to write a passing test for the UI in parallel with development, how could I approach and resolve the issue?

At the time the solution did not seem obvious to me, but I figured out that it was to actually write a failing test!

Writing failing E2E UI tests meant that the requirement was captured and documented in the form of a test. The test then ran as part of the CI build, serving as a warning. I decided not to fail the build on this test but to treat it as a warning instead. Once the actual implementation was complete, the test started to pass. At that point, I removed the warning status and instead started to fail the build whenever the test failed.
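The warn-then-fail policy described above can be sketched as a small decision function. The shape is mine; in practice this might be a CI flag or a test annotation rather than code:

```java
class WarnThenFailPolicy {

    // While the feature is undelivered, a failing acceptance test only logs a
    // warning and the build stays green; once delivered, the same failure
    // breaks the build.
    static boolean buildPasses(boolean testPassed, boolean featureDelivered) {
        if (testPassed) {
            return true;
        }
        if (!featureDelivered) {
            System.out.println("WARN: acceptance test failing (feature not delivered yet)");
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(buildPasses(false, false)); // true: warning only
        System.out.println(buildPasses(false, true));  // false: break the build
    }
}
```

The flip of `featureDelivered` is the moment the test graduates from documentation of a future requirement to a guard on a delivered one.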

And that was it.

You Don’t Need Cucumber for BDD

You don’t need Cucumber for BDD. Oh boy, I can already sense the loving embrace of you, the reader. Let me start by saying one thing: I absolutely love Cucumber. It is an amazing tool and serves an amazing purpose. Cucumber allows one to easily express the behaviour of a system through plain text (expressed in Gherkin), allowing many non-technical people to understand what is going on. You can additionally generate reports more easily, create a living, breathing test spec and possibly a better debugging mechanism. But hold on, do you need Cucumber for BDD?

Cucumber for BDD – What is BDD?

So, what is BDD? BDD is an acronym for ‘Behaviour Driven Development’. Writing a test expressed in plain English, without exposing any code, and describing the condition under test is BDD. It allows you to ‘describe’ a scenario and represent it in a descriptive file. Let’s have a look at a simple example:

Given I navigate to
When I see the 'Join blog subscription' pop-up
Then I will enter my email address
And I will subscribe
And I will see a notification message

The above is a very simple scenario, captured in simple English. The scenario has been written using Gherkin, expressing a condition (Given), an event (When) and an action (Then). As opposed to TDD (Test Driven Development), BDD does not instruct us to write a failing test first. A BDD test can be written after the main application code has been written.

Cucumber runs tests using Feature files, which instruct a developer to write scenarios using Gherkin syntax, almost forcing BDD. Each step in the Feature file is then matched to a method in a code file somewhere. Now, back to the focus of this post: if Gherkin is needed to express scenarios and Cucumber encourages BDD, do we need Cucumber to express a test in BDD?

BDD in a Java Test

Trying to answer ‘do we need Cucumber for BDD?’ can be difficult. Let’s take a look at a test written with WebDriver, in Java:

public class TheTestRoomTest {
    public void testChromeSelenium() {
        System.setProperty("webdriver.chrome.driver", "/selenium-driver/chromedriver");
        WebDriver driver = new ChromeDriver();
        // navigation and subscription steps elided in the original
        Assert.assertEquals("Thank you for subscribing",
                driver.findElement(By.className("subscription_notification")).getText());
    }
}

Let’s break the above test down a little. Firstly, the above test is identical to the Feature file detailed above. We first create our Driver, then navigate to the site. We then subscribe to TheTestRoom and check for the confirmation message.

Great, so what is the difference between the Java test and the Cucumber Feature file? From a test coverage perspective, nothing. From a technical perspective, maybe something. For Cucumber, you would need a Feature file and a step definition class. For the Java test above, in its current form, the single script is enough. However, when using Cucumber you gain the advantage of allowing others to reuse the step definition methods more easily; you lose this ability with the Java test. Also, the Cucumber Feature file is much easier to read and understand, while the Java test is not. Wait, perhaps not.

Re-using methods in Java Test

Let’s take another look at our Java test and see if we can do anything to update it, perhaps to promote code re-use. What would happen if we extracted the calls to the Driver and moved them out? This may help make the Java test easier to read and understand.

public class HomePage {
    public WebDriver driver;

    public HomePage() {
        System.setProperty("webdriver.chrome.driver", "/selenium-driver/chromedriver");
        driver = new ChromeDriver();
    }

    public void acceptSubscription(String email) {
        // subscription steps elided in the original
    }

    public String getNotificationMessage() {
        return driver.findElement(By.className("subscription_notification")).getText();
    }
}

public class TheTestRoomTest {
    public void testChromeSelenium() {
        HomePage home = new HomePage();
        Assert.assertEquals("Thank you for subscribing", home.getNotificationMessage());
    }
}

Now that we have moved all the calls to the driver into the ‘HomePage’ class, we can reuse those methods from any other class. It looks like this approach solves our ‘reusable’ code dilemma. Wait, let’s quickly revisit the test and make one more small change:

public class TheTestRoomTest {
    public void testChromeSelenium() {
        // Given
        HomePage home = new HomePage();
        // When
        // subscription step elided in the original
        // Then
        Assert.assertEquals("Thank you for subscribing", home.getNotificationMessage());
    }
}

Putting the Java test and the Cucumber Feature file side by side, you can see that they almost serve the same purpose. It looks like we now have a Java test written in a very BDD fashion.

Do we need Cucumber for BDD?

No, I don’t think so.

Cucumber allows us to really easily and quickly write behaviors for an application. It provides reporting and allows people who don’t write code to write tests. But take a second to ask yourself: do you really need all that meat? If all you’re trying to do is implement a BDD strategy, can it be implemented through discipline alone? I don’t think Cucumber should be adopted just for the sake of BDD. With a little bit of elbow grease you can easily implement BDD-style tests, and this also makes your tests cleaner, easier to read and more manageable.
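As a sketch of what ‘BDD through discipline alone’ can look like, here is a self-contained plain-Java example that keeps the Given/When/Then structure without Cucumber or Selenium; the Subscription class and the email address are made-up stand-ins for the real page object and data, not anything from the original post.

```java
// Plain-Java BDD-by-discipline sketch: no Cucumber, no Selenium.
// The Subscription class is a made-up stand-in for a page object.
public class SubscriptionBddTest {

    // Minimal domain stand-in for the subscription pop-up.
    static class Subscription {
        private String email;

        void accept(String email) {
            this.email = email;
        }

        String notificationMessage() {
            return email == null ? "" : "Thank you for subscribing";
        }
    }

    public static String runScenario() {
        // Given I see the subscription pop-up
        Subscription subscription = new Subscription();
        // When I enter my email address and subscribe
        subscription.accept("tester@example.com"); // hypothetical address
        // Then I see a notification message
        return subscription.notificationMessage();
    }

    public static void main(String[] args) {
        System.out.println(runScenario());
    }
}
```

The discipline is entirely in the comments and the method structure; nothing enforces it, which is exactly the trade-off against Cucumber.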

So, what do you think? Do we need to use Cucumber for BDD?

Manual Testing more important than Automation?

Is manual testing more important than automation? Depending on who you ask the answer might be different. Manual testing is the process where a human (someone who has the ability to think and be creative) visually confirms that the aesthetics and functional elements of an application are working. But hey, hold on, we have tools that can do that sort of stuff. So is Manual Testing really that important in this day and age of Software Development and Testing?

The value of Manual Testing

The value of manual testing, this looks like a good place to start. The word ‘value’ can be seen as an overloaded term. To some it may mean the return on an investment or the effort needed to achieve a goal. Essentially it means the same thing, i.e. what you get out based on what you put in. Now, how does ‘value’ relate to manual testing? Well, let’s take a step back and talk about automation really briefly.

We know that an automated test will run and perform checks based on what you have outlined. In other words, it will not go ‘exploring’ or try out new things; it will only perform the actions outlined in the test. A manual run of the same test, however, encourages the ‘human’ to consider other scenarios and features. The value you get from the automated test is that it runs automatically; the value you get from running the same test manually is the exploration of similar areas which may not be covered by the automated test. This brings us back full circle: what is the value of manually testing a scenario which has an automated test?

I think when running an automated test manually, the biggest value you get is the creativity of the ‘human’ running it; for instance, they may consider other scenarios not currently automated. That creativity is the biggest value you get from manual testing.

Manually testing an automated test, really???

OK, let’s take a short pause here and take a deep breath. I am not for a moment suggesting that we run our automated tests manually. I think this approach would be very defeatist. I am however suggesting that writing an automated test should not be considered the exit criteria of that scenario.

When I was at University I remember one of my lecturers saying ‘computers always perform the action you ask them to perform’. At the time I did not understand, but over the years I came to agree with this statement. If your test fails, if your code does not compile, if your computer does not start, etc., it is doing what you told it to do. In other words, it does not have the ability to act on its own intelligence but rather just follows orders. Assuming you agree with this, you can see that an automated test will always do what you expect it to do and nothing more. I don’t feel that automation is a replacement for manual testing; instead it aids in the endeavor of creating ‘bug free’ software.

When should we manually test?

The answer to this question may wildly change depending on who you ask. To the client, every single second of your breathing life. To the manual tester, the sooner the better.

Coming from an automation background, I would say manual testing is an important element which aids development. It should be carried out differently depending on what stage of the development phase you’re in. For instance, when starting a fresh project it would most likely be very difficult to write any Acceptance tests, since the UI may not have matured. In the absence of a strong, dominant and constant UI, manual tests come to the rescue. Where the UI is constant, manual testing should be conducted to supplement automated tests. Testing processes such as Sanity greatly benefit from manual testing, as a quick visual and aesthetic test of an application may give greater value than automated tests that have passed.

Oh, OK that last statement was a bit blunt. I guess what I’m trying to say is that once the automated tests pass they give us a great deal of confidence that our functionality has not changed. However it does not give us any level of confidence that the look and feel of our application is still the same. A manual testing phase is able to bring the experience of that ‘human’ to the forefront and this is what gives us confidence in our development and test.

So, is manual testing more important than automation?

Manual testing plays a very important role in the world of testing.

Automated tests help us to ensure that our existing functionality has not changed, and we are able to run the same tests as many times as we want. This said, there are still people such as developers, SDETs, POs etc. who would want someone to manually have a quick look. I guess the reason for this relates to the strengths of manual testing: the ability of the tester to think creatively, outside the box, and explore. This is almost the opposite of simply following orders like a computer.

For the moment it would be difficult for me to answer, but I’ll leave it to you to decide whether manual testing is more important than automated testing. I think manual testing has an important role and a strong place in the Software Development cycle. It should not be used as an alternative to automated testing; it should be used to supplement and aid it.