## The Test Automation Effort Ratio

Many organizations are engaged in some kind of process transformation, introducing Test Automation into their Software Engineering practice. Automation is indeed a key element of Agility, of a proper delivery pipeline, and of any modern way of producing software.

In such a transformation, interest is high in metrics that help follow the progression of Test Automation adoption. It might be tempting to define KPIs on coverage or on the number of tests. A common approach is to list the “tests that should be automated”, either from an existing list of manual tests or created from scratch. Such a list usually suffers from a relevance problem. Indeed, because the team is at the beginning of its Test Automation Journey, it is highly probable that the list would include “manual”-oriented tests, mainly end-to-end, or tests based on a poor test automation paradigm.

Discovering which tests are the right tests to automate is a quest every team has to complete on its own, because the answer depends on the product being tested, the organization of software delivery, the team’s appetite for risk, and many other factors.

In addition, it is not unusual for teams progressing toward maturity to be unable to collect metrics properly. A lack of reliable data sources, manual data collection, and estimations are common sources of bias that make precise metrics more a tale than a fact.

But something is common to all teams: at a certain level of maturity, teams on a Test Automation Journey expect to spend an increasing part of their effort on test automation activities, and a decreasing part on manual testing activities.

> Timesheets are more reliable than test coverage
>
> – Bogrot, Gringotts goblin

What I suggest you introduce as a metric to evaluate your team’s progression in the Test Automation Journey is the Test Automation Effort Ratio.

The Test Automation Effort Ratio is the time spent by your team members on any test-automation-related activity (scripting, debugging, environment management, reporting…), divided by the total time spent by your team members on any kind of testing activity, be it manual or automated. Calculate this ratio for every cycle you consider relevant (sprint, release, month…).

\tau = \frac{\sum \text{test automation related logged timesheets}}{\sum \text{all test related logged timesheets}}

At the beginning of your Test Automation Journey, the value should be near zero. In your most secret dreams, you expect it to reach one. In real life, you will reach something like 0.5 quite easily, and good maturity in the test automation practice will allow you to obtain 0.7 to 0.8. But instead of setting a specific target value, the right goal is to have this ratio increase over time: this is how you will be sure that you are globally doing the right thing.
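To make the ratio concrete, here is a minimal sketch of the computation over one cycle. It assumes timesheet entries carry an activity tag; the tag names and the `Entry` record are illustrative, not from any real time-tracking tool.

```python
from dataclasses import dataclass

# Illustrative activity tags -- adapt to your own timesheet categories.
AUTOMATION_TAGS = {"scripting", "debugging", "environment", "reporting"}
MANUAL_TAGS = {"manual-execution", "exploratory"}

@dataclass
class Entry:
    tag: str
    hours: float

def effort_ratio(entries):
    """Test Automation Effort Ratio (tau) for one cycle."""
    automation = sum(e.hours for e in entries if e.tag in AUTOMATION_TAGS)
    total = sum(e.hours for e in entries
                if e.tag in AUTOMATION_TAGS or e.tag in MANUAL_TAGS)
    return automation / total if total else 0.0

# One sprint: 16 hours of automation work, 16 hours of manual testing.
sprint = [
    Entry("scripting", 12.0),
    Entry("debugging", 4.0),
    Entry("manual-execution", 16.0),
]
print(round(effort_ratio(sprint), 2))  # 0.5
```

Plotting `effort_ratio` per sprint gives you the trend curve discussed below; the absolute tag taxonomy matters much less than keeping it stable across cycles.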

This metric has several advantages including the following:

• Almost every organization is good at tracking time; doing timesheets is, unfortunately, the most widely applied practice in IT. The metric is easy to set up.
• The metric is completely independent of your context and its specificities. You are free to decide what to automate, and when you figure out that what you started was not appropriate – because that is part of the Test Automation Journey – it won’t affect the metric, since your effort will continue to increase.
• The shape of the curve over time will give you a good idea of “when” you will reach a good level of maturity. Expect your numbers to approach an asymptote, which you will discover over time.

## Team confidence is the best readiness indicator

If you have to keep only one metric for evaluating product readiness, I argue that collecting the development team’s confidence should be the one.

In the development team, I include what are usually called devs, testers, QAs, DBAs, devops, sysadmins, and whatever name you use to identify those involved in building the product.

I don’t trust test coverage measurement, since coverage means very different things depending on how you measure the base (what is supposed to be covered) and on how you consider coverage achieved (how many items, or how deeply, you’ve tested the base).

I don’t trust test completion either, because if you haven’t listed appropriate tests then completion means nothing.

I don’t trust “definition of done” fulfilment, for the same reasons as above.

I trust a team’s feelings and confidence more, because humans involved in an intellectual process such as developing a product always put part of their heart and soul into what they do. Their confidence is a far better indicator in many situations.

## How to measure it

Good news: that’s outrageously simple. Set a board next to the exit door of the floor. Every evening before leaving, each involved individual puts a token – sticky note, card, pin, whatever you want – on the board. The left side means “I don’t think the product is ready enough for release”; the right side means the opposite. You can decide a token should be assigned to a person, using a mark or avatar, or decide it should be anonymous; it’s up to you. You can use electronic voting, a survey plugin on your ticket-tracking system, multi-value scales, or analog measurement; it’s all the same. But measure your team’s confidence.

By the end of the sprint, you should see tokens migrating one after another to the right side of the board. Not all of them will always be on the right side, but the vast majority will, for sure. The ones staying on the left will probably be there because of some imperfection the team should accept as debt for the next sprint, or because of problems unseen by other teammates that should be corrected ASAP.
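If you go the electronic route, the board boils down to a daily tally of boolean votes. A minimal sketch, assuming one vote per teammate per day (the function and variable names are illustrative):

```python
from collections import Counter

def confidence(votes):
    """Share of 'ready' votes -- the right side of the board.

    votes: list of booleans, True meaning 'I think the product is
    ready enough for release'. Returns a value between 0.0 and 1.0.
    """
    if not votes:
        return 0.0
    tally = Counter(votes)
    return tally[True] / len(votes)

# End-of-sprint snapshot: 7 of 9 teammates feel the product is ready.
votes = [True] * 7 + [False] * 2
print(f"{confidence(votes):.0%}")  # 78%
```

Tracked daily, this single number gives you the “tokens migrating to the right” curve without any physical board; the two dissenting votes are your cue to go talk to those teammates.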

> Team confidence is the most valuable measurement

I think we struggle too much to get objective measures for our decisions. In many cases, because the element we are trying to measure relies on human thinking, these measures remain highly subjective. A common example is test completion: the listed tests are the ones a human thought useful, not the ones that actually were. So measuring the completion of those tests, which might be thought objective, really is not.

Measuring team confidence is probably the most valuable subjective measurement we can do to evaluate product readiness.

## Test Automation Day

I’ll be happy to speak tomorrow at Test Automation Day (online), held by SauceLabs. I’ll talk about the journey through test automation, and how this journey can be “scaled” for quite big IT teams.

> Automation at scale: a journey for ambitious teams.
> As many organizations move toward continuous testing, some of them face several challenges. How do you introduce massive test automation in big organizations? How do you scale and still get value from these changes? What, as an organization, is our best asset for success?
> Have a look at the journey of a big financial organization, and learn how they managed to introduce a test automation practice in a 2500+ IT division. Find out what you can apply on your side, if you have the ambition to make it work for dozens or hundreds of testers in your own company.

Feel free to join here!

## Testers, who is your client?

It is something that many won’t agree upon. But my question is: who is your client? Who do you consider to be the one you’re working for? You might have two answers. The first is: “My client is the user of the application I’m testing”. The second is: “My client is the [business line/project manager/company/payer] who is requesting the application to be developed”.

I’ve worked in several situations. Sometimes in outsourced testing teams, where our customers asked us to test their applications, ultimately used by their own customers or employees. Other times in in-house IT teams, where a business line was requesting the IT department to develop in-house applications.

I’ve seen too many people – developers, PMs, testers – give up arguing with the payer, because you’re not supposed to bite the hand that feeds you.

I was working as a test consultant in a company, explaining to our various pre-sales teams how we should talk about testing and test automation with our customers. I was saying that we should sometimes consider giving the right advice to our customer.

> Never say that to my client

I took the example of one of my customers, who wanted to do test automation to save money. Their testing process was outdated, definitely not working, and they had been told that test automation was the solution. I explained to our pre-sales that my approach was to open my client’s eyes to his company’s lack of maturity, and to guide him through a maturity process that would exclude test automation for a while, but lead him to affordable yet certain improvements: people training, process simplification, test environment stabilization. None of these had to involve an external workforce – which meant no sale for us at that time.

“Never say that to one of my clients”, said one of the pre-sales managers listening to that story. “Why so?”, I asked. “Because if one of our customers wants us to do work for him, we should never refuse such a sale. If he figures out he was wrong, we will still be able to sell him services to fix it. And anyway, maybe he would save money with test automation, no?”, he answered. “That’s probably why I’m not sitting in a chair like yours today”, I concluded.

This guy was obviously not worried about doing the right thing. One could say he was worried about making more money. But if we want to give him some credit – let’s try – we may consider that he believed his focus should be to fulfill his customer’s wishes. His customer’s wish was to save money by doing test automation, but saving money would probably never happen in that context.

In any situation, my belief was and remains: as a tester, I’m working for the benefit of the product.

Is that different when I’m having another role than tester? Well, no.

I’m working for the benefit of the product, period.

If the payer is asking for something that won’t add value to the product, my role as an honest professional is to demonstrate it and avoid such waste. If I don’t – because I gave up, because I don’t care, or because I want to make more money – then I’m not doing my job.

As testers, are we always working for the product? Hmmm, please be honest. I’ll help you:

• We have to test that part first; the business requested it. It never fails, but they don’t want us to go further before completing it.
• We must produce daily progression reports on the test campaign; it is required by the PM.
• Remove from reports and calculations the tests that cannot be run due to lack of test data. We guaranteed 100% test completion and it is messing things up.
• This is bad UI/UX design, but it is what was approved by the customer.
• Yeah, I know, but it works “as expected”…

Don’t tell me you never heard one of these. Out of context, it surely sounds bad. If we sometimes accept this in real-life situations, it is because we have placed the payer before the product and its users.

Please, work for the benefit of the product. Then you won’t have any doubt about who your client is.

## SauceCon 2020 Online: Replay

Good news: replay for SauceCon 2020 Online is now available. You can watch recordings of the first online edition directly from the event’s agenda. You have to be logged in to be able to start watching.

Here are my favorites:

Of course, my talk about Dynamic Test Environments for Continuous Testing can also be watched in replay.

Needless to say, doing SauceCon Online was not the plan. The current pandemic situation forced SauceLabs, the organizer, to reconsider the original event, which was supposed to be held in Austin, TX. I was pretty impressed by how the SauceCon organization team was able to move forward and set up a fully online event on such short notice.