If you could keep only one metric for evaluating product readiness, I would argue that the development team’s confidence should be it.
In the development team, I include those usually called devs, testers, QAs, DBAs, devops, sysadmins, and whatever other name you use for the people involved in building the product.
I don’t trust test coverage measurements, since coverage means very different things depending on how you measure the base (what is supposed to be covered) and on how you decide coverage is achieved (how many tests you’ve run, or how deeply you’ve tested the base).
I don’t trust test completion either, because if you haven’t listed the appropriate tests, then completion means nothing.
I don’t trust “definition of done” fulfilment, for the same reasons as above.
I trust the team’s feelings more, and the team’s confidence, because humans involved in an intellectual process like developing a product always put part of their heart and soul into what they do. Their confidence is a far better indicator in many situations.
How to measure it
Good news: it’s outrageously simple. Set up a board next to the exit door of the floor. Every evening before leaving, any involved individual may place a token – sticky note, card, pin, whatever you want – on the board. The left side means “I don’t think the product is ready enough for release”; the right side means the opposite. You can decide a token should be tied to a person, using a mark or avatar, or you can keep it anonymous; that’s up to you. You can use electronic voting, a survey plugin on your ticket tracking system, multi-value scales, or analog measurement – it’s all the same. Just measure your team’s confidence.
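If you go the electronic route, the tally itself is trivial. Here is a minimal sketch in Python; the vote labels and function name are illustrative assumptions, not a prescribed format:

```python
from collections import Counter

def confidence_ratio(votes):
    """Tally a day's confidence votes.

    Each vote is either "ready" (right side of the board) or
    "not_ready" (left side). Returns the fraction of voters who
    feel the product is ready, or None if nobody voted.
    """
    counts = Counter(votes)
    total = counts["ready"] + counts["not_ready"]
    if total == 0:
        return None
    return counts["ready"] / total

# Example: a team of five at the end of a sprint day.
votes = ["ready", "ready", "not_ready", "ready", "ready"]
print(confidence_ratio(votes))  # 0.8
```

Plotting this ratio day by day gives you the same picture as watching tokens migrate across the physical board.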
By the end of the sprint, you should see tokens migrating one after another to the right side of the board. Not all of them will always end up on the right side, but the vast majority will, for sure. The ones staying on the left will probably be there because of some imperfection the team accepts as debt for the next sprint, or because of a problem unseen by the other teammates that should be corrected ASAP.
Team confidence is the most valuable measurement
I think we struggle too much to find objective measures for our decisions. In many cases, because the element we are trying to measure relies on human thinking, these measures remain highly subjective. A common example is test completion: the listed tests are the ones a human thought useful, not necessarily the ones that actually were. So measuring the completion of those tests, which may seem objective, really is not.
Measuring team confidence is probably the most valuable subjective measurement we can do to evaluate product readiness.