I’m often asked how likely a code deployment is to deliver the expected results. The answer comes from a fairly simple formula, and it is a neutral one, yet people tend to read it as either overly positive or overly negative, more frequently the latter.
The chance of delivering on time is a function of: whether there is testing, how recent the tests are, how closely the test environment matches the production system, and the number of points of failure.
Obviously, by on time, I mean delivering the functionality the client expected to have by a certain date.
A delivery with no testing, wildly different server systems, and many points of failure has little chance of being successful, though that doesn’t mean it can’t happen. A delivery with testing, matching server configurations, and few to no points of failure has a very good chance of being successful, but it isn’t guaranteed.
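To make that concrete, here is a minimal sketch in Python of how those factors might be combined into a rough confidence score. The function name, weights, and thresholds are all hypothetical illustrations, not a formula from this post, just one way the inputs could be weighed.

```python
def delivery_confidence(has_tests: bool,
                        days_since_last_test_run: int,
                        env_similarity: float,   # 0.0 = nothing like production, 1.0 = identical
                        points_of_failure: int) -> float:
    """Return a rough 0..1 confidence that the deployment delivers as expected."""
    if not has_tests:
        return 0.05  # never zero: "doesn't mean it can't happen"

    # Stale tests count for less than fresh ones.
    freshness = max(0.0, 1.0 - days_since_last_test_run / 30.0)

    # Each extra point of failure chips away at confidence.
    fragility = 1.0 / (1.0 + points_of_failure)

    score = 0.4 * freshness + 0.4 * env_similarity + 0.2 * fragility
    return min(score, 0.95)  # never 1.0: success is never guaranteed


# The two extremes described above:
print(delivery_confidence(False, 0, 0.2, 8))   # untested, mismatched, fragile -> 0.05
print(delivery_confidence(True, 1, 0.95, 1))   # tested, matching, simple      -> ~0.87
```

The exact numbers don’t matter; the point is that the answer is neither a yes nor a no, only a probability somewhere in between.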
I know I speak for a few, possibly not for all, but developers don’t feel comfortable with a deploy until it has been released to production, stress tested by the public, and performed as expected; only then are their hearts finally at ease. This is also a form of testing, the final one. Even then, in the back of their minds, they’re thinking the code could have been better, that there are edge cases it didn’t cover, and so on. We are forever doomed to a feeling of insecurity; the only thing we can say is that we’ve rigorously tested the crap out of it, because, fact of the matter, bugs are a fact of life.