Software testing is the process of executing a program or system with the intent of finding errors. Alternatively, it is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software is not unlike other physical processes in that inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways, and detecting all of the different failure modes of software is generally infeasible.

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped, the design defects, or bugs, remain buried and latent until activated. Bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable, and humans have only a limited ability to manage complexity. It is also true that in any complex system, design defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason: complexity. Because software and other digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All possible values would need to be tested and verified, but such complete testing is infeasible. Exhaustively testing even a simple program that adds two 32-bit integer inputs would take hundreds of millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond this example.
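The claim that boundary values alone cannot guarantee correctness can be illustrated with a small sketch. The function and its injected defect below are hypothetical, constructed only to show a discontinuous failure: the code is correct at the extremes of its input range but wrong at a single interior point that boundary-value tests never exercise.

```python
# Hypothetical function with an injected discontinuous defect:
# correct at the range boundaries, wrong at one interior value.
def absolute_value(x: int) -> int:
    if x == 12_345:              # injected defect: one bad interior point
        return -1
    return x if x >= 0 else -x

# Boundary-value tests for 32-bit inputs all pass...
assert absolute_value(0) == 0
assert absolute_value(2**31 - 1) == 2**31 - 1
assert absolute_value(-(2**31)) == 2**31

# ...yet the function is wrong for exactly one input in the interior.
assert absolute_value(12_345) == -1   # should be 12_345
```

Because the defect occupies a single point rather than a region, no finite sample short of exhaustive testing is guaranteed to hit it.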
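The exhaustive-testing estimate above can be checked with back-of-the-envelope arithmetic. Assuming (as the text does) a rate of thousands of tests per second, here taken as 1,000:

```python
# Estimate the time to exhaustively test a function of two 32-bit inputs.
TESTS_PER_SECOND = 1_000                  # assumed rate from the text
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

total_inputs = 2 ** 64                    # every pair of 32-bit values
seconds = total_inputs / TESTS_PER_SECOND
years = seconds / SECONDS_PER_YEAR
print(f"{years:.2e} years")               # about 5.8e8 years
```

Even at a million tests per second the figure only drops by three orders of magnitude, which is why exhaustive testing is dismissed as infeasible.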
If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects, and human interactions are all possible input parameters to consider. A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it did not work for previously. But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing so is often prohibitive.

An interesting analogy parallels the difficulty of software testing with that of pesticides, and is known as the Pesticide Paradox: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But removing the easy bugs alone will not guarantee better software, because of the Complexity Barrier principle: software complexity (and therefore that of its bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previous) easy bugs, you allow another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs.

Regardless of these limitations, testing is an integral part of software development, and it is broadly deployed in every phase of the software development cycle. Typically, more than 50 percent of development time is spent on testing.
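The regression hazard described above, where a fix for one failing case silently breaks a case that passed before, can be sketched with a hypothetical example (both versions of the function are invented for illustration):

```python
# Version 1: passes for score == 50 but fails for score == 100.
def classify_v1(score: int) -> str:
    return "pass" if 50 <= score < 100 else "fail"

# Version 2: the "fix" handles 100, but score == 50 now regresses.
def classify_v2(score: int) -> str:
    return "pass" if 50 < score <= 100 else "fail"

assert classify_v1(50) == "pass"
assert classify_v1(100) == "fail"    # the defect being fixed
assert classify_v2(100) == "pass"    # the fix works...
assert classify_v2(50) == "fail"     # ...but a previously passing case now fails
```

This is why a code change invalidates earlier passing results: only re-running the full suite (regression testing) restores confidence in them.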