Requirements Definition

Requirements definition is the most crucial part of the project. Incorrect, inaccurate, or excessive definition of requirements will inevitably result in schedule delays, wasted resources, or customer dissatisfaction.

The requirements analysis should begin with business or organizational requirements and translate those into project requirements. If meeting the stated requirements would be unreasonably costly or take too long, the project requirements may have to be negotiated down or descoped in discussions with customers or sponsors.

Any discussion of requirements analysis methods will quickly become specific to the type of project effort. Many industry areas have specific, proven techniques for obtaining a thorough and accurate definition of requirements. Sometimes it is useful to write a draft user manual as a way to define requirements. While the methods may differ, the principles remain the same across all types and sizes of projects. The requirements analysis should cover the whole scope of the project. It must be comprehensive and thorough, and it must consider the views and needs of all the project stakeholders.

The completed requirements analysis should be reviewed and approved by the customer or project sponsor before work continues.

The capture of user requirements is the process of gathering information about user needs.

User requirements should be realistic. Good requirements are:

➔ clear

➔ verifiable

➔ complete

➔ accurate

➔ feasible

Clarity and verifiability help ensure that delivered systems will meet user requirements. The specification of user requirements is the process of organising information about user needs and expressing it in a document.

A requirement is a ‘condition or capability needed by a user to solve a problem or achieve an objective’. This definition leads to two principal categories of requirements: ‘capability requirements’ and ‘constraint requirements’.

Capability requirements describe the process to be supported by software. Simply stated, they describe ‘what’ the users want to do.

The capacity attribute states ‘how much’ of a capability is needed at a given time.

Each capability requirement should be accompanied by a quantitative measure of the required capacity, for example the:

➔ number of users to be supported

➔ number of terminals to be supported
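As a minimal sketch of attaching capacity attributes to a capability requirement, one could model a requirement as a record carrying its quantitative measures. The class and field names below are illustrative, not from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityRequirement:
    """A capability requirement plus its quantitative capacity attributes.

    All names here are hypothetical, for illustration only.
    """
    identifier: str
    description: str                                 # 'what' the users want to do
    capacities: dict = field(default_factory=dict)   # 'how much' is needed

# Example: a login capability sized for 1000 users and 50 terminals.
login = CapabilityRequirement(
    identifier="CAP-001",
    description="Users shall be able to log in to the system.",
    capacities={"users_supported": 1000, "terminals_supported": 50},
)

print(login.capacities["users_supported"])  # 1000
```

Keeping the capacity figures alongside the requirement text makes them easy to trace into later performance and load test criteria.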

Constraint requirements place restrictions on how the user requirements are to be met. The user may place constraints on the software related to interfaces, quality, resources and timescales.

Users may constrain how communication is done with other systems, what hardware is to be used, what software it has to be compatible with, and how it must interact with human operators. These are all interface constraints.

A communications interface requirement may specify the networks and network protocols to be used.

A hardware interface requirement specifies all or part of the computer hardware the software is to execute on.

A software interface requirement specifies whether the software is to be compatible with other software (e.g. other applications, compilers, operating systems, programming languages and database management systems).

Load, Performance & Stress Testing


Performance tests and load/stress tests determine the ability of the application to perform while under load. During stress/load testing, the tester attempts to stress or load an aspect of the system to the point of failure; the goal is to find the weak points in the system architecture. The tester identifies the peak load conditions at which the program fails to handle the required processing loads within the required time spans.

During performance testing, the tester designs test case scenarios to determine whether the system meets its stated performance criteria (e.g. a login request shall be responded to in 1 second or less under a typical daily load of 1000 requests per minute).

In both cases the tester is trying to determine the capacity of the system under a known set of conditions. The same tools and testing techniques apply to both types of capacity testing; only the goal of the test changes.
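The shared harness idea can be sketched as follows. The stub below simulates a system whose latency grows with load; the same measurement function serves a performance test (check one criterion at a typical load) and a stress test (raise the load until the criterion fails). Everything here is a self-contained simulation, not a real load-generation tool:

```python
import time

def handle_login(load):
    """Stub for the system under test; a real test would call the application.
    Simulated latency grows with load so the capacity limit becomes visible."""
    time.sleep(0.001 * load)   # pretend each unit of load adds 1 ms
    return "ok"

def meets_criterion(load, limit_seconds):
    """Run one request at the given load and check it against the criterion."""
    start = time.perf_counter()
    handle_login(load)
    elapsed = time.perf_counter() - start
    return elapsed <= limit_seconds

# Performance test: does a login at a typical load meet the 1-second criterion?
print(meets_criterion(load=10, limit_seconds=1.0))   # True

# Stress test: double the load until the criterion fails, to find the limit.
load = 10
while meets_criterion(load, limit_seconds=1.0):
    load *= 2
print(f"criterion first violated at load {load}")
```

Note that only the loop around the measurement changes between the two tests, which is exactly the point made above: same tools and techniques, different goal.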

Performance Testing:

Once expectations are clearly defined, performance testing can be done by gradually increasing the load on the system while looking for bottlenecks.

These activities take a white-box approach: the system is inspected and monitored “from the inside out” and from a variety of angles. Measurements are taken and analyzed, and tuning is done accordingly.

The best time to execute performance testing is at the earliest opportunity. Developing performance test scripts at an early stage provides the opportunity to catch serious performance problems, and to set expectations, before load testing.

Goals of Performance Testing:

➔ Not to find bugs, but to eliminate system bottlenecks and establish a baseline for future regression tests

➔ To measure critical business processes and transactions while the system is under low load with a production-sized database
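A baseline for future regression tests can be sketched as below: record the timing of a key transaction on an accepted build, then fail any later run that exceeds it by more than a chosen tolerance. The operation, baseline figures and 20% tolerance are all hypothetical:

```python
import time

def time_operation(fn, repeats=5):
    """Return the median wall-clock time of fn() over several runs;
    the median damps out one-off scheduling noise."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return sorted(samples)[len(samples) // 2]

def within_baseline(name, measured, baseline, tolerance=1.20):
    """Regression check: fail if measured time exceeds baseline by >20%."""
    return measured <= baseline[name] * tolerance

# Hypothetical baseline recorded on an earlier, accepted build.
baseline = {"login": 0.050}

# Stand-in for the real transaction; here it just sleeps 10 ms.
measured = time_operation(lambda: time.sleep(0.01))
print(within_baseline("login", measured, baseline))  # True
```

Re-running such a check on every build turns the baseline into an automatic performance regression test.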

Load Testing:

Load testing is usually defined as the process of testing the system by feeding it the largest tasks it can operate with. Load testing is also called volume testing, or longevity/endurance testing.

Goals of Load Testing:

➔ To expose bugs such as memory management errors, memory leaks and buffer overflows
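Memory leaks of this kind typically only show up under sustained load. The sketch below simulates a leaky batch processor and uses Python's standard `tracemalloc` module to show memory growing across repeated batches, as it would in an endurance test; the leak is deliberately planted for illustration:

```python
import tracemalloc

leak = []  # simulated leak: results accumulate here and are never released

def process_batch(items):
    """Pretend to process a batch but 'forget' to release the results."""
    leak.extend(str(i) * 100 for i in range(items))

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(50):          # sustained load, as in a longevity/endurance test
    process_batch(1000)
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Memory use that keeps growing with the number of iterations suggests a leak.
print(after > before)  # True
```

A single functional test of `process_batch` would pass; only repetition under load makes the defect visible, which is why this class of bug belongs to load testing.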

Functional Testing

Functional testing is testing software based on its functional requirements. It ensures that the program works the way it was intended and that all required menu options are present. It also ensures that the program conforms to the industry standards relevant to its environment; for example, in a Windows program, pressing F1 brings up help.

Application functional testing is the most widely accepted testing practice in software development organizations today, and hundreds of millions of dollars have been spent on automated tools to support the practice. Yet, organizations still struggle to achieve software quality and suffer inconsistent, inefficient, and often inaccurate application of the practice. Over 70 percent of software testing today is still done manually despite the investment in automation. Understanding what has or hasn’t been tested is still most organizations’ major challenge, and the time commitment required for QA functional testing is increasingly one of organizations’ greatest concerns for meeting application delivery and time-to-market demands.

Software development organizations with an effective functional testing practice have a fast and objective way to determine whether each functional requirement is actually implemented in the code. With functional testing, the team translates functional requirements into executable test cases that confirm how well the code satisfies the requirements at any given time. It provides unprecedented objective insight into requirement status and prevents the missing or incorrect functionality implementations that can lead to countless rewrites (and then budget overruns and missed deadlines), user dissatisfaction, and project failure.
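Translating a functional requirement into executable test cases can be sketched with Python's standard `unittest` framework. The login function and the requirement ID are hypothetical stand-ins for the application under test and its requirements document:

```python
import unittest

def login(username, password):
    """Toy implementation standing in for the application under test."""
    if username == "alice" and password == "secret":
        return "welcome"
    return "denied"

class TestLoginRequirement(unittest.TestCase):
    """REQ-7 (hypothetical ID): valid credentials grant access, invalid do not."""

    def test_valid_credentials_are_accepted(self):
        self.assertEqual(login("alice", "secret"), "welcome")

    def test_invalid_credentials_are_rejected(self):
        self.assertEqual(login("alice", "wrong"), "denied")

# Run the suite programmatically; a green run means REQ-7 is satisfied
# by the current code, giving the objective status check described above.
result = unittest.main(argv=["req7"], exit=False, verbosity=0).result
print(result.wasSuccessful())  # True
```

Because each test case names the requirement it verifies, rerunning the suite after every code change gives the objective, repeatable view of requirement status that the text describes.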

An effective functional testing practice should be a natural extension of a requirements management policy. Ideally, a requirements management policy describes how to build effective use cases for the features expected in the current release, and how to map those use cases to verifiable test cases. A test plan is then developed and implemented to build a suite of executable tests that frame and verify the functionality requirements, providing a fast and objective way to assess the status of expected functionality. These tests can then be executed regularly to ensure that code modifications do not unintentionally change previously verified functionality.

An effective functional testing practice involves the definition of guidelines for using functional testing technologies effectively, and then the implementation and integration of those guidelines (along with supporting technologies and configurations) into your software development life cycle to ensure that your teams apply the policy consistently and regularly. It also requires a means to monitor and measure the policy’s application, as well as report the data it is tracking.

To achieve effective black box unit testing, you must not only have a defined practice for its use, but that practice must be implemented and integrated into your software development life cycle so that it is used consistently and regularly across your software development team. It is also important to have the means to monitor and measure its use.

Mobile Testing – DeviceAnywhere

Testing mobile content such as games and applications on real devices is a big challenge, because obtaining actual devices across all carriers is not always possible. DeviceAnywhere solves one of the biggest challenges facing mobile app/content developers: access to handsets in target carrier networks.

DeviceAnywhere is an online service that provides real-time remote access to hundreds of real, in-market handsets live on carrier networks worldwide, for all your development, porting and testing needs.

Anything you can do with a device in your hand, you can do with the handsets in DeviceAnywhere in real-time. Since all work is done on real devices in live carrier networks, it is now easy to verify that your games, applications, WAP sites, ringtones, wallpapers and videos work as you designed them. You see exactly what your consumer will see. No more unwelcome and expensive surprises.

With DeviceAnywhere on your side, you no longer need to make those expensive trips for testing your mobile content in remote networks.

For more information, visit:


Ad-hoc Testing

Ad-hoc testing is done without any formal test plan or test case creation. It helps in deciding the scope and duration of the other testing activities, and it also helps testers learn the application before starting any other testing. It is the least formal method of testing.

One of the best uses of ad hoc testing is for discovery. Reading the requirements or specifications (if they exist) rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the “look and feel” of a program. Ad hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal. Finding new tests in this way can also be a sign that you should perform root cause analysis.

Ask yourself or your test team, “What other tests of this class should we be running?” Defects found while doing ad hoc testing are often examples of entire classes of forgotten test cases. Another use for ad hoc testing is to determine the priorities for your other testing activities. For instance, our example program, Panorama, may allow the user to sort the photographs being displayed. If ad hoc testing shows this to work well, the formal testing of this feature might be deferred until the problematic areas are completed. On the other hand, if ad hoc testing of the photograph-sorting feature uncovers problems, then its formal testing might receive a higher priority.
