Technical Interview Questions
Manual Testing Interview Questions and Answers
What kinds of testing should be considered?
• black box testing - not based on any knowledge of
internal design or code. Tests are based on requirements
and functionality.
• white box testing - based on knowledge of the internal
logic of an application's code. Tests are based on
coverage of code statements, branches, paths, and
conditions.
• unit testing - the most 'micro' scale of testing; to
test particular functions or code modules. Typically
done by the programmer and not by testers, as it
requires detailed knowledge of the internal program
design and code. Not always easily done unless the
application has a well-designed architecture with tight
code; may require developing test driver modules or test
harnesses (a minimal unit test sketch appears after this
list).
• incremental integration testing - continuous testing
of an application as new functionality is added;
requires that various aspects of an application's
functionality be independent enough to work separately
before all parts of the program are completed, or that
test drivers be developed as needed; done by programmers
or by testers.
• integration testing - testing of combined parts of an
application to determine if they function together
correctly. The 'parts' can be code modules, individual
applications, client and server applications on a
network, etc. This type of testing is especially
relevant to client/server and distributed systems.
• functional testing - black-box type testing geared to
functional requirements of an application; this type of
testing should be done by testers. This doesn't mean
that the programmers shouldn't check that their code
works before releasing it (which of course applies to
any stage of testing).
• system testing - black-box type testing that is based
on overall requirements specifications; covers all
combined parts of a system.
• end-to-end testing - similar to system testing; the
'macro' end of the test scale; involves testing of a
complete application environment in a situation that
mimics real-world use, such as interacting with a
database, using network communications, or interacting
with other hardware, applications, or systems if
appropriate.
• sanity testing or smoke testing - typically an initial
testing effort to determine if a new software version is
performing well enough to accept it for a major testing
effort. For example, if the new software is crashing
systems every 5 minutes, bogging down systems to a
crawl, or corrupting databases, the software may not be
in a 'sane' enough condition to warrant further testing
in its current state.
• regression testing - re-testing after fixes or
modifications of the software or its environment. It can
be difficult to determine how much re-testing is needed,
especially near the end of the development cycle.
Automated testing tools can be especially useful for
this type of testing.
• acceptance testing - final testing based on
specifications of the end-user or customer, or based on
use by end-users/customers over some limited period of
time.
• load testing - testing an application under heavy
loads, such as testing of a web site under a range of
loads to determine at what point the system's response
time degrades or fails (see the load-test sketch after
this list).
• stress testing - term often used interchangeably with
'load' and 'performance' testing. Also used to describe
such tests as system functional testing while under
unusually heavy loads, heavy repetition of certain
actions or inputs, input of large numerical values,
large complex queries to a database system, etc.
• performance testing - term often used interchangeably
with 'stress' and 'load' testing. Ideally 'performance'
testing (and any other 'type' of testing) is defined in
requirements documentation or QA or Test Plans.
• usability testing - testing for 'user-friendliness'.
Clearly this is subjective, and will depend on the
targeted end-user or customer. User interviews, surveys,
video recording of user sessions, and other techniques
can be used. Programmers and testers are usually not
appropriate as usability testers.
• install/uninstall testing - testing of full, partial,
or upgrade install/uninstall processes.
• recovery testing - testing how well a system recovers
from crashes, hardware failures, or other catastrophic
problems.
• security testing - testing how well the system
protects against unauthorized internal or external
access, willful damage, etc.; may require sophisticated
testing techniques.
• compatibility testing - testing how well software
performs in a particular hardware/software/operating
system/network environment.
• exploratory testing - often taken to mean an
informal software test that is not based on formal test
plans or test cases; testers may be learning the
software as they test it.
• ad-hoc testing - similar to exploratory testing, but
often taken to mean that the testers have significant
understanding of the software before testing it.
• user acceptance testing - determining if software is
satisfactory to an end-user or customer.
• comparison testing - comparing software weaknesses and
strengths to competing products.
• alpha testing - testing of an application when
development is nearing completion; minor design changes
may still be made as a result of such testing. Typically
done by end-users or others, not by programmers or
testers.
• beta testing - testing when development and testing
are essentially completed and final bugs and problems
need to be found before final release. Typically done by
end-users or others, not by programmers or testers.
• mutation testing - a method for determining if a set
of test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting
with the original test data/cases to determine if the
'bugs' are detected. Proper implementation requires
large computational resources (a small illustration
follows this list).
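
As an illustration of the unit testing described above,
here is a minimal sketch using Python's built-in unittest
module. The add_tax function and its tax rate are
hypothetical stand-ins for real application code:

    import unittest

    def add_tax(price, rate=0.08):
        # Function under test: price plus sales tax,
        # rounded to cents
        return round(price * (1 + rate), 2)

    class AddTaxTest(unittest.TestCase):
        def test_typical_price(self):
            # 10.00 at the default 8% rate should be 10.80
            self.assertEqual(add_tax(10.00), 10.80)

        def test_zero_price(self):
            # A zero price stays zero regardless of rate
            self.assertEqual(add_tax(0.00), 0.00)

    if __name__ == '__main__':
        unittest.main()

In practice the function under test lives in its own
module and is imported by the test file; the point is
that each test exercises one small unit in isolation.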
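
The load testing entry above can also be sketched in
code. This is a toy load generator using only Python's
standard library; the target URL and worker counts are
placeholders, and real load tests are normally driven by
dedicated tools such as LoadRunner or JMeter:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"  # placeholder target

    def timed_request(_):
        # Issue one request; return its response time in
        # seconds, or None on failure
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
        except Exception:
            return None
        return time.time() - start

    # Step the load up and watch where response times
    # degrade or requests start to fail
    for workers in (1, 10, 50):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(timed_request, range(workers * 10)))
        ok = [t for t in results if t is not None]
        failed = len(results) - len(ok)
        avg = sum(ok) / len(ok) if ok else float('nan')
        print(f"{workers} workers: avg {avg:.3f}s, {failed} failures")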
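
Finally, the mutation testing idea can be made concrete
with a hand-coded example. Real mutation tools generate
and run many mutants automatically (which is why the
technique is computationally expensive); the is_adult
function here is hypothetical:

    def is_adult(age):
        return age >= 18  # original code

    def is_adult_mutant(age):
        return age > 18   # mutant: '>=' changed to '>'

    def run_tests(fn):
        # Existing test cases; True only if all pass
        return fn(18) is True and fn(17) is False

    print(run_tests(is_adult))         # True: passes on original
    print(run_tests(is_adult_mutant))  # False: mutant is 'killed'

Because the test data includes the boundary value 18, the
mutant is detected. A surviving mutant would signal that
the test cases are too weak.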
What are 5 common problems in the software development
process?
• poor requirements - if requirements are unclear,
incomplete, too general, or not testable, there will be
problems.
• unrealistic schedule - if too much work is crammed
into too little time, problems are inevitable.
• inadequate testing - no one will know whether or not
the program is any good until the customer complains or
systems crash.
• featuritis - requests to pile on new features after
development is underway; extremely common.
• miscommunication - if developers don't know what's
needed or customers have erroneous expectations,
problems are guaranteed.
What are 5 common solutions to software development
problems?
• solid requirements - clear, complete, detailed,
cohesive, attainable, testable requirements that are
agreed to by all players. Use prototypes to help nail
down requirements.
• realistic schedules - allow adequate time for
planning, design, testing, bug fixing, re-testing,
changes, and documentation; personnel should be able to
complete the project without burning out.
• adequate testing - start testing early on, re-test
after fixes or changes, plan for adequate time for
testing and bug-fixing.
• stick to initial requirements as much as possible - be
prepared to defend against changes and additions once
development has begun, and be prepared to explain
consequences. If changes are necessary, they should be
adequately reflected in related schedule changes. If
possible, use rapid prototyping during the design phase
so that customers can see what to expect. This will
provide them a higher comfort level with their
requirements decisions and minimize changes later on.
• communication - require walkthroughs and inspections
when appropriate; make extensive use of group
communication tools - e-mail, groupware, networked
bug-tracking tools and change management tools, intranet
capabilities, etc.; ensure that documentation is
available and up-to-date - preferably electronic, not
paper; promote teamwork and cooperation; use prototypes
early on so that customers' expectations are clarified.
What is software 'quality'?
Quality software is reasonably bug-free, delivered on
time and within budget, meets requirements and/or
expectations, and is maintainable. However, quality is
obviously a subjective term. It will depend on who the
'customer' is and their overall influence in the scheme
of things. A wide-angle view of the 'customers' of a
software development project might include end-users,
customer acceptance testers, customer contract officers,
customer management, the development organization's
software maintenance engineers, stockholders, magazine
columnists, etc. Each type of 'customer' will have their
own slant on 'quality' - the accounting department might
define quality in terms of profits while an end-user
might define quality as user-friendly and bug-free.