Automation QA Testing Course Content

Manual Testing Introduction


1. What is the necessity of Testing?

Testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but some of them are expensive or dangerous. We need to check everything and anything we produce because things can always go wrong - humans make mistakes all the time - it is what we do best!
Because we should assume our work contains mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it. So we may not notice the flaws in what we have done. Ideally, we should get someone else to check our work - another person is more likely to spot the flaws.

2. What is Software Testing?


IEEE Terminology: An examination of the behavior of the program by executing on sample data sets
Testing is executing a program with an intention of finding bugs/defects, reporting the defects and verifying the defects.
Cause as many failures as possible so that faults can be identified and corrected: This is the main goal of a test team, cause failures so that developers fix them.

Testing & Debugging:

Testing and debugging are different. Executing tests can show failures that are caused by defects in the software. 

Debugging is the development activity that finds, analyzes, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects. In some cases, testers are responsible for the initial test and the final confirmation test, while developers do the debugging and associated component testing. However, in Agile development and in some other lifecycles, testers may be involved in debugging and component testing.

Debugging: The process of finding, analyzing and removing the causes of failures in software.


Testing is executing a program with the intent of finding errors, faults, and failures.

Software testing is an organizational process within software development in which business-critical software is verified for correctness, quality, and performance. It is used to ensure that business systems and product features behave as expected.

Software testing may either be a manual or an automated process.

  • Manual software testing: testing of the software where tests are executed manually by a QA analyst. It is performed to discover bugs in software under development.

    In Manual testing, the tester checks all the essential features of the given application or software. In this process, the software testers execute the test cases and generate the test reports without the help of any automation software testing tools.

  • Automated software testing: testers write code/test scripts to automate test execution. Testers use appropriate automation tools to develop the test scripts and validate the software. The goal is to complete test execution in less time.
  • Automated testing allows you to execute repetitive tasks and regression tests without the intervention of a manual tester. Even though the tests themselves run automatically, automation requires some manual effort to create the initial test scripts.
  • KEY DIFFERENCE

    • Manual testing is done manually by a QA analyst (a human), whereas automation testing is done with scripts, code, and automation tools (a computer) driven by a tester.
    • The manual testing process is less accurate because of the possibility of human error, whereas the automated process is more reliable because it is code and script based.
    • Manual testing is a time-consuming process, whereas automation testing is very fast.
    • Manual testing is possible without programming knowledge, whereas automation testing is not possible without programming knowledge.
    • Manual testing allows random (ad hoc) testing, whereas automation testing doesn't.
    • ------------------------------------------------------------------------
  1. Defect:

    • Definition: A defect is a deviation from the expected behavior or specification in the software application.
    • Example: Suppose a login form should validate the password length, but it fails to do so, allowing users to set a password shorter than the specified minimum length.
  2. Bug:

    • Definition: A bug is a coding error that causes a defect in the software.
    • Example: In a banking application, a bug in the code might result in incorrect calculations of interest rates, leading to financial discrepancies.
  3. Error:

    • Definition: An error is a human action that produces an incorrect or unexpected result.
    • Example: During data entry, a user accidentally enters a negative value for a quantity, causing errors in subsequent calculations.
  4. Fault:

    • Definition: A fault is a defect in the software that may or may not result in failure.
    • Example: In an e-commerce application, a fault in the payment processing module may lead to occasional payment failures for certain users.
  5. Failure:

    • Definition: A failure occurs when the software does not perform as expected and deviates from its intended behavior.
    • Example: A failure could be the inability of a messaging app to send messages during peak hours due to a server overload.
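To see how these terms relate in practice, here is a minimal Python sketch (the function and the numbers are invented purely for illustration): the author's wrong assumption is the error, the wrong line of code is the fault/defect, and the wrong observable result at run time is the failure.

    # Hypothetical interest calculator, used only to illustrate the terminology.
    def monthly_interest(balance, annual_rate):
        # FAULT (defect): the annual rate is never divided by 12,
        # the result of a human ERROR (a wrong assumption by the author).
        return balance * annual_rate / 100          # should be: / 100 / 12

    # FAILURE: executing the faulty code produces a result that deviates
    # from the expected behavior.
    expected = 12.0
    actual = monthly_interest(1200, 12)             # returns 144.0
    print(actual == expected)                       # False -> a visible failure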

3.Testing principles:

1. Testing shows the presence of defects. Testing is aimed at detecting the defects within a piece of software. But no matter how thoroughly the product is tested, we can never be 100 percent sure that there are no defects. We can only use testing to reduce the number of undiscovered issues.

2. Exhaustive testing is impossible. There is no way to test all combinations of data inputs, scenarios, and preconditions within an application. For example, if a single app screen contains 10 input fields with 3 possible value options each, then to cover all possible combinations test engineers would need to create 59,049 (3^10) test scenarios. And what if the app contains 50+ such screens? In order not to spend weeks creating millions of scenarios, most of which add little value, it is better to focus on the potentially more significant ones.

Exhaustive Testing: This is an impossible goal that can't be achieved
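The arithmetic behind this principle is easy to check; a quick Python sketch:

    # One screen: 10 input fields with 3 possible values each.
    per_screen = 3 ** 10
    print(per_screen)                   # 59049 scenarios for a single screen

    # Covering 50 such screens in combination would need (3**10)**50 scenarios:
    print(len(str(per_screen ** 50)))   # a 239-digit number of scenarios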

3.Early testing. As mentioned above, the cost of an error grows exponentially throughout the stages of the SDLC. Therefore it is important to start testing the software as soon as possible so that the detected issues are resolved and do not snowball.

4.Defect clustering. This principle is often referred to as an application of the Pareto principle to software testing. This means that approximately 80 percent of all errors are usually found in only 20 percent of the system modules. Therefore, if a defect is found in a particular module of a software program, the chances are there might be other defects. That is why it makes sense to test that area of the product thoroughly.

5.Pesticide paradox. Running the same set of tests again and again won’t help you find more issues. As soon as the detected errors are fixed, these test scenarios become useless. Therefore, it is important to review and update the tests regularly in order to adapt and potentially find more errors.

6.Testing is context dependent. Depending on their purpose or industry, different applications should be tested differently. While safety could be of primary importance for a fintech product, it is less important for a corporate website. The latter, in its turn, puts an emphasis on usability and speed.

7.Absence-of-errors fallacy. The complete absence of errors in your product does not necessarily mean its success. No matter how much time you have spent polishing your code or improving the functionality if your product is not useful or does not meet the user expectations it won’t be adopted by the target audience.

While the above-listed principles are undisputed guidelines for every software testing professional, there are more aspects to consider. Some sources note other principles in addition to the basic ones:

  • Testing must be an independent process handled by unbiased professionals.
  • Test for invalid and unexpected input values as well as valid and expected ones.
  • Testing should be performed only on a static piece of software (no changes should be made in the process of testing).
  • Use exhaustive and comprehensive documentation to define the expected test results.

Typical Objectives of Testing:

For any given project, the objectives of testing may include:

-To evaluate work products such as requirements, user stories, design, and code

-To verify whether all specified requirements have been fulfilled

-To validate whether the test object is complete and works as the users and other stakeholders expect

-To build confidence in the level of quality of the test object

-To prevent defects

-To find failures and defects

--To provide sufficient information to stakeholders to allow them to make informed decisions, especially regarding the level of quality of the test object

-To reduce the level of risk of inadequate software quality (e.g., previously undetected failures occurring in operation)

-To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object’s compliance with such requirements or standards

The objectives of testing can vary, depending upon the context of the component or system being tested, the test level, and the software development lifecycle model.

4. Why is software testing necessary?

To discover defects.
To avoid users detecting problems.
To reduce the risk of failures in use (testing can never prove that the software has no faults).
To learn about the reliability of the software.
To ensure that the product works as users expect.
To stay in business.
To detect defects early, which helps in reducing the cost of defect fixing.
Software testing may also be required to meet contractual or legal requirements or industry-specific standards.

5.What are some recent major computer system failures caused by software bugs? 

  • In March of 2012 the initial public offering of the stock of a new stock exchange was cancelled due to software bugs in its trading platform that interfered with trading in stocks, including its own IPO stock, according to media reports. The high-speed trading platform reportedly was already handling more than 10 percent of all trading in U.S. securities, but the processing of initial IPO trading was new for the system, and though it had undergone testing, it was unable to properly handle the initial IPO trades. The problem also briefly affected trading of other stocks and other stock exchanges.
  • It was reported that software problems in an automated highway toll charging system caused erroneous charges to thousands of customers in a short period of time in December 2011.
  • A U.S. county found that their state's computer software assigned thousands of voters to invalid voting locations in November 2011 for an upcoming election due to the system's problems accepting new voting district boundary information.
  • In August 2011, a major North American retailer initiated its own online e-commerce website, after contracting it out for many years. It was reported that within the first few months the site crashed six times, home page links were found not to work, gift registries were reported not working properly, and the online division's president left the company.
  • A new U.S.-government-run credit card complaint handling system was not working correctly according to August 2011 news reports. Banks were required to respond to complaints routed to them from the system, but due to system bugs the complaints were not consistently being routed to companies as expected. Reportedly the system had not been properly tested.
  • News reports in Asia in July of 2011 reported that software bugs in a national computerized testing and grading system resulted in incorrect test results for tens of thousands of high school students. The national education ministry had to reissue grade reports to nearly 2 million students nationwide.
  • In April of 2011 bugs were found in popular smartphone software that resulted in long-term data storage on the phone that could be utilized in location tracking of the phone, even when it was believed that locator services in the phone were turned off. A software update was released several weeks later which was expected to resolve the issues.
  • Software problems in a new software upgrade for farecards in a major urban transit system reportedly resulted in a loss of a half million dollars before the software was fixed, according to October 2010 news reports.

6.Why does software have bugs? 

  • miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
  • software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tier distributed systems, applications utilizing multiple local and remote web services applications, data communications, enormous relational databases, security complexities, and sheer size of applications have all contributed to the exponential growth in software/system complexity.
  • programming errors - programmers, like anyone else, can make mistakes.
  • changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in the LFAQ. Also see information about 'agile' approaches such as XP, in Part 2 of the FAQ.
  • time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
  • egos - people prefer to say things like:
        'no problem'
        'piece of cake'
        'I can whip that out in a few hours'
        'it should be easy to update that old code'
    instead of:
        'that adds a lot of complexity and we could end up making a lot of mistakes'
        'we have no idea if we can do that; we'll wing it'
        'I can't estimate how long it will take, until I take a close look at it'
        'we can't figure out what that old spaghetti code did in the first place'
    If there are too many unrealistic 'no problem's, the result is bugs.
  • poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').
  • software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

7. How exactly is testing different from QA/QC?


Testing is often confused with the process of quality control and quality assurance.
Testing can find faults; when they are removed, software quality is improved.
QC is the process of inspections, walkthroughs, and reviews.
Quality Control: Quality control involves various activities, including test activities, that support the achievement of appropriate levels of quality. Test activities are part of the overall software development or maintenance process. Since quality assurance is concerned with the proper execution of the entire process, quality assurance supports proper testing.

QA involves monitoring and improving the entire SDLC process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

Quality Management: Quality management includes all activities that direct and control an organization with regard to quality. Among other activities, quality management includes both quality assurance and quality control. Quality assurance is typically focused on adherence to proper processes, in order to provide confidence that the appropriate levels of quality will be achieved. When processes are carried out properly, the work products created by those processes are generally of higher quality, which contributes to defect prevention. In addition, the use of root cause analysis to detect and remove the causes of defects, along with the proper application of the findings of retrospective meetings to improve processes, are important for effective quality assurance.

Put simply, testing is quality control. Quality assurance is to test the process; quality control is to test the product - both in order to deliver a quality product.

Quality Assurance vs Quality Control:
  • QA is preventive in nature; QC is detective in nature.
  • QA helps establish the process; QC relates to a specific product or service.
  • QA sets up measurement programs to evaluate the process; QC verifies whether specific attributes are present in the product/service.
  • QA identifies weaknesses in the process and improves them; QC identifies and corrects defects.
  • QA is a management responsibility, frequently performed by a staff function; QC is the responsibility of the team/worker.
  • QA is concerned with all products produced by the process; QC is concerned with a specific product.
  • QA can be thought of as a quality control applied over the quality control activity itself.

  


8.What is verification? validation? 
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Verification is concerned with evaluating a work product, component or system to determine whether it meets the requirements set. In fact, verification focuses on the question 'Is the deliverable built according to the specification?'.

Verification process:

Reviews: re-examining the documents. Reviews are of many types:
                i) peer-to-peer review
               ii) peer-to-manager review
              iii) technical reviews (dev team, business analyst, product manager, or customer)
What is a 'walkthrough'? 
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
Example:Code walkthrough
What's an 'inspection'? 
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost-effective methods of ensuring quality.

Validation is concerned with evaluating a work product, component or system to determine whether it meets the user needs and requirements. Validation focuses on the question 'Is the deliverable fit for purpose, e.g. does it provide a solution to the problem?'.
Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

9.What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

 However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

Verification vs Validation:
·         Verification includes checking documents, design, code, and program; validation is a dynamic mechanism of testing and validating the actual product.
·         Verification does not involve executing the code; validation always involves executing the code.
·         Verification uses methods like reviews, walkthroughs, inspections, desk-checking, etc.; validation uses methods like black box testing, white box testing, grey box testing, and non-functional testing.
·         Verification checks whether the software conforms to the specification; validation checks whether the software meets the requirements and expectations of the customer.
·         Verification finds bugs early in the development cycle; validation can find bugs that the verification process cannot catch.
·         Verification targets the application and software architecture, specification, complete design, high-level and database design, etc.; validation targets the actual product.
·         The QA team does verification to make sure the software meets the requirements in the SRS document; validation is executed on the software code with the involvement of the testing team.
·         Verification comes before validation; validation comes after verification.

Verification and Validation: Differences in Functions

Verification and validation are the main aims of the workbench concept, but it is important to know the difference between them in order to clearly outline the specific elements of each process:

Verification

  1. Checks program, documents, and design.
  2. Reviews, desk-checking, walkthroughs, and inspection methods.
  3. Check of accordance with the specified requirements.
  4. Bug detection is performed on the cycle of early development.
  5. It precedes the validation.

Validation

  1. It is a process of testing and validating the real product.
  2. It uses non-functional testing, Black Box Testing, and White Box Testing.
  3. Checks whether the software is in compliance with customers’ expectations.
  4. It can detect bugs, which are missed by verification.
  5. It is performed when the verification is done.

We can conclude that verification is conducted at the initial stage, before validation, and verifies the input requirements. Validation builds on the examination performed by verification, detecting the missed issues and bringing the real product closer to the definition of done.



Workbench Concept: Goals and Stages

This is a method which aims to examine and verify the structure of testing performance through detailed documentation. The workbench process has common stages and steps which serve different test assignments. The common stages of each workbench include:

Stages of workbench concept

Input. The initial workbench stage. Each assignment should define its input and output requirements, so that the available parameters and expected results are known. Each workbench has its specific inputs, depending on the type of product under testing.

Performance. The aim of this stage is to transform the input parameters into the required outputs and reach the prescribed results.

Check. An examination of the output parameters after the performance phase, to verify that they match the expected ones.

Production output. The final stage of a workbench, reached when the check confirms that the performance was conducted properly.

Reworking. If the output parameters do not comply with the desired result, it is necessary to return to the performance phase and conduct it again from the beginning.


SDLC: software development life cycle:  

Life cycle: the entire duration of a project, from inception to termination. There are different life cycle models.

A framework that describes the activities performed at each stage of a software development project  :

Feasibility Study -> Analysis -> Design -> Coding -> Testing -> Installation & Maintenance

Initial/requirements gathering:

1) Business analyst: gathers all the requirements from the customer and then prepares the Business Requirement Document (BRD). It contains very high-level requirements.

Testing activities start in this initial phase itself: testers read and understand the BRD.
Output document of the initial phase: BRD

2)Analysis:
System analyst:
The intention is to determine how well the business copes with its current information-processing needs. The following points are considered during the analysis phase:
1) whether the application can be improved
2) whether it is technically possible
3) whether it is legally possible
4) whether it is financially possible
5) which hardware and software are needed for complete development of the application
6) how many resources are required
A System Requirement Specification (SRS) is prepared with all resource information.

A Functional Requirement Specification (FRS) document is prepared and given to the team. The FRS document contains very detailed requirements.
Designing test cases early and comparing them (test design) with the test basis (the requirements) helps detect requirements defects and prevents them from escaping into development. In this way we prevent defects from occurring in the software, because they are detected in the requirements stage, before implementation.

What are the base documents for writing the test cases?
1) FRS (or) use case document
2) Screen layouts: prepared by the UI/UX designer; sample formats of the application screens are designed and shared with the team.

Output of this phase: SRS, FRS

3) Design:
Architect: prepares the high-level design document (HLD) - internal logic/flow diagrams and the links between the main modules and the database tables.
Tech lead: prepares the low-level design document (LLD) - the functional flow is explained and pseudo code is prepared.

Output of this phase: HLD, LLD

4) Coding:
Developers/programmers and the dev manager/product managers participate in the coding phase.
  • This is the longest phase.
  • This phase consists of front end + middleware + back end.
  • Front end: development coding is done and even SEO (Search Engine Optimization) settings are applied. The front end is built using HTML5, JavaScript, and CSS.
  • Middleware: connects the front end and the back end. The middleware is built using Java, C#, Python, Ruby, C++, etc.
  • Back end: the database is created, using databases such as MySQL, Oracle, MongoDB, Microsoft SQL Server, etc.
Developers/programmers write the logic for the functionality.
Unit testing: testing each and every micro function of the application is called unit testing.
The dev team prepares a build (a collection of software code).

Output of the coding phase: the build URL is ready.

Developed modules are communicated to the testing team in meetings or through email.

5)Testing:

Roles: QA manager, test lead, senior test engineer, test engineer.

Smoke testing / build verification testing: to ensure that there are no blockers/show-stoppers in the application before conducting detailed testing.

Test case execution: each test case is executed on the given build.
If a test case passes, its status is changed to "pass"; if a test case fails, mark its status "fail" and then file a bug.
The test basis is "a source to determine expected results to compare with the actual result of the system under test" - the body of knowledge used for test analysis and design.

Defect reporting: reporting the defect in a bug tracking tool is called defect reporting.

A bug triage meeting takes place (dev team/dev manager/testers).

The bug fix is done by the developers/programmers.

Retesting: to confirm whether the failed test case works properly after the fix. Only the failed test cases are re-executed, with the same test data and test steps, on the updated build.

Bug fix verification: verifying all the fixed bugs is called bug fix verification.

Regression testing: to ensure that no new bugs are introduced in unchanged areas of the application due to modifications/changes; the existing functionality should work as expected. Both passed and failed test cases are executed.

If no issues are found, give the sign-off.

6. Deployment:
After successful testing, the product is delivered/deployed to the client, and the client is trained in how to use the product.
7. Maintenance:
Once the product has been delivered to the client, the task of maintenance starts: whenever the client comes up with an error, the issue should be fixed in a timely manner.
===================================================================

Software Testing Life Cycle or Testing Process:



STLC: all the testing activities carried out by the testing team, from the beginning of the project till the release, are called the STLC.
Test Process: the set of interrelated activities comprising test planning, test monitoring and control, test analysis, test design, test implementation, test execution, and test completion.
Test Control: A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
1) Understand the requirement document and gain domain knowledge. If you have any queries regarding the requirements, set up a meeting with the BA/product manager and get them clarified.
2) Design the test plan document: prepared by the QA manager/test lead.
3) Design the test cases using the FRS/use case/screen layouts/user story (Agile Scrum methodology). As noted above, designing test cases early and comparing them with the test basis (the requirements) helps detect requirements defects before implementation.

Test Design: The activity of deriving and specifying test cases from test conditions.

4) Review the test case document:

peer -> peer
peer -> manager
technical reviews (dev team/BA/PM/client)
i) If any review comments are provided, incorporate those changes into the test case document.
ii) If there are no review comments, that is the final draft.
Test Implementation: The activity that prepares the testware needed for test execution based on test analysis and design.
5) Prepare an environment, or get a build from the dev team.
6) Perform smoke/build verification testing: to ensure that there are no blockers/show-stoppers in the application before conducting detailed testing - i.e., whether the build is testable or not.
If you find a blocker bug, file it in the defect tracking tool (ALM/JIRA/Bugzilla/Issue Tracker, etc.) and then send an email to all the people involved, including your QA manager, dev manager, and the developer.
If there are no blocker bugs in the build, continue with further detailed testing.
7) Test case execution: execute all the test cases and prepare the test log report.
If any test case fails, mark its status "Fail" and then file a bug.
8) Defect reporting: if any test case fails, file a bug in the bug tracking tool (HP ALM, JIRA, Bugzilla, ...) with all the necessary information and assign it to the dev team.
Fixing of the bug is done by the developers.

In a false positive, the tester reports a defect which is not a defect. For example, the tester was testing the software and the website didn't load because the internet connection dropped; he reported it as a defect anyway.

In a false negative, there is a defect in the software but the tester didn't find it. For example, the tester executed all the test cases for the mobile app in portrait mode, but there are undiscovered defects that don't appear unless the app is used in landscape mode; those defects were never reported.

False negatives are tests that do not detect defects that they should have detected; false positives are reported as defects, but aren’t actually defects.

A developer finds and fixes a defect: fixing the defect is debugging, no matter who found the defect.

9) Bug fix verification: verifying all the fixed bugs on the updated build.
Retesting: confirming that a failed test case now works properly as expected. Only failed test cases are retested.
Regression testing: to ensure that no new bugs are introduced in unchanged areas of the application because of modifications/changes to the application; the existing functionality should work as expected.

10) If no new bugs are present in the application, give the sign-off.
Test Monitoring: A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status, and reporting status to stakeholders.

11) Deliverables:
1) Test case document
2) Test plan document
3) Test log report
4) Defect report

Everyday tasks:

Participate in the daily meetings and update the testing status (what you did yesterday and what you are going to do today).
Respond to emails.
=============================================================

i. Delaying the release date because the UAT build is not ready yet

-This is a control activity because here we applied a corrective action to get a test project on track.

ii. Calculating the number of test cases executed during the last iteration

-This is a monitoring activity because here we are checking the status of testing activities.

iii. Holding a retrospective meeting at the end of an iteration

-This is a monitoring activity because here we are checking the status of testing activities. The retrospective meeting deals with testing & non-testing issues, but that doesn't mean that it is not a test monitoring activity.

iv. Changing the story points of a user story from 5 to 20 because a new risk has been identified

-This is a control activity because here we applied a corrective action to get a project on track. Changing the story points is mainly not the responsibility of the tester but if we consider it as a reviewing activity, it should be a control activity.

v. Looking at the burn-down chart and analyzing points where the team was off-track

-This is a monitoring activity because here we are checking the status of testing activities. The burn-down chart deals with testing & non-testing issues, but that doesn't mean that looking at it is not a test monitoring activity.

======================================================================




Difference between Windows And Web Based Applications:





===============================================

TEST LEVELS:







Different types of Test environments in Software projects:

1. Development Environment: Used for writing and testing code, typically on a developer's local machine.

2. Integration Environment: Integrates different software components, ensures they work together.

3. Staging Environment: Pre-production environment, mimics production for final testing.

4. User Acceptance Testing (UAT) Environment: End-users test software for requirements, mirrors production closely.

5. Production Environment: Live environment for end-users, where software is used as intended.

6. Performance Testing Environment: Tests software under load, matches production hardware.

7. Security Testing Environment: Focuses on software security, uses security tools and configurations.

8. Sandbox or Experimental Environment: For experimental and exploratory testing, allows trying new ideas safely.

The five test levels used, each with their own objectives, are:

Unit Testing: Unit testing involves verification of individual components or units of source code. A unit can be referred to as the smallest testable part of any software. It focuses on testing the functionality of individual components within the application. It is often used by developers to discover bugs in the early stages of the development cycle. Developers perform unit testing in the dev environment.

 A unit test case would be as fundamental as clicking a button on a web page and verifying whether it performs the desired operation. For example, ensuring that a share button on a webpage lets you share the correct page link.


Unit testing Example – The battery is checked for its life, capacity and other parameters. Sim card is checked for its activation.
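As a concrete sketch, a unit test for the share-button example might look like this in Python with pytest (the function under test is invented for illustration):

    # share_link.py - a hypothetical unit under test
    def build_share_link(base_url, page_id):
        # Return the link a share button should place on the clipboard.
        return f"{base_url}/pages/{page_id}"

    # test_share_link.py - a unit test exercising that single function
    def test_build_share_link():
        assert build_share_link("https://example.com", 42) == "https://example.com/pages/42"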

  Component Testing:

    Testing a module or component independently to verify its expected output is called component testing. Generally, component testing is done to verify the functionality and/or usability of a component, but it is not restricted to only these. A component can be anything that takes input(s) and delivers some output: a module of code, a web page, a screen, or even a system inside a bigger system.

For example, let's see what we can test in a login component separately:

  • Testing the UI part for usability and accessibility
  • Testing the Page loading to ensure performance
  • Testing the login functionality with valid and invalid user credentials
Component testing is done by testers, in the testing environment.

  Integration Testing:

    Integration testing is the next step after component testing. Multiple components/modules are integrated into a single unit, and testing is performed to check that the data flow from one module to the other modules is happening correctly. For example, testing a series of webpages in a particular order to verify interoperability.

    This approach helps QAs evaluate how several components of the application work together to provide the desired result. Performing integration testing in parallel with development allows developers to detect and locate bugs faster.

Integration testing is done by testers and developers, in the integration environment.
After completion of the dependent programs' development and unit testing, the programmers interconnect them. They then verify the interconnection of the programs in one of the three ways below:
   Top-Down Approach
   Bottom-Up Approach
   Hybrid (Sandwich) Approach

Top-Down Approach:
The interconnection of the main program with some sub-programs is called the top-down approach. Programmers use temporary programs called stubs instead of sub-programs which are under construction. The other name for stubs is "called programs". A stub returns control to the main program.
* In this approach, parent modules are developed first.
* After that, child modules are developed.
* Then parent and child modules are interconnected.
* If any sub-module in the interconnection process is under construction, the developers create a temporary program in its place; this is called a "stub".
Bottom-Up Approach:
The interconnection of internal sub-programs without using the main program is called the bottom-up approach. In this approach, programmers use a temporary program instead of the main program, which is under construction. The temporary program is called a "driver" or "calling program".

* In this approach, child modules are developed first.
* After that, parent modules are developed.
* Then child modules are interconnected with parent modules.
* If the main module in the interconnection process is under construction, the developers create a temporary program in its place; this is called a "driver".
Difference Between STUB & DRIVER:
1. A stub is a temporary program used instead of a sub-program which is under construction; a driver is a temporary program used instead of the main program, which is under construction.
2. A stub is used in the top-down approach; a driver is used in the bottom-up approach.
3. The other name for a stub is "called program"; the other name for a driver is "calling program".
4. A stub returns control to the main program.


Hybrid Approach:
Also known as the "sandwich approach", this is a combination of the top-down and bottom-up approaches.

System Approach:
Also known as the "big bang approach". In this approach, the programmers interconnect the programs only after completion of all program development and unit testing.

Build:
A finally integrated set of all programs is called a "build", or the AUT (Application Under Test).

Integration testing example: the battery and SIM card are integrated, i.e. assembled, in order to start the mobile phone.
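The stub/driver idea is easy to show in code. A minimal Python sketch (the module names are invented): in top-down integration an unfinished child module is replaced by a stub; in bottom-up integration the unfinished main program is replaced by a driver.

    # Top-down: the parent (login flow) is real, the child (email service) is not ready.
    def send_welcome_email_stub(user):
        # STUB: stands in for the under-construction sub-program and simply
        # returns control (with a canned answer) to the calling program.
        return True

    def login(user, send_email=send_welcome_email_stub):
        # Real parent module calling the stubbed child.
        return "welcome" if send_email(user) else "error"

    # Bottom-up: the child (interest calculator) is real, the parent is not ready.
    def calculate_interest(balance):
        return balance * 0.05

    def driver():
        # DRIVER: a temporary calling program that exercises the child module
        # because the real main program does not exist yet.
        assert calculate_interest(1000) == 50.0

    print(login("alice"))   # -> welcome
    driver()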
System Testing: as the name suggests, system testing involves testing all the integrated modules of the software as a whole. It helps QAs verify whether the system meets the desired requirements. It includes multiple tests like validating output based on specific input, testing the user experience, and more.

Teams perform several types of system testing, like regression testing, stress testing, functional testing, and more, depending on their access to time and resources.

Testing the whole system against the functional requirement document is called system testing. This is also called black box testing.


i) GUI testing:
   look & feel
   spelling mistakes
   alignment issues
   consistency
   tab order
ii) Positive testing: testing the application by inputting valid data and following the positive functional flow is called positive testing (also known as happy path testing).
Example: entering a valid username and password in the login section.
iii) Negative testing: performed using incorrect or invalid data/input. It validates whether the system throws an error for invalid input and behaves as expected.
Example: provide an invalid username and password and then click the login button.

System testing is carried out in the staging environment, by testers.
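As an illustration of positive and negative testing of the login example, here is a hedged Selenium sketch in Python (the URL and element IDs are hypothetical; a real test would use the application's actual locators):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()

    def attempt_login(username, password):
        driver.get("https://example.com/login")     # hypothetical URL
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login").click()

    # Positive (happy path) test: valid data should land on the home page.
    attempt_login("valid_user", "Valid@123")
    assert "Home" in driver.title

    # Negative test: invalid data should show an error, not crash or log in.
    attempt_login("bad_user", "wrong")
    assert driver.find_element(By.ID, "error").is_displayed()

    driver.quit()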

Example: 

Let us understand these three types of testing with an oversimplified example.

E.g. For a functional mobile phone, the main parts required are “battery” and “sim card”.

Functional Testing Example – The functionality of a mobile phone is checked in terms of its features and battery usage as well as sim card facilities.


Almost every web application requires its users/customers to log in. For that, every application has to have a “Login” page which has these elements:

  • Account/Username
  • Password
  • Login/Sign in Button

For Unit Testing, the following may be the test cases:

  • Field length – username and password fields.
  • Input field values should be valid.
  • The login button is enabled only after valid values (Format and lengthwise) are entered in both the fields.

For Integration Testing, the following may be the test cases:

  • The user sees the welcome message after entering valid values and pushing the login button.
  • The user should be navigated to the welcome page or home page after valid entry and clicking the Login button.

Now, after unit and integration testing are done, let us see the additional test cases that are considered for functional testing:

  1. The expected behavior is checked, i.e. is the user able to log in by clicking the login button after entering valid username and password values.
  2. Is there a welcome message that is to appear after a successful login?
  3. Is there an error message that should appear on an invalid login?
  4. Are there any stored site cookies for login fields?
  5. Can an inactivated user log in?
  6. Is there any ‘forgot password’ link for the users who have forgotten their passwords?

System Integration or End-to-End Testing: similar to system testing, end-to-end testing involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  
Acceptance Testing: the main goal of acceptance testing is to verify whether the system as a whole is fit for use in the real world.

Acceptance testing is performed both internally and externally. 

Internal acceptance testing (also known as alpha testing) is performed by the members within the organization. 

External testing (also known as the beta testing) is performed by a limited number of actual end-users. This approach helps teams evaluate how well the product satisfies the user’s standards. It also identifies bugs in the last stage before releasing a product.

  Alpha testing: this test takes place at the company's location. A cross-section of potential users and members of the developer's organization are invited to use the system. Developers observe the users and note problems. Alpha testing may also be carried out by an independent test team.
This is done in a pre-live environment.

Beta testing, or field testing, sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization, where the defects are repaired.
Contractual Acceptance Testing: acceptance testing conducted to verify whether a system satisfies its contractual requirements.
===============================================

Test Plan:



  
During test planning, we make sure we understand the goals and objectives of the customers, stakeholders, and the project, and the risks which testing is intended to address. This will give us what is sometimes called the mission of testing or the test assignment. Based on this understanding, we set the goals and objectives for the testing itself, and derive an approach and plan for the tests, including specification of test activities.
Test policy gives rules for testing, e.g. 'we always review the design documents';
Test strategy is the overall high-level approach, e.g. 'system testing is carried out by an independent team reporting to the program quality manager'.
     The test plan design document helps in test execution. It contains:
1.      About the client and company
2.      Reference document (BRS, FRS and UI etc)
3.      Scope (What to be tested and what not to be)
4.      Overview of the Application
5.      Testing approach (Test Strategy)
6.      For each Testing
·      Definition
·      Start criteria
·      Stop criteria
7.      Resources and Roles & Responsibilities
8.      Defect Definition
9.      Risk/Contingency/Mitigation Plan
10.  Training Required
11.  Schedules
12.  Deliverables 




Test Plan vs Test Strategy:
·         A test plan for a software project is a document that defines the scope, objective, approach, and emphasis of a software testing effort; a test strategy is a set of guidelines that explains test design and determines how testing needs to be done.
·         Components of a test plan include: test plan ID, features to be tested, test techniques, testing tasks, features pass or fail criteria, test deliverables, responsibilities, schedule, etc.; components of a test strategy include: objectives and scope, documentation formats, test processes, team reporting structure, client communication strategy, etc.
·         A test plan is carried out by a testing manager or lead and describes how to test, when to test, who will test, and what to test; a test strategy is carried out by the project manager and says what type of technique to follow and which module to test.
·         A test plan narrates the specifics; a test strategy narrates the general approach.
·         A test plan can change; a test strategy cannot be changed.
·         Test planning is done to determine possible issues and dependencies in order to identify risks; a test strategy is a long-term plan of action, and information that is not project specific can be abstracted out of the test plan and put into the test approach/strategy.
·         A test plan exists individually; in smaller projects, the test strategy is often found as a section of the test plan.
·         A test plan is defined at the project level; a test strategy is set at the organization level and can be used by multiple projects.

TESTING TECHNIQUES:



   1. Static testing techniques:
In static testing, software work products are examined manually, or with a set of tools, but not executed.
Examples: reviews, inspections, walkthroughs.
   2. Dynamic testing techniques:
In dynamic testing, the software is executed using a set of input values and its output is then examined and compared to what is expected.
Black box testing: mainly focuses on the external behavior of the application, not on any knowledge of the internal design or code. Tests are based on requirements and functionality.

Example: A simple example of black-box testing is a TV (Television). As a user, we watch the TV but we don’t need the knowledge of how the TV is built and how it works, etc. We just need to know how to operate the remote control to switch on, switch off, change channels, increase/decrease volume, etc.

In this example, The TV is your AUT (Application Under Test).

The remote control is the User Interface (UI) that you use to test.

You just need to know how to use the application.

Advantages of Black Box Testing
- The tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- The test inputs need to be taken from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There are chances of unidentified paths remaining during this testing.

Test data commonly include the following types

🌟 Valid test data. It is necessary to verify whether the system functions are in compliance with the requirements, and the system processes and stores the data as intended.

🌟 Invalid test data. QA engineers should inspect whether the software correctly processes invalid values, shows the relevant messages, and notifies the user that the data are improper.
🌟 Boundary test data.
🌟 Wrong data.
🌟 Absent data.


Black box testing techniques:
        1. Equivalence Class Partitioning (ECP):
For each piece of specification, generate one or more equivalence classes.
Label the classes as "valid" or "invalid".
Generate one test case for each invalid equivalence class.
Generate a test case that covers as many valid equivalence classes as possible.
Example: an input box accepts up to 50 characters. The input range can be divided into classes:
1 to 10, 11 to 20, 21 to 30, 31 to 40, 41 to 50
Dividing the input range into equivalence classes like this is called ECP.
2. Boundary Value Analysis (BVA): testing the application's input box/text box/edit box field at the boundary values (inside, at, and outside the boundaries) is called boundary value analysis.
Generate test cases for the boundary values:
Minimum value, min+1, min-1
Maximum value, max+1, max-1
Example:
The password field should accept 6 to 12 characters.
Min value = 6; min+1 = 7; min-1 = 5
Max value = 12; max+1 = 13; max-1 = 11
Also test the password field without any characters (empty).
Test that the password is displayed in encrypted format (*, .).
Enter special characters: ~!@#$%^&*,.;"'?><
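These boundary cases translate directly into a table-driven test. A sketch with pytest, assuming a validate_password function that checks length only (both the function and the rule are stand-ins for the real application logic):

    import pytest

    def validate_password(pwd):
        # Assumed rule under test: length must be 6 to 12 characters.
        return 6 <= len(pwd) <= 12

    @pytest.mark.parametrize("pwd,expected", [
        ("a" * 5,  False),   # min - 1 -> invalid
        ("a" * 6,  True),    # min     -> valid
        ("a" * 7,  True),    # min + 1 -> valid
        ("a" * 11, True),    # max - 1 -> valid
        ("a" * 12, True),    # max     -> valid
        ("a" * 13, False),   # max + 1 -> invalid
        ("",       False),   # empty field
    ])
    def test_password_length_boundaries(pwd, expected):
        assert validate_password(pwd) == expected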

Task 1)
password field accepts 6 to 12 characters
password should contain at least one uppercase letter
password should contain at least one number
password should contain at least one special character

testcase designing:
verify entering invalid password less than 6 characters
verify entering invalid password more than 12 characters
verify leaving the password field blank
verify entering valid data in the password field(6 char/12/in between 6 & 12)
verify entering only numbers in password field
verify entering only alphabets in password field
verify entering only special characters in password field
verify entering only alphanumerics in password field
verify entering alphabets and special characters in password field
verify entering numbers and special characters in password field 
verify entering without uppercase letter in the password field

Task 2)
Write testcases for below requirements

Password:
• must contain 6-12 characters
• must contain both upper and lower case characters
• must contain numeric characters
• must contain special characters
• should not contain username or firstname or lastname






task 3)write the testcases for mobile number
mobile number 
847 987 4279
verify entering 10 numbers in the mobile number field
verify entering more than 10
verify entering less than 10
verify leaving mobile number field blank


Boundary Value Testing

This type of testing checks the behavior of the application at boundary level.

Boundary Value Testing is performed to check if defects exist at boundary values. Boundary Value Testing is used for testing a different range of numbers. There is an upper and lower boundary for each range and testing is performed on these boundary values.

If testing requires a test range of numbers from 1 to 500 then Boundary Value Testing is performed on values at 0, 1, 2, 499, 500 and 501.

3. Decision table testing

What is Decision Table in Software Testing?

The decision table is a software testing technique which is used for testing the system behavior for different input combinations. This is a systematic approach where the different input combinations and their corresponding system behavior are captured in a tabular form.
This table helps you deal with different combination inputs with their associated outputs. Also, it is known as the cause-effect table because of an associated logical diagramming technique called cause-effect graphing that is basically used to derive the decision table.

Why is Decision Table Important?

The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules.
  • Decision tables are very helpful in test design.
  • They help testers explore the effects of combinations of different inputs and other software states that implement business rules.
  • They provide a regular way of stating complex business rules, which benefits developers as well as testers.
  • They assist the development process, helping the developer do a better job; testing all combinations might be impractical.
  • They are the most preferable choice for testing and requirements management.
  • They are a structured exercise for preparing requirements when dealing with complex business rules.
  • They are also used to model complicated logic.

Advantages of Decision Table in Software Testing

There are different advantages of using the decision table in software testing such as:
  • Any complex business flow can be easily converted into the test scenarios & test cases using this technique.
  • Decision tables work iteratively. Therefore, the table created at the first iteration is used as the input table for the next tables. The iteration is done only if the initial table is not satisfactory.
  • Simple to understand and everyone can use this method to design the test scenarios & test cases.
  • It provides complete coverage of test cases, which helps to reduce rework on writing test scenarios & test cases.
  • These tables guarantee that we consider every possible combination of condition values. This is known as its completeness property.

    Way to use Decision Table: Example

    Decision Table is a tabular representation of inputs versus rules, cases or test conditions. Let’s take an example and see how to create a decision table for a login screen:
    The condition states that if the user provides the correct username and password the user will be redirected to the homepage. If any of the input is wrong, an error message will be displayed.
    Conditions | Rule 1 | Rule 2 | Rule 3 | Rule 4
    Username   | F      | T      | F      | T
    Password   | F      | F      | T      | T
    Output     | E      | E      | E      | H
    In the above example,
    • T – Correct username/password
    • F – Wrong username/password
    • E – Error message is displayed
    • H – Home screen is displayed
    Now let’s understand the interpretation of the above cases:
    • Case 1 – Username and password both were wrong. The user is shown an error message.
    • Case 2 – Username was correct, but the password was wrong. The user is shown an error message.
    • Case 3 – Username was wrong, but the password was correct. The user is shown an error message.
    • Case 4 – Username and password both were correct, and the user is navigated to the homepage.
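    To show how such a table can drive tests, here is a minimal JavaScript sketch in which the login function is a stand-in for the real authentication call (everything here is illustrative):

    // Stand-in for the real authentication call: returns 'H' (home page)
    // for correct credentials and 'E' (error message) otherwise.
    function login(usernameCorrect, passwordCorrect) {
      return usernameCorrect && passwordCorrect ? 'H' : 'E';
    }

    // One entry per rule in the decision table above.
    const rules = [
      { username: false, password: false, expected: 'E' }, // Rule 1
      { username: true,  password: false, expected: 'E' }, // Rule 2
      { username: false, password: true,  expected: 'E' }, // Rule 3
      { username: true,  password: true,  expected: 'H' }, // Rule 4
    ];

    rules.forEach((rule, i) => {
      const outcome = login(rule.username, rule.password);
      console.log(`Rule ${i + 1}: ${outcome === rule.expected ? 'PASS' : 'FAIL'}`);
    });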
    4. State transition testing:
    It is a type of testing based on the state machine model, wherein an application is tested based on changes in the application's state under varying inputs.
    State transition testing is used where some aspect of the system can be described in what is called a 'finite state machine'. This simply means that the system can be in a (finite) number of different states, and the transitions from one state to another are determined by the rules of the 'machine'. This is the model on which the system and the tests are based. Any system where you get a different output for the same input, depending on what has happened before, is a finite state system. A finite state system is often shown as a state diagram.
    For example, if you request to withdraw $100 from a bank ATM, you may be given cash. Later you may make exactly the same request but be refused the money (because your balance is insufficient). This later refusal is because the state of your bank account has changed from having sufficient funds to cover the withdrawal to having insufficient funds. The transaction that caused your account to change its state was probably the earlier withdrawal. A state diagram can represent a model from the point of view of the system, the account or the customer.
    Another example is a word processor. If a document is open, you are able to close it. If no document is open, then 'Close' is not available. After you choose 'Close' once, you cannot choose it again for the same document unless you open that document. A document thus has two states: open and closed.
    A state transition model has four basic parts:
    • the states that the software may occupy (open/closed or funded/insufficient funds);

    • the transitions from one state to another (not all transitions are allowed);
    • the events that cause a transition (closing a file or withdrawing money);
    • the actions that result from a transition (an error message or being given your cash).
    Note that in any given state, one event can cause only one action, but the same event - from a different state - may cause a different action and a different end state.

    Let's take the ATM PIN scenario and develop test cases:
    • The first test case is the most obvious one: enter the correct PIN the first time.
    • The second test should be to enter an incorrect PIN each time, so that the system rejects the card.
    • To cover all transitions, also do a test where the PIN is incorrect the first time but OK the second time, and another test where the PIN is correct on the third try.
    • These tests are less important than the first two.
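    A minimal JavaScript sketch of the underlying state machine may make this concrete. The state names and the three-attempt limit follow the scenario above; everything else is an assumption:

    // Minimal ATM PIN state machine (illustrative only).
    // 'waitingForPin' -> 'authenticated' on a correct PIN;
    // 'waitingForPin' -> 'cardRejected' after 3 wrong attempts.
    function createAtm(correctPin) {
      let state = 'waitingForPin';
      let attempts = 0;
      return {
        enterPin(pin) {
          if (state !== 'waitingForPin') return state; // no transitions out of end states
          if (pin === correctPin) {
            state = 'authenticated';
          } else if (++attempts >= 3) {
            state = 'cardRejected';
          }
          return state;
        },
      };
    }

    // Third test from the list above: wrong PIN first, correct PIN second.
    const atm = createAtm('1234');
    console.log(atm.enterPin('0000')); // waitingForPin
    console.log(atm.enterPin('1234')); // authenticated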
    5. Error guessing is a technique that should always be used as a complement to other, more formal techniques. The success of error guessing is very much dependent on the skill of the tester, as good testers know where the defects are most likely to lurk. Some people seem to be naturally good at testing, and others are good testers because they have a lot of experience, either as a tester or working with a particular system, and so are able to pinpoint its weaknesses. This is why an error-guessing approach, used after more formal techniques have been applied to some extent, can be very effective. In using more formal techniques, the tester is likely to gain a better understanding of the system, what it does and how it works. With this better understanding, he or she is likely to be better at guessing ways in which the system may not work properly.


    WHITE BOX TESTING:
    White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
    Structural testing is often referred to as 'white-box' or 'glass-box' because we are interested in what is happening 'inside the box'.
    Structural testing is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items

    Example: A Car mechanic should know the internal structure of the car engine to repair it.

    In this example,

    CAR is the AUT (Application Under Test).
    The user is the black box tester.
    The mechanic is the white box tester.

    Why we do White Box Testing?
    To ensure:
    • That all independent paths within a module have been exercised at least once.
    • That all logical decisions have been verified for both their true and false values.
    • That all loops have been executed at their boundaries and within their operational bounds, and that internal data structures are valid.
    Need of White Box Testing?
    To discover the following types of bugs:
    • Logic errors tend to creep into our work when we design and implement functions, conditions or controls that are outside the mainstream of the program.
    • Design errors arise from differences between the logical flow of the program and the actual implementation.
    • Typographical and syntax errors.
    Skills Required:
    We need to write test cases that ensure complete coverage of the program logic.
    For this we need to know the program well, i.e. we should know the specification and the code to be tested, and have knowledge of programming languages and logic.

    Basis path testing (or structured testing) is a white box method for designing test cases. The method analyzes the control flow graph of a program to find a set of linearly independent paths of execution.
    Statement coverage is a white box testing technique, which involves the execution of all the statements at least once in the source code. It is a metric, which is used to calculate and measure the number of statements in the source code which have been executed
    Statement coverage and statement testing
    Number of statements exercised
    Statement coverage = --------------------------------------------- x 100%
    Total number of statements
    Decision coverage and decision testing
    A decision is an IF statement, a loop control statement (e.g. DO-WHILE or REPEAT-UNTIL), or a CASE statement, where there are two or more possible exits or outcomes from the statement. With an IF statement, the exit can either be TRUE or FALSE, depending on the value of the logical condition that comes after IF. With a loop control statement, the outcome is either to perform the code within the loop or not - again a TRUE or FALSE exit.
    Decision coverage or branch coverage is a testing method which aims to ensure that each possible branch from each decision point is executed at least once, thereby ensuring that all reachable code is executed. That is, every decision is taken each way, true and false.
    Decision coverage is calculated by:
    Number of decision outcomes exercised
    Decision coverage = -------------------------------------------------------- x 100%
    Total number of decision outcomes
    Path coverage is a white-box testing concept that considers the possible paths of the software under test. One way to better understand path coverage is to create a graphical diagram of the code flow and follow each of the possible paths through every decision point.
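    As a small illustration with a hypothetical function: two independent decisions can reach 100% decision coverage with just two tests, yet they create four distinct paths, so full path coverage needs four tests.

    // Two independent decisions give 2 x 2 = 4 possible paths (illustrative only).
    function classify(a, b) {
      let result = '';
      if (a > 0) result += 'A'; // decision 1
      if (b > 0) result += 'B'; // decision 2
      return result;
    }

    // Tests (1, 1) and (-1, -1) already cover every branch,
    // but all four calls are needed to cover every path.
    console.log(classify(1, 1));   // path true/true   -> 'AB'
    console.log(classify(1, -1));  // path true/false  -> 'A'
    console.log(classify(-1, 1));  // path false/true  -> 'B'
    console.log(classify(-1, -1)); // path false/false -> ''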

    An introduction to code coverage

    Code coverage is a metric that can help you understand how much of your source is tested. It's a very useful metric that can help you assess the quality of your test suite, and we will see here how you can get started with your projects. 

    How is code coverage calculated?

    Code coverage tools will use one or more criteria to determine how your code was exercised or not during the execution of your test suite. The common metrics that you might see mentioned in your coverage reports include:

    • Function coverage: how many of the functions defined have been called.
    • Statement coverage: how many of the statements in the program have been executed.
    • Branches coverage: how many of the branches of the control structures (if statements for instance) have been executed.
    • Condition coverage: how many of the boolean sub-expressions have been tested for a true and a false value.
    • Line coverage: how many lines of source code have been tested.

    These metrics are usually represented as the number of items actually tested, the items found in your code, and a coverage percentage (items tested / items found).

    These metrics are related, but distinct. In the trivial script below, we have a Javascript function checking whether or not an argument is a multiple of 10. We'll use that function later to check whether or not 100 is a multiple of 10. It'll help understand the difference between the function coverage and branch coverage.

    function isMultipleOf10(x) {
      if (x % 10 == 0)
        return true;
      else
        return false;
    }

    console.log(isMultipleOf10(100));

    We can use the coverage tool istanbul to see how much of our code is executed when we run this script. After running the coverage tool we get a coverage report showing our coverage metrics. We can see that while our Function Coverage is 100%, our Branch Coverage is only 50%. We can also see that the istanbul code coverage tool isn't calculating a Condition Coverage metric.


    This is because when we run our script, the else branch has not been executed. If we wanted to get 100% coverage, we could simply add another line, essentially another test, to make sure that all branches of the if statement are used:

    function isMultipleOf10(x) {
      if (x % 10 == 0)
        return true;
      else
        return false;
    }

    console.log(isMultipleOf10(100));
    console.log(isMultipleOf10(34)); // This will make our code execute the "return false;" statement.

    A second run of our coverage tool will now show that 100% of the source is covered thanks to our two console.log() statements at the bottom.

    In this example, we were just logging results in the terminal, but the same principle applies when you run your test suite. Your code coverage tool will monitor the execution of your test suite and tell you how many of the statements, branches, functions and lines were run as part of your tests.

    Find the right tool for your project

    You might find several options to create coverage reports depending on the language(s) you use; for example, istanbul (JavaScript), JaCoCo (Java), and Coverage.py (Python) are popular choices.

    What percentage of coverage should you aim for?

    There's no silver bullet in code coverage, and a high percentage of coverage could still be problematic if critical parts of the application are not being tested, or if the existing tests are not robust enough to properly capture failures upfront. With that being said, it is generally accepted that 80% coverage is a good goal to aim for. Trying to reach a higher coverage might turn out to be costly, while not necessarily producing enough benefit.

    Use coverage reports to identify critical misses in testing

    Soon you'll have so many tests in your code that it will be impossible for you to know what part of the application is checked during the execution of your test suite. You'll know what breaks when you get a red build, but it'll be hard for you to understand what components have passed the tests.

    This is where the coverage reports can provide actionable guidance for your team. Most tools will allow you to dig into the coverage reports to see the actual items that weren't covered by tests and then use that to identify critical parts of your application that still need to be tested.


    Good coverage does not equal good tests

    Getting a great testing culture starts by getting your team to understand how the application is supposed to behave when someone uses it properly, but also when someone tries to break it. Code coverage tools can help you understand where you should focus your attention next, but they won't tell you if your existing tests are robust enough for unexpected behaviors.

    Achieving great coverage is an excellent goal, but it should be paired with having a robust test suite that can ensure that individual classes are not broken as well as verify the integrity of the system.

    Difference Between Black Box And White Box Testing

    S.No | Black Box Testing | White Box Testing
    1 | The main objective of this testing is to test the functionality/behavior of the application. | The main objective is to test the infrastructure of the application.
    2 | This can be performed by a tester without any coding knowledge of the AUT (Application Under Test). | The tester should have knowledge of the internal structure and how it works.
    3 | Testing can be performed only using the GUI. | Testing can be done at an early stage, before the GUI gets ready.
    4 | This testing cannot cover all possible inputs. | This testing is more thorough, as it can test each path.
    5 | Some test techniques include Boundary Value Analysis, Equivalence Partitioning, Error Guessing, etc. | Some testing techniques include Conditional Testing, Data Flow Testing, Loop Testing, etc.
    6 | Test cases should be written based on the Requirement Specification. | Test cases should be written based on the Detailed Design Document.
    7 | Test cases will have more details about input conditions, test steps, expected results and test data. | Test cases will be simple, with details of technical concepts like statements, code coverage, etc.
    8 | This is performed by professional software testers. | This is the responsibility of the software developers.
    9 | Programming and implementation knowledge is not required. | Programming and implementation knowledge is required.
    10 | Mainly used in higher levels of testing like Acceptance Testing and System Testing. | Mainly used in lower levels of testing like Unit Testing and Integration Testing.
    11 | This is less time consuming and exhaustive. | This is more time consuming and exhaustive.
    12 | Test data will have wide possibilities, so it will be tough to identify the correct data. | It is easy to identify the test data, as only a specific part of the functionality is focused on at a time.
    13 | The main focus of the tester is on how the application is working. | The main focus is on how the application is built.
    14 | Test coverage is less, as it cannot create test data for all scenarios. | Almost all paths/application flows are covered, as it is easy to test in parts.
    15 | Code-related or technical errors cannot be identified. | Helps to identify hidden errors and helps in optimizing code.
    16 | Defects are identified once the basic code is developed. | Early defect detection is possible.
    17 | The user should be able to identify any missing functionality, as the scope of this testing is wide. | The tester cannot identify missing functionality, as the scope is limited to the implemented feature.
    18 | Code access is not required. | Code access is required.
    19 | Test coverage will be less, as the tester has limited knowledge of the technical aspects. | Test coverage will be more, as the testers have more knowledge of the technical concepts.
    20 | The professional tester's focus is on how the entire application is working. | The tester/developer's focus is on checking whether a particular path is working or not.

    Gray box testing:

    It is a newer term, which evolved due to the varied behavior of systems.
    This is a combination of both black box and white box testing. The tester should have knowledge of both the internals and externals of the function.
    Even though you probably don't have full knowledge of the internals of the product you test, a test strategy based partly on internals is a powerful idea. We call this gray box testing.

    The concept is simple: if you know something about how the product works on the inside, you can test it better from the outside. This is not to be confused with white box testing, which attempts to cover the internals of the product in detail. In gray box mode, you are testing from the outside of the product, just as you do with the black box, but your testing choices are informed by your knowledge of how the underlying components operate and interact.

    Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep. Hung Nguyen's Testing Applications on the Web (2000) is a good example of gray box test strategy applied to the Web.

    =============================================

    Test Case Design:

    What is a Test Case?

    A test case describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly.

                               (Or)

    Test case is a description of what to be tested, what data to be given and what actions to be done to check the actual result against the expected result.

     What are the items of Test Case?

    Test case items are:

    ·         Test Case Number

    ·         Test Case Name

    ·         FRS number

    ·         Test Data

    ·         Pre-condition

    ·         Description(Steps)

    ·         Expected Result

    ·         Actual Result

    ·         Status (Pass/Fail)

    ·         Remarks

    ·         Defect id

    Can test cases be reused?

    Yes, test cases can be reused.

    Test cases developed for functionality/performance testing can be reused for unit/integration/system/regression testing and for performance testing with few modifications.

    What are the characteristics of good test case?

    A good test case should have the following:

    TC should start with “what you are testing”.

    TC should be independent.

    TC should catch the bugs.

    TC should be uniform.

    e.g. <Action Buttons> “Links”…

     Are there any issues to be considered?

    Yes, there are a few issues:

    All the TC’s should be traceable.

    There should not be too many duplicate test cases.

    Outdated test cases should be cleared out.

    All the test cases should be executable.

     Requirement Document Sample:

    http://web.cse.ohio-state.edu/~bair.41/616/Project/Example_Document/Req_Doc_Example.html


    TC ID | Pre-condition | Description (steps) | Expected Result | Actual Result | Status | Remarks
    Unique test case number | Condition to be satisfied | 1. What is to be tested  2. What data is to be provided  3. What action is to be done | As per FRS | System response | Pass or Fail | If any
    Yahoo-001 | Yahoo web page should be displayed | 1. Check inbox is displayed  2. Enter User ID/PW  3. Click on Submit | System should display the mail box | System response |  |

    OpenCart Application TestCases:

    Usability Test cases :

            i.            These are general examples applicable to all kinds of applications; the test case titles below are for an ATM withdrawal operation.

    o   Test case title 1: Verify card insertion.

    o   Test case title 2: Verify operation with wrong angle of card insertion.

    o   Test case title 3: Verify operation with invalid card insertion (e.g. scratches on card, broken card, other bank cards, expired cards, etc.).

    o   Test case title 4: Verify language selection.

    o   Test case title 5: Verify PIN entry.

    o   Test case title 6: Verify operation with wrong PIN.

    o   Test case title 7: Verify operation when the PIN is entered wrongly 3 consecutive times.

    o   Test case title 8: Verify account type selection.

    o   Test case title 9: Verify operation when the wrong account type is selected.

    o   Test case title 10: Verify withdrawal option selection.

    o   Test case title 11: Verify amount entry.

    o   Test case title 12: Verify operation with wrong denominations of amount.

    o   Test case title 13: Verify withdrawal operation success (correct amount, correct receipt & able to take the card back).

    o   Test case title 14: Verify operation when the amount for withdrawal is less than possible.

    o   Test case title 15: Verify operation when the network has a problem.

    o   Test case title 16: Verify operation when the ATM lacks cash.

    o   Test case title 17: Verify operation when the amount for withdrawal is less than the minimum amount.

    o   Test case title 18: Verify operation when the current transaction exceeds the number-of-transactions limit per day.

    o   Test case title 19: Verify cancel after insertion of card.

    o   Test case title 20: Verify cancel after language selection.

    o   Test case title 21: Verify cancel after PIN entry.

    o   Test case title 22: Verify cancel after account type selection.

    o   Test case title 23: Verify cancel after withdrawal option selection.

    o   Test case title 24: Verify cancel after amount entry (last operation).

    o    Test case title 1: verify if all the windows are closed when shutting down.
    o    Test case title 2: verify shutdown option selection using Alt+F4.
    o    Test case title 3: verify shutdown option selection using run command.
    o    Test case title 4: verify shutdown operation.
    o    Test case title 5: verify shutdown operation when a process is running.
    o   Test case title 6: verify shutdown operation using power off button.

    --------------------------------------------------------------------------------

          ii.            Prepare test case titles for washing machine operation.

    o   Test case title 1: Verify Power supply

    o   Test case title 2: Verify door open

    o   Test case title 3: verify water filling with detergent

    o   Test case title 4: verify cloths filling

    o   Test case title 5: verify door close

    o   Test case title 6: verify door close with cloths overflow.

    o   Test case title 7: verify washing settings selection

    o   Test case title 8: verify washing operation

    o   Test case title 9: verify washing operation with lack of water.

    o   Test case title10: verify washing operation with cloths overload

    o   Test case title11: verify washing operation with improper settings

    o   Test case title12: verify washing operation with machinery problem.

    o   Test case title13: verify washing operation due to water leakage through door.

    o   Test case title 14: Verify washing operation due to door open in the middle of the process.

    o   Test case title15: verify washing operation with improper power.

     

    Requirements formats: use cases and user stories

    Since we have to make functional and nonfunctional requirements understandable for all stakeholders, we must capture them in an easy-to-read format. The two most typical formats are use cases and user stories.

    Use cases

    Use cases describe the interaction between the system and external users that leads to achieving particular goals.

    Each use case includes three main elements:

    Actors. These are the external users that interact with the system.

    System. The system is described by functional requirements that define the intended behavior of the product.

    Goals. The purposes of the interaction between the users and the system are outlined as goals.

    There are two ways to represent use cases: a use case specification and a use case diagram.

    A use case specification represents the sequence of events and other information related to the use case. A typical use case specification template includes the following information:

    • Description, 
    • Pre- and Post- interaction condition,
    • Basic interaction path,
    • Alternative path, and
    • Exception path.






    A use case diagram doesn't contain a lot of details. It shows a high-level overview of the relationships between actors, different use cases, and the system.

    The use case diagram includes the following main elements.

    • Use cases. Usually drawn with ovals, use cases represent different interaction scenarios that actors might have with the system (log in, make a purchase, view items, etc.).
    • System boundaries.  Boundaries are outlined by the box that groups various use cases in a system.
    • Actors. These are the figures that depict external users (people or systems) that interact with the system.
    • Associations. Associations are drawn with lines showing different types of relationships between actors and use cases.

    User stories vs epics vs tasks





    User stories

    A user story is a documented description of a software functionality seen from the end-user perspective. The user story describes what exactly the user wants the system to do. In Agile projects, user stories are organized in a backlog. Currently, user stories are considered the best format for backlog items.

    A typical user story looks like this:

    As a <type of user>, I want <some goal> so that <some reason>.

    Example:

    As an admin, I want to add product descriptions so that users can later view these descriptions and compare the products.

    User stories must be accompanied by acceptance criteria. These are the conditions the product must satisfy to be accepted by a user, stakeholders, or a product owner.

    Each user story must have at least one acceptance criterion. Effective acceptance criteria must be testable, concise, and completely understood by all team members and stakeholders. We can write them as checklists, in plain text, or using the Given/When/Then format.

    Here’s an example of the acceptance criteria checklist for a user story describing a search feature.

    • A search field is available on the top bar.
    • A search starts when the user clicks Submit.
    • The default placeholder is a grey text Type the name.
    • The placeholder disappears when the user starts typing.
    • The search language is English.
    • The user can type no more than 200 symbols.
    • It doesn’t support special symbols. If the user has typed a special symbol in the search input, it displays the warning message: Search input cannot contain special symbols.
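    The same rule can also be expressed in the Given/When/Then format. A rough sketch for the special-symbols item above:

    Given the user is on a page with the search field
    When the user types a special symbol into the search input
    Then the warning message "Search input cannot contain special symbols" is displayed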

    Finally, all user stories must fit the INVEST quality model:

    •         I – Independent
    •         N – Negotiable
    •         V – Valuable
    •         E – Estimable
    •         S – Small
    •         T – Testable

    Independent. You can schedule and implement each user story separately. It’s very helpful if you employ continuous integration processes.

    Negotiable. All parties agree to prioritize negotiations over specification. Details will be created constantly during development.

    Valuable. A story must be valuable to the customer. You should ask yourself from the customer’s perspective “why” you need to implement a given feature.

    Estimable. A quality user story can be estimated. It will help a team schedule and prioritize the implementation. The bigger the story is, the harder it is to estimate it.

    Small. Good user stories tend to be small enough to plan for short production releases. Small stories allow for more specific estimates.

    Testable. If we can test a story, it’s clear and good enough. Tested stories mean that requirements are done and ready for use.




    Wireframes. Wireframes are low-fidelity graphic structures of a website or an app. They help map different product pages with sections and interactive elements.


    Mockups. Wireframes can be turned into mockups – visual designs that convey the look and feel of the final product. Eventually, mockups can become the final design of the product.


    Prototypes. The next stage is a product prototype that allows teams and stakeholders to understand what’s missing or how the product may be improved. Often, after interacting with prototypes, the existing list of requirements is adjusted.


    Test Case Execution

    Execution and execution results play a vital role in testing. Each and every activity should have proof.

     The following activities should be taken care:

    1.   Number of test cases executed.

    2.   Number of defects found.

    3.   Screenshots of successful and failed executions should be captured in a Word document.

    4.   Time taken to execute.

    5.   Time wasted due to the unavailability of the system.

    Test Case Execution Process:

     

    Take the Test Case document
        ↓
    Check the availability of the application
        ↓
    Implementation of Test Cases

    Inputs:

     -Test Cases

     -System Availability

     -Data Availability 

     Process:

      -Test it.

     Output:

      - Raise the defects

      - Take a screenshot & save it.




    TRACEABILITY MATRIX:

        Definition: The mapping between requirements and test cases is called a Traceability Matrix.
    How do you know whether all the requirements are covered in your test case document?

    Ans: Through Traceability Matrix document

    Why is traceability important? Consider these examples:
    • The requirements for a given function or feature have changed. Some of the fields now have different ranges that can be entered. Which tests were looking at those boundaries? They now need to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if the requirements can easily be traced to the tests.
    • A set of tests that has run OK in the past has started to have serious problems. What functionality do these tests actually exercise? Traceability between the tests and the requirement being tested enables the functions or features affected to be identified more easily.

    • Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed - was every requirement tested? 
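    For illustration, a minimal traceability matrix might look like this (the requirement and test case IDs are hypothetical):

    Requirement ID | Requirement Description | Test Case IDs
    REQ-001 | User can log in with valid credentials | TC-001, TC-002
    REQ-002 | Password must contain 6-12 characters | TC-003, TC-004
    REQ-003 | User can reset a forgotten password | TC-005

    If REQ-002 changes, the matrix immediately shows that TC-003 and TC-004 need to be reviewed.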
    =================================================
    Test Scenario vs Test Case

    Test Scenario | Test Case
    A test scenario is a high-level description of a test. | A test case is a detailed set of instructions for a test.
    It outlines what needs to be tested. | It specifies how to execute the test.
    Test scenarios are created based on requirements and user stories. | Test cases are derived from test scenarios or directly from the software's requirements.
    They provide a broader view of what is to be tested. | They provide specific steps and data to verify a particular functionality.
    Test scenarios may encompass multiple test cases. | Test cases are individual steps within a test scenario.
    Test scenarios focus on business needs and end-to-end scenarios. | Test cases focus on individual testable aspects or conditions.
    They are less detailed and may not have specific test data. | They include specific test data and expected results.
    Test scenarios serve as a basis for creating test cases. | Test cases are executed during the testing process.
    Their purpose is to ensure that all major functionalities are covered. | Their purpose is to validate the correctness and reliability of the software.

    Defect Handling

     What is Defect?

    In computer technology, a Defect is a flaw or imperfection in a software work product.

    (or)

    When the expected result does not match the application's actual result, it is termed a defect.

    The defect that causes harm is the defect which might result in injuries, health issues, or death.

    - A usability defect that results in user dissatisfaction: this will not harm the user, but it will make them dissatisfied.

    - A defect that causes slow response time when running reports: this will not harm the user unless the reports are used in a critical place like a hospital operations room.

    - A defect that causes raw sewage to be dumped into the ocean: this will harm users, because polluting the ocean directly harms human health.

    - A regression defect that causes the desktop window to display in green: this will not harm the user, though it might annoy them if they don't love the color green.


    It's your salary day and your salary is credited by your company, but it is credited to your colleague's account, not to you. (Bug in a banking system)

    🤯You ordered pizza and coke for your birthday party for 5 friends and are waiting for the delivery, but you get only the coke, not the pizza.
    (The final order shows only the coke to the pizza shop, without the pizza in the list)

    😠You have subscribed to an OTT platform to watch an exciting cricket match, but you see a system error instead of the live match.

    We work to prevent and report all these issues before they happen to you in real life... We are software testers, who work to ensure quality by preventing bugs🐞 in applications.



     A latent defect is one which has been in the system for a long time but is discovered only now; i.e. a defect which has been there for a long time and should have been detected earlier is known as a latent defect. One of the reasons why latent defects exist is that the exact set of conditions has never been met.

    - A latent bug is an existing uncovered or unidentified bug that has been in a system for a period of time. The bug may persist through one or more versions of the software and might be identified after its release.

    - The problems will not cause the damage currently, but wait to reveal themselves at a later time.

    - The defect is likely to be present in various versions of the software and may be detected after the release.

    E.g. February has 28 days. The system might not have considered the leap year, which results in a latent defect.

    - These defects do not cause damage to the system immediately, but wait for a particular event at some later time to cause damage and show their presence.

     Masked defect: a defect that hides another defect, which is therefore not detected at a given point of time. That is, an existing defect prevents another defect from being reproduced.

     Masked defect hides other defects in the system.

     E.g. There is a link to add employee in the system. On clicking this link you can also add a task for the employee. Let’s assume, both the functionalities have bugs. However, the first bug (Add an employee) goes unnoticed. Because of this the bug in the add task is masked.

     E.g. Failing to test a subsystem, might also cause not testing other parts of it which might have defects but remain unidentified as the subsystem was not tested due to its own defects

     Defect leakage:

    The defects which we are unable to find during system testing or UAT, and which then occur in production, are called defect leakage.

     It is calculated as: (Number of defects slipped) / (Number of defects received - Number of defects withdrawn) * 100.

     Example: in production the customer raises 21 issues; during your tests, 267 issues were reported, but 17 of them were invalid defects (e.g. because of wrong tests, a mistake by the tester, an error in the test environment...).

    Then your Defect Leakage Ratio would be:

    [21 / (267 - 17)] x 100 = 8.4%

    What is Defect Density?

    Defect Density is the number of defects confirmed in software/module during a specific period of operation or development divided by the size of the software/module. It enables one to decide if a piece of software is ready to be released.

    Defect density is counted per thousand lines of code also known as KLOC.

    Formula to measure Defect Density:

    • Defect Density = Defect count/ size of the release

    Size of release can be measured in terms of lines of code (LoC).
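    As a minimal sketch of the calculation (the numbers are made up for illustration):

    // Defect density = defect count / size of the release (in KLOC).
    function defectDensity(defectCount, kloc) {
      return defectCount / kloc;
    }

    console.log(defectDensity(30, 15)); // 2 defects per KLOC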

    What is a Defect Age?

    The time gap between the date of detection and the date of closure of a defect is termed the Defect Age.

    What is the Showstopper Defect?

    A defect that does not permit testing to continue further is called a showstopper defect.

     Who can report a Defect?

    Anyone who is involved in the software development life cycle, or who is using the software, can report a defect. In most cases, the testing team reports defects.

     A short list of people expected to report bugs:

     Testers/QA Engineers

    Developers

    Technical Support

    End Users

    Sales and Marketing Engineers

    Root Cause: A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.

    Root Cause Analysis: An analysis technique aimed at identifying the root causes of defects. By directing corrective measures of root causes, it is hoped that the likelihood of defect occurrence will be minimized.

    Root cause analysis can lead to process improvements that prevent a significant number of future defects from being introduced.

    For example, suppose incorrect interest payments, due to a single line of incorrect code, result in customer complaints. The defective code was written for a user story which was ambiguous, due to the product owner’s misunderstanding of how to calculate interest. If a large percentage of defects exist in interest calculations, and these defects have their root cause in similar misunderstandings, the product owners could be trained in the topic of interest calculations to reduce such defects in the future.

    Difference Between Bug And Defect

    Aspect | Bug | Defect
    Terminology | Informal term | Formal term
    Usage | Widely used by developers and users to refer to software issues | Primarily used in the context of software testing and quality assurance
    Origin | Arises from various sources, such as coding errors, design flaws, or external factors | Arises due to discrepancies between the actual behavior and specified requirements or design
    Context | Used in general discussions and conversations about software problems | Used specifically during the testing phase to identify deviations from expected behavior
    Severity | Can range from minor glitches to critical errors affecting functionality | Usually associated with the failure to meet specified requirements and considered a deviation from expected behavior
    Importance | Addressed during development and maintenance phases | Identified and documented during testing and resolved before release
    Resolution | Can be fixed without being documented as a formal issue | Typically documented in a defect tracking system and must be resolved and validated
    Formality | Often resolved informally or through informal communication | Requires a formal process to track, manage, and resolve

    Severity and Priority - What is the Difference?

    Both Severity and Priority are attributes of a defect and should be provided in the bug report. This information is used to determine how quickly a bug should be fixed.

    Severity vs. Priority

    Severity of a defect is related to how severe a bug is. Usually the severity is defined in terms of financial loss, damage to environment, company’s reputation and loss of life.

    Priority of a defect is related to how quickly a bug should be fixed and deployed to live servers. When a defect is of high severity, most likely it will also have a high priority. Likewise, a low severity defect will normally have a low priority as well.

    Although it is recommended to provide both Severity and Priority when submitting a defect report, many companies will use just one, normally priority.

    In the bug report, Severity and Priority are normally filled in by the person writing the bug report, but should be reviewed by the whole team.

    High Severity - High Priority bug

    This is when a major path through the application is broken; for example, on an eCommerce website every customer gets an error message on the booking form and cannot place orders, or the product page throws an Error 500 response.

    High Severity - Low Priority bug

    This happens when the bug causes major problems, but it only happens in very rare conditions or situations, for example, customers who use very old browsers cannot continue with their purchase of a product. Because the number of customers with very old browsers is very low, it is not a high priority to fix the issue.

    High Priority - Low Severity bug

    This could happen when, for example, the logo or name of the company is not displayed on the website. It is important to fix the issue as soon as possible, although it may not cause a lot of damage.

    Low Priority - Low Severity bug

    For cases where the bug doesn't cause a disaster and only affects a very small number of customers, both Severity and Priority are assigned low; for example, the privacy policy page takes a long time to load. Not many people view the privacy policy page, and the slow loading doesn't affect customers much.

    The above are just examples. It is the team who should decide the Severity and Priority for each bug.

    Defect Tracking Tools:

     Bug Tracker—BSL proprietary Tools

    Rational Clear Quest

    Bugzilla

    JIRA

    Quality Center/ALM

    Rally

    JIRA Tutorial: A Complete Guide for Beginners

    What is JIRA?

    Jira Software is built for every member of your software team to plan,
    track, and release great software.

    JIRA is a tool developed by Australian Company Atlassian. It is used for bug tracking, issue tracking, and project management. The name "JIRA" is actually inherited from the Japanese word "Gojira" which means "Godzilla".

    The basic use of this tool is to track issues and bugs related to your software and mobile apps. It is also used for project management. The JIRA dashboard consists of many useful functions and features which make the handling of issues easy.

    Under the Issues option, click on Search for issues; this opens a window from which you can locate your issues and perform multiple functions.




    JIRA WORKFLOW

    (Workflow diagram not reproduced here. A typical default Jira workflow moves an issue through Open → In Progress → Resolved → Closed, with a Reopened state when a fix fails verification.)
    -----------------------------------------------------------------------------------------------------

    ALM - Application Lifecycle Management tool. It is owned by Micro Focus.

    It is used to track tasks, track defects, and manage projects. This is a commercial tool.

    Tabs in ALM:

    1)Dashboard

    2)Management

    3)Requirements

    4)Testing 

    -> Test Plan -- here we will create test cases based on the requirements

    -> Test Lab --> here we will execute the test cases

    5) Defects --> used to create and track defects

    BUG LIFE CYCLE:

    Defect Life Cycle helps in handling defects efficiently. The DLC helps users know the current status of a defect.

    (Life cycle diagram not reproduced here. A common defect life cycle is New → Assigned → Open → Fixed → Retest → Verified → Closed, with Reopened, Deferred, and Rejected as alternative states.)
    ---------------------------------------------------------------------------------

    ===================================================================

    Types of Software Testing

    Software testing is generally classified into two broad categories: functional testing and non-functional testing. There is also a third general type, maintenance testing.

    Manual vs. automated testing

    At a high level, we need to make the distinction between manual and automated tests. Manual testing is done in person, by clicking through the application or interacting with the software and APIs with the appropriate tooling. This is very expensive as it requires someone to set up an environment and execute the tests themselves, and it can be prone to human error as the tester might make typos or omit steps in the test script.

    Automated tests, on the other hand, are performed by a machine that executes a test script that has been written in advance. These tests can vary a lot in complexity, from checking a single method in a class to making sure that performing a sequence of complex actions in the UI leads to the same results. Automated testing is much more robust and reliable than manual testing - but the quality of your automated tests depends on how well your test scripts have been written.

    Automated testing is a key component of continuous integration and continuous delivery, and it's a great way to scale your QA process as you add new features to your application. But there's still value in doing some manual testing with what is called exploratory testing.

    1. Functional Testing

    Functional testing involves the testing of the functional aspects of a software application. When you’re performing functional tests, you have to test each and every functionality. You need to see whether you’re getting the desired results or not.

    There are several types of functional testing, such as:

    • Unit testing
    • Integration testing
    • End-to-end testing
    • Smoke testing
    • Sanity testing
    • Regression testing
    • Acceptance testing
    • White box testing
    • Black box testing
    • Interface testing

    Functional tests are performed both manually and using automation tools. For this kind of testing, manual testing is easy, but you should use tools when necessary.

    Some tools that you can use for functional testing are Micro Focus UFT (previously known as QTP; UFT stands for Unified Functional Testing), Selenium, JUnit, soapUI, Watir, etc.

    1.Unit Testing

    Unit testing ensures that each part of the code developed in a component delivers the desired output. In unit testing, developers only look at the interface and the specification for a component. It provides documentation of code development as each unit of the code is thoroughly tested standalone before progressing to another unit.

    Unit tests support functional tests by exercising the code that is most likely to break. If you use functional tests without unit tests, you may experience several smells:

    • It’s hard to diagnose failed tests
    • Test fixtures work around known issues rather than diagnosing and fixing them
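
    As a minimal sketch, a unit test for the isMultipleOf10 function from the coverage example above could use Node's built-in assert module; using assert rather than a full test framework is just a simplification for illustration.

    // Minimal unit test using Node's built-in assert module (illustrative only).
    const assert = require('assert');

    function isMultipleOf10(x) {
      return x % 10 === 0;
    }

    assert.strictEqual(isMultipleOf10(100), true); // multiple of 10
    assert.strictEqual(isMultipleOf10(34), false); // not a multiple of 10
    assert.strictEqual(isMultipleOf10(0), true);   // boundary case: zero
    console.log('All unit tests passed.');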

    2.Component Testing

    Testing a module or component independently to verify its expected output is called component testing. Generally, component testing is done to verify the functionality and/or usability of a component, but is not restricted to only these. A component can be anything that takes input(s) and delivers some output: for example, a module of code, a web page, a screen, or even a system inside a bigger system.


    For example, let's see what we can test in a login component separately:

    • Testing the UI part for usability and accessibility
    • Testing the Page loading to ensure performance
    • Trying SQL injection through the UI components to ensure security
    • Testing the login functionality with valid and invalid user credentials

    3.Integration Testing

    Integration testing is performed to test individual components and check how they function together. In other words, it is performed to verify that modules which work fine individually do not show bugs when integrated. It is one of the most common functional testing types and is often performed as automated testing.

    Generally, developers build different modules of the system/software simultaneously and don't focus on others. They perform extensive black and white box functional verification, commonly known as unit tests, on the individual modules. Integration tests cause data and operational commands to flow between modules, which means that the modules have to act as parts of a whole system rather than individual components. This typically uncovers issues with UI operations, data formats, operation timing, API calls, and database access. Let's take the example of search functionality in an e-commerce site, where results are shown based on the text entered by users. The complete search function works when developers build the following four modules.

    Module #1: This is the search box visible to users where they can enter text and click the search button.

    Module #2: It’s a converter or in simple terms program which converts entered text into XML.

    Module #3: This is called Engine module which sends XML data to the database.

    Module #4: Database


    In our scenario, the data entered in the search box (module #1) gets converted into XML by module #2. The Engine module (module #3) reads the resultant XML file generated by module #2, extracts the SQL from it, and queries the database. The Engine module also receives the result set, converts it into an XML file, and returns it to the UI module, which converts the results into user-readable form and displays them.

    So where does integration testing come into the picture?

    Well, testing whether the information/data is flowing correctly or not will be your integration testing, which in this case would be validating the XML files. Are the XML files generated correctly? Do they have the correct data? Has the data been transferred correctly from one module to another? All these things will be tested as part of Integration testing.

    Checking data transfer between two components is called Interface Testing. It is a part of integration testing.

    Interface testing includes testing of interfaces such as web services, APIs, and connection strings that connect two components in the application. These interfaces don't have a UI but take an input and deliver an output (do not confuse this with unit testing).

    Interface testing is done to check that the different components of the application/ system being developed are in sync with each other or not. In technical terms, interface testing helps determine that different functions like data transfer between the different elements in the system are happening according to the way they were designed to happen.



    Let's see how to test Interface 2 in the above example, considering that the interface takes an XML file as input from Component 4 and delivers a JSON file as output with a response message from the payment service provider. To test this interface we do not need to worry about the functionality of Component 4. All we need is the specification of the XML file from Component 4 and the specification of the JSON output. With the help of these specifications, we can create sample input XML files and feed them into the interface. The interface will pass the input to the payment service provider and return an output JSON file. So in our example, validating the input file and the output file against the requirement is called interface testing.
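
    A rough JavaScript sketch of this idea, where paymentInterface is a hypothetical stub standing in for the real interface and all field names are assumptions:

    // Hypothetical stub of the payment interface: takes an XML string,
    // returns a JSON response (names and fields are illustrative only).
    function paymentInterface(xmlInput) {
      const ok = xmlInput.includes('<amount>');
      return {
        status: ok ? 'SUCCESS' : 'ERROR',
        message: ok ? 'Payment accepted' : 'Invalid request',
      };
    }

    // Interface test: validate the output against its specification,
    // without caring how the upstream component produced the input.
    const sampleXml = '<payment><amount>100</amount></payment>';
    const response = paymentInterface(sampleXml);

    console.log(response.status === 'SUCCESS' ? 'PASS' : 'FAIL');
    console.log(typeof response.message === 'string' ? 'PASS' : 'FAIL');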

    4.Smoke Testing

    Smoke testing is performed on the ‘new’ build given by developers to QA team to verify if the basic functionalities are working or not. It is one of the important functional testing types. This should be the first test to be done on any new build. In smoke testing, the test cases chosen cover the most important functionality or component of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionality of the system is working fine.



    If the build passes the smoke testing, it is considered a stable build. On the stable build, the QA team performs functional testing of the newly added features/functionality, and then performs regression testing depending upon the situation. But if the build is not stable, i.e. the smoke testing fails, then the build is rejected and sent back to the development team to fix the build issues and create a new build. Let's understand this better with an example.

    We’ve built an Employee portal application for our client. As we follow continuous testing we had to test each build right after its development. The client wanted us to build the portal which consists of features like leave application, leave reports, store employees’ data, etc.

    First, developers built a leave application feature and passed it to QA for testing. The QA team estimated that the entire build required 80-100 test cases for all the scenarios:

    • Login
    • Show total leaves count and types
    • Testing of the calendar while selecting the date
    • Select date
    • User should be able to fill the required information. i.e., a reason of the leave
    • After applying request sent to the manager for approval
    • Manager approves the leave
    • Employee gets notified
    • Leave gets deducted from the total count
    • Logout

    Here smoke testing comes into the picture. Instead of testing all the functionalities, they decided to test only the critical functionalities, which required only 20 test cases. These test cases covered the following scenarios:

    • Login
    • Select date
    • Fill other details
    • Request sent to the manager after clicking the button

    As you can see, we have taken only the main features for testing, the ones which were critical. For example, if an employee can't select the date then there's no need for further testing. This saves the developers' time in fixing bugs.

     5)Regression testing

    Whenever developers change or modify the functionality/feature, there’s a huge possibility that these updates may cause unexpected behaviors. Regression testing is performed to make sure that a change or addition hasn’t broken any of the existing functionality. Its purpose is to find bugs that may have been accidentally introduced into the existing build and to ensure that previously removed bugs continue to stay dead. There are many functional testing tools available which support regression testing.


    Let's understand it by continuing our example of the leave management system. Assume that developers have built a new feature (build 2) which shows a report of the employee's leave history. First, testers need to test this new feature by performing smoke testing with new test cases. Then, testers need to perform regression testing on build 2 (leave reports) to ensure that the code carried over from build 1 (leave application) still behaves correctly. The main principle here is reusing tests derived from build 1; the regression suite for build 2 would therefore include a subset of the build 1 tests.

    Regression testing can become a challenge for the testers as well. Here are some of the reasons:

    1. The number of test cases in the regression suite increases with each new feature.
    2. Sometimes, the execution of the entire regression test suite becomes difficult due to time and budget constraints.
    3. Minimizing the test suite while achieving maximum test coverage is not a cakewalk.
    4. Determining the frequency of regression tests - after every modification, every build update, or after a bunch of bug fixes - is always a challenge.

    6)Sanity Testing

    When a new build is received with minor modifications, instead of running a thorough regression test suite we perform a sanity test. It determines that the modifications have actually fixed the issues and that no further issues have been introduced by the fixes. Sanity testing is generally a subset of regression testing: a group of test cases is executed that relate to the changes made to the product. Many testers get confused between sanity testing and smoke testing; the basic difference is that smoke testing checks the critical functionality of a new build, while sanity testing checks that specific changes and fixes work as intended.


    Let's continue with the above example of the leave management system. Assume that developers have released build 2 with some other features. First we need to perform smoke testing and check whether the overall functionality is working fine; here we are assuming that build 2 has passed the smoke test. We know that we reported a bug for "date selection" in build 1 and that it has been fixed in build 2. In sanity testing we'll test only the "date selection" functionality and whether the fix affects other functionalities.

    7) End-to-end Testing

    End-to-end testing is the functional testing of the entire software system: the application is tested as a whole, following complete user workflows from start to finish. You typically need fewer end-to-end tests than integration tests, since each one exercises a broad slice of the system.

    Cucumber, Protractor, Jasmine, Karma, etc. are some great end-to-end testing tools.
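    As a rough sketch of what an automated end-to-end test of the leave flow could look like with Selenium (assuming selenium is installed; the URL, element IDs, and confirmation text are hypothetical placeholders):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://leave.example.com/login")            # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("employee")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-btn").click()
        driver.find_element(By.ID, "leave-date").send_keys("2024-01-15")
        driver.find_element(By.ID, "apply-btn").click()
        # Hypothetical confirmation text shown after the request reaches the manager:
        assert "Request sent to manager" in driver.page_source
    finally:
        driver.quit()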

    8. User Interface Testing

    User interface testing involves the testing of the application’s user interface. The aim of UI tests is to check whether the user interfaces have been developed according to what is described in the requirements specifications document.

    By running UI tests, you can make the application’s user interfaces more user-friendly and visually appealing.

    Some great automated user interface testing tools are Monkey test (for Android), Saucelabs, and Protractor.

    9. Accessibility testing

    The aim of accessibility testing is to determine whether the software or application is accessible to people with disabilities.

    Here, disability covers deafness, color blindness, cognitive disabilities, blindness, old age, and other disadvantaged groups. Various checks are performed, such as font size for the visually impaired and color and contrast for color blindness.

    10. Alpha testing

    Alpha testing is a kind of testing that looks for all the errors and issues in the entire software. It is done in the last phase of app development, at the developers’ site, before launching the product or delivering it to the client, to ensure that the user/client gets an error-free software application.

    Alpha testing is run before beta testing.

    Alpha testing is not performed in the real environment. Rather, it is done in a virtual environment created to resemble the real one.

    11. Beta testing

    As said earlier, beta testing takes place after alpha testing. Beta testing is done before the launch of the product. It is carried out in a real user environment by a limited number of actual customers or users, in order to be certain that the software is completely error-free and it functions smoothly. After collecting feedback and constructive criticism from those users, some changes are made to make the software better.

    So when the software is under beta testing, it is called beta version of the software. After this testing is complete, the software is released to the public.

    12. Ad-hoc testing

    As the name suggests, ad-hoc testing is a kind of testing that is performed in an ad-hoc manner, without using any test cases, plans, documentation, or systems. Unlike all other types of testing, this kind of testing is not carried out in a systematic manner.

    Although finding errors can be difficult without test cases, some technical issues are easily detected through an ad-hoc test yet hard to find through other, test-case-driven approaches.

    This informal type of software testing can be executed by any person involved with the project.

    13.Exploratory testing

    Exploratory testing is a testing exercise in which testers are assigned a loosely defined task to achieve using the software being tested. It can teach you a lot about the way people use your product in the wild. Exploratory test sessions can even motivate participants by offering rewards for the highest number of issues found, the best defect, or doing something unexpected with the product.

    One of the benefits of exploratory software testing is that anyone can join in to help test, because all they need to do is wander through the product in a free-form manner. Exploratory testing is not random, yet it isn't scripted like manual test cases, either.

    14. Black box testing

    Performed by the QA team of a company, black box testing is a testing technique that checks the application’s functionality without any technical knowledge of the application, such as the code’s logic, how the code works, or the internal structure.

    15. White box testing

    Performed by the development team, white box testing is a testing method that requires a good understanding of the application’s code. It requires great knowledge of the app’s internal logic.

    15-I) Mutation Testing

    Mutation testing is a type of white box testing in which the program’s source code is changed in a small way, and we verify whether the existing test cases can identify the introduced defect in the system.

    The change to the source code is minimal, so it does not impact the entire application; only a specific area is affected, and the related test cases should be able to catch the error (that is, "kill the mutant").
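    To make the idea concrete, here is a tiny self-contained illustration of the principle (real mutation tools such as mutmut or PIT generate the mutants automatically):

    def is_adult(age):
        return age >= 18          # original code

    def is_adult_mutant(age):
        return age > 18           # mutant: '>=' changed to '>'

    def test_boundary():
        # A strong test suite "kills" the mutant: this boundary assertion
        # passes for the original but would fail against the mutant version.
        assert is_adult(18) is True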

    16. Acceptance testing

    The client who will purchase your software performs acceptance testing (also known as User Acceptance Testing) to decide whether the software can be accepted, by checking whether it meets all the client’s requirements and preferences. If the software doesn’t meet all the requirements, or if the client doesn’t like something in the app, they may request changes before accepting the project.


    | Aspect | Unit testing | Integration testing | Functional testing |
    | Definition and purpose | Testing the smallest units or modules individually. | Testing the integration of two or more units/modules combined to perform tasks. | Testing the behavior of the application as per the requirement. |
    | Complexity | Not at all complex, as it covers the smallest pieces of code. | Slightly more complex than unit tests. | More complex compared to unit and integration tests. |
    | Testing techniques | White box testing. | White box, black box, and grey box testing. | Black box testing. |
    | Major attention | Individual modules or units. | Integration of modules or units. | Entire application functionality. |
    | Errors/issues covered | Unit tests find issues that occur frequently within modules. | Integration tests find issues that occur while integrating different modules. | Functional tests find issues that prevent an application from performing its functionality, including some scenario-based issues. |
    | Issue escape | No chance of issue escape. | Less chance of issue escape. | More chances of issue escape, as the list of tests to run is always infinite. |


    2. Non-functional Testing

    Non-functional testing is the testing of non-functional aspects of an application, such as performance, reliability, usability, security, and so on. Non-functional tests are performed after the functional tests.

    With non-functional testing, you can improve your software’s quality to a great extent. Functional tests also improve quality, but non-functional tests give you the opportunity to polish the software and make it even better. This kind of testing is not about whether the software works; rather, it’s about how well the software runs, among other things.

    Non-functional tests are generally not run manually. In fact, performing this kind of testing manually is difficult, so these tests are usually executed with tools.

    There are several types of non-functional testing, such as:

    • Performance testing
    • Security testing
    • Load testing
    • Failover testing
    • Compatibility testing
    • Usability testing
    • Scalability testing
    • Volume testing
    • Stress testing
    • Maintainability testing
    • Compliance testing
    • Efficiency testing
    • Reliability testing
    • Endurance testing
    • Disaster recovery testing
    • Localization testing
    • Internationalization testing



    Performance Testing Terminologies




    Performance testing is an essential part of software development and deployment processes. It involves evaluating the speed, responsiveness, stability, scalability, and resource usage of a system under varying conditions. Here are some key terms that we use during performance testing:

    Response Time

    Response time refers to the time taken by the system to respond to a user request or an event. It is a critical metric for evaluating the overall performance and user experience of an application.

    Throughput

    Throughput measures the number of transactions or operations a system can handle in a given time period. It helps assess the system’s processing capacity and efficiency under varying workloads.

    Latency

    Latency represents the time delay between sending a request and receiving a response. It is an important measure of the system’s responsiveness and can impact the overall user experience, especially in real-time applications.

    Concurrent Users

    Concurrent users refer to the number of users accessing the system simultaneously. Performance testing often involves simulating various levels of concurrent users to assess the system’s ability to handle concurrent requests without performance degradation.

    Virtual User (VU)

    Virtual users, also known as VUs, are software programs or scripts that simulate the actions and behavior of real users on an application. VUs are used in performance testing to simulate realistic user interactions and generate loads on the system.

    Transactions per second (TPS)

    Transactions per second (TPS) is the number of transactions sent by the users in one second. TPS is one of the key metrics of non-functional requirements and helps to set the expected load on the server. A larger unit is transactions per hour (TPH), which expresses the transaction rate on an hourly basis.
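    As a back-of-the-envelope sketch of how response time, throughput, and TPS relate, the snippet below times a stand-in function in place of a real request:

    import time

    def fake_request():
        time.sleep(0.01)           # stand-in for a real network call

    durations = []
    start = time.perf_counter()
    for _ in range(50):            # 50 "transactions"
        t0 = time.perf_counter()
        fake_request()
        durations.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    print(f"avg response time: {sum(durations) / len(durations):.4f} s")
    print(f"throughput (TPS):  {len(durations) / elapsed:.1f} transactions/s")
    print(f"TPH:               {len(durations) / elapsed * 3600:.0f} transactions/h")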

    Iteration

    An iteration is a group of transactions that denotes one end-to-end flow of a user action.

    Script

    A performance test script is program code that automates real-world user activities. Such a script is developed with a performance testing tool.

    Protocol

    The method of communication between a client and the server. Not every performance testing tool has a protocol option. The selection of protocol depends on the language/technology used by the application; an example is Web HTTP/HTML.

    Baselining

    Creating a baseline is the process of running a set of tests to capture performance metric data for the purpose of evaluating the effectiveness of subsequent performance-improving changes to the system or application.

    Benchmarking

    Benchmarking is the process of comparing your system’s performance against a baseline that you have created internally or against an industry standard endorsed by some other organization.

    Saturation

    Saturation is the point at which a resource reaches maximum utilization. For a server, this is the point where it cannot respond to any more requests.

    Thread Counts

    Thread count refers to the number of concurrent threads or virtual users that are simulated or created to generate load on an application or system. It is a parameter that determines the level of concurrency and workload during performance testing.

    Ramp-up

    During the ramp-up phase, the load on the system is gradually increased from a low level to a desired peak level. This gradual increase helps identify the system’s behavior as it handles an increasing number of concurrent users or transactions.

    Ramp-down

    Once the peak load has been sustained for a specific duration during the test, the system enters the ramp-down phase. In this phase, the load on the system is gradually reduced to zero, simulating a decrease in user activity or the end of a busy period.
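    Load tools usually expose ramp-up as a first-class setting. For instance, in Locust the spawn rate controls how quickly virtual users are added. A minimal sketch, assuming Locust is installed (pip install locust); the host and endpoint are placeholders:

    # Run with e.g.:  locust -f locustfile.py --host https://example.com -u 100 -r 10
    # where -u is the peak number of virtual users and -r the ramp-up
    # (virtual users spawned per second).
    from locust import HttpUser, task, between

    class LeaveSystemUser(HttpUser):
        wait_time = between(1, 3)    # think time between user actions

        @task
        def view_home(self):
            self.client.get("/")      # placeholder endpoint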

    1. Compatibility testing

    Compatibility testing involves compatibility checking of the software with different operating systems, web browsers, network environments, hardware, and so on. It checks whether the developed software application is working fine with different configurations.

    To give you a few examples, if the software is a Windows app, it should be checked whether it is compatible with different versions of the Windows operating system. If it’s a web application, it is tested whether the app is easily accessible from different versions of the widely-used web browsers. And if it’s an Android app, it should be checked whether it is working well with all the commonly used versions of the Android operating system.

    2. Backward compatibility testing

    Backward compatibility testing is carried out to test whether a brand new or updated version of an application is compatible with previous versions of the environments (such as operating systems and web browsers) on which the software runs. Sometimes an application is updated specifically to match the standard and style of a newer, more modern environment; in that case, support for backward compatibility is necessary.

    Backward compatibility testing ensures that all those who are using the older versions of a particular environment can use your software.

    3. Browser compatibility testing

    As the name says, browser compatibility testing checks a web application for browser compatibility. More specifically, it is tested whether the web app can easily be accessed from all versions of the major web browsers.

    It is a specific form of compatibility testing, while compatibility testing checks for general compatibility.

    Some popular tools to check browser compatibility include CrossBrowserTesting.com, LambdaTest, Browsershots, Experitest, Turbo Browser Sandbox, Ranorex Studio, Browsera, etc.
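    As a sketch, one common approach is to run the same check in several browsers through a Selenium Grid; the Grid URL below is an assumed local endpoint and example.com is a placeholder page:

    from selenium import webdriver

    GRID_URL = "http://localhost:4444/wd/hub"    # assumed local Grid endpoint

    for options in (webdriver.ChromeOptions(), webdriver.FirefoxOptions()):
        driver = webdriver.Remote(command_executor=GRID_URL, options=options)
        try:
            driver.get("https://example.com")
            # The same assertion runs against every browser in the loop.
            assert "Example Domain" in driver.title
        finally:
            driver.quit()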

    Load Testing:

    It is a type of non-functional testing, and the objective of load testing is to check how much load, or what maximum workload, a system can handle without any performance degradation.

    Load Testing helps to find the maximum capacity of the system under specific load and any issues that cause software performance degradation. Load testing is performed using tools like JMeter, LoadRunner, WebLoad, Silk performer, etc.
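    For example, with JMeter a saved test plan is typically executed for load runs in non-GUI mode (the file names here are placeholders): jmeter -n -t test_plan.jmx -l results.jtl, where -n selects non-GUI mode, -t names the test plan, and -l writes the results log.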

    Stress Testing

    Stress testing is done by stressing a system beyond its specifications in order to check how and when it fails.

    It is performed under heavy load, such as putting in data beyond storage capacity, running complex database queries, or feeding continuous input to the system or database.

    Difference Between Load And Stress Testing

    To summarize, let’s observe the major differences between load testing, stress testing as well as performance testing in the table below:

    | | Performance Testing | Load testing | Stress testing |
    | Domain | Superset of load and stress testing. | A subset of performance testing. | A subset of performance testing. |
    | Scope | Very wide scope: includes load testing, stress testing, capacity testing, volume testing, endurance testing, spike testing, scalability testing, reliability testing, etc. | Narrower scope than performance testing: includes volume testing and endurance testing. | Narrower scope than performance testing: includes soak testing and spike testing. |
    | Major goal | To set the benchmark and standards for the application. | To identify the upper limit of the system, set the SLA of the app, and see how the system handles heavy load volumes. | To identify how the system behaves under intense loads and how it recovers from failure; basically, to prepare the app for unexpected traffic spikes. |
    | Load limit | Both below and above the break threshold. | Up to the break threshold. | Above the break threshold. |
    | Attributes studied | Resource usage, reliability, scalability, response time, throughput, speed, etc. | Peak performance, server throughput, response time under various load levels (below the break threshold), adequacy of the H/W environment, number of users the app can handle, load-balancing requirements, etc. | Stability beyond bandwidth capacity, response time (above the break threshold), etc. |
    | Issues identified | All performance bugs, including runtime bloat, scope for optimization, and issues related to speed, latency, throughput, etc.; basically, anything related to performance. | Load-balancing problems, bandwidth issues, system capacity issues, poor response time, throughput issues, etc. | Security loopholes under overload, data corruption in overload situations, slowness, memory leaks, etc. |

    Volume testing is also a kind of performance testing; it focuses mainly on the database.

    In volume testing, we check how the system behaves against a certain volume of data. The databases are filled to their maximum capacity, and performance indicators such as response time and server throughput are monitored.

    | Volume testing | Load testing | Stress testing |
    | A huge amount of data | A huge number of users | Too many users and too much data, pushing the system toward a crash |

    6. Recovery testing

    Recovery testing checks whether the application can recover from crashes and how well it recovers, i.e., how well the software comes back to its normal flow of execution. Crashes can happen anytime, even in software of exceptional quality, and you don’t know when they may occur and annoy the users.

    So you have to implement mechanisms that will recover the software application quickly and that will make the application run smoothly again.

    7. Agile testing

    Agile testing is a testing practice that follows the rules and principles of agile software development. Unlike the waterfall method, agile testing can begin at the start of the project, with continuous integration between development and testing. Agile testing methodology is not sequential (in the sense of being executed only after the coding phase) but continuous.



    8. API testing

    Just like unit testing, API testing is a code-level testing type: it exercises the application’s interfaces (for example, its web service endpoints) directly, without going through the UI. The basic difference between unit testing and API testing is that unit testing is performed by the development team, whereas API testing is handled by the QA team.
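    A minimal sketch of an automated API check using the requests library; the endpoint and the response field are hypothetical placeholders:

    import requests

    def test_get_leave_balance():
        # Hypothetical endpoint of the leave management API.
        resp = requests.get("https://api.example.com/employees/42/leave-balance")
        assert resp.status_code == 200
        body = resp.json()
        assert body["remaining_days"] >= 0    # hypothetical response field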


    9. Security testing

    Security testing intends to uncover system vulnerabilities and determine how well the system can protect itself from unauthorized access, hacking, code damage, etc. When it deals with the application’s code, security testing takes the white box approach.

    The four main focus areas in security testing:

    1. Network security
    2. System software security
    3. Client-side application security
    4. Server-side application security

    It is highly recommended that security testing is included as part of the standard software development process.

    The top website security testing tools include Grabber, Arachni, Iron Wasp, Nogotofail, SQLMap, W3af, Wapiti, Wfuzz, Zed Attack Proxy, etc.
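    As one tiny illustration of the idea (dedicated tools such as SQLMap or Zed Attack Proxy do this far more thoroughly, and such probes must only be run against systems you are authorized to test), the sketch below sends a classic SQL-injection payload to a hypothetical login endpoint and checks that it is rejected:

    import requests

    payload = {"username": "admin' OR '1'='1", "password": "x"}
    resp = requests.post("https://app.example.com/login", data=payload)  # placeholder URL
    # A hardened application should reject the malformed credentials outright.
    assert resp.status_code in (400, 401, 403), "injection payload was not rejected"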

    10. Usability testing

    Testing the user-friendliness of an app is known as usability testing. It checks how usable or user-friendly the app is, and whether any user can easily use the software without getting stuck.

    One of the best ways to test the usability of your software is to invite a few people to use your software. See if they can do certain things in your app without taking any help from you.

    Take a look at these useful usability testing tools: Optimizely, Qualaroo, Crazy Egg, Usabilla, Clicktale, Five Second Test, Chalkmark.

    11. Scalability testing

    It is used to check whether the functionality and performance of a system are capable of meeting volume and size changes as per the requirements. In other words, it checks whether the app performs well when the number of users, the amount of data, or the number of transactions increases significantly. A software application that is not scalable may cause great business loss.

    12. Reliability testing

    Reliability testing is a type of software testing that verifies if the software is reliable or not. In other words, it checks whether the software runs error-free and that one can rely on it.

    For example, if a user’s important information stored in the database of the software gets suddenly deleted after a few months because of some error in the code, we can say that the software is not reliable.

    13. Monkey testing

    A type of testing that is performed randomly without any predefined set of test cases or test input. It is performed with the intent of breaking the system.

    14. Gorilla testing

    It involves testing an individual module or functionality of the application heavily in order to test its robustness.

    15. Configuration testing

    It is used to evaluate the configuration requirements of the software along with the effect of changing the required configuration.

    16. Localization testing

    It is one of the types of testing in which we evaluate a localized version of the application, i.e., its customization for a particular culture or locale.

    17. Globalization testing

    It is one of the types of testing in which the application is evaluated for proper functioning across the world, independent of geographical location or cultural environment.

    18. A/B testing

    It is one of the types of testing in which two variants of the software product are presented to end users to find which variant performs better in terms of user experience or a business goal, eventually keeping the better-performing variant.

    19. All-pairs testing

    It is a software testing type in which the application is tested so that every possible pair of input parameter values is covered by at least one test case. This catches most interaction defects with far fewer tests than exhaustively running all possible combinations.
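    A small sketch of the arithmetic behind this, with made-up parameter names and values: three parameters with three values each need 27 exhaustive combinations, while a well-chosen pairwise suite can cover every value pair in just 9 tests.

    from itertools import combinations, product

    params = {
        "browser": ["chrome", "firefox", "safari"],
        "os": ["windows", "macos", "linux"],
        "locale": ["en", "de", "fr"],
    }

    exhaustive = list(product(*params.values()))
    print("exhaustive combinations:", len(exhaustive))    # 27

    # The distinct value pairs that any pairwise suite must cover at least once:
    pairs = set()
    for (n1, v1s), (n2, v2s) in combinations(params.items(), 2):
        for v1 in v1s:
            for v2 in v2s:
                pairs.add(((n1, v1), (n2, v2)))
    print("value pairs to cover:", len(pairs))            # 27, coverable in 9 tests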

    20. Failover testing

    It is used to verify the application’s ability to allocate more resources (for example, more servers) in case of failure and to transfer processing to a backup system.

    21. Fuzz testing

    A type of software testing in which a large amount of random data is provided as input to the application in order to find security loopholes and other issues in the application.
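    A toy sketch of the principle (real fuzzing targets parsers, file formats, or network services with tools such as AFL or Atheris); parse_age is a deliberately simple stand-in for the code under test:

    import random
    import string

    def parse_age(text):
        # The (deliberately fragile) code under test.
        return int(text)

    for _ in range(1000):
        junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
        try:
            parse_age(junk)
        except ValueError:
            pass                    # expected, handled failure
        except Exception as exc:    # anything else is a finding worth reporting
            print(f"unexpected {type(exc).__name__} for input {junk!r}")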

    22. Fault injection testing

    It is a type of testing in which a fault is intentionally introduced into the application in order to improve test coverage, particularly of error-handling paths.
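    A minimal sketch using Python's unittest.mock to inject a fault into a dependency; fetch_profile and profile_or_default are hypothetical examples, not from the original text:

    from unittest import mock

    def fetch_profile(user_id):
        raise NotImplementedError("real network call in production")

    def profile_or_default(user_id):
        try:
            return fetch_profile(user_id)
        except ConnectionError:
            return {"name": "unknown"}     # graceful-degradation path under test

    def test_survives_network_fault():
        # Inject a ConnectionError to force the error-handling branch.
        with mock.patch(__name__ + ".fetch_profile", side_effect=ConnectionError):
            assert profile_or_default(42) == {"name": "unknown"}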

    23. Back-end Testing

    Whenever an input or data is entered on the front-end application, it is stored in the database and the testing of such database is known as Database Testing or Backend Testing.

    There are different databases like SQL Server, MySQL, and Oracle, etc. Database Testing involves testing of table structure, schema, stored procedure, data structure and so on.

    In back-end testing, the GUI is not involved; the testers connect directly to the database with proper access and can easily verify data by running queries on it.

    Issues like data loss, deadlocks, and data corruption can be identified during back-end testing, and these issues are critical to fix before the system goes live in the production environment.
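    A minimal sketch of the idea using Python's built-in sqlite3 as a stand-in for the real SQL Server/MySQL/Oracle database (the table name and values are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE leaves (emp_id INTEGER, days INTEGER)")
    conn.execute("INSERT INTO leaves VALUES (42, 3)")   # what the front end saved

    # The back-end tester bypasses the GUI and checks the stored data directly:
    row = conn.execute("SELECT days FROM leaves WHERE emp_id = 42").fetchone()
    assert row == (3,), "stored leave count does not match what was entered"
    conn.close()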

    24) Install/Uninstall Testing

    Installation and Uninstallation Testing is done on full, partial, or upgraded install/uninstall processes on different operating systems under different hardware or software environments.


    Quick difference between Performance Testing and Functional Testing

    | Sl No | Functional Testing | Performance Testing |
    | 1 | To verify the accuracy of the software with definite inputs against expected output | To verify the behavior of the system at various load conditions |
    | 2 | It can be manual or automated | It can be performed effectively only if automated |
    | 3 | One user performing all the operations | Several users performing the desired operations |
    | 4 | Involvement required from customer, tester, and developer | Involvement required from customer, tester, developer, DBA, and the N/W management team |
    | 5 | A production-sized test environment is not mandatory, and H/W requirements are minimal | Requires a close-to-production test environment and several H/W facilities to generate the load |

    Cloud Testing


    Some of the reasons to adopt cloud testing are listed below:

    • Cloud testing is comparatively less costly than other testing choices.
    • It is a testing alternative that assures virtually unlimited availability: as long as you have a working internet connection, you can access cloud resources on any device, at any place.
    • It supports agile workflows. Assets available on the cloud are persistently updated, and the process also facilitates continuous integration.
    • The process of cloud testing supports customization for different testing needs.
    • Unlike other testing alternatives, cloud performance testing is quite flexible and offers you the freedom to explore.

    • Cloud providers include AWS, Google Cloud, Oracle Cloud (OCI), and Azure.









