
Test and QA

EDBOK Guide
Body of Knowledge: Document Systems Development Lifecycle
Lifecycle Category: Test and QA
Content Contributor(s): Neil Merchant m-edp, Tim Ciceran edp
Original Publication: August 2014
Copyright: © 2014 by Xplor International
Content License: CC BY-NC-ND 4.0

What is Testing?

Quality Assurance (QA) defines activities conducted throughout a project to ensure the accuracy and completeness of the project execution. Examples of those activities include ensuring that:

  • All stakeholders are identified and engaged,
  • Requirements are completely collected and accurately documented,
  • Hardware, software or application design is appropriate, robust and compliant with regulations and internal standards,
  • Standard project management processes are followed, and
  • Disaster recovery requirements are included and addressed.

Testing is the subset of QA that relates to proving that the new or modified product (hardware, software, application) works as designed, is safe, is reliable, and is fit for purpose. Some organizations have a separate QA department or team. When this is the case, that team may conduct its own tests on the product in a phase known as QA Testing or Acceptance Testing.

Testing itself should also be subject to QA. QA for testing should verify that all processes and procedures have been followed, that all records and reports have been completed, and that every requirement has been tested and proven.

Purpose of Testing

Testing is required to ensure that all existing business processes and product interactions continue to function as required and that new ones work as designed. Changes in composition software, workflow software, transforms, print hardware, vision systems (e.g. bar-code readers), and even paper can subtly alter the environment. At a minimum, an overall testing protocol should confirm that:

  • business, legal and compliance requirements are accurately expressed in the product,
  • the product operates reliably,
  • special, exceptional and error conditions are managed,
  • applications run within the time available,
  • the product cohabits satisfactorily with the existing architecture portfolio,
  • there is enough IT capacity (processor, memory, disk, bandwidth) to handle the workload,
  • the print room can manage and process the printed output,
  • non-print delivery channels maintain all appropriate interfaces and formatting,
  • the operations group can run and manage the product reliably,
  • integrity checks work,
  • paper types and forms work reliably,
  • all finishing works as planned, and
  • mailstream insertion is not impacted.

Managing the Testing Process

Most organizations will have a standard development methodology that includes work objects (e.g. requirements definition templates, report templates), processes and procedures to lead and guide the project team through all the activities in the testing phase. If these work objects are not in place, this is a good time to assemble them and make them part of the testing protocol. A goal of the testing process is to ensure consistent planning and execution.

Test Strategy

A test strategy defines the high-level plan for testing that is appropriate to the product and project. Defining a test strategy embraces:

Where - what locations, test environments

What - upstream and downstream limits of what’s to be tested

Who - key resources

When - scheduling with consideration to resource availability and project delivery plan

Why - rationale for strategy

How - what types of test are appropriate and necessary for this project and product.

Test strategies are influenced by many factors, all ultimately representing risk in one form or another. As you define your strategy consider how you will account for:

  • client/user expectations,
  • experience and skill of the project team,
  • project visibility and risk,
  • project size, complexity, scope,
  • quality of up front requirements definition,
  • terms of the SOW (Statement of Work) or contract related to the test approach and the number of test cycles,
  • project governance requirements, and
  • project timelines and budget.

Just as every project is unique, so every test strategy will be uniquely tailored to the needs, priorities and other characteristics of the project in hand.

Test Plans

These are detailed plans for each type of test identified in the test strategy. Good test plans, properly executed, can make the difference between a product that runs almost perfectly for years, and one that is a constant source of missed schedules, faulty output and repeated maintenance.

Critical to the test plan will be the identification of specific known or suspected risk areas. Subject Matter Experts can provide clear guidance into these items and should be involved in both the definition of tests and the analysis of the results before a project is declared fit.

Test plans should include:

  • Purpose(s)
  • Approach
    • tools/medium
    • specific environment in which testing will occur
  • Requirements documents or other criteria that will be used
  • What is to be tested (e.g. the product itself, adjacent up/downstream products)
  • Roles and responsibilities
  • Testing protocol
    • testing types and phases
    • tools and methods
  • Test case definition (a minimal sketch follows this list)
    • purpose(s) - could be multiple in one test case
    • traceability to requirements and vice versa – every requirement should be tested, and every test should speak to one or more requirements
    • expected outcome(s)
  • Test criteria (what’s success?) and measurement (capturing and assessing the results)
  • Data required: data acquisition and preparation can be complex and difficult, as there are often privacy and security concerns, and the manpower required to fabricate large volumes can be costly.
    • type, specific cases (e.g. empty files, invalid/missing fields/records, min/max values, fonts and languages)
    • volume
    • source/transmission requirements

Despite its length, this list is not comprehensive. Every environment has unique touch points which should be taken into consideration.
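
To make test case definition and requirement traceability concrete, test cases can be captured as simple structured records and checked both ways: every requirement has at least one test, and every test points at a requirement. The sketch below is illustrative only; the field names, requirement IDs and Python structure are assumptions, not a prescribed EDBOK format.

    # Minimal sketch of test case records with two-way requirement traceability.
    # All field names and IDs are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        case_id: str
        purpose: str                # a single case may serve multiple purposes
        requirement_ids: list[str]  # traceability to requirements
        expected_outcome: str
        status: str = "not run"     # e.g. "not run", "passed", "failed"

    def coverage_gaps(requirements: set[str], cases: list[TestCase]) -> dict:
        """Report requirements with no test case, and cases that trace to nothing."""
        covered = {r for c in cases for r in c.requirement_ids}
        return {
            "untested_requirements": sorted(requirements - covered),
            "untraceable_cases": [c.case_id for c in cases
                                  if not set(c.requirement_ids) & requirements],
        }

    reqs = {"REQ-001", "REQ-002"}
    cases = [TestCase("TC-01", "Verify address block position", ["REQ-001"],
                      "Address block prints within the envelope window")]
    print(coverage_gaps(reqs, cases))
    # {'untested_requirements': ['REQ-002'], 'untraceable_cases': []}

A report like this, run before execution starts, catches gaps in the plan while they are still cheap to fix.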

The Team

The team makeup will differ for every project and will change as testing progresses through its various stages. Core team members should be involved in all steps. They include:

  • project manager,
  • business and systems analysts, and developers/engineers, and
  • dedicated test team members if the organization is structured that way.

The broader team will include one or more of the following:

  • Subject Matter Experts (SMEs)
  • Legal, Compliance, Internal Audit
  • QA team
  • Production/Operations (data center, print room, call center, distribution, other)
  • Suppliers: design firm, professional services provider, hardware or software vendors, offshore development and testing resources
  • Client or user

Test Execution

Test execution consists of a number of sequential activities, iterated for each test:

  • execute a test,
  • document results,
  • confirm expected outcome,
  • document any variances,
  • identify defects and assign them to the product development team,
  • identify changes in requirements and refer them to the project management team,
  • retest, and
  • report overall results.

Success Factors

Above all else, be methodical! The following guidelines are considered best practices in executing a testing protocol.

  • Define roles clearly:
    • Who is testing what?
    • Who is collating results?
    • Who is verifying the results?
    • Who holds accountability?
  • Define the plan:
    • Don’t leave it until the end of build
    • Pay attention to data
    • Get all stakeholders engaged early.
  • Set up a defect management process that includes:
    • An effective tool (and owner) for recording all tests planned and executed, and for capturing defects
    • Sample data for developers to recreate and re-test a defect
    • Traceability to requirements
    • Effective triage of defects so that major problems, and those early in the workflow, are tackled first (a simple triage sketch follows this list).
  • Understand the unique IT/print combination.
    • This is not typical application or software development
    • People who understand and are well trained in transaction driven projects are essential to the test team
    • An understanding of the architecture and the overall process is key.
  • Clear test strategies:
    • Be very clear what you are testing and how, at each phase – and what method/medium you are using
    • Be equally clear about what you are NOT testing at each phase
    • Manage time and cost components
    • Decide how much you test and manage to that plan.
  • Use the right medium/tool for each output test. The choice of the correct medium for viewing your test output can shorten test cycle times, facilitate effective evaluation, and help avoid embarrassing errors.
    • Use the target production printer or its backup:
      • For accurate color, shading, image, resolution, anti-aliasing and other look-and-feel tests
      • Where absolute positioning accuracy is key to the test (e.g. address block, barcode): production printers do not shrink the page
      • For MICR signal testing
      • For print workflow testing, including the RIP
      • For production testing
    • Use a desktop printer:
      • When production print capabilities aren't being tested, but content is
      • When the final output may be printed on a desktop printer, such as call center reprints from e-presentation or archive
    • Use the screen:
      • When print is not being tested, but content/business rules are
      • For testing e-presentation or archive views.
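
Returning to the defect management bullet above: effective triage is straightforward to automate once each defect record carries a severity and the workflow step where the problem appears. The sketch below is illustrative; the severity scale, field names and step numbers are assumptions rather than a prescribed scheme.

    # Minimal triage sketch: most severe defects first, earliest workflow step first.
    # Severity scale, field names and step numbers are illustrative assumptions.
    SEVERITY = {"critical": 1, "major": 2, "minor": 3}

    def triage(defects):
        """Order defects so major problems and early-workflow issues come first."""
        return sorted(defects,
                      key=lambda d: (SEVERITY[d["severity"]], d["workflow_step"]))

    backlog = triage([
        {"id": "D-12", "severity": "minor",    "workflow_step": 1},  # data intake
        {"id": "D-07", "severity": "critical", "workflow_step": 4},  # insertion
        {"id": "D-09", "severity": "critical", "workflow_step": 2},  # composition
    ])
    print([d["id"] for d in backlog])   # ['D-09', 'D-07', 'D-12']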

With regard to output testing for third-party software development, where a wide range of print-room hardware may be used, data should be printed and inserted on the target hardware that the software is intended to support. This can be a challenge given the variety of output devices available, so it helps to select a baseline device that you will support and then include as many other devices as is feasible. For example, if the output type is AFP print data, the target may be an Infoprint 4100 for monochrome output and an Infoprint 5000 for color output. This is important because it may not be possible to support every device, owing both to varying implementations of a standard or architecture and to proprietary extensions on a given device.

Challenges

Even with the best of efforts, there will be challenges. Planning for them is the most efficient way to ensure that they do not stop a test plan from being executed or unnecessarily delay essential testing. Some common challenges include:

  • Getting data:
    • Getting data can be challenging. There are privacy and security issues with using production data, and when new data feeds are involved, creating them can be time-consuming and difficult.
  • Poor requirements management:
    • Time spent gathering clear and complete requirements up front pays off at this stage. If this was not the case, plan for the impact on testing and on the project cost and timeline.
    • Often a result of inadequate client engagement, expectation management or resourcing.
    • The complexity of the testing effort is often underestimated in sizing the project.
    • Testing comes late in the project, and if there have been delays, may be rushed if the project plan does not have sufficient contingency built in.
  • Regression testing:
    • Regression testing relies on previously created test cases and environments, with associated known-good output data. These are used not only to validate the new product directly, but also to determine whether previous functionality has been compromised. Challenges include:
      • lack of reliable documentation to define current functionality, and
      • insufficient time allowed.
  • Test case development:
    • Test cases and conditions for business rules are often defined by the business systems analysts, as they are involved in the capture of detailed requirements and understand how the system should behave.
    • Developers will create test cases for their units of code during the program build.
    • The user acceptance test team should create (or at least define) their own test conditions independently of the development team.
    • The print and mail room should define their additional test conditions to support their validation.

Types of Testing

There are many types of testing, each with its own resource requirements and goals.

Unit Testing

  • Performed by:
    • Developers
    • People configuring software
    • Engineers developing hardware
  • Ensures that their code/configuration/hardware:
    • Meets design criteria
    • Behaves as intended
  • Should occur in each discrete new or changed module of an overall solution, making changes easier to regression test.
  • Improves quality early (“gets the lumps out”), before more complex and more expensive later testing occurs.
  • Requires properly formatted test data for the business rules being tested. If such data is not available (for example, when a data feed is being changed or reformatted), the developer may need to build their own.
  • Does not replace the need for other phases of testing.
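
As a concrete illustration, a unit test for a single business rule in a composition module might look like the sketch below. The rule, function name and thresholds are hypothetical; the point is the pattern of asserting expected behaviour for normal, boundary and negative cases.

    # Hypothetical unit test for one business rule; the rule and names are
    # assumptions for illustration, not taken from any real application.
    import unittest

    def past_due_message(balance: float, days_overdue: int) -> str:
        """Show a past-due notice only when an overdue balance exists."""
        if balance > 0 and days_overdue > 30:
            return "PAST DUE - please remit immediately"
        return ""

    class TestPastDueMessage(unittest.TestCase):
        def test_overdue_balance_triggers_notice(self):
            self.assertIn("PAST DUE", past_due_message(125.00, 45))

        def test_current_account_shows_no_notice(self):
            self.assertEqual("", past_due_message(125.00, 10))

        def test_zero_balance_shows_no_notice(self):
            self.assertEqual("", past_due_message(0.00, 90))

    if __name__ == "__main__":
        unittest.main()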

Integration, System, and End-to-End Testing

These tests are designed to progressively test more of the project’s product and its interactions with upstream and downstream systems. These tests:

  • Are performed by technical resources in the testing team.
  • Install software/hardware/developed code in a test environment.
  • Test that the various components of the solution, such as ETL, composition, post-composition manipulation, messaging, e-presentation and archive, work in an integrated environment.
  • Test the scripts that make the components run and work together.
  • Test reconciliation: that “data in” equals “data out” from each step.
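
Reconciliation at each step can be as simple as comparing record counts in and out. The sketch below is a minimal illustration; the step names and counts are made up.

    # Minimal reconciliation sketch; step names and counts are illustrative.
    def reconcile(step_counts):
        """Verify that 'data in' equals 'data out' for every step in the chain."""
        return [f"{step}: {records_in} in, {records_out} out"
                for step, (records_in, records_out) in step_counts.items()
                if records_in != records_out]

    counts = {
        "extract":     (10000, 10000),
        "composition": (10000, 10000),
        "insertion":   (10000,  9998),   # two pieces unaccounted for
    }
    print(reconcile(counts))   # ['insertion: 10000 in, 9998 out']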

Performance and Volume Testing

These test phases ensure that the product can execute its work within the necessary business-cycle window and has sufficient resources such as disk space; they also provide an opportunity to identify hotspots in code that can be reworked for improvement. This testing can extend to include:

  • paper and environmental testing,
  • data to post office testing, and
  • parallel processing.
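
A rough sizing calculation can confirm early whether a workload fits the available window before full-volume testing begins. The volumes and rates below are invented for illustration; in practice they would come from measured test runs.

    # Back-of-envelope window check; all volumes and rates are invented examples.
    documents_per_cycle = 2_000_000
    composition_rate    = 600      # documents per second, measured in test
    print_rate          = 350      # impressions per second across the fleet
    impressions_per_doc = 3
    window_hours        = 6

    composition_hours = documents_per_cycle / composition_rate / 3600
    print_hours = documents_per_cycle * impressions_per_doc / print_rate / 3600

    print(f"composition: {composition_hours:.1f} h, "
          f"print: {print_hours:.1f} h, window: {window_hours} h")
    # composition: 0.9 h, print: 4.8 h, window: 6 h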

Hardware Testing

One of the challenges in software development is determining the minimum hardware requirements for a given software product. This can be platform dependent and is an ongoing effort as operating systems, software, and hardware continue to evolve at a rapid pace. When defining the requirements, consideration must be given to whether the host system will be dedicated or not, and if not, what other components are running there and what impact they have on system resources. Test systems that meet these requirements should be maintained for all of the forms of testing mentioned.

Regression Testing

Regression testing ensures that existing functionality is not lost or unintentionally changed in the course of a project that modifies an existing system or introduces a new system into an existing environment.

In the third-party software world, where there may be hundreds of implementations and uses of the product across the customer base, regression testing is particularly important. It is central to any software development lifecycle work such as defect or bug fixing, as well as to feature and functionality enhancements. Once the initial test set-ups and strategies are defined and executed, samples of "known good" output should be collected and the purpose of each test case documented. These can then be used as a benchmark for regression testing any changes to the product, be they enhancements or fixes. This, coupled with an automation strategy and suitable comparison tools, allows the development team to quickly gain feedback on their changes and any potential impact they may have on the existing customer base.
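
Automated comparison against the known-good samples can be as simple as a directory-level file comparison, though real comparison tools usually normalise variable content such as dates and run numbers first. The sketch below is an assumption-laden illustration: the directory names, the .afp extension and byte-for-byte comparison are placeholders for whatever the product actually produces.

    # Minimal regression comparison against known-good output samples.
    # Directory names, the .afp extension and exact comparison are assumptions.
    import filecmp
    from pathlib import Path

    def regression_report(baseline_dir: str, candidate_dir: str) -> list[str]:
        """Compare regenerated output files to the known-good baseline set."""
        failures = []
        for baseline in sorted(Path(baseline_dir).glob("*.afp")):
            candidate = Path(candidate_dir) / baseline.name
            if not candidate.exists():
                failures.append(f"{baseline.name}: missing from candidate output")
            elif not filecmp.cmp(baseline, candidate, shallow=False):
                failures.append(f"{baseline.name}: differs from known-good sample")
        return failures

    # An empty report means the change had no unexpected impact on existing output.
    print(regression_report("baseline_output", "candidate_output"))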

QA and/or User Acceptance Testing

User Acceptance Testing both ensures that all requirements are correctly addressed in the product and focuses on features such as document objects, SLAs (Service Level Agreements) and T&Cs (Terms and Conditions) that are included in the output for each business case.

There are different names for this phase (and more than one type of testing performed) depending on:

  • organizations involved – internal and external,
  • project methodology,
  • who the “user” is,
  • nature and complexity of the project or application, and
  • if there is a QA department.

A QA team tests to the business needs by using the requirements documents. Their process should be documented, structured and traceable. Business analysts and/or the client may participate in resolving questions and analyzing potential defects.

  • Translated text is often tested by the translator during business testing.
  • User Acceptance Testing may include functional testing of items such as usability of interfaces, searching archives, message management and local reprint. The “users” doing testing could include:
    • call centers, branches, business or technical operations, technical support,
    • marketing or product,
    • translation,
    • legal and compliance, and
    • interested third parties – financial advisors, brokers.

The ultimate user of a statement may be the account holder, but account holders typically participate only as a group in consumer focus groups, as individuals through industry surveys such as JD Power, and by way of complaints or accolades.

Parallel Testing

Parallel testing is not a routine test; it is used when the product replaces all or part of an existing product and comparing the outputs is considered vital, often for a high-visibility or high-risk product. It is generally run as part of QA/UAT, comparing current production output to future test output and analyzing and resolving the differences.

It may also be done as part of launch, comparing old and new output running in the production environment before the new output is delivered (and the “old” destroyed!). Typical issues:

  • The request for parallel testing may be indicative of a lack of confidence in the up-front business requirements – plan accordingly.
  • Parallel testing is often a manual process, takes up time and costly resources, and is prone to error; manage the volumes involved to minimize the impact.
  • Be alert to privacy and confidentiality issues if live data or production output is used.
  • True comparison of output requires the same input data: source and content may differ between old and new, and the data may not be available in both environments.
  • The purpose of parallel testing must be clear; it cannot be used to compare look and feel if the printer is different. Understanding of the data, technology and environmental differences is key.

Production Testing  

Production may be internal (an in-house or in-plant facility) or outsourced to a service provider, who may be providing anything from print-ready to full development (raw data) services, and from print-and-mail only to multi-channel delivery. The testing process will differ accordingly, so it is important to communicate well, plan early and involve the right people. Production testing can include:

  • color selection and reproduction,
  • shading, formatting, fonts,
  • graphics/pictures,
  • pie charts,
  • opacity of the paper,
  • look and feel,
  • process control information flow,
  • correct process control and integrity marking,
  • ADF workflow integration of the new or changed product, and
  • printing, insertion and other handling or finishing.

Work Objects

Typical work objects (components) for the Test phase include:

  • test strategy,
  • test plan,
  • test case plan,
  • test reports,
  • System Integration Test (SIT) plan,
  • User Acceptance Test (UAT) plan,
  • Client Acceptance Test (CAT) plan, and
  • Production Acceptance Test (PAT) plan.

What is the End Product of Testing?

The end product of the Test phase is typically a test case document or library providing proof that all tests have been passed successfully, or that all major criteria have been met and:

  • there is an agreed plan to resolve minor problems,
  • there are acceptable temporary workarounds in place, and
  • stakeholders agree that fixes for the minor problems are not required for implementation.

Approval is required from all identified project stakeholders: not just management, but also the teams that have operational responsibility.

Implementation Planning

The point at which implementation planning should start will vary with the type of project, but it generally needs to be complete by the end of the Test phase so that implementation can proceed in a timely fashion. By the end of Build & Unit Test, the overall shape, schedules and dependencies of the new or changed computer jobs, and/or print-room workloads, workflows and equipment layouts should be clear.

The implementation plan should include a checklist with:

  • required production job scheduler changes,
  • JCL/scripts for executing the new/changed jobs,
  • storage (disk, tape, etc.) and network bandwidth requirements,
  • design for the change implementation package and process, including backout, and
  • operational documentation required by operations.

Implementation planning should account for any scheduled outages, maintenance downtime, and other known potential delays when scheduling and executing the changes. It is critical to take account of other impending changes, business volume peaks and troughs, and other factors so as to minimize the business impact of implementation.