
Just what is QA doing, anyway?

This page covers:

•    Schedule of QA events during the Thalia sprint cycle
•    Process for acceptance of the release by QA
•    Customer communications for each release
•    Description of tests:
o    Smoke Test outline
o    Complete Functionality test outline
o    Test Cases
•    Thalia's JIRA

The Thalia Sprint Schedule:

The first half of each Thalia sprint is devoted to development. QA uses this time to write test plans and test cases covering new functionality, and to test fixed bugs and new features as they are released to the staging platform. Bug reports are converted into new test cases, since each one indicates an area that needs coverage.

The second half of each Thalia sprint is the QA phase. Once the candidate build is up on the staging platform, QA begins regression testing to confirm that all functionality still works as expected.

First Half - Development Phase:

As each build is released to QA:

  1. Smoke test to confirm integrity of basic functionality:
    - test create, view, update, and delete of all components   
  2. Verify and close defect fixes with each new build:
    - Close fixed JIRA issues
    - Reopen JIRA issues as needed.
    - Write additional test cases for new bugs.
  3. Test new functionality:
    - Open JIRA issues as needed.
    - Write test cases for new functionality.
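The per-build smoke test in step 1 can be sketched as a CRUD loop over component types. Everything named here is hypothetical — `FakeClient`, the component names, and the record shape are stand-ins for the real Thalia interface, which this page does not describe:

```python
# Hypothetical sketch of the step-1 smoke test.  FakeClient is a minimal
# in-memory stand-in so the sketch is runnable; it is NOT the Thalia API.

class FakeClient:
    """In-memory store with create/view/update/delete per component type."""
    def __init__(self):
        self._store = {}
        self._next_id = 0

    def create(self, ctype, data):
        self._next_id += 1
        self._store[(ctype, self._next_id)] = dict(data)
        return self._next_id

    def view(self, ctype, item_id):
        return self._store.get((ctype, item_id))

    def update(self, ctype, item_id, data):
        self._store[(ctype, item_id)].update(data)

    def delete(self, ctype, item_id):
        del self._store[(ctype, item_id)]


def smoke_test(client, component_types):
    """Create, view, update, and delete one item of each component type.

    Returns a list of (component_type, exception) failures; an empty
    list means the build passes the smoke test.
    """
    failures = []
    for ctype in component_types:
        try:
            item_id = client.create(ctype, {"name": "smoke"})
            assert client.view(ctype, item_id)["name"] == "smoke"
            client.update(ctype, item_id, {"name": "smoke-updated"})
            assert client.view(ctype, item_id)["name"] == "smoke-updated"
            client.delete(ctype, item_id)
            assert client.view(ctype, item_id) is None
        except Exception as exc:
            failures.append((ctype, exc))
    return failures


print(smoke_test(FakeClient(), ["page", "comment", "attachment"]))  # -> []
```

The point of the sketch is the shape of the check, not the client: one create/view/update/delete round trip per component type, with any failure severe enough to reject the build collected rather than aborting the run.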

Second Half - QA Phase:

•    Initial smoke test of major functionality
•    Regression test - hunting for bugs in previously working areas (use test cases to guide this; log results in the test log)
•    Re-test bugs fixed during the current sprint - make sure they are still fixed.
•    Open new JIRA issues as required
•    Exploratory testing - as much as possible, to find creative new ways Thalia can go wrong.
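Logging regression results against test cases, as the second bullet suggests, can be as simple as one row per test-case run. The field names below are invented for illustration — this is not the team's actual test-log schema:

```python
import csv
import io

# Hypothetical test-log row format; the fields are invented placeholders.
FIELDS = ["test_case", "build", "result", "jira_issue", "notes"]

def log_result(log_file, **row):
    """Append one test-case result as a CSV row."""
    csv.DictWriter(log_file, fieldnames=FIELDS).writerow(row)

# Example: two regression-test entries for one candidate build.
buf = io.StringIO()
csv.DictWriter(buf, fieldnames=FIELDS).writeheader()
log_result(buf, test_case="TC-101", build="r1234", result="pass",
           jira_issue="", notes="")
log_result(buf, test_case="TC-102", build="r1234", result="fail",
           jira_issue="THALIA-42", notes="regression in editor")
```

A failing row carries the JIRA issue opened for it, so the log doubles as the bridge between "log results in the test log" and "open new JIRA issues as required."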

Process for Acceptance of the Release:

•    Does the release candidate work at least as well as the current public version?
•    Do the new features work the way they are supposed to?
•    Are all blocker and critical bugs fixed?
•    Have customer communications about the new release date been sent out?
o    First notice to customers goes out one week ahead of the expected release.
o    Second notice to customers goes out 24 hours ahead of the expected release.
•    After release:
o    Smoke test of the product.
o    Third notice goes out to customers 24 hours after the release, if the smoke tests are acceptable.
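The notice schedule above (one week before, 24 hours before, 24 hours after) follows mechanically from the expected release date. A minimal sketch of that arithmetic, with an invented example date:

```python
from datetime import datetime, timedelta

def notice_schedule(release_at):
    """Customer-notice send times relative to the expected release time."""
    return {
        "first_notice": release_at - timedelta(weeks=1),
        "second_notice": release_at - timedelta(hours=24),
        # Third notice is sent only if the post-release smoke tests pass.
        "third_notice": release_at + timedelta(hours=24),
    }

# Hypothetical release date, purely for illustration.
release = datetime(2009, 6, 15, 9, 0)
for name, when in notice_schedule(release).items():
    print(name, when.isoformat())
```

Note that if the release date slips, all three notices move with it, which is why the notices are pegged to the *expected* release rather than fixed calendar dates.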

Explanation of Tests:

Usability Testing
•    We periodically consult the Usability lab ....

Sanity Testing
•    Determines whether it is reasonable to proceed with further testing.

Smoke Testing
•    A minimal test of create, view, update, and delete of all components.
•    Preliminary - broad - shallow - fast.
•    A preliminary to further testing; it should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.

Regression Testing
•    Used to determine that changes have not caused unintended side effects: do the unmodified parts of the system still work as before?

New Stuff Testing
•    Do the new or modified parts work as required?

User Use
•    Does it serve customer needs, abilities, etc.?

Acceptance Testing
•    Determines whether the product is acceptable for release.
