Attendees: Michael Berger, Sean Velury, Dave Tanner, Ed Orsini, Felicia Leung, Don Flanders, Lisa Robinson, Alex Kozlov
I discussed the origins and scope of this project. Lisa and Alex were both interested in how this testing tool might aid the Web Browser release process for future browsers. They were looking for tests to run in currently supported browser/operating system combinations, in future browsers (IE9, FF4), and in browsers that we may not support but that customers call the help desk about (Chrome?).
Sean and Alex suggested that we limit our discovery process to the most important browser/operating system combinations.
The following people agreed to check which of the testing applications at this link (http://www.softwareqatest.com/qatweb1.html#FUNC) might be worth including in this discovery project:
Alex agreed to look at Gartner Research to see if it has recommendations for Web Testing applications.
Sean agreed to look into various QA magazines and web sites he knows to see if he can identify the Top 10-20 Web Testing applications.
We agreed that we would test QTP 11 as part of this discovery project, since we already own and know QTP 10.
We agreed that we would use the Java web application APR-Hires as our test subject, and we have finalized a test plan for APR-Hires.
I will set a meeting time of every two weeks after I talk to our missing members (Judith McJohn and Stephen Turner).
We will do email check-ins on the Friday between meetings.
We discussed some of the measures we want to evaluate each application against, which included:
Attendees: Michael Berger, Ed Orsini, Felicia Leung, Don Flanders, Alex Kozlov, Judith McJohn
We discussed the functional testing applications we had discovered so far and narrowed them down to QTP, Selenium, and a few others. We discussed whether we should look at testing applications that emulate a browser, and we are leaning against that. Next steps: Michael Berger will narrow down our list of applications further.
We discussed the current criteria for evaluating these testing applications and added more criteria. Everyone is expected to add to this section as homework.
Attendees: Michael Berger, Don Flanders, Judith McJohn, Sean Velury, Lisa Robinson
We went over 13 different functional testing tools and sifted them down to the final five we will be testing. We also assigned each person to test at least one of the applications.
The criteria we used to winnow down our test list included:
| Application | URL | Testers |
| --- | --- | --- |
| Selenium | | Mike Berger, Don Flanders, Felicia Leung |
| FuncUnit | | Lisa Robinson, Judith McJohn |
| AppPerfect Web Test | | Ed Orsini |
| Eggplant | | Alex Kozlov, Sean Velury |
| QTP 11 (for baseline comparison) | | Sean Velury, Felicia Leung |
Attendees: Judith McJohn, Felicia Leung, Michael Berger
Most people have been too busy to test, so we are behind. Sean Velury has tested Eggplant and found that it does not meet our requirements, so we need to come up with a different tool for him and Alex to test.
Attendees: Michael Berger, Ed Orsini, Don Flanders
FuncUnit does not meet our initial requirements because it does not work in current versions of Firefox. It is a front end to Selenium, so we do not need to test it separately if we continue testing Selenium.
Attendees: Michael Berger, Ed Orsini, Don Flanders, Sean Velury, Felicia Leung, Judith McJohn, Alex Kozlov
After a hiatus while everyone delivered early fall work, we got back together. Current status: we are down to two products, Selenium and AppPerfect. We are in the process of testing the Selenium back end and having everyone test the Selenium IDE front end (which some of us have used successfully). In addition, at our next meeting Ed Orsini will demo AppPerfect so we can see what further testing we should do with it.
Attendees: Michael Berger, Ed Orsini, Don Flanders, Sean Velury, Felicia Leung, Judith McJohn
Ed showed us AppPerfect. It was not perfect: he had problems with HTML selects (dropdowns). He has requested help from the vendor and is trying to get a complete end-to-end test.
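For comparison, this is roughly how the same kind of dropdown step is driven in Selenium, the other tool still under test. This is only a sketch assuming the Selenium WebDriver Java bindings; the `department` element id and the option text are hypothetical, not taken from APR-Hires.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.Select;

public class DropdownExample {

    // Drives an HTML <select> by its visible option text.
    // "department" is a hypothetical element id, not one from APR-Hires.
    static void chooseDepartment(WebDriver driver) {
        Select department = new Select(driver.findElement(By.id("department")));
        department.selectByVisibleText("Human Resources");
    }
}
```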
We looked more at Selenium. We are close to getting an end-to-end run through our test application, but so far we have not added all the assertions we need to make it a real "test."
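As a reminder of what "adding assertions" means in practice, here is a minimal sketch of a Selenium test with one assertion, assuming the WebDriver Java bindings and JUnit 4. The URL, element ids, and expected text are placeholders, not the actual APR-Hires script.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AprHiresSmokeTest {

    @Test
    public void loginShowsExpectedPage() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Hypothetical URL and element ids; substitute the real APR-Hires pages.
            driver.get("http://apr-hires.example.com/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            // The assertion is what turns a recorded walkthrough into a real test.
            String heading = driver.findElement(By.id("pageTitle")).getText();
            assertEquals("New Hires", heading);
        } finally {
            driver.quit();
        }
    }
}
```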
We have split into several groups: Felicia, Judith, and Sean are trying to finish up the test. Mike is trying to get Selenium running on his Mac and in a Linux VM. Then we will run the finished test in all combinations of OS/browser that we support. Meanwhile, Ed continues to work on AppPerfect.
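One way to run the same finished test across every supported OS/browser combination is to pick the browser driver at run time. A rough sketch, again assuming the WebDriver Java bindings; the class name and system property are our own placeholders.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class BrowserFactory {

    // Select the driver from a system property so one test class can be
    // run once per supported browser/OS combination, e.g. -Dbrowser=ie
    public static WebDriver create() {
        String browser = System.getProperty("browser", "firefox");
        if ("ie".equalsIgnoreCase(browser)) {
            return new InternetExplorerDriver();   // Windows only
        } else if ("chrome".equalsIgnoreCase(browser)) {
            return new ChromeDriver();             // needs chromedriver on the PATH
        }
        return new FirefoxDriver();                // default; runs on Mac, Linux, Windows
    }
}
```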
We hope to have one or two more meetings and then write a report.
We assessed the results so far:
1. Judith likes Selenium, but thinks it is not the be-all and end-all; it is not that easy to set up and debug.
2. Lisa defers to the developers and thinks the Help Desk can learn to run tests.