Kickoff Meeting 5/20/2011

Attendees: Michael Berger, Sean Velury, Dave Tanner, Ed Orsini, Felicia Leung, Don Flanders, Lisa Robinson, Alex Kozlov

I discussed the origins and scope of this project. Lisa and Alex were both interested in how this testing tool might aid the Web Browser release process for future browsers. They were looking for tests to run in currently supported browser/operating system combinations, in future browsers (IE9, FF4), and in browsers that we may not support but that customers call the help desk about (Chrome?).

Sean and Alex suggested that we limit our discovery process to the most important browser/operating system combinations.

The following people agreed to check which of the testing applications at this link (http://www.softwareqatest.com/qatweb1.html#FUNC) might be worth including in this discovery project:

  • Ed will look at applications 1 through 20;
  • Felicia will look at applications 21 through 40;
  • Mike will look at applications 41 through 60;
  • Dave will look at applications 61 through 79. (Dave later left this project because he has a new job in Student Systems, so no one looked at 61 through 79.)

Alex agreed to look at Gartner Research to see if it has recommendations for Web Testing applications.

Sean agreed to look into various QA magazines and web sites he knows to see if he can identify the Top 10-20 Web Testing applications.

We agreed that we would test QTP 11 as part of this discovery project, since we already own and know QTP 10.

We agreed that we would use the Java web application APR-Hires as our test subject, and we have finalized a test plan for it.

I will set a meeting time of every two weeks after I talk to our missing members (Judith McJohn and Stephen Turner).

We will do email check-ins on the Friday between meetings.

We discussed some of the measures we want to evaluate each application against, which included:

  • Support for all current IST browser/operating system combinations;
  • Ease of use;
  • Documentation;
  • A healthy future for the application based on its community/company support.

Second Meeting 6/3/2011

Attendees: Michael Berger, Ed Orsini, Felicia Leung, Don Flanders, Alex Kozlov, Judith McJohn

We discussed the functional testing applications we had discovered so far and narrowed them down to QTP, Selenium, and a few others. We discussed whether we should look at testing applications that emulate a browser, and we are leaning against that. Next steps: Michael Berger will narrow down our list of applications further.

We discussed the current criteria for evaluating these testing applications and added more criteria. Everyone is expected to add to this section as homework.

Third Meeting 6/17/2011

Attendees: Michael Berger, Don Flanders, Judith McJohn, Sean Velury, Lisa Robinson

We went over 13 different functional testing tools and sifted them down to the final five we will be testing. We also assigned each person to test at least one of the applications.

The criteria we used to winnow down our test list included:

  1. Support for all MIT operating systems
  2. Support for all MIT browsers
  3. Ability to play back tests in a browser (instead of testing via browser emulation)
  4. Ability to record tests (to make test creation easier)
  5. Some sort of GUI (versus a code-only framework)
  6. Decent documentation
  7. Some sort of name in the industry
Applications we will test:

  • Selenium (http://seleniumhq.org/): Mike Berger, Don Flanders, and Felicia Leung
  • FuncUnit (http://funcunit.com/): Lisa Robinson and Judith McJohn
  • AppPerfect Web Test (http://www.appperfect.com/products/app-test.html): Ed Orsini
  • Eggplant (http://www.testplant.com/): Alex Kozlov and Sean Velury
  • QTP 11, for baseline comparison: Sean Velury and Felicia Leung

Fourth Meeting 7/5/2011

Attendees:

Fifth Meeting 7/29/2011

Attendees: Judith McJohn, Felicia Leung, Michael Berger

Most people have been too busy to test, so we are behind. Sean Velury has tested Eggplant and found that it does not meet our requirements, so we need to come up with a different tool for him and Alex to test.

Sixth Meeting 8/12/2011

Attendees: Michael Berger, Ed Orsini, Don Flanders

FuncUnit does not meet our initial requirements because it does not work in current versions of Firefox. It is a front end to Selenium, so we need not test it separately if we continue to test Selenium.

Seventh Meeting 10/3/2011

Attendees: Michael Berger, Ed Orsini, Don Flanders, Sean Velury, Felicia Leung, Judith McJohn, Alex Kozlov

After a hiatus while everyone delivered early fall work, we got back together. Current status: we are down to two products, Selenium and AppPerfect. We are in the process of testing the Selenium back end and having everyone test the Selenium IDE front end (which some of us have used successfully). In addition, at our next meeting Ed Orsini will demo AppPerfect so we can see what further testing we should do with it.

Eighth Meeting 10/18/2011

Attendees: Michael Berger, Ed Orsini, Don Flanders, Sean Velury, Felicia Leung, Judith McJohn

Ed showed us AppPerfect. It was not perfect: he had problems with HTML selects (dropdowns). He has requested help from the vendor and is trying to get a complete end-to-end test.

We looked more at Selenium. We are close to getting an end-to-end run through our test application, but so far we have not added all the assertions we need to make it a real "test."
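For reference, a minimal sketch of what an assertion-bearing Selenium test might look like, using the Selenium RC Java client we have been running against the Selenium server. The URL, locators, and expected text below are placeholders for illustration, not the real apr-hires values:

    import junit.framework.TestCase;
    import com.thoughtworks.selenium.DefaultSelenium;

    // A recorded run only becomes a real "test" once it asserts on what the
    // application actually did. All URLs and locators here are placeholders.
    public class AprHiresSmokeTest extends TestCase {
        private DefaultSelenium selenium;

        protected void setUp() throws Exception {
            // Assumes a Selenium server listening on the default port 4444.
            selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                    "http://apr-hires.example.mit.edu/"); // placeholder URL
            selenium.start();
        }

        public void testSearchReturnsResults() {
            selenium.open("/search");                // placeholder path
            selenium.type("id=lastName", "Smith");   // placeholder locator
            selenium.click("id=searchButton");       // placeholder locator
            selenium.waitForPageToLoad("30000");
            // The assertion is what makes the run a test.
            assertTrue(selenium.isTextPresent("Search Results"));
        }

        protected void tearDown() throws Exception {
            selenium.stop();
        }
    }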

We have split into several groups: Felicia, Judith and Sean are trying to finish up the test. Mike is trying to get Selenium running on his Mac and in a Linux VM. Then we will run the finished test in all combinations of OS/browser that we support. Meanwhile Ed continues to work on Appperfect.

We hope to have one or two more meetings and then write a report.

Ninth Meeting 11/14/2011

We assessed the results so far:

1. Jude likes Selenium but thinks it is not the be-all and end-all; it is not that easy to set up and debug.

2. Lisa defers to the developers and thinks the Help Desk can learn to run tests.

3. Felicia found it not as easy to make the test work, and hard to figure out why when it fails; some of the selectors are not that good, and a drawback of the tool is the amount of checking required.

4. Sean says he uses record and playback to get DOM IDs, then hand-codes his test. This points to a weakness in our evaluation of Selenium: we did not test the coding process.

Hardware requirements: we need to be able to get at all of the OS/browser combos.

1. Need someone to be the product evangelist: someone who creates the docs, builds some baseline tests, shows how to handle problem items (type versus typeKeys; see the sketch after this list), and trains developers and the help desk

2. Need dedicated hardware/software and test users (Touchstone and certificates)

3. Need someone to coordinate and maintain the environment

4. Need management buy-in and continued support

5. Need a project, a project plan, and a project manager

6. Need to do the following further testing of Selenium: get Safari working on the Mac, test the Selenium server on Linux, and test on Lion.
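On the "type versus typeKeys" problem item from point 1 above: in the Selenium RC API, type() assigns a field's value in one shot, while typeKeys() simulates individual keystrokes, which matters for fields with JavaScript key handlers (autocompletes, validate-as-you-type). A minimal sketch, assuming an already started RC session; the locator and value are placeholders:

    import com.thoughtworks.selenium.Selenium;

    // Sketch of the "type versus typeKeys" problem item. Assumes an already
    // started Selenium RC session; the locator and value are placeholders.
    public class TypeVsTypeKeys {
        static void fillAutocompleteField(Selenium selenium) {
            // type() sets the whole value at once; onkeydown/onkeyup handlers
            // never fire, so autocomplete suggestions never appear.
            selenium.type("id=city", "Cambridge");

            // typeKeys() sends individual key events, so scripted fields
            // behave as they would for a real user typing.
            selenium.typeKeys("id=city", "Cambridge");
        }
    }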
