Sunday, July 14, 2013

The Kokoda Challenge 2013

My daughter took part yesterday in the Jim Stillman Cup of the Gold Coast Kokoda Challenge. The Kokoda Challenge is a uniquely Australian event: a walk commemorating the WWII battles in Papua. The full-length Stan Bisset Cup is 96 km long and open to students who are at least 15 years old; the Jim Stillman Cup is half the distance and open to students at least 13 years old. There are shorter Kokoda Challenge walks in Brisbane and Melbourne.

185 school teams of four students and an adult each, plus 190 other teams, reached the finish line at the Nerang Velodrome yesterday and today. This is a gruelling event: the participants walk and run day and night through difficult terrain, for an average of 28 or 14 hours depending on the distance, and most of them make it to the finish line.

A big thank you to the support teams and all teachers!

Jim Stillman Cup start in Numinbah Valley.

The full 96 km track.

Thursday, July 4, 2013

Software Testing

I would like to share with you an approach to software regression testing that works for systems consisting of multiple applications.

First things first: software regression testing should always mean automated testing. Manual regression testing is a wasteful practice that will make you loathe change. If you go down the manual testing path, the process will take more time every time you add a feature, and soon you will find that a simple change that takes a programmer a few minutes to implement takes weeks to deliver to the customer. It's best to leave manual testing to beta customers, internal or external. When they find a problem, add a test case for it to the automated test suite to prevent regression. Automated testing ensures that no regression occurred in the last build. Just like continuous integration, it is a must, and ideally you should have it right from the start of the project.

Continuous deployment that triggers tests is a bit more difficult to implement with limited resources, because you would need a fresh test environment for every check-in. But at a minimum, every build should generate all installers, and you should have an automated nightly deployment and test run.
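As a rough illustration (not our actual script), the nightly job can be a small Python script that installs the latest build and then launches the test suite. The paths, the silent-install flag, and the installer name below are all hypothetical; pybot is Robot Framework's command-line runner, whose exit code is the number of failed tests.

    # nightly_run.py - hypothetical sketch of a nightly deploy-and-test job.
    import subprocess
    import sys

    BUILD_SHARE = r"\\buildserver\nightly\latest"  # where CI drops installers
    TEST_DIR = r"C:\tests\acceptance"              # Robot Framework suites
    RESULTS_DIR = r"C:\tests\results"

    # 1. Install the freshly built application silently
    #    (the /S flag is installer-specific).
    if subprocess.call([BUILD_SHARE + r"\MyAppSetup.exe", "/S"]) != 0:
        sys.exit("deployment failed")

    # 2. Run the acceptance suite; pybot's exit code is the failure count.
    sys.exit(subprocess.call(["pybot", "--outputdir", RESULTS_DIR, TEST_DIR]))

Scheduled with Windows Task Scheduler or cron, this gives you a fresh deployment and a complete test run every night.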

Automated testing of some systems is difficult. A system consisting of web applications, desktop applications, and services, all interacting across different operating systems, cannot be functionally tested the same way you test a class library with unit tests. You need something that will emulate one or more users and interact with all parts of the system running on multiple computers at the same time. Mocks won't do. You need a program that will click buttons, read text, close windows, launch programs, and remote desktop (yes, it's a verb).

We chose Sikuli, developed at MIT and maintained by people from around the world. Sikuli (“God’s eye” in the language of Mexico’s Huichol Indians) is still quite new, and some features don't work very well, but it is the best tool out there. Image recognition (IR) in particular works great, provided you remember that it uses fuzzy matching and that your images need to capture the essence of what you are looking for. For example, if you have icons on the screen and you need Sikuli to find a particular one, your images should include only the parts of the icons that differ, which may mean leaving out their borders. Turning off the mouse cursor is tempting because it speeds things up a lot, but it makes troubleshooting harder and may cause weird errors when a program is surprised by a mouse click that arrives without any preceding events.
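To give a flavour of what such a test looks like, here is a minimal Sikuli (Jython) sketch. The image file names and the application path are hypothetical, and Pattern(...).similar(...) is the knob that tightens or loosens the fuzzy matching.

    # Minimal Sikuli sketch; all .png files and paths are hypothetical.
    from sikuli import *  # implicit when running inside the Sikuli IDE

    # Capture only the distinctive inner part of the icon, not its border,
    # so the fuzzy matching keys on what actually differs between icons.
    settingsIcon = Pattern("settings_inner.png").similar(0.85)

    App.open(r"C:\Program Files\MyApp\MyApp.exe")
    wait(settingsIcon, 15)        # allow up to 15 seconds for startup
    click(settingsIcon)
    type("server01" + Key.TAB)    # fill in a text field and move on
    click("save_button.png")

    # An error dialog appearing within 5 seconds means the test failed.
    if exists("error_dialog.png", 5):
        exit(1)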

What doesn't work very well in Sikuli?
  • OCR (Optical Character Recognition) works best when text is 12-14 pixels tall. Use smaller or bigger text and you get funny results. 
  • Switching between applications on Windows is not reliable; the operating system sometimes doesn't honour your requests. You need to resort to switching by clicking on things, e.g. Windows taskbar -> right click -> show desktop -> click on app icon (see the sketch after this list). On Macs it may work better. 
  • There is no serious IDE and no unit test framework, but that problem is easily mitigated with Robot Framework and Eclipse. For a simpler environment, try Notepad++ with a workspace file and Python and Robot Framework language style templates.
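Here is a rough Sikuli sketch of the click-based switching workaround mentioned above; the taskbar and icon images are hypothetical captures from your own desktop.

    # Hypothetical sketch of the click-based app-switching workaround.
    def focus_via_desktop(app_icon_image):
        # switchApp() / App.focus() can silently fail on Windows, so
        # right-click an empty part of the taskbar, pick "Show desktop",
        # and bring the application back via its desktop icon.
        rightClick("taskbar_empty_area.png")
        click("menu_show_desktop.png")
        doubleClick(app_icon_image)

    focus_via_desktop("myapp_desktop_icon.png")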

When you have tests running overnight and something goes wrong, you need to know what happened. Logs are good, but not enough: you need to see what happened. We record all tests with Screenpresso. Just yesterday I saw a recording of an error that I would have found very hard to believe had it been reported by a tester. And that's what I like about programming - I see miracles every day. :-)