In February this year I had the honour of being invited to attend the Workshop on Performance and Reliability (WOPR), held in Wellington. Twenty-three performance and capacity experts from Australia, New Zealand, and North America congregated for a three-day, closed-door workshop.
The theme this year was 'Performance Tools for 2017 (and beyond)'. The attendees came from both product companies and external consultancies, and worked at very different scales, from testing and monitoring a handful of servers to over a million.
The format of the workshop was simple. Each member of the group shared an 'experience report' about something they encountered on the job, which the group would then discuss, and those discussions were in many ways where the magic happened. I was lucky enough to share my experience working in New Zealand and the unique circumstances here. The most prevalent theme in my mind was the idea of continuous performance testing and everything that entails. The basic concept is that within a continuous delivery pipeline we automatically run performance tests. There's a lot to consider in order to make this work:
- How do we determine the pass or fail criteria for an automated performance test? We can use non-functional requirements, but defining those in a way which is meaningful to the business is challenging. We can also compare each run with previous runs to look for degradation over time, which requires some very clear mathematical reasoning to inform accurate decision making. The third option is to automate the performance tests without tying pass/fail criteria to them, instead publishing the results to a dashboard where a performance specialist can review them.
- Maintaining load test suites can be trivial or outrageously time consuming. A lot of it depends on the testability of the software we are assessing, so testability should be part of the decision making when choosing to purchase (or build) software. For load testing in particular, this comes down to the complexity of the HTTP (or other network) traffic. Off-the-shelf and legacy systems commonly fall into the 'difficult to test' category.
- Centralised dashboards which collate server and application monitoring along with load test results are important when your performance testing is automated. There are many parts to that: picking the right level of monitoring to tell you what you need to know to make educated decisions, and then presenting it in a way which is easy to understand. Having a dashboard like this, and getting it in front of the teams we work with, gets them invested in the performance of their software.
- Organisations are starting to prefer SaaS tool solutions over on-premises (or self-managed) ones. A completely externally hosted and managed solution is one less concern for an organisation, and is often more cost effective and easier to maintain. For internally managed tooling, the preference is for simple, specific tools which can be chained together, a model which fits the continuous delivery pipeline well. Monolithic, enterprise-level tools are no longer cost effective or suitable in the continuous delivery world.
- Environment management is key. Testing in production (or using ‘canaries’), production-like environments, dynamically spinning up environments on demand; whatever is required to have a reliable environment that provides accurate results.
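As a minimal sketch of the second option above (comparing each run with previous runs to detect degradation), the following compares the median response time of the current run against a baseline run. The function name, data, and 20% threshold are my own illustration of the idea, not anything prescribed at the workshop; a real pipeline gate would likely use more rigorous statistics than a simple median ratio.

```python
import statistics

def check_for_degradation(baseline_ms, current_ms, threshold=1.2):
    """Pass only if the current run's median response time is within
    `threshold` times the baseline median (hypothetical gate logic)."""
    baseline_median = statistics.median(baseline_ms)
    current_median = statistics.median(current_ms)
    ratio = current_median / baseline_median
    return ratio <= threshold, ratio

# Illustrative response times (ms) from a known-good run and the current run.
baseline = [110, 120, 115, 130, 118]
current = [125, 140, 150, 160, 155]

passed, ratio = check_for_degradation(baseline, current)
print(f"passed={passed}, ratio={ratio:.2f}")
```

In a continuous delivery pipeline, a failing check like this would break the build (or flag the run for a specialist to review, per the third option).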
A lot more was discussed at WOPR and I’m still picking my notes apart to solidify it in my mind. Some of these topics included raising accountability and visibility around software performance, the unique challenges in the New Zealand context (which I’ll blog about soon), the ‘lost skills’ required to write good performing software (including statistics and probability, and how compilers work), and what is shifting left versus shifting right.
I had a great time at WOPR25 and it was an honour to hear from the global leaders of our industry. It has inspired me to learn more and strive for absolute excellence in the work I do.
Stephen Townshend, 1 March 2017
Performance Test Practice Lead @ TTC