Exploring Performance Testing, Productivity, and Testing Dynamics

Over the years, the Quality Assurance and Testing Services market has witnessed rapid expansion across different industry verticals, driven largely by highly informed audiences and users who have grown deeply familiar with technology.

From business software to mobile applications, users demand a seamless experience and purposeful applications, and Quality Assurance has served as a helping hand to DevOps communities in meeting such goals.

With QA more prominent a practice than ever, it is crucial for us at BugRaptors to dive into the changing dynamics of the industry by talking with experts from QA backgrounds.

Hey everyone, this is Rajeev Verma, Project Manager (QA) at BugRaptors, and I’ve brought you another interesting interview session. This time, we had a word with Stephen Townshend, a performance engineering specialist working with IAG, New Zealand.

Stephen is an experienced and competent consultant and specialist who has worked exclusively in performance for over a decade. Although he now works as an internal specialist, most of Stephen’s career was focused on consulting, where he worked on everything from pre-sales and scoping to strategy building, delivery, and diagnosis.

With rich experience across multiple industries, including insurance, retail, banking, logistics, government, and education, Stephen has a strong command of a wide variety of traditional load testing tools, including JMeter, Visual Studio Load Test, Rational Performance Tester, LoadRunner, NeoLoad, and SilkPerformer. Besides this, Stephen has a great understanding of supporting tools and scripting languages, with some functional and automated functional testing exposure. Furthermore, he has also picked up a wide range of scripting, programming, OS, architecture, and infrastructure knowledge on the job.

So, without taking much of your time, let’s jump straight into the interview, covering Stephen’s personal experiences, his interest in performance testing, his secrets to productivity, and his quick tips for testing success.

Let’s begin! 

Rajeev Verma: How was your experience working with different companies? 

Stephen Townshend: Firstly – a note that these are my views and ideas only, not necessarily that of my employer. 

I worked as a performance test consultant for a decade, so I got to work with many different organizations in New Zealand. Consulting is challenging: you are constantly learning new business domains, technologies, and organizations, and meeting new people.

Despite the challenges, consulting is where I grew my skills and confidence. If it wasn’t for the constant change and frequent challenges that pushed me outside my comfort zone, I wouldn’t have grown so much as a performance engineer. 

One of the most challenging parts of consulting for me was taking part in pre-sales. I would go along with a salesperson to meet a new client, ask questions about their project or initiative, and hopefully sound like I knew what I was doing (to help win us the business).  

I put a lot of pressure on myself in those meetings, and it took a toll. One time I had an anxiety attack in the middle of a pre-sales meeting and found myself unable to speak. It was incredibly embarrassing, but something I learned from. 

When I moved out of consulting into an internal role, I thought things would be very different, but they weren’t so different after all. As an internal engineer, I still find myself doing a kind of pre-sales activity, but my customers are now the internal teams in our organization.

As an internal staff member, there is more consistency in the technology stacks I work with than when I did consulting, but there is still a lot of diversity of technology. What I like about working internally is the opportunity to build something lasting – whether it’s a tool, framework, or culture change. 

Rajeev Verma: What does the performance testing process involve?

Stephen Townshend: This is a very difficult question to answer because performance testing can be a lot of different things, and there is no one-size-fits-all process. When most people hear “performance testing,” they think of “load testing,” which is where we use tools to simulate load on an application.

Load testing is an increasingly small part of how we manage performance risk, as we are forced to adapt to modern ways of delivering software. Performance testing is a bigger discipline and involves any testing activity at all which measures or helps us understand performance. 

I’ll give an overview of a “traditional” performance testing engagement. This was the kind of process I would follow as a consultant. 

To begin with, I would do something called a performance risk assessment. This is not a very well-known way of working, but I think it’s an excellent way to scope and understand our work.  

The idea is to gather information about the solution in question, including what the components are, the technology used, whether each component is new or modified or existing, etc.  

You also look at the different user groups who rely on the application and how much load we expect on each component (we find this out by completing a workload model, which is a whole topic in itself). This requires talking to many stakeholders, including business representatives, architects, testers, developers, and project managers.

The end result is that we have a fairly comprehensive view of all the components of the system, how they interact, and which areas carry the most risk to the business. 
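Stephen doesn’t walk through the mechanics of a workload model here, but a toy sketch can make the idea concrete. The Python snippet below is illustrative only; the user groups, rates, and the share of traffic each sends to a hypothetical search API are invented for the example, not taken from any real engagement.

```python
# Toy workload model: estimate busy-hour load on one component.
# All user groups, rates, and traffic shares below are invented.

# (active users, actions per user per hour) during the busy hour
user_groups = {
    "call_centre": (120, 30),
    "online_customers": (5000, 2),
    "back_office": (40, 10),
}

# Fraction of each group's actions that hits the (hypothetical) search API
search_api_share = {"call_centre": 0.6, "online_customers": 0.4, "back_office": 0.1}

total_per_hour = 0.0
for group, (users, actions_per_hour) in user_groups.items():
    total_per_hour += users * actions_per_hour * search_api_share[group]

print(f"Search API busy-hour load: {total_per_hour:.0f} requests/hour "
      f"(~{total_per_hour / 3600:.2f} requests/second)")
```

Repeating this for each component gives the per-component load expectations that the risk assessment draws on.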

From the output of the risk assessment, we then build a performance engineering approach, with a focus on the areas of highest risk first. The strategy will usually include some performance testing (but not always), and for this, we answer questions such as: 

  • Who will build and run the performance tests? 

  • Who will analyze, interpret, and report the results? How will this be presented? How often will we report? 

  • What tools will we use? 

  • Where will we run the performance tests from? (e.g., the platform) 

  • Which environment will we test in? How does it differ from production? 

  • What test data is required, and how will we obtain it? 

  • Is this a one-off engagement, or do we want to build something to continually use in the future? 

  • What monitoring, logging, and tracking are available, and is it suitable for our needs? Do we need to set up any additional monitoring? 

  • Where possible, an indication of timelines and effort (sometimes I would provide multiple options so the client could pick which scope they wanted me to cover off) 

That’s not an exhaustive list of things to include in a strategy, but you get the idea. We can also include recommendations that do not involve performance testing but address performance risk in another way.  

This could be recommendations around monitoring/observability or synthetics, or even something as practical as a phased roll-out to customers to avoid a “big bang” of load on the system.

Once we have a strategy, it’s time to implement it. Building the load test scripts is perhaps a surprisingly small part of the work. Other things such as getting access to or setting up monitoring, getting a usable environment, or sourcing test data can often be much more time-consuming than you expect. 

Once the preparation work is done, that’s when test execution can begin. The basic process is to run a test, analyze the results, and then feed the key message back to stakeholders to find out whether performance is acceptable or not.  

Depending on the outcome of those conversations, more performance tests may need to be run. We end up iterating through this test, analyze, report, and discuss cycle as many times as required until stakeholders are happy with the performance.

Not all performance engineers do this, but I like to get involved in the investigation and analysis of performance issues. In standard testing terms, many performance tests are black box – but I like to go as white box as possible.  

A lot of the time, there is a transaction that is too slow, and we need to find out where the time is being taken. The basic process is divide and conquer. Take timings at different points in the solution and narrow down where the slowness is.  

To do this, we grab data from logs, monitoring, and tracing as well as potentially drawing on component performance test results. One of my favourite parts of this process is visualizing data to understand the behaviour of a software system.  

I use a tool called Tableau almost every day in my work, and an open-source alternative is the programming language R with the RStudio IDE.
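To make that divide-and-conquer process concrete, here is a minimal Python sketch. The checkpoint names, timestamps, and log format are invented for illustration; in practice they would be pulled from the logs, monitoring, and tracing described above.

```python
# Divide and conquer: take timings at different points in the solution
# and narrow down where the time is going. All timestamps are invented.
from datetime import datetime

checkpoints = [
    ("web server received request", "2024-05-01 10:15:02.010"),
    ("app server received request", "2024-05-01 10:15:02.045"),
    ("database query started",      "2024-05-01 10:15:02.050"),
    ("database query finished",     "2024-05-01 10:15:05.900"),
    ("response sent to client",     "2024-05-01 10:15:05.950"),
]

fmt = "%Y-%m-%d %H:%M:%S.%f"
times = [(label, datetime.strptime(ts, fmt)) for label, ts in checkpoints]

# Print the gap between consecutive checkpoints; the biggest gap is
# where to dig next (here, the database query at ~3.85s).
for (prev_label, prev_t), (label, t) in zip(times, times[1:]):
    print(f"{prev_label} -> {label}: {(t - prev_t).total_seconds():.3f}s")
```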

As I said, the above is a traditional performance testing process. There is plenty more that performance engineers do. Something that is very popular these days is to add a layer of process automation on top of your performance testing and monitoring.  

There is also client-side (usually web or mobile) performance profiling, which is a whole other discipline, as is setting up monitoring and observability platforms.

Rajeev Verma: What’s your secret to being productive at work?

Stephen Townshend: This is another interesting question. For the most part, I care about the work, and I care about the outcome. That in itself has always driven me to push myself to continually improve and reflect on my work so I can become a better performance engineer. 

Talking specifically about productivity, something I work hard to avoid is context switching. I remember one time as a consultant when I was working on seven different projects for several different customers at the same time and having to switch between them during the day.

Because of the overhead of switching between them, I barely got anything done, and my stress levels went through the roof. 

It’s not always possible to focus on one thing at a time, but I always try and plan my days like that now. Something I do find helpful is having someone else handle new requests for work because I can’t always see objectively when I’m in the work, and I tend to say ‘yes’ no matter what. Sometimes it needs someone to think about the bigger picture and which work is strategically more important to the organization. 

Rajeev Verma: What mistakes should one avoid while doing performance testing?

Stephen Townshend: Most performance testers only look at averages or percentiles when analyzing response times. We must stop doing this, as it hides the real system behavior under the covers.

I’ve spoken about this multiple times at online events and blogged about it, but it’s the duty of every performance engineer to start looking at raw data – that is, a plot of every individual response time measured during a test (or via logging).  

The way we look at raw data is with scatterplots, which most (if not all) load testing tools don’t support. That is why I am a strong believer that performance engineers need a standalone visualization and analysis tool to do their work effectively.

As I said previously, I use Tableau, and I know other performance engineers who use RStudio. A good performance engineer dabbles in data science.
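As a rough illustration of the point about raw data, here is a small sketch with synthetic data. Matplotlib is used purely as a stand-in for Tableau or RStudio, and the two response-time “bands” are invented to show the kind of behaviour an average smears away.

```python
# Synthetic demo: raw response times form two bands (e.g., cache hit
# vs. cache miss), and the average matches neither of them.
import random
import matplotlib.pyplot as plt

random.seed(42)
timestamps, response_times = [], []
for second in range(2000):
    timestamps.append(second)  # seconds into the test
    base = 3.0 if random.random() < 0.1 else 0.4  # 10% take a slow path
    response_times.append(base + random.uniform(0.0, 0.2))

mean = sum(response_times) / len(response_times)
print(f"Average: {mean:.2f}s, a number that matches neither band")

plt.scatter(timestamps, response_times, s=2, alpha=0.4)
plt.xlabel("Time into test (s)")
plt.ylabel("Response time (s)")
plt.title("Raw response times: two bands the average hides")
plt.show()
```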

Only performance test when there is a risk to the business or an opportunity to improve business outcomes. I commonly see huge, complicated load test suites that cover way too much functionality.  

They end up taking forever to build and requiring an enormous effort to keep maintained and working. For example, if a transaction occurs less than 100 times an hour, why are you load testing it? What is the likelihood of concurrency?

That’s why the performance risk assessment process is so valuable: it gives you a clear sense of what actually matters to the business so you can make better decisions about scope.
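Stephen’s rhetorical question about concurrency can be made concrete with Little’s Law (average concurrency = arrival rate × average response time). He doesn’t name it in the interview, but it is the standard back-of-envelope tool here; the response time below is an assumed figure for illustration.

```python
# Little's Law: average concurrency = arrival rate * average response time.
arrivals_per_hour = 100       # the low-volume transaction in question
avg_response_time_s = 2.0     # assumed response time for illustration

avg_concurrency = (arrivals_per_hour / 3600) * avg_response_time_s
print(f"Average concurrency: {avg_concurrency:.3f}")
# ~0.056: most of the time nobody is running this transaction at all,
# so dedicated load scripting for it buys very little.
```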

Rajeev Verma: Any suggestions for new testers?

Stephen Townshend: If you want to get into performance testing, my advice would be to find a mentor. There is no qualification you can do or a great training guide online that covers how to think like a performance tester or engineer. 

When I train new graduates, I focus on tools to start with. If you try to talk about the big strategic ideas too soon, they won’t make sense. The best thing you can do to start with is to learn a couple of load testing tools, learn some scripting languages to build your own utilities, and learn a data analysis and visualization tool. Once you have those basic building blocks, you can start thinking about performance more holistically.
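To give a flavour of those building blocks, here is a toy closed-loop load generator using only the Python standard library. It is a sketch of the concepts (virtual users, iterations, think time, capturing response times), not a substitute for the tools Stephen names, and the endpoint URL is a placeholder.

```python
# Toy load generator: a few "virtual users" hitting a placeholder URL.
import time
import threading
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # placeholder endpoint
VIRTUAL_USERS = 5
ITERATIONS_PER_USER = 10

results = []
lock = threading.Lock()

def virtual_user():
    for _ in range(ITERATIONS_PER_USER):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
            ok = True
        except Exception:
            ok = False
        elapsed = time.perf_counter() - start
        with lock:
            results.append((ok, elapsed))
        time.sleep(1.0)  # think time between iterations

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

ok_times = [elapsed for ok, elapsed in results if ok]
print(f"{len(ok_times)}/{len(results)} requests succeeded")
if ok_times:
    print(f"min/max response time: {min(ok_times):.3f}s / {max(ok_times):.3f}s")
```

Note that the sketch keeps every raw measurement in `results` rather than only summary statistics; that is exactly the raw data Stephen recommends plotting.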

Rajeev Verma: Are you planning to write a book on performance testing?

Stephen Townshend: Not at this stage. I have thought about it and even started writing one in the past, but with a young family and a busy life, it’s difficult to justify the time it would take. I also worry about writing a book when things change so quickly in our industry.

Instead of writing books, I have shared ideas in other ways: 

  • I frequently speak at online events such as Neotys PAC and TestGuild/PerfGuild. Most of my talks are about going back to basic concepts or challenging an idea which I think is no longer serving our industry. 

  • I’ve written blogs about different topics, which I post on LinkedIn. 

  • I have a YouTube channel called Performance Time, which is a little hard to find, where I’ve created a series of beginner’s guide tutorials for those just getting started. The quality isn’t great (I was just starting to learn about video and audio production), but I’ve been told that others have found them useful. 

  • Recently I have started a podcast, also called Performance Time, where I speak about the human side of performance engineering. I have had some pretty stressful moments in my career, and I wanted to explore that – the challenges of working as a performance engineer, and how we can make our jobs more enjoyable. I also experiment a bit with storytelling, discuss important performance engineering concepts that I think need attention, and interview well-known performance engineers in the industry to hear about their journeys. If that sounds like something you’d like to hear, you can find it on most popular podcast platforms. 

Rajeev Verma: If anybody wants to get in touch with you, what’s the best way to do that?

Stephen Townshend: The only platform I am regularly active on is LinkedIn, just search for my name. I accept any connection request from anyone who works in Technology. I post fairly regularly, including posting the latest podcast episodes weekly. https://www.linkedin.com/in/stephentownshend/ 

We at BugRaptors always try to bring you experts from the QA industry to help you explore different aspects and segments of QA processes. We believe this interview between Rajeev and Stephen has helped you understand new concepts in performance testing.

For more information, queries, or QA service-related concerns, feel free to connect with our experts at BugRaptors.


Rajeev Verma

Rajeev works as a Project Manager at BugRaptors. He works on web applications, network vulnerability assessments, mobile applications, and secure network architecture reviews, and has a proven track record of successfully leading and mentoring cross-functional teams in dynamic environments. He works with development teams to improve initial release quality, the quality of production releases, and agile development practices. He is passionate about leveraging technology to elevate QA practices and contribute to the success of innovative projects.

