Exploring The New Stack And The Need For Scaling Automation

Over the years, Quality Assurance and software testing have become deeply embedded in the veins of development. More importantly, present-day solutions demand a balance of QAOps and DevOps.

In particular, the introduction of advanced tech solutions into testing best practices has made it necessary to shift focus toward innovating the stack and integrating automation. Since BugRaptors is striving to be the top software testing company in the world, we frequently interact with people from the QA industry who have achieved extraordinary things.

This time, our in-house expert, Rajeev Verma, interviewed another QA pro, Juan Negrier, who works at eDesk, Dublin. Juan is a software engineer who is motivated and driven to achieve the highest benchmarks in QA and software testing. Currently working to cultivate productivity in the software construction process through QA, Juan has excellent knowledge of GNU/Linux and macOS.

Besides, Juan has tremendous experience with test automation tools like Appium and Selenium WebDriver, along with the Gherkin language, the use of Docker for distributed testing, and the basics of penetration testing using techniques like XSS. He holds great expertise in handling the most sophisticated and complex problems across different realms and test environments.

At present, Juan is working on reinforcing quality assurance with automation in order to overcome drudgery and improve the quality benchmarks for products being launched into the market.

During the conversation, Rajeev Verma and Juan Negrier discussed Juan's experience of moving to a new stack of servers, along with the need for changing QA practices. So, without taking much of your time, let us jump straight into the interview for a detailed insight into Juan's approach to Quality Assurance.

Rajeev Verma: What’s an example of a mistake or failure you experienced, and what did you learn from it? 

Juan Negrier: There was a time when the company I was working for wanted to do a massive migration to a new data center. This involved running the software on a new stack of servers with a different architecture (Dell PowerEdge to HP Blade).

So, in order to ensure a smooth transition, we created a new staging environment on this new stack, and a new production environment as well. We started to test everything on this new staging environment, as the new production environment was still waiting for the database migration to be completed.

Finally, together with management and the dev team, we defined a D-Day to start using the new stack. It was a massive challenge, as our product was the 4th most visited website in Chile, with millions of views each day, and to make things worse, the contract with the old data center ended on that same D-Day, so there was no room for failure.

As you can imagine, the QA and Architecture teams were in charge of giving the blessing to perform this task and re-route the traffic to the new stack.

So, as part of the testing performed, we checked that the communication between the services was okay, that the communication queues were working, and that emails, static components, and other things were in order. Several issues were found and fixed in time.

Another test that was done was to check the load on the new stack, for which we just used JMeter over the bandwidth in our office to check the response of the new stack. I must say, though, that no one on my team was knowledgeable about this tool, so we interpreted the results as if they were good enough for us, discarding the idea of looking for a third party to check this for us.

And so, some days before D-Day, we were asked the main question, the one that sends a chill down your spine: "Are we ready to proceed?"

Since we had already fixed a lot of issues, the JMeter readings were good, and our tests indicated that everything was fine, we, and I in particular, made the decision to proceed... and so, the disaster started.

I met with the release team at 2 AM to start the process; this team was me as QA Engineer, the SysAdmin, the Architect, and a Product Owner. The website was taken down, and we rerouted our traffic to the new stack. Everything was really smooth at first; all was working as expected and no issues were found.

But then, when the load started to increase... the website became slower and slower, to the point where it froze, and even when tailing the logs, they scrolled past like a waterfall in slow motion...

The SysAdmin and the Architect tried different things to tackle the issue, while I tested whether any of these attempts was working, and nope, nothing worked. The night became day, we didn't sleep that day, and we had trouble sleeping over the following days, as the issue took three days to solve... it was a problem with the internal networking of the servers, something that only the provider was able to tackle.

So, what did I learn from this? I learned to be more humble about my knowledge and to admit when the expertise of a third party is required. If we had sought assistance from a service specialized in load testing, we would have been able to spot the issue beforehand.

We didn't see the issue earlier because the bandwidth used by our local JMeter run was not high enough to reproduce it. As I mentioned before, it was the 4th most visited website in the country, and assuming that our load test was good enough was a really naive decision.

Sometimes, paying for a service like that is better than having the 4th most popular website in your country down for three days and losing revenue during that time.
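(For illustration only: the lesson above is about the gap between an office-bandwidth JMeter run and real-world traffic. A distributed load test, which spreads the generated load across several worker machines or a cloud provider, might be sketched roughly as below. This sketch uses Locust rather than the JMeter setup described in the story, and the host and endpoints are hypothetical placeholders.)

```python
# locustfile.py -- a minimal sketch of a distributed load test (illustrative only).
# Locust stands in for the JMeter plan from the story; host and paths are placeholders.
from locust import HttpUser, between, task


class SiteVisitor(HttpUser):
    """Simulates one visitor browsing the site with a short think time."""
    wait_time = between(1, 3)

    @task(3)
    def browse_home(self):
        self.client.get("/")

    @task(1)
    def view_listing(self):
        self.client.get("/listings/123")  # placeholder path


# Running in master/worker mode lets the load come from many machines instead of
# a single office connection, e.g.:
#   locust -f locustfile.py --master --host https://staging.example.com
#   locust -f locustfile.py --worker --master-host <master-ip>
```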

Rajeev Verma: What do you think about changing QA practices? 

Juan Negrier: I believe that QA practices need to evolve constantly and adjust to your organization. This way, we ensure that these practices become a core value of the whole team's work and something natural within the workflow.

In my experience, having practices like retrospectives and standups helps a lot to get valuable feedback about how the team feels about the way we are ensuring the quality of our products.

In a way, it's about feeling connected to these practices and seeing the value that comes from them.

If at any point a practice becomes a burden rather than something beneficial, something of questionable value, then it's always a good idea to be open to discussing it and willing to offer alternatives that adapt better to each case.

In my humble opinion, it's a balance. Be too rigorous about the practices, dictating what to do, and people, being human, will find a way to avoid them, which is not a good thing at all and impacts quality.

On the other hand, be too flexible, and you run the risk of not having enough coverage, of not knowing for sure the number of bugs and issues in the system, and you could end up with software full of small issues that never get fixed, and so your technical debt goes up!

Spend some time with your team, listen to them, and involve them in finding a good balance, so that you end up with bug-free software and practices that are followed because they make sense, not because they are enforced.

Rajeev Verma: How do you scale automation? 

Juan Negrier: Normally, I first assess our current tests, checking which types of tests we are running, how many there are, and what the coverage is.

Once I have this information, I always compare it to the Testing Pyramid, so that it's easy to detect where we should focus and define what to do.

Something really important is to always think about the future of these tests:

  • How will we maintain them?

  • How long are they going to take to run?

  • Is the structure easy to expand?

  • Are there any challenges with the fixtures?

  

There is never an absolute answer to these questions, hence the reason we have different frameworks (Robot Framework, WebdriverIO, TestCafe, Selenium WebDriver, Playwright, Puppeteer, etc.), different ways to run them (locally, Docker, Kubernetes, ECS, etc.), different structures (Page Object Model, Gherkin language, etc.), and even different ways to deal with the fixtures (loading snapshots, SQL statements, API calls, direct interaction, and so on).

Scaling is always a challenge, as it requires a deep analysis of the pros and cons of each option. I always recommend choosing a language your team feels comfortable with, and then picking the framework and structure based on the flows in your organization. For instance, using the Gherkin language could make a lot of sense if your team uses BDD regularly, and if they like Python, then Robot Framework sounds like a good option.
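(As an illustration of the "structure" part of that choice, the following is a minimal Page Object Model sketch in Python with Selenium WebDriver. The URL, locators, and credentials are hypothetical placeholders rather than anything from a real suite.)

```python
# A minimal Page Object Model sketch (illustrative only; placeholder page and locators).
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Encapsulates the locators and actions of a hypothetical login page."""

    URL = "https://example.com/login"  # placeholder URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
        return self


def test_login():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("demo-user", "demo-pass")
        assert "dashboard" in driver.current_url  # placeholder assertion
    finally:
        driver.quit()
```

The point of the pattern is that tests interact with pages through methods like `log_in`, so when a locator changes, only the page class needs updating, which is what keeps the structure easy to expand.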

Then, for the fixtures and their deployment, seek advice from developers and architects to define the best way to scale and manage data properly.
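(Again purely as an illustration, one of the fixture approaches mentioned above, seeding data through API calls, could look roughly like this pytest sketch. The endpoint, payload, and cleanup call are hypothetical.)

```python
# A sketch of managing test data through API calls instead of SQL or snapshots.
# The base URL, resource, and payload are hypothetical placeholders.
import pytest
import requests

API = "https://staging.example.com/api"  # placeholder base URL


@pytest.fixture
def seeded_order():
    """Create a test order before the test and clean it up afterwards."""
    resp = requests.post(f"{API}/orders", json={"sku": "DEMO-1", "qty": 1}, timeout=10)
    resp.raise_for_status()
    order = resp.json()
    yield order
    requests.delete(f"{API}/orders/{order['id']}", timeout=10)


def test_order_is_visible(seeded_order):
    resp = requests.get(f"{API}/orders/{seeded_order['id']}", timeout=10)
    assert resp.status_code == 200
```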

Rajeev Verma: How do you keep up to date with what’s relevant to your role? 

Juan Negrier: I keep in contact with colleagues about new technologies, not only about testing but about everything, which is quite useful, as there is always something that can be applied to improve our testing stack.

Besides that, I read a lot about new processes and CI/CD systems, and I check pages like https://opensource.com/ to read about new technology.

 Rajeev Verma: What are you excited to learn more about next year? 

Juan Negrier: I'm really excited about the future of Playwright and to see whether it evolves into a challenger to Selenium for web automation.

Something else that really catches my attention is the evolution of artificial intelligence models to run automated tests; that's an area with massive potential, and it's always a good idea to keep an eye on it.

 Rajeev Verma: What mistakes should one avoid while performing automation testing?  

Juan Negrier: I think it's important to always think ahead, in the sense that end-to-end tests are expensive, as they require more time to maintain and more resources to run.

Having considered that, it's good to assess the feature you're covering and see which type of test is the right one for it; maybe unit, integration, visual regression, or contract tests are better and cheaper options in the long term.

There is always the temptation to cover everything with end-to-end tests, as they are the ultimate tool for finding bugs in a system, but I would always take into account the future effort required to maintain them.
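(To make the cost difference concrete, here is a small sketch of the same idea: a hypothetical business rule checked at the unit level with pytest runs in milliseconds, whereas the equivalent end-to-end check would mean driving a whole browser flow just to verify one number. The discount rule below is invented for illustration.)

```python
# Illustrative only: a hypothetical business rule covered at the unit level.
import pytest


def apply_discount(total: float) -> float:
    """Hypothetical rule: orders above 100 get 10% off."""
    return total * 0.9 if total > 100 else total


# Unit level: runs in milliseconds and is trivial to maintain.
@pytest.mark.parametrize("total,expected", [(50, 50), (100, 100), (200, 180)])
def test_apply_discount(total, expected):
    assert apply_discount(total) == pytest.approx(expected)

# The equivalent end-to-end test would drive a browser through the whole checkout
# flow just to read this one number -- far slower to run and costlier to maintain.
```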

 Rajeev Verma: What’s your main piece of advice for testers in 2022? 

Juan Negrier: My main advice is to keep assessing yourself, always be open to learning something new, and stay focused on finding bugs and providing great value to your organization. It's easy to get too focused on the activity of writing tests and miss other weak points.

 Rajeev Verma: What is the best way to get in touch with you? 

Juan Negrier: I'm always happy to connect on LinkedIn and GitHub.

We hope the above interview has helped you gain some great insights while giving you greater exposure to the QA industry, the changing environment, and the technologies.

For more such interesting updates, stay connected with us.  

Looking for an expert quality assurance company to partner with as you make your way toward progressive development? Feel free to reach our experts at info@bugraptors.com.


Rajeev Verma

Rajeev works as a Project Manager at BugRaptors. He works on several web applications, network vulnerability assessments, mobile applications, and secure network architecture reviews. He has a proven track record of successfully leading and mentoring cross-functional teams in dynamic environments, and he works with all of the development teams to improve initial release quality, the quality of production releases, and agile development practices. He is passionate about leveraging technology to elevate QA practices and contribute to the success of innovative projects.

