
Is AI Revolutionising Software Quality? The Emerging Impact on Processes and People

There is no shortage of both hype and fear-mongering about the impact of generative AI on the ways we work, especially within the world of software engineering. However, as we approach the second anniversary of ChatGPT's launch, it's worth examining whether we're starting to work out exactly what practical benefits are being delivered, and whether there are new risks and issues that we need to contend with.

GitHub reports that users of its Copilot AI assistant code 55% faster, and that 46% of their code is AI-generated. Of course, it's in GitHub's interests to report positively on the impact of its own tool. The question is whether all that accelerated coding actually delivers value. While some companies do report significant increases in overall dev team velocity, a number of studies are examining whether AI coding assistants produce code that is less secure, less reusable and less maintainable. As AI becomes more integrated into not just code generation but also the software testing process, let's consider which aspects are proving valuable.

Increased Automation and Efficiency

Automated generation of unit tests is one of the first areas where generative AI helps. Despite plenty of evidence that a test-driven approach delivers better code, as a tester it's all too common to encounter systems that have little if any automated unit testing in place, so anything that shifts the dial here is welcome. The same generative AI tools can help translate user stories into gherkin-style acceptance tests and generate the test steps that need to sit behind them. However, automating the generation of unit tests and acceptance tests may not be as helpful as you'd think if you don't stop to really consider what test cases need to be covered. The real benefit of these generative tools is reducing friction: if some of the drudgery of setting up the initial scaffolding and populating the first few really obvious tests can be automated, then it becomes easier to stop and do some thoughtful test design.
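To make the "scaffolding" point concrete, here is a sketch of the kind of unit-test starting point an assistant might generate. The function and test cases are entirely hypothetical, not from any real project or tool output; the value is that the obvious cases are already in place, freeing the tester to think about the cases that aren't.

```python
# Hypothetical illustration: the kind of unit-test scaffolding a generative
# assistant might produce for a simple discount function. A thoughtful tester
# would then add the non-obvious cases (boundary values, rounding, currency).

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_no_discount():
    assert apply_discount(100.0, 0) == 100.0

def test_half_discount():
    assert apply_discount(80.0, 50) == 40.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The generated tests here cover only the happy path and one obvious error; deciding which cases are missing is exactly the test-design work that still belongs to a human.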

Test automation, particularly at the GUI level, is notoriously fragile. We're seeing the emergence of AI-driven tools that can detect UI changes in web apps and even some desktop applications, and automatically update object locators. These tools reduce the need for human intervention and allow testers to focus on more complex challenges, but it may be too soon to expect truly resilient and self-healing test automation.
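The core idea behind such "self-healing" locators can be sketched in a few lines: when the primary locator fails, fall back to alternative attributes and record which one matched so the suite can be updated. This is a simplified illustration with hypothetical locator strings and a dictionary standing in for the DOM, not the mechanism of any specific commercial tool.

```python
# Simplified sketch of self-healing locator fallback. Real tools use ML-based
# similarity matching against the rendered page; here a dict stands in for the
# DOM and locators are plain strings.

def find_element(dom: dict, locators: list[str]):
    """Try each candidate locator in order; return (element, locator used)."""
    for locator in locators:
        element = dom.get(locator)
        if element is not None:
            return element, locator
    raise LookupError(f"no element matched any of {locators}")

# Simulated page snapshot after a UI change renamed the button's id,
# breaking the primary "id=submitButton" locator.
dom_after_change = {"data-testid=submit-btn": "<button>Submit</button>"}

element, used = find_element(
    dom_after_change,
    ["id=submitButton", "data-testid=submit-btn", "text=Submit"],
)
# The test still finds the button via the fallback locator, and "used" tells
# us which locator should replace the broken one in the suite.
```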

The Lure of Natural Language

Tools enabling the generation of test automation scripts directly from natural language instructions are making test script creation accessible to non-technical testers, and can be seen as part of the general low-code / no-code trend. While these approaches offer scale by enabling a wider range of team members to participate in test automation, they should be seen primarily as easy, and ultimately disposable, accelerators for manual test execution, rather than pathways to developing test automation suites that are maintainable for the long term.

The ability of large language models to perform tasks such as summarising complex requirements documents, generating potential risks for a specific domain or suggesting a range of test cases might, at first glance, threaten the role of the manual tester. In reality, such tools need skilful piloting to achieve more than bland, generic results. Targeted prompts to an LLM with access to project-specific documentation can certainly generate useful starting points which accelerate the process of manual test design. However, like any generative content, it needs to be used thoughtfully: just because the content is grammatically correct doesn't mean you can simply cut and paste.

Enhanced Test Data Management and Governance

A number of AI-enabled tools are emerging that can automatically generate realistic test data, reducing the burden on testers of manually managing data. The use of production data in test environments is a security risk that many organisations are no longer willing to take, although synthetic test data, even when closely modelled on production data, will still not be sufficient for some testing tasks. Deidentifying production data is much more feasible with current AI-enabled tools than in the past; however, such tools always need to be carefully tested in context to confirm that the target data is being deidentified effectively. In KJR's experience, there can be significant drops in performance when deidentification tools trained primarily on one domain, or on data from one geography, are applied to another.
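As a minimal sketch of what deidentification involves, consider replacing identifying fields with stable pseudonyms while preserving the data's shape so downstream validation still passes. This is a deliberately simple rule-based example with made-up field names; the AI-enabled tools discussed above automate the much harder problem of finding identifying data in the first place, typically via ML-based entity recognition.

```python
# Minimal rule-based deidentification sketch: hash identifying values to
# stable tokens, preserving record structure. Field names are illustrative;
# real tools must first *detect* which fields and free-text spans identify
# a person, which is where context-specific testing matters.

import hashlib
import re

def pseudonymise(value: str) -> str:
    """Replace a value with a stable, irreversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:10]

def deidentify(record: dict) -> dict:
    out = dict(record)
    out["name"] = pseudonymise(record["name"])
    # Keep the email's domain and shape so format validation still passes.
    out["email"] = re.sub(r"^[^@]+", pseudonymise(record["email"]), record["email"])
    return out

clean = deidentify({"name": "Jane Citizen", "email": "jane@example.com"})
```

Because the pseudonyms are deterministic, the same person maps to the same token across records, which keeps referential integrity intact for testing joins and lookups.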

Shifting Roles in Software Testing

As AI becomes embedded in routine software development tasks, the role of the tester will shift from manual execution to oversight, risk management, and AI governance. Testers will need to work closely with AI developers and vendors, data scientists, and DevOps teams to ensure that AI tools are implemented effectively and ethically.

AI is poised to bring greater automation, efficiency, and intelligence to software testing. However, this transformation also means that testers will need to evolve alongside the technology. Upskilling in AI tools, data governance and Responsible AI principles will be critical for testers who want to remain relevant in the industry. Testers who embrace this change will find themselves at the forefront of a new, AI-driven era of software quality assurance.