Testing in financial services – the big shift left

By Darren Stocks | July 12, 2017 | Categories: Blog

In the last few years, testing in financial services has most definitely shifted left, as banks advocate earlier and more frequent testing in the software development lifecycle. Why has this happened? The drive to shift testing earlier in the development cycle comes predominantly from the fear of failure – tech failures have dogged big retail banks in the last few years, with customers unable to use ATMs, pay for goods at retail outlets, or log in to digital banking for days on end.

Much of this has been well documented and fingers have been pointed at problems with legacy systems. But it’s also well known that legacy architecture is immensely costly to replace. The architecture of banks is set to become more complex with the advent of open banking – next year the PSD2 regulation comes into effect, meaning banks have to open their customer and transactional data to third parties capable of offering services that compete directly with the banks. As a result, banks’ systems are under increasing pressure to perform.

As banks begin to understand the difficult position they are in, appreciation of the importance of performance testing and automation has grown, along with a burgeoning acceptance that faster, better-quality technology deployment is vital if they are to stay competitive. So what are the factors influencing testing in financial services?

  • Agile has happened: in the last ten years, agile development methodologies have become de rigueur, as banks have moved away from waterfall development in favour of more iterative development. This has emphasised the need for more integrated development and testing, and for testing to occur much earlier in the development lifecycle than has traditionally happened.
  • DevOps: the rise of DevOps culture, in a similar way to agile, has also highlighted how important testing is – an ongoing part of the development process, not a tick-box exercise that only happens just before an app goes into production.
  • Failures in budgeting: it’s fair to say that operational acceptance testing (OAT) has been grossly undervalued and under-resourced in the past. What tends to happen is that all non-functional tests get lumped together in OAT – performance testing and disaster recovery fall within this area and usually most of the budget gets spent on disaster recovery, meaning performance testing gets left behind. This can create a false impression that OAT has been taken care of when in fact areas of testing within it have been woefully under-resourced.
  • Automation: automation is playing an increasingly vital role in testing, especially in automating repeatable tests. There is an expectation now that tests should be run early and repeatedly as we move from left to right through the development lifecycle, the furthest right being production. Banks are expecting tests to be automated and repeatable.
  • Blurring of the lines between tester and developer: traditionally, there has been an element of friction between the developer and the tester – the developer creates the glitches and the tester picks them up. Agile has changed all that and has started to change the profiles of these roles. Both are now responsible for testing.
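The automation point above can be made concrete with a small sketch: an automated, repeatable performance check that could run on every build, long before production. This is purely illustrative – the response times, the 500 ms SLA threshold and the `within_sla` function are hypothetical placeholders, not figures or code from any real banking system.

```python
# Minimal sketch of an automated, repeatable performance check.
# All numbers here are illustrative placeholders, not real SLA figures.

SLA_MS = 500  # hypothetical service-level target for a payment request


def within_sla(response_times_ms, sla_ms=SLA_MS, percentile=0.95):
    """Return True if the given percentile of response times meets the SLA.

    Uses a simple nearest-rank percentile, so no external libraries
    are needed and the check can run in any CI environment.
    """
    ordered = sorted(response_times_ms)
    # Index of the percentile sample (clamped to the last element).
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[idx] <= sla_ms


# Run repeatedly (e.g. on every commit) so regressions surface early:
samples = [120, 180, 210, 250, 300, 320, 410, 450, 480, 900]
print(within_sla(samples))  # False: the 900 ms outlier breaches the 95th-percentile SLA
```

Because the check is a plain function with a pass/fail result, it can be wired into any test runner or pipeline stage and re-run identically at each step of the lifecycle – which is exactly the "automated and repeatable" expectation described above.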

It’s good news that testing is getting a better seat at the table than it has in the past and that large organisations are paying it more attention. Not before time – the value testing brings is significant. With increased collaboration between teams, better integration and improved communication, the shift left is bound to be a permanent one.