The challenges facing the education sector and the children it serves are immense. As a former school governing body member at a Model C school in Tshwane, I saw first-hand how difficult it is for many children to participate meaningfully in school. When I consider the additional complications faced by rural children just to get to school, the challenge can feel overwhelming.
Safe, reliable transport for school children
In the Eastern Cape, the Department of Transport has a special unit dedicated to getting rural children to school. Working closely with their colleagues at the Department of Education, they contract transport operators, monitor drivers, coordinate with schools, and manage budgets to ensure that young scholars get an opportunity to learn. Since June 2022, Mint and Microsoft have been engaged in digitizing these processes in the Sgelezel ‘uTshintsho Project.
The project has visibility throughout the Eastern Cape Provincial government and up to National Treasury. It’s no wonder that we are dedicating so much effort to ensuring success at every step of the delivery.
Testing. Testing. Testing
Our testing regime is intense. Over the course of six two-week sprints, we expect to write over 500 functional test cases for the Dynamics 365 application and its associated Canvas application. We follow that with a two-week internal testing sprint, and then two two-week UAT sprints.
Testing isn’t a magic bullet. Without a dedicated delivery team, no project will be successful. Still, isn’t it interesting how many projects have hard-working teams and experienced testers, yet can’t seem to deliver successfully? There are numerous reasons this can happen, but a poorly structured testing regime is an important potential culprit to consider.
Common testing mistakes are easy to prevent with a little bit of planning:
1. Timely, Targeted Testing
One common mistake is to emphasize testing only at the end and only by the testers. You might think that this is only relevant for waterfall projects, but agile teams often make the same error. For a project to be successful, developers need to test as they work, putting their stamp of quality assurance on the delivery.
For Sgelezel ‘uTshintsho, we write tests in Azure DevOps as we build features. When the developers say they are done at the end of a sprint, we test those features thoroughly in the following sprint. This helps us to identify issues early and fix them, before they compound, often even before the next release.
We also stage which tests we run, and when. Unit and functional tests are best run by the delivery team. They have context and can quickly identify problems and solutions. By the time the customer is in User Acceptance Testing (UAT), we know that functions work as designed. UAT focuses on technical users confirming that the processes essential to the customer’s organization work. Any fixes required are usually small ones that smooth gaps between functions. This builds the customer’s confidence because they know that the solution they are signing off on delivers value to their business stakeholders.
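To make "developers test as they work" concrete, here is a minimal sketch using MSTest. The RouteCostCalculator class, its rate, and the figures are illustrative stand-ins, not the project's actual entities or business rules; the point is that the developer encodes the expected behaviour as a repeatable check while building the feature.

```csharp
// A minimal MSTest sketch of the kind of test a developer writes alongside a feature.
// RouteCostCalculator and its figures are hypothetical illustrations, not the project's
// actual entities, rates, or business rules.
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class RouteCostCalculator
{
    private readonly decimal _ratePerKilometre;

    public RouteCostCalculator(decimal ratePerKilometre) => _ratePerKilometre = ratePerKilometre;

    // Two trips per school day: to school and home again.
    public decimal MonthlyCost(decimal routeKilometres, int schoolDays) =>
        routeKilometres * 2 * schoolDays * _ratePerKilometre;
}

[TestClass]
public class RouteCostCalculatorTests
{
    [TestMethod]
    public void MonthlyCost_UsesContractedRatePerKilometre()
    {
        var calculator = new RouteCostCalculator(ratePerKilometre: 14.50m);

        var monthlyCost = calculator.MonthlyCost(routeKilometres: 18, schoolDays: 20);

        // 18 km x 2 trips x 20 days x R14.50
        Assert.AreEqual(10_440.00m, monthlyCost);
    }
}
```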
2. Scaffolding Builds Success
Another common mistake is to write tests without thinking about how tests work together and in context. Businesses don’t work in functions; they work in processes. Without understanding how the functional tests work together, testing can never drive quality business outcomes.
Test scaffolding solves this problem with a simple formula: correct data, in a working function, accessed by the right user, at the right time.
We’ve adapted the concept of unit tests to apply to data flows so that they meet agreed data quality standards and prevent dirty data from corrupting the solution. We build functional tests on top, describing the experience of a single user role executing a single function with that reliable data. This embeds security concerns in the functional testing, so that the right people can execute the essential function. Process tests string together functional tests, ensuring that end-to-end processes are appropriately serviced by the solution.
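To illustrate the layering, here is a simplified sketch in MSTest. The learner, route, and role names, the helper methods, and the qualifying-distance rule are hypothetical stand-ins for the project's real entities and test utilities; what matters is the shape: a data-quality check underneath, single-role functional steps in the middle, and a process test stringing them together.

```csharp
// A simplified sketch of the scaffolding idea: a data-quality check underpins
// single-role functional steps, and a process test chains those steps end to end.
// All names, roles, and thresholds below are hypothetical illustrations.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ScholarTransportProcessTests
{
    // "Unit test for data": the flow may only proceed with data that meets the agreed standard.
    private static LearnerRecord LoadCleanLearner() =>
        new LearnerRecord { Name = "Test Learner", School = "Test Primary", DistanceKm = 7.5m };

    // Functional step: one role, one function, reliable data.
    private static RouteAssignment AssignLearnerToRoute(LearnerRecord learner, string role)
    {
        Assert.AreEqual("Transport Officer", role, "Only a Transport Officer may assign routes.");
        return new RouteAssignment { Learner = learner, Route = "Route 12" };
    }

    // Functional step: a different role approves the operator payment for the route.
    private static bool ApprovePayment(RouteAssignment assignment, string role)
    {
        Assert.AreEqual("Finance Approver", role, "Only a Finance Approver may approve payments.");
        return assignment.Route != null;
    }

    [TestMethod]
    public void LearnerTransportProcess_EndToEnd()
    {
        // Data scaffold: dirty data never reaches the functional steps
        // (the 5 km threshold is a hypothetical qualifying rule, not the real policy).
        var learner = LoadCleanLearner();
        Assert.IsTrue(learner.DistanceKm > 5m, "Learner does not meet the qualifying distance.");

        // Process test: chain the functional steps in the order the business runs them.
        var assignment = AssignLearnerToRoute(learner, "Transport Officer");
        var paid = ApprovePayment(assignment, "Finance Approver");

        Assert.IsTrue(paid);
    }

    private class LearnerRecord
    {
        public string Name { get; set; }
        public string School { get; set; }
        public decimal DistanceKm { get; set; }
    }

    private class RouteAssignment
    {
        public LearnerRecord Learner { get; set; }
        public string Route { get; set; }
    }
}
```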
3. Validating Automation
The desire to reduce human testing time drives businesses to automate testing. That’s the right move, if the automated tests produce valid results. The error is trusting that automation guarantees validity. The tools for test automation are effective in the right hands, but they can paper over severe functional defects that only surface when the customer’s users are testing, drastically reducing customer confidence in the solution. That’s why there’s no substitute for an insightful tester experiencing the solution first-hand.
For Sgelezel ‘uTshintsho, we’ve borrowed a pearl of wisdom from tailors: measure twice, cut once. We write the functional tests and execute them manually first, confirming that each test verifies the correct outcome. Then we automate using EasyRepro. Before retiring any manual test, we execute the automated and manual versions concurrently and compare the results; only once the automated result can be trusted does the test come out of the manual run.
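To give a sense of the automated side, here is a rough EasyRepro sketch that mirrors the steps of a manual functional test. The URL, credentials, app name, navigation labels, and field names are placeholders rather than the project's actual configuration, and the assertion simply stands in for whatever outcome the manual test verifies.

```csharp
// A hedged sketch of an EasyRepro (Microsoft.Dynamics365.UIAutomation) check that mirrors
// a manual functional test. All URLs, credentials, app/area names, and field names below
// are placeholders, not the project's real configuration.
using System;
using Microsoft.Dynamics365.UIAutomation.Api.UCI;
using Microsoft.Dynamics365.UIAutomation.Browser;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AutomatedFunctionalTests
{
    [TestMethod]
    public void CreateRecord_MatchesManualTestExpectation()
    {
        var client = new WebClient(new BrowserOptions
        {
            BrowserType = BrowserType.Chrome,
            PrivateMode = true
        });

        using (var xrmApp = new XrmApp(client))
        {
            // Sign in and open the model-driven app (placeholder URL, credentials, app name).
            xrmApp.OnlineLogin.Login(
                new Uri("https://yourorg.crm4.dynamics.com"),
                "tester@yourorg.example".ToSecureString(),
                "placeholder-password".ToSecureString());
            xrmApp.Navigation.OpenApp("Scholar Transport");

            // The same steps the manual tester performs: open the area, create a record, save it.
            xrmApp.Navigation.OpenSubArea("Transport", "Routes");
            xrmApp.CommandBar.ClickCommand("New");
            xrmApp.Entity.SetValue("name", "Automated smoke-test route");
            xrmApp.Entity.Save();

            // Assert the same outcome the manual test verifies before the manual run is retired.
            Assert.AreEqual("Automated smoke-test route", xrmApp.Entity.GetValue("name"));
        }
    }
}
```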
4. The Confidence Game
In testing software or delivering a child to school, the key is confidence. By planning your testing strategy to avoid these common pitfalls, you can ensure that UAT builds the confidence your stakeholders need to adopt the solution you’ve built. In the Eastern Cape, stakeholder confidence and user adoption mean the scholar transport budget is used more effectively, which will give more children the opportunity to learn with one less challenge to overcome.