When it comes to building great software, it is essential to have efficient software testing procedures.
Software testing checks whether actual results match expected results. Done well, it helps ensure the software is as close to bug-free, and therefore frustration-free, as possible for the user.
Software testing also identifies errors and unexpected outcomes. These bugs can be costly: in extreme cases, inadequate software testing has resulted in financial crashes, vehicle recalls and even deaths through the incorrect operation of machinery.
There are over 100 different software testing categories, each with defined outcomes, strategies and deliverables. The goal of software testing is to validate the Application Under Test (AUT) against its expected results.
For example, on a login screen, the expected outcome of clicking the ‘Next’ button after entering an email and password could be arriving at a dashboard. If nothing happens, the test fails. If you enter an incorrect password and still get logged in, the test also fails.
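That pass/fail logic can be sketched as a tiny automated check. Everything here is illustrative: `login` is a hypothetical stand-in for the real authentication routine, returning the name of the screen the user lands on.

```python
def login(email, password):
    """Hypothetical login routine: returns the screen the user ends up on."""
    if email == "user@example.com" and password == "correct-horse":
        return "dashboard"
    return "login"  # stay on the login screen

def test_valid_credentials_reach_dashboard():
    assert login("user@example.com", "correct-horse") == "dashboard"

def test_invalid_password_is_rejected():
    # If this assertion fails, a wrong password logged the user in - a test failure.
    assert login("user@example.com", "wrong") != "dashboard"

test_valid_credentials_reach_dashboard()
test_invalid_password_is_rejected()
print("login checks passed")
```

Both checks compare an actual result against an expected one; any mismatch raises immediately, which is the essence of a failing test.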
Software Testing Errors Caught During Development
Here in the Innovation Centre, we pursue rigorous testing of the software for each team. We naturally find general faults but have identified some severe errors too.
- Early software testing highlighted a significant bug that enabled one organisation to view, edit or delete another organisation's data.
- Members could delete other members' accounts even without administrator privileges.
- Administrators could downgrade themselves or delete their own account, leaving users without control of permissions or payments.
- A user could upload any file type of any size via an upload form. Users with malicious intentions could upload a huge file or script file with the intent of gaining access to sensitive data.
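A guard against the upload issue above can be sketched in a few lines. The allowed extensions and the 5 MB cap are assumptions chosen for illustration; real limits depend on the application.

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # assumption: permitted file types
MAX_BYTES = 5 * 1024 * 1024                    # assumption: 5 MB size cap

def validate_upload(filename, size_bytes):
    """Reject uploads with a disallowed extension or an excessive size."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    if size_bytes > MAX_BYTES:
        return False
    return True
```

A functionality test would then assert that `validate_upload("script.sh", 10)` is rejected while `validate_upload("photo.png", 2048)` is accepted, catching the vulnerability before release.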
Automated and Manual Software Testing Processes
You cannot build great code without adequate and disciplined software testing. Software testing should be completed using manual and automated tools.
There are many processes when it comes to testing software. Each company here has a different approach to development: they use different frameworks, different programming languages and varied architectures. However, the underlying concept of testing their product remains the same.
Automated Software Testing
Standalone tools such as Selenium and Cucumber can help mock user behaviour. Most modern frameworks feature integrated unit-testing libraries, such as PHPUnit, JsUnit and JUnit. Modern programming languages also allow code linters to be plugged in; these statically analyse code for validation errors before it ever runs. This level of automation helps catch acceptance, functional and compilation mistakes; however, it is no substitute for manual testing.
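To show what an integrated unit-testing framework looks like in practice, here is a minimal sketch using Python's built-in `unittest` module as a stand-in for PHPUnit or JUnit. `apply_discount` is a hypothetical function under test, invented for the example.

```python
import unittest

def apply_discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

if __name__ == "__main__":
    unittest.main(argv=["discount_tests"], exit=False)
```

PHPUnit and JUnit follow the same pattern: group assertions into test cases, then let the framework discover, run and report on them automatically.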
Manual Software Testing
Here at the Innovation Centre, the Wesley Clover management team complete software testing on behalf of each team. We produce a document that covers three essential principles in software testing:
Functionality, Usability & Security
Functionality Software Testing
1. Check every button and link for the expected outcome.
2. Incorrect details should not allow progression.
3. Error messages should display for mandatory fields.
4. Uploaded files are restricted to appropriate file types and sizes.
5. A confirmation message is displayed for delete or update operations.
Selenium is a powerful tool in this scenario as it automates via the browser. You can mock a user's movements through the application, such as filling in forms.
One example of functionality testing is the stress test. Tools such as Blazemeter, Testable and WebLoad can mock specific scenarios, for instance many users handling the application at the same time. This shows how the software performs under duress and provides the opportunity to fix issues with stability, speed and scalability.
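The tools above drive a real deployed endpoint; the underlying idea can be sketched in-process with a thread pool standing in for many simultaneous users. `handle_request` is a hypothetical placeholder for the application under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for the application endpoint under stress."""
    time.sleep(0.01)  # simulate a small amount of server-side work
    return 200        # HTTP-style success status

def stress_test(num_users=50):
    """Fire num_users concurrent 'requests' and measure total elapsed time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        statuses = list(pool.map(handle_request, range(num_users)))
    elapsed = time.perf_counter() - start
    return statuses, elapsed

statuses, elapsed = stress_test()
print(f"{len(statuses)} requests in {elapsed:.2f}s, "
      f"all OK: {all(s == 200 for s in statuses)}")
```

A real load test would point this kind of harness at an HTTP endpoint and watch for errors, timeouts and degraded response times as concurrency climbs.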
Usability Software Testing
1. Check your signposts, ensure page titles are clear.
2. Field names should be short and easily identified.
3. Identify the number of steps within / left to complete a process.
4. Confirmation on completion / incompletion of each process (ticks / warnings).
5. If your user gets lost, ensure help or direction is easily available.
For browser-compatibility testing, tools such as BrowserStack or Sauce Labs fire up virtual machines containing legacy operating systems and browsers. This enables developers to create fall-backs to ensure the experience is the same for all users.
Security Software Testing
1. Sensitive information is stored in hashed form and transmitted over HTTPS (TLS).
2. Setting a new password disables the previous one, and repeated incorrect attempts lock the account.
3. Logging out disables visibility of, and navigation to, secure areas.
4. Permissions are correctly set for the user hierarchy.
5. Verify resistance to brute-force attacks.
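Points 2 and 5 can be exercised with a small sketch of lock-after-N-failures logic. This is illustrative only: a real system would store a salted hash of the password (for example via bcrypt) rather than plain text, and would persist the attempt counter.

```python
class Account:
    """Minimal sketch: lock the account after too many failed logins."""
    MAX_ATTEMPTS = 3  # assumption: three strikes before lock-out

    def __init__(self, password):
        self._password = password  # real systems store a salted hash, never plain text
        self.failed_attempts = 0
        self.locked = False

    def try_login(self, password):
        if self.locked:
            return "locked"
        if password == self._password:
            self.failed_attempts = 0
            return "ok"
        self.failed_attempts += 1
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            self.locked = True
        return "denied"

acct = Account("s3cret")
for _ in range(3):
    acct.try_login("wrong")
print(acct.try_login("s3cret"))  # even the correct password is refused once locked
```

A security test asserts that the lock engages after the threshold and that subsequent attempts, even with the correct password, are refused until an unlock process runs.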
This process identifies overlooked product issues and highlights where teams need support from mentors within our programme.
Software Testing Best Practice
Catching these bugs and issues early in the development phase is of paramount importance. Testing against potential vulnerabilities improves overall product quality, increases customer satisfaction and saves money. Fixing bugs during the early stages consumes far fewer resources than fixing them after public deployment with established users on board, and catching them early has a far lower impact on customer experience than having one of your customers highlight them to you.
Highlighted issues should be entered into Issue Tracking software such as Jira. Most repository hosting services like GitHub and GitLab also offer built-in issue tracking software.
These issues are then embedded into regular development sprints, in order of severity.
These issues are later revisited to ensure they have been addressed correctly. This review also extends the discussion around each team's approach to UX / UI design.
User Experience Software Testing
This is not an alternative to live user feedback, but it does give a good indication of usability before deployment.
Teams conduct focus group exercises such as A/B testing, usability evaluations and content analysis.
WCIC mentors support with translating feedback into requirements for new UI design and product flow. Following a revised user interface, a stage of UX exercises will begin until all issues have been covered.
Test-driven development (TDD) is an iterative process in which a developer first writes a failing test that defines the desired outcome, then writes the least amount of code that makes that test pass.
The TDD process is as follows:
- Write a new test for the desired outcome
- Run all tests (the new test will fail because the code has not yet been written)
- Write code
- Run tests again
- Refactor the new code
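The cycle above can be illustrated with a tiny worked example. `slugify` is a hypothetical helper invented for the illustration: the test is written first and defines the outcome, then the least amount of code is written to make it pass.

```python
# Step 1: write the failing test first - it defines the desired outcome.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Steps 2-3: running the test at this point fails (slugify does not exist yet),
# so write the least amount of code that makes it pass.
def slugify(text):
    return "-".join(text.lower().split())

# Step 4: run the tests again - they now pass.
test_slugify()
print("tests pass")

# Step 5: refactor slugify as needed, re-running the test after each change.
```

The test acts as a safety net during the refactor step: any change that breaks the desired behaviour is caught immediately.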
No matter how good a developer you may be, there is no escaping the fact that human error will cause bugs in lines of code. In an ideal world, TDD would be the approach that every start-up takes when developing.
A TDD approach highlights bugs and issues sooner. In a startup environment this can feel time-consuming, primarily when the company has limited resources. However, the time it takes to write tests early on is far outweighed by the time it would otherwise take to backtrack and fix bugs later.