About this course
The future of software testing lies at the intersection of human ingenuity and technological leverage.
Our AI for Software Testing course is an immersive and interactive journey to prepare experienced software testing professionals for the AI-empowered future.
Through hands-on exercises with leading AI agents and tools, you will gain first-hand experience generating key artefacts, from system test plans to individual test cases. AI can automate the repetitive work of generating test cases, but it can do much more: you will learn how to use it to evaluate existing tests, structure systems for more effective testing, interpret results, and maintain traceability from requirements to tests. You will also discover how AI can streamline the testing process through increased test automation, while balancing its power to create testing artefacts with human judgement.
What you will gain
An AI for Software Testing digital badge from Skills Development Group will be available upon successful completion of the course.
This course will contribute 14 PMI® Professional Development Units (PDUs) towards your chosen certification (14 Business Acumen).
What you will learn
- Use generative AI to explore and understand system behaviour when documentation is incomplete or unclear
- Write prompts that describe observed behaviour, constraints, and test intent clearly
- Identify assumptions, gaps, and invented details in AI-generated test cases
- Turn AI-generated test ideas into executable tests with clear steps and observable outcomes
- Structure tests using partitions, boundaries, state, and sequences with AI support
- Use AI to generate exploratory testing ideas without losing tester control or focus
- Evaluate AI-generated nonfunctional test ideas for relevance and testability
- Write clearer bug reports and evaluate AI-assisted summaries and metrics
- Make informed decisions about what to automate, and what not to automate
- Create a practical plan for integrating generative AI into your own testing work
NOTE: This is a foundational-level AI course and does not teach participants how to build agents.
What you need
To get the most out of this course, participants should have foundational knowledge of software testing, gained either through formal training such as our Software Testing Foundations or Agile Testing courses, or through relevant experience working in a software testing context.
This course is great for
- Testers, Test Analysts and Developers wanting to utilise AI to automate and assess testing tasks and artefacts
- Project Managers, Business Analysts and leaders wanting to accelerate the testing process whilst maintaining responsible and ethical oversight
Topics covered
Understanding AI’s role in software testing
- Capabilities and limits
- How to evaluate AI outputs
- How AI affects tester judgement and responsibilities
Let’s test!
- Examining AI-generated tests
- Identifying assumptions and invented details
- Assessing whether outputs are truly executable
Tests as specifications
- Inferring requirements from behaviour
- Separating facts from assumptions
- Correcting AI inaccuracies
- Organising requirements to expose gaps
Test data
- Applying equivalence partitioning and boundary value analysis
- Distinguishing observed behaviour from assumptions
- Defining necessary preconditions
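The test-data techniques listed above can be sketched in a few lines. This is an illustrative example only, using a hypothetical input field that accepts values from 18 to 65 inclusive; the field and range are not taken from the course materials.

```python
# Sketch of equivalence partitioning and boundary value analysis
# for a hypothetical numeric field accepting 18-65 inclusive.

def derive_test_values(low, high):
    """Return one representative value per partition, plus the boundaries."""
    return {
        "invalid_below": [low - 1],               # just below the valid range
        "valid": [low, (low + high) // 2, high],  # boundaries plus a mid value
        "invalid_above": [high + 1],              # just above the valid range
    }

print(derive_test_values(18, 65))
```

Partitioning keeps the test set small (one value per class) while the boundary values target the off-by-one errors that cluster at range edges.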
Making tests executable
- Identifying missing information
- Refining tests into clear steps and outcomes
- Expressing them in structured formats such as Gherkin
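As an illustration of the structured formats mentioned above, a test can be expressed as a Gherkin scenario. The feature and steps below are hypothetical, chosen only to show the Given/When/Then shape:

```gherkin
Feature: User login

  Scenario: Rejecting an incorrect password
    Given a registered user "alice"
    When she attempts to log in with an incorrect password
    Then the login is rejected
    And an error message is displayed
```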
Stories and scenarios
- Understanding how stories, acceptance criteria, scenarios, and tests relate
- Using AI to draft them, while reviewing the outputs critically
States and coverage
- Modelling system behaviour with states and transitions
- Reasoning about coverage beyond simple test counts
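The idea of reasoning about coverage beyond test counts can be made concrete with a small state model. This is a minimal sketch with hypothetical states and events (a login session), not an example from the course itself:

```python
# Model behaviour as (state, event) -> next state, then measure how many
# modelled transitions a set of test paths actually exercises.

TRANSITIONS = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "logout"): "logged_out",
    ("logged_in", "timeout"): "logged_out",
}

def transition_coverage(test_paths, start="logged_out"):
    """Fraction of modelled transitions exercised by the given event paths."""
    covered = set()
    for path in test_paths:
        state = start
        for event in path:
            key = (state, event)
            if key in TRANSITIONS:   # ignore events invalid in this state
                covered.add(key)
                state = TRANSITIONS[key]
    return len(covered) / len(TRANSITIONS)

# A single login/logout test looks complete, yet covers only 2 of 3 transitions.
print(transition_coverage([["login", "logout"]]))
```

One test path here exercises two of the three transitions, so a plain test count ("one test, passing") would overstate how much of the modelled behaviour has been checked.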
Validating quality attributes
- Assessing non-functional requirements
- Critically reviewing AI-generated insights
Test strategy and planning
- Identifying risk-driven priorities
- Structuring testing efforts
- Evaluating AI-generated strategy inputs
Bug analysis and reporting
- Analysing defects for impact and cause
- Producing clear, actionable reports
Test automation
- Deciding what to automate
- Evaluating AI-generated scripts
- Recognising when manual or exploratory testing is more effective
Integrating AI into your testing
- Identifying pilot opportunities
- Applying ethical AI practices
- Creating adoption roadmaps to improve productivity and quality