Duration: Four days
Delivery: Virtual

ISTQB Foundation - AI for Testers (Virtual)

The ISTQB Foundation AI Tester Extension course extends the broad understanding of testing acquired at Foundation Level, enabling candidates to perform the role of AI Tester.

About the course

This four-day tutor-led AI in software testing course includes lectures, exercises and practical work, as well as exam preparation. It is fully accredited by UKITB on behalf of ISTQB and has been rated SFIAplus level 3 by the BCS.

What is virtual classroom training?

Virtual instructor-led training combines the personal teaching experience of a classroom with the ease and flexibility of a virtual environment. Virtual courses are interactive and engaging, allowing participants to communicate with both the instructor and each other in a collaborative manner.


This is a four-day intensive virtual course.


The recommended entry criterion for candidates taking the AI Tester course is as follows:

  • Hold the ISTQB Foundation in Software Testing certificate. 

Learning objectives

Individuals who hold the ISTQB® Certified Tester - AI Testing certification should be able to accomplish the following business outcomes:

  • Understand the current state and expected trends of AI 
  • Experience the implementation and testing of an ML model and recognize where testers can best influence its quality 
  • Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency and explainability 
  • Contribute to the test strategy for an AI-based system 
  • Design and execute test cases for AI-based systems 
  • Recognize the special requirements for the test infrastructure to support the testing of AI-based systems 
  • Understand how AI can be used to support software testing 
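
The second outcome above involves implementing and testing an ML model. As a rough sketch of what that hands-on exercise can look like (the classifier choice and toy data here are invented for illustration, not taken from the course material), a tiny nearest-centroid classifier makes visible where a tester typically probes quality: the evaluation on held-out data.

```python
# Minimal sketch, not course material: a toy nearest-centroid classifier.
# The tester's leverage point is the held-out evaluation at the bottom.

def train(samples):
    """Compute one centroid per class from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the class whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Train/test split: the model is evaluated on data it never saw.
train_data = [([1.0, 1.0], "A"), ([1.2, 0.9], "A"),
              ([5.0, 5.0], "B"), ([4.8, 5.2], "B")]
test_data = [([1.1, 1.0], "A"), ([5.1, 4.9], "B")]

model = train(train_data)
accuracy = sum(predict(model, f) == y for f, y in test_data) / len(test_data)
print(accuracy)  # 1.0 on this cleanly separable toy data
```

Even on a toy like this, the tester's questions are the course's questions: is the held-out data representative, and would the accuracy survive slightly perturbed inputs?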

In addition, Certified AI Testers should be able to demonstrate their skills in the following areas once they have completed the course and passed the exam: 

  • Describe the AI effect and show how it influences the definition of AI 
  • Distinguish between narrow AI, general AI, and super AI 
  • Differentiate between AI-based systems and conventional systems 
  • Recognize the different technologies used to implement AI 
  • Identify popular AI development frameworks 
  • Compare the choices available for hardware to implement AI-based systems 
  • Explain the concept of AI as a Service (AIaaS) 
  • Explain the use of pre-trained AI models and the risks associated with them 
  • Describe how standards apply to AI-based systems 
  • Explain the importance of flexibility and adaptability as characteristics of AI-based systems 
  • Explain the relationship between autonomy and AI-based systems 
  • Explain the importance of managing evolution for AI-based systems 
  • Describe the different causes and types of bias for AI-based systems 
  • Discuss the ethical principles that should be respected in the development, deployment and use of AI-based systems 
  • Explain the occurrence of side effects and reward hacking in AI-based systems 
  • Explain how transparency, interpretability and explainability apply to AI-based systems 
  • Recall the characteristics that make it difficult to use AI-based systems in safety-related applications 
  • Describe classification and regression as part of supervised learning 
  • Describe clustering and association as part of unsupervised learning 
  • Describe reinforcement learning 
  • Summarize the workflow used to create an ML system 
  • Given a project scenario, identify an appropriate ML approach (from classification, regression, clustering, association, or reinforcement learning) 
  • Explain the factors involved in the selection of ML algorithms 
  • Summarize the concepts of underfitting and overfitting 
  • Demonstrate underfitting and overfitting 
  • Describe the activities and challenges related to data preparation 
  • Perform data preparation in support of the creation of an ML model 
  • Contrast the use of training, validation and test datasets in the development of an ML model 
  • Identify training and test datasets and create an ML model 
  • Describe typical dataset quality issues 
  • Recognize how poor data quality can cause problems with the resultant ML model 
  • Recall the different approaches to the labelling of data in datasets for supervised learning 
  • Recall reasons for the data in datasets being mislabeled 
  • Calculate the ML functional performance metrics from a given set of confusion matrix data 
  • Contrast and compare the concepts behind the ML functional performance metrics for classification, regression and clustering methods 
  • Summarize the limitations of using ML functional performance metrics to determine the quality of the ML system 
  • Select appropriate ML functional performance metrics and/or their values for a given ML model and scenario 
  • Evaluate the created ML model using selected ML functional performance metrics 
  • Explain the use of benchmark suites in the context of ML 
  • Explain the structure and working of a neural network, including a deep neural network (DNN) 
  • Experience the implementation of a perceptron 
  • Describe the different coverage measures for neural networks 
  • Explain how system specifications for AI-based systems can create challenges in testing 
  • Describe how AI-based systems are tested at each test level 
  • Recall those factors associated with test data that can make testing AI-based systems difficult 
  • Explain automation bias and how this affects testing 
  • Describe the documentation of an AI component and understand how documentation supports the testing of AI-based systems 
  • Explain the need for frequently testing the trained model to handle concept drift 
  • For a given scenario determine a test approach to be followed when developing an ML system 
  • Explain the challenges in testing created by the self-learning of AI-based systems 
  • Explain how autonomous AI-based systems are tested 
  • Explain how to test for bias in an AI-based system 
  • Explain the challenges in testing created by the probabilistic and non-deterministic nature of AI-based systems 
  • Explain the challenges in testing created by the complexity of AI-based systems 
  • Describe how the transparency, interpretability and explainability of AI-based systems can be tested 
  • Use a tool to show how explainability can be used by testers 
  • Explain the challenges in creating test oracles resulting from the specific characteristics of AI-based systems 
  • Select appropriate test objectives and acceptance criteria for the AI-specific quality characteristics of a given AI-based system 
  • Explain how the testing of ML systems can help prevent adversarial attacks and data poisoning 
  • Explain how pairwise testing is used for AI-based systems 
  • Apply pairwise testing to derive and execute test cases for an AI-based system 
  • Explain how back-to-back testing is used for AI-based systems 
  • Explain how A/B testing is applied to the testing of AI-based systems 
  • Apply metamorphic testing for the testing of AI-based systems 
  • Apply metamorphic testing to derive test cases for a given scenario and execute them 
  • Explain how experience-based testing can be applied to the testing of AI-based systems 
  • Apply exploratory testing to an AI-based system 
  • For a given scenario select appropriate test techniques when testing an AI-based system 
  • Describe the main factors that differentiate the test environments for AI-based systems from those required for conventional systems 
  • Describe the benefits provided by virtual test environments in the testing of AI-based systems 
  • Categorize the AI technologies used in software testing 
  • Discuss, using examples, those activities in testing where AI is less likely to be used 
  • Explain how AI can assist in supporting the analysis of new defects 
  • Explain how AI can assist in test case generation 
  • Explain how AI can assist in optimization of regression test suites 
  • Explain how AI can assist in defect prediction 
  • Implement a simple AI-based defect prediction system 
  • Explain the use of AI in testing user interfaces 
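
Several of the objectives above ask candidates to calculate ML functional performance metrics from confusion matrix data. The formulas fit in a few lines; the counts below are invented purely for illustration.

```python
# Hedged sketch: the standard binary-classification metrics derived from a
# confusion matrix. The counts (tp, fp, fn, tn) are made-up example values.

def metrics(tp, fp, fn, tn):
    """Return accuracy, precision, recall and F1-score for one class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction classified correctly
    precision = tp / (tp + fp)                   # of predicted positives, how many were real
    recall = tp / (tp + fn)                      # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Example: 40 true positives, 10 false positives, 5 false negatives, 45 true negatives.
acc, prec, rec, f1 = metrics(tp=40, fp=10, fn=5, tn=45)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.3f}")
```

Which metric matters depends on the scenario — a point the syllabus makes by asking candidates to select appropriate metrics, not just compute them.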

This is an intensive four-day course that includes the following:

  • All accompanying course material                                                                   
  • The cost of the exam 

What's covered?

  • Introduction to AI
  • Quality characteristics for AI-based systems
  • Machine learning overview
  • Machine learning - Data
  • ML functional performance metrics
  • ML neural networks and testing
  • Testing AI-based systems - Overview
  • Testing AI-specific quality characteristics
  • Methods and techniques for the testing of AI-based systems
  • Test environments for AI-based systems
  • Using AI for testing

Target audience

The AI Tester course is suitable for those who are, or expect to be, working on projects that have AI at their heart. It is aimed at those who seek to apply the core software testing material covered at ISTQB Foundation Level to projects that work with AI.

The Certified Tester AI Testing (CT-AI) qualification is aimed at people seeking to extend their understanding of artificial intelligence and machine learning (including deep learning), and specifically of testing AI-based systems and using AI in testing.

Delegates will be provided with a Pearson VUE exam voucher one week prior to course commencement. This enables you to book and sit your exam at your local Pearson VUE testing centre at a date and time convenient to you. Pearson VUE centres operate worldwide, so you can choose the centre closest to you. On the day, bring photo ID to the test centre, where you will sit an electronic exam. Your exam voucher has an expiration date and the exam must be sat before that date, as vouchers cannot be extended.

Exam format

To qualify as an internationally recognized Certified Tester - AI Testing and be issued with an ISTQB® AI Foundation Extension certificate, delegates must successfully pass the exam administered by the relevant National Board or Examination Provider.

  • The examination consists of 40 multiple-choice questions. 
  • It is a ‘closed book’ examination, i.e. no notes or books are allowed into the examination room. 
  • The duration is 60 minutes (or 75 minutes for candidates taking the examination in a language other than their native language). 
  • The pass mark is 65% (26 out of 40). 
