Expert Opinion: Should AI Be Used to Develop IT Certification Exams?

Is there a role for AI in creating IT certification exams?

AI has been all over the news of late. In the IT certification realm, one question on the minds of many right now is whether AI should be used in certification exam design and development. The topic came up during a recent exam development workshop I was facilitating with a roomful of subject matter experts (SMEs).

One of the key points raised was that AI can't function without source data. Suppose someone asked an AI chatbot to generate exam questions in a certain area: the necessary information would likely only be available if it had already been published or used for a similar purpose.

In essence, any questions formulated this way would be exposed as soon as they'd been created, and the exam would be invalid. Maintaining the security of test items is of paramount importance to good exam design.

The other part of the equation is whether anyone would be comfortable feeding an AI system as much information as it would need to create the test questions in the first place. Most companies like to protect their intellectual property. Some might suggest using training material to generate exam questions. If the training material were already freely available, then concerns might be minimal. But what if the training material itself were an asset that brings in revenue?

Another, more subtle aspect of exam design is the weighing of different perspectives. The very best outputs are achieved when SMEs come together from different parts of an IT ecosystem. A new Microsoft exam, for example, would benefit from involving SMEs from different internal departments, as well as from outside partner organizations. Varied perspectives contribute to a much stronger collective result. Would AI be able to mimic this process of weighing and integrating different viewpoints?

Another factor to consider is the time required to check any work handed off to an AI. Let's say you decide you can live with the caveats above and let an AI system create some questions. Since there was no conscious input at the design level, the questions will need to be rigorously reviewed and validated. Is the time saved generating ideas and content for questions worth the tradeoff of a beefed-up review cycle? Not from my perspective.

There is also the question of the increasing complexity of certification exams. Scenario-based questions require exam candidates to think critically and demonstrate the ability to tie together several factors. Such questions are increasingly popular, but creating them requires the same combination of critical thought and independent evaluation of factors that answering them does. Is current AI technology even capable of knitting together these types of questions?

One step beyond scenario-based questions are performance-based questions. These require the exam candidate to not just think through a complex scenario, but to actually complete tasks in a live environment. The "correct" solution to such questions frequently involves subjective elements that have to be considered both in designing the question and evaluating each "answer" an exam candidate produces.

Performance-based exams require a great deal of investment. Creating the questions, delivering the exam, and evaluating the results are all labor- and cost-intensive. (Exam delivery restrictions also mean that fewer candidates take the exam during a given period of availability, a hidden "cost" that has to be factored in.) Certification providers may not want to leave any of those processes in non-human hands.

There is certainly a role for AI in scoring completed certification exams (unless subjective criteria are involved). And if you are not necessarily concerned about legal defensibility, then AI may be useful in creating exams for a lesser standard of credential, such as a qualification program. If high-stakes proctored testing is not required, then AI could play a big role in generating questions and answers. Current AI technology could almost certainly scrape a manual and create some test questions.
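For objective, fixed-key items, machine scoring really is a mechanical comparison against an answer key, which is why it is such a natural fit for automation. Here is a minimal sketch; the item IDs, keys, and passing threshold are invented for illustration and not drawn from any real certification program:

```python
# Minimal sketch of objective (fixed-key) exam scoring.
# Item IDs, answer keys, and the passing threshold below are
# hypothetical examples, not from any real certification exam.

def score_exam(answer_key, responses, passing_pct=70.0):
    """Score a candidate's responses against a fixed answer key."""
    correct = sum(
        1 for item, key in answer_key.items()
        if responses.get(item) == key
    )
    pct = 100.0 * correct / len(answer_key)
    return {
        "correct": correct,
        "total": len(answer_key),
        "percent": round(pct, 1),
        "passed": pct >= passing_pct,
    }

key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
candidate = {"Q1": "B", "Q2": "D", "Q3": "C", "Q4": "C"}
print(score_exam(key, candidate))  # 3 of 4 correct: 75.0%, passed
```

Note that nothing in this kind of scoring involves judgment, which is exactly why subjective or performance-based answers fall outside it.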

There are almost certainly already bad actors out there attempting to capture certification test items and feed them to AI to generate test-prep guides. So AI will cause problems even if no one tries to use it for certification exam creation.

There may be a role for AI to play in sussing out bad actors. The potential of AI in this regard — an AI-driven automated search tool might be useful in uncovering pirated exam content online, for example — will almost certainly be intriguing to those concerned with exam security.
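One way such a search tool might work, at its simplest, is fuzzy matching of secure exam items against text scraped from the web. The sketch below uses Python's standard-library `difflib` to flag close matches; the sample item, scraped snippets, and similarity threshold are all invented for the demo, and a real anti-piracy pipeline would be far more sophisticated:

```python
# Illustrative sketch (not a production tool): flag scraped web text
# that closely resembles secure exam items using stdlib fuzzy matching.
# The sample item, snippets, and 0.8 threshold are invented examples.
from difflib import SequenceMatcher

def find_suspect_matches(exam_items, scraped_texts, threshold=0.8):
    """Return (item, scraped_text, ratio) triples at or above threshold."""
    hits = []
    for item in exam_items:
        for text in scraped_texts:
            ratio = SequenceMatcher(None, item.lower(), text.lower()).ratio()
            if ratio >= threshold:
                hits.append((item, text, round(ratio, 2)))
    return hits

items = ["Which RAID level mirrors data across two disks?"]
scraped = [
    "Which RAID level mirrors data across two disks?",  # verbatim leak
    "Our cloud migration webinar starts at noon.",      # unrelated text
]
print(find_suspect_matches(items, scraped))  # flags only the leak
```

A threshold-based approach like this would catch verbatim and lightly paraphrased leaks while ignoring unrelated content, though tuning the threshold against false positives would be real work in practice.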

No matter how AI is potentially involved in the future of certification exams, it will take time and patience to figure out the specifics. It would be best for all concerned to tread lightly. I would love to hear what GoCertify visitors have to say about all of this. And, yes, this rumination was indeed written by me — not AI. 😊

About the Author

Peter Manijak is a training and certification consultant and served as Certification Chair for CEdMA (Computer Education Management Association) for more than six years. He now sits on the CEdMA Europe Board of Directors. An innovator and pioneer of IT certification, Peter specializes in building and managing world-class certification programs and training organizations. Certification regimes he has led include those affiliated with EMC, Storage Networking Industry Association (SNIA), Hitachi Data Systems, Acquia, Magento and Ceridian. Peter has been awarded CEdMA Certification Chair - Emeritus status and is a regular contributor to Certification Magazine and GoCertify.