ATP’s new best practice guidelines for Technology-Based Assessment—a summary

They say that if you ask two psychometricians a question, you’ll get at least three opinions. So what happens if you try to work with 100 assessment experts?

On November 14, the Association of Test Publishers (ATP) and the International Test Commission launched a major set of guidelines for technology-based assessment (which you can download for free here).

At 175 pages, the guidelines cover most aspects of using technology in testing.

Let me share a little about them.

Some background

Preparing the guidelines was a substantial project that took four years and involved over 100 contributors. It was led by two eminent psychometricians: John Weiner of PSI and Steve Sireci of the University of Massachusetts Amherst.

Each chapter section was authored by an expert, reviewed by several other experts, and then combed through many times. There was also a public review process and a legal review.

As one of the contributors, I authored the chapter on privacy and a couple of smaller sections, and led the review of the chapter on test delivery. I am also on the ATP Committee on Technology-Based Assessment, which will work on taking forward future iterations of the guidelines.

The guidelines are broken down into 11 chapters:

  1. Test development and authoring (including gamification and technology-enhanced items)
  2. Test design and assembly—linear or adaptive
  3. Test delivery environments (including web, mobile, offline, locked-down browsers, disruptions and interoperability)
  4. Scoring—automated and technology assisted
  5. Digitally based results reporting
  6. Data management (storage, maintenance, integrity, integration)
  7. Psychometric and technical quality
  8. Test security
  9. Data privacy in technology-based assessment
  10. Fairness and accessibility
  11. Global testing considerations including translation

Some sections are focused on psychometrics, ensuring that technology is used in a way that’s consistent with the psychometric principles of validity, reliability and fairness. 

Other sections are more focused on technology pragmatism and good practice. Although the guidelines cover tests of all stakes, they focus more on issues relating to medium- or high-stakes summative tests than on formative or low-stakes tests.

Key takeaways

Here are a few examples from the guidelines that give a clear idea of the kind of thing they cover.

Technology-enhanced items

A technology-enhanced item (TEI) is defined as a test item that incorporates media or additional functionality that is available only through electronic means.

The guidelines suggest that the use of such items can better measure constructs (the knowledge or skill that the test seeks to measure) by increasing the scope of a test or exam—for example, by using audio, video, animations or graphics. Such TEIs can also make the assessment more authentic and give face validity (stakeholder buy-in). These items can increase learner engagement, which in turn increases learner effort, which in turn can make test results more valid.

(Image: Classify, match & order, part of Learnosity’s extensive range of TEIs.)

The practical guidance suggests that: 

  • When designing TEIs, start from the analysis of what you are seeking to test (the construct)—for example, from skills maps or content blueprints. 
  • In order to produce high-quality TEIs, it’s important to have item writing guidelines that focus authors on what works well for each item type and give consistency between items (see the sketch after this list).
  • Make sure to check the operation of TEIs on the devices that test takers will use to ensure they work well (e.g. they do not need too much scrolling).
  • It’s important to give learners tutorials or practice in technology-enhanced item formats before the test to ensure that test takers are familiar with them.
  • Such steps will reduce the risk of “CIV”—construct-irrelevant variance—caused, for example, by test takers being unable to demonstrate their skill in the item.
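
Following up on the construct-mapping and consistency points above, here is a minimal sketch, in TypeScript, of what an item-authoring schema along these lines might look like. Every name and field here is an illustrative assumption, not Learnosity’s or any other vendor’s actual API.

```typescript
// Hypothetical item-authoring schema -- an illustrative sketch only.

type InteractionType = "classify" | "match" | "order" | "hotspot";

interface SkillReference {
  blueprintId: string; // entry in the content blueprint or skills map
  skill: string;       // e.g. "sequence the steps of a process"
}

interface TeiItem {
  id: string;
  interaction: InteractionType;
  // Tie each item back to the construct(s) it is meant to measure.
  targets: SkillReference[];
  media?: { kind: "audio" | "video" | "animation"; url: string }[];
  // Smallest viewport the item has been checked on, to flag scrolling risks.
  minViewportWidthPx?: number;
}

// Example: an ordering item mapped to a single blueprint entry.
const exampleItem: TeiItem = {
  id: "tei-042",
  interaction: "order",
  targets: [{ blueprintId: "BP-3.2", skill: "sequence the steps of a process" }],
  minViewportWidthPx: 768,
};
```

Recording the target construct and the smallest tested viewport alongside each item keeps authoring consistent between items and makes the device checks auditable.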

For a wide range of TEI examples from Learnosity, click here. If followed correctly, the guidelines should help with the wider application of such items for good learning and measurement purposes.

Web vs local vs offline vs mobile delivery

A key chapter of the guidelines covers approaches for web-based, local, offline and mobile delivery, offering the pros and cons of each.

The guidelines suggest that whatever the delivery modality, test delivery systems “should be robust and secure, including capabilities for graceful degradation, encryption, auditing and meaningful system messaging”. 

There is good practice guidance on what to do in the event of Internet connectivity failure or other challenges, with recommendations that vendors should "perform thorough quality assurance on all delivery methods and combinations … on a wide range of devices and conditions … including stress tests on central (cloud) infrastructure in representative conditions before the testing event".
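
As one illustration of what graceful degradation can mean in practice, here is a minimal sketch, assuming a browser-based delivery client and a hypothetical submit endpoint: responses are buffered locally before any network call, so a connectivity failure delays syncing rather than losing the test taker’s work.

```typescript
// A minimal sketch of graceful degradation for response capture.
// The endpoint URL and payload shape are illustrative assumptions.

interface ResponseRecord {
  itemId: string;
  response: unknown;
  capturedAt: number; // epoch ms, useful for auditing
}

const pending: ResponseRecord[] = [];

// Capture never depends on the network: buffer first, then try to sync.
function captureResponse(itemId: string, response: unknown): void {
  pending.push({ itemId, response, capturedAt: Date.now() });
  void flush();
}

async function flush(submitUrl = "https://example.test/responses"): Promise<void> {
  while (pending.length > 0) {
    try {
      const res = await fetch(submitUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(pending[0]),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      pending.shift(); // confirmed by the server; safe to drop locally
    } catch {
      // Offline or server error: keep the buffer, retry shortly, and show
      // the test taker a meaningful "answer saved locally" message.
      setTimeout(() => void flush(submitUrl), 5000);
      return;
    }
  }
}
```

In a real system the buffer would live in persistent storage (for example IndexedDB) so it survives a page reload, and records would be encrypted in transit and at rest, in line with the encryption capability quoted above.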

Approach to technology disruptions

There is some excellent related material on dealing with technology disruptions during assessments. We have all seen examples of exams going wrong due to technology failures, and it’s good to have guidance on how to deal with such issues. To this end, the guidelines cover:

  • Preventing disruptions
  • Developing a response plan in the event of disruptions
  • Having clear policies on communication in the event of a disruption
  • Training personnel around disruptions
  • Planning for possible disruptions when setting up vendor contracts

The key message is that prevention is better than cure, but should test disruptions occur, it’s important to prepare well for them so that they can be ameliorated.

Vocabulary for test security solutions

The excellent chapter on test security gives a high-level overview of safeguarding against potential security threats, helping practitioners focus effort on the most important strategies. It sets out three sets of solutions for test security, all of which should be considered:

  • Prevention
    Ways of preventing people from cheating at tests—e.g. randomizing question or choice order to make it harder for people to copy from others or take advantage of published “cheat sheets” (see the sketch after this list)
  • Deterrence
    Effective communication to test takers to persuade them not to cheat.
  • Detection/response
    Finding occurrences of cheating and responding appropriately to them—e.g. using data forensics to identify likely cheating via statistical analysis.
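
To make the prevention example concrete, here is a minimal sketch of per-test-taker randomization: a seeded Fisher-Yates shuffle (driven by the well-known mulberry32 generator) gives each candidate a different order while keeping the form reproducible for scoring and audit. The seed source is an assumption for illustration.

```typescript
// Seeded shuffle sketch: deterministic per-test-taker item/choice order.

// mulberry32: a small, well-known seeded pseudo-random generator.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher-Yates shuffle driven by the seeded generator.
function seededShuffle<T>(items: readonly T[], seed: number): T[] {
  const rand = mulberry32(seed);
  const out = [...items];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Each test taker gets a distinct but reproducible ordering;
// the same seed always yields the same order.
console.log(seededShuffle(["Q1", "Q2", "Q3", "Q4", "Q5"], 12345));
```

Deriving the seed from, say, a hash of the test-taker ID and the form ID means the exact ordering can be regenerated later without storing every permutation.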

I think it’s likely that the industry will coalesce around this categorization to help coordinate efforts against test fraud and cheating, which could improve communication and action to increase test security.

On data privacy

Last but not least is the chapter on data privacy (written by me, with contributions and review from several others).

At 10 pages, it gives a relatively concise introduction to why privacy is important for those delivering tests, along with good practice guidance on what steps to take—both for legal compliance and to respect test takers’ privacy and rights.

For example, one of the guidelines encourages pseudonymity, which is a core Learnosity practice. It says: “Where practical, personal data captured during the assessment process should be stored and transmitted in an encrypted and/or pseudonymized form to reduce the risk of unauthorized access or disclosure of personal data.”
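
As a minimal sketch of what pseudonymization can look like in code (assuming a Node.js backend; the key handling and environment variable name are illustrative only), a keyed hash replaces the direct identifier before the record is stored or transmitted:

```typescript
// A minimal pseudonymization sketch using Node's built-in crypto module.
// Key management shown here is an illustrative assumption.
import { createHmac } from "node:crypto";

// The secret key must be stored separately from the assessment data,
// so the mapping cannot be reversed without access to both.
function pseudonymize(testTakerId: string, secretKey: string): string {
  return createHmac("sha256", secretKey).update(testTakerId).digest("hex");
}

const record = {
  // PSEUDONYM_KEY is a hypothetical environment variable for this sketch.
  testTaker: pseudonymize("jane.doe@example.com", process.env.PSEUDONYM_KEY ?? ""),
  score: 87, // assessment data travels without a direct identifier
};
console.log(record);
```

Using an HMAC rather than a plain hash matters here: without the key, an attacker cannot rebuild the mapping simply by hashing a list of known identifiers.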

As I’ve shared previously, test sponsors and vendors capture a great deal of data when delivering assessments, and with great amounts of data comes great responsibility. 

I think the assessment industry as a whole is becoming increasingly vigilant about and respectful of test-taker privacy, so that data is captured only to enable good-quality assessment for the benefit of stakeholders and society. I’m pleased this chapter sets out good practice for everyone to follow.

You can download the ATP guidelines in full here.

John Kleeman

EVP at Learnosity
