I recently got back from the Association of Test Publishers (ATP) Innovations in Testing conference in Florida. The event is a bellwether for the assessment industry and a good place to look into the crystal ball for a glimpse at its possible future.
The conference ran virtually during the pandemic but was back in person last week for the first time in three years, with about 750 attendees in Orlando and a further 300–400 joining virtually. As last year’s ATP Chairperson, I had been liaising with everyone over Zoom throughout the pandemic, so it was invigorating to see everyone in person again.
Following many informal chats, panel discussions, and presentations, a number of topics emerged as likely to have a strong impact on learning and assessment technology. Here are five of the most talked about.
The standout theme at the conference was diversity and inclusivity.
Though tests and exams provide a gateway to life opportunities for many young people, they’ve been rightly criticized for not giving a fair chance to those from minority groups. In the past, the response was often that tests just measure, and you can no more blame testing for educational inequality than you can blame a ruler for showing that things are different sizes. But there is a growing awareness that better testing is a large part of the equity solution.
Both the opening and the closing keynotes at ATP focused on this, and there were around 25 sessions at the conference on diversity, inclusivity, equity, and bias. These included a panel session led by Learnosity General Manager Brad Baumgartner on “Inclusivity in Hybrid Assessment” and one by Learnosity CEO Gavin Cooney and myself on “Inclusivity and Equity in Testing: Enabling Multilingual Assessment”.
Some of the discourse around equity is about writing tests fairly. For example, good practice is that test authors and reviewers should themselves be diverse, item writing guidelines should be culturally unbiased, and all questions should be reviewed for bias before the test is administered. In addition, post-delivery statistics should be examined to ensure fairness.
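To make that last point concrete, one widely used post-delivery check is Mantel-Haenszel differential item functioning (DIF) analysis, which asks whether test takers of similar overall ability but from different groups have different odds of answering an item correctly. Here’s a minimal sketch in Python; the data layout and function name are my own illustration rather than any vendor’s API:

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(item_correct, group, total_score):
    """Estimate Mantel-Haenszel DIF for a single item.

    item_correct -- list of 0/1 flags: did each test taker get the item right?
    group        -- list of "ref"/"focal" labels, one per test taker
    total_score  -- list of total test scores, used to stratify by ability
    """
    # Stratify by total score so we compare test takers of similar ability.
    strata = defaultdict(lambda: [0, 0, 0, 0])  # [A, B, C, D] per stratum
    for correct, g, score in zip(item_correct, group, total_score):
        cell = strata[score]
        if g == "ref":
            cell[0 if correct else 1] += 1  # A: ref correct, B: ref incorrect
        else:
            cell[2 if correct else 3] += 1  # C: focal correct, D: focal incorrect

    num = den = 0.0
    for a, b, c, d in strata.values():
        n = a + b + c + d
        num += a * d / n  # reference-correct x focal-incorrect
        den += b * c / n  # reference-incorrect x focal-correct

    alpha = num / den                # common odds ratio across strata
    delta = -2.35 * math.log(alpha)  # ETS delta scale
    # Rough ETS convention: |delta| < 1 negligible, 1-1.5 moderate, > 1.5 large DIF
    return alpha, delta
```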
But the implications for learning and assessment technology go much further. Obviously, to be inclusive for all learners and test takers, including those at home, we need to support a wide range of devices, including inexpensive ones, and be able to work around intermittent connectivity. But there are many other factors to consider too.
One is that it’s a mistake to assume that everyone speaks English or a given country’s native language. Learners themselves might need support in multiple languages and (not so obviously) so might their parents. Parental involvement is valuable for learner success, and parents may need help in different languages to coach, mentor, and support learners.
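In practice, that means assessment content and the messaging around it need to be translatable and served per locale, with sensible fallbacks. A toy sketch, assuming a made-up data layout (ITEM_TEXT and localized_stem are illustrative, not a real product API):

```python
# Hypothetical per-locale item text; the layout and names are illustrative only.
ITEM_TEXT = {
    "item-42": {
        "en": "What is 7 x 8?",
        "es": "¿Cuánto es 7 x 8?",
        "pt": "Quanto é 7 x 8?",
    },
}

def localized_stem(item_id, preferred_locales, default="en"):
    """Return an item's stem in the first preferred locale available."""
    texts = ITEM_TEXT.get(item_id, {})
    for locale in preferred_locales:
        if locale in texts:
            return texts[locale]
    # Fall back to the default language rather than failing outright.
    return texts.get(default)

# A Portuguese-speaking parent reviewing their child's practice test:
print(localized_stem("item-42", ["pt", "en"]))  # -> "Quanto é 7 x 8?"
```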
This is just the tip of the iceberg. There is, and will be, much more for technology to do to become genuinely equitable and inclusive.
Also much spoken about was accessibility. I counted 12 sessions on different angles of it at the conference. These included “Accessibility in Digital Assessments 101”, a session by Gavin Cooney, Peter Ramdsell from Learnosity partner Texthelp, and myself, where we introduced why digital assessments need to be accessible and some key steps to make them so.
One potential trend is that accessibility requirements for learning and assessment technology have tended to be stronger in schools/K-12 than in adult/professional learning and certification. But this may be changing.
Employees entering the workforce expect the approaches technology takes in school learning to be replicated in corporate and professional learning, and are willing to speak up when they are not. Employers recognize the value that diversity brings in adding a rich variety of people, perspectives, and insights to their teams. If you treat accessibility as a compliance tick box instead of a core principle of usability, now might be a good time to revise your thinking.
Pre-pandemic, there were lots of measurement professionals who genuinely thought the only way to securely test people was to bring them to a physical test center and assess them in person. However, the pandemic has forced everyone to adopt remote testing, usually with an online proctor observing the test on video—with general success.
There are lots of debates on how to increase remote testing’s efficacy and security, but the overwhelming majority of measurement experts now believe that it’s a credible, trustworthy way to deliver even high stakes tests.
This is game-changing for assessment, but it may be even more so for distance learning, which has sometimes been seen as the poor relation of in-person learning: convenient for the learner but less distinguished than an in-person course. Part of this perceived quality gap comes down to how trustworthy and credible the end-of-course assessment is.
If remote testing is genuinely on par with in-person testing, this could significantly help the perceived quality of distance learning more generally.
For those of us who are new to it, biometric ways of identifying people, like facial recognition, can feel close to “magic”. For example, when I boarded my plane from Florida to London, I didn’t need to show my passport or boarding card; facial recognition allowed me to walk onto the plane without paperwork.
Past ATP conferences have seen a lot of interest in biometrics for test taker identification and for reducing test fraud, but biometrics is now very much out of fashion. There has been a lot of angry commentary on how unfairly facial recognition treats some demographics. Other forms of biometrics may raise similar concerns and, in any case, are restricted by various laws and regulations.
There is a lot of talk about AI in the assessment community and it’s a topic that was given wide coverage at the conference.
Some organizations are using it to help write questions or decide which questions are delivered to a test taker; some are using it to score voice or essay responses; and some are using it to identify possible cheating and flag it to a human for review. Some of this is more algorithmic than “intelligent” and much of it is experimental, but it is increasing.
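To illustrate the “more algorithmic than intelligent” end of that spectrum, here is a toy screen, in Python, that flags unusually fast test takers for a human to review; the function name and threshold are my own illustration, not a description of any vendor’s system:

```python
import statistics

def flag_fast_responders(mean_response_times, z_cutoff=-2.5):
    """Flag test takers whose average response time is anomalously fast.

    mean_response_times -- dict mapping a test taker ID to their mean
    seconds-per-question. The output is a list for *human review*,
    not an automatic verdict of cheating.
    """
    times = list(mean_response_times.values())
    mu = statistics.mean(times)
    sigma = statistics.stdev(times)  # needs at least two test takers
    return [
        taker
        for taker, t in mean_response_times.items()
        if (t - mu) / sigma < z_cutoff  # far faster than the cohort average
    ]
```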
The core concept of computational psychometrics is that new technology allows us to gather huge amounts of data from people taking assessments: every keystroke or interaction with conventional questions, video and audio, learners’ communications within course material, simulations, performance data, and more. We can then derive meaning and skill judgments from this data, potentially more effectively than traditional assessment does.
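To give a feel for what that raw data might look like, here’s a hypothetical event schema and a tiny feature extractor of the kind such analysis starts from; all names and fields are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    taker_id: str
    item_id: str
    kind: str         # e.g. "keystroke", "answer_change", "submit"
    timestamp: float  # seconds since the session started

def item_process_features(events):
    """Summarize one test taker's behaviour on one item into simple features."""
    times = [e.timestamp for e in events]
    return {
        "time_on_item": max(times) - min(times) if times else 0.0,
        "answer_changes": sum(e.kind == "answer_change" for e in events),
        "event_count": len(events),
    }
```

Features like these, aggregated across many items and many test takers, are the raw material from which richer skill judgments might be derived.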
Computational psychometrics is a new field, and one that will have to tread carefully around the privacy of learners and test takers. However, it’s worth keeping an eye on for those in learning and assessment technology.
[I’d recommend the book Computational Psychometrics: New Methodologies for a New Generation of Digital Learning and Assessment, for further reading.]
Despite the transformative power of technology in countless areas, there’s nothing like the immediacy of face-to-face conversations with colleagues, peers, and friends. The experience of attending events such as the ATP conference gives a greater sense of the growing energy, urgency, and momentum in the industry. I hope this summary provides some insight into where it might lead.