
AI and assessment: The CEO perspective


Gavin Cooney, Learnosity co-founder and CEO, on AI in assessment: its potential, pitfalls, and future impact.

Q. We know tech evolves rapidly. Just over a hundred years ago, most homes didn’t have electricity. Since then we’ve had things like TV & radio, personal computers, the Internet, smartphones—all of which helped launch major societal changes. Do you think AI will have a similar impact, or is it somehow different?

GC: As a philosopher [Heraclitus] once said, “The only thing that is constant is change”.

And it’s happening more rapidly now than ever. To use the examples you’ve already given, electricity changed the world but wasn’t widely available to people for 50 years or so. It took a few decades before computers were everywhere. It took less time again to get smartphones into people’s hands. The adoption curve with AI is something else entirely, and it’s already blowing everything else out of the water. There are three primary conditions driving this: 1) consumer comfort, 2) network effects, and 3) exponential tech advancements (also known as Moore’s law—see graph below).

Line graph of Moore’s law: the number of transistors per microprocessor over time.
Source: https://ourworldindata.org/grapher/transistors-per-microprocessor

In short, each new tech advance paves the way for others. In my opinion, whole new economies of innovation will emerge from AI—and they’ll arise more quickly than at any time previously. 

Line graph of technology adoption trends in the US from 1930 to 2019.
Source: Esteban Ortiz-Ospina (2019), “The rise of social media”. Retrieved from https://ourworldindata.org/rise-of-social-media

Q. The rapid adoption is clear for all to see. Even though it feels like the concept of AI has been around forever in popular culture, it seemed to become a reality almost overnight. Can you describe your first encounter with it? What was your a-ha moment regarding its potential?

GC: Like most people, I knew that AI was coming down the tracks, but as the CEO of a tech company, I guess you could call me an early adopter. 

The first time I used generative AI, I prompted it to write a scene from Friends, only set in 20 years’ time. It churned that out easily. So I went more abstract and asked it to write a scene from Friends crossed with Breaking Bad. It gave me a detailed script with stage directions, dialogue, the lot. Walt and Jesse were talking and Joey was there in the background doing Joey stuff. Maybe the crossover wasn’t perfect, but it was still an OMG moment for me: this kind of instant imagining that could bridge two totally disparate things, which I could then control, change, and improve. I knew right away it would be a game-changer across all industries—and definitely for education.

Q. On that point, how does AI fit into Learnosity’s mission as a company?

GC: To restate it, our mission as a company is to advance education and learning, worldwide, with best-in-class technology.

That’s the lens we put on all our work. AI is no different in that respect. Trends come and go so we always need to take a big-picture view—does this feature or product add real value for our users and for their users in the long term? Does it make their lives easier? We need to ask those kinds of questions.

Back in April I attended the ASU+GSV conference in San Diego. There was a livestream interview with Sam Altman, the co-founder and CEO of OpenAI. On a side note: he didn’t make it to the conference, so he actually dialled in from his car, which was pretty weird. Still, he spoke about how better tools make us more ambitious. I agree with that point. AI raises the bar in allowing us to do a lot of new, transformative things. This is most obvious with generative AI, which has mass-market appeal because its use isn’t restricted to data scientists or engineers. It has massive implications for everyone in education—from product owners and publishers to teachers and learners.


But to go back to how AI fits into our mission—we’re an API company. That means we can abstract complexity into an API, and that’s exactly what we’re doing with AI. It’s what OpenAI and other companies are doing too. So while companies in the “before times” had to do everything themselves, now AI and all its potential is just ready and waiting for them.

Just think about training an AI to do voice-based search. It’d take endless resources to do well. But now it’s easy to access thanks to the voice APIs developed by the likes of Google or Apple.

The game now is multiple companies building on top of these AI APIs. The AI piece is ‘solved’, and it’s the job of entrepreneurs to build applications on top of this—essentially workflows on top of an API. Our ambition is to make AI an easily accessible tool within assessment.
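
To make the “workflows on top of an API” idea concrete, here is a minimal sketch of a thin authoring workflow layered over a general-purpose generative AI API. It is purely illustrative and not Learnosity’s actual implementation or Author Aide’s design: it assumes the OpenAI Python client, and the model name, prompt wording, and JSON output shape are assumptions made for the example.

```python
# Illustrative sketch only: a thin assessment-authoring "workflow" on top of a
# generative AI API. The prompt, model name, and output shape are assumptions,
# not Learnosity's actual implementation.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_multiple_choice(passage: str, learning_objective: str) -> dict:
    """Ask the model for one draft multiple-choice item, returned as JSON for human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "You write draft assessment items for human review. "
                    "Reply with JSON containing: question, options (a list of 4), answer."
                ),
            },
            {
                "role": "user",
                "content": f"Objective: {learning_objective}\n\nSource passage:\n{passage}",
            },
        ],
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    item = draft_multiple_choice(
        passage="Moore's law observes that transistor counts on microprocessors roughly double every two years.",
        learning_objective="Explain what Moore's law describes.",
    )
    print(json.dumps(item, indent=2))
```

The point of the sketch is where the effort sits: the “hard” AI piece lives behind a single API call, while the application layer is mostly prompt design, validation, and human review of the drafts.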

Q. Where do you see the greatest opportunities for AI to enhance assessment? 

GC: Our initial AI product, Author Aide, is mainly focused on assessment content authoring. It’ll be a huge productivity booster for test creators—we’re talking a 10x increase in author output—while also dramatically improving content quality and making question types themselves more interactive. There are so many ways AI can be used to deliver better, more timely learner feedback or increase engagement too. 

Down the line, the possibilities are endless. It’s really just about what people want to create with AI. Our job is to streamline the process for them as much as possible, to optimize it, make it readily available and easy to use.

Q. There are some valid concerns over AI in assessment too though, right? Things like plagiarism, misinformation, misuse, copyright issues…how do we mitigate those risks and safeguard, as far as possible, responsible use of the technology?

GC: There are valid concerns, for sure.

It’s tempting to dismiss them as simply resistance to change. There are always concerns in the face of major change. Oral cultures were hostile to writing because they thought it would weaken their memory. The printing press freaked so many people out that the Pope threatened excommunication for anyone who printed a book, and guilds went around destroying the printing presses themselves.

In an educational context, even calculators caused a furore when they were brought into schools, because people feared students would lose the ability to perform mental arithmetic.

Existing publishers have legit concerns that these large language models are just trained on their copyrighted material that’s already on the web. Some I’ve spoken to have put it less politely.

My opinion is that the conversation around AI really shouldn’t be binary—it’s neither all good nor all bad.

What’s a fact, though, is that AI technology is in the public realm right now. The balloon has popped, the horse has bolted, the genie is out of the bottle—choose whichever metaphor you like. There is no going back now. So how do we use it responsibly? How do we weigh short-term use against long-term implications?

The concerns you raise are things we have to find ways to overcome. And we can. We can train language models to work within the parameters of our customers’ content. We can use plagiarism checkers. There’ll always be ways of dealing with misuse. That’s just another part of our job.

Q. What future uses can you imagine for AI in learning in general, and assessment more specifically?

GC: In truth, it’s impossible to predict where and how things will change. It’d be like trying to predict stock prices or currency exchange rates. When search became the big thing, you had companies like Yahoo and AltaVista leading the way. You couldn’t have known that a little search engine called Google would come to totally dominate and shape the market.

Things will change, that much we know. I think that how we interact with learning material and assessment will look a lot different in ten years’ time. What costs millions to develop in AI now will cost a fraction in future. The trick is to stay informed so you know what’s worth pursuing and investing your resources in.

Q. I think I know the answer to this one already, but I’ll ask anyway. You’re a big fan of movies. Which vision of the future are we most likely to see: Back to the Future 2 or Terminator 2?

GC: Ah, nice question! Are humans served in the future or do they get served—that’s the gist of it, right?

The way I see it, Terminator 2 was a cautionary tale. What happens when there’s no regulation or oversight? What happens if scientists make all the decisions and get carried away by what they’ve made possible? Regulation is essential to prevent things getting away from us. Creators and owners shouldn’t be given free rein. We need guardrails to protect against outgrowths of god-knows-what—extremism, mis- and disinformation. As I mentioned earlier, AI is neither all good nor all bad, but it does require some kind of democratic process that allows us to develop a clear understanding of what could change and how it’ll impact the future. If we manage that, I’d strongly lean toward the future being more like the one in Back to the Future 2.

See what we’ve created with AI. Meet Author Aide.

Gain a competitive edge with industry-leading insights on what AI means for assessment with our downloadable AI guidebook.


Note: AI-generated feature image of a DeLorean car created with craiyon.com.

Micheál Heffernan

Senior Editor
