GC: As a philosopher [Heraclitus] once said, “The only thing that is constant is change”.
And it’s happening more rapidly now than ever. To use the examples you’ve already given, electricity changed the world but wasn’t widely available to people for 50 years or so. It took a few decades before computers were everywhere. It took even less time to get smartphones into people’s hands. The adoption curve with AI is something else entirely, and it’s already blowing everything else out of the water. There are three primary conditions driving this: 1) consumer comfort, 2) network effects, and 3) exponential tech advancements (see Moore’s law and the graph below).
In short, each new tech advance paves the way for others. In my opinion, whole new economies of innovation will emerge from AI—and they’ll arise more quickly than at any time previously.
GC: Like most people, I knew that AI was coming down the tracks, but as the CEO of a tech company, I guess you could call me an early adopter.
My first time using generative AI, I remember I prompted it to write a scene from Friends, only set in 20 years’ time. It churned that out easily. So I went more abstract and gave it a prompt to write a scene from Friends crossed with Breaking Bad. And it gave me a detailed script with stage directions, dialogue, the lot. Walt and Jesse were talking and Joey was there in the background doing Joey stuff. Now maybe the crossover wasn’t perfect, but it was still an OMG moment for me. This kind of instant imagining could bridge two totally disparate things that I could then control, change, improve. I knew right away it would be a game-changer across all industries—and definitely for education.
GC: To restate it, our mission as a company is to advance education and learning, worldwide, with best-in-class technology.
That’s the lens we put on all our work. AI is no different in that respect. Trends come and go so we always need to take a big-picture view—does this feature or product add real value for our users and for their users in the long term? Does it make their lives easier? We need to ask those kinds of questions.
Back in April 2023 I attended the ASU+GSV conference in San Diego. There was a livestream interview with Sam Altman, the co-founder and CEO of OpenAI. On a side note, he didn’t make it to the conference so actually dialled in from his car, which was pretty weird. Still, he spoke about how better tools make us more ambitious. I agree with that point. AI raises the bar in allowing us to do a lot of new, transformative things. This is most obvious with generative AI, which has mass market appeal because its use isn’t restricted to data scientists or engineers. It has massive implications for everyone in education—from product owners and publishers to teachers and learners.
But to go back to how AI fits into our mission—we’re an API company. That means we can abstract complexity into an API. That’s what we’re doing with AI. That’s what OpenAI and other companies are doing with AI. So while companies in the “before times” had to do everything themselves, now AI and all its potential is just ready and waiting for them.
Just think about training an AI to do voice-based search. It’d take endless resources to do well. But now it’s easy to access thanks to the voice APIs developed by the likes of Google or Apple. Our ambition is to make AI an easily accessible tool within assessment.
The game now is multiple companies building on top of these AI APIs. The AI piece is ‘solved’, and it’s the job of entrepreneurs to build applications on top of this—essentially workflows on top of an API.
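As a rough illustration of what “workflows on top of an API” can look like, here is a minimal sketch in Python. All names, the prompt wording, and the expected reply format are hypothetical, not any real product’s API; a stub stands in for the actual LLM call so the sketch is self-contained.

```python
# Minimal sketch of an assessment-authoring workflow built on top of an
# LLM API. The model's complexity sits behind a single callable; the
# "workflow" is just prompt construction plus light post-processing.
# All names here are illustrative, not a real product API.

def generate_quiz_item(topic, complete_fn):
    """Ask a model for a multiple-choice item and parse the reply.

    `complete_fn` is any callable that takes a prompt string and returns
    the model's text, e.g. a thin wrapper around a hosted LLM API.
    """
    prompt = (
        f"Write one multiple-choice question about {topic}. "
        "Format: first line is the question, the next four lines are "
        "options labelled A-D, the last line is 'Answer: <letter>'."
    )
    raw = complete_fn(prompt)
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    return {
        "question": lines[0],
        "options": lines[1:5],
        "answer": lines[-1].replace("Answer:", "").strip(),
    }

# A stub model so the sketch runs without network access; a real
# workflow would swap in an actual API client here.
def stub_model(prompt):
    return "What is 2 + 2?\nA) 3\nB) 4\nC) 5\nD) 22\nAnswer: B"

item = generate_quiz_item("basic arithmetic", stub_model)
print(item["answer"])  # B
```

The point of the sketch is the shape, not the parsing: swapping `stub_model` for a real API client is the only change needed, which is what makes building on these APIs so accessible.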
GC: Our initial AI product, Author Aide, is mainly focused on assessment content authoring. It’s a huge productivity booster for test creators—we’re talking a 10x increase in author output—while also dramatically improving content quality and making question types themselves more interactive. There are so many ways AI can be used to deliver better, more timely learner feedback or increase engagement too.
Down the line, the possibilities are endless. It’s really just about what people want to create with AI. Our job is to streamline the process for them as much as possible, to optimize it, make it readily available and easy to use.
GC: There are valid concerns, for sure.
It’s tempting to dismiss them as simply resistance to change. There are always concerns in the face of major change. Oral cultures were hostile to writing because they thought it would weaken their memory. The printing press freaked so many people out that the Pope was threatening excommunication to anyone who printed a book and guilds were running around destroying the printing presses themselves.
In an educational context, even calculators caused a furore when they were brought into schools because people feared they’d lose the ability to perform mental arithmetic.
Existing publishers have legit concerns that these large language models are just trained on their copyrighted material that’s already on the web. Some I’ve spoken to have put it less politely.
My opinion is that the conversation around AI really shouldn’t be binary—it’s neither all good nor all bad.
What’s a fact though, is that AI technology is in the public realm right now. So the balloon has popped, the horse has bolted, the genie is out of the bottle—you can choose whatever metaphor you like for it. There is no going back now. So how do we use it responsibly? How do you weigh short-term use against long-term implications?
The concerns you raise there are things we have to find ways to overcome. And we can. We can train language models to work within the parameters of our customers’ content. We can use plagiarism checkers. There’ll always be ways of dealing with misuse. That’s just another part of our job.
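To make “work within the parameters of our customers’ content” a bit more concrete, one common pattern is retrieval-grounded prompting: fetch the relevant passages from the customer’s own material first, then instruct the model to answer only from them. The sketch below is a toy illustration of that pattern, not Learnosity’s actual approach; the keyword retriever stands in for a real embedding-based search.

```python
# One possible shape for keeping generation inside a customer's own
# content: retrieve relevant passages first, then instruct the model
# to answer only from them. Names and prompt wording are illustrative.

def grounded_prompt(question, passages):
    """Build a prompt that restricts the model to supplied passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the source material below. If the answer "
        "is not in the material, say so.\n"
        f"Source material:\n{context}\n"
        f"Question: {question}"
    )

def retrieve(question, corpus, k=2):
    """Toy keyword-overlap retriever; real systems use embeddings."""
    words = set(question.lower().split())
    scored = sorted(corpus, key=lambda p: -len(words & set(p.lower().split())))
    return scored[:k]

corpus = [
    "Photosynthesis converts light energy into chemical energy.",
    "The mitochondrion is the powerhouse of the cell.",
    "Paris is the capital of France.",
]
passages = retrieve("What does photosynthesis convert?", corpus)
prompt = grounded_prompt("What does photosynthesis convert?", passages)
```

The grounding lives in two places: the retriever only ever surfaces the customer’s material, and the prompt explicitly forbids answering from anything else.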
GC: In truth, it’s impossible to predict where and how things will change. It’d be like trying to predict stock prices or currency exchange rates. When search became the big thing you had companies like Yahoo and Altavista leading the way. You couldn’t have known that a little search engine called Google would come to totally dominate and shape the market.
Things will change, that much we know. I think that how we interact with learning material and assessment will look a lot different in ten years’ time. What costs millions to develop in AI now will cost a fraction in future. The trick is to stay informed so you know what’s worth pursuing and investing your resources in.
GC: Ah, nice question! Are humans served in the future or do they get served—that’s the gist of it, right?
The way I see it, Terminator 2 was a cautionary tale. What happens when there’s no regulation or oversight? What happens if scientists make all the decisions and get carried away by what they’ve made possible? Regulation is essential to prevent things getting away from us. Creators and owners shouldn’t be given free rein. We need guardrails to protect against outgrowths of god-knows-what—extremism, misinformation, and disinformation. As I mentioned earlier, AI is neither all good nor all bad, but it does require some kind of democratic process that allows us to develop a clear understanding of what could change and how it’ll impact the future. If we manage that, I’d strongly lean toward the future being more like the one in Back to the Future 2.
See what we’ve created with AI. Meet Author Aide.
Gain a competitive edge with industry-leading insights on what AI means for assessment with our downloadable AI guidebook.👇
Note: AI-generated feature image of a DeLorean car created with craiyon.com.