It may not help you make small talk at the grocery store, but for computer programmer Jeff Dyer, math is the universal tongue of the sciences. And like any other language, with access to the right tools and teachers, it can be understood by anyone.
For many, however, the unfortunate fact is that math remains either an inscrutable mystery or an insufferable bore. It’s not hard to find evidence of the subject’s malaise in the US. PISA data indicate that math performance among 15-year-old students in the US lags well behind that of students in rival economies across Asia and Europe.
And adults in the US don’t fare much better: in a test measuring work skills among 16- to 65-year-olds across 24 developed countries, the US placed 18th out of 24 for numeracy skills. When it came to digital problem solving, it placed last.
The real challenge in counteracting current attitudes toward math is to make the subject more engaging and comprehensible to learners.
But how do you sell the idea of a “language” that is both complex and abstract? How can you determine whether students are genuinely understanding math or just repeating what they’ve been trained to commit to memory?
Attempting to tackle such challenges is no easy feat. But then Jeff Dyer is no ordinary programmer.
Having earned a reputation as one of the lead designers of ActionScript 3 (the programming language of Adobe Flash Player), Jeff was approached by Learnosity’s co-founders to help guide the development of a math scoring engine more flexible and robust than anything education had seen before. With six years of development behind it, Learnosity Math is now the go-to digital math platform for some of the biggest educational publishers in the world.
We spoke to its creator to learn more about how the product came to be and find out what sets it apart.
[The following interview has been lightly condensed.]
I actually started programming because I was interested in computational linguistics – I wanted to use computers to understand spoken languages. This was back in the era of IBM PCs and Apple IIs. I didn’t have much computing power to work with, but you could get some pretty interesting programming languages to play with even on these underpowered machines. I taught myself Lisp, which was the standard language for natural language processing (NLP). I sat in on some linguistics classes at UC Davis, where my wife was getting her teaching credential, and tried to write programs that implemented the ideas of linguistics.
What I do for Learnosity is not that much different. But instead of processing English, my code now processes math.
Gav [Learnosity co-founder and CEO] found my name on a whitepaper I wrote about auto-scoring math exercises. I’d done some work on a math exercise framework. Gav reached out to me in June 2013 and introduced me to Mark [Learnosity co-founder and CTO], and Mark and I started meeting weekly to hash out the design of a new math scoring engine. The work seemed interesting, so I started working part-time, writing a spec with Mark and implementing the new math scoring engine.
Gav and Mark explained that they were looking for help implementing a set of math scoring functions that they had vetted with customers. They’d put together a PDF of requirements for what they wanted. After that, Mark and I met weekly to talk about the design of the engine and iterate on the design in the PDF.
I became interested in auto-scoring math questions and started to look at ways to support authors in expressing how student responses should be validated.
I wrote up a whitepaper outlining how it might be possible to make scoring more flexible and robust. It was shortly after this that Gavin reached out to me to work together.
The code was based on prior work I had done with parsers and translators. Learnosity Math was just a new application of those patterns. That said, the hardest, and most rewarding, work was in translating the “paper and pencil” algorithms we learn as math students into algorithms that can be executed by a computer.
Perhaps the greatest factor in the success of Mathcore [Ed: Mathcore is the name used internally to refer to Learnosity Math’s scoring engine] is our testing strategy. We recognized early on that we needed to focus testing on use cases that our customers care about.
This may seem obvious, but we could have easily missed that point and just written lots of tests based on our own understanding of math and Mathcore.
So what we did was hire math educators (students and a professor) from the University of Colorado, Boulder to write use cases. They used the K-12 curriculum Engage New York and the Common Core standards as their guide. We used a cloud-based platform I created to quickly capture the use cases. To date, they have written over thirteen thousand test cases in this way. These tests get run with every release to ensure that nothing is broken by the changes. This work continues as we move into supporting calculus and other higher-level math subjects.
In addition to the thousands of use case tests, we also run Mathcore over all known customer content to further ensure that the behavior of Mathcore doesn’t change unexpectedly from release to release.
The power of our scoring engine is in its ability to give students an engaging practice environment and immediate feedback. It gives teachers real-time insight into their students’ abilities while freeing them from the rote work of grading assessments.
Automatic feedback allows students to know when they are doing a problem correctly or incorrectly. It’s much harder to relearn how to do a skill correctly once it has been repeatedly practiced incorrectly. Practice does not make perfect. Perfect practice makes perfect. Automatic feedback allows students to correct problems before inaccurate pathways are written into their brains.
Real-time analytics enabled by auto-scoring allow teachers to quickly identify and intervene with struggling students and to provide enrichment for those who are ready for more. By automating the scoring of formative assessments, teachers have more information and time to address the individual needs of their students.
I read a lot, mostly non-fiction for both professional and personal growth. I enjoy just chilling with my wife. Thinking and journaling about life. Visiting with friends and family, usually in a coffee house or pub or on our front porch. Traveling. Drawing. Walking. Working on various side projects like woodworking or making new programming languages.
“The fact that the physical world reflects the computational properties of arithmetic has a profound implication. It means that, in a sense, the physical world is a computer.”
That the world reflects properties of arithmetic might say something about it being a calculator, but not a computer.
My view, to the degree that I have one, is that physical universes follow rules but that we understand only an infinitesimally small subset of those rules. We use math to describe the rules as best we can but in general only as a dim approximation of the truth. Math is a language that humans have created to think about quantities and qualities we perceive around us. My sense is that math, and language in general, are blunt tools for describing that reality. If we could get inside the mind of God we would see a very different formulation of the universe.