Moving At The Speed Of Thought – Learnosity Hackathon 2017: Part 2


Read about the ideas, the obstacles, the processes, and the results of our teams’ Hackathon 2017 projects.

Last week we took a look at some of the rapid-fire innovations that the Learnosity team in Sydney took on in the course of a day. In Part 2 of our Hackathon 2017 report, we pick up where we left off and take a look at yet more imaginative projects – from survey builders and Slack integration to hi-tech beer kegs and programming robots to recognize colors…

LRN Surveys

Team members: Andy, Andrew, and Michael S

Andy and Andrew recall their project and what they learned along the way.

The LRN Surveys hack day project’s goal was to use the Learnosity Activity editor, the Assessment APIs (Learnosity’s question rendering and interaction handling products) and Reports API (Learnosity’s analytics rendering product) to create a standalone survey builder workflow for creating, conducting, and reporting on simple surveys. As an additional exercise, the team also created a Slack app that used Learnosity’s Data API to source and deliver those same surveys through Slack, our workplace communications app.

The team had two primary tasks:

  1. Build a survey workflow environment that was simple, clean, and easy to use (available across both mobile and desktop browsers)
  2. Integrate the surveys with Slack.

Andy and Michael teamed up to work on the workflow and design, which had higher usability requirements, while Andrew took on the Slack app documentation, which involved more trial and error.

The app was good fun to build, and we branded the experience to match our product. The Slack integration was a success, and the app will continue to receive upgrades with every new release of our authoring platform.

An added bonus was the app's cross-device support: if you're away from your desk, you can respond to any questionnaire or poll straight from your mobile, and all responses are captured and safely stored in our Item Bank.

Kegosity Version 2.0

Team members: Alan, Jarrod, Pierre, and Olivier

Olivier reflects on a project so big it spread across not one, but two hack days!

General info

We have a kegerator in the office, which allows us to have some nice brews on tap.

However, we lacked crucial information about how much beer remains, who is pouring a cold one, and how often they do so. In a previous hack day, we tried to answer those questions with Wii-Fit scales. They proved precise enough to detect the change in weight during a pour, and we hacked together a web dashboard showing when this was happening. The problem was that the whole setup was not very solid – the bulk of the Wii-Fit made it hard to change the kegs, and the SD card of the Raspberry Pi powering the system went walkabout shortly after. A fun project, but unfortunately it fell into disuse pretty quickly.

Our goal this time was to fix all the outstanding issues from the previous project:

  • Have a clean and sturdy setup
  • Replace the Wii-Fit with something to detect the flow of beer
  • Add person and face detection to keep stats
  • Polish the dashboard

Flow of the project

Alan got an early start on the physical front so we had a screen sturdily mounted on a pole with all the machinery neatly tucked behind it – leaving us to focus on the features. For the pour detection we thought about using a flow sensor, but were concerned about it disrupting, well, the flow (nobody likes a really frothy head!). We realized, however, that we could simply measure the temperature in the line and check for drops indicating a pour.

We just needed a small heat source and a thermometer to measure the change. Alan made this happen with a big resistor on constant power to heat a small part of the metal tap, and a 12-bit temperature sensor to check for changes, all neatly tucked inside the casing of the kegerator. He wrote a Python script to track the temperature and transform it into events.
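As a rough sketch of how a script like that might turn a stream of temperature readings into pour events – the class name, thresholds, and event names below are illustrative, not the values from the actual kegerator:

```python
from enum import Enum


class TapState(Enum):
    IDLE = 0
    POURING = 1


class PourDetector:
    """Turns line-temperature readings into pour events.

    A heated spot on the tap cools rapidly while beer flows past it,
    so a sustained drop below the idle baseline signals a pour.
    Baseline and drop thresholds here are illustrative only.
    """

    def __init__(self, baseline_c=30.0, drop_c=3.0):
        self.baseline = baseline_c
        self.drop = drop_c
        self.state = TapState.IDLE
        self.events = []

    def feed(self, temp_c):
        # Start threshold: baseline - drop; recover at half the drop
        # to add hysteresis and avoid flapping near the boundary.
        if self.state is TapState.IDLE and temp_c < self.baseline - self.drop:
            self.state = TapState.POURING
            self.events.append("pour_started")
        elif self.state is TapState.POURING and temp_c >= self.baseline - self.drop / 2:
            self.state = TapState.IDLE
            self.events.append("pour_finished")
```

Feeding it a cooling-then-warming sequence of readings yields one started/finished pair per pour; the hysteresis gap keeps sensor noise near the threshold from generating spurious events.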

Meanwhile, Pierre dug into the dashboard code. Due to the way we’d initially conducted our QA in the original project (i.e. too much beer and not enough QAs), most of the code was … somewhat hazy. Fortunately, the last changes had not been saved, which gave us a clean slate. Pierre migrated/rewrote it using React, and made it more manageable overall, adding a nice flow animation, plotting the measured temperature, and listing the last pour events. The events are displayed in a list with user avatars fetched from the Slack API.

This information was made available through a WebSocket server, which JMO wrote to expose the pour events from Alan's sensor, plus a placeholder photo-taking and Slack-upload feature while we waited for the facial recognition piece.

I worked on this part, initially on a Kinect, hoping to use the depth information for person detection, followed by facial recognition on the captured image. As it turned out, the pour event is a good enough indication of a person's presence, so we ended up dropping the Kinect and just using OpenCV with a webcam.

What we learned

We learned a few things in the process:

  • A state machine is easier to debug and fix when you're monitoring a constant metric and trying to generate events from it.
  • Pierre wrote a live-chart React component using D3js and used the Slack API to get user avatars for pour events.
  • We encountered a weird issue in Python, where using os.exec() to call cURL to hit the Slack API would fail for no apparent reason. Fortunately, good old os.system() worked wonders instead.
  • I spent a bunch of time making clean-ish interfaces in the Kinect code. They were very pretty but not all that useful in actually getting the system running. That said, they shone when we migrated to the webcam: the switch was easy because only one class needed to be reimplemented; the rest of the system stayed the same.
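The os.exec() failure above has a likely explanation: Python's os.exec*() family replaces the current process image with the new program and never returns, so everything after the call is lost. A minimal sketch of the now-idiomatic subprocess-based alternative – the helper name and URL are hypothetical, for illustration only:

```python
import json
import subprocess


def post_json(url, payload, runner=subprocess.run):
    """POST a JSON payload via curl without replacing the current process.

    os.exec*() swaps the running Python interpreter for curl and never
    returns, which would kill the rest of the script; subprocess.run()
    spawns a child process instead. The `runner` parameter is injectable
    purely so the call can be exercised without network access.
    """
    cmd = [
        "curl", "-s", "-X", "POST",
        "-H", "Content-Type: application/json",
        "-d", json.dumps(payload),
        url,  # placeholder, not a real Slack endpoint
    ]
    return runner(cmd, check=False).returncode
```

Building the command as an argument list (rather than one shell string, as os.system() requires) also sidesteps quoting problems when the JSON payload contains spaces or quotes.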

Choice 2: Robot Pathfinding

As we outlined near the start of Part 1 of this Hackathon report, teams were given two options: “Choose your own HACKadventure” afforded greater autonomy, while “Robot Pathfinding” required a dedicated sense of focus to a particular task.

Lego Mindstorms EV3 sets are a great introduction to robotics for all ages. Using color and proximity sensors on a buggy with a brain, our teams had to program their robots to recognize and react to colors, navigate obstacles, and make their merry way along a set path.

Here is what some of the team had to say about the challenges they faced and what they enjoyed about the task the most.

Proof that foosball tables needn’t always be a distraction.

Grace – Software Engineer

We found the robot works nicely when using one color sensor to track the line. But when we tried to use two color sensors (one for tracking the black line, the other for detecting the green spot), there were a lot of issues.

We needed to consider the turning angle, the distance between these two sensors, the height between the sensor and the ground which could affect the detected color readings. When we put these considerations together, our solutions sometimes contradicted each other, which made it pretty difficult to figure things out.

What was interesting about the task is that we spent a lot of time refactoring the line-tracking algorithm with different values and formulas to make the robot turn more smoothly and stably. I quite liked the process of building the formula and adjusting the settings incrementally. It was a kind of exploratory process. Quite fun!

Don – Senior Software Engineer

It was a bit difficult to deal with the sensor data because it wasn't always reliable, and the EV3 software was a little frustrating to use on Mac.

What I found the most interesting, or surprising, was that even though it’s called pathfinding, it’s not actually following the black line, but following the edge between the two colors.
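Don's edge-following observation can be sketched as a simple proportional controller, a common approach for EV3 line followers. The reflectance range, target, gain, and speeds below are illustrative values, not the team's actual tuning:

```python
def steer_correction(reflectance, target=50.0, gain=0.5):
    """Proportional correction for following the line's edge.

    The color sensor reads high (toward 100) over the white mat and
    low (toward 0) over the black line. Holding the reading at the
    midpoint keeps the robot straddling the boundary between the two
    colors rather than the line itself.
    """
    return gain * (reflectance - target)


def motor_speeds(reflectance, base_speed=200):
    """Split the correction across the two drive motors.

    Too much white -> positive correction -> steer back toward the
    line; too much black -> negative correction -> steer away.
    """
    c = steer_correction(reflectance)
    return base_speed + c, base_speed - c
```

On the edge the correction is zero and both wheels run at base speed; drifting to either side unbalances the wheels in proportion to the error, which is what produces the smooth turning Grace's team was tuning for.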

Michal – Software Tester

I’ve had some experience with graphical programming, but I still think that it’s a great way of getting people into programming and algorithmic thinking – you just move pictures around and don’t have to learn any programming language!

It was great that each team took a different approach to the problem in the competition. The task was a nice blend of two ways of thinking – mechanical and algorithmic. Both require creativity, but of different kinds.

Dmitrii – Business Development Engineer

Our team (“Stab in the Dark”) encountered difficulties with color-sensor calibration. We couldn't properly read the colors from the path, so we ended up playing with “our own” values (treating white as green, and so on). That was the most unpredictable part. As a result, we never got turning working, but we kept trying right to the end – even attaching a third sensor!

Great experience overall. You can see the results as soon as you develop the code – this is the exciting thing. I appreciated the help from other teams as we faced the same problems during the tournament.

If you like a challenge or feel you’d enjoy learning every day by working with a talented team of developers, then come hack with us! You can get in touch through our careers page or by emailing us at

Micheál Heffernan

Senior Editor

