Michael Lai – Avra

Imagine if you couldn’t type or use a mouse.

If it took you 20 minutes to complete a single action on a computer.

If typing out three simple sentences was frustrating enough to leave you in tears.

Michael Lai may not have experienced all of this first-hand, but he knows someone who has: a young boy from KidsAbility who appeared at a University of Waterloo lecture last year.

“I was just sitting there like, what in God’s name is going on, like why is this so difficult?” he says, wondering why a better solution hasn’t been developed.

After the presentation, he had the chance to meet the boy and take a peek at the hardware and software he used. It was basically a tablet, with two fist-sized buttons: a green one for “yes” and a red one for “next.”

To operate the tablet, the boy had to navigate through a series of windows. “He’s selecting boxes within boxes. It was primitive,” says Michael – as if the software was developed in 2000 and sat untouched since then.

When he went into KidsAbility to visit the facility, it got worse. “The current technology wasn’t working well – it didn’t even work in the demo while I was there,” he says. “It had to restart multiple times. It just crashed.”

The co-founder of Avra saw this as an opportunity. Leading up to that lecture, Michael had been researching virtual reality technology and human-computer interaction. In particular, he was interested in eye tracking, and how to interface between our eye movement and our devices.

“I just thought, perfect, you can just use your eyes, right? Why don’t you just look at something that knows you’re looking, and bam, you don’t need anything else?” he says.

In other words: why not do away with the fist-sized buttons and a crash-prone specialized tablet, and use eye tracking on a regular computer to do the same work?

But eye tracking technology just wasn’t there yet. “Even the best eye tracker, you don’t have nearly enough accuracy to control a normal computer,” he says.

The first hurdle is calibration. The current leading eye tracker uses three infrared dots that shine in your eyes and reflect light back. Those readings are combined with data about the curvature of your eye to determine where you're looking.

“You have to recalibrate every time you use it because those dots and the way your eyes reflect, many things can change it,” says Michael. Wearing contact lenses, for example, is one obstacle, as they dry out and change the way the light reflects and refracts.
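To make the stakes of recalibration concrete, here's a minimal, purely illustrative sketch (not Avra's or any vendor's actual method): calibration amounts to fitting a mapping from the tracker's raw readings to screen coordinates, and anything that shifts the reflections — contact lenses, lighting — invalidates the fitted gain and offset. All data values below are made up.

```python
# Illustrative sketch: calibration maps raw eye-tracker readings to screen
# pixels. Here, a simple per-axis linear fit (gain + offset) from a few
# known calibration targets — the kind of mapping that drifts when
# reflections change and forces a recalibration.

def fit_axis(raw, screen):
    """Least-squares fit of screen = gain * raw + offset for one axis."""
    n = len(raw)
    mean_r = sum(raw) / n
    mean_s = sum(screen) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(raw, screen))
    var = sum((r - mean_r) ** 2 for r in raw)
    gain = cov / var
    offset = mean_s - gain * mean_r
    return gain, offset

# Calibration step: the user looks at known on-screen targets while the
# tracker reports raw horizontal readings (hypothetical numbers).
raw_x    = [0.10, 0.50, 0.90]     # raw readings from the tracker
screen_x = [0.0, 960.0, 1920.0]   # known target x-positions in pixels

gain, offset = fit_axis(raw_x, screen_x)
print(round(gain * 0.30 + offset))  # map a fresh raw reading to pixels: 480
```

A real tracker fits a far richer model (both axes, head pose, eye curvature), but the fragility is the same: if the raw readings shift, the fitted map points at the wrong pixels until you recalibrate.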

The second hurdle is lighting. Things go wrong “as soon as there’s sunlight, or an abundant amount of light, or any sort of light that’s kind of weird, or if there’s a light in the background,” he says. All of these can distort the result that goes back to the computer.

The third hurdle is that, once everything’s set up, you can’t touch it. Even a small jolt could throw everything off. That means, in practical uses, you’re recalibrating every 5, 10, or 20 minutes.

And the fourth hurdle is the fact that our eyes aren’t precise to begin with. “It doesn’t gaze; it wanders most of the time. If I look at you, for example, I don’t specifically look at the middle of your nose, the middle of your eye. What I do is I look at the general area of you,” he says. “Unless I’m intentionally looking into your eyes, it’ll be in the vicinity because I know they’re there.”
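One common way interfaces cope with this wandering — a standard accessibility technique, not necessarily what Avra does — is dwell selection: instead of demanding pixel accuracy, the system registers a “click” once noisy gaze samples stay within a target's radius for long enough. A minimal sketch, with invented coordinates:

```python
# Illustrative sketch of dwell-based selection: imprecise gaze still
# selects a target reliably if consecutive samples stay inside the
# target's radius for a set dwell count.

import math

def dwell_select(samples, targets, radius, dwell_needed):
    """Return the index of the first target fixated for `dwell_needed`
    consecutive samples, or None. Samples and targets are (x, y) points."""
    streak, current = 0, None
    for x, y in samples:
        hit = None
        for i, (tx, ty) in enumerate(targets):
            if math.hypot(x - tx, y - ty) <= radius:
                hit = i
                break
        if hit is not None and hit == current:
            streak += 1
        else:
            current, streak = hit, (1 if hit is not None else 0)
        if current is not None and streak >= dwell_needed:
            return current
    return None

# The gaze wanders near the second button but never lands exactly on it.
buttons = [(100, 100), (300, 100)]
gaze = [(290, 95), (305, 108), (298, 102), (310, 97)]
print(dwell_select(gaze, buttons, radius=40, dwell_needed=3))  # → 1
```

The design point: the target's radius absorbs the “vicinity” imprecision Michael describes, and the dwell count filters out momentary wandering.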

These are all problems Avra is aiming to fix, says Michael, with its glasses-based system for tracking eye movement and ring accessory for clicking and scrolling.

As an Avra user himself, he actually finds it much easier to use than a standard computer 85% of the time, when he’s just browsing away from the keyboard. So for him, the solution goes beyond helping people with disabilities. It has the potential to change how we work and interact with our computers.

“The whole office setup is for one reason and one reason only: mouse and keyboard. Well… I guess that’s two reasons. But the input is the only reason why you have this setup, why you have all this back pain and neck pain,” he says.

With eye tracking, “you can lean back, sit on the couch the same way as you watch TV, and the good thing about glasses is it has no restriction on distance,” he says. “If you have a 15-inch screen you can stand as far as you can stand while still being able to see what’s on the screen. And if that’s the case then we can tell what you’re looking at.”

It’s a great complement to voice recognition technology as well. “If you were trying to use voice recognition as your sole way of interaction, there are a lot of things that you’re going to have a very difficult time doing. Selecting for example,” says Michael. Eye tracking makes that stuff easy, and voice recognition picks up where eye tracking is weaker: typing and inputting words and numbers.

But he doesn’t forget where he started, with accessibility. “For people like you and me, you probably are a little bit reluctant to wear glasses that are a little heavy and maybe emit some heat, and that you need to charge,” he says. “But for people like this child, it would be a huge improvement.”
