How User Experience Goals Change in edTech Products with Alternative Inputs

Sean Oakes

When SoapBox Labs launched in 2013, they set the bar for speech recognition in learning tools.

By training an AI model specifically on the speech patterns of a wide variety of young learners, they made voice recognition technology more accurate and relevant for learning tools everywhere. 

edTech brands from Scholastic to Amplify jumped on board, incorporating the tech into their own platforms.

What SoapBox Labs accomplished wasn’t easy. For a long time, it was near impossible. Technology simply wasn’t accurate enough to interpret the voices—or the handwriting efforts—of young learners.

Now that speech and handwriting recognition technologies are much more accurate, it’s time for edTech companies to get better at designing for them.

In this article, I’ll help you understand how user experience goals change when you introduce alternative inputs in edTech products, so you can address common product design challenges with ease.

With a strategic plan in place for addressing the UX of complex learning tools, you can better support teachers, improve learning outcomes, prioritize accessible design, and help educators scale their impact in the classroom.

How user experience goals change in learning tools that use voice recognition

Product owners are used to spending the majority of their time perfecting their user interface. 

But for learning tools that use voice recognition, you’ll need to weigh environmental factors, onboarding, and hardware even more than you’re likely used to.

I went back to my team’s most recent user research and classroom testing observations to create a list of common challenges you’ll need to solve to better support user needs.

1. Design for busy, noisy classrooms

Whatever your ideal user scenario is…throw it out the window.

When you introduce voice recognition capabilities into your edTech products for assessment purposes or other learning goals, plan for flexibility.

My team regularly observes users with complex learning tools in the wild. It’s challenging for any teacher to manage young learners while introducing learning technology. Tools with voice recognition have an added layer of complexity.

Learning content in these tools must be easy for students to accomplish in small amounts of time. This helps busy teachers, who are often moving quickly between many activities.

You’ll also have to account for cross-talk and ambient noise. If your assessment tool picks up inputs from other learners, it won’t accurately interpret or evaluate a learner’s recorded submission.

By working backwards from environmental factors like busy, noisy classrooms, you’ll eliminate potential risks in product design and improve overall user experience.
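One way to work backwards from a noisy room is to gate recordings on signal energy: only accept a submission when the learner is clearly louder than the ambient floor. The sketch below is a minimal illustration of that idea, assuming raw PCM samples in the range [-1, 1]; the function names and the 2x margin are assumptions your team would tune against real classroom audio, not values from any particular product.

```typescript
// Minimal RMS noise-gate sketch. Thresholds are illustrative, not tuned.

/** Root-mean-square energy of a buffer of audio samples in [-1, 1]. */
function rms(samples: number[]): number {
  const sumSquares = samples.reduce((acc, s) => acc + s * s, 0);
  return Math.sqrt(sumSquares / samples.length);
}

/**
 * Accept a recording only when the speaker is clearly louder than the
 * room: signal energy must exceed the ambient floor by a margin.
 */
function isLikelySpeaker(
  signal: number[],
  ambientFloor: number, // RMS measured during a quiet calibration pause
  margin = 2.0          // require 2x the ambient energy (assumed value)
): boolean {
  return rms(signal) > ambientFloor * margin;
}
```

A calibration pause at the start of an activity ("shh, let's listen to the room!") can double as a playful onboarding moment while it measures the ambient floor.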

2. Proactively consider hardware needs

Hopefully, your qualitative user interviews and classroom observations helped your product team better understand the classroom environment.

To account for everything from noisy classrooms to old hardware, your team should also develop a list of baseline hardware needs.

This might include hardware like:

  • Headphones
  • Microphones
  • Tablets with a specific operating system
  • And more

Don’t be afraid to get really specific.

Do teachers need to provide students with headphones that have a built-in microphone? 

Will your software work on the most common pieces of technology—even if they haven’t been updated recently?

Considering hardware also helps your team develop onboarding for both teachers and students. 

Design the prompts, reminders, and technology lists your users will need to be successful from the moment they open your app.

3. Support onboarding needs

Understanding the classroom environment can also help your team design more effective onboarding flows.

Make sure the teachers using your product understand how to set up their classroom environment, so students using voice recognition can use the tool in quieter areas. 

Provide tips for teachers on training, so they can support their students and teach them how to use your tools effectively.

Teachers may also need support with using specific pieces of hardware, whether that’s a tablet or a headset.

Whether you build and test a prototype or conduct user interviews about your MVP, planning for a more complex onboarding flow will support overall product usability, engagement, and assessment goals.

4. Get technical with your content sequencing

Once you’ve adequately designed for the classroom environment, your team’s attention to technical UI can shine.

The content sequencing of any learning tool with voice recognition must make it clear to learners what they should do at any given moment.

  • Does a learner push a button to activate the software so it begins ‘listening’? Or is your assessment tool always on?
  • How does a learner know when to start—or stop—speaking?
  • How does your visual UI indicate when it’s time for a learner to move on to the next sequence?
  • And, perhaps most importantly, how can your tool give learners a chance to confirm their own inputs?

Since designing learning sequences for these tools can inadvertently introduce accessibility issues, we put together another article on accessibility best practices that can support your UI design.

No matter what design solutions your team identifies, investing in the technical UI of your learning tool will help you meet the needs of your users and support the learning outcomes of your product.

How to address new UX goals in edTech products with handwriting recognition

Like learning tools that use voice recognition, edTech products that incorporate handwriting inputs have a complex set of user experience goals.

We’ve worked hard to identify and meet those goals in our client portfolio, including our Webby Award-recognized work on Starwriter.

A handwriting app for young learners, Starwriter teaches early readers to form letters.

Throughout our engagement on Starwriter, my team worked hard to test and iterate design solutions for handwriting recognition.

Here’s what we learned—and how you can apply it to learning tools in the future.

1. Match your product’s learning goals to the need for a handwriting input

Handwriting recognition is an exciting technological advancement, and it can support a range of student needs. 

For example, this technology works especially well for young learners who need to trace shapes, like letters. It also supports modalities like drawing, sketching, and essay mark-up.

Because handwriting recognition can be challenging to implement from a UX standpoint, however, it’s important to be discerning.

Implementing this technology successfully requires meeting a demonstrated user need. If, instead, your product team is simply chasing bells and whistles for your edTech product, you might be headed down the wrong path.

I like to think about this UX challenge using the following framework:

Handwriting recognition that supports learner goals, reinforces learning content, and helps form follow function is worth investigating.

In my mind, use cases that meet this threshold include:

  • Learning handwriting and letter formation
  • Writing what a learner hears as a way to reinforce literacy skills
  • Learning foreign languages and practicing vocabulary
  • Identifying new words by recognizing pictures
  • Writing and showing work in math
  • Showing work in sciences, by developing formulas, making models, or diagramming concepts
  • Marking up essays with constructive peer feedback or other notes and suggestions

Each of these use cases has a defined learning or product goal that requires handwriting technology to improve user experience.

After all, if we’re going to ask young learners to use a stylus rather than a keyboard, handwriting should be the best input from a UX strategy standpoint.

2. Design for hardware from the jump

Like edTech products that use voice recognition, learning tools with handwriting recognition also require hardware that works well in a classroom environment.

You’ll have to consider the following:

Using an active vs. passive stylus

Active styluses automatically determine whether a hand is resting on the screen. 

This capability can lighten your design and development load, which is great for product development. 

However, the expense of the tool itself might put it out of reach of many classrooms, limiting your market.

Unlike an active stylus, a passive stylus can’t identify when a hand is resting on a touch screen. 

This technical constraint means you’ll have to collaborate with your design and development teams to identify reasonable work-arounds.

Even though it requires more from your product team, designing your learning tool for use with a passive stylus makes the tool more accessible to users, purely from a financial standpoint.

If your design and dev teams have the chops, you may wish to go this route and get your tools into the hands of more users.
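One common family of workarounds is heuristic palm rejection: ignore touches whose contact patch is large, or that land where a resting hand tends to sit. The sketch below illustrates that idea only; the radius threshold, the rest-zone geometry, and every name here are assumptions your team would tune through classroom testing, not a known production technique from Starwriter or any other product.

```typescript
// Illustrative palm-rejection heuristic for a passive stylus on a
// touch screen. All thresholds and names are assumed, not tuned values.

interface TouchPoint {
  x: number;      // position in screen pixels
  y: number;
  radius: number; // reported contact radius in pixels
}

const MAX_PEN_RADIUS = 12; // pen tips are small; resting palms are not

/**
 * Treat a touch as pen input only if its contact patch is small and it
 * lands inside the writing area (palms tend to rest near the bottom edge).
 */
function isPenTouch(t: TouchPoint, writingAreaBottom: number): boolean {
  const smallContact = t.radius <= MAX_PEN_RADIUS;
  const insideWritingArea = t.y < writingAreaBottom;
  return smallContact && insideWritingArea;
}
```

A design-side complement is to draw an explicit "rest your hand here" zone in the UI, so the heuristic and the visual layout reinforce each other.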

Designing for touch screens

Some pieces of hardware, like tablets, already have multiple input systems.

Users can swipe or write with their finger on a touch screen, use a stylus, or even type on a digital keyboard.

If your handwriting application is designed to work with a tablet, your instructions need to make clear that users should write with a stylus or a finger, not the trackpad or the keyboard.

3. Limit the amount of reading in your UI

Most likely, you’re investigating using handwriting recognition because the users of your app are early or pre-readers.

If that’s the case, it’s important to limit the amount of reading in your UI, including UX microcopy for buttons, instructions, and other interface patterns.

For example, in our work on Starwriter, we re-used a specific pattern to help students understand when to move on from an activity.

Support the needs of young learners by refining your user experience goals to limit the amount of text in your interface.

This friendly blue button signaled to young users that it was time to move forward. 

Both designers and developers also knew this button should only ever be placed in the center or bottom-right of the screen.

When pre-readers have dependable interface patterns to follow, it makes it easier for them to complete complex tasks without additional support.

Conclusion

Whether your product team is planning to use voice or handwriting recognition for an upcoming tool, know that your UX needs will change with new technologies.

By conducting user research, iterating on design solutions, and observing learners in the wild, you’ll create the solution that will best help your organization improve learning outcomes and scale impact.

Are you hoping to demonstrate the effectiveness of your learning tool to buyers as you integrate new technologies?

Download our guide for improving your data dashboard. We’ll show you the three features your data dashboard needs to engage teachers and buyers alike.


Download the guide
