Our warning to you on user research is that it becomes addictive. Once your users start giving you reactions that surface unexpected insights, you’ll want to bring that research to every solution — and then to every phase of every solution, over and over.
What Is User Research?
In grossly simple terms, user research is watching how users interact with and react to a product. As with any field, it can get immensely more complex from there. It can range from conducting interviews to gather insights long before a product is created, to getting reactions along the way (hopefully using Design Thinking in parallel with user research), to testing the market for how to communicate and advertise your product. It can be as simple as surveys, but some of the richest insights come from user labs, where a user researcher prepares a study, product owners and stakeholders watch on video or through two-way glass, and everyone observes the user interacting with the tool while listening intently for any piece of insight they can gather along the way.
You can imagine that a critical key to success is that those conducting the research don’t bring their own bias to the interview. To gather the richest insights, it’s incredibly important that the researcher doesn’t lead the user or guide their reactions.
The best part of user research is that “you are not the user.” We all bring lots of baggage to our solutions: the bias of our own desired experience, or our own belief that something is right, even when the data shows otherwise. User research is an awesome moment of reckoning where a user who doesn’t have to worry about your feelings shoots straight with you, letting you set aside your own opinions and experiences and focus on the users you actually hope will adopt your solution.
Below, you’ll find four short stories about trips to user labs testing elearning, websites, and LMS functionality. Some comments will make you react with “of course,” but when developing a massive solution, sometimes you forget the details. Other pieces of feedback came as a complete surprise or were directly in conflict with one another, which led to more testing.
eLearning
We wanted to try a number of non-traditional approaches to our elearning, including removing next buttons, breaking out of corporate color restrictions, creating dynamic results pages that linked into content, and asking users what they had done in recent weeks rather than what they thought they would do. It all seemed relatively straightforward, but watching the users interact with it yielded tons of insights.
Here are a few insights:
- Learners were far more anxious about their scores than we imagined. They wanted to study the site before they tested because they were afraid their boss and the company would have a record of a bad score. As a result, we softened our language on the landing page and in the description we shared with others.
- Learners thought we had “gamified” the experience in some way. That wasn’t our intent, but we loved that they loved it.
- Users thought some of our questions had nothing to do with the topic at hand. That made us revisit both our content and whether the organization was defining the topics correctly. Also, some of the questions just needed to be rewritten.
- In our effort to get rid of “next” buttons, we also got rid of “back” buttons and users HATED that.
Learning Portals (Websites)
There were two distinct learning portals that we were testing. The first sorted content into broad groupings – “living, banking, or working” digitally. The other site went deeper into topics around the digital transformation, like Design Thinking, agile, and big data. Both sites contained copy and videos, though the more advanced site contained 8–12 pieces of content per topic, while the basic site had just one video, copy, and a few links. Here are a few insights we gathered:
- The overview copy was too long and required too much scrolling
- We had added a graphic to the page that every single user tried to click on, but we hadn’t made it a hyperlink. Oops.
- The categories were confusing. The researcher used a card sort to recommend a different approach for bucketing the content.
- The language “exercise” didn’t resonate with them at all. They recommended something like “Try it out now.”
- People wanted content separated into “overview” and “practitioners”
- We used a phone as part of our graphic, and users thought that meant they could download the content as an app to their phone
- People didn’t want videos at their desks – they didn’t feel like they had time to find their headphones. They wanted a short synopsis to know if watching was worth it, or an article as a replacement
- Users were completely mixed on whether they wanted to navigate freely through open pages or follow a defined, ordered list
- There was little interest in social interaction on the page
Recommendation Engine (Both Manual & Machine Learning) in LMS
We led an effort to partner with an LMS vendor to test their recommendation function (including a machine learning feature) with our users and gather insights about how they might or might not use it. Here are some of the insights we gained:
- One person in our “Tech” org didn’t know what IT stood for
- The labels distinguishing external content, online courses, MOOCs, etc. were confusing. Our industry vernacular was completely lost on the users.
- Users didn’t care at all for learning recommendations from their executive population. Here’s a quote: “What does my executive know about what I need to learn to do my job?”
- Users valued peer recommendations most, then recommendations from their boss, then system recommendations, then executive recommendations, then learning team recommendations.
- Learners generally had no idea about the volume of content on relevant topics contained in the system
- Users wanted the recommendations sent to them, not just contained in the LMS
- The recommendation engine served users links that were completely dead (embarrassing) or content that was irrelevant, so the algorithms had to be updated
- Almost every learner was using the same external site to learn how to code – they were expensing it on personal cards, and no one in the learning function knew that the content we were paying for was far less popular than where users were going on their own
New UI in LMS
When our LMS vendor decided to invest in rebuilding the UI for its 25MM+ users, they looked to my team to partner on testing the user experience and gathering learner feedback. Here’s a little of what we learned when they conducted a day of user testing in our labs:
- Users were incredibly keen on seeing what learning others were taking (especially their boss and peers), but they wanted to opt out of having others see their learning activity
- Users had strong navigation preferences – they were quite interested in landing on the page that showed what was “required” of them to complete, while also browsing opt-in content
- Learners really appreciated simple confirmation messages when registering for classes, in addition to receiving confirmation emails
- Users wanted to see how others in their job roles were consuming content, and to have recommendations for the “next” content they should consume
While these insights may or may not resonate with you or your learners, developing a practice of regularly user-testing your concepts, prototypes, and experiences will yield incredible and challenging insights. While I’m not sharing those results here, we also used user research to test things like:
- eLearning Templates
- eLearning games
- Instructor Capabilities in the LMS
- Internal portals to support learning professionals