More than 20 million students are looking to improve their lives through learning, and depend on our ability to help them find the right course. With such a large collection to choose from, search is the easiest and most popular way to find what they’re looking for.
For the past 1.5 years, I led the design of the student Search experience, working alongside a Product Manager, 3 Researchers, and 5 engineers. My work spanned the entire spectrum of the double diamond process:
I partnered with multiple researchers at different stages of the project to uncover insights about student behaviours and motivations, validate (or invalidate) key assumptions, and de-risk design decisions.
I partnered with my product manager to define product design strategy with a focus on creating tangible frameworks that help to align our vision, prioritize key projects, and evangelize our point of view.
I facilitated brainstorms to ensure my team and stakeholders had the space to share their ideas, thoughts, or concerns with me throughout (and especially early on in) the process.
I transformed high-level concepts into concrete experiment designs to test and learn from. These early learnings would then inform future iterations of the design.
I worked with my engineers to understand the fidelity of design deliverables needed to build an experience. This ranged from wireframes, prototypes to detailed design specs.
I collaborated with designers on other teams & platforms (Browse, Recommendations, Mobile, & Udemy for Business teams) to translate designs into their respective contexts.
When I joined the team, Search was an engineering-only team focused on ranking and technical improvements. In the absence of a product strategy, my team members and stakeholders suggested well-meaning ideas that felt disparate and solution-oriented, while my responses to these solutions were often based on my own intuition. I realized that for us to get on the same page about what we should build, we first had to reach a shared understanding of who our users were and the problems they faced.
At a high level, I wanted to understand who our searchers were, why they searched, and what they searched for. To do this, David, the Search PM, and I kicked off user interviews and a qualitative analysis of our queries.
Previously, we assumed that students knew what they were looking for and when they found it, they would buy it. From our research, we learned that many students didn't even know what they needed to learn.
Almost 90% of searches were topics (e.g. photoshop, python, seo)
Other queries included: instructor names, exact course titles, multi-word phrases
Based on the main problems we observed, we articulated 4 approaches:
We decided to prioritize each approach by how poor the user experience and conversion rate were. Situations where students didn’t see any courses at all were worse, and thus higher priority, than situations where students had already found a good course.
Following this, I will highlight key projects within these approaches.
One common reason students reached “dead ends” was that they searched for a niche topic that our search couldn’t find an entire course on. Those topics might be mentioned within a course, but we were only scanning course titles and subtitles for results. We wanted to see if searching within courses would help reduce dead ends.
After a kick-off discussion with my PM and engineers, 4 open questions remained.
Answering some of these questions required a qualitative look at the potential new results. Since lecture titles tended to be topics, they were a good starting point that introduced the least amount of noise (compared to freeform descriptions and instructor bios). David pulled the data so that we could see exactly what would be returned if we scanned lecture titles. We were excited to see genuinely useful courses appearing where previously there were no courses at all.
With more confidence in the functionality behind lecture search, I began to explore how the UI could look. Because Udemy isn’t a subscription platform with unlimited content, students would still have to purchase the full course to view the relevant lecture, so I felt it was important to differentiate these results to avoid misleading or confusing students.
We worked with Claire, our researcher, to see if there were any usability concerns with the design. Do students understand this new result type? Do they know how it’s different? Here are some of the main learnings:
After 2 more rounds of design iteration and user testing, I finalized the design.
At a high level, we realized full courses (title/subtitle matches) were still more compelling, but in cases where there weren’t many full course matches, the lecture matches were also compelling. Thus, we enabled lecture matches only on queries that returned fewer than 12 results.
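The fallback rule above boils down to a simple threshold check. The sketch below is purely illustrative and not Udemy’s actual implementation; the `Course` structure, the naive substring matching, and all helper names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Course:
    title: str
    subtitle: str = ""
    lecture_titles: list = field(default_factory=list)

# Threshold from the experiment described above; matching here is
# naive substring search, for illustration only.
FULL_MATCH_THRESHOLD = 12

def _matches(query: str, text: str) -> bool:
    return query.lower() in text.lower()

def search(query: str, catalog: list) -> list:
    """Return (course, result_type) pairs: full-course matches first,
    with lecture-title matches added only for niche queries."""
    full = [c for c in catalog
            if _matches(query, c.title) or _matches(query, c.subtitle)]
    if len(full) >= FULL_MATCH_THRESHOLD:
        # Plenty of full-course matches: no lecture results needed.
        return [(c, "course") for c in full]
    # Niche query: add lecture matches, skipping courses already shown.
    # Results are labeled so the UI can visually differentiate them.
    shown = {id(c) for c in full}
    lectures = [c for c in catalog
                if id(c) not in shown
                and any(_matches(query, t) for t in c.lecture_titles)]
    return [(c, "course") for c in full] + [(c, "lecture") for c in lectures]
```

Labeling each result with its match type is what lets the UI render lecture matches differently, addressing the differentiation concern mentioned earlier.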
By enabling lecture matches on niche queries, our “No results” page impressions dropped by 36%. Mike, our data analyst, dug into the results and found that there was, in fact, a 10% increase in Revenue Per User for users with a lecture-eligible search.
One of the main reasons students weren’t seeing relevant courses was that they typed a broad topic that could be taught for different reasons. For example, students could search the topic “photoshop” for photo editing, graphic design, UX design, mobile design, or marketing. To surface more relevant results, we wanted to enable our students to specify the path they were interested in.
After kicking off with a brainstorm, we determined 3 ways to do this:
Of the 3 experiments, students in the Topic Filter experiment had the highest conversion.
This may be because:
The quant data showed promising results for Topic Filters, so we conducted user tests to uncover usability risks.
After several iterations, the launch of this filter resulted in a 2% decrease in impressions. This meant that students who used Topic Filters looked at fewer courses before finding the right one.
Our course descriptions don’t always help students understand if a course would help them achieve their goals. In past surveys, students repeatedly brought up different pieces of information that would help them understand just that.
We wanted to show more information without bloating the course card or weakening its information hierarchy. Our counterpart team, Browse, had already developed a “quickview” feature that reveals additional information and allows students to purchase a course on the spot. I translated the designs so they would work with our Search cards.
The original plan was to simply bring quickview into Search. I suspected that including CTAs in the original quickview biased students to convert. It was also unclear if the additional information contributed to this change in student behaviour, so I designed a second variant of the quickview to test the usefulness of the additional information.
Both variants performed equally well. From this, we concluded that reducing the steps to purchase was the most impactful part of quickview, and it resulted in a 3% revenue lift in Search. However, since this feature kept students on the Search page, they weren’t seeing the useful recommendations on the Course Landing Page and the post-purchase pages. This meant the overall results of the feature were nearly flat.
This was a good learning because it gave us hope that we could further test the different pieces of information we had originally found users looking for on Search. In our next iteration, we would like to do that, as well as bring the recommendations (that students miss by staying on the Search page) back into view after they add a course to their cart, to counteract the cannibalization effects.
With so many new features being introduced to help students find the right courses, it was only a matter of time before the search page became too restrictive and cluttered to be used optimally. I wanted to address this problem for longer-term product success before proceeding with additional feature explorations.
This project began with a lot of ambiguity on my part. Because we positioned it as an investment in the long term, I occasionally found myself imagining future features we could design, rather than focusing on an IA that could support long-term growth and new features.
I took a step back and looked at the system of user flows that different students have while searching. Looking at all the possible flows enabled us to articulate directional concepts that a new search page should be able to support.
With the directional concepts in mind, I started exploring different ways to lay out the concepts on a page. We had a few open questions:
We worked with Claire to test whether the positions where we placed existing features were intuitive to new and existing students. I specifically refrained from introducing new features in this test so we could focus on whether the change in layout had any effect on student behaviour.
I brought 3 designs to mid-fidelity prototypes to test with students.
Ultimately, users seemed to have the easiest time completing tasks with prototypes B and C.
We realized how salient the top area was as a quick way to narrow in on the right course. Depending on the query and the student, different filters would be more useful. At the same time, we realized the majority of our students were beginners who didn’t have rigid criteria for how they should be learning. In that sense, filters generally seemed like nice-to-have features that shouldn’t take up too much real estate.
In our test, students struggled to put the topic they searched for into perspective. They showed us how they would google or do their own research elsewhere to understand how topics relate to each other.
(c) Beatrice Law 2018