Evolving Udemy’s 
Search Experience

Udemy is a marketplace that lets you learn from over 65,000 courses taught by instructors around the globe.

More than 20 million students are looking to improve their lives through learning and depend on our ability to help them find the right course. With such a large collection to choose from, search is the easiest & preferred way to find what they’re looking for.


My Role

For the past 1.5 years, I led the design of the student Search experience, working alongside a Product Manager, 3 Researchers, and 5 engineers. My work spanned the entire spectrum of the double diamond process:

Discover Insights

I partnered with multiple researchers at different stages of the project to uncover insights about student behaviours and motivations, validate (or invalidate) key assumptions, and de-risk design decisions.

Define Strategy

I partnered with my product manager to define product design strategy with a focus on creating tangible frameworks that help to align our vision, prioritize key projects, and evangelize our point of view.

Facilitate Ideation

I facilitated brainstorms to ensure my team and stakeholders had the space to share their ideas, thoughts, or concerns with me throughout (and especially early on in) the process.

Design and Experimentation

I transformed high-level concepts into concrete experiment designs to test and learn from. These early learnings would then inform future iterations of the design.

Deliver Designs

I worked with my engineers to understand the fidelity of design deliverables needed to build an experience. These ranged from wireframes and prototypes to detailed design specs.

Stakeholder Management

I collaborated with designers on other teams & platforms (Browse, Recommendations, Mobile, & Udemy for Business teams) to translate designs into their respective contexts.

A shared understanding of our problem space

Legacy Search Page

When I joined the team, Search was an engineering-only team with a focus on ranking and technical improvements. In lieu of a product strategy, my team members and stakeholders suggested well-meaning ideas that felt disparate and solution-oriented, while my responses to these solutions were often based on my own intuition. I realized that for us to get on the same page about what we should build, we first had to reach a shared understanding of who our users were and the problems they faced.

At a high level, I wanted to understand who our searchers were, why they searched, and what they searched for. To do this, David, the Search PM, and I kicked off user interviews and a qualitative analysis of our queries.

Research Findings

Who are our searchers?

Previously, we assumed that students knew what they were looking for and when they found it, they would buy it. From our research, we learned that many students didn't even know what they needed to learn.

  • The Explorer
    searches with multiple interests in mind, uncommitted to any one

  • The Beginner
    searches with a general idea of what they need to learn, typically a bigger goal or a task, but has limited understanding of what they are looking for in a course

  • The Retriever
    searches with a clear idea of exactly what they need from a course

What do they search for?

  • Roughly 90% of searches were topics (e.g. photoshop, python, seo)

  • Other queries included: instructor names, exact course titles, and multi-word phrases


Key Barriers to Enrollment

  • We have no results that match what they searched for

  • Students see irrelevant courses for their goal and assume Udemy has no courses that can help them

  • Students don’t have enough information to understand if a topic or course will help them achieve their goal

  • Students worry that there is a better course on the marketplace despite finding a good course (FOMO)
We synthesized key barriers after mapping out user problems on the user flow.

Forming a new approach

Based on the main problems we observed, we articulated 4 approaches:

Reduce dead ends

Guide students to relevant results for their goals

Help students understand their results

Bring focus to recommended courses

We prioritized the approaches by how poor the user experience and conversion rate were: situations where students didn’t see any courses at all were worse than situations where students found a good course.

Below, I highlight key projects within each approach.

Approach 1

Reduce dead-ends

One common reason students reach “dead-ends” is because they searched for a niche topic that our search couldn’t find an entire course on. Those topics may be mentioned within a course, but we were only scanning course titles & subtitles to find results. We wanted to see if searching within courses would help reduce dead-ends.

Open Questions

After a kick-off discussion with my PM and engineers, 4 open questions remained.

  1. Would someone be interested in purchasing an entire course for a relevant lecture, even if the course as a whole is about a much broader topic?

  2. What content within a course should we scan? Section titles, lecture titles, course description, or instructor description?

  3. Would we be introducing a lot of noise by searching through more content?

  4. How would these new result types rank amongst our normal course results?

The first prototype was done in Excel sheets

Answering some of these questions required a qualitative look at the potential new results. Since lecture titles tended to be topics, they were a good starting point that introduced the least noise (compared to freeform descriptions and instructor bios). David pulled the data so we could see exactly what would return if we scanned lecture titles. We were excited to see really useful courses returning for queries that previously had no results at all.


Exploring UI representations

With more confidence in the functionality behind lecture search, I began to explore how the UI could look. Unlike subscription platforms with unlimited content, students would still have to purchase the full course to be able to view the relevant lecture so I felt it was important to differentiate these results to avoid misleading or confusing a student.


User Testing Learnings: Not quite there

We worked with Claire, our researcher, to see if there were any usability concerns with the design. Do students understand this new result type? Do they know how it’s different? Here are the main learnings:

  • Students use the title area to quickly check for relevance, so when the match isn’t in that area, they are confused why the result is showing at all

  • Highlighting minutes of content was not very useful, and in some cases, confused the students

  • Showing the exact lectures was not necessary at first since most students were quickly skimming the results for relevance

Launching lecture search results

After 2 more rounds of design iteration and user testing, I finalized a design.

At a high level, we realized full courses (title/subtitle matches) were still more compelling, but in cases where there weren’t many full-course matches, the lecture matches were also compelling. Thus, we enabled lecture matches only on queries that had fewer than 12 results.
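The fallback logic above can be expressed as a small sketch. This is a hypothetical Python illustration, not Udemy’s production search code: the function names, the in-memory course list, and the substring matching are all stand-ins for a real search index.

```python
# Illustrative sketch of the lecture-match fallback described above.
# Data shapes and matching are simplified stand-ins for a real search index.

LECTURE_MATCH_THRESHOLD = 12  # lecture matches only appear on niche queries


def title_matches(query, courses):
    """Courses whose title or subtitle mentions the query (the legacy behaviour)."""
    q = query.lower()
    return [c for c in courses
            if q in c["title"].lower() or q in c["subtitle"].lower()]


def lecture_matches(query, courses):
    """Courses where the query appears within a lecture title."""
    q = query.lower()
    return [c for c in courses
            if any(q in lecture.lower() for lecture in c["lectures"])]


def search(query, courses):
    full = title_matches(query, courses)
    # Only fall back to lecture titles when there are few full-course matches,
    # so the more compelling full-course results keep top billing.
    if len(full) < LECTURE_MATCH_THRESHOLD:
        extra = [c for c in lecture_matches(query, courses) if c not in full]
        return full + extra
    return full
```

The key design choice is that lecture matches never displace full-course matches; they are appended after them, and only on queries that would otherwise dead-end.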

The Results:
Fewer searches resulting in “No Results”

By enabling lecture matches on niche queries, our “No results” page impressions dropped by a significant 36%. Mike, our data analyst, dug into the results and found that there was, in fact, a 10% increase in Revenue Per User for users with a lecture-eligible search.

Approach 2

Guide students to relevant results for their goals

One of the main reasons students weren’t seeing relevant courses was that they typed a broad topic that could be taught for different reasons. For example, students could search the topic “photoshop” for photo editing, graphic design, UX design, mobile design, or marketing. To surface more relevant results, we wanted to enable our students to specify the path they’re interested in.


After kicking off with a brainstorm, we determined 3 ways to do this:

  1. Filtering courses based on Student Goals
    Students were prompted to tag courses they enrolled in with their goal. The idea behind this is that using student-generated tags to filter courses would surface courses that accomplish real goals that students have had.

  2. Filtering courses based on Course Topics  
    Instructors were prompted to tag their course with its primary focus. We saw a lot of overlap in how students think of a goal and how instructors tagged their courses, e.g. UX design is both a topic and a goal.

  3. Suggesting Related Searches
    The idea behind this is that other students have likely experienced this problem and searched other terms to find more relevant courses. Suggesting search terms that other students tried after the initial query may surface realistic paths.
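The Topic Filter mechanic (option 2) can be illustrated with a minimal sketch. This is a hypothetical Python example, assuming instructor-supplied topic tags stored alongside each course; the names and data are invented for illustration.

```python
# Illustrative sketch of filtering search results by instructor-tagged topic.
# Course data and tags are invented for the example.

def filter_by_topic(results, topic):
    """Keep only courses whose instructor tagged them with the chosen topic."""
    return [c for c in results if topic in c["topics"]]


# A broad query like "photoshop" returns courses taught for different reasons:
photoshop_results = [
    {"title": "Photoshop for Photographers", "topics": ["photo editing"]},
    {"title": "UX Design with Photoshop", "topics": ["ux design"]},
    {"title": "Photoshop CC Masterclass", "topics": ["photo editing", "graphic design"]},
]

# A student who searched "photoshop" for UX design can narrow the results:
ux_courses = filter_by_topic(photoshop_results, "ux design")
```

Because instructors tag nearly every course, this filter has broad coverage, which is part of why it outperformed the sparser student-goal tags described below.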

Quality Data Wins

Of the 3 experiments, students in the Topic Filter experiment had the highest conversion.
This may be because:

  1. Topics were systematically collected, which let us think carefully about the kind of data we were asking our instructors for. This led to the highest-quality data of the three experiments, with coverage of almost every course in the marketplace

  2. Student goals were free-form answers and only collected from students who opted in to give this information, so there weren’t as many tagged courses

  3. Related searches did not get at the intent behind why people searched the query and could lead them to completely different areas

Homing in on Topic Filters

The quant data showed promising results for Topic Filters, so we conducted user tests to uncover usability risks.

The Result:
A quicker journey to relevant courses

After several iterations, the launch of this filter resulted in a 2% decrease in impressions. This meant that students who used Topic Filters looked at fewer courses before finding the right one.

Approach 3

Help students understand their results

Our course descriptions don’t always help students understand if a course would help them achieve their goals. In past surveys, students repeatedly brought up different pieces of information that would help them understand just that.

Converging on Quickview

We wanted to show more information without bloating the course card or weakening its information hierarchy. Our counterpart team, Browse, had already developed a quickview feature that reveals additional information and allows students to purchase a course on the spot. I translated the designs to work with our Search cards.


Teasing out the value of Quickview

The original plan was to simply bring quickview into Search. I suspected that including CTAs in the original quickview biased students to convert. It was also unclear if the additional information contributed to this change in student behaviour, so I designed a second variant of the quickview to test the usefulness of the additional information.


The Learning:
Reducing steps to purchase is helpful

Both variants performed equally well. From this, we concluded that reducing steps to purchase was the most impactful part of “quickview”, and it resulted in a 3% revenue lift in Search. However, since this feature kept students on the Search page, they weren’t seeing the useful recommendations on the Course Landing Page and the post-purchase pages. As a result, the overall impact of this feature was nearly flat.

Hopes for future iterations

This was a good learning because it suggests we can further test the different pieces of information that students originally told us they looked for in Search. In our next iteration, we would like to do that, as well as bring the recommendations (which students miss by staying on the Search page) back into view after they have “Added to Cart” to counteract the cannibalization effects.

The Search page was so simple it restricted growth

With so many new features being introduced to help students find the right courses, it was only a matter of time before the search page became too restrictive and cluttered to be used optimally. I wanted to address this problem for longer-term product success before proceeding with additional feature explorations.

Experiments were over-cluttering the search page and pushing results further down

Designing for the future

This project began with a lot of ambiguity on my part. Because we positioned it as an investment in the long term, I occasionally caught myself imagining future features we could design, rather than focusing on an IA that could support long-term growth and new features.


I took a step back and looked at the system of user flows that different students have while searching. Looking at all the possible flows enabled us to articulate directional concepts that a new search page should be able to support.

  • Query Refining
  • Notifications
  • Orientation
  • Compare & Evaluate
  • Narrow
  • Switch
  • Take Action

Exploring layout structure

With the directional concepts in mind, I started exploring different ways to lay out the concepts on a page. We had a few open questions:

  • Filters and Sorts are not often used but when they are used, they can be very impactful. How does usage vs usefulness affect prominence within the layout?

  • Switches have a natural home at the bottom of the results (we want to offer better alternatives if the results aren’t right for you). However, some people need that assistance almost as soon as they land on the page. Would showing switches higher in the page help users or distract users?

  • Orientation should conceivably be positioned to welcome you to the search page, but does such a prominent position just get in the way of what students come to Udemy for: the courses?

Testing position intuitiveness

We worked with Claire to test whether the positions of existing features were intuitive to new and existing students. I deliberately refrained from introducing new features in this test so we could focus on whether the change in layout had any effect on student behaviour.

I brought 3 designs to mid-fidelity prototypes to test with students.


User Testing Learnings

Ultimately, users seemed to have the easiest time completing tasks with prototypes B and C.

We realized the salience of the top area as a quick way to narrow in on the right course. Depending on the query and the student, different filters would be more useful. At the same time, we realized the majority of our students were beginners who didn’t have rigid criteria for how they should be learning. In that sense, filters generally seemed like nice-to-have features that shouldn’t take up too much real estate.

In our test, students struggled to put the topic they searched for into perspective. They showed us how they would Google or do their own research elsewhere to understand how topics relate to each other.

A new layout to grow into

(c) Beatrice Law 2018