Measure the right things to show impact

Building an assessment feature to connect product to real-world outcomes for students

Outcomes

  • 11% increase in renewal rates from previous year

  • 13% increase in mentee sense of belonging - critical product KPI

  • Key product differentiator in market

Summary

  • Type: Evaluation study

  • Focus: Sense of belonging

  • Timeline: May 2019 to September 2020

  • My Role: Research lead, data analyst, research liaison

  • Methods: Quantitative data collection, user interviews, systems diagramming

Company needed to know user outcomes on a faster cadence to secure renewal pipeline

Mentor Collective is a Series-A startup based in Boston, focusing on B2B SaaS edtech to help colleges and universities run mentoring programs at scale with the ultimate goal of improving student retention. Many schools already have small mentoring programs in place, but struggle with the large overhead required to run them, limiting their ability to reach students at volumes that will meaningfully move the needle on degree completion.

For Mentor Collective, the biggest challenge was always time. Retention impact can only be demonstrated once a year, when students either return to their classes the following academic year or don't. This measurement cadence slowed sales cycles and strained renewal conversations while turning retention into a high-risk metric. We needed a way to demonstrate value earlier to support renewal conversations, and to identify programs in need of additional support to achieve their retention goals.

Identify an opportunity

When I began as Research Lead in 2019, we weren't directly collecting any data related to student outcomes, such as retention, GPA, academic probation, or sense of belonging. This presented a challenge when working with outcomes-driven higher ed executives: I would need to negotiate with university administrators to share those data sets with us in order to connect participation in our program to improvements in those outcomes. I was often successful, but it slowed the process down and created unnecessary risk (the administrators could simply say "no").

I would occasionally be asked to create surveys to measure sense of belonging on a partner campus. After writing and analyzing several of these surveys, I saw an opportunity that might solve our challenges with both directly collecting our own data and showing the value of our product in a way that would support renewal conversations. If we created a sense of belonging assessment feature within the product, we'd be able to collect and share a metric that partners were already telling us was valuable every time they asked us to run and analyze these surveys.

Building the case for an assessment feature

First, while a connection between having a mentor and feeling a sense of belonging felt intuitive, I wanted to know if there was expert evidence to support that. I reviewed the academic literature in psychology, sociology, and higher education studies and found that mentorship was, in fact, connected to increased sense of belonging in students. Even better, I learned that a strong sense of belonging among undergrads was a strong predictor of student retention, one of the most common quantitative outcomes we were asked to demonstrate. This was the missing puzzle piece we were looking for!

To solidify the business case for this feature, I talked with our VP of Partner Success and confirmed that our contract renewal approach was costly: there was often a lot of back-and-forth with buyers over whether the program had succeeded. That meant running last-minute retention analyses to demonstrate quantitative ROI, a high-risk approach, since we had no way of knowing whether the results would show the effect we wanted. I next met with our CEO and Head of Product and shared my findings. I hypothesized that if we established our own outcome metric and gathered data for it throughout the program lifecycle, our renewal process would be more efficient and much less risky. This would put us in a stronger negotiating position at contract renewal time, get us on track to meet our ambitious renewal goals, and ultimately help more students. We were green-lit to make it happen!

Building the MVP

From my academic literature review, I saw that there were standard survey questions we could use to measure sense of belonging. This was great for two reasons: one, it saved us time coming up with questions to ask; and two, we could feel confident that we were actually measuring sense of belonging and not something else (like affection for their mentor), a critical consideration in survey design.
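Using validated items also means scoring is mechanical. As a rough illustration (the item wording, scale length, and reverse-scored item below are hypothetical, not the actual instrument we used), Likert responses are averaged into one scale score per student, with negatively worded items reverse-scored first:

```python
# Hypothetical sketch of scoring a sense-of-belonging scale from Likert items.
# Item wording is illustrative only; real instruments use validated items.

LIKERT_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

# Items flagged True are negatively worded and must be reverse-scored,
# so that a higher score always means a stronger sense of belonging.
ITEMS = [
    ("I feel like I belong at this school", False),
    ("I have found a community here", False),
    ("I often feel like an outsider on campus", True),
]

def belonging_score(responses):
    """Average one respondent's item ratings into a single 1-5 scale score."""
    scored = []
    for (_, reverse), rating in zip(ITEMS, responses):
        scored.append(LIKERT_MAX + 1 - rating if reverse else rating)
    return sum(scored) / len(scored)
```

Reverse-scoring is what keeps the composite interpretable: a student who strongly agrees they "feel like an outsider" should pull the belonging score down, not up.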

We decided to send the survey to students three times: once at the beginning of a program, once in the middle, and once at the end, so we could observe change over time. Once we had collected data from the first two waves, I would run basic descriptive statistics and t-tests to determine how student sense of belonging changed over time in an individual program. This data would then be included in our Client Success Managers' contract renewal presentations.
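The core of that analysis is a paired t-test on matched start-vs-midpoint scores. A minimal pure-Python sketch (the sample scores below are made-up numbers, not real program data; in practice a library like `scipy.stats.ttest_rel` would also report the p-value):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t-statistic and mean change for matched pre/mid survey scores.

    `before` and `after` hold the same students' scale scores at two waves,
    in the same order. A positive t with a positive mean change suggests
    belonging rose between waves; compare t against the t-distribution
    with n - 1 degrees of freedom for significance.
    """
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, mean(diffs)

# Illustrative 1-5 belonging scores for five students at program start
# and midpoint -- invented for demonstration only.
start = [3.0, 3.2, 2.8, 3.5, 3.1]
mid = [3.4, 3.6, 3.1, 3.6, 3.5]
t_stat, change = paired_t(start, mid)
```

Pairing matters here: because each student serves as their own baseline, the test isolates within-student change rather than differences between cohorts.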

We were still a scrappy startup at this time, so the Head of Product and I saved budget on this project by continuing to use Typeform. This initial version was pretty manual. I worked closely with our partner services team, running multiple trainings and Q&A sessions to help them understand what these surveys were, why we were measuring these things, when and how to send the surveys out from the templates, how and when to request analysis of the survey data, and how to interpret and talk about the results with partners.

In Q3 of 2019, we launched our first sense of belonging surveys to all mentees in our product - about 100,000 users.

From Minimum Viable to Minimum Lovable

This first iteration taught us a lot. Partners were excited to learn that we were measuring sense of belonging; it wasn't data they regularly collected themselves, but they cared about it deeply, which gave us early market validation for this product offering. However, we also encountered pushback from our VP of Partner Success, as the highly manual processes were taking up significant amounts of operations associates' time. I worked with the Head of Product to understand where the time constraints were coming from: we talked to different members of the operations team and shadowed them as they followed the survey creation and implementation protocols. We learned that whatever efficiencies we were creating in the contract negotiation period were being eaten up by the inefficient process of getting the surveys out. From there, the Head of Product and I began planning an iteration to lower our COGS for the following academic year.

Build and launch a survey tool

Knowing that partners were excited about this data enabled us to prioritize creating an in-house survey management system on the 2020 roadmap. I worked closely with PMs and engineers to elevate the voice of our customers and our operations associates, ensuring they had a clear understanding of the jobs to be done and the problems we were solving for as they developed requirements. Through this process, we identified data cleanliness as a secondary issue we could solve by bringing our surveys in house. Having conducted all of the previous year's survey analysis myself, I could offer specific insights into export construction that would yield more accurate data matching and faster analysis.
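To make the data-matching point concrete, here is a simplified sketch of the problem (the field names and records are hypothetical, not our actual schema): an export keyed on a stable student ID joins cleanly to the program roster, whereas one keyed on free-text names invites mismatches and manual cleanup.

```python
# Hypothetical sketch: joining survey export rows to a program roster on a
# stable student ID rather than error-prone free-text fields.

roster = {
    "s-001": {"name": "Ada L.", "program": "STEM Mentors"},
    "s-002": {"name": "Grace H.", "program": "STEM Mentors"},
}

survey_rows = [
    {"student_id": "s-001", "wave": "start", "belonging": 3.2},
    {"student_id": "s-002", "wave": "start", "belonging": 2.9},
]

def match_rows(rows, roster):
    """Split export rows into roster-matched and unmatched buckets.

    Matched rows are enriched with roster fields so downstream analysis
    (e.g. per-program comparisons) needs no manual reconciliation.
    """
    matched, unmatched = [], []
    for row in rows:
        record = roster.get(row["student_id"])
        (matched if record else unmatched).append({**row, **(record or {})})
    return matched, unmatched
```

Designing the export so every row carries that ID is what turned matching from a manual cleanup task into a single deterministic join.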

Outcomes

One of the outcomes we targeted for an assessment feature was an increased renewal rate. Following the release of this feature, our renewal rate rose 11 percentage points, from 74% in Q4 of the previous year to 85% in Q4 2020. While we couldn't have foreseen it, this data proved especially impactful during the fall and early winter of COVID, as students navigated hybrid or fully remote learning experiences. One Client Success Manager shared with me that they saw firsthand how customers used this data to create on-campus strategies to help increase sense of belonging among their students during an incredibly difficult year.

Since the launch of our standardized assessment function, Mentor Collective has seen an overall 8% increase in sense of belonging for mentees who participate in their programs. Better still, that increase goes up to 13% for mentees who have 3 or more conversations with their mentors, demonstrating a clear quantitative impact of our product on student outcomes.

Even better, this assessment function set Mentor Collective apart as the only mentoring platform offering any kind of assessment feature. This offering has made being "research-backed" a core element of their brand and a key product differentiator in the market. Sales reps shared with me that for many closed-won contracts, the assessment feature was the main reason buyers chose to work with us over competitors.

What I Learned

Understand the end-to-end journey for both customers and internal teams to catch inefficiencies before you make them.