Call to Action on Common Outcomes Definitions

November 2, 2016

By Bill DeBaun, Director of Data and Evaluation

Getting students to college, keeping them there, and making sure they graduate are the key activities of the college access and success field. In recent years, NCAN and its members have been increasingly focused on measuring these programmatic outcomes, both to ensure accountability and to demonstrate impact. Unfortunately, while we’re all talking to some degree about these outcomes, we are seldom talking about them in the same way. That’s a problem. What do we mean by these terms?

In September, the directors of research for three NCAN member organizations came together in Detroit at NCAN’s national conference to host a session demonstrating the extent of this problem and proposing a way to fix it. “Defining, Measuring, and Utilizing College Outcome Data: Three Organizations’ Approach” was presented by Dr. Jim Lauckhardt of iMentor, Dr. Keith Zander of One Goal, and Dr. Salem Valentino of Summer Search.

The need for their presentation was most clearly seen in a chart shown during the session.

But the problem was identified even more clearly through an activity conducted during the conference session. Attendees were asked to provide their organization’s definition, source, timing, and use of matriculation, persistence, and bachelor’s attainment rates. The following notes (edited for length and clarity) from Dr. Valentino highlight the variability of these definitions just among those in the room:

Postsecondary Matriculation Rate. There is a lot of variability in how this is being assessed! Some of this variability is likely due to different program goals (e.g., whether you track matriculation into a one-year program versus a traditional post-secondary institution), but differences also surfaced in how the outcome was summarized and communicated. While the majority provided a basic statement of “X% of students attending a post-secondary institution,” detailed definitions differed as well. Here are some observed variations on the matriculation metrics:

  • When: within one year of graduating high school, only in the fall after senior year of high school, any time after high school graduation, within 16 months after high school graduation
  • Who: graduating high school seniors, “students,” scholarship recipients
  • What: enrollment, activation of on-campus supports or financial aid, completion of fall semester, arrival on campus, class attendance
  • Reporting: Once mid-fall, once early fall, end of each semester, three times per year

Postsecondary Persistence Rate. Similarly, with this outcome, there was a lot of variability in its assessment:

  • Who: percent of students enrolling in the fall directly following high school graduation, of those enrolling fall or spring of the following year, of those enrolling at any point in time (similar to the nuances of the matriculation definition)
  • What: persisting semester-to-semester, differences in whether this requires full-time versus part-time enrollment, year-to-year enrollment, still enrolled at a certain date during third semester, “remain enrolled,” complete third semester in good academic standing
  • Reporting: twice a year at the end of each semester, mid-spring/mid-fall, weekly, monthly, once during the fall

Bachelor’s Attainment Rate:

  • Who: Very few people indicated the denominator in this calculation, focusing instead on the numerator: % of students who… 
  • What: Then there were differences in terms of % of students who… earn a bachelor’s degree within 6 years, earn a degree within 150% time, and bachelor’s versus associate’s degree versus certificate. One person did mention “of first fall enrollees.”
  • Reporting: June, October, end of summer semester, November, every semester

Without clear, shared definitions of student outcomes, none of the above organizational outcomes can be compared in an apples-to-apples way: measures may differ on which students count in the numerator (i.e., how many students met the outcome) and which count in the denominator (i.e., how many total students could have met the outcome) of these percentages.

NCAN’s Common Measures identify the essential metrics in each of these areas:

  • Enrollment: “percent of students who enroll within six months of high school graduation” and “student enrollment by institution type and status (full- vs. part-time)”
  • Persistence: “year-to-year student persistence” and “term-to-term student persistence”
  • Completion: “percent of students completing a degree within 150 percent of time, by school type”

(Read more on each of these metrics in the Common Measures Handbook, a reference that examines research and technical notes for every one of NCAN’s Common Measures.)

As Dr. Valentino noted, “there is a clear need to align the field around shared, detailed definitions of (at least) these three student outcomes and hold ourselves accountable to mutual transparency.”

The presenters propose the following calculations/definitions as a call to action to NCAN members and the field:

On-Time Enrollment:

(number of students who enrolled in college within six months of graduating high school)
(total number of students who graduated high school)

Persistence to Second Year: 

(number of students who enrolled on-time in college AND are enrolled in a third semester)
(total number of students who enrolled on-time in college)


Completion within 150% of Time:

(number of students who complete college in 150% of intended completion time)
(total number of students who enrolled on-time in college)
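As a concrete illustration, the three proposed rates can be computed from a cohort of student records. The sketch below is a minimal, hypothetical example — the field names (`graduated_hs`, `enrolled_on_time`, `third_semester`, `completed_150`) are illustrative, not NCAN’s; in practice these flags would typically come from a source like National Student Clearinghouse data.

```python
# Hypothetical student records; field names are illustrative, not NCAN's.
students = [
    {"graduated_hs": True, "enrolled_on_time": True,  "third_semester": True,  "completed_150": True},
    {"graduated_hs": True, "enrolled_on_time": True,  "third_semester": True,  "completed_150": False},
    {"graduated_hs": True, "enrolled_on_time": True,  "third_semester": False, "completed_150": False},
    {"graduated_hs": True, "enrolled_on_time": False, "third_semester": False, "completed_150": False},
]

def rate(cohort, numerator_key):
    """Share of the cohort for which numerator_key is True."""
    if not cohort:
        return None  # avoid dividing by zero when the denominator is empty
    return sum(1 for s in cohort if s[numerator_key]) / len(cohort)

# On-Time Enrollment: denominator is ALL high school graduates
hs_grads = [s for s in students if s["graduated_hs"]]
on_time_enrollment = rate(hs_grads, "enrolled_on_time")

# Persistence to Second Year: denominator narrows to on-time enrollees only
on_time = [s for s in hs_grads if s["enrolled_on_time"]]
persistence = rate(on_time, "third_semester")

# Completion within 150% of time: same on-time-enrollee denominator
completion = rate(on_time, "completed_150")

print(round(on_time_enrollment, 2), round(persistence, 2), round(completion, 2))
# → 0.75 0.67 0.33
```

Note how the denominator shifts between metrics: enrollment is measured against all high school graduates, while persistence and completion are measured only against students who enrolled on time. That shift is exactly the kind of detail the presenters argue must be made explicit.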

Each of these metrics should be disaggregated by student-level demographics such as gender, race/ethnicity, first-generation status, ESL status, and Pell Grant eligibility. Disaggregating by student-level characteristics is key to identifying gaps in outcomes that may be actionable or correctable through program practice.
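Disaggregation amounts to computing the same rate separately within each demographic subgroup. A minimal sketch, assuming each on-time-enrollee record carries a demographic field (the `first_gen` flag here is hypothetical):

```python
from collections import defaultdict

# Hypothetical on-time-enrollee records; the first_gen flag is illustrative.
on_time_enrollees = [
    {"first_gen": True,  "third_semester": True},
    {"first_gen": True,  "third_semester": False},
    {"first_gen": False, "third_semester": True},
    {"first_gen": False, "third_semester": True},
]

def persistence_by_group(cohort, group_key):
    """Persistence-to-second-year rate, disaggregated by one demographic field."""
    groups = defaultdict(list)
    for student in cohort:
        groups[student[group_key]].append(student)
    return {
        value: sum(1 for s in members if s["third_semester"]) / len(members)
        for value, members in groups.items()
    }

print(persistence_by_group(on_time_enrollees, "first_gen"))
# → {True: 0.5, False: 1.0}
```

A gap like the one in this toy output (50% persistence for first-generation students versus 100% for others) is precisely the kind of actionable signal disaggregation is meant to surface.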

These definitions won’t fit into every NCAN member’s model (e.g., what about returning students who begin their interaction with a program years after high school graduation?), nor do they address every question (e.g., when should a student be dropped from my denominator after starting my program and then stopping out?), but that does not discount their usefulness. This call to action is a good one, both for individual programs and for our field, for the sake of transparency, accountability, and comparability.

Furthermore, program-level contexts will still exist. Some of these contexts include demographic information on students served, requirements for program participation, program size, cost per student, intensity of dosage, and types of services provided. In an effort to better allow for comparisons in these areas, NCAN is working on a member-facing dashboard that uses data from the Benchmarking Project to allow members to view enrollment and completion (and eventually persistence) outcomes, according to these program-level characteristics.

Thank you to Drs. Valentino, Lauckhardt, and Zander for their presentation and their effort in calling attention to this important topic. Their presentation and associated materials are well worth each member’s time for better understanding how commonly defining our key metrics can better demonstrate the impact of, and ultimately improve, our field.
