Accountability for Institutions and Programs: Striking the Right Balance in HEA

Lawmakers from both sides of the aisle are becoming increasingly concerned about low-performing institutions and higher education programs that offer very little in return to students and taxpayers for their investment.1 College accreditors—which are supposed to spur continuous improvement and ensure a basic level of educational quality—have minimal incentives to punish poor performers, leading to a paltry number of reprimands despite dismal student outcomes throughout the system.2 And the only outcome-based guardrail in federal law intended to hold institutions accountable for failing their students—the Cohort Default Rate (CDR)—is easily manipulated, with fewer than 1% of institutions facing sanctions.3 What is the result of ineffective and easily gamed federal oversight? Billions of dollars in federal student aid flowing to low-performing institutions every single year—not to mention the time and money wasted by students who are left without a degree, a job, or the ability to pay back their loans.4
As Congress considers how to better protect students and taxpayers by improving its oversight within a reauthorization of the Higher Education Act (HEA), it has been grappling with a key question: should it add or strengthen accountability guardrails based on the student outcomes of individual higher education programs, in addition to looking at how institutions are serving their students as a whole?
While both approaches have certain advantages, a combination of the two methods may help mitigate the disadvantages inherent within each. This memo outlines some of the considerations that policymakers will need to take into account as they contemplate expanding both program- and institution-level accountability within the next HEA.
Strengthening Institutional-Level Accountability
More robust institution-level guardrails at the federal level would help identify widespread failure across an entire school, an approach that can keep federal resources from flowing to institutions that leave the majority of their students degreeless, underemployed, or saddled with unmanageable debt. This approach has the advantages of being historically effective, easier to implement, and more predictable in its impact before implementation.
In fact, some institution-level accountability already exists. Originally written into law in the 1980s, the CDR was intended to curb alarmingly high default rates, which at the time affected 20% of all student borrowers in the US.5 Following its initial implementation, many low-performing schools lost access to federal funding and closed down, and student loan defaults began to decline.6 Since then, however, institutions have learned to manipulate the CDR accountability measure, often hiring consultants who persuade distressed student borrowers to enter repayment plans that stave off default during the measurement window for which colleges are on the hook, but that may not serve those borrowers’ long-term interests.7 As a result, the federal government sanctions fewer than a dozen institutions each year, schools that collectively enroll fewer than 0.1% of borrowers.8 While the CDR is ineffective in its current form, several lawmakers have suggested ways to strengthen it by also counting borrowers who are in long-term forbearance, a sign of economic hardship that the current CDR test ignores.9
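To make the mechanics concrete, here is a minimal sketch of the three-year CDR test in Python. The cohort counts and function names are hypothetical; the formula and thresholds reflect current law, under which eligibility sanctions generally attach at a CDR of 30% or more for three consecutive years or above 40% in any single year.

```python
# Minimal sketch of the three-year CDR test; all counts are hypothetical.

def cohort_default_rate(defaulted: int, entered_repayment: int) -> float:
    """CDR = share of borrowers entering repayment in a fiscal year
    who default within the following three-year window."""
    return defaulted / entered_repayment if entered_repayment else 0.0

def faces_sanctions(annual_cdrs: list) -> bool:
    # Sanctions generally attach at a CDR of 30%+ for three consecutive
    # years, or above 40% in any single year.
    recent = annual_cdrs[-3:]
    return (len(recent) == 3 and all(r >= 0.30 for r in recent)) or any(
        r > 0.40 for r in annual_cdrs
    )

# A school that steers 455 of 500 borrowers clear of default during the
# window posts a 9% CDR, even if many of them default in year four.
print(cohort_default_rate(defaulted=45, entered_repayment=500))  # 0.09
print(faces_sanctions([0.32, 0.31, 0.30]))                       # True
```

This is also why the manipulation described above works: the metric only observes the measurement window, so pushing defaults past it leaves the reported rate untouched.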
An institutional accountability framework may also be easier for the US Department of Education (Department) to implement. Right now, there are over 5,000 institutions across the United States. Cumulatively, these institutions offer more than 150,000 programs.10 While the Department has shown its capacity to adjudicate outcomes data and appeals for institutions under the current CDR rule, it would be far more administratively burdensome, and would require more staff capacity, to evaluate every higher education program on an annual basis. In fact, this approach has already produced delays in the Department’s efforts to evaluate the much more limited universe of higher education programs originally subject to the Gainful Employment regulations.11 Finally, institution-level accountability would allow lawmakers to better forecast its impact before it is implemented within the next HEA. While the Department has recently released preliminary program-level data, that data is still limited, whereas institution-level data on measures like college completion, post-enrollment earnings, and loan repayment has been publicly available for years.12
Adding Program-Level Accountability
The concept of looking at student outcomes by program in addition to at the institutional level has gotten some traction from both sides of the political aisle.13 There are a few reasons proponents argue that adding program-level accountability makes sense.
Student Outcomes Can Vary More by Program Than by Institution.
Student outcomes can often depend as much on the major in which a student enrolls as on the institution they attend. For example, even when the same institution offers both majors, the typical graduate of a Counseling Psychology program earns only about $29,000, while the typical graduate of a Petroleum Engineering program earns $120,000.14 By contrast, post-enrollment earnings vary far less at the institution level, ranging from $29,100 for institutions at the 25th percentile to $44,100 for those at the 75th percentile 10 years after enrollment.15 Similar disparities exist in other programmatic outcomes, like completion and loan repayment rates, meaning that looking at these outcomes at the institution level alone can leave policymakers and prospective students in the dark when assessing which higher education programs are most likely to provide a return on investment. Program-specific information, on the other hand, can provide a more nuanced way to pinpoint whether the vast majority of students within a given field of study at an institution are successful.
Accountability by Program Could Lead to More Self-Regulation.
Programmatic accountability can also lead to more self-regulation within the higher education industry, as administrators will have a better indication of which programs fail to serve students well and can take action without closing or revamping their entire school. For example, after Gainful Employment data on the debt and earnings of program graduates became available in January 2017, institutions chose on their own to shut down 300 failing programs before receiving any sanctions—obviating the need for the federal government to step in to close those programs.16 And even when the threat of sanctions isn’t enough for institutions to act, a program-level accountability framework may be more politically feasible for policymakers to enforce, as there is more tolerance for withholding federal student aid dollars from a single program with bad results than from an entire school.
It’s a More Efficient Use of Federal Financial Aid Dollars.
Program-level accountability may also better target federal student aid toward programs that are shown to be more successful, helping to raise an institution’s overall outcomes by focusing administrators and students on the fields where that school is providing the greatest return on investment. For example, Arizona College—a for-profit institution in Glendale, Arizona—offers two programs that focus on medical insurance and billing, one leading to an associate’s degree and the other to a certificate.17 While it’s easy to assume that the associate’s degree would provide a better return on investment, programmatic data show that those who graduated from the medical insurance billing certificate program actually owed less and earned more than their associate’s degree counterparts.18 An institution-level accountability system might mask this difference, while a programmatic approach could encourage better use of federal resources by only disbursing federal grants and loans to areas of an institution that are shown to serve students well.
Five Practical Challenges of Using Program-Level Accountability
If Congress decides to include program-level accountability in the next HEA, there are a number of technical challenges that it will need to consider before doing so.
1. Defining a Program
Before putting any accountability measures in place based on program outcomes, lawmakers will first have to determine what actually constitutes a “program” at different levels of higher education. Right now, the US Department of Education uses Classification of Instructional Programs codes—also known as CIP codes—to determine how programs can be grouped together. A two-digit CIP code provides the most general classification, separating all higher education programs across the United States into 47 distinct categories. A four-digit CIP code is more specific, yielding 389 program categories, while a six-digit CIP code is the most granular, allowing for 1,835 groupings of higher education programs. In practical terms, for accountability or transparency purposes, lawmakers will have to decide whether programs like Clinical Psychology, Family Psychology, and Forensic Psychology should all be grouped together under the two-digit CIP code for Psychology, or whether it’s more appropriate to measure them separately using either a four- or six-digit CIP code.19 A two-digit CIP code would provide a simpler, yet less nuanced, way of evaluating programs. A four- or six-digit CIP code may help identify more specific areas of success or ineffectiveness within an institution, yet it will also yield smaller sample sizes and a greater likelihood of data suppression for programs that are simply too small to report their numbers without jeopardizing student privacy. A more granular evaluation of program outcomes will also make it more difficult to assess whether all student subgroups are succeeding at an institution, as smaller sample sizes make it more likely that their data will be suppressed.
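The sketch below makes the granularity trade-off concrete by rolling hypothetical program records up to the two-, four-, and six-digit CIP levels. The specific codes and cohort sizes are illustrative (drawn loosely from the Psychology series, CIP 42), but the truncation logic mirrors how the code hierarchy nests.

```python
from collections import Counter

# Hypothetical program records as (six-digit CIP code, cohort size);
# the codes are illustrative entries in the Psychology series (42).
programs = [
    ("42.2801", 18),  # Clinical Psychology
    ("42.2802", 7),   # Family Psychology (illustrative code)
    ("42.2812", 12),  # Forensic Psychology (illustrative code)
]

def rollup(code: str, digits: int) -> str:
    # "42.2801" -> "42" at two digits, "42.28" at four, unchanged at six.
    return {2: code[:2], 4: code[:5], 6: code}[digits]

for digits in (2, 4, 6):
    cohorts = Counter()
    for code, size in programs:
        cohorts[rollup(code, digits)] += size
    print(digits, dict(cohorts))
# 2 {'42': 37}      one blended Psychology cohort
# 4 {'42.28': 37}   these three programs still pool at four digits
# 6 {'42.2801': 18, '42.2802': 7, '42.2812': 12}  separate cohorts,
#                   small enough to risk privacy suppression
```

Note how the same 37 students form one reportable cohort at the coarser levels but splinter into small, suppression-prone cells at six digits.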
2. Accounting for Students with Undeclared Majors
Being able to evaluate student outcomes at a program level depends on postsecondary students enrolling in a specific major. In the Gainful Employment regulations put in place by the last Administration, this was less of an issue, as only program graduates were counted, obscuring the outcomes of those who never declared a program or who entered a program but never finished.20 However, studies suggest that between 25% and 50% of students enter an institution undeclared, and many leave before ever declaring a major at all.21 For accountability purposes, policymakers will have to determine how to classify the many students who fall into this category: those who took only general education classes and never graduated or declared a specific major. Leaving them unaccounted for would exclude many students’ outcomes from a federal accountability framework—for some institutions, a majority of their students. And if Congress does account for their outcomes within a program-level framework, it will need to determine an appropriate sanction when most undeclared students at an institution fail an accountability measure, as there may be no single program to close or revamp to fix the problem.
3. Assessing Only Graduates vs. All Students
Since only half of students who enter the typical institution leave with a degree in hand, another question that policymakers will have to grapple with is whether program-level accountability should measure only the graduates of a program or all students who enter an institution or declare a specific major.22 If accountability focused solely on those who completed—rather than on all students who enroll—institutions would have little incentive to ensure that every student succeeds in their programs. That approach would also fail to account for the many students who enter an institution but never graduate—completely erasing the students who likely have the worst outcomes after leaving a program. This raises the question of what combination of outcome measures makes sense at the program and institution levels, and how to design a framework that captures the outcomes of both graduates and non-graduates who attend an institution.
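A toy calculation shows how much the denominator choice matters (all numbers here are invented): the same program can look excellent when judged only on its graduates and mediocre when judged on everyone who enrolled.

```python
# Invented cohort: 200 students enter, half complete (per the memo's
# typical completion rate); 90 graduates clear some outcome bar.
entered, graduated = 200, 100
grads_meeting_outcome_bar = 90

grad_only_rate = grads_meeting_outcome_bar / graduated    # 0.90
all_entrants_rate = grads_meeting_outcome_bar / entered   # 0.45

print(f"graduates-only denominator: {grad_only_rate:.0%}")   # 90%
print(f"all-entrants denominator:   {all_entrants_rate:.0%}")  # 45%
```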
4. Allocating Outcomes for Students Who Change Majors
Policymakers will also have to grapple with how to treat students who transfer into or out of a higher education program. Three-quarters of students end up changing their major at least once after they enroll.23 If a student starts as an education major but finishes as a business graduate, are their outcomes attributed to the initial program of enrollment or to the latter? What if they enroll in three different programs but don’t finish any of them? A possible solution would be to measure all students the way the Department currently does through its Outcome Measures survey: by counting students who switch majors as part of the entering cohort of the program they switch into, regardless of the number of credits with which they transfer in. While this may provide an unfair advantage to programs that enroll a higher number of transfer students, it could also incentivize programs to accept more of those students, along with their credits, potentially boosting completion rates and improving other post-enrollment outcomes.
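One possible reading of that entering-cohort rule is sketched below: a student joins the cohort of every program they declare into, and a completion is credited to the program where they finished. The record structure and field names are hypothetical, and this is an illustration of the rule as described above, not the Department’s actual methodology.

```python
from collections import defaultdict

# Hypothetical enrollment histories: each program a student declared,
# in order, plus the program (if any) in which they earned a credential.
students = [
    {"entered": ["Education", "Business"], "completed": "Business"},
    {"entered": ["Education"], "completed": None},
    {"entered": ["Nursing", "Biology", "History"], "completed": None},
]

cohorts = defaultdict(lambda: {"entered": 0, "completed": 0})
for s in students:
    for program in s["entered"]:
        # Entering-cohort rule: a switcher counts in each program they
        # declare into, regardless of the credits they bring along.
        cohorts[program]["entered"] += 1
    if s["completed"]:
        cohorts[s["completed"]]["completed"] += 1

for program, counts in sorted(cohorts.items()):
    print(program, counts)
# Business is credited with a completion; Education carries two
# entrants and no completions, even though one eventually finished.
```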
5. Increasing Chances of Programs Evading Review
Another challenge of using a programmatic accountability measure is that a substantial number of programs may have their data suppressed if their cohorts are too small to comply with federal privacy rules. For example, the Department recently released programmatic loan data on over 194,000 programs, yet a whopping 78% of those programs (152,000) had their data suppressed for privacy reasons.24 Suppression is more likely to affect smaller institutions or programs that enroll fewer students, though it could also be exploited by bad-actor schools to evade requirements by breaking poor-performing larger programs up into smaller subprograms to get their numbers below the privacy threshold. And if programs are not clearly required to be broadly defined, or if only graduates are counted, the number of students within each cohort becomes even smaller, putting the enforcement of program-level outcomes in jeopardy on a wider scale. Similarly, lawmakers will have to consider whether the Department has the capacity and resources to adjudicate outcomes data and appeals for all higher education programs. While the Department was tasked with evaluating only roughly 10,000 programs in its review of Gainful Employment programs, the process still suffered delays and took over a year to formally publish results.25
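The evasion risk described above reduces to simple arithmetic, sketched here under an assumed cell-size threshold of 30 students (the actual suppression rule may differ):

```python
PRIVACY_THRESHOLD = 30  # assumed cutoff; the real privacy rule may differ

def is_reported(cohort_size: int) -> bool:
    """Outcomes are published only when a cohort meets the threshold."""
    return cohort_size >= PRIVACY_THRESHOLD

# One 60-student program is large enough to have its outcomes reported...
print(is_reported(60))                         # True
# ...but the same students, split into three 20-student subprograms,
# would all fall below the threshold and vanish from the data.
print([is_reported(n) for n in (20, 20, 20)])  # [False, False, False]
```

Counting only graduates, or defining programs at the six-digit CIP level, shrinks these cohorts further and widens the gap that suppression can hide.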
Conclusion
As Congress moves toward a reauthorization of HEA and determines which accountability metrics are most appropriate to ensure that students and taxpayers are getting a return on their investment in higher education, it will have to decide whether certain outcomes should be measured at the institution level, by program, or both. While each approach has certain advantages, a combination of both methods may help mitigate the disadvantages inherent within each. Together, the right mix of institutional- and program-level metrics could drive institutions to focus on improving outcomes for their students, while weeding out the worst actors who are currently leaving students worse off than before they enrolled. Failure to act will continue to allow more bad actors to participate in the system, funneling billions of dollars and students into postsecondary programs that deliver nothing in return for that investment.
Endnotes
Andrew Kreighbaum, “Bridging the Gap on Accountability.” Inside Higher Ed, 25 Apr 2019, https://www.insidehighered.com/news/2019/04/25/senate-democrat-adds-momentum-push-accountability-all-colleges. Accessed 4 May 2019; See also, Lauren Camera, “A Mission to Overhaul Higher Education.” US News, 4 Feb 2019, https://www.usnews.com/news/education-news/articles/2019-02-04/sen-lamar-alexanders-mission-to-overhaul-the-higher-education-law. Accessed 4 May 2019.
Antoinette Flores, “Fact Sheet: What Happens When Accreditors Sanction Colleges.” Center for American Progress, 27 Nov 2018, https://www.americanprogress.org/issues/education-postsecondary/reports/2018/11/27/461458/fact-sheet-happens-accreditors-sanction-colleges/. Accessed 3 May 2019; See also, Michael Itzkowitz. “The State of American Higher Education Outcomes in 2019.” Third Way, 8 Apr 2019, https://www.thirdway.org/report/the-state-of-american-higher-education-outcomes-in-2019. Accessed 8 July 2019.
United States, US Department of Education, “National Student Loan Cohort Default Rate Falls.” 26 Sep 2018, https://www.ed.gov/news/press-releases/national-student-loan-cohort-default-rate-falls. Accessed 3 May 2019; See also, Erica Green. “Colleges Hire Consultants to Help Manipulate Student Loan Default Rates.” New York Times, 11 May 2018, https://www.nytimes.com/2018/05/11/us/politics/colleges-student-loan-default-rates.html. Accessed 20 June 2019.
Michael Itzkowitz. “Risky Bet.” Third Way, 15 Apr 2019, https://www.thirdway.org/memo/a-risky-bet. Accessed 31 May 2019.
United States, Higher Education Opportunity Act of 2008, Section 436. https://www2.ed.gov/policy/highered/leg/hea08/index.html. Accessed 30 Oct 2017; See also, Michael Itzkowitz. “Why the Cohort Default Rate is Insufficient.” Third Way, 7 Nov 2017, https://www.thirdway.org/report/why-the-cohort-default-rate-is-insufficient. Accessed 16 July 2019; See also, Jordan Weissmann. “Student-Loan Defaults Are Still Soaring Thanks to Washington’s Neglect.” The Atlantic, 1 Oct 2013, https://www.theatlantic.com/business/archive/2013/10/student-loan-defaults-are-still-soaring-thanks-to-washingtons-neglect/280158/. Accessed 18 July 2019.
Jordan Weissmann. “Student-Loan Defaults Are Still Soaring Thanks to Washington’s Neglect.” The Atlantic, 1 Oct 2013, https://www.theatlantic.com/business/archive/2013/10/student-loan-defaults-are-still-soaring-thanks-to-washingtons-neglect/280158/. Accessed 18 July 2019.
Erica Green. “Colleges Hire Consultants to Help Manipulate Student Loan Default Rates.” New York Times, 11 May 2018, https://www.nytimes.com/2018/05/11/us/politics/colleges-student-loan-default-rates.html. Accessed 18 July 2019.
United States, US Department of Education. “Official Cohort Default Rates for Schools.” 24 Sep 2018, https://www2.ed.gov/offices/OSFAP/defaultmanagement/cdr.html. Accessed 18 July 2019.
United States, United States Congress, Education & Labor Committee. “Aim Higher Act.” 26 July 2018, https://edlabor.house.gov/Aim-Higher. Accessed 18 July 2019.
United States, US Department of Education. “Secretary DeVos Delivers on Promise to Expand College Scorecard, Provide Meaningful Information to Students on Education Options and Outcomes.” 21 May 2019, https://www.ed.gov/news/press-releases/secretary-devos-delivers-promise-expand-college-scorecard-provide-meaningful-information-students-education-options-and-outcomes. Accessed 12 June 2019.
Andrew Kreighbaum. “DeVos Allows Career Programs to Delay Disclosure to Students.” Inside Higher Ed, 3 July 2017, https://www.insidehighered.com/news/2017/07/03/education-department-announces-new-delays-gainful-employment. Accessed 18 July 2019.
United States, US Department of Education. “College Scorecard Data.” 21 May 2019, https://collegescorecard.ed.gov/data/. Accessed 19 July 2019.
Lauren Camera. “A Mission to Overhaul Higher Education.” US News, 4 Feb 2019, https://www.usnews.com/news/education-news/articles/2019-02-04/sen-lamar-alexanders-mission-to-overhaul-the-higher-education-law. Accessed 22 May 2019; See also, United States, US Senate, “Senators Hassan and Durbin Introduce Comprehensive Legislation to Protect Students and Taxpayers from Predatory Higher Education Practices.” Press Release, 26 Mar 2019, https://www.hassan.senate.gov/news/press-releases/senators-hassan-and-durbin-introduce-comprehensive-legislation-to-protect-students-and-taxpayers-from-predatory-higher-education-practices. Accessed 10 July 2019.
Anthony P. Carnevale, Jeff Strohl, and Michelle Melton. “What’s It Worth?: The Economic Value of College Majors.” Georgetown University Center on Education and the Workforce, https://cew.georgetown.edu/cew-reports/whats-it-worth-the-economic-value-of-college-majors/#full-report. Accessed 10 June 2019.
United States, US Department of Education, “Using Federal Data to Measure and Improve the Performance of US Institutions of Higher Education.” Jan 2017, https://collegescorecard.ed.gov/assets/UsingFederalDataToMeasureAndImprovePerformance.pdf. Accessed 1 Mar 2019.
Kevin Carey, “DeVos is Discarding College Policies That New Evidence Shows Are Effective.” New York Times, 30 June 2017, https://www.nytimes.com/2017/06/30/upshot/new-evidence-shows-devos-is-discarding-college-policies-that-are-effective.html. Accessed 1 May 2019.
United States, US Department of Education. “Education Department Releases Final Debt-to-Earnings Rates for Gainful Employment Programs.” 9 Jan 2017, https://www.ed.gov/news/press-releases/education-department-releases-final-debt-earnings-rates-gainful-employment-programs. Accessed 11 May 2019.
United States, US Department of Education. “Education Department Releases Final Debt-to-Earnings Rates for Gainful Employment Programs.” 9 Jan 2017, https://www.ed.gov/news/press-releases/education-department-releases-final-debt-earnings-rates-gainful-employment-programs. Accessed 11 May 2019.
United States, US Department of Education, “Introduction to the Classification of Instructional Programs: 2010 Edition.” https://nces.ed.gov/ipeds/cipcode/Files/Introduction_CIP2010.pdf. Accessed 7 May 2019; See also, the CIP code search tool, https://nces.ed.gov/ipeds/cipcode/search.aspx?y=55.
United States, US Department of Education, “Education Department Releases Final Debt-to-Earnings Rates for Gainful Employment Programs.” Press Release, 9 Jan 2017, https://www.ed.gov/news/press-releases/education-department-releases-final-debt-earnings-rates-gainful-employment-programs. Accessed 8 July 2019.
Liz Freedman, “The Developmental Disconnect in Choosing a Major: Why Institutions Should Prohibit Choice until Second Year.” The Mentor: An Academic Advising Journal, 28 June 2013, https://dus.psu.edu/mentor/2013/06/disconnect-choosing-major/. Accessed 10 June 2019; See also, Michael Itzkowitz, “New Data Further Cements Completion Crisis in Higher Education.” Third Way, 1 Feb 2018, https://www.thirdway.org/memo/new-data-further-cements-completion-crisis-in-higher-education. Accessed 11 June 2019.
Michael Itzkowitz, “New Data Further Cements Completion Crisis in Higher Education.” Third Way, 1 Feb 2018, https://www.thirdway.org/memo/new-data-further-cements-completion-crisis-in-higher-education. Accessed 7 June 2019.
Liz Freedman, “The Developmental Disconnect in Choosing a Major: Why Institutions Should Prohibit Choice until Second Year.” The Mentor: An Academic Advising Journal, 28 June 2013, https://dus.psu.edu/mentor/2013/06/disconnect-choosing-major/. Accessed 10 June 2019; See also, Valerie Strauss, “Why so many college students decide to transfer.” Washington Post, 29 Jan 2017, https://www.washingtonpost.com/news/answer-sheet/wp/2017/01/29/why-so-many-college-students-decide-to-transfer/?utm_term=.f3a84addfd1f. Accessed 12 June 2019.
United States, US Department of Education. “Secretary DeVos Delivers on Promise to Expand College Scorecard, Provide Meaningful Information to Students on Education Options and Outcomes.” 21 May 2019, https://www.ed.gov/news/press-releases/secretary-devos-delivers-promise-expand-college-scorecard-provide-meaningful-information-students-education-options-and-outcomes. Accessed 12 June 2019.
United States, US Department of Education. “Education Department Releases Final Debt-to-Earnings Rates for Gainful Employment Programs.” 9 Jan 2017, https://www.ed.gov/news/press-releases/education-department-releases-final-debt-earnings-rates-gainful-employment-programs. Accessed 17 July 2019.