In September 2006, the Spellings Commission on the Future of Higher Education issued an indictment of American higher education. Costs are too high, the panel said, graduation rates too low (and lower still for low-income and minority students), and learning outcomes a mystery. Moreover, "compounding all of these difficulties is a lack of clear, reliable information about the cost and quality of postsecondary institutions, along with a remarkable absence of accountability mechanisms to ensure that colleges succeed in educating students."
To long-time observers of higher-education policy and politics, the criticisms were nothing new. Nor was the response from the nation's colleges and universities. While supporting the general idea of accountability, they disputed many of the commission's findings and recommendations. Higher education, they said, is already accountable in many—perhaps too many—ways.
But the commission had it right: Accountability in American higher education is largely a myth. Many of the systems, relationships, and arrangements often cited as elements in higher education’s "accountability system" are no such thing. As a result, higher education is stuck in a seemingly endless cycle of attack and defense, of accountability conversations that founder on basic differences of definition.
This creates two dangers for the nation's vitally important institutions of higher learning. The first is that an outside force will break the impasse by imposing an accountability system that actually works, but in all the wrong ways. The second is that the impasse won't be broken, and public support for higher education will slowly bleed away. To avoid these outcomes, higher education needs accountability in more than name. It needs accountability that is real.
The Two Elements of Real Accountability
Benjamin Disraeli once defined justice as "truth in action." He wasn't speaking of higher education, but he could have been. Disraeli's famous aphorism identifies the two essential elements of any real accountability system: truth and action.
Accountability systems begin with a conception of purpose: what an institution being held accountable is meant to be and do. Once that is established, the agents of accountability gather information about effectiveness. This is the "truth" part of the accountability equation, truth in the dictionary sense of the "body of real things, events, and facts" (Merriam-Webster, 2007) that indicate the degree to which the institution has fulfilled its purpose.
But it's not enough simply to gather information. Real accountability systems push institutions to act on that information, in a manner designed to change what they do in order to make them more successful than they would otherwise be.
More simply, real accountability systems matter. They satisfy the parallel-universe test: If scientists could create two miniature parallel universes in a laboratory, identical in all ways except that one contained a real higher-education accountability system and one did not, an observer could peer into a microscope and tell the difference—not by seeing the system itself but by seeing the colleges and universities in the two universes acting in different ways.
Unfortunately, most discussions of higher education accountability—particularly those within the academic community itself—elide this distinction. They focus on legitimate questions of truth: How can we gather adequate information with limited time and resources? How much inevitable error in measurement can we tolerate? But they studiously ignore the need for action. This is not a function of some misunderstanding or confusion as to the meaning of real accountability. Many in higher education understand what real accountability means all too well.
The Fallacy of Self-Accountability
The accountability conversation dates back decades and has been in full force since at least the mid-1980s, with the publication of A Time for Results by the National Governors Association. As states began to require colleges and universities to assess and report performance, Congress got into the act with the 1990 Student Right-to-Know Act, mandating significant new disclosure of information on graduation rates and school safety. Additional federal oversight was briefly added in 1992, with the creation of joint state-federal State Postsecondary Review Entities (SPREs), designed to monitor and evaluate institutional performance. Many in higher education worried that these actions heralded new requirements to come. The SPREs died an early death, undone by Newt Gingrich in the early days of the "Republican Revolution" Congress elected in 1994. Real accountability didn't fit with the radical anti-government mood that briefly dominated federal policymaking. But that did not kill concerns about accountability, although momentum for accountability slowed for a time at the federal level. The Spellings Commission's tough report showed that ignoring pressure for convincing accountability has not served higher education well.
Accountability is based on the view that people work best when their motivations are both internal and external. Colleges give students grades because they know that while students may have an innate desire to learn, they learn more if their performance is monitored and judged. Humans are fallible; they work harder and better if they know someone else is paying attention to how well they do. The same is true for institutions.
But colleges and universities seem to cling to the fundamentally illogical idea that a college or university can be accountable only to itself and its peers, what the National Association of Independent Colleges and Universities (NAICU) referred to during the Spellings Commission process as "appropriate accountability." The idea can be summed up simply: "Leave us alone. And if we must be judged, we will judge one another. We will determine the truth and take action as we please."
This is not to say that higher education institutions shouldn't define their missions, or govern themselves. All colleges and universities—particularly those in the private sector—need this independence. But once missions are defined by institutional leaders and appropriate governing bodies, it's more than reasonable to hold institutions accountable for achieving them. Self-governance means freedom to choose how to succeed—not freedom to choose whether to succeed.
But organized higher education's overall reaction to the Spellings report's criticisms was clear when David Ward, president of the American Council on Education and a member of the panel, refused to sign the final report, saying, "Many problems cited in the report are the result of multiple factors but they are sometimes attributed entirely to the limitations of higher education. The recommendations as a whole also fail to recognize the diversity of missions within higher education and the need to be cautious about policies and standards based on a one-size-fits-all approach."
Ward's appeal to diversity was familiar and deliberate. Like snowflakes, no two colleges and universities are exactly alike. They're big, small, public, private, old, young, rich and poor. This diversity is a huge asset to the nation, a factor any effective accountability system should take into account. But just as all snowflakes are light, cold, and wet, higher-education institutions are far more alike than they are different. Most organize and run themselves in the same way, with academic departments, professors, deans, tenure, and credit hours. They offer degrees with the same names that take the same amount of time to earn. They teach many of the same classes, and over half of all bachelor’s degrees are awarded in just five major disciplines—business, education, social science/history, psychology, and communications.
Yet rather than embracing their common purpose, colleges like to focus on their differences as a way of asserting their uniqueness. This is an anti-accountability gambit—if you're unique, you can't be compared. If you're not comparable, you can't be judged.
Some say institutions are accountable through the market, to students. But students choosing colleges currently have little or no information about which institutions actually provide the best education. Accreditation is also frequently cited as a source of accountability. And indeed, the U.S. Department of Education has worked to implement the Spellings Commission recommendations by strengthening the role of accreditors. But because accreditation is a process of self-study and peer review, it can be uncomfortably close to self-accountability. Predictably, the department’s efforts to require accreditors to give more weight to an institution's student-learning outcomes have produced a backlash among institutions and their defenders.
Accreditation as it stands today also has suffered from a lack of truth, not in the sense of dishonesty, but in failing to require information that provides a full picture of how well institutions are achieving their mission, particularly as it relates to student learning. While in recent years accreditors have shifted their focus to include assessment of student-learning outcomes, a welcome and necessary step, historically they have put their emphasis on issues such as financial integrity and faculty governance. These things, while important, have little to do with teaching, knowledge creation, and other essential purposes of the university.
But the real weakness of accreditation lies with (in)action. Accreditation is a floor, a minimum requirement. Most institutions are far above the minimum, making accreditation too often merely a compliance exercise. And given that the federal government ties accreditation to student aid, accreditors are reluctant to impose what amounts to a financial death penalty on low-performing institutions. Meanwhile, the most important accreditation-related information about institutional quality stays hidden from the public eye, and the pressure to act on it is reduced. Recently, in response to complaints from higher education, Congress has acted to protect institutions from Secretary Spellings’ efforts to carry out the recommendations of her commission to make accreditation a more credible gatekeeper. Over the long term, however, this is unlikely to stop the national discussion of accreditation’s shortcomings.
How States Tried (and Failed) To Build Real Accountability Systems
The conversation about higher-education accountability on the state level didn't end with the death of the State Postsecondary Review Entities in the mid-1990s. While momentum for accountability slowed for a time at the federal level, the 1990s ultimately saw an explosion of new state-level accountability systems aimed at public colleges and universities. Unfortunately, like so many other attempts to hold higher education accountable, the state systems foundered. The reasons were familiar: little truth, less action.
Gathering information to judge the success of an institution as complex as the modern university is no easy task. But in their rush to build accountability systems, many states simply went with the information they had, producing lengthy compendiums of context-free numbers—in the words of Joseph Burke of the Rockefeller Institute of Government, "a grab bag of available indicators with no sense of state priorities or a public agenda."
States also constructed their accountability systems in ways that weren't focused on the needs of students. State policymakers tend to see higher education in aggregate economic terms: They give universities a large amount of money every year, in return for which they expect the greatest possible return to the state. Thus, they were apt to look at measures of financial "efficiency," like average credits taught by each faculty member or the total number of degrees awarded (in each case, the more the better, in their view). Little data was gathered about the quality of teaching or the level of students’ learning.
Not all states fell short to the same degree.
But nearly every state has failed to translate information into real incentives for improvement. Some started with a legitimate theory of action, centered on monetary incentives. But as Burke and his colleagues have documented, state "performance funding" plans have never really taken root. In large part because institutions were loath to have any of their base funding put at risk, states that explicitly tied funding to performance levels tended to keep the dollar amounts relatively low. Performance funding was often confined to "new" dollars, which tended to be first on the chopping block when state fiscal fortunes nosedived in 2001.
The net result of the 1990s-era expansion of state higher-education accountability was that nearly every state now publicly reports some data about some aspects of their higher-education systems. But much of that data is only tangentially related to the truth about institutional performance in educating students. And few, if any, states have created new obligations or strong incentives for institutions to improve their performance. States have made marginal contributions to the truth part of real higher-education accountability but left action largely untouched.
Higher Education Accountability in the Era of No Child Left Behind
The failure of the state higher-education movement meant that the colleges and universities entered the 21st century in a familiar state—criticized but still unaccountable. The stakes in the accountability conversation, however, were soon raised. In January 2002, President Bush signed the No Child Left Behind Act into law, ushering in an era of unprecedented federal control over the schools. As with higher education, states had worked throughout the 1990s to establish K–12 accountability systems, and just as with higher education, their efforts often fell short. Congress decided to take matters into its own hands, and soon each of the nation's 90,000 schools was required to test students in reading and math and be subjected to serious consequences if scores fell short of government-created standards.
The higher-education community, in an attempt to avoid a similar fate, launched a new round of efforts to see if it could get the accountability equation right. The Business-Higher Education Forum weighed in, as did the Association of American Colleges and Universities. Most prominently, a high-profile National Commission on Accountability in Higher Education was created in 2004. Coordinated by the State Higher Education Executive Officers, the bipartisan commission was led by former Governor Frank Keating of Oklahoma.
Over the course of a year, the commission took hundreds of pages of testimony from a wide range of individuals and organizations. The final report of the Commission on Accountability has many virtues. It is frank about higher education's shortcomings: uneven student learning, low graduation rates, ever-rising costs. It calls for new sources of information and more emphasis on innovation and quality in teaching students, particularly undergraduates.
But like all the accountability efforts and manifestos before it, the commission report fails the action test. Though it acknowledged that "institutional accountability practices … are most important to performance because they directly influence faculty and students who do the actual work of higher education," the report studiously avoided challenging institutions specifically or directly. Instead, it simply said that institutions should establish "goals aligned with fundamental public priorities" and "create the conditions, including necessary incentives and management oversight, for students and faculty to meet ambitious objectives." Institutions should also monitor progress on goals, communicate clearly with students, continuously assess and improve learning, etc., etc.
These are all worthy—if overly general—ideas. But they're not new ideas. They amount to saying that institutions should be well-managed and focused on student learning. The point of accountability is not alerting institutions to the wisdom of this course or merely suggesting that they follow it. Real accountability systems make such actions unavoidable, or create external incentives so strong that the distinction makes no difference. Yet the report advises that "institutional accountability for student learning should be internal, not external." Self-accountability still rules the day.
A key phrase in the report embodies the flawed logic of this approach: "People achieve excellence because they want to, not because they have to." Not true—people are most likely to achieve excellence when they want to and have to, when intrinsic and extrinsic motivations are strong and aligned. College leaders, by contrast, may want to make the tough choices suggested by the report, but they don't have to. Not surprisingly, most of them haven't. And so the Spellings Commission was formed a year later, and the cycle of criticism, debate, and irresolution continued.
Reaction to Spellings Commission from within higher education followed the familiar pattern of advocating truth without action. In a March 2006 op-ed in the Washington Post, former Harvard president Derek Bok offered a sharp critique of undergraduate teaching but insisted that "useful reforms can come only from within the universities." Despite his sincere desire to push higher education in a new direction, Bok wouldn't concede that a system that has largely chosen to be what it is might never, of its own accord, choose to be something else.
Similarly, Lee Shulman proposed "Seven Pillars of Assessment for Accountability" in the January/February 2007 issue of Change. In stressing the importance of multiple assessment measures, integrated into instruction and management and considered in the context of the larger narrative of institutional quality, Shulman offers important insight on the truth element of accountability. But the fifth pillar, "remember that high stakes corrupt," speaks to the anti-action impulse (and implies a dim view of higher education's character).
High stakes can corrupt, but they don't have to, as long as institutions maintain their integrity and student-centered values. For every student who cheats on a high-stakes final exam, far more do not, despite temptations and opportunities to do so. For every Enron or WorldCom, hundreds of publicly traded companies report accurate financial results to the Securities and Exchange Commission—even when the results are sure to drive their stock prices down. High stakes create risks and complications, but they're also the difference between accountability that is real and accountability that is not.
In recent months, a number of higher-education organizations have put forth proposals that purport to answer the call for accountability. The National Association of State Universities and Land-Grant Colleges (NASULGC) and the American Association of State Colleges and Universities (AASCU) outlined a "Voluntary System of Accountability" in which institutions could report information including measures of student engagement and learning. The Association of American Universities, which represents the most elite public and private universities, announced that it has "committed to collecting and providing to the public basic information about undergraduate student performance, such as graduation rates, time to degree, and careers pursued following graduation." Similarly, NAICU released a "consumer information template" that private colleges could populate with many of the same measures.
NASULGC and AASCU deserve praise for including engagement and learning in the accountability conversation. But all of the proposals embrace a vision of accountability that begins and ends with merely reporting information—if institutions so choose. The action component of accountability is nowhere to be found. The associations describe how colleges and universities could report performance but not why they will then act to improve it.
Defenders of higher education like Sen. Lamar Alexander (R-TN), a former U.S. secretary of education and university president, don't like the department dictating operational details to campuses. But Alexander's recent proposals—including establishing an award for accountability in higher education akin to the Malcolm Baldrige Award for quality in American business, as well as making grants to encourage institutions to develop better measures of accountability—would keep accountability centered on institutions themselves rather than mandating action or consequences for inaction.
For Better or Worse
The clearest evidence that higher-education accountability has mostly come to naught lies with the institutions themselves. They're organized and run the same way they have been for over a century. Student outcomes are stable at best; while six-year graduation rates have crept up slightly since the 1970s, they still hover at two-thirds overall and are significantly lower for low-income and minority students. Meanwhile, the recently released National Survey of America's College Students, conducted by the American Institutes for Research, a respected non-partisan research organization, found that a majority of students nearing graduation from four-year colleges could not consistently perform complex literacy tasks.
Given these manifest shortcomings, the desire for real higher-education accountability will not diminish. The more important higher education becomes in the increasingly globalized labor market, the more policymakers and consumers will demand of it. The likelihood that real higher-education accountability will arrive grows stronger by the year. What is not clear is what form it will take.
One possibility is that private entities, driven by the profit motive, will take charge. This arguably has already happened, in the form of the U.S. News & World Report annual college rankings. These clearly provide incentives to action: Because institutions benefit from higher rankings, they change their practices and behaviors in order to boost their standing. Anecdotes of rankings-driven policy changes abound, and sometimes the evidence is remarkably direct.
Both the U.S. News rankings and NCLB are deeply controversial. The criticism of the U.S. News rankings stems from their heavy emphasis on status, wealth, and selectivity of the institutions. NCLB detractors are wary of judging schools purely on the basis of student scores on standardized tests in reading and math. But if all U.S. News and NCLB did was assess schools and universities, nobody would pay them much mind. They matter because they create action.
To avoid this fate—of having a narrow, damaging truth imposed upon them by for-profit magazines or ultimately by government—the nation's colleges and universities will have to do two things. First, they must invest more time and money in gathering information about their performance and make that information publicly available in a way that allows for straightforward comparisons between institutions. A great deal of data is already being collected through instruments like the National Survey of Student Engagement and the Collegiate Learning Assessment (CLA), but most of it is kept private by institutions, away from the public eye.
But simply disclosing the truth isn't enough. Higher-education institutions must also be subject to some system that makes action unavoidable. If enough data are available, U.S. News or one of its competitors might simply take care of this on its own, in the form of better college rankings. While criticism of rankings is de rigueur in polite conversation within higher education, this is arguably the least objectionable alternative. Compared to other options, rankings are relatively uninvasive and non-regulatory, giving institutions total discretion in how best to increase their standing. Tying presidential pay to rankings would be a step in the right direction if the rankings were based on good criteria, like success in helping students learn.
Alternatively, both public and private institutions could negotiate with external bodies—such as accreditors, legislatures, or state higher-education boards—to set ambitious, mutually agreed-upon goals for success in key areas like student learning, graduation rates, scholarship, and research. Meeting those goals would have to matter a great deal to institutions in order to be a priority on a par with—or above—fundraising, marketing, intercollegiate athletics, and other concerns. That could be accomplished with strong funding incentives, contractual obligations for institutional leaders, high-profile public reporting, and legislative hearings—some combination of positive and negative incentives strong enough to matter.
If history is any guide, neither of these courses will come naturally. Colleges and universities are right to be wary of new accountability proposals, because bad accountability can be worse than none at all. Real accountability can also seem like submission, a surrender of autonomy and control. And if one thing is clear, it's that our higher-education institutions hold their autonomy dear.
But in the long run, this attitude is self-defeating. Accountability is really just responsibility—to the students whom colleges educate, to the governments that provide funding, to society at large. Responsibility creates obligation and limits freedom, but at its best it also creates mutual, cooperative relationships. Lack of responsibility, by contrast, loosens bonds and degrades commitment. Until higher education is more transparently and strongly accountable, it won’t be able to compete for public support with Medicaid, K–12 education, and public safety. Nor will it be able to convince policymakers to match student financial aid increases with cost increases, since public officials have been consistently unwilling to provide such support during the recent decades of non-accountability.
If higher education's endless fight against such accountability continues, it may have thrust upon it a version that is real but harmful. Or it may not—but suffer the isolation and marginalization that comes from being responsible to no one. Neither fate is in the best interests of the nation's great colleges and universities or those of the students who depend on them.