Methodology

Overview

The Center for College Affordability and Productivity (CCAP) compiled its rankings using five components:

1. Listing of Alumni in the 2008 Who's Who in America (25%)

2. Student Evaluations of Professors from Ratemyprofessors.com (25%)

3. Four-Year Graduation Rates (16 2/3%)

4. Enrollment-adjusted numbers of students and faculty receiving nationally competitive awards (16 2/3%)

5. Average four-year accumulated student debt of those borrowing money (16 2/3%)

In most cases, data for each component were gathered and standardized to yield a Z-score. The Z-scores were then converted to a value between 0 and 100. Nationally competitive awards were not normally distributed between schools, and thus could not be standardized. As a result, they were indexed in such a manner that the highest value was set equal to 100 and the others were valued as a proportion of that score.

Put a little differently for the non-technically oriented reader, we faced a problem in adding together the various components of the rankings: for some factors, the variation of observed values around the average was relatively modest, while for others it was greater. An observation 20 percent above the average on the student evaluation measure, for example, was an exceptionally high score (relative to other schools), while that was not the case for the alumni achievement factor. Thus we standardized scores into what are, roughly, "standard deviation equivalents" using the statistical device known as Z-scores. It is roughly like adding apples and oranges by turning oranges into "apple equivalents," making addition feasible and allowing us to apply the weights we decided were appropriate for each factor.
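To make the mechanics concrete, here is a minimal sketch, not CCAP's actual code, of the two normalization approaches just described, using made-up component values: Z-score standardization mapped to a 0-100 score by its location in a normal distribution, and the max-indexing used for the award data.

```python
# Illustrative only: the two normalization approaches described above,
# applied to hypothetical per-school values for a single component.
from statistics import NormalDist

def z_scores(values):
    """Standardize raw values into Z-scores (standard deviation equivalents)."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

def scale_0_100(values):
    """Map Z-scores to 0-100 scores via their location in a normal distribution."""
    return [100 * NormalDist().cdf(z) for z in z_scores(values)]

def index_to_max(values):
    """For non-normal data (e.g., awards): highest value set to 100,
    others scored as a proportion of that maximum."""
    top = max(values)
    return [100 * v / top for v in values]

raw = [0.8, 1.3, 2.1, 0.4, 1.0]   # hypothetical raw component values
print([round(s, 1) for s in scale_0_100(raw)])
print([round(s, 1) for s in index_to_max(raw)])
```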

Thus, each score was weighted to reflect its importance in the ranking, and the individual component scores were summed. The sum of all the components was then used to obtain an ordinal ranking of the schools. All schools in the sample were included in the rankings, regardless of the type of institution. We did some independent rankings limited to schools of a specified type, e.g., liberal arts colleges or national public universities, although we emphasize the comprehensive rankings since we believe most students consider a mix of different types of schools in making their college choices.
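The weighted aggregation itself is simple; the sketch below applies the stated weights (25%, 25%, and 16 2/3% for each of the remaining three components) to hypothetical 0-100 component scores and sorts schools into an ordinal ranking.

```python
# Illustrative only: combine hypothetical 0-100 component scores with the
# weights stated above and rank schools by the weighted sum.
WEIGHTS = {
    "whos_who": 0.25,
    "rmp": 0.25,
    "grad_rate": 1 / 6,
    "awards": 1 / 6,
    "debt": 1 / 6,
}

schools = {
    "School A": {"whos_who": 62, "rmp": 71, "grad_rate": 55, "awards": 40, "debt": 68},
    "School B": {"whos_who": 58, "rmp": 65, "grad_rate": 70, "awards": 52, "debt": 61},
}

def composite(scores):
    """Weighted sum of a school's component scores."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

ranked = sorted(schools, key=lambda s: composite(schools[s]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(composite(schools[name]), 1))
```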

School Selection

We chose to rank 569 schools, covering a variety of institutional types and classifications. We began with the first three tiers of the national doctoral universities ranked by U.S. News and World Report (USNWR), a total of 195 schools. In addition, we selected 186 schools, the top three tiers from the USNWR liberal arts college rankings. To take care of regional universities and colleges, we selected the top 20 master's level universities ranked in each region, North, South, Midwest, and West, a total of 84 schools. We also selected the top 10 baccalaureate colleges from each region, amounting to 41 in total. We further added the 50 institutions with the highest enrollments that were not already included in the rankings. Two schools, Sarah Lawrence College (#25) and Gustavus Adolphus College (#103), go unranked by USNWR, and we saw fit to add them because of their generally fine reputations. We decided to take American Jewish University and Bard College at Simon's Rock out of our rankings because their extremely small enrollments led to enrollment-adjusted values for key components that were unrealistically non-representative of the institutions. After looking at Who's Who data for schools with multiple entries, we decided to add 13 new schools: Fairleigh Dickinson University, Carroll University, Georgia State University, Iona College, King's College of Pennsylvania, Millersville University, Rhode Island College, Rider University, St. Francis College, St. John Fisher College, and the CUNY campuses at City College of New York, Brooklyn, and Queens. This completed our total list of ranked schools.

Alumni Listings in Who's Who in America (weighting: 25%)

Why Who's Who in America?

Who's Who in America, published by the New Jersey firm Marquis Who's Who, has issued biographical sketches about influential and noteworthy men and women since its first appearance in 1899. The Who's Who volumes are routinely purchased by libraries as a standard biographical reference. CCAP used Who's Who in America 2008 to generate a sample of successful Americans.

Other college rankings typically rely on the popularity of the institution among education professionals (peer assessments), selectivity (acceptance rates, high school performance, and SAT scores), and variables related to institutional resources (e.g., faculty resources). Some rankings (e.g., the Princeton Review) consider student attitudes toward schools almost exclusively. Others take into account financial dimensions, such as the cost of attending college. The CCAP college rankings stress the outputs of a college education rather than the inputs stressed by the most influential of the rankings. Who's Who in America, while imperfect, is a good sampling of America's successful residents. By recording the college attendance of individuals in Who's Who, the rankings account for the achievement of individuals once they leave college. In this way, we are able to determine how many graduates of a particular college reach a significant level of accomplishment.

We are aware that this approach is not perfect. There are cases (relatively few, in our judgment) of individuals with decidedly modest vocational achievement being included in the Who's Who volume. There are other cases of accomplished individuals who simply refuse to fill out the forms and are thus not included. While these deficiencies exist, they apply to graduates of all universities and do not work to create any known bias in favor of a particular individual institution or class of institution.

We biased the sample toward the present, allowing the data to focus on the recent success of colleges rather than recording people who last attended the institution more than 40 years ago. The year 1952 was chosen as the earliest birth date of those sampled because, using data obtained in a sample of slightly over 5,000 from an earlier study and extrapolating to the full population of about 100,000 names, approximately 20% of entries should have been born in 1952 or later. We felt we needed a sample of approximately 20,000 names. In fact, the 1952-or-later birth date criterion gave us a data set of 20,900 names, allowing us to strike a fair balance between temporally relevant results and an adequate population size.

We recorded the page number, the birth date, the sex, whether or not the individual had attended college, and the undergraduate college attended. We also recorded other information not used here, such as graduate and professional schools attended. The data were recorded in such a fashion that we could later replicate the sample or revisit individual entries. Since the typical student born in 1952 graduated from college no earlier than 1974, our analysis focuses on graduates of the 1970s, 1980s, 1990s, and, in a few cases, this decade.

When we finished entering all names from the book with birth dates of 1952 or later, we cleaned our data. First, we eliminated all entries who did not graduate from college, all entries who attended a foreign institution, and all entries that we found to have insubstantial or erroneous data (the entry had an invalid birth date or was otherwise anomalous). Great pains were taken to standardize college names, especially for schools that have aliases, such as College of the Holy Cross (MA) (a.k.a. Holy Cross College, Holy Cross). In the case of schools with the same name where the state was not included (e.g., Augustana College, St. John's University, and Concordia College), every entry was reexamined to see if we had missed any information. If we found nothing, we used other clues to determine the exact institution. For example, if a student was employed in Norton, Massachusetts (the location of Wheaton College) during his or her undergraduate years, we assumed that the student attended Wheaton College of Massachusetts as opposed to Wheaton College of Illinois. If the biographical clues given in Who's Who were not enough to determine which school the person attended, we looked for information online (lawyer profiles, state representative websites, educator curricula vitae, and so on).

For entries that omitted the campus name, e.g., University of Michigan, University of Arkansas, and California State University, we used tactics similar to those used for colleges with the same name. Some of these schools belong to large university systems without a distinct flagship campus, such as California State University, and some belong to smaller university systems with a distinct flagship campus, such as the University of Michigan. Still others belong to a system without a strongly distinctive flagship campus, such as the University of Massachusetts (the Amherst campus was not designated a flagship until 2003). First we reexamined the entries, searching for clues in the Who's Who biographical sketches and looking up the individuals online. For systems without a distinct flagship campus, we took the ratio of the campuses we had already determined and extrapolated it over the remaining undetermined entries. For systems with a distinct flagship campus facing little regional competition, such as the University of Texas at Austin, we assigned all remaining entries to the flagship. Similarly, if the individual was known to have graduated from a specific campus, we assigned the entry to that campus. This is especially true of entries labeled "University of California" or "University of Michigan," where the school designation minus a campus historically denotes Berkeley and Ann Arbor, respectively. Some schools, e.g., Rhodes College, have changed their names significantly over time, and we endeavored to take that into account in getting accurate measures of Who's Who entries.

Once we found the number of entries for every school, we needed to control for the number of students at each institution. By introducing enrollment numbers, we are able to compare schools as large as Ohio State University and as small as Thomas Aquinas College in California without giving schools like Ohio State an undue advantage. We found that the average college graduate in our sample attended school between 1980 and 1990, so we took full-time equivalent (FTE) enrollments for 1980 and 1990 and averaged them to determine the figure used in calculating the enrollment-adjusted number of Who's Who entries. We defined full-time equivalent (FTE) student totals as being equal to full-time enrollment plus one third of part-time enrollment. This way we take into account all students. We took our absolute numbers of Who's Who entries for each school and divided them by the institution's average of 1980 and 1990 FTE enrollments. Given the varying graduation dates of entries and given changing enrollments, this is not a precisely accurate way of adjusting for enrollment variation, but it has the virtue of simplicity and should yield results that are relatively accurate. As indicated earlier, the raw data were then standardized using Z-scores; in the final computation to obtain the rankings, the data were weighted so that they constituted a 25 percent importance in determining the final ranking.
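The enrollment adjustment just described reduces to a few lines of arithmetic. The sketch below, with hypothetical enrollment figures, shows the FTE definition (full-time plus one third of part-time) and the division of a school's Who's Who count by its averaged 1980/1990 FTE enrollment.

```python
# A sketch of the enrollment adjustment described above, with hypothetical inputs.
def fte(full_time, part_time):
    """Full-time equivalent enrollment: full-time plus one third of part-time."""
    return full_time + part_time / 3

def entries_per_fte(whos_who_entries, enroll_1980, enroll_1990):
    """Divide a school's Who's Who entries by its average 1980/1990 FTE enrollment."""
    avg_fte = (fte(*enroll_1980) + fte(*enroll_1990)) / 2
    return whos_who_entries / avg_fte

# Hypothetical school: 40 entries; (full-time, part-time) enrollments for each year
print(entries_per_fte(40, (12_000, 3_000), (14_000, 3_600)))
```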

Student Evaluations from Ratemyprofessors.com (25%)

RateMyProfessors.com was founded in 1999 as TeacherRatings.com by John Swapceinski. This free online service allows university and college students from American, Canadian, British, New Zealand, and Australian institutions to assign ratings to professors anonymously. It was converted to RateMyProfessors.com in 2001.

Student participation in this web site has been overwhelming; it is estimated that 7,000,000 evaluations were considered in the formulation of this ranking. University administrations have no control over the process of evaluation, meaning schools would find it difficult to "game" the process by manipulating student participation (accurate reporting of data is an issue with any ranking that uses self-reported data, such as the popular US News & World Report rankings).

Any student who has cookies enabled on his or her web browser can enter ratings of professors via RateMyProfessors.com. All categories are based on a 5-point rating system, with 5 as the highest rating. The categories students evaluate classes on are Easiness, Helpfulness, and Clarity. Overall Quality is determined by averaging the Helpfulness and Clarity ratings given by students. An overall quality rating of 3.5 to 5 is considered good (yellow smiley face), a rating of 2.5 to 3.5 is considered average (green face), and a rating of 1 to 2.5 is considered poor (blue sad face). There is also a chili ("hotness") component that assesses the professor's physical appearance; a professor receiving more "hot" than "not hot" votes is given a chili by his or her name. We ignored the chili component in the determination of this component of the rankings.
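As a small illustration of how these site-level numbers are built, the sketch below computes Overall Quality as the average of hypothetical Helpfulness and Clarity ratings and maps it to the descriptive bands above; it is our own reading of the thresholds, not the site's code.

```python
# Illustrative computation of the RateMyProfessors ratings described above
# (hypothetical inputs; not the site's own implementation).
def overall_quality(helpfulness, clarity):
    """Overall Quality is the average of the Helpfulness and Clarity ratings."""
    return (helpfulness + clarity) / 2

def quality_band(q):
    """Map an overall quality rating to the descriptive bands used by the site."""
    if q >= 3.5:
        return "good (yellow smiley face)"
    if q >= 2.5:
        return "average (green face)"
    return "poor (blue sad face)"

q = overall_quality(helpfulness=4.2, clarity=3.1)
print(round(q, 2), quality_band(q))
```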

Why This Measure?

Students are consumers who, ostensibly at least, attend college to learn and acquire knowledge and skills. The core dimension of the learning experience comes from attending classes taught by instructors. Asking students what they think about their courses is akin to what agencies like Consumer Reports or J.D. Power and Associates do when they provide information on various goods or services.

To be sure, the use of this instrument is not without criticism. Some would argue that only an unrepresentative sample of students complete the forms. In some cases, the results for a given instructor might be biased because only students unhappy with the course complete the evaluation, while in other instances perhaps an instructor urges students liking the course to complete the evaluation, biasing results in the opposite direction.

It is possible that this concern has some validity as it applies to individual instructors, where perhaps only 10 of 50 students in a class complete the evaluations, for example. But we are not interested in individual instructor evaluations, only in the totality of evaluations for all instructors on a given campus. When the evaluations of dozens or even hundreds of instructors are added together, most examples of bias are washed out, or any systematic bias that remains is likely relatively similar from campus to campus. What is important to us is the average course evaluation for a large number of classes and instructors, and the aggregation of data should largely eliminate major inter-school biases.

The other main objection to the RateMyProfessors.com measure is that instructors can "buy" high ratings by making their courses easy, giving high grades, and so on. Again, to some extent the huge variations in individual instructor rigor and easiness are reduced when the evaluations of all instructors are aggregated; nearly every school has some easy and some difficult professors, for example. Nonetheless, we took this criticism seriously, and did observe some inter-institutional variation in course easiness, as perceived by the students themselves. It occurred to us that, other things equal, an institution's score on this factor should be enhanced if it has a relatively high proportion of "hard" instructors or courses, for two reasons. First, there is a negative correlation between students' overall evaluation of a course and its degree of difficulty, and we should control for this in order to get evaluations relatively unbiased by it. Second, a case can be made that where difficulty is perceived to be high, more learning is likely occurring; students on average are being challenged more. For these reasons, we gave special consideration to difficulty in the measurement of this component, as discussed below.

Scholarly Assessments of RateMyProfessors.com

There have been a number of studies assessing the validity of the RateMyProfessors.com web site. The general approach is to relate the results on this web site to the more established student evaluations of teaching (SET) that are routinely performed by most North American institutions of higher education. Since the schools themselves think their SET provide useful information in assessing the effectiveness of faculty and instruction, a strong correlation with the RateMyProfessors.com (RMP) results enhances the likelihood that RMP is a valid instrument.

The research to date cautiously supports the view that RMP results are relatively similar to the SET used by universities themselves. As one oft-cited study puts it, "The results of this study offer preliminary support for the validity of the evaluations on RateMyProfessors.com." Theodore Coladarci and Irv Kornfield, surveying instructors at the University of Maine, note that "...these RMP/SET correlations should give pause to those who are inclined to dismiss RMP indices as meaningless," although they also expressed some concern that the correlation between the two types of instruments was far from 1.00. An evaluation of RMP at the University of Waterloo found that 15 of 16 faculty to whom the university had awarded the Distinguished Teacher Award had been rated in the high quality category on RMP, a sign that extremely high quality teaching was recognized both by student ratings on RMP and by formal university processes of identifying distinction.

To be sure, the research is not all enthusiastically supportive of RMP. Felton, Koper, Mitchell, and Stinson suggest that the positive correlation between RMP quality ratings and ease-of-course assessments makes this a questionable instrument. But it is precisely because of this potential bias that we adjusted rankings downward for high levels of course easiness, as indicated below.

In spite of some drawbacks of student evaluations of teaching, they apparently have value for the 86% of schools that have some sort of internal evaluation system, and RMP ratings give similar results to these systems. Moreover, they are a measure of consumer preferences, which is critically important in rational consumer choice. Combined with the significant advantages of being uniform across different schools, not being subject to easy manipulation by schools, and being publicly available, this makes RMP a preferred data source for information on student evaluations of teaching; indeed, it is the largest single uniform data set of student perceptions of the quality of their learning experience that we know of.

Turning finally to the procedures we used for this factor: we looked at all 569 institutions ranked by CCAP and averaged the overall professor rating for all instructors included on the RMP website. We also examined course rigor using the RMP easiness variable, which is based on a scale from 1 to 5. To establish a measure of course rigor, we simply used the inverse of the average easiness rating, which is to say schools received more points the more their students reported their courses to be challenging.

The overall RMP score was generated by giving the overall course/instructor rating three times the weight of the rigor/easiness factor and summing the two. The data were then standardized and given a score between 0 and 100 commensurate with their location in a normal distribution. Put differently, 18.75% of the total ranking of each school was based on student perceptions of course/instructor quality, and 6.25% was based on student perceptions of course rigor, with more points given the more difficult the course was perceived to be.
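A minimal sketch of that weighting follows, using hypothetical campus-wide averages. We read "inverse" of easiness here as reversing the 1-5 scale; that interpretation, and the input values, are assumptions for illustration. The resulting scores would then be standardized and scaled to 0-100 as in the earlier sketch.

```python
# Illustrative only: the RMP component weighting described above,
# applied to hypothetical campus-wide averages.
def rigor_from_easiness(avg_easiness, scale_max=5, scale_min=1):
    """Reverse the easiness scale so that harder courses score higher
    (one possible reading of "inverse")."""
    return scale_max + scale_min - avg_easiness

def rmp_component(avg_overall_quality, avg_easiness):
    """Overall quality gets three times the weight of the rigor factor."""
    return 3 * avg_overall_quality + rigor_from_easiness(avg_easiness)

# Hypothetical school-wide averages
print(rmp_component(avg_overall_quality=3.8, avg_easiness=3.2))
```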

Typical Student Debt at Graduation (16 2/3%)

Student debt was incorporated into the ranking as a measure of the relative affordability of attending a particular school. In the CCAP rankings, student debt is inversely related to the school's overall ranking, meaning that higher debt is associated with lower rankings. In this way, the rankings account for schools which have higher student debt and would be considered less affordable for students.

The figure used for student debt is the average debt for the typical student borrower. In other words, we excluded from consideration those students who do not borrow for college. The data for student debt were obtained from the US Department of Education database (IPEDS), where the data are available as the average amount of loan aid received by the typical borrower for that year. The total debt was compiled by summing the average loan debt for four years, assuming that the student graduates in the normal four-year time span, beginning in 2002 and ending in 2005 (the most recent year for which data were available from the US Department of Education); the results, however, do not change if a one-year debt figure is used instead. According to the federal database, student debt is defined as any financial aid which the student must repay, including "all Title IV subsidized and unsubsidized loans and all institutionally- and privately-sponsored loans." This debt burden, however, does not include PLUS loans or loans made directly to the parents.

Although most schools had reported data in the federal database, several did not (e.g., Hillsdale College and Grove City College). The debt data for these schools were obtained from various other sources, including the schools' websites. Once the overall four-year debt burden was calculated, the data were standardized by converting the raw figures into Z-scores. The standardized values were then given a score between 0 and 100 commensurate with where they fell in a normal distribution.
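The sketch below illustrates this component with fabricated annual debt figures: four years of average borrower debt are summed, standardized, and converted so that higher debt yields a lower 0-100 score. The exact inversion CCAP used is not spelled out in the text; here it is sketched, as an assumption, as one minus the normal CDF of the Z-score.

```python
# Illustrative only: accumulate four years of average borrower debt (2002-2005)
# and convert it to a 0-100 score in which higher debt yields a lower score.
from statistics import NormalDist

def four_year_debt(annual_avg_debt):
    """Sum average loan debt for the four years of a normal-time degree."""
    assert len(annual_avg_debt) == 4
    return sum(annual_avg_debt)

def debt_scores(total_debts):
    """Standardize, then invert so that higher debt maps to a lower 0-100 score."""
    mean = sum(total_debts) / len(total_debts)
    sd = (sum((d - mean) ** 2 for d in total_debts) / len(total_debts)) ** 0.5
    return [100 * (1 - NormalDist().cdf((d - mean) / sd)) for d in total_debts]

# Hypothetical schools' annual average borrower debt, 2002 through 2005
schools = [[4000, 4300, 4600, 5000], [6000, 6200, 6500, 6900], [3000, 3200, 3300, 3500]]
totals = [four_year_debt(s) for s in schools]
print(totals, [round(x, 1) for x in debt_scores(totals)])
```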

Four-Year Graduation Rates (16 2/3%)

The graduation component accounted for 16 2/3% of the total rankings and consisted of two sub-components, the actual graduation rate and the actual graduation rate vs. the predicted graduation rate. The actual graduation rate accounted for 8 1/3 % of the total ranking. The actual graduation rates for all schools were gathered and standardized. The standardized rates were then given a score between 0 and 100 commensurate with where they fell in a normal distribution.

The actual graduation rate was included as a variable because parents and students are interested in the actual probability that they would be able to graduate in four years. Yet simply using the four-year graduation rate is arguably highly unfair to schools that accept students with mediocre academic records, lower cognitive skills, etc. Schools with less selective admissions can be expected to have lower graduation rates, given the weaker academic pool of students they matriculate. Accordingly, we base one half of the graduation rate component on the variation of the actual from the predicted graduation rate, as determined by a statistical model. More specifically, we estimated an ordinary least squares (OLS) regression that incorporated several independent explanatory variables to explain the variation in graduation rates among the 569 schools. Our regression model explained over three-fourths of the considerable inter-institutional variation in graduation rates.

Schools gained points in the final score by having actual graduation rates that exceeded those predicted by the regression model. They lost points, in a relative sense, if their predicted graduation rate exceeded the actual one. The differences between actual and predicted rates for all schools were standardized, as with the other components of the index previously discussed. The standardized values were then given a score between 0 and 100 commensurate with where they fell in a normal distribution.

The indicated regression model had five independent variables, three of which were statistically significantly (at the one percent level) negatively related to graduation rates: the percent of applicants who were accepted for admission, the percent of applicants who actually enrolled, and the percent of students on Pell Grants, a measure of the presence of relatively low-income students on campus. The inclusion of these variables means we control to a large extent for admission selectivity and the socioeconomic characteristics of the student body. The model also included a dummy variable for private schools that took a value of one if an institution was private and zero if it was public. There was a strongly statistically significant positive relationship between being privately controlled and the four-year graduation rate. Similarly, there was a significant positive relationship between the composite SAT score of students at the 25th percentile of the institution's SAT distribution and the four-year graduation rate (suggesting that SAT scores are a pretty good predictor of academic success, as measured by graduation rates).
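A rough sketch of this regression-based adjustment appears below. It is not the authors' code; the data are fabricated, and the five predictors simply mirror those named above. Actual graduation rates are regressed on the predictors, and schools are then scored on the residual (actual minus predicted), which would subsequently be standardized to 0-100 as with the other components.

```python
# Illustrative only: OLS fit of graduation rates on the five predictors named
# above, scoring schools on actual minus predicted rates. Data are fabricated.
import numpy as np

rng = np.random.default_rng(0)
n = 8  # a handful of hypothetical schools
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.uniform(20, 90, n),          # % of applicants accepted for admission
    rng.uniform(15, 60, n),          # % of applicants who actually enrolled
    rng.uniform(10, 50, n),          # % of students on Pell Grants
    rng.integers(0, 2, n),           # private-school dummy (1 = private)
    rng.uniform(900, 1450, n),       # 25th-percentile composite SAT score
])
actual = rng.uniform(20, 90, n)      # actual four-year graduation rates

beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
predicted = X @ beta
residual = actual - predicted        # positive = graduating more than predicted

print(np.round(residual, 2))         # residuals would then be standardized to 0-100
```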

Student and Faculty Nationally Competitive Awards (16 2/3%)

Nationally competitive awards received by an institution's students and faculty members combine to contribute 16 2/3% of the overall ranking. Such awards are indicators of a school's success in fostering distinguished levels of academic achievement among its undergraduates in a number of ways.

Student Awards

Every year, students from colleges and universities across the country compete with one another for these highly prestigious student awards. Analyzing the number of award winners per school serves as an indicator of how well an institution is preparing its students to compete successfully for these awards. Winning a nationally competitive award implies that the student is not only thoroughly academically prepared and qualified, but also possesses other qualities such as leadership. It follows that schools with a high number of award winners are doing a good job preparing students, while those with few or no award winners are doing a poorer job.

The following five specific nationally competitive student awards were considered:

The Rhodes Scholarship

The British Marshall Scholarship

The Harry S. Truman Scholarship

The Barry M. Goldwater Scholarship

Fulbright Grants

The Rhodes and Marshall Scholarships were included because they are widely recognized as the two most selective of all postgraduate awards made to students at the end of their undergraduate career. The Truman Scholarship is directed toward students interested in pursuing careers in public service, while the Goldwater Scholarship targets students pursuing careers in the natural sciences, mathematics, or engineering. Finally, Fulbright Grants encompass a wide range of academic disciplines and student interests, and typically finance travel and study abroad.

In calculating an institution's Rhodes and Marshall Scholars, the number of such scholars from a given institution was counted for the years 2000 through 2008. Since a very limited number of Rhodes and Marshall Scholarships are awarded in any given year, multiple years of data were necessary to expand the sample size. Multiple years were likewise used in calculating the Truman Scholars (2004-2008) to obtain a larger sample. Only single-year data are included for the Goldwater and Fulbright awards, since more are awarded every year; Goldwater winners are calculated using 2008 data, and Fulbright winners using data from the 2006-07 school year.

After tallying the raw number of each award won by students from an individual institution over the examined period, each award was given a weight. It would be unfair to weight the Rhodes Scholarship, the most competitive and prestigious of undergraduate awards, equally with Fulbright Awards. While both are competitive and distinguished awards, the Rhodes Scholarship is certainly more competitive. The same is true, to a lesser extent, of the Marshall Scholarship. For that reason the Rhodes Scholarship was weighted five times, and the Marshall Scholarship three times, greater than the remaining three awards. Thus, if a school had one scholarship winner for each award, that institution's total number of awards would be recorded as eleven. In a few rare cases, award winners had studied for a significant amount of time (at least two years) at an institution before transferring to the institution at which they were current students upon winning the award. Under such circumstances, credit for the student's award was divided equally between the two institutions.

Enrollment size of an institution was accounted for as well. A school with a greater number of students, other things equal, would logically have a better chance of winning an award. Thus, the number of award winners was adjusted by the school's full-time equivalent undergraduate enrollment during the fall of 2005.
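The weighting and enrollment adjustment just described can be sketched as follows; the school and its award counts are hypothetical, and the example reproduces the "eleven" total cited above for one winner of each award.

```python
# Illustrative tally of the student-award component: Rhodes x5, Marshall x3,
# Truman/Goldwater/Fulbright x1, adjusted by fall 2005 FTE undergraduate
# enrollment. Inputs are hypothetical.
AWARD_WEIGHTS = {"rhodes": 5, "marshall": 3, "truman": 1, "goldwater": 1, "fulbright": 1}

def weighted_student_awards(counts):
    """counts may include fractional credit (e.g., 0.5 for a split transfer case)."""
    return sum(AWARD_WEIGHTS[a] * counts.get(a, 0) for a in AWARD_WEIGHTS)

def per_enrollment(counts, fte_undergrad_2005):
    """Adjust the weighted award total by FTE undergraduate enrollment."""
    return weighted_student_awards(counts) / fte_undergrad_2005

# One winner of each award at a hypothetical school of 5,000 FTE undergraduates
example = {"rhodes": 1, "marshall": 1, "truman": 1, "goldwater": 1, "fulbright": 1}
print(weighted_student_awards(example))   # 11, as in the text
print(per_enrollment(example, 5_000))
```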

Faculty Awards

Faculty awards can also serve as indicators of a quality undergraduate institution. It is true that most such awards are given on the basis of research accomplishments. However, other things equal, it is logical that a student would prefer an institution at which they may have interaction with distinguished and exceptionally accomplished faculty. Indeed, research is one important aspect of the academy, even at the undergraduate level. Additionally, a case can be made that faculty research can enhance undergraduate instruction. For these reasons faculty awards are given an 8 1/3 percent overall weight.

The following lists the six faculty awards and the years each award was observed in the sample:

The Nobel Prize (1997-2007)

The American Academy of Arts and Sciences (2005-2008)

The National Academy of Sciences (2005-2008)

The National Academy of Engineering (2004-2008)

The Guggenheim Fellowship (2007-2008)

The John D. and Catherine T. MacArthur Foundation (1998-2007)

These specific awards were chosen for reasons similar to those for the student awards. All are highly distinguished and collectively represent a wide variety of academic areas. The Nobel Foundation annually awards its Nobel Prize in Chemistry, Economics, Literature, Peace, Physics, and Physiology/Medicine. The American Academy of Arts and Sciences annually elects approximately 200 fellows from diverse backgrounds including traditional academic disciplines, business, the arts, and public affairs. The National Academy of Sciences annually elects 72 distinguished scholars of science and technology to its ranks. The National Academy of Engineering is a member of the same network of The National Academies and admits approximately 70 prominent members of the engineering profession every year. The Guggenheim Foundation makes around 200 awards each year to advanced professionals in the fields of the natural sciences, social sciences, humanities, and creative arts. Finally, the John D. and Catherine T. MacArthur Foundation awards approximately 25 fellowships, popularly referred to as "Genius Grants," each year to individuals "who have shown extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction." These awards collectively provide a diverse sample of distinguished scholars and academics.

In sum, a total of 1,376.83 (a decimal is shown here to reflect partial awards) award-winning faculty members were counted in this sample. Each award was observed over the respective period listed above in order to ensure a large sample size. In examining all the award winners of these six distinctions, it is clear that not all of the winners have an affiliation with an institution of higher education. The sample used in the ranking includes only those award winners who are instructors at a college or university. Those faculty members who were affiliated only with a graduate program of an institution were given half credit. Furthermore, for those award winners who were associated solely with research institutes of a university, such as NASA's Jet Propulsion Laboratory at the California Institute of Technology, no credit was given to the institution, as these researchers have no real meaningful benefit to undergraduate students.

It is common for the Nobel Foundation to award the same Nobel Prize to multiple individuals. In these situations, the Nobel Foundation makes clear the percentage of the prize awarded to each. This same percentage was the percentage of credit awarded to an institution of a Nobel Laureate for the purposes of the ranking. For example, in 2004 the Nobel Prize for Economics was awarded fifty percent to Finn E. Kydland of Carnegie Mellon University and fifty percent to Edward C. Prescott of Arizona State University. In this instance both Carnegie Mellon and Arizona State received one half of a citation for a faculty award. Additionally, the Nobel Prize is a global award. Nobel Laureates with an affiliation as instructors at any higher education institution in the United States were included regardless of their country of residence or birth. The same is the case for Guggenheim and MacArthur Foundation award winners. Furthermore, the American Academy of Arts and Sciences and the National Academies of Sciences and Engineering annually elect foreign honorary members. In the instance that any of these foreign honorary members had an affiliation as an instructor at an American institution, they were included in the sample.

After obtaining the raw sample of award winners, it was necessary to weight the Nobel Prize more heavily than the other five awards. The Nobel Prize is the most competitive of all the awards and is widely considered the pinnacle of academic and scholarly accomplishment. Thus, Nobel Prizes were given three times the weighting of the other awards. If an institution were to have one winner for each award, its total awards score would be eight (three for the Nobel Prize and one for each of the other five).

This weighted score had to be adjusted by faculty size. It is logical that, other things equal, a school with more faculty/instructors would have a higher probability of one of those individuals winning one of the listed awards. For this reason the total weighted value of faculty awards won by an institution was divided by that institution's 2006 full-time equivalent Instruction/Research and Public Service staff as provided by the Integrated Postsecondary Education Data System (IPEDS) database. These faculty award values figured in as an 8 1/3% factor in the final overall rating of a college.
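The faculty-award tally parallels the student-award tally; the sketch below uses hypothetical credits and shows the Nobel triple weighting, fractional credit (for shared Nobels or graduate-only affiliations), and the division by 2006 FTE instructional staff.

```python
# A sketch of the faculty-award tally described above, with hypothetical inputs.
# Nobel Prizes count three times the other awards; shared Nobels give each
# institution the laureate's share; graduate-only affiliations get half credit.
FACULTY_AWARD_WEIGHT = {"nobel": 3, "aaas": 1, "nas": 1, "nae": 1,
                        "guggenheim": 1, "macarthur": 1}

def weighted_faculty_awards(credits):
    """credits: award name -> total (possibly fractional) credit for the school."""
    return sum(FACULTY_AWARD_WEIGHT[a] * credits.get(a, 0) for a in FACULTY_AWARD_WEIGHT)

def per_faculty(credits, fte_instructional_staff_2006):
    """Adjust the weighted award total by 2006 FTE instructional staff."""
    return weighted_faculty_awards(credits) / fte_instructional_staff_2006

# Hypothetical school: half of a shared Nobel, two AAAS fellows, one MacArthur
example = {"nobel": 0.5, "aaas": 2, "macarthur": 1}
print(weighted_faculty_awards(example))   # 0.5*3 + 2 + 1 = 4.5
print(per_faculty(example, 600))
```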

Nationally and internationally competitive awards are output measures of the success of an institution's students and teachers. It is logical that a greater percentage of students winning competitive awards indicates that a school is doing a better job of preparing students than an institution with fewer award winners. Furthermore, an institution at which distinguished and highly accomplished faculty members instruct students is more likely to provide a high quality education. Students are also more likely to prefer an institution providing highly accomplished faculty to instruct them over another institution that is otherwise identical. For these reasons, competitive awards are a useful way to help measure the quality of outputs of a college/university and are thus granted a 16 2/3% weighting in the overall rating of America's best colleges and universities.

How Sensitive Are the Rankings To Small Data Changes?

Our findings vary a good deal from those of others doing rankings, although the similarities outweigh the differences. For example, Harvard, Yale, and Princeton are in the top five national research universities in the most recent US News & World Report (USNWR) rankings, as they are in these rankings. Looking only at national research universities and at liberal arts colleges, we find the correlation between our rankings and the ones in the most recent USNWR assessment to be well above .60, a significant positive relationship, but still one indicating a good amount of variation. Did schools which did extremely well in the Forbes/CCAP rankings compared with the USNWR assessment get their high ranks because of one or two extra entries in Who's Who or in nationally competitive awards, or because of extreme sensitivity in the student debt variable?

We looked at Wabash College, a school that ranked 12th among all schools in our rankings and sixth among liberal arts and baccalaureate colleges, compared with 52nd in the USNWR rankings of liberal arts schools. It is a small school, so it is conceivable that even one more entry in Who's Who in America might materially impact its ranking, since we adjust the values for the factors examined by enrollment. It turns out that is not the case. One more entry for Wabash in Who's Who would not have changed the ranking at all, and one fewer entry would have lowered its ranking only to 13th. Similarly small changes occur with small changes in the number of nationally competitive awards. If the debt load of students doubled, it is true the ranking would have slipped materially, to 38th, but still well above the ranking by USNWR (a halving of debt load would have raised Wabash to eighth).

Looking at a more typical school, Rocky Mountain College ranked 285th, the median school in the entire sample. It, too, is a small-enrollment institution. One more Who's Who entry for this institution would have raised its ranking to 239th, a healthy increase but not a momentous one. Similarly, adding one more nationally competitive award would have increased the ranking only slightly, to 284th (a halving of debt load would have moved it to 221st). The rankings, then, are not radically sensitive to small changes in the underlying variables, and that holds even more for institutions with greater enrollments.

Conclusions

No set of college rankings is perfect or immune to potential criticism. We would have preferred, for example, to measure post-graduate success based on the median earnings of recent and mid-career alumni, something possible for some institutions because of the payscale.com web site computations, but not for all schools. We would have loved to have data on the amount of student engagement in the learning process, measured by the National Survey of Student Engagement, which is administered at many, but not all, schools, and whose results are very often not made public by the relevant universities and colleges. We would have been overjoyed to include data on the "value added" during college as measured by increases in test scores on standardized instruments such as the Collegiate Learning Assessment. But, again, this is not generally possible, in part because of a resistance on the part of colleges to participate and/or publish these results in this fashion.

Having said that, however, we think these rankings have several distinct positive attributes that should commend them to families contemplating college choices. First, they emphasize students' reaction to their instruction and the success of graduates of an institution. Second, they take into account student concerns about debt burdens, the difficulty of graduating in four years, and the presence of academic excellence in the university community. Third, the data are not based on self-reporting by colleges, which leads to the possibility of fraudulent numbers and 'gaming' the system. This feature also allows us to include good schools, such as Sarah Lawrence College, that do not participate in the popular USNWR survey.

College is a huge investment, and parents often make decisions based on very limited information, much less than they have when they buy, say, a new car. These rankings are an attempt, in a modest way, to alleviate that information gap and lead to better and more informed decisions about where to go to college.

FOOTNOTES

1 Richard Vedder, James Coleman, Jonathan Robe, and Thomas Ruchti, An Outcomes Based Assessment of Universities: Using Who's Who in America (Washington, D.C.: Center for College Affordability and Productivity, March 2008).

2 Michael E. Sonntag, Jonathan F. Bassett, and Timothy Snyder, "An Empirical Test of the Validity of Student Evaluations of Teaching Made on RateMyProfessors.com," Assessment & Evaluation in Higher Education, July 2008; see also Scott Jaschik, "Validation for RateMyProfessors.com?" Inside Higher Ed, April 25, 2008, available at http://www.insidehighered.com/news/2008/04/25/mp, accessed originally on April 25, 2008.

3 Theodore Coladarci and Irv Kornfield, "RateMyProfessors.com Versus Formal In-class Evaluations of Teaching," Practical Assessment, Research & Evaluation, May 2007.

4 University of Waterloo, TRACE Newsletter, September 2007, available at http://www.adm.uwaterloo.ca/infotrac/tmsept01.html, accessed August 11, 2008.

5 James Felton, Peter T. Koper, John Mitchell, and Michael Stinson, "Attractiveness, Easiness, and Other Issues: Student Evaluations of Professors on RateMyProfessors.com," the abstract page of which is available on http://ssrn.com/abstract=918283, accessed on August 11, 2008.

6 Actually, the Nobel Award in Economic Science was not created under the will of Alfred Nobel, but was added later, and is administered slightly differently than the other awards. It is, however, widely regarded as a "Nobel prize."

7 The John D. and Catherine T. MacArthur Foundation, available at: (accessed August 3, 2008).