Do universities add value?
Never before in history has a university degree counted for so much. In this 'Information Age', university graduates enjoy lifetime earnings significantly higher than those of high school graduates. Empirical studies by economists show that the lucky few who graduate with a university degree are assured of a life of good fortune relative to their peers who, whether through lack of diligence or lack of opportunity, stop short of entering the doors of future success. Thus it is not surprising that countries all over the world are scrambling to increase university enrolment. A corollary to this argument is that if university graduates are the vanguard of the workforce, surely the graduates of the best institutions should reap the highest rewards.
University rankings are minutely examined, contested and vilified, but never ignored. Institutions and countries take pride in doing well in the rankings 'game', yet sanctimoniously explain away their failures as resulting from the limitations of the particular methodology used. It helps that there are several international ranking systems, each with its own biases and subjectivities, so everyone can claim victory of sorts. While the rankings vary, there is broad agreement that US research universities dominate, followed by a sprinkling of British, European, Australian and Asian institutions. It is worth noting that South Asian universities do very poorly in these international rankings.
As university education has become essential for economic mobility, it is becoming increasingly globalised, with students all over the world eager to get into a select number of top institutions. The admissions process has thus become much more competitive, which has largely resulted in universities competing for students by offering better (and inevitably costlier) educational and extracurricular facilities. This increased competition for students, along with the worldwide weakening of government commitments and fiscal capacity to subsidise higher education, has resulted in sharply increased tuition and financial burdens for students and their parents. A corollary of these increased financial burdens has been a clamour for accountability and quality assessment of higher educational institutions. Students, parents and governments have become vociferous in demanding an assessment and breakdown of the quality of universities and their net value addition.
The old, tried and tested formula for burnishing and sustaining higher educational quality is to take in the best and brightest students and, not surprisingly, produce high-quality graduates who do the institution proud. This strategy is followed by iconic universities such as Harvard, Stanford, MIT, Oxford, Cambridge, the National University of Singapore, the Indian Institutes of Technology, and others of their ilk. In the context of Bangladesh, the relevant issue is whether the top universities, both private and public, such as Independent University, Bangladesh (IUB), BRAC University, Dhaka University and BUET, are really providing any additional enhanced skills to their enrolees, or whether they are just coasting on the pre-existing skill sets of those admitted through a rigorous selection mechanism.
A recent article in The Economist has highlighted this value-addition issue by ranking US universities on the basis of the expected additional lifetime income of their graduates, after adjusting for their initial characteristics on entry. The important question is whether the Harvard or Yale enrolee would have done any worse going to a nondescript, no-name university. The very significant and surprising results of this ranking exercise suggest that the majority (but not all) of the top US universities show relatively poor value addition: they do not substantially enhance the earning power of their enrolees. Their graduates do well because they are very smart and motivated to begin with, not because they learn anything special in their four years at these institutions. I am glad to report that my alma mater, Harvard, did rather well in value addition, but Yale, for instance, did not. The real winners in this race were in fact fairly nondescript, non-selective institutions that significantly added to the pre-existing characteristics of their admitted students by significantly enhancing their expected earnings.

This report has generated a firestorm of controversy because of its exclusive reliance on additional expected lifetime earnings as the metric of institutional value addition: this metric favours institutions with large numbers of graduates in high earning-potential disciplines such as pharmacy, financial services and engineering. Unfortunately, even very creative academics, musicians and artists will not earn as much as mediocre accountants. It should be noted, however, that other studies which have looked at value addition by discipline show much the same results: in terms of future earnings, engineering students are no better off going to MIT or Caltech than to some no-name institution, once you correct for initial student characteristics.
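The adjustment described above is, at heart, a regression exercise. The following minimal sketch, using entirely invented data and variable names rather than The Economist's actual model, illustrates the logic: predict graduates' earnings from entry characteristics alone, then treat the gap between actual and predicted earnings as a rough measure of the institution's value added.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Entirely invented data: each row is one university, with the average
# entry characteristics of its students (e.g. admission test scores,
# family income, share in high-paying majors) and the median earnings
# of its graduates some years after enrolment.
rng = np.random.default_rng(0)
entry_traits = rng.normal(size=(200, 3))               # 200 universities, 3 traits
median_earnings = (40 + entry_traits @ np.array([8.0, 5.0, 12.0])
                   + rng.normal(scale=3.0, size=200))  # in thousands, say

# Step 1: predict earnings from entry characteristics alone.
baseline = LinearRegression().fit(entry_traits, median_earnings)
predicted = baseline.predict(entry_traits)

# Step 2: "value added" is the gap between actual and predicted earnings.
# A selective university whose graduates earn exactly what their entry
# profile predicts adds nothing by this metric.
value_added = median_earnings - predicted

# Step 3: rank by value added rather than by raw earnings.
ranking = np.argsort(value_added)[::-1]   # best value-adders first
```

On this logic, a highly selective institution can top the raw-earnings table yet sit near the bottom of the value-added one, which is precisely the pattern the article reported.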
Despite the potential deficiencies of the above rankings, the important point is that we need to pay attention to value addition and focus on ways of assessing the quality of the educational experience in all higher educational institutions. Stakeholders (ultimately students, parents and governments) need to know what they are getting for what they pay. In addition, universities need to address how well they are serving their students; this is a basic tenet of accountability.
The essence of university quality assessment is cross-comparison/ranking on standardised, equivalent criteria, which can include a long list: the number of students, full time and part time (currently enrolled and ever enrolled); the academic quality of entering students (e.g. high school CGPAs and standardised test scores); the number of graduates by discipline; alumni performance (both job and educational placement); cohort dropout rates; four- and six-year cohort graduation rates; average class size; the number of faculty members, both full time and part time, by discipline and level of appointment (i.e. the number of Lecturers, Senior Lecturers, Assistant Professors, Associate Professors and Full Professors); the educational qualifications of faculty members by level of appointment (e.g. the number of faculty members with PhD degrees, appropriately quality-weighted by granting institution); research funding (from both internal and external sources); the number of quality-weighted, peer-reviewed faculty research publications by discipline; library resources (books, journal subscriptions and particularly digital access); laboratory resources; quality of physical infrastructure; digital connectivity; student-centric resources (availability of club activities, sports, cafeterias, gymnasia); availability of scholarships; and availability of academic tutoring, psychological counselling and so on.
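How might such disparate criteria be combined into a single ranking? One common approach, sketched below with invented institutions, indicators and weights rather than any real ranking system's formula, is to normalise each indicator to a common scale and take a weighted sum; the choice of weights is, of course, where much of the controversy over ranking methodologies resides.

```python
# Illustrative only: the institutions, indicator values and weights below
# are invented, not real data about any university.
universities = {
    "University A": {"grad_rate": 0.82, "phd_share": 0.60, "research_funding": 5.2},
    "University B": {"grad_rate": 0.64, "phd_share": 0.75, "research_funding": 9.8},
    "University C": {"grad_rate": 0.91, "phd_share": 0.40, "research_funding": 2.1},
}
weights = {"grad_rate": 0.4, "phd_share": 0.3, "research_funding": 0.3}

def min_max(values):
    """Rescale a list of raw indicator values to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

names = list(universities)
scores = dict.fromkeys(names, 0.0)
for indicator, weight in weights.items():
    raw = [universities[n][indicator] for n in names]
    for name, norm in zip(names, min_max(raw)):
        scores[name] += weight * norm   # weighted sum of normalised indicators

for name in sorted(scores, key=scores.get, reverse=True):
    print(f"{name}: {scores[name]:.3f}")
```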
The above is by no means a comprehensive list, and in fact it does not include any indicators of value addition per se, as this is very difficult to gauge. Some preliminary attempts have been made to devise before-and-after tests of analytic and communication skills, to assess whether there is any improvement in university students' performance between entry and graduation. A lot more work needs to be done in this area, but at the very least the issue of value addition is on the radar screen of all relevant stakeholders.
It is important to note that the validity of the above institutional quality rankings and comparisons depends crucially on the transparency, verification and comparability of the underlying data. Given the importance of these assessments, institutions have a huge incentive to game the system, and appropriate checks and balances need to be instituted. Just to give my readers a small flavour of the difficulty of making appropriate comparisons between universities, take the intuitively simple and seemingly unambiguous metric of the number of enrolled students. This is interpreted very creatively, and very differently, by various institutions. Some report the number of currently enrolled students in a particular semester taking a set minimum number of credits (e.g. nine credits). Others include all ever-enrolled students, both part and full time, whether they are currently present or not. The latter, of course, yields an artificially inflated headcount and concomitant bragging rights.
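To make the gap concrete, here is a toy illustration, with purely hypothetical records and an assumed nine-credit threshold, of how the two counting rules diverge on the same student roll:

```python
# Hypothetical student records; the fields and the nine-credit threshold
# are illustrative, not any university's actual reporting rules.
students = [
    {"id": 1, "enrolled_now": True,  "credits": 12},
    {"id": 2, "enrolled_now": True,  "credits": 6},   # part time
    {"id": 3, "enrolled_now": False, "credits": 0},   # dropped out years ago
    {"id": 4, "enrolled_now": True,  "credits": 9},
    {"id": 5, "enrolled_now": False, "credits": 0},   # already graduated
]

# Rule 1: currently enrolled AND taking at least nine credits this semester.
strict_headcount = sum(
    1 for s in students if s["enrolled_now"] and s["credits"] >= 9
)

# Rule 2: everyone who ever enrolled, present or not.
ever_enrolled_headcount = len(students)

print(strict_headcount)          # 2
print(ever_enrolled_headcount)   # 5: the same roll, more than doubled
```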
In the past year, the Bangladesh government has made an impressive commitment to assessing and monitoring quality in higher education by promoting the setting up of Internal Quality Assessment Cells in universities, which will report a variety of key quality indicators on a regular basis to university management and the University Grants Commission (UGC). The UGC also publishes an annual compendium of data on various quality indicators for private universities. I applaud and commend these initiatives by the education ministry and the UGC. While the initial efforts are focused on private universities, I fervently hope that public universities will also be brought into this orbit of quality monitoring, as currently there is no objective source of information about quality and accountability in public universities. While there is an anecdotal but widely acknowledged notion (at least in the media) that there is significant heterogeneity in quality amongst private universities, this issue is never addressed with regard to public universities. In this context, it is worth noting that, from an economic point of view, consumers (i.e. students and their parents) are much more aware of these issues when they are paying out of pocket for educational services than when the services are provided free by the state. Thus private universities, which now educate the majority (65 per cent) of higher education students in Bangladesh, are already inherently more engaged in this process of accountability than public universities.
Although the UGC annual reports provide a very useful basis for comparison across private universities, this self-reported information is of somewhat questionable quality and, moreover, is not widely available to students, parents, faculty members and the general public. In addition, there is no attempt to rank institutions, or departments/disciplines within institutions. Although ranking systems are fraught with pitfalls and never completely satisfactory to all stakeholders, I feel very strongly that they serve the important function of focusing attention on quality and on the underlying heterogeneity in quality across institutions and across departments/disciplines within institutions. Furthermore, rankings provide a powerful incentive and signal for institutions to improve their performance.
In conclusion, I would like to make a plea for an independent media institution in Bangladesh to invest in, implement and publicise a credible, fact-based, transparent ranking system of all universities and medical and engineering colleges in Bangladesh (similar to those provided internationally by US News & World Report and The Times). This would focus a much-needed spotlight on the wide heterogeneity in university educational quality (both public and private) in Bangladesh and be of huge benefit to students, guardians and the public at large.
The writer is Vice Chancellor, Independent University, Bangladesh and Adjunct Professor, Harvard T.H. Chan School of Public Health.