GUEST COLUMN / EDUCATION:
EVALUATIONS NEED EVALUATING
The ranking of Thai universities conducted by the Higher Education Commission was carried out without a good understanding of assessment methodology, writes RICHARD WATSON TODD
Last year, with great fanfare, the Higher Education Commission (HEC) announced the first-ever official rankings of Thai universities. Perhaps predictably, with almost as much fanfare, these rankings were criticised by university administrators, lecturers and students.
It should be stressed that no-one is criticising the need to evaluate universities. After all, universities use taxpayers' money and should be accountable. Also, evaluations allow universities to identify areas needing improvement and so can promote educational development.
Rather, the criticisms have focused on how the HEC carried out the evaluations on which the rankings are based. Thammasat University found the whole procedure so fraught with problems that they withdrew, and King Mongkut's Institute of Technology North Bangkok found so many irregularities with the process that they deemed the results not credible.
Students, especially those from universities which ranked poorly, have also expressed concern, although the demand by the Abac Poll Centre director that the HEC should apologise to students for the rankings shows an incredible level of misunderstanding. If anything, the commission should apologise for not having instituted the rankings ten years ago, which would have given the poor-performing universities more motivation to develop. If quality assessment had not been started this year, the Ministry of Education would have to apologise to future students.
The criticisms
So what were the specific criticisms of the ranking procedures, and are they justified?
Firstly, there is concern that the rankings considered only the quality of teaching and research. While it may seem that these are the main concerns of universities, many upcountry Rajabhat universities, which generally performed poorly on the rankings, emphasise services to the community and perhaps act more like community colleges than research universities.
Even within the areas of teaching and research, the evaluation criteria may be inappropriate. The teaching and research areas are divided into five and four categories respectively, and each of these is further subdivided into criteria. For example, the category of research output comprises seven criteria. On the face of it, this appears to be an admirable attempt to cover a wide range of indicators of quality, but the methods the HEC used are riddled with inconsistencies.
For instance, one of the categories used to measure teaching is termed, at least in English, "quality of education." The only method the HEC used to measure this is the number of teaching awards the university had received - in Thai, the category is termed "receipt of awards." Awards do not seem to be the best way of measuring education quality, and where the awards come from and how justifiable they are does not appear to be a concern.
More prosaically, the commission's attempts to quantify everything show, at best, a poor understanding of basic mathematics. Because one category was omitted, marks were given out of a possible total of 80 per cent; some faculties scored more than the maximum possible, such as 14.57 out of 10; and most faculties scored an arbitrary minimum, such as the 7.94 out of 45 that 47 faculties of humanities scored for research output.
This last point raises a further cause for concern. The faculty where I work scored this minimum. The criteria for the category of research output involve number of patents, number of articles in reputable international refereed journals, number of citations and number of books written in foreign languages. While my faculty has not registered any patents, we publish several international articles every year, some of which are well-cited by other authors. Why we deserve the minimum score in this category is unclear.
One possible reason, and another major criticism of the rankings, is that the only data considered by the HEC came from documents. No visits were made to universities. This in itself is not necessarily a major problem as the evaluation process must be practical, but prior to formally publishing the results the commission should have allowed universities and faculties the chance to question dubious scores.
Poor implementation
As with so much of the work of the Ministry of Education, the overall pattern is of a good idea implemented poorly. Perhaps the Ministry should be the subject of a public evaluation before it evaluates the universities.
Although the public evaluation of universities is generally a worthy goal, it does have a couple of worrying implications, even if it is implemented properly.
With the university rankings affecting the public's perceptions of the quality of universities, university administrators may be pressured to fudge documents to make their universities look better than they really are, and to direct university work only towards those areas that score ranking points. As long as the document checking is satisfactory and the categories and criteria for evaluating universities are sound, these issues should not be damaging.
Placing such a heavy emphasis on evaluation within education, however, does have a potential downside. At the university where I work, for example, evaluation of teachers has spiralled out of control. Every semester, teachers have to report on their performance according to 78 criteria. As you can imagine, it takes several days to write up a full report - days which could be more productively spent on, say, research.
Since these teacher evaluations are the basis for salary considerations, most teachers take them very seriously, with the consequence that they work to gain higher evaluation scores rather than from any intrinsic interest.
This is ironic, since teachers regularly tell students to focus on interest rather than marks when completing assignments. In this case, there is a danger of evaluation becoming the tail that wags the dog.
While evaluation should not be an end in itself, it is a very valuable means to the ends of public accountability and educational improvement. The university rankings have the potential to serve these purposes, but improvements in implementation need to be made first.
Shifting responsibility
The Ministry of Education has recognised the problems with implementing the rankings and placed the blame on the HEC. As a consequence, the commission has been stripped of the right to conduct rankings, and this responsibility has been shifted to the Office for National Education Standards and Quality Assessment (ONESQA), an independent body.
ONESQA already conducts evaluations of schools, colleges and universities in Thailand but, at present, these evaluations are designed purely for the development of educational institutions, with no intention of publicising the results for accountability.
The assessments conducted by ONESQA include visits to universities and examine a wider range of indicators than just teaching and research, thus addressing two of the main criticisms of the HEC's evaluation procedures. Shifting responsibility to ONESQA would therefore appear to be a wise move.
There is, however, one clear danger in giving responsibility for rankings to ONESQA. At present, ONESQA evaluation visits are good-natured, cooperative attempts to improve educational quality. If the institution visited perceives the purpose of the visit to be ranking, however, this cooperative atmosphere may be lost.
There are also a couple of other issues that ONESQA could usefully consider to improve on the HEC rankings.
First, after calculating scores for universities and faculties, instead of just publishing the scores straight away, an opportunity should be provided for universities to examine their own scores to check that all relevant information has been included and the calculations are correct.
Second, universities could be asked to categorise themselves into one of three types, depending on whether their main focus is on teaching, research or the community, and the evaluations conducted on these bases. There should be no stigma attached to any of the three types - all serve useful purposes in society. This would ensure that evaluations compare like with like and fit with the main purposes of the institutions being evaluated.
Although the current university rankings are riddled with flaws, they should not be dismissed out of hand. As the rector of King Mongkut's Institute of Technology North Bangkok has said, they should be treated as a pilot phase, in the hope that improvements can be made to the evaluation process, providing greater incentives for Thai education to develop in the future.
Richard Watson Todd works at the School of Liberal Arts, King Mongkut's University of Technology Thonburi.
Perspective
Bangkok Post
Monday January 15, 2007