Investigating Comparison-based Evaluation for Sparse Data

dc.contributor.advisor: Gao, Byron J.
dc.contributor.author: Martinez Torres, Jose Antonio
dc.contributor.committeeMember: Ngu, Anne H.H.
dc.contributor.committeeMember: Lu, Yijuan
dc.date.accessioned: 2014-09-15T18:09:58Z
dc.date.available: 2014-09-15T18:09:58Z
dc.date.issued: 2014-12
dc.description.abstract: Evaluation is ubiquitous. Often we need to evaluate a set of target entities (movies, restaurants, products, courses, paper submissions) and obtain their true ratings (average ratings from the population) or true rankings (rankings based on true ratings). By the law of large numbers, average ratings from large samples serve this purpose well. In practice, however, evaluation data are typically extremely sparse, and each entity receives only a very small number of ratings. In this case, the average ratings can differ significantly from the true ratings because of biased distributions of evaluators holding different standards or preferences. Based on the observation that comparative evaluations (e.g., paper 1 is better than paper 2) are more trustworthy than isolated ratings (e.g., paper 1 has a score of 4.5), in this study we investigate comparison-based evaluation. The principal idea is to first extract a partial ranking of the entities evaluated by each evaluator, and then aggregate all the partial rankings into a total ranking that closely approximates the true ranking. The aggregated total ranking can then be used to estimate the true ratings. We also investigate the associated problem of evaluation assignment (assigning target entities to evaluators). In many applications (e.g., academic conferences), such an assignment phase precedes evaluation, yet current assignments are not deliberately designed to maximize evaluation quality. We propose a layered assignment approach that maximizes the quality of comparison-based evaluation under given evaluation resources (evaluation is generally labor-intensive). All the proposed algorithms have been implemented and validated on benchmark datasets against state-of-the-art methods.
In addition, to demonstrate the utility of our approach, a prototype system has been deployed and made available for convenient public access.
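The core pipeline described in the abstract, extracting a partial ranking per evaluator and aggregating the partial rankings into one total ranking, can be illustrated with a minimal sketch. The thesis does not specify its aggregation method here, so this example uses a simple Borda-style positional score as a stand-in; the function name and the sample paper IDs are hypothetical.

```python
from collections import defaultdict

def aggregate_partial_rankings(partial_rankings):
    """Aggregate per-evaluator partial rankings into one total ranking.

    Within each partial ranking of n entities, the entity at position
    pos (0 = best) earns a Borda-style score of n - pos. Entities are
    then sorted by total score, best first (ties broken by name).
    """
    scores = defaultdict(float)
    for ranking in partial_rankings:
        n = len(ranking)
        for pos, entity in enumerate(ranking):
            # higher-ranked entities (smaller pos) earn larger scores
            scores[entity] += n - pos
    return sorted(scores, key=lambda e: (-scores[e], e))

# Each evaluator rates only a few entities (sparse data); only the
# relative order within each evaluator's list is kept.
partials = [
    ["paper1", "paper2"],   # evaluator A: paper1 > paper2
    ["paper1", "paper3"],   # evaluator B: paper1 > paper3
    ["paper3", "paper2"],   # evaluator C: paper3 > paper2
]
print(aggregate_partial_rankings(partials))  # ['paper1', 'paper3', 'paper2']
```

Because each score depends only on within-evaluator order, an evaluator who grades uniformly harshly or leniently contributes exactly the same information as a neutral one, which is the motivation for comparison-based evaluation over raw rating averages.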
dc.description.department: Computer Science
dc.format: Text
dc.format.extent: 73 pages
dc.format.medium: 1 file (.pdf)
dc.identifier.citation: Martinez Torres, J. A. (2014). Investigating comparison-based evaluation for sparse data (Unpublished thesis). Texas State University, San Marcos, Texas.
dc.identifier.uri: https://hdl.handle.net/10877/5293
dc.language.iso: en
dc.subject: Ranking
dc.subject: Rank aggregation
dc.subject: Evaluation
dc.title: Investigating Comparison-based Evaluation for Sparse Data
dc.type: Thesis
thesis.degree.department: Computer Science
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Texas State University
thesis.degree.level: Masters
thesis.degree.name: Master of Science

Files

Original bundle

Name: MARTINEZTORRES-THESIS-2014.pdf
Size: 1.66 MB
Format: Adobe Portable Document Format

License bundle

Name: LICENSE.txt
Size: 2.14 KB
Format: Plain Text

Name: PROQUEST_LICENSE.txt
Size: 6.72 KB
Format: Plain Text