dc.contributor.advisor: Gao, Byron J.
dc.contributor.author: Martinez Torres, Jose Antonio
dc.date.accessioned: 2014-09-15T18:09:58Z
dc.date.available: 2014-09-15T18:09:58Z
dc.date.issued: 2014-12
dc.identifier.citation: Martinez Torres, J. A. (2014). Investigating comparison-based evaluation for sparse data (Unpublished thesis). Texas State University, San Marcos, Texas.
dc.identifier.uri: https://digital.library.txstate.edu/handle/10877/5293
dc.description.abstract: Evaluation is ubiquitous. Often we need to evaluate a set of target entities (movies, restaurants, products, courses, paper submissions) and obtain their true ratings (average ratings from the population) or true rankings (rankings based on true ratings). By the law of large numbers, average ratings from large samples serve this purpose well. In practice, however, evaluation data are typically extremely sparse, and each entity receives only a small number of ratings. In this case, the average ratings can differ significantly from the true ratings because evaluators hold different standards and preferences and are unevenly distributed across entities. Based on the observation that comparative evaluations (e.g., paper 1 is better than paper 2) are more trustworthy than isolated ratings (e.g., paper 1 has a score of 4.5), in this study we investigate comparison-based evaluation, whose principal idea is to first extract a partial ranking over the entities evaluated by each evaluator, and then aggregate all the partial rankings into a total ranking that closely approximates the true ranking. The aggregated total ranking can in turn be used to estimate the true ratings. We also investigate the associated problem of evaluation assignment (assigning target entities to evaluators). In many applications (e.g., academic conferences) such an assignment phase precedes evaluation, yet the assignment is currently not designed to maximize evaluation quality. We propose a layered assignment approach that maximizes the quality of comparison-based evaluation for a given amount of evaluation resources (evaluation is generally labor-intensive). All the proposed algorithms have been implemented and validated on benchmark datasets against state-of-the-art methods. In addition, to demonstrate the utility of our approach, a prototype system has been deployed and made publicly available.
dc.format: Text
dc.format.extent: 73 pages
dc.format.medium: 1 file (.pdf)
dc.language.iso: en
dc.subject: Ranking
dc.subject: Rank aggregation
dc.subject: Evaluation
dc.subject.lcsh: Database management [en_US]
dc.subject.lcsh: Computational complexity [en_US]
dc.title: Investigating Comparison-based Evaluation for Sparse Data
txstate.documenttype: Thesis
dc.contributor.committeeMember: Ngu, Anne H.H.
dc.contributor.committeeMember: Lu, Yijuan
thesis.degree.department: Computer Science [en_US]
thesis.degree.discipline: Computer Science [en_US]
thesis.degree.grantor: Texas State University [en_US]
thesis.degree.level: Masters [en_US]
thesis.degree.name: Master of Science [en_US]
txstate.department: Computer Science
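
The central step described in the abstract, extracting a partial ranking from each evaluator's sparse ratings and aggregating the partial rankings into a total ranking, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration using a simple Borda-style pairwise count; it is not the thesis's algorithm, and all evaluator and entity names are invented.

# Minimal sketch of comparison-based aggregation (hypothetical data and names,
# not the thesis's method): each evaluator's ratings are reduced to pairwise
# comparisons, and the comparisons are tallied into a total ranking.

from collections import defaultdict
from itertools import combinations

# Sparse evaluations: each evaluator rates only a few entities.
evaluations = {
    "evaluator_1": {"paper_A": 4.5, "paper_B": 3.0},
    "evaluator_2": {"paper_B": 5.0, "paper_C": 4.0},
    "evaluator_3": {"paper_A": 4.0, "paper_C": 2.0},
}

def pairwise_comparisons(ratings):
    # Yield (winner, loser) pairs implied by one evaluator's ratings.
    for a, b in combinations(ratings, 2):
        if ratings[a] > ratings[b]:
            yield a, b
        elif ratings[b] > ratings[a]:
            yield b, a
        # Ties contribute no comparison.

score = defaultdict(int)
for ratings in evaluations.values():
    for winner, loser in pairwise_comparisons(ratings):
        score[winner] += 1   # a win adds a point
        score[loser] -= 1    # a loss subtracts a point

# Total ranking approximating the true ranking (best first).
total_ranking = sorted(score, key=score.get, reverse=True)
print(total_ranking)   # ['paper_A', 'paper_B', 'paper_C']

Because the tally relies only on within-evaluator comparisons, it is unaffected by evaluators who are uniformly harsh or lenient, which is the intuition behind preferring comparisons over isolated ratings when data are sparse.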

