Learning Image Saliency from Human Touch Behaviors
The concept of touch saliency has recently been introduced as a possible alternative to eye tracking in usability studies. Prior work on touch saliency shows that image saliency maps can be generated from users' simple zoom behavior on touch devices. However, when browsing images on a touch screen, users apply a variety of touch behaviors, such as pinch zoom, tap, double-tap zoom, and scroll, to examine their regions of interest. Several questions naturally arise: Do these different behaviors correspond to different kinds of human attention? Which behaviors are most strongly correlated with human eye fixation? How can a good image saliency map be learned from multiple touch behaviors? To address these open questions, a series of studies is designed and conducted. Two novel and comprehensive touch saliency learning approaches are also proposed to derive good image saliency maps from a variety of human touch behaviors using different machine learning algorithms. The experimental results demonstrate the validity of the study and the potential and effectiveness of the proposed approaches.
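The abstract does not give implementation details, but the core idea of turning touch interactions into a saliency map can be sketched as accumulating a weighted Gaussian blob at each touched image location. This is a minimal illustration only: the function name, the per-behavior weights, and the Gaussian aggregation are assumptions, not the thesis's actual method.

```python
import math

def touch_saliency_map(touches, width, height, sigma=2.0):
    """Accumulate Gaussian blobs at touch locations into a saliency map.

    touches: list of (x, y, weight) tuples. The weight is a hypothetical
    knob for the behavior type (e.g. a double-tap zoom could be weighted
    higher than a scroll). Returns a height x width grid scaled to [0, 1].
    """
    grid = [[0.0] * width for _ in range(height)]
    for tx, ty, w in touches:
        for y in range(height):
            for x in range(width):
                d2 = (x - tx) ** 2 + (y - ty) ** 2
                grid[y][x] += w * math.exp(-d2 / (2 * sigma ** 2))
    peak = max(max(row) for row in grid) or 1.0  # avoid divide-by-zero
    return [[v / peak for v in row] for row in grid]

# Example: two gestures on a 16x16 image, one weighted higher than the other.
m = touch_saliency_map([(3, 3, 1.0), (10, 10, 0.5)], 16, 16)
```

In this sketch the map peaks at the most heavily weighted touch location and falls off smoothly elsewhere; a learning-based approach, as the thesis proposes, would instead fit how each behavior type should contribute.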
Citation: Fang, S. (2013). Learning image saliency from human touch behaviors (Unpublished thesis). Texas State University-San Marcos, San Marcos, Texas.