
MCSc Thesis Defence - Visualizing Gestural Annotation Made During Leisure Runs to Promote Recall of Affective Experience

Who:    Felwah Alqahtani

Title:    Visualizing Gestural Annotation Made During Leisure Runs to Promote Recall of Affective Experience

Examining Committee:

Derek Reilly - Faculty of Computer Science (Supervisor)
Kirstie Hawkey - Faculty of Computer Science (Reader)
Raghav Sampangi - Faculty of Computer Science (Reader)

Chair:    Malcolm Heywood - Faculty of Computer Science

Abstract:

Many devices and applications are available today for monitoring physical activity. These systems display physiological data such as step counts, energy expenditure, and heart rate, but information about the emotions that affect physical activity is needed for deeper self-reflection and increased self-awareness. In this thesis, we present results from a comparative study exploring whether gestural annotations of felt emotion, presented on a map-based visualization, can support recall of one’s affective experience of recreational runs. We compare gestural annotations with audio and video notes and a “mental note” baseline. In our study, 20 runners were asked to record their emotional state at regular intervals while running a familiar route. Each runner used one of the four methods to capture emotion over four separate runs. Five days after the last run, runners used an interactive map-based visualization to review and recall their affective experiences. In addition to the routes run, the visualization presented a set of cues that might support recall: weather, time of the run, running speed, elevation, heart rate, and the location of each annotation. Results indicate that gestural annotation promoted recall of affective experience more effectively than the baseline condition, as measured by confidence in recall and detail provided. Gestural annotation was also comparable to video and audio annotation in terms of time, confidence, and detail. Audio annotation supported recall primarily through the runner’s spoken notes, though background sounds were sometimes used as cues. Video annotation yielded the most detail, much of it directly related to visual cues in the video; however, recording video annotations required runners to stop during their runs. Given these results, we suggest that background logging (of ambient sound and video) may supplement gestural annotation.

Time:

Location:    Room 429, Goldberg Computer Science Building