You're lost in the wilderness and need to be found. A search and rescue team sends an Unmanned Aerial Vehicle (UAV) ahead to find you. How can we modify the aerial photography captured by this UAV to help the team find you as quickly and accurately as possible?
This webpage is intended as an appendix to the thesis entitled “Assisting Search and Rescue through Visual Attention”, available below.
These appendices include all the data analysed in the four experimental investigations:
svp (codename IIMC, trials 1–6)
seg_clustered (codenames RSVPMSA and RSVPMSB)
enl (codename RSVPE)
gcd (codename IIMC, trials 7 and 8)
Each archive available from this page is in an open format and includes a document describing the data enclosed. If you require any help processing this data, or wish to collaborate, please feel free to email me at email@example.com.
The forms and protocols used for the experimental investigations are included here. These materials were used as part of the ethics approval applications for these experiments.
Each archive includes a README.md file explaining the PDFs included.
The aerial photography (a.k.a. terrain strips) used within the experimental investigations was sourced from the Natural Resource Information System of the Montana Geographic Information Clearinghouse.
The “Terrain Strip Archive” includes the six original terrain strips that were used in each experimental investigation. These are identified by the letters A–F, with terrains A and B consisting of scrubland and terrains E and F consisting of dense forest. Terrain strips C and D sit somewhere between the two extremes of scrubland and dense forest. Two versions of each terrain strip are available: one with clustered targets, and one with singular targets.
GitHub repositories (source code) are available for both MapTiler, which displays the stimuli, and MapCutter, which prepares the stimuli from the source terrain strips.
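To illustrate the kind of preparation MapCutter performs, the sketch below divides a terrain strip into fixed-size display tiles. This is a hypothetical illustration, not MapCutter's actual code; the tile size and strip dimensions are assumptions chosen for the example.

```python
# Hypothetical sketch of cutting a terrain strip into display tiles.
# Tile size and strip dimensions below are illustrative assumptions,
# not values taken from MapCutter itself.
def tile_boxes(strip_w, strip_h, tile):
    """Return (left, top, right, bottom) boxes covering the strip, row-major.

    Edge tiles are clipped to the strip boundary rather than padded.
    """
    boxes = []
    for top in range(0, strip_h, tile):
        for left in range(0, strip_w, tile):
            boxes.append((left, top,
                          min(left + tile, strip_w),
                          min(top + tile, strip_h)))
    return boxes

# A 1024x512 strip cut into 256-pixel tiles gives a 4x2 grid of 8 tiles.
boxes = tile_boxes(1024, 512, 256)
```

Each box could then be passed to an image library's crop routine to write out the individual tile images.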
The eye-movement behaviour of each subject was recorded using a passive LC Technologies eye-gaze tracker system, which records the position of the eye relative to the display screen coordinates approximately every 17 ms (60 Hz). The records available from this appendix include the default columns for each sample as well as some extra columns relating this dataset to the experimental investigations described in the thesis and documentation.
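A minimal sketch of working with such records follows, assuming a CSV export with per-sample gaze coordinates. The column names (`gaze_x`, `gaze_y`, `terrain`) are placeholders, not the tracker's actual export format; the only detail taken from the text above is the 60 Hz sample rate, from which a timestamp can be derived for each sample.

```python
# Hypothetical sketch: reading 60 Hz gaze samples from a CSV export.
# Column names are assumptions, not the tracker's real schema.
import csv
from io import StringIO

SAMPLE_PERIOD_S = 1 / 60  # one sample roughly every 17 ms

def load_gaze_samples(csv_text):
    """Parse gaze records, attaching a timestamp derived from the sample index."""
    samples = []
    for i, row in enumerate(csv.DictReader(StringIO(csv_text))):
        samples.append({
            "t": i * SAMPLE_PERIOD_S,      # seconds since recording start
            "x": float(row["gaze_x"]),     # screen x coordinate (pixels)
            "y": float(row["gaze_y"]),     # screen y coordinate (pixels)
            "terrain": row["terrain"],     # extra column linking back to the trial
        })
    return samples

# Example with made-up values:
demo = "gaze_x,gaze_y,terrain\n512.0,384.0,A\n515.2,380.1,A\n"
samples = load_gaze_samples(demo)
```

In practice the extra columns described in the enclosed documentation would replace the placeholder `terrain` field here.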