Czajka, A; Bowyer, K; Krumdick, M; Vidal Mata, R
This paper presents a solution to automatically recognize the correct left / right and upright / upside-down orientation of iris images. This solution can be used to counter spoofing attacks aimed at generating fake identities by rotating an iris image or the iris sensor during acquisition. Two approaches are compared on the same data, using the same evaluation protocol: a) feature engineering, using hand-crafted features classified by a Support Vector Machine (SVM), and b) feature learning, using data-driven features learned and classified by a Convolutional Neural Network (CNN). A dataset of 20,750 iris images, acquired from 103 subjects using four sensors, was used for development. An additional subject-disjoint dataset of 1,939 images, from 32 additional subjects, was used for testing. Both same-sensor and cross-sensor tests were carried out to investigate how the classification approaches generalize to unknown hardware. The SVM-based approach achieved an average correct classification rate above 95% (89%) for recognition of left / right (upright / upside-down) orientation when tested on subject- and camera-disjoint data, and 99% (97%) when the images were acquired by the same sensor. The CNN-based approach performed better in same-sensor experiments, but generalized slightly less well to unknown sensors than the SVM-based approach. We are not aware of any other papers on automatic recognition of upright / upside-down orientation of iris images, or of any that study both hand-crafted and data-driven features in same-sensor and cross-sensor subject-disjoint experiments. The datasets used in this work, along with the random splits of the data used in cross-validation, are being made available.
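To make the feature-engineering baseline concrete, the sketch below shows one way an SVM could be trained to predict iris-image orientation from hand-crafted features. It is a minimal illustration only: the block-wise intensity-mean features, the 64x64 image size, and the synthetic labels are assumptions for demonstration and are not the features or data described in the paper.

```python
# Illustrative sketch (not the paper's actual pipeline): an SVM trained on
# simple hand-crafted features to classify iris-image orientation
# (e.g., 0 = upright, 1 = upside-down).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def blockwise_mean_features(image, grid=(8, 8)):
    """Split a grayscale image into a grid and use block means as features
    (a placeholder for the hand-crafted features used in the paper)."""
    h, w = image.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = image[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            feats.append(block.mean())
    return np.array(feats)

# Synthetic stand-in data; a real experiment would load iris images
# and their orientation labels from the datasets described above.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(200, 64, 64)).astype(np.float32)
labels = rng.integers(0, 2, size=200)

X = np.stack([blockwise_mean_features(img) for img in images])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```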