Multimodal Feature Level Fusion based on Particle Swarm Optimization with Deep Transfer Learning

Citation:

P. H. Silva, E. Luz, L. A. Zanlorensi, D. Menotti, and G. Moreira. 2018. "Multimodal Feature Level Fusion based on Particle Swarm Optimization with Deep Transfer Learning." In 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1-8.

Abstract:

Most biometric systems rely on a single modality, typically face, iris, or fingerprint. Despite the good accuracies obtained with single modalities, these systems are more susceptible to attacks (e.g., spoofing) and to noise of all kinds, especially in non-cooperative (in-the-wild) environments. Since such environments are becoming increasingly common, approaches based on multimodal biometrics have received growing attention. One challenge in multimodal biometric systems is how to integrate the data from the different modalities. First, we propose a deep transfer learning approach, fine-tuned from a model trained for face recognition, that achieves an outstanding representation for the iris modality alone. We then perform feature-level fusion by means of feature selection with Particle Swarm Optimization (PSO). The feature pool contains the proposed fine-tuned iris representation and a periocular representation from our previous work. We compare this feature-level fusion against three basic rules for fusion at the matching-score level: sum, multiplication, and minimum. Results are reported for the iris and periocular regions (NICE.II competition database) and also in an open-world scenario. The experiments on the NICE.II competition database show that our transfer learning representation for the iris modality achieves a new state of the art, i.e., a decidability of 2.22 and an EER of 14.56%. The PSO-based feature-level fusion of the periocular and iris modalities also yields a new state-of-the-art result, i.e., a decidability of 3.45 and an EER of 5.55%.
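
To make the fusion strategy concrete, the sketch below illustrates a binary PSO selecting features from a pool of concatenated iris and periocular descriptors, with the decidability index of genuine versus impostor cosine scores as the fitness function. The toy data, the PSO hyperparameters, and the choice of decidability as fitness are illustrative assumptions for this example, not the paper's exact configuration.

```python
# Minimal sketch of PSO-based feature selection for feature-level fusion.
# Toy data, hyperparameters, and the fitness choice are assumptions for
# illustration only; they do not reproduce the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy feature pool: concatenated iris + periocular descriptors per sample.
n_subjects, n_per_subject, dim = 20, 4, 256
labels = np.repeat(np.arange(n_subjects), n_per_subject)
features = rng.normal(size=(labels.size, dim)) + labels[:, None] * 0.05

def decidability(feats, mask):
    """Separation (d') between genuine and impostor cosine-similarity scores."""
    sel = feats[:, mask.astype(bool)]
    if sel.shape[1] == 0:
        return -np.inf
    sel = sel / np.linalg.norm(sel, axis=1, keepdims=True)
    scores = sel @ sel.T
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(labels.size, k=1)
    gen, imp = scores[iu][same[iu]], scores[iu][~same[iu]]
    return (gen.mean() - imp.mean()) / np.sqrt(0.5 * (gen.var() + imp.var()))

# Binary PSO: each particle is a 0/1 mask over the fused feature vector.
n_particles, n_iters, w, c1, c2 = 15, 30, 0.7, 1.5, 1.5
masks = (rng.random((n_particles, dim)) > 0.5).astype(int)
vel = np.zeros((n_particles, dim))
pbest = masks.copy()
pbest_fit = np.array([decidability(features, m) for m in masks])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = w * vel + c1 * r1 * (pbest - masks) + c2 * r2 * (gbest - masks)
    prob = 1.0 / (1.0 + np.exp(-np.clip(vel, -10, 10)))  # sigmoid transfer
    masks = (rng.random(vel.shape) < prob).astype(int)
    fit = np.array([decidability(features, m) for m in masks])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = masks[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print(f"selected {gbest.sum()}/{dim} features, decidability {pbest_fit.max():.2f}")
```

In practice the fitness would be evaluated on a held-out validation split (e.g., EER or decidability of the verification scores), and the selected mask would then be applied to the test-set features before matching.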