<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
	<record>
		<datafield tag="980" ind1=" " ind2=" ">
			<subfield code="a">CONF</subfield>
		</datafield>
		<datafield tag="970" ind1=" " ind2=" ">
			<subfield code="a">George_CVPR_2021/IDIAP</subfield>
		</datafield>
		<datafield tag="245" ind1=" " ind2=" ">
			<subfield code="a">Cross Modal Focal Loss for RGBD Face Anti-Spoofing</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">George, Anjith</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Marcel, Sébastien</subfield>
		</datafield>
		<datafield tag="856" ind1="4" ind2="0">
			<subfield code="i">EXTERNAL</subfield>
			<subfield code="u">http://publications.idiap.ch/attachments/papers/2021/George_CVPR_2021.pdf</subfield>
			<subfield code="x">PUBLIC</subfield>
		</datafield>
		<datafield tag="711" ind1="2" ind2=" ">
			<subfield code="a">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</subfield>
		</datafield>
		<datafield tag="260" ind1=" " ind2=" ">
			<subfield code="c">2021</subfield>
		</datafield>
		<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Automatic methods for detecting presentation attacks are essential to ensure the reliable use of facial recognition technology. Most of the methods available in the literature for presentation attack detection (PAD) fail to generalize to unseen attacks. In recent years, multi-channel methods have been proposed to improve the robustness of PAD systems. Often, only a limited amount of data is available for additional channels, which limits the effectiveness of these methods. In this work, we present a new framework for PAD that uses RGB and depth channels together with a novel loss function. The new architecture uses complementary information from the two modalities while reducing the impact of overfitting. Essentially, a cross-modal focal loss function is proposed to modulate the loss contribution of each channel as a function of the confidence of individual channels. Extensive evaluations on two publicly available datasets demonstrate the effectiveness of the proposed approach.</subfield>
		</datafield>
	</record>
</collection>