<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
	<record>
		<datafield tag="980" ind1=" " ind2=" ">
			<subfield code="a">ARTICLE</subfield>
		</datafield>
		<datafield tag="970" ind1=" " ind2=" ">
			<subfield code="a">Jankowski_RA-L_2021/IDIAP</subfield>
		</datafield>
		<datafield tag="245" ind1=" " ind2=" ">
			<subfield code="a">Probabilistic Adaptive Control for Robust Behavior Imitation</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Jankowski, Julius</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Girgin, Hakan</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Calinon, Sylvain</subfield>
		</datafield>
		<datafield tag="653" ind1="1" ind2=" ">
			<subfield code="a">Imitation Learning</subfield>
		</datafield>
		<datafield tag="653" ind1="1" ind2=" ">
			<subfield code="a">Machine Learning for Robot Control</subfield>
		</datafield>
		<datafield tag="653" ind1="1" ind2=" ">
			<subfield code="a">Robust/Adaptive Control</subfield>
		</datafield>
		<datafield tag="856" ind1="4" ind2="0">
			<subfield code="i">EXTERNAL</subfield>
			<subfield code="u">http://publications.idiap.ch/attachments/papers/2021/Jankowski_RA-L_2021.pdf</subfield>
			<subfield code="x">PUBLIC</subfield>
		</datafield>
		<datafield tag="773" ind1=" " ind2=" ">
			<subfield code="p">IEEE Robotics and Automation Letters</subfield>
		</datafield>
		<datafield tag="260" ind1=" " ind2=" ">
			<subfield code="c">2021</subfield>
		</datafield>
		<datafield tag="520" ind1=" " ind2=" ">
			<subfield code="a">In the context of learning from demonstration (LfD), trajectory policy representations such as probabilistic movement primitives (ProMPs) allow for rich modeling of demonstrated skills. To reproduce a learned skill with a real robot, a feedback controller is required to cope with perturbations and to react to dynamic changes in the environment. In this paper, we propose a generalized probabilistic control approach that merges the probabilistic modeling of the demonstrated movements and the feedback control action for reproducing the demonstrated behavior. We show that our controller can be easily employed, outperforming both the original controller and a controller with constant feedback gains. Furthermore, we show that the proposed approach is able to solve dynamically changing tasks by modeling the demonstrated behavior as Gaussian mixtures and by introducing context variables. We demonstrate the capability of the approach with experiments in simulation and by teaching a 7-axis Franka Emika Panda robot to drop a ball into a moving box with only a few demonstrations.</subfield>
		</datafield>
	</record>
</collection>