<?xml version="1.0" encoding="UTF-8"?>
<collection xmlns="http://www.loc.gov/MARC21/slim">
	<record>
		<datafield tag="980" ind1=" " ind2=" ">
			<subfield code="a">ARTICLE</subfield>
		</datafield>
		<datafield tag="970" ind1=" " ind2=" ">
			<subfield code="a">Pignat_RAS_2017/IDIAP</subfield>
		</datafield>
		<datafield tag="245" ind1=" " ind2=" ">
			<subfield code="a">Learning adaptive dressing assistance from human demonstration</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Pignat, E.</subfield>
		</datafield>
		<datafield tag="700" ind1=" " ind2=" ">
			<subfield code="a">Calinon, Sylvain</subfield>
		</datafield>
		<datafield tag="856" ind1="4" ind2="0">
			<subfield code="i">EXTERNAL</subfield>
			<subfield code="u">http://publications.idiap.ch/attachments/papers/2019/Pignat_RAS_2017.pdf</subfield>
			<subfield code="x">PUBLIC</subfield>
		</datafield>
		<datafield tag="773" ind1=" " ind2=" ">
			<subfield code="p">Robotics and Autonomous Systems</subfield>
			<subfield code="v">93</subfield>
			<subfield code="c">61-75</subfield>
		</datafield>
		<datafield tag="260" ind1=" " ind2=" ">
			<subfield code="c">2017</subfield>
		</datafield>
		<datafield tag="856" ind1="4" ind2=" ">
			<subfield code="u">http://doi.org/10.1016/j.robot.2017.03.017</subfield>
			<subfield code="z">URL</subfield>
		</datafield>
		<datafield tag="024" ind1="7" ind2=" ">
			<subfield code="a">10.1016/j.robot.2017.03.017</subfield>
			<subfield code="2">doi</subfield>
		</datafield>
		<datafield tag="520" ind1=" " ind2=" ">
			<subfield code="a">For tasks such as dressing assistance, robots should be able to adapt to different user morphologies, preferences and requirements. We propose a programming by demonstration method to efficiently learn and adapt such skills. Our method encodes sensory information (relative to the human user) and motor commands (relative to the robot actuation) as a joint distribution in a hidden semi-Markov model. The parameters of this model are learned from a set of demonstrations performed by a human. Each state of this model represents a sensorimotor pattern, whose sequencing can produce complex behaviors. This method, while remaining lightweight and simple, encodes both time-dependent and time-independent behaviors. It enables the sequencing of movement primitives in accordance with the current situation and user behavior. The approach is coupled with a task-parametrized model, allowing adaptation to different users’ morphologies, and with a minimal intervention controller, providing safe interaction with the user. We evaluate the approach through several simulated tasks and two different dressing scenarios with a bi-manual Baxter robot.</subfield>
		</datafield>
	</record>
</collection>