This paper investigates the effects of a humanoid robot's online feedback during a tutoring situation in which a human demonstrates how to make a frog jump across a table. Motivated by micro-analytic studies of adult-child interaction, we investigated whether tutors react to a robot's gaze strategies while presenting an action and, if so, how they adapt to them. Our analysis reveals that tutors adjust typical "motionese" parameters (pauses, speed, and height of motion). We argue that a robot, when using adequate online feedback strategies, has at its disposal an important resource with which it can pro-actively shape the tutor's presentation and help generate the input from which it would benefit most. These results advance our understanding of robotic "social learning" in that they suggest considering human and robot as one interactional learning system.