Eamonn Walsh at Kings asks about faces and our ability to recognise emotion from them. I’m aware that emotion recognition is a big field, and of course I have come across the work of Ekman and others in the past. Is this part of our study?
Puppets’ faces, on the whole, do not change (although audiences often feel as if they do). Puppet designers pay a great deal of attention to how the puppet’s face is sculpted to allow it the maximum range of interpretation – a permanent grin can be a real obstacle to a tender scene, for example.
It might be that part of our provocation is to suggest that there is more non-facial data being taken in than is commonly appreciated – for example we would base most of our emotional content on posture, breath, tempo, and levels of physical tension. When all of this is in place (and in context) the audience are rarely in doubt about the character’s experience.
It might be useful to clarify the key difference between what is possible in a puppet and what you might get from a robot. Clearly the robotics industry (and AI in general) is keenly interested in how body language can express or communicate emotion. Crucially, the puppet is not programmed, but is manipulated by a human performer. The human performer is (as things stand at the moment) considerably better than a robot at performing social and behavioural nuance. They are more sensitive in response to cues, and I think that there is substantial tacit knowledge in play, in that an experienced puppeteer may not be overtly conscious of many of the body language actions or relations they are creating in the puppet’s body. Rather than executing a technical set of commands, they perform the feeling and the thought, using the body of the puppet as a medium.
I believe that this event in Bristol suggested that robots using motion capture from puppeteers delivered more ‘meaningful’ movement than when the robots moved using motion capture of real humans. The puppeteer’s special skill is to refine behavioural movement into meaningful movement.
And of course the puppeteer typically has a restricted ‘language’ for these cues to communicate through – for example there is usually no (or almost no) facial movement, and the joints of the puppet will in most cases be simpler than those of the human body. Yet spectators regularly read complex emotion in puppets and experience emotional connection and empathy with them.
It may be that we can help the robotics programmers in developing body languages for non-human bodies. It may also be important that the spectator knows that the puppeteer’s performance is live and is responsive to the liveness of the interaction.
Mindreading and bodyreading
I’ve started reading Ian Apperly’s Mindreading in order to get up to speed on the current state of theory of mind, and he has a pleasantly informal approach (“I am not the first… to wonder whether experimental psychologists might be better off without these theories” – referring to ‘theory theory’ and ‘simulation theory’). I’m in a welter of competing research disciplines – just as planned – chopping between this sort of reading, articles on puppetry theory, and what sometimes seems the opposite sort of consideration. After a very useful chat with Stephen Mottram about his recent show The Parachute, I am thinking about the point-light displays pioneered by Johansson and used, I believe, by several of the researchers I will meet to model the full-body behaviour of emotional and other activities. Mottram places great emphasis on the apparent displacement of weight for our reflexive ‘life-detecting’ processes; other practitioners I know (like Basil Jones, and it is also the case in my teaching) place equally strong focus on the apparent expression of breath.
It seems obvious that the puppet is finding a way to mimic, alter or exaggerate this sort of body data – but also that part of the compulsion to watch drama (including puppet drama), and our admiration for its stars and writers, is that we are involved in ‘mindreading’ fictional minds. Apperly’s breakdown of the modular and abductive elements of social exchange and mindreading is a very useful basis.
At some stage I will understand enough about this to ask the next question – what changes in this process when we ‘know’ that the actor is a normally inanimate object?