June 24, 2017
 

Social adaptation in conversational agents

Stefan Kopp
Technology interfaces can improve human-machine interaction when they learn to adapt to users in social settings.

From cars with voice control, to instructable robots that help with housework, to embodied characters that take website requests, technology is increasingly invading our daily lives, and new paradigms for interacting with it are emerging. Machines are no longer viewed as complicated yet dumb tools, but rather as smart companions that support task management and relieve us of the tedious operation of appliances. Such agent-based interfaces complement the traditional relationship with technology and call for intuitive forms of human-machine communication.

The interfaces required must cooperate and communicate with people through socially adept conversation (see Figure 1), which has led to the development of embodied conversational agents.1 Building them, however, is a complex and daunting task. Behavioural traits must be implemented that convey complex meaning, support discourse, regulate the conversation and serve socio-emotive functions. In interaction, the tiniest features, and even the absence of a certain kind of behaviour, can convey possibly unwanted messages. Designers must therefore account for many aspects of conversational agents, from collaborative task models to social reasoning about mutual beliefs and intentions to real-time multimodal behaviour. Mapping these structures onto each other to produce and understand conversational contributions must also be done carefully.


Figure 1. Conversational agent ‘Vince’ in different setups. (a) Mobile phone. (b) Marker-free face-to-face conversation.

Existing work on agents focuses on one or a few aspects, such as eye gaze, prosody (rhythm, intonation, stress and related attributes) of speech or turn-taking. The relevant models (whether explicit or implicit) are based more or less directly on empirical data and theoretical considerations. Two problems emerge. First, neither our empirical and theoretical findings nor our modelling methods are sufficiently advanced to ensure adequate results. Second, rigid models quickly become insufficient because conversation unfolds as a joint activity in which interlocutors do not simply stick to their initial goals, beliefs, intentions or behavioural patterns. Instead, adaptation is pervasive in natural conversation and is assumed to serve communicative functions (for example, providing a shared basis for common ground2 and facilitating dialogue3) as well as social ones (for example, creating rapport and affiliation4,5). 'Socially adaptive agents', which learn from and adjust to their users to establish and maintain successful interaction routines and signals, can offer a solution to both problems. We not only need models of social behaviour, we must also understand how this behaviour both facilitates and is subject to personalized adaptation.

How can one build such agents? A minimal layout (see Figure 2) comprises modules for processing task-oriented and social goals (red box), content/actions (green) and behaviours (blue). Each stage must support both production and perception and must process the input and output of adjacent modules. Interpersonal adaptation is found in all aspects of human conversation and therefore affects all levels of the layout: behaviours are adapted through mimicry, entrainment or interactional synchrony; beliefs, intentions and goals through backchannel feedback, negotiation or metacommunication. The challenge is to achieve every single one of these adaptations and to coordinate them in a principled and controlled way. We set out to build these capabilities bottom-up.


Figure 2. Outline of two conversational agents and the potential adaptations between them in social interaction.
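To make this layout concrete, the following Python sketch illustrates how three such stages could be chained: specifications flow top-down for production, evidence flows bottom-up for perception, and every stage exposes a hook through which it can be adapted to the interlocutor. All class and method names are illustrative assumptions, not part of our actual system.

# Minimal sketch of the three-stage layout in Figure 2; all names are
# illustrative assumptions, not our implementation.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Stage:
    name: str
    state: Dict[str, Any] = field(default_factory=dict)

    def produce(self, spec: Dict[str, Any]) -> Dict[str, Any]:
        # Refine a specification coming from the stage above,
        # e.g. a goal into content, or content into surface behaviour.
        return {**spec, "refined_by": self.name, **self.state}

    def perceive(self, evidence: Dict[str, Any]) -> Dict[str, Any]:
        # Interpret evidence coming from the stage below,
        # e.g. observed behaviour into content, content into intentions.
        return {**evidence, "interpreted_by": self.name}

    def adapt(self, partner_model: Dict[str, Any]) -> None:
        # Adjust this stage's parameters to the interlocutor
        # (mimicry/entrainment at the bottom, negotiation at the top).
        self.state.update(partner_model)

class Agent:
    def __init__(self) -> None:
        # goals (red), content/actions (green), behaviours (blue)
        self.stages: List[Stage] = [Stage("goals"), Stage("content"), Stage("behaviour")]

    def generate(self, goal: Dict[str, Any]) -> Dict[str, Any]:
        spec = goal
        for stage in self.stages:            # top-down production path
            spec = stage.produce(spec)
        return spec                          # surface behaviour to be realized

    def interpret(self, observation: Dict[str, Any]) -> Dict[str, Any]:
        evidence = observation
        for stage in reversed(self.stages):  # bottom-up perception path
            evidence = stage.perceive(evidence)
        return evidence                      # inferred content and goals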

We previously developed components laying the foundation for socially adaptive agents. We devised flexible representation and specification formats for communicative behaviour and its functions (incorporated in our 'behaviour markup language').6 Our articulated communicator engine (ACE) can turn these into multimodal (verbal and nonverbal) behaviour on the spot.7 We use this to drive conversational robots (such as Honda's ASIMO) and virtual characters like our sociable agent Vince in mobile, virtual/augmented-reality or desktop applications. We also designed a distributed architecture based on D-Bus8 that allows modules to emit and receive data continuously and incrementally.
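The incremental data flow between modules can be approximated, for illustration, by a small publish/subscribe bus in which every module forwards partial results as soon as they become available. The real system uses D-Bus;8 the topic names and payloads below are invented.

# Illustrative stand-in for incremental inter-module communication; the real
# system uses D-Bus (reference 8), and topic names and payloads are invented.
from collections import defaultdict
from typing import Callable, Dict, List

class IncrementalBus:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers[topic].append(handler)

    def emit(self, topic: str, increment: dict) -> None:
        # Each increment is delivered immediately, so downstream modules
        # (e.g. the behaviour realizer) can start working before a full
        # utterance or gesture has been specified.
        for handler in self._handlers[topic]:
            handler(increment)

bus = IncrementalBus()
bus.subscribe("behaviour.spec", lambda inc: print("realizing chunk:", inc))
bus.emit("behaviour.spec", {"chunk": 1, "speech": "Take the", "gesture": None})
bus.emit("behaviour.spec", {"chunk": 2, "speech": "red block", "gesture": "pointing"})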

One prerequisite for adaptivity is the ability to adjust flexibly rather than being restricted to fixed repertoires of verbal phrases, gestures, facial expressions, discourse plans or scripts. Constructive models are therefore required at each stage. Building on ACE's flexibility at the level of behaviour realization, we developed a model that allows agents to autonomously plan, i.e., select the content and derive the form of coordinated language and gesture.9 It comprises dedicated real-time planners for formulating verbal and gestural behaviour. The former uses a grammar-based sentence planner that is already socially adaptive because it incorporates priming of lexical and syntactic structures, informed by the frequency and recency of their use.10 Gesture generation uses probabilistic decision networks that map meaning (for example, visuo-spatial properties of the object shown) together with contextual factors (information structure, discourse status, previous gestures) onto gesture forms learned from empirical data and, in the future, adapted online to the user.
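The priming mechanism can be pictured as follows: frequency and recency of use are combined into an activation score that biases the choice between alternative constructions. The scoring function and decay constant below are simplified assumptions, not those of the actual microplanner.10

# Simplified sketch of recency/frequency-based priming for microplanning;
# the activation function and its parameters are illustrative assumptions.
import math
import time
from typing import Dict, List

class PrimingStore:
    def __init__(self, decay: float = 0.1) -> None:
        self.decay = decay                     # how fast recency fades (per second)
        self.count: Dict[str, int] = {}        # how often a construction was used
        self.last_used: Dict[str, float] = {}  # when it was last used

    def observe(self, construction: str) -> None:
        # Register a construction used by either interlocutor (self- and
        # other-priming are treated alike in this sketch).
        self.count[construction] = self.count.get(construction, 0) + 1
        self.last_used[construction] = time.time()

    def activation(self, construction: str) -> float:
        # Frequency term plus an exponentially decaying recency term.
        if construction not in self.last_used:
            return 0.0
        age = time.time() - self.last_used[construction]
        return self.count[construction] + math.exp(-self.decay * age)

    def choose(self, alternatives: List[str]) -> str:
        # Prefer the alternative that is currently primed most strongly.
        return max(alternatives, key=self.activation)

store = PrimingStore()
store.observe("the block that is red")   # the user just used a relative clause
print(store.choose(["the red block", "the block that is red"]))

In this toy example the planner prefers the relative-clause construction simply because the user has just used it, which is the essence of lexical and syntactic alignment.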

The next step is to tie these flexible generation models to perceptual processes and to extend them with learning. Vince already contains hierarchical sensorimotor structures for nonverbal behaviour, with levels for motor commands, motor programs and abstract schemes. These are equipped with forward models that enable continuous, prediction-based perception of behaviour. We have proposed a probabilistic approach for this that also accounts for the vertical flow of evidence and predictions between levels. Inverse models (learned as self-organizing maps) are in charge of analysing novel behaviour and augmenting the levels with corresponding motor structures. In our current setup with marker-free time-of-flight cameras (see Figure 1(b)), Vince can learn and cluster novel gestures (up to the level of motor programs) and recognize and imitate them while still observing them.
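The idea of prediction-based perception can be sketched as follows: each candidate motor program runs its forward model to predict the next observed hand position, and the candidates are re-weighted by their prediction error, so a belief over gestures is available while the movement is still unfolding. The trajectories, noise model and update rule below are illustrative assumptions, not the hierarchical model implemented in Vince.

# Toy sketch of forward-model-based gesture recognition during observation;
# trajectories, noise level and update rule are illustrative assumptions.
import numpy as np

class MotorProgram:
    def __init__(self, name: str, trajectory: np.ndarray) -> None:
        self.name = name
        self.trajectory = trajectory   # stored exemplar, shape (T, 3)
        self.t = 0

    def predict(self) -> np.ndarray:
        # Forward model: expected next hand position along the exemplar.
        step = min(self.t, len(self.trajectory) - 1)
        self.t += 1
        return self.trajectory[step]

def recognize(programs, observations, noise=0.05):
    # Maintain a belief over motor programs, updated after every new
    # observation, so recognition can begin while the gesture is ongoing.
    belief = np.full(len(programs), 1.0 / len(programs))
    for obs in observations:
        likelihood = np.array([
            np.exp(-np.sum((p.predict() - obs) ** 2) / (2 * noise ** 2))
            for p in programs
        ])
        belief = belief * likelihood + 1e-12   # avoid collapse to all zeros
        belief /= belief.sum()
        yield {p.name: b for p, b in zip(programs, belief)}

wave = MotorProgram("wave", np.array([[0.0, 1.0, 0.0], [0.1, 1.1, 0.0], [0.0, 1.2, 0.0]]))
point = MotorProgram("point", np.array([[0.0, 1.0, 0.0], [0.3, 1.0, 0.2], [0.6, 1.0, 0.4]]))
observed = [np.array([0.0, 1.0, 0.0]), np.array([0.28, 1.02, 0.19])]
for belief in recognize([wave, point], observed):
    print(belief)   # probability mass shifts towards 'point' after two frames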

In summary, we argue that a crucial next step for conversational interfaces is to become socially adaptive. This will foster acceptability and user satisfaction, since interpersonal adaptation of social behaviour both mitigates shortcomings of behavioural models and brings socially desirable qualities into human-machine interaction. Key challenges for the future include creating an integrated model of social adaptation that covers all of the adaptations (arrows) shown in Figure 2, and reducing the imbalance between what (virtual) agents can produce and what they can sense with current input-processing technology.


Author

Stefan Kopp
Sociable Agents Group, Center of Excellence for Cognitive Interaction Technology, Bielefeld University

Stefan Kopp heads the Sociable Agents Group. After obtaining his PhD from Bielefeld University in 2003 and subsequent postdoctoral appointments at Northwestern University, Illinois, and the Center for Interdisciplinary Research in Bielefeld, he is currently an investigator in the Sonderforschungsbereich (collaborative research centre) 673 'Alignment in Communication', the Research Institute for Cognition and Robotics and the Center of Excellence for Cognitive Interaction Technology.


References
  1. J. Cassell, J. Sullivan, S. Prevost and E. Churchill (eds.), Embodied Conversational Agents, MIT Press, 2000.

  2. H. H. Clark, Using Language, Cambridge University Press, Cambridge, UK, 1996.

  3. M. J. Pickering and S. Garrod, Towards a mechanistic psychology of dialogue, Behav. Brain Sci. 27, pp. 169-226, 2004.

  4. R. Y. Bourhis, H. Giles and W. E. Lambert, Social consequences of accommodating one's style of speech: a cross-national investigation, Int'l J. Sociol. Language 6 (5), pp. 5-71, 1975.

  5. J. L. Lakin, V. E. Jefferis, C. M. Cheng and T. L. Chartrand, The chameleon effect as social glue: evidence for the evolutionary significance of nonconscious mimicry, J. Nonverb. Behav. 27 (3), pp. 145-162, 2003.

  6. S. Kopp, B. Krenn, S. Marsella, A. Marshall, C. Pelachaud, H. Pirker, K. Thorisson and H. Vilhjalmsson, Towards a common framework for multimodal generation in ECAs: the behavior markup language, Lect. Notes Artif. Intell. 4133, pp. 205-217, Springer, 2006.

  7. S. Kopp and I. Wachsmuth, Synthesizing multimodal utterances for conversational agents, Comput. Anim. Virt. Worlds 15 (1), pp. 39-52, 2004.

  8. D-Bus software overview. http://dbus.freedesktop.org Accessed 22 September 2009.

  9. K. Bergmann and S. Kopp, Increasing expressiveness for virtual agents: autonomous generation of speech and gesture, Proc. 8th Int'l Conf. Auton. Agents Multiagent Syst. (AAMAS), Budapest, Hungary, 10-15 May 2009.

  10. H. Buschmeier, K. Bergmann and S. Kopp, An alignment-capable microplanner for natural language generation, Proc. 12th Euro. Worksh. Natur. Language Gener., pp. 82-89, 2009.


 
DOI:  10.2417/2200909.1821