
AREA II is the follow-up to the first AREA workshop, held at LREC 2018 (http://lrec2018.areaworkshop.org/).
There has recently been increased interest in modeling actions, as described by natural language expressions and gestures, and as depicted by images and videos. Action modeling has also emerged as an important topic in robotics and HCI. The goal of the AREA II workshop is to gather and discuss advances in research areas where actions are paramount, e.g., virtual embodied agents, robotics, HRI, and human-computer communication, as well as in modeling multimodal human-human interactions involving actions. Action modeling is an inherently multidisciplinary area, drawing on contributions from computational linguistics, AI, semantics, robotics, psychology, and formal logic.

While the community has paid considerable attention to the representation and recognition of events (e.g., the development of ISO-TimeML and associated specifications, and the four workshops on “EVENTS: Definition, Detection, Coreference, and Representation”), this workshop focuses specifically on actions undertaken by embodied agents, as opposed to events in the abstract. By concentrating on actions, we hope to attract researchers working in computational semantics, gesture, dialogue, HCI, robotics, and other areas, and to build a community around action as a communicative modality in which their work can be shared. This community will serve as a venue for developing and evaluating resources for integrating action recognition and processing into human-computer communication.

We invite submissions on foundational, conceptual, and practical issues in modeling actions, as described by natural language expressions and gestures, and as depicted by images and videos. Relevant topics include, but are not limited to:

  •  dynamic models of actions
  •  formal semantic models of actions
  •  affordance modeling
  •  manipulation action modeling
  •  linking multimodal descriptions and presentations of actions (image, text, icon, video)
  •  automatic action recognition from text, images, and videos
  •  communicating and performing actions with robots or avatars for joint tasks
  •  action language grounding
  •  evaluation of action models

Submissions should be made via the conference's EasyChair page.