We invite submissions on foundational, conceptual, and practical issues involving the modeling of actions, as described by natural language expressions and gestures, and as depicted in images and videos. Relevant topics include, but are not limited to:
– dynamic models of actions
– formal semantic models of actions
– affordance modeling
– manipulation action modeling
– linking multimodal descriptions and presentations of actions (image, text, icon, video)
– automatic action recognition from text, images, and videos
– communicating and performing actions with robots or avatars for joint tasks
– action language grounding
– evaluation of action annotation models