In Vision-and-Language Navigation (VLN), an embodied agent needs to reach a
target destination guided only by a natural language instruction. To
explore the environment and progress towards the target location, the agent
must perform a series of low-level actions, such as rotating, before stepping
ahead. In this paper, we propose to exploit dynamic convolutional filters to
encode the visual information and the language instruction in an efficient way.
Unlike some previous works that abstract away the agent's perspective and use
high-level navigation spaces, we design a policy that decodes the information
provided by dynamic convolution into a series of low-level, agent-friendly
actions. Results show that our model exploiting dynamic filters outperforms
other architectures based on traditional convolution, setting a new state of
the art for embodied VLN in the low-level action space.
Additionally, we categorize recent works on VLN according to their
architectural choices and distinguish two main groups, which we call low-level
action and high-level action models. To the best of our knowledge, we are the
first to propose this analysis and categorization for VLN.
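
To make the idea of language-conditioned dynamic filters concrete, the sketch
below shows one possible PyTorch formulation: a linear layer generates 1x1
convolutional kernels from the instruction embedding, and these kernels are
convolved with the visual feature map to produce response maps that a policy
can decode into low-level actions. This is a minimal illustration under assumed
dimensions and module names (e.g. `DynamicConvEncoder`, `filter_gen`), not the
paper's actual implementation.

```python
# Minimal sketch of language-conditioned (dynamic) convolutional filters.
# All names, dimensions, and the number of filters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConvEncoder(nn.Module):
    def __init__(self, instr_dim=512, visual_channels=2048, num_filters=8):
        super().__init__()
        self.num_filters = num_filters
        # Generate one 1x1 convolutional kernel per filter from the instruction.
        self.filter_gen = nn.Linear(instr_dim, num_filters * visual_channels)

    def forward(self, instr_emb, visual_feat):
        # instr_emb:   (B, instr_dim)  e.g. final LSTM state of the instruction
        # visual_feat: (B, C, H, W)    e.g. CNN feature map of the current view
        B, C, H, W = visual_feat.shape
        # Dynamic 1x1 kernels, L2-normalized so responses behave like attention.
        kernels = self.filter_gen(instr_emb).view(B, self.num_filters, C, 1, 1)
        kernels = F.normalize(kernels, dim=2)
        # Apply each batch element's own kernels via a grouped convolution.
        out = F.conv2d(
            visual_feat.view(1, B * C, H, W),
            kernels.view(B * self.num_filters, C, 1, 1),
            groups=B,
        )
        # (B, num_filters, H, W): language-conditioned response maps that a
        # recurrent policy can flatten and decode into low-level actions.
        return out.view(B, self.num_filters, H, W)
```

In such a formulation, the filters change with every instruction, so the same
visual encoder can highlight different regions of the observation depending on
what the instruction asks the agent to find or do.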