Visual Control of Action

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (15 April 2019) | Viewed by 17470

Special Issue Editor


Dr. Rebecca M. Förster
Guest Editor
Neuro-cognitive Psychology & Cluster of Excellence ‘Cognitive Interaction Technology’ CITEC, Bielefeld University, Bielefeld, Germany
Interests: eye movements; visual attention; sequence learning; action control; visual working memory; scanpath methods

Special Issue Information

Dear Colleagues,

For the Special Issue of Vision on “Visual Control of Action”, original and review articles are invited covering the following topics:

  • Perception and Action
  • Sensorimotor control
  • Sensorimotor learning
  • Manual action control
  • Eye–hand coordination
  • Visual attention for action control
  • Visual working memory and action control
  • Sensorimotor sequence learning
  • Eye movements in natural tasks
  • Motor imagery

Dr. Rebecca M. Förster
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

16 pages, 1008 KiB  
Article
Errors in Imagined and Executed Typing
by Stephan F. Dahm and Martina Rieger
Vision 2019, 3(4), 66; https://doi.org/10.3390/vision3040066 - 20 Nov 2019
Cited by 6 | Viewed by 2376
Abstract
In motor imagery (MI), internal models may predict the action effects. A mismatch between predicted and intended action effects may result in error detection. To compare error detection in MI and motor execution (ME), ten-finger typists and hunt-and-peck typists performed a copy-typing task. Visibility of the screen and visibility of the keyboard were manipulated. Participants reported what type of error occurred and by which sources they detected the error. With a covered screen, fewer errors were reported, showing the importance of distal action effects for error detection. With a covered screen, the number of reported higher-order planning errors did not significantly differ between MI and ME. However, the number of reported motor command errors was lower in MI than in ME. Hence, only errors that occur in advance of internal modeling are equally observed in MI and ME. MI may require more attention than ME, leaving fewer resources to monitor motor command errors in MI. In comparison to hunt-and-peck typists, ten-finger typists detected more higher-order planning errors by kinesthesis/touch and fewer motor command errors by vision of the keyboard. The use of sources for error detection did not significantly differ between MI and ME, indicating similar mechanisms.
(This article belongs to the Special Issue Visual Control of Action)

15 pages, 711 KiB  
Article
Grasping Discriminates between Object Sizes Less Not More Accurately than the Perceptual System
by Frederic Göhringer, Miriam Löhr-Limpens, Constanze Hesse and Thomas Schenk
Vision 2019, 3(3), 36; https://doi.org/10.3390/vision3030036 - 19 Jul 2019
Cited by 3 | Viewed by 3630
Abstract
Ganel, Freud, Chajut, and Algom (2012) demonstrated that maximum grip apertures (MGAs) differ significantly when grasping perceptually identical objects. From this finding they concluded that the visual size information used by the motor system is more accurate than the visual size information available to the perceptual system. A direct comparison between accuracy in the perception and the action system is, however, problematic, given that accuracy in the perceptual task is measured using a dichotomous variable, while accuracy in the visuomotor task is determined using a continuous variable. We addressed this problem by dichotomizing the visuomotor measures. Using this approach, our results show that size discrimination in grasping is in fact inferior to perceptual discrimination, thereby contradicting the original suggestion put forward by Ganel and colleagues.
(This article belongs to the Special Issue Visual Control of Action)
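To illustrate the dichotomization step described in the abstract, the following is a minimal sketch of how continuous maximum grip apertures could be converted into binary size classifications and scored as percent correct, i.e., on the same scale as a dichotomous perceptual judgment. The data, the median-split criterion, and all variable names are illustrative assumptions, not the authors' analysis code.

```python
# A minimal sketch (not the authors' analysis): dichotomizing continuous
# maximum grip apertures (MGAs) so that grasping "discrimination accuracy"
# can be scored on the same percent-correct scale as a dichotomous
# perceptual judgment. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MGAs (mm) for grasps toward a smaller and a larger object.
mga_small = rng.normal(loc=75.0, scale=3.0, size=100)
mga_large = rng.normal(loc=77.0, scale=3.0, size=100)

# Dichotomize: a grasp is classified as "large" if its MGA exceeds the
# participant's overall median MGA, otherwise as "small".
criterion = np.median(np.concatenate([mga_small, mga_large]))
correct_small = np.mean(mga_small <= criterion)  # proportion classified "small"
correct_large = np.mean(mga_large > criterion)   # proportion classified "large"
grasp_accuracy = (correct_small + correct_large) / 2

# Hypothetical percent correct from a perceptual larger/smaller judgment.
perceptual_accuracy = 0.88

print(f"Grasping discrimination accuracy:   {grasp_accuracy:.2f}")
print(f"Perceptual discrimination accuracy: {perceptual_accuracy:.2f}")
```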

19 pages, 2976 KiB  
Article
Object Properties Influence Visual Guidance of Motor Actions
by Sharon Scrafton, Matthew J. Stainer and Benjamin W. Tatler
Vision 2019, 3(2), 28; https://doi.org/10.3390/vision3020028 - 10 Jun 2019
Cited by 1 | Viewed by 3906
Abstract
The dynamic nature of the real world poses challenges for predicting where best to allocate gaze during object interactions. The same object may require different visual guidance depending on its current or upcoming state. Here, we explore how object properties (the material and shape of objects) and object state (whether it is full of liquid, or to be set down in a crowded location) influence visual supervision while setting objects down, which is an element of object interaction that has been relatively neglected in the literature. In a liquid pouring task, we asked participants to move empty glasses to a filling station; to leave them empty, half fill, or completely fill them with water; and then move them again to a tray. During the first putdown (when the glasses were all empty), visual guidance was determined only by the type of glass being set down—with more unwieldy champagne flutes being more likely to be guided than other types of glasses. However, when the glasses were then filled, glass type no longer mattered, with the material and fill level predicting whether the glasses were set down with visual supervision: full, glass material containers were more likely to be guided than empty, plastic ones. The key finding from this research is that the visual system responds flexibly to dynamic changes in object properties, likely based on predictions of risk associated with setting down the object unsupervised by vision. The factors that govern these mechanisms can vary within the same object as it changes state.
(This article belongs to the Special Issue Visual Control of Action)
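To illustrate the dependent measure described in the abstract, the following minimal sketch tabulates the proportion of set-downs performed under visual supervision as a function of object material and fill level. The trial records and field names are hypothetical assumptions, not the study's data or analysis code.

```python
# A minimal sketch (illustrative, not the authors' analysis): counting how
# often a set-down was visually guided, broken down by object material and
# fill level. Trial records and field names are hypothetical.
from collections import defaultdict

# Each hypothetical trial: (material, fill_level, guided_by_vision)
trials = [
    ("glass",   "full",  True),
    ("glass",   "full",  True),
    ("glass",   "empty", False),
    ("plastic", "full",  True),
    ("plastic", "empty", False),
    ("plastic", "empty", False),
]

counts = defaultdict(lambda: [0, 0])  # (material, fill) -> [guided, total]
for material, fill, guided in trials:
    counts[(material, fill)][0] += int(guided)
    counts[(material, fill)][1] += 1

for (material, fill), (guided, total) in sorted(counts.items()):
    print(f"{material:>7s}, {fill:>5s}: {guided}/{total} set-downs visually guided")
```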

17 pages, 3026 KiB  
Article
The Limitations of Reward Effects on Saccade Latencies: An Exploration of Task-Specificity and Strength
by Stephen Dunne, Amanda Ellison and Daniel T. Smith
Vision 2019, 3(2), 20; https://doi.org/10.3390/vision3020020 - 11 May 2019
Cited by 2 | Viewed by 3467
Abstract
Saccadic eye movements are simple, visually guided actions. Operant conditioning of specific saccade directions can reduce the latency of eye movements in the conditioned direction. However, it is not clear to what extent this learning transfers from the conditioned task to novel tasks. The purpose of this study was to investigate whether the effects of operant conditioning of prosaccades to specific spatial locations would transfer to more complex oculomotor behaviours, specifically, prosaccades made in the presence of a distractor (Experiment 1) and antisaccades (Experiment 2). In part 1 of each experiment, participants were rewarded for making a saccade to one hemifield. In both experiments, the reward produced a significant facilitation of saccadic latency for prosaccades directed to the rewarded hemifield. In part 2, rewards were withdrawn, and participants made prosaccades to targets accompanied by a contralateral distractor (Experiment 1) or made antisaccades (Experiment 2). There were no hemifield-specific effects of reward on saccade latency for either the remote distractor effect or antisaccades, although reward was associated with an overall slowing of saccade latency in Experiment 1. These data indicate that operant conditioning of saccadic eye movements does not transfer to similar but untrained tasks. We conclude that rewarding specific spatial locations is unlikely to induce long-term, systemic changes to the human oculomotor system.
(This article belongs to the Special Issue Visual Control of Action)
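To illustrate how hemifield-specific reward facilitation can be quantified, here is a minimal sketch that scores facilitation as the difference in mean saccade latency between the unrewarded and rewarded hemifields, separately for the conditioning and transfer phases. The latencies and variable names are hypothetical assumptions, not the study's data.

```python
# A minimal sketch (hypothetical latencies, not the study's data):
# hemifield-specific reward facilitation scored as the difference in mean
# saccade latency between the unrewarded and rewarded hemifields, computed
# separately for the conditioning phase (part 1) and the transfer task (part 2).
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-trial saccade latencies (ms) per phase and hemifield.
part1_rewarded   = rng.normal(165, 25, size=60)  # conditioning: facilitation expected
part1_unrewarded = rng.normal(180, 25, size=60)
part2_rewarded   = rng.normal(190, 25, size=60)  # transfer: little or no facilitation
part2_unrewarded = rng.normal(191, 25, size=60)

for phase, rewarded, unrewarded in [
    ("part 1 (conditioning)", part1_rewarded, part1_unrewarded),
    ("part 2 (transfer)",     part2_rewarded, part2_unrewarded),
]:
    facilitation = unrewarded.mean() - rewarded.mean()
    print(f"{phase}: reward facilitation = {facilitation:.0f} ms")
```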

15 pages, 2641 KiB  
Article
Hands Ahead in Mind and Motion: Active Inference in Peripersonal Hand Space
by Johannes Lohmann, Anna Belardinelli and Martin V. Butz
Vision 2019, 3(2), 15; https://doi.org/10.3390/vision3020015 - 18 Apr 2019
Cited by 11 | Viewed by 3667
Abstract
According to theories of anticipatory behavior control, actions are initiated by predicting their sensory outcomes. From the perspective of event-predictive cognition and active inference, predictive processes activate currently desired events and event boundaries, as well as the expected sensorimotor mappings necessary to realize them, depending on the predicted uncertainties involved, before actual motor control unfolds. Accordingly, we asked whether peripersonal hand space is remapped in an uncertainty-anticipating manner while grasping and placing bottles in a virtual reality (VR) setup. To investigate this, we combined the crossmodal congruency paradigm with virtual object interactions in two experiments. As expected, an anticipatory crossmodal congruency effect (aCCE) at the future finger position on the bottle was detected. Moreover, a manipulation of the visuo-motor mapping of the participants’ virtual hand while approaching the bottle selectively reduced the aCCE at movement onset. Our results support theories of event-predictive, anticipatory behavior control and active inference, showing that expected uncertainties in movement control indeed influence anticipatory stimulus processing.
(This article belongs to the Special Issue Visual Control of Action)
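To illustrate the measure at the heart of this study, the following minimal sketch computes a crossmodal congruency effect (CCE) as the reaction-time cost of spatially incongruent visuo-tactile stimulation, compared between the future finger position and a control location. The reaction times and condition labels are hypothetical assumptions, not the study's data.

```python
# A minimal sketch (hypothetical data, not from the study): the crossmodal
# congruency effect (CCE) is the reaction-time cost of a visual distractor
# that is spatially incongruent with a tactile target on the hand. An
# anticipatory CCE (aCCE) shows up as a larger CCE at the future finger
# position on the object than at a control location, measured before the
# hand arrives.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-trial reaction times (ms) to the tactile target.
rt = {
    ("future finger position", "congruent"):   rng.normal(520, 40, 80),
    ("future finger position", "incongruent"): rng.normal(575, 40, 80),
    ("control location",       "congruent"):   rng.normal(530, 40, 80),
    ("control location",       "incongruent"): rng.normal(545, 40, 80),
}

for location in ("future finger position", "control location"):
    cce = rt[(location, "incongruent")].mean() - rt[(location, "congruent")].mean()
    print(f"CCE at {location}: {cce:.0f} ms")
```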
