    CAN VIRTUAL REALITY PROVIDE DIGITAL MAPS TO BLIND SAILORS? A CASE STUDY

    Mathieu Simonnet (1), R. Daniel Jacobson (2), Stephane Vieilledent (1), and Jacques Tisseau (3)

    (1) UEB-UBO, LISyC; Cerv - 28280 Plouzané, France. {mathieu.simonnet@orion-brest.com, stephane.vieilledent@univ-brest.fr}

    (2) Department of Geography, University of Calgary, 2500 University Dr. NW, Calgary, Canada T2N 1N4 {dan.jacobson@ucalgary.ca}

    (3) UEB-ENIB, LISyC ; Cerv - 28280 Plouzané, France { tisseau@enib.fr }

Abstract

    This paper presents “SeaTouch”, a virtual haptic and auditory interface to digital maritime charts that helps blind sailors prepare for ocean voyages and, ultimately, navigate autonomously while at sea. It has been shown that blind people mainly encode space relative to their body, whereas mastering space consists of coordinating body-centered and environmental reference points. Tactile maps are powerful tools for helping them encode spatial information, but only digital charts can be updated during an ocean voyage; very often the only alternative is conventional printed media. Virtual reality can present such information through auditory and haptic interfaces, and previous work has shown that virtual navigation facilitates the acquisition of spatial knowledge.

    Spatial representations are constructed from individuals' physical contact with their environment, and the use of Euclidean geometry seems to facilitate mental processing about space. Navigation, moreover, takes great advantage of matching ego- and allo-centered spatial frames of reference to move about and stay located in one's surroundings. Blindness does not imply a lack of comprehension of spatial concepts, but it does lead people to encounter difficulties in perceiving and updating information about the environment. Without access to the distant landmarks available to sighted people, blind people tend to encode spatial relations in an ego-centered spatial frame of reference. Conversely, tactile maps and appropriate exploration strategies allow them to build holistic, configural representations in an allo-centered spatial frame of reference. However, position updating during navigation remains particularly complicated without vision. Virtual reality techniques can provide a virtual environment in which to manage and explore one's surroundings, and haptic and auditory interfaces give blind people an immersive virtual navigation experience.

    In order to help blind sailors coordinate ego- and allo-centered spatial frames of reference, we conceived SeaTouch, haptic and auditory software adapted so that blind sailors can set up and simulate their itineraries before setting sail.

    In our first experimental condition, we compared the spatial representations built by six blind sailors during the exploration of a tactile map and of the virtual map of SeaTouch. Results show that the two conditions were equivalent.

    In our second experimental condition, we focused on the conditions that favour the transfer of spatial knowledge from a virtual to a real environment. To this end, blind sailors performed a virtual navigation in Northing mode, where the ship moves on the map, and in Heading mode, where the map shifts around the sailboat. No significant difference appeared, which suggests that the most important factor for blind sailors locating themselves in the real environment is the orientation of the map at the initial encoding time. However, we noticed that the subjects who got lost in the virtual environment in the Northing condition slightly improved their performances in the real environment. The analysis of the exploratory movements on the map is congruent with a previous model of the coordination of spatial frames of reference. Moreover, beyond the direct benefits of SeaTouch for the navigation of blind sailors, this study offers new insight into non-visual spatial cognition, more specifically the cognitively complex task of coordinating and integrating ego- and allo-centered spatial frames of reference.

    In summary, this research aims to measure whether a blind sailor can learn a maritime environment with a virtual map as well as with a tactile map. The results tend to confirm this and suggest pursuing investigations of non-visual virtual navigation. Here we present the initial results with one participant.

Introduction

Spatial frames of reference

    We know that “the main characteristic of spatial representations is that they involve the use of reference” (Millar, 1994, p. 11). In the ego-centered frame of reference, locations are represented with respect to the particular perspective of a subject: it is the first-person reference. On the contrary, in the allo-centered frame of reference, information is independent of the position and the orientation of the subject: it is the map reference.
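    To make the distinction concrete, the short sketch below relates the two frames in code: it converts an allocentric map bearing into an egocentric direction relative to an observer's heading. This is our illustration only, not SeaTouch code, and it assumes map coordinates with x pointing east and y pointing north.

    // Hypothetical sketch (not SeaTouch code): relating allocentric map
    // positions to egocentric directions for a given observer.
    public final class FrameConversion {

        /** Allocentric bearing (degrees clockwise from north) from point
         *  (x0, y0) to point (x1, y1); x points east, y points north. */
        static double allocentricBearing(double x0, double y0,
                                         double x1, double y1) {
            double deg = Math.toDegrees(Math.atan2(x1 - x0, y1 - y0));
            return (deg + 360.0) % 360.0;
        }

        /** Egocentric direction of a target in degrees to the right of the
         *  observer's heading (negative values mean "to the left"). */
        static double egocentricDirection(double observerHeading,
                                          double x0, double y0,
                                          double x1, double y1) {
            double relative = allocentricBearing(x0, y0, x1, y1) - observerHeading;
            return ((relative + 540.0) % 360.0) - 180.0; // normalize to (-180, 180]
        }

        public static void main(String[] args) {
            // Observer at the origin heading north-east (45 degrees),
            // target due east: the target lies 45 degrees to the right.
            System.out.println(egocentricDirection(45.0, 0, 0, 10, 0)); // 45.0
        }
    }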

    Mastering navigation requires coordinating these two spatial frames of reference. Matching the first-person point of view with the map representation leads to the building and use of cognitive maps (Thinus-Blanc, 1996), considered as a sort of cartographic mental field (Tolman, 1948).

Blindness reference frames

    The lack of sight tends to favour body-centered (egocentric) spatial frames of reference, because the sequential nature of manual exploration and pedestrian wayfinding does not provide blind people with the global and simultaneous information that vision does (Hatwell, 2000). How do blind people build efficient spatial representations? Over the previous century different theories tried to answer this question, and many controversies arose about the role of previous visual experience (see Ungar, 2000, for a review). Eventually, it seems that “lack of vision slows down ontogenic spatial development […] but does not prohibit it” (Kitchin and Jacobson, 1997). So we emphasize that the weak spatial performances sometimes observed in blind people do not stem from a lack of spatial reasoning; rather, they are the consequence of difficulties in accessing and updating spatial information (Klatzky, 2003). How, then, can we help blind people build an updated spatial cognitive map?

    Cognitive travel aids

    Trying to answer this question, we discover a sort of paradox: among the numerous digital maps connected to Global Positioning Systems (GPS), almost all cognitive travel aids rely on the visual modality. For example, the TomTom system can present information in an egocentered spatial frame of reference (Heading) or an allocentered one (Northing).

    Even though blind people are the most concerned by navigation difficulties (Golledge, 1993), only a few non-visual geographical information systems (GIS) are adapted to them. The first personal guidance system for blind individuals was developed in the late 1980s (Golledge et al., 1991). More recently, a system made up of two video cameras mounted in glasses and a matrix of taxels (tactile pixels) has provided blind people with a tactile surface directly presenting near-space information (Pissaloux et al., 2005). Even though this tool is based on egocentric information, experiments have shown that the possibility of touching multiple objects simultaneously also helps blindfolded subjects perceive object-to-object relations (Schinazi, 2005). Going further, virtual reality offers haptic and auditory interfaces that could provide blind people with a GIS for preparing itineraries and monitoring them.

    Virtual navigation

    In the last fifteen years, the virtual reality community has widely investigated the construction of spatial representations through virtual navigation. Several researchers have studied the influence of the user's point of view on the acquisition of spatial knowledge (Tlauka and Wilson, 1996; Darken and Banker, 1998; Christou and Bülthoff, 2000). They globally conclude that transfers between virtual and real environments are more efficient when virtual navigation involves multiple orientations. These results are in accordance with others showing the negative effect of misalignment between the map and the body during virtual navigation (May et al., 1995). However, other studies find that an additional bird's-eye view (allocentric) and active decision-making are required to enhance spatial knowledge during virtual navigation (Witmer et al., 2002; Farrell et al., 2003). Finally, Peruch and Gaunet (1998) suggest that virtual reality could use modalities other than vision; in other words, haptic and auditory environments.

    Few works take into account the potential of virtual reality to help blind people acquire spatial knowledge. Early work by Jacobson (1998) illustrated the possibility of such techniques. Using a force-feedback device (the Phantom haptic device) and surrounding sounds, Magnusson and Rassmus-Gröhn (2004) showed that blind people can learn a route in a haptic and auditory virtual environment and reproduce it in the real world. In that experiment, subjects navigated in an egocentered frame of reference and used the Phantom device as a white cane.

    Later, Lahav and Mioduser (2008) asked blind subjects to learn the configuration of a classroom in a real or in a virtual environment. Performances were assessed by pointing directions from one object to the others. Results revealed that the virtual exploration was more efficient than the real one. The authors suggest that one possible explanation for this finding is that the haptic interface allowed the subjects to explore the environment more quickly and to reconstruct a spatial cognitive map more globally.

    Even if these results are encouraging, to our knowledge no study has compared the efficiency of virtual environments and tactile maps for building non-visual spatial representations. Our aim is to validate a haptic and auditory virtual map before investigating non-visual virtual navigation.

The case of the blind sailors

    Rowell and Ungar (2003) show that blind people do not regularly use tactile maps because such maps are rare and incomplete. One important underlying reason is the complexity of cartographic design, combined with production and distribution difficulties. Digital maps and virtual reality could potentially provide an answer.

    In Brest (France), several blind sailors consult maritime charts weekly. Their case is especially interesting because they habitually and efficiently use maps in a natural environment, so they form a convenient control group for assessing the potential of a new kind of map. In this study, we compare the precision of the spatial cognitive maps elaborated by a blind sailor after exploring tactile or virtual maps. The virtual environments are provided by SeaTouch, haptic and auditory software developed for blind sailors' navigation.

    Experiment

    Subject

    The twenty-nine-year-old subject involved in this experiment lost his sight at eighteen. His level of education is the baccalaureate. This blind sailor is more familiar with maritime maps than with computers.

    Material

    The tactile and SeaTouch maps, 30 cm by 40 cm, contain a small area of land, a large area of sea and six salient objects. On the tactile map, the sea is represented in plastic and the land in sand mixed with paint. The salient objects are six stickers of different geometric shapes (e.g. triangle, rectangle, circle), so different textures can be perceived by touch (see Figure 1).

    Figure 1: Tactile map.

    Presentation format

    The haptic map comes from SeaTouch, a JAVA application developed in our laboratory for the navigation training of blind sailors. This software uses the classic OpenHaptics Academic Edition toolkit and the Haptik library 1.0 final to interface with the Phantom Omni device. Contacts with geographical objects are rendered from a JAVA3D representation of the map and environment. Like a computer screen, this map stands in the vertical plane, so that north is at the top and south at the bottom of the workspace. The rendering of the sea is soft, and sounds of waves are played when the subject touches it. The rendering of the land is rough and three centimeters higher than the surface of the sea; a sound of land birds is played on contact with the land. Between the land and the sea, the coastline, rendered as a vertical cliff, can be felt and followed with the sounds of sea birds. The salient objects are materialized by a spring effect (an attractor field) when the haptic cursor comes into contact with them; a synthetic voice then announces the name of each object (e.g. rock, penguin or buoy) (see Figure 2). The geometric shapes are the same, and located in the same places, as those in Figure 1.

    Figure 2: SeaTouch map (top) and the Phantom haptic device (bottom). The crosses represent the salient objects that are vocally announced; they are spatially equivalent to the salient reference points in Figure 1. The blue depicts the ocean and the sand colour the land.
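    The sketch below illustrates the contact-to-feedback mapping just described. It is a hypothetical reconstruction for illustration only: SeaTouch's actual OpenHaptics/Haptik/JAVA3D code is not shown, and every type and method name here is invented.

    import java.util.Map;

    // Hypothetical sketch of the feedback dispatch described above; not
    // actual SeaTouch code. Zone names follow the map description.
    enum Zone { SEA, LAND, COASTLINE, SALIENT_OBJECT }

    final class ContactFeedback {
        private static final Map<Zone, String> SOUNDS = Map.of(
                Zone.SEA, "waves.wav",          // soft surface
                Zone.LAND, "land_birds.wav",    // rough, raised 3 cm
                Zone.COASTLINE, "sea_birds.wav" // vertical cliff
        );

        /** Called whenever the haptic cursor touches a zone of the map. */
        void onContact(Zone zone, String objectName) {
            if (zone == Zone.SALIENT_OBJECT) {
                applySpringEffect();   // attractor field pulls the cursor in
                speak(objectName);     // e.g. "rock", "penguin" or "buoy"
            } else {
                play(SOUNDS.get(zone));
            }
        }

        private void play(String file) { /* audio back end goes here */ }
        private void speak(String text) { /* text-to-speech back end */ }
        private void applySpringEffect() { /* force-feedback attractor */ }
    }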

Tasks

    During the exploration phase, the subject has to learn the layout of the six salient objects. Whereas he explores the tactile map using both hands, he explores the haptic map with the Phantom device held in one hand only. The exploration phase stops when the subject states that he is confident about the layout of the objects.

    At the end of the exploration phase, the subject performs a pointing task from his own orientation with a tactile protractor. Without consulting the map, he answers 18 questions of the form: "From the penguin, could you point to the rock?" Here the subject faces the north direction of the map, so in this aligned condition the ego- and allo-centered spatial frames of reference are aligned. Our goal is then to access the situated cognitive map of the subject; in other words, we aim to assess his non-visual spatial representation when combining ego- and allo-centered frames of reference. Thus we ask the subject to estimate directions by answering 18 questions of the form: "You are positioned at the penguin and facing the rock; where is the buoy?" In this misaligned condition, the imagined orientation of the subject is not aligned with the orientation he had while exploring the map, so he is forced to deduce this new orientation from inter-object relations before answering with the tactile protractor. Consequently, the subject merges ego- and allo-centered spatial frames of reference. For example, the bearing from the penguin to the rock is 45 cardinal degrees (allocentric). The subject imagines he is at the penguin facing the rock and estimates the buoy at 36 degrees to the right (egocentric). Consequently, we draw a line oriented at 81 cardinal degrees from the penguin toward the buoy.
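    In code, recovering the cardinal bearing of such a response is a single modular addition. The sketch below is ours, for illustration only, and simply replays the penguin/rock/buoy example.

    // Hypothetical sketch: converting an egocentric pointing response back
    // into a cardinal bearing, as in the penguin/rock/buoy example above.
    public final class PointingResponse {

        /** facingBearing: cardinal bearing the subject imagines facing
         *  (penguin -> rock, 45 degrees); degreesRight: the egocentric
         *  response (36 degrees to the right). Returns the cardinal
         *  bearing of the answer line (penguin -> buoy). */
        static double toCardinalBearing(double facingBearing, double degreesRight) {
            return ((facingBearing + degreesRight) % 360.0 + 360.0) % 360.0;
        }

        public static void main(String[] args) {
            System.out.println(toCardinalBearing(45.0, 36.0)); // prints 81.0
        }
    }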

    Data reduction

    Firstly, we measure the angular errors of the responses. Secondly, we use the projective convergence technique to obtain easily scoreable physical representations of cognitive maps. This method was originally adapted by Hardwick et al. (1976) from the more familiar triangulation method used in navigation to determine the position of a ship. Typically, the subject estimates the directions to a location from three places; the resulting vectors can be drawn, and where the lines cross, a triangle of error can be outlined (Kitchin and Jacobson, 1997). Here, the triangle areas allow us to assess spatial performances.
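    A minimal sketch of this scoring, under our own assumptions (bearings measured clockwise from north, x east and y north, non-parallel estimate lines), might look as follows; it is an illustration, not the authors' analysis code.

    // Hypothetical sketch of projective convergence scoring: three pointing
    // estimates give three bearing lines; their pairwise intersections form
    // the error triangle, whose area scores the cognitive map.
    public final class ProjectiveConvergence {

        /** Intersection of two lines, each given by a start point and a
         *  cardinal bearing in degrees (assumes the lines are not parallel). */
        static double[] intersect(double x1, double y1, double bearing1,
                                  double x2, double y2, double bearing2) {
            // Bearing 0 = north (+y), 90 = east (+x).
            double dx1 = Math.sin(Math.toRadians(bearing1));
            double dy1 = Math.cos(Math.toRadians(bearing1));
            double dx2 = Math.sin(Math.toRadians(bearing2));
            double dy2 = Math.cos(Math.toRadians(bearing2));
            double t = ((x2 - x1) * dy2 - (y2 - y1) * dx2)
                     / (dx1 * dy2 - dy1 * dx2);
            return new double[]{x1 + t * dx1, y1 + t * dy1};
        }

        /** Area of the error triangle from its three corner points
         *  (shoelace formula). */
        static double triangleArea(double[] p, double[] q, double[] r) {
            return Math.abs((q[0] - p[0]) * (r[1] - p[1])
                          - (r[0] - p[0]) * (q[1] - p[1])) / 2.0;
        }
    }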

    Results

    Because the values do not follow a normal distribution, we use the non-parametric Wilcoxon test to compare the performances obtained after the exploration of the SeaTouch and tactile maps. Our first result is that, in the aligned condition, the subject's angular errors were significantly smaller (p=0.017) after the SeaTouch map exploration than after the tactile map exploration. This result is confirmed by the areas of the error triangles (p=0.046) obtained by the projective convergence technique.
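    For illustration, the core of a Wilcoxon signed-rank comparison over paired errors can be sketched as below. This is a toy version under our own simplifications (no tie handling, no p-value computation); the reported p-values would come from a standard statistics package.

    // Hypothetical sketch of the Wilcoxon signed-rank statistic W+ for
    // paired errors (e.g. SeaTouch vs. tactile map on the same questions).
    import java.util.ArrayList;
    import java.util.List;

    public final class WilcoxonSignedRank {

        /** Sum of the ranks of the positive differences a[i] - b[i];
         *  zero differences are dropped and ties are not averaged here. */
        static double wPlus(double[] a, double[] b) {
            List<double[]> diffs = new ArrayList<>(); // {|d|, sign(d)}
            for (int i = 0; i < a.length; i++) {
                double d = a[i] - b[i];
                if (d != 0.0) diffs.add(new double[]{Math.abs(d), Math.signum(d)});
            }
            diffs.sort((u, v) -> Double.compare(u[0], v[0]));
            double w = 0.0;
            for (int i = 0; i < diffs.size(); i++) {
                if (diffs.get(i)[1] > 0) w += i + 1; // rank = position + 1
            }
            return w;
        }
    }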

    Figure 3: Error triangles after SeaTouch (left) and tactile map (right) explorations in the misaligned condition.

    However, our second result shows no significant difference between the two maps in the misaligned condition, either for the angular errors (p=0.161) or for the areas of the error triangles (p=0.463) (see Figure 3).

Discussion

    Even though we consider the results of this single subject only, it is surprising to discover that the exploration of the SeaTouch map led to a better spatial representation than the exploration of the tactile map in the aligned condition. This suggests that haptic and auditory maps could be efficient for encoding a geographical layout when ego- and allo-centered spatial frames of reference are aligned. However, this result is not found in the misaligned condition. Does that mean that haptic maps do not favour the coordination of ego- and allo-centered spatial frames of reference when they are not aligned?

    The main difference between the tactile and virtual maps is that the first is explored with ten fingers, whereas the second offers only one sort of “super finger”. This implies more manual movement on the SeaTouch map than on the tactile one in order to learn the layout. A previous study has shown that blindfolded subjects use a mode of coding based on exploratory movements to infer a point in space (Gentaz and Gaunet, 2006). This argument is reinforced by the fact that the virtual exploration time (8 minutes) was twice as long as the tactile one (4 minutes). Moreover, during the SeaTouch map exploration the subject said several times that he had to verify where the salient objects were; he then spent time rediscovering them and seemed to refine his encoding. By contrast, during the tactile map exploration, the subject explored the whole map with his two hands and simply said “OK”. Consequently, we suggest that the sequential character of the SeaTouch map forces the subject to encode his movements more precisely. Since movements are mainly encoded in an egocentered spatial frame of reference (Millar, 1994), this could explain the better performances obtained after the SeaTouch map exploration in the aligned situation only.

    Another difference comes from the verticality of the plane of the SeaTouch map. Hatwell et al. (2000) show that blind people take great advantage of the vertical reference. Here, the gravity axis and the north-south direction coincide. This could provide the subject with a common invariant between proprioceptive sensations of gravity and the north-axis reference of the map. Moreover, the exploration trajectories show that many back-and-forth movements take place in the vertical plane.

    However, the results do not show any improvement in the coordination of ego- and allo-centered spatial frames of reference after the SeaTouch map exploration. This would suggest that the subject remains as dependent on the initial encoding orientation after having explored a vertical map as after having explored a horizontal one (Mou et al., 2004). However, we would need to run this experiment with many more participants to support this conclusion.

    Perspectives

    Beyond reproducing this experiment with other subjects, we envisage setting up another experiment in which blind sailors could navigate in a virtual maritime environment. In order to learn more about the coordination of ego- and allo-centered spatial frames of reference, we plan to compare the influence of navigation in Northing mode (see Figure 6) and in Heading mode (see Figure 7).

    Figure 6: The Northing mode of SeaTouch: while the boat changes direction, the boat moves on the map but the orientation of the map remains stable.

    Figure 7: The Heading mode of SeaTouch: while the boat changes direction, its position and orientation in the workspace remain stable but the map orientation moves.
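    The two modes amount to two different workspace transforms. The sketch below is our reconstruction under stated assumptions (world coordinates with x east and y north, headings clockwise from north); it is not SeaTouch's rendering code.

    // Hypothetical sketch of the two display modes. In Northing mode the
    // boat marker moves over a north-up map; in Heading mode the boat stays
    // fixed at the workspace centre and the map turns around it.
    public final class DisplayModes {

        /** Northing: world coordinates map directly to the workspace. */
        static double[] northing(double worldX, double worldY) {
            return new double[]{worldX, worldY};
        }

        /** Heading: express a world point in the boat's frame, so the boat
         *  sits at the origin with its bow pointing "up" in the workspace. */
        static double[] heading(double worldX, double worldY,
                                double boatX, double boatY,
                                double boatHeadingDeg) {
            double rad = Math.toRadians(boatHeadingDeg);
            double dx = worldX - boatX, dy = worldY - boatY;
            // Rotate the map so the boat's heading becomes the +y axis.
            double x = dx * Math.cos(rad) - dy * Math.sin(rad);
            double y = dx * Math.sin(rad) + dy * Math.cos(rad);
            return new double[]{x, y};
        }
    }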

    In this respect, we would like to investigate the consequences of multiple virtual orientations on blind sailors' capacity to match the map with their current orientation. This would provide critically important information for wayfinding while at sea, as well as insights into the cognitively complex task of matching misaligned ego- and allo-centric spatial frames of reference.

References

    Christou, C. & Bülthoff, H. (2000) Perception, representation and recognition: A holistic view of recognition. Spatial Vision, 13, 265-275.

    Darken, R. and Banker, W. (1998) Navigating in natural environments: A virtual environment training transfer study. VRAIS98: Virtual Reality Annual Symposium, 98, 12-19.

    Farrell, M.; Arnold, P.; Pettifer, S.; Adams, J.; Graham, T. & Mac Manamon, M. (2003) Transfer of route learning from virtual to real environments. Journal of Experimental Psychology: Applied, 9, 219-227.

    Gentaz, E. & Gaunet, F. (2006) L'inférence haptique d'une localisation spatiale chez les adultes et les enfants : étude de l'effet du trajet et du délai dans une tâche de complètement de triangle. L'année psychologique, 106, 167-190.

    Golledge, R. (1993) Geography and the Disabled: A Survey with Special Reference to Vision Impaired and Blind Populations. Transactions of the Institute of British Geographers, 18, 63-85.

    Golledge, R. G., Loomis, J. M., Klatzky, R. L., Flury, A., & Yang, X. L. (1991). Designing a personal guidance system to aid navigation without sight: Progress on the GIS component. International Journal of Geographical Information Systems, 5, 373-396.

    Hardwick, D.; McIntyre, C. & Pick Jr, H. (1976) The Content and Manipulation of Cognitive Maps in Children and Adults. Monographs of the Society for Research in Child Development, 41, 1-55.

    Hatwell, Y.; Streri, A. and Gentaz, E. (2003) Touching for knowing: Cognitive psychology of haptic manual perception. John Benjamins Publishing.

    Jacobson, R. D. (1998) Navigating maps with little or no sight: An audio-tactile approach. Proceedings of the Workshop on Content Visualization and Intermedia Representations (CVIR), Montreal.

    Kitchin, R. and Jacobson, R. (1997) Techniques to Collect and Analyze the Cognitive Map Knowledge of Persons with Visual Impairment or Blindness: Issues of Validity. Journal of Visual Impairment and Blindness, 91, 360-376.

    Klatzky, R.; Lippa, Y.; Loomis, J. & Golledge, R. (2003) Encoding, learning, and spatial updating of multiple object locations specified by 3-D sound, spatial language, and vision. Experimental Brain Research, 149, 48-61.

    Lahav, O. and Mioduser, D. (2008) Haptic-feedback support for cognitive mapping of unknown spaces by people who are blind. International Journal of Human-Computer Studies, 66, 23-35.

    Magnusson, C. and Rassmus-Gröhn, K. (2003) Non-visual Zoom and Scrolling Operations in a Virtual Haptic Environment. EuroHaptics 2003.

    Millar, S. (1994) Understanding and Representing Space: Theory and Evidence from Studies with Blind and Sighted Children. Oxford: Oxford University Press.

    Mou, W., McNamara, T., Valiquette, C. and Rump, B. (2004) Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 142-157.

    Peruch, P. & Gaunet, F. (1998) Virtual environments as a promising tool for investigating human spatial cognition. Cahiers de psychologie cognitive, 17, 881-89.

    Pissaloux, E.; Maingreaud, F.; Velazquez, R. & Hafez (2005) Space cognitive map as a tool for navigation for the visually impaired. 1st International Symposium on Brain, Vision and Artificial Intelligence, Naples, Italy.

    Rowell, J. & Ungar, S. (2003) The world of touch: an international survey of tactile maps. Part 1: production. British Journal of Visual Impairment, 21, 98-104.

    Schinazi, V. (2005) Spatial representation and low vision: Two studies on the content, accuracy and utility of mental representations. International Congress Series, 1282, 1063-1067.

    Thinus-Blanc, C. (1996) Animal Spatial Cognition: Behavioural and Brain Approach. World Scientific.

    Tlauka, M.; Brolese, A.; Pomeroy, D. and Hobbs, W. (2005) Gender differences in spatial knowledge acquired through simulated exploration of a virtual shopping centre. Journal of Environmental Psychology, 25, 111-118.

    Tolman, E. (1948) Cognitive maps in rats and men. Psychological Review, 55, 189-209.

    Ungar, S. (2000) Cognitive mapping without visual experience. In Kitchin, R. and Freundschuh, S. (eds) Cognitive Mapping: Past, Present and Future. London: Routledge, 221-248.

    Witmer, B.; Sadowski, W. & Finkelstein, N. (2002) VE-based training strategies for acquiring survey knowledge. Presence: Teleoperators and Virtual Environments, 11, 1-18.
