
The cognitive spatial maps of a blind sailor using the SeaTouch


    CAN VIRTUAL REALITY PROVIDE DIGITAL MAPS TO BLIND SAILORS? A CASE STUDY

    Mathieu Simonnet (1), R. Daniel Jacobson (2), Stephane Vieilledent (1), and Jacques Tisseau (3)

    (1) UEB-UBO, LISyC; Cerv - 28280 Plouzané, France. {mathieu.simonnet@orion-brest.com, stephane.vieilledent@univ-brest.fr}

    (2) Department of Geography, University of Calgary, 2500 University Dr. NW, Calgary, Canada T2N 1N4 {dan.jacobson@ucalgary.ca}

    (3) UEB-ENIB, LISyC; Cerv - 28280 Plouzané, France {tisseau@enib.fr}

Abstract

    This paper presents “SeaTouch”, a virtual haptic and auditory interface to digital maritime charts designed to help blind sailors prepare ocean voyages and, ultimately, navigate autonomously while at sea. It has been shown that blind people mainly encode space relative to their body, whereas mastering space consists of coordinating body-centered and environmental reference points. Tactile maps are powerful tools to help them encode spatial information, but only digital charts can be updated during an ocean voyage, and very often the only alternative available is conventional printed media. Virtual reality can present such information through auditory and haptic interfaces, and previous work has shown that virtual navigation facilitates the acquisition of spatial knowledge.

    Although spatial representations are constructed through individuals' physical contact with their environment, the use of Euclidean geometry seems to facilitate mental processing about space. However, navigation relies heavily on matching ego-centered and allo-centered spatial frames of reference to move around and locate oneself in the surroundings. Blindness does not imply a lack of comprehension of spatial concepts, but it does lead people to encounter difficulties in perceiving and updating information about the environment. Without access to the distant landmarks available to sighted people, blind people tend to encode spatial relations in an ego-centered spatial frame of reference. In contrast, tactile maps and appropriate exploration strategies allow them to build holistic, configural representations in an allo-centered spatial frame of reference. However, position updating during navigation remains particularly complicated without vision. Virtual reality techniques can provide blind people with a virtual environment in which to manage and explore their surroundings, and haptic and auditory interfaces can give them an immersive virtual navigation experience.

    In order to help blind sailors coordinate ego-centered and allo-centered spatial frames of reference, we designed SeaTouch, haptic and auditory software adapted so that blind sailors can set up and simulate their itineraries before sailing.

    In our first experimental condition, we compared the spatial representations built by six blind sailors during the exploration of a tactile map and of the virtual map of SeaTouch. Results show that the two conditions were equivalent.

    In our second experimental condition, we focused on the conditions that favour the transfer of spatial knowledge from a virtual to a real environment. In this respect, blind sailors performed a virtual navigation in Northing mode, where the ship moves across the map, and in Heading mode, where the map shifts around the sailboat. No significant difference appeared, which suggests that the most important factor for the blind sailors to locate themselves in the real environment is the orientation of the map during the initial encoding time. However, we noticed that the subjects who got lost in the virtual environment in the Northing condition slightly improved their performance in the real environment. The analysis of the exploratory movements on the map is congruent with a previous model of the coordination of spatial frames of reference. Moreover, beyond the direct benefits of SeaTouch for the navigation of blind sailors, this study offers new insight into non-visual spatial cognition, more specifically the cognitively complex task of coordinating and integrating ego-centered and allo-centered spatial frames of reference.

    In summary, the research aims to measure whether a blind sailor can learn a maritime environment with a virtual map as well as with a tactile map. The results tend to confirm this and suggest pursuing investigations of non-visual virtual navigation. Here we present initial results from one participant.

Introduction

    Spatial frames of reference

    We know that “the main characteristic of spatial representations is that they involve the use of reference (p. 11)” (Millar, 1994). In the egocentered frame of reference, locations are represented with respect to the particular perspective of a subject: it is the first-person reference. In contrast, in the allocentered frame of reference, information is independent of the position and the orientation of the subject: it is the map reference.

Mastering navigation requires coordinating these two spatial frames of reference. Matching the first-person point of view and the map representation leads to the building and use of cognitive maps (Thinus-Blanc, 1996), considered as a sort of cartographic mental field (Tolman, 1948).

    Blindness and reference frames

    The lack of sight tends to lead to body-centered (egocentric) spatial frames of reference, because the sequential properties of manual exploration and pedestrian wayfinding do not provide blind people with the global and simultaneous information that vision does (Hatwell, 2000). How do blind people build efficient spatial representations? During the previous century, different theories tried to answer this question, and many controversies arose about the role of previous visual experience (see Ungar, 2000, for a review). Eventually, it seems that “lack of vision slows down ontogenic spatial development […] but does not prohibit it” (Kitchin and Jacobson, 1997). So we emphasize that the weak spatial performances sometimes observed in blind people do not stem from a lack of spatial reasoning; rather, they are the consequence of difficulties in accessing and updating spatial information (Klatzky, 2003). How could we help blind people to build updated spatial cognitive maps?

    Cognitive travel aids

    In trying to answer this question, we discover a sort of paradox: nowadays, among the numerous digital maps connected to Global Positioning Systems (GPS), almost all cognitive travel aids rely on the visual modality. For example, the TomTom system enables the presentation of information in an egocentered spatial frame of reference (Heading mode) or in an allocentered one (Northing mode).
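
    As an illustration of this distinction, the following minimal Java sketch (hypothetical names and conventions, not taken from TomTom or from SeaTouch) shows how the position of a landmark could be prepared for display in Northing mode, where the chart stays fixed and the vehicle icon moves across it, and in Heading mode, where the chart is translated and rotated around the vehicle so that the vehicle's course always points up.

/**
 * Illustrative sketch of the two presentation modes: Northing (north-up,
 * allocentered) and Heading (head-up, egocentered). Names are hypothetical.
 */
public final class ChartFrames {

    /** Northing mode: the chart is fixed, so a landmark keeps its chart
     *  coordinates and only the vehicle icon moves across the display. */
    static double[] northingView(double eastM, double northM) {
        return new double[] {eastM, northM};
    }

    /** Heading mode: the chart is translated and rotated around the vehicle so
     *  that its bow always points "up" on the display. headingRad is the
     *  course measured clockwise from north, in radians. */
    static double[] headingView(double eastM, double northM,
                                double vehicleEastM, double vehicleNorthM,
                                double headingRad) {
        double dE = eastM - vehicleEastM;    // translate: vehicle becomes the origin
        double dN = northM - vehicleNorthM;
        double sin = Math.sin(headingRad);
        double cos = Math.cos(headingRad);
        double starboard = dE * cos - dN * sin;  // positive = right of the vehicle
        double ahead     = dE * sin + dN * cos;  // positive = in front of the vehicle
        return new double[] {starboard, ahead};
    }

    public static void main(String[] args) {
        // A buoy 100 m east of the vehicle, vehicle heading due east (90 degrees):
        double[] headUp = headingView(100, 0, 0, 0, Math.toRadians(90));
        // In Heading mode the buoy appears dead ahead: starboard ~ 0 m, ahead ~ 100 m.
        System.out.printf("starboard=%.1f m, ahead=%.1f m%n", headUp[0], headUp[1]);
    }
}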

    Even though blind people are those most affected by navigation difficulties (Golledge, 1993), only a few non-visual geographical information systems (GIS) are adapted to them. The first personal guidance system for blind individuals was developed in the late 1980s (Golledge et al., 1991). Recently, a system made up of two video cameras mounted in glasses and a matrix of taxels (tactile pixels) has provided blind people with a tactile surface directly presenting near-space information (Pissaloux et al., 2005). Even though this tool is based on egocentric information, experiments have shown that the possibility of touching multiple objects simultaneously also helps blindfolded subjects to perceive object-to-object relations (Schinazi, 2005). Going further, virtual reality suggests using haptic and auditory interfaces to provide blind people with GIS that would allow them to prepare itineraries and monitor them.

    Virtual navigation

    In the last fifteen years, the virtual reality community has widely investigated the construction of spatial representations through virtual navigation. Different researchers have studied the influence of users' points of view on the acquisition of spatial knowledge (Tlauka and Wilson, 1996; Darken and Banker, 1998; Christou and Bülthoff, 2000). They globally conclude that transfers between virtual and real environments are more efficient when virtual navigation involves multiple orientations. These results are in accordance with others showing the negative effect of misalignment between the map and the body during virtual navigation (May et al., 1995). However, other studies find that an additional bird's-eye view (allocentric) and active decision-making are required to enhance spatial knowledge during virtual navigation (Witmer et al., 2002; Farrell et al., 2003). Finally, Peruch and Gaunet (1998) suggest that virtual reality could use modalities other than vision, in other words haptic and auditory environments.

    Few works have taken into account the potential of virtual reality to help blind people acquire spatial knowledge. Early work by Jacobson (1998) illustrated the possibility of such techniques. Using a force-feedback device (the Phantom haptic device) and surrounding sounds, Magnusson and Rasmus-Gröhn (2004) showed that blind people can learn a route in a haptic and auditory virtual environment and reproduce it in the real world. In this experiment, subjects navigated in an egocentered frame of reference and used the Phantom device as a white cane.

    Later, Lahav and Mioduser (2008) asked blind subjects to learn the configuration of a classroom in a real or in a virtual environment. Performance was assessed by pointing directions from one object to the others. Results revealed that virtual exploration was more efficient than real exploration. The authors suggest that one possible explanation for this finding is that the haptic interface lets subjects explore the environment more quickly and reconstruct a spatial cognitive map more globally.
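
    To give a concrete sense of how such an object-to-object pointing task can be scored, here is a small Java sketch; the names and conventions are ours for illustration, not Lahav and Mioduser's procedure. It computes the true bearing from one object to another and the angular error of a pointed direction.

/** Hypothetical scoring sketch for an object-to-object pointing task. */
public final class PointingScore {

    /** True bearing from object A to object B, in degrees clockwise from north. */
    static double bearingDeg(double aEast, double aNorth, double bEast, double bNorth) {
        double deg = Math.toDegrees(Math.atan2(bEast - aEast, bNorth - aNorth));
        return (deg + 360.0) % 360.0;          // normalize to [0, 360)
    }

    /** Unsigned angular error between a pointed bearing and the true bearing,
     *  folded into [0, 180] degrees. */
    static double angularErrorDeg(double pointedDeg, double trueDeg) {
        double diff = Math.abs(pointedDeg - trueDeg) % 360.0;
        return diff > 180.0 ? 360.0 - diff : diff;
    }

    public static void main(String[] args) {
        // Object B is 30 m east and 40 m north of object A: true bearing ~ 36.9 degrees.
        double trueBearing = bearingDeg(0, 0, 30, 40);
        // A subject points at 60 degrees: the error is ~ 23.1 degrees.
        System.out.printf("true=%.1f deg, error=%.1f deg%n",
                          trueBearing, angularErrorDeg(60, trueBearing));
    }
}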

    Even if these results are encouraging, to our knowledge no study has compared the efficiency of virtual environments and tactile maps for building non-visual spatial representations. Our aim is to validate a haptic and auditory virtual map before investigating non-visual virtual navigation.

    The case of blind sailors

    Rowell and Ungar (2003) show that blind people do not regularly use tactile maps because they are rare and incomplete. One important underlying reason for this is the complexities of cartographic design, combined with production and distribution difficulties. Digital maps and virtual reality could potentially give an answer.

    In Brest (France), several blind sailors consult maritime charts weekly. Their case is particularly interesting because they are in the efficient habit of using maps in a natural environment, so they form a convenient control group for assessing the potential of a new kind of map. In this study, we compare the precision of the spatial cognitive maps elaborated by a blind sailor after exploring tactile or virtual maps. The virtual environments are provided by SeaTouch, haptic and auditory software developed for blind sailors' navigation.

Experiment

    Subject

    The twenty-nine-year-old subject involved in this experiment lost his vision at eighteen. His level of education is the baccalaureate. This blind sailor is more familiar with maritime maps than with computers.

    Material

    The tactile and SeaTouch maps, 30 cm by 40 cm, contain a small portion of land, a large portion of sea, and 6 salient objects. On the tactile map, the sea is represented in plastic and the land in sand mixed with paint. The salient objects are 6 stickers with different geometric shapes (e.g., triangle, rectangle, circle, ...), so different textures can be perceived by touch (see Figure 1).

    Figure 1: Tactile map.

    Presentation format

The haptic map comes from SeaTouch, a Java application developed in our laboratory for