
    Published in Proceedings of the International Workshop on New Technologies in Collaborative Learning, Awaji-Yumebutai, Japan.

    Modeling the Process of Collaborative Learning

    Amy Soller and Alan Lesgold

    Intelligent Systems Program and School of Education

    Learning Research and Development Center

    University of Pittsburgh

    3939 O'Hara Street, Pittsburgh, PA 15260-5159

    soller@pitt.edu, al@pitt.edu

    Abstract: Supporting group learning activities requires an understanding of the process of collaborative learning. This process is complex, coupling task-based and social elements. We present a view of this process from multiple perspectives, and explain the need for multiple angles of analysis. Enabling a computer supported collaborative learning system to understand and support the process of collaborative learning requires a fine-grained sequential analysis of the group activity and conversation from each angle. The selection of a computational strategy to perform the analysis should be driven by the chosen perspective and the desired goal: to better understand the interaction, or to provide advice or support to the students. Examples of five different computational approaches for modeling collaborative learning activities are discussed: Finite State Machines, Rule Learners, Decision Trees, Plan Recognition, and Hidden Markov Models. We illustrate the Hidden Markov Modeling approach in detail, showing that it performs significantly better than statistical analysis in recognizing the knowledge sharer and the knowledge recipients when new knowledge is shared during learning activities.

    Keywords: Computer Supported Collaborative Learning (CSCL), Interaction Analysis, Dialog Analysis, Knowledge Sharing, Hidden Markov Models

1 Introduction

    Over the past 20 years, computer-based training software has evolved to realize the benefits of adapting instruction to meet the needs of individual students. Yet, adapting computer-supported collaborative learning software to meet the needs of learning groups continues to be a challenge. Just as supporting individual learning requires an understanding of individual thought processes, supporting group learning requires an understanding of the process of collaborative learning. In this paper, we explain why it has been so difficult to understand the process of collaborative learning, discuss an array of strategies for analyzing this process, and provide an example of applying one such strategy, Hidden Markov Models.

1.1 Why Understanding Collaborative Learning Processes is Difficult

    In general, a student's understanding of a concept is reflected in his actions, and his explanations of these actions. In a one-on-one tutoring environment, this information is available and, in most cases, straightforward to analyze. The system is able to watch the student solve the problem, perhaps ask pointed questions to evaluate the student's understanding of key concepts, and once in a while, interrupt him if remediation is necessary.

    Evaluating the learning of a group of students solving the same problem is a very different ball game. If one student solves the problem successfully while explaining his actions, and his teammates acknowledge and agree with his actions, to what degree should we assume his teammates understand how to solve the problem themselves? If a student is continually telling his teammates what to do, and his teammates are simply following his instructions without questioning them, who should get credit for solving the problem?

    Collaborative interactions are complex. Whereas the key to understanding and supporting computer-supported individual learning lies in evaluating the student's actions, the key to understanding computer-supported collaborative learning lies in understanding the rich interaction between individuals (Dillenbourg, 1999). These interaction patterns contain information about the students' roles, understanding of the subject matter, engagement, degree of shared understanding, and ability to follow and contribute to the development of ideas and solutions. A collaborative learning environment that can analyze sequences of learning interaction may be able to determine, for example, when a student is falling behind in the group, and why.

    Analyzing sequences of collaborative learning interaction, however, is not without its own challenges. The interaction must be transcribed and coded, and patterns indicating effective learning behavior must be identified. The popularity and acceptance of on-line text chat on the internet has eliminated the many hours that researchers spent in the past on transcription before they could begin analysis. Various schemes for coding dialog exist, and selecting one that fits the bill can be daunting. Many researchers have taken to developing their own schemes to meet the needs of their project, only to find out that developing a dialog coding scheme is its own research project! The third challenge, identifying patterns of coded interaction indicative of effective group learning, remains an impressively difficult area of research. This is due to the many factors involved in determining whether or not a group is an effective one, and the undefined process of mapping a series of low level actions and conversational exchanges to a set of high-level group interaction behaviors.

    Performing team tasks well means not only having the skills to execute the task, but also collaborating well with teammates. Collaborating well means, among other things, asking questions to gain a better understanding of key concepts, sharing and explaining ideas, and elaborating and justifying opinions. When group members' combined skills suffice to complete the learning task, effective group work may result in greater overall achievement than individual learning (Doise, Mugny, and Perret-Clermont, 1975; Heller, Keith, and Anderson, 1992; Joiner, 1995). Students learning in effective teams benefit through both enhanced learning of the task, and improvement in the social interaction skills they need throughout their lives. Soller, Goodman, Linton, and Gaimari (1998) describe a comprehensive model of collaborative learning that compiles research ideas from educational psychology, computer-supported collaborative learning, and small group dynamics. The model describes potential indicators of effective collaborative learning teams, and proposes strategies for promoting effective peer interaction in an intelligent collaborative learning environment. These strategies (such as assigning roles to students, or facilitating brainstorming sessions) describe actions that a computer could carry out to facilitate learning teams. How does the system know which strategies to apply, and when? Answering this question requires the ability to dynamically analyze the interaction based on an understanding of the collaborative learning process. Hence, we turn to a discussion of this process.

1.2 A Multiple Perspective View of the Collaborative Learning Process

    Construction. Many educators and philosophers believe that collaboration facilitates learning. There are multiple reasons for this belief. Social constructivists, arguing that all knowledge is constructed by those who have it, go on to assert that learning is essentially an initiation into the belief (i.e., knowledge) system of a group. By this account, individual learning consists of noticing aspects of group activity and assimilating them. A more complete level of learning might, from this viewpoint, involve some questioning by the learner and some explanation from others in response to those questions. By this view, collaborative learning involves a combination of the learner noticing group activity that imparts new knowledge, and the group explaining its actions and thinking when the learner is curious.

    Criticism. A second tradition goes back to the dialectic of Aristotle and even some of his predecessors and contemporaries. Aristotle tended to begin arguments by noting existing knowledge, assertions, and phenomena that seemed relevant to an issue. Inevitably, he would discern apparent contradictions in these data and attempt to understand how to reconcile these contradictions. He also introduced the idea that one person could help another person better understand the world by asking questions that exposed the apparent contradictions. Much of the history of logic is the evolution of ideas about how central contradictions are to complete understanding, but for our purposes, the central point is that people can help each other learn through some process of critique, exposing the apparent contradictions and incompleteness of each other's thoughts. By this view, collaborative learning consists of examining each other's assertions and challenging any apparently contradictory claims.

    Accumulation. A third view of collaborative learning has to do with the practicalities of accumulating knowledge. In this view, the job of the learner is to track down important knowledge and assimilate it. It seems plausible (though not all social psychological studies confirm this) that two people searching for bits of knowledge will find more than one person searching alone would. By this view, successful collaborative learning involves participants making public what they have figured out, that is, sharing knowledge.

    Motivation. A final view is that collaborative learning works because it is motivating. Festinger (1954) argued that people have a deep need to match their activity to that of others, a process he called social comparison. One aspect of motivation in group activity is simply that each person sees the others engaged in the learning task at hand and is thus motivated, via social comparison, to keep working himself. Beyond that, words of encouragement can pass from one person to another and provide motivation more explicitly.

    Any intelligent effort to contribute to collaborative learning by participating in conversations among learners will need to include the ability to recognize the likely presence or absence of one of these four possible group activities (explaining, criticizing, sharing, and motivating) and to offer suggestions based upon their presence or absence. For example, a system might note the absence of explanation activity and suggest that learning will improve if people explain ideas to each other.

    We suspect that intelligent coaching of collaborative learning will need to attend to a higher level of analysis than individual speech acts such as explanation or assertion. Rather, effective collaborative learning is likely to involve a higher level unit of conversation, such as asking a question and then receiving an explanation, or making an assertion and hearing a criticism. For this reason, we are attempting to determine whether there are sequences of speech acts in learning collaborations that signal coherent and effective instances of explaining, criticizing, sharing, and motivating, and others that signal incomplete or less effective instances.

2 Understanding On-Line Collaborative Activities

Coding and analyzing sequences of conversational interaction by hand, termed interaction analysis, helps us understand the patterns of conversation that lead to key learning events. Various coding schemes (e.g. Katz, O'Donnell, and Kay, 2000; Pilkington, 1997) have been developed for studying different aspects of interaction (e.g. identifying grounding behaviors, or achieving educational goals). These coding schemes help researchers break down dialogs so they are easier to compare and analyze. Unfortunately, translating from one coding scheme to another sometimes requires the entire dialog to be re-coded, meaning that effects learned using one coding scheme may not transfer well to other schemes.

Understanding and explaining hand-coded sequences of interaction is one thing; automating the identification of such sequences in collaborative learning environments is another. One clear constraint involves computer understanding of natural language. Natural Language Understanding technologies are rapidly advancing, yet they continue to be error-prone, computationally intensive, and time consuming.

    Cahn and Brennan (1999) explain that a system can represent or model a dialog using only the “gist” of successive contributions; a full account of each contribution, verbatim, is not necessary. The gist of a contribution can often be determined by the first few words, or the sentence opener. Sentence openers such as “Do you know”, “In other words”, and “I agree because” suggest the underlying intention of a statement. Associating these sentence openers with conversational acts such as Request Information, Rephrase, or Agree, and requiring students to use a given set of sentence openers, allows a system to automatically code dialog without having to rely on Natural Language parsers. Previous work has established promising research directions based on approaches that adopt this idea. Most approaches make use of a structured interface, comprised of organized sets of sentence openers (see Figure 1 in section 4 for an example). Students typically select a sentence opener from the interface to begin each contribution.
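
    To make the coding idea concrete, the following minimal sketch (ours, not any cited system's code) shows how a contribution might be coded from its sentence opener alone. The opener-to-act table is illustrative, built only from the three openers mentioned above; a real interface would cover the system's full opener set.

        # Illustrative opener-to-act table, built from the examples above.
        OPENER_ACTS = {
            "Do you know": "Request Information",
            "In other words": "Rephrase",
            "I agree because": "Agree",
        }

        def code_contribution(text):
            """Return the conversation act suggested by the contribution's opener."""
            for opener, act in OPENER_ACTS.items():
                if text.startswith(opener):
                    return act
            return None  # free text: would require natural language analysis

        assert code_contribution("I agree because it links the classes") == "Agree"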

    One of the first systems to adopt this approach was McManus and Aiken's (1995) Group Leader. Their system compared sequences of students' conversation acts to those allowable in four finite state machines developed specifically to monitor discussions about comments, requests, promises, and debates. The next section describes this system in more detail.

    Recently, several researchers have been interested in the tradeoffs involved in requiring students to use sentence openers to communicate. Baker and Lund (1997) compared the problem solving behavior of student pairs as they communicated through both a sentence opener interface and an unstructured chat interface. They found that the dialogue between students who used the sentence opener interface was more task focused. Jermann and Schneider's (1997) subjects could choose, for each contribution, to type freely in a text area, or to select one of four shortcut buttons or four sentence openers. Jermann and Schneider discovered that, in fact, it is possible to direct the group interaction by structuring the interface, as Baker and Lund suggest. Furthermore, they found that the sentence openers were used more frequently overall than the free text zone (58% vs. 42%). Soller, Lesgold, Linton, and Gaimari (1999) found that the types of conversation acts that group members use may indicate the quality of interaction. Their work suggests that conversations of effective groups include a balance of different conversational acts, and in particular an abundance of questioning, explaining, and motivation, whereas ineffective groups tend to show an imbalance of conversation acts, with an abundance of acknowledgement.

    Learning conversations involving three or more participants are full of gaps and overlaps, and lack the tight logical sequencing of dyad conversations. In essence, the job of recognizing when to coach collaborative learning can be seen as one of detecting meaningful speech act sequences that are embedded in longer sequences, and that are not necessarily contiguous within them. The next section takes a look at a few different methods for detecting and analyzing such sequences.

3 Approaches to Analyzing Sequences of Collaborative Learning Interaction

    Different approaches to analyzing collaborative learning activity result from the need to understand different aspects of interaction, or to understand the interaction from different perspectives. The selection of an analysis method should be driven by the desired outcome: to better understand the interaction, or to provide advice or support to the students. The result should be an analysis of group interaction that reveals occurrences of events that the system knows how to target.

    We describe four different computational approaches for analyzing group learning interaction below, and a fifth approach in section 4. In some cases, the approach requires that the system designer adopt a particular knowledge representation. Knowledge representations may be used to describe systems, constrain users' choices, or analyze users' actions; we focus here on approaches aimed at analyzing the interaction.

3.1 Finite State Machines

    The Coordinator (Flores, Graves, Hartfield, and Winograd, 1988) was one of the first systems to adopt the finite state machine approach. In Flores et al.'s view, conversations represented intentions to take actions in an organization. Users sent messages to each other by choosing conversational acts (such as Request or Promise) from menus set up by the system. The system dynamically generated these menus based on a state transition matrix of “sensible next states”, displaying only those actions that would direct the conversation toward completion of action. The Coordinator was intended to create organizational change by making the structure of conversation explicit. Consequently, the first versions were often regarded as overly coercive.
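
    The following sketch captures the flavor of this approach; the states and acts here are invented for illustration and are not The Coordinator's actual tables. The menu offered to a user is simply the row of the transition table for the current conversation state.

        # Hypothetical "sensible next states" table in the spirit of The Coordinator.
        SENSIBLE_NEXT = {
            "Start":   ["Request", "Promise"],
            "Request": ["Promise", "Decline", "Counter-Offer"],
            "Promise": ["Report-Completion", "Cancel"],
        }

        def menu_for(state):
            """Only acts that move the conversation toward completion are offered."""
            return SENSIBLE_NEXT.get(state, [])

        state = "Start"
        for act in ["Request", "Promise"]:  # a conversation that follows the machine
            assert act in menu_for(state)
            state = act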

    McManus and Aiken's (1995) Group Leader system compared sequences of students' conversation acts to those allowable in four finite state machines developed specifically to monitor discussions about comments, requests, promises, and debates. The Group Leader was able to analyze sequences of conversation acts, and provide feedback on the students' trust, leadership, creative controversy, and communication skills¹. For example, the system might note a student's limited use of sentence openers from the creative controversy category, and recommend that the student, “use the attribute of preparing a pro position by choosing the opener of 'The advantages of this idea are'”. The Group Leader received a positive response from the students, and paved the way for further research along these lines.

    Inaba and Okamoto (1997) describe a model that draws upon the ideas of finite state machines and utility functions. They used a finite state machine to control the flow of conversation and to identify proposals, while applying utility functions to measure participants' beliefs with regard to the group conversation. For example, the utility function for evaluating a student's attitude took into account the degree to which his teammates agreed with his proposals. Hybrid approaches such as this are key, as they broaden our ability to analyze interaction in new ways.

    Barros and Verdejo's (1999) asynchronous newsgroup-style environment enables students to have structured, computer-mediated discussions on-line. Users must select the type of contribution (e.g. proposal, question, or comment) from a list each time they add to the discussion. The list is determined by the possible next actions given by a state transition graph, which the teacher may specify before the interaction begins. In this case, the state transition graph provides a mechanism to structure, rather than to understand, the conversation. Evaluating the interaction involves analyzing the conversation to compute values for the following four attributes: initiative, creativity, elaboration, and conformity. For example, making a proposal positively influences initiative and negatively influences conformity. These four attributes, along with others such as the mean number of contributions by team members and the length of contributions, factor into a fuzzy inference procedure that rates student collaboration on a scale from “awful” to “very good”. This work is seminal in combining a finite state approach with fuzzy rubrics to structure and understand the group interaction. A closer look at interaction sequences containing both task and conversational elements may help in composing rubrics for dynamically evaluating learning activity, enabling a facilitator agent to provide direction at the most appropriate instances.
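
    As a rough sketch of how contribution types might feed such attribute scores, consider the fragment below. The weights are invented for illustration (only the proposal row reflects the influence described above); Barros and Verdejo's actual rubric and fuzzy inference step are not reproduced here.

        # Invented attribute weights: a proposal raises initiative and
        # lowers conformity, as described above; the other rows are guesses.
        INFLUENCE = {
            "proposal": {"initiative": +1, "conformity": -1},
            "question": {"elaboration": +1},
            "comment":  {"conformity": +1},
        }

        def attribute_scores(contribution_types):
            scores = {"initiative": 0, "creativity": 0, "elaboration": 0, "conformity": 0}
            for c in contribution_types:
                for attribute, weight in INFLUENCE.get(c, {}).items():
                    scores[attribute] += weight
            # A fuzzy inference step would map these raw scores onto the
            # "awful" .. "very good" rating scale.
            return scores

        attribute_scores(["proposal", "question", "comment"])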

3.2 Rule Learners

    Katz, Aronis, and Creitz (1999) developed two rule learning systems, String Rule Learner and Grammar Learner, that learn patterns of conversation acts from dialog segments that target particular pedagogical goals. The rule learners were challenged to find patterns in the hand-coded dialogs between avionics students learning electronics troubleshooting skills and expert technicians. The conversations took place within the SHERLOCK 2 Intelligent Tutoring System for electronics troubleshooting.

    The String Rule Learner, which searches for patterns common to a training set, discovered that explanations of system functionality often begin with an Identify or Inform Act. The Grammar Learner, which develops a probabilistic context-free grammar for specified conversation types, learned that explanations of system functionality not only begin with an Inform statement, but may go on to include a causal description, or another Inform Act followed by a Predict Act². Rule learning algorithms such as these hold promise for classification and recognition tasks, and may prove useful tools for assisting in the sequential analysis of learning conversations.

    ¹ These four categories were proposed by Johnson and Johnson (1991), and are intended to define the skills involved in small group learning.

    ² The coding terminology used here has been altered from the original for brevity and clarity.

3.3 Decision Trees and Plan Recognition

    Constantino-Gonzales and Suthers's (2000) system, COLER, coaches students as they collaboratively learn Entity-Relationship modeling, a formalism for conceptual database design. Decision trees that account for both task-based and conversational interaction are used to dynamically advise the group. For example, the coach might observe a student adding a node to the group's shared diagram, and might notice that the other group members have not offered their opinions. The coach might then recommend that the student taking action invite the other students to participate. The system also compares students' private workspaces to the group's shared workspace, and recommends discussion items based on the differences it finds.

    Muhlenbrock and Hoppe (1999) take a plan recognition approach to analyzing collaboration processes. In their approach, the system maps actions taken on a shared workspace to steps in a partially ordered, hierarchical plan. The hierarchical nature of the plan allows users' individual actions to be generalized to problem solving activities (e.g. conflict creation or revision). Muhlenbrock and Hoppe show that the group members' roles can be determined by analyzing how these general problem solving activities shift focus from one user to another. They are examining methods for using this information to coach group interaction.
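
    A toy sketch of the generalization step follows. The action-to-activity table is invented, and a flat table does not capture the partially ordered, hierarchical plans Muhlenbrock and Hoppe actually use; it only illustrates how low-level actions might be lifted to activities whose focus shifts between users.

        # Hypothetical mapping from workspace actions to problem solving activities.
        ACTIVITY_OF = {
            ("create", "node"):      "construction",
            ("delete", "own-node"):  "revision",
            ("delete", "peer-node"): "conflict creation",
        }

        def focus_shifts(log):
            """Report which user performs each generalized activity, in order."""
            return [(user, ACTIVITY_OF[(verb, target)])
                    for user, verb, target in log
                    if (verb, target) in ACTIVITY_OF]

        log = [("A", "create", "node"), ("B", "delete", "peer-node")]
        focus_shifts(log)  # [('A', 'construction'), ('B', 'conflict creation')]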

    Both Constantino-Gonzales and Suthers, and Muhlenbrock and Hoppe have implemented novel ways to analyze group members' actions on shared workspaces, and have successfully inferred domain independent behaviors from information based on the frequency and types of domain related actions. Yet, until a computer tutor can understand the rich conversation between peers as they discuss their problems, ask questions, and probe their teammates for explanations, it cannot fully address the pedagogical and social needs of the learning group. More work is needed to understand how students communicate, and to apply this knowledge in developing computational methods for determining how to best support and assist the process of collaboration.

    In the next section, we describe an approach to analyzing collaborative learning using Hidden Markov Models. In section 1.2, we discussed four main processes involved in collaborative learning conversation: explaining, criticizing, sharing, and motivating. Here, we focus on the process of knowledge sharing.

4 Example: Modeling the Sharing of New Knowledge using Hidden Markov Models

    At the beginning of this paper, we described knowledge sharing as one way to view the process of collaborative learning. In fact, the way in which knowledge is shared, and the parties involved (the knowledge sharer and the knowledge recipients), determine to a large extent whether or not that knowledge will be critiqued, and how it will be constructed, changed, and assimilated. For this reason, we have chosen to take a closer look at how effectively learners transfer the knowledge that they bring to the table during a collaborative session.

    We define a knowledge sharing episode as a series of conversational contributions (utterances) and actions (e.g. on a shared workspace) that begins when one group member introduces new knowledge into the group conversation, and ends when discussion of the new knowledge ceases. New knowledge is defined as knowledge that is unknown to at least one group member other than the knowledge sharer. Determining the effectiveness of a knowledge sharing episode involves the following three steps:

    1. Determining which student played the role of knowledge sharer, and which the role(s) of receiver

    2. Analyzing how well the knowledge sharer explained the new knowledge

    3. Observing and evaluating how the knowledge receivers assimilated the new knowledge

    In this section, we describe an experiment for assisting in the identification and assessment of knowledge sharing episodes, and we illustrate the successful use of Hidden Markov Models (HMMs) to accomplish step (1) above. Steps (2) and (3) are largely future work; however, in the conclusion, we briefly discuss recent analyses in which HMMs were shown to successfully evaluate the effectiveness of knowledge sharing episodes.

    In our experiment, the team knowledge sharing process was analyzed by comparing the dialog segments in which students shared new knowledge with the group to the group members' performance on pre- and post-tests. These tests targeted the specific knowledge elements to be shared and learned during the experiment. To ensure that high-quality knowledge sharing opportunities exist, each group member was provided with a unique piece of knowledge that the team needed to solve the problem. By artificially constructing situations in which students are expected to share knowledge, we single out interesting episodes to study, and more concretely define situations that can be compared and assessed.

    Experiments designed to study how new knowledge is assimilated by group members are not new to social psychologists. Hidden Profile studies (Lavery, Franz, Winquist, and Larson, 1999; Mennecke, 1997), designed to evaluate the effect of knowledge sharing on group performance, require that the knowledge needed to perform the task be divided among group members such that each member's knowledge is incomplete before the group session begins. The group task cannot be successfully completed until all members share their unique knowledge. Group performance is typically measured by counting the number of individual knowledge elements that surface during group discussion, and evaluating the group's solution, which is dependent on these elements.

    Surprisingly, studying the process of knowledge sharing has been much more difficult than one might imagine. Stasser (1999) and Lavery et al. (1999) have consistently shown that group members are not likely to discover their teammates' hidden profiles. They explain that group members tend to focus on discussing information that they share in common, and tend not to share and discuss information they uniquely possess. Moreover, it has been shown that when group members do share information, the quality of the group decision does not improve (Lavery et al., 1999; Mennecke, 1997). There are several explanations for this. First, group members tend to rely on common knowledge for their final decisions, even though other knowledge may have surfaced during the conversation. Second, “if subjects do not cognitively process the information they surface, even groups that have superior information sharing performance will not make superior decisions” (Mennecke, 1997). Team members must be motivated to understand and apply the new knowledge.

    At least one study (Winquist and Larson, 1998) confirms that the amount of unique information shared by group members is a significant predictor of the quality of the group decision. More research is necessary to determine exactly what factors influence effective group knowledge sharing. One important factor may be the complexity of the task. Mennecke's (1997) and Lavery et al.'s (1999) tasks were straightforward, short-term tasks that subjects may have perceived as artificial. Tasks that require subjects to cognitively process the knowledge that their teammates bring to bear may reveal the importance of effective knowledge sharing in group activities. In the next section, we describe one such task.

4.1 Experimental Method

    Groups of three were asked to solve one Object-Oriented Analysis and Design problem using a specialized shared workspace, while communicating through a sentence opener interface (see section 2) containing sets of phrases organized in intuitive categories. Sentence openers provide a natural way for users to identify the intention of their conversational contribution without fully understanding the significance of the underlying communicative acts. The sentence opener interface is shown on the bottom half of Figure 1. The categories and corresponding phrases on the interface represent the conversation acts most often exhibited during collaborative learning and problem solving in a previous study (Soller et al., 1998). Details about the functionality of the communication interface can be found at http://lesgold42.lrdc.pitt.edu/EPSILON/Epsilon_software.html.

    The specialized shared workspace is shown on the top half of Figure 1. The workspace allows students to collaboratively solve object-oriented design problems using Object Modeling Technique (OMT) (Rumbaugh, Blaha, Premerlani, Eddy, and Lorensen, 1991). Object-Oriented Analysis and Design was chosen because it is usually done in industry by teams of engineers with various expertise, so it is an inherently collaborative domain. An example of an OMT design problem is shown below.

    Exercise: Prepare a class diagram using the Object Modeling Technique (OMT) showing relationships among the following object classes: school, playground, classroom, book, cafeteria, desk, chair, ruler, student, teacher, door, swing. Show multiplicity balls in your diagram.

    The shared OMT workspace provides a palette of buttons down the left-hand side of the window that students use to construct objects, and link objects in different ways depending on how they are related. Objects on the shared workspace can be selected, dragged, and modified, and changes are reflected on the workspaces of all group members.

    Subjects. Five groups of three students each participated in the study. The subjects were undergraduates or first-year graduate students majoring in the physical sciences or engineering, none of whom had prior knowledge of Object Modeling Technique. The subjects received pizza halfway through the four-hour study, and were paid at the completion of the study.

    Figure 1. The shared OMT workspace (top), and sentence opener interface (bottom)

    Procedure. The five groups were run separately. The subjects in each group were asked to introduce themselves to their teammates by answering a few personal questions. Each experiment began with a half hour interactive lecture on OMT basic concepts and notation, during which the subjects practiced solving a realistic problem. The subjects then participated in a hands-on software tutorial. During the tutorial, the subjects were introduced to all 36 sentence openers on the interface. The subjects were then assigned to separate rooms, received their individual knowledge elements, and took a pre-test. Individual knowledge elements addressed key OMT concepts, for example, “Attach attributes common to a group of subclasses to a superclass.” Each knowledge element was explained on a separate sheet of paper with a worked-out example. The pre-test included one problem for each of the three knowledge elements. It was expected that the student given knowledge element #1 would get only pre-test question #1 right, the student given knowledge element #2 would get only pre-test question #2 right, and likewise for the third student. To ensure that each student understood his or her unique knowledge element, an experimenter reviewed the pre-test question pertaining to the student's knowledge element before the group began the main exercise. The subjects were not told specifically that they held different knowledge elements; however, they were reminded that their teammates might have different backgrounds and knowledge, and that sharing and explaining ideas, and listening to others' ideas, is important in group learning. All groups completed the OMT exercise on-line within about an hour and fifteen minutes. During the on-line session, the software automatically logged the students' conversation and actions (see Figure 2). After the problem solving session, the subjects completed a post-test, and filled out a questionnaire. The post-test, like the pre-test, addressed the three knowledge elements. It was expected that the members of effective knowledge sharing groups would perform well on all post-test questions.

    Figure 2. The student action log dynamically records all student actions and conversation

4.2 A Brief Introduction to Hidden Markov Models

    Hidden Markov Models (HMMs) were used to model the sequences of interaction present in the knowledge sharing episodes from the experiment. HMMs were chosen because of their flexibility in evaluating sequences of indefinite length, their ability to deal with a limited amount of training data, and their recent success in speech recognition tasks. In this section, we briefly explain the basics of the Hidden Markov Modeling approach.

    Markov Chains are similar to finite state machines, except that each arc from one state to another stipulates the probability of taking that arc, and the probabilities on all arcs leading out of a state must sum to one. The probability of taking a particular path through the model is then the product of all the probabilities along the path. Given a set of example (training) sequences, one can imagine constructing a Markov Chain describing all the different types of transitions that occur in those sequences. The main limitation of such a model is that it does not generalize well to new examples, even when an abundant amount of training data is available (Charniak, 1993). Hidden Markov Models were developed specifically to deal with the problem of sparse training data.
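
    For example, with invented transition probabilities over conversation acts, a path probability is just the running product of the arc probabilities along the path:

        # Toy Markov Chain over conversation acts; the probabilities out of
        # each state sum to one. The numbers are invented for illustration.
        TRANS = {
            "Request":     {"Inform": 0.6, "Request": 0.4},
            "Inform":      {"Acknowledge": 0.5, "Inform": 0.5},
            "Acknowledge": {},  # terminal state: no outgoing arcs
        }

        def path_probability(path):
            prob = 1.0
            for a, b in zip(path, path[1:]):
                prob *= TRANS[a].get(b, 0.0)  # unlisted arcs have probability zero
            return prob

        path_probability(["Request", "Inform", "Acknowledge"])  # 0.6 * 0.5 = 0.30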

    Hidden Markov Models generalize Markov Chains in that they allow several different paths through the model to produce the same output. Consequently, it is not possible to determine the state the model is in simply by observing the output (it is “hidden”). Markov models observe the Markov assumption, which states that the probability of the next state is dependent only upon the previous state. This assumption seems limiting; however, efficient algorithms have been developed that perform remarkably well on problems similar to that described here. Hidden Markov Models allow us to ask questions such as, “How well does a new (test) sequence match a given model?”, or, “How can we optimize a model's parameters to best describe a given observation (training) sequence?” (Rabiner, 1989). Answering the first question involves computing the most likely path through the model for a given output sequence; this can be efficiently computed by the Viterbi (1967) algorithm. Answering the second question requires training an HMM given sets of example data. This involves estimating the (initially guessed) parameters of an arbitrary model repetitively, until the most likely parameters for the training examples are discovered. The explanation provided here should suffice for understanding the analysis in the next section. For further details on HMMs, see Rabiner (1989) or Charniak (1993).
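
    The following self-contained sketch shows the Viterbi computation for a toy two-state model. The states, acts, and probabilities are invented for illustration; the five-state models discussed in the next section are trained from data, not hand-set.

        import math

        def _lg(p):
            # log-probability with a floor, so zero-probability arcs are forbidden
            return math.log(p) if p > 0 else float("-inf")

        def viterbi(obs, states, start_p, trans_p, emit_p):
            """Return the most likely state path for obs, and its log-probability."""
            best = {s: _lg(start_p[s]) + _lg(emit_p[s].get(obs[0], 0)) for s in states}
            path = {s: [s] for s in states}
            for o in obs[1:]:
                nbest, npath = {}, {}
                for s in states:
                    # pick the predecessor that maximizes the path probability
                    prev = max(states, key=lambda q: best[q] + _lg(trans_p[q][s]))
                    nbest[s] = best[prev] + _lg(trans_p[prev][s]) + _lg(emit_p[s].get(o, 0))
                    npath[s] = path[prev] + [s]
                best, path = nbest, npath
            end = max(states, key=best.get)
            return path[end], best[end]

        states = ["sharing", "receiving"]
        start = {"sharing": 0.6, "receiving": 0.4}
        trans = {"sharing":   {"sharing": 0.7, "receiving": 0.3},
                 "receiving": {"sharing": 0.4, "receiving": 0.6}}
        emit = {"sharing":   {"Inform": 0.7, "Request": 0.2, "Discuss": 0.1},
                "receiving": {"Inform": 0.2, "Request": 0.4, "Discuss": 0.4}}
        viterbi(["Inform", "Discuss", "Inform"], states, start, trans, emit)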

4.3 Using Hidden Markov Models to Select the Knowledge Sharer

    The software logs (e.g. Figure 2) from the five experiments were parsed by hand to extract the dialog segments in which the students shared their unique knowledge elements. Fourteen of these knowledge sharing episodes were identified. The segments varied in length from 5 to 62 contributions, and contained both conversational elements and action events. The sequences of conversation acts within the extracted episodes were used to train a Hidden Markov Model to identify the knowledge sharer. These conversational sequences ranged from 2 to 50 contributions. Figure 3 shows an example of one such sequence. The sentence openers, which indicate the system-coded subskills and attributes, are italicized.

    Student  Subskill     Attribute        Actual Contribution (Not seen by HMM)
    A        Request      Opinion          Do you think we need a discriminator for the car ownership
    C        Discuss      Doubt            I'm not so sure
    B        Request      Elaboration      Can you tell me more about what a discriminator is
    C        Discuss      Agree            Yes, I agree because I myself am not so sure as to what its function is
    A        Inform       Explain/Clarify  Let me explain it this way - A car can be owned by a person, a company or a bank. I think ownership type is the discrinator.
    A        Maintenance  Apologize        Sorry I mean discriminator.

    Actual HMM Training Sequence

    A-Request-Opinion

    C-Discuss-Doubt

    B-Request-Elaboration

    C-Discuss-Agree

    A-Inform-Explain

    A-Maintenance-Apologize

    Sequence-Termination

    Figure 3. An actual logged knowledge sharing episode (above), showing system coded subskills and attributes, and its corresponding HMM training sequence (below)

    For each episode, the system was tasked to select one of the three participants as the knowledge sharer. Hidden Markov Models, however, are designed to output the probability that a particular sequence matches a trained model. The knowledge sharer role was therefore held consistent throughout the training data, and each test sequence was reproduced twice such that three test sequences were obtained, each featuring a different participant playing the role of knowledge sharer. Because of the small dataset, we used a 14-fold cross validation approach, in which we tested each of the 14 examples against the other 13 (training) sets, and averaged the results.
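
    The sketch below reproduces the shape of this procedure: relabel the speakers of the held-out episode three ways, score each relabeling against a model trained on the remaining episodes, and pick the best. To keep the sketch self-contained and short, a simple bigram counter stands in for the trained HMM as the sequence scorer, and the two episodes are invented, not our data.

        from collections import Counter

        def relabel(seq, mapping):
            """Swap the speaker letter of each 'X-Subskill-Attribute' token."""
            return [mapping.get(t[0], t[0]) + t[1:] for t in seq]

        def train(seqs):
            """Stand-in for HMM training: bigram counts over coded tokens."""
            model = Counter()
            for s in seqs:
                model.update(zip(s, s[1:]))
            return model

        def score(model, seq):
            return sum(model[pair] for pair in zip(seq, seq[1:]))

        # Three relabelings put each participant, in turn, into the sharer
        # role "A" that the training data holds fixed.
        PERMS = {"A": {"A": "A", "B": "B", "C": "C"},
                 "B": {"A": "B", "B": "A", "C": "C"},
                 "C": {"A": "C", "C": "A", "B": "B"}}

        def pick_sharer(train_seqs, test_seq):
            model = train(train_seqs)
            return max(PERMS, key=lambda p: score(model, relabel(test_seq, PERMS[p])))

        # Leave-one-out over hypothetical episodes (the paper uses 14):
        episodes = [["A-Inform-Suggest", "B-Acknowledge-Accept", "A-Inform-Explain"],
                    ["A-Request-Opinion", "C-Discuss-Doubt", "A-Inform-Explain"]]
        hits = sum(pick_sharer(episodes[:i] + episodes[i+1:], ep) == "A"
                   for i, ep in enumerate(episodes))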

    Given the choice of three possible knowledge sharers, the 5-node HMMs chose the right student for all 14 experiments, achieving 100% accuracy. The baseline comparison is chance, or 33.3%, since there is a 1/3 chance of arbitrarily choosing the right student as knowledge sharer. The next best comparison is to count the number of Inform conversation acts each participant uses during the knowledge sharing episode, and select the student with the highest number in each test set. This strategy produces 64.3% accuracy. The results are summarized in Table 2. This analysis shows that determining which participant is sharing new knowledge involves more than simply determining who is doing all the informing. The next question, then, is, “What exactly are knowledge sharers doing if they are not primarily providing information?”

    Table 2. Accuracy of three methods for selecting the group member playing the role of knowledge sharer

    Selection Method                                   Accuracy (over 14 cross validation trials)
    5 Node Hidden Markov Model                         100%
    Participant with Greatest Number of Inform Acts    64.3%
    Baseline (Chance)                                  33.3%

    A closer look at the trained HMM provides clues about why this approach works so well. Figure 4 shows one of the five-node HMMs, trained using thirteen conversational sequences totaling 180 contributions (outputs). The test sequence for this model is shown in Figure 3, above. The most probable path for this output sequence starts at state 5 (A-Request-Opinion, with .12 probability), and proceeds through states 5 (C-Discuss-Doubt, .04), 2 (B-Request-Elaboration, .03), 1 (C-Discuss-Agree, .09), 4 (A-Inform-Explain, .05), and 3 (A-Maintenance-Apologize, .06), ending in state 5 (Sequence-Termination). This sequence is seen by the model as more likely than a sequence in which the knowledge sharer expresses doubt, and one of the other participants provides an elaborated explanation, for obvious reasons.

    The model in Figure 4 describes the possible ways that student A might share new knowledge with his teammates, and the possible ways that his teammates might react. The model is therefore a sort of compiled conversational model, and should be analyzed in the context of the sorts of examples it embodies.

    Figure 4. A summary of the five state HMM, trained using 13 conversational sequences in which A is the knowledge sharer. Outputs for each state that exceed an 11% threshold are shown in boxes.

5 Conclusion

    The difficulties encountered in analyzing the process of collaborative learning can be attributed to the complex nature of group interaction, the limitations of computer-based natural language understanding, and the coupling of task-based and social elements that factor into collaborative activities.

    To help explain and simplify the complex nature of group interaction, we offered a multiple perspective view of the collaborative learning process in section 1, highlighting the perspectives that drive explaining, criticizing, sharing, and motivating behaviors. To address the limitations of natural language technology, we described an approach in section 2 that makes use of key phrases to help students identify the intentions of their contributions. Understanding and analyzing the collaborative learning process requires a fine-grained sequential analysis of the group interaction in the context of the learning goals. In sections 3 and 4, we discussed examples of five different computational approaches for performing such analysis: Finite State Machines, Rule Learners, Decision Trees, Plan Recognition, and Hidden Markov Models.

    The analysis presented in section 4 shows that Hidden Markov Models (HMMs) can effectively learn to recognize the knowledge sharer, and the knowledge recipients when new knowledge is shared during learning activities. The Hidden Markov Model approach was shown to perform significantly better than a statistical analysis approach. In a similar investigation, we trained HMMs to determine the effectiveness of knowledge sharing episodes. An episode is considered effective if one or more students learn the shared knowledge (as shown by a difference in pre-post test performance), and ineffective otherwise. The 6 node HMMs for determining effectiveness considered sequences including both task and conversational events, correctly
