Sustainable Online Learning Technologies for Students

Sustainable Online Learning Technologies for Students Vive Kumar, Chris Groeneboer, Stephanie Chu, Dianne Jamieson-Noel, Cindy Xin, Shilpi Rao Simon Fraser University, Canada [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract Online learning aims to enrich learning by blending traditional and innovative learning models; conceptualizing courseware in multiple media; standardizing interoperable content representation; personalizing learning experiences to custom learning devices; integrating administrative functionalities with other academic units; and, not least, ensuring the quality of learning. Such a multifaceted ideology is construed as a learning ecosystem in which knowledge is constructed, analyzed, and disseminated among members of the ecosystem. A number of subsystems oriented around learners, researchers, instructors, administrators, and technologists have been identified and explored within the online learning ecosystem. This paper focuses on the learner ecosystem, presents a model that characterizes the interactions and the flow of information across learning activities, and highlights some of the key evolving technologies that enhance learners' skills in understanding subject matter, in applying learned concepts in new situations, and in transferring learned knowledge.

Keywords: online learning, SRL, cognitive model, tactics, strategies, ecosystem, content, context, help system, helper, helpee, programming

1. Introduction Online learning aims to enrich learning by blending traditional and innovative learning models; conceptualizing courseware in multiple media; standardizing interoperable content representation; personalizing learning experiences to custom learning devices; integrating administrative functionalities with other academic units; and, not least, ensuring the quality of learning. Such a multifaceted ideology is construed as a learning ecosystem in which knowledge is constructed, analyzed, and disseminated among members of the ecosystem. A number of subsystems oriented around learners, researchers, instructors, administrators, and technologists have been identified and explored within the online learning ecosystem. This paper focuses on the learner ecosystem, presents a model that characterizes the interactions and the flow of information across learning activities, and highlights some of the key evolving technologies that enhance learners' skills in understanding subject matter, in applying learned concepts in new situations, and in transferring learned knowledge. We present our arguments in terms of a number of learner-oriented case studies and their interrelations within an online learning ecosystem in higher education. Specifically, we explore three different approaches for supporting learners in their academic pursuits: studying, help seeking, and problem solving. We outline considerations for designing these types of systems, and we characterize how aspects of these systems support students through the learning process.

2. Online learning support for studying Self-regulation refers to how students define tasks, set goals for learning, articulate plans to reach task goals, enact tactics and strategies to direct learning, and adapt their learning approaches based on monitoring and evaluating learning processes and outcomes [27, 33]. When students work on a task, the information they gather provides an opportunity for feedback about both the study products generated while studying (e.g., notes or highlights) and the task products that result from studying (e.g., an essay or a response to an exam question). Feedback, whether internally generated by students as they study or provided externally by teachers or peers, gives students important information that guides how they can direct and redirect the learning process [1]. Although all students self-regulate, they do so to varying degrees and are not always productive or effective in how they self-regulate their learning [13]. Previous research also suggests that students often have an impoverished or inaccurate knowledge base about tactics and strategies, as well as entrenched beliefs about what learning is and how it occurs [9]. Given these potential problems with students' knowledge about how to study effectively, educational researchers are interested in investigating the processes and outcomes of how students learn, with a specific emphasis on how to design instructional models or methods that help students develop more effective forms of self-regulation, which may result in improved learning outcomes [25]. Typically, research in self-regulated learning uses self-report measures in which students complete a questionnaire about the methods they used to study content. Current methodological chapters on self-regulation pinpoint limitations in the amount and type of information that self-report measures provide.
Self-report instruments require that students reconstruct from memory how often they used various tactics and strategies for studying [30, 28]. Previous research shows that such reconstructions often lead to biased and inaccurate estimates of tactic use compared with actual studying behaviours [29]. The advantage of an online learning environment is that it allows researchers to trace the specific activities students use to examine content. For example, we can tell how often students used the various tools available within the interface. This type of data can provide additional insight into how students self-regulate their learning. The Learning Kit project is designed to support students in their use of self-regulated learning strategies. gStudy [31] is a cross-platform software tool for researching learning. Researchers or instructors can assemble content (styled and hyperlinked text, graphics, and video) into kits displayed in a web browser. Researchers can manipulate the structure and behaviour of a kit's elements to operationalize experimental variables corresponding to research hypotheses. By manipulating a kit's structure, we can ask specific research questions about how to design instructional materials and determine how students examine and use this information to build new knowledge structures. With gStudy, students can examine and annotate their individual kit's content using a number of embedded tools. These tools provide guidelines that help students determine how they can annotate content. For example, gStudy incorporates several types of note templates based on a choice of schemas (e.g., summary, question and answer, to do, debate). Each note template embeds different fields that guide students in how they can process information. For example, a summary note includes fields for labelling the note, filling in key ideas, and providing the main ideas of the note. A question note provides two fields: one for asking a question and one for building an answer to it. As a final example, a debate note provides an opportunity for students to identify the various positions presented in the text and then provide evidence to support a position. Each note template provides opportunities for students to manipulate and transform information into their own personal understandings, if they choose to use these tools to annotate the content.
Second, gStudy offers a highlighting tool, which allows students to select and then classify information according to various properties (e.g., important, review this, don’t understand, etc.). Students highlight or underline information for any number of different purposes; for example, it could be used simply to maintain attention while reading. However, highlighting can also be used to discriminate relevant information from irrelevant information, thus reducing the overall amount of information that needs to be covered during review. This type of activity reflects a more active and generative way to examine content because it invites students to make decisions about why they need to highlight information in the text, thereby affording opportunities for students to process the content more deeply. A third feature provides a method for students to examine the terminology embedded within the content. Students can construct new glossary entries including a variety of information within different fields similar to the note-taking template. The fields include identifying a title for the concept, and providing a definition, description, and example(s). Each of these fields provides opportunities for students to elaborate on and extend their knowledge of the content. When students examine content, there are often explicit or implicit links that can be built between conceptual ideas presented in the text. gStudy’s linking tool allows students to assemble information within and across elements of the content (i.e., selections in a “chapter,” amongst notes, glossary items, etc.). To build connections across glossary entries, students can make links by selecting information and building connections between the two sources within the content. The interface also offers a multi-faceted search tool, which allows students to query specific search terms to examine all of the information associated with the content. 
For example, to determine how often a term occurs within the body of the chapter, the student could enter the term as a search query and then set parameters on where to look within the kit. The search tool will then provide a table outlining every instance of the term within the kit, including how frequently the term occurs within the content itself, within references to notes, within glossary entries and links, and so on. The student can then select from where they would like to examine the term in relation to that content and continue to study the content. Currently most of the tools embedded into gStudy focus on tactics and strategies for elaborating on content. However, we want to expand the interface by developing tools to focus on all aspects of self-regulation including defining the task, setting goals, and articulating plans. Currently we have plans to develop a task tool that will ask students to define what they think the task is about (if a task has been pre-assigned) or that will allow students to define a task for the studying episode. The task tool will provide an opportunity for students to articulate questions and outline resources required for the task. We are also considering a goal-setting tool such as a to-do list where students have an opportunity to specify what they want to achieve in learning. As well we want students to specify subtasks with a proposed due date to schedule what they want to do with the content.
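As an illustrative sketch of the annotation and search facilities described above, the note-template schemas and per-scope term search could be modelled as follows. The field names, function signatures, and kit layout are assumptions inferred from the descriptions in this section, not gStudy's actual implementation.

```python
# Illustrative sketch of gStudy-style note templates and multi-faceted search.
# Field names and signatures are assumptions, not gStudy's actual code.
NOTE_TEMPLATES = {
    "summary":  ["label", "key_ideas", "main_ideas"],
    "question": ["question", "answer"],
    "debate":   ["positions", "supporting_evidence"],
}

def new_note(template):
    """Create an empty note whose fields follow the chosen schema."""
    if template not in NOTE_TEMPLATES:
        raise ValueError(f"unknown template: {template}")
    return {"template": template,
            "fields": {field: "" for field in NOTE_TEMPLATES[template]}}

def search_kit(kit, term, scopes=("content", "notes", "glossary")):
    """Count case-insensitive occurrences of `term` within each scope of a kit,
    analogous to the per-location table the search tool returns."""
    term = term.lower()
    return {scope: sum(text.lower().count(term) for text in kit.get(scope, []))
            for scope in scopes}

note = new_note("question")
note["fields"]["question"] = "What is self-regulated learning?"

kit = {"content":  ["Self-regulation guides studying.", "Regulation of tactics."],
       "notes":    ["Question: what is self-regulation?"],
       "glossary": ["Self-regulation: directing one's own learning."]}
hits = search_kit(kit, "regulation")  # occurrences per scope
```

The schema dictionary makes the guiding role of the templates explicit: a note is not free-form text but a set of prompts the student chooses to fill in.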

2.1 Using log file data to examine studying As students study, the software builds a log file that timestamps every instance of the student's active engagement with content. These data are saved at the end of a study session as a log file on a server. The log file is an important data source for researchers, as it allows us to build multiple representations of how students examined content. There are four methods for examining log file data that allow us to extrapolate how students approached studying the content. LogReader [8] is a toolkit for analyzing gStudy log data; it allows us to examine the trace data to count event frequencies, identify patterns of studying activity, compute transition matrices, and graph the timing and sequence of events. First, we can generate counts of how often students used various features of the interface. For example, we can count how many notes students made of the various note types, count how many quicknotes students made and their associated classification structure, and trace when students updated previously created notes or concepts. We can use this count information and compare it to self-report data, where we give students questionnaires about the very tools that are embedded in gStudy. By comparing self-reports to trace data, we can measure calibration: bias (whether students overestimate or underestimate in their self-reports) and accuracy (the absolute value of the bias score), which indicates the degree of association between self-reports and traces. Beyond frequency data, we can also examine patterns in studying. Patterns allow us to focus on sequences of events, in particular, to examine when students transition between single tactics for studying and how they combine tactics to form strategies. For example, a student may choose to make quicknotes for a period of time, transition into making notes based on that content, and then return to making quicknotes. This reflects a particular pattern of activities that tells us something about students' preferences for various tactics and strategies for examining content. More than individual frequencies, this type of information tells us how active students were in examining the content. Time-based analysis is a relatively new technique for examining such data, and we are still uncovering methods for it. Time-based analyses will also expose patterns in how students study. Eventually we want to show students these time-based analysis graphs so they can see a characterization of what they did while studying and then use it to think about other methods they might want to try. The last type of analysis is content analysis. With content analysis we can determine whether students were generative when they filled in the fields, using their own words and phrases, or whether they provided little or no rephrasing of the original content.
We also get more information about how active students were by determining which fields, and how many of them, they filled in. A content analysis can provide quite different information about students' self-regulation. An active studier will play with ideas, manipulating and transforming their understanding into their own personalized framework; a productive self-regulator will also question the content. A passive studier is less likely to stray far from the original ideas presented in the text and will tend to stick closely to the original language. Tools can be designed to provide hints to students about how they can annotate the content more effectively, but ultimately it is up to students to decide if and how they want to fill in the fields. The goal of the Learning Kit project is to provide opportunities for students to actively think about their approaches to studying. We want students to develop more informed views of how they can approach a studying activity. Gathering trace data provides opportunities for offering process feedback to students about their approaches and for helping them adapt their approaches to studying based on this feedback. Making students more aware of how they are studying is the first step in helping them develop alternative studying frameworks.
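The frequency, transition, and calibration analyses described in this section can be sketched roughly as follows. The function names and signatures here are illustrative assumptions, not LogReader's actual API; the trace is a toy event sequence.

```python
# Sketch of LogReader-style trace analyses: event frequencies, a transition
# matrix over consecutive events, and calibration against self-report counts.
# Names and signatures are assumptions, not LogReader's actual API.
from collections import Counter, defaultdict

def analyse_trace(events):
    """Count event frequencies and tally transitions between consecutive events."""
    freq = Counter(events)
    transitions = defaultdict(Counter)
    for current, nxt in zip(events, events[1:]):
        transitions[current][nxt] += 1
    return freq, transitions

def calibration(self_report, traced):
    """Per tool: bias = self-reported count minus traced count (positive means
    the student overestimated); accuracy = absolute value of the bias."""
    return {tool: {"bias": self_report.get(tool, 0) - traced[tool],
                   "accuracy": abs(self_report.get(tool, 0) - traced[tool])}
            for tool in traced}

trace = ["quicknote", "quicknote", "note", "quicknote", "highlight"]
freq, trans = analyse_trace(trace)
scores = calibration({"quicknote": 5, "note": 0}, freq)
```

The transition matrix captures the tactic-to-tactic patterns discussed above (e.g., quicknotes followed by notes), while the calibration scores quantify how far self-reports diverge from actual tool use.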

3. Online learning support for academic help seeking Online learning is becoming an inseparable part of human life at work, in education, and at home. It demands different degrees of adaptation from learners with different amounts of expertise, and learners inevitably encounter impasses; online learning tools must therefore emphasize built-in help facilities to handle such impasses. This section presents a case study [14] that addresses support for peer helpers in an online learning environment. The case highlights methodologies employed in a knowledge-based just-in-time help system called Helper's Assistant, which delivers context-specific and personalized help to a human helper (rather than to the helpee directly). Online help technologies range from sophisticated graphical interfaces that guide users to proactive and intelligent tutorial interactions. Introducing ready, able, and willing human helpers into help scenarios has proven to be an important milestone in help technologies. In this case study we contend that a well-defined context, one that encapsulates the relative knowledge, preferences, and task goals of the helper and the helpee, is integral to the success of help interactions. We discuss empirical results to highlight the need for context-awareness in help scenarios and argue how such contexts dynamically regulate the contributions of the conversants: the helper, the helpee, and the help system. Help systems have been investigated extensively for some time now. Houghton [11] reports different types of early help systems, including "command and help assistance", "error prompting", "online tutoring", "online documentation", and "help scripting". Online help investigations also include the impact of display format [3, 7], animation [4], graphics [32], and hypermedia [18] on help systems. Most contemporary software tools have generic help facilities, including metaphoric help (user-friendly interfaces) and online help (web manuals).
A few of them offer context-specific help, as in the Lumiere project [10]. Human help is inherently personalized, customized, and delivered exactly when needed. Help technology investigated under the aegis of Intelligent Tutoring Systems has so far been insufficient to duplicate the sophistication and depth of human help [15, 16]. Such attempts are limited by shortcomings in the context information, the inability to match a help request to an appropriate help response, and inadequacy in meeting time limitations. An ideal system would attempt to store or generate vast amounts of situated and individualized help information and to provide fast and structured access to it. Yet such an approach introduces unmanageable computational complexity, inadequate failure handling, deficient self-improvement, and inflexible generalization.

These shortcomings have been addressed to a great extent in recent help systems in which human helpers are introduced as an integral part of the help system [5, 6], aptly named the human-in-the-loop approach. A typical help scenario involves a user and the help system, where help is delivered through a dialog between the two. The human-in-the-loop approach brings in human helpers to complement the system. A helpee consumes help and a helper provides help. Adding a human helper to the mix can buttress the help system when it fails. This is particularly beneficial if the conversations among the helper, the helpee, and the help system are dynamically regulated based on the principles of mixed-initiative interaction. Human help is superior to machine help as long as the helper is competent and pertinent context is established between the person delivering help and the person receiving it, because human helpers understand subtle contextual cues better than any help system and can identify and deliver help responses within a reasonable time limit. Successful peer help among friends and colleagues is due to the establishment of personal context. A context is a shared understanding of the help requirement, and establishing a suitable context is the heart of the problem in machine help. Kumar [14] explores techniques and interfaces to support a human helper embedded in a human-computer help environment, where the help system is capable of acquiring context information, making useful knowledge-based help responses, and ensuring delivery of help within acceptable time limits. The human-in-the-loop approach, aided by task-specific, user-centric contexts, can support the development of a pragmatic help system that is intelligent and informed of the user, the tasks involved, the information used, the collaborative interactions, and the help resources.

3.1 Contexts Most contemporary help systems are content-rich and context-poor; that is, a help request can be resolved using a variety of information and tools, hence content-rich; however, it is difficult to deliver the help in a personalized fashion targeted to the user's needs, hence context-poor. A context expands the scope of information that enables the helper and the helpee to interact in a congenial fashion. Information contained within a context is, in most cases, localized; that is, the context information from one help session may not be relevant in another help session. Context information can be used during a help session in a variety of ways: to verify the suitability of the helper-helpee pair; to find out how much time the helper is willing to spend in a help session; to suggest help tools that the helpee would like to use; to suggest pedagogy (as part of the delivery of instructional strategies) that the helpee is comfortable with; to categorize the helper and the helpee in terms of their conceptual knowledge; and so on. Essentially, the context is used to ensure the success of the three-way dialogue between the helper, the helpee, and the help system. Typical help contexts contain information including ontological relationships among context elements, an instantiated knowledge base about the users, inference rules pertaining to the help request, the concepts addressed by the help request, the tasks of the helpee related to the help request, the preferences of the helpee, the helper, and the system, and finally a set of instantiated plans pertaining to the current help session. Helper's Assistant is a support tool for helpers in the domain of Java programming. A help request originating from a helpee initiates the help session and the help context. A help-context is construed as an extended representation of a help request and is a container for resources relevant to the help request.
The context also includes summaries of past help sessions that the helpee and the helper went through, based on the feedback and commentaries the system collects from the conversants at the end of each help session. Typically, a helpee creates a help request that contains a question, an expected type of response, and the corresponding material (such as a piece of Java code). In addition, the help request may also map onto a set of concepts in a concept map and associated keywords. In turn, each instantiated concept and keyword is associated with the knowledge/skill levels of the helper and the helpee. The context also identifies a list of tasks that the helpee is currently engaged in; for instance, "assignment submission", "exams", "in-class discussions", and "quizzes" are example tasks. Corresponding to each task, the context instantiates task models that capture the procedural knowledge associated with the task. The help-context in Helper's Assistant also records or infers the preferences of the conversants with respect to the type of help response, the mode of help delivery, and the form of help communication. Some example help response types are: debugging, pointer, short answer, discussion, explanation, analogy, rebuke, need more information, delay response, and provide clues. A helper can interact in three pre-defined modes in Helper's Assistant: offline, online, and just-in-time. Offline help involves asynchronous communication between the helpee and the helper using email or discussion boards as the media. Online help involves the helper sharing the helpee's workspace (or the communication channel) and helping him or her step through an ongoing task; that is, the helper remains available for the duration of the task. In the just-in-time mode, help is highly specific to the question raised by the helpee and is delivered in short bursts. The helper predominantly decides the form of the help response, either manual or automated.
However, the helpee and the system can propose and negotiate their respective preferences for the consideration of the helper. In the manual form of help, communication tools and interfaces are established between the helpee and the helper so that the helper can manually (personally) deliver help using them. In the automated form of help, Helper's Assistant provides the necessary help documents and help procedures to the helper, who then verifies the help material and lets the system deliver it to the helpee without any further involvement from the helper. A help-context is created for every help session in Helper's Assistant. Each help-context is classified into a type in order to associate the effectiveness of a help session with the help-context. The type of a help-context is a summarization (or signature) of help-context data deduced only from the instantiated concepts and the preferences of the helpee, the helper, and the system. An empirical study was conducted with the goal of estimating the effectiveness of help sessions with and without Helper's Assistant, among expert helpers (teaching assistants) and peer helpers (novices). Each helper responded to four help requests, handling two requests with assistance from Helper's Assistant and two without. The help requests were derived from four buggy Java solutions to an introductory programming problem. The help requests originated from two pre-assigned helpees, who were trained to ask a specific set of questions corresponding to the four help requests. Each helper was blinded to the identities of the helpees and of the other helpers; essentially, each helper was led to believe that the help requests and the follow-up questions were coming from real learners. The interactions of the helper and the helpee were videotaped. The mouse clicks, keystrokes, and browsing patterns of the helpers across different applications were also recorded and time-stamped. In addition, each helper and helpee was asked to complete a questionnaire at the end of each help session and to provide overall feedback.
The dialogues between the helper and the helpee were encoded independently by an external evaluator. Refer to [14] for a complete analysis of the study. Based on the results of the study, we emphasize the need for an explicit representation of the context for use by the conversants in the help environment. The current contextual representation of Helper's Assistant can be extended to include variables associated with mixed-initiative interactions. For instance, the help-context can record which of the conversants is currently in control of the interactions and why; it can advise whether a particular type, mode, or form of help interaction is beneficial; it can facilitate negotiation among the conversants with respect to preferences and the degree of contextual use; it can trace interactions with respect to specific theories of academic help seeking [19, 20, 21]; it can suggest specific goals to pursue with respect to factual, procedural, and conditional knowledge; it can negotiate with the helper on the effective use of working memory; and it can estimate values for the commitment, attitude, attention, and motivation of the conversants.
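To make the foregoing concrete, a help request and its surrounding help-context might be represented along the following lines. Every field name and value here is an illustrative assumption inferred from this section's descriptions, not Helper's Assistant's actual schema.

```python
# Hypothetical representation of a help request and its help-context;
# field names are assumptions, values follow the examples in the text.
help_request = {
    "question": "Why does my loop throw a NullPointerException?",
    "expected_response": "debugging",    # e.g. pointer, explanation, analogy
    "material": "for (int i = 0; i < a.length; i++) { sum += a[i]; }",
    "concepts": ["arrays", "null references"],   # mapped onto a concept map
    "keywords": ["NullPointerException", "for loop"],
}

help_context = {
    "request": help_request,
    "helpee_tasks": ["assignment submission"],   # tasks currently engaged in
    "skill_levels": {"helper": {"arrays": "expert"},
                     "helpee": {"arrays": "novice"}},
    "preferences": {"mode": "just-in-time",      # offline | online | just-in-time
                    "form": "manual"},           # manual | automated
    "session_summaries": [],   # appended from end-of-session feedback
}

# A help-context "type" (signature) deduced only from the instantiated
# concepts and the conversants' preferences, as described above.
context_type = (tuple(help_request["concepts"]),
                help_context["preferences"]["mode"],
                help_context["preferences"]["form"])
```

Grouping the signature this way lets the system associate the measured effectiveness of a session with every later session of the same context type.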

4. Online learning support for problem-solving This section presents a case study that considers programming as a problem-solving activity and characterizes requirements to incorporate problem-solving activities in online learning environments [16, 17]. Programmers cope with volumes of information as part of their day-to-day work. They obtain help from a variety of information sources such as program code snippets, in-line comments from other programmers, sticker reminders about their deadlines, scrap paper notes of the algorithm, phone messages regarding client consultations, reference books, book-marked websites for online reference, and emails/memos from the project manager. Obviously, the working memory of a programmer can be quite overloaded with respect to the variety, volume, and granularity of information that they deal with. Becoming an expert computer programmer potentially involves understanding (application context and possibility), planning (design), imaging (imagination and visualization), attitude (acceptance of work involved and confidence in completing projects), logic (conceptualization, language use, and knowledge), creativity (artistry) and work (persistence, exploration, purpose and commitment) [24], which are inherently cognitive activities. Thus, any improvement in programmers’ cognition can lead to improvement in the process of software development. This case study explores techniques based on the model of self-regulated learning (SRL) that programmers can exploit to enhance the use of working memory, to develop tactics to carry out task-level activities during programming, and to learn how to program more effectively in an online learning setup.

4.1 SRL and Programming Programmers cope with volumes of information as part of their day-to-day work. The working memory of a programmer can be quite overloaded with respect to the variety, volume, and granularity of information that they deal with. In this context, we explore techniques based on the model of self-regulated learning (SRL) that programmers can exploit to enhance the use of working memory, to develop tactics to carry out task-level activities during programming, and to learn how to program more effectively. We focus on the effects of SRL viewed from models of information processing [26] in programming, under the assumption that programmers are faced with huge volumes of information to be processed as part of their programming task.

Jambalsuren and Cheng [12] investigated novice programmers' learning performance in Visual Legacy Basic, an imperative programming environment designed for beginners to learn programming. Their system enables students to reach their goals without high cognitive load and with minimal effort. They pointed out the advantages of an Integrated Development Environment, which shapes how students think about and plan their programs. They also noticed that students often perceive programming courses as requiring significantly more work than other general courses, because novice programmers typically do not have a systematic plan for writing a program. Becoming an expert computer programmer potentially involves understanding (application context and possibility), planning (design), imaging (imagination and visualization), attitude (acceptance of the work involved and confidence in completing projects), logic (conceptualization, language use, and knowledge), creativity (artistry), and work (persistence, exploration, purpose, and commitment) [24]. Programmers typically need to apply their cognitive abilities to write a program. Assuming these cognitive abilities depend on previous learning, one can deduce that learning is a built-in part of programming. Programmers have to learn new information in their current programming task, such as the algorithm, data structures, programming styles, and coding standards, to develop the best possible code. SRL involves a recursive cycle of control and monitoring processes that are used during four phases: perceiving the task(s), setting goals and plans, enacting studying tactics, and adapting the tactics [26]. Control can be enhanced by acquiring more and more adaptive studying tactics. Self-monitoring involves evaluating outcomes against a person's standards.
These cognitive evaluations of matches and mismatches between a student's current outcomes and standards provide an impetus for learning according to information processing theory. A collection of specific features that characterize a process or an artefact is called a schema. Many schemas are formatted as a set of rules for carrying out tasks. For instance, experienced programmers have schemas that not only help them recognize strategic formations of program pieces, but also include sophisticated tactics for handling the interrelations among program pieces. Moreover, an automated schema is what is typically known as a skill. A tactic is a particular part of a schema that is represented as a rule in IF-THEN form, sometimes called a condition-action rule: IF a set of conditions is the case, THEN a particular action is carried out; if not, the learner's ongoing behaviour or way of interacting with the task proceeds unchanged. A strategy is a design or plan for approaching a high-level goal, such as mastering a new software system. A strategy coordinates a set of tactics. Each tactic is a potential tool for carrying out a strategy, but not all tactics that make up the strategy are necessarily enacted. Control, in the context of information processing models, is about giving direction to behaviour [2], that is, guiding information processing toward a goal held in the mind of a purposeful learner. Based on the results of self-monitoring, the learner may choose the direction to take to achieve his or her goal. Self-monitoring provides a window of awareness on one's functioning. Although self-consciousness can assist in making adaptations, it occupies mental capacity and, as a result, must be limited when seeking optimal performance. Information processing theorists assume that when performances become highly automated, learners can self-regulate without direct awareness, and this frees them to self-regulate at a higher level in a hierarchy of goals and feedback loops [34].
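The IF-THEN structure of a tactic can be illustrated with a toy sketch. The specific tactic, the state representation, and all names below are hypothetical; only the condition-action form follows the definition above.

```python
# A tactic as a condition-action (IF-THEN) rule: IF the condition holds,
# THEN the action fires; otherwise the state proceeds unchanged.
def make_tactic(condition, action):
    """Build a rule from a condition predicate and an action on state."""
    def tactic(state):
        return action(state) if condition(state) else state
    return tactic

# Hypothetical tactic: IF a code fragment is unfamiliar, THEN queue it
# for study (a strategy would coordinate several such tactics).
note_unfamiliar = make_tactic(
    condition=lambda s: s["fragment"] not in s["known"],
    action=lambda s: {**s, "to_study": s["to_study"] + [s["fragment"]]},
)

state = {"fragment": "recursion", "known": {"loops"}, "to_study": []}
state = note_unfamiliar(state)   # condition holds, so the action fires
```

A strategy would simply be a coordinated collection of such rules, of which only those whose conditions match a given situation are enacted.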
Task conditions refer to information about the task that the learner interprets from the outside environment, for example, resources (such as worked-out sample solutions and references), instructional cues, time, and social context. Cognitive conditions refer to information the learner retrieves from long-term memory, for instance, beliefs, dispositions, and styles; motivational factors and orientations; domain knowledge; knowledge of the task; and knowledge of study tactics and strategies. A product is new information created when information processes (e.g., searching, self-monitoring, assembling, rehearsing, and translating) manipulate existing information. Standards are qualities that products are supposed to have; a schema made up of standards is a goal. SRL comprises four phases. In the first phase, a learner processes information about the conditions that characterize an assigned or self-posed task. Once information about task conditions and cognitive conditions is active in working memory, the learner amalgamates it to construct an idiosyncratic definition of the task at hand. In the second phase, the learner frames a goal and assembles a plan to approach it. Goals are viewed as multifaceted, multivariate profiles of standards; in other words, goals should be continuously monitored and updated to fit those standards. In the third phase, the learner applies the tactics and strategies identified in phase two as part of executing the task. Phase four is optional: here, the learner adapts the schemas that structure how self-regulation is carried out. This is accomplished in three ways: (1) by accreting or deleting conditions under which operations are carried out, or by adding or replacing the operations themselves; (2) by tuning the conditions that articulate tactics within strategies; and (3) by significantly restructuring cognitive conditions, tactics, and strategies to create quite different approaches to tasks [22].
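The four-phase cycle above, including the optional fourth phase, can be sketched as a simple state transition. This Java sketch is an illustrative assumption (the enum and the monitoring flag are ours, not the paper's): self-monitoring compares products against standards, and a mismatch routes the learner through the adaptation phase.

```java
public class SrlCycle {
    // The four SRL phases; ADAPT (phase four) is optional.
    enum Phase { DEFINE_TASK, SET_GOALS_AND_PLANS, ENACT_TACTICS, ADAPT }

    // One step of the recursive cycle. After enacting tactics, monitoring
    // compares products against standards: a match skips adaptation and
    // re-enters the cycle on a new task; a mismatch triggers phase four.
    static Phase next(Phase current, boolean productsMeetStandards) {
        switch (current) {
            case DEFINE_TASK:         return Phase.SET_GOALS_AND_PLANS;
            case SET_GOALS_AND_PLANS: return Phase.ENACT_TACTICS;
            case ENACT_TACTICS:
                return productsMeetStandards ? Phase.DEFINE_TASK : Phase.ADAPT;
            default:                  return Phase.DEFINE_TASK; // ADAPT loops back
        }
    }

    public static void main(String[] args) {
        // A learner whose products miss the standards enters adaptation:
        System.out.println(next(Phase.ENACT_TACTICS, false)); // prints: ADAPT
    }
}
```

The boolean flag stands in for the richer evaluation of matches and mismatches between products and standards described above.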

4.2 Experimental Results

We conducted a two-group, posttest-only randomized experiment [17, 23] to investigate the effects of self-regulated learning in programming. Forty programmers participated. Their programming experience ranged from a minimum of two years (nine months in Java) to a maximum of ten years (five years in Java). We randomly divided the participants into two twenty-person groups under identical conditions: those who wrote a program using our proposed software tool and applied our guidelines (the treatment or experimental group), and those who wrote the same program without any interventions (the control group). One common software metric in programming is the number of logical lines of code (LOC). Unlike physical LOC, logical LOC counts only valid statements in the programming language (in this experiment, Java). Because logical LOC is the more accurate and more widely used measure, the evaluator was asked to extract logical LOC. The evaluator counted lines in four categories: Overall LOC, all valid Java statements; Useful LOC, Overall LOC excluding irrelevant lines; Data Structure LOC, the number of lines devoted to implementing the data structures; and I/O LOC, the number of lines devoted to implementing input/output. To calculate coding speed, we divided each participant’s LOC by his or her coding time. The experiment revealed that members of the treatment group outperformed their counterparts in the control group in all categories except I/O. The evaluator examined all the programs and compile logs and graded them on lines of code, understanding of the problem, number of errors, efficiency of the debugging process, completeness, and other criteria. In addition to this thorough, in-depth evaluation for each participant, the evaluator provided a list of performance grades ranging from 0 to 100, based on his subjective judgement of the programs informed by his several years of programming experience; he also wrote a report on each participant justifying his judgement. The results show that the treatment group outperformed the control group: the mean grade for the control group participants was 43.41, while the treatment group participants’ mean was 54.39.
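The LOC metrics above can be sketched in code. This is a minimal, assumed approximation (the class and method names are ours, and the comment-skipping heuristic is deliberately rough): it counts a line as logical LOC only if it is non-blank and not a pure comment line, and computes coding speed as LOC divided by coding time.

```java
import java.util.List;

public class LocMetrics {
    // Rough logical LOC: skip blank lines and pure comment lines, keeping
    // only lines that look like valid Java statements. A real logical-LOC
    // counter would parse statements rather than inspect line prefixes.
    static long logicalLoc(List<String> lines) {
        return lines.stream()
                .map(String::trim)
                .filter(l -> !l.isEmpty())
                .filter(l -> !l.startsWith("//") && !l.startsWith("/*")
                          && !l.startsWith("*"))
                .count();
    }

    // Coding speed as used in the experiment: LOC divided by coding time.
    static double codingSpeed(long loc, double minutes) {
        return loc / minutes;
    }

    public static void main(String[] args) {
        List<String> source = List.of(
            "// read input",
            "int n = sc.nextInt();",
            "",
            "int[] a = new int[n];");
        System.out.println(logicalLoc(source));     // prints: 2
        System.out.println(codingSpeed(100, 50.0)); // prints: 2.0
    }
}
```

Useful LOC, Data Structure LOC, and I/O LOC would each apply a further filter over the same logical lines, which in the experiment required the evaluator's judgement rather than an automatic rule.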
The participants in the control group spent more time on coding and less time on warm-up and thinking; reading times for the two groups were very close. Note that the allotted time was 100 minutes. Because a few participants experienced external failures beyond our control, such as Internet disconnections and PC restarts, we granted them extra time. In addition, after the 100 minutes, every participant was given at most 5 minutes to wrap up; since participants used this extra time differently, total programming times varied in the neighbourhood of 100 minutes. As a result, we scaled all timing values to the range 0 to 100. The results indicate a statistically detectable difference for Warm-up (T (1, 40) = -3.071, p
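The rescaling step described above can be sketched as a proportional normalization. This Java sketch is an assumed reconstruction (the method name and the sample numbers are ours): each participant's phase times are scaled so that their total maps to 100, preserving the proportions among warm-up, reading, thinking, and coding.

```java
public class TimeScaling {
    // Scale a participant's phase times so the total maps to 100,
    // preserving the relative share of each phase.
    static double[] scaleTo100(double[] phaseTimes) {
        double total = 0;
        for (double t : phaseTimes) total += t;
        double[] scaled = new double[phaseTimes.length];
        for (int i = 0; i < phaseTimes.length; i++) {
            scaled[i] = phaseTimes[i] * 100.0 / total;
        }
        return scaled;
    }

    public static void main(String[] args) {
        // A hypothetical participant who ran to 105 minutes overall:
        double[] scaled = scaleTo100(new double[] {21.0, 42.0, 42.0});
        System.out.println(java.util.Arrays.toString(scaled));
        // prints: [20.0, 40.0, 40.0]
    }
}
```

This keeps the between-group comparison fair even though a few participants' raw totals exceeded 100 minutes.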
