INTERNATIONAL JOURNAL of COMPUTERS, COMMUNICATIONS & CONTROL With Emphasis on the Integration of Three Technologies Year: 2007 Volume: II Number: 1 (January-March) CCC Publications www.journal.univagora.roEDITORIAL ORGANIZATION Editor-in-Chief Prof. Florin-Gheorghe Filip, Member of the Romanian Academy Romanian Academy, 125, Calea Victoriei 010071 Bucharest-1, Romania, [email protected] Executive Editor Managing Editor Dr. Ioan Dzi¸tac Prof. Mi¸su-Jan Manolescu Agora University Agora University [email protected] [email protected] Editorial secretary Horea Oros Emma Valeanu ˘ University of Oradea Agora University [email protected] [email protected] Publisher & Editorial Office CCC Publications, Agora University Piata Tineretului 8, Oradea, jud. Bihor, Romania, Zip Code 410526 Tel: +40 259 427 398, Fax: +40 259 434 925, E-mail: [email protected] Website: www.journal.univagora.ro ISSN 1841-9836 (print version), 1841-9844 (online version) EDITORIAL BOARD Prof. Pierre Borne Prof. Dr. Petre Dini Ecole Centrale de Lille Cisco Cité Scientifique-BP 48 170 West Tasman Drive Villeneuve d’Ascq Cedex, F 59651, France San Jose, CA 95134, USA [email protected] [email protected] Prof. Antonio Di Nola Prof. Ömer Egecioglu Department of Mathematics and Information Sciences Department of Computer Science Università degli Studi di Salerno University of California Salerno, Via Ponte Don Melillo 84084 Fisciano, Italy Santa Barbara, CA 93106-5110, U.S.A [email protected] [email protected] Prof. Constantin Gaindric Prof. Xiao-Shan Gao Institute of Mathematics of Academy of Mathematics and System Sciences Moldavian Academy of Sciences Academia Sinica Kishinev, 277028, Academiei 5, Republic of Moldova Beijing 100080, China [email protected] [email protected] Prof. Kaoru Hirota Prof. George Metakides Hirota Lab. Dept. C.I. & S.S. University of Patras Tokyo Institute of Technology, Universiy Campus G3-49, 4259 Nagatsuta, Midori-ku, 226-8502, Japan Patras 26 504, Greece [email protected] [email protected] Prof. Shimon Y. Nof Dr. Gheorghe Paun ˘ School of Industrial Engineering Institute of Mathematics Purdue University of the Romanian Academy Grissom Hall, West Lafayette, IN 47907, U.S.A. Bucharest, PO Box 1-764, 70700, Romania [email protected] [email protected] Prof. Mario de J. Pérez Jiménez Prof. Dana Petcu Dept. of CS and Artificial Intelligence Computer Science Department University of Seville Western University of Timisoara Sevilla, Avda. Reina Mercedes s/n, 41012, Spain V.Parvan 4, 300223 Timisoara, Romania [email protected] [email protected]. Radu Popescu-Zeletin Prof. Imre J. Rudas Fraunhofer Institute for Open Communication Systems Institute of Intelligent Engineering Systems Technical University Berlin Budapest Tech Germany Budapest, Bécsi út 96/B, H-1034, Hungary [email protected] [email protected] Prof. Athanasios D. Styliadis Dr. Gheorghe Tecuci Alexander Institute of Technology Center for Artificial Intelligence Thessaloniki George Mason University Agiou Panteleimona 24, 551 33, Thessaloniki, Greece University Drive 4440, VA 22030-4444, U.S.A. [email protected] [email protected] Prof. Horia-Nicolai Teodorescu Dr. Dan Tufi¸s Faculty of Electronics and Telecommunications Research Institute for Artificial Intelligence Technical University “Gh. Asachi” Iasi of the Romanian Academy Iasi, Bd. Carol I 11, 700506, Romania Bucharest, “13 Septembrie” 13, 050711, Romania [email protected] [email protected] This publication is co-financed by: 1. Agora University 2. 
The Romanian Ministry of Education and Research / The National Authority for Scientific Research CCC Publications, powered by Agora University Publishing House, currently publishes the “International Journal of Computers, Communications & Control” and its scope is to publish scientific literature (journals, books, monographs and conference proceedings) in the field of Computers, Communications and Control. Copyright ° c 2006-2007 by CCC PublicationsInternational Journal of Computers, Communications & Control Vol. II (2007), No. 1 Contents Crossing the Rubicon: A Generic Intelligent Advisor By Razvan Andonie, J. Edward Russo, Rishi Dean ˘ 5 The Moments in Control: a tool for Analysis, Reduction and Design By Abdelmadjid Bentayeb, Nezha Maamri, Jean-Claude Trigeassou 17 Deformable Atlases for the Segmentation of Internal Brain Nuclei in Magnetic Resonance Imaging By Marius George Linguraru, Miguel Ángel González Ballester, Nicholas Ayache 26 International Conference on Virtual Learning – ICVL 2006 By Grigore Albeanu 37 Virtual Communities and their Importance for Informal Learning By Antonios Andreatos 39 Advances in Intelligent Tutoring Systems: Problem-solving Modes and Model of Hints By Alla Anohina 48 Advancing Electronic Assessment By Nikolaos Doukas, Antonios Andreatos 56 Development of An Algorithm for Groupware Modeling for A Collaborative Learning By Ikuo Kitagaki, Atsushi Hikita, Makoto Takeya, Yasuhiro Fujihara 66 A Methodology for Providing Individualised Computer-generated Feedback to Students By Michael Lambiris 74 A Software System for Online Learning Applied in the Field of Computer Science By Gabriela Moise 84 MAS_UP-UCT: A Multi-Agent System for University Course Timetable Scheduling By Mihaela Oprea 94 MANAGEMENT INFORMATION SYSTEMS: Managing the Digital Firm - 9th edition, authors: Keneth C. Laudon and Jane P. Laudon (Book Review) By Florin G. Filip 103 Author index 106International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 5-16 Crossing the Rubicon: A Generic Intelligent Advisor Razvan Andonie, J. Edward Russo, Rishi Dean ˘ Abstract: Recommender systems (RS) are being used by an increasing number of e-commerce sites to help consumers find the personally best products. We define here the criteria that a RS should satisfy, drawing on concepts from behavioral science, computational intelligence, and data mining. We present our conclusions from building the WiseUncle RS and give its general description. Rather than being an advisor for a particular application, WiseUncle is a generic RS, a platform for generating application-specific advisors. Keywords: Recommender systems, electronic commerce, user interface, user modeling. 1 Introduction E-commerce sites use RS to guide visitors through the buying process by providing customized information and product recommendations. Some actual online recommender systems are described in [26, 31, 32]. Several well-known e-commerce businesses use, or have used, RS technology on their web sites: Amazon, Travelocity, BMW, MovieFinder, and Dell among them. Although commercial RS use began several years ago, we are still only beginning to use such systems on a large scale. Overviews of the relatively short history of RS and the techniques used may be found in [16, 30]. What is an “intelligent RS"? We will consider as intelligence the use of artificial intelligence features, such as adaptation, integration of learning algorithms, explanation, and case-based planning. 
Such an intelligent product search engine for online catalog sales is Analog Devices [1], developed at University of Kaiserslautern. We begin by specifying the performance goals of a RS. This delineation of what a successful RS needs to be able to do leads to an analysis of the criteria that a successful RS must meet. Then we describe our own RS, named WiseUncle. The paper concludes with the results of the preliminary tests of WiseUncle performed by three e-commerce businesses. A preliminary paper describing our results may be found in [2]. 2 Goals The most telling sign of a RS’s success is that it is used. On the one hand, the seller finds enough benefits to mount it on the company’s website. On the other hand, site visitors find it both intelligent and trustworthy enough to purchase its recommended product. Both consumers and seller have goals that a RS must help them achieve. Many of these goals are closely linked because unless the consumer is satisfied by the interaction with the RS, a sale will not be made and the RS’s main benefit to the seller is lost. Thus, the seller must provide a RS that meets the consumer’s goals. 2.1 Consumer Goals Consumers vary greatly in their product knowledge. Yet no matter how little they know and how few of the RS’s queries they can respond to, they want an accurate, believable, and customized recommendation of the product they should buy. The overarching task of any RS is to start with the knowledge a consumer can provide and end with a recommendation of the best product tailored to that particular individual. Said differently, a RS bridges the gap between what consumers know and the one fact that Copyright ° c 2006-2007 by CCC Publications6 Razvan Andonie, J. Edward Russo, Rishi Dean ˘ they want to know, the single best product for them. To accomplish this task, the RS must be built with knowledge from experts about both consumers and products and from mining the data of past purchases. Even after all this knowledge has been embedded in a RS, the resulting recommendation must not only be accurate, but also credible to the potential buyer. It will do neither buyer nor seller any good if the former doesn’t believe the recommendation, even one that is truly insightful and fully matches the buyer’s needs. 2.2 Seller Goals Sellers also want the RS to deliver an accurate recommendation. (One reason, infrequently acknowledged, is the need to minimize exposure to lawsuits by buyers who feel that they were misled by an incompetent RS.) Because the RS is mounted on a website, sellers need it to function independently of any human assistance (sometimes just because humans are expensive). This desired independence requires that the RS must deal with whatever information the consumer can provide, however sparse or inconsistent. It must also function in real time. And it must do all this for the complete set of (possibly) millions of available products, as for automobiles, personal computers, multi-location vacations, and other products constructed from components. For a related kind of large problem for a RS, see [15]. Finally, sellers have goals beyond an immediate purchase. They also want a RS to help improve the long-term business relationship. Thus, an ideal RS should facilitate site visitors’ return to the website (even if they did not purchase), incite customers to recommend the website to acquaintances, and build a positive image for the company. 
A technologically sophisticated and customer-friendly website is naturally associated with a technologically sophisticated and customer-friendly company. In addition, a RS can bring valuable information to the company’s marketers, enabling them to improve their products, prices and promotions. For instance, through customer profiling or market segmentation, a RS can help a business decide to whom to send a customized offer or promotion. 3 Criteria In this section we list the main criteria that a RS must satisfy. We partition them into two parts, those relating to system design and those more pertinent to the user “conversation". However, because the two are linked, our distinctions are necessarily imprecise. A different set of criteria can be found in [12]. 3.1 System design criteria Knowledgeable A successful RS must utilize both global and local knowledge of both products and customers. By global customer knowledge we mean knowledge that is independent of the particular product domain, such as knowing how people make purchase decisions or how best to “converse" with them online. Global product knowledge includes the product’s attributes, their functionalities, and anything else that is independent of the particular seller’s offerings. A domain expert in personal computers, for instance, knows video cards, their performance capabilities (e.g., which type of card is needed for displaying photographs, playing video games, or showing movies), their approximate prices, and even how rapidly their technology is changing. Local customer knowledge refers to the ability to link a customer’s personal needs, uses, and preferences to the focal product. Thus, a consumer may have a high need for status among coworkers that might influence many purchases. However, local consumer knowledge enables a RS to direct the consumer toward a particular product, such as a personal computer with the image of the latest and greatest technology or an automobile that signals status (at least to the target audience of coworkers). Local product knowledge focuses on the vendor’s offerings. Such knowledge not onlyCrossing the Rubicon: A Generic Intelligent Advisor 7 includes models, styles, components, add-ons, etc., but also which components can be configured with others and the moment-to-moment availability of any recommended product or special offers. Thus, a RS should be connected to the vendor’s product database and to the pertinent marketing reporting systems. The volatile nature of both real-world products like personal computers and vendors’ information systems requires adequate database maintenance in the RS. Finally, a successful RS must not only integrate all four types of knowledge, but it must make all of this knowledge visible to the user. That is, this knowledge should not only be “under the hood" of a good recommender system, but users must be able to see how the full array of knowledge is used and why the resulting recommendation has been intelligently customized to their personal needs. Viable RS can be constructed from less than the full array of sources of knowledge just described. However, some functionality will be sacrificed. For instance, global consumer knowledge is used by RS that require users to input numerical estimates like the importance weights of the main product attributes. Such systems more or less ignore the decision process that consumers prefer to use (i.e., a part of global consumer knowledge). 
More extremely, a RS can be built with essentially no expert knowledge relying on data mining alone. Using only a record of past products purchased and the assignment of consumers to market segments, a RS can recommend products that were sold successfully in the past to each segment, a kind of popularity poll based on market share. However, the customization to a single consumer is likely to be crude. Note that recommendations based on expert knowledge are much better at dealing with new products, whereas market share rankings can only be used for products that have been on the market and changed little over time. Customization The task of making a customized recommendation requires the RS to know how to draw multiobjective comparisons among products. This is a classic challenge for any decision maker, human or software. It is made more complex for a RS because the information extracted from a potential purchaser is limited compared to that needed for at least some ideal solutions [13, 6, 11]. Nonetheless, the RS must somehow trade off the relative importance of different product attributes or features in order to achieve the best customization. Many automobile buyers face a difficult tradeoff between gas mileage and safety. The former points toward smaller and lighter vehicles, while the latter is associated with larger and heavier ones. Scalability and real-time performance In order to meet the requirements of an Internet application, a RS should be scalable and able to work in real-time, even for very large problem sizes. For instance, a recommender system connected to a large web site must produce each recommendation within a few tens of milliseconds while serving thousands of consumers simultaneously and searching through potentially millions of possible product configurations. Imperfect data A RS should be robust in the face of data that are uncertain, noisy, sparse, or missing. One source of value in an intelligent RS lies in the fact that most customers have not deeply considered many of the available products and their product features, or are unable to recognize and express all of their personal needs related to the product. However, the downside to this accommodation to users’ limited knowledge is that we must often deal with sparse or missing data that result from a customer responding, “I don’t know", “It doesn’t matter to me", or “I don’t want to answer this question". In addition, the less consumers know about the product, about their personal needs, and especially about the links between them, the more often the RS will have to resolve inconsistent input.8 Razvan Andonie, J. Edward Russo, Rishi Dean ˘ Connection to database RS should be connected to the vendor’s product database and the marketer’s reporting systems. Only products that can be configured in a feasible manner should be recommended. And of these legitimate products, only those that are currently in stock should be recommended. The highly volatile nature of real-world products and information systems requires adequate database maintenance in the RS. Retention of customer data During each interaction with a customer, the RS extracts knowledge from the customer that should be used to build and update a customer profile. The RS should save this profile so that it, and aggregations of profiles, can be “mined" for marketing-relevant knowledge. That knowledge may be as straightforward as a record of products purchased by different customer segments. 
Alternatively, it may be as complex as inferring underserved segments in a space of consumers. It may even include the results of deliberate experimentation with, for example, the bundling of components, price discounts, complementary products, and so forth. The point that is relevant to a successful RS is that it must be able to retain enough of the knowledge that it gains, such as customer profiles, to be able to provide useful answers to a range of questions asked by marketing and product managers. Learning The RS should be able to improve its functionality by continually learning from its interactions with consumers. This learning can be thought of as reward-and-penalty feedback. After selling a recommended product, the RS will enhance that type of recommendation for customers with similar profiles. When the recommended product is not purchased, the RS will make that recommendation less probable for the next similar customers. This learning depends on the availability of sales data. Another type of interactive learning comes from the conversation with the customer. If customers find the conversation too long or uninteresting, they will not complete the recommendation process and will force a partial recommendation. The RS should adapt to this situation and improve future conversations with similar customers. Domain independence Ideally, a RS should be largely domain-independent, so that with minimum modification one should be able to customize the same platform for other applications (e.g., selling computers, cars, financial services, or vacations). Inter alia, this means that the system design should clearly separate the generic part from the domain-specific knowledge modules. 3.2 User conversation criteria The RS's customer interface should be based on the psychology of the consumer and the purchase decision process [14]. Therefore, behavioral science techniques should guide the customer dialog. That conversation with the user should be based on how consumers actually think, rather than forcing them to feed an optimization algorithm. By speaking the language of the user, a successful RS provides welcome relief to those users from some of the complexity of the purchase decision. The customer-recommender interface is usually based on a series of interactive questions presented to the customer by the RS, accompanied by multiple-choice options for the customer to input their answers. For an alternative conversational structure, see [10]. One task is to devise an optimal strategy to select the next question to be posed to the user. An intelligent dialog should be personalized, with a subsequent question based on the responses to all prior ones. That still leaves the challenge of determining what exactly is meant by optimal. One reasonable proposal is to minimize customer effort, typically measured as the duration of the dialog, while still being able to make accurate predictions [34, 35, 36]. However, this strategy may be based on too simple a behavioral criterion. We believe that the standard should not be customer effort only, but overall customer satisfaction. The quantification of overall satisfaction must be derived from longer-term statistics on system usage and surveys of customers. One twist on the optimization of question sequence is inserting a small amount of randomness when selecting the questions.
This may help to extend the space over which the recommender understands the customer’s interests and ensures that all questions are occasionally presented to customers. During the conversation, the WiseUncle RS adopts a five-stage process for making the purchase decision described by Horowitz and Russo [25]. The stages are: Opening; Utilitarian Needs; Hedonic Preferences; Optional Features or Add-ons; and Endgame. Stage 1, Opening, frames the buyer (e.g., the buyer’s level of knowledge of the product category and extent of product search to date). It also identifies the buyer’s preference among gross product characteristics (e.g., a desktop versus a laptop personal computer or a one-week versus a two-week cruise). Stages 2 and 3 encompass, respectively, the utilitarian and hedonic or emotional needs [3]. The former include the functional uses of the product, such as an automobile’s seating capacity or environmental friendliness or whether a personal computer is going to be used by teenagers or for showing movies. Stage 3’s hedonic needs, like the image of a car’s brand name or body style, are often harder for a buyer to express. Thus, it can be a substantial behavioral challenge not only to use such knowledge to identify the best product but also to extract hedonic knowledge from the consumer. Stage 4, Optional Features or Add-ons, captures the remaining, minor product specifications, like an automobile’s audio system or aspects of its interior. The final stage, Endgame, covers such external elements as the local availability of reliable repair service for an automobile, a PC’s warranty, or travel insurance for a vacation cruise. These five stages are sufficient to structure the process of a purchase decision for even the most complex products. The following factors contribute to the success of such conversations in Internet-mediated dialogs [24]. They form the remaining criteria in the design of a RS. The benefits of a conversation exceed its costs People use information only if it is perceived as adding benefits or as reducing costs. If (expected) costs exceed (expected) benefits at any point, there is a clear risk of the customer terminating the dialog. Thus, a criterion for a successful conversation is site visitors continually receiving some benefit during, not only after, the interactions/dialog. Credibility and trust Information and advice must be credible, and the source must be trustworthy [7, 8, 5]. An Internetdelivered RS cannot provide the face-to-face cues of trustworthiness that a human can. However, although a RS may have no initial reputation for trust (based on past experience), such an image can be built over time by personal usage, word-of-mouth recommendations, or public endorsements (e.g., the endorsement of the system’s knowledge and disinterestedness in consumer-oriented columns in newspapers and magazines). One alternative is to add a confidence metric (like the Average Customer Rating used by Amazon), and this has the potential to improve user satisfaction and alter user behavior in the RS [27]. A second alternative is to make the RS adaptive. This would reduce the risk of manipulation: users can detect systems that manipulate their predictions, something that that has a negative impact on their trust [28].10 Razvan Andonie, J. Edward Russo, Rishi Dean ˘ Intelligence The RS must display intelligence in the conversation. For instance, it must know what kinds of information people can validly provide and how to successfully extract that information from buyers. 
Consumers can usually say what they need or want the product to do and can often articulate such personal preferences as style and color. However, they may have difficulty specifying the product features that meet those needs. Intelligence can also be demonstrated by prompting users about needs that might be overlooked. Such needs may include those of others (family members who will use the new personal computer), future needs (an automobile that suits the needs of an elderly parent who may visit regularly), and unanticipated needs (especially first time product users like vacationers to an “adventure" destination). Intelligence is manifest in many small ways, such as clearly remembering the answer from a previous query and incorporating it into a subsequent question. It can also appear as the rephrasing of questions to reflect the consumer’s level of knowledge in the product domain. Control Customers should be able to request additional or explanatory information. Or as the conversation proceeds, the customer may learn something that requires returning to an earlier point in the dialog and changing a preference stated there. Buyers who feel impatient should be able to request a recommendation at any time, even before the RS would normally feel comfortable providing one. Finally, the buyer might even like to suspend the conversation and return later. More control in any situation is empowering, and more so in situations where control is expected. Providing satisfactory conversational control is a special challenge to RS. Feedback A desire for feedback in a conversation is natural, so appropriate kinds enhance user acceptance of a dialog with a RS. Specific feedback might include (a) how much progress has been made toward identifying the best product, and (b) how much longer the conversation is expected to take. Whatever specific feedback options are provided, however, users do not want to receive feedback only after they have answered every question (as they must in many static surveys). How might we know that a RS fully meets all of the above criteria? That question has two answers, a real and an ideal. The real answer is that it depends on customer acceptance. This is, we can try to build into the RS all of the desired capabilities, but only users can validate success. We may believe that our system is intelligent or provides plenty of feedback, but if its users believe that the system has failed to do so, then it has. Working in the RS’s favor is that users often judge a system’s performance relative to that of the other systems they know. They do not expect the WiseUncle system described below to be as good as a genuine human wise uncle, someone who is very knowledgeable and is on their side. Thus, superior performance rather than perfection may be sufficient for success. The ideal criterion is a kind of Turing test for RS. That is, ideally the potential buyers would not be able to determine whether they were interacting with a RS or with a (very fast) human advisor who was knowledgeable and on their side (sometimes called a human concierge [9]). This ideal is not as impossible to achieve as it may first sound. No human advisor is perfect, so the RS doesn’t have to be either. We do not expect this ideal goal to be achieved soon by any RS. However, it may be worth keeping in mind as an ideal for system designers. 4 System description Rubicon is a generic domain-independent advisor, recommending products from an existing set. 
Each product is configurable, meaning it is composed of several components, which may each be described in turn by several attributes. Building a RS depends largely on the knowledge representation model, and we chose a computational intelligence framework. Our RS is a classifier that "learns" to make good recommendations. This classifier is an expert system, able to explicitly expose its acquired knowledge. The main characteristics of Rubicon are:
• The inferential process from the customer's needs to the best product is constructed in two stages, called Bridges, one from needs to product attributes, and the second from attributes to the products themselves.
• It can easily be customized for different applications, since the interface to the application-specific knowledge domain is separated from the main system.
• The front-end dialog is dynamic and changes based on user responses. It elicits information from users to determine a location in a needs space, which is then used to find optimal (or sub-optimal) solutions in a products space.
• It accepts imprecise input from users.
• It provides a justification for all recommendations.
• Reversibility: the system can reverse the decision process from effect to cause. This allows forecasting the adoption of new products or services using real customer decision data.
Figure 1: Rubicon High Level System Description
The Rubicon system diagram (Fig. 1) shows the following main modules.
4.1 Conversation Engine (CE)
The CE is responsible for dialog management, presenting questions to the user and processing the resulting responses sent to it by the user. Questions and their associated responses are processed to accomplish the following two results: i) propagate the knowledge gained from a response to the subsequent inference mechanisms, and ii) determine the next question to pose to the user. In doing so, the dialog management occurs subject to the following constraints:
• Presents the appropriate questions for the system to confidently determine an intelligent, personalized recommendation.
• Presents questions conforming to users' expectation of a real dialog with respect to flow, organization, and coherence.
• Minimizes the number of questions presented.
• Enables scalable addition, subtraction, and modification of questions.
• Allows users and administrators to reproduce particular dialogs.
• Uses proper constructs for further data mining.
From a computational standpoint, a rule-based expert system is used to implement the CE's dialog management process. Questions and responses are linked by sets of predetermined rules and a number of other intermediary constructs. In this way, the questions, responses, and rules can be specified, along with goals (i.e., knowledge to be gathered), independently of knowing the dialog flow in advance. At runtime the system determines, based on the behavioral and informational goals, which question to present next to the user. When all appropriate questions have been presented, the conversation is determined to be complete. However, the user may intervene at any time to ask for the system's current best recommendation based on the information provided thus far.
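The rule-driven dialog management just described can be pictured with a small sketch. The following Python fragment is only an illustration under assumed data structures (a QUESTIONS table, a GOALS set, an ask callback); it is not Rubicon's implementation, but it shows how questions, rules and informational goals declared independently of the dialog flow can drive the runtime choice of the next question, including the small dose of randomness suggested in Section 3.2.

```python
import random

# Hypothetical question table: each question is gated by a rule over the answers
# collected so far and contributes to one informational goal.
QUESTIONS = [
    {"id": "use",    "goal": "utilitarian", "text": "What will the PC mainly be used for?",
     "rule": lambda a: True},
    {"id": "movies", "goal": "utilitarian", "text": "Will you watch movies on it?",
     "rule": lambda a: a.get("use") == "home"},
    {"id": "style",  "goal": "hedonic",     "text": "How important is a sleek design to you?",
     "rule": lambda a: "use" in a},
    {"id": "budget", "goal": "constraints", "text": "What is your approximate budget?",
     "rule": lambda a: True},
]
GOALS = {"utilitarian", "hedonic", "constraints"}

def run_dialog(ask, epsilon=0.1):
    """Ask questions until every informational goal is covered or nothing admissible is left.
    'epsilon' injects a little randomness so that every question is occasionally presented."""
    answers, covered, asked = {}, set(), set()
    while covered != GOALS:
        candidates = [q for q in QUESTIONS
                      if q["id"] not in asked and q["goal"] not in covered and q["rule"](answers)]
        if not candidates:
            break                               # no admissible question left: partial profile
        q = random.choice(candidates) if random.random() < epsilon else candidates[0]
        asked.add(q["id"])
        reply = ask(q["text"])                  # an empty reply models "I don't know" / skip
        if reply:
            answers[q["id"]] = reply
            covered.add(q["goal"])
    return answers                              # needs profile handed on to the Inference Engine

# Example usage (interactive): profile = run_dialog(input)
```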
4.2 Inference Engine (IE)
The purpose of the IE is to map the user's profile of needs (the output of the CE) to the attributes necessary to compose the appropriate recommendation. Given a set of responses resulting from the dialog, the IE can indicate a set of recommendations, ordered by degree of preference. These recommendations are not concrete (physical) product recommendations yet, but a mapping from the user needs space to the space of attributes, yielding generic descriptions of the product, like "RAM Amount" (e.g., standard, large, maximum) and "Network Card type". Collectively, this inference is called the First Bridge. The IE is taught by a human expert. However, it can also learn incrementally: new teaching examples can be added without restarting the teaching process from the beginning. Conditional rules can be extracted to describe the behavior of the IE, to justify recommendations, and to support market research and performance improvement. The IE is stable under noisy inputs and user uncertainty. Such "noise" may be produced by "I don't know" answers or by contradictory answers in the dialog. A neuro-fuzzy network incorporates fuzzy, rather than crisp, membership functions in its structure and has the ability to learn when presented with training data [33]. After training, such a network can be used as a classifier. The fuzzy neural network classifier builds decision boundaries by creating subsets of the pattern space. It is a model-free estimator and does not make assumptions about how outputs depend on inputs. Instead, it adjusts itself to a given training set by learning algorithms and decides the boundaries of classes. When given an unknown pattern, the fuzzy inference network classifier uses the learned knowledge to estimate the membership value of this pattern in each class and classifies the input pattern according to the membership values. To implement the IE, we have used the fuzzy neural net architecture introduced in [33], trained to represent the expert knowledge of a particular product domain. For instance, in the case of a personal computer RS, experts develop training patterns to represent the varying needs profiles of customers along with the corresponding feature sets for a recommended PC. The inference process is fast and runs online.
4.3 Product Search Engine (PSE)
The PSE is the Second Bridge, a mapping from the space of attributes to the space of (physical) products. It is an optimization module interfacing with the retailer's product database to select the best valid product configurations that match the criteria specified by the user, such as the minimum cost, the maximum likelihood of success, or a number of other simultaneous criteria. The inputs to the PSE are the levels of the attributes (the output of the IE), the configuration constraints (i.e., incompatibilities among components), and a user's criteria for optimization (e.g., a desired price point). These criteria for the algorithm can be set by the IE and CE and are, therefore, uniquely tailored to a given user. The PSE can sort through billions of options in real time, allowing searches to be completed online. The products with the highest degree of fit are passed to the Justification Engine for further processing. The response to a question is subsequently used to provide more information that adds to Rubicon's knowledge base during a user experience. This, in turn, leads to a recalculation and optimal selection of the next most appropriate question. This question-response model continues until Rubicon is either asked, or has sufficient confidence, to make a recommendation.
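The Second Bridge just outlined can be pictured with a small sketch. Everything below (the catalog fragment, field names and weights) is a hypothetical illustration, not the vendor database or Rubicon's optimization module; in particular it enumerates configurations exhaustively, whereas, as the following text explains, the real search space is far too large for that and is explored with a genetic algorithm.

```python
from itertools import product

# Hypothetical catalog fragment: each component slot offers a few concrete parts.
CATALOG = {
    "ram": [{"name": "4GB",   "level": "standard", "price": 40,  "in_stock": True},
            {"name": "16GB",  "level": "large",    "price": 120, "in_stock": True}],
    "nic": [{"name": "wifi",  "level": "standard", "price": 25,  "in_stock": True},
            {"name": "fiber", "level": "maximum",  "price": 90,  "in_stock": False}],
}
# Assumed incompatibilities between components (configuration constraints).
FORBIDDEN = {("16GB", "fiber")}

def feasible(config):
    """Keep only configurations that are in stock and violate no incompatibility constraint."""
    names = {part["name"] for part in config.values()}
    return (all(part["in_stock"] for part in config.values())
            and not any(set(pair) <= names for pair in FORBIDDEN))

def score(config, target_levels, budget):
    """Degree of fit: reward matching the attribute levels produced by the IE,
    penalize exceeding the user's price point."""
    fit = sum(part["level"] == target_levels.get(slot) for slot, part in config.items())
    price = sum(part["price"] for part in config.values())
    return fit - max(0.0, (price - budget) / budget)

def second_bridge(target_levels, budget, top_k=3):
    """Map attribute levels (First Bridge output) to ranked concrete configurations."""
    slots, options = zip(*CATALOG.items())
    configs = (dict(zip(slots, combo)) for combo in product(*options))
    ranked = sorted((c for c in configs if feasible(c)),
                    key=lambda c: score(c, target_levels, budget), reverse=True)
    return ranked[:top_k]   # the best-fitting configurations go on to the Justification Engine

# Example: the First Bridge asked for large RAM and a standard network card, budget 500.
best = second_bridge({"ram": "large", "nic": "standard"}, budget=500)
```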
The PSE navigates a vast search space, taking into account different optimization criteria. We used a genetic algorithm approach for this (NP-complete) optimization problem. In the initial phases, the PSE operates on abstractions of the real world, and then through an adaptor layer translates these abstractions into concrete items. This information is capable of being read and processed at runtime. The PSE remains independent of constant updating of the “real world" items. This adaptor level is implemented as an XML data bridge. 4.4 Justification Engine (JE) Recommendations are run through the JE to provide a plain English explanation of why the system has provided a specific recommendation. This justification is delivered in the same vernacular as the dialog, personalized to the user, and is present to facilitate user understanding and adoption of the recommendation. This can be accomplished by transforming the IEŠs mathematical equations into linguistic rules more easily understood by the human. A nice feature of the fuzzy-neural architecture used in the IE module is that it can be expressed by a set of fuzzy IF/THEN inference rules and these rules can be easily extracted automatically [33]. The JE takes the set of IF/THEN rules from the IE and the set of recommended configurations from the PSE and develops a rationale for selecting each product. The value of the JE is that it creates confidence in the recommendation. 5 Preliminary tests Rubicon is implemented using a complimentary modular software approach that encapsulates the individual computational blocks, as well as the necessary software architecture emphasizing a stable and reusable model that is compliant with the J2EE technology standard. Although still under development, Rubicon was sufficiently developed to be submitted to usability testing by two major PC manufacturers. Each test involved about a dozen users and compared three RS.14 Razvan Andonie, J. Edward Russo, Rishi Dean ˘ One was the manufacturer’s current online RS, one was an attractive competitor, while the third was Rubicon. The results made available revealed that Rubicon was judged clearly superior in both tests. For instance, in one test, when asked which of the three RS the user would “be most likely to use again", nine of eleven respondents chose Rubicon. Rubicon was tested online by a webhosting services provider. Of 2200 online users who began a conversation, 83% completed it to the point of receiving a recommendation (which was the only result made available to us). This was judged by the host company to be an extraordinary high completion (i.e., non-abandonment) rate. 6 Conclusions We have built Rubicon to meet the criteria described in Section 3. We have used principles and techniques from artificial intelligence and behavioral sciences. Since we have focused on the core system, other modules of Rubicon, used for prediction, customer profiling, and marketing segmentation were omitted. It was a challenging task to build Rubicon, especially because of its generic character. Making the system largely independent of a specific e-commerce application required greater complexity and abstraction. But do we really need a generic RS? From a user perspective this may be a non-issue. However, for the RS designer and software engineer this is a critical requirement. We should think not only in terms of how to use a RS, but also how to build it and how to adapt it fast for very different application areas. References [1] I. Vollrath, W. Wilke, and R. 
Bergmann, Case-based reasoning support for online catalog sales, IEEE Internet Computing, July-August, pp. 47-54, 1998. [2] R. Andonie, J. E. Russo, and R. Dean, Crossing the rubicon for an intelligent advisor, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 7-12, 2005. [3] R. Batra and O. T. Ahtola, Measuring the hedonic and utilitarian sources of consumer attitudes, Marketing Letters, Vol. 2, pp. 159-170, 1990. [4] M. Bilgic and R. J. Mooney, Explaining recommendations: Satisfaction vs. promotion, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 13-18, 2005. [5] P. Bonhard, Who do trust? Combining recommender systems and social networking for better advice, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 89-90, 2005. [6] G. Carenini, User-specific decision-theoretic accuracy metrics for collaborative filtering, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 26-30, 2005. [7] V. F. Kleist, A transaction cost model of electronic trust: Transactional return, incentives for network Security and optimal risk in the digital economy, Electronic Commerce Research, Vol. 4, 41-57, 2004. [8] M. A. Patton and A. Josang, Technologies for trust in electronic commerce, Electronic Commerce Research, Vol. 4, pp. 9-21, 2004.Crossing the Rubicon: A Generic Intelligent Advisor 15 [9] M. J. Pazzani, Beyond idiot savants: Recommendations and common sense, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 99-100, 2005. [10] H. Sawamura, M. Yamashita, and Y. Umeda, Applying dialectic agents to argumentation in ecommerce, Electronic Commerce Research, Vol. 3, pp. 297-313, 2003. [11] J. B. Schafer, DynamicLens: A dynamic user-interface for a meta-recommendation system, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 72-76, 2005. [12] S. Spiekermann and C. Paraschiv, Motivating human-agent interaction: Transferring insights from behavioral marketing to interface design, Electronic Commerce Research, Vol. 2, pp. 255-285, 2002. [13] G. Urban, F. Sultan, and W. Qualls, Design and evaluation of a trust based advisor on the Internet, Cambridge, MA, MIT Press, 1999. [14] P. Warnestal, Modeling a dialogue strategy for personalized movie recommendations, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 77-82, 2005. [15] T. Zhu, R. Greiner, G. Haubl, B. Price, and K. Jewell, Behavior-based recommender systems for web content, Proceedings of the Wokshop Beyond Personalization 2005, in conjunction with the International Conference on Intelligent User Interfaces IUI’05, San Diego, CA, pp. 83-88, 2005. [16] W. Lin, S. A. Alvarez, and C. Ruiz, Efficient adaptive-support association rule mining for recommender Systems, Data Mining and Knowledge Discovery, Vol. 6, pp. 83-105, 2002. [17] J. B. Schafer, J. Konstant, and J. 
Riedl, Electronic commerce recommender applications, Data Mining and Knowledge Discovery 5, pp. 115-152, 2001. [18] C. Basu, C., H. Hirsh, and W. Cohen, Recommendation as classification: Using social and contentbased information in recommendation, Proceedings of the 1998 National Conference on Artificial Intelligence (AAAI-98), AAAI Press, pp. 714-720, 1998. [19] N. Good, J. B. Schafer, J.A. Konstan, A. Borchers, B. Sarwarand, J. Herlocker, and J. Riedl, Combining collaborative filtering with personal agents for better recommendations, Proceedings of the 1999 National Conference on Artificial Intelligence (AAAI-99), AAAI Press, pp. 439-446, 1999. [20] P. Resnick and H. R. Varian (eds.), Recommender systems, Special issue of Communications of the ACM, Vol. 40, 1997. [21] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, Analysis of recommendation algorithms for ecommerce, Proceedings of the ACM E-Commerce Conference, pp. 158-167, 2000. [22] A. M. Rashid, I. Albert, D. Cosley, S. K. Lam, S. McNee, J. A. Konstan, and J. Riedl, Getting to know you: Learning new user preferences in recommender systems, Proceedings of the 2002 International Conference on Intelligent User Interfaces, San Francisco, pp. 127-134, 2002. [23] J. Breese, D. Heckerman, and C. Kadie, Empirical analysis of predictive algorithms for collaborative filtering, Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, Madison, WI, 1998.16 Razvan Andonie, J. Edward Russo, Rishi Dean ˘ [24] J. E. Russo, Aiding purchase decisions on the Internet, Proceedings of the Winter 2002 SSGRR (Scuola Superiore Guglielmo Reiss Romoli) International Conference on Advances in Infrastructure for Electronic Business, Education, Science, and Medicine on the Internet, L’Aquila, Italy, 2002. [25] A. Horowitz and J. R. Russo, Modeling new car customer-salesperson interaction for a knowledgebased system, Advances in Consumer Research, Vol. 16, pp. 392-398, 1989. [26] J. Liu and J. You, Smart shopper: An agent-based web-mining approach to Internet shopping, IEEE Transactions on Fuzzy Systems, Vol. 11, pp. 226-237, 2003. [27] S. M. McNee, S. K. Lam, C. Guetzlaff, J. A. Konstan, and J. Riedl, Confidence displays and training in recommender systems, Proceedings of INTERACT ’03 IFIP TC13 International Conference on Human-Computer Interaction, pp. 176-183, 2003. [28] D. Cosley, S. K. Lam, I. Albert, J. Konstan, and J. Riedl, Is seeing believing? How recommender systems influence users’ opinions, Proceedings of CHI 2003 Conference on Human Factors in Computing Systems, pp. 585-592, Fort Lauderdale, FL, 2003. [29] L. M. Rocha, TalkMine: A soft computing approach to adaptive knowledge recommendation, Soft Computing Agents: New Trends for Designing Autonomous Systems - Studies in Fuzziness and Soft Computing, V. Loia and S. Sessa (eds.), Physica-Verlag, Springer, pp. 89-116, 2001. [30] J. L. Herlocker, J. Konstan, L. G. Terveen, and J. Riedl, Evaluating collaborative filtering recommender systems, ACM Trans. Inf. Syst., Vol. 22, pp. 5-53, 2004. [31] S. Y. Hwang, W. C. Hsiung, and W. S. Yang, A prototype WWW literature recommendation system for digital libraries, Online Information Review, Vol. 27, pp. 169-182, 2003 [32] H. W. Tung and V. W. Soo, A personalized restaurant recommender agent for mobile e-service, Proceedings of the 2004 IEEE International Conference on e-Technology, e-Commerce and e-Service (EEE’04), pp. 259-262, 2004. [33] L. Y. Cai and H. K. 
Kwan, Fuzzy classification using fuzzy inference networks, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 28, pp. 334-347, 1998. [34] M. Doyle and P. Cunningham, A dynamic approach to reducing dialog in online decision guides, Advances in Case-Based Reasoning, Proc. of the 5th European Workshop, EWCBR-2000, Trento, Italy, September 6-9, 2000, Springer, pp. 49-60, 2000. [35] S. Schmitt and R. Bergmann, A formal approach to dialogs with online customers, Proc. of the 14th Bled Electronic Commerce Conference, Bled, Slovenia, June 25-26, 2001, pp. 309-328, 2001. [36] D. McSherry, Minimizing dialog length in interactive case-based reasoning, Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI 2001), pp. 993-998, 2001.
Răzvan Andonie, Computer Science Department, Central Washington University, Ellensburg, USA, [email protected]
J. Edward Russo, Johnson Graduate School of Management, Cornell University, Ithaca, USA, [email protected]
Rishi Dean, Visible Measures Corp., Cambridge, MA, USA, [email protected]
Received: December 2, 2006

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 17-25
The Moments in Control: a tool for Analysis, Reduction and Design
Abdelmadjid Bentayeb, Nezha Maamri, Jean-Claude Trigeassou
Abstract: In this paper we present a new method of model reduction via the moments. The reduction technique is composed of two steps. The first one consists of using the Least Squares linear optimization algorithm to minimize a cost function representing the 2-norm of the error between the moments of the full-order system and those of the reduced model. This solution serves as the initialization of the second-step algorithm, which is based on Non Linear Programming and minimizes a new criterion composed of the cost function of the first step and an equality constraint. Keywords: Moments, model reduction, optimization.
1 Introduction
During the last 30 years, many design techniques such as H∞ [10] have been elaborated in order to obtain better performance from controlled plants. However, once the performance objectives are ensured, implementation problems appear, because most industrial applications still use simple controller structures such as PID (Proportional Integral Derivative) [1][10]; so the aim after the synthesis step is to find a reduced-order controller that is easy to use and to implement, and this reduced controller must ensure, as far as possible, the same performance as the full-order controller [9]. Since reduction cannot ensure the same performance as the full-order controller at all frequencies, it is reasonable to specify a frequency range over which to carry out the reduction [3][4]. There are many model reduction techniques, such as Balanced Truncation [11] and Optimal Hankel norm approximation [10]; each method differs from the others in the importance accorded to the d.c. gain or to the middle and high frequencies. Our model reduction method is original: it is based on the notion of moments, which provides a description of a linear time-invariant system around a given pulsation [12]. The methodology is composed of two steps. In the first one, a cost function representing the 2-norm of the error between the frequency moments of the full-order system and those of the reduced-order one is minimized. Some parameters are imposed a priori to obtain a linear criterion, so the parameters of the reduced model are computed using the Least Squares algorithm [2][5].
The solution obtained from the first step is used to initialize the Non Linear Programming algorithm of the second step, in order to minimize a criterion composed of the cost function of the first step and an equality constraint between the first time moments of the full-order system and of the reduced model, so as to ensure the low-frequency performance [7]. The paper is organized as follows: in Section 2, we present definitions and computing methods for the time and frequency moments; in Section 3, we develop the model reduction technique by presenting the principle, the linear optimization and the non-linear optimization; Section 4 is devoted to conclusions. Notice that the illustrative examples are presented in Section 3.

2 The Moments

Let us consider a linear SISO system, characterized by its transfer function G(s), analytic in the RHP (i.e. Re(s) > 0), and let g(t) be its impulse response:

$G(s) = \int_0^{\infty} g(t)\, e^{-st}\, dt$   (1)

The transfer function is given by the following state-space (not necessarily minimal) realization:

$G(s) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] = C(sI - A)^{-1}B + D$   (2)

where $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times 1}$, $C \in \mathbb{R}^{1\times n}$ and $D \in \mathbb{R}^{1\times 1}$.

2.1 Time moments

By expanding $e^{-st}$ in a Taylor series in the vicinity of s = 0, we get:

$G(s) = \int_0^{\infty} \sum_{n=0}^{\infty} (-1)^n s^n \frac{t^n}{n!}\, g(t)\, dt$   (3)

$G(s) = \sum_{n=0}^{\infty} (-1)^n A_n(g)\, s^n$   (4)

where:

$A_n(g) = \int_0^{\infty} \frac{t^n}{n!}\, g(t)\, dt$   (5)

$A_n$ represents the nth order moment of g(t).

Remark 1. The time moments $A_n$ give a description of the system at low frequencies.
• $A_0(g)$ represents the area, or the d.c. gain, of g(t).
• $A_1(g)$ defines the mean time of g(t).
• $A_2(g)$ deals with the "dispersion" of g(t) around its mean time, etc. [2][5][7]

2.2 Frequency moments

Let us consider the variable $s = j\omega$. By expanding $e^{-st}$ in a Taylor series in the vicinity of $s_0 = j\omega_0$, we get:

$G(j\omega) = \sum_{n=0}^{\infty} (-1)^n (j\omega - j\omega_0)^n A_{n,\omega_0}(g)$   (6)

with:

$A_{n,\omega_0}(g) = \int_0^{\infty} \frac{t^n}{n!}\, e^{-j\omega_0 t}\, g(t)\, dt$   (7)

Remark 2. Like the time moments, the frequency moments describe the system around $\omega = \omega_0$:
• $A_{0,\omega_0}$ represents $G(j\omega)$ at $\omega = \omega_0$.
• $A_{0,\omega_0} - j(\omega - \omega_0) A_{1,\omega_0}$ enlarges the previous approximation around $\omega = \omega_0$.
Notice that the moments $A_{n,\omega_0}$ are complex and that, if $\omega_0 = 0$, we recover the time moments of the system (i.e. $A_{n,0} = A_n$) [7].

2.3 Computing the moments using the state-space realization

Time moments

Using the following equality:

$(sI - A)\left(-A^{-1} - sA^{-2} - s^2A^{-3} - \cdots\right) = I \;\Rightarrow\; (sI - A)^{-1} = -\sum_{n=0}^{\infty} s^n A^{-(n+1)}$   (8)

and from (2) and (4), we can write:

$G(s) = -C\left(\sum_{n=1}^{\infty} s^n A^{-(n+1)}\right)B + \left(-CA^{-1}B + D\right)$   (9)

so:

$A_0(g) = -CA^{-1}B + D \quad\text{and}\quad A_n(g) = (-1)^{n+1} C A^{-(n+1)} B, \quad n = 1,\dots,\infty$   (10)

Frequency moments

Making the variable change $\mu = j\omega - j\omega_0$, equation (6) becomes:

$G(\mu) = \sum_{n=0}^{\infty} (-1)^n \mu^n A_{n,\omega_0}(g)$   (11)

and, from (2):

$G(\mu) = C\left(\mu I - (A - j\omega_0 I)\right)^{-1} B + D$   (12)

so we get:

$A_{0,\omega_0}(g) = -C(A - j\omega_0 I)^{-1} B + D$   (13)

$A_{n,\omega_0}(g) = (-1)^{n+1} C (A - j\omega_0 I)^{-(n+1)} B, \quad n = 1,\dots,\infty$   (14)
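As a quick numerical illustration of equations (10), (13) and (14), the following sketch computes time and frequency moments directly from a state-space realization with numpy. It is only a check of the formulas, under the assumption that A (respectively A − jω0 I) is invertible; it is not code from the paper.

```python
import numpy as np

def moments_from_state_space(A, B, C, D, n_max):
    """Time moments of eq. (10): A_0 = -C A^{-1} B + D, A_n = (-1)^{n+1} C A^{-(n+1)} B."""
    Ainv = np.linalg.inv(A)
    out = [(D - C @ Ainv @ B).item()]
    for n in range(1, n_max + 1):
        out.append(((-1) ** (n + 1) * C @ np.linalg.matrix_power(Ainv, n + 1) @ B).item())
    return out

def frequency_moments(A, B, C, D, w0, n_max):
    """Frequency moments of eqs. (13)-(14): the same formulas with A replaced by A - j*w0*I."""
    Am = A - 1j * w0 * np.eye(A.shape[0])
    return moments_from_state_space(Am, B, C, D, n_max)

# Example: G(s) = 1 / (s^2 + 3s + 2) in controllable canonical form.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = 0.0

print(moments_from_state_space(A, B, C, D, 2))   # [0.5, 0.75, 0.875]; A_0 is the d.c. gain G(0)
print(frequency_moments(A, B, C, D, 1.0, 1))     # complex moments around w0 = 1 rad/s
```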
3 Model Reduction

The purpose of model reduction is to find, starting from a real system, a reduced model that approximates it as closely as possible over a given frequency range.

3.1 Principle

Let G(s) be a nominal transfer function of high order:

$G(s) = \dfrac{b_0 + b_1 s + \cdots + b_m s^m}{a_0 + a_1 s + \cdots + a_{n-1} s^{n-1} + s^n}, \quad m \le n$   (15)

and the parameter vector is:

$\theta^T = [a_0\; a_1 \cdots a_{n-1}\; b_0\; b_1 \cdots b_m]$   (16)

We define a reduced structure:

$G_r(s) = \dfrac{b_{0r} + b_{1r} s + \cdots + b_{m_r r} s^{m_r}}{a_{0r} + a_{1r} s + \cdots + a_{(n_r-1)r} s^{n_r-1} + s^{n_r}}, \quad m_r \le n_r$   (17)

3.2 Linear Optimization

Let us consider the following reduced structure [6]:

$G_r(s) = \dfrac{N_r(s)}{D_r(s)}$   (18)

In linear optimization, we consider that the reduced denominator $D_r(s)$ is fixed a priori and only the numerator parameters have to be optimized. As evoked before, the reduced model tries to ensure:

$G_r(s) = G(s)$   (19)

in a given frequency range, evidently. Equation (19) can be written:

$\dfrac{b_{0r} + b_{1r} s + \cdots + b_{m_r r} s^{m_r}}{D_r(s)} = G(s), \quad\text{i.e.}\quad D_r^{-1}(s)\left(\sum_{k=0}^{m_r} b_{kr} s^k\right) = G(s)$   (20)

Using the moments, the previous equation becomes:

$\left(\sum_{k=0}^{m_r} b_{kr} s^k\right)\left(\sum_{k=0}^{\infty} (-1)^k A_{k,\omega_0} s^k\right) = \sum_{k=0}^{\infty} (-1)^k A'_{k,\omega_0} s^k$   (21)

where $A_{k,\omega_0}$ represents the kth order frequency moment of $D_r^{-1}(s)$   (22)

and $A'_{k,\omega_0}$ represents the kth order frequency moment of $G(s)$   (23)

After truncation at order $n_r$ and equating the terms of the same power in s, equation (21) can be written as follows:

$\begin{bmatrix} A_{0,\omega_0} & 0 & \cdots & 0 \\ -A_{1,\omega_0} & A_{0,\omega_0} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ (-1)^{n_r} A_{n_r,\omega_0} & (-1)^{n_r-1} A_{n_r-1,\omega_0} & \cdots & (-1)^{n_r-m_r} A_{n_r-m_r,\omega_0} \end{bmatrix} \begin{bmatrix} b_{0r}\\ b_{1r}\\ \vdots\\ b_{m_r r}\end{bmatrix} = \begin{bmatrix} A'_{0,\omega_0}\\ -A'_{1,\omega_0}\\ \vdots\\ (-1)^{n_r} A'_{n_r,\omega_0}\end{bmatrix}$   (24)

which can be written as:

$\Phi_{n_r,m_r}\,\theta_r = \Gamma_{n_r}$   (25)

Let

$\varepsilon_{n_r} = \Gamma_{n_r} - \Phi_{n_r,m_r}\,\theta_r$   (26)

be the error between the first $n_r+1$ moments of the real system and those of the reduced one. Notice that the moments can be either time or frequency moments, depending on the frequency range chosen. We want to determine $\theta_r$ which minimizes the following quadratic cost:

$J = \varepsilon^T \varepsilon$   (27)

Since the system given by (25) is linear, we can determine $\theta_r$ using the Least Squares method:

$\theta_r = \left[\Phi^T_{n_r,m_r}\Phi_{n_r,m_r}\right]^{-1}\left[\Phi^T_{n_r,m_r}\Gamma_{n_r}\right]$   (28)

Example 3. Let us consider the following transfer function:

$G(s) = \dfrac{1.042 s^7 + 21.77 s^6 + 206.5 s^5 + 1049 s^4 + 2583 s^3 + 1789 s^2 + 437.5 s + 35}{s^7 + 22.38 s^6 + 228.3 s^5 + 1323 s^4 + 3832 s^3 + 6339 s^2 + 1995 s + 157.5}$   (29)

We want to find a two-state reduced model which approximates G(s) as well as possible at low, medium and high frequencies. First, let us choose a reduced denominator; for our case the choice is:

$D_r(s) = (1 + 0.5 s)^2$   (30)

so the number of numerator parameters to be computed will not exceed 3, and the number of moments used equals the number of parameters; we choose three pulsations for the reduction:

$\omega_0 = 0\ \mathrm{rad/s}, \quad \omega_0 = 0.5\ \mathrm{rad/s} \quad\text{and}\quad \omega_0 = 2\ \mathrm{rad/s}$   (31)

The parameter vector $\theta_r^T = [b_{0r}\; b_{1r}\; b_{2r}]$ for the three cases is:

$\theta_r^T = [0.2222\; 0.1852\; 2.9012]$, for $\omega_0 = 0$ rad/s   (32)

$\theta_r^T = [0.1805\; 0.5030\; 0.2990]$, for $\omega_0 = 0.5$ rad/s   (33)

$\theta_r^T = [0.2969\; 0.5255\; 0.3808]$, for $\omega_0 = 2$ rad/s   (34)

The frequency responses of the real system and of the reduced model for the three cases are given in Figure 1.

Figure 1: Frequency responses (Bode diagrams) of the real system G(s) (solid) and the reduced models G_r(s) (dash-dot) for ω0 = 0, 0.5 and 2 rad/s.
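A small numerical sketch of the linear step may help. The code below is only an illustration with numpy (it is not the authors' tool): it matches Taylor coefficients of G(s) and of N_r(s)/D_r(s) around s = 0, which is equivalent to matching time moments since $A_k = (-1)^k c_k$, and solves the resulting triangular system, i.e. the ω0 = 0 instance of (24)-(28). It reproduces the first parameter vector of Example 3 up to rounding.

```python
import numpy as np

def taylor_coeffs(num, den, n_terms):
    """Taylor coefficients around s = 0 of N(s)/D(s); num, den given in ascending powers of s."""
    num = list(num) + [0.0] * n_terms
    c = []
    for k in range(n_terms):
        acc = num[k] - sum(den[j] * c[k - j] for j in range(1, min(k, len(den) - 1) + 1))
        c.append(acc / den[0])
    return np.array(c)

# Full-order system of Example 3, ascending powers of s.
num_G = [35, 437.5, 1789, 2583, 1049, 206.5, 21.77, 1.042]
den_G = [157.5, 1995, 6339, 3832, 1323, 228.3, 22.38, 1]
den_r = [1, 1, 0.25]                              # D_r(s) = (1 + 0.5 s)^2

n_r = m_r = 2                                     # three unknowns, three matched coefficients
cG  = taylor_coeffs(num_G, den_G, n_r + 1)
cDr = taylor_coeffs([1.0], den_r, n_r + 1)        # series of 1 / D_r(s)

# Lower-triangular Toeplitz system: sum_k b_k * cDr[j-k] = cG[j] for j = 0..n_r,
# the w0 = 0 case of the moment-matching system (24)-(25).
Phi = np.array([[cDr[j - k] if j >= k else 0.0 for k in range(m_r + 1)]
                for j in range(n_r + 1)])
theta_r = np.linalg.lstsq(Phi, cG, rcond=None)[0]
print(np.round(theta_r, 4))                       # ~[0.2222, 0.1852, 2.9025], cf. eq. (32)
```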
3.3 Non Linear Optimization

Clearly, imposing the denominator of the reduced model limits the optimization's performance; moreover, if the reduction is needed at high frequencies, it is necessary to ensure at the same time the low-frequency behaviour. In this part, we optimize both the poles and the zeros of the reduced model, i.e. the parameters of the numerator and of the denominator; a non-linear optimization algorithm is used to minimize a quadratic cost between the moments of the real system and those of the reduced model, including an equality constraint between the first two time moments in order to keep the same low-frequency performance.

Principle

Let us consider the parameter vector of a reduced model:

$\theta_r^T = \left[a_{0r}\; a_{1r} \cdots a_{(n_r-1)r}\; b_{0r}\; b_{1r} \cdots b_{m_r r}\right]$   (35)

We want to find $\theta_r$ which minimizes (27); for that we use Marquardt's algorithm [8], which is a good compromise between rapidity and convergence.

Algorithm principle

Parameter estimation is performed using an iterative optimization procedure:

$\hat{\theta}_{i+1} = \hat{\theta}_i - \left\{\left[J'' + \lambda_i I\right]^{-1} J'\right\}_{\hat{\theta}=\hat{\theta}_i}$   (36)

$J' = \dfrac{\partial J}{\partial \theta_r}$: the gradient vector   (37)

$J'' = \dfrac{\partial^2 J}{\partial \theta_r^2}$: the Hessian matrix   (38)

$\lambda_i$: coefficient to be adjusted   (39)

The initialization is given by the parameter vector obtained from the Least Squares optimization:

$\hat{\theta}_0 = \hat{\theta}_{LS}$   (40)

Computing J' and J''

The calculation of the gradient and of the Hessian is crucial for the optimization procedure; we use the parametric sensitivity functions to compute them:

$J' \approx -2 \sum_{n=0}^{N} \varepsilon_n Q_n \quad\text{and}\quad J'' \approx 2 \sum_{n=0}^{N} Q_n Q_n^T$   (41)

$Q = \dfrac{d A_{n,\omega_0}(G_r)}{d\theta_r}$   (42)

where:

$\dfrac{d A_{n,\omega_0}}{d\theta_i} = (-1)^{n+1}\left(\dfrac{dC}{d\theta_i} A_m^{-(n+1)} B - C A_m^{-(n+1)} \dfrac{d A_m^{(n+1)}}{d\theta_i} A_m^{-(n+1)} B + C A_m^{-(n+1)} \dfrac{dB}{d\theta_i}\right)$   (43)

with:

$\dfrac{d A_m^{(n+1)}}{d\theta_i} = \dfrac{d A_m}{d\theta_i} A_m^{n} + A_m \dfrac{d A_m^{n}}{d\theta_i} \quad\text{and}\quad A_m = A - j\omega_0 I$   (44)

If we use a controllable or an observable canonical realization, (43) becomes much easier to compute. For example, for the controllable realization:

$\dfrac{d A_{n,\omega_0}}{d a_i} = (-1)^{n+1}\left(-C A_m^{-(n+1)} \dfrac{d A_m^{(n+1)}}{d a_i} A_m^{-(n+1)} B\right) \quad\text{and}\quad \dfrac{d A_{n,\omega_0}}{d b_i} = (-1)^{n+1}\left(\dfrac{dC}{d b_i} A_m^{-(n+1)} B\right)$   (45)

where:

$a_r = [a_{0r}\; a_{1r} \cdots a_{(n_r-1)r}] \quad\text{and}\quad b_r = [b_{0r}\; b_{1r} \cdots b_{m_r r}]$   (46)

Now, we want to find $\theta_r$ which minimizes (27) and at the same time satisfies the following equality:

$F(\theta_r) = \|A_0(G) - A_0(G_r)\| + \|A_1(G) - A_1(G_r)\| = 0$   (47)

The optimization problem can be reformulated as:

$\min_{\theta_r} J_{const} \quad\text{with}\quad J_{const} = J + \gamma F(\theta_r)$   (48)

where $\gamma$ represents the vector of Lagrange multipliers to be estimated. To solve this problem, we can use the algorithm described in (36), substituting $J_{const}$ for $J$, with:

$J'_{const} = \begin{bmatrix} \dfrac{\partial J_{const}}{\partial \theta} \\ \dfrac{\partial J_{const}}{\partial \gamma} \end{bmatrix}, \qquad J''_{const} = \begin{bmatrix} \dfrac{\partial^2 J_{const}}{\partial \theta^2} & \dfrac{\partial^2 J_{const}}{\partial \theta\,\partial \gamma} \\ \dfrac{\partial^2 J_{const}}{\partial \theta\,\partial \gamma} & \dfrac{\partial^2 J_{const}}{\partial \gamma^2} \end{bmatrix}$   (49)
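To give the shape of iteration (36) in code, here is a deliberately crude sketch: finite differences stand in for the analytic sensitivity functions (41)-(45), and the equality constraint (47) is handled with a simple quadratic penalty instead of the Lagrange-multiplier formulation (48)-(49). It is an assumption-laden stand-in for, not a reproduction of, the authors' algorithm.

```python
import numpy as np

def lm_minimize(J, theta0, lam=1e-2, tol=1e-9, max_iter=200, h=1e-6):
    """Levenberg-Marquardt style loop implementing update (36):
    theta <- theta - [J'' + lam*I]^{-1} J', with finite-difference derivatives."""
    theta = np.asarray(theta0, dtype=float)

    def grad(t):
        return np.array([(J(t + h * e) - J(t - h * e)) / (2 * h) for e in np.eye(len(t))])

    def hess(t):
        g0 = grad(t)
        return np.array([(grad(t + h * e) - g0) / h for e in np.eye(len(t))])

    for _ in range(max_iter):
        g, H = grad(theta), hess(theta)
        step = np.linalg.solve(H + lam * np.eye(len(theta)), g)
        if J(theta - step) < J(theta):
            theta, lam = theta - step, lam / 2      # accept the step, relax the damping
        else:
            lam *= 10                               # reject the step, increase the damping
        if np.linalg.norm(step) < tol:
            break
    return theta

# Toy usage: in practice J would be the moment error (27) plus a penalty enforcing (47);
# here a simple quadratic with the constraint x + y = 1 enforced by penalty stands in for it.
J = lambda t: (t[0] - 2.0) ** 2 + (t[1] - 2.0) ** 2 + 100.0 * (t[0] + t[1] - 1.0) ** 2
print(np.round(lm_minimize(J, [0.0, 0.0]), 3))      # ~[0.507, 0.507], near the constrained optimum [0.5, 0.5]
```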
Example 4. Let us take the same system given in Example 2:

G(s) = (1.042 s^7 + 21.77 s^6 + 206.5 s^5 + 1049 s^4 + 2583 s^3 + 1789 s^2 + 437.5 s + 35) / (s^7 + 22.38 s^6 + 228.3 s^5 + 1323 s^4 + 3832 s^3 + 6339 s^2 + 1995 s + 157.5)   (50)

The aim is to find a reduced-order model of three states which approximates the real system around ω0 = 11 rad/s and has the same low-frequency behaviour; for that we compare the results obtained from Least Squares, Non-Linear Programming (NLP) and Non-Linear Programming with an equality constraint. The reduced model has the following structure:

G_r(s) = (b_{0r} + b_{1r} s + b_{2r} s^2) / (a_{0r} + a_{1r} s + s^2)   (51)

Of course, for the Least Squares optimization the denominator is fixed a priori:

D_r(s) = (1 + 0.5 s)^2   (52)

The three reduced models are:

G_r(s) = (1.019 - 1.698 s + 0.6728 s^2) / (1 + 0.5 s)^2   (Least Squares)
G_r(s) = (14.63 + 14.19 s + 1.032 s^2) / (36.96 + 14.84 s + s^2)   (Non-Linear Programming)
G_r(s) = (0.1351 + 0.4075 s + 1.052 s^2) / (0.6078 + 1.935 s + s^2)   (NLP with equality constraint)   (53)

The frequency response of the real system and of the reduced models is illustrated in Figure 2.

Figure 2: Frequency response of the real system G(s) (solid) and of the reduced models G_r(s) (dash-dotted) for the three methods: Least Squares, NLP and NLP with equality constraint (Bode magnitude and phase plots).

Remark 5. We have seen that the moments may be used for model reduction; we presented three optimization methods: Least Squares, Non-Linear Programming and Non-Linear Programming with an equality constraint. If the optimization is performed at low frequencies, Non-Linear Programming can be used without the equality constraint, taking into account the radius of convergence, which allows the low-frequency behaviour to be preserved.

4 Summary and Conclusions

In this paper we presented a new method for model reduction and controller design. The technique is based on the moment tool, which is able to describe any linear system, or linear system with time delay, at low frequencies using the time moments or around a given frequency using the frequency moments. The optimization procedure is composed of two steps: in the first one, a least-squares algorithm provides a reduced model which initializes the non-linear programming with an equality constraint between the first two time moments of the reduced model and of the full-order system. For controller design, the aim is to ensure the closed-loop performance using a reference model which groups dominant poles, auxiliary poles and the system's singularities. Using the Youla parametrization, we obtain an ideal controller which is then reduced for implementation using the moments.

References

[1] K. Åström and B. Wittenmark, "Computer Controlled Systems, Theory and Design", Prentice Hall, 1984.

[2] A. Bentayeb, N. Maamri and D. Mehdi, "Moments Based Synthesis Approach Comparison with H∞ Design", IFAC DECOM-TT, Istanbul, Turkey, 2003.

[3] S. Boyd and C. Barratt, "Linear Controller Design - Limits of Performance", Prentice Hall, 1991.

[4] K. V. Fernando and H. Nicholson, "Singular perturbational model reduction of balanced systems", IEEE Transactions on Automatic Control, AC-27(2):466-468, 1982.
[5] N. Maamri and J. C. Trigeassou, "PID design for time delayed systems by the method of moments", European Control Conference, Groningen, The Netherlands, 1993.

[6] F. Monroux, "Méthodologie Générale de Synthèse de Correcteurs par la Méthode des Moments, Approche Mixte: Fréquentielle et Temporelle", PhD thesis, Université de Poitiers, 1999.

[7] N. Maamri, A. Bentayeb and J.-C. Trigeassou, "Design and Iterative Optimization of Reduced Robust Controllers with Equality Constraints", ROCOND, Milan, 2003.

[8] D. W. Marquardt, "An Algorithm for Least-Squares Estimation of Non Linear Parameters", Journal of the Society for Industrial and Applied Mathematics, 11(2), 1963.

[9] M. G. Safonov and R. Y. Chiang, "A Schur method for balanced truncation model reduction", IEEE Transactions on Automatic Control, AC-34:729-733, 1989.

[10] S. Skogestad and I. Postlethwaite, "Multivariable Feedback Control", Wiley, 1996.

[11] M. S. Tombs and I. Postlethwaite, "Truncated balanced realization of a stable non-minimal state-space system", International Journal of Control, 46:1319-1330.

[12] J. C. Trigeassou, "La méthode des moments en automatique", CIFA, Lille, 2000.

A. Bentayeb, N. Maamri and J.-C. Trigeassou
University of Poitiers
Laboratoire d'Automatique et d'Informatique Industrielle
40 Avenue du Recteur Pineau, 86022 Poitiers
E-mail: [email protected]

Received: November 24, 2006

Editor's note about the author: Abdelmadjid Bentayeb was born on August 9, 1977 in Annaba (Algeria). He received the engineering diploma in automatic control in 2000 and the Diploma of Applied Studies in advanced control in 2001 at the University of Annaba. He received the Master's degree in automatic control and industrial data processing in 2002 and the PhD degree in automatic control, with a very honourable distinction, in 2006 at the University of Poitiers. He is currently preparing an industrial application of the work carried out at the University of Poitiers. His current research interests include multivariable robust control systems and model reduction using the method of moments.

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 26-36

Deformable Atlases for the Segmentation of Internal Brain Nuclei in Magnetic Resonance Imaging

Marius George Linguraru, Miguel Ángel González Ballester, Nicholas Ayache

Abstract: Magnetic resonance imaging (MRI) is commonly employed for the depiction of soft tissues, most notably the human brain. Computer-aided image analysis techniques lead to image enhancement and automatic detection of anatomical structures. However, the information contained in images often does not offer enough contrast to robustly obtain a good detection of all internal brain structures, not least the deep grey matter nuclei. We propose a method that incorporates prior anatomical knowledge in the form of digital atlases that deform to fit the image data to be analysed. Our technique is based on a combination of rigid, affine and non-rigid registration, segmentation of key anatomical landmarks and propagation of the information of the atlas to detect deep grey matter nuclei. The Montreal Neurological Institute (MNI) and Zubal atlases are employed. Results show that detecting important structures such as the ventricles and the brain outline greatly improves the segmentation. Our method is fully automatic.

Keywords: MRI, brain, deep grey matter nuclei, atlas, image normalisation, registration, segmentation.
1 Introduction

The advent of medical imaging modalities such as X-ray, ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI) has greatly improved the diagnosis of various human diseases. To date, the most common procedure to analyse imaging data is visual inspection on printed support. In the last decade, computer-aided medical image analysis techniques have been employed to provide a better insight into the acquired image data [5]. Such techniques allow for quantitative, reproducible observation of the patient condition. Furthermore, the computing power of modern machines can be used to combine information from several images of the same patient (i.e. image fusion) or add prior information from a database of images.

In this paper, we present a fully automated medical image analysis technique aimed at the detection of internal brain structures from MRI data. Such automated processes allow the study of large image databases and provide consistent measurements over the data. In our case, we employ a priori anatomical knowledge in the form of digital brain atlases. Relevant background information about MRI and brain anatomy is provided next. Section 2 will describe the different components of our image processing framework, which detects and delineates internal brain structures by identifying analogous structures in digital brain atlases. Finally, results and conclusions are given.

1.1 Magnetic Resonance Imaging

MRI has become a leading technique widely used for imaging soft human tissue. Its applications extend over all parts of the human body and it represents the most common visualisation method for the human brain. Images are generated by measuring the behaviour of soft tissue under a magnetic field. Under such conditions, water protons enter a higher energy state when a radio-frequency pulse is applied and this energy is re-emitted when the pulse stops (a property known as resonance) [7]. A coil is used to measure this energy, which is proportional to the quantity of water protons and local biochemical conditions. Thus, different tissues give different intensities in the final MR image. From the brain MRI perspective, this property makes possible the segmentation of the three main tissue classes within the human skull: grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF). Their accurate segmentation remains a challenging task in the clinical environment.

The relative contrast between brain tissues is not a constant in MR imaging. In most medical imaging applications, little can be done about the appearance of anatomically distinct areas relative to their surroundings. In MRI, the choice of the strength and timing of the radio-frequency pulses, known as the MRI sequence [12], can be employed to highlight some types of tissue or suppress others, according to the clinical application. However, the presence of artefacts due to magnetic field inhomogeneity (bias fields) and movement artefacts may hamper the delineation of GM versus WM and CSF and make their depiction difficult. There is an entire family of MRI sequences that are used in common clinical practice. T1-weighted MRI offers the highest contrast between the brain soft tissues. On the contrary, T2-weighted and Proton Density (PD) images exhibit very low contrast between GM and WM, but high contrast between CSF and brain parenchyma.
In other MRI sequences, like the Fluid Attenuated Inversion Recovery (FLAIR) sequence, the CSF is eliminated from the image in an adapted T1 or T2 sequence. More about these specific MRI sequences and their variations can be found in [2]. Multisequence MRI analysis combines the different information provided by the employed sequences. Combining such knowledge gives substantially more information about brain anatomy and possible occurring changes.

MR images depict a 3D volume in which the organ or part of the body of interest is embedded. This information can be used to build a 3D representation of the structure of interest. This applies both to 2D sequences, where images are acquired in slices, and to recently developed 3D sequences, where the data are captured in the 3D Fourier space, rather than each slice being captured separately in the 2D Fourier space [2, 12].

1.2 Deep Grey Matter Nuclei

The neurones that build up the human brain are composed of a cellular body and an axon. The latter projects its dendritic connections to other neurones in remote cerebral regions. In essence, grey matter corresponds to the cellular bodies, whereas the axons constitute the white matter. Cerebral grey matter is mainly concentrated in the outer surface of the brain (cortex), but several internal GM structures exist, as seen in Figures 1 and 2. These are known as deep grey matter nuclei and they play a central role in the intellectual capabilities of the human brain. Additionally, deep brain grey matter nuclei are relevant to a set of clinical conditions, such as Parkinson's and Creutzfeldt-Jakob diseases. However, their detection in MRI data sets remains a challenging task, due to their small size, partial volume effects [6], anatomical variability, lack of white matter-grey matter contrast in some sequences and movement artefacts. A methodology for the robust detection of deep brain grey matter nuclei in multi-sequence MRI is presented in this paper.

2 Method

2.1 Spatial Normalisation

The large variability inherent to human anatomy and the differences in patient positioning across scans lead us to consider spatial normalisation as an approach to put patient images in a standard reference frame. This will allow us to localise the areas of interest with the help of an atlas of the brain. Furthermore, it will make automatic inter-patient comparisons possible. The identification of brain structures in volumetric images can be automated thanks to the use of digital atlases. These are images that have been segmented and thus contain information about the position and shape of each structure. Such atlases can be binary (1 for the location of a structure and 0 for "outside") or probabilistic, in which case the values correspond to the probability of a voxel containing the structure of interest. In order to locate such structures in a given patient image, the atlas image is deformed to match the shape of the patient brain.

Figure 1: Deep grey matter internal nuclei as seen in a normal T1-weighted axial MR image with good contrast between WM, GM and CSF. The arrows point towards some of these nuclei, namely the caudate nuclei, the thalami and the putamen.

Figure 2: An annotated map of deep grey matter internal nuclei reproduced from the Talairach and Tournoux atlas [13]: the caudate nuclei (CN), putamen (Pu) and thalami (Th).
This process is known as registration. Depending on the type of geometric deformation allowed, registration can be rigid, affine, parametric (e.g. spline) or free-form (a deformation field specifying the displacement applied to each point). Registration to a digital atlas has become a common technique with the introduction of popular statistical algorithms for image processing, such as Statistical Parametric Mapping (SPM) [1] or Expectation Maximization Segmentation (EMS) [14].

A well-known probabilistic atlas in the scientific community is the MNI Atlas from the Montreal Neurological Institute at McGill University [4]. It was built using over 300 MRI scans of healthy individuals to compute an average brain MR image, the MNI template, which is now the standard template of SPM and of the International Consortium for Brain Mapping [9]. The averaging is performed for the entire brain, but also on isolated GM, WM and CSF, providing a tool for statistical segmentation. For these reasons, we chose the MNI template as the basis for image alignment in our approach. Figure 3 shows the MNI template.

Figure 3: The MNI template. On the left, the probabilistic MNI atlas of the brain; on the right, the corresponding GM atlas. Please note the arrangement of MR images in radiological convention with an axial, a sagittal and a coronal view. This convention is reflected in figures throughout the paper.

We propose the following registration scheme. T1 images often have the highest resolution, hence we register them to the MNI template first using an affine transformation. The registration algorithm, previously developed in our group, is described in [11]. It uses a block matching strategy in a two-step iterative method. The standard assumption behind the algorithm is that there is a global intensity relationship between the template image and the one being registered to it. The method proposes several types of correlation measures: linear, functional or statistical. By maximising one of these, the correlation coefficient in our case, the transformation between the two images is computed block by block and a displacement field is thus generated. A parametric transformation, either affine or rigid, is then estimated from this deformation field. To further improve robustness, this procedure is repeated at multiple scales. More details can be found in [10].

Next, rigid intra-patient registration of all sequences is performed using the same algorithm as above. T2, FLAIR, diffusion-weighted, diffusion tensor or other sequence images can be registered to the T1 image. Since this registration is performed on images of the same patient acquired during the same scanning session, rigid registration suffices. By combining these rigid transformations with the affine transformation matching T1 and the MNI template, we can find correspondences between the atlas and the other sequence images. This is illustrated in Figure 5. The final image resolution is that of the MNI atlas: 91 × 109 × 91 voxels. Figure 4 shows an example of spatial normalisation. With all images registered to the atlas, intra- and inter-patient analysis becomes simple and statistical algorithms can be applied.

Figure 4: An example of spatial normalisation. The image on top is the subject's T1 before registration; the image on the bottom left shows the subject's T1 after spatial normalisation, and the MNI template is presented in the bottom-right image.
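The composition of transformations in this scheme (Figure 5) can be illustrated with a small, hypothetical sketch: the 4 × 4 homogeneous matrices below merely stand in for the affine T1-to-MNI and rigid T2-to-T1 transforms that the block-matching registration would estimate, and only the bookkeeping of mapping a T2 coordinate into atlas space is shown, not the registration algorithm itself.

```python
import numpy as np

def homogeneous(linear, translation):
    """Build a 4x4 homogeneous transform from a 3x3 linear part and a translation."""
    T = np.eye(4)
    T[:3, :3] = linear
    T[:3, 3] = translation
    return T

# Placeholder transforms standing in for the registration results of Figure 5:
#   t1_to_mni: affine transform of the patient T1 onto the MNI template
#   t2_to_t1 : rigid intra-patient transform of the T2 image onto the T1 image
t1_to_mni = homogeneous(np.diag([1.10, 0.95, 1.05]), [2.0, -3.5, 1.0])
t2_to_t1 = homogeneous(np.eye(3), [0.4, 0.1, -0.2])

# Composing the two maps any T2 coordinate directly into atlas (MNI) space
t2_to_mni = t1_to_mni @ t2_to_t1
point_t2 = np.array([10.0, 20.0, 30.0, 1.0])      # a point in T2 coordinates
point_mni = t2_to_mni @ point_t2
print(point_mni[:3])
```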
2.2 A Priori Anatomical Knowledge

To be able to segment GM and WM in MRI sequences, a good contrast between these types of tissue in T1-weighted images is desired. Figure 6 shows a typical T1 image with high contrast between brain soft tissues and a common T1 image from our database. Under the given circumstances, the segmentation of GM cannot be done directly from the patient images. The MNI atlas can provide a probabilistic segmentation of GM, but this is not precise enough for our application. We use instead a segmented anatomical atlas of the brain, the Zubal Phantom [15], which is introduced next.

Figure 5: Diagram of the spatial normalisation algorithm. Intra-patient images are rigidly registered on the corresponding T1. The T1-weighted image is affinely registered to the atlas template. The resulting transformation is used to align all other MR images to the atlas.

The Zubal atlas offers a precisely labelled segmentation of brain structures from the T1-weighted MR image of one single subject. Our interest focuses on the internal nuclei, which are segmented in the phantom. First, the atlas must be aligned to our set of images, which have been previously registered to the MNI atlas. Thus, we register the Zubal Phantom to the MNI template, again using our block matching algorithm [11], to estimate an affine transformation. However, in order to preserve the correct values of the segmentation labels after the application of the transformation, nearest-neighbour interpolation is performed, as opposed to the case of patient image registration, which employed spline interpolation. Figure 7 shows the results of registering the Zubal Phantom to the MNI reference without disrupting the Zubal labels.

Figure 6: A typical T1-weighted MR image with good contrast between brain GM, WM and CSF (left) versus a T1 image where GM and WM cannot be reliably distinguished from each other (right).

2.3 Refined Segmentation

Once the Zubal Phantom is registered to the working framework, we can easily depict the brain structures that are of interest, namely the deep GM internal nuclei. For the examples in this paper, we will focus on the basal ganglia. Hence, we create a mask with the thalamus, putamen and head of the caudate - which will be referred to as internal nuclei for the rest of this paper - from the Zubal Phantom registered on MNI (Figure 8). We aim to use this mask for the segmentation of the internal nuclei in patient images. Although the affine registration gives correct correspondences in a general brain registration framework, the anatomical variability between patients makes the correspondence between the Zubal internal nuclei mask and the corresponding internal nuclei in each patient erroneous. A refinement of the registration in the deep GM, between the Zubal internal nuclei mask and the patient internal nuclei, seems necessary to allow us to use the a priori anatomical information resulting from the segmentation of the Zubal Phantom. The segmentation of internal nuclei in patient images is not an obvious task; this is why we exploit the Zubal Phantom. Nevertheless, there are other important anatomical landmarks in the brain that are easier to identify. We concentrate on the segmentation of the ventricles and of the external boundary of the cortex.
The ventricles will give a good approximation of the deformation field around the internal nuclei, whereas the cortex boundary will impose the global spatial correspondence and stabilise the deformation field inside the brain. Figure 8 illustrates the segmentation of ventricles, brain contour and internal nuclei from the registered Zubal Phantom. To obtain similar images of segmented brain margin and ventricles for each patient, we employ morphological opening on the patient T2 images. The strong contrast that CSF has against the brain in T2-weighted images allows us to segment the ventricles, while the cortex boundary can be extracted from either T1 or T2 sequences. We prefer using the T1 sequence, since the T2 image we employ lacks some top and bottom slices. The ventricles being located in the middle of the brain, it is correct to extract them from T2 images, but the cortex would be incomplete.

We are now in possession of two binary maps of ventricles and brain boundaries for each patient: one from the Zubal Phantom and the other from the patient. Non-rigid (free-form) registration is used to align the two images, employing the algorithm developed in our group and described in [3]. This registration method minimises an energy function, which uses measures of intensity similarity, smoothing, noise parameters and correspondence between points. Figure 9 shows typical results and Figure 10 shows the 3D deformation fields related to the registration in Figure 9. The outer margin of the cortex ensures that the deformation fields are spatially sound and do not pull the internal nuclei away from their location. Having computed the deformation fields, we apply them to the mask of internal nuclei of the Zubal Phantom, deforming the mask according to the position and size of the ventricles in the patient image. A diagram of the algorithm is shown in Figure 11. The deformed mask is used to segment the internal nuclei of the patient, namely the putamen, head of the caudate and thalamus. Figure 12 shows an example of registration of internal nuclei in 3D and the internal nuclei segmentation results in a T1-weighted MR image of a patient. In Figure 13 we segment the internal nuclei in a patient T2-weighted image. The segmentation can be accurately performed in any MR sequence of the patient, given that the multisequence images have been previously registered to the MNI atlas.

Figure 7: The registration of the Zubal Phantom onto the MNI template. On the top row, the original Zubal Phantom is shown; on the bottom left, we have the Zubal Phantom registered on the MNI template, which is shown in the bottom-right image.

Figure 8: The segmentation of the Zubal Phantom. From left to right: column 1, the Zubal Phantom registered on MNI; column 2, the ventricles segmented from the Zubal Phantom; column 3, the cortex outer boundary added to the ventricles; column 4, the internal nuclei segmented from the Zubal Phantom. The top row shows the axial view, while on the bottom we present the coronal view.

In this paper, we focused on the segmentation of the basal ganglia to present our algorithm for the segmentation of deep grey matter nuclei. An identical approach can be used for other inner brain structures to accurately segment them in patient images. Each class of nuclei has an associated label in the Zubal Phantom, which facilitates its identification.
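Returning to the step in which the computed deformation field is applied to the Zubal internal-nuclei mask, the short sketch below shows one way such a label-preserving warp could be written, using nearest-neighbour interpolation as done for the Zubal labels. It is an illustration of the idea, not the authors' implementation; the voxel-displacement fields dx, dy, dz are assumed to come from the non-rigid registration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(label_map, dx, dy, dz):
    """Warp a 3D label image with a dense deformation field (in voxels).
    order=0 (nearest neighbour) keeps the integer atlas labels intact."""
    zz, yy, xx = np.meshgrid(np.arange(label_map.shape[0]),
                             np.arange(label_map.shape[1]),
                             np.arange(label_map.shape[2]), indexing="ij")
    coords = np.stack([zz + dz, yy + dy, xx + dx])
    return map_coordinates(label_map, coords, order=0, mode="nearest")

# Toy usage: a small random label volume warped by a one-voxel shift along x
labels = np.random.randint(0, 4, size=(16, 16, 16))
shift = np.ones((16, 16, 16))
warped = warp_labels(labels, dx=shift, dy=0 * shift, dz=0 * shift)
```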
In Figure 14 we illustrate the segmentation of individual types of nuclei, here caudate nuclei, thalami and putamen, using our approach. Figure 9: Registration of the Zubal ventricles and cortex outer boundary on a patient. The patient’s ventricles are larger next to the small ventricles in the Zubal Phantom, where the subject is young. The algorithm gives robust results, as seen above. From left to right: column 1, the T2-weighted image of the patient; column 2, the ventricles and brain margin of the patient (ventricles segmented from T2 and cortex from T1); column3, the ventricles and brain boundary of Zubal Phantom; column 4, the ventricles and cortex boundary of the Zubal Phantom registered on the patient. 3 Conclusion A robust automatic technique for the identification of deep brain internal nuclei was presented. The use of key anatomical landmarks such as the ventricles and the outline of the brain imposes anatomical constraints in the deformation fields found by the non-rigid registration algorithm, which otherwise would fail to converge to the correct segmentation. Figure 10: Deformation fields of the non-rigid registration between the Zubal Phantom ventricles and those of a patient with very large ventricles. On the left is the x field, the y field is in the middle column and the z field on the right.34 Marius George Linguraru, Miguel Ángel González Ballester, Nicholas Ayache Figure 11: Diagram of the refined registration of internal nuclei. Figure 12: An example of internal nuclei registration and their segmentation in a T1-weighted patient image. On the left, we have the T1 image of the patient; in the middle column, we show the segmentation of internal nuclei according to the binary map before non-rigid deformation with the head of the caudate superposed on the ventricle; on the right, we segment the internal nuclei after non-rigid deformation, showing an accurate segmentation. Figure 13: An example of internal nuclei segmentation in a T2-weighted image of the patient. On the left, we have the T2 image of the patient; in the middle column, we show the segmentation of internal nuclei before non-rigid deformation; on the right, we segment correctly the internal nuclei after non-rigid deformation. Figure 14: Binary maps of deep grey nuclei. From left to right: the caudate nuclei, the putamen and the thalami. From top to bottom: axial and coronal views. These individual masks can be used for the accurate segmentation of each type of nuclei.Deformable Atlases for the Segmentation of Internal Brain Nuclei in Magnetic Resonance Imaging35 Acknowledgement The authors would like to thank Professor Ioana Moisil from the “Lucian Blaga” University of Sibiu for her assistance. References [1] J. Ashburner, K.J. Friston, "Voxel-Based Morphometry - The Methods", NeuroImage, 11:805-821, 2000. [2] M A. Brown, R.C. Semelka, "MR Imaging Abbreviations, Definitions, and Descriptions: A Review", Radiology, 213:647-662, 1999. [3] P. Cachier, E. Bardinet, D. Dormont, X. Pennec, N. Ayache, "Iconic Feature-based Nonrigid Registration: The PASHA Algorithm", CVIU - Special Issue on Nonrigid Registration, 89(2-3):272-298, 2003. [4] D.L. Collins, A.P. Zijdenbos, V. Kollokian, J.G. Sled, N.J. Kabani, C.J. Holmes, A.C. Evans, "Design and Construction of a Realistic Digital Brain Phantom", IEEE Transactions on Medical Imaging, 17(3):463-468, 1998. [5] J. Duncan, N. 
Ayache, "Medical Image Analysis: Progress over Two Decades and the Challenges Ahead", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1):85-106, 2000. [6] M. A. González Ballester, A. Zisserman, M. Brady, "Estimation of the Partial Volume Effect in MRI", Medical Image Analysis, 6(4):389-405, 2002. [7] J P. Hornak, "The Basics of MRI", http://www.cis.rit.edu/htbooks/mri/. [8] M.G. Linguraru, M.A. Gonzales Ballester, E. Bardinet, D. Galanaud, D. Dormont, J.P. Brandel, N. Ayache, "Automated Analysis of Basal Ganglia Intensity Distribution in Multisequence MRI of the Brain - Application to Creutzfeldt-Jakob Disease", Rapport de Recherche, INRIA, 2004. [9] J.C. Mazziotta, A.W. Toga, A.C. Evans, P.T. Fox, J. Lancaster, K. Zilles, R.P. Woods, T. Paus, G. Simpson, B. Pike, C.J. Holmes, D.L. Collins, P.M. Thompson, D. MacDonald, M. Iacoboni, T. Schormann, K. Amunts, N. Palomero-Gallagher, S. Geyer, L. Parsors, K.L. Narr, N. Kabani, G. Le Goualher, M. Boomsma, T. Cannon, R. Kawashima, B. Mazoyer, "A Probabilistic Atlas and Reference System for the Human Brain: International Consortium for Brain Mapping (ICBM)", Philosophical Transactions of the Royal Society of London, Series B (Biological Sciences), 356(1412):1293-1322, Appendix II, 2001. [10] S. Ourselin, "Recalage d’Images Médicales par Appariement de Régions - Application à la Construction d’Atlas Histologiques 3D" , PhD thesis, Université de Nice - Sophia Antipolis, 2002. [11] S. Ourselin, A. Roche, S. Prima, N. Ayache, "Block Matching: A General Framework to Improve Robustness of Rigid Registration of Medical Images", in A.M. DiGioia and S. Delp (eds.) Medical Robotics, Imaging and Computer Assisted Surgery (MICCAI 2000), volume 1935 of Lectures Notes in Computer Science, Springer, 557-566, 2000. [12] D.D. Stark, W.G. Bradley, W.G. Jr. Bradley, "Magnetic Resonance Imaging", Mosby, 1999. [13] J. Talairach, P. Tournoux, "Co-Planar Stereotaxic Atlas of the Human Brain", Thieme Medical Publishers, 1988.36 Marius George Linguraru, Miguel Ángel González Ballester, Nicholas Ayache [14] K. Van Leemput, F. Maes, D. Vandermeulen, A. Colchester, P. Suetens, "Automated Segmentation of Multiple Sclerosis Lesions by Model Outlier Detection", IEEE Transactions on Medical Imaging, 20(8):677-688, 2001. [15] I.G. Zubal, C.R. Harrell, E.O. Smith, Z. Rattner, G. Gindi, P.B. Hoffer, "Computerized Threedimensional Segmented Human Anatomy", Medical Physics, 21:299-302, 1994. Marius George Linguraru EPIDAURE/ASCLEPIOS Research Group, INRIA Sophia Antipolis, France Division of Engineering and Applied Sciences Harvard University, Cambridge MA, USA E-mail: [email protected] Miguel Ángel González Ballester University of Bern, Bern, Switzerland MEM Research Center, Institute for Surgical Technology and Biomechanics Nicholas Ayache EPIDAURE/ASCLEPIOS Research Group, INRIA Sophia Antipolis, France Received: November 16, 2006 Editor’s note about the author: Marius George LINGURARU joined the Diagnostic Radiology Department at the Clinical Center at the National Institute of Health (NIH), Bethesda, Maryland, USA in 2007 as Staff Scientist. Previously, he worked as Research Fellow in the Division of Engineering and Applied Sciences of Harvard University in Cambridge, Massachusetts, USA. He moved to the Boston area from the South of France, where he was Expert Engineer in the Epidaure/Asclepios Research Group of the National Institute of Research in Informatics and Automatic Control (INRIA) in Sophia Antipolis, France. 
He received a PhD in Information Engineering/Medical Image Analysis at the University of Oxford, Oxford, UK, within the Medical Vision Laboratory, and was a member of Keble College. His previous studies include an MA in British Cultural Studies, an MSc in Parallel and Distributed Processing Systems, and a BSc in Computer Science. All three degrees are from the "Lucian Blaga" University of Sibiu, Romania, where he also worked as Assistant Professor in the Department of Computer Science and Automatic Control.

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 37-38

International Conference on Virtual Learning – ICVL 2006

The current issue of the journal contains seven extended papers published in the "Proceedings of the 1st International Conference on Virtual Learning, October 27-29, 2006, Bucharest, Romania (ICVL 2006)" (M. Vlada, G. Albeanu & D.M. Popovici, eds.). The first edition of the International Conference on Virtual Learning - ICVL 2006 - was organized by the University of Bucharest, Faculty of Mathematics and Computer Science, in association with the European INTUITION Project (The INTUITION Network of Excellence in Europe - http://www.intuition-eunetwork.net/) and in conjunction with the fourth National Conference on Virtual Learning - CNIV 2006, to celebrate one hundred years since the birth of the great Romanian mathematician and recipient of the IEEE Computer Society "Computer Pioneer Award" (1996), Grigore MOISIL (1906-1973).

The ICVL was structured to provide a vision of European e-Learning and e-Training policies, to take notice of the situation existing today in the international community and to work towards developing a forward-looking approach to Virtual Learning from the viewpoint of modelling methods and methodological aspects (M&M), information technologies (TECH) and software solutions (SOFT). The conference covered a broad range of topics, including but not limited to: Innovative teaching and learning technologies, Web-based methods and tools in traditional, online education and training, Collaborative e-Learning, e-Pedagogy, Design and development of online courseware, Information and knowledge processing, Knowledge representation and ontologies, Cognitive modelling and intelligent systems, Algorithms and programming for modelling, Advanced distributed learning technologies, Web, virtual reality/AR and mixed technologies, Mobile e-Learning, communication technology applications, Computer graphics and computational geometry, Intelligent virtual environments, New software environments for education and training, Streaming multimedia applications in learning, Scientific web-based laboratories and virtual labs, Soft computing in virtual reality and artificial intelligence, Avatars and intelligent agents.

Initially, 72 abstracts were received and 55 of them were selected. Finally, only 34 papers were accepted for presentation at the ICVL and publication in the Proceedings of the ICVL - Bucharest University Press (ISBN 978-973-737-218-5). Participants from Europe, Japan, Australia and Canada discussed various aspects concerning future developments in the virtual learning field during the conference. Four invited papers, addressing trends in professional learning; time series modelling, analysis and forecasting in e-Learning environments; AeL - the e-Learning Universal Platform; and the teaching-through-projects methodology, were presented as plenary lectures.
Ten papers proposed different software solutions, while twenty papers were dedicated to modelling methods and methodological aspects. Some ICVL papers are considered for publishing in the current issue of the International Journal of Computers, Communications and Control. Let us present an introduction of the selected papers. The paper of A. Andreatos is dedicated to define and classify the Virtual Communities and their Importance for Informal Learning, and to examine their social impact and resulting trends in technology management. A bibliographical review, and some case studies illustrate the aforementioned tasks. A. Anohina, in her paper, considers the intelligent tutoring systems (including architecture topics based on two layers approach) powered by adaptive support for learners in order to solve practical problems. Copyright ° c 2006-2007 by CCC Publications38 Grigore Albeanu The Minimax algorithm is considered as a practical illustration. The paper of N. Doukas and A. Andreatos presents a computer-aided assessment system (e-Xaminer; a web-based interface system) based on parametrically designed questions that uses meta-language concepts to automatically generate tests. In their paper, I. Kitagaki and his colleagues, present an algorithm for groupware modelling for a collaborative learning environment using mobile terminals. They show not only the grouping algorithm but also some considerations about discussion in a classroom. M. Lambiris presents the concepts and a technique used to design a methodology for providing individualised computer-generated feedback to students. Such an approach can also be used to provide detailed and high accuracy information to the instructor about the performance of the whole group. The paper of G. Moise describes a software system for online learning using intelligent agents (an execution agent and a supervisor agent) and conceptual maps. Experimental results are also considered. M. Oprea presents in her paper a multi-agent system design procedure to be applied for universities course timetable scheduling that is a difficult administrative task. A preliminary evaluation of the proposed multi-agent system is presented in order to show the benefits obtained when a university uses such an approach. Considering the successful story of the ICVL 2006 event, the scientific community shows great interest to the second edition (ICVL 2007: 26-28th of October) that will take place at OVIDIUS University of Constanta, Romania. Grigore Albeanu Guest editor ICVL Technical Program Chair UNESCO Chair in Information Technologies University of Oradea, University Street, No. 1 410087, Oradea, Bihor, RomaniaInternational Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 39-47 Virtual Communities and their Importance for Informal Learning Antonios Andreatos Abstract: This paper deals with the concept of informal learning in virtual communities on the Internet. Initially we discuss the need for continuing education and its relation with informal learning. Virtual communities are next defined and then compared to real communities. Case studies are employed, focused on some specific kinds of virtual communities. We examine how they operate, how their members interact, what values they share and what kind of knowledge they gather. The learning process within virtual communities is then examined. We look at the kind of information and knowledge available in some particular virtual communities, and comment on its organisation. 
Next, the learning process of virtual communities is compared to that of Open Universities. Finally, we claim that the participation in virtual communities is not only a form of continuing education but also a contribution towards the multiliteracies needed for working as well as living in the 21st century. Keywords: Virtual Communities, informal learning, multiliteracies. 1 Introduction 1.1 Defining informal learning Learning is a natural, spontaneous and lifelong process of human nature. Education, on the other hand, is a formal, structured, organised process with specific goals. The terms ‘learning’ and ‘education’ are often confused, because education is based on the learning process [1]. Learning may be formal, non-formal or informal [2]. 1. Formal learning (what we usually call Education) is offered by elementary schools, high schools, colleges and universities; it is based on the teacher-student model. 2. Non-formal learning is still organised learning but outside the formal learning system; it is offered by official organisations such as governmental services, youth organisations, training services, scientific unions, enterprises, voluntary and non-profit organisations, etc. 3. Informal learning on the other hand is not organised nor organized but casual; even travelling or watching TV may lead to informal learning [1]. It is what we learn from everyday life [2]. 1.2 Continuing education and informal learning Social changes and the evolution of human knowledge in the Digital Era are so fast, that make further education imperative for many professionals. Like regular education, continuing education may be formal, non-formal or informal. In a recent research among the engineers - members of the Technical Chamber of Greece, it was found out that [3]: † The great majority (92,5 %) believes that continuing professional education is necessary for finding a (good) job. A percentage of 50,6 % believes that this education should take place every 3 years, while another 22 % places this time to every 5 years. † A majority of 56,4 % believes that the most important needs for continuing professional education are related to computers and new technologies. Copyright ° c 2006-2007 by CCC Publications Selected paper from ICVL 200640 Antonios Andreatos † A percentage of 49 % has attended (or attends currently) a professional continuing education program, while most of those who have not attended such a program (60,1 %) declare as the most important reason the lack of time due to work overload. From the above we conclude that professional continuing education program is necessary for many professionals. According to the Institute for Research on Learning, located in Menlo park (2000), at least 80 % of the professional knowledge, skills and practices needed for many jobs is informal [4]. Since a lot of professionals lack the time needed for attending a non-formal professional continuing education program, they have to learn new things and acquire new skills informally. Since informal education is so important, it is worth to be further examined. 
The most important characteristics of informal learning are [1], [4]: † It does not take place in special educational establishments standing out from normal life and professional practice; † It has no curriculum and is not professionally organized; it rather stems accidentally, sporadically, in association with certain occasions, from the changing practical requirements; † It is not planned pedagogically, nor systematically organised in subjects; † It is not qualification-oriented, nor officially recognised; † It is not formally organised and financed by institutions; † It is rather practical than theoretical; † It is rather unconscious, incidental problem-related and therefore, well-focused; † It is not instructed by a teacher or a course designer but rather self-directed; † It is closely related to professional practice; † It is a tool for living and survival. The ability of informal continuous self-education and training is a vital skill for today’s professionals. Knowledge is outdated fast, so a professional has to continually update his/her experiences and knowledge profile, if they want to be competitive. In this paper we are interested in studying informal learning, in relation to Virtual Communities (VCs). It is a process which lies between the non-formal education, defined above, and casual learning. The actual point varies from person to person [5]. It is not casual learning, because it aims at a goal; and the goal has to do with the common interest of the VC members. A user participates in a VC which deals with his/her interest(s), either for professional reasons or as a hobby. 2 Defining virtual communities 2.1 The era of new media The Internet has dramatically changed the way people get informed, interact and communicate in the 21st century. Distribution of information and knowledge is nowadays carried out more and more via the Internet [5]. It is characteristic that new terms such as blogs, bots, wikies and podcasting [6], [7] were unknown some years ago, and are still not registered in most (paper) dictionaries. Here lies the Internet advantage: it is the only medium that instantaneously follows today’s social evolutions. Not only that, but it is actually driving the evolution. In 1988 there were about 30.000 blogs available; today, there are moreVirtual Communities and their Importance for Informal Learning 41 than 35 million [8] and it is estimated that every second a new blog is created (www.technorati.com). On the other hand, Philip Meyer, author of the book “Vanishing newspaper: saving journalism in the Information Age”, estimates that, with current trends, the last newspaper reader will recycle that last newspaper in April 2040 [8]! Some types of new media, along with representative examples, are given below [9]: † BBS: The WELL, GEnie † Blog: LiveJournal, Xanga, MySpace, Facebook, Blogspot, Blogger, Myciab † Webcomic: UserFriendly, Penny Arcade, Sluggy Freelance † Habitat: LucasFilm’s Habitat, VZones † IM: ICQ, Yahoo! Messenger, MSN Messenger, AIM † IRC/EFNet † MMORPG: Everquest, Ultima Online, World of Warcraft, Silk Road Online † MOO: LambdaMOO † MUD/MUSH: TinyMUD † P2P: Napster, KaZaA, Gnutella, Morpheus † USENET † Wiki: Wikipedia, WikiWikiWeb, PBWiki, Wetpaint † WWW: eBay, GeoCities, Slashdot. 2.2 Towards a definition The birth of Virtual Communities is placed at the early years of the Arpanet, back to the seventies; the World Wide Web was not invented yet. Today they are well-established forums, i.e. virtual places for communicating and exchanging information. 
However, the term virtual community appeared in 1993 and it is attributed to the homonymous book by Howard Rheingold [10]. The book discusses a variety of Information and Communication Technology (ICT) -based communication and social groups. The technologies included Usenet, Internet Relay Chat (IRC), chat rooms, electronic mailing lists and gaming communities such as Multi-User Dungeon (MUD) and its clones (e.g., MUSH and MOO). Rheingold pointed out that belonging to such a group has some potential benefits for the personal psychological health, as well as for the society in general [9]. According to Rheingold, virtual communities are formed “when people carry on public discussions long enough, with sufficient human feeling, to form webs of personal relationships” [10]. The explosive diffusion of the Internet in certain countries was also accompanied by the proliferation of virtual communities. The nature of those communities and communications is rather diverse [9]. Today, virtual communities or online communities are used by a variety of social groups interacting via the Internet. Different virtual communities, like real communities, have different levels of interaction and participation among their members. An important characteristic of a community is the interaction among its members. Thus, an email distribution list with hundreds of recipients with zero or low interaction among members, may not be called a virtual community. Similarly, placing comments or tags to a blog or message board may not constitute a community. The highest degree of interaction is achieved in video gaming communities, where users compete online against other users. Like traditional social42 Antonios Andreatos groups or clubs, virtual communities often divide themselves into cliques or even separate to form new communities. Also, membership turnover rate varies greatly from VC to VC [9]. Each community shares its own interests, values, jargon [6], titles, leaders, ways of communicating and exchanging information and knowledge. 2.3 Comparison of VCs to real communities There is of course no substitute to interpersonal communication, but it may be limited by distance. On the other hand, in VCs the distance factor is not applicable. The ability to interact with likeminded individuals instantaneously from anywhere on the globe has considerable benefits. Perhaps the greatest advantage is that the common interests are guaranteed in VCs, whereas this is not the case in real companies based on proximity. The use of multimedia technologies greatly facilitates long-distance communication today. Evolution of technology will eventually bring multimedia (image, video etc) dimensions in digital communication, a fact which will enrich it further. Of course, the participation in a VC presupposes some familiarisation with ICT and the relevant equipment (PC, Internet connection etc). In real-life friendships, age is often a critical factor. Usually, one’s friends are around the same age. Generation gap constitutes a strong unwritten law in many societies. Yet, in virtual communities there is no age barrier. This is very important in many countries -including Greece- where the majority of Internet users are young people and higher age groups are minorities [11]. 
Since the personal characteristics of live communication are absent in VCs, user personalities are denoted by other symbols like nickname, personal information (such as email, website, blog, IRC number, Skype usernames etc), image/ personal mark/ signature, equipment related to the community interests (e.g. car, PC, cameras etc), user’s achievements related to the community interests etc. VCs should be seen as supplementary to the real communities and not as alternatives or substitutes. 3 Case studies The examination of some case studies will further clarify the above discussion. 1. Scientific union of Adult Education (of Greece, www.adulteduc.gr). The common interest here is professional. The Union organises conferences, seminars and meetings all around Greece; it also issues an online bimonthly bulletin for briefing and member communication. This also contains information on newly-edited books and scientific journals and the corresponding links on adult learning, information on instructor certification etc. A similar example is ‘the Hellenic Network of Open and Distance Education’ (www.opennet.gr). These communities have a professional character, are a bit more formal (e.g., no nicknames), have a hierarchy (president and members). They have a continuing education as well as a self- education character. 2. Hellenic Linux club (www.hellug.gr). This club is a Greek official and non-profit association of people working with or using or positively predisposed to Linux. Its aim is the union of such individuals, the communication among them in order to tighten their privities, as well as, the further proliferation of this operating system. Means for achieving the above goals are: meetings, problem solving support, translation of documents and articles in Greek; improvement of Greek language support in Linux; development of free software; presence in meetings, conferences and exhibitions; collaboration with peer clubs with common goals; diffusion of know-how; Follow-up and intervention whenever the interests of Linux are threatened. A similar site is: www.linux.gr. It contains news, documentation, articles, download material, links, Guidelines for various Linux distributions, guidelines for beginners, indexing and an electronic magazine. Linux blogs (for instance: http://linuxhelp.blogspot.com,Virtual Communities and their Importance for Informal Learning 43 http://www.computerworld.com/blogs/software/os/linux, http://linux-blogger.com, and http://www.suseblog.com) are also available. 3. www.overclockers.com: perhaps what is more admired here is the extra MHz a user can get out of his new PC, or, the exotic water cooler system one has constructed. VCs such as the second and the third one listed above, may be characterised as hobbyist or amateur communities rather than professional. Such VCs are more free, more non-formal, more casual. They share different values than the formal ones. Nicknames are used instead of real names. A couple of examples (with pseudonyms) are: “John Smith - aka Shroomer in the Forums”, “My name is Valentino Jones, a.k.a Cr@zyVJ on the net, and friends simply call me VJ”. Also, former education titles are not so important; the most important virtues are: expertise, participation and voluntarism to help other users. 4 Learning in Virtual Communities 4.1 Organisation of knowledge in VCs In a formal distance learning environment the educational material is well organised: (i) the courses are structured in a prerequisite order, from the fundamental to the most complicated. 
(ii) The educational material is composed of Learning Objects (LOs) [2]. Many LOs form a course and many courses form a curriculum. Among the various courses there is no (or minimal) overlap. (iii) The educational material is usually managed by a Learning Management System (LMS) [2], [13]. Let us assume that the information / knowledge resources of a VC are the contents of its node (such as website or a blog). In this case, the material is rather chaotically organised, with high overlaps, no particular structure, no particular management. Homepages link to several sub-pages and other related nodes. The various similar VC nodes (e.g., Linux communities) are loosely connected. The ability to find specific information requires sometimes specific skills of searching and data mining. However, there is still a hidden hierarchy: the first level is the knowledge present in the node, which may be downloaded; the second level is the knowledge and experience of the community members, which is not seen. 4.2 Looking for information in VCs The most common ways for getting access to specific information from VCs are: 1. Download articles from their nodes 2. Participate in fora and pose questions 3. Read FAQs and search for keywords 4. Use the site search engine (if available) 5. Contact sage membres (‘gurus’, ‘masters’ etc.) directly. However, since the material is not organised, a user may have to search for several minutes in order to find what he/she is looking for. 4.3 Comparison to Open University practices There exist strong similarities but also differences between the ways learning is achieved on line in virtual communities and Open Universities (OU) using the Web. As an example of an Open University we shall consider the Hellenic Open University (HOU) [www.eap.gr]. HOU students interact with their44 Antonios Andreatos instructor as well as with each other over the Internet frequently, in order to ask questions and get answers about the educational material and particularly the assignments they have to carry out. Mostly the interaction is done by emails and forums. The students are all provided with the same books and are supposed to follow a specific syllabus. The students meet regularly five times throughout the course of a year; attendance is not required. In the end of the academic year they also take an exam (live) which is mandatory and counts for 70 % of the final grade. All these practices do not occur in VCs, where learning is informal. But there is a strong similarity in that the students study and learn on their own. This practice is fundamental for the institution and operation of all OUs worldwide [3]. Similarities between OUs and VCs are listed in Table 1 below. Table 2 lists some differences. 
Table 1: Similarities between OUs and VCs
- Students / members study on their own
- They depend a lot on the educational material
- They help each other gain specific knowledge or skills
- They may be assessed by knowledge or skills (titles or grades or expertise)
- They may be anywhere in the world
- They are adults and therefore self-motivated
- They are moderated by an instructor or a list moderator or owner of the site

Table 2: Differences between OUs and VCs
Open Universities | Virtual Communities
Students are directed by the instructors | Members of VCs are self-directed
Students are provided with the same education material | Members study different material and practice a lot
Students are supposed to follow a specific syllabus | There is no syllabus
Students have seen each other at least once | Real-life interaction may never take place
Focus primarily on knowledge | Focus primarily on expertise
Provide a title | Do not provide a title
Knowledge is more theoretical | Knowledge is more practical and empirical

4.4 Professionals and continuing education

Today the Internet is used by millions of people as an interminable pool of knowledge, as a huge online encyclopaedia. A user seeking particular information on a subject may find a lot of it, not only in online encyclopaedias or dictionaries but also in specialised VC nodes. For this reason, the ability of informal self-education and training is a vital skill for today's professionals. Based on personal experience, we believe that the information and knowledge gathered in some specific VC-related nodes is superior to that available in traditional, even academic, sources such as books, electronic or conventional, in terms of practicality and in-depth and up-to-date coverage. Since VCs continually update the (practical) skills of their members, we can claim that they offer some kind of informal education [15]. The user groups of these professional sites may be regarded as loose professional communities with no or limited interaction amongst users.

Many profit and non-profit organisations offer (often for free) seminars via the Web (also known as 'Webinars') to their customers or community members. The main purpose of most such webinars is to demonstrate the use of the companies' products (such as software tools, integrated circuits, e-Learning platforms, etc.). Let us look at some examples. The MathWorks Inc. offers free, live and interactive monthly webinars concerning the use of MATLAB toolboxes. SABA, an e-Learning systems company, also offers live online webinars (www.saba.com). National Semiconductor (www.national.com) offers online seminars for design engineers. Microsoft maintains a large 'knowledge base' with articles for computer and network professionals. Teacher unions and communities do not lag behind. Let us examine two case studies from the Greek national Internet domain:

† 'EEEP', the Greek Primary Teachers Association for the valorisation of ICT in education, a non-profit open community. They issue a journal, organize conferences and maintain a vivacious site (eeep.gr). Users can read news and download their electronic magazine (sometimes called a 'webzine').

† The aforementioned Scientific union of Adult Education (of Greece) [www.adulteduc.gr] is another example. They organise conferences, seminars and meetings all around Greece; it also issues an online bimonthly bulletin for briefing and member communication.
4.5 VCs and ‘multiliteracies’
In a pioneering and influential article published in 1996, the ‘New London Group’ argues that today's world is characterised by increasing cultural and linguistic diversity and by a variety of new communication modes and channels, due to the evolution of ICT. According to the authors, traditional language-based pedagogical approaches do not provide adequate skills for working and, more generally, living in today's multicultural societies, and a new approach to literacy pedagogy, which they call ‘multiliteracies’, is needed instead. Multiliteracies are based on the assumption that the multiple linguistic and cultural differences in our society are central to the working and private lives of students. The use of multiliteracies approaches to pedagogy will enable students to achieve the following two goals: a) gain access to the evolving language of work and community; and b) foster the critical engagement necessary for them to design their social futures and succeed through satisfying employment [16], [17].
5 Discussion and conclusion
In this paper we have examined Virtual Communities (VCs); more specifically, we have dealt with three types of VCs: video game VCs, professional VCs and amateur VCs. We have identified some differences among them, as well as some similarities and differences between VCs and real-life communities. Next we have examined informal learning in VCs and compared the organisation of knowledge in VCs to that of distance learning courses. Learning gained through participation in VCs was briefly compared to the methods followed by open universities. Furthermore, it was claimed that new ‘digital’ skills are needed by 21st-century citizens.
From the discussion above we may conclude that for a professional, participation in professional VCs may be akin to continuing education, whereas for a non-professional it may merely serve as entertainment. Of course, professionals may also benefit from non-professional VCs. In any case, however, voluntary participation in VCs is very important, because it fosters the necessary ‘digital behaviour’ and cultivates ‘digital communication’ skills. Based on personal experience, we believe that the information and knowledge gathered in some community-related nodes concerning practical subjects are superior to those available in traditional, even academic, sources such as books, electronic or conventional.
Nowadays, when multiliteracy education is needed for living and working in the digital era, digital communication skills are indispensable. ‘Digital behaviour’ and ‘digital communication’ rules and ethics are being developed; therefore, all contemporary people should be ‘digitally literate’ in order to survive in a changing and competitive environment. Real-world communication skills are not enough; ‘digital communication’ skills are also needed. The ability to use the Internet and the new media is vital for surviving in the 21st century. VCs will continue to play an important role in 21st-century society, due to social evolution, the globalisation of economy and knowledge, competition and new media technologies.
6 Acknowledgement
The author wishes to thank Mr. M. Vidalis for reviewing the manuscript.
References
[1] A. Rogers, Teaching Adults, Open University Press, 1996. [2] http://en.wikipedia.org/wiki/Learning, retrieved on Dec. 11, 2006. [3] Continuous further education is necessary - Changes in syllabus are imperative, News bulletin of the Technical Chamber of Greece, no.
2423, Jan. 15, pp. 6-8, 2007. [4] http://en.wikipedia.org/wiki/Informal_learning, retrieved on Dec. 11, 2006. [5] A. Vardamaskou and P. Antoniou, Informal learning: evaluation of an Internet-based physical activity educational program for adults, Proceedings of the 3rd International Conference on Open and Distance Learning, Patra, Greece, vol. A, pp. 405-417 (in Greek), 2005. [6] S. Ververidis, The glossary of New Media, Kathimerini (newspaper) special edition: New media: the alternative choice, 28, pp. 88-89 (in Greek), 2006. [7] D. Doulgeridis, Electronic diaries in common view, Tachydromos magazine, no. 266, pp. 44-49 (in Greek), 2005. [8] C. Angelopoulos, Blogs change the landscape of communication, special edition of Kathimerini (newspaper): New media: the alternative choice, 28, pp. 78-79 (in Greek), 2006. [9] http://en.wikipedia.org/wiki/Virtual_communities, retrieved on May 15, 2006. [10] H. Rheingold, The Virtual Community: Homesteading on the Electronic Frontier, Harper Perennial, San Francisco, 1993; also available online at: www.rheingold.com/vc/book, retrieved on May 29, 2006. [11] VPRC National research for new technologies and Information Society, available online at: http://www.vprc.gr/2/1232/21_gr.html (in Greek), 2005, retrieved on Sept. 29, 2006. [12] F. Pantano-Rokou, Educational design for e-learning: models, meaning and impact on learning, Open Education, 1, pp. 45-68 (in Greek), 2005. [13] G. Dimauro et al., An LMS to support e-learning activities in the university environment, WSEAS Transactions on Advances in Engineering Education, vol. 3(5), pp. 367-374, 2006. [14] D. Vergidis, A. Lionarakis, A. Lykourgiotis, B. Makrakis and Ch. Matralis, Open and distance learning, vol. 1, Institution and Operation, Hellenic Open University, Patra (in Greek; title of book translated by paper author), 1998. [15] A. Margetousaki and P. Michaelides, Communities of Practice as a place of learning and development, Proceedings of the 3rd Pan-Hellenic conference on the Didactics of Information Science, Corinth (in Greek), Oct. 2005. [16] The New London Group, A pedagogy of multiliteracies: designing social futures, Harvard Educational Review, vol. 66, no. 1, pp. 60-92, 1996. [17] J. Salpeter, 21st century skills: will our students be prepared?, available online at: www.techlearning.com/story/showArticle.jhtml?articleID=15202090, 2003, retrieved on March 22, 2006.
Antonios Andreatos
Dept. of Aeronautical Sciences
Div. of Computer Engineering and Informatics
Hellenic Air Force Academy
Dekeleia, Attica, TGA-1010 GREECE
E-mail: [email protected]
Received: November 17, 2006
Editor's note about the author: Antonios Andreatos is a Professor at the Computer Engineering Division of the Hellenic Air Force Academy. He was born in 1960 in Athens, Greece. He received the Diploma in Electrical Engineering from the University of Patras in 1983, the M.S. degree from the University of Massachusetts (Amherst) in 1985 and the Ph.D. from the National Technical University of Athens (NTUA) in 1989. He was a research scholar at the European Joint Research Centre in Ispra, Italy. He has published various papers in journals and conferences; he has also authored a book on the design of microcomputer systems in 2001. He has also taught at the Hellenic Open University.
His main technical interests lie in the areas of microprocessors, computer architecture, computer networks, e-learning and adult education.
International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 48-55
Advances in Intelligent Tutoring Systems: Problem-solving Modes and Model of Hints
Alla Anohina
Abstract: The paper focuses on the issues of providing adaptive support for learners in intelligent tutoring systems while they solve practical problems. The results of an analysis of learner-support policies in existing intelligent tutoring systems are given and the problems revealed are emphasized. The concept and the architectural parts of an intelligent tutoring system are defined. An approach that gives such systems greater adaptive abilities, by offering two modes of problem-solving and using a two-layer model of hints, is described. It is currently being implemented in an intelligent tutoring system for the Minimax algorithm. In accordance with the proposed approach, the learner solves problems in the mode that is most appropriate for him/her and receives the most suitable hints.
Keywords: intelligent tutoring system, problem-solving mode, hint
1 Introduction
The emergence of the knowledge society and the growing demand for a highly skilled and educated labor force call for changes in traditional teaching and learning processes. One direction of change is the integration of various kinds of computer-based learning systems as supplements to conventional teaching methods. However, a software system must possess intelligent and adaptive abilities if it is to take over the role of a teacher effectively. This idea is not a new one: it has been exploited in intelligent tutoring systems for more than 30 years, since the earliest such system, SCHOLAR [1], appeared. During this time a large number of intelligent tutoring systems have been implemented for different areas, for example mathematics, physics, medicine, informatics and computer science [2, 3, 4, 5, 6, 7, 8, 9, 10]. Nowadays the development of systems of this kind has received new impetus with the appearance of the agent paradigm [11]. However, the adaptive abilities of intelligent tutoring systems are still not high enough, particularly regarding the modes of practical problem-solving and the support of the learner in this process.
Solving domain problems is an important part of intelligent tutoring systems, as it allows the acquired theoretical knowledge to be deepened in practice, but solving alone is unlikely to lead to improved skills or a deeper understanding of the subject matter. Learning often takes place best when the learner receives feedback from the system. Feedback is a way to improve the learning process on the basis of continuous assessment of learning results, the analysis of their quality and the performance of necessary corrections. Feedback encourages desired learning behaviour and discourages undesired behaviour, helps the learner to understand how successfully he/she is acting and whether relevant knowledge is being applied, and provides opportunities to correct misconceptions. In the case of intelligent tutoring systems, feedback comprises the various reactions of the system to the learner's behaviour. A hint, in its turn, is only one form of feedback. Unfortunately, little prior research has been devoted to the general issues of hint formation in intelligent tutoring systems.
The most significant work is [12], which describes the results of studying hints used by experienced tutors and attempts to formulate a strategy for using hints in intelligent tutoring systems. According to [12], hints encourage the student to engage in active cognitive processes that are thought to promote deeper understanding and long-term retention. As pointed out in [13], the intelligent tutoring systems developed so far have relatively simple and inflexible hinting policies, which usually demand that the learner follow a prescribed problem-solving strategy; hints are therefore always aimed at the next step to be taken according to that strategy. The authors draw attention to two problems: the inflexible choice of the steps targeted by hints and the progression of hints from the most general to the most specific.
An analysis of existing intelligent tutoring systems leads to the following conclusions about the reactions of a system to the actions of a learner. Typically the system gives the learner immediate feedback after each action or step performed during problem-solving, irrespective of whether the action or step was correct or incorrect. Such a policy prevents the learner from proceeding along a wrong solution path. Examples of immediate feedback are found in [5, 8, 9, 14, 15]. But perhaps a learner would like to make a series of steps, receive feedback about their correctness afterwards, and find out by himself/herself which step has led to an incorrect solution? The system usually provides a special button or tool which the learner can use to request a hint. In AlgeBrain [5] such a tool is an animated agent. The system responds with two types of support: generalized "Here's what I'm expecting you to do at this point" help text and a hint specific to the current state of the problem. In Andes [9] there are two buttons. One of them gives help ("what's wrong with that?") on an incorrect entry. The other button provides a hint about the next step in problem-solving.
Typically, hints are organized in a range from the most general to the most specific. The general hint, as a rule, contains minimal information about an error. The informativeness of the hints then increases, and the most specific hint clearly specifies or shows what should be done. Hints are given sequentially. A number of systems use this approach, for example [5, 8, 10, 14, 16, 17]. The organization of hints from the most general to the most specific is not flexible enough. An insufficient amount of information in a hint can cause frustration and a desire to request the subsequent hints without attempting to solve the problem. Information that specifies the necessary actions after the first request for a hint, in its turn, works against the learning process. Thus, mechanisms are necessary that allow the system's reactions to be individualized for each learner, giving an amount of information that helps while still imposing a certain cognitive load. An example of adaptive hinting is described in [18]. The authors use the learner's proficiencies to select an appropriate hint: a learner with high proficiency at a particular skill receives a more subtle hint, while a less proficient learner is presented with a more obvious one.
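As an illustration only (the data structures of the system described in [18] are not given here), such proficiency-based hint selection might look like the Python sketch below; the hint texts, the [0, 1] proficiency scale and the rounding rule are invented for the example.

```python
def select_hint(hints_general_to_specific: list, proficiency: float) -> str:
    """Pick a hint for one skill: a highly proficient learner gets a subtle
    (general) hint, a less proficient learner an obvious (specific) one.
    `proficiency` is assumed to lie in [0, 1]."""
    index = round((1.0 - proficiency) * (len(hints_general_to_specific) - 1))
    return hints_general_to_specific[index]

hints = ["Think about whose turn it is at this node.",       # most subtle
         "Compare the values of the child nodes.",
         "Choose the minimum of the child values here."]      # most obvious
print(select_hint(hints, proficiency=0.9))   # subtle hint for a strong learner
print(select_hint(hints, proficiency=0.2))   # obvious hint for a weak learner
```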
The authors of [18] point out that such adaptation is better than requiring learners to wade through several levels of hints before they receive material appropriate to their knowledge level.
This paper describes an approach that provides intelligent tutoring systems with greater adaptive abilities by supporting two modes of problem-solving and using a two-layer model of hints. Thus, the learner solves problems in the mode that is most appropriate for him/her and receives the most suitable hint. The approach is currently being implemented in an intelligent tutoring system for the Minimax algorithm.
The paper is organized as follows. Section 2 defines the concept and the architectural parts of an intelligent tutoring system. The developed approach, based on two modes of problem-solving and a two-layer model of hints, is discussed in Section 3. Section 4 describes the intelligent tutoring system for the Minimax algorithm in which the proposed approach is being implemented. Finally, conclusions are presented and some directions for future work are outlined.
2 Intelligent tutoring systems
Despite the broad variety of systems that have been developed, an unequivocal and exhaustive definition of an intelligent tutoring system still does not exist. However, it is possible to list the most often mentioned characteristics of systems of this kind [10, 19, 20, 21, 22, 23, 24]. Thus, an intelligent tutoring system is a computer-based system. It is an intelligent system because it uses principles and methods of artificial intelligence [25], such as knowledge representation, inference mechanisms and machine learning, in its structure and operation. An intelligent tutoring system is an adaptive system, as it alters aspects of its structure, functionality or interface for the particular user and his/her changing needs over time [26]. It emulates a human teacher, tries to provide the benefits of individual (one-on-one) tutoring, and is based on the theory of learning and cognition. Furthermore, intelligent tutoring systems are characterized by the fact that they store three basic kinds of knowledge [20, 27]: domain knowledge, knowledge about learners, and pedagogical knowledge. These knowledge types determine three main parts of the system's architecture: the domain knowledge, the student diagnosis module, and the pedagogical module. An intelligent tutoring system, like any other software that communicates intensively with users, also needs a part of the architecture responsible for the interaction between the system and the learner: a communication module, or interface, which controls screen layouts, interaction tools, etc. However, each system can contain additional components, whose presence depends on the following factors: the features of the problem domain, the separation of particular functions of the basic constituent parts into isolated components of the structure, the technology used for system implementation, and additional functional capabilities of the system. The general architecture of an intelligent tutoring system is shown in Figure 1.
Figure 1: The general architecture of an intelligent tutoring system (adapted from [28])
The domain knowledge is the knowledge the system is teaching. Most often it is incorporated in the expert model, which represents the skills and expertise that an expert in the particular domain holds. The model serves as a standard for evaluating the learner's performance and knowledge level.
The domain knowledge can include fragments of theoretical material, texts of practical tasks and attributes related to them, explanatory units, rules and principles used in the domain, etc. Typically, it is represented within the system using logical, procedural, network or structured knowledge representation schemes [29]. Moreover, the domain knowledge is organized in a certain way, commonly as a hierarchy: for example, a topic includes several units, which consist of several chapters. The expert module generates solutions to problems for later comparison with the solutions of the learner.
The student diagnosis module carries out the student diagnosis process, which collects information about the learner, analyzes it and stores it in the student model. The student model is formed for a particular learner and serves as an input to the pedagogical module, which tailors the learning process to the needs of the learner. The model contains the learner's identifying information, information on his/her current knowledge level, information about cognitive, emotional and psychological features, past experience and interests, and a record of the learner's use of the system's options.
The pedagogical module provides a knowledge infrastructure for adapting the learning process to the characteristics and needs of a learner without the intervention of a human teacher. It implements the learning process on the basis of teaching strategies and instructions held in the pedagogical model. The primary tasks of this module are the selection and sequencing of the learning material most suitable for the learner, determining the type and content of feedback and help, and answering questions from the learner.
3 The proposed approach
3.1 The problem-solving modes
Generally, there are two possibilities regarding the moment of feedback delivery: immediate feedback after each step or action in problem-solving, and feedback after submission of a whole solution to the problem. This is the basis for the two modes of problem-solving in the proposed approach.
In the completeness mode the learner chooses the moments of feedback presentation in order to check the correctness of a series of steps: he/she can perform one or more steps of a problem and then request checking of the performed steps. The system provides feedback about the correctness of the previous actions, and the learner should determine by himself/herself which step has led to an incorrect solution. This mode is similar to reinforcement learning [30], which is widely used in artificial intelligence. In the step-by-step mode the system monitors each problem-solving step and gives feedback about its correctness. There are four variations of the step-by-step mode regarding the kind of information given to the learner (a minimal sketch of these variations follows the list):
• The learner receives both positive and negative feedback. When the learner has performed a correct action, he/she is praised (receives positive feedback, or a reward). If the step was incorrect, criticism (negative feedback) is given to the learner. Moreover, negative feedback can be given in two different forms: only as text informing the learner that the action was incorrect, or as text about the incorrect step together with a hint about how to improve his/her work.
• The learner receives only negative feedback. In this case the negative feedback can also be given in the two forms described above.
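The following Python sketch only illustrates the feedback variations just listed; it is not the system's implementation, and the option names, message texts and function signature are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class FeedbackOptions:
    """Hypothetical switches for the step-by-step mode."""
    give_positive: bool = True   # praise correct steps?
    attach_hint: bool = False    # append a hint to negative feedback?

def step_feedback(step_is_correct: bool, opts: FeedbackOptions, hint: str = "") -> str:
    """Return the system's reaction to a single problem-solving step."""
    if step_is_correct:
        return "Correct step." if opts.give_positive else ""
    message = "This step is incorrect."
    if opts.attach_hint and hint:
        message += " Hint: " + hint
    return message

# Example: the negative-feedback-only variation, with a hint attached.
print(step_feedback(False,
                    FeedbackOptions(give_positive=False, attach_hint=True),
                    hint="Re-check which player moves at this level."))
```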
In the completeness mode the learner is not praised or criticized for each performed step. Instead, he/she receives a total estimation of all performed actions. The estimation specifies how far the learner is from his/her goal: the correct solution of the problem. The total estimation can have a positive or negative deviation, depending on the difference between the numbers of correctly and incorrectly performed steps. When this difference exceeds some admissible value, a hint can be given to the learner. Thus, there are two variations of the completeness mode regarding the kind of feedback:
• The learner receives only a total estimation of the performed steps.
• The learner receives a total estimation of the performed steps together with a hint about how to improve his/her work.
Obviously, the learner must be given the opportunity to change the problem-solving mode and the kind of feedback, as well as to request a hint when he/she receives only the text of the feedback. The general scheme of the problem-solving modes and kinds of feedback is displayed in Figure 2. It must be stressed that the described problem-solving modes can be implemented only if the process of finding a problem solution consists of several (homogeneous or heterogeneous) steps.
Figure 2: The problem-solving modes and kinds of feedback for an intelligent tutoring system
3.2 The model of hints
The model of hints in the proposed approach defines two layers (Figure 3): a layer of general hint categories and a layer of hints within the general categories. There are three general hint categories: general hints, hints of average informativeness, and specific hints. Each category contains one or more hints, which are also ordered from less informative to more informative. The model allows the learner to receive a hint that is the most suitable for him/her. Before the learner starts to work in a problem-solving mode, a test is taken in order to determine the general hint category suitable for the learner. When subsequently requesting help during problem-solving, the learner receives the middle hint of the category suitable for him/her. If, after receiving a hint, the learner is not able to execute a correct action, he/she is presented with the next hint. The process continues until the last hint for the given error is reached. Such an approach spares the learner from being presented with uninformative hints. On the contrary, the learner receives in good time a hint that provides help and a certain cognitive load, thereby reducing the likelihood of frustration, floundering and loss of interest in learning.
Figure 3: The two-layer model of hints (adapted from [28])
4 The intelligent tutoring system for the Minimax algorithm
The described approach is being implemented in an intelligent tutoring system for a topic of the course "Fundamentals of Artificial Intelligence" at the Faculty of Computer Science and Information Technology of Riga Technical University. The topic is related to the algorithm for implementing two-person games with full information, i.e., the Minimax algorithm [29], an example of which is given in detail in [31].
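For orientation, a minimal reference sketch of the Minimax computation itself is given below. This is not the tutoring system's code: it assumes a game tree supplied as nested Python lists whose leaves are numeric position evaluations, and it simply returns the minimax value of the root.

```python
def minimax(node, maximizing: bool):
    """Return the minimax value of `node`.

    A node is either a number (a leaf evaluation) or a list of child nodes.
    `maximizing` is True when it is the maximizing player's turn to move.
    """
    if isinstance(node, (int, float)):          # leaf: static evaluation
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# A depth-2 example: MAX chooses between two MIN nodes.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))   # MIN yields 3 and 2, so MAX picks 3
```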
All the practical problems aimed at developing skills in applying the Minimax algorithm consist of a sequence of homogeneous steps. Let us consider one of them. A game tree is presented to the learner. The tree contains arcs which are inadmissible for a game with full information: arcs between nodes at the same level or across levels. As a rule, there are from 3 to 5 wrong arcs in the tree. The learner should find them and remove them from the tree. Thus, the removal of one (wrong or correct) arc is a step in this task. Tasks for the Minimax algorithm are therefore a fine example of an opportunity to provide the two problem-solving modes in the system.
At present the following work on the developed approach has been completed: possible mistakes and the hints corresponding to them have been defined for each problem, the sets of hints for each mistake have been divided into categories according to the two-layer model of hints described in the previous section, and the user interface of both problem-solving modes has been developed. The architecture of the system corresponds to the general architecture of an intelligent tutoring system displayed in Figure 1. The system carries out an assessment of the learner's initial knowledge level on the topic, provides a theoretical knowledge acquisition mode and a practical problem-solving mode with preliminary determination of the problem-solving mode and the category of hints most suitable for the learner, and gives a final assessment of the achieved knowledge level.
5 Conclusions and future work
The adaptive abilities of intelligent tutoring systems are not high enough, especially regarding the problem-solving modes offered to a learner and the ordering of hints from the most general to the most specific. The paper presents an approach which allows the learner to work in the problem-solving mode that is the most appropriate for him/her and to receive the most suitable hint. The proposed two-layer model of hints can reduce the frustration, floundering and loss of interest in learning that are inevitable when the learner receives too little support during problem-solving.
At present the proposed approach is still under development. Firstly, it is necessary to determine how the testing that identifies the problem-solving mode and the category of hints most suitable for the learner may be implemented. Secondly, both the psychological and the pedagogical foundations of the proposed approach should be specified. After the implementation of the approach in the intelligent tutoring system for the Minimax algorithm, the system will be tested in the corresponding course, together with a survey of learners and the subsequent interpretation of the results.
References
[1] J. R. Carbonell, AI in CAI: An Artificial Intelligence Approach to Computer-Assisted Instruction, IEEE Transactions on Man-Machine Systems, Vol. 11, No. 4, pp. 190-202, 1970. [2] J. S. Brown and R. R. Burton, A Paradigmatic Example of an Artificially Intelligent Instructional System, International Journal of Man-Machine Studies, Vol. 10, pp. 323-339, 1978. [3] J. S. Brown, R. R. Burton and J. de Kleer, Pedagogical, Natural language, and Knowledge Engineering Techniques in SOPHIE I, II and III, In D.H. Sleeman and J.S. Brown (Eds): Intelligent Tutoring Systems, Academic Press, London, 1982. [4] J. R. Anderson and B. J. Reiser, The Lisp Tutor, Byte Magazine, Vol. 10, pp. 159-175, 1985. [5] S. R. Alpert, M. K. Singley and P. G. Fairweather, Deploying Intelligent Tutors on the Web: an Architecture and an Example, International Journal of Artificial Intelligence in Education, Vol. 10, No. 2, pp. 183-197, 1999. [6] V. Devedzic, J. Debenham and D.
Popovic, Teaching Formal Languages by an Intelligent Tutoring System, Educational Technology & Society, Vol. 3, No. 2, pp. 36-49, 2000. [7] M. Hospers, E. Kroezen, A. Nijholt, R. den Akker and D. Heylen, An Agent-based Intelligent Tutoring System for Nurse Education, In J. Nealon and A. Moreno (Eds): Applications of Intelligent Agents in Health Care, Birkhauser Publishing Ltd, Basel, Switzerland, 2003. [8] N. Matsuda and K. VanLehn, Advanced Geometry Tutor: an Intelligent Tutor that Teaches Proof-Writing with Construction, Proceedings of the 12th International Conference on Artificial Iintelligence in Education, Amsterdam, pp. 443-450, 2005. [9] K. VanLehn, C. Lynch, K. Schulze, J. A. Shapiro, R. Shelby, L. Taylor, D. Treacy, A. Weinstein and M. Wintersgill, The Andes Physics Tutoring System: Lessons Learned, International Journal of Artificial Intelligence in Education, Vol. 15, No. 3, pp. 147-204, 2005. [10] R. S. Crowley and O. Medvedeva, An Intelligent Tutoring System for Visual Classification Problem Solving, Artificial Intelligence in Medicine, Vol. 36, No. 1, pp. 85-117, 2006. [11] A. Anohina, Agents in intelligent tutoring systems: state of the art, Scientific proceedings of Riga Technical University, Computer science, 5th series, Vol. 22, pp. 110-121, 2005. [12] G. Hume, J. A. Michael, A. A. Rovick, and M. Evens, Hinting as a Tactic in One-on-One Tutoring, The Journal of the Learning Science, Vol. 5, No. 1, pp. 23-47, 1996. [13] N. Matsuda and K. VanLehn, Modeling Hinting Strategies for Geometry Theorem Proving, Proceedings of 9th International Conference on User Modeling, Johnstown, PA, USA, pp. 373-377, 2003. [14] Kinshuk, T. Lin, A. Yang and A. Patel, Plug-able Intelligent Tutoring and Authoring: an Integrated Approach to Problem-based Learning, International Journal of Continuing Engineering Education and Life-Long Learning, Vol. 13, No. 1/2, pp. 95-105, 2002. [15] M. A. Nunes, L. L. Dihl, L. M. Fraga, C. R. Woszezenki, L. Oliveira, D. J. Francisco, G. Machado, C. Nogueira and M. Notargiacomo, IVTE - Pedagogical Game for Distance Learning, Proceedings of ASET Conference, Melbourne, 2002. [16] P. Suraweera, An Animated Pedagogical Agent for SQL-Tutor, Honours Project HONS 08/99, 1999. [17] M. Kalayar, H. Ikematsu, T. Hirashima and A. Takeuchi, Intelligent Tutoring System for Search Algorithm, Proceedings of ICCE, Seoul, Korea, pp. 1369-1376, 2001. [18] M. Stern, J. Beck and B. P. Woolf, Adaptation of Problem Presentation and Feedback in an Intelligent Mathematics Tutor, In C. Frasson, G. Gauthier and A. Lesgold (Eds): Intelligent Tutoring Systems, Springer-Verlag, New York, 1996. [19] B. A. Cheikes, GIA: An Agent-Based Architecture for Intelligent Tutoring Systems, Proceedings of the CIKM’95 Workshop on Intelligent Information Agents, Baltimore, Maryland, USA, 1995. [20] N. Capuano, M. De Santo, M. Marsella, M. Molinara and S. Salerno, A Multi-Agent Architecture for Intelligent Tutoring, Proceedings of the International Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet SSGRR 2000, L’Aquila, 2000. [21] A. M. Bell and S. Ramachandran, An Intelligent Tutoring System for Remote Sensing and Image Interpretation, Proceedings of the Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), Orlando, Florida, USA, 2003. [22] C. J. Butz, S. Hua and R. B. 
Maguire, A Web-based Intelligent Tutoring System for Computer Programming, Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence (WI'04), Beijing, China, pp. 159-165, 2004. [23] E. Remolina, S. Ramachandran, D. Fu, R. Stottler and W. R. Howse, Intelligent Simulation-Based Tutor for Flight Training, Proceedings of the Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), Orlando, Florida, USA, 2004. [24] J. M. Gascueña and A. Fernández-Caballero, An Agent-based Intelligent Tutoring System for Enhancing E-learning/ E-teaching, International Journal of Instructional Technology and Distance Learning, Vol. 2, No. 11, pp. 11-24, 2005. [25] P. Brusilovsky and C. Peylo, Adaptive and Intelligent Web-based Educational Systems, International Journal of Artificial Intelligence in Education, Vol. 13, pp. 156-169, 2003. [26] D. R. Benyon and D. M. Murray, Adaptive Systems: from Intelligent Tutoring to Autonomous Agents, Knowledge-Based Systems, Vol. 6, No. 4, pp. 197-219, 1993. [27] C. Frasson, T. Mengelle and E. Aimeur, Using Pedagogical Agents in a Multi-Strategic Intelligent Tutoring System, Proceedings of the 8th World Conference on Artificial Intelligence in Education AI-ED97, Workshop V: Pedagogical Agents, Kobe, Japan, pp. 40-47, 1997. [28] A. Anohina, The Problem-Solving Modes and a Two-Layer Model of Hints in the Intelligent Tutoring System for Minimax Algorithm, Proceedings of the 1st International Conference on Virtual Learning, Bucharest, Romania, pp. 105-112, 2006. [29] G. F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Addison Wesley, 2001. [30] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 2003. [31] A. Anohina, Intelligent Tutoring System for Minimax Algorithm, Scientific proceedings of Riga Technical University, Computer science, 5th series, Vol. 22, pp. 122-130, 2005.
Alla Anohina
Riga Technical University
Department of Systems Theory and Design
Kalku street 1, Riga, Latvia, LV-1658
E-mail: [email protected]
Received: November 8, 2006
Editor's note about the author: Alla Anohina is an assistant at the Department of Systems Theory and Design of Riga Technical University in Latvia. She received her M.Sc.ing. in 2002 from Riga Technical University and received the Werner von Siemens Excellence Award, the Award of the Latvian Fund of Education, and the Award and memorial medal of the Latvian Academy of Sciences, Lattelecom Ltd. and the Latvian Fund of Education for the best master's thesis in the year 2002. Her main research fields are intelligent tutoring systems, computer-assisted assessment systems and artificial intelligence. She is now finishing her Ph.D. thesis, whose main topic is the development of an intelligent support system for adaptive learning and knowledge assessment. She has five years' experience of teaching in the field of computer science, both at Riga Technical University and at other educational institutions in Latvia. She has participated in several research projects related to the development of knowledge assessment software.
International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 56-65
Advancing Electronic Assessment
Nikolaos Doukas, Antonios Andreatos
Abstract: A computer-aided assessment system is presented that has been designed to produce and deliver tests to the students of the Hellenic Air Force Academy and to assess their performance.
The system is called e-Xaminer and is intended for use in both the undergraduate courses and the distance-learning post-graduate programs of the Academy. The e-Xaminer uses meta-language concepts to automatically generate tests based on parametrically designed questions. Tests intended for different students may differ in their arithmetic parameters. Additionally, different tests may be composed from different but equivalent, randomly chosen sub-questions. The system may also present each student with a scrambled sequence of the same questions, as a counter-measure against cheating. Examinations are delivered via a web-based interface; an automatically generated program marks the answers submitted by each student. The e-Xaminer allows the implementation of question parameterisation and counter-cheating measures, so that electronic tests become significantly different from, and more powerful than, traditional ones. Sample problems are presented which show the additional features of the e-Xaminer, intended to facilitate the work of the course organiser in issuing and marking the tests, as well as in combating cheating. This paper focuses on some new, advanced types of questions enabled by electronic assessment; it then compares paper-and-pencil exams to electronic exams; results from a small student poll on the electronic exams are also presented. Finally, the directions for planned future work are outlined.
Keywords: e-Xaminer, computer-assisted assessment, domain specific languages, HAFA.
1 Introduction
1.1 The revolution of e-learning
E-learning may be defined as the technology and the academic services related to the teaching and learning processes [1]. It is a broad term covering the whole range of earlier educational applications, such as Computer Based Training (CBT) and Web Based Training (WBT), as well as more recent technologies such as LMSs, virtual classrooms or labs, and digital collaboration [2]. Historically, the rise of e-learning may be located in the decade 1990-2000, a period characterised by a boom in Information & Communication Technologies (ICT) and by the invention and evolution of the Web. For better exploitation of the above technologies, standards which allow the interoperability of various platforms and the re-use of educational material are being developed. The first such standards appeared in 2001. This evolution, along with other factors, not only led to the foundation of many Open Universities around the world [3], but also pushed many traditional universities to offer distance-learning courses [1]. It is estimated that, as far as continuing education in higher education institutions is concerned, distance learning will grow ten times faster than on-campus learning over the next ten years [4]. According to Burns, “Up to 45 percent of colleges and university enrolment is from adult learners, many of which sign-up for distance learning classes rather than on-campus classes. Revenues for continuing education rose 67% at responding institutions since the previous survey in 2004. The trend is expected to grow distance learning 10 times faster than campus classes over the next decade. The growth in distance learning is driven by the growth of interactive marketing” [4]. The Web has been considered a means of education and knowledge since its early days. Today it is widely used for educational purposes due to its worldwide spread and penetration, and because it supports many ways of representing information [5].
The recent proliferation of distance learning (delivered by Open Universities worldwide, as well as by traditional institutions) has encouraged many higher-education organisations to develop software for supporting this type of learning. One well-known such package is Claroline [6], which is freeware and supports 32 languages. With the expansion of e-learning, the need for electronic examinations has become more imperative. A significant number of Computer Aided Assessment (CAA) systems have appeared as a result of the work of both academic institutions and commercial companies. Examples of such systems are given in [6], [7], [8], [9], [10], [11] and [12]. These systems have already been extensively tested and are being widely used.
1.2 The merits of electronic testing
In ref. [13], the public-private coalition known as the ‘Partnership for 21st Century Skills’ gives a vision of how students should be prepared to face the challenges of the 21st century. Within this report, the benefits of using technology to give immediate feedback on student assessments are underlined. Electronic testing has been accused of putting non-technology students at a disadvantage when they are forced to use a keyboard to type their answers rather than writing them on paper [14]. In response, public and private sector experts state in [13] that 21st century literacy is much more than the basic computer skills required for typing an answer. It is pointed out that the new tools for learning, including computers, telecommunications and audio- or video-based media, are critical enablers of learning in a number of realms, even for subjects that have nothing to do with technology. They hence conclude that there is a need for assessment tools that measure those essential skills that will never be captured by traditional tests. While the value of traditional testing (like portfolio assessment) as a means of classroom-level assessment of individual students is not questioned, electronic tests provide rapid feedback on students' performance that can be compared across classrooms or schools [13]. Furthermore, computer-based training can enable novel concepts that instructors would never have contemplated delivering a few years ago [15]. The authors present a training platform which automatically designs courses that not only adapt to the students' capabilities and previous knowledge, but also dynamically adjust the contents according to the students' performance during the progress of the course. The statistical results presented show that computer-delivered courses are highly effective in promoting learning.
An automated computer-assisted assessment (CAA) system has already been presented [12], [16], which has been designed for the needs of the Hellenic Air Force Academy (HAFA). This system is called e-Xaminer, a title which emphasises the fact that the system extends e-Learning into the field of examinations. The principal reasons that dictated the development of a new system from scratch in order to cover the needs of HAFA were the following: (1) the limited number of question types supported by the existing systems; (2) the intention of HAFA to experiment with cheating countermeasures so that the final system could eventually be used in its distance learning program [17];
(3) the fact that most existing systems do not support the Greek language, while teaching at HAFA is done exclusively in Greek; this creates some special requirements that needed to be met.
In ref. [16] we presented the nine different problem types and the partial-credit possibilities supported by the e-Xaminer so far, and briefly discussed the main advantages of CAA. In ref. [12] we presented the DSL approach adopted by the e-Xaminer, cheating countermeasures and some experimental results obtained from the pilot application of the e-Xaminer.
1.3 Making the difference
Multiple-choice tests suffer from the drawback that examinees may acquire free marks by taking lucky guesses [18]; a way of exploiting electronic exams in order to overcome this problem is proposed there. Given the evolution of web technologies since then, the need to make electronic exams significantly different is currently even more pronounced. Another challenge for the design of electronic examination systems is to make exams more pleasant for students. Students are naturally reluctant to sit exams, since exams are a source of stress for them. By contrast, they are keen to spend long periods of time in front of the computer screen, surfing the Internet. The question is hence posed: if exams are delivered using a friendly web interface, can they become a less stressful experience?
This paper focuses on the implementation in the e-Xaminer of test parameterisation and counter-cheating measures that distinguish e-Xams from paper-and-pencil exams and make them a better tool for promoting learning; it also lists some empirical results and presents a comparison between conventional paper-and-pencil exams and electronic exams given by means of the e-Xaminer (and therefore referred to as ‘e-Xams’). The way in which questions are parameterised is presented first. Examples of parametric questions are shown. Counter-measures against exam-time cheating are listed and the way in which these have been implemented in the new system is described. The new system has been used for mid-term tests of students of all four years of the Academy, for all specialisations. Statistical results from these tests are given. Finally, conclusions are drawn and the directions for the planned future work are presented.
2 Design of the system
2.1 System Architecture
The architecture of the e-Xaminer is depicted in Figure 1. The system operates as follows. The instructor sets examination questions and model answers using a Domain Specific Language (DSL) [19]. These files are given as input to the core of the e-Xaminer, which produces two programs: the examination agent and the marking agent. The examination agent is installed on an appropriate HTTP server and is the only part of the system that is ever exposed to a public terminal. The examinees open the ‘e-Xam’ pages and answer the test questions by filling in forms, using their favorite web browser. Student answers are stored and finally passed as input to the marking agent, which runs on the instructor's terminal (e.g. a laptop computer only temporarily connected to the student network). A marking report and statistics are produced.
Figure 1: Architecture of the e-Xaminer
2.2 The examination procedure
Figure 2 depicts the examination procedure. The correspondence to the system architecture described earlier is straightforward.
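As a purely illustrative sketch of the two outputs produced from one question specification (the actual core emits an HTML/Perlscript examination page and a Perl marking script; the Python below, and every name in it, is an assumption made only to show the idea):

```python
def build_agents(question_html: str, model_answer: str, marks: float):
    """From a single question spec, derive an 'examination agent' fragment
    (an HTML form snippet shown to the student) and a 'marking agent'
    (a function that scores a submitted answer)."""
    form = (f"<p>{question_html}</p>\n"
            '<input type="text" name="answer">')

    def mark(submitted: str) -> float:
        # Simplest possible scheme: full marks for an exact match, zero otherwise.
        return marks if submitted.strip() == model_answer else 0.0

    return form, mark

form_snippet, mark = build_agents(
    "What is the maximum value representable in an 8-bit unsigned binary number?",
    model_answer="255", marks=5.0)
print(mark("255"), mark("256"))   # 5.0 0.0
```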
It is interesting to note, however, that it is possible for the instructor to reiterate the process with a revised marking scheme. A marking (grading) policy is implicitly determined by the instructor and is included in his/her marking scheme. The marking scheme may include alternative answers that carry different marks. For example, the correct answer to the question “What is the maximum value that can be represented in an N-bit unsigned binary number?” is 2^N - 1. The instructor may, however, decide to grant students partial credit for the (wrong) answer 2^N. This iterative procedure, which would have been more tedious for a paper-and-pencil test, represents a significant advantage of e-assessment.
As already pointed out, the main drawback of existing systems, as far as their use by HAFA is concerned, was the fact that they supported only a limited number of question types; furthermore, these questions were often fixed (the same for all students). It was required that the new system support parameterisation. Additionally, the whole process needed to be automated. By using the e-Xaminer, the entire class is examined simultaneously, but each student sees a different worksheet [12].
Figure 2: Flowchart of the examination procedure
2.3 Question parameterisation
Apart from the innovative types of questions, a key feature of the e-Xaminer is question parameterisation. It was required that the system be able to receive as input from the course instructor a skeleton question and produce a series of different questions from it. The process of defining and designing such skeleton questions is the question parameterisation incorporated in the e-Xaminer [12]. In order to implement this parameterisation, our system employs the concept of Domain Specific Languages (DSLs) [19]. A skeleton question can hence be defined by its textual parts, a series of parameters that it requires, and a rule which is used for assembling the final question. An additional parameterisation for the entire test is question-order scrambling, i.e., the randomisation of the order in which the questions in a particular student's test are arranged. This randomisation may be extended to the arrangement of the sub-questions of a particular question. Two examples of parameterised questions which were used during the past academic year (2005-6) are given below.
The first question was designed to examine a chapter on binary arithmetic. Each student was given six different random numbers. Each student was then required to perform a number of operations on the numbers assigned to them. These operations were selected at random for each student from a set of predefined ones (binary-to-decimal conversion, decimal-to-binary conversion, addition, subtraction and negation). The setup of this question eliminated any chance that a student might have copied from the colleague sitting next to them.
The second question was designed as part of a course on Computer Networks [20]. The skeleton of the question was the following: “A company owns a set of class B IP addresses. The company has X LANs with an increase rate of Y% during the next 10 years. Each LAN connects W computers, with an increase rate of Z% during the next 10 years. Which is the best way of sharing the class B host bits between subnets and computers?”. All students sitting this examination had to answer this question; however, each student had a different set of values for the X, Y, Z and W parameters (a sketch of this kind of instantiation is given below).
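The e-Xaminer's own DSL syntax is not reproduced in this paper, so the following Python sketch only illustrates the idea of instantiating the skeleton above with per-student values; the field names, the value ranges, the reading of the growth rates as totals over the ten years, and the derived model answer are all assumptions made for the example.

```python
import math
import random

# Skeleton text taken from the example above; {X}, {Y}, {Z}, {W} are the parameters.
SKELETON = ("A company owns a set of class B IP addresses. The company has {X} LANs "
            "with an increase rate of {Y}% during the next 10 years. Each LAN connects "
            "{W} computers, with an increase rate of {Z}% during the next 10 years. "
            "Which is the best way of sharing the class B host bits between subnets "
            "and computers?")

def instantiate(student_id: str):
    """Build one student's question text and a plausible model answer."""
    rng = random.Random(student_id)                    # reproducible per student
    X, W = rng.randint(20, 200), rng.randint(50, 400)
    Y, Z = rng.choice([20, 50, 100]), rng.choice([20, 50, 100])
    question = SKELETON.format(X=X, Y=Y, Z=Z, W=W)

    # One possible marking rule: size the subnet field for the projected number of
    # LANs and leave the rest of the 16-bit class B host part for the computers.
    future_lans = math.ceil(X * (1 + Y / 100))
    subnet_bits = math.ceil(math.log2(future_lans))
    answer = f"{subnet_bits} bits for subnets, {16 - subnet_bits} bits for hosts"
    return question, answer

question, answer = instantiate("cadet-042")
print(question)
print("Model answer:", answer)
```

A marking scheme with alternative answers, such as full marks for 2^N - 1 and partial credit for 2^N in the earlier example, could similarly be expressed as a small table mapping each accepted answer to the marks it carries.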
These values were automatically assigned by the e-Xaminer.
Tests delivered during the pilot application phase were just automated versions of what a paper-and-pencil test would have looked like. In order to better exploit the potential of the CAA web interface, the decision was made to include various innovative features in the tests. The first such feature was the electronic processing of an (electronic) document, linked to the e-Xam by a simple hyperlink. The question was as follows [20]: “Given the specification of HTTP 1.1 [RFC 2616] in electronic format, answer the following questions:
1. What is the signalling mechanism used to terminate a persistent connection?
2. Who may terminate such a connection? (a) the client, (b) the server, (c) both of them
3. What does an error code 415 signify?
4. What encryption services are provided by HTTP?”
RFC 2616 is a 176-page document describing the HTTP standard. Students are obviously not asked to memorise this text, and it is impractical to copy the entire document for each student for the purpose of the exam. They are, however, expected to be familiar with such documents and able to extract information from them using various text/word-processing tools (such as ‘Search’ or ‘Find’). Testing this skill is only possible via electronic exams. These questions may further be combined with parameterisation, in order to produce a different error code for each student. The next planned test requires the students to use some special services available on the Internet, get the answer in their browser and copy it into their ‘e-Xam’. Telecommunication engineering students at HAFA are required to be able to write, assemble and run machine-language programs. In another type of question, therefore, examinees were required to assemble their program and submit the assembler listing file. This is another example showing how e-assessment can test skills that would have been impossible to examine with paper-and-pencil exams.
2.4 Facilitating the process of in-classroom examinations
An additional requirement for the system was that it should shield both instructors and students from having to comprehend any complicated software engineering concepts. As far as students are concerned, the target was met by virtue of the fact that a plain web interface was used for delivering the tests. Students were pleasantly surprised to notice that each one of them had a different set of numbers. This made them adopt an attitude such as “I will mind my own business and finish my questions in time”, rather than the more traditional “I will try to see if one of my colleagues sitting next to me can help me with the exam questions”. Hence learning was promoted and student and instructor psychology was boosted. In order to maximise the benefits for course instructors as well, the concept of Domain Specific Languages was employed. Instructors compose the skeleton examination questions and their model (correct) solutions in HTML format (Figures 1, 2). They are not required to be familiar with (low-level) HTML coding [12]. They can use their favourite HTML editor to complete the task, as current editors have a simple user interface (similar to that of a word processor such as Microsoft Word). The HTML files are processed by the e-Xaminer core and two sets of programs are produced: the examination agent and the marking agent (Figure 1). The examination agent is an active page combining HTML and Perlscript.
The marking agent is a Perl script which processes the answers submitted by students. The outputs are the marked tests and statistics of the class performance.
3 Contribution to the virtual classroom
The fact that questions are parameterised has several positive implications for the virtual classroom. Firstly, tests can become personalised (a different set of questions for each student) without compromising test fairness. Additionally, tests can be given more often than usual (the customary midterm exam may be replaced by weekly exams), since the effort required by the instructor is minimal. Both of the above are recommended practices against examination-time cheating and in favour of promoting learning [21], [22]. It has additionally been observed that the web interface makes the idea of frequent exams more agreeable than usual for students. Marking papers is a fairer process, since, even at early stages, the system has caught grading errors made by humans.
As mentioned earlier, electronic tests in particular have been accused of putting some students at a disadvantage [14]. The e-Xaminer experience has shown that e-test marks correlate strongly with both the oral mark awarded by the instructor and the mark attained in the paper-and-pencil semester final exam. Furthermore, HAFA, being a technically oriented institution, demands that its students (who, as Officers of the Hellenic Air Force, are going to handle electronic equipment worth millions of euros) be familiar and at ease with technology. Accusations have been extended to state that “test-takers are unable to underline text, scratch out eliminated choices and work out math problems - all commonly-used strategies” [14]. Current web technology allows not only the underlining of on-screen text but also much more elaborate formatting, by incorporating a simple HTML editor (a practice used by many web-based e-mail services). As far as scratch work is concerned, we have provided special scratch-pad areas next to each question (implemented as HTML textareas). We have also encouraged our students to get additional help by using electronic dictionaries and calculators, as well as their old paper notebooks sitting next to their keyboards.
4 Pilot application and statistics
The e-Xaminer has been used for testing in the digital electronics, computer science, microprocessors and computer networks courses taught at HAFA since the beginning of the past academic year (2005-6). The CAA tests were addressed to students of all disciplines taught at HAFA (pilots, aircraft, telecommunications and civil engineers, as well as air-traffic controllers). During the first six months all student answers were marked by both the e-Xaminer and the instructor. During this first period, automatically assigned marks deviated by up to 10% from the instructor's marks. This deviation declined steadily as the system evolved and staff acquired experience in assigning better model answers. Currently, only a limited number of tests, selected at random, are being manually graded, and the deviation between the two sets of marks is less than 1%. The grading resolution (that is, the minimum grade associated with questions) was set to 0.5% during recent e-Xams. After the initial 6-month deployment period, the e-Xaminer has been systematically catching human marking errors in up to 10% of student papers.
Instructors have very often exploited the fact that the system is automatic and have revised their model answers (along with their marking scheme) so as to better assess class performance.
4.1 Comparing paper exams to electronic exams
In this subsection, electronic exams delivered by the e-Xaminer will be compared to traditional paper-and-pencil exams. Rather than answering a question of the form ‘which one is better’, our experience from the use of the e-Xaminer during the current (2006-7) and the past academic year (2005-6) will be presented. Each approach has its own merits that need to be considered by course instructors before making a choice. The aim of this section is to support such a choice.
The advantages of paper-and-pencil exams include:
• They can easily be adapted to any subject.
• They may be used by digitally illiterate students.
• No technological infrastructure is necessary.
• No computer-related background skills are necessary.
Their major disadvantages are:
• The grading process is usually tedious and time-consuming.
• It is difficult to embed multimedia in them (other than figures).
On the other hand, the advantages of electronic assessment are:
• Tests may be generated automatically from a pool of questions and problems.
• Grading is easy and automatic, and hence tests may be delivered more frequently.
• It is easy to embed multimedia (such as designs, circuits, video and sound clips for courses such as signal processing, electronics and telecoms).
• Students with bad, illegible handwriting can produce a well-formed document.
• Results and statistics are generated automatically and immediately, and students obtain rapid feedback on their performance.
• Counter-cheating methods may be employed.
• Supplementary programs may be used (such as dictionaries, assemblers, compilers, MATLAB etc.).
• Exams may be easily stored and retrieved, and results may be further processed with other computer programs such as Excel and SPSS.
• Finally, it should be noted that e-Xams are environmentally friendly, in that little paper need ever be used.
The disadvantages of electronic assessment are [16]:
• It might be difficult to adapt it to some types of questions (such as mathematical formulae, where special equation editors are needed for typing the formulae and complex software for interpreting them).
• It might be difficult to adapt it to some subjects.
• It requires additional equipment such as PCs, a LAN, software etc. and is hence more complicated to administer, as well as vulnerable to power or system failures.
An instructor must take all the above factors into account before making a decision.
4.2 Student opinions
It is important to consider the opinions of users in evaluating any e-Learning system [22], [23]. The authors of ref. [22] define such an evaluation procedure based on hypothesis testing. The hypotheses that needed to be tested in the case of the e-Xaminer were: (1) “e-Xams are perceived as equally difficult to the corresponding paper ones”; (2) “e-Xams promote learning by making tests a more agreeable experience for students”; and (3) “e-Xams promote learning by helping students accept their test marks as a fair assessment of their performance”. In order to investigate these hypotheses, three extra questions were added at the end of the most recent tests. The questions were:
1. Do you consider the electronic test to be more difficult than what you would expect from a paper test?
2.
Do you believe that the automatic assessment you will get for this test will be fairer than the one you would have got if the test had been marked by your instructor? 3. Do you prefer this test to a traditional paper one? HAFA students answered by an overwhelming majority (over 90%) that the electronic test was equally difficult and preferable to the traditional test, while they expected their automatically assigned marks to better reflect their performance. Answers to these questions are still being collected as more classes sit electronic exams, and will be presented in future publications once the required statistical significance is attained.

5 Summary and conclusions

The implementation of the parametric questions and counter-cheating measures of the 'e-Xaminer', our CAA system, was outlined. This implementation was shown to offer advantages to university course organisers, in that it facilitates their work and eliminates much of the tediousness involved in the grading process. The new system was also shown to promote learning by the students, by making exam taking a frequent, fair and agreeable procedure. Some contributions were also made to theoretical issues concerning the electronic classroom and its acceptance in general. Additionally, the e-Xaminer was shown to offer a number of advantages in the fight against examination-time cheating. Statistics from the pilot application were given, where it was shown that the e-Xaminer is capable of effectively marking student answers. Innovative types of exam questions were used that could not have been set without the existence of an e-assessment platform. Initial statistical results were presented on the acceptance of the system by students. These results are part of a study on the usefulness of this system in promoting learning. During the next academic year we plan to further develop the system in order to:
• Include some additional types of questions/problems.
• Support multimedia in both questions and answers. This is a clear advantage of CAA over paper-and-pencil exams.
• Include a timer and enforce automatic submission upon time-out.
• Strengthen security and fight cheating attempts by adding an authentication module that will monitor users' logins and logouts [12].

References

[1] The Univ. of Iowa in cooperation with HBO Systems Inc., E-Learning Assessment Summary Report, available online at: www.hbosystems.com, 2004, retrieved on March 31, 2006.
[2] F. Pantano-Rokou, Educational design for e-learning: models, meaning and impact on learning (in Greek), Open Education, vol. 1, pp. 45-68, 2005.
[3] D. Vergidis, A. Lionarakis, A. Lykourgiotis, B. Makrakis and Ch. Matralis, Open and distance learning, vol. 1, Institution and Operation, Hellenic Open University, Patra (in Greek; title of book translated by authors), 1998.
[4] E. Burns, Continuing Education drives Distance-Learning enrollment, available online at: www.clickz.com/stats/sectors/education/article.php/3605321, retrieved on May 25, 2006.
[5] Ch. Fidas, Ch. Tranoris, V. Kapsalis and N. Avouris, System design for synchronous support and monitoring in web-based educational systems (in Greek), Proceedings of the 3rd International Conference on Open and Distance Learning, Propombos, Athens, vol. A, pp. 577-585, 2005.
[6] Claroline documentation, http://www.claroline.net/documentation.htm, retrieved on June 21, 2006.
[7] Blackboard documentation, http://library.blackboard.com/docs/as/bb_academic_suite_brochure_single.pdf, retrieved on June 21, 2006.
[8] Univ.
of Loughborough CAA site, http://www.lboro.ac.uk/service/pd/caa/index.htm, retrieved on June 21, 2006.
[9] Quia site, http://www.quia.com/company/quia-presentation.pdf, retrieved on June 21, 2006.
[10] Test Assessments project, http://www.scribestudio.com/home/inAction/Flash/ss_in_action.jsp?cm=TestsAssessments_Project, retrieved on June 21, 2006.
[11] Web CT site, http://www.webct.com, retrieved on July 10, 2006.
[12] N. T. Doukas and A. S. Andreatos, Implementation of a Computer Aided Assessment System Based on the Domain Specific Language Approach, WSEAS Transactions on Advances in Engineering Education, vol. 3(5), pp. 382-388, 2006.
[13] J. Salpeter, 21st Century Skills: Will Our Students Be Prepared?, http://www.techlearning.com, Oct. 2003, retrieved on March 22, 2006.
[14] Fairtest site, http://www.fairtest.org/facts/computer.htm, retrieved on June 23, 2006.
[15] A. D. Styliadis, I. D. Karamitsos and D. I. Zachariou, Personalized e-Learning Implementation - The GIS Case, International Journal of Computers, Communications and Control, vol. I, no. 1, pp. 59-67, 2006.
[16] A. Andreatos and N. Doukas, e-Xaminer: Electronic Examination System, Proceedings of the 3rd WSEAS / IASME International Conference on Engineering Education, Vouliagmeni, Athens, Greece, July 2006.
[17] A. Andreatos, Distance e-learning for the Hellenic Air Force, Proceedings of EDEN'03, Rhodes, Greece, pp. 428-433, 2003.
[18] M. Bush, Alternative marking schemes for on-line multiple choice tests, 7th Annual Conference on the Teaching of Computing, Belfast, CTI Computing, 1999.
[19] D. Spinellis, Notable design patterns for domain specific languages, Journal of Systems and Software, 56 (1), pp. 91-99, Feb. 2001.
[20] J. F. Kurose and K. W. Ross, Computer Networking - a top-down approach featuring the Internet, 3rd ed., Addison-Wesley, 2005.
[21] A. Angeletou et al., Assessment techniques for e-learning process, Proceedings of the 3rd International Conference on Open & Distance Learning, vol. B, pp. 47-54, Patra, Greece, 2005.
[22] D. Spinellis, P. Zaharias and A. Vrechopoulos, Coping with plagiarism and grading load: Randomized programming assignments and reflective grading, Computer Applications in Engineering Education, to appear in 2007.
[23] R. Guidorzi and M. L. Giovannini, E-learning tools in higher education: users' opinions, Proceedings of EDEN'03, Rhodes, Greece, pp. 201-206, 2003.

Nikolaos Doukas, Antonios Andreatos
Dept. of Aeronautical Sciences
Div. of Computer Engineering and Informatics
Hellenic Air Force Academy
Dekeleia, Attica, TGA-1010 GREECE
E-mail: [email protected], [email protected]
Received: November 18, 2006

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 66-73

Development of An Algorithm for Groupware Modeling for A Collaborative Learning

Ikuo Kitagaki, Atsushi Hikita, Makoto Takeya, Yasuhiro Fujihara

Abstract: This paper reports an algorithm for forming groups of students within a computer system for collaborative learning designed to give a cue for debate using mobile terminals. With this system, questionnaires to be used as the seeds for debates are prepared in advance on the Web, and all students attending the class answer the questionnaires on the Web through their mobile terminals. Following this, the computer assigns students to appropriate groups based on their answers, and transmits each student's answers and the group-membership information to the students' terminals.
Based on this information, students form groups and each group starts its debate. In this study, the system composition is dealt with first, and then the algorithm for forming groups from the students' answers is discussed.

Keywords: Group forming, algorithm, collaborative learning, mobile terminal, debate system, groupware

1 Introduction

This study relates to university classes in Japan. In many cases, lectures are given in a one-sided manner, and it has been pointed out that some schemes are necessary to encourage students to express their opinions or to induce discussions among the students themselves [5][8]. Considering the recent trend that almost all students carry their own mobile terminals [6][7], this study concerns a learning system that utilizes mobile terminals as an ancillary tool for debate. Students are divided into groups making use of their answers to questionnaires presented beforehand. On this occasion, the information necessary for grouping and the results of the grouping are transmitted, together with the contents of each answer, to the students' mobile terminals. Students are then requested to form groups based on this information and to initiate debates in each group. In other words, this system is a sort of blended learning in which face-to-face debate is carried out with the support of electronic educational information communication. In this study, the algorithm for grouping based on the students' answers is dealt with. With collaborative learning of this kind, the two points below can be expected.

1. Groups are formed with educational effect in mind. Therefore, every student can know why he/she is assigned to the current group.
2. Students are instructed to start the debate using the information on their answers as the seeds. Therefore, they can easily grasp a cue for the debate.

According to the conventional method of grouping students, groups are formed mechanically in the order of student ID numbers, or those seated nearby are assigned to the same group. In this study, by contrast, it is attempted to form groups appropriately based on the answers to some sort of assignment, including tests. Teaching in groups formed on the basis of the students' answers was attempted in prior studies [2][3]. At the time of those studies, since possession of a mobile terminal by every student was not practical, test results were collected and marked by the teacher and groups were formed based on the results. Test papers were returned to the students, groups were formed, and the students in each group were requested to discuss primarily the incorrect answers. For this reason, it took an enormous amount of time from the students answering the questionnaires to the group discussion. This time, making full use of the information technology available today, we configured a new collaborative learning system. Thanks to the assistance of mobile terminals, it is expected that the steps from the students' answering to the execution of the discussions can be carried out promptly and smoothly. This system is currently under development. In Section 2, an outline of the system composition is described, and an algorithm for forming groups is introduced from Section 3 onward. Although the algorithm is only one portion of the system, completing it is an important step towards putting the system into practical use, which may promote practical e-learning in universities [1].
2 System Flow

Four files in total are prepared and used in this system: a student file, a questionnaire file, an aggregation file and a groupware file. These files are used in this order.

1. Student file. Basic information such as the mail address, age, gender and the like of the students is summarized in this file.
2. Questionnaire file. Questionnaires and the like concerning the discussion theme are summarized in this file. The questionnaire can consist of any of three types: a multiple-choice method that allows only one selection, a multiple-choice method that allows a plurality of selections, and free composition. (Only the multiple-choice methods are subjected to the processing of item 4 onward.)
3. Aggregation file. The students' answers to the questionnaires in 2. are aggregated in this file.
4. Groupware file. The computer forms student groups appropriately based on the results of the students' answers in 3. In addition to the students' answers, this file also includes information on which students form one group.

The execution procedures for preparing and utilizing the above-mentioned files are shown in Fig. 1. (1) A student accesses the predetermined URL, enters basic information such as gender, ID number and the like and transmits it. His/her file (Excel) is then completed, while the teacher may input or delete information by operating Excel directly. The basic information of a certain class is stored here and this file can be utilized at every round-table discussion held by the class. (2) With regard to a discussion planned for a certain teaching session, a group of questionnaires which may benefit grouping is prepared and registered to the system as a questionnaire file. (3) Several groups of questionnaires are registered in the questionnaire file. The teacher picks up specific groups of questionnaires and transmits the URL for browsing the questionnaire items to the mail addresses of all students. (4) Students answer the questionnaires and transmit them to create an aggregation file. (5) Based on the information in the aggregation file, students are assigned appropriately to a certain group. (6) Information on the grouping is transmitted to the students' mobile terminals. Upon completion of the above-mentioned procedures, the students are grouped in the class and start discussions using the results of their answers as the cue.

Figure 1: System flow

3 Construction of student groups

Suppose that there are four students A, B, C and D, and three questions a, b and c are given. The results of evaluation of the answers to these questions are shown by ○ (correct) and × (incorrect) in Table 1. One group consists of two students and discussion is made within the group primarily for questions which resulted in incorrect answers. In this case, the two types of grouping shown in the table are compared.
With grouping p, students A and B and students C and D constitute one group, respectively. However, in both groups the members present the same correct/incorrect pattern. Namely, in group 1 both members gave an incorrect answer for question b, so the possibility that they can teach each other towards problem solving is low. The same also applies to group 2. By contrast, with grouping q, students A and C and students B and D constitute one group, respectively, and the said problem does not occur. From the above, it may be said that grouping q is superior to grouping p. When we focus on the problem solving of correct and incorrect test answers, the possibility of problem solving is considered to be higher when there are different answer patterns within a group than when there is only one pattern in the same group.

Table 1: Example of results of evaluation of answers and grouping
Student | question a | question b | question c | Grouping p (Group ID) | Grouping q (Group ID)
A       | ○          | ×          | ○          | 1                     | 1
B       | ○          | ×          | ○          | 1                     | 2
C       | ×          | ○          | ×          | 2                     | 1
D       | ×          | ○          | ×          | 2                     | 2

Suppose that the number of members in each group is the same. Then the excellence of grouping u_g in a certain group g (g ∈ G) is defined. Question i and the question set are expressed by m_i and M (m_i ∈ M), respectively, and a student in group g and the student set are expressed by s_i and S_g (s_i ∈ g), respectively. Further, the result e of the evaluation of the answer of student s_k to question m_i is expressed by e(m_i, s_k):
• Answer is correct → e(m_i, s_k) = 1
• Answer is incorrect → e(m_i, s_k) = 0

$$u_g = \frac{\sum_{s_k \in g} \sum_{m_i \in M} \bigl(1 - e(m_i, s_k)\bigr) \bigcup_{s_j \in g} e(m_i, s_j)}{|M|\,|S_g|} \qquad (1)$$

where |M| denotes the number of elements of the set M. The grouping excellence u of the entire class is expressed by Equation (2):

$$u = \frac{\sum_{g \in G} u_g}{|G|} \qquad (2)$$

Although two u_g's obtained by Equation (1) are comparable to each other only when both groups consist of the same number of students, the idea proposed in this paper can be extended to groups of different sizes in a classroom by Equation (3), in which Equations (1) and (2) are combined:

$$u = \frac{\sum_{g \in G} \sum_{s_k \in g} \sum_{m_i \in M} \bigl(1 - e(m_i, s_k)\bigr) \bigcup_{s_j \in g} e(m_i, s_j)}{|M|\,N} \qquad (3)$$

where N is the number of students in the classroom. In practice, several groups may differ in the number of group members because the total number of students in the classroom cannot always be divided evenly by an appropriate integer. Whether the number of students in a group is equal or not, it is necessary to calculate Equation (2) or (3) for all combinations of student groups in order to obtain the maximum. Although this is theoretically possible, it is difficult to obtain the strict solution because of the enormous number of calculations. In reality, if the number of group members is supposed to be 4, the computer can handle a class consisting of 50 people at most. A more simplified method should therefore be used. In other words, with the relevant algorithm, supposing that the students of one class are re-labelled s_1, ..., s_N, the following pairwise replacements are examined, which results in calculations of N(N-1) times in total, and the grouping for which u becomes the maximum is judged to be the optimum grouping:

$$s_1 \text{ and } s_2,\; s_1 \text{ and } s_3,\; \cdots,\; s_1 \text{ and } s_N;\quad s_2 \text{ and } s_3,\; s_2 \text{ and } s_4,\; \cdots,\; s_2 \text{ and } s_N;\; \cdots;\; s_{N-1} \text{ and } s_N \qquad (4)$$

It is already known that when one class consists of twenty-odd members and one group consists of four members, the excellence of the grouping obtained by the simplified algorithm used in the current proposal reaches approximately a 60-100% optimization rate, depending on the answer pattern, compared with the grouping obtained over all combinations without simplification [4].

4 Discussion in a classroom

Two kinds of discussion were administered using the proposed method. The first kind is for discussing test answers, where each answer has been evaluated as either correct or wrong. Digital mathematics served as the subject. Several test problems were given to the students. All the test problems had multiple-choice answers, so the students selected an answer among them. Because the evaluation had already been done, the instructor, after the groups were formed, told the students to start by stating their evaluated results and then to discuss the answers which had been evaluated as wrong. The second kind is for discussing opinions, where no evaluation of correct or wrong can be determined. A training programme for job hunting served as the subject. An example of a question asked in an interview and the answer given by the interviewee was presented, followed by several opinions about the answer; the students then selected the opinion they thought closest to their own impression. The instructor, after the groups were formed, told them to start by stating their own choice, to discuss a better answer and then to produce a report as joint group work. Through several administrations, the method was thought to give the students learning motivation because, every time the proposed discussions were held, the group members changed for a clear reason, which kept the activity fresh. In addition, we thought that if more detailed direction were given to the students on what points the discussion ought to address, they might find it easier to start.

5 Consideration

According to the algorithm for forming groups used in the current proposal, grouping is made so that the evaluation results of the students' test answers differ as much as possible within each group. For the establishment of a more generalized algorithm, broader investigations are necessary, covering what sorts of group discussion are actually available, what sorts of information are effective for forming groups, and what judgment criteria should be used to determine whether a grouping based on that information is good or bad. Although in the present study the evaluation results of the answers are digitized to either 0 or 1, the authors intend to investigate another algorithm capable of coping with more diversified evaluation results. At the same time, the utility of this group discussion system and the points to be improved will be checked from both the software and hardware aspects through practical use.

6 Acknowledgements

The present study has been promoted partly under grants from the Grant-in-Aid for Scientific Research: germination study No. 17650260, fundamental study (B) No. 18300288 and germination study No. 17300275, each sponsored by the Ministry of Education, Culture, Sports, Science and Technology. We are sincerely thankful to all the persons and bodies who helped our work.

References
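As a concrete illustration of Equations (1)-(2) and of the pairwise exchange search sketched in (4), the following is a minimal Python sketch; it reflects our own reading of the procedure rather than the authors' implementation, and the data layout (a 0/1 answer matrix per student) is an assumption.

from itertools import combinations

def group_excellence(group, answers):
    """u_g from Eq. (1): answers[s][m] is 1 (correct) or 0 (incorrect)."""
    questions = range(len(next(iter(answers.values()))))
    score = 0
    for s_k in group:
        for m in questions:
            someone_correct = max(answers[s_j][m] for s_j in group)  # the union of e(m, s_j) over the group
            score += (1 - answers[s_k][m]) * someone_correct
    return score / (len(questions) * len(group))

def class_excellence(grouping, answers):
    """u from Eq. (2): mean of u_g over all groups."""
    return sum(group_excellence(g, answers) for g in grouping) / len(grouping)

def improve_by_swaps(grouping, answers):
    """Greedy pairwise exchange over all student pairs, as in (4)."""
    best = [list(g) for g in grouping]
    slots = [(gi, si) for gi, g in enumerate(best) for si in range(len(g))]
    for (g1, i1), (g2, i2) in combinations(slots, 2):
        if g1 == g2:
            continue  # exchanging within the same group changes nothing
        trial = [list(g) for g in best]
        trial[g1][i1], trial[g2][i2] = trial[g2][i2], trial[g1][i1]
        if class_excellence(trial, answers) > class_excellence(best, answers):
            best = trial
    return best

Under this reading, each candidate exchange is kept only if it raises the class-level excellence u, trading strict optimality for a number of evaluations that grows only quadratically with the class size.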
[1] Styliadis D. A., Karamitsos D. I. and Zachariou I. D., Personalized e-learning implementation - The GIS case, International Journal of Computers, Communications & Control, Vol. 1, No. 1, pp. 59-66, 2006.
[2] Kitagaki I., Shimizu Y. and Suetake K., An instructional method which permits the students to critically discuss their own test answers, Jour. of Educ. Technol., Vol. 5, No. 1, pp. 23-33, 1980.
[3] Kitagaki I., Shimizu Y., Consideration on an educational system which permits the students to discuss over their own test answers, Res. Sci. Educ., Vol. 5, No. 1, pp. 22-28, 1981.
[4] Kitagaki I., A Consideration on an Educational Groupware Algorithm Using Fuzzy Integral, IEICE Tech. Rep., Vol. ET94-101, pp. 61-66, 1994.
[5] Nagai M., Kitazawa T., Koshikawa H., Kato H., Akahori K., Development and Verification of the Formative Evaluation System with Utilizing Mobile Phone for Web-Based Collaborative Learning, Japan Journal of Educational Technology, Vol. 28, No. 4, pp. 333-342, 2005.
[6] Nagamori M., Ueno M., Ando M., Pokpong S., Endo K., Nagaoka K., Response Analyzer System Using Mobile Phones for Distance Education, Japan Journal of Educational Technology, Vol. 29, Suppl., pp. 57-60, 2005.
[7] Otuska K., Yahiro T., The Input Interface of the Value of Evaluation in an Evaluation System of Instruction Using Mobile Phones, Japan Journal of Educational Technology, Vol. 30, No. 2, pp. 125-134, 2006.
[8] Ozawa S., Mochizuki T., Egi H., Kunifuji S., Facilitating Reflection in Collaborative Learning Using Formative Peer Evaluation Among Groups, Japan Journal of Educational Technology, Vol. 28, No. 4, pp. 281-294, 2005.

Ikuo Kitagaki
Hiroshima University, Japan
Graduate School of Education / The Research Institute for Higher Education
1-2-2 Kagamiyama, Higashi-hiroshima, 739-8512, Japan
E-mail: [email protected]

Atsushi Hikita
Hiroshima University, Japan
Community Cooperation Center
E-mail: [email protected]

Makoto Takeya
Takushoku University, Japan
Faculty of Engineering, Department of Computer Science
815-1 Tate-machi, Hachioji, Tokyo, 193-0985 Japan
E-mail: [email protected]

Yasuhiro Fujihara
Iwate Prefectural University
Faculty of Software and Information Science
[email protected]

Received: November 7, 2006

Editor's note about the authors:

Ikuo Kitagaki, born in Aichi on August 9, 1947, received his BE and ME degrees in electronics in 1970 and 1972, and his Doctor of Engineering degree in 1981, from Tokyo Institute of Technology. Joining Tokyo Institute of Technology in 1973 and the Employment Promotion Corporation in 1986, he was engaged in various research areas including the development of computer applications in education, fuzzy science, the science of laughter, and so on. Since 2000 he has been with Hiroshima University, Hiroshima, Japan. Presently he is Professor at the Graduate School of Education / The Research Institute for Higher Education, Hiroshima University. One of his most important recent works is "University Authority (in Japanese)". His career is also listed in "Who's Who in Science and Engineering", MARQUIS 9th ed. (2006-2007), USA.

Atsushi Hikita (born on January 30, 1968) graduated from the Faculty of Physics of Sophia University, Japan, in 1991, and received his master's degree from the same faculty in 1993. He worked at the Mitsubishi Research Institute as a media and information planner and researcher (1993-2000).
His main research fields are information design (information and its places, especially maps and pictograms as nonverbal communication) and media design (media communication for society, museums, libraries and classrooms). He has co-authored and co-edited several books and papers in these fields. He is a chair of the APNG (Asia Pacific Networking Group) Education and Live E! WG, and a committee member of the Chugoku-Shikoku Internet council and the Hiroshima region IPv6 deployment committee.

Makoto Takeya, born in Tokyo on November 2, 1941, received his BE and ME degrees in applied physics in 1966 and 1968, and his Doctor of Science degree in 1981, from Waseda University. Joining NEC Corporation in 1968, he was engaged in various research areas including the development of computer applications in education. Since 1986 he has been with Takushoku University, Tokyo, Japan. During 1992-1993 he was a visiting scholar in the Department of Educational Psychology, University of Illinois. Presently he is Professor in the Department of Computer Science, Takushoku University. Among his most important books are "A New Test Theory: Structure Analysis Methods for Educational Information" (in Japanese) and "Structure Analysis Methods for Instruction: Theory and Practice of Instructional Architecture, Design and Evaluation". He is the recipient of the 1976 Yonezawa Memorial Award from the Institute of Electronics and Communications Engineers of Japan, the 1989 Excellent Research Award from the Behaviormetrics Society of Japan, the 1996 Winning Paper Award from the Japan Society of Educational Technology, and the 1999 Engineering Education Award from the Japanese Society for Engineering Education.

Yasuhiro Fujihara (born on January 8, 1971) graduated from the Faculty of Education of Kobe University in 1993. Presently he is an assistant professor at the Faculty of Software and Information Science, Iwate Prefectural University, Japan. His main research fields are educational technology (educational evaluation, e-learning) and ICT education. He is a member of the Japan Society for Educational Technology and the Institute of Electronics, Information and Communication Engineers.

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 74-83

A Methodology for Providing Individualised Computer-generated Feedback to Students

Michael Lambiris

Abstract: The traditional way of providing feedback to students after tests or assignments is labour-intensive. This paper explains the concepts and techniques used by the author to build computer-based applications that analyse students' answers and generate individualised, detailed and constructive feedback. The paper explains how the data gathered from a student's answers can be combined with other knowledge about the subject matter being taught, and the specific test questions, to create computerised routines that evaluate the individual student's performance. This information can be presented in ways that help students to assess their progress, both in relation to their acquired knowledge in specified areas of study, and with regard to their ability to exercise relevant skills. In this way, appropriate feedback can be provided to large numbers of students quickly and efficiently. The same techniques can be used to provide information to the instructor about the performance of the group as a whole, with a degree of detail and accuracy that exceeds the impressions usually gained through traditional marking.
The paper also explains the role of the subject instructor in designing and creating feedback-generating applications. The methodologies described provide insight into the details of the process and are a useful basis for further experimentation and development.

Keywords: Teaching technology, computer-generated feedback, methodology and design, teaching large classes

1 Difficulties with Providing Good Feedback

It is widely recognised by educators that detailed, constructive, prompt and individualised feedback is an important aspect of good teaching and effective learning. See [1]. But providing feedback to students in the traditional form, that is, by reading the students' answers, evaluating them and writing comments, can be very time-consuming, especially with large classes. I teach a subject called Principles of Business Law that attracts enrolments of up to 700 students each semester. Assessment in this subject consists of four computerised tests, each comprising 30 to 40 multiple-choice questions. The tests are done under examination conditions. Scores are posted a day or two afterwards. It was in this context that I wished to provide individualised feedback to the students after each test. With classes of this size, it is impractical for an instructor to write comments for each student. A way was needed to produce feedback by means of a computer program.

2 What Should Feedback Consist of?

One way of providing feedback would be to publish the test questions together with the correct answers. This is often what students expect, but it may not be the best approach to learning. Thirty or forty questions cannot comprehensively test everything a student should know. A test is usually only a sampling of the student's knowledge and skills. When students correctly answer questions in a test, this indicates a probability that they know the relevant subject area well. Similarly, when they answer questions wrongly, this indicates a probability that they have an inadequate grasp of the subject area. If a student's answers demonstrate a weakness, they will likely need to revise that whole area of study rather than being given the correct answers to specific questions. Accordingly, my aim is to provide feedback in the form of general analysis, comment and advice. See [2].

3 Extracting Useful Information from Basic Data

Instructors who are knowledgeable in their own specialist area may not also be competent computer programmers and will need to employ specialist help to create application software. But the instructor needs to understand some basic programming concepts and techniques to be able to participate effectively in the process of designing and shaping feedback-generating software appropriately. In this paper I explain in detail one way in which such software can be created. The starting point is to identify what basic data is available. In each of the assessment tests that I use in teaching my subject, the students answer the questions by selecting a letter that represents their chosen answer (a, b, c, etc.). This letter is recorded in an electronic database so that the student's record consists of a sequence of 30 to 40 individual letters. A symbol (-) is used to indicate unanswered questions. A typical string of answers looks like this: baabebccaecb-dabbab-abbceabaccaabaeadac.
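To make the kind of processing described in the following sections concrete, here is a minimal Python sketch (ours, not the author's software) that parses such an answer string against an answer key and tallies correct, incorrect and unanswered questions. The counter names follow the naming regime introduced later in the paper; the answer key shown is hypothetical.

def tally_answers(answer_string, answer_key):
    """Compare one student's answer string against the key; '-' marks an unanswered question."""
    rans = wans = nans = 0          # right, wrong, not answered
    for chosen, correct in zip(answer_string, answer_key):
        if chosen == '-':
            nans += 1
        elif chosen == correct:
            rans += 1
        else:
            wans += 1
    return rans, wans, nans

# Illustrative use with the example string from the text and a made-up key.
student = "baabebccaecb-dabbab-abbceabaccaabaeadac"
key     = "baabebccaecbadabbabaabbceabaccaabaeadac"   # hypothetical correct answers
print(tally_answers(student, key))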
To create effective feedback, techniques are needed to extract more information from such basic data. How can this be done? Essentially, the process involves combining three types of information. The first is the particular answer the student chose for each question. The second is what the instructor knows generally about the subject area and skills being tested. The third is the focus or intent of each particular question in the test. The computer application can be designed to take proper account of these three factors and to draw specified conclusions from them. In this way, it is possible to build a useful picture of how the student has performed, and to identify their particular strengths and weaknesses. This information, properly presented with comments and advice, forms the basis for individualised feedback. To fully envisage what is possible, it helps to understand the computer processes that are involved. An easy example of this is working out whether a particular answer is correct or incorrect. Essentially, the student's answer must be compared to the correct answer, to see if they are the same. A computer program compares data by using variables. Variables can be thought of as electronic slots in which specified information can be stored. To compare a student's chosen answer with the correct answer, the data representing the student's answer can be retrieved from the database where it is permanently stored, and temporarily placed in a specified variable. The data representing the correct answer can be placed in another variable. The computer program then compares the contents of the two variables. If they match, it follows that the student has answered the question correctly and the OK result can be stored in a third variable, in the form of an increasing number or score. If there is no match, the student's answer is wrong and this NOT OK conclusion can be stored in a fourth variable (or by adjusting the number in the third variable downwards). Using a process like this for all the student's answers, it is possible to work out how many answers were right or wrong. But a difficulty immediately emerges. It is apparent that the results obtained do not disclose why a student has chosen a particular answer. There can be many reasons for getting an answer wrong. For example, the student may have simply misread the question; or failed to understand the significance of a particular term used in the question; or not have had the necessary knowledge or skill to answer correctly. Similar possibilities exist in respect of correct answers. Taken individually, therefore, a student's correct and incorrect answers do not provide a sufficiently reliable basis for giving feedback and advice. But, given sufficient data, it is possible to look for significant patterns in a student's right and wrong answers. When all of the student's answers are analysed in the light of the particular knowledge and skills that the various questions are designed to test, distinct patterns emerge that can be useful as the basis for providing that student with helpful feedback.

4 Identifying Categories of Skill and Knowledge

To develop a computer-based application that carries out the necessary analysis of the student's answers requires careful thought and planning. The first step is to analyse each of the questions on the test, to identify, describe and name the particular categories of knowledge and skill involved. To do this
To do this76 Michael Lambiris the instructor must combine their subject-matter expertise, teaching experience and examining skills. It may initially seem difficult to categorise each question in a specific and uncompromising way - some questions defy any neat classification. But, when classifying questions in one or more specified ways, it quite often happens that fresh insight is gained into what a question is truly attempting to do, and how that question might be improved so that it achieves its objectives more clearly and precisely. This is not a bad thing to happen. Examples taken from specific tests illustrate the way in which categories may be defined. In the first test written by PBL students, analysis shows that each of the 40 questions involves one of three different generic skills. One is an ability to recall and apply acquired knowledge. Another is the ability to find specified information in a Statute and a Law Report. The third is the ability to understand, analyse, and draw conclusions from specific facts. Each question can also be categorised according to the area of knowledge involved. In the test being discussed, the areas of knowledge are: (1) Constitutional arrangements and the organs of government in Australia; (2) The law-making powers of specified organs of government; (3) The processes and procedures for enacting legislation; (4) The hierarchy of the federal and state court systems; (5) The nature and organisation of law; (6) Understanding and appropriate use of legal terms and concepts; (7) The interpretation and application of statutory law; (8) The interpretation and application of case-law; and (9) Recognition and understanding of judicial reasoning. When each category of skill and knowledge has been identified, it needs to be given a brief but distinctive name. Using the example above, the three categories of skill can be named qt1; qt2 and qt3. The nine categories of knowledge can be named: ch; lmp; nol; cs; lc; leg; sti; cl; and jr. These names can be used (with some modification) to identify variables in a computer program. To carry out an analysis, the program will need three separate variables for each named category of skill and knowledge. This allows us to take account of whether the student answered that question rightly (r) or wrongly (w), or left it unanswered (n). Using this naming regime, the name of the first category above qt1 is transformed into three variables named qt1r, qt1w or qt1n. Similarly, qt2 becomes qt2r, qt2w and qt2n; and so on. Further named variables will be needed to track other important aspects of the results, for example: q1; q2; q3, etc to hold the student’s answer to each question; right to hold the correct answer being considered; wrong for the incorrect answer being considered; result for the result of comparing two variables; rans for the total number of correct answers; wans for the total number of incorrect answers; nans for the total number unanswered questions; tans for the total number of questions attempted; and score for the final score for the test. 5 Developing Routines for Analysis To see what sort of information can now be extracted from the basic data requires some understanding of the computer-based processes involved. Imagine that we want to begin by analysing a student’s answer to the first question in the test. The computer program begins by finding the particular student’s string of answers in the database. 
It then selects the answer chosen by that student for the first question, and places the appropriate letter (a, b or c) in the relevant named variable, for example in q1. Next, the program places the letter which represents the correct answer to that question (a, b or c) in the variable right. By comparing the letter stored in q1 with the letter stored in right, the program can decide whether or not the question was correctly answered. This result can then be stored in a third variable where the total number of correct answers is kept: rans. If the question was not answered, this fact can be recorded in the variable that stores the total number of unanswered questions: nans. And if the question was answered wrongly, this conclusion is stored in the variable that stores the total number of incorrect answers: wans. The program can now be made to classify the student's answer to the first question by reference to a category of skill. For example, assume question 1 tested the student's ability to understand, analyse, and draw conclusions from specific facts. Recall that the relevant variable for this skill was named qt3. If the student got the answer right, the program can store this conclusion in the variable that counts the student's correct answers in this category - qt3r. Alternatively, if the question was answered wrongly, that conclusion can be recorded in the variable qt3w, which shows the total of wrong answers in this category. Unanswered questions in this category are recorded in the variable qt3n. The same procedure is followed to classify the student's answer to this question in relation to the area of knowledge being tested, using the variables lmpr, lmpw or lmpn. In this way, the student's answer to question one is evaluated in various ways. The same routines are then repeated for each of the remaining questions, with appropriate changes to the variables used to store the conclusions. Once these basic routines have been carried out, further processes can be used to derive additional information from the data, or to organise it usefully. For example, the total number of questions answered by the student can be calculated by adding together the number of the student's correct and incorrect answers (rans + wans), and placing the result in the variable tans (for total answers). Similar processes add to the value of the available information. So far, the individual questions have been classified as belonging to one of nine different areas of knowledge. For the purpose of generating feedback, the areas of knowledge can usefully be grouped into a smaller number of broader categories. The point of doing this is that it often helps students to understand where their strengths and weaknesses might lie in general terms, before going on to a more detailed analysis. In the first test written by PBL students, the nine areas of knowledge can be grouped into three broader categories, represented by variables total1, total2 and total3, as shown below: In total1 the broad area of knowledge is Organs, powers and processes of government and includes: constitutional arrangements and the organs of government in Australia (ch); the law-making powers of specified organs of government (lmp); the processes and procedures for enacting legislation (leg); and the hierarchy of the federal and state court systems (cs).
In total2, the broad area of knowledge is Legal concepts and language and includes: the nature and organisation of law (nol); and understanding and appropriate use of legal terms and concepts (lc). In total3 the broad area of knowledge is The interpretation and application of law and includes: the interpretation and application of statutory law (sti); the interpretation and application of case-law (cl); and recognition and understanding of judicial reasoning (jr). The totals in the relevant variables (shown in brackets above) are added together to show how the student has performed in each broad area of knowledge. This is done separately for right answers, wrong answers and unanswered questions. For example, the numbers in the variables chr, lmpr, legr and csr are added together in total1r to show the correct answers in this broad area of knowledge, while chw, lmpw, legw and csw are added together in total1w to show the incorrect answers in this same area. The variables chn, lmpn, legn and csn are added together in total1n to show the unanswered questions in this area. The same type of process can be used to produce data in relation to other specified learning objectives. Finally, we can calculate the student's score for the test and place it in score. This is done by taking the number of correct answers (already contained in the variable rans) and doing whatever arithmetic calculation is needed to express it as a final mark. In the test now being discussed, a mark out of 15 is needed because the test counts for 15 per cent of the overall assessment for the subject. The number in rans is therefore divided by 2.667 and the result placed in score.

6 Presenting Information as Feedback

Using routines to analyse the basic data and extract additional information in the way described above is only the initial stage of actually providing feedback to a student. The next step is to build an interface that presents this data appropriately. The information available is sufficient to provide quite detailed feedback if it is built into a careful sequence of explanation, coupled with comment and advice. This should be presented in a clear, friendly, constructive and flexible way. One possibility is to follow a traditional web-page design, with a list of contents on the left of the screen to indicate the extent and structure of the available feedback, with direct hyperlinks to the different sections. See figure 2 below. As far as possible, the feedback should be individualised, by displaying the particular student's own data. In addition, particular comments and advice can be displayed selectively, depending on whether the particular student has a good score, an average score, or a poor score. The screenshots below provide examples. To script a full range of alternative comments and advice requires considerable forethought, but the result is worthwhile. The feedback can also include information about how the individual student's performance compares to the class as a whole. And it can usefully include information and advice about future tests, for example, what new forms of question will be encountered, and what specific preparation may be needed. Students are very receptive to such information in the immediate aftermath of a test. The feedback applications can be made available to students either on a local area network, or by providing a downloadable version, or by running them on-line.
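As a compact illustration of the routines described in sections 4 to 6, the following Python sketch (ours, not the author's application) tallies per-question results into the named category variables and computes the final score. The mappings from question numbers to skill and knowledge categories belong to the instructor, so the dictionaries assumed here are hypothetical.

def analyse(answers, key, skill_of, area_of, max_mark=15):
    """Analyse one student's answer string; skill_of/area_of map question number -> category name."""
    counts = {}                       # e.g. counts["qt3r"], counts["lmpw"], counts["chn"]
    def bump(name):
        counts[name] = counts.get(name, 0) + 1
    rans = wans = nans = 0
    for q, (chosen, correct) in enumerate(zip(answers, key), start=1):
        if chosen == "-":
            outcome = "n"; nans += 1
        elif chosen == correct:
            outcome = "r"; rans += 1
        else:
            outcome = "w"; wans += 1
        bump(skill_of[q] + outcome)   # skill tally, e.g. qt3r / qt3w / qt3n
        bump(area_of[q] + outcome)    # knowledge-area tally, e.g. lmpr / lmpw / lmpn
    counts.update(rans=rans, wans=wans, nans=nans, tans=rans + wans)
    # broader grouping, e.g. total1r = chr + lmpr + legr + csr
    counts["total1r"] = sum(counts.get(p + "r", 0) for p in ("ch", "lmp", "leg", "cs"))
    # final mark out of max_mark (equivalent to rans / 2.667 for 40 questions and 15 marks)
    counts["score"] = round(rans / len(key) * max_mark, 2)
    return counts

Run over every student in the database, the same routine can also accumulate the group-level totals discussed in the next section.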
7 Providing Feedback to the Instructor

So far, this paper has been concerned with providing feedback to the students. But it is also important that the instructor get feedback on the effectiveness of their teaching, the validity of the questions set in the test, and the extent and accuracy of student learning. Traditional marking, which involves reading the answers, provides this feedback because, if a significant number of students make the same mistake, the instructor quickly becomes aware of the problem. With computer-based testing it is harder to get a clear idea of these matters. The normal output of a computer-based test is a list of final marks, and these do not tell the instructor much about where specific problems might lie. However, it is possible to use the techniques described above (with appropriate modification) to provide an analysis of the group results. For a group analysis, the program begins by finding each student's string of answers in the database and carrying out the same sort of analysis already described, classifying the answers as right or wrong, and categorising the right and wrong answers in various ways, for example, by area of knowledge or skill, or in relation to specified learning objectives. As each student's string of answers is analysed, a cumulative total is built up, so that in the end it is known how many students in the entire group got each question right or wrong; what the distribution of marks is; what percentage of the answers were right or wrong in relation to particular areas of law; and what percentage of students satisfactorily demonstrated competency at particular skills. This type of analysis would be time-consuming to do manually, but it is quickly and easily accomplished using the methodologies described. The results give an accurate and clear picture of group performance - for example, see figures 6 and 7 below. If too many students appear to be answering a particular question wrongly, the instructor will quickly notice this and be able to investigate the different possibilities. It may be that the question is badly written; or that the topic is poorly taught; or that the students have prepared inadequately in that area of study. Responding appropriately helps to improve the quality of the teaching and learning process.

8 Conclusions

By using appropriate techniques, and properly coordinating the skills and experience of instructors and computer programmers, it is possible to automatically generate and deliver very satisfactory individualised feedback for students and instructors. Although the examples discussed here use the data obtained from computer-based tests in multiple-choice form, the same ideas could be adapted to tests that are not computer-based, or that do not consist of multiple-choice questions. All that is required is to work out a marking scheme where numbers or letters are used to record the marker's evaluation of what the student has achieved. This data could be digitised and used as the basis for computer-generated analysis and feedback, in much the same way as described in this paper.

Figure 1: A sample question from a test. This question involves case-law, more specifically the meaning of coded information in case citations (variable cl). The student must interpret and evaluate the significance of that information (variable qt3).
In essence, therefore, the techniques explained in this paper could find application in a wide range of situations. The screenshots illustrate various aspects of the ideas explained in this paper. They show how the information generated from the basic data can be presented in a constructive, meaningful and readable style, and within a well-contextualised framework. The last two screenshots present an analysis of group data and show how a clear and detailed overview of class performance as a whole can be gained by the instructor.

References

[1] Johnstone R, Patterson J and Rubinstein K, Improving Criteria and Feedback in Student Assessment in Law, Cavendish Publishing, Australia, 1998.
[2] East R, Effective assessment strategies in law, http://www.ukcle.ac.uk/resources/assessment/effective.html, 2005.
[3] Higgins, E. and Tatham, L, Assessing by multiple choice question (MCQ) tests, http://www.ukcle.ac.uk/resources/trns/mcqs/index.html, 2003.
[4] Lambiris M, Assessment Management Software, Australian Law Courseware Pty Ltd, Australia, http://www.ALCware.com, 2005-2006.

Figure 2: In the feedback application, the topics are listed on the left of the screen with hyperlinks to the content of each section. This particular screen explains the scoring process, shows the individual student's final score and grade, and provides an appropriate comment.

Figure 3: This screen provides a detailed analysis of the individual student's performance in a specified area of law (organs, powers and processes of government) and selectively provides appropriate comment. The feedback is based on the variables total1r, chr, lmpr, legr and csr.

Figure 4: This screen uses the variables qt1r, qt2r and qt3r to analyse the individual student's ability to perform tasks involving specified skills. Appropriate comments are also displayed selectively, depending on the values in these variables.

Figure 5: This screen summarises all of the available data. Presented in tabular form, it gives a concise overview of the student's performance. It also shows how a substantial amount of meaningful information can be generated from the basic data.

Figure 6: Using the same variables as devised for the feedback application, the data for the entire group of students can be generated for the instructor. This screen shows how many students in one group answered particular questions correctly or not.

Figure 7: Group data can also give the instructor an overview of performance in relation to areas of knowledge, or particular skills. This screen shows the percentage of correct answers for the entire group in relation to the eleven areas of knowledge being tested.

Michael Lambiris
The University of Melbourne, Faculty of Law
Victoria 3010 Australia
E-mail: [email protected]

Received: November 6, 2006

Editor's note about the author: Michael Lambiris (born January 22, 1950) obtained an LLB (Hons) from the University of London in 1971, and a PhD from Rhodes University in 1988. He has held positions at the University of Zimbabwe (1976-1982) and at Rhodes University, South Africa (1982-1991). He is presently an Associate Professor and Reader in the Faculty of Law, The University of Melbourne, Victoria, Australia. His main fields of teaching and research are commercial law and computer-based legal education.
In addition to writing computer-based learning materials, he has developed computer-based testing and feedback software, written various papers and books and presented papers at many international conferences. He is the managing director of Australian Law Courseware (Pty) Ltd, which publishes computer-based learning materials for law students.

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 84-93

A Software System for Online Learning Applied in the Field of Computer Science

Gabriela Moise

Abstract: Computer-assisted learning is a very modern study area which can be applied to the learning process. The main objective of this paper is to present a software system for online learning based on intelligent software agent technologies. The main ideas on which this paper is built are: a learning profile is associated with every person (an idea based on the existence of multiple intelligences, defined by Gardner [3]); pedagogical resources can be shaped through educational semantic networks or through conceptual maps; and a flexible software system for computer-assisted learning must be based on intelligent agent technology. A system dedicated to computer-assisted learning must be adapted to the learning profile of each student. The author presents a flexible online teaching software system, which learns to teach according to the learning profile of each student (the author defines this system in the PhD thesis; it includes intelligent agent structures, reward learning algorithms and algorithms to generate plans for an agent). The application includes two agents: the supervising agent and the pedagogical agent, which determines the optimal pedagogical resources for teaching the course. The application has been designed in Microsoft Visual Studio 6.0 and uses Microsoft Agent Technology, which allows voice recognition. Also, the Protégé 3.0 software has been used, software that allows building ontologies for computer-assisted learning. The system has been tested on the Graph Theory course, taught in postgraduate computer science courses, the results proving the necessity of defining a strategy for selecting the pedagogical resources presented to the students according to their learning profile.

Keywords: intelligent agent, conceptual map, learning style

1 Introduction

A software system for online learning in the field of computer science, called iLearning, is proposed in this paper. The system is based on shaping the pedagogical resources through conceptual maps or semantic networks and on using intelligent agent technology [6]. The main feature of the system is its flexibility [7]. In order to use this system we have to consider the learning style of each person [4]. Donald Clark says that teaching out of conformity with the learning style of the students does not mean that the students do not learn, but that students learn better if the teacher presents pedagogical resources according to their learning style [13]. Teachers have to follow six phases in order to use the system [7]: 1. establishing the learning style of the students by questioning them; 2. drawing the conceptual map of the course using software for building conceptual maps; 3. teaching based on a technique for traversing the conceptual map of the course; 4.
the teacher has to build, for each node of the conceptual map, a set of pedagogical resources, and the system will select the best resources according to the learning style of the students; 5. the students are evaluated in each node of the map and the teacher enters the mark into the software system; 6. the teaching takes place by traversing all the conceptual nodes and showing the resources to the students.

The system was proposed and tested by the author of this paper in the doctoral thesis with the same title [7].

2 The Architecture of iLearning System

The system is composed of six functional components: the Instructional Management System, the Communication System, the Informational System, the Evaluation System, the Tutoring System and the Pedagogical System [7].

Figure 1: Architecture of iLearning System

The Instructional Management System is implemented by an SQL server and manages databases about faculties, students, marks, credits, entrances, registration, teachers and fees; builds reports and statistics about the educational process; plans the instructional process (study periods, evaluation periods); records educational plans; starts the teaching process; and takes care of security. The Communication Agent is implemented by a software agent and accomplishes the communication among students, teachers and administration staff by e-mail, forum and chat; it also accomplishes the external communication: e-mail, browser, chat, messaging tools. The Informational System is implemented by the SQL server and manages the courses and pedagogical resources: it records the pedagogical materials on the servers, offers tools to search for courses and information in the course databases, and offers web searching tools (a search engine). The Evaluation Agent is implemented by a software agent and builds personalized tests, evaluates students according to the objectives of the instructional process, and evaluates the educational process through reports and statistics. The Tutor Agent is implemented by a software agent and has the following roles: it guides the teacher in building pedagogical resources, guides the students in the instruction process, and guides the teacher in the teaching activities. The Pedagogic Agent is implemented by a software agent and verifies the instructional objectives (according to Bloom's taxonomy), checks the learning style of the student, defines pedagogical strategies, defines the curriculum, checks the pedagogical resources, and checks the correctness of the evaluation activities. The Interface is implemented by a software agent and customizes the interface. The space developed by the iLearning system is a flexible instructional environment. The work scenario of the participants in the educational process is presented in Figure 2.

Figure 2: iLearning work scenario

3 The Agents of iLearning System

The iLearning system contains three kinds of components [7]:
1. "execution" agents, which perform the teaching-learning process. These agents do not have perception sensors; they receive their inputs from the supervisor agent. These agents are intelligent, reactive and goal oriented.
2. the "supervisor agent", a software module which has a certain degree of intelligence. This agent manages the whole system and is able to learn how the system can be managed.
3. the databases system. This component was implemented through a SQL server and VB client.
3.1 The Execution Agents

The execution agents are adapted from the INTERRAP model [8], to which a capacity component and a decisional module were added; the decisional module establishes the ability of the agent to solve a situation. The architecture of this kind of agent is presented in Figure 3. The control unit receives messages about the state of the environment from the supervisor agent. The unit establishes the ability of the agent to solve the problem; if the agent is not able to solve the situation, it returns an error message. The capacity is defined by the role of the agent in the system. The module analyses the information received from the supervisor agent, recognizes the situation (goal-oriented job or reactive job) and decides which level will be activated. Finally, the module receives the actions that have to be performed by the actors or returns a message to the supervisor agent. The activities are implemented through three levels: the communication level, the goal-oriented level and the reactive level. The control unit maintains the plan library, the behaviour library and the knowledge base of the system.

3.2 The Supervisor Agent

The supervisor agent performs the following functions:

Figure 3: The Architecture of the Execution Agent

1. coordinates the whole activity of the system;
2. receives information from the environment (students, teachers and staff) and decides which agent will be called;
3. receives requests from the agents and transmits them to the destination agents or solves them using its own resources;
4. records the capacities of the agents;
5. manages the agents' library;
6. manages the database systems;
7. communicates with the iLearning developers.
The supervisor agent has the properties of facilitator agents, mediator agents and broker agents. The architecture of the agent is shown in Figure 4. The structure of this agent is simpler than that of the execution agent, since it handles only two types of situations: social situations and administrative situations.

Figure 4: The Architecture of the Supervisor Agent

3.3 The Model of the iLearning System

The iLearning system is a hybrid system that contains a multi-agent system and two database systems. The multi-agent system is composed of agents that cooperate with each other [2]. The agents have different goals and are managed by the supervisor agent [9]. The supervisor agent collaborates with the database systems and coordinates the whole educational process. The model is presented in Figure 5.

Figure 5: The Model of the iLearning System

3.4 The iLearning Programming and Reward Algorithm

To test the system, a software application was built with Visual Studio 6.0 and Microsoft Agent Technology. The algorithm used to train the agents of the iLearning system is a reward algorithm based on the Q-learning algorithm. This technique starts with an initial estimate Q(s, a) for each state-action pair. When action a is selected in state s, the system receives a reward R(s, a) and observes the next state s'. The Q-learning algorithm (Watkins, 1989) [11] updates the state-action value function as follows:

Q(s, a) = Q(s, a) + α (R(s, a) + γ min_{a'} Q(s', a') − Q(s, a))

where α ∈ (0, 1) is the learning rate, γ ∈ (0, 1) is the discount factor and s' is the state reached from state s by executing action a [1], [5]. The conceptual map of the course defines the state space of the system.
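To make the update rule above concrete, the following is a minimal tabular Q-learning sketch over (state, pedagogical-resource) pairs; it is not the author's implementation, which was written in Visual Studio 6.0 with Microsoft Agent Technology. The state encoding and reward value are placeholders. Note that Watkins' formulation uses a max over next-state action values; the formula as printed above shows a min, and the sketch follows the standard max form.

```python
# Minimal tabular Q-learning sketch for selecting pedagogical resources.
# Hypothetical state encoding: (age_category, learning_style, node_index);
# actions: index of the pedagogical resource attached to the current node.
from collections import defaultdict

ALPHA = 0.5   # learning rate, alpha in (0, 1)
GAMMA = 0.9   # discount factor, gamma in (0, 1)

Q = defaultdict(float)  # Q[(state, action)] initialised to 0 (or a small value)

def update(state, action, reward, next_state, next_actions):
    """One Q-learning step: Q(s,a) += alpha * (R + gamma * max_a' Q(s',a') - Q(s,a)).

    Watkins' algorithm uses a max over next-state actions, which this sketch
    implements; the paper's printed formula shows a min at that position."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def best_resource(state, actions):
    """Greedy policy: the resource with the highest Q-value for this state."""
    return max(actions, key=lambda a: Q[(state, a)])

# Example: a student of age category 2 and learning style 1, evaluated at node 0
# with mark 8.75 after being shown resource 1, then moving on to node 1.
update(state=(2, 1, 0), action=1, reward=8.75,
       next_state=(2, 1, 1), next_actions=[1, 2, 3])
print(best_resource((2, 1, 0), [1, 2, 3]))
```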
A student with a certain learning style is, at a given moment of his studies, in a node of the conceptual map, and he has been examined and has received marks during the instruction process. The objective of the system is to maximize the results of the students at the different evaluations. The contribution of the author is the adaptation of this algorithm for the pedagogical agents. The reward is established through the student's evaluation in a node of the conceptual map and is defined according to the equation:

R(s, a) = note, or R(s, a) = ((note × p_note + apriorknowledge) / time) × p_base

where note is the score received by the student at his/her evaluation in a node of the conceptual map; apriorknowledge is the score received by the student at the initial evaluation (before the teaching of the course starts, if the student is just beginning the course, or the average of the scores received by the student at the evaluations in the nodes preceding the current node in the conceptual map); and p_note and p_base are parameters. The algorithm used is:
1. start with an array Q for all possible state-action pairs; each item of the array is initialized with zero or a small value;
2. the optimal policy is initialized with a supervised policy, and Qoptim is initialized with Q;
3. for each student, the conceptual map is traversed and the Q array is updated;
4. the Q array is analysed.

4 The Experimental Results

To test the system, students with different learning styles (visual, auditory, kinesthetic) and different ages were selected, together with a module from the "Graph Theory" course. The conceptual map [10] of the course, with seven nodes, is presented in Figure 6; each node has three pedagogical resources attached. The software contains two agents: the supervisor agent and the pedagogical agent [12]. The pedagogical agent establishes the optimal pedagogical resources to teach the course. The state of the system is defined by the combination (age, learning style, node number). Learning style no. 1 means the visual learning style, style no. 2 the auditory learning style and style no. 3 the kinesthetic learning style. Age category no. 1 means persons aged between 20 and 30 years, age category no. 2 persons aged between 30 and 40 years and age category no. 3 persons aged between 40 and 50 years. The software teaches itself to teach better, selecting the best pedagogical resources for each node of the conceptual map.

Figure 6: The conceptual map of the course Graph Theory, module Shortest Path

The results obtained for age category no. 2 and learning style no. 1 are presented in Table 1. The last column contains the values of the parameter q from the reward algorithm. The greater values of the parameter q give the optimal policy (the best pedagogical resources for each node of the map according to the learning styles of the students). The optimal policy is described in Table 2. The results obtained for age category no. 3 and learning style no. 1 are presented in Table 3.

Age  Learning style  Node  Pedagogical resource  q
2    1               0     1                     8.75
2    1               0     2                     0
2    1               0     3                     10.27
2    1               1     1                     4
2    1               1     2                     6
2    1               1     3                     7
2    1               2     1                     6.37
2    1               2     2                     0
2    1               2     3                     7.68
2    1               3     1                     4.5
2    1               3     2                     6.12
2    1               3     3                     8.18
2    1               4     1                     3.5
2    1               4     2                     9.28
2    1               4     3                     0
2    1               5     1                     4
2    1               5     2                     3.5
2    1               5     3                     7
2    1               6     1                     0
2    1               6     2                     5.25
2    1               6     3                     5.75

Table 1: Experimental results for age category 2 and learning style 1
Age  Learning style  Node  Pedagogical resource
2    1               0     3
2    1               1     3
2    1               2     3
2    1               3     3
2    1               4     2
2    1               5     3
2    1               6     3

Table 2: The optimal policy for age category 2 and learning style 1

Age  Learning style  Node  Pedagogical resource  q
3    1               0     1                     6.25
3    1               0     2                     8.75
3    1               0     3                     0
3    1               1     1                     0
3    1               1     2                     6.5
3    1               1     3                     0
3    1               2     1                     5
3    1               2     2                     5.75
3    1               2     3                     0
3    1               3     1                     4
3    1               3     2                     0
3    1               3     3                     5
3    1               4     1                     0
3    1               4     2                     4.5
3    1               4     3                     5
3    1               5     1                     0
3    1               5     2                     4.5
3    1               5     3                     3.5
3    1               6     1                     3.5
3    1               6     2                     0
3    1               6     3                     4.5

Table 3: Experimental results for age category 3 and learning style 1

The greater values of the parameter q give the optimal policy (the best pedagogical resources for each node of the map according to the learning styles of the students). The optimal policy is described in Table 4. It is true that the best results are obtained if there are many pedagogical resources for each node of the conceptual map. These pedagogical resources have to be in different formats (text, multimedia, audio, video), containing details, explanations, exercises, references, case studies and so forth. There are many software tools that can be used to produce different file formats. We also need to improve the performance of the computers in order to use these kinds of software.

Age  Learning style  Node  Pedagogical resource
3    1               0     2
3    1               1     2
3    1               2     2
3    1               3     3
3    1               4     3
3    1               5     2
3    1               6     3

Table 4: The optimal policy for age category 3 and learning style 1

The teachers validated the results; pedagogical resources no. 2 and no. 3 are the best pedagogical resources built for this course. Pedagogical resources no. 2 and no. 3 contain many details, examples and explanations. Resource no. 3 (i.e. the selection of the third pedagogical resource) is the best choice for persons in age category no. 2 with learning style no. 1. Resources no. 2 and no. 3 (i.e. the selection of the second and the third pedagogical resources) are the best choices for persons in age category no. 3 with learning style no. 1.

5 Summary and Conclusions

The online instruction model can be implemented with intelligent agent technology; the iLearning system proves this. The system models the teaching-learning process so that it can adapt itself to the learning profile of each person. Two kinds of agents are defined: the execution agent and the supervisor agent. In the doctoral study, the author tested the system and showed the importance of using intelligent agent technology in online instruction systems.

References

[1] Bowling, M., Veloso, M., Multiagent Learning Using a Variable Learning Rate, Artificial Intelligence, Vol. 136, pp. 215-250, 2002.
[2] Buiu, C., Albu, M., Agenti Software Inteligenti, Editura ICPE, 2000.
[3] Gardner, H., Intelligence Reframed: Multiple Intelligences for the 21st Century, Basic Books, 1999.
[4] Joyce, B., Weil, M., Calhoun, E., Models of Teaching, Published by Basic Books, 1999.
[5] Leon, F., Sova, I., Gâlea, D., Reinforcement Learning Strategies for Intelligent Agents in Knowledge-Based Information Systems, Proceedings of the 8th International Symposium on Automatic Control and Computer Science, Iaşi, 2004.
[6] Moise, G., The role of intelligent agents in online learning environment, CBLIS Conference Proceedings, pp. 98-105, 2005.
[7] Moise, G., A Software System for Online Learning Applied to Higher Education in the Field of Computer Science, PhD Thesis, Petroleum-Gas University of Ploiesti, 2006.
[8] Müller, J. P., The Design of Intelligent Agents: A Layered Approach, Lecture Notes in Computer Science, Vol. 1177 (Lecture Notes in Artificial Intelligence), Springer-Verlag, 1996.
[9] Rao, A. S., Georgeff, M. P., BDI Agents: From Theory to Practice, Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), San Francisco, 1995.
[10] Sowa, J. F., Knowledge Representation: Logical, Philosophical, and Computational Foundations, Brooks/Cole, Thomson Learning.
[11] Watkins, C., Learning from Delayed Rewards, PhD Thesis, University of Cambridge, England, 1989.
[12] Wooldridge, M., Jennings, N. R., Intelligent Agents: Theory and Practice, The Knowledge Engineering Review, Vol. 10, No. 2, 1995.
[13] http://www.nwlink.com

Gabriela Moise
Petroleum-Gas University of Ploiesti, Computer Science Department
No. 39 Bd. Bucuresti, Ploiesti, Romania
E-mail: [email protected]
Received: November 11, 2006

Editor's note about the author: Gabriela Moise (born on February 13, 1969) graduated from the Faculty of Mathematics, specialization Informatics, of the University of Bucharest. She worked as a software developer in the software industry. Since 2003 she has been a Lecturer at the Petroleum-Gas University of Ploiesti. Her research fields are e-learning, graph theory, intelligent agents, knowledge representation and e-health. She has (co)authored seven books and more than twenty research papers. She has participated in many international conferences in the e-learning and e-business area.

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 94-102

MAS_UP-UCT: A Multi-Agent System for University Course Timetable Scheduling

Mihaela Oprea

Abstract: Many real-world applications are mapped into combinatorial problems. An example of such a problem is timetable scheduling. Its two basic characteristics are its distributed and dynamic environment. One efficient solution to this problem could be provided by an agent-based approach: a timetable scheduling problem can be modelled as a multi-agent system that provides the final schedule by taking into account all the restrictions. This paper presents preliminary research work that involves the development of a multi-agent system for university course timetable scheduling, named MAS_UP-UCT. We focus on the architecture of the multi-agent system and on the evaluation of the communication process by using interaction diagrams.

Keywords: Intelligent agents, Multi-agent systems, Timetable scheduling

1 Introduction

In the last decade, several Artificial Intelligence (AI) technologies and methods have been applied in the educational domain, at high school or university level. Most of the applications that use AI solve the tutoring/teaching and/or examination tasks, while fewer of them also try to solve the administrative tasks (e.g. course timetabling, examination timetabling, student attendance control, student registration) related to an educational institution. A recently developed educational system that also models such tasks is the e-Class Personalized prototype system presented in [10]. This system is an extension of the widely available open source Learning Content Management System, e-Class, which has a component named School administration that provides the tools handling timetables, financial matters, personal student data, student registration, etc.
In this paper, we shall focus on the timetabling problem and discuss MAS_UP-UCT, an agent-based system that we have designed, which provides solutions to university course timetabling. The general task of solving timetable scheduling problems is iterative and time consuming. In real-world applications, the participants in the timetable scheduling have conflicting preferences, which makes the search for an optimal solution an NP-hard problem. In order to solve the problem, it is necessary to find a compromise between all the professors' requirements, which are usually conflicting (e.g. day, time). The constraints are related to the availability, timetable and preferences of each professor, to room availability, to the number of students, and to the curricula. In order to solve this problem for the particular case of university course timetable scheduling, we have adopted the agent-based approach. Multi-agent systems (MAS) are concerned with coordinating behaviour among a collection of autonomous intelligent agents (e.g. software agents) that work in an environment. Sometimes, software agents are designed to reconcile their own interests with the constraints implied by other agents. One type of software agent is the expert assistant, which enables us to automate certain manual tasks and to work more efficiently. Expert assistant is a term given to an intelligent software agent that performs certain tasks on our behalf [15], [14]; for example, our daily organiser is an assistant. The complexity of multi-agent systems is generally higher than that of conventional software systems, and their success relies on properly designed and well-tested subsystems. Also, in the particular case of timetable scheduling, the MAS could find an optimal or a sub-optimal solution using mainly inter-agent communication (with minimal message passing). This paper presents the architecture of a multi-agent system, MAS_UP-UCT, which is under development and whose main purpose is the modelling of university course timetable scheduling. We shall describe the architecture of the multi-agent system, focusing on the mapping of course timetable scheduling in terms of intelligent agents, and finally we shall make a preliminary evaluation of the multi-agent system.

2 University Course Timetabling Problem

The scheduling problem can be defined as the problem of finding the optimal sequence for executing a finite set of operations (tasks or jobs) under a certain set of constraints that must be satisfied. A scheduler usually attempts to maximize the utilization of individuals and/or resources and to minimize the time required to complete the entire process being scheduled. There exist a number of different types of scheduling problems, such as job shop problems, sports league game scheduling, timetabling, service timetabling problems for transportation networks, etc. Many scheduling problems share some features with the timetabling problem. A survey of automated timetabling is presented in [12]. In the educational context, scheduling is the problem of assigning a set of events (courses and/or exams) to periods of limited length and to rooms, subject to certain conditions. There are two types of academic schedules: the course schedule and the examination schedule.
For both types of problems, the resources include students, staff, rooms, courses, time and equipment. Several AI-based educational scheduling systems have been reported in the literature (see, e.g., the system presented in [13] for examination scheduling in universities). The formulation of the university course timetabling problem (as given in [4] and [12]) is the following.

Input data: q courses K_1, ..., K_q, where for each i course K_i consists of k_i lectures; r curricula S_1, ..., S_r, which are groups of courses that have common students; p, the number of periods; l_k, the maximum number of lectures that can be scheduled at period k (i.e. the number of rooms available at period k).

Goal: find y_{ik} (i = 1, ..., q; k = 1, ..., p) such that
(1) Σ_{k=1..p} y_{ik} = k_i, for i = 1, ..., q;
(2) Σ_{i=1..q} y_{ik} ≤ l_k, for k = 1, ..., p;
(3) Σ_{i∈S_l} y_{ik} ≤ 1, for l = 1, ..., r and k = 1, ..., p;
(4) y_{ik} = 0 or 1.

The constraints are the following: each course is composed of the correct number of lectures (relation (1)); at each period there are not more lectures than rooms (relation (2)); conflicting lectures are not scheduled at the same period (relation (3)).

The objective function: maximize Σ_{i=1..q} Σ_{k=1..p} d_{ik} y_{ik}, where d_{ik} is the desirability of having a lecture of course K_i at period k. (An illustrative feasibility check of this formulation on a toy instance is sketched below, after the description of the manual scheduling process.)

Different solutions, manual or automated, have been proposed in the literature. Some automated solutions are given by tabu search [3], constraint satisfaction [12], genetic algorithms [2], logic programming [5], and combinations of different methods [9].

3 The architecture of the MAS_UP-UCT system

We have designed the architecture of a multi-agent system, MAS_UP-UCT, that tries to solve the university course timetable scheduling problem optimally. Figure 1 shows the architecture of the multi-agent scheduling system, while Figure 2 presents the general overview of the university course timetabling task.

Figure 1: The architecture of MAS_UP-UCT

We briefly describe how manual university course timetabling is usually done. Suppose the university includes five faculties, each of them having a number of specializations. The timetabling for each specialization is done by a person dedicated to this job, whom we shall call the specialization course scheduler. This person will produce five, four or three timetables, corresponding to the specialization's number of study years. The specialization course scheduler will receive a list of options from each professor who teaches a course to a certain year of study at that specialization. The list will include the professor's options ordered by desirability, and also the list of impossible timetable slots. After course timetable scheduling is done at every faculty, the room allocation activity is started at university level. Thus, the university course timetable scheduling problem is divided into two subproblems: 1. faculty course timetable scheduling (which involves only the allocation of course day and time), and 2. university course room allocation (which involves the allocation of rooms for courses). When all courses have allocated time intervals (day and time) and rooms, the university course timetable scheduling ends with success. Whenever a problem occurs, a communication process is started, which mainly involves negotiation. In most Romanian universities, university course timetable scheduling is done either manually or partially automatically. In order to improve the efficiency of the whole activity, we have mapped the course timetabling in terms of autonomous intelligent agents.
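As a concrete reading of the formulation in Section 2, the following is a minimal sketch (ours, not part of MAS_UP-UCT) that checks whether a candidate 0/1 assignment y satisfies constraints (1)-(3) and evaluates the objective function. The course counts, curricula and desirability values are toy data chosen only for illustration.

```python
# Feasibility check and objective value for the course timetabling formulation
# (constraints (1)-(4) and the objective from Section 2). All data are toy values.
from itertools import product

q, p, r = 3, 4, 2                 # courses, periods, curricula
k = [2, 1, 1]                     # k_i: lectures required per course
l = [1, 1, 1, 1]                  # l_k: rooms available per period
S = [{0, 1}, {1, 2}]              # S_l: curricula as sets of course indices
d = [[1, 2, 0, 0],                # d_ik: desirability of course i at period k
     [0, 1, 2, 0],
     [0, 0, 1, 2]]

def feasible(y):
    """y[i][t] in {0,1} (constraint (4)): 1 if a lecture of course i is at period t."""
    if any(sum(y[i]) != k[i] for i in range(q)):                        # constraint (1)
        return False
    if any(sum(y[i][t] for i in range(q)) > l[t] for t in range(p)):    # constraint (2)
        return False
    return all(sum(y[i][t] for i in S[c]) <= 1                          # constraint (3)
               for c, t in product(range(r), range(p)))

def objective(y):
    return sum(d[i][t] * y[i][t] for i in range(q) for t in range(p))

y = [[1, 1, 0, 0],   # course 0 at periods 0 and 1
     [0, 0, 1, 0],   # course 1 at period 2
     [0, 0, 0, 1]]   # course 2 at period 3
print(feasible(y), objective(y))   # -> True 7
```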
Each faculty has a scheduler multi-agent system (MAS-Fi), which has to schedule the courses of that faculty. The main scheduler agent (the university scheduler agent), which allocates the rooms, is the MScheduler Agent. Because most professors teach courses at different faculties, every faculty scheduler agent has to communicate with the other scheduler agents in order to solve some critical situations that may arise. The negotiation strategy used by the agents is similar to that described in [8]. Figure 3 presents the MAS at faculty level, which includes a faculty scheduler agent and expert assistants (EA) for each specialization of that faculty. For each specialization, an expert assistant is developed that has to perform all the activities connected with that specialization (e.g. student records, course curricula, etc.). An important activity that should be done by an expert assistant is course timetable scheduling (day and time).

Figure 2: The general overview of the university course timetabling task

Figure 3: MAS at faculty level

Many constraints should be satisfied in order to solve the course timetabling. For example, one constraint is that all courses of a specialization are taught to all the groups of that specialization, and this constraint may become more severe in the case of courses that are taught to more than one specialization (this case appears for specializations that have courses with the same curricula). The faculty scheduler agents, which act autonomously, can schedule the university course timetable on the professors' behalf. Ideally, all professors' preferences should be accepted. Unfortunately, we cannot reach an agreement among agents that takes into consideration all the professors' preferences. In course timetable scheduling, agents must quantify the professors' subjective preferences. In the worst cases (when classical negotiation yields no result), we can reach a collective agreement by using a persuasion protocol (similar to that presented in [7]). The persuasion protocol is based on the rationality of the agents, which should satisfy some criteria of rationality (e.g. maintaining logical consistency). The advantage is that negotiation using a persuasion protocol can reach more agreements than existing negotiation protocols, and it can improve the rate of agreement in course timetable scheduling. The analysis and design phases of the MAS_UP-UCT development were carried out using the Gaia v.2 methodology [1]. In this methodology, three models are built during the design step: the Agent Model, the Service Model and the Acquaintance Model. The Agent Model specifies the types of agents that compose the system; basically, the assignment of roles to agent types creates the Agent Model. The Service Model specifies the services that have to be implemented by the agent types. In Gaia, a service is a coherent block of functionality, neutral with respect to implementation details. The Acquaintance Model shows the communication links between agent types. A model of the environment is also built. Summarizing, four types of agents are used by our system: the Main Scheduler Agent (MSA), the Faculty Scheduler Agent (FSA), the Expert Assistant Agent (EAA) and the Personal Agent (PA). Figure 4 shows the roles and responsibilities of each type of agent that composes the MAS_UP-UCT system.
Figure 4: Roles for the agents that compose the system MAS_UP-UCT

We briefly discuss two critical situations that may arise during course timetabling.
1) At faculty course timetabling: a day and time timetable conflict (two or more professors' options are identical). Solution: start a negotiation process between the expert assistant of that specialization and the professors involved (or their personal agents). The specialization expert assistant sends a message to all the professors involved in the conflict and waits for the outcome of the negotiation. If it receives an answer, it performs a rescheduling; if it receives no solution, it starts a persuasion negotiation process, suggesting a solution.
2) At university course timetabling: no room is available for a course at a certain day and time. In this case the MScheduler agent starts a negotiation process between the faculty scheduler agents involved in the conflict, by giving some options. Each faculty scheduler involved in the conflict passes the message to the corresponding expert assistants or, in some cases, continues to pass the message to the professors' personal agents, who then negotiate directly. If after this negotiation no solution is found (e.g. some courses cannot be moved to another module or day), the main scheduler agent starts a persuasion dialogue between the faculty agents that are in conflict, which in turn transfer the problem to the lower level.

4 Evaluation of the multi-agent system

As an evaluation method for our MAS we have chosen the interaction diagram method [11]. An interaction diagram is a graph showing the processing of each agent symbolically as one or more vertical bars, and the messaging between agents as horizontal or oblique arrows between agents (from sender to receiver), decorated with message indications. Figure 5 presents an example of an interaction diagram, which illustrates a negotiation process at faculty level between two expert assistants (EAi and EAj). Figure 6 shows the interaction diagram for a critical situation.

Figure 5: Example of interaction diagram

In order to evaluate the multi-agent system, we can use interaction diagrams to design the communication process between agents (expert assistants, personal agents, etc.) and to verify that the system executes the correct communication sequences. We have used message flow fragmentation in order to analyse the communication process. The direct sequence and a part of the inverse sequence of the message flow fragmentation that corresponds to the negotiation process shown in Figure 6 are given below:

beg(MAS); beg(FSA_l); beg(FSA_k); beg(EA_lt); beg(EA_lr); beg(EA_ki); beg(EA_kj);
snd(MAS, m_1); split(m_1, m_1k, m_1l); rcv(FSA_k, m_1k); rcv(FSA_l, m_1l); split(m_1k, m_ki, m_kj); split(m_1l, m_lt, m_lr); rcv(EA_ki, m_ki); rcv(EA_kj, m_kj); rcv(EA_lt, m_lt); rcv(EA_lr, m_lr);
snd(EA_ki, m_ki^-1); snd(EA_kj, m_kj^-1); join(m_ki^-1, m_kj^-1, m_1k^-1); ...; end(MAS); end(FSA_l); end(FSA_k); end(EA_lt); end(EA_lr); end(EA_ki); end(EA_kj)

The inter-agent communication is done by using the agent language FIPA ACL. Figure 7 shows an example of such a message, mA12 - EA-IME-AC, exchanged during a negotiation.
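Figure 7 is not reproduced here; purely as an illustration of the kind of FIPA ACL message exchanged between the agents, the sketch below assembles a hypothetical "propose" message from a personal agent to a specialization expert assistant. The sender, receiver, slot values and content are invented for the example and do not reproduce the message shown in the paper's figure.

```python
# Illustrative FIPA ACL "propose" message, assembled as a plain string.
# All names and content below are hypothetical placeholders.
def acl_message(performative, sender, receiver, content,
                language="fipa-sl", ontology="course-timetabling",
                protocol="fipa-contract-net", conversation_id="timetable-001"):
    return (f"({performative}\n"
            f"  :sender (agent-identifier :name {sender})\n"
            f"  :receiver (set (agent-identifier :name {receiver}))\n"
            f"  :content \"{content}\"\n"
            f"  :language {language}\n"
            f"  :ontology {ontology}\n"
            f"  :protocol {protocol}\n"
            f"  :conversation-id {conversation_id})")

# A personal agent proposing an alternative slot to an expert assistant:
print(acl_message("propose", "PA1", "EA-CS",
                  "(slot (course :id C12) (day Tuesday) (hours 10-12))"))
```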
Let us consider a course timetabling conflict at the level of a faculty. The conflict consists in the situation where two professors (PA1, PA2) who teach different courses at the same specialization (computer science) have identical day and time options. This situation is described in Figure 8. The information exchanged during conflict resolution can be modelled with protocol diagrams using AUML notation [6]. Figure 8 shows a sample negotiation protocol for resolving a day and time timetable conflict. As can be seen, the expert assistant of the computer science specialization (EACS) informs the two personal agents corresponding to the two professors about the conflict. After this message is sent to both personal agents, a negotiation protocol starts between them, involving a sequence of proposals and counter-proposals until a solution is accepted by the two agents. At the end of the negotiation process, PA1 informs the expert assistant agent about the solution found.

Figure 6: An example of negotiation in a critical situation

Figure 7: Example of a FIPA ACL message

Figure 8: Negotiation protocol for a day and time conflict in course timetabling

5 Conclusion

The paper presented the current state of a research effort involving the development of a multi-agent system for university course timetable scheduling. The purpose of our work was to analyse the benefits of using an agent-based approach for university course timetable scheduling, which involves many communication, cooperation and negotiation processes. We have described the architecture of a multi-agent system for university course timetable scheduling, MAS_UP-UCT, and briefly discussed the evaluation of the multi-agent system. We can conclude that the main benefits of the agent-based approach adopted for university course timetabling are the possibility of carrying out negotiation between agents as a solution to the conflicts that may arise, and the analysis of the flow of messages exchanged between agents by means of interaction diagrams.

References

[1] L. Cernuzzi, T. Juan, L. Sterling, F. Zambonelli, The Gaia Methodology - Basic Concepts and Extensions, Methodologies and Software Engineering for Agent Systems, eds. Bergenti, F., Gleizes, M.-P., Zambonelli, F., Kluwer Academic Publishers, pp. 69-88, 2004.
[2] D. Corne, P. Ross, H.-L. Fang, Fast practical evolutionary timetabling, Lecture Notes in Computer Science, LNCS 865, pp. 251-263, 1994.
[3] D. Costa, A tabu search algorithm for computing an operational timetable, European Journal of Operational Research, vol. 76, pp. 98-110, 1994.
[4] D. de Werra, An Introduction to Timetabling, European Journal of Operational Research, vol. 19, pp. 151-162, 1985.
[5] R. Fahrion, G. Dollanski, Construction of university faculty timetables using logic programming techniques, Discrete Applied Mathematics, vol. 35, no. 3, pp. 221-236, 1992.
[6] M.-F. Huget, J. Odell, B. Bauer, The AUML Approach, Methodologies and Software Engineering for Agent Systems, eds. Bergenti, F., Gleizes, M.-P., Zambonelli, F., Kluwer Academic Publishers, pp. 237-257, 2004.
[7] T. Ito, T. Shintani, An Agenda-scheduling System Based on Persuasion Among Agents, Technical report, Nagoya Institute of Technology, 1997.
[8] M. Oprea, The Use of Adaptive Negotiation by a Shopping Agent in Agent-Mediated Electronic Commerce, Lecture Notes in Artificial Intelligence, LNAI 2691, Springer-Verlag, Berlin Heidelberg, pp. 594-605, 2003.
[9] G. Picard, C. Bernon, M.-P. Gleizes, Cooperative Agent Model within ADELFE Framework: An Application to a Timetabling Problem, Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multi Agent Systems, New York, USA, pp. 1506-1507, 2004.
[10] E. G. Prodromou, N. Avouris, e-Class Personalized: Design and Evaluation of an Adaptive Learning Content Management System, Artificial Intelligence Applications and Innovations, eds. Maglogiannis, I., Karpouzis, K., Bramer, M., Springer, pp. 409-416, 2006.
[11] R. Ronnquist, C. K. Low, Analysing Expert Assistants through Interaction Diagrams, Proceedings of Autonomous Agents 97, ACM Press, pp. 500-501, 1997.
[12] A. Schaerf, A survey of automated timetabling, Technical report CS-R9567, Centrum voor Wiskunde en Informatica, 1995.
[13] C. C. Wei, A. Lim, Automated Examination Scheduling Problem, Proceedings of the IASTED International Conference Applied Informatics, Innsbruck, ACTA Press, pp. 93-96, 2002.
[14] G. Weiss, Multiagent Systems, The MIT Press, Cambridge, Massachusetts, 1999.
[15] M. Wooldridge, N. R. Jennings, Intelligent agents: theory and practice, The Knowledge Engineering Review, vol. 10, no. 2, pp. 115-152, 1995.

Mihaela Oprea
University Petroleum-Gas of Ploiesti, Department of Informatics
Address: B-dul Bucuresti 39, 100680 Ploiesti, Romania
E-mail: [email protected]
Received: November 8, 2006

Editor's note about the author: Mihaela OPREA (born on February 20, 1967) graduated from the Faculty of Automatics and Computer Science of the University Politehnica of Bucharest in 1990, and received her PhD from the Department of Automatics and Computer Science of the University Petroleum-Gas of Ploiesti in 1996. Currently, she is a full professor at the Department of Informatics of the University Petroleum-Gas of Ploiesti. Her main research interests include pattern recognition algorithms, machine learning, knowledge modelling, and applications of multi-agent systems and artificial intelligence techniques in various domains such as environmental protection, engineering, and education. She has published more than 70 research papers in the area of artificial intelligence in international journals and in the proceedings of prestigious international conferences printed by important publishers (Springer, Kluwer Academic, IOS Press, IEEE Computer Society Press). Since 1995 she has been a research visitor at universities and artificial intelligence research institutes in the UK, Austria, Spain, Greece, Sweden, the Czech Republic, Hungary, and France. She is a member of several professional associations and of the international programme committees of several conferences held in Europe, the USA, Canada and South America, also participating in some of them as a tutorial presenter.

International Journal of Computers, Communications & Control Vol. II (2007), No. 1, pp. 103-105

MANAGEMENT INFORMATION SYSTEMS: Managing the Digital Firm - 9th edition, authors: Kenneth C. Laudon and Jane P. Laudon (Book Review)

Florin G. Filip

MANAGEMENT INFORMATION SYSTEMS: Managing the Digital Firm - 9th edition. Authors: Kenneth C. Laudon and Jane P. Laudon. Pearson Prentice Hall, Pearson Education, Inc., Upper Saddle River, New Jersey 07458. ISBN 0-13-153841-1; XXXII+641+22+14+2+21 pages.

This is the 9th edition of a successful textbook. The authors are two well-known and productive writers. K. C. Laudon, a professor of Information Systems at the Stern School of Business of New York University, received his B.A. in Economics from Stanford and his Ph.D. from Columbia University.
He is the author of twelve books and over forty articles about the social, organizational and management impacts of information systems, privacy, ethics, and multimedia technology. Jane P. Laudon, a management consultant in the information systems area, received her M.A. from Harvard University and her Ph.D. from Columbia University and has authored seven books. Her main scientific interests are systems analysis, data management and software evaluation. The background, scientific interests and expertise of the authors and their previous works had an obvious impact on the manner in which this book was conceived, written and accompanied by auxiliary materials (the CD-ROM and companion web site). The authors start from the premise that, nowadays, "Information systems knowledge is essential for creating successful, competitive firms, managing global corporations, adding business value and providing useful products and service to customers" (p. XIX). Moreover, they state that "in many industries survival and even existence without the extensive use of IT is inconceivable" (p. 31). An important development the authors remark on is the emergence of the digital firm, "where nearly all core business processes and relationship with customers, suppliers and employees are digitally enabled" (p. 31). In the book, management information systems (MIS) is defined broadly as "the study of [computer based] information systems in business and management" (p. 44). Besides, the authors adopt a broader view of information systems (IS), "which encompasses an understanding of the management and organizational dimensions as well as technical dimensions of the systems as information systems literacy" (p. 20). Consequently, this book can be viewed as an effort made by the Laudons to contribute to building up and consolidating such information systems literacy for current and future managers, who are to be confronted with several major challenges concerning: a) "information system investments", b) "strategic business", c) "globalization", d) "information infrastructure", and e) "ethics and security" (p. 28). The authors have noticed a "user - designer communication gap". In Table 15.3 (p. 552) they give several examples of that gap. While the user is concerned with problem-solving-related questions such as "Will the system deliver the information I need for my work?", "How quickly can I access the data?", ..., "How will the system fit into my daily business schedule?", the designer is preoccupied with finding optimal answers to technology-oriented questions such as "How many lines of program code will it take to perform this function?", ..., "What database management system should we use?". In order to help future managers successfully face the major challenges mentioned above and to bridge the possible communication gap, the authors adopt a sociotechnical view and style of presentation. They combine technical aspects (drawn from computer science, management science, and operations research) with behavioral elements (drawn from economics, sociology, psychology). Throughout the book, the presentation method chosen includes, besides the introduction of MIS concepts, facts about real-world experiences (about 200 examples are given), new technologies, and various exercises. In accordance with the sociotechnical perspective adopted, the book is organized in four parts, as follows.
Part One, which is made up of five chapters, addresses the organizational and managerial foundations of information systems. It introduces real-world systems and highlights their relationship with organizations and management. The concept of the digital firm is explained together with the fashionable concepts of e-business and e-commerce. Various types of information systems are reviewed (they will be presented in detail in the remaining chapters). The four major business functions (sales and marketing, production and manufacturing, human resources, and finance and accounting) are introduced with a view to being referred to in a closing section of each of the remaining chapters. A special chapter is dedicated to ethical and social issues in the digital firm. Part Two, which is made up of five chapters, addresses the various facets of the information technology infrastructure (ITI). ITI is viewed as a set of technology resources (hardware and software) and as a set of services (computing platform service, telecommunication service, data management service, application software service, IT management service, standards service, education and research and development service), composed of both human and technical capabilities (p. 186). Particular emphasis is laid on describing leading-edge wireless technologies, and security and control aspects. Part Three, which is made up of three chapters, describes several types of information systems. One by one, the main concepts of Enterprise Applications (including Enterprise Resource Planning - ERP, Customer Relationship Management - CRM, and Supply Chain Management - SCM), knowledge management systems, and [group and executive] decision support systems are presented. Part Four, which is made up of three chapters, presents the process of building, implementing and managing the systems in organizations. Several important topics, such as development approaches (based on lifecycle or prototyping), managing the necessary changes in the organizations, and various methods for evaluating the business value of a project, are reviewed. The last chapter addresses the specific aspects of managing international information systems. In comparison with previous editions, the present edition contains much up-to-date information about leading-edge technologies. The chapters that address the "wireless revolution", enterprise applications, and knowledge management systems are new. Other chapters were rewritten and/or completed with new topics. Throughout the book, all chapters are organized in the same manner. Each chapter opens with the statement of five learning objectives. Then, five (in a few cases, four) subchapters of text follow, providing the MIS concepts and real-world examples related to the learning objectives. Other features common to all chapters are: a) the opening case, which describes a real-world example, and the corresponding diagram to analyze the case in terms of a management, organization and technology model; b) the concluding section on management opportunities, challenges and solutions related to the theme of the chapter; c) the "Make IT your Business" section, which places the concepts described in the chapter in relation to the major business functions; d) the "Chapter summary", organized in accordance with the chapter objectives; e) the list of key terms; f) the review questions; g) the application software exercise, used to develop solutions to real-world business problems; h) the running case project (on a simulated firm called Dirt Bikes).
The book also contains an impressive list of references organized by chapters, a Glossary, three appendices of a "hands-on" type (for analyzing a case study, designing a database, and SQL), an Index of terms and an Index of organizations. The text about information systems is supplemented by two IT-based learning aids: a) the companion web site (to guide the interactive study, to facilitate Internet connections, and to provide additional case studies), and b) the interactive multimedia CD-ROM (to be used either as an interactive study guide or as an alternative to the text). To conclude this review, I think this book, together with the CD-ROM and companion web site, is an excellent dynamic and active learning environment. I recommend it as a textbook for undergraduate information systems courses in business administration departments. It can also be used by students in computer science as a complementary text, which can help them build a broader view of information systems.

Florin-Gheorghe Filip
Romanian Academy
125, Calea Victoriei, 010071 Bucharest-1, Romania
E-mail: [email protected]
Received: December 31, 2006

Author index
Albeanu G., 37; Andonie R., 5; Andreatos A., 39, 56; Anohina A., 48; Ayache N., 26; Ballester M.A.G., 26; Bentayeb A., 17; Dean R., 5; Doukas N., 56; Filip F.G., 103; Fujihara Y., 66; Hikita A., 66; Kitagaki I., 66; Lambiris M., 74; Linguraru M.G., 26; Maamri N., 17; Moise G., 84; Oprea M., 94; Russo J.E., 5; Takeya M., 66; Trigeassou J-C., 17

Description
International Journal of Computers, Communications & Control (IJCCC) has been published since 2006 and has 4 issues per year, edited by CCC Publications, powered by Agora University Editing House, Oradea, ROMANIA. Every issue is published in online format (ISSN 1841-9844) and print format (ISSN 1841-9836). We offer free online access to the full content of the journal at http://journal.univagora.ro. The printed version of the journal should be ordered by subscription and will be delivered by regular mail. IJCCC is directed to the international communities of scientific researchers from universities, research units and industry. IJCCC publishes original and recent scientific contributions in the following fields: Computing & Computational Mathematics; Information Technology & Communications; Computer-based Control. The publishing policy of IJCCC particularly encourages the publishing of scientific papers that are focused on the convergence of the 3 "C" (Computing, Communications, Control). Topics of interest include, but are not limited to, the following: Applications of the Information Systems, Artificial Intelligence, Automata and Formal Languages, Collaborative Working Environments, Computational Mathematics, Cryptography and Security, E-Activities, Fuzzy Systems, Informatics in Control, Information Society - Knowledge Society, Natural Computing, Network Design & Internet Services, Multimedia & Communications, Parallel and Distributed Computing. The articles submitted to IJCCC must be original and previously unpublished in other journals. The submissions will be reviewed independently by two reviewers. IJCCC also publishes: papers dedicated to the works and life of some remarkable personalities; reviews of some recent important published books.
Also, IJCCC will publish as supplementary issues the proceedings of some international conferences or symposiums on Computers, Communications and Control, scientific events that have reviewers and program committee. The authors are kindly asked to observe the rules for typesetting and submitting described in Instructions for Authors.Instructions for authors Papers submitted to the International Journal of Computers, Communications & Control must be prepared using a LaTeX typesetting system. A template for preparing the papers is available on the journal website http://journal.univagora.ro. In the template file you will find instructions that will help you prepare the source file. Please, read carefully those instructions. Any graphics or pictures must be saved in Encapsulated PostScript (.eps) format. Papers must be submitted electronically to the following address: [email protected]. The papers must be written in English. The first page of the paper must contain title of the paper, name of author(s), an abstract of about 300 words and 3-5 keywords. The name, affiliation (institution and department), regular mailing address and email of the author(s) should be filled in at the end of the paper. The last page should include a short bio-sketch and a picture of all the authors. Examples you may find in the previous issues of the journal. Manuscripts must be accompanied by a signed copyright transfer form. The copyright transfer form is available on the journal website. When you receive the acceptance for publication you will have to send us: 1. Completed copyright transfer form. 2. Source (input) files. † One LaTeX file for the text. † EPS files for figures - they must reside in a separate folder. 3. Final PDF file (for reference). 4. A short (maximum 200 words) bio-sketch and a picture of all authors to be included at the end of the article. One author may submit for publication at most two articles/year. The maximum number of authors is four. The maximum number of pages of one article is 16 (including a bio-sketch). The publishing of a 10 page article is free of charge. For each supplementary page there is a fee of 50 Euro/page that must be paid after receiving the acceptance for publication. The authors don’t receive a printed copy of the journal. The journal is freely available on http://journal.univagora.ro.Order If you are interested in having a subscription to “Journal of Computers, Communications and Control”, please fill in and send us the order form below: ORDER FORM I wish to receive a subscription to “Journal of Computers, Communications and Control” NAME AND SURNAME: Company: Number of subscription: Price Euro for issues yearly (4 number/year) ADDRESS: City: Zip code: Country: Fax: Telephone: E-mail: Notes for Editors (optional) 1. Standard Subscription Rates for Romania (4 issues/2007, more than 400 pages, including domestic postal cost): 90 EURO. 2. Standard Subscription Rates for other countries (4 issues/2007, more than 400 pages, including international postal cost): 160 EURO. For payment subscription rates please use following data: HOLDER: Fundatia Agora, CUI: 12613360 BANK: BANK LEUMI ORADEA BANK ADDRESS: Piata Unirii nr. 2-4, Oradea, ROMANIA IBAN ACCOUNT for EURO: RO02DAFB1041041A4767EU01 IBAN ACCOUNT for LEI/ RON: RO45DAFB1041041A4767RO01 SWIFT CODE (eq. BIC): DAFBRO22 Mention, please, on the payment form that the fee is “for IJCCC”. EDITORIAL ADDRESS: CCC Publications Piata Tineretului nr. 8 ORADEA, jud. 
BIHOR, ROMANIA, Zip Code 410526, Tel.: +40 259 427 398, Fax: +40 259 434 925, E-mail: [email protected], Website: www.journal.univagora.ro

Copyright Transfer Form
To The Publisher of the International Journal of Computers, Communications & Control
This form refers to the manuscript of the paper having the title and the authors as below:
The Title of Paper (hereinafter, "Paper"): ..........
The Author(s): ..........
The undersigned Author(s) of the above mentioned Paper hereby transfer any and all copyright-rights in and to The Paper to The Publisher. The Author(s) warrants that The Paper is based on their original work and that the undersigned has the power and authority to make and execute this assignment. It is the author's responsibility to obtain written permission to quote material that has been previously published in any form. The Publisher recognizes the retained rights noted below and grants to the above authors and the employers for whom the work was performed royalty-free permission to reuse their materials as stated below. Authors may reuse all or portions of the above Paper in other works, excepting the publication of the paper in the same form. Authors may reproduce or authorize others to reproduce the above Paper for the Author's personal use or for internal company use, provided that the source and The Publisher copyright notice are mentioned, that the copies are not used in any way that implies The Publisher's endorsement of a product or service of an employer, and that the copies are not offered for sale as such. Authors are permitted to grant third party requests for reprinting, republishing or other types of reuse. The Authors may make limited distribution of all or portions of the above Paper prior to publication if they inform The Publisher of the nature and extent of such limited distribution prior thereto. Authors retain all proprietary rights in any process, procedure, or article of manufacture described in The Paper. This agreement becomes null and void if and only if the above paper is not accepted and published by The Publisher, or is withdrawn by the author(s) before acceptance by The Publisher.
Authorized Signature (or representative, for ALL AUTHORS): ..........
Signature of the Employer for whom work was done, if any: ..........
Date: ..........
Third Party(ies) Signature(s) (if necessary): ..........