# ICEIS 2003 Abstracts

ICEIS 2003 Sites: www.est.ips.pt/iceis/ | www.iceis.org

Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION
Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS
Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION
Area 4 - SOFTWARE AGENTS AND INTERNET COMPUTING

Area 1 - DATABASES AND INFORMATION SYSTEMS INTEGRATION

Area 2 - ARTIFICIAL INTELLIGENCE AND DECISION SUPPORT SYSTEMS

Title: THE ESSENCE OF KNOWLEDGE MANAGEMENT
Author(s): Marco Bettoni, Sibylle Schneider
Abstract: We contend in this presentation that more sustainable and successful Knowledge Management (KM) solutions can be built by using the principles of Knowledge Engineering (KE) to understand knowledge in a more appropriate way. We explore five aspects of practical knowledge relevant for promoting the essential Human Factors (HF) involved in KM tasks: the value and function of knowledge, the motor and mechanism of knowledge, the two states and three conversions of individual knowledge, the logic of experience (organisation of knowledge), and knowledge processes (the wheel of knowledge). We explain their consequences in the form of five principles that we suggest could serve as leading criteria for designing and evaluating KM solutions and systems in a new way, one more appropriate for successfully implementing the old insight of the essential role of people.

Title: CONVENTIONAL VERSUS INTERVAL CLUSTERING USING KOHONEN NETWORKS
Author(s): Mofreh Hogo, Pawan Lingras, Miroslav Snorek
Abstract: This paper provides a comparison between conventional and interval set representations of clusters obtained using Kohonen neural networks. The interval set clustering is obtained using a modification of the Kohonen algorithm based on the properties of rough sets. The paper includes experimental results for a web usage mining application. Clustering is one of the important functions in web usage mining. The clusters and associations in web usage mining do not necessarily have crisp boundaries. Researchers have studied the possibility of using fuzzy sets in web mining clustering applications. Recent attempts have adapted genetic algorithms, the K-means clustering algorithm, and Kohonen neural networks based on the properties of rough sets to obtain interval set representations of clusters.
The comparison between interval and conventional clustering, provided in this paper, may be helpful in understanding the usefulness of some of the non-conventional clustering algorithms in certain data mining applications.

Title: PARTIALLY CONNECTED NEURAL NETWORKS FOR MAPPING PROBLEMS
Author(s): Can Isik, Sanggil Kang
Abstract: In this paper, we use partially connected feedforward neural networks (PCFNNs) for input-output mapping problems, to avoid the difficulty of determining the number of training epochs that arises when fully connected feedforward neural networks (FCFNNs) are trained. PCFNNs can also, in some cases, improve generalization. Our method is applicable to real input-output mapping problems such as blood pressure estimation.

Title: MAPPING DESIGNS TO USER PERCEPTIONS USING A STRUCTURAL HMM: APPLICATION TO KANSEI ENGINEERING
Author(s): Jun Tan, D. Bouchaffra
Abstract: This paper presents a novel approach for mapping designs to user perceptions. We show how this interaction can be expressed using three classification techniques. We introduce a novel classifier called the "structural hidden Markov model" (SHMM) that makes it possible to learn and predict user perceptions. We have applied this approach to Kansei engineering in order to map car external contours (shapes) to customer perceptions. The accuracy obtained using the SHMM is 90%. This model has outperformed the neural network and k-nearest-neighbor classifiers.

Title: IMPROVING SELF-ORGANIZING FEATURE MAP (SOFM) TRAINING ALGORITHM USING K-MEANS INITIALIZATION
Author(s): Abdel-Badeeh Salem, Mostafa Syiam, Ayad Fekry Ayad
Abstract: The Self-Organizing Feature Map (SOFM) is a competitive neural network in which neurons are organized in an l-dimensional lattice (grid) representing the feature space. The principal goal of the SOFM is to transform an incoming pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion.
Usually, the SOFM is initialized using random values for the weight vectors. This paper presents a different approach for initializing the SOFM, which uses the K-means algorithm as an initialization step. The K-means algorithm is used to select N^2 cluster centers (the size of the feature map to be formed) from the data set. Then, depending on the inter-pattern distances, the N^2 selected cluster centers are organized into an N x N array so as to form the initial feature map. Finally, the initial map is fine-tuned by the traditional SOFM algorithm. Two data sets are used to compare the proposed method with the traditional SOFM algorithm. The comparison results indicated that, on the first data set, the proposed method required 5,000 epochs to fine-tune the map while the traditional SOFM required 20,000 epochs (4 times faster); on the second data set, the traditional SOFM required 10,000 epochs while the proposed method required only 1,000 epochs (10 times faster).

Title: MODEL-BASED NEURAL NETWORKS FOR BRAIN TUMOR DIAGNOSIS
Author(s): A. Salem, Safaa Amin, M. Tolba
Abstract: This study aims to develop an intelligent neural-network-based system to automatically detect and classify brain tumors from head Magnetic Resonance Images (MRI), to help non-expert doctors in diagnosing brain tumors. Three types of brain tumors have been investigated: acoustic neuroma, a benign tumor occurring in the acoustic canals; optical glioma, which occurs in the optic nerve or in the area connecting the two nerves; and astrocytoma. Two NN-based systems were developed for brain tumor diagnosis. The first system uses Principal Component Analysis (PCA) for dimensionality reduction and feature extraction, extracting the global features of the MRI cases.
The second system uses manual segmentation and the expectation-maximization segmentation algorithm to extract the local features of the MRI cases. A Multi-Layer Perceptron (MLP) network is then used for the classification of the features obtained from the PCA and from the segmentation. A comparison is made between the performance of the two systems. Experimental results on real cases show that a peak recognition rate of 100% is achieved using PCA, and 96.7% when applying the segmentation algorithm before classification.

Title: AGENTS FOR HIGH-LEVEL PROCESS MANAGEMENT: THE RIGHT ACTIVITIES, PEOPLE AND RESOURCES TO SATISFY PROCESS CONSTRAINT
Author(s): John Debenham
Abstract: Multiagent systems are an established technology for managing high-level business processes. High-level business processes are considerably more complex to manage than production workflows. They are opportunistic in nature, whereas production workflows are routine. Each stage in a high-level process usually has a well-defined sub-goal, but the best way to achieve that sub-goal within value, time and cost constraints may not be known for certain. To achieve each sub-goal, resources, including human resources, must be found and brought together in an appropriate way. Alternatives include face-to-face meetings and email exchanges. In a multiagent system for high-level process management, each player is assisted by a personal agent. The system manages goal-driven sub-processes and the commitments that players make to each other. These commitments will be to perform some task and to assume some level of responsibility. The way in which the selection of tasks and the delegation of responsibility are done attempts to reflect high-level corporate principles and to ‘sit comfortably’ with the humans involved. Commitments are derived through a process of inter-agent negotiation that considers each individual’s constraints and performance statistics.
The system has been trialed on business process management in a university administrative context.

Title: A COMPARISON OF AUSTRALIAN FINANCIAL SERVICE FAILURE MODELS: HYBRID NEURAL NETWORKS, LOGIT MODELS AND DISCRIMINANT ANALYSIS
Author(s): Juliana Yim, Heather Mitchell
Abstract: This study investigated whether two artificial neural networks (ANNs), a multilayer perceptron (MLP) and hybrid networks combining statistical and ANN approaches, can outperform traditional statistical models for predicting Australian financial service failures one year prior to financial distress. The results suggest that hybrid neural networks outperform all other models one and two years before failure. The hybrid neural network model is therefore a very promising tool for failure prediction in terms of predictive accuracy. This supports the conclusion that hybrid networks would be useful for researchers, policymakers and others interested in early warning systems.

Title: THREE-DIMENSIONAL OBJECT RECOGNITION USING SUPPORT VECTOR MACHINE NEURAL NETWORK BASED ON MOMENT INVARIANT FEATURES
Author(s): Doaa Hegazy, Ashraf Ibrahim, Mohamed Said Abdel Wahaab, Sayed Fadel
Abstract: A novel scheme using a combination of moment invariants and a Support Vector Machine (SVM) network is proposed for the recognition of three-dimensional (3-D) objects from two-dimensional (2-D) views. Moment invariants are used in the feature extraction process since they are invariant to translation, rotation and scaling of objects. Support Vector Machines (SVMs) have recently been proposed as a new technique for pattern recognition. In the proposed scheme, an SVM neural network, trained using the Kernel Adatron (KA) with a Gaussian kernel, is used in the training (classification) and testing steps. The proposed scheme is applied to a database of 1440 different views of 20 complex 3-D objects, and very good results are achieved without adding noise to the test views. Using noisy test data also yielded promising results.
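The moment-invariant features used in the 3-D object recognition abstract above can be illustrated with a small sketch. The code below computes the first two Hu-style invariants of a 2-D point set in plain Python; the shape data and function names are illustrative assumptions, not taken from the paper, which only states that moment invariants are used.

```python
# Minimal sketch (assumed implementation): first two Hu moment invariants
# of a binary shape given as a list of (x, y) pixel coordinates.
def hu_invariants(points):
    n = len(points)
    cx = sum(x for x, _ in points) / n  # centroid
    cy = sum(y for _, y in points) / n

    def mu(p, q):  # central moment: translation-invariant
        return sum((x - cx) ** p * (y - cy) ** q for x, y in points)

    m00 = mu(0, 0)

    def eta(p, q):  # normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

shape = [(1, 1), (2, 1), (3, 2), (2, 3), (1, 2)]
features = hu_invariants(shape)  # would be fed to the SVM classifier
```

Because the invariants are built from central moments, translating or rotating the shape leaves them (numerically) unchanged, which is what makes them useful as view-independent features.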
Title: A QUALITY-OF-SERVICE-AWARE GENETIC ALGORITHM FOR THE SOURCE ROUTING IN AD-HOC MOBILE NETWORKS
Author(s): Said Ghoniemy, Mohamed Hashem, Mohamed Hamdy
Abstract: A QoS-aware, delay-constrained unicast source routing algorithm for ad-hoc networks based on a genetic algorithm is proposed in this paper. The proposed algorithm is based on a new chromosomal encoding that depends on the network links instead of the nodes. The advantages of link-based encoding in the ad-hoc routing problem were studied. Promising results were obtained when the proposed algorithm was compared to other routing algorithms. Results also showed that the proposed algorithm performs better under heavy QoS constraints on average delay requirements and cost.

Title: SUPPORTING STRATEGIC ALLIANCES THE SMART WAY
Author(s): Iain Bitran, Steffen Conn
Abstract: The Network Economy forces managers to pursue opportunities and engage competition through alliances and networks of alliances. Managers and organisations must therefore nurture the skills that successful alliance development and management require, and attain the “partnering mindset” pertinent to this new industrial paradigm. Studies indicate that alliance success remains an elusive aspiration for the majority of organisations, with up to seventy percent failing to meet their initial objectives. The SMART Project addresses this issue by developing a systematic managerial method for strategic alliance formation and management. This method provides the structure for a software-based decision support system that includes extensive learning and support materials for manager and business consultant training. Following a brief introduction, this paper provides an overview of the concepts and issues relating to strategic alliances and networks. Subsequently, the requirements and functioning of the SMART System are described. Finally, the future direction and validation strategy of the project are relayed.
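The link-based chromosome encoding described in the QoS routing abstract above can be sketched in miniature. Everything below (the toy graph, the penalty weight, the mutation operator) is an invented illustration of the general idea, not the authors' algorithm: a chromosome is a list of links forming a source-to-destination path, and delay-bound violations are penalized in the fitness.

```python
import random

# Toy graph (assumed data): (u, v) -> (cost, delay)
EDGES = {('s', 'a'): (1, 4), ('a', 'd'): (1, 4),
         ('s', 'b'): (2, 1), ('b', 'd'): (2, 1),
         ('s', 'd'): (9, 1)}
ADJ = {}
for (u, v), _ in EDGES.items():
    ADJ.setdefault(u, []).append(v)

def random_path(src, dst, rng):
    """Random loop-free walk from src to dst, encoded as a list of links."""
    while True:
        node, seen, links = src, {src}, []
        while node != dst:
            nxt = [v for v in ADJ.get(node, []) if v not in seen]
            if not nxt:
                break  # dead end: retry
            v = rng.choice(nxt)
            links.append((node, v))
            seen.add(v)
            node = v
        if node == dst:
            return links

def fitness(links, delay_bound):
    cost = sum(EDGES[l][0] for l in links)
    delay = sum(EDGES[l][1] for l in links)
    return cost + (1000 if delay > delay_bound else 0)  # QoS penalty

def evolve(src, dst, delay_bound, pop_size=8, gens=20, seed=0):
    rng = random.Random(seed)
    pop = [random_path(src, dst, rng) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, delay_bound))
        survivors = pop[: pop_size // 2]
        # mutation: regrow a survivor's tail from a random intermediate node
        children = []
        for p in survivors:
            cut = rng.randrange(len(p))
            children.append(p[:cut] + random_path(p[cut][0], dst, rng))
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, delay_bound))
```

With a delay bound of 3, the cheap-but-slow path s-a-d is penalized out, and the search settles on the delay-feasible path s-b-d.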
Title: A HYBRID APPROACH FOR HANDWRITTEN ARABIC CHARACTER RECOGNITION: COMBINING SELF-ORGANIZING MAPS (SOMS) AND FUZZY RULES
Author(s): E. Moneer, Mohamed Hussien, Abdel-Badeeh Salem, Mostafa Syiam
Abstract: This paper presents a hybrid approach combining self-organizing feature maps (SOMs) and fuzzy rules to develop an intelligent system for handwritten Arabic character recognition. In the learning phase, the SOM algorithm is used to produce prototypes which, together with the corresponding variances, are used to determine fuzzy regions and membership functions. Fuzzy rules are then generated by learning from training characters. In the recognition phase, an input character is classified by a fuzzy-rule-based classifier. An unknown character is then re-classified by an SOM classifier. Experiments were performed on a database of 41,033 handwritten Arabic characters (20,142 used for training and 20,891 used for testing). The experimental results achieve a classification rate of 93.1%.

Title: KNOWLEDGE MANAGEMENT IN ENTERPRISES: A RESEARCH AGENDA
Author(s): Konstantinos Karnezis, Konstantinos Ergazakis
Abstract: Knowledge Management is an emerging area which is gaining interest from both enterprises and academics. The effective implementation of a KM strategy is considered a “must” and a precondition of success for contemporary enterprises as they enter the era of the knowledge economy. However, the field of Knowledge Management has been slow in formulating a universally accepted conceptual framework and methodology, due to the many pending issues that have to be addressed. This paper attempts to propose a novel taxonomy for Knowledge Management research by simultaneously presenting the current status along with some major themes of Knowledge Management research. The discussion presented on these issues should be of value to researchers and practitioners.
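The learning phase described in the Arabic character recognition abstract above, where SOM prototypes and their variances define fuzzy membership functions, can be sketched as follows. The prototype values, Gaussian membership choice, and class labels are assumptions for illustration; the paper does not specify its membership function form.

```python
import math

# Hypothetical per-class prototypes: (mean vector, variance vector),
# as might be produced by SOM training over feature vectors.
PROTOTYPES = {'alef': ([0.2, 0.8], [0.05, 0.04]),
              'beh':  ([0.7, 0.3], [0.06, 0.05])}

def membership(x, mean, var):
    """Product of per-feature Gaussian memberships (one common way to
    turn a prototype + variance into a fuzzy region)."""
    return math.prod(math.exp(-((xi - mi) ** 2) / (2 * vi))
                     for xi, mi, vi in zip(x, mean, var))

def classify(x):
    # fuzzy-rule stage: fire one rule per class, return the strongest
    scores = {label: membership(x, m, v) for label, (m, v) in PROTOTYPES.items()}
    return max(scores, key=scores.get)
```

In the full system an ambiguous input would then be re-classified by the SOM itself; here the max-membership rule alone suffices to show the mechanism.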
Title: AN ALGORITHM FOR MINING MAXIMAL FREQUENT SETS BASED ON DOMINANCY OF TRANSACTIONS
Author(s): Srikumar Krishnamoorthy, Bharat Bhasker
Abstract: Several algorithms for mining maximal frequent sets have been proposed in the recent past. These algorithms mostly follow a bottom-up approach. In this paper, we present a top-down algorithm for mining maximal frequent sets. The proposed algorithm uses the concept of the dominancy factor of a transaction to limit the search space. The algorithm is especially efficient for longer patterns. We theoretically model and compare the proposed algorithm with MaxMiner (an algorithm for mining long patterns) and show it to be more efficient.

Title: THE STRATEGIC AND OPERATIONAL ROLES OF MICROCOMPUTERS IN SMES: A PERCEPTUAL GAP ANALYSIS
Author(s): Zelealem Temtime
Abstract: Although strategic planning and information technology are key concepts in management research, they have been widely used in relation to large firms only. Only a few studies have attempted to examine the perceptions of small and medium enterprises (hereafter, SMEs) about the role of IT in strategy making. Moreover, these studies are of less significance for developing countries, as the definition and environment of SMEs vary from developed to developing countries. This article analyses the strategic use of microcomputers and software packages in corporate planning and decision-making in SMEs. Data were collected from 44 SMEs in 3 cities in the Republic of Botswana to study their perceptions about the use of computer-based technology to solve managerial problems, and analysed using simple descriptive statistics. The findings indicate that SMEs in Botswana engage in both strategic and operational planning activities. However, microcomputers and software packages were used primarily for operational and administrative tasks rather than for strategic planning.
The SMEs perceive that strategic planning is costly, time-consuming, and hence appropriate only for large firms. The study also showed that firm size and strategic orientation have a direct and positive relation to the use of computer technology for strategic decision making. The major implications of the findings for future research are identified and presented.

Title: THE USE OF NEUROFUZZY COMPUTABLE SYSTEM TO IDENTIFY PROMINENT BEHAVIOR CHARACTERISTICS IN SUCCESSFUL ENTREPRENEURS
Author(s): Rogério Bastos, Angelita Ré, Lia Bastos
Abstract: At the head of small and medium companies are individuals responsible for the company’s process of creation and development. It is highly important to identify which characteristics and attributes contribute to determining the success of these entrepreneurs. In the present work, a neurofuzzy computable system was used to identify prominent characteristics in individuals who have succeeded in their enterprises and are therefore considered successful entrepreneurs. To that end, a survey was conducted among entrepreneurs in the textile and furniture sectors of Santa Catarina State.

Title: KNOWLEDGE ACQUISITION THROUGH CASE-BASED ADAPTATION FOR HYDRAULIC POWER MACHINE DESIGN
Author(s): Chi-man Vong, Yi-ping Li, Pak-kin Wong
Abstract: Knowledge acquisition is the first, but usually the most important and difficult, stage in building an intelligent decision-support system. Existing intelligent systems for hydraulic system design use production rules as their source of knowledge. However, this leads to problems of knowledge acquisition and knowledge base maintenance. This paper describes the application of CBR to hydraulic circuit design for production machines, which helps in acquiring knowledge and in solving problems by reusing this acquired knowledge (experience). A technique called Case-Based Adaptation (CBA) is implemented in the adaptation stage of CBR so that adaptation becomes much easier.
A prototype system has been developed to verify the usefulness of CBR in hydraulic power machine design.

Title: KNOWLEDGE MANAGEMENT AND DATA CLASSIFICATION IN PELLUCID
Author(s): Tung Dang, Baltazar Frankovic
Abstract: The main aim of the Pellucid project is to develop a platform based on multi-agent technology for assisting public employees in their organization. This paper deals with one of the many problems associated with building such a system: the problem of classification and identification of the information required for the agents' performance. Pellucid agents use historical experience and information to assist newly arriving employees; searching for specific data in the database is therefore a routine task that they often have to perform. This paper presents methods for encoding data and creating the database so that agents can have easy access to the required information. Furthermore, two methods, applicable to every type of database, for the classification and selection of historical information are presented.

Title: SCALING UP INFORMATION UPDATES IN DISTRIBUTED CONDITION MONITORING
Author(s): Sanguk Noh, Paul Benninghoff
Abstract: Monitoring complex conditions over multiple distributed, autonomous information agents can be expensive and difficult to scale. Information updates can lead to significant network traffic and processing cost, and high update rates can quickly overwhelm a system. For many applications, significant cost is incurred responding to changes at an individual agent that do not result in a change to an overriding condition. Often, however, we can avoid much work of this sort by exploiting application semantics. In particular, we can exploit constraints on information change over time to avoid the expensive and frequent process of checking for a condition that cannot yet be satisfied. We motivate this issue and present a framework for exploiting the semantics of information change in information agents.
We partition monitored objects based on a lower bound on the time until they can satisfy a complex condition, and filter updates to them accordingly. We present and implement a simple analytic model of the savings that accrue to our methods. Besides significantly decreasing the workload and increasing the scalability of distributed condition monitoring for many applications, our techniques can appreciably improve the agents' response time between a condition's occurrence and its recognition.

Title: A WEB-BASED DECISION SUPPORT SYSTEM FOR TENDERING PROCESSES
Author(s): Noor Maizura Mohamad Noor, Brian Warboys, Nadia Papamichail
Abstract: A decision support system (DSS) is an interactive computer-based system that helps decision makers utilise data and models to solve complex and unstructured problems. Procurement is a decision problem of paramount importance for any business. A critical and vital procurement task is to select the best contractor during the tendering or bidding process. This paper describes a Web-based DSS that aids decision makers in choosing among competitive bids for building projects. The system is based on a framework of a generic process approach and is intended to be used as a general decision-making aid. The DSS is currently being implemented as a research prototype in a process-support environment. It coordinates the participants of tendering processes and supports the submission, processing and evaluation of bids. A case study is drawn from the construction business to demonstrate the applicability of our approach.

Title: ONE APPROACH TO FUZZY EXPERT SYSTEMS CONSTRUCTION
Author(s): Dmitry Vetrov, Dmitry Kropotov
Abstract: Some pattern recognition tasks involve expert information that can be expressed in terms of linguistic rules. The theory of fuzzy sets presents one of the most successful ways of using these rules.
However, two main problems then arise, forming fuzzy sets and generating fuzzy rules, which in some areas cannot be fully solved by an expert. These are the two "weak points" that hold back the expansion of fuzzy expert systems. The article proposes one possible solution, based on the use of precedent information.

Title: A CAPABILITY MATURITY MODEL-BASED APPROACH TO THE MEASUREMENT OF SHARED SITUATION AWARENESS
Author(s): Edgar Bates
Abstract: Emerging technologies for decision aids offer the potential for large volumes of data to be collected, processed, and displayed without overloading users, and have tremendous implications for the ability of decision makers to approach total situation awareness and achieve a dominant competitive advantage. In industry, measures of effectiveness are clearly linked to performance in the marketplace, but in the military, measures of shared situational awareness generally lack the analogous objective rigor. This paper thus attempts to provide a framework for assessing shared situational awareness using fundamental systems engineering and knowledge management paradigms.

Title: THE COMMUNIGRAM: MAKING COMMUNICATION VISIBLE FOR ENTERPRISE MANAGEMENT
Author(s): Piotr Lipinski, Jerzy Korczak, Helwig Schmied, Kenneth Brown
Abstract: The Communigram is a new methodological approach to project and process management which illustrates the information flows in the enterprise in a simple and intuitively comprehensible manner. It complements currently existing information systems by providing a means to plan organizational communication explicitly, such that the crucial exchange of information may be suitably controlled. This considerably improves the usefulness of information systems, both in terms of information transmission effectiveness and user acceptance.
In this paper, the practical implementation of the Communigram in information systems is described, with some notes on technical details and on the practical experience gained in its use.

Title: THE DESIGN AND IMPLEMENTATION OF IMPROVED INTELLIGENT ANSWERING MODEL
Author(s): Ruimin Shen, Qun Su
Abstract: Based on an analysis of the main technical problems in the design of Intelligent Answering Systems, the traditional Answering System Model and its working mechanism are presented. Based on an analysis of this model, an Improved Intelligent Answering Model, built on data generalization over a patterns tree, association rule mining of patterns, and the merging and deletion of rules based on a knowledge tree, is proposed and implemented. Finally, the improvement in intelligence of this model is analyzed and demonstrated with data from an experiment.

Title: INTEGRATED KNOWLEDGE BASED PROCESS IN MANUFACTURING ENVIRONMENT
Author(s): Jyoti K, Dino Isa, Peter Blanchfield, V.P. Kallimani
Abstract: Industries in Malaysia are facing the threat of survival in this globally competitive world. This factor is more evident in small-scale industries, which are unable to sustain themselves due to various factors such as expensive labour, market fluctuations and technology additions. Hence, to leverage the system, there is a need for a structure in which industry can build expertise by utilizing its own tacit and explicit knowledge for betterment and survival. This paper focuses on the various factors in designing a knowledge platform in the manufacturing sector, using environments such as J2EE, Artificial Intelligence and Prolog programming, thus supporting the decisions taken in the industry.

Title: ACT E-SERVICE QUESTION ANSWERING SYSTEMS BASED ON FAQ CORPUS
Author(s): Ben Chou, Hou-Yi Lin, Yuei-Lin Chiang
Abstract: The World Wide Web (WWW) is a huge platform for information interchange.
Users can utilize search engines to search for and interchange information on the Internet. Nowadays, there are at least about five hundred million web pages in the world. With information overload everywhere on the Internet, users are often swamped by keyword-based search engines and waste much time on irrelevant web pages merely because the keywords appear in those pages. After several generations of innovation, search results have become more and more precise and intelligent. In the future, semantic processing and intelligent sifting and ranking technologies will be integrated into third-generation search engines, bringing results closer to what users actually need. In this research, we combine text mining, concept space, and related technologies to implement a search engine with an appropriate capability for understanding natural language questions, and we demonstrate it with ACT e-Service.

Title: TME: AN XML-BASED INFORMATION EXTRACTION SYSTEM
Author(s): Shixia Liu, Liping Yang
Abstract: Information extraction is a form of shallow text processing that locates a specified set of relevant information in a natural-language document. In this paper, a system, the Template Match Engine (TME), is developed to extract useful information from unlabelled texts. The main feature of this system is that it describes the extraction task with an XML template profile, which is more flexible than traditional pattern-matching methods. The system first builds an initial template profile by utilizing domain knowledge. The initial template profile is then used to extract information from electronic documents. This step produces some feedback words by enlarging and analyzing the extracted information. Next, the template profile is refined using the feedback words and the concept knowledge related to them. Finally, the refined profile is used to extract the specified information from electronic documents.
The experimental results show that the TME system increases recall without loss of precision.

Title: A GENERAL KNOWLEDGE BASE FOR COMPARING DESCRIPTIONS OF KNOWLEDGE
Author(s): Susanne Dagh, Harald Kjellin
Abstract: The complexity associated with managing knowledge bases makes it necessary to use a simple syntax when formalising knowledge for a knowledge base. If a large number of people contribute descriptions of objects to such a knowledge base, and if it is necessary to make precise comparisons between the objects of the knowledge base, then some important requirements must be fulfilled: 1) all contributors of knowledge descriptions must perceive the knowledge in a similar way; 2) the definitions in the descriptions must be on the right level of abstraction; 3) it must be easy for the contributors of knowledge descriptions to create knowledge structures and also to remove them. We propose principles for creating a general knowledge base that fulfils these requirements, and we constructed a prototype to test the principles. The tests and inquiries showed that the prototype satisfies the requirements, and thus our conclusion is that the proposed general knowledge base facilitates comparisons of knowledge descriptions.

Title: CONSTRAINT-BASED CONTRACT NET PROTOCOL
Author(s): Alexander Smirnov, Nikolai Chilov, Tatiana Levashova, Michael Pashkin
Abstract: The paper describes and analyses a constraint-based contract net protocol designed as a part of the KSNet approach currently under development. This approach addresses the problem of knowledge logistics and considers it as a problem of configuring a knowledge source network. The use of intelligent agents is motivated by the distributed and scalable nature of the problem. The improvements made to the contract net protocol concern the formalism of the agents' knowledge representation and the scenario of the agents' interaction.
For the agents' knowledge representation and manipulation, a formalism of object-oriented constraint networks was chosen. Modifications related to the interaction scenarios include the introduction of iterative negotiation, concurrent confirmation of proposals, an extended set of available messages, an additional role for agents, and the agents' ability to change their roles during scenarios. Examples of the modifications are shown via UML diagrams. A short scenario at the end of the paper illustrates the advantages of the developed modifications.

Title: SIMULATING DATA ENVELOPMENT ANALYSIS USING NEURAL NETWORKS
Author(s): Pedro Gouvêa Coelho
Abstract: This article studies the creation of efficiency measurement structures for Decision-Making Units (DMUs) using high-speed optimisation modules, inspired by the idea of an unconventional Artificial Neural Network (ANN) and numerical methods. The Linear Programming Problem (LPP) inherent in the Data Envelopment Analysis (DEA) methodology is transformed into an unconstrained optimisation problem by using a pseudo-cost function that includes a penalty term, incurring a high cost every time one of the constraints is violated. The LPP is converted into a system of differential equations, and a non-standard ANN implements a numerical solution based on the gradient method.

Title: SET-ORIENTED INDEXES FOR DATA MINING QUERIES
Author(s): Janusz Perek, Zbyszko Krolikowski, Mikolaj Morzy
Abstract: One of the most popular data mining methods is frequent itemset and association rule discovery. Mined patterns are usually stored in a relational database for future use. Analyzing discovered patterns requires extensive subset-search querying over a large number of database tuples. The indexes available in relational database systems are not well suited for this class of queries.
In this paper, we study the performance of four different indexing techniques that aim at speeding up data mining queries, particularly set inclusion queries in relational databases. We investigate the performance of these indexes under varying factors, including the size of the database, the size of the query, the selectivity of the query, etc. Our experiments show significant improvements over traditional database access methods using standard B+-tree indexes.

Title: USING KNOWLEDGE ENGINEERING TOOL TO IDENTIFY THE SUBJECT OF A DOCUMENT - RESEARCH RESULTS
Author(s): Offer Drori
Abstract: Information databases today contain many millions of electronic documents. Locating information on the Internet is problematic due to the enormous number of documents it contains. Several studies have found that associating documents with a subject or list of topics can improve the locatability of information on the Internet [5] [6] [7]. Effective cataloguing of information is performed manually, requiring extensive resources; consequently, most information is currently not catalogued. This paper presents a software tool that automatically locates the subject of a document, and shows the results of a test performed using the software tool (TextAnalysis) specially developed for this purpose.

Title: SUMMARIZING MEETING MINUTES
Author(s): Carla Lopo
Abstract: This paper analyzes the problem of summarization, and specifically the problem of summarizing meeting transcripts. To solve it, an approach is proposed that consists of structuring the meeting data together with complementary data related to the environment in which the meeting is embedded. The creation of possible summaries is then based on the identification of summary genres and on SQL queries.
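The set-inclusion queries studied in the indexing abstract above have a simple in-memory analogue: an inverted index from items to pattern identifiers, with superset queries answered by intersecting posting sets. The pattern data below is invented for illustration; the paper's four relational indexing techniques are not reproduced here.

```python
# Hypothetical mined patterns stored with ids, as in a pattern warehouse.
PATTERNS = {1: {'beer'}, 2: {'beer', 'chips'}, 3: {'chips', 'salsa'},
            4: {'beer', 'chips', 'salsa'}}

# Inverted index: item -> set of pattern ids containing that item.
index = {}
for pid, items in PATTERNS.items():
    for item in items:
        index.setdefault(item, set()).add(pid)

def containing(query):
    """Ids of stored patterns that include every item of `query`
    (a set-inclusion query answered by intersecting posting sets)."""
    ids = None
    for item in query:
        postings = index.get(item, set())
        ids = postings if ids is None else ids & postings
    return ids or set()

print(sorted(containing({'beer', 'chips'})))  # -> [2, 4]
```

Each intersection touches only the postings of the query's items, which is the essence of why set-oriented indexes beat scanning every stored pattern.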
Title: ON FAST LEARNING OF NEURAL NETWORKS USING BACK PROPAGATION Author(s): Kanad Keeni Abstract: This study discusses training data selection for neural networks using back propagation. We make only one assumption: there is no overlap between training data belonging to different classes; in other words, the training data are linearly or semi-linearly separable. The training data are analyzed, and the data that affect the learning process are selected based on the idea of critical points. The proposed method is applied to a classification problem where the task is to recognize the characters A, C and B, D. The experimental results show that in batch mode the proposed method takes almost 1/7 of the real time and 1/10 of the user training time required by the conventional method. In online mode the proposed method takes 1/3 of the training epochs, and 1/9 of the real, 1/20 of the user, and 1/3 of the system time required by the conventional method. The classification rates for training and testing data are the same as with the conventional method. Title: A PORTAL SYSTEM FOR PRODUCTION INFORMATION SERVICES Author(s): Yuan-Hung Chen, Jyi-Shane Liu Abstract: Production data are usually voluminous, continuous, and tedious. Human efforts to derive production information from raw data often result in extra workload, lag, and errors. Undesirable results may occur when related functional units are not integrated in parallel with the same updated information. Therefore, successful production information management must address two significant problems: the speed of information and the effect of information. We propose a production information portal (PIP) architecture to facilitate information derivation efficiency and information utilization performance. The architecture is developed by integrating concepts from data and information management, event monitoring, configurable services, decision support, and information portals.
A rigorous system analysis and modelling process is conducted to produce detailed specifications of functional modules, operation procedures, and data/control flows. The utility of the architecture and the prototype system was verified in a semiconductor fabrication domain and tested by actual users on real data from a world-class semiconductor company. Title: AN EXPERT SYSTEM FOR PREVENTING AND CORRECTING BURDEN SLIPS, DROPS AND HANGS IN A BLAST FURNACE Author(s): David Montes, Raquel Blanco, Eugenia Diaz, Javier Tuya, Faustino Obeso Abstract: This paper describes an expert system for preventing and correcting burden slips, drops and hangs inside a blast furnace. The system monitors the furnace and makes decisions through the analysis and evaluation of more than a hundred parameters considered as input variables. The main difference between the system proposed here and a classical diagnostic system is the coexistence of three different models of behaviour: one based on a theoretical model of permeability behaviour, a second empirical model based on considerations given by human experts, and a third model derived from the study of the real behaviour observed in the furnace over time, obtained by means of the study of historical files using machine learning techniques. Title: PREDICTING OF CUSTOMER DEFECTION IN ELECTRONIC COMMERCE: USING BACK-PROPAGATION NEURAL NETWORKS Author(s): Ya-Yueh Shih Abstract: Since the cost of retaining an existing customer is lower than that of developing a new one, exploring potential customer defection has become an important issue in the fiercely competitive environment of electronic commerce. Accordingly, this study used artificial neural networks (ANNs) to predict customers’ repurchase intentions, and thus avoid defection, based on a set of quality-attribute satisfaction criteria and the three beliefs of the theory of planned behavior (TPB).
The repurchase intentions predicted by the ANNs were compared with those obtained from traditional analytic tools such as multiple discriminant analysis (MDA). Finally, a t-test analysis indicated that the predictive accuracy of the ANNs is better in both the training and testing phases. Title: KNOWLEDGE MANAGEMENT SYSTEMS FOR LEVERAGING ENTERPRISE DATA RESOURCES: TAXONOMY AND KEY ISSUES Author(s): Mahesh S. Raisinghani Abstract: With today’s emphasis on competitiveness, team-based organizations, and responsiveness, top management cannot separate its responsibilities for people management from traditional/e-business management, since both are interrelated in knowledge management systems (KMS). Understanding how to manage under conditions of rapid change is a critical skill in the knowledge economy. Today, work in KMS organizations is increasingly organized around teams instead of traditional organization charts. As the workforce becomes increasingly diverse and global, it is important for top management to recognize that diversity is a positive force for KMS. Today’s team-based, geographically dispersed employees are increasingly guided by a network of values and traditions as part of an organizational culture in KMS. Managing that culture and establishing those changed values are crucial KMS management tasks. This paper explores, describes, and assesses the integration, impact, and implications of KMS for theory and practice. Title: CONTENT-BASED REASONING IN INTELLIGENT MEDICAL INFORMATION SYSTEMS Author(s): Marek Ogiela Abstract: This paper describes an innovative approach to the use of linguistic methods of structural image analysis in intelligent systems for visual data perception. These methods are directed at understanding medical images and a deeper analysis of their semantic content.
This type of image reasoning and understanding is possible owing to the use of specially defined graph grammars that enable both the correct recognition of significant disease lesions and a deeper analysis of the discovered irregularities at various specific levels. The proposed approach is described using selected examples of images obtained in radiological diagnosis. Title: KNOWLEDGE BASE GRID: TOWARD GLOBAL KNOWLEDGE SHARING Author(s): Wu Zhaohui, Xu Jiefeng Abstract: Grid technologies enable widespread sharing and coordinated use of networked resources. Bringing knowledge into the Grid can be more challenging because in such settings we encounter difficulties such as standardizing knowledge representation, developing standard protocols to support semantic interoperability, and developing a methodology to construct on-demand intelligent services. In this paper, we present an open Knowledge Base Grid (KB-Grid) architecture that addresses these challenges. We first discuss the requirements of knowledge representation on the Internet, then argue for the importance of developing standard protocols for such a knowledgeable Internet, and finally present inference services that provide high-level knowledge services such as correlative semantic browsing, knowledge query, and forward and backward chaining inference. KB-Grid provides a platform for Distributed Artificial Intelligence. Title: FACE PATTERN RECOGNITION AND EXTRACTION FROM MULTIPLE PERSONS SCENE Author(s): Tetsuo Hattori Abstract: A method for recognizing an acquaintance’s face as a subpattern in a given image is proposed. We consider that the face pattern to be recognized in the input image is approximately an affine-transformed (rotated, enlarged and/or reduced, and translated) version of a registered original pattern.
In order to estimate the parameters of the affine transformation, the method uses a Karhunen-Loeve (KL) expansion, spatial correlation, and an approximate equation based on the Taylor expansion of the affine transformation. In this paper, we deal with two types of pattern representation: ordinary grey-level representation and a normalized gradient vector field (NGVF) representation. The experimental results show that our method using the NGVF representation is considerably effective. Title: EXTRACTION OF FEELING INFORMATION FROM CHARACTERS USING A MODIFIED FOURIER TRANSFORM Author(s): Tetsuo Hattori Abstract: An automated method for extracting and evaluating feeling information from printed and handwritten characters is proposed. The method is based on image processing and pattern recognition techniques. First, an input binarized pattern undergoes a distance transformation. Second, a two-dimensional vector field is composed from the gradient of the distance distribution. Third, a divergence operator extracts source and sink points from the field, together with the vectors at those points. Fourth, the Fourier transform is applied to the vector field as a complex-valued function. Unlike conventional methods, we use the Fourier transform with a Laplacian-operated phase. Fifth, applying the KL expansion to the complex vectors obtained from several character fonts, we extract common feature vectors for each font. Using those common vectors and a linear multiple regression model, an automated quantitative evaluation system can be constructed. The experimental results show that our vector field method, combining the Fourier transform and KL expansion, is considerably more effective at discriminating printed characters (or fonts) than the conventional method using grey-level (or binarized) character patterns and KL expansion.
Moreover, the results show that the evaluation system based on the regression model agrees comparatively well with human assessment. Title: A CONCEPTUAL MODEL FOR A MULTIAGENT KNOWLEDGE BUILDING SYSTEM Author(s): Barbro Back, Adrian Costea, Tomas Eklund, Antonina Kloptchenko Abstract: Financial decision makers are challenged by access to the massive amounts of both numeric and textual financial information made available by the Internet. They need a tool that enables rapid and accurate analysis of both quantitative and qualitative information in order to extract knowledge for decision making. In this paper we propose a conceptual model of a knowledge-building system for decision support based on a society of software agents and on data and text mining methods. Title: BRIDGING THE GAP BETWEEN SOCIAL AND TECHNICAL PROCESSES TO FACILITATE IT ENABLED KNOWLEDGE DISSEMINATION Author(s): James Cunningham, Yacine Rezgui, Brendan Berney, Elaine Ferneley Abstract: The need for organizations to encourage collaborative working through knowledge sharing, in order to better exploit their intellectual capital, is well recognized. However, much of the work to date suggests that, despite the intuitive appeal of a collaborative approach, significant knowledge remains locked away. It has been argued that the problem is both technological and cultural. While mature and sophisticated information and communication technologies (ICTs) exist, providing a technological medium to support a collaborative culture in which knowledge can be elicited, stored, shared and disseminated remains elusive. This paper presents work being undertaken as part of the IST-funded e-COGNOS project, which is developing an open, model-based infrastructure and a set of web-based tools that promote consistent knowledge management within collaborative construction environments.
The e-COGNOS project has adopted an approach that moves away from the notion of technology managing information and toward the idea of social processes and technological tools evolving reciprocally: the notion of co-construction. Within this co-construction metaphor, the project is developing a set of tools that mimic the social process of knowledge discovery, thus aiming to bridge the gap between social and technological knowledge discovery and dissemination. Title: THE DEVELOPMENT OF A PROTOTYPE OF AN ENTERPRISE MARKETING DECISION SUPPORT SYSTEM Author(s): Junkang Feng, Xi Wang, Fugen Song Abstract: Against the background of the increasing importance of marketing decision making for manufacturing enterprises, and the relatively weak and insufficient research on systematic methodologies for overall marketing decision making, we build a model-based framework for marketing decision making. The framework offers an approach that fuses quantitative calculation with qualitative analysis for marketing decision making. Our review of the literature on the architecture of a Decision Support System (DSS) suggests that there is a gap between the theory of DSS architecture, which consists mainly of a database (DB), a model base (MB) and a knowledge base (KB), and the use of this architecture in practically designing and implementing a DSS. To fill this gap, we put forward the notion of “Tri-Base Integration”, based upon which we have developed and tested an innovative architecture for a DSS. We have built a prototype of an Enterprise Marketing Decision Support System based upon these ideas. This prototype would seem to have proven the feasibility of our model-based framework for overall marketing decision making and of our innovative architecture for a DSS. Title: APPLICATION OF NEURAL NETWORKS TO WATER TREATMENT: MODELING OF COAGULATION CONTROL Author(s): M. Salem, Hala Abdel-Gelil, L.
Abdel All Abstract: Water treatment includes many complex phenomena, such as coagulation and flocculation. These reactions are hard or even impossible to control by conventional methods. The paper presents a new methodology for determining the optimum coagulant dosage in the water treatment process. The methodology is based on a neural-network model; the learning process is implemented using the error backpropagation algorithm, with raw water quality parameters as input. Title: USING KNOWLEDGE DISCOVERY IN DATABASES TO IDENTIFY ANALYSIS PATTERNS Author(s): Paulo Engel, Carolina Silva, Cirano Iochpe Abstract: Geographic information systems (GIS) are becoming more popular, increasing the need to implement geographic databases (GDB). However, GDB design is not easy and requires experience. To support it, the use of analysis patterns has been proposed. Although very promising, the use of analysis patterns in GDB design is still very restricted. The main problem is that patterns are based on specialists’ experience. In order to help and speed up the identification of new and valid patterns, which are less dependent on specialists’ knowledge than those now available, this paper proposes identifying analysis patterns on the basis of the knowledge discovery in databases (KDD) process. Title: SEMANTIC ANNOTATIONS AND SEMANTIC WEB USING NKRL (NARRATIVE KNOWLEDGE REPRESENTATION LANGUAGE) Author(s): Gian Zarri Abstract: We suggest that it should be possible to come closer to the Semantic Web goals by using ‘semantic annotations’ that enhance the traditional ontology paradigm by supplementing ontologies of concepts with ‘ontologies of events’.
We then present some of the properties of NKRL (Narrative Knowledge Representation Language), a conceptual modelling formalism that makes use of ontologies of events to take into account the semantic characteristics of those ‘narratives’ that represent a very large percentage of global Web information. Title: INDUCTION OF TEMPORAL FUZZY CHAINS Author(s): Jose Jesus Castro Sanchez, Luis Rodriguez Benitez, Luis Jimenez Linares, Juan Moreno Garcia Abstract: The aim of this paper is to present an algorithm to induce Temporal Fuzzy Chains (TFCs) (Eurofuse 2002). TFCs are used to model dynamic systems in a linguistic manner. TFCs make use of two different concepts: the traditional method of representing dynamic systems, namely state vectors, and the linguistic variables used in fuzzy logic. Thus, TFCs are qualitative and represent "temporal zones" using linguistic states and linguistic transitions between them. Title: THE PROTEIN STRUCTURE PREDICTION MODULE OF THE PROT-GRID Author(s): Dimitrios Frossyniotis, George Papadopoulos, Dimitrios Vogiatzis Abstract: In this work, we describe the protein secondary structure prediction module of a distributed bio-informatics system. Protein databases contain over a million sequenced proteins; however, structural information exists for at most 2% of them. The challenge is to reliably predict the structure using classifiers. Our contribution is the evaluation of multiple-classifier architectures on a standard dataset (CB-396) containing protein sequencing information. We compare the results of a single classifier system based on SVMs with our version of an SVM-based AdaBoost algorithm and a novel fuzzy multi-SVM classifier. Title: WITH THE "DON'T KNOW" ANSWER IN RISK ASSESSMENT Author(s): Luigi Troiano, Canfora Gerardo Abstract: Decision making often deals with incomplete and uncertain information.
Uncertainty concerns the level of confidence associated with the value of a piece of information, while incompleteness derives from the unavailability of data. Fuzzy numbers capture the uncertainty of information, but they are not able to explicitly represent incompleteness. In this paper we discuss an extension of fuzzy numbers, called fuzzy numbers with indeterminateness, and show how they can be used to model decision processes involving incomplete information. In particular, the paper focuses on the "Don't Know" answer to questionnaires and develops an aggregation model that accounts for this type of answer. The main contribution lies in the formalization of the interrelationship between the risk of a decision and the incompleteness of the information on which it is based. Title: FUZZY INFERENCING IN WEB PAGE LAYOUT DESIGN Author(s): Abdul-Rahim Ahmad, Otman Basir, Khaled Hassanein Abstract: Web page layout design is a complex and ill-structured problem in which evolving tasks, inadequate information processing capabilities, cognitive biases and socio-emotional facets frequently hamper the procurement of a superior alternative. An important aspect in selecting a superior Web page layout design is the evaluation of its fitness value. Automating the fitness evaluation of layouts is a significant step forward. It requires quantifying highly subjective Web page design guidelines in the form of a fitness measure. Web usability and design guidelines come from experts who provide vague and conflicting opinions. This paper proposes the exploitation of fuzzy technology to model such subjective, vague, and uncertain Web usability and design guidelines. Title: MAPPING DOCUMENTS INTO CONCEPT DATABASES FOR THRESHOLD-BASED RETRIEVAL Author(s): REGHU RAJ PATINHARE COVILAKAM, RAMAN S Abstract: The trajectory of topic description in text documents such as news articles generally covers a small number of domain-specific concepts.
Domain-specific phrases are excellent indicators of these concepts. Any representation of the concepts must invariably use finite strings of some finite representation language. The design of a grammar with good selectivity and coverage is therefore a viable solution to the problem of content capture. This paper deals with the design of such a grammar for a small set of domains, which supports the representation of the concepts within the relational framework. This paradigm sheds light on the possibility of representing the text portion of web pages as a relational database, which can facilitate information retrieval using simple SQL queries obtained by translating a user's query. The advantage is that highly relevant results can be retrieved by applying a threshold to a specific attribute column. Title: A NEW METHOD OF KNOWLEDGE CREATION FOR KNOWLEDGE ORGANIZATIONS Author(s): Mingshu Li, Ying Dong Abstract: Knowledge creation is an interesting problem in knowledge management (KM). Topic maps, and especially the XML Topic Map (XTM), are used to organize information in a way that can be optimized for navigation. In this paper, we adopt XTM as a new method for addressing the problem of knowledge creation. Since an XTM can be modeled as a formal hypergraph, we study the problem on the basis of the XTM hypergraph. New XTM knowledge operations have been designed, based on graph theory, for knowledge creation. Moreover, they have been implemented as a toolkit and applied on our KM platform. By applying the XTM knowledge operations, new knowledge can be generated for knowledge organizations. The application of the operations can satisfy users’ requests for intelligent retrieval of knowledge or for analysis of the system’s knowledge structure. Title: AN ARTIFICIAL NEURAL NETWORK BASED DECISION SUPPORT SYSTEM FOR BUDGETING Author(s): Barbro Back, Eija Koskivaara Abstract: This paper introduces an artificial neural network (ANN) based decision support system for budgeting.
The proposed system estimates the future revenues and expenses of an organisation. We build models based on four to six years of monthly account values from a large organisation. The monthly account values are regarded as a time series, and the target is to predict the following year’s account values with the ANN. Thus, the ANN’s output is based on similar information from prior periods. The prediction results are compared with the actual account values and with the account values budgeted by the organisation. We found that an ANN can be used for modeling the dynamics of the account values on a monthly basis and for predicting the yearly account values. Title: A DATA MINING METHOD TO SUPPORT DECISION MAKING IN SOFTWARE DEVELOPMENT PROJECTS Author(s): José Luis Álvarez-Macías Abstract: In this paper, we present a strategy to induce knowledge that supports decision making in Software Development Projects (SDPs). The motivation for this work is that a large proportion of SDPs fail to meet their initial requirements for cost, delivery date and final product quality. The main objective of this strategy is to support the manager in making decisions that establish management policies when beginning a software project. To this end, we apply a data mining tool, called ELLIPSES, to databases of SDPs. The databases are generated by simulating a dynamic model for the management of SDPs. The ELLIPSES tool implements a new method oriented to discovering knowledge according to the expert's needs, through the detection of the most significant regions. The essence of the method is an evolutionary algorithm that finds these regions one after another. The expert decides which regions are significant and determines the stopping criterion. The extracted knowledge is offered through two types of rules: quantitative and qualitative models. The tool also offers a visualization of each rule using parallel coordinates.
To demonstrate this strategy, ELLIPSES is applied to a database obtained by simulating a dynamic model of a completed project. Title: USABILITY ISSUES IN DATA MINING SYSTEMS Author(s): Fernando Berzal Abstract: When we build data mining systems, we should reflect upon some design issues that are often overlooked in our quest for better data mining techniques. In particular, we usually focus on algorithmic details whose influence is minor when it comes to users’ acceptance of the systems we build. This paper tries to highlight some of the issues that are usually neglected and might have a major impact on our systems’ usability. Solving some of the usability problems we have identified would certainly increase the odds of successful data mining stories, improve user acceptance and use of data mining systems, and spur renewed interest in the development of new data mining techniques. Our proposal focuses on integrating diverse tools into a framework that should be kept coherent and simple from the user's point of view. Our experience suggests that such a framework should include bottom-up dataset-building blocks to describe input datasets, expert systems to propose suitable algorithms and adjust their parameters, as well as visualization tools to explore data, and communication and reporting services to share the knowledge discovered from the massive amounts of data available in actual databases. Title: PLANNING COOPERATIVE HOMOGENEOUS MULTIAGENT SYSTEMS USING MARKOV DECISION PROCESSES Author(s): Bruno Scherrer, François Charpillet, Iadine Chadès Abstract: This paper proposes a decision-theoretic approach for designing a set of situated agents so that they can solve a cooperative problem. The approach we propose is based on reactive agents.
Although they do not negotiate, reactive agents can solve complex tasks such as surrounding a mobile object: the agents self-organize their activity through interaction with the environment. The design of each agent's behavior results from solving a decentralized partially observable Markov decision process (DEC-POMDP). However, as solving a DEC-POMDP is NEXP-complete, we propose an approximate solution to this problem based on both subjectivity and empathy. An obvious advantage of the proposed approach is that we are able to design agents' reactive policies starting from the features of a cooperative problem (top-down conception) rather than the opposite (bottom-up conception). Title: AN EFFICIENT PROCEDURE FOR ARTIFICIAL NEURAL NETWORKS RETRAINING Author(s): Razvan Matei, Dumitru Iulian Nastac Abstract: The ability of artificial neural networks (ANNs) to extract significant information from an initial set of data allows both interpolation at the a priori defined points and extrapolation outside the range bounded by the extreme points of the training set. The main purpose of this paper is to establish how a viable ANN structure from a previous moment in time can be retrained efficiently in order to accommodate modifications of the input-output function. To fulfill this goal, we use a prior memory, scaled by a suitably chosen value. The evaluation of the computing effort involved in retraining an ANN shows that a good choice of the scaling factor can substantially reduce the number of training cycles, independently of the learning method. Title: PROMAIS: A MULTI-AGENT DESIGN FOR PRODUCTION INFORMATION SYSTEMS Author(s): Lobna Hsairi, Khaled Ghédira, Faiez Gargouri Abstract: In the age of information proliferation and advances in communication technology, Cooperative Information System (CIS) technology has become a vital factor in production system design for every modern enterprise.
In fact, current production systems must adapt to new strategic, economic and organizational structures in order to face new challenges. Consequently, intelligent software based on agent technology has emerged to improve system design on the one hand, and to increase production profitability and the enterprise's competitive position on the other. This paper starts with an analytical description of the logical and physical flows involved in manufacturing, and then proposes a Production Multi-Agent Information System (ProMAIS). ProMAIS is a collection of stationary, intelligent agent-agencies with specialized expertise, interacting to carry out shared objectives: cost-effective production within the promised deadline and adaptability to change. In order to bring out ProMAIS’s dynamic aspect, particular attention is paid to the interaction protocols: cooperation, negotiation and Contract Net protocols. Title: TEXT SUMMARIZATION: AN UPCOMING TOOL IN TEXT MINING Author(s): S. Raman, M. Saravanan Abstract: As the Internet’s user base expands at an explosive rate, it provides great opportunities as well as grand challenges for text data mining. Text summarization is a core functional task of text mining and text analysis; it consists of condensing documents while presenting the content in a coherent order. This paper discusses the application of term distribution models to text summarization for the extraction of key sentences based on the identification of term patterns in the collection. The evaluation of the results uses human-generated summaries as a point of reference. Our system outperforms the other auto-summarizers considered at different percentage levels of summarization, and the final summary is close to the intersection of the frequently occurring sentences found in the human-generated summaries at the 40% summarization level.
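The key-sentence extraction described in the last abstract above can be sketched in miniature. This is a minimal frequency-based sketch, not the authors' actual term distribution model; the scoring function and example document are illustrative assumptions.

```python
from collections import Counter
import re

def summarize(text, n=2):
    """Extract the n sentences whose terms are most frequent in the
    document, returned in original document order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    # Score a sentence by the average document frequency of its terms.
    def score(sent):
        terms = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in terms) / max(len(terms), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n]
    # Restore the original order of the chosen sentences.
    return [s for s in sentences if s in ranked]

doc = ("Data mining finds patterns. Patterns guide decisions. "
       "The weather was pleasant. Mining data supports decisions.")
print(summarize(doc, n=2))
```

Sentences built from frequently recurring terms are kept, while off-topic sentences (here, the weather remark) are dropped; a term distribution model would replace the plain-frequency score with a statistically richer one.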
Title: AUTOMATION OF CORE DESIGN OPTIMIZATION IN BWR Author(s): Yoko Kobayashi Abstract: This paper deals with the application of an evolutionary algorithm and a multi-agent algorithm to information systems in the nuclear industry. The core design of a boiling water reactor (BWR) is a hard optimization problem with nonlinear multi-objective functions and nonlinear constraints. We have developed an integrative two-stage genetic algorithm (GA) for the optimum core design of a BWR and have thereby automated a complex core design process. In this paper, we further propose a new algorithm for combinatorial optimization using multiple agents, which we call the multi-agent algorithm (MAA). In order to improve the convergence of the BWR core design optimization, we introduce this new algorithm into the first stage of the previously developed two-stage GA. The performance of the new algorithm is also compared with that of the conventional two-stage GA. Title: LEARNING BAYESIAN NETWORKS FROM NOISY DATA Author(s): Mohamed Bendou, Paul Munteanu Abstract: This paper analyzes the effects of noise on learning Bayesian networks from data. It starts with the observation that limited amounts of noise may cause a significant increase in the complexity of the learned networks. We show that, unlike the classical over-fitting that affects other classes of learning methods, this phenomenon is theoretically justified by the alteration of the conditional independence relations between the variables, and is beneficial for the predictive power of the learned models. We also discuss a second effect of noise on learning Bayesian networks: the instability of the structures learned from DAG-unfaithful noisy data. Title: BUILDING INTELLIGENT CREDIT SCORING SYSTEMS USING DECISION TABLES Author(s): Manu De Backer, Rudy Setiono, Christophe Mues, Jan Vanthienen, Bart Baesens Abstract: Accuracy and comprehensibility are two important criteria when developing decision support systems for credit scoring.
In this paper, we focus on the second criterion and propose the use of decision tables as an alternative knowledge visualization formalism that lends itself very well to building intelligent and user-friendly credit scoring systems. Starting from a set of propositional if-then rules extracted by a neural network rule extraction algorithm, we develop decision tables and demonstrate their efficiency and user-friendliness on two real-life credit scoring cases. Title: EVALUATING THE SURVIVAL CHANCES OF VERY LOW BIRTHWEIGHT BABIES Author(s): Anália Lourenço, Ana Cristina Braga, Orlando Belo Abstract: Scoring systems that quantify neonatal mortality play an important role in health services research, planning and clinical auditing. They provide a means of monitoring, in a more accurate and reliable way, the quality of care among and within hospitals. Classical analyses based on a simple comparison of mortality, or dealing solely with the newborns' birthweight, have proved to be insufficient. A large number of variables that influence the survival of newborns must be taken into account, ranging from strictly physiological information to more subjective data concerning medical care. Scoring systems try to embrace such elements, providing more reliable comparisons of the outcome. Nevertheless, if a clinical score is to gain widespread acceptance among clinicians, it must be simple and accurate and use routine data. In this paper, a neonatal mortality risk evaluation case study is presented, pointing out data specificities and how different data preparation approaches (namely, feature selection) affect the overall outcome.
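The step of turning propositional if-then rules into a decision table, described in the credit scoring abstract above, can be sketched as follows. The conditions and rules here are hypothetical illustrations, not the rules extracted in the paper.

```python
from itertools import product

# Hypothetical condition attributes for a toy credit scoring rule set.
conditions = ["income_high", "employed", "prior_default"]

def rule_based_decision(income_high, employed, prior_default):
    # Hypothetical if-then rules, standing in for the rules a neural
    # network rule extraction algorithm would produce.
    if prior_default:
        return "reject"
    if income_high or employed:
        return "accept"
    return "reject"

# Expand the rules into an exhaustive decision table: one entry per
# combination of condition outcomes, so completeness and consistency
# of the rule set can be checked at a glance.
table = {combo: rule_based_decision(*combo)
         for combo in product([True, False], repeat=len(conditions))}

for combo, action in sorted(table.items(), reverse=True):
    row = ", ".join(f"{c}={v}" for c, v in zip(conditions, combo))
    print(f"{row} -> {action}")
```

Because every combination of condition values appears exactly once, the table makes gaps or contradictions in the rule set immediately visible, which is the comprehensibility advantage the abstract emphasizes.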
Title: THE USE OF NEURAL NETWORK AND DATABASE TECHNOLOGY TO REENGINEER THE TECHNICAL PROCESS OF MONITORING COAL COMBUSTION EFFICIENCY Author(s): Farhi Marir Abstract: Monitoring the combustion process for electricity generation using coal as a primary resource is of major concern to the pertinent industries, power generation companies in particular. The carbon content of fly ash is indicative of combustion efficiency, and its determination is useful for characterising the efficiency of coal-burning furnaces. Traditional methods such as thermogravimetric analysis (TGA) and loss on ignition, which are based on ash collection and subsequent analysis, have proved tedious, time-consuming and costly. A new technology was therefore needed to monitor the process more efficiently, yielding better exploitation of resources at lower cost. The main aim of this work is to introduce a new automated system that can be bolted onto a furnace and work online. The system consists of three main components: a laser instrument for signal acquisition, a neural network tool for training, learning and simulation, and a database system for storage and retrieval. The components have been designed, adapted and tuned to communicate for knowledge acquisition in this multidimensional problem. The system has been tested on a range of coal ashes and proved to be efficient, reliable, fast and cost-effective. Title: A KNOWLEDGE MANAGEMENT TOOL FOR A COLLABORATIVE E-PROJECT Author(s): Luc Lamontagne, Tang-Ho Lê Abstract: In this paper, we provide an overview of our software tool for exploiting and interchanging procedural knowledge represented as networks of semi-structured units. First, we introduce the notion of a Procedural Knowledge Hierarchy; then we present the modeling of Procedural Knowledge by our software.
We claim that the “bottom-up” approach supported by this tool is appropriate for gathering new candidate terms for the construction of a new domain ontology. We also argue that KU modeling, together with a pivot KU structure (rather than individual keywords), could contribute to a solution for Web search engines. We detail the updating technique, based on the distributed tasks of an e-project, and discuss some ideas pertaining to the identity issue for the Web based on space and time representation. Title: STRUCTURED CONTEXTUAL SEARCH FOR THE UN SECURITY COUNCIL Author(s): Irineu Theiss, Ricardo Barcia, Marcelo Ribeiro, Eduardo Mattos, Andre Bortolon, Tania C. D. Bueno, Hugo Hoeschl Abstract: This paper presents a generic model of a methodology that emphasises the use of information retrieval methods combined with the Artificial Intelligence technique of Case-Based Reasoning (CBR). In knowledge-based systems, this methodology allows human knowledge to be indexed automatically. This type of representation makes the user's language compatible with the language found in the knowledge base of the system, retrieving answers better suited to the user's search question. The paper describes the Olimpo system, a knowledge-based system that retrieves information from textual files similar to the search context described by the user in natural language. For the development of the system, 300 Resolutions of the UN Security Council available on the Internet were indexed. Title: APPLYING FUZZY LOGIC AND NEURAL NETWORK FOR QUANTIFYING FLEXIBILITY OF SUPPLY CHAINS Author(s): Bjørn Solvang, Ziqiong Deng, Wei Deng Solvang Abstract: Fuzzy Logic (FL) is a method that deals with uncertainty and vagueness in the model or description of the systems involved, as well as in the variables.
A fuzzy logic system is unique in that it can handle numerical and linguistic knowledge simultaneously. This is precisely the method we have been looking for, now that the quantification of supply chain flexibility has become an urgent task. This paper first elaborates the necessity of quantifying supply chain flexibility. Thereafter, a methodological framework for measuring supply chain flexibility is introduced to provide the research background of this paper. A fuzzy logic system is applied to quantify six types of supply chain flexibility, as each depends on both qualitative and quantitative measures. Further, since the value of supply chain flexibility also depends on the degree to which it relies on each type of flexibility, and determining these degrees requires the incorporation of expert knowledge, we apply an Artificial Neural Network (ANN) to conduct this task. Title: AN APPROACH OF DATA MINING USING MONOTONE SYSTEMS Author(s): Rein Kuusik, Grete Lind Abstract: This paper treats data mining as a part of the process called knowledge discovery in databases (KDD), which consists of particular data mining algorithms and, under acceptable computational efficiency limitations, produces a particular enumeration of patterns. A pattern is an expression (in a certain language) describing facts in a subset of facts. The data mining step is one of the most implemented steps of the whole KDD process, which also involves preparing data for analysis and interpreting the results found in the data mining step. The main approach to data mining and its main disadvantage are shown, and a new method, called the generator of hypotheses, is presented together with its base algorithm, MONSA. Title: DEVELOPMENT OF AN ORGANIZATIONAL SUBJECT Author(s): Chamnong Jungthirapanich, Parkpoom Srithimakul Abstract: As globalized markets become more competitive, skilled employees are in high demand, which leads to high turnover rates in organizations.
This research creates a pattern for retaining the knowledge of those employees, called “the organizational subject model”. This pattern captures the inner capabilities of employees and develops them into content for the organization, then uses educational methods to transform this content into a subject, called “the organizational subject”. The organizational subject model is a new strategy for retaining the knowledge of skilled employees. This research also presents a statistical method to evaluate the efficiency and effectiveness of the organizational subject, and hypothesis tests to evaluate the achievement of the organizational subject model. The model saves knowledge capital investment and time, and furthermore helps to identify the unity of the organization. Title: MINING VERY LARGE DATASETS WITH SUPPORT VECTOR MACHINE ALGORITHMS Author(s): François Poulet, Thanh-Nghi Do Abstract: In this paper, we present new support vector machine (SVM) algorithms that can be used to classify very large datasets on standard personal computers. The algorithms extend three recent SVM algorithms: least squares SVM classification, the finite Newton method for classification, and incremental proximal SVM classification. The extension consists in building incremental, parallel and distributed SVMs for classification. Our three new algorithms are very fast and can handle very large datasets. An example of the effectiveness of these new algorithms is given with the classification into two classes of one billion points in 10-dimensional input space in a few minutes on ten personal computers (800 MHz Pentium III, 256 MB RAM, Linux). Title: EXTENSION OF THE BOX-COUNTING METHOD TO MEASURE THE FRACTAL DIMENSION OF FUZZY DATA Author(s): Antonio B. Bailón Abstract: Box-counting is a well-known method used to estimate the dimension of a set of points that define an object.
Those points are expressed with exact numbers that do not reflect the uncertainty that, in many cases, affects them. In this paper we propose an extension of the box-counting method that allows measuring the dimension of sets of fuzzy points, i.e. sets of points affected by some degree of uncertainty. The fuzzy box-counting method allows algorithms that use the fractal dimension of sets of crisp points to be extended to work with fuzzy data. Title: TRACKER: A FRAMEWORK TO SUPPORT REDUCING REWORK THROUGH DECISION MANAGEMENT Author(s): Andy Salter, Phil Windridge, Alan Dix, Rodney Clarke, Caroline Chibelushi, John Cartmell, Ian Sommerville, Victor Onditi, Hanifa Shah, Devina Ramduny, Amanda Queck, Paul Rayson, Bernadette Sharp, Albert Alderson Abstract: The Tracker project is studying rework in systems engineering projects. Our hypothesis is that providing decision makers with information about previous relevant decisions will help reduce the amount of rework in a project. We propose an architecture for the flexible integration of the tools implementing the variety of theories and models used in the project. The techniques include ethnographic analysis, natural language processing, activity theory, norm analysis, and speech and handwriting recognition. In this paper, we focus on the natural language processing components and describe experiments which demonstrate the feasibility of our text mining approach. Title: EVALUATION OF AN AGENT-MEDIATED COLLABORATIVE PRODUCTION PROTOCOL IN AN INSTRUCTIONAL DESIGN SCENARIO Author(s): Ignacio Aedo, Paloma Díaz, Juan Manuel Dodero Abstract: Distributed knowledge creation or production is a collaborative task that needs to be coordinated. A multiagent architecture for collaborative knowledge production tasks is introduced, in which knowledge-producing agents are arranged into knowledge domains or marts, and a distributed interaction protocol is used to consolidate knowledge produced in a mart.
Knowledge consolidated in a given mart can in turn be negotiated in higher-level foreign marts. As an evaluation scenario, the proposed architecture and protocol are applied to facilitate coordination during the creation of learning objects by a distributed group of instructional designers. Title: SYMBOLIC MANAGEMENT OF IMPRECISION Author(s): Mazen El-Sayed, Daniel Pacholczyk Abstract: This paper presents a symbolic model for handling nuanced information like "John is very tall". The model is based on a symbolic M-valued predicate logic. The first objective of this paper is to present a new representation method for handling nuanced statements of natural language that contain linguistic modifiers. These modifiers are defined in a symbolic way within a multiset context. The second objective is to propose new Generalized Modus Ponens rules dealing with nuanced statements. Title: LIVE-REPRESENTATION PROCESS MANAGEMENT Author(s): Daniel Corkill Abstract: We present the live-representation approach for managing and working in complex, dynamic business processes. In this approach, important aspects of business-process modeling, project planning, project management, resource scheduling, process automation, execution, and reporting are integrated into a detailed, on-line representation of planned and executing processes. This representation provides a real-time view of past, present, and anticipated process activities and resourcing. Changes resulting from process dynamics are directly reflected in the live representation, so that, at any point in time, the latest information about process status and downstream expectations is available. Managers can directly manipulate the live representation to change process structure and execution. These changes are immediately propagated throughout the environment, keeping managers and process participants in sync with process changes.
A fundamental aspect of the live-representation approach is obtaining and presenting current and anticipated activities as an intrinsic and organic part of each participant's daily activities. By becoming an active partner in these activities, the environment provides tangible benefits in keeping everyone informed and coordinated without adding duties and distractions. Equally important are giving individuals the flexibility to choose when and how to perform activities and allowing them to provide informative details of their progress without intruding into the details of their workdays. In this paper, we describe the technical and humanistic issues associated with the live-representation approach and summarize the experience gained in providing a commercial implementation used in the automotive and aerospace industries. Title: MR-BRAIN IMAGE SEGMENTATION USING GAUSSIAN MULTIRESOLUTION ANALYSIS AND THE EM ALGORITHM Author(s): Mohammed A-Megeed, Mohammed F. Tolba, Mostafa Gad, Tarek Gharib Abstract: We present an MR image segmentation algorithm based on the conventional Expectation Maximization (EM) algorithm and the multiresolution analysis of images. Although the EM algorithm has been used in MRI brain segmentation, as well as in image segmentation in general, it fails to utilize the strong spatial correlation between neighboring pixels. Multiresolution-based image segmentation techniques, which have emerged as a powerful method for producing high-quality segmentations of images, are combined here with the EM algorithm to overcome this drawback while retaining EM's simplicity. Two data sets are used to test the performance of the EM algorithm and the proposed Gaussian Multiresolution EM (GMEM) algorithm. The results, which show more accurate segmentation by the GMEM algorithm compared to the EM algorithm, are presented statistically and graphically to aid understanding.
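The conventional EM step that the GMEM abstract builds on can be sketched for the simplest case, a two-component 1-D Gaussian mixture (a generic textbook formulation with invented data, not the authors' implementation): the E-step assigns each intensity value a responsibility for each class, and the M-step re-estimates the class means and variances from those responsibilities.

```python
# EM for a two-component 1-D Gaussian mixture (illustrative sketch).
import math

def em_two_gaussians(data, mu=(0.0, 1.0), var=(1.0, 1.0), pi=0.5, iters=50):
    mu1, mu2 = mu
    v1, v2 = var
    for _ in range(iters):
        # E-step: posterior probability that each point belongs to component 1
        resp = []
        for x in data:
            p1 = pi * math.exp(-(x - mu1) ** 2 / (2 * v1)) / math.sqrt(v1)
            p2 = (1 - pi) * math.exp(-(x - mu2) ** 2 / (2 * v2)) / math.sqrt(v2)
            resp.append(p1 / (p1 + p2))
        # M-step: weighted re-estimation of the parameters
        n1 = sum(resp)
        n2 = len(data) - n1
        mu1 = sum(r * x for r, x in zip(resp, data)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / n2
        v1 = sum(r * (x - mu1) ** 2 for r, x in zip(resp, data)) / n1 or 1e-6
        v2 = sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, data)) / n2 or 1e-6
        pi = n1 / len(data)
    return (mu1, mu2)

# Toy "intensities" drawn near 0 and near 5; EM recovers the two means.
data = [-0.2, 0.0, 0.1, 0.2, 4.8, 5.0, 5.1, 5.2]
m1, m2 = em_two_gaussians(data, mu=(1.0, 4.0))
```

Note that this per-pixel formulation ignores spatial correlation between neighbors, which is exactly the shortcoming the multiresolution analysis in GMEM is meant to address.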
Title: EPISTHEME: A SCIENTIFIC KNOWLEDGE MANAGEMENT ENVIRONMENT Author(s): Julia Strauch, Jonice Oliveira, Jano Souza Abstract: Nowadays, researchers create and exchange information faster than in the past. Although a great part of this exchange takes place in documental form, a great deal of informal or tacit knowledge is also exchanged through personal interaction. For a scientific activity to succeed, researchers must be provided with all the knowledge necessary to execute their tasks, make decisions, collaborate with one another, and disseminate individual knowledge so that it can be transformed into organizational knowledge. In this context, we propose a scientific knowledge management environment called Epistheme. Its goals are: to help manage organizational knowledge, to serve as a learning environment, to facilitate communication among people in the same research domain, and to unify different perspectives and expertise in a single environment. This article presents the Epistheme framework, with its modules for knowledge identification, creation, validation, integration, acquisition and dissemination. Title: A PROCESS-CENTERED APPROACH FOR KDD APPLICATION MANAGEMENT Author(s): Karin Becker Abstract: KDD is a knowledge-intensive task consisting of complex interactions, protracted over time, between a human and a (large) database, possibly supported by a heterogeneous suite of tools. Managing this complex process, its underlying activities, resources and results, is a laborious and complex task. In this paper, we present a documentation model to structure and organize the information necessary to manage a KDD application, based on the premise that documentation is important not only for better managing efforts, resources, and results, but also for capturing and reusing project and corporate experiences.
The documentation model is very flexible and independent of the particular process methodology and tools applied, and its use through a supporting environment allows the capture, storage and retrieval of information at any desired level of detail, making it adaptable to any analyst profile or corporate policy. The approach presented is based on process-oriented organizational memory information systems, which aim at capturing the informal knowledge generated and used during corporate processes. The paper presents the striking features of the model and discusses its use in a real case study. Title: A HYBRID CASE-BASED ADAPTATION MODEL FOR THYROID CANCER DIAGNOSIS Author(s): Abdel-Badeeh M. Salem, Khaled A. Nagaty, Bassant Mohamed El Bagoury Abstract: Adaptation in Case-Based Reasoning (CBR) is a very difficult knowledge-intensive task, especially for medical diagnosis. This is due to the complexities of medical domains, which may lead to uncertain diagnosis decisions. In this paper, a new hybrid adaptation model for cancer diagnosis has been developed. It combines transformational and hierarchical adaptation techniques with certainty factors (CFs) and artificial neural networks (ANNs). The model consists of a hierarchy of three phases that simulates the expert doctor's reasoning phases for cancer diagnosis: the Suspicion, To-Be-Sure and Stage phases. Each phase uses the learning capabilities of a single ANN to learn the adaptation knowledge for performing the main adaptation task. Our model first formalizes the adaptation knowledge using IF-THEN transformational rules and then maps the transformational rules into numeric or binary vectors for training the ANN at each phase. The transformational rules of the Suspicion phase encode assigned CFs to reflect the expert doctors' sense of cancer suspicion.
The model is applied to thyroid cancer diagnosis and is tested on 820 patient cases obtained from the expert doctors of the National Cancer Institute of Egypt. Cross-validation tests have shown a very high diagnosis performance rate that approaches 100%, with an error rate of 0.53%. The hybrid adaptation model is described in the context of a prototype, Cancer-C, a hybrid expert system that integrates neural networks into the CBR cycle. Title: DYNAMICS OF COORDINATION IN INTELLIGENT SOCIAL MULTI-AGENTS ON AN ARTIFICIAL MARKET MODEL Author(s): Junko Shibata, Wataru Shiraki, Koji Okuhara Abstract: We propose market selection problems that take the agents' preferences into consideration. The artificial market is based on the Hogg-Huberman model with a reward mechanism. Using our model, agents can not only make use of imperfect and delayed information but also take their preferences into account in market selection. Our model includes, as a special case, a conventional model in which benefit is the only factor in selection. Finally, the dynamical behaviors of our system are investigated numerically. Simulation results show the influence of agent preference and uncertainty on market selection. Title: PARTIAL ABDUCTIVE INFERENCE IN BAYESIAN NETWORKS BY USING PROBABILITY TREES Author(s): Jose A. Gámez Abstract: The problem of partial abductive inference in Bayesian networks is, in general, more complex to solve than other inference problems such as probability/evidence propagation or total abduction. When join trees are used as the graphical structure over which propagation is carried out, the problem can be decomposed into two stages: (1) obtaining a join tree containing only the variables included in the explanation set, and (2) solving a total abduction problem over this new join tree. In De Campos et al.
(2002), different techniques are studied to approach this problem, with the result that the methods which obtain join trees of smaller size are not always those requiring the least CPU time during the propagation phase. In this work we propose the use of (exact and approximate) probability trees as the basic data structure for representing the probability distributions used during propagation. Our experiments show that the use of exact probability trees improves the efficiency of the propagation. Moreover, when approximate probability trees are used, the method obtains very good approximations while the required resources decrease considerably. Title: ONTOLOGY LEARNING THROUGH BAYESIAN NETWORKS Author(s): Mario Vento, Francesco Colace, Pasquale Foggia, Massimo De Santo Abstract: In this paper, we propose a method for learning the ontologies used to model a domain in the field of intelligent e-learning systems. This method is based on the formalism of Bayesian networks for representing ontologies, as well as on a learning algorithm that obtains the corresponding probabilistic model starting from the results of the evaluation tests associated with the didactic contents under examination. Finally, we present an experimental evaluation of the method using real-world data. Title: LOGISTICS BY APPLYING EVOLUTIONARY COMPUTATION TO MULTICOMMODITY FLOW PROBLEM Author(s): Koji Okuhara, Wataru Shiraki, Eri Domoto, Toshijiro Tanaka Abstract: In this paper, we propose an application of a genetic algorithm, a form of evolutionary computation, to logistics formulated as a multicommodity flow problem. We chose a multicommodity flow problem in which congestion can be evaluated by the traffic arrival ratio on a link. In simulation, we show that the proposed network control method using a genetic algorithm is superior to the usual method, which performs path selection by the Dijkstra method and traffic control by the gradient method.
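For reference, the baseline path-selection step named in the logistics abstract above, Dijkstra's shortest-path method, can be sketched as follows (the toy graph is invented for illustration):

```python
# Dijkstra's algorithm: the conventional path-selection baseline
# against which the genetic-algorithm approach is compared.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, cost), ...]} -> shortest distances from source."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return dist

toy = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 5)], "c": [("d", 1)]}
print(dijkstra(toy, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

Dijkstra minimizes per-path cost in isolation; the abstract's point is that a genetic algorithm can instead search for flow assignments that reduce overall congestion across commodities.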
Title: TOOL FOR AUTOMATIC LEARNING OF BAYESIAN NETWORKS FROM DATABASE: AN APPLICATION IN THE HEALTH AREA Author(s): Cristiane Koehler Abstract: Learning Bayesian networks is a two-stage process: learning the topology and learning the parameters associated with that topology. Currently, one of the most important research topics in Artificial Intelligence is the development of efficient inference techniques for use in intelligent systems. However, the usage of such techniques requires the availability of a valid knowledge model. The need to extract knowledge from databases is increasing exponentially: more and more, the amount of information exceeds the analysis capacity of traditional methods, which do not analyse the information with a focus on knowledge. New techniques and tools for extracting knowledge from databases must therefore be developed. In this article, the concepts of Data Mining and knowledge discovery based on Bayesian network technology are used to extract valid knowledge models. Several Bayesian learning algorithms were studied, and problems were found, mainly in the generation of the network topology when all the variables available in the database are used. The application domain of this research is the health area, where it was observed that, in clinical practice, experts reason only with the variables most important for decision making. After analysing several algorithms, a new algorithm is proposed to extract Bayesian models considering only the most relevant variables for constructing the network topology. Title: COMPUTER GAMES AND ECONOMICS EXPERIMENTS Author(s): Kay-Yut Chen, Ren Wu Abstract: HP Labs has developed a software platform, called MUMS, for moderating economics games between human and/or robot participants. The primary feature of this platform is a flexible scripting language that allows a researcher to implement any economics game in a relatively short time.
This scripting language eliminates the need to program low-level functions such as networking, databases and interface components. The scripts are descriptions of games, including definitions of roles, timing rules, the game tree (in a stage format), and input and output (with respect to a role, not client software). Definitions of variables and the use of common mathematical and logical operations are also allowed, providing maximum flexibility in handling the logic of games. This platform has been used to implement a wide variety of business-related games, including variations of a retailer game with simulated consumers and complex business rules, a double-sided call market, and negotiation in a procurement scenario. These games are constructed to accurately simulate HP business environments. Carefully calibrated experiments, with human subjects whose incentives were controlled by monetary compensation, were conducted to test how different business strategies result in different market behavior. For example, the retailer game was used to test how the market reacts to changes in HP's contract terms, such as return policies. Experiment results were used in major HP consumer businesses to make policy decisions. Title: MINING WEB USAGE DATA FOR REAL-TIME ONLINE RECOMMENDATION Author(s): Stephen Rees, Mo Wang Abstract: A user's browser history contains a lot of information about the relationship between web pages and users. If this information can be fully exploited, it may provide better knowledge about users' online behaviour and support better customer service and site performance. In this paper, an online recommendation model based on web usage data is proposed. A special data structure for storing the discovered item sets is described; this data structure is especially suitable for online real-time recommendation systems. Users are first classified using a neural network algorithm.
Then, within each group, an association rule algorithm is employed to discover common user profiles. In this process, the web sections users are interested in are traced and modeled. Multiple support levels for different types of page views and varying window sizes are also considered. Finally, recommendation sets are generated based on the user's active session. A demo website is provided to demonstrate the proposed model. Title: TEXT MINING FOR ORGANIZATIONAL INTELLIGENCE Author(s): Hercules do Prado, Edilberto Silva, Edilson Ferneda Abstract: This article presents a case study on the creation of organisational intelligence in a Brazilian news agency (Radiobras) through the application of text mining tools. Starting from the question of whether Radiobras is fulfilling its social role, we construct an analysis model based on the enormous volume of texts produced by its journalists. The CRISP-DM method was applied, including the acquisition of the news produced during 2001, the preparation of this material (cleansing and formatting the archives), the creation of a clustering model, and the generation of many views. The views were supplied to the administration of the company, allowing it to develop more accurate self-knowledge. Radiobras is an important company of the Brazilian State that disseminates the acts of the public administration and needs a self-evaluation based on knowledge of its results. Like any other company, Radiobras is subject to the increasing demands of competitiveness imposed on modern organisations. In this scenario, the generation and retention of organisational intelligence have been recognised as a competitive differential that can lead to more adequate management of the business, including its relationships with customers and the adequacy of its work structure.
The importance of information for the elaboration of knowledge and, consequently, the synthesis of intelligence is widely recognised; it requires proper treatment to reach insights that can activate the mental processes leading to that synthesis. Many internal and external views of the organisation can be built with tools for extracting patterns from large amounts of data, decisively supporting managers in the decision-making process. These views, constructed to answer specific questions, constitute knowledge in a process of Organisational Learning that radically influences the way the organisation is managed. The contributions of IT in this field were initially developed to extract patterns from transactional databases containing well-structured data. However, considering that most of the information in organisations is found in textual form, recent developments allow the extraction of interesting patterns from this type of data as well. Some patterns extracted in our case study are: (i) measures of the production and geographic distribution of Radiobras news, (ii) a survey of the most used words, (iii) the discovery of the coverage areas of the news, (iv) an evaluation of how the company is fulfilling its role, according to the subjects addressed in its news, and (v) an evaluation of the company's journalistic coverage. Title: STAR – A MULTIPLE DOMAIN DIALOG MANAGER Author(s): Márcio Mourão, Nuno Mamede, Pedro Madeira Abstract: In this work we propose to achieve not only a dialogue manager for a single domain, but also the aggregation of multiple domains in the same dialogue management system. With this in mind, we have developed a dialogue manager that consists of five modules. One of them, the Task Manager, deserves special attention. Each domain is represented by a frame, which is in turn composed of slots and rules.
Slots define the domain's data relationships, and rules define the system's behavior. Rules are composed of operators (logical, conditional, and relational) and functions that can reference frame slots. The use of frames made it possible for all the remaining modules of the dialogue manager to become domain-independent. This is, beyond any doubt, a step ahead in the design of conversational systems. Title: REQUIREMENTS OF A DECISION SUPPORT SYSTEM FOR CAPACITY ANALYSIS AND PLANNING IN ENTERPRISE NETWORKS Author(s): Américo Azevedo, Abailardo Moreira Abstract: Capacity analysis and planning is a key activity in the provision of adequate customer service levels and the management of a company's operational performance. Traditional capacity analysis and planning systems have become inadequate in the face of several emerging manufacturing paradigms. One such paradigm is production in distributed enterprise networks, consisting of subsets of autonomous production units within supply chains working in a collaborative and coordinated way. In these distributed networks, capacity analysis and planning becomes a complex task, especially because it is performed in a heterogeneous environment where the performance of individual manufacturing sites and of the network as a whole must be considered simultaneously. Therefore, the use of information system solutions is desirable to support effective and efficient planning decisions. Nevertheless, there seems to be no clear definition of the most important requirements that such supporting solutions must meet. This paper attempts to identify some general requirements of a decision support system for capacity analysis and planning in enterprise networks. Adaptability of capacity models, computational efficiency, monitoring mechanisms, support for distributed order promising, and integration with other systems are some of the important requirements identified.
Title: A SUBSTRATE MODEL FOR GLOBAL GUIDANCE OF SOFTWARE AGENTS Author(s): Guy Gouardères, Nicolas Guionnet Abstract: We try to understand how large groups of software agents can be given the means to achieve global tasks, while their view of the situation is only local (reduced to a neighbourhood). To understand the duality between local abilities and global constraints, we introduce a formal model. We use it to evaluate whether there exists an absolute criterion by which a local agent can detect global failure (in order to change the situation). The study of a sample of examples shows that such a criterion does not always exist, and when it does, it is often too global for local agents to apply (it demands too large a field of view). That is why we set aside, for a moment, the sphere of absolute criteria to look for something more flexible. We propose a tool for domain globalisation inspired by continuous physical phenomena: if the domain is too partitioned, we can add a propagation layer to it, letting the agents access data concerning its global state. This layer can be a pure simulation of the wave or heat equations, or an exotic generalisation. We applied the concept to a maze obstruction problem. Title: APPLYING CASE-BASED REASONING TO EMAIL RESPONSE Author(s): Luc Lamontagne Abstract: In this paper, we describe a case-based reasoning approach for the semi-automatic generation of responses to email messages. This task poses some challenges from a case-based reasoning perspective, especially for the precision of the retrieval phase and the adaptation of textual cases. We are currently developing an application for the investor relations domain. This paper discusses how some particularities of the domain corpus, such as the presence of multiple requests in incoming email messages, can be addressed by inserting natural language processing techniques into different phases of the reasoning cycle.
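The retrieval phase of a textual CBR system like the one in the email-response abstract above can be sketched with a bag-of-words cosine-similarity match (the example messages and function names are invented; this is a generic stand-in, not the authors' implementation):

```python
# Retrieve the most similar past email case by cosine similarity
# over simple bag-of-words vectors (a minimal CBR retrieval sketch).
import math
from collections import Counter

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(cases, query):
    qv = Counter(query.lower().split())
    return max(cases, key=lambda c: cosine(Counter(c.lower().split()), qv))

# Hypothetical investor-relations case base.
cases = ["what is the current dividend policy",
         "when is the next shareholder meeting"]
best = retrieve(cases, "please tell me about your dividend policy")
```

In a full CBR cycle the retrieved case's response would then be adapted to the new message; handling multiple requests per email, as the abstract notes, requires splitting the incoming text before retrieval.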
Title: THE INNOVATION PLANNING TASK FOR PRODUCTS AND SERVICES Author(s): Alfram Albuquerque, Marcelo Barros, Agenor Martins, Edilson Ferneda Abstract: Innovation is crucial for business competitive intelligence and the knowledge-based society. In this context, companies tend to base their activities on the efficiency of their processes for supporting innovation of products and services. Knowledge-based systems should leverage the innovation process and its planning by storing internal and external user information. In this paper, the authors detail this innovation process by presenting and discussing an architecture for the task of user support oriented to the innovation planning process. The proposed architecture is based on QFD, a methodology that translates the client's voice into engineering requisites for products and services. Our methodological proposal increases efficiency by integrating both knowledge-based processes (KBPs) and mechanical processes (MPs) used to transform quality specifications or requisites into engineering requirements.

Title: DECISIO: A COLLABORATIVE DECISION SUPPORT SYSTEM FOR ENVIRONMENTAL PLANNING Author(s): Julia Strauch, Manuel de Castro, Jano de Souza Abstract: Environmental planning projects often face problems such as difficulties in managing spatial data as a component of the process, lack of coordination among the different areas, difficulties of knowledge access, badly defined decision processes, and absence of documentation of the entire process and its relevant data. Our proposal is a web-based system that provides a workflow tool to design and execute the decision process, together with group decision support tools that help decision makers find similar solutions, analyze and prioritize alternatives, and interact with one another.
The main goals of the proposal are to document the environmental process and data; to provide tools supporting collaboration, conflict management and alternative analysis; and to make available previous similar cases, both successful and failed. These functionalities have their human-computer interaction adapted to incorporate spatial data manipulation and geo-referencing. The tool is being used in agro-meteorological projects with the purpose of improving the effectiveness and efficiency of the decision process and its results, maximizing profit and preserving natural resources.

Title: CLASSIFYING DATABASES BY K-PROPAGATED SELF-ORGANIZING MAP Author(s): Takao Miura, Taqlow Yanagida, Isamu Shioya Abstract: In this investigation, we discuss classifiers for databases by means of neural networks. Among others, we introduce the k-propagated Self-Organizing Map (SOM), which involves a learning mechanism for neighbors, and we show the feasibility of this approach. We also evaluate the tool from the viewpoint of statistical tests.

Title: MAKE OR BUY EXPERT SYSTEM (MOBES): A KNOWLEDGE-BASED DECISION SUPPORT TOOL TO MAXIMISE STRATEGIC ADVANTAGE Author(s): Noornina Dahlan, Ai Pin Lee, Reginald Theam Kwooi See, Teng Hoon Lau, Eng Han Gan Abstract: This paper presents a knowledge-based tool that aids strategic make-or-buy decisions, which are key components in enhancing an organization’s competitive position. Most companies have no firm basis for evaluating the make-or-buy decision, and thereby use inaccurate costing analyses for sourcing strategies, which are directly responsible for the flexibility, customer service quality, and core competencies of an organization. As a result, a prototype of the Make or Buy Expert System (MOBES) with multi-attribute analytic capability is developed.
The proposed model comprises four main dimensions: identification and weighting of the performance category; analysis of the technical capability category; comparison of retrieved internal and external technical capability profiles; and analysis of the supplier category. This model aims to enable an organisation to enhance its competitiveness by improving its decision-making process as well as leveraging its key internal resources to move further forward in its quest for excellence.

Title: AGENT TECHNOLOGY FOR DISTRIBUTED ORGANIZATIONAL MEMORIES: THE FRODO PROJECT Author(s): Ludger van Elst, Andreas Abecker, Ansgar Bernardi Abstract: Comprehensive approaches to knowledge management in modern enterprises are confronted with scenarios which are heterogeneous, distributed, and dynamic by nature. Pro-active satisfaction of information needs across intra-organizational boundaries requires dynamic negotiation of shared understanding and adaptive handling of changing and ad-hoc task contexts. We present the notion of a Distributed Organizational Memory (DOM) as a meta-information system with multiple ontology-based structures and a workflow-based context representation. We argue that agent technology offers the software basis necessary to realize DOM systems. We sketch a comprehensive Framework for Distributed Organizational Memories which enables the implementation of scalable DOM solutions and supports the principles of agent-mediated knowledge management.

Title: USING THE I.S. AS A (DIS)ORGANIZATION GAUGE Author(s): Pedro Araujo, Pedro Mendes Abstract: The textile and garment industry in Portugal is struggling, largely because many companies lack organization. This situation, together with the increasing dynamics of products and markets, considerably complicates decision-making, and information systems can be a precious aid. But unlike academics, managers must be shown evidence of the advantages of using information technology.
So, to help attain this objective, we propose the definition of an index quantifying the level of disorganization of the company's productive sector. Continuously using the information system to monitor this index allows managers to improve the performance of the company's operations.

Title: HELPING USERS TO DISCOVER ASSOCIATION RULES. A CASE IN SOIL COLOR AS AGGREGATION OF OTHER SOIL PROPERTIES Author(s): Manuel Sanchez-Marañon, Jose-Maria Serrano, Gabriel Delgado, Julio Calero, Daniel Sanchez, Maria-Amparo Vila Abstract: As commercial and scientific databases increase dramatically in size, with little control over the overall application of this huge amount of data, knowledge discovery techniques are needed in order to obtain relevant and useful information that can be properly used later. Data mining tools, such as association rules and approximate dependencies, have proven effective and useful when users are looking for implicit or non-intuitive relations between data. The current main disadvantage of rule-extraction algorithms is the sometimes excessive number of results obtained. Since human expert aid is needed to interpret the results, a very interesting task is to make the expert's work easier. A user interface and a knowledge discovery management system would provide a comfortable way to easily sort rules according to their utility. An example of this need is shown in a case involving soil color as an aggregation of other soil properties and as an interesting descriptor of soil-forming processes.

Title: PRODUCTION ACTIVITY CONTROL USING AUTONOMOUS AGENTS Author(s): Eric Gouardères, Mahmoud Tchikou Abstract: The need for adaptability of production structures is continuously increasing due to shrinking product life cycles and growing competition.
The efficiency of a production system is now described not only in terms of cycle time, due dates and inventory levels, but also in terms of flexibility and reactivity, in order to integrate the evolution of the market. Current methods for real-time control of production systems do not provide sufficient tools for effective production activity control. The origin of this problem lies at the level of existing control structures. This work details the design of a production activity control system based on a distributed structure, built on distributed artificial intelligence concepts. After introducing the context and the reasoning behind this work, we describe the different parts of our multi-agent model. Lastly, we illustrate the approach on a practical example of a production cell.

Title: HUMAN IRIS TEXTURE SEGMENTATION ALGORITHM BASED ON WAVELET THEORY Author(s): Taha El-Arief, Nahla El-Haggar, M. Helal Abstract: Iris recognition is a new biometric technology that is exceptionally accurate and relies on stable and distinctive features for personal identification. For iris classification it is important to isolate the iris pattern by locating its inner (pupil) and outer (limbus) boundaries. This paper presents a texture segmentation algorithm for segmenting the iris from the human eye in a more accurate and efficient manner. A quad-tree wavelet transform is first constructed to extract the texture features. The fuzzy c-means (FCM) algorithm is then applied to the quad tree in a coarse-to-fine approach. Finally, the results demonstrate its potential usefulness.

Title: AN EXPERT SYSTEM FOR CREDIT MANAGEMENT FOLLOW-UP Author(s): Nevine Labib, Ezzat Korany, Hamdy Latif, Mohamed Abderabu Abstract: Commercial risk assessment has nowadays become a major concern of banks, since they face severe losses from unrecoverable credit. The proposed system is an expert system prototype for credit management follow-up.
The system uses a rule-based inference mechanism. The knowledge was obtained from experts working in six commercial Egyptian banks. The system starts by following up the granted loan. If the customer refrains from paying, it calculates his credit rating. If the customer's credit rating is bad, it analyzes the causes of the problem and accordingly takes suitable remedial action. When tested, the system proved to be efficient.

Title: APPLICATION OF GROUP METHOD OF DATA HANDLING TO VIRTUAL ENVIRONMENT SIMULATOR Author(s): Wataru Shiraki Abstract: In this paper, we propose a decision support system that selects the most useful development plan for the preservation of the natural environment and target species from two or more development plans. For this purpose, after recognizing the environmental situation and the impact among environmental factors where the species exist, we select a sustainable development plan based on evaluation and prediction of environmental assessment by reconstructing the dynamics in computer simulation. We then present a hybrid system using artificial life technology, such as cellular automata, together with the group method of data handling, which can be applied to environmental assessment. Results of a numerical example show that the proposed system approximates coefficients with sufficient accuracy when the structure of the model is known; it was also shown that near-future dynamics can be predicted even when the structure of the model is unknown.

Title: AN EFFICIENT CLASSIFICATION AND IMAGE RETRIEVAL ALGORITHM BASED ON ROUGH SET THEORY Author(s): Jafar Mohammed, Aboul Ella Hassanien Abstract: With an enormous amount of image data stored in databases and data warehouses, it is increasingly important to develop powerful tools for analyzing such data and mining interesting knowledge from it. In this paper, we study the classification problem for image databases and provide an algorithm for classifying and retrieving image data in the context of Rough Set methodology.
We present an efficient quadratic distance function which works efficiently for image data retrieval. We also demonstrate that, by choosing a useful subset of rules based on a simple decision table, the algorithm achieves high classification accuracy.

Title: USING SPECIALIZED KNOWLEDGE IN AUTOMATED WEB DOCUMENT SUMMARIZATION Author(s): Zhiping Zheng Abstract: Automated text summarization is a natural language processing task that generates short, concise, and comprehensive descriptions of the essential content of documents. This paper describes some new features of a real-time automated web document summarization system used in the Seven Tones Search Engine, a search engine specialized in linguistics and languages. The main feature of this system is the use of algorithms designed specifically for Web pages in a specific knowledge domain to improve the quality of summarization. It also considers the unique characteristics of search engines. In particular, linguistic features are very important for linguistics documents. The documents are assumed to be either HTML or plain text. A good HTML parser greatly affects summarization quality, although it is not part of the summarization algorithm itself.

Title: A NEW APPROACH OF DATA MINING Author(s): Stéphane Prost, Claude Petit Abstract: This paper describes a trajectory classification algorithm (each trajectory is defined by a finite number of values) that gives for each class of trajectories a characteristic trajectory: the meta-trajectory. Pathological trajectories are removed by the algorithm. Classes are built by an ascendant method: two classes are built, then three, and so on; a partition containing n classes allows building a partition with n+1 classes. For each class a meta-trajectory is determined (for example, the centre of gravity).
The number of classes depends on the minimum allowed number of trajectories per class and on a user-given parameter that is compared with the inter-class inertia gain; other dispersion measures may be chosen.

Title: EXPERIENCE MANAGEMENT IN THE WORK OF PUBLIC ORGANIZATIONS: THE PELLUCID PROJECT Author(s): Simon Lambert, Sabine Delaitre, Gianni Viano, Simona Stringa Abstract: One of the major issues in knowledge management for public organisations is organisational mobility of employees, that is, the continual movement of staff between departments and units. As a consequence, the capture, capitalisation and reuse of experience become very important. In the PELLUCID project, three general scenarios have been identified from studies of the pilot application cases: contact management, document management and critical timing management. These scenarios are outlined, and a corresponding approach to experience formalisation is described. Requirements are also set out for a technical solution able to support experience management.

Title: USING GRAMMATICAL EVOLUTION TO DESIGN CURVES WITH A GIVEN FRACTAL DIMENSION Author(s): Manuel Alfonseca, Alfonso Ortega, Abdel Dalhoum Abstract: Lindenmayer grammars have been applied to represent fractal curves. In this work, Grammatical Evolution is used to automatically generate and evolve Lindenmayer grammars that represent curves whose fractal dimension approximates a pre-defined required value. For many dimensions, this is a non-trivial task to perform manually. The procedure parallels biological evolution, acting through three different levels: a genotype (a vector of integers subject to random modifications in different generations), a protein-like intermediate level (a Lindenmayer grammar with a single rule, generated from the genotype by applying a transformation algorithm) and a phenotype (the fractal curve).
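The relation between a single-rule Lindenmayer grammar and the fractal dimension of the curve it draws can be illustrated with the classic Koch rule — a hand-picked example, not one of the evolved grammars the paper describes. Each generation rewrites every segment F into four copies at one third of the scale, so the similarity dimension is log 4 / log 3 ≈ 1.26:

```python
import math

def expand(axiom, rules, generations):
    """Apply L-system production rules to every symbol in parallel, repeatedly."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)  # symbols without a rule stay fixed
    return s

# Koch-curve grammar: one rule, F -> F+F--F+F (with +/- as 60-degree turns).
koch = {"F": "F+F--F+F"}
curve = expand("F", koch, 3)  # after 3 generations: 4^3 = 64 segments

def similarity_dimension(n_copies, scale_factor):
    """Dimension of a curve whose rule replaces each segment
    with n_copies copies scaled down by scale_factor."""
    return math.log(n_copies) / math.log(scale_factor)

d = similarity_dimension(4, 3)  # Koch curve: ~1.2619
```

Grammatical Evolution, as the abstract explains, searches the space of such rules for one whose dimension matches a required target value.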
Title: DETECTION OF CARDIAC ARRHYTHMIAS BY NEURAL NETWORKS Author(s): Noureddine Belgacem, F. Reguig, M. Chikh Abstract: The classification of heart beats is important for automated arrhythmia monitoring devices. The study describes a neural classifier for the identification and detection of cardiac arrhythmias in surface electrocardiograms (ECGs). Traditional features for the classification task are extracted by analyzing the heart rate and the morphology of the QRS complex and P wave of the ECG signal. The performance of the classifier is evaluated on the MIT-BIH database. The method achieved a sensitivity of 94.60% and a specificity of 96.49% in discriminating six classes.
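For reference, sensitivity and specificity figures like those quoted in the last abstract are derived from a classifier's confusion counts; the beat counts below are invented for illustration, not the MIT-BIH results:

```python
def sensitivity(tp, fn):
    """True-positive rate: detected positives / all actual positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly rejected negatives / all actual negatives."""
    return tn / (tn + fp)

# Illustrative confusion counts for a beat detector (invented numbers).
se = sensitivity(tp=473, fn=27)   # 473 arrhythmic beats found, 27 missed -> 0.946
sp = specificity(tn=965, fp=35)   # 965 normal beats passed, 35 false alarms -> 0.965
```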

Area 3 - INFORMATION SYSTEMS ANALYSIS AND SPECIFICATION

Area 4 - INTERNET COMPUTING AND ELECTRONIC COMMERCE

Title: THE RESOURCE FRAMEWORK FOR MOBILE APPLICATIONS: ENABLING COLLABORATION BETWEEN MOBILE USERS Author(s): Jörg Roth Abstract: Mobile devices are becoming more and more interesting for several kinds of field workers, such as sales representatives or maintenance engineers. When in the field, mobile users often want to collaborate with other mobile users or with stationary colleagues at home. Most established collaboration concepts are designed for stationary scenarios and often do not sufficiently support mobility. Mobile users are only weakly connected to the communication infrastructure by wireless networks. Small mobile devices like PDAs often do not have sufficient computational power to handle the demanding tasks of coordinating and synchronizing users. They have, for example, very limited user interface capabilities and reduced storage capacity. In addition, mobile devices are subject to different usage paradigms than stationary computers and are often turned on and off during a session. In this paper, we introduce a framework for mobile collaborative applications based on so-called resources. The resource framework leads to a straightforward functional decomposition of the overall application. Our platform Pocket DreamTeam provides a runtime infrastructure for applications based on resources. We demonstrate the resource concept with the help of two applications built on top of the Pocket DreamTeam platform.

Title: A SEMANTIC FRAMEWORK FOR DISTRIBUTED APPLICATIONS Author(s): Liang-Jie Zhang, Wei-Tek Tsai, Bing Li Abstract: The .XIN technology is a novel approach to building and integrating existing distributed applications. The essence of a .XIN is its business logic descriptions. Based on the concept of .XIN, developers’ effort is minimized because their development work is concentrated on mapping business logic to .XINs. The adaptor layer is an interpreter that translates .XINs into implementations for particular distributed domains.
This layer hides the implementation details of distributed applications. Moreover, applications built with .XIN can share their services over the Internet via RXC (Remote .XIN Call), and a remote .XIN-based service can be blended into a local .XIN-based application via RXI (Remote .XIN Interchange). Finally, an object interface can be mapped to a .XIN interface. With the support of this mapping, both non-.XIN applications and .XIN applications have the same interface, the .XIN interface, so it is possible for them to share their respective services over the Internet. This is also a new approach to integrating heterogeneous applications. The .XIN technology is a semantic framework for distributed applications.

Title: OPEN TRADING - THE SEARCH FOR THE INFORMATION ECONOMY'S HOLY GRAIL Author(s): Graham Scriven Abstract: This paper examines the concept of Open Trading, establishing its crucial importance in achieving comprehensive benefits for all trading partners as a result of the move towards the Information Economy. The rationale for interoperability is also examined and placed in perspective. The paper considers how Open Trading can be achieved and suggests ten principles as a practical guide for both vendors and business organisations.

Title: EVALUATION OF MAINFRAME COMPUTER SYSTEM USING WEB SERVICE ARCHITECTURE Author(s): Yukinori Kakazu, Mitsuyoshi Nagao Abstract: In this paper, we propose a mainframe computer system using a web service architecture, in order to realize a mainframe system that permits users to access it conveniently and perform flexible information processing. A web service is a system architecture in which applications communicate through the Internet using SOAP (Simple Object Access Protocol). SOAP is a simple protocol based on XML and HTTP. It has the advantages that communication can pass through firewalls put in place for network security and that it can be used on various platforms.
The web service architecture inherits these advantages of SOAP. It is likely that an effective and convenient mainframe computer system used over the Internet can be implemented by means of the web service architecture. Moreover, the implementation of the proposed system can bring about a new application model: applications that let users transparently use the mainframe computer system and perform large-scale information processing can be implemented on low-performance clients, such as mobile platforms. In addition, applications combining the high-performance libraries of a mainframe computer system can be implemented on such clients. We report the construction of the proposed system and confirm its effectiveness through a computational experiment. The experimental results revealed that effective information processing could be performed over the Internet using the proposed system.

Title: WHAT IS THE VALUE OF EMOTION IN COMMUNICATION? IMPLICATIONS FOR USER CENTRED DESIGN. Author(s): Robert Cox Abstract: This research presents an investigation into the question: what is the value of emotion in communication? To gain a greater appreciation of this title, it is this paper’s intention to deconstruct the sentence into its component parts, namely its nouns (value, emotion and communication), and to study them both in isolation from each other and as a total construct. Further, the everyday use of communications technology (e.g. e-mail, chat lines, mobile and fixed-line telephones) has changed human communication norms. To identify the significance of this change, an investigation into the question of whether emotions continue to play an important role in effective human-to-human communication is warranted.
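A SOAP request of the kind such a web-service front end accepts is simply an XML envelope posted over HTTP. A minimal sketch built with the Python standard library; the method name `RunJob`, the namespace `urn:example:mainframe` and the parameters are hypothetical, not part of the proposed system:

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_ENV)

def build_envelope(method, params, ns="urn:example:mainframe"):
    """Wrap a method call and its parameters in a SOAP 1.1 envelope."""
    env = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_ENV}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

# Hypothetical call asking the mainframe back end to run a batch job.
msg = build_envelope("RunJob", {"jobName": "PAYROLL", "priority": 5})
```

In practice the envelope would be sent as the body of an HTTP POST with a `SOAPAction` header, which is what lets the traffic pass through firewalls as ordinary web requests.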
Title: COMBINING WEB BASED DOCUMENT MANAGEMENT AND EVENT-BASED SYSTEMS - MUDS AND MOOS TOGETHER WITH DMS FORM A COOPERATIVE OPEN SOURCE KNOWLEDGE SPACE Author(s): Thorsten Hampel Abstract: The WWW has developed into the de facto standard for computer-based learning. However, as a server-centered approach it confines readers and learners to passive, non-sequential reading. Authoring and web-publishing systems aim at supporting the authors' design process. Consequently, learners' activities are confined to selecting and reading (downloading documents), with almost no possibility to structure and arrange their learning spaces, nor to do so in a cooperative manner. This paper presents a learner-centered, completely web-based approach built on virtual knowledge rooms. Our technical framework allows us to study different technical configurations within the traditional university setting. In terms of system design, the concept of virtual knowledge rooms combines the event-based technology of virtual worlds with classical document management functions in a client-server framework. Knowledge rooms and learning materials such as documents or multimedia elements are represented as a fully object-oriented model of objects, attributes and access rights. We do not focus on interactive systems managing individual access rights to knowledge bases, but rather on cooperative management and structuring of distributed knowledge bases.

Title: USER-TAILORED E-BUSINESS THROUGH A THREE-LAYER PERSONALIZATION MODEL BASED ON AGENTS Author(s): Irene Luque Ruiz, Miguel Angel Gómez-Nieto, Gonzalo Cerruela Garcia, Enrique López Espinosa Abstract: The explosion of the Internet, together with the advantages that electronic commerce now offers, is driving substantial growth in the number of Web sites devoted to this activity, with the result that users receive an ever-greater quantity of information about the products or services those sites offer.
Faced with so much information, some of it relevant to them and some not, users end up not processing it at all. This situation has led researchers to seek solutions, among which the use of Artificial Intelligence stands out. From this idea arises the personalization of Web sites, whose objective is to provide users with the information they need. In this paper a multi-level personalization model is proposed which, applied to the Business Virtual Centre (BVC) portal, personalizes the services and information offered, as well as the activities each user is able to carry out in it. The personalization model is based on stereotypes existing in the system, information introduced by the user, and knowledge extracted from the information generated by the user during their stay in the BVC.

Title: PERSONALIZATION MEETS MASS CUSTOMIZATION - SUPPORT FOR THE CONFIGURATION AND DESIGN OF INDIVIDUALIZED PRODUCTS Author(s): Martin Lacher, Thomas Leckner, Michael Koch, Rosmary Stegmann Abstract: Using electronic media for customer interaction enables enterprises to serve customers better by cost-efficiently offering personalized services to all customers. In this paper we address the area of helping customers select or design individualized products (mass customization) by using personalization technologies. The paper provides an introduction to the application area and presents a system for supporting the customization and design of individualized products. The support solution is presented and discussed from a process (customer) point of view and from a system point of view.
Title: E-COMMERCE PAYMENT SYSTEMS - AN OVERVIEW Author(s): Pedro Fonseca, Joaquim Marques, Carlos Serrao Abstract: Electronic Commerce plays a role of growing importance in the modern economy, since it provides a convenient way for consumers to acquire goods and services through electronic means, of which the Internet and the WWW are the most important. However, this new way of trading raises important problems in the way payments are made, and trust is one of the most important ones. This paper starts by presenting some of the complexities related to Electronic Commerce payments in this New Economy, from both the consumer and the seller perspective. Next, differences between traditional and electronic payment systems are identified, along with how each deals with the identified complexities. Electronic payment systems (EPS) are then surveyed with reference to the advantages they present for Electronic Commerce. Finally, a comparative EPS table is presented identifying the strong and weak points of each EPS, and conclusions are drawn from it.

Title: CONTENT ANALYSIS OF ONLINE INTERRATER RELIABILITY USING THE TRANSCRIPT RELIABILITY CLEANING PERCENTAGE (TRCP): A SOFTWARE ENGINEERING CASE STUDY Author(s): Peter Oriogun Abstract: In this paper the author presents a case study of online discourse by message unit using quantitative content analysis, with particular emphasis on the author's proposed interrater agreement percentage, referred to in this paper as the Transcript Reliability Cleaning Percentage (TRCP). The paper examines the ratings of participants' messages in terms of level of engagement within a negotiation forum, in line with the author's Negotiated Incremental Architecture, Oriogun (2002), using the Win-Win Spiral Model, Boehm (1988). The variables the author investigated are participation and interaction.
The paper is divided into six sections: the rationale for the study; a brief introduction to the Negotiated Incremental Architecture; the study itself; a definition of the Transcript Reliability Cleaning Percentage (TRCP) of online discourse using the message unit; the interpretation of individual participants' results; and finally a recommendation for a follow-on paper using our SQUAD approach to online posted messages. The SQUAD approach is a semi-structured categorisation of online messages. The paper also discusses the reasons why there has been very little research on interrater reliability with respect to content analysis of online discourse; furthermore, a comparison is made between Cohen's kappa value as reported in Rourke, Anderson, Garrison & Archer (2000) and the author's proposed TRCP. It is argued in this paper that the proposed TRCP will better enhance interrater reliability (percentage agreement between coders) in the rating of online transcripts. The author suggests that under certain circumstances it is not possible to obtain 100% agreement between coders after discussion, although the author notes that this was achieved by Hara, Bonk & Angeli (2000).

Title: ARCHCOLLECT: A SET OF COMPONENTS TOWARDS WEB USERS’ INTERACTIONS Author(s): Julio Ferreira, Edgar Yano, Joao Sobral, Joubert Castro, Tiago Garcia, Rodrigo Pagliares Abstract: This paper describes an example of a system that emphasizes web users’ interactions, called ArchCollect. One JavaScript component and five Java components gather information coming only from the user, independently of the web application being monitored and of the web server used to support it. This improves the portability of the software and its capacity to deal with many web applications in a data center at the same time, for example.
The ArchCollect relational model, which is composed of several tables, provides analyses regarding factors such as purchases, business results, the time spent serving each interaction, and the user, process, service or product involved. In this software, data extraction and data analysis are performed either by personalization mechanisms provided by internal algorithms, by commercial decision-making tools focused on such services (OLAP, data mining and statistics), or by both.

Title: INTEGRATION OF OBJECT-ORIENTED FRAMEWORKS HAVING IDL AND RPC-BASED COMMUNICATIONS Author(s): Debnath Mukherjee Abstract: This paper proposes a software architecture to unify disparate application frameworks that have Interface Definition Language (IDL) and RPC-based communication between client and server, thus enabling distributed computation using disparate frameworks. The architecture also demonstrates how multiple inheritance from classes belonging to disparate object-oriented frameworks is possible.

Title: THE SECURE TRUSTED AGENT PROXY SERVER ARCHITECTURE Author(s): Michelangelo Giansiracusa Abstract: Concerns about malicious host-system attacks against agents have been a significant factor in the absence of investment in agent technologies for e-commerce on the greater Internet. However, as this paper shows, agent systems represent a natural evolution in distributed system paradigms. As in other distributed systems, applying traditional distributed-systems security techniques and incorporating trusted third parties can discourage bad behaviour by remote systems. The concept and properties of a trusted proxy server host, acting as a 'middle-man' that anonymises authenticated agent entities in agent itineraries, are introduced along with their inherent benefits. It is hoped that this fresh secure agent architecture will inspire further new directions in tackling the very challenging malicious agent platform problem.
Title: SECURE SMART CARD-BASED ACCESS TO AN E-LEARNING PORTAL Author(s): Josef von Helden, Ralf Bruns, Jürgen Dunkel Abstract: The purpose of the OpenLearningPlatform project is the development of an integrated E-learning portal to support teaching and learning at universities. Compared to other E-learning systems, the originality of the OpenLearningPlatform lies in the strong smart card-based authentication and encryption that significantly enhance its usefulness. The secure authentication of every user and the encryption of the transmitted data are the prerequisites for offering personalized and authoritative services that could not be offered otherwise. Hence, smart card technology provides the basis for more advanced E-learning services.

Title: TOWARDS WEB SITE USER'S PROFILE: LOG FILE ANALYSIS Author(s): Carlos Alberto de Carvalho, Ivo Pierozzi Jr., Eliane Gonçalves Gomes, Maria de Cléofas Faggion Alencar Abstract: The Internet is a remote, innovative, extremely dynamic and widely accessible communication medium. As in all other human communication formats, we observe the development and adoption of its own language, inherent to its multimedia aspects. Embrapa Satellite Monitoring has been using the Internet as a medium for disseminating its research results and interacting with clients, partners and web site users for more than a decade. In order to evaluate web site usage and the performance of the e-communication system, the Webalizer software has been used to track and calculate statistics based on web server log file analysis. The objective of the study is to analyze the data and evaluate indicators related to the origin of requests (search string, country, time), the actions performed by users (entry pages, agents) and system performance (error messages). This will help to remodel the web site design to improve the interaction dynamics, and also to develop a customized log file analyser that would retrieve coherent and accurate information.
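Webalizer-style statistics come from parsing the web server's access log line by line. A minimal sketch, assuming the Apache Common Log Format (the sample lines below are invented), that counts successful requests per page:

```python
import re
from collections import Counter

# Apache Common Log Format: host ident authuser [time] "request" status bytes
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def page_counts(lines):
    """Count successful (2xx) requests per requested path."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group("status").startswith("2"):
            counts[m.group("path")] += 1
    return counts

# Invented sample entries in Common Log Format.
sample = [
    '192.0.2.1 - - [10/Oct/2003:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
    '192.0.2.2 - - [10/Oct/2003:13:56:01 -0700] "GET /index.html HTTP/1.0" 200 1043',
    '192.0.2.3 - - [10/Oct/2003:13:57:12 -0700] "GET /missing.html HTTP/1.0" 404 209',
]
```

The same captured groups (host, time, status) feed the other indicators the abstract mentions: request origin, entry pages, and error-message counts.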
Title: SCALABLE AND FLEXIBLE ELECTRONIC IDENTIFICATION Author(s): David Shaw, S. Maj Abstract: Verification of network service requests may be based on widely available identification and authentication services. Complexity or multiple access requirements may require access control artefacts such as hardware-based signature generators. Characteristics of artefact-generated signatures include security and repeatability. Electronic signatures used in remote transactions need to be graded, flexible and scalable to permit appropriate user selection. Further, inherent error detection may reduce inadvertent errors and misconduct and aid arbitration. Title: A SURVEY RESEARCH OF B2B IN IRAN Author(s): Javad Karimi Asl Abstract: EC is a relatively new concept in the business domain (Wigand, 1997). While the consumer side of the Web explosion has been much touted, it is the Business-to-Business (B2B) market that has quietly surpassed expectations. This paper is based on a survey of 102 business (or IS) managers in Iran and discusses the management practices, applications, problems and technical situations with regard to EC development in this country. The paper evaluates the B2B situation in Iran and discusses these managers' experiences with, and satisfaction with, the electronic commerce (EC) solutions currently in use. The findings of this paper are useful for both researchers and practitioners, as they provide insight into critical management issues which engage both developing countries' non-governmental organizations and policy makers. The results of this study show that there are many differences between the conditions of EC in developed and developing countries. Title: WHITE PAPER FOR FLOWAGENT PLATFORM Author(s): Wenjun Wang Abstract: FlowAgent is a network platform that implements the "Streamline Bus" with Jini network technology. 
"Streamline Bus" attempts to solve the problems that prevent us from integrating different applications across enterprises and organizations; it realizes task scheduling among different applications through pre-defined task data requiring/providing relations, and it can automatically provide workload balancing, dynamic failover and run-time data/performance tracking. One critical issue of the FlowAgent platform is how to define the internal format for the task running/scheduling data so that it both (1) provides the isolated applications with the data they require to run, and (2) controls the flow through the "Streamline service". Based on the "Streamline Bus", you can build large-scale scheduling systems that integrate applications from different business fields. Systems based on the "Streamline Bus" follow a fully distributed model and are very different from traditional "workflow systems", which depend on a centralized rule engine and place many limitations on the types of applications that can be integrated. Title: A MISSION-BASED MULTIAGENT SYSTEM FOR INTERNET APPLICATIONS Author(s): Glenn Jayaputera, Seng Loke, Arkady Zaslavsky Abstract: Software agents have been one of the most active research areas in the last decade. As a result, new agent technologies and concepts are emerging. Mobile agent technology has been used in real-life environments, such as on-line auctions, supply chains, information gathering, etc. In most situations, mobile agents must be created and carefully crafted to work together almost from scratch. We believe that this is quite inefficient for application developers and users, and hence propose a system for generating and coordinating agents based on the notion of agent missions. The prototype system is called eHermes, and its architecture and components are discussed in the paper. 
Title: KNOWLEDGE CONSTRUCTION IN E-LEARNING - DESIGNING AN E-LEARNING ENVIRONMENT Author(s): Lily Sun, Kecheng Liu, Shirley Williams Abstract: In the traditional classroom, students tend to depend on tutors for their motivation, direction, goal setting, progress monitoring, self-assessment, and achievement. A fundamental limitation is that students have little opportunity to conduct and manage their learning activities, which are important for knowledge construction. E-Learning approaches and applications, supported by pervasive technologies, have brought great benefits to society as a whole, but they have also raised many challenging questions. One issue of which researchers and educators are fully aware is that technologies alone cannot drive courseware design for e-Learning. Effective, high-quality learning requires the employment of appropriate learning theories and paradigms, organisation of contents, as well as methods and techniques of delivery. This paper introduces our research work in designing an e-Learning environment, with emphasis on the instructional design of courseware for e-learning. Title: THE FUTURE OF TELEPHONY: THE IP SOLUTION Author(s): Sílvia Fernandes Abstract: Enterprises have begun to transform their working environment to meet not only the business world of today but also the business world of tomorrow. Working methods are more flexible than ever before: some workers collaborate entirely from home and others work in several different offices, circulating between remote workplaces. In a short time the way we work will be so radically different that working will be just what we do, no longer where we are. As globalisation becomes a business reality and technology transforms communications, the world of data transmission together with wireless networks has progressed rapidly, whereas fixed, wire-line voice communications have barely changed. 
However, tariffs are still based on time and distance, even though this makes little sense in today's global marketplace, despite the reduced costs that have resulted from the deregulation of public telephone networks. Title: TOWARD A CLASSIFICATION OF INTERNET SCIENTIFIC CONFERENCES Author(s): Abed Ouhelli, Prosper Bernard, Michel Plaisent, Lassana Maguiraga Abstract: Since 1980, the classification of scientific production has been a constant concern for academics. Despite its growing importance in the last decade, the Internet has not been investigated as an autonomous domain. This communication relates our efforts to develop a first classification of themes based on calls for papers submitted to the ISWORLD community in the last two years. The distribution of themes and sub-themes is presented and compared. Title: WEB NAVIGATION PATTERNS Author(s): Eduardo Marques, Ana Cristina Bicharra Garcia Abstract: Many Internet service providers and online services require you to manually enter information, such as your user name and password, to establish a connection. With scripting support for Dial-Up Networking, you can write a script to automate this process. A script is a text file that contains a series of commands, parameters, and expressions required by your Internet service provider or online service to establish the connection and use the service. You can use any text editor, such as Microsoft Notepad, to create a script file. Once you have created your script file, you can assign it to a specific Dial-Up Networking connection by running the Dial-Up Scripting Tool. Title: DYNAMICALLY RECONSTRUCTIVE WEB SERVER CLUSTER USING A HIERARCHICAL GROUPING MECHANISM Author(s): Myong-soon Park, Sung-il Lim Abstract: The Internet is growing quickly, and the number of people using the WWW is increasing exponentially, so companies that offer Web services want to serve clients 24 hours a day, every day of the year. 
Therefore they use cluster systems for availability and performance. Previous work has given the dispatcher a static position, so if a node in the system fails, the whole system crashes. We need to make the dispatcher's position dynamic, as in the SASHA (Scalable Application-Space Highly-Available) Architecture. The SASHA Architecture is composed of COTS components, application-space software, agents and the Tokenbeat protocol for system administration. Because it organizes the system's nodes in a virtual ring, system administration overhead arises. This paper proposes improved fault detection and reconfiguration performance within the SASHA Architecture. Title: CUSTOMER LOYALTY IN E-BUSINESS Author(s): Bel G. Raggad, Jim Lawler Abstract: This study uses simulation to examine the effects of the privacy sensitivity of customers, the personalization practices or standards of retailers, and the difficulty in locating favorable sites on the loyalty of consumers to a Web site. The key finding of the study is that customer privacy sensitivity is a critical success factor that significantly impacts loyalty to a retailer. Customers have higher loyalty to sites that request the least information, and lower loyalty to sites that request the most information. Web retailers considering expanded personalization of products or services to customers, through increased personal information, need to rethink their practices. The study also found that difficulty in locating a favorable site is a success factor that impacts retailer loyalty, and that customers have higher loyalty to difficult-to-locate favorable sites on the Web. These findings are important at a time when consumers are empowered with Web technology to immediately shop competitor sites. 
The significance of privacy to loyalty is a factor that needs to be considered seriously by retailers if they are to compete for loyal customers, and this study furnishes a framework for effectively researching loyalty, personalization and privacy on the Web. Title: OPERATION-SUPPORT SYSTEM FOR LARGE-SCALE SYSTEM USING INFORMATION TECHNOLOGY Author(s): Seiji Koide, Riichiro Mizoguchi, Akio Gofuku Abstract: We are developing an operation support system for large-scale systems, such as rocket launches, using Information Technology. In the project, we build a multimedia database that organizes the diverse information and data produced in designing, testing, and practical launching; develop case-based and model-based troubleshooting algorithms and systems that automatically detect anomalies and rapidly diagnose their causes; and provide a fast networking environment that allows us to work with experts at a distance. The distributed collaborative environment, in which human operators and software agents can all work collaboratively, is being developed by means of Web service technologies such as UDDI, WSDL, and SOAP, and Semantic Web technologies such as RDF, RDFS, OWL, and Topic Maps. This project was prepared under contract within the Japanese IT program of the Ministry of Education, Culture, Sports, Science and Technology. Title: SIMULATION STUDY OF TCP PERFORMANCE OVER MOBILE IPV4 AND MOBILE IPV6 Author(s): Jiankun Hu, Damien Phillips Abstract: Mobile IPv6 (MIPv6) is a protocol to deal with mobility for the next-generation Internet (IPv6). However, the performance of MIPv6 has not yet been extensively investigated. Knowledge of how MIPv6 affects TCP performance, especially in comparison with MIPv4, can provide directions for further improvement. In this report, an intensive simulation study of TCP performance over MIPv4 and MIPv6 has been conducted. 
Simulations using the well-known network simulator NS-2 are used to highlight differences when TCP runs over these two Mobile IP protocols in a hybrid wireless environment. Initial simulations have shown a solid improvement in performance for MIPv6 when IPv6 Route Optimisation features are used. During the course of simulation, a consistent event causing dropped TCP throughput was identified: out-of-order arrival of packets occurs when the mobile node initiates a handover, and this out-of-order arrival falsely invokes TCP congestion control, which reduces throughput. The difference in overall throughput of MIPv4 compared to MIPv6 is roughly proportional to the difference in packet size attributed to IPv6's increased header size. Another contribution of this work is to provide modifications and new functions, such as node processing time, to the NS-2 simulator to make such investigation possible. To the best of our knowledge, no similar study has been published. Title: COLLABORATIVE ENGINEERING PORTAL Author(s): KRISHNA KUMAR RAMALEKSHMI SANKAR KUMAR, COLIN TAY, KHENG YEOW TAN, STEVEN CHAN, YONGLIN LI, SAI KONG CHIN, ZIN MYINT THANT Abstract: The Collaborative Engineering Portal (CE-Portal) is envisioned to be a comprehensive state-of-the-art infrastructure for facilitating collaborative engineering over the Web. This system offers Web-based collaborative use of High Performance Computing and Networking technology for product/process design that helps enterprises shorten design cycles. The platform allows government professionals and engineers to share information among themselves and to work together with their private counterparts as a virtual project team. The Collaborative Engineering Portal is developed as a multi-tiered system implemented using VNC and other Java technologies. In conclusion, we analyze the strengths, weaknesses, opportunities and threats of the approach. 
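The throughput-dropping event identified in the Mobile IP simulation abstract above (out-of-order arrival falsely invoking TCP congestion control) can be illustrated with a toy model of TCP's duplicate-ACK rule. This is a sketch of standard fast-retransmit behaviour under our own simplified segment numbering, not of the authors' NS-2 code:

```python
def dup_acks(arrival_order):
    """ACK stream a cumulative-ACK receiver emits, given segment numbers
    (0, 1, 2, ...) in the order they actually arrive."""
    expected, buffered, acks = 0, set(), []
    for seq in arrival_order:
        buffered.add(seq)
        while expected in buffered:
            expected += 1          # deliver any contiguous run of segments
        acks.append(expected)      # cumulative ACK = next segment still missing
    return acks

def fast_retransmit_triggered(acks, threshold=3):
    """True if some ACK value is repeated `threshold` extra times, i.e. the
    sender sees 3 duplicate ACKs and (falsely, here) halves its window."""
    run, last = 0, None
    for a in acks:
        run = run + 1 if a == last else 0
        last = a
        if run >= threshold:
            return True
    return False

in_order = dup_acks([0, 1, 2, 3, 4])   # no reordering: ACKs advance steadily
handover = dup_acks([0, 2, 3, 4, 1])   # segment 1 delayed by a handover
```

With no loss at all, mere reordering during the handover produces three duplicate ACKs, which is exactly the spurious congestion signal the abstract describes.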
Title: A SURVEY OF KNOWLEDGE BASE GRID FOR TRADITIONAL CHINESE MEDICINE Author(s): Jiefeng Xu, Zhaohui Wu Abstract: A knowledge base grid is a kind of grid that takes many knowledge bases as its foundation and knowledge sources. All these knowledge sources follow a public ontology standard defined by a standards organization. A knowledge base grid has its own specific domain knowledge, and so can be browsed at the semantic level. It also supports correlative browsing and knowledge discovery. In this paper, we introduce a generic knowledge base grid for Traditional Chinese Medicine. Its framework consists of three main parts: the Virtual Open Knowledge Base, the Knowledge Base Index, and the Semantic Browser. We examine the implementation in detail. Furthermore, the knowledge presentation and services of the knowledge base grid are discussed. Title: TOWARDS A SECURE MOBILE AGENT BASED M-COMMERCE SYSTEM Author(s): Ning Zhang, Omaima Bamasak Abstract: It is widely agreed that mobile agent technology, with its useful features, will provide the technical foundation for future m-commerce applications, as it can overcome the wireless network limitations of limited bandwidth and frequent disconnections, as well as mobile devices' weaknesses. In order for mobile agents to be accepted as a primary technology for enabling m-commerce, proper security mechanisms must be developed to address the new security issues they bring to the fore. The most challenging and difficult problem among them is that of protecting mobile agents against malicious hosts. Although, to the best of our knowledge, there is as yet no general solution to this problem, mechanisms that provide effective protection against specific attacks from hostile hosts have been proposed. 
This paper analyses the security requirements for a mobile agent in the context of m-commerce, surveys related work in relation to the requirements specified, and suggests the development of a framework that provides confidentiality of the data carried by a mobile agent by using a secret sharing scheme together with fair exchange and non-repudiation services. Title: NON-REPUDIATION AND FAIRNESS IN ELECTRONIC DATA EXCHANGE Author(s): Aleksandra Nenadic, Ning Zhang Abstract: In this paper we discuss two security issues, non-repudiation and fairness, in association with e-commerce applications. In particular, these issues are addressed in the context of electronic data exchange, which is one of the most commonly seen e-commerce applications. The paper gives a survey of approaches to non-repudiation and fair electronic data exchange protocols. We additionally discuss the current technologies that propose solutions to these issues, and the emerging standards in the area of business data formats and protocols for the exchange of such data. Finally, we discuss the architectural layer at which to implement the protocols for non-repudiation and fair data exchange. Title: SOMEONE: A COOPERATIVE SYSTEM FOR PERSONALIZED INFORMATION EXCHANGE Author(s): Layda Agosto, Laurence Vignollet, Pascal Bellec, Michel Plu Abstract: This paper presents a user-centric, social-media service: SoMeONe. Its goal is to build an information exchange network using Web informational networks. It should allow the construction of personal knowledge bases whose quality is improved by collaboration. It tries to increase users' commitment by helping them to establish and maintain interesting interactions with enriching people. Although many users are individualistic, the rules we define for this medium should encourage cooperative behaviour. The functionalities it offers lie between a bookmark management system and mailing lists. 
With SoMeONe, users exchange information with semantic addressing: they only need to annotate information for it to be diffused to appropriate users. Each user interacts only through a manually controlled contact network composed of known and trusted users. However, to keep each contact network open, SoMeONe helps each user to send information to new appropriate users. In return, the user expects these users to send him new information as well. In companies, where the Intranet is composed of huge amounts of heterogeneous and diverse information, such collective behaviour should increase the personal efficiency of each collaborator. Thus, SoMeONe provides solutions to some knowledge management problems, particularly for companies aware of the value of their social capital. Title: POTENTIAL ADVANTAGES OF SEMANTIC WEB FOR INTERNET COMMERCE Author(s): Yuxiao Zhao Abstract: The past decade saw much hype in the area of information technology. The emergence of the Semantic Web makes us ask whether it is another hype. This paper focuses on its potential application in Internet commerce and intends to answer the question to some degree. The contributions are: first, we identify and examine twelve potential advantages of applying the Semantic Web to Internet commerce; second, we conduct a case study of e-procurement in order to show its advantages for each process of e-procurement; lastly, we identify critical research issues that may turn the potential advantages into tangible benefits. Title: BUSINESS MODEL ANALYSIS APPLIED TO MOBILE BUSINESS Author(s): Giovanni Camponovo Abstract: Mobile business is a young, promising industry created by the emergence of wireless data networks. Like other emerging industries, it is characterized by a large number of uncertainties at different levels, in particular concerning technology, demand and strategy. 
This paper focuses on the strategic uncertainties: a large number of actors are trying various strategic approaches to position themselves as favourably as possible in the value system. As a consequence, they are experimenting with a number of innovative business models. We argue that successful business models are likely to be the ones that best address the economic peculiarities underlying this industry, such as mobility, network effects and natural monopolies. The paper presents the principal classes of actors that will participate in the mobile business industry and gives an overview of their business models based on a formalized ontology. Title: VOICEXML APPLIED TO A WIRELESS COMMUNICATION SYSTEM Author(s): FRANK WANG Abstract: The project aims to develop a wireless online communication system (Wireless Messenger) to aid communication for small-to-medium enterprises. By expressing automated voice services using VoiceXML, a vocal Web site is created in addition to the visual WML Web site. This wireless system links an out-of-office mobile phone and an in-house server. The functions of this system include posting and notification of messages internally, posting and notification of notices, setting and notification of events, calendar reference modules and administrative controls. Title: A NEW SOLUTION FOR IMPLEMENTATION OF A COLLABORATIVE BUSINESS PROCESS MODEL Author(s): Takaaki Kamogawa, Masao Matsumoto Abstract: This paper presents a Collaborative Business Process Model based on a Synchronized Theory. The Cisco case of co-working with suppliers is viewed in terms of business-process collaboration to identify issues concerning collaboration with suppliers. The authors also discuss past and present concepts of collaboration, and propose that it is necessary to combine a synchronized theory with a collaborative business process model. 
We propose a new solution for implementation of the Collaborative Business Process Model from the viewpoint of open infrastructure. Title: A DESIGN PROCESS FOR DEPLOYING B2B E-COMMERCE Author(s): Youcef Baghdadi Abstract: This paper emphasizes an architecture and design process for developing applications to support B2B electronic commerce, owing to its growth and its differences from other categories of e-commerce in many aspects. It first describes current architectures, reference models, approaches and implementing technologies. It then proposes an architecture with four abstraction levels: business process; decomposition and coordination; supporting electronic commerce services; implementing technology; and the interfaces between them. This abstraction aims to make B2B e-commerce process-driven rather than technology-driven, thus making business processes independent of the implementing technologies. Finally, a five-step design process in accordance with this architecture is described. Title: AN OBJECT ORIENTED IMPLEMENTATION OF BELIEF-GOAL-ROLE AGENTS Author(s): Walid Chainbi Abstract: One of the main driving forces behind multi-agent systems research and development is the Internet. Agents are populating the Internet at an increasingly rapid pace. Unfortunately, they are almost universally asocial. Accordingly, adequate agent concepts will be essential for agents in such open environments. To address this issue, we show in the first part of this paper that agents need to have communication concepts and organization concepts. We argue that instead of the usual approach of starting from a set of intentional states, the intentional structure should be deduced in terms of interaction. To this end, we come up with ontologies related to communication and organization. The second part of this paper presents a study comparing the agent paradigm with the object paradigm. 
We also show the capabilities, as well as the limits, of the object paradigm in dealing with the agent paradigm. We illustrate our work with the well-known prey/predator game. Title: BUILDING SUPPLY CHAIN RELATIONSHIPS WITH KNOWLEDGE MANAGEMENT: ENGINEERING TRUST IN COLLABORATIVE SYSTEMS Author(s): John Perkins, Ann-Karin Jorgensen, Lisa Barton, Sharon Cox Abstract: Collaborative systems are essential components of electronic supply chains. Barriers to collaboration are identified and a preliminary model for evaluating its characteristic features is proposed. Some features of knowledge management and knowledge management systems are briefly reviewed, and their application to the needs of collaborative system evaluation is explored. A process for iterative evaluation and review of collaborative system performance is proposed. Finally, a case study in the retail industry is used to assess the contribution of knowledge management concepts and systems to developing improved e-commerce performance in collaborative value networks. Title: WIDAM - WEB INTERACTION DISPLAY AND MONITORING Author(s): Hugo Gamboa, Vasco Ferreira Abstract: In this paper we describe the design and implementation of a system called Web Interaction Display and Monitoring (WIDAM). We have developed a web-based client-server application that offers several services: (i) real-time monitoring of the user interaction, to be used in synchronous playback (Synchronous Monitoring Service); (ii) real-time observation by other users (Synchronous Playback Service); (iii) storage of the user interaction information in the server database (Recording Service); and (iv) retrieval and playback of a stored monitored interaction (Asynchronous Playback Service). WIDAM allows the use of an interaction monitoring system directly over a web page, without the need for any installation, using low bandwidth compared to image-based remote display systems. 
We discuss several applications of the presented system, such as intelligent tutoring systems, usability analysis, system performance monitoring, and synchronous or asynchronous e-learning tools. Title: AN AGENT-MEDIATED MARKETPLACE FOR TRANSPORTATION TRANSACTIONS Author(s): Alexis Lazanas, Pavlos Moraitis, Nikos Karacapilidis Abstract: This paper reports on the development of an innovative agent-mediated electronic marketplace, which is able to efficiently handle transportation transactions of various types. Software agents of the proposed system represent and act for any user involved in a transportation scenario, while they cooperate and obtain the related information in real time. Our overall approach aims at the development of a flexible framework that achieves efficient communication among all parties involved, constructs the possible alternative solutions and performs the required decision-making. The system is able to handle the complexity that is inherent in such environments, which is mainly due to the frequent need of finding a "modular" transportation solution, that is, one that fragments the requested itinerary into a set of sub-routes that may involve different transportation means (trains, trucks, ships, airplanes, etc.). The system's agents cooperate upon well-specified business models, thus being able to manage all the necessary freighting and fleet scheduling processes in wide-area transportation networks. Title: ENGINEERING MULTIAGENT SYSTEMS BASED ON INTERACTION PROTOCOLS: A COMPOSITIONAL PETRI NET APPROACH Author(s): Sea Ling, Seng Wai Loke Abstract: Multiagent systems are useful in distributed systems where autonomous and flexible behaviour with decentralized control is advantageous or necessary. To facilitate agent interactions in multiagent systems, a set of interaction protocols for agents has been proposed by the Foundation for Intelligent Physical Agents (FIPA). 
These protocols are specified diagrammatically in an extension of UML called AUML (Agent UML) for agent communication. In this paper, we informally present a means to translate these protocols into equivalent Petri net specifications. Our Petri nets are compositional, and we contend that compositionality is useful since multiagent systems and their interactions are inherently modular, and it allows mission-critical parts of a system to be analysed separately. Title: ENHANCING NEWS READING EXPERIENCE THROUGH PERSONALIZATION OF NEWS CONTENT AND SERVICES USING INTELLIGENT AGENTS Author(s): Logandran Balavijendran, Soon Nyean Cheong, Azhar Kassim Mustapha Abstract: One of the most common things we use the Internet for is to read the news. But there is so much news, catering for so many people, that it often gets confusing and difficult to read what you want to read about. This system uses an intelligent agent to infer what the user is interested in and personalizes the news content accordingly. This is done by observing the user and determining short-term and long-term interests. To further enrich the experience, it provides features that allow the user to track specific news events and receive instant alerts; summarize news for a quick look before committing to an article; find background information to learn about the news; and search and filter results according to the user profile. It also provides a smart download tool that makes viewing heavy multimedia content practical without needing large bandwidth (by exploiting the irregular nature of Internet traffic and use). This agent is designed to work off the News on Demand Kiosk Network [1] and is implemented primarily in J2EE. Title: AN INTERNET ENABLED APPROACH FOR MRO MODELS AND ITS IMPLEMENTATION Author(s): Dennis F Kehoe, Zenon Michaelides, Peiyuan Pan Abstract: This paper presents an Internet-enabled approach for MRO applications, based on a discussion of different MRO models and their implementation architectures. 
This approach focuses on using e-business philosophy and Internet technology to meet the requirements of MRO services. The proposed e-MRO models are framework techniques. Different system architectures for this new approach are described, and available technologies for system implementation are also presented. Title: A NEW USER-ORIENTED MODEL TO MANAGE MULTIPLE DIGITAL CREDENTIALS Author(s): José Oliveira, Augusto Silva, Carlos Costa Abstract: E-commerce and e-services have become a major commodity reality. Aspects like electronic identification, authentication and trust are core elements in these web market areas. The use of electronic credentials and the adoption of a unique, worldwide-accepted digital certificate stored in a smart card would provide a higher level of security while allowing total mobility with secure transactions over the web. Until this adoption takes place, the widespread use of digital credentials will inevitably lead to each service client having to be in possession of the different electronic credentials needed for all the services he uses. We present a new approach that provides a user-oriented model to manage multiple electronic credentials, based on the utilization of only one smart card per user as a basis for secure management of web-based services, thus contributing to a more generalized use of the technology. Title: INTELLIGENT AGENTS SUPPORTED COLLABORATION IN SUPPLY CHAIN MANAGEMENT Author(s): Minhong WANG, Huaiqing WANG, Huisong ZHENG Abstract: In today's global marketplace, individual firms no longer compete as independent entities but rather as integral parts of supply chains. This paper addresses the approach of applying intelligent agent technology in supply chain management to cater for the increasing demand for collaboration between supply chain partners. A multi-agent framework for collaborative planning, forecasting and replenishment in supply chain management is developed. 
With concern for exception handling and flexible collaboration between partners, functions such as product activity monitoring, negotiation between partners, supply performance evaluation, and collaboration plan adjustment are proposed in the system. Title: FIDES - A FINANCIAL DECISION AID THAT CAN BE TRUSTED Author(s): Sanja Vranes, Snezana Sucurovic, Violeta Tomasevic, Mladen Stanojevic, Vladimir Simeunovic Abstract: FIDES is aimed at valuating investment projects in accordance with the well-known UNIDO standard and making recommendations on a preferable investment, based on multicriteria analysis of the available investment options. FIDES should provide a framework for analyzing key financial indicators using the discounted cash-flow technique, and also allow non-monetary factors to enter the multicriteria assessment process, whilst retaining an explicit and relatively objective and consistent set of evaluation conventions and clear decision criteria. Moreover, since virtually every investment and financing decision involving the allocation of resources under uncertain conditions is associated with considerable risk, FIDES should integrate a risk management module. The basic principle governing risk management is intuitive and well articulated, taking into account the investor's subjective appetite for and aversion to risk, and the decision's sensitivity to the uncertainty and/or imprecision of the input data. Thus, with FIDES, financial analysts and decision-makers will be provided with effective modeling tools in the absence of complete or precise information and in the significant presence of human involvement. The decision aid will be implemented using multiple programming paradigms (Internet programming, production rules, fuzzy programming, multicriteria analysis, etc.), using a three-tier architecture as a backbone. Being Web-based, the application is especially convenient for large, geographically dispersed corporations. 
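The discounted cash-flow technique that FIDES builds on reduces to standard net-present-value and internal-rate-of-return arithmetic, which can be sketched as follows (a minimal illustration with our own function names and a fabricated example project; the abstract does not describe FIDES' actual implementation):

```python
def npv(rate, cash_flows):
    """Net present value: cash_flows[t] is received at the end of year t,
    cash_flows[0] being the initial outlay (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return by bisection: the discount rate at which
    NPV is zero. Assumes a single sign change of NPV over [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid   # root lies in the lower half
        else:
            lo = mid   # root lies in the upper half
    return (lo + hi) / 2

project = [-100.0, 60.0, 60.0]   # invest 100 now, receive 60 for two years
```

A project is attractive at a given discount rate when its NPV is positive, i.e. when the rate is below the project's IRR; a multicriteria aid like FIDES would combine such indicators with non-monetary factors.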
Title: AGENT-BASED GENERIC SERVICES AND THEIR APPLICATION FOR THE MOBILE WORKFORCE Author(s): Makram Bouzid Abstract: In this paper we propose an architecture of agent-based services for the easy development of multi-agent applications. It is based on the notion of service components, which can be installed (“plugged”) into a communicative agent, and which can be composed in order to offer more sophisticated services. This architecture was validated through the design and development of a set of generic services for mobile workforce support within the European project LEAP. These generic services were then used to develop two multi-agent applications that assist the mobile workers of British Telecommunications and the German Automobile Club ADAC. Both have been tested in real-world conditions in the UK and Germany.

Title: AN EXTENSIBLE TOOL FOR THE MANAGEMENT OF HETEROGENEOUS REPRESENTATIONS OF XML DATA Author(s): Riccardo Torlone, Marco Imperia Abstract: In this paper we present a tool for the management and exchange of structured data in XML format, described according to a variety of formats and models. The tool is based on a novel notion of “metamodel” that embeds, on the one hand, the main primitives adopted by different schema languages for XML and, on the other hand, the basic constructs of traditional database conceptual models. The metamodel is used as a level of reference for the translation between heterogeneous data representations. The tool enables users to deal, in a uniform way, with various schema definition languages for XML (DTD, XML Schema and others) and with the ER model, as a representative of a traditional conceptual model. The model translation facility allows the user to switch from one representation to another and accounts for possible loss of information in this operation. Moreover, the tool is easily extensible, since new models and translations can be added to the basic set in a natural way.
The tool can be used to support a number of e-Business activities, such as information exchange between different organizations, integration of data coming from heterogeneous information sources, XML data design, and re-engineering of existing XML repositories.

Title: USER AUTHENTICATION FOR E-BUSINESS Author(s): James P H Coleman Abstract: There are many factors that need to be addressed before e-business is seen as a truly usable service by the ordinary customer. The most well-known factors are: the speed of access to the Internet and service providers; the cost of access to the Internet infrastructure; and the poor quality of a large number of e-business/e-commerce web sites – in particular aspects such as the interface and design. A less well-known, but perhaps equally important, factor is user authentication. User authentication is the process whereby the Service Provider (SP) is able to identify the person using the web site. This is normally done by a username/password combination. User authentication is important for SPs because if a product is ordered or a service is requested, the supplier needs to be reasonably confident that the order/request is valid and not a hoax. Unfortunately, the situation has arisen where a frequent web user may have accounts with many different SPs, e.g. their bank, telephone company, ISP, superannuation/pension fund, insurance company and government (often with different departments within the government). In these cases the SPs use a registration process where the user has a username and password. It is unfortunately usually the case that the username and password combinations differ between sites. This is a deterrent to the whole registration process, as people end up with multiple registrations.
There are many e-Gateway systems that offer a single point of logon, for example the UK e-Government Project, which aims to solve the problem at least within its own infrastructure. The very large private sector has no such mechanism. This paper investigates current e-Gateway systems (including those whose primary purpose is not necessarily user authentication) and proposes a model for a more universal e-Gateway.

Title: ON ESTIMATING THE AMOUNT OF LEARNING MATERIALS: A CASE STUDY Author(s): Matti Järvenmpää, Pasi Tyrväinen, Ari Sievänen Abstract: E-learning has been studied as the means to apply digital computers to educational purposes. Although the benefits of information and communication technology are obvious in several cases, there is still a lack of convincing measures for the value of using computers in education. This reflects the general difficulty of evaluating investments in information systems, known as the "IT investment paradox", which has not been solved so far. In this paper we approach the problem by estimating the amount of teaching and learning material in a target organisation, a university faculty. As expected, the volume of learning material dominates the communication of the faculty, forming about 95% of all communication volume and 78% to 82% of communication when measured with other metrics. The use of alternative communication forms in the target organisation was also analysed quantitatively. The study further indicates that the communication forms dominating the volume of communication are likely to be highly organisation-specific.
Title: E-COMMERCE ENGINEERING: A SHORT VS LONG SOFTWARE PROCESS FOR THE DEVELOPMENT OF E-COMMERCE APPLICATIONS Author(s): Andreas Andreou, Stephanos Mavromoustakos, Chrysostomos Chrysostomou, George Samaras, Andreas Pitsillides, Christos Schizas, Costas Leonidou Abstract: The immediacy of developing e-commerce applications, the quality of the services offered by these systems and the need for continuous evolution are primary issues that must be fully analysed and understood prior to and during the development process. In this context, the present work suggests a new development framework which aims at estimating the level of complexity a certain e-commerce system encompasses and at driving the selection of a long or short software process in terms of time and effort. The proposed framework utilizes a special form of Business Process Re-engineering (BPR) to define and assess critical business and organizational factors within small-to-medium enterprises (SMEs) wishing to adopt e-commerce. This set of factors is enriched with other critical issues belonging to the quality requirements of the system and to the type of services it aspires to offer. The set of critical factors identified is used to estimate the average complexity level of the system, using numerical values to describe the contribution of each factor to the overall complexity. The estimated level of complexity dictates the adoption of either a short or a long version of the well-known WebE process for analysing, designing and implementing the e-commerce system required by an SME.

Title: ARCHITECTURE OF AUTOMATIC RECOMMENDATION SYSTEM IN E-COMMERCE Author(s): Rajiv Khosla, Qiubang Li Abstract: Automatic recommendation systems will become an indispensable tool for customers shopping online. This paper proposes an architecture for an automatic recommendation system in e-commerce. The response time of the system, which is its main bottleneck, is addressed by high-performance computing.
The architecture has already been applied to an online banking system.

Title: ELECTRONIC JOB MARKETPLACES: A NEWLY ESTABLISHED MANIFESTATION OF E-BUSINESS Author(s): Georgios Dafoulas, Mike Turega, Athanasios Nikolaou Abstract: Finding suitable candidates for critical job posts is currently an issue of concern for most organizations. Consideration of cultural fit, experience, ability to adapt to the company's marketplace and ability to grow with the organisation all weigh heavily on the minds of most human resource professionals. Since the mid-90s a significant number of recruiting firms have been exploiting the Internet, mainly because of its global nature, which provides access to an unlimited pool of skills. Optimistic estimates see the Internet as a medium for conducting the recruitment and selection process in an online environment. This paper suggests developing an integrated Electronic Job Marketplace offering a new service in the Internet job market: online interviewing for screening candidate employees. In order to meet hiring objectives and control the increasing cost of recruiting, organisations could implement an online recruiting and selection process. The critical requirements of the new model are: eliminating paperwork, improving time-to-hire, reducing turnover, creating a resume- and position-centric environment, and using the Internet as a recruitment and selection tool.

Title: ONE-TO-ONE PERSONALIZATION OF WEB APPLICATIONS USING A GRAPH BASED MODEL Author(s): Georg Sonneck, Thomas Mück Abstract: Due to the maturity of current web technology, a large fraction of non-technically oriented IT end users are confronted with increasingly complex web applications. Such applications should help these end users fulfill their tasks in the most effective and efficient way. From this perspective there is little doubt that personalization issues play an important role in the era of web applications.
Several approaches already exist to support so-called Adaptive Hypermedia Systems, i.e., systems which are able to adapt their output behaviour to different user categories. In this paper, we focus on those personalization and customization issues of web applications raised by task-driven user interaction, and give as an example the interaction patterns caused by different users of a financial advisor system. To achieve this goal we propose, in a first step, a graph-based model representing the logical structure of web applications, a fully extensible XML schema description modelling the structure of the nodes in the graph, and a document type definition to store user profiles. In a second step, this basic model is augmented by process graphs corresponding to specific business tasks the web application can be used for, leading to a first form of personalization by assigning a user to a process task. We then show in a final step how matches between stored skills within the user profile and the node descriptions can lead to one-to-one personalization of the process graph.

Title: AN INVESTIGATION OF THE NEGOTIATION DOMAIN FOR ELECTRONIC COMMERCE INFORMATION SYSTEMS Author(s): Zlatko Zlatev, Pascal van Eck Abstract: To fully support business cycles, information systems for electronic commerce need to be able to conduct negotiation automatically. In recent years, a number of general frameworks for automated negotiation have been proposed. Application of such a framework in a specific negotiation situation entails selecting the proper framework and adapting it to the situation. This selection and adaptation process is driven by the specific characteristics of the situation. This paper presents a systematic investigation of these characteristics and surveys a number of frameworks for automated negotiation.
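The final personalization step in the graph-based model abstract above, matching skills stored in a user profile against node descriptions of a process graph, could be sketched roughly as below. The dictionary layout, node names and subset-matching rule are illustrative assumptions, not the authors' actual model:

```python
def personalize(process_graph, profile_skills):
    """Keep only the process-graph nodes whose required skills are all
    present in the user's profile; a crude stand-in for matching profile
    skills against node descriptions."""
    skills = set(profile_skills)
    return {node: required
            for node, required in process_graph.items()
            if set(required) <= skills}

# Hypothetical process graph for a financial advisor task: each node
# lists the skills its description assumes of the user.
graph = {
    "risk_questionnaire": ["basic"],
    "portfolio_overview": ["basic"],
    "derivatives_advice": ["basic", "derivatives"],
}

# A user with only basic skills sees the graph minus the expert node.
view = personalize(graph, ["basic"])
print(sorted(view))  # ['portfolio_overview', 'risk_questionnaire']
```

A real system would of course walk the graph structure rather than a flat dictionary; this only shows the skill-to-node matching idea.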
Title: COLLABORATOR - A COLLABORATIVE SYSTEM FOR HETEROGENEOUS NETWORKS AND DEVICES Author(s): Agostino Poggi, Matteo Somacher, Socrates Costicoglou, Federico Bergenti Abstract: This paper presents a software framework, called Collaborator, that provides a shared workspace supporting the activities of virtual teams. The system exploits the seamless integration of standard Web technologies with agent technologies, enhancing classic Web communication mechanisms to support the synchronous sharing of applications, and supports their use through emerging technologies such as the third generation of mobile networks and terminals and the new generation of home appliances. The system presented in the paper is the main result of Collaborator (IST-2000-30045), an ongoing European research project that aims at specifying and developing a distributed software environment to support efficient synchronous collaborative work between virtual teams, and that will experiment with such an environment in the construction and telecommunications sectors.

Title: SOFTWARE AGENTS TO SUPPORT ADMINISTRATION IN ASYNCHRONOUS TEAM ENVIRONMENTS Author(s): Roger Tagg Abstract: Current economic pressures are causing severe problems for many enterprises in maintaining service standards with shrinking headcounts. Front-line workers have had to shoulder runaway workloads. Software agent technologies have been widely advocated as a solution, but there are few reported success stories. In the author's previous work, a design was proposed for a system to support front-line staff in a team teaching environment. This system is based on a domain-specific screen desktop with drop boxes supported by a number of types of agent. This paper analyses the work these agents have to do and the technology needed to support them.
Title: IT INFRASTRUCTURE FOR SUPPLY CHAIN MANAGEMENT IN COMPANY NETWORKS WITH SMALL AND MEDIUM-SIZED ENTERPRISES Author(s): Marcel Stoer, Joerg Nienhaus, Nils Birkeland, Guido Menkhaus Abstract: The current trend of extending supply chain management beyond the company's walls focuses on the integration of suppliers and consumers into a single information network. The objective is to optimize costs and opportunities for everyone involved. However, small-sized enterprises can rarely carry the high acquisition and introduction costs of hardware and software. This reduces the attractiveness of the small-sized enterprise as a partner in a logistics and production network. This article presents a lean IT infrastructure that targets small-sized enterprises. It allows flexible and configurable integration with the Internet and ERP systems, and the secure communication of supply chain management data.

Title: AGENTS-MIDDLEWARE APPROACH FOR CONTEXT AWARENESS IN PERVASIVE COMPUTING Author(s): Karim Djouani, Abdelghani CHIBANI, Yacine AMIRAT Abstract: With the emergence of wireless distributed systems, embedded computing is becoming more pervasive. Users in continuous transition between handheld devices and fixed computers expect to maintain the same QoS. Thus, applications need to become increasingly autonomous, reducing interactions with users. The present paper deals with user mobility, context-aware embedded applications, distributed systems, and, in the general case, access to remote services through embedded middleware. The context in which such applications exist imposes constraints such as low bandwidth, frequent disconnections, and resource-poor devices (low CPU speed, little memory, low battery power, etc.). The first objective of our work is to show that the agent paradigm and its technologies have great potential to blossom fully in this new area. This allows the building of new and more effective pervasive applications.
Our vision, going beyond existing work on middleware and agents for pervasive computing, is to include the context-awareness capability in the previously introduced agents-middleware approach. Thus, we have proposed an agents-middleware architecture which is FIPA standard compliant. This approach is a logical continuation of several transitions in research results: from embedded middleware approaches, to lightweight agent platform approaches, and finally to a context-aware agents-middleware approach. In this way, we present the usefulness of the notion of context through two derived concepts: pervasive context and user profile. We have introduced two specialized agents within the agents-middleware that infer meta-data describing context information extracted from sources such as sensors, the user, system resources and the wireless network. On top of this agents-middleware, context-aware pervasive applications can be built. We also present our ongoing work and the applications targeted by our approach.

Title: TOXIC FARM: A COOPERATIVE MANAGEMENT PLATFORM FOR VIRTUAL TEAMS AND ENTERPRISES Author(s): Hala Skaf-Molli, Pascal Molli, Pradeep Ray, Fethi Rabhi, Gerald Oster Abstract: The proliferation of the Internet has revolutionized the way people work together for business. People located at remote places can collaborate across organizational and national boundaries. Although the Internet provides the basic connectivity, researchers all over the world are grappling with the problems of defining, designing and implementing web services that would help people collaborate effectively in virtual teams and enterprises. These problems are exacerbated by a number of issues, such as coordination, communication, data sharing, mobility and security.
Hence there is a strong need for multiple services (to address the above issues) through an open cooperative management platform supporting the design and implementation of virtual teams and enterprises in this dynamic business environment. This paper presents a cooperative management platform called Toxic Farm for this purpose and discusses its use in business applications.

Title: LEARNING USER PROFILES FOR INTELLIGENT SEARCH Author(s): Pasquale Lops, Marco Degemmis Abstract: The recent evolution of e-commerce has emphasized the need for services ever more receptive to the unique and individual requests of users. Personalization has become an important strategy in Business-to-Consumer commerce, where a user explicitly wants the e-commerce site to consider his own information, such as preferences, in order to improve access to relevant products. By analyzing the information provided by a customer, together with his browsing and purchasing history, a personalization system can learn a customer's personal preferences and store them in a personal profile used to provide intelligent search support. In this work, we propose a two-step profile generation process: in the first step, the system learns coarse-grained profiles in which the preferences are the product categories the user is interested in. In the second step, the profiles are refined by a probabilistic model of each preferred product category, induced from the descriptions of the products the user likes. Experimental results demonstrate the effectiveness of the proposed strategy.

Title: AGENT COMMUNICATION CHANNELS: TRANSPORT MECHANISMS Author(s): Qusay Mahmoud Abstract: Most of the work that has been done on agent communication has concentrated on ontologies – Agent Communication Languages (ACLs) that are used to describe the objects that agents manipulate. Little attention, if any, has been given to agent communication channels – the transport layer through which data is sent between agents.
Here we describe the different communication transport techniques that can be used to send data between agents, and then compare and contrast the different transport mechanisms. This is important, as the way agents communicate can have a significant effect on the performance of agent-based systems.

Title: IMPLEMENTING AN INTERNET-BASED VOTING SYSTEM - A PROJECT EXPERIENCE Author(s): Alexander Prosser, Robert Krimmer, Robert Kofler Abstract: Worldwide, research groups have developed remote electronic voting systems using several different approaches, with no legal basis. In 2001 the Austrian Parliament passed a law allowing electronic voting with digital signatures for public elections. Besides these legal requirements, an algorithm has to solve the basic technical problem of how to identify the user uniquely while still guaranteeing the anonymity of the vote and not allowing fraud by the election administration. In this paper the authors give an experience report on the implementation of the first phase of an algorithm that fulfills these requirements by strictly separating the registration phase from the vote submission phase.

Title: TOWARDS THE ENTERPRISES INFORMATION INFRASTRUCTURE BASED ON COMPONENTS AND AGENTS Author(s): Manuel Chi, Ernesto German, Matias Alvarado, Leonid Sheremetov, Miguel Contreras Abstract: An information infrastructure, as the means to bring together software applications within the enterprise, is the key component enabling cooperation and information and knowledge exchange in an open distributed environment. In this article, the component and agent paradigms for the integration of virtual enterprises are analyzed, and the advantages and drawbacks of the proposed solution are discussed. As an example of an infrastructure integrating both technologies, a Component Agent Platform (CAP) that uses DCOM as a particular component model for its implementation is described.
Finally, we discuss the interoperability issues of the proposed solution and outline directions for future work.

Title: GUARDIAN KNOWLEDGE FARM AGENTS AND SECURITY ARCHITECTURES: WEB SERVICES, XML, AND WIRELESS MAPPINGS Author(s): Britton Hennessey, Girish Hullur, Mandy McPherson, George Kelley Abstract: This paper merges the BDIP (beliefs, desires, intentions, and plans) rational agent model into the Jungian rational behavioral model. It also defines the key framework design dimensions and classified intelligences of knowledge farm network agents having the necessary know-how to function as trust and security guardians. The paper presents four practical example mappings of the converged BDIP-Jungian framework onto (1) seven design principles of computer systems security, (2) the web services security architecture, (3) the XML family systems security architecture, and (4) the wireless security architecture.

Title: ICS - AN AGENT MEDIATED E-COMMERCE SYSTEM: ONTOLOGIES USAGE Author(s): Sofiane Labidi Abstract: Electronic commerce has shown exponential growth in the number of users and the volume of commercial transactions. Recent advances in software agent technology allow agent-based electronic commerce, where agents are entities acting autonomously (or semi-autonomously) on behalf of companies or people negotiating in virtual environments. In this work, we propose ICS (an Intelligent Commerce System), a B2B e-commerce system based on intelligent and mobile software agent technology following the OMG MASIF standard. Three important features of ICS are emphasized here: the e-commerce lifecycle approach, the user modeling, and a proposed ontology for each phase of the lifecycle.

Title: IMPLEMENTATION OF MOBILE INFORMATION DEVICE PROFILE ON VIRTUAL LAB Author(s): Aravind Kumar Alagia Nambi Abstract: The rate at which information is produced in today's world is mind-boggling.
The information changes every minute, and today's corporate mantra is not “knowledge is power” but “timely knowledge is power”. Millions of dollars are won or lost due to information, or the lack of it. Business executives and corporate managers push their technology managers to provide information at the right time in the right form. They want information on the go and want to be connected to the Internet or their corporate network all the time. The rapid advancement of technology in the fields of miniaturization and communications has introduced many roaming devices through which people can connect to the network, such as laptops, PDAs, mobile phones and many embedded devices. Programming for these devices was cumbersome and limited, since each device supported its own standard I/O ports and screen resolution and had a specific configuration. The introduction of Java 2 Micro Edition (J2ME) has solved this problem to some extent. J2ME is divided into configurations and profiles, which provide specific information for a group of related devices. Mobile phones can be programmed using J2ME. If the mobility offered by cellular phones is combined with electrical engineering, many new uses can be found for existing electrical machines. It will also enable remote monitoring of electrical machines and of the various parameters involved in electrical engineering.

Title: GLI-BBS: A GROUPWARE BASED ON GEOGRAPHICAL LOCATION INFORMATION FOR FIELD WORKERS Author(s): Tatsunori Sasaki, Naoki Odashima, Akihiro Abe Abstract: Geographical Location Information (GLI) is information showing in which geographical position a person or an object is located. Using digital maps and digital photographs, we have developed a GLI-based Bulletin Board System (GLI-BBS), and we are promoting applications for various public works in local communities. Fieldworkers who participate in public works can use the GLI-BBS effectively to share information and to reach mutual agreement.
As an example of a concrete GLI-BBS application, a support system for road maintenance and management operations is taken up to examine important points in operation.

Title: SECURING INTERNET SERVERS FROM SYN FLOODING Author(s): Riaz Mahmood Abstract: Denial-of-Service (DoS) attacks utilize the vulnerabilities present in current Internet protocols and target end server machines with a flood of bogus requests – thus blocking the services to legitimate users. In this paper a counter-denial-of-service method called the Dynamic Ingress Filtering Algorithm (DIFA) is introduced. This algorithm aims to remove the network periphery's inability to counter spoof-based denial-of-service attacks originating from valid network prefixes. By virtue of its design, the Dynamic Ingress Filtering mechanism protects against spoof-based attacks originating from both valid and invalid network prefixes. This is because the rate of change of incoming traffic IP addresses is compared with a predefined threshold over a time limit: if the addresses from a particular source are changing rapidly, packets arriving from that host are not forwarded. Advantages of DIFA include design simplicity, scalability and reasonable implementation costs.

Title: WEB SERVICES SECURITY MODEL BASED ON TRUST Author(s): Luminita Vasiu Abstract: The concept of Web services is the latest in the evolution of ever more modular and distributed computing. Web services represent a fairly simple extension of existing component models, such as Microsoft's Component Object Model (COM) or Sun's Enterprise Java Bean (EJB) specification. It is obvious that Web services have what it takes to change something important in the distributed programming field. But until they do, developers will have some difficulty figuring out how to solve and eliminate the problems that appear when trying to build heterogeneous applications. In an open environment, security is always an issue.
To overcome this problem, the main challenge is to understand and assess the risk involved in securing a Web-based service. How do you guarantee the security of a bank transaction service? Efforts are being made to develop security mechanisms for Web services. Standards like SAML, XKMS and SOAP security will probably be used in the future to guarantee protection for both consumers and services. In this paper we analyse some security issues faced by Web services and present a security model based on trust, which supports more specific models such as identity-based security and access control lists.

Title: A MULTI-AGENT ARCHITECTURE FOR DYNAMIC COLLABORATIVE FILTERING Author(s): Gulden Uchyigit, Keith Clark Abstract: Collaborative filtering systems suggest an item to a user because it is highly rated by some other user with similar tastes. Although these systems are achieving great success in web-based applications, the tremendous growth in the number of people using them requires performing many recommendations per second for millions of users. Technologies are needed that can rapidly produce high-quality recommendations for large communities of users. In this paper we present an agent-based approach to collaborative filtering, where agents work on behalf of their users to form shared “interest groups”, a process of pre-clustering users based on their interest profiles. These groups are dynamically updated to reflect the users' evolving interests over time. We further present a multi-agent-based simulation of the architecture as a means of evaluating the system.

Title: POLICIES COMPOSITION THROUGH GRAPHICAL COMPONENTS Author(s): Rui Lopes, Vitor Roque, Jose Luis Oliveira Abstract: Policy-based management has gained growing importance in the last two years.
New demands on internetworking, on services specification, on QoS achievement and, generically, on network management functionality have driven this paradigm to a very important level. The main idea is to provide services that allow specifying management and operational rules in the same way people do business. Although the main focus of this technology has been associated with network management solutions, its generality allows these principles to be extended to any business process inside an organization. In this paper we discuss the main proposals in the field, namely the IETF/DMTF model, and we present a proposal that allows the specification of policy rules through a user-friendly, component-oriented graphical interface.

Title: TOWARDS AGENT BASED BUSINESS INFORMATION SYSTEMS AND PROCESS MANAGEMENT Author(s): Johann Sievering, Jean-Henry Morin Abstract: Today's Business Information Systems and Business Intelligence applications have become key instruments of corporate management. They have evolved over time into a mature discipline within IT departments. However, they appear to be slow at integrating emerging technologies offering major improvements to trading partners in the global networked ecosystem. The Internet is slowly evolving towards Peer-to-Peer architectures and grid computing, Agent-Oriented Programming, Digital Rights and Policy Management, trusted computing, ontologies and semantics. These evolutions are setting the ground and requirements for the future of corporate IT. This paper reports on current investigations and developments on this issue, making the case for the integration of emerging technologies in Business Information Systems. In particular, mobile agents and peer-to-peer computing offer major advantages in terms of technical architectures as well as a programming paradigm shift. We are currently working on a framework addressing these issues towards Active Business Objects.
Title: ANALYSIS OF BUSINESS TO BUSINESS ELECTRONIC MARKETS IN CHINA: THEORETICAL AND PRACTICAL PERSPECTIVES Author(s): Jing Zhao Abstract: In China, electronic markets (e-markets) are in the early stages of development. They have unique characteristics in e-commerce activities and market mechanisms, which are largely a function of the current industry structure, financial infrastructure and organization structure. This paper adopts an interactive e-market space view and proposes an interactive e-commerce model for studying e-commerce activities and strategies in the e-markets of China. Building on this theoretical insight, the model draws attention to the e-commerce process in which buyers and sellers, the virtual market manager and its business partners are linked and in which web-based communication and collaboration take place, and to the innovative market mechanisms adopted. The e-commerce process can be modelled by separating the main business activities into four phases designed to exploit business opportunities. The model is applied to analyse one successful B2B exchange in China. It offers an effective approach to studying the dynamic structure of transactions and a high-performance e-commerce strategy. Our research identifies four levers of e-market capability. These capabilities imply an e-market's potential to achieve and sustain a new level of e-commerce strategy performance, and a more competitive position, in the rapidly changing B2B electronic market of China.

Title: MEMBERSHIP PORTAL AND SERVICE PROVISIONING SYSTEM FOR AN INFRASTRUCTURE OF HUBS: MANAGED E-HUB Author(s): Liang-Jie Zhang, Henry Chang, Zhong Tian, Shun Xiang Yang, Ying Nan Zuo, Jing Min Xu, Tian Chao Abstract: The goal of the Managed e-Hub research prototype is to build a common infrastructure of hubs on which businesses can develop B2B exchanges meeting their business needs.
In this paper, an open and extensible framework for Managed e-Hub is presented, and the fundamental hub services are discussed in detail. The service provisioning system of Managed e-Hub not only provides a way of integrating other services into the hub by means of service on-boarding and subscription, but also provisions these services with their required provisioning information. Title: APPLICATION SCENARIOS FOR DISTRIBUTED MANAGEMENT USING SNMP EXPRESSIONS Author(s): Rui Lopes Abstract: Management distribution is a long-standing topic in terms of the number of proposed solutions and publications. Recently, the DISMAN workgroup suggested a set of MIB modules to address this matter in the context of SNMP. One of the DISMAN modules, the Expression MIB, has the capability of using expressions to perform decentralized processing of management information. Although it has existed for some time now, its capabilities are not very well known. In fact, other DISMAN MIBs, such as the Schedule MIB and the Script MIB, have already received attention in several papers and are the target of very solid work, whereas there are hardly any papers describing the Expression MIB and its functionality. This paper contributes to filling this gap by describing our implementation effort around it as well as some real-world applications for it. Title: AGENTAPI: AN API FOR THE DEVELOPMENT OF MANAGED AGENTS Author(s): Rui Lopes Abstract: Managed agents, namely SNMP agents, cost too much to develop, test and maintain. Although designed for simplicity from its origins, the SNMP model has several intrinsic aspects that make the development of management applications a complex task. However, there are tools available that intend to simplify this process by automatically generating code based on the management information definition. Unfortunately, these tools are usually complicated to use and require a strong background in programming and network management.
This paper describes an API for managed agent development which also provides multiprotocol capabilities. Without changing the code, the resulting agent can be managed by SNMP, web browsers, WAP browsers, CORBA or any other access method, either simultaneously or individually. Title: AUTOMATIC E-COMMERCE USING A MOBILE AGENTS MODEL Author(s): Francesco Maria Raimondi, Salvatore Pennacchio Abstract: Business-to-business electronic commerce using mobile agents is one of the most important future promises, and a good result, of code mobility. As we will show, the classic commerce model and the electronic commerce model both introduce advantages and disadvantages. Electronic commerce through mobile agents aims to eliminate the defects and combine the advantages of the previous models. In particular, it takes its cue from sales negotiation, in which decisions have to be taken. Title: A TIME ZONE BASED DYNAMIC CACHE REPLACEMENT POLICY Author(s): Srividya Gopalan, Kanchan Sripathy, Sridhar Varadarajan Abstract: This paper proposes LUV, a novel time-zone-based cache replacement policy intended for web traffic in the context of a hybrid cache management strategy. The LUV replacement policy is based on ranking web objects on a set of metrics intercepted by a proxy server. Further, in order to maximize the hit rate, the cache replacement policy makes use of immediate past access patterns for individual web objects with respect to various time zones. Title: BOOTSTRAPPING THE SEMANTIC WEB BY DEVELOPING A MULTI-AGENT SYSTEM TO FACILITATE ONTOLOGY REUSE: A RESEARCH AGENDA Author(s): Abir Qasem Abstract: Ontologies are basic components of the Semantic Web but are difficult to build, and this acts as a bottleneck in the spread of the Semantic Web. Reuse is seen as one of the solutions to this problem.
This paper addresses the feasibility of a multi-agent system that will automatically identify appropriate reusable ontologies and thereby greatly reduce the burden on its users. First, the area of automated software component reuse is reviewed and borrowed from in order to develop an appropriate framework. Next, a research agenda is proposed for developing this type of multi-agent system for ontology reuse. Finally, it is argued that the proposed multi-agent system will enable faster deployment of the Semantic Web by making the ontology development process more efficient and the developed ontologies more robust and interoperable. This use of agents may help to bootstrap the Semantic Web itself by leveraging the emerging Semantic Web architecture and contributing to its growth. Title: A DYNAMIC AND SCALABLE AGENT-BASED APPROACH FOR KNOWLEDGE DISCOVERY: WEB SITE EXPLORATION Author(s): Aurelio López López, Alberto Méndez Torreblanca Abstract: The World Wide Web has become an open world of information with continuous growth. This dynamic nature causes several difficulties for discovering potentially useful knowledge from the web. The techniques of web mining and software agents can be combined to address this problem. In this paper, we propose a dynamic and scalable agent-based approach for knowledge discovery from specific web sites, where information is constantly added or removed, or the structure is permanently modified. We also report preliminary results of the approach for the exploration of web sites. Title: INTELLIGENT SOFTWARE AGENTS IN THE KNOWLEDGE ECONOMY: Author(s): Mahesh S. Raisinghani Abstract: Intelligent agent technology is emerging as one of the most important and rapidly advancing areas in information systems and e-business.
There is a tremendous explosion in the development of agent-based applications in a variety of fields such as electronic commerce, supply chain management, resource allocation, intelligent manufacturing, industrial control, information retrieval and filtering, collaborative work, decision support, and computer games. While research on various aspects of intelligent agent technology and its application is progressing at a very fast pace, this is only the beginning. There are still a number of issues that have to be explored in terms of agent design, implementation, and deployment. For example, salient characteristics of agents in different domains, formal approaches for agent-oriented modeling, designing and implementing agent-oriented information systems, agent collaboration and coordination, and organizational impact of agent-based systems are some of the areas in need of further research. The purpose of this paper is to identify and explore the issues, opportunities, and solutions related to intelligent agent modeling, design, implementation, and deployment.
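The time-zone-based ranking idea in the LUV abstract above can be illustrated with a small sketch. The bucketing of the day into zones and the weighting of past accesses here are assumptions made for illustration; the paper's actual LUV metrics and ranking may differ:

```python
# Illustrative sketch of a time-zone-aware cache replacement score, loosely
# inspired by the LUV abstract above; the division of the day into zones and
# the 2x weight on the current zone are assumptions, not the paper's policy.
from collections import defaultdict


class TimeZoneCache:
    def __init__(self, capacity: int, zones: int = 4):
        self.capacity = capacity
        self.zones = zones                             # e.g. four six-hour zones per day
        self.hits = defaultdict(lambda: [0] * zones)   # per-object, per-zone access counts
        self.store = {}

    def _zone(self, hour: int) -> int:
        """Map an hour of the day to one of the time-zone buckets."""
        return (hour % 24) * self.zones // 24

    def _score(self, key, hour: int) -> float:
        """Rank an object by past accesses, weighting the current zone most."""
        z = self._zone(hour)
        counts = self.hits[key]
        return counts[z] * 2.0 + sum(c for i, c in enumerate(counts) if i != z)

    def access(self, key, value, hour: int):
        """Record an access; on a miss with a full cache, evict the lowest-scored object."""
        self.hits[key][self._zone(hour)] += 1
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self._score(k, hour))
            del self.store[victim]
        self.store[key] = value
```

For example, with a capacity of two, an object accessed repeatedly in the current time zone outscores one accessed once, so the latter is the eviction victim when a third object arrives.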

Page Updated on 27-05-2003

Copyright © Escola Superior de Tecnologia de Setúbal, Instituto Politécnico de Setúbal