Latest Articles from JUCS - Journal of Universal Computer Science
Latest 27 articles from https://lib.jucs.org/ (feed generated Fri, 29 Mar 2024 15:07:56 +0200)

Enhancing EEG-based emotion recognition using PSD-Grouped Deep Echo State Network https://lib.jucs.org/article/98789/ JUCS - Journal of Universal Computer Science 29(10): 1116-1138

DOI: 10.3897/jucs.98789

Authors: Samar Bouazizi, Emna Benmohamed, Hela Ltifi

Abstract: Emotions are a crucial aspect of daily life and play a vital role in shaping human interactions. The purpose of this paper is to introduce a novel approach to recognizing human emotions through the use of electroencephalogram (EEG) signals. To recognize these signals for emotion prediction, we employ a paradigm of Reservoir Computing (RC) called the Echo State Network (ESN). In our analysis, we focus on two specific classes of emotion recognition: H/L Arousal and H/L Valence. We suggest using the Deep ESN model in conjunction with the Welch Power Spectral Density (Welch PSD) method for emotion classification and feature extraction. Furthermore, we feed the selected features to a grouped ESN for recognizing emotions. Our approach is validated on the well-known DEAP benchmark, which includes EEG data from 32 participants. The proposed model achieved 89.32% accuracy for H/L Arousal and 91.21% accuracy for H/L Valence on the DEAP dataset. The obtained results demonstrate the effectiveness of our approach, which yields good performance compared to existing models of EEG-based emotion analysis.
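
For context, a minimal sketch of the Welch PSD feature-extraction step is shown below. It is not the authors' pipeline: it assumes EEG stored as a (channels × samples) NumPy array sampled at 128 Hz (the DEAP preprocessing rate), uses scipy.signal.welch, and the band boundaries are our own assumptions; the downstream grouped Deep ESN classifier is omitted.

```python
import numpy as np
from scipy.signal import welch

# Standard EEG frequency bands (Hz); the band choice is an assumption, not taken from the paper.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def welch_band_powers(eeg, fs=128, nperseg=256):
    """eeg: array of shape (n_channels, n_samples). Returns (n_channels, n_bands) band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=nperseg, axis=-1)  # psd: (n_channels, n_freqs)
    features = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        # Integrate the PSD over each band (trapezoidal rule) to get band power.
        features.append(np.trapz(psd[:, mask], freqs[mask], axis=-1))
    return np.column_stack(features)

# Example: 32 channels, 60 s of signal at 128 Hz.
X = welch_band_powers(np.random.randn(32, 60 * 128))
print(X.shape)  # (32, 4)
```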

Research Article Sat, 28 Oct 2023 18:00:03 +0300
Extracting concepts from triadic contexts using Binary Decision Diagram https://lib.jucs.org/article/67953/ JUCS - Journal of Universal Computer Science 28(6): 591-619

DOI: 10.3897/jucs.67953

Authors: Julio Cesar Vale Neves, Luiz Enrique Zarate, Mark Alan Junho Song

Abstract: Due to the high complexity of real problems, a considerable amount of research that deals with high volumes of information has emerged. The literature has considered new applications of data analysis for high dimensional environments in order to manage the difficulty in extracting knowledge from a database, especially with the increase in social and professional networks. Triadic Concept Analysis (TCA) is a technique used in the applied mathematical area of data analysis. Its main purpose is to enable knowledge extraction from a context that contains objects, attributes, and conditions in a hierarchical and systematized representation. There are several algorithms that can extract concepts, but they are inefficient when applied to large datasets because the computational costs are exponential. The objective of this paper is to add a new data structure, binary decision diagrams (BDD), in the TRIAS algorithm and retrieve triadic concepts for high dimensional contexts. BDD was used to characterize formal contexts, objects, attributes, and conditions. Moreover, to reduce the computational resources needed to manipulate a high volume of data, the usage of BDD was implemented to simplify and represent data. The results show that this method has a considerably better speedup when compared to the original algorithm. Also, our approach discovered concepts that were previously unachievable when addressing high dimensional contexts.
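
As a small illustration of the triadic setting (not the TRIAS algorithm or its BDD encoding), the sketch below stores a triadic context as (object, attribute, condition) triples and computes one derivation: the conditions under which every chosen object carries every chosen attribute. All names in the example context are made up.

```python
# A toy triadic context: incidences (object, attribute, condition).
context = {
    ("o1", "a1", "c1"), ("o1", "a2", "c1"), ("o1", "a1", "c2"),
    ("o2", "a1", "c1"), ("o2", "a2", "c1"),
}

def derive_conditions(context, objects, attributes):
    """Conditions c such that (o, a, c) holds for every o in objects and a in attributes."""
    conditions = {c for (_, _, c) in context}
    return {c for c in conditions
            if all((o, a, c) in context for o in objects for a in attributes)}

print(derive_conditions(context, {"o1", "o2"}, {"a1", "a2"}))  # {'c1'}
```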

Research Article Tue, 28 Jun 2022 10:00:00 +0300
Bloom filter variants for multiple sets: a comparative assessment https://lib.jucs.org/article/74230/ JUCS - Journal of Universal Computer Science 28(2): 120-140

DOI: 10.3897/jucs.74230

Authors: Luca Calderoni, Dario Maio, Paolo Palmieri

Abstract: In this paper we compare two probabilistic data structures for association queries derived from the well-known Bloom filter: the shifting Bloom filter (ShBF), and the spatial Bloom filter (SBF). With respect to the original data structure, both variants add the ability to store multiple subsets in the same filter, using different strategies. We analyse the performance of the two data structures with respect to false positive probability, and the inter-set error probability (the probability that an element in the set is recognised as belonging to the wrong subset). As part of our analysis, we extended the functionality of the shifting Bloom filter, optimising the filter for any non-trivial number of subsets. We propose a new generalised ShBF definition with applications outside of our specific domain, and present new probability formulas. Results of the comparison show that the ShBF provides better space efficiency, but at a significantly higher computational cost than the SBF.
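
The following toy sketch illustrates the general idea of a filter that stores multiple subsets, loosely in the spirit of the spatial Bloom filter: cells keep the largest subset label written into them, and a query returns the minimum label over an element's cells. Filter size, hashing scheme and labels are assumptions for illustration, not the construction analysed in the paper.

```python
import hashlib

class SpatialBloomFilterToy:
    """Toy multi-set Bloom filter: each cell stores a subset label (0 = empty).
    Insertion keeps the largest label seen; a query returns the minimum label
    over the element's cells, or 0 if any cell is empty."""

    def __init__(self, m=1024, k=4):
        self.m, self.k, self.cells = m, k, [0] * m

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item, label):
        for p in self._positions(item):
            self.cells[p] = max(self.cells[p], label)

    def query(self, item):
        values = [self.cells[p] for p in self._positions(item)]
        return 0 if 0 in values else min(values)

sbf = SpatialBloomFilterToy()
sbf.add("alice", 1)   # subset 1
sbf.add("bob", 2)     # subset 2
print(sbf.query("alice"), sbf.query("carol"))  # 1 0 (barring false positives)
```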

Research Article Mon, 28 Feb 2022 11:00:00 +0200
From Classical to Fuzzy Databases in a Production Enterprise https://lib.jucs.org/article/24135/ JUCS - Journal of Universal Computer Science 26(11): 1382-1401

DOI: 10.3897/jucs.2020.073

Authors: Izabela Rojek, Dariusz Mikołajewski, Piotr Kotlarz, Alžbeta Sapietová

Abstract: This article presents the evolution of databases from classical relational databases to distributed databases and data warehouses to fuzzy databases used in a production enterprise. This paper discusses characteristics of this kind of enterprise. The authors precisely define centralized and distributed databases, data warehouses and fuzzy databases. In the modern global world, many companies change their management strategy from one based on a centralized database to an approach based on distributed database systems. Growing expectations regarding business intelligence encourage companies to deploy data warehouses. New solutions are sought as the demand for engineers' expertise continues to rise. The requested knowledge can be certain or uncertain. Certain knowledge does not pose any problems and is easy to obtain. However, uncertain knowledge requires new ways of obtaining it, including the use of fuzzy logic. This is where the fuzzy database approach has its origins. The above-mentioned strategies of a production enterprise are described herein as a case of special interest.

Research Article Sat, 28 Nov 2020 00:00:00 +0200
Parallel Fast Sort Algorithm for Secure Multiparty Computation https://lib.jucs.org/article/23151/ JUCS - Journal of Universal Computer Science 24(4): 488-514

DOI: 10.3217/jucs-024-04-0488

Authors: Zbigniew Marszałek

Abstract: The use of encryption methods such as secure multiparty computation is an important issue in applications. Applications that use encryption of information require special algorithms for sorting data in order to preserve the secrecy of the information. The method proposed here is designed for parallel architectures. The presented algorithm works with a number of logical processors, among which operations are flexibly distributed, so sorting data sets takes less time. Results of the experimental tests confirm the effectiveness of the proposed flexible division of tasks between logical processors and show that this proposition is a valuable method that can find many practical applications in high performance computing.
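
As a generic illustration of distributing a sort across several logical processors (an ordinary parallel merge sort, not the flexible task-division algorithm proposed in the paper), a sketch:

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge

def parallel_sort(data, workers=4):
    """Split the input into chunks, sort the chunks on separate logical processors,
    then merge the sorted runs into one sorted list."""
    chunk = max(1, len(data) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, parts))
    return list(merge(*runs))

if __name__ == "__main__":
    import random
    data = [random.random() for _ in range(100_000)]
    assert parallel_sort(data) == sorted(data)
```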

Research Article Sat, 28 Apr 2018 00:00:00 +0300
All-Pairs Shortest Paths Algorithm for Regular 2D Mesh Topologies https://lib.jucs.org/article/23669/ JUCS - Journal of Universal Computer Science 22(11): 1437-1455

DOI: 10.3217/jucs-022-11-1437

Authors: Vladimir Ciric, Aleksandar Cvetkovic, Ivan Milentijevic, Oliver Vojinovic

Abstract: Motivated by the large number of vertices that future technologies will put in front of path-search algorithms, and inspired by the highly regular 2D mesh structures that exist in the domain applications, in this paper we propose a new all-pairs shortest paths algorithm, for any given regular 2D mesh topology, with complexity O(|V|²), where |V| is the number of vertices in the graph. The proposed algorithm can achieve better runtime than other known algorithms at the cost of narrowing the scope of the graphs that it can process to graphs with regular 2D topology. The algorithm is developed formally by algebraic transformations, in tropical algebra, of the well-known Floyd-Warshall algorithm. First we prove the equivalence of the Floyd-Warshall algorithm and its tropical algebraic representation, and put the transformations of the algorithm into the algebraic domain. Secondly, having in mind the structure of the target class of graphs, we transform the original algorithm in the algebraic domain and develop a simple, low-complexity iterative algorithm for all-pairs shortest paths calculation. The decrease in computational complexity can contribute to better exploitation of the algorithm in a wide range of applications, from hardware design in new emerging technologies to big data problems in information technologies.
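
The tropical-algebra reading of Floyd-Warshall mentioned above can be sketched as follows. This is the generic O(|V|³) algorithm on an arbitrary weighted graph, not the paper's O(|V|²) specialisation for regular 2D meshes; in (min, +) terms each relaxation is d[i][j] = d[i][j] ⊕ (d[i][k] ⊗ d[k][j]) with ⊕ = min and ⊗ = +.

```python
INF = float("inf")

def floyd_warshall(weights):
    """weights: dict (u, v) -> edge weight over vertices 0..n-1.
    Returns the all-pairs shortest-path distance matrix."""
    n = 1 + max(max(u, v) for u, v in weights)
    d = [[0 if i == j else weights.get((i, j), INF) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Tropical update: min (⊕) over the path through k, whose length is a sum (⊗).
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

print(floyd_warshall({(0, 1): 1, (1, 2): 2, (0, 2): 5})[0][2])  # 3
```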

Research Article Tue, 1 Nov 2016 00:00:00 +0200
A Proposal for Recommendation of Feature Selection Algorithm based on Data Set Characteristics https://lib.jucs.org/article/23272/ JUCS - Journal of Universal Computer Science 22(6): 760-781

DOI: 10.3217/jucs-022-06-0760

Authors: Saptarsi Goswami, Amlan Chakrabarti, Basabi Chakraborty

Abstract: Feature selection is an important prerequisite of any pattern recognition, machine learning or data mining problem. A lot of algorithms for feature subset selection have been developed so far for reduction of dimensionality of the data set in order to achieve high recognition accuracy with low computational cost. However, some methods or algorithms work well for some of the data sets and perform poorly on others. For any particular data set, it is difficult to find out the most suitable algorithm without some random trial and error process. It seems that the characteristics of the data set might have some effect on the algorithm for feature selection. In this work, data set characteristics are studied for recommendation of the appropriate feature selection algorithm to be used for a particular data set. A new proposal in terms of intra-attribute relationships and a measure MVS (multivariate score) has been introduced to quantify and group different data sets, on the basis of the data set correlation structure, into several categories. The measure is used to group 63 publicly available benchmark data sets according to their characteristics. The performance of different feature selection algorithms on different groups of data is then studied by simulation experiments to verify the relationship of data set characteristics and the feature selection algorithm. The effect of some other data set characteristics has also been studied. Finally, a framework of recommendation regarding the choice of the proper feature selection algorithm is indicated.
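
The MVS measure itself is not reproduced here; as a rough, illustrative proxy for the correlation structure it is meant to capture, the sketch below computes the mean absolute pairwise correlation between attributes, which already separates weakly from strongly correlated data sets.

```python
import numpy as np

def mean_abs_correlation(X):
    """X: (n_samples, n_features). Mean absolute off-diagonal Pearson correlation,
    an illustrative stand-in for a multivariate correlation-structure score."""
    corr = np.corrcoef(X, rowvar=False)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return float(np.mean(np.abs(off_diag)))

rng = np.random.default_rng(0)
weak = rng.normal(size=(500, 10))                         # nearly independent attributes
strong = weak[:, :1] + 0.1 * rng.normal(size=(500, 10))   # strongly correlated attributes
print(mean_abs_correlation(weak), mean_abs_correlation(strong))
```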

Research Article Wed, 1 Jun 2016 00:00:00 +0300
Design and Implementation of an Extended Corporate CRM Database System with Big Data Analytical Functionalities https://lib.jucs.org/article/23257/ JUCS - Journal of Universal Computer Science 21(6): 757-776

DOI: 10.3217/jucs-021-06-0757

Authors: Ana Torre-Bastida, Esther Villar-Rodriguez, Sergio Gil-Lopez, Javier Ser

Abstract: The amount of open information available on-line from heterogeneous sources and domains is growing at an extremely fast pace, and constitutes an important knowledge base for the consideration of industries and companies. In this context, two relevant data providers can be highlighted: the "Linked Open Data" (LOD) and "Social Media" (SM) paradigms. The fusion of these data sources - structured the former, and raw data the latter -, along with the information contained in structured corporate databases within the organizations themselves, may unveil significant business opportunities and competitive advantage to those who are able to understand and leverage their value. In this paper, we present two complementary use cases, illustrating the potential of using the open data in the business domain. The first represents the creation of an existing and potential customer knowledge base, exploiting social and linked open data based on which any given organization might infer valuable information as a support for decision making. The second focuses on the classification of organizations and enterprises aiming at detecting potential competitors and/or allies via the analysis of the conceptual similarity between their participated projects. To this end, a solution based on the synergy of Big Data and semantic technologies will be designed and developed. The first will be used to implement the tasks of collection, data fusion and classification supported by natural language processing (NLP) techniques, whereas the latter will deal with semantic aggregation, persistence, reasoning and information retrieval, as well as with the triggering of alerts based on the semantized information.

Research Article Mon, 1 Jun 2015 00:00:00 +0300
Assessing the Impact of the homeML Format and the homeML Suite within the Research Community https://lib.jucs.org/article/23952/ JUCS - Journal of Universal Computer Science 19(17): 2559-2576

DOI: 10.3217/jucs-019-17-2559

Authors: Heather Mcdonald, Chris Nugent, Dewar Finlay, George Moore, William Burns, Josef Hallberg

Abstract: The lack of a standard format to store data generated within the smart environments research domain is limiting the opportunity for researchers to share and reuse datasets. The opportunity to exchange datasets is further hampered due to the lack of an online resource to facilitate this. In our current work we have attempted to resolve these issues through the development of homeML, a proposed format to support the storage and exchange of data generated within a smart environment, and the homeML suite, an online tool to support data exchange and reuse. A usability and functionality study performed by 8 unbiased members of the research community is presented and discussed. All participants in the study agreed that the homeML format could address the need for a standard format within this domain. Participants also agreed that the homeML suite would be a useful tool to be available to researchers as they perform experiments in the area of smart environments.

Research Article Fri, 1 Nov 2013 00:00:00 +0200
Graph-based KNN Algorithm for Spam SMS Detection https://lib.jucs.org/article/23927/ JUCS - Journal of Universal Computer Science 19(16): 2404-2419

DOI: 10.3217/jucs-019-16-2404

Authors: Tran Ho, Ho-Seok Kang, Sung-Ryul Kim

Abstract: In modern life, SMS (Short Message Service) is one of the most necessary services on mobile devices. Because of its popularity, many companies use SMS as an effective marketing and advertising tool. The popularity also gives hackers chances to abuse SMS, for example to cheat mobile users and steal personal information from their mobile phones. In this paper, we propose a method to detect spam SMS on mobile devices and smart phones. Our approach is based on improving a graph-based algorithm and utilizing the KNN algorithm - one of the simplest and most effective classification algorithms. The experimentation is carried out on SMS message collections and the results confirm the efficiency of the proposed method, with high accuracy and processing time small enough to detect spam messages directly on mobile phones in real time.
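
A minimal sketch of KNN classification of SMS messages is shown below. It uses plain token-set Jaccard similarity and a majority vote, not the improved graph-based similarity proposed in the paper; the tiny training set is made up.

```python
from collections import Counter

def jaccard(a, b):
    """Similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def knn_classify(message, training, k=3):
    """training: list of (text, label). Returns the majority label of the k most similar messages."""
    tokens = set(message.lower().split())
    scored = sorted(training,
                    key=lambda item: jaccard(tokens, set(item[0].lower().split())),
                    reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

training = [
    ("win a free prize call now", "spam"),
    ("free entry to win cash claim now", "spam"),
    ("are we still meeting for lunch", "ham"),
    ("see you at the office tomorrow", "ham"),
]
print(knn_classify("claim your free prize now", training))  # spam
```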

Research Article Tue, 1 Oct 2013 00:00:00 +0300
An Integrated MFFP-tree Algorithm for Mining Global Fuzzy Rules from Distributed Databases https://lib.jucs.org/article/23094/ JUCS - Journal of Universal Computer Science 19(4): 521-538

DOI: 10.3217/jucs-019-04-0521

Authors: Chun-Wei Lin, Tzung-Pei Hong, Yi-Fan Chen, Tsung-Ching Lin, Shing-Tai Pan

Abstract: In the past, many algorithms have been proposed for mining association rules from binary databases. Transactions with quantitative values are, however, also commonly seen in real-world applications. Each transaction in a quantitative database consists of items with their purchased quantities. The multiple fuzzy frequent pattern tree (MFFP-tree) algorithm was thus designed to handle a quantitative database for efficiently mining complete fuzzy frequent itemsets. It, however, only processes a single database for mining the desired rules. In this paper, we propose an integrated MFFP (called iMFFP)-tree algorithm for merging several individual MFFP trees into an integrated one. The proposed iMFFP-tree algorithm first handles the fuzzy regions, providing linguistic knowledge for human beings. The integration mechanism of the proposed algorithm then efficiently and completely moves a branch from one sub-tree to the integrated tree. The proposed approach can derive both global and local fuzzy rules from distributed databases, thus allowing managers to make more significant and flexible decisions. Experimental results also demonstrate the performance of the proposed approach.
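
As an illustration of the fuzzy-region preprocessing such an approach relies on (the MFFP/iMFFP tree construction itself is omitted), the sketch below maps a purchased quantity to membership degrees in hypothetical low/middle/high linguistic regions using triangular membership functions; the region shapes are assumptions, not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy regions for a purchased quantity in [0, 10].
REGIONS = {"low": (-1, 0, 5), "middle": (0, 5, 10), "high": (5, 10, 11)}

def fuzzify(quantity):
    """Map a crisp quantity to membership degrees in each linguistic region."""
    return {name: round(triangular(quantity, *abc), 2) for name, abc in REGIONS.items()}

print(fuzzify(3))  # {'low': 0.4, 'middle': 0.6, 'high': 0.0}
```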

Research Article Thu, 28 Feb 2013 00:00:00 +0200
XML Database Transformations https://lib.jucs.org/article/29847/ JUCS - Journal of Universal Computer Science 16(20): 3043-3072

DOI: 10.3217/jucs-016-20-3043

Authors: Klaus-Dieter Schewe, Qing Wang

Abstract: Database transformations provide a unifying umbrella for queries and updates. In general, they can be characterised by five postulates, which constitute the database analogue of Gurevich's sequential ASM thesis. Among these postulates the background postulate supposedly captures the particularities of data models and schemata. For the characterisation of XML database transformations the natural first step is therefore to define the appropriate tree-based backgrounds, which draw on hereditarily finite trees, tree algebra operations, and extended document type definitions. This defines a computational model for XML database transformation using a variant of Abstract State Machines. Then the incorporation of weak monadic second-order logic provides an alternative computational model called XML machines. The main result is that these two computational models for XML database transformations are equivalent.

Research Article Mon, 1 Nov 2010 00:00:00 +0200
LemmaGen: Multilingual Lemmatisation with Induced Ripple-Down Rules https://lib.jucs.org/article/29680/ JUCS - Journal of Universal Computer Science 16(9): 1190-1214

DOI: 10.3217/jucs-016-09-1190

Authors: Matjaž Juršič, Igor Mozetič, Tomaž Erjavec, Nada Lavrač

Abstract: Lemmatisation is the process of finding the normalised forms of words appearing in text. It is a useful preprocessing step for a number of language engineering and text mining tasks, and especially important for languages with rich inflectional morphology. This paper presents a new lemmatisation system, LemmaGen, which was trained to generate accurate and efficient lemmatisers for twelve different languages. Its evaluation on the corresponding lexicons shows that LemmaGen outperforms the lemmatisers generated by two alternative approaches, RDR and CST, both in terms of accuracy and efficiency. To our knowledge, LemmaGen is the most efficient publicly available lemmatiser trained on large lexicons of multiple languages, whose learning engine can be retrained to effectively generate lemmatisers of other languages.
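
LemmaGen induces its ripple-down rules automatically from large lexicons; the toy below only illustrates the underlying idea of suffix-rule lemmatisation with a few hand-written English rules, which are not taken from the system.

```python
# Toy suffix rules (longest matching suffix wins); illustrative, not induced as in LemmaGen.
RULES = {"ies": "y", "ying": "y", "ing": "", "ed": "", "s": ""}

def lemmatise(word):
    """Apply the longest matching suffix rule; fall back to the word itself."""
    for suffix in sorted(RULES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)] + RULES[suffix]
    return word

print([lemmatise(w) for w in ["studies", "studying", "walked", "cats"]])
# ['study', 'study', 'walk', 'cat']
```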

Research Article Sat, 1 May 2010 00:00:00 +0300
Algebras and Update Strategies https://lib.jucs.org/article/29630/ JUCS - Journal of Universal Computer Science 16(5): 729-748

DOI: 10.3217/jucs-016-05-0729

Authors: Michael Johnson, Robert Rosebrugh, Richard Wood

Abstract: The classical (Bancilhon-Spyratos) correspondence between view update translations and views with a constant complement reappears more generally as the correspondence between update strategies and meet complements in the order based setting of S. Hegner. We show that these two theories of database view updatability are linked by the notion of "lens" which is an algebra for a monad. We generalize lenses from the category of sets to consider them in categories with finite products, in particular the category of ordered sets.
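
In the set-based setting, a lens is simply a pair of functions get and put satisfying the usual laws; the sketch below shows this for a record source and a single-field view. The categorical content of the paper (algebras for a monad, ordered sets) is not captured by this toy, and the Employee example is made up.

```python
from dataclasses import dataclass, replace
from typing import Callable, Generic, TypeVar

S = TypeVar("S")  # source (database state)
V = TypeVar("V")  # view

@dataclass
class Lens(Generic[S, V]):
    get: Callable[[S], V]       # extract the view from the source
    put: Callable[[S, V], S]    # translate a view update back to the source

@dataclass(frozen=True)
class Employee:
    name: str
    salary: int

salary_lens = Lens(get=lambda e: e.salary,
                   put=lambda e, v: replace(e, salary=v))

e = Employee("Ada", 100)
# Well-behavedness laws: PutGet and GetPut.
assert salary_lens.get(salary_lens.put(e, 120)) == 120
assert salary_lens.put(e, salary_lens.get(e)) == e
```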

Research Article Mon, 1 Mar 2010 00:00:00 +0200
Rough Classification - New Approach and Applications https://lib.jucs.org/article/29505/ JUCS - Journal of Universal Computer Science 15(13): 2622-2628

DOI: 10.3217/jucs-015-13-2622

Authors: Ngoc Nguyen

Abstract: Rough classification has been known as the concept of Pawlak within Rough Set Theory. In this paper, a novel rough classification approach and its applications in e-learning systems and in user interface management for recommendation processes are presented.

Research Article Wed, 1 Jul 2009 00:00:00 +0300
Complexity Analysis of Ontology Integration Methodologies: a Comparative Study https://lib.jucs.org/article/29347/ JUCS - Journal of Universal Computer Science 15(4): 877-897

DOI: 10.3217/jucs-015-04-0877

Authors: Trong Duong, Geun-Sik Jo, Jason Jung, Ngoc Nguyen

Abstract: Most previous research on ontology integration has focused on similarity measurements between ontological entities, e.g., lexicons, instances, schemas and taxonomies, resulting in high computational costs of considering all possible pairs between two given ontologies. In this paper, we propose a novel approach to reducing computational complexity in ontology integration. Thereby, we address the importance and types of concepts, for priority matching and direct matching between concepts, respectively. Identity-based similarity is computed, to avoid comparisons of all properties related to each concept, while matching between concepts. The problem of conflict in ontology integration has initially been explored on the instance-level and concept-level. This is useful to avoid many cases of mismatching.

Research Article Sat, 28 Feb 2009 00:00:00 +0200
Spatial Queries in Road Networks Based on PINE https://lib.jucs.org/article/28976/ JUCS - Journal of Universal Computer Science 14(4): 590-611

DOI: 10.3217/jucs-014-04-0590

Authors: Maytham Safar

Abstract: Over the last decade, due to the rapid developments in information technology (IT), a new breed of information systems has appeared, such as geographic information systems, that introduced new challenges for researchers, developers and users. One of its applications is the car navigation system, which allows drivers to receive navigation instructions without taking their eyes off the road. Using a Global Positioning System (GPS) in the car navigation system enables the driver to perform a wide range of queries, from locating the car position, to finding a route from a source to a destination, or dynamically selecting the best route in real time. Several types of spatial queries (e.g., nearest neighbour - NN, K nearest neighbours - KNN, continuous k nearest neighbours - CKNN, reverse nearest neighbour - RNN) have been proposed and studied in the context of spatial databases. With spatial network databases (SNDB), objects are restricted to move on pre-defined paths (e.g., roads) that are specified by an underlying network. In our previous work, we proposed a novel approach, termed Progressive Incremental Network Expansion (PINE), to efficiently support NN and KNN queries. In this work, we utilize our developed PINE system to efficiently support other spatial queries such as CKNN. The continuous K nearest neighbour (CKNN) query is an important type of query that continuously finds the K nearest objects to a query point on a given path. We focus on moving queries issued on stationary objects in a Spatial Network Database (SNDB) (e.g., continuously report the five nearest gas stations while I am driving). The result of this type of query is a set of intervals (defined by split points) and their corresponding KNNs. This means that the KNN of an object travelling on one interval of the path remains the same all through that interval, until it reaches a split point where its KNNs change. Existing methods for CKNN are based on Euclidean distances. In this paper we propose a new algorithm for answering CKNN in SNDB where the important measure for the shortest path is network distance rather than Euclidean distance. Our solution addresses a new type of query that is relevant to many applications where the answer to the query not only depends on the distances of the nearest neighbours, but also on the user or application need. By distinguishing between two types of split points, we reduce the number of computations to retrieve the continuous KNN of a moving object. We compared our algorithm with CKNN based on VN3 using IE (Intersection Examination). Our experiments show that our approach has better response time than approaches that are based on IE, and requires fewer shortest distance computations and KNN queries.
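
The sketch below is not PINE itself; it only illustrates the central point that neighbours are ranked by network (shortest-path) distance rather than Euclidean distance: a Dijkstra expansion from the query node that stops once K objects of interest have been reached. The toy road graph is made up.

```python
import heapq

def network_knn(graph, objects, source, k):
    """graph: adjacency dict node -> list of (neighbour, edge_length).
    objects: set of nodes holding points of interest (e.g. gas stations).
    Expands from `source` by network distance and returns the k nearest objects."""
    dist, heap, found = {source: 0.0}, [(0.0, source)], []
    while heap and len(found) < k:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        if u in objects:
            found.append((u, d))
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return found

road = {"q": [("a", 2), ("b", 5)], "a": [("c", 2)], "b": [("c", 1)], "c": []}
print(network_knn(road, objects={"b", "c"}, source="q", k=2))  # [('c', 4.0), ('b', 5.0)]
```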

Research Article Thu, 28 Feb 2008 00:00:00 +0200
Efficient Access Methods for Temporal Interval Queries of Video Metadata https://lib.jucs.org/article/28862/ JUCS - Journal of Universal Computer Science 13(10): 1411-1433

DOI: 10.3217/jucs-013-10-1411

Authors: Spyros Sioutas, Kostas Tsichlas, Bill Vassiliadis, Dimitrios Tsolis

Abstract: Indexing video content is one of the most important problems in video databases. In this paper we present linear time and space algorithms for handling video metadata that represent objects or events present in various frames of the video sequence. To accomplish this, we make a straightforward reduction of this problem to the intersection problem in Computational Geometry. Our first result improves on that of V. S. Subrahmanian [Subramanian, 1998] by a logarithmic factor in storage. This is achieved by using different basic data structures. Then, we present two other interesting time-efficient approaches. Finally, a reduction to a special geometric problem is considered, through which we achieve two solutions, optimal in time and space, in the main and external memory models of computation, respectively. We also present an extended experimental evaluation.

Research Article Sun, 28 Oct 2007 00:00:00 +0300
Consensus Determining with Dependencies of Attributes with Interval Values https://lib.jucs.org/article/28745/ JUCS - Journal of Universal Computer Science 13(2): 329-344

DOI: 10.3217/jucs-013-02-0329

Authors: Michal Zgrzywa

Abstract: In this paper the author considers some problems related to attribute dependencies in consensus determining. These problems concern the dependencies of attributes representing the content of conflicts, which mean that one may not treat the attributes independently in consensus determining. It is assumed that attribute values are represented by intervals. In the paper the author considers the choice of a proper distance function. Next, the limitations that guarantee determining a correct consensus despite treating the attributes independently are presented. Additionally, an algorithm for calculating the proper consensus in cases when these limitations are not met is introduced.
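
The paper's particular distance function and limitations are not reproduced here; just to make the setting concrete, the sketch below uses one common distance between intervals (sum of endpoint differences) and a brute-force consensus that minimises the total distance to the expert opinions for a single attribute.

```python
def interval_distance(x, y):
    """A simple distance between intervals (x1, x2) and (y1, y2):
    sum of endpoint differences (one common choice, not necessarily the paper's function)."""
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

def consensus(opinions, candidates):
    """Pick the candidate interval minimising the total distance to all expert opinions."""
    return min(candidates, key=lambda c: sum(interval_distance(c, o) for o in opinions))

opinions = [(2, 5), (3, 6), (4, 6)]
print(consensus(opinions, candidates=opinions))  # (3, 6)
```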

Research Article Wed, 28 Feb 2007 00:00:00 +0200
Deriving Consensus for Hierarchical Incomplete Ordered Partitions and Coverings https://lib.jucs.org/article/28744/ JUCS - Journal of Universal Computer Science 13(2): 317-328

DOI: 10.3217/jucs-013-02-0317

Authors: Marcin Hernes, Ngoc Nguyen

Abstract: A method for determining consensus of hierarchical incomplete ordered partitions and coverings of sets is presented in this chapter. Incomplete ordered partitions and coverings are often used in expert information analysis. These structures should be useful when an expert has to classify elements of a set into given classes but, for several elements, does not know to which classes they should belong. The hierarchical ordered partition is a more general structure than the incomplete ordered partition. In this chapter we present definitions of the notions of hierarchical incomplete ordered partitions and coverings of sets. The distance functions between hierarchical incomplete ordered partitions and coverings are defined. We also present algorithms for consensus determining for a finite set of hierarchical incomplete ordered partitions and coverings.

Research Article Wed, 28 Feb 2007 00:00:00 +0200
Connecting Segments for Visual Data Exploration and Interactive Mining of Decision Rules https://lib.jucs.org/article/28507/ JUCS - Journal of Universal Computer Science 11(11): 1835-1848

DOI: 10.3217/jucs-011-11-1835

Authors: Francisco Ferrer-Troyano, Jesús Aguilar-Ruiz, José Riquelme

Abstract: Visualization has become an essential support throughout the KDD process in order to extract hidden information from huge amounts of data. Visual data exploration techniques provide the user with graphic views or metaphors that represent potential patterns and data relationships. However, a single image does not always convey high-dimensional data properties successfully. From such data sets, visualization techniques have to deal with the curse of dimensionality in a critical way, as the number of examples may be very small with respect to the number of attributes. In this work, we describe a visual exploration technique that automatically extracts relevant attributes and displays their ranges of interest in order to support two data mining tasks: classification and feature selection. Through different metaphors with dynamic properties, the user can re-explore meaningful intervals belonging to the most relevant attributes, building decision rules and increasing the model accuracy interactively.

Research Article Mon, 28 Nov 2005 00:00:00 +0200
Processing Inconsistency of Knowledge on Semantic Level https://lib.jucs.org/article/28359/ JUCS - Journal of Universal Computer Science 11(2): 285-302

DOI: 10.3217/jucs-011-02-0285

Authors: Ngoc Nguyen

Abstract: Inconsistency of knowledge may appear in many situations, especially in distributed environments in which autonomous programs operate. Inconsistency may lead to conflicts, for which the resolution is necessary for correct functioning of an intelligent system. Inconsistency of knowledge in general means a situation in which some autonomous programs (like agents) generate different versions (or states) of knowledge on the same subject referring to a real world. In this paper we propose two logical structures for representing inconsistent knowledge: conjunction and disjunction. For each of them we define the semantics and formulate the consensus problem, the solution of which would resolve the inconsistency. Next, we work out algorithms for consensus determination. Consensus methodology has been proved to be useful in solving conflicts and should be also effective for knowledge inconsistency resolution.

Research Article Mon, 28 Feb 2005 00:00:00 +0200
Geometric Retrieval for Grid Points in the RAM Model https://lib.jucs.org/article/28303/ JUCS - Journal of Universal Computer Science 10(9): 1325-1353

DOI: 10.3217/jucs-010-09-1325

Authors: Spyros Sioutas, Christos Makris, Nektarios Kitsios, George Lagogiannis, John Tsaknakis, Kostas Tsichlas, Bill Vassiliadis

Abstract: We consider the problem of d-dimensional searching (d ≥ 3) for four query types: range, partial range, exact match and partial match searching. Let N be the number of points, s be the number of keys specified in a partial match and partial range query and t be the number of points retrieved. We present a data structure with worst case time complexities O(t + log^(d-2) N), O(t + (d - s) + log^s N), O(d + ) and O(t + (d - s) + s ) for each of the aforementioned query types respectively. We also present a second, more concrete solution for exact and partial match queries, which achieves the same query time but has different space requirements. The proposed data structures are considered in the RAM model of computation.

Research Article Tue, 28 Sep 2004 00:00:00 +0300
Knowledge Management Analysis of the Research & Development & Transference Process at HEROs: a Public University Case https://lib.jucs.org/article/28247/ JUCS - Journal of Universal Computer Science 10(6): 702-711

DOI: 10.3217/jucs-010-06-0702

Authors: Jon Rodríguez, Arturo Castellanos, Stanislav Ranguelov

Abstract: In Higher Education and Research Organisations (HEROs), one of the most important activities in the R & D process is the effective management of knowledge transference. A correct analysis and diagnosis of that process through knowledge management methodology is essential for the correct orientation of organisation strategy. The aim of this paper is to describe the analysis carried out in order to diagnose the research & development & transference (R & D & T) activities at a public university in Spain. The diagnosis analyses the key phases in the knowledge transference process, because these different stages define important implications for the monitoring of the intellectual capital and the organisation's performance. Also, within the diagnostic analysis performed here, a methodological innovation is introduced, related to the cause and effect relations of the knowledge collaboration and a process which deals mainly with intangibles.

Research Article Mon, 28 Jun 2004 00:00:00 +0300
SemanticMiner - Ontology-Based Knowledge Retrieval https://lib.jucs.org/article/28064/ JUCS - Journal of Universal Computer Science 9(7): 682-696

DOI: 10.3217/jucs-009-07-0682

Authors: Eddie Moench, Mike Ullrich, Hans-Peter Schnurr, Jürgen Angele

Abstract: During the analysis of knowledge processes in enterprises it often turns out that simple access to existing enterprise knowledge which is covered in documents is not possible. To enable access to a company's document and data stocks, Information Retrieval (IR) technologies play a central role. In the following we describe the underlying theory of the SemanticMiner system, including methods and technologies as well as continuing approaches to obtain Knowledge Retrieval (KR) by dint of semantic technologies.

Research Article Mon, 28 Jul 2003 00:00:00 +0300
Process-oriented Knowledge Structuring https://lib.jucs.org/article/28038/ JUCS - Journal of Universal Computer Science 9(6): 542-550

DOI: 10.3217/jucs-009-06-0542

Authors: Kai Mertins, Peter Heisig, Kay Alwert

Abstract: Within a business environment, where fast and reliable access to knowledge is a key success factor, efficient handling of the organizational knowledge is crucial. Therefore, methods and techniques that allow complex knowledge bases to be structured and maintained according to the requirements emerging from daily work have a high priority. This article provides a business-process-oriented approach to structuring organizational knowledge and information bases. The approach was developed within applied research in the industrial, service and administrative sectors. Following this approach, three different types of knowledge structures and their visualization have been developed by the Fraunhofer IPK and are currently being applied and tested in organizations. Besides the approach itself, these three types of knowledge structure and the cases of application are introduced here.

Research Article Sat, 28 Jun 2003 00:00:00 +0300
Automatic Data Restructuring https://lib.jucs.org/article/27556/ JUCS - Journal of Universal Computer Science 5(4): 243-286

DOI: 10.3217/jucs-005-04-0243

Authors: Seymour Ginsburg, Dan Simovici

Abstract: Data restructuring is often an integral but non-trivial part of information processing, especially when the data structures are fairly complicated. This paper describes the underpinnings of a program, called the Restructurer, that relieves the user of the "thinking and coding" process normally associated with writing procedural programs for data restructuring. The process is accomplished by the Restructurer in two stages. In the first, the differences in the input and output data structures are recognized and the applicability of various transformation rules analyzed. The result is a plan for mapping the specified input to the desired output. In the second stage, the plan is executed using embedded knowledge about both the target language and run-time efficiency considerations. The emphasis of this paper is on the planning stage. The restructuring operations and the mapping strategies are informally described and explained with mathematical formalism. The notion of solution of a set of instantiated forms with respect to an output form is then introduced. Finally, it is shown that such a solution exists if and only if the Restructurer produces one.

Research Article Wed, 28 Apr 1999 00:00:00 +0300