
<rss version="0.91">
    <channel>
        <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
        <description>Latest 77 Articles from JUCS - Journal of Universal Computer Science</description>
        <link>https://lib.jucs.org/</link>
        <lastBuildDate>Sat, 14 Mar 2026 03:11:11 +0000</lastBuildDate>
        <generator>Pensoft FeedCreator</generator>
        <image>
            <url>https://lib.jucs.org/i/logo.jpg</url>
            <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
            <link>https://lib.jucs.org/</link>
            <description><![CDATA[Feed provided by https://lib.jucs.org/. Click to visit.]]></description>
        </image>
	
		<item>
		    <title>A Visual Approach for Health Information Exploration: Adaptive Levels of Visual Granularity and Interaction Analysis</title>
		    <link>https://lib.jucs.org/article/150679/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 32(1): 4-32</p>
					<p>DOI: 10.3897/jucs.150679</p>
					<p>Authors: Stefan Lengauer, Lin Shao, Hossein Miri, Michael Bedek, Cordula Kupfer, Maria Zangl, Bettina Kubicek, Barbara Dienstbier, Klaus Jeitler, Cornelia Krenn, Thomas Semlitsch, Carolin Zipp, Dietrich Albert, Andrea Siebenhofer, Tobias Schreck</p>
					<p>Abstract: The effective and targeted provision of health information to consumers, specifically tailored to their needs and preferences, is indispensable in healthcare. With access to appropriate health information and adequate understanding, consumers are more likely to make informed and healthy decisions, become more proficient in recognizing symptoms, and potentially experience improvements in the prevention or treatment of their medical conditions. Most of today&rsquo;s health information, however, is provided in the form of static documents. In this paper, we present a novel and innovative visual health information system based on adaptive document visualizations. Depending on the users&rsquo; information needs and preferences, the system can display its content with document visualization techniques at different levels of detail, aggregation, and visual granularity. Users can navigate using content organization along sections or automatically computed topics, and choose abstractions from full texts to word clouds. Our first contribution is a formative user study which demonstrated that the implemented document visualizations offer several advantages over traditional forms of document exploration. Informed by that study, we identified a number of crucial aspects for further system development. Our second contribution is the introduction of an interaction provenance visualization which allows users to inspect which content has been received, in which representation, and in which order. We show how this makes it possible to analyze different document exploration and navigation patterns, which is useful for automatic adaptation and recommendation functions. We also define a baseline taxonomy for adapting the document presentations which can, in principle, be leveraged by the observed user patterns. The interaction provenance view, furthermore, allows users to reflect on their exploration and inform future usage of the system.</p>
					<p><a href="https://lib.jucs.org/article/150679/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/150679/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jan 2026 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Retail Indicators Forecasting and Planning</title>
		    <link>https://lib.jucs.org/article/112556/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(11): 1385-1403</p>
					<p>DOI: 10.3897/jucs.112556</p>
					<p>Authors: Nelson Baloian, Jonathan Frez, José A. Pino, Cristóbal Fuenzalida, Sergio Peñafiel, Belisario Panay, Gustavo Zurita, Horacio Sanson</p>
					<p>Abstract: We present a methodology to handle the problem of planning sales goals. The methodology supports the retail manager in carrying out simulations to find the most plausible goals for the future. One of the novel aspects of this methodology is that the analysis is based not on current sales levels, as most previous works do, but on those in the future, making a more precise and accurate analysis of the situation. The work presents the solution for a scenario using three sales performance indicators: foot traffic, conversion rate, and mean ticket value, but it explains how it can be generalized to more indicators. The contribution of this work is, first, a framework consisting of a methodology for performing sales planning; then an algorithm that finds the best prediction model for a particular store; and finally a tool that helps sales planners set realistic sales goals based on the predicted sales. First we present the method to choose the best indicator prediction model for each retail store, and then we present a tool which allows the retail manager to estimate the improvements in the indicators needed to attain a desired sales goal level; the managers may then perform several simulations for various scenarios in a fast and efficient way. The developed tool implementing this methodology was validated by experts in the subject of administration of retail stores, yielding good results.</p>
					<p><a href="https://lib.jucs.org/article/112556/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/112556/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/112556/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Nov 2023 18:00:08 +0000</pubDate>
		</item>
	
		<item>
		    <title>Customized Curriculum and Learning Approach Recommendation Techniques in Application of Virtual Reality in Medical Education</title>
		    <link>https://lib.jucs.org/article/94161/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 949-966</p>
					<p>DOI: 10.3897/jucs.94161</p>
					<p>Authors: Abhishek Kumar, Abdul Khader Jilani Saudagar, Mohammed AlKhathami, Badr Alsamani, Muhammad Badruddin Khan, Mozaherul Hoque Abul Hasanat, Ankit Kumar</p>
					<p>Abstract: Virtual Reality (VR) has made considerable gains in the consumer and professional markets. As VR has progressed as a technology, its overall usefulness for educational purposes has grown. On the other hand, the educational field struggles to keep up with the latest innovations, changing affordances, and pedagogical applications due to the rapid evolution of technology. Therefore, many have elaborated on the potential of VR in learning. This research proposes novel techniques for a customized curriculum for medical students and recommendations for their learning process, based on deep learning. The data were collected based on the students&rsquo; past performance and current requirements and compiled into a dataset. This dataset was then processed for analysis by a CAD system integrated with deep learning techniques to create a customized curriculum. The data were first processed and analysed to remove missing and invalid entries. The data were then classified for creation of the curriculum using a gradient decision tree integrated with na&iuml;ve Bayes. From this, the customized curriculum was generated. Based on this customized curriculum, the learning approach recommendation was carried out using a fuzzy-rules-integrated knowledge-based recommendation system. The proposed technique achieved an accuracy of 98%, specificity of 82%, F-1 score of 79%, information overload of 75%, and precision of 81%.</p>
					<p><a href="https://lib.jucs.org/article/94161/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94161/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94161/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Automatic Detection and Recognition of Citrus Fruit &amp; Leaves Diseases for Precision Agriculture</title>
		    <link>https://lib.jucs.org/article/94133/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 930-948</p>
					<p>DOI: 10.3897/jucs.94133</p>
					<p>Authors: Ashok Kumar Saini, Roheet Bhatnagar, Devesh Kumar Srivastava</p>
					<p>Abstract: Machine learning is a branch of computer science concerned with developing algorithms &amp; models capable of &lsquo;learning through data and iterations&rsquo;. Deep learning simulates the structure and function of human organs and diseases using artificial neural networks with more than one hidden layer. The primary purpose of this work is to develop and test computer vision and machine learning algorithms for classifying Huanglongbing (HLB)-infected, healthy, and unhealthy leaves and fruits of the citrus plant. The images were segmented using a normalized graph cut, and texture information was extracted using a co-occurrence matrix. The collected attributes were used for classification, and support vector machine (SVM) and deep learning methods were employed. When rating the classification outcomes, the accuracy of the classification and the number of false positives and false negatives were considered. The results show that deep learning could correctly classify up to 96.8% of HLB-infected leaves and fruits. Despite broad variance in intensity among leaves collected in North India, the results suggest this method could be beneficial in diagnosing HLB.</p>
					<p><a href="https://lib.jucs.org/article/94133/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94133/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94133/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Ordered Fuzzy Numbers Applied in Bee Swarm Optimization Systems</title>
		    <link>https://lib.jucs.org/article/24149/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(11): 1475-1494</p>
					<p>DOI: 10.3897/jucs.2020.078</p>
					<p>Authors: Dawid Ewald, Hubert Zarzycki, Łukasz Apiecionek, Jacek Czerniak</p>
					<p>Abstract: The paper presents an innovative OFNBee optimization method based on combining swarm intelligence with the use of ordered fuzzy numbers (OFN). In the introduction, the issues related to the subject of the study, including bee algorithms and OFN numbers, are reviewed. The innovative OFNBee algorithm was presented and verified against a set of known benchmark functions such as Sphere, Rastrigin, Griewank, Rosenbrock, Schwefel and Ackley. These functions were chosen because they are well established in the literature. In the subsequent part of the study, the configuration of the algorithm parameters is carried out, with each benchmark function run several dozen times for different data, such as different population sizes. The key part of the research and analysis was to compare OFNBee with six standard bee algorithms: ABC, MBO, IMBO, TLBO, HBMO, and BBMO. The article ends with a summary and an indication of possible future work.</p>
					<p><a href="https://lib.jucs.org/article/24149/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24149/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24149/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Nov 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Semantic Web-based Representation of Human-logical Inference for Solving Bongard Problems</title>
		    <link>https://lib.jucs.org/article/24130/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(10): 1343-1363</p>
					<p>DOI: 10.3897/jucs.2020.070</p>
					<p>Authors: Jisha Maniamma, Hiroaki Wagatsuma</p>
					<p>Abstract: Bongard Problems (BPs) are a set of 100 visual puzzles introduced by M. M. Bongard in the mid-1960s. BPs have been established as benchmark puzzles for understanding human context-based learning abilities to solve ill-posed problems. Each puzzle requires a logical explanation that distinguishes two classes of figures from redundant options, which can be obtained by a thinking process that alternately changes the target frame (hierarchical level of analogy) across a wide range of concept networks, as D. R. Hofstadter suggested. A few research results on solving a limited set of BPs have been reported based on a single architecture combined with probabilistic approaches; however, the central difficulty of BPs is the requirement of flexible changes of the target frame. Therefore, non-hierarchical cluster analyses do not provide the essential solution, and hierarchical probabilistic models need to include unnecessary levels for learning from the beginning, which prevents prompt decision-making. We hypothesized that a logical reasoning process with a limited number of metadata descriptions realizes sophisticated and prompt decision-making, and we validated its performance using BPs. In this study, a semantic web-based hierarchical model to solve BPs was proposed as a minimal and transparent system that mimics the human logical inference process in solving BPs, using Description Logic (DL) with assertions on concepts (TBox) and individuals (ABox). Our results demonstrated that the proposed model not only provided individual solutions as a BP solver, but also demonstrated, in an actual implementation, the correctness of Hofstadter's idea of the flexible frame with concept networks for BPs, which no one had achieved before. This opens a new horizon for theories of designing logical reasoning systems, especially for critical judgments and serious decision-making, performed as expert humans do: in a transparent and descriptive way that explains why a judgment was made.</p>
					<p><a href="https://lib.jucs.org/article/24130/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24130/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24130/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Oct 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>The Modified Principal Component Analysis Feature Extraction Method for the Task of Diagnosing Chronic Lymphocytic Leukemia Type B-CLL</title>
		    <link>https://lib.jucs.org/article/24083/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(6): 734-746</p>
					<p>DOI: 10.3897/jucs.2020.039</p>
					<p>Authors: Mariusz Topolski</p>
					<p>Abstract: The vast majority of medical problems are characterised by the relatively high spatial dimensionality of the task, which becomes problematic for many classic pattern recognition algorithms due to the well-known phenomenon of the curse of dimensionality. This creates the need to develop methods of space reduction, divided into strategies for the selection and extraction of features. The most commonly used tool of the second group is PCA, which, unlike selection methods, does not select a subset of the original set of features but performs a mathematical transformation into a less dimensional form. However, a natural downside of this algorithm is that it does not take class context into account in supervised learning tasks. This work proposes a feature extraction algorithm based on the PCA method that tries not only to reduce the feature space, but also to separate the class distributions in the available learning set. The challenge of this work was to create a feature extraction method describing the prognosis for chronic lymphocytic leukemia type B-CLL that is at least as good as, or better than, other extraction methods. The purpose of the research was accomplished for binary and three-class cases; five machine learning algorithms were applied to verify extraction quality. The obtained results were compared using the paired-samples Wilcoxon test.</p>
					<p><a href="https://lib.jucs.org/article/24083/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24083/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24083/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Jun 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Real-Time Bot Detection from Twitter Using the Twitterbot+ Framework</title>
		    <link>https://lib.jucs.org/article/24011/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(4): 496-507</p>
					<p>DOI: 10.3897/jucs.2020.026</p>
					<p>Authors: Kheir Daouadi, Rim Rebaï, Ikram Amous</p>
					<p>Abstract: Nowadays, bot detection from Twitter attracts the attention of several researchers around the world. Different bot detection approaches have been proposed as a result of these research efforts. Four of the main challenges faced in this context are the diversity of types of content propagated throughout Twitter, the problem inherent to the text, the lack of sufficient labeled datasets, and the fact that current bot detection approaches are not sufficient to detect bot activities accurately. We propose Twitterbot+, a bot detection system that leverages a minimal number of language-independent features extracted from a single tweet, with temporal enrichment of previously labeled datasets. We conducted experiments on three benchmark datasets with standard evaluation scenarios, and the achieved results demonstrate the efficiency of Twitterbot+ against the state-of-the-art, yielding promising accuracy results (>95%). Our proposition is suitable for accurate and real-time use in a Twitter data collection step as an initial filtering technique to improve the quality of research data.</p>
					<p><a href="https://lib.jucs.org/article/24011/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24011/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24011/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Apr 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cyberattack Response Model for the Nuclear Regulator in Slovenia</title>
		    <link>https://lib.jucs.org/article/22671/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(11): 1437-1457</p>
					<p>DOI: 10.3217/jucs-025-11-1437</p>
					<p>Authors: Samo Tomažič, Igor Bernik</p>
					<p>Abstract: Cyberattacks targeting the nuclear sector are now a reality; they are becoming increasingly frequent and sophisticated, while the perpetrators are increasingly motivated. The key stakeholders in the nuclear sector, such as nuclear facility operators, nuclear regulators responsible for nuclear safety or nuclear security, technical support organisations and computer equipment suppliers, must take the necessary cybersecurity measures to prepare for potential cyberattacks and provide the highest possible level of response to such cyberattacks. This can only be achieved by adopting a systematic approach to cyberattack response. When conducting the research study presented herein, a descriptive method was applied to review the scientific literature, various standards, recommendations and guides, as well as to devise an inventory of publicly available sources. On the basis of such an analysis, individual questions were then formulated in order to compile a structured interview, which was conducted with international experts working at nuclear facilities, nuclear regulators, technical support organisations, computer equipment suppliers and other organisations responsible for providing cybersecurity in the nuclear sector. On the basis of their responses, researchers devised an innovative and comprehensive Cyberattack Response Model to be used by Slovenia's nuclear safety regulator and the regulator responsible for the physical protection of nuclear facilities and nuclear and radioactive materials.</p>
					<p><a href="https://lib.jucs.org/article/22671/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22671/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22671/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Nov 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>High-Performance Simulation of Drug Release Model Using Finite Element Method with CPU/GPU Platform</title>
		    <link>https://lib.jucs.org/article/22658/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(10): 1261-1278</p>
					<p>DOI: 10.3217/jucs-025-10-1261</p>
					<p>Authors: Akhtar Ali, Imran Bajwa, Rafaqat Kazmi</p>
					<p>Abstract: This paper describes a hybrid CPU/GPU approach for solving a two-phase mathematical model numerically. The dynamic of drug release between the first phase (coating) and second phase (arterial tissue) is represented by a system of partial differential equations (PDEs). The system of equations is discretized by the Finite Element Method. The whole discretized system involves a large sparse system of equations that requires extensive computation. The CPU/GPU approach provides a platform to solve computation-intensive PDEs in parallel. Consequently, this platform can significantly reduce solution times compared to a CPU-only implementation. This allows for more efficient investigation of different mathematical models, as well as the governing parameters. In this paper, a significant parallel computing framework is presented to solve the governing equations numerically using Graphics Processing Units (GPUs) with CUDA. This two-phase model investigates the impact of key parameters related to mass concentrations and drug release from tissue and coating layers. The identification and role of major parameters (filtration velocity, the ratio of accessible void volume to solid volume, and the solid-liquid mass transfer rate) are highlighted. Furthermore, the motivation and guidance for using parallel computing to handle the computational complexity and the large sparse system arising after discretizing the model equations are explained. We have designed a hybrid CPU/GPU solution of the proposed model using MATLAB. The parallel performance results show that the CPU/GPU architecture is more efficient in large-scale problem simulations.</p>
					<p><a href="https://lib.jucs.org/article/22658/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22658/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22658/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Oct 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>The Effects of Platforms and Languages on the Memory Footprint of the Executable Program: A Memory Forensic Approach</title>
		    <link>https://lib.jucs.org/article/22651/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(9): 1174-1198</p>
					<p>DOI: 10.3217/jucs-025-09-1174</p>
					<p>Authors: Ziad Al-Sharif, Mohammed Al-Saleh, Yaser Jararweh, Ahmed Shatnawi</p>
					<p>Abstract: Identifying the software used in a cybercrime can play a key role in establishing the evidence against the perpetrator in the court of law. This can be achieved by various means, one of which is to utilize the RAM contents. RAM comprises vital information about the current state of a system, including its running processes. Accordingly, the memory footprint of a process can be used as evidence about its usage. However, this evidence can be influenced by several factors. This paper evaluates three of these factors. First, it evaluates how the programming language used affects the evidence. Second, it evaluates how the platform used affects the evidence. Finally, it evaluates how the search for this evidence is influenced by the implicitly used encoding scheme. Our results should assist investigators in identifying the maximum amount of evidence about the software used, based on its execution logic, host platform, implementation language, and the encoding of its string values. Results show that the amount of digital evidence is highly affected by these factors. For instance, the memory footprint of Java-based software is often more traceable than the footprints of languages such as C++ and C#. Moreover, the memory footprint of a C# program is more visible on Linux than it is on Windows or Mac OS. Hence, software-related values are often successfully identified in RAM dumps even after the program has stopped.</p>
					<p><a href="https://lib.jucs.org/article/22651/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22651/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22651/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Sep 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Determination of System Weaknesses Based on the Analysis of Vulnerability Indexes and the Source Code of Exploits</title>
		    <link>https://lib.jucs.org/article/22645/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(9): 1043-1065</p>
					<p>DOI: 10.3217/jucs-025-09-1043</p>
					<p>Authors: Andrey Fedorchenko, Elena Doynikova, Igor Kotenko</p>
					<p>Abstract: Currently, the problem of monitoring the security of information systems is highly relevant. One of the important security monitoring tasks is to automate the process of determining system weaknesses for their further elimination. The paper considers techniques for the analysis of vulnerability indexes and exploit source code, as well as their subsequent classification. The suggested approach uses open security sources and incorporates two techniques, depending on the available security data. The first technique is based on the analysis of publicly available vulnerability indexes of the Common Vulnerability Scoring System for vulnerability classification by weaknesses. The second complements the first in cases where exploits exist but no associated vulnerabilities, and therefore no indexes for classification, are available. It is based on the analysis of the exploit source code for features, i.e. indexes, using graph models. The extracted indexes are further used for weakness determination using the first technique. The paper presents experiments demonstrating the effectiveness and potential of the developed techniques. The obtained results and the methods for their enhancement are discussed.</p>
					<p><a href="https://lib.jucs.org/article/22645/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22645/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22645/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Sep 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Modelling of Automotive Engine Dynamics using Diagonal Recurrent Neural Network</title>
		    <link>https://lib.jucs.org/article/23542/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 24(9): 1330-1342</p>
					<p>DOI: 10.3217/jucs-024-09-1330</p>
					<p>Authors: Yujia Zhai, Kejun Qian, Fei Xue, Moncef Tayahi</p>
					<p>Abstract: The spark-ignition (SI) engine dynamics is described as a severely nonlinear and fast process. A black-box model obtained by a system identification approach is often valuable for control and fault diagnosis applications on such systems. A recurrent neural network (RNN) might be better suited for such dynamical system modelling due to its feedback scheme compared with a feed-forward neural network. However, the computational load of an RNN limits its practical application. In this paper, a diagonal recurrent neural network (DRNN) is investigated to model SI engine dynamics to achieve a balance between modelling performance and computational burden. The data collection procedure and the algorithms for training the DRNN are also presented. Satisfactory modelling results were obtained at moderate computational cost.</p>
					<p><a href="https://lib.jucs.org/article/23542/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23542/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23542/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Sep 2018 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cloud Biometric Authentication: An Integrated Reliability and Security Method Using the Reinforcement Learning Algorithm and Queue Theory</title>
		    <link>https://lib.jucs.org/article/23145/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 24(4): 372-391</p>
					<p>DOI: 10.3217/jucs-024-04-0372</p>
					<p>Authors: A M N Balla Husamelddin, Guang Chen, Weipeng Jing</p>
					<p>Abstract: While cloud systems deliver a larger amount of computing power, they do not guarantee full security and reliability. Focusing on improving successful job execution under resource constraints and security problems, this work proposes an enhanced, effective, integrated and novel approach to security and reliability. To apply a high level of security in the system, our novel approach uses cloud biometric authentication by splitting the biometric data into small chunks and spreading it over the cloud's resources. Reliability is enhanced through successful job execution by employing an adaptive reinforcement learning (RL) algorithm combined with queuing theory. Our approach supports task schedulers in effectively adapting to dynamic changes in cloud environments. Based on the idea of reliability, we developed an adaptive action-selection mechanism, which controls action selection dynamically by considering the queue buffer size and the uncertainty value function. We evaluated the performance of our approach through several experiments measuring successful task execution and utilization rate, and then compared our approach with other job scheduling policies. The experimental results demonstrated the efficiency of our method and achieved the objectives of the proposed system.</p>
					<p><a href="https://lib.jucs.org/article/23145/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23145/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23145/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Apr 2018 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Using Content to Identify Overlapping Communities in Question Answer Forums</title>
		    <link>https://lib.jucs.org/article/23516/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 23(9): 907-931</p>
					<p>DOI: 10.3217/jucs-023-09-0907</p>
					<p>Authors: Mohsen Shahriari, Sabrina Haefele, Ralf Klamma</p>
					<p>Abstract: Nowadays, people use online social networks almost every day. They become active either because of their interests or to search for the information they desire. Users of online social networks generate structural and contextual traces that can be analyzed by, e.g., network science researchers. Researchers can describe networks built from online traces from different perspectives, one of which is communities. Overlapping communities are structures in which nodes have denser connections with each other than with the rest of the network. Different approaches have addressed this problem; however, few analyses and methods have focused on contextual traces generated by users. As such, in this paper, we propose an algorithm that uses actual content produced by users. This algorithm uses the term frequency of words generated by users and combines them using an extended clustering technique. Our evaluation results compare the proposed content-based community detection with structural-based methods. We also reveal community properties as well as their relation to contextual information. Administrators can use these algorithms in question & answer forums where the explicit links among users are missing.</p>
					<p><a href="https://lib.jucs.org/article/23516/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23516/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23516/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2017 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Exploring Teachers&#039; Perceptions on Modeling Effort Demanded by CSCL Designs with Explicit Artifact Flow Support</title>
		    <link>https://lib.jucs.org/article/23595/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 22(10): 1398-1417</p>
					<p>DOI: 10.3217/jucs-022-10-1398</p>
					<p>Authors: Osmel Bordies, Yannis Dimitriadis</p>
					<p>Abstract: Artifact flow represents an important aspect of teaching/learning processes, especially in CSCL situations in which complex relationships may be found. However, explicit modeling of CSCL processes with artifact flow may increase the cognitive load and associated effort of the teachers-designers and therefore decrease the efficiency of the design process. The empirical study, reported in this paper and grounded on mixed methods, provides evidence of the effort overload when teachers are involved in designing CSCL situations in a controlled environment. The results of the study illustrate the problem through the subjective perception of the participating teachers, complemented with objective parameters, such as time consumed, errors committed, uncertainty and objective complexity metrics.</p>
					<p><a href="https://lib.jucs.org/article/23595/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23595/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23595/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 1 Oct 2016 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Insecurity of an Efficient Privacy-preserving Public Auditing Scheme for Cloud Data Storage</title>
		    <link>https://lib.jucs.org/article/23043/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 21(3): 473-482</p>
					<p>DOI: 10.3217/jucs-021-03-0473</p>
					<p>Authors: Hongyu Liu, Leiting Chen, Zahra Davar, Mohammad Pour</p>
					<p>Abstract: Cloud storage has many merits but at the same time poses many challenges to data integrity and privacy. A cloud data auditing protocol, which enables a cloud server to prove the integrity of stored files to a verifier, is a powerful tool for secure cloud storage. Wang et al. proposed a privacy-preserving public auditing protocol; however, Worku et al. found the protocol seriously insecure and proposed an improvement to remedy the weakness. In this paper, unfortunately, we demonstrate that the new protocol due to Worku et al. fails to achieve soundness and obtains merely limited privacy. Specifically, we show that, even after deleting all the files of a data owner, a malicious cloud server is able to generate a response to a challenge without being caught by the TPA in their enhanced but unrealistic security model. Worse still, the protocol is insecure even in a correct security model. As for privacy, a dishonest verifier can tell which file is stored on the cloud. Solutions for efficient public auditing mechanisms with perfect privacy protection are still worth exploring.</p>
					<p><a href="https://lib.jucs.org/article/23043/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23043/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23043/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Mar 2015 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Improved Cloud Data Sharing Scheme with Hierarchical Attribute Structure</title>
		    <link>https://lib.jucs.org/article/23042/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 21(3): 454-472</p>
					<p>DOI: 10.3217/jucs-021-03-0454</p>
					<p>Authors: Zhusong Liu, Hongyang Yan, Zhiqiang Lin, Lingling Xu</p>
					<p>Abstract: Cloud computing is an emerging computing paradigm that can provide storage resources and computing capacity as services over the Internet. However, new security issues arise when users' sensitive data are outsourced and shared in an untrusted cloud. The traditional techniques to protect the confidentiality of sensitive data stored in the cloud are encryption and related cryptographic tools, with the corresponding private keys to access and decrypt the files disclosed only to authorized users. However, these traditional solutions are not scalable, because the computational cost of encryption and other access control is heavy for devices with limited computation ability. In this paper, we present a new way to implement scalable and fine-grained access control systems, which can be applied to big data in untrusted cloud computing environments. The solution is based on symmetric, efficient broadcast encryption and fine-grained attribute-based encryption (ABE). In this access control system, users are able to join and be revoked via broadcast encryption. An outsourced hierarchical ABE scheme is first proposed in this paper to construct the access control system. A security analysis is also provided.</p>
					<p><a href="https://lib.jucs.org/article/23042/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23042/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23042/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Mar 2015 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>On the Security of a User Equipment Registration Procedure in Femtocell-Enabled Networks</title>
		    <link>https://lib.jucs.org/article/23037/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 21(3): 406-418</p>
					<p>DOI: 10.3217/jucs-021-03-0406</p>
					<p>Authors: Chien-Ming Chen, Tsu-Yang Wu, Raylin Tso, Masahiro Mambo, Mu-En Wu</p>
					<p>Abstract: Mobile data traffic has been growing at an increasing rate with the popularity of smartphones, tablets, and other wireless devices. To reduce the load on the network, mobile network operators deploy femtocells to increase their coverage and performance and to eliminate wireless notspots. Femtocells are low-cost devices that connect a new femtocell network architecture to the core telecommunication network through a licensed spectrum and standardized interface protocols. In this paper, we first note that the user equipment registration procedure, which is defined in the 3GPP (Third Generation Partnership Project) standard, in a femtocell-enabled network is vulnerable to denial-of-service attacks. We then propose a mechanism to defend against these attacks. For compatibility, the proposed mechanism makes use of the well-defined control message in the 3GPP standard and modifies the user equipment registration procedure as little as possible.</p>
					<p><a href="https://lib.jucs.org/article/23037/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23037/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23037/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Mar 2015 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Utility-Oriented Routing Scheme for Interest-Driven Community-Based Opportunistic Networks</title>
		    <link>https://lib.jucs.org/article/23821/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 20(13): 1829-1854</p>
					<p>DOI: 10.3217/jucs-020-13-1829</p>
					<p>Authors: Xiuwen Fu, Wenfeng Li, Giancarlo Fortino, Pasquale Pace, Gianluca Aloi, Wilma Russo</p>
					<p>Abstract: Opportunistic networks, as representative networks evolved from social networks and ad-hoc networks, have been at the cutting edge of research in recent years. Many research efforts have focused on realistic mobility models and cost-effective routing schemes. The concept of "community", as one of the most inherent attributes of opportunistic networks, has proved very helpful in simulating mobility traces of human society and selecting suitable message forwarders. This paper proposes an interest-driven community-based mobility model that considers location preference and time variance in human behavior patterns. Based on this enhanced mobility model, a novel two-layer routing algorithm, named InterCom, is presented by jointly considering utilities generated by users' activity degree and social relationships. The results, obtained through an intensive simulation analysis, show that the proposed routing scheme is able to improve the delivery ratio while keeping the routing overhead and transmission delay within a reasonable range with respect to well-known routing schemes for opportunistic networks.</p>
					<p><a href="https://lib.jucs.org/article/23821/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23821/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23821/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Nov 2014 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Developing Distributed Collaborative Applications with HTML5 under the Coupled Objects Paradigm</title>
		    <link>https://lib.jucs.org/article/23816/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 20(13): 1712-1737</p>
					<p>DOI: 10.3217/jucs-020-13-1712</p>
					<p>Authors: Nelson Baloian, Diego Aguirre, Gustavo Zurita</p>
					<p>Abstract: One of the main tasks in developing distributed collaborative systems is to support synchronization processes. The Coupled Objects paradigm has emerged as a way to easily support these processes by dynamically coupling arbitrary user interface objects between heterogeneous applications. In this article we present an architecture for developing distributed collaborative applications using HTML5 and show its usage through the design and implementation of a series of collaborative systems in different scenarios. The experience of developing and using this architecture has shown that it is easy to use, robust, and performs well.</p>
					<p><a href="https://lib.jucs.org/article/23816/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23816/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23816/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Nov 2014 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Development of Navigation Skills through Audio Haptic Videogaming in Learners who are Blind</title>
		    <link>https://lib.jucs.org/article/23965/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 19(18): 2677-2697</p>
					<p>DOI: 10.3217/jucs-019-18-2677</p>
					<p>Authors: Jaime Sánchez, Marcia Campos</p>
					<p>Abstract: This study presents the development of a video game with audio and haptic interfaces that allows for the stimulation of orientation and mobility skills in people who are blind through the use of virtual environments. We evaluate the usability and the impact of the use of an audio and haptic-based videogame on the development of orientation and mobility skills in school-age learners who are blind. The results show that the interfaces used in the videogame are usable and appropriately designed, and that the haptic interface is as effective as the audio interface for orientation and mobility purposes.</p>
					<p><a href="https://lib.jucs.org/article/23965/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23965/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23965/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Dec 2013 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Text Analysis for Monitoring Personal Information Leakage on Twitter</title>
		    <link>https://lib.jucs.org/article/23931/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 19(16): 2472-2485</p>
					<p>DOI: 10.3217/jucs-019-16-2472</p>
					<p>Authors: Dongjin Choi, Jeongin Kim, Xeufeng Piao, Pankoo Kim</p>
					<p>Abstract: Social networking services (SNSs) such as Twitter and Facebook can be considered new forms of media. Information spreads much faster through social media than through any traditional news media because people can upload information without time or location constraints. For this reason, people have embraced SNSs and allowed them to become an integral part of their everyday lives. People express their emotional status to let others know how they feel about certain information or events. However, they are likely not only to share information with others but also to unintentionally expose personal information such as their place of residence, phone number, and date of birth. If such information reaches users with inappropriate intentions, there may be serious consequences such as online and offline stalking. To prevent information leakage and detect spam, many researchers have monitored e-mail systems and web blogs. This paper considers text messages on Twitter, one of the most popular SNSs in the world, to reveal hidden patterns using several coefficient approaches. It focuses on users who exchange Tweets and examines the types of information they expose when reciprocating others' Tweets, based on a sample of 50 million Tweets collected by Stanford University in November 2009. We chose active Twitter users based on a "happy birthday" rule and detected information related to their place of residence and personal names using the proposed coefficient method, which we compared with other coefficient approaches. As a result of this research, we conclude that the proposed coefficient method is able to detect and recommend standard English words for non-standard words under certain conditions. Ultimately, we detected 88,882 (24.287%) more name-related Tweets and 14,054 (3.84%) more location-related Tweets than a standard word-matching method alone.</p>
					<p><a href="https://lib.jucs.org/article/23931/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23931/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23931/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 1 Oct 2013 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Semantic Integration of Heterogeneous Data Sources in the MOMIS Data Transformation System</title>
		    <link>https://lib.jucs.org/article/23813/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 19(13): 1986-2012</p>
					<p>DOI: 10.3217/jucs-019-13-1986</p>
					<p>Authors: Maurizio Vincini, Domenico Beneventano, Sonia Bergamaschi</p>
					<p>Abstract: In the last twenty years, many data integration systems following a classical wrapper/mediator architecture and providing a Global Virtual Schema (a.k.a. Global Virtual View - GVV) have been proposed by the research community. The main issues faced by these approaches range from system-level heterogeneities to structural heterogeneities at the syntactic and semantic levels. Despite the research effort, all the proposed approaches require considerable user intervention for customizing and managing the data integration and reconciliation tasks. In some cases, the effort and the complexity of the task are huge, since they require the development of specific programming code. Unfortunately, due to the specificity to be addressed, application code and solutions are rarely reusable in other domains. For this reason, the Lowell Report 2005 provided the guideline for the definition of a public benchmark for the information integration problem. The proposal, called THALIA (Test Harness for the Assessment of Legacy information Integration Approaches), focuses on how data integration systems manage syntactic and semantic heterogeneities, which are definitely the greatest technical challenges in the field. We developed a Data Transformation System (DTS) that supports data transformation functions and produces query translations in order to push query execution down to the sources. Our DTS is based on MOMIS, a mediator-based data integration system that our research group has been developing and supporting since 1999. In this paper, we show how the DTS is able to solve all twelve queries of the THALIA benchmark by using a simple combination of declarative translation functions already available in standard SQL.
We consider this a remarkable result, mainly for two reasons: firstly, to the best of our knowledge no other system has provided a complete answer to the benchmark; secondly, our queries do not require any overhead of new code.</p>
					<p><a href="https://lib.jucs.org/article/23813/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23813/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23813/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 1 Jul 2013 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Key-Insulated Signcryption</title>
		    <link>https://lib.jucs.org/article/23552/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 19(10): 1351-1374</p>
					<p>DOI: 10.3217/jucs-019-10-1351</p>
					<p>Authors: Jia Fan, Yuliang Zheng, Xiaohu Tang</p>
					<p>Abstract: Signcryption is a public key cryptographic technique that is particularly suited for mobile communications thanks to its light computational and communication overhead. The widespread use of signcryption in a mobile computing environment, however, is accompanied by an increased risk of private key exposure. This paper addresses the issue of key exposure by proposing a key-insulated signcryption technique. We define a security model for key-insulated signcryption and prove that the key-insulated signcryption technique is secure in this model.</p>
					<p><a href="https://lib.jucs.org/article/23552/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23552/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23552/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 May 2013 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Integrated Approach of Software Development and Test Processes to Distributed Teams</title>
		    <link>https://lib.jucs.org/article/23973/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(19): 2686-2705</p>
					<p>DOI: 10.3217/jucs-018-19-2686</p>
					<p>Authors: Gislaine Camila Lapasini Leal, Ana Chaves, Elisa Hatsue Moriya Huzita, Marcio Delamaro</p>
					<p>Abstract: Distributed Software Development (DSD) is a development strategy that meets globalization needs for increased productivity and cost reduction. However, temporal distance, geographical dispersion and socio-cultural differences introduce challenges and, especially, add new requirements related to the communication, coordination and control of projects. Among these new demands is the need for a software process that provides adequate support to distributed software development. This paper presents an integrated approach to software development and test that considers the peculiarities of distributed teams. The purpose of the approach is to offer support to DSD, providing better project visibility, improving communication between the development and test teams, and minimizing the ambiguity and difficulty of understanding the artifacts and activities. This integrated approach was conceived on four pillars: (i) identifying the DSD peculiarities concerning development and test processes; (ii) defining the elements necessary to compose the integrated development and test approach supporting distributed teams; (iii) describing and specifying the workflows, artifacts, and roles of the approach; and (iv) representing the approach appropriately to enable effective communication and understanding of it.</p>
					<p><a href="https://lib.jucs.org/article/23973/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23973/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23973/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 1 Nov 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Review of Mobile Location-based Games for Learning across Physical and Virtual Spaces</title>
		    <link>https://lib.jucs.org/article/23872/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(15): 2120-2142</p>
					<p>DOI: 10.3217/jucs-018-15-2120</p>
					<p>Authors: Nikolaos Avouris, Nikoleta Yiannoutsou</p>
					<p>Abstract: In this paper we review mobile location-based games for learning. These games are played in physical space, but at the same time they are supported by actions and events in an interconnected virtual space. Learning in these games is related to issues like the narrative structure, space, game rules and content that define the virtual game space. First, we introduce the theoretical and empirical considerations of mobile location-based games, and then we discuss an analytical framework of their main characteristics through typical examples. In particular, we focus on their narrative structure, the interaction modes that they afford, their use of physical space as a prop for action, the way this is linked to virtual space, and the possible learning impact the game activities have. Finally, we conclude with an outline of future trends and possibilities that these kinds of playful activities can have on learning, especially outside school, such as in environmental studies and visits to museums and other sites of cultural and historical value.</p>
					<p><a href="https://lib.jucs.org/article/23872/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23872/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23872/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Aug 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>The Modelling of a Digital Forensic Readiness Approach for Wireless Local Area Networks</title>
		    <link>https://lib.jucs.org/article/23720/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(12): 1721-1740</p>
					<p>DOI: 10.3217/jucs-018-12-1721</p>
					<p>Authors: Sipho Ngobeni, Hein Venter, Ivan Burke</p>
					<p>Abstract: Over the past decade, wireless mobile communication technology based on the IEEE 802.11 Wireless Local Area Networks (WLANs) has been adopted worldwide on a massive scale. However, as the number of wireless users has soared, so has the possibility of cybercrime. WLAN digital forensics is seen as not only a response to cybercrime in wireless networks, but also a means to stem the increase of cybercrime in WLANs. The challenge in WLAN digital forensics is to intercept and preserve all the communications generated by the mobile stations and to conduct a proper digital forensic investigation. This paper attempts to address this issue by proposing a wireless digital forensic readiness model designed to monitor, log and preserve wireless network traffic for digital forensic investigations. Thus, the information needed by the digital forensic experts is rendered readily available, should it be necessary to conduct a digital forensic investigation. The availability of this digital information can maximise the chances of using it as digital evidence and it reduces the cost of conducting the entire digital forensic investigation process.</p>
					<p><a href="https://lib.jucs.org/article/23720/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23720/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23720/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Jun 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Analysing the Security Risks of Cloud Adoption Using the SeCA Model: A Case Study</title>
		    <link>https://lib.jucs.org/article/23717/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(12): 1662-1678</p>
					<p>DOI: 10.3217/jucs-018-12-1662</p>
					<p>Authors: Thijs Baars, Marco Spruit</p>
					<p>Abstract: When IS/IT needs to be replaced, cloud systems might provide a feasible solution. However, the adoption process has thus far gone undocumented, and until very recently enterprise architects were troubled by the lack of proper hands-on tools. This single case study describes a large Dutch utility provider in its effort to understand the facets of the cloud and identify the risks associated with it. In an action research setting, the SeCA model was used to analyse the cloud solutions and identify the risks with specific data classifications in mind. The results show how decision makers can use the SeCA model in various ways to identify the security risks associated with each cloud solution analysed. The analysis assumes that data classifications are in place. This research concludes that by using the SeCA model, a full understanding of the security risks can be gained on an objective and structural level; this further validates prior empirical research showing that the SeCA model is a proper hands-on tool for cloud security analysis.</p>
					<p><a href="https://lib.jucs.org/article/23717/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23717/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23717/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Jun 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Information Security Service Culture - Information Security for End-users</title>
		    <link>https://lib.jucs.org/article/23715/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(12): 1628-1642</p>
					<p>DOI: 10.3217/jucs-018-12-1628</p>
					<p>Authors: Rahul Rastogi, Rossouw Solms</p>
					<p>Abstract: Information security culture has been found to have a profound influence on the compliance of end-users with information security policies and controls in their organization. A complementary aspect of information security is the culture of information security managers and developers in the organization, which this paper terms the 'information security service culture' (ISSC). ISSC shapes and guides the behaviour of information security managers and developers as they formulate information security policies and controls. Thus, ISSC has a profound influence on the nature of these policies and controls, and thereby on the interaction of end-users with these artefacts. ISSC is useful in transforming information security managers and developers from their present-day technology-focused approach to an end-user-centric approach.</p>
					<p><a href="https://lib.jucs.org/article/23715/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23715/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23715/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Jun 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Establishing Knowledge Networks via Analysis of Research Abstracts</title>
		    <link>https://lib.jucs.org/article/23388/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(8): 993-1021</p>
					<p>DOI: 10.3217/jucs-018-08-0993</p>
					<p>Authors: Mahalakshmi Suryanarayanan, Dilip Sam, Sendhilkumar Selvaraju</p>
					<p>Abstract: The extraction and propagation of knowledge inherent in a social network environment is gaining significance in research. The knowledge hidden within a social network would be easier to comprehend if presented in a collective form. In the field of scientific research, such a presentation of the knowledge that evolves from research communities would aid researchers. In this paper, we propose the evolution of a knowledge network from the information available in digital bibliographic repositories like DBLP [DBLP]. The most important characteristic of this knowledge network is that it captures the proficiency of a scientist with respect to an area of research. This is achieved by categorizing the research articles published by an author into specific domains. The quality of the research articles is ascertained by analysing the abstracts within the domain. This analysis is used to determine the quality of a research article in terms of originality and relevancy, and thereby the impact of the article with respect to a research area. From this quality measure, the impact of the scientist on the research community is derived as a cumulative entity. This knowledge helps in the evolution of the knowledge network from the social network of a research community.</p>
					<p><a href="https://lib.jucs.org/article/23388/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23388/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23388/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Apr 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Social Network Based Reputation Computation and Document Classification</title>
		    <link>https://lib.jucs.org/article/23084/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(4): 532-553</p>
					<p>DOI: 10.3217/jucs-018-04-0532</p>
					<p>Authors: Joo Lee, Yue Duan, Jae Oh, Wenliang Du, Howard Blair, Lusha Wang, Xing Jin</p>
					<p>Abstract: We develop two social network based algorithms that automatically compute author reputation from a collection of textual documents. We first extract keyword reference behaviors of the authors to construct a social network, which represents relationships among the authors in terms of information reference behavior. With this network, we apply the two algorithms: the first computes each author's reputation value considering only direct reference and the second utilizes indirect reference recursively. We compare the reputation values computed by the two algorithms and reputation ratings given by a human domain expert. We further evaluate the algorithms in email categorization tasks by comparing them with machine learning techniques. Finally, we analyse the social network through a community detection algorithm and other analysis techniques. We observed several interesting phenomena including the network being scale-free and having a negative assortativity.</p>
					<p><a href="https://lib.jucs.org/article/23084/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23084/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23084/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Feb 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>The Application of Pattern Repositories for Sharing PLE Practices in Networked Communities</title>
		    <link>https://lib.jucs.org/article/30002/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(10): 1492-1510</p>
					<p>DOI: 10.3217/jucs-017-10-1492</p>
					<p>Authors: Felix Mödritscher, Zinayida Petrushyna, Effie Law</p>
					<p>Abstract: Personal learning environments (PLEs) comprise a new kind of learning technology that aims at putting learners centre stage, i.e. at empowering them to design and use environments for their own learning needs and purposes. Setting a PLE approach into practice, however, is not trivial at all, as prospective end-users have varying attitudes towards and experience in using ICT in general and PLE software in particular. Here, practice sharing could be an enabler for increasing the usefulness and usability of PLE solutions. In this paper we examine the relevant issues of capturing and sharing "good practices" of PLE-based, collaborative activities. By good practices we refer to learning experiences provided by learners for a networked community. Moreover, we introduce the concept of a pattern repository as a back-end service for PLEs which should, in the sense of community approaches like Last.fm, support PLE users in selecting and using learning tools for their activities. Finally, we present a prototype and argue for the advantages of such a practice-sharing infrastructure on the basis of community literature, experiences, and an evaluation study.</p>
					<p><a href="https://lib.jucs.org/article/30002/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/30002/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/30002/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Jun 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Towards Classification of Web Ontologies for the Emerging Semantic Web</title>
		    <link>https://lib.jucs.org/article/29959/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(7): 1021-1042</p>
					<p>DOI: 10.3217/jucs-017-07-1021</p>
					<p>Authors: Muhammad Fahad, Nejib Moalla, Abdelaziz Bouras, Muhammad Qadir, Muhammad Farukh</p>
					<p>Abstract: The massive growth in ontology development has opened new research challenges, such as ontology management, search, and retrieval, for the entire semantic web community. This has resulted in many recent developments, like OntoKhoj, Swoogle, and OntoSearch2, that facilitate the tasks users have to perform. These semantic web portals mainly treat ontologies as plain texts and use traditional text classification algorithms to classify ontologies into directories and assign predefined labels, rather than using the semantic knowledge hidden within the ontologies. These approaches suffer from many types of classification problems and a lack of accuracy, especially in the case of overlapping ontologies that share common vocabularies. In this paper, we define the ontology classification problem and categorize it into many sub-problems. We present a new ontological methodology for the classification of web ontologies, which has been guided by the requirements of emerging Semantic Web applications and by the lessons learnt from previous systems. The proposed framework, OntClassifire, is tested on 34 ontologies with a certain degree of domain overlap, and the effectiveness of the ontological mechanism is verified. It benefits the construction, maintenance, and expansion of ontology directories on the semantic web, helping to focus crawling and to improve the quality of search for software agents and people. We conclude that the use of context-specific knowledge hidden in the structure of ontologies gives more accurate results for ontology classification.</p>
					<p><a href="https://lib.jucs.org/article/29959/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29959/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29959/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 1 Apr 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Clustering Approach for Collaborative Filtering Recommendation Using Social Network Analysis</title>
		    <link>https://lib.jucs.org/article/29919/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(4): 583-604</p>
					<p>DOI: 10.3217/jucs-017-04-0583</p>
					<p>Authors: Manh Pham, Yiwei Cao, Ralf Klamma, Matthias Jarke</p>
					<p>Abstract: Collaborative Filtering (CF) is a well-known technique in recommender systems. CF exploits relationships between users and recommends items to the active user according to the ratings of his/her neighbors. CF suffers from the data sparsity problem: users rate only a small set of items, which makes the computation of similarity between users imprecise and consequently reduces the accuracy of CF algorithms. In this article, we propose a clustering approach based on the social information of users to derive recommendations. We study the application of this approach in two scenarios: academic venue recommendation based on collaboration information, and trust-based recommendation. Using data from the DBLP digital library and Epinions, the evaluation shows that our clustering-based CF performs better than traditional CF algorithms.</p>
					<p><a href="https://lib.jucs.org/article/29919/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29919/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29919/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Feb 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Information Consolidation in Large Bodies of Information</title>
		    <link>https://lib.jucs.org/article/29865/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(21): 3314-3323</p>
					<p>DOI: 10.3217/jucs-016-21-3314</p>
					<p>Authors: Gerhard Wurzinger</p>
					<p>Abstract: Due to information technologies, the problem we are facing today is not a lack of information but too much information. This phenomenon becomes very clear when we consider two figures that are often quoted: knowledge is doubling in many fields (biology, medicine, computer science, ...) roughly every 6 years, yet information is doubling every 8 months! This implies that the same piece of information/knowledge is published a large number of times with small variations. Just look at an arbitrary news item: if it is considered of some general interest, reports of it will appear in all major newspapers, journals, electronic media, etc. This is also the problem with information portals that tie together a number of large databases. It is our contention that we need methods to reduce the huge set of information concerning a particular topic to a number of pieces of information (let us call each such piece an "essay" in what follows) that present a good cross-section of potential points of view. We will explain why one essay is usually not enough, yet the problem of reducing a huge amount of contributions to a digestible number of essays is formidable, indeed science fiction at the moment. We will argue in this paper that it is one of the important tasks of computer science to start tackling this problem, and we will show that in some special cases partial solutions are possible.</p>
					<p><a href="https://lib.jucs.org/article/29865/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29865/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29865/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Dec 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Approach to Generation of Decision Rules</title>
		    <link>https://lib.jucs.org/article/29579/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(1): 140-158</p>
					<p>DOI: 10.3217/jucs-016-01-0140</p>
					<p>Authors: Zhang Mingyi, Li Danning, Zhang Ying</p>
					<p>Abstract: Classical classification and clustering based on equivalence relations are very important tools in decision-making. An equivalence relation is usually determined by properties of objects in a given domain. When making decisions, anything that can be spoken about in the subject position of a natural sentence is an object, and the properties of objects are fundamental elements of the knowledge of the given domain. This gives the possibility of representing the concepts related to a given domain. In general, the information about a set of objects is uncertain or incomplete. Various approaches to representing the uncertainty of a concept have been proposed. In particular, Zadeh's fuzzy set theory and Pawlak's rough set theory have been most influential in this research field. Zadeh characterizes the uncertainty of a concept by introducing a membership function and a similarity (fuzzy equivalence) relation on a set of objects. Pawlak characterizes the uncertainty of a concept by the union of some equivalence classes of an equivalence relation. As one of the particularly important and widely used binary relations, the equivalence relation plays a fundamental role in classification, clustering, pattern recognition, polling, automata, learning, control inference, natural language understanding, etc. An equivalence relation is a binary relation with reflexivity, symmetry and transitivity. However, in many real situations it is not sufficient to consider equivalence relations only. In fact, many relations determined by the attributes of objects do not satisfy transitivity. In particular, information obtained from a domain of objects is not transitive when we make decisions based on properties of objects. Moreover, the information about the symmetry of a relation is mostly uncertain. It is therefore necessary to make approximate decisions and carry out reasoning with indistinct concepts. This prompts us to explore a new class of relations, the so-called fuzzy semi-equivalence relations.
In this paper we introduce the notion of fuzzy semi-equivalence relations and study their properties. In particular, a constructive method for fuzzy semi-equivalence classes is presented. Applying it, we present approaches to the fuzzification of indistinct concepts approximated by fuzzy relative and semi-equivalence classes, respectively. Finally, an application of fuzzy semi-equivalence relation theory to the generation of decision rules is outlined.</p>
					<p><a href="https://lib.jucs.org/article/29579/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29579/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29579/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 1 Jan 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Automatically Deciding if a Document was Scanned or Photographed</title>
		    <link>https://lib.jucs.org/article/29564/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(18): 3364-3375</p>
					<p>DOI: 10.3217/jucs-015-18-3364</p>
					<p>Authors: Gabriel Pereira e Silva, Rafael Lins, Brenno Miro, Steven Simske, Marcelo Thielo</p>
					<p>Abstract: Portable digital cameras are widely used by students and professionals in different fields as a practical way to digitize documents. Tools such as PhotoDoc enable the batch processing of such documents, performing automatic border removal and perspective correction. A PhotoDoc-processed document and a scanned one look very similar to the human eye if both are in true color. However, when one tries to automatically binarize a batch of documents, those digitized with portable cameras and those from scanners exhibit different features, so knowledge of their source is fundamental for successful processing. This paper presents a classification strategy to distinguish between scanned and photographed documents. Over 16,000 documents were tested, with a correct classification rate of over 99.96%.</p>
					<p><a href="https://lib.jucs.org/article/29564/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29564/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29564/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Dec 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Human Tracking based on Multiple View Homography</title>
		    <link>https://lib.jucs.org/article/29493/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(13): 2463-2484</p>
					<p>DOI: 10.3217/jucs-015-13-2463</p>
					<p>Authors: Dong-Wook Seo, Hyun-Uk Chae, Byeong-Woo Kim, Won-Ho Choi, Kang-Hyun Jo</p>
					<p>Abstract: We propose a method for detecting and tracking objects in a multiple-camera system. To track objects, one needs to establish correspondences between objects across multiple views. We apply the principal axes of objects and the homography constraint to match objects across multiple cameras. The principal axis is derived from the silhouette of an object, which is extracted by background subtraction. We use multiple background models for the background subtraction: in an image sequence, many changes in pixel intensity occur that cannot be characterized by a single background model, so multiple background models are necessary. We also use a median background model to reduce noise. The silhouette is detected from the difference between the background models and the current image, which includes the moving objects. For calculating the homography, we use landmarks on the ground plane in 3D space. The homography describes the correspondence between coinciding points seen in different views. The intersections of the principal axes with the ground plane in 3D space are the same points shown in each view; an intersection occurs where a principal axis in one image crosses the ground plane transformed from another image. We construct the correspondence between an intersection in the current image and the intersection transformed from the other image by the homography constraint, and confirm those correspondences that lie within a short distance measured in the top-view plane. Thus, we track a person by these corresponding points on the ground plane.</p>
					<p><a href="https://lib.jucs.org/article/29493/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29493/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29493/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Jul 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Flexible Strategy-Based Model Comparison Approach: Bridging the Syntactic and Semantic Gap</title>
		    <link>https://lib.jucs.org/article/29478/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(11): 2225-2253</p>
					<p>DOI: 10.3217/jucs-015-11-2225</p>
					<p>Authors: Kleinner Oliveira, Karin Breitman, Toacy Oliveira</p>
					<p>Abstract: In this paper we discuss the importance of model comparison as one of the pillars of model-driven development (MDD). We propose an innovative, flexible model comparison approach based on the composition of matching strategies. The proposed approach is fully implemented by a match operator that combines syntactic matching rules, a synonym dictionary, and typographic similarity strategies with a semantic, ontology-based strategy. Ontologies are semantically richer and have greater power of expression than UML models, and can be formally verified for consistency, thus providing more reliability and accuracy in model comparison. The proposed approach is presented in the format of a workflow that provides clear guidance to users and facilitates the inclusion of new matching strategies and evolution.</p>
					<p><a href="https://lib.jucs.org/article/29478/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29478/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29478/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 1 Jun 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Protecting Mobile TV Multimedia Content in DVB/GPRS Heterogeneous Wireless Networks</title>
		    <link>https://lib.jucs.org/article/29362/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(5): 1023-1041</p>
					<p>DOI: 10.3217/jucs-015-05-1023</p>
					<p>Authors: Shiguo Lian, Yan Zhang</p>
					<p>Abstract: Normally, the multimedia content provider and the network service provider are separate in mobile TV systems. The TV programs are broadcast from the content provider to the mobile terminals through the Digital Video Broadcasting Transmission System for Handheld Terminals (DVB-H), and the access information is unicast from the service provider to the user via General Packet Radio Service (GPRS) networks. Due to the heterogeneity of the network architectures, the variation in protocols, and the differences in algorithms, securing mobile TV content is becoming a significant challenge. In this paper, we present the architecture, protocol, user identification, and digital rights management (DRM) for protecting mobile TV multimedia content. The network architecture integrates DVB-H and GPRS to provide secure mobile TV services. Efficient protocols and algorithms are proposed to encrypt the content and to decrypt the encrypted content. The user identification is able to identify legal users by matching the username-password pair or the scanned fingerprint. The DRM is able to protect the data from both DVB-H and GPRS. Following this framework, illegal usage of the mobile TV services can be efficiently prevented, and real-time multimedia Quality-of-Service (QoS) with respect to delay can be guaranteed. A real implementation has demonstrated the effectiveness of the multimedia content protection in heterogeneous mobile networks. In addition, the delay is sufficiently low to provide live TV.</p>
					<p><a href="https://lib.jucs.org/article/29362/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29362/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29362/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Mar 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Stacked Dependency Networks for Layout Document Structuring</title>
		    <link>https://lib.jucs.org/article/29210/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(18): 2998-3010</p>
					<p>DOI: 10.3217/jucs-014-18-2998</p>
					<p>Authors: Boris Chidlovskii, Loïc Lecerf</p>
					<p>Abstract: We address the problems of structuring and annotation of layout-oriented documents. We model the annotation problems as collective classification on graph-like structures with typed instances and links that capture the domain-specific knowledge. We use relational dependency networks (RDNs) for collective inference on the multi-typed graphs. We then describe a variant of RDNs where a stacked approximation replaces the Gibbs sampling in order to accelerate the inference. We report results of evaluation tests for both the Gibbs sampling and stacking inference on two document structuring examples.</p>
					<p><a href="https://lib.jucs.org/article/29210/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29210/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29210/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Oct 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Language-Independent, Open-Vocabulary System Based on HMMs for Recognition of Ultra Low Resolution Words</title>
		    <link>https://lib.jucs.org/article/29209/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(18): 2982-2997</p>
					<p>DOI: 10.3217/jucs-014-18-2982</p>
					<p>Authors: Farshideh Einsele, Rolf Ingold, Jean Hennebert</p>
					<p>Abstract: In this paper, we introduce and evaluate a system capable of recognizing words extracted from ultra low resolution images such as those frequently embedded in web pages. The design of the system has been driven by the following constraints. First, the system has to recognize small font sizes between 6 and 12 points, where anti-aliasing and resampling filters are applied. Such procedures add noise between adjacent characters in the words and complicate any a priori segmentation of the characters. Second, the system has to be able to recognize any word in an open-vocabulary setting, potentially mixing different languages in the Latin alphabet. Finally, the training procedure must be automatic, i.e. without requiring a large set of data to be extracted, segmented and labelled manually. These constraints led us to an architecture based on ergodic HMMs where states are associated with characters. We also introduce several performance improvements: increasing the order of the emission probability estimators, including minimum and maximum width constraints on the character models, and using a training set consisting of all possible adjacency cases of Latin characters. The proposed system is evaluated on different font sizes and families, showing good robustness for sizes down to 6 points.</p>
					<p><a href="https://lib.jucs.org/article/29209/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29209/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29209/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Oct 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Using Conjunctions and Adverbs for Author Verification</title>
		    <link>https://lib.jucs.org/article/29204/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(18): 2967-2981</p>
					<p>DOI: 10.3217/jucs-014-18-2967</p>
					<p>Authors: Daniel Pavelec, Luiz Oliveira, Edson Justino, Leonardo Batista</p>
					<p>Abstract: Linguistics and stylistics have been investigated for author identification for quite a while, but recently we have witnessed an impressive growth in the volume with which lawyers and courts call upon the expertise of linguists in cases of disputed authorship. This motivates computer science researchers to look at the problem of author identification from a different perspective. In this work, we propose a stylometric feature set based on conjunctions and adverbs of the Portuguese language to address the problem of author identification. Two different classification approaches were considered. The first, called writer-independent, reduces the pattern recognition problem to a single model and two classes, making it possible to build a robust system even when few genuine samples per writer are available. The second, the personal model or writer-dependent approach, very often performs better but needs a larger number of samples per writer. Experiments on a database composed of short articles from 30 different authors, with a Support Vector Machine (SVM) as classifier, demonstrate that the proposed strategy can produce results comparable to the literature.</p>
					<p><a href="https://lib.jucs.org/article/29204/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29204/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29204/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Oct 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Systematic Characterisation of Objects in Digital Preservation: The eXtensible Characterisation Languages</title>
		    <link>https://lib.jucs.org/article/29201/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(18): 2936-2952</p>
					<p>DOI: 10.3217/jucs-014-18-2936</p>
					<p>Authors: Christoph Becker, Andreas Rauber, Volker Heydegger, Jan Schnasse, Manfred Thaller</p>
					<p>Abstract: During the last decades, digital objects have become the primary medium to create, shape, and exchange information. However, in contrast to analog objects such as books that directly represent their content, digital objects are not usable without a corresponding technical environment. The fast changes in these environments and in formats and technologies mean that digital documents have a short lifespan before they become obsolete. Digital preservation, i.e. actions to ensure longevity of digital information, thus has become a pressing challenge. The dominant strategies prevailing today are migration and emulation; for each strategy, different tools are available. When converting an object to a different representation, a validation of the content is needed to verify that the transformed objects still authentically represent the same intellectual content. This validation so far is largely done manually, which is infeasible for large collections. Preservation planning supports decision makers in reaching accountable decisions by evaluating potential strategies against well-defined requirements. Especially the evaluation of different migration tools for digital preservation has to rely on validating the converted objects and thus on an analysis of the logical structure and the content of documents. Existing approaches for characterising and describing objects do not attempt to fully extract the informational content of digital objects and thus are not sufficient for an in-depth validation of transformed content. This paper describes the eXtensible Characterisation Languages (XCL) that support the automatic validation of document conversions and the evaluation of migration quality by hierarchically decomposing a document and representing documents from different sources in an abstract XML language. 
The description language XCDL provides an abstract representation of digital content in XML, while the extraction language XCEL allows an extraction engine to create such an abstract description by mapping file format structures to XCDL concepts. We present the context of the development of these languages and tools and describe the overall concept and features of the languages. We further give examples and show how the languages can be applied to the evaluation of digital preservation solutions in the context of preservation planning.</p>
					<p><a href="https://lib.jucs.org/article/29201/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29201/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29201/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Oct 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Generic Architecture for the Conversion of Document Collections into Semantically Annotated Digital Archives</title>
		    <link>https://lib.jucs.org/article/29199/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(18): 2912-2935</p>
					<p>DOI: 10.3217/jucs-014-18-2912</p>
					<p>Authors: Josep Lladós, Dimosthenis Karatzas, Joan Mas, Gemma Sánchez</p>
					<p>Abstract: Mass digitization of document collections with further processing and semantic annotation is an increasing activity among libraries and archives at large for preservation, browsing and navigation, and search purposes. In this paper we propose a software architecture for the process of converting high volumes of document collections to semantically annotated digital libraries. The proposed architecture recognizes two sources of knowledge in the conversion pipeline, namely document images and humans. The Image Analysis module and the Correction and Validation module cover the initial conversion stages. In the former information is automatically extracted from document images. The latter involves human intervention at a technical level to define workflows and to validate the image processing results. The second stage, represented by the Knowledge Capture modules requires information specific to the particular knowledge domain and generally calls for expert practitioners. These two principal conversion stages are coupled with a Knowledge Management module which provides the means to organise the extracted and acquired knowledge. In terms of data propagation, the architecture follows a bottom-up process, starting with document image units, called terms, and progressively building meaningful concepts and their relationships. In the second part of the paper we describe a real scenario with historical document archives implemented according to the proposed architecture.</p>
					<p><a href="https://lib.jucs.org/article/29199/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29199/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29199/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Oct 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Recognising Informative Web Page Blocks Using Visual Segmentation for Efficient Information Extraction</title>
		    <link>https://lib.jucs.org/article/29101/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(11): 1893-1910</p>
					<p>DOI: 10.3217/jucs-014-11-1893</p>
					<p>Authors: Jinbeom Kang, Joongmin Choi</p>
					<p>Abstract: As web sites are getting more complicated, the construction of web information extraction systems becomes more troublesome and time-consuming. A common theme is the difficulty in locating the segments of a page in which the target information is contained, which we call the informative blocks. This article reports on the Recognising Informative Page Blocks algorithm (RIPB), which is able to identify the informative block in a web page so that information extraction algorithms can work on it more efficiently. RIPB relies on an existing algorithm for vision-based page block segmentation to analyse and partition a web page into a set of visual blocks, and then groups related blocks with similar content structures into block clusters by using a tree edit distance method. RIPB recognises the informative block cluster by using tree alignment and tree matching. A series of experiments were performed, and the conclusions were that RIPB was more than 95% accurate in recognising informative block clusters, and improved the efficiency of information extraction by 17%.</p>
					<p><a href="https://lib.jucs.org/article/29101/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29101/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29101/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Jun 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Structure-Based Crawling in the Hidden Web</title>
		    <link>https://lib.jucs.org/article/29098/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(11): 1857-1876</p>
					<p>DOI: 10.3217/jucs-014-11-1857</p>
					<p>Authors: Marcio Vidal, Altigran S. da Silva, Edleno De Moura, João Cavalcanti</p>
					<p>Abstract: The number of applications that need to crawl the Web to gather data is growing at an ever increasing pace. In some cases, the criterion to determine which pages must be included in a collection is based on their contents; in others, it would be wiser to use a structure-based criterion. In this article, we present a proposal to build structure-based crawlers that requires just a few examples of the pages to be crawled and an entry point to the target web site. Our crawlers can deal with form-based web sites. Contrary to other proposals, ours does not require a sample database to fill in the forms and does not require heavy user interaction. Our experiments show that our precision is 100% on seventeen real-world web sites, with both static and dynamic content, and that our recall is 95% on the eleven static web sites examined.</p>
					<p><a href="https://lib.jucs.org/article/29098/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29098/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29098/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Jun 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Information Integration for the Masses</title>
		    <link>https://lib.jucs.org/article/29094/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(11): 1811-1837</p>
					<p>DOI: 10.3217/jucs-014-11-1811</p>
					<p>Authors: Jim Blythe, Dipsy Kapoor, Craig Knoblock, Kristina Lerman, Steven Minton</p>
					<p>Abstract: Information integration applications combine data from heterogeneous sources to assist the user in solving repetitive data-intensive tasks. Currently, such applications require a high level of expertise in information integration since users need to know how to extract data from an on-line source, describe its semantics, and build integration plans to answer specific queries. We have integrated three task learning technologies within a single desktop application to assist users in creating information integration applications. It includes a tool for programmatic access to data in on-line information sources, a tool to semantically model them by aligning their input and output parameters with a common ontology, and a tool that enables the user to create complex integration plans using simple text instructions. Our system was integrated within the Calo Desktop Assistant and evaluated independently on a range of problems. It enabled non-expert users to construct integration plans for a variety of problems in the office and travel domains.</p>
					<p><a href="https://lib.jucs.org/article/29094/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29094/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29094/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Jun 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Exposure and Support of Latent Social Networks among Learning Object Repository Users</title>
		    <link>https://lib.jucs.org/article/29087/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(10): 1717-1738</p>
					<p>DOI: 10.3217/jucs-014-10-1717</p>
					<p>Authors: Peng Han, Gerd Kortemeyer, Bernd Krämer, Christine Prümmer</p>
					<p>Abstract: Although immense efforts have been invested in the construction of hundreds of learning object repositories, the degree of reuse of learning resources maintained in such repositories is still disappointingly low. As the reasons for this observation are not well understood, we carried out an empirical investigation with the objectives of identifying recurring patterns in the retrieval and (re-)use of learning resources and of designing and testing social networking functionality supporting communities of practice. The outcomes of this project, which are reported here, aim to inform the design of a new generation of learning object repositories, like CampusContent, that try to eliminate the deficits of current repositories and incorporate recent contributions in the area of social software. The object of our investigation was LON-CAPA, a cross-institutional learning content management and assessment system in use since 2000. We analyzed hundreds of thousands of log entries collected over a period of three years and detected various kinds of latent relationships among LON-CAPA users, such as the co-occurrence of learning resources from independent authors in instructional materials. To understand the rationale behind these findings, we conducted a study with LON-CAPA users. One section of the questionnaire asked for people's opinion on the expected benefit of community support. Nearly 80% of the study participants said that the formation of communities of practice (CoP) would be an asset to LON-CAPA. More than 80% would be ready to provide their profiles for matching up with CoPs and to serve the community by spending time on the evaluation of resources they had used. Finally, we sketch a faceted search functionality we designed to support CoPs among LON-CAPA users. This functionality is currently being tested with two CoPs.</p>
					<p><a href="https://lib.jucs.org/article/29087/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29087/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29087/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 May 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Applications of Mash-ups for a Digital Journal</title>
		    <link>https://lib.jucs.org/article/29086/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(10): 1695-1716</p>
					<p>DOI: 10.3217/jucs-014-10-1695</p>
					<p>Authors: Muhammad Khan, Narayanan Kulathuramaiyer, Hermann Maurer</p>
					<p>Abstract: The WWW is currently experiencing revolutionary growth due to numerous emerging tools, techniques and concepts. Digital journals thus need to transform themselves to cope with this evolution of the web. With growing information size and access, conventional techniques for managing a journal and supporting authors and readers are becoming insufficient. Journals of the future need to provide innovative administrative tools to help their managers ensure quality. They also need to provide better facilities for assisting authors and readers in making decisions regarding the submission of papers, and novel navigational features for finding relevant publications and collaborators in particular areas of interest. In this paper, we explore an innovative solution to these problems using an emerging Web 2.0 technology. We explore the application of mash-ups for J.UCS - the Journal of Universal Computer Science and encourage readers and authors to try out the applications (see section 11 Conclusions). J.UCS can then serve as a model for contemporary electronic journals.</p>
					<p><a href="https://lib.jucs.org/article/29086/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29086/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29086/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 May 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Testing Website Usability in Spanish-Speaking Academia through Heuristic Evaluation and Cognitive Walkthroughs</title>
		    <link>https://lib.jucs.org/article/29068/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(9): 1513-1528</p>
					<p>DOI: 10.3217/jucs-014-09-1513</p>
					<p>Authors: María González, Toni Granollers, Afra Pascual</p>
					<p>Abstract: Although usability evaluations have been focused on assessing different contexts of use, no proper specifications have been addressed towards the particular environment of academic websites in the Spanish-speaking context of use. Considering that this context involves hundreds of millions of potential users, the AIPO Association is running the UsabAIPO Project. The ultimate goal is to promote an adequate translation of international standards, methods and ideal values related to usability in order to adapt them to diverse Spanish-related contexts of use. This article presents the main statistical results coming from the Second and Third Stages of the UsabAIPO Project, where the UsabAIPO Heuristic method (based on Heuristic Evaluation techniques) and seven Cognitive Walkthroughs were performed over 69 university websites. The planning and execution of the UsabAIPO Heuristic method and the Cognitive Walkthroughs, the definition of two usability metrics, as well as the outline of the UsabAIPO Heuristic Management System prototype are also sketched.</p>
					<p><a href="https://lib.jucs.org/article/29068/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29068/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29068/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 1 May 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Compensation Models for Interactive Advertising</title>
		    <link>https://lib.jucs.org/article/28970/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(4): 557-565</p>
					<p>DOI: 10.3217/jucs-014-04-0557</p>
					<p>Authors: Astrid Dickinger, Steffen Zorn</p>
					<p>Abstract: Due to a shift in marketing focus from mass to micro markets, the importance of one-to-one communication in advertising has increased. Interactive media provide possible answers to this shift. However, missing standards in payment models for interactive media are a hurdle to further development. This paper reviews interactive advertising payment models. Furthermore, it adapts the popular FCB grid as a tool for advertisers as well as publishers and broadcasters to examine effective interactive payment models.</p>
					<p><a href="https://lib.jucs.org/article/28970/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28970/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28970/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Feb 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Combining Classifiers in the ROC-space for Off-line Signature Verification</title>
		    <link>https://lib.jucs.org/article/28941/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(2): 237-251</p>
					<p>DOI: 10.3217/jucs-014-02-0237</p>
					<p>Authors: Luiz Oliveira, Edson Justino, Robert Sabourin, Flávio Bortolozzi</p>
					<p>Abstract: In this work we present a strategy for off-line signature verification. It takes into account a writer-independent model which reduces the pattern recognition problem to a 2-class problem and hence makes it possible to build robust signature verification systems even when few signatures per writer are available. Receiver Operating Characteristic (ROC) curves are used to improve the performance of the proposed system. The contribution of this paper is two-fold. First, we analyze the impact of choosing different fusion strategies to combine the partial decisions yielded by the SVM classifiers. Then the ROCs produced by different classifiers are combined using maximum likelihood analysis, producing an ROC combined classifier. Through comprehensive experiments on a database of 100 writers, we demonstrate that the ROC combined classifier based on the writer-independent approach can considerably reduce the false rejection rate while keeping false acceptance rates at acceptable levels.</p>
					<p><a href="https://lib.jucs.org/article/28941/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28941/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28941/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Informatics for Historians: Tools for Medieval Document XML Markup, and their Impact on the History-Sciences</title>
		    <link>https://lib.jucs.org/article/28938/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(2): 193-210</p>
					<p>DOI: 10.3217/jucs-014-02-0193</p>
					<p>Authors: Benjamin Burkard, Georg Vogeler, Stefan Gruner</p>
					<p>Abstract: This article is a revised and extended version of [VBG, 07]. We argue that the digitization of historical text documents, as a basis for data mining and information retrieval, is urgently needed for progress in the history sciences. We present a novel, specialist XML tool-suite supporting the working historian in the transcription of original medieval charters into machine-readable form, and we also address some of the latest developments in the field since the publication of [VBG, 07].</p>
					<p><a href="https://lib.jucs.org/article/28938/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28938/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28938/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Machine Learning-Based Keywords Extraction for Scientific Literature</title>
		    <link>https://lib.jucs.org/article/28871/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(10): 1471-1483</p>
					<p>DOI: 10.3217/jucs-013-10-1471</p>
					<p>Authors: Chunguo Wu, Maurizio Marchese, Jingqing Jiang, Alexander Ivanyukovich, Yanchun Liang</p>
					<p>Abstract: With the currently growing interest in the Semantic Web, keyword/metadata extraction is coming to play an increasingly important role. Keyword extraction from documents is a complex task in natural language processing. Ideally this task calls for sophisticated semantic analysis. However, the complexity of the problem makes current semantic analysis techniques insufficient. Machine learning methods can support the initial phases of keyword extraction and can thus improve the input to further semantic analysis phases. In this paper we propose a machine learning-based keyword extraction method for a given document domain, namely scientific literature. More specifically, the least squares support vector machine is used as the machine learning method. The proposed method takes advantage of machine learning techniques and moves the complexity of the task to the process of learning from appropriate samples obtained within a domain. Preliminary experiments show that the proposed method is capable of extracting keywords from the domain of scientific literature with promising results.</p>
					<p><a href="https://lib.jucs.org/article/28871/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28871/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28871/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Oct 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Improved SVM Based on Similarity Metric</title>
		    <link>https://lib.jucs.org/article/28868/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(10): 1462-1470</p>
					<p>DOI: 10.3217/jucs-013-10-1462</p>
					<p>Authors: Chaoyong Wang, Yanfeng Sun, Yanchun Liang</p>
					<p>Abstract: A novel support vector machine method for classification is presented in this paper. A modified kernel function based on the similarity metric and the Riemannian metric is applied to the support vector machine. In general, it is believed that the similarity of homogeneous samples is higher than that of inhomogeneous samples. In Riemannian geometry, the Riemannian metric can be used to reflect the local property of a curve. In order to enlarge the similarity metric of homogeneous samples or reduce that of inhomogeneous samples in the feature space, the Riemannian metric is used in the kernel function of the SVM. Simulated experiments are performed on databases including an artificial dataset and real data from the UCI repository. Simulation results show the effectiveness of the proposed algorithm through comparison with four typical kernel functions without the similarity metric.</p>
					<p><a href="https://lib.jucs.org/article/28868/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28868/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28868/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Oct 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Computer Forensics System Based on Artificial Immune Systems</title>
		    <link>https://lib.jucs.org/article/28858/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(9): 1354-1365</p>
					<p>DOI: 10.3217/jucs-013-09-1354</p>
					<p>Authors: Jin Yang, Tao Li, Sunjun Liu, Tiefang Wang, Diangang Wang, Gang Liang</p>
					<p>Abstract: Current computer forensics approaches mainly focus on capturing network actions and analysing the evidence after attacks, which results in static methods. Inspired by the theory of artificial immune systems (AIS), a novel model of a computer forensics system is presented. The concepts and formal definitions of immune cells are given; dynamic evaluative equations for self, antigen, immune tolerance, the mature-lymphocyte lifecycle and immune memory are presented; and the hierarchical, distributed management framework of the proposed model is built. Furthermore, the idea of biological immunity is applied to enhance the self-adapting and self-learning ability needed to cope with continuously varying environments. The experimental results show that the proposed model has the features of real-time processing and self-adaptivity, thus providing a promising solution for computer forensics.</p>
					<p><a href="https://lib.jucs.org/article/28858/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28858/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28858/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Sep 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Ontology and Grammar of the SOPHIE Choreography Conceptual Framework - An Ontological Model for Knowledge Management</title>
		    <link>https://lib.jucs.org/article/28844/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(9): 1157-1183</p>
					<p>DOI: 10.3217/jucs-013-09-1157</p>
					<p>Authors: Sinuhé Arroyo</p>
					<p>Abstract: Ontologies have been recognized as a fundamental infrastructure for advanced approaches to Knowledge Management (KM) automation in SOA. The constituent services communicate with each other by exchanging self-contained messages. Depending on the specific requirements of the business model they serve and the application domain for which the services were deployed, a number of mismatches (i.e. in the sequence and cardinality of message exchanges, the structure and format of messages, and content semantics) can occur which prevent interoperation among a priori compatible services. Existing choreography technologies attempt to model such externally visible behavior. However, they lack the consistent semantic support required to fully meet the necessities of heterogeneous KM environments. This paper describes the ontology and grammar of SOPHIE, a semantic service-based choreography framework for overcoming conversational pattern mismatches in knowledge-intensive environments. The paper provides an overview of the framework and its main building blocks, so that a good understanding of the ontology and grammar that summarize the conceptual model is gained. This ontology allows the design and description of fully fledged choreographies that can be used, as a result of a mediation task, to produce the mediating structures that in fact allow dynamic service-to-service interoperation. Finally, a use case centred on the telecommunications field serves as a proof of concept of how SOPHIE is being applied.</p>
					<p><a href="https://lib.jucs.org/article/28844/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28844/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28844/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Sep 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Mashups: Emerging Application Development Paradigm for a Digital Journal</title>
		    <link>https://lib.jucs.org/article/28774/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(4): 531-542</p>
					<p>DOI: 10.3217/jucs-013-04-0531</p>
					<p>Authors: Narayanan Kulathuramaiyer</p>
					<p>Abstract: The WWW is currently experiencing revolutionary growth due to its increasing number of participative community software applications. This paper highlights an emerging application development paradigm on the WWW, called the mashup. As blogs have enabled anyone to become a publisher, mashups stimulate web development by allowing anyone to combine existing data to develop web applications. Current applications of mashups include the tracking of events such as crime, hurricanes and earthquakes, meta-search integration of data and media feeds, interactive games, and organizers for web resources. The implications of this emerging web integration and structuring paradigm remain to be explored fully. This paper describes mashups from a number of angles, highlighting current developments while providing sufficient illustrations to indicate their potential implications. It also highlights the role of mashups in complementing and enhancing digital journals by providing insights into the quality of academic content and the extent of coverage, and by enabling expanded services. We present pioneering initiatives for the Journal of Universal Computer Science in our efforts to harness the collective intelligence of a collaborative scholarly network.</p>
					<p><a href="https://lib.jucs.org/article/28774/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28774/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28774/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Apr 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Sustainable Memory System Using Global and Conical Spaces</title>
		    <link>https://lib.jucs.org/article/28730/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(2): 135-148</p>
					<p>DOI: 10.3217/jucs-013-02-0135</p>
					<p>Authors: Hidekazu Kubota, Satoshi Nomura, Yasuyuki Sumi, Toyoaki Nishida</p>
					<p>Abstract: We present a concept and implementation of computational support for spatial memory management and describe its temporal evolution. Our essential idea is to use an immersible globe that consists of a global space and a conical space, thereby providing arbitrary memory space for humans. We developed a sustainable knowledge globe (SKG) for constructing a memory space, and proposed a system called Contents Garden to expand the SKG into immersive space. We carried out a user study of the SKG to evaluate its effectiveness. We also proposed and discussed three perceptual operations in Contents Garden to improve the operability of the SKG.</p>
					<p><a href="https://lib.jucs.org/article/28730/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28730/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28730/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Feb 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Adaptive Hierarchical Extension of DSR: The Cluster Source Routing</title>
		    <link>https://lib.jucs.org/article/28722/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(1): 32-55</p>
					<p>DOI: 10.3217/jucs-013-01-0032</p>
					<p>Authors: Farid Jaddi, Béatrice Paillassa</p>
					<p>Abstract: Numerous studies have shown the difficulty for a single routing protocol to scale with respect to mobility and network size in wireless ad hoc networks. This paper presents a cluster-based extension of the DSR protocol called Cluster Source Routing (CSR). The proposed approach improves the scalability of DSR in high-density and low-mobility networks. The originality of our proposal is an adaptive use of the DSR and CSR routing modes according to network density and node mobility in order to produce less overhead and perform efficient routing. Indeed, adaptation is a key feature for a routing protocol since network dynamics can suddenly and widely change in wireless ad hoc networks. Thus, the DSR-CSR protocol achieves enhanced performance over a broader {network density, node mobility} domain as shown by simulations.</p>
					<p><a href="https://lib.jucs.org/article/28722/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28722/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28722/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Jan 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Mechanism for Solving Conflicts in Ambient Intelligent Environments</title>
		    <link>https://lib.jucs.org/article/28584/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(3): 284-296</p>
					<p>DOI: 10.3217/jucs-012-03-0284</p>
					<p>Authors: Pablo Haya, Germán Montoro, Abraham Esquivel, Manuel García-Herranz, Xavier Alamán</p>
					<p>Abstract: Ambient Intelligence scenarios describe situations in which a multitude of devices and agents live together. In this kind of scenario, conflicts frequently appear when modifying the state of a device such as a lamp. These problems are not so much about the sharing of resources as about conflicting orders coming from different agents. This coexistence must also deal with the desire of the different users for privacy over their personal information, such as where they are, what their preferences are, or to whom this information should be available. When facing incompatible orders over the state of a device, it becomes necessary to make a decision. In this paper we propose a centralised mechanism based on prioritized FIFO queues to decide the order in which control of a device is granted. The priority of the commands is calculated following a policy that considers issues such as the commander's role, the command's type, the context's state, and the commander-context and commander-resource relations. Finally, we propose a set of particular policies for those resources that do not fit the general policy. In addition, we present a model intended to integrate privacy by limiting and protecting contextual information.</p>
					<p><a href="https://lib.jucs.org/article/28584/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28584/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28584/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Mar 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>The Transformation of the Web: How Emerging Communities Shape the Information we Consume</title>
		    <link>https://lib.jucs.org/article/28574/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(2): 187-213</p>
					<p>DOI: 10.3217/jucs-012-02-0187</p>
					<p>Authors: Josef Kolbitsch, Hermann Maurer</p>
					<p>Abstract: To date, one of the main aims of the World Wide Web has been to provide users with information. In addition to private homepages, large professional information providers, including news services, companies, and other organisations have set up web-sites. With the development and advance of recent technologies such as wikis, blogs, podcasting and file sharing this model is challenged and community-driven services are gaining influence rapidly. These new paradigms obliterate the clear distinction between information providers and consumers. The lines between producers and consumers are blurred even more by services such as Wikipedia, where every reader can become an author, instantly.  This paper presents an overview of a broad selection of current technologies and services: blogs, wikis including Wikipedia and Wikinews, social networks such as Friendster and Orkut as well as related social services like del.icio.us, file sharing tools such as Flickr, and podcasting. These services enable user participation on the Web and manage to recruit a large number of users as authors of new content. It is argued that the transformations the Web is subject to are not driven by new technologies but by a fundamental mind shift that encourages individuals to take part in developing new structures and content. The evolving services and technologies encourage ordinary users to make their knowledge explicit and help a collective intelligence to develop.</p>
					<p><a href="https://lib.jucs.org/article/28574/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28574/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28574/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Feb 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Semantic Web Technologies Applied to e-learning Personalization in &lt;e-aula&gt;</title>
		    <link>https://lib.jucs.org/article/28468/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 11(9): 1470-1481</p>
					<p>DOI: 10.3217/jucs-011-09-1470</p>
					<p>Authors: Pilar Sancho, Iván Martínez, Baltasar Fernández-Manjón</p>
					<p>Abstract: Despite the increasing importance gained by e-learning standards in the past few years, and the unquestionable goals reached (mainly regarding interoperability among e-learning contents), current e-learning standards are not yet sufficiently aware of the context of the learner. This means that only limited support for adaptation to individual characteristics is currently provided. In this article, we propose the use of semantic metadata for Learning Object (LO) contextualization in order to adapt instruction to the learner's cognitive requirements in three different ways: background knowledge, knowledge objectives and the most suitable learning style. In our pilot e-learning platform (&lt;e-aula&gt;) the context for LOs is addressed in two different ways: knowledge domain and instructional design. We propose the use of ontologies as the knowledge representation mechanism to allow the delivery of learning material that is relevant to the current situation of the learner.</p>
					<p><a href="https://lib.jucs.org/article/28468/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28468/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28468/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2005 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>On Complexity of Collective Communications on a Fat Cube Topology</title>
		    <link>https://lib.jucs.org/article/28422/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 11(6): 944-961</p>
					<p>DOI: 10.3217/jucs-011-06-0944</p>
					<p>Authors: Vladimir Kutálek, Václav Dvořák</p>
					<p>Abstract: Recent renewed interest in the hypercube interconnection network has concentrated on its more scalable version known as a fat cube. The paper introduces several router models for fat nodes and uses them for a cost comparison of the hypercube and fat cube topologies. An analysis of the time complexity of collective communications follows, and lower bounds on the number of communication steps are derived. Examples of particular communication algorithms on the 2D-fat cube topology with 8 processors are summarized and described in detail. The study shows that a large variety of fat cubes can provide much-desired flexibility, trading cost for performance and manufacturability.</p>
					<p><a href="https://lib.jucs.org/article/28422/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28422/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28422/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jun 2005 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Gossip Codes for Fingerprinting: Construction, Erasure Analysis and Pirate Tracing</title>
		    <link>https://lib.jucs.org/article/28344/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 11(1): 122-149</p>
					<p>DOI: 10.3217/jucs-011-01-0122</p>
					<p>Authors: Ravi Veerubhotla, Ashutosh Saxena, V. Gulati, A. Pujari</p>
					<p>Abstract: This work presents two new construction techniques for q-ary Gossip codes from t-designs and traceability schemes. These Gossip codes achieve the shortest code length specified in terms of code parameters and can withstand erasures in digital fingerprinting applications. This work also presents the construction of embedded Gossip codes for extending an existing Gossip code into a bigger code, and discusses the construction of concatenated codes and the realisation of the erasure model through concatenated codes.</p>
					<p><a href="https://lib.jucs.org/article/28344/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28344/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28344/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jan 2005 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Software/Hardware Co-Design of Efficient and Secure Cryptographic Hardware</title>
		    <link>https://lib.jucs.org/article/28339/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 11(1): 66-82</p>
					<p>DOI: 10.3217/jucs-011-01-0066</p>
					<p>Authors: Nadia Nedjah, Luiza Mourelle</p>
					<p>Abstract: Most cryptography systems are based on modular exponentiation to perform the non-linear scrambling of data. It is performed using successive modular multiplications, which are time-consuming for large operands. Accelerating cryptography requires optimising the time consumed by a single modular multiplication and/or reducing the total number of modular multiplications performed. Using a genetic algorithm, we first yield the minimal sequence of powers, generally called an addition chain, that needs to be computed to finally obtain the modular exponentiation result. Then, we exploit the co-design methodology to engineer a cryptographic device that accelerates the encryption/decryption throughput without requiring considerable hardware area. Moreover, the resulting cryptographic hardware is completely secure against known attacks.</p>
					<p><a href="https://lib.jucs.org/article/28339/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28339/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28339/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jan 2005 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Galois Lattice Theory for Probabilistic Visual Landmarks</title>
		    <link>https://lib.jucs.org/article/28280/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 10(8): 1014-1033</p>
					<p>DOI: 10.3217/jucs-010-08-1014</p>
					<p>Authors: Emmanuel Zenou, Manuel Samuelides</p>
					<p>Abstract: This paper presents an original application of the Galois lattice theory, the visual landmark selection for topological localization of an autonomous mobile robot, equipped with a color camera. First, visual landmarks have to be selected in order to characterize a structural environment. Second, such landmarks have to be detected and updated for localization. These landmarks are combinations of attributes, and the selection process is done through a Galois lattice. This paper exposes the landmark selection process and focuses on probabilistic landmarks, which give the robot thorough information on how to locate itself. As a result, landmarks are no longer binary, but probabilistic. The full process of using such landmarks is described in this paper and validated through a robotics experiment.</p>
					<p><a href="https://lib.jucs.org/article/28280/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28280/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28280/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Aug 2004 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Identifying Employee Competencies in Dynamic Work Domains: Methodological Considerations and a Case Study</title>
		    <link>https://lib.jucs.org/article/28159/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 9(12): 1500-1518</p>
					<p>DOI: 10.3217/jucs-009-12-1500</p>
					<p>Authors: Tobias Ley, Dietrich Albert</p>
					<p>Abstract: We present a formalisation for employee competencies which is based on a psychological framework separating the overt behavioural level from the underlying competence level. On the competence level, employees draw on action potentials (knowledge, skills and abilities) which in a given situation produce performance outcomes on the behavioural level. Our conception is based on the competence performance approach by [Korossy 1997] and [Korossy 1999] which uses mathematical structures to establish prerequisite relations on the competence and the performance level. From this framework, a methodology for assessing competencies in dynamic work domains is developed which utilises documents employees have created to assess the competencies they have been acquiring. By means of a case study, we show how the methodology and the resulting structures can be validated in an organisational setting. From the resulting structures, employee competency profiles can be derived and development planning can be supported. The structures also provide the means for making inferences within the competency assessment process which in turn facilitates continuous updating of competency profiles and maintenance of the structures.</p>
					<p><a href="https://lib.jucs.org/article/28159/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28159/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28159/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Dec 2003 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Pruning-based Identification of Domain Ontologies</title>
		    <link>https://lib.jucs.org/article/28032/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 9(6): 520-529</p>
					<p>DOI: 10.3217/jucs-009-06-0520</p>
					<p>Authors: Raphael Volz, Rudi Studer, Alexander Maedche, Boris Lauser</p>
					<p>Abstract: We present a novel approach to extracting a domain ontology from large-scale thesauri. Concepts are identified as relevant for a domain based on their frequent occurrence in domain texts. The approach makes it possible to bootstrap the ontology engineering process from given legacy thesauri and identifies an initial domain ontology that may easily be refined by experts at a later stage. We present a thorough evaluation of the results obtained in building a biosecurity ontology for the UN FAO AOS project.</p>
					<p><a href="https://lib.jucs.org/article/28032/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28032/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28032/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Jun 2003 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Cross-Disciplinary Bibliography on Visual Languages for Information Sharing and Archiving</title>
		    <link>https://lib.jucs.org/article/28017/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 9(4): 368-396</p>
					<p>DOI: 10.3217/jucs-009-04-0369</p>
					<p>Authors: Daniela Camhy, Robert Stubenrauch</p>
					<p>Abstract: This bibliography offers citations for readers interested in learning more about visual languages and new ways of communicating and archiving information, with emphasis on novel technologies and theoretical work in these multidisciplinary areas. The bibliography is conceived in its broadest sense and covers research in the humanities and social sciences as well as computer technology. Far from being exhaustive, it nevertheless covers essential resources in a selective way, so that the material can provide starting points in many different directions. References to visual programming languages are not included.</p>
					<p><a href="https://lib.jucs.org/article/28017/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28017/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28017/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Apr 2003 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Applications of MIRACLE: Working With Dynamic Visual Information</title>
		    <link>https://lib.jucs.org/article/28016/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 9(4): 349-367</p>
					<p>DOI: 10.3217/jucs-009-04-0349</p>
					<p>Authors: Robert Stubenrauch, Daniela Camhy, Jennifer Lennon, Hermann Maurer</p>
					<p>Abstract: Systems supporting new forms of communication and archiving of dynamic visual information have a range of potential applications, some of which are described in this paper on a conceptual basis. We present a visual language for dynamic (historic) maps, applications of pictorial lexicons, concepts for interactive support systems for assembly and repair, and a platform for abstract movies.</p>
					<p><a href="https://lib.jucs.org/article/28016/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28016/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28016/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Apr 2003 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Foundations of MIRACLE: Multimedia Information Repository, A Computer-supported Language Effort</title>
		    <link>https://lib.jucs.org/article/28015/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 9(4): 309-348</p>
					<p>DOI: 10.3217/jucs-009-04-0309</p>
					<p>Authors: Hermann Maurer, Robert Stubenrauch, Daniela Camhy</p>
					<p>Abstract: Research in neurosciences, cognitive psychology and media sciences indicates that "visual thinking" carries a potential of the human mind that is generally still neglected today but could heavily be fostered by novel types of communicating and archiving information. Computer technology (information systems, telecommunication and visual tools) in turn promises to provide a wide range of highly effective tools to support visual, dynamic communication. MIRACLE treads new paths to address a crucial issue: In what way and to what extent can and should current and future systems support new ways of communicating and archiving information using dynamic, visual information? This paper gives a survey of the numerous attempts that have been made so far to overcome language barriers by introducing artificial languages (both on a spoken/text and on a visual basis). It also analyzes the general status of technology (computer hardware and software) to support such efforts, as well as a number of specific projects. From this overview we draw the conclusion that computer-based systems designed to support communicating and archiving dynamic visual information should focus on the following features: (i) support dynamic language elements on a structural level in addition to traditional animated icons; (ii) incorporate gestural language elements (inspired by natural sign languages), anticipating future ways of human-computer interaction; and (iii) allow evolutionary development of the language in a group-dynamic and interactive process involving large international groups of participants. In a final section we give a brief outline of the cluster of specific projects carried out under the heading of MIRACLE.</p>
					<p><a href="https://lib.jucs.org/article/28015/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28015/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28015/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Apr 2003 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>XML and MPEG-7 for Interactive Annotation and Retrieval using Semantic Meta-data</title>
		    <link>https://lib.jucs.org/article/27916/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 8(10): 965-984</p>
					<p>DOI: 10.3217/jucs-008-10-0965</p>
					<p>Authors: Mathias Lux, Werner Klieber, Jutta Becker, Klaus Tochtermann, Harald Mayer, Helmut Neuschmied, Werner Haas</p>
					<p>Abstract: The evolution of the Web is accompanied not only by an increasing diversity of multimedia but also by new requirements for intelligent search capabilities, user-specific assistance, intuitive user interfaces and platform-independent information presentation. To meet these and further upcoming requirements, new standardized Web technologies and XML-based description languages are used. The Web information space has transformed into a knowledge marketplace in which participants located worldwide take part in the creation, annotation and consumption of knowledge. This paper describes the design of semantic retrieval frameworks and a prototype implementation for audio and video annotation, storage and retrieval using the MPEG-7 standard and semantic web reference implementations. MPEG-7 plays an important role in the standardized enrichment of multimedia with semantics at higher abstraction levels and a related improvement of query results.</p>
					<p><a href="https://lib.jucs.org/article/27916/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/27916/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/27916/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Oct 2002 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Extracting and Visualizing Knowledge from Film and Video Archives</title>
		    <link>https://lib.jucs.org/article/27887/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 8(6): 602-612</p>
					<p>DOI: 10.3217/jucs-008-06-0602</p>
					<p>Authors: Howard Wactlar</p>
					<p>Abstract: Vast collections of video and audio recordings which have captured events of the last century remain a largely untapped resource of historical and scientific value. The Informedia Digital Video Library has pioneered techniques for automated video and audio indexing, navigation, visualization, search and retrieval and embedded them in a system for use in education and information mining. In recent work we introduce new paradigms for knowledge discovery by aggregating and integrating video content on-demand to enable summarization and visualization in response to queries in a useful broader context, starting with historic and geographic perspectives.</p>
					<p><a href="https://lib.jucs.org/article/27887/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/27887/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/27887/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2002 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Graphics Content in Digital Libraries: Old Problems, Recent Solutions, Future Demands</title>
		    <link>https://lib.jucs.org/article/27789/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 7(5): 400-409</p>
					<p>DOI: 10.3217/jucs-007-05-0400</p>
					<p>Authors: Dieter Fellner</p>
					<p>Abstract: Working with the ubiquitous Web, we immediately realize its limitations when it comes to the delivery or exchange of non-textual, particularly graphical, information. Graphical information is still predominantly represented by raster images, either in a fairly low resolution to warrant acceptable transmission times or in high resolutions to please the reader's perception, thereby challenging his or her patience (as these large data sets take their time to travel over congested internet highways). Comparing the current situation with efforts and developments of the past, e.g. the Videotex systems developed in the period from 1977 to 1985, we see that a proper integration of graphics from the very beginning has, once again, been overlooked. The situation is even worse going from two-dimensional images to three-dimensional models or scenes. VRML, originally designed to address this very demand, has failed to establish itself as a reliable tool in the time window given, and recent advances in graphics technology as well as digital library technology demand new approaches which VRML, at least in its current form, won't be able to deliver. After summarizing the situation for 2D graphics in digital documents and digital libraries, this paper concentrates on the 3D graphics aspects of recent digital library developments and tries to identify the future challenges the community needs to master.</p>
					<p><a href="https://lib.jucs.org/article/27789/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/27789/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/27789/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 May 2001 00:00:00 +0000</pubDate>
		</item>
	
	</channel>
</rss>