
<rss version="0.91">
    <channel>
        <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
        <description>Latest 100 Articles from JUCS - Journal of Universal Computer Science</description>
        <link>https://lib.jucs.org/</link>
        <lastBuildDate>Sat, 14 Mar 2026 02:43:09 +0000</lastBuildDate>
        <generator>Pensoft FeedCreator</generator>
        <image>
            <url>https://lib.jucs.org/i/logo.jpg</url>
            <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
            <link>https://lib.jucs.org/</link>
            <description><![CDATA[Feed provided by https://lib.jucs.org/. Click to visit.]]></description>
        </image>
	
		<item>
		    <title>A Blockchain-Enabled Framework for Controlled Access to Cluster Resources</title>
		    <link>https://lib.jucs.org/article/141277/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 32(2): 241-266</p>
					<p>DOI: 10.3897/jucs.141277</p>
					<p>Authors: Kausthav Pratim Kalita, Debojit Boro, Dhruba Kumar Bhattacharyya</p>
					<p>Abstract: Purpose: Big data applications enable organizations to derive actionable insights that inform strategic decision making and enhance operational efficiency in real time. Hadoop&rsquo;s architecture features a distributed file system that stores voluminous data across multiple machines within a cluster. However, the management and control of access to this data can be viewed as centralized, as Hadoop relies on a central coordination system to manage tasks and resources across the cluster. Design / methodology / approach: To address the limitation in Hadoop, this paper proposes integrating blockchain technology to establish strict authentication procedures through smart contracts, enabling controlled access to the Hadoop platform. The proposed platform allows organizations to access Hadoop clusters through participation in a blockchain network, enabling efficient data storage mechanisms and model training capabilities. Findings: The performance of this integrated system is evaluated through simulations leveraging Ethereum-based smart contracts. The findings suggest that implementing appropriate indexing mechanisms and hashing techniques can enable sufficient access control, thereby facilitating controlled access to Hadoop clusters. The paper presents the simulation results in terms of execution cost and execution time. Originality/value: This paper addresses the identified need for a transparent and reliable access control system that leverages blockchain&rsquo;s smart contracts to enable controlled and restricted access to Hadoop clusters.</p>
					<p><a href="https://lib.jucs.org/article/141277/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/141277/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Feb 2026 16:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>OntoKratos: An Ontology for Problematic Smartphone Use Identification and Intervention Suggestion</title>
		    <link>https://lib.jucs.org/article/147898/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 32(2): 155-180</p>
					<p>DOI: 10.3897/jucs.147898</p>
					<p>Authors: Gustavo Lazarotto Schroeder, Wesllei Felipe Heckler, Rosemary Francisco, Jorge Luis Victória Barbosa</p>
					<p>Abstract: Smartphone use has increased globally and has become essential in daily life. Although benefits exist, concerns arise about the negative effects of prolonged hyperconnectivity. The excessive use of smartphones combined with demographic and mental health-related risk factors can lead to problematic smartphone use (PSU). PSU is characterized as the compulsive use of smartphones that disrupts an individual&rsquo;s daily life, work, and relationships. Considering this scenario, the present paper proposes OntoKratos as an ontology designed to detect and prevent PSU. The ontology enables inferences, such as determining the individual&rsquo;s mental health and PSU state, inferring context information, identifying PSU demographic and emotional risk factors, and suggesting interventions. OntoKratos includes 89 classes, 43 object properties, 35 data properties, and 1,113 axioms. Evaluations performed through a simulated dataset demonstrated the ontology&rsquo;s effectiveness regarding PSU identification and interventions for PSU behaviors. The ontology&rsquo;s rules allowed the definition of accurate axioms, improving the correct classification and inference of eight instantiated individuals. This study presents the first ontology for PSU identification and intervention suggestions on PSU behaviors. OntoKratos makes it possible to identify and assist individuals by considering mental health and PSU status, inferring potential PSU risk factors, and providing tailored intervention suggestions to cope with PSU.</p>
					<p><a href="https://lib.jucs.org/article/147898/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/147898/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Feb 2026 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>RatKit: A Novel Methodology for Verifying, Validating, and Testing Agent-Based Simulations: the Boids Case</title>
		    <link>https://lib.jucs.org/article/148927/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 32(1): 133-152</p>
					<p>DOI: 10.3897/jucs.148927</p>
					<p>Authors: İbrahim Çakırlar, Sevcan Emek, Şebnem Bora, Oğuz Dikenelli</p>
					<p>Abstract: This study introduces a novel methodology and framework for the verification, validation, and testing of agent-based simulation models: RatKit. Building on repeatable automated testing in ABMS, the present contribution significantly extends the foundation by proposing an integrated metamodel and systematic development methodology that embeds these activities throughout the simulation lifecycle. The RatKit methodology is both general, in that it applies to a wide range of agent-based simulation models using a well-defined metamodel, and comprehensive, in that it addresses the macro-level (societal), the meso-level (interaction) and the micro-level (agent) aspects of simulations. It also provides a generic infrastructure to support various VV&amp;T techniques. RatKit is designed as a general VV&amp;T framework for all ABM frameworks. The methodology comes with a dedicated framework, implemented using the Repast ABM development framework. RatKit is demonstrated through a detailed case study of the Boids model, where the dynamics of alignment, cohesion, and separation are examined. Results from the case study show that a test-driven approach can enhance model reliability and ensure that individual agent behaviors coalesce into realistic emergent phenomena. Experiences and feedback obtained during the case studies show that developing ABM with a test-driven method based on VV&amp;T facilitates the creation of desired models.</p>
					<p><a href="https://lib.jucs.org/article/148927/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/148927/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jan 2026 16:00:07 +0000</pubDate>
		</item>
	
		<item>
		    <title>Development of Reliable Access Control Mechanisms Using Artificial Intelligence for Corporate Data Protection</title>
		    <link>https://lib.jucs.org/article/153217/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(14): 1690-1716</p>
					<p>DOI: 10.3897/jucs.153217</p>
					<p>Authors: Przemyslaw Jatkiewicz</p>
					<p>Abstract: The study aims to analyse the vulnerabilities of traditional access control methods and define optimization objectives, constraints, and decision-making processes based on data for the effective implementation of artificial intelligence to enhance corporate data protection. The research methodology addressed various approaches, including machine learning, user behaviour analysis and neural networks, and data protection methods such as anonymisation, encryption and federated learning. Traditional access control methods, such as passwords, biometrics and multi-factor authentication, were discussed, as well as their shortcomings, including vulnerability to data breaches, phishing attacks and infrastructure threats. The use of artificial intelligence to strengthen access control mechanisms, such as machine learning, user behaviour analysis and neural networks, was emphasised. Artificial intelligence significantly improves security by enabling the analysis and processing of large amounts of data, detecting anomalies and predicting threats based on the analysis of user behaviour and biometric data. The study also examined methods of protecting data used to train artificial intelligence, including anonymisation, differential privacy, encryption and federated learning. Privacy issues, the risks of data leakage when using artificial intelligence, and the need to comply with ethical norms and standards were addressed. The successful integration of AI-oriented solutions into corporate security systems in various industries, including the financial sector, healthcare, and retail, is presented. Evaluating the effectiveness of artificial intelligence in access control systems is based on indicators such as the speed of the system&rsquo;s response to changes in user behaviour, the number of false positives and successfully prevented incidents.
The study also developed recommendations for improving access control mechanisms using artificial intelligence, including the introduction of machine learning-based systems to detect anomalies in user behaviour, and the integration of AI with multi-factor authentication to create flexible and reliable data protection mechanisms.</p>
					<p><a href="https://lib.jucs.org/article/153217/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/153217/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Dec 2025 08:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>The 5 W’s of Zero-Knowledge Proof Development</title>
		    <link>https://lib.jucs.org/article/133397/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(14): 1607-1635</p>
					<p>DOI: 10.3897/jucs.133397</p>
					<p>Authors: Nadia van Niekerk, Brink van der Merwe, Louwrens Labuschagne</p>
					<p>Abstract: In the rapidly evolving realm of blockchain technology, the pursuit of enhanced privacy, security, and scalability has propelled the exploration of cryptographic innovations. Zero-Knowledge Proofs (ZKPs) have emerged as a pivotal solution, addressing diverse challenges across decentralized applications and cryptographic systems. However, the intricate mathematical foundations of ZKPs can pose a barrier to widespread adoption. To bridge this gap, a spectrum of ZKP tools has been developed, abstracting mathematical complexities and enabling developers with varying levels of expertise to incorporate ZKPs into their projects. The exploration of the 5 W&rsquo;s &ndash; Who, What, When, Where, and Why &ndash; guides developers in selecting ZKP tools aligned with their specific needs and understanding. This paper serves as a vital resource for developers entering the dynamic landscape of ZKP development. By answering crucial questions and providing nuanced insights into ZKP tools, it empowers developers to navigate this intricate domain effectively. As ZKP technology continues to evolve, our findings contribute to the ongoing dialogue surrounding its implementation, utilization and the ever-adapting toolkit shaping the future of cryptographic innovation. This paper employs a Mining Software Repositories (MSR) approach to unravel insights from the expansive landscape of ZKP development. By delving into GitHub repositories, we categorize author archetypes, discuss ZKP proof constructions, identify phases of tool development, explore the level of understanding required and examine the correlation between tool types and application purposes. Through a metrics-driven analysis, we unveil patterns in tool popularity, development trends, and historical perspectives, offering a comprehensive understanding of the ZKP tooling ecosystem.</p>
					<p><a href="https://lib.jucs.org/article/133397/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/133397/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Dec 2025 08:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Metaprogramming in Cyan</title>
		    <link>https://lib.jucs.org/article/141599/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(13): 1513-1537</p>
					<p>DOI: 10.3897/jucs.141599</p>
					<p>Authors: José de Oliveira Guimarães</p>
					<p>Abstract: Certain languages allow a metaprogram to act as a compiler plugin and thus alter the compilation process. The metaprogram interacts with low-level details of the compiler, making its construction difficult and potentially leading to errors. Different parts of the metaprogram may have conflicting interactions, thus producing unintended outcomes. This article introduces metaprogramming in the prototype-based object-oriented language Cyan. This language provides the same core functionality as other metaprogramming systems while introducing features that improve interactions between the compiler and different components of the metaprogram. Furthermore, Cyan incorporates security measures designed to circumvent typical issues encountered in metaprogramming.</p>
					<p><a href="https://lib.jucs.org/article/141599/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/141599/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Nov 2025 14:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>DeepV-Net: A Deep Learning Technique for Multimodal Biometric Authentication Using EEG Signals and Handwritten Signatures</title>
		    <link>https://lib.jucs.org/article/150681/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(11): 1196-1221</p>
					<p>DOI: 10.3897/jucs.150681</p>
					<p>Authors: Ashish Ranjan Mishra, Rakesh Kumar, Rajkumar Saini</p>
					<p>Abstract: Ensuring secure and reliable person authentication is a critical challenge in modern security systems. Traditional biometric systems relying on physiological traits like fingerprints, iris, and facial recognition often suffer from spoofing vulnerabilities. In contrast, electroencephalogram (EEG) signals, characterized by unique temporal and cognitive patterns, provide a robust authentication mechanism. This paper introduces DeepV-Net, a multimodal fully convolutional neural network that leverages both EEG signals and dynamic handwritten signature data acquired from Wacom devices. The proposed model integrates spatial and temporal features of EEG signals with distinctive movement-based signature patterns through an end-to-end multimodal fusion strategy. Experimental evaluations on benchmark datasets demonstrate that DeepV-Net outperforms unimodal approaches and state-of-the-art authentication methods, achieving a training accuracy of 99.1% and a validation accuracy of 93.3%. These findings highlight the complementary nature of EEG and signature modalities, paving the way for more secure and efficient biometric authentication systems.</p>
					<p><a href="https://lib.jucs.org/article/150681/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/150681/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Sep 2025 10:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>Explanatory Data Science in Technology Applications</title>
		    <link>https://lib.jucs.org/article/164654/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(9): 873-876</p>
					<p>DOI: 10.3897/jucs.164654</p>
					<p>Authors: Wolfram Luther, A. J. Han Vinck</p>
					<p>Abstract: This volume presents a selection of conference papers from the 4th Workshop on Collaborative Technologies and Data Science in Smart City Applications (CODASSCA 2024): Data Science and Reliable Machine Learning, held in Yerevan, Armenia, October 3-6, 2024, https://codassca2024.aua.am/. The special issue&rsquo;s guest editors invited five groups of authors from Armenia, Chile, Germany, the UK, and the USA to submit enlarged versions of their CODASSCA 2024 papers. There was also a J.UCS open call so that any author could submit papers on the highlighted subjects. The invitation to review the 16 contributions received was accepted by 16 experts, and, after three rounds, seven articles were finally accepted for publication in the special issue.</p>
					<p><a href="https://lib.jucs.org/article/164654/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/164654/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Editorial</category>
		    <pubDate>Thu, 14 Aug 2025 16:00:01 +0000</pubDate>
		</item>
	
		<item>
		    <title>Binary Tree Blockchain of Decomposed Transactions</title>
		    <link>https://lib.jucs.org/article/135666/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(8): 851-872</p>
					<p>DOI: 10.3897/jucs.135666</p>
					<p>Authors: Davut Çulha</p>
					<p>Abstract: Widespread adoption of blockchain technologies requires scalability. To achieve scalability, various methods are applied, including new consensus algorithms, directed acyclic graph solutions, sharding solutions, and off-chain solutions. Sharding solutions are particularly promising as they distribute workload across different parts of the blockchain network. Similarly, directed acyclic graphs use graph data structures to distribute workload effectively. In this work, a binary tree data structure is used to enhance blockchain scalability. Binary trees offer several advantages, such as the ability to address nodes with binary numbers, providing a straightforward and efficient method for identifying and locating nodes. Each node in the tree contains a block of transactions, which allows for transactions to be directed to specific paths within the tree. This directionality not only increases scalability by enabling parallel processing of transactions but also ensures that the blockchain can handle a higher volume of transactions without becoming congested. Moreover, transactions are decomposed into transaction elements, improving the immutability of the binary tree blockchain. This novel decomposition process helps to minimize the computational overhead required for calculating account balances, making the system more efficient. By breaking down transactions into their fundamental components, the system can process and verify transactions more rapidly and accurately. This approach effectively realizes implicit sharding using a binary tree structure, distributing the processing load more evenly and reducing bottlenecks. The proposed method is simulated to assess its performance. Experimental results demonstrate the method's scalability, showing that it can handle a significantly higher transaction throughput compared to traditional blockchain structures.</p>
					<p><a href="https://lib.jucs.org/article/135666/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/135666/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jul 2025 08:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>CIAS: Catalog of Interoperability Architectural Solutions for Software Systems</title>
		    <link>https://lib.jucs.org/article/129692/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(7): 713-734</p>
					<p>DOI: 10.3897/jucs.129692</p>
					<p>Authors: Pedro Henrique Dias Valle, Elisa Yumi Nakagawa</p>
					<p>Abstract: Context: Software systems have become increasingly large and complex, and are required in several critical domains, including Industry 4.0, the military, smart cities, and transportation. Consequently, the architectural design of these systems becomes considerably complicated, in addition to requiring interoperability among the diverse systems that compose them. Problem: Although many interoperability architectural solutions exist, software architects have struggled to comprehend, analyze, and select the most suitable ones to solve their problems. Objective: This work provides a catalog of the main interoperability architectural solutions (patterns, styles, tactics, and approaches) for addressing the four levels of interoperability (namely, technical, syntactic, semantic, and organizational) to resolve interoperability issues in software systems. Method: 65 studies, systematically identified in the scientific literature, were examined in depth and provided evidence to define our catalog, which comprises interoperability issues and architectural solutions to address these problems. Results: As a contribution, this catalog could help software architects better decide which architectural solutions could solve each interoperability issue in their integration projects.</p>
					<p><a href="https://lib.jucs.org/article/129692/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/129692/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Jun 2025 09:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>A tool-supported approach to integrate cognitive indicators into the Visual Studio Code</title>
		    <link>https://lib.jucs.org/article/124812/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(7): 683-712</p>
					<p>DOI: 10.3897/jucs.124812</p>
					<p>Authors: Roger Vieira, Kleinner Farias</p>
					<p>Abstract: Wearable devices capable of capturing psychophysiological data have emerged as a tangible reality. Recent academic investigations emphasize the pivotal role of developers&rsquo; cognitive indicators, such as attention levels and cognitive load, in influencing their effectiveness in understanding and managing code-related tasks. However, existing Integrated Development Environments (IDEs) and code editors, such as Visual Studio (VS) Code, lack comprehensive contextual information on cognitive indicators alongside source code. This article, therefore, introduces CognIDE, a novel tool-supported methodology aimed at seamlessly integrating psychophysiological data linked to cognitive indicators into VS Code. Addressing this crucial gap, CognIDE enriches VS Code by offering actionable contextual cues alongside dynamic source code. The evaluation of CognIDE, involving a survey with six industry professionals and in-depth interviews, examined its perceived utility, ease of use, and real-world applicability. Encouragingly, professionals demonstrated high acceptance, indicating CognIDE&rsquo;s potential to identify and prioritize code segments with specific cognitive indicators, notably related to bugs or code comprehension issues. This underscores CognIDE&rsquo;s promise in improving code review processes.</p>
					<p><a href="https://lib.jucs.org/article/124812/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/124812/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Jun 2025 09:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Test case prioritization based on human knowledge</title>
		    <link>https://lib.jucs.org/article/127870/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(6): 552-571</p>
					<p>DOI: 10.3897/jucs.127870</p>
					<p>Authors: Ícaro Prado Fernandes, Luiz Eduardo Galvão Martins</p>
					<p>Abstract: Building quality software, that is, software that is suitable for use and meets user needs, is one of the biggest challenges in the software industry. Although it is possible to guarantee the proper functioning of software through testing activities, such activities can never be exhaustive, as it is impossible to test all inputs of a minimally complex program. This work proposes a method to prioritize test cases based on human knowledge using a combination of factors evaluated in an assessment answered by 29 software industry professionals and 5 academics. The assessment confirmed that the proposed factors are relevant. Finally, a practical example that prioritizes test cases for a banking application was carried out and it was observed that the proposed method works properly.</p>
					<p><a href="https://lib.jucs.org/article/127870/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/127870/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 May 2025 10:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Enhancing Knowledge Graph Construction with Automated Source Evaluation Using Large Language Models</title>
		    <link>https://lib.jucs.org/article/137103/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(5): 519-549</p>
					<p>DOI: 10.3897/jucs.137103</p>
					<p>Authors: Hendrik Hendrik, Silmi Fauziati, Adhistya Erna Permanasari</p>
					<p>Abstract: Knowledge graphs are a powerful way to represent and organize complex knowledge. Used in many fields, such as healthcare and finance, they enable more insightful decision-making and discoveries. However, the quality of knowledge graphs depends heavily on their sources, and current methods for evaluating these sources are often slow, not scalable, and unable to keep up with the large amount of online information. To address this problem, we created a new tool that uses Large Language Models (LLMs) to assess online sources quickly. It evaluates websites based on credibility, relevance, content quality, coverage, comprehensiveness, and accessibility. We tested our tool on Halal tourism websites in Japan. We compared LLM evaluations with human expert judgments. Our comprehensive analysis revealed that certain LLM models, particularly GPT-3.5-turbo, GPT-4, and Mixtral-8x7B-Instruct-v0.1, showed strong correlation with human evaluations. Using a temperature setting of 0.4, these models demonstrated consistent and reliable performance across multiple evaluation runs. Our structured evaluation framework, incorporating weighted criteria validated through both expert input and statistical analysis, provides a robust foundation for automated source assessment. While some models showed varying performance across different criteria, our findings suggest that careful model selection and potential ensemble approaches could optimize evaluation accuracy. Our work contributes significantly to improving knowledge graph construction by demonstrating the viability of LLM-based source evaluation, while also identifying key areas for future research in scalability, cross-domain validation, and automated optimization.</p>
					<p><a href="https://lib.jucs.org/article/137103/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/137103/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Apr 2025 08:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>Identification of Fault Prone Components in Multimedia Software based on Optimal Threshold Values decided using Genetic Algorithm</title>
		    <link>https://lib.jucs.org/article/129859/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(5): 469-493</p>
					<p>DOI: 10.3897/jucs.129859</p>
					<p>Authors: Manpreet Singh, Jitender Kumar Chhabra</p>
					<p>Abstract: Fault prediction of multimedia software is necessary to develop good quality multimedia software because integrating heterogeneous multimedia components in a software system usually generates many faults. This research article therefore proposes a new fault prediction model based on threshold values of structural features. These features are captured using metrics specifically identified for multimedia software and weighted suitably based on the behavior of the components dealing with multimedia handling. The threshold values are optimized using the genetic algorithm (GA). This paper also proposes a GA-based technique to combine multiple features using conjunction (AND) and disjunction (OR) operators while finding threshold values. Finally, the proposed model is tested for cross-project software fault prediction on six selected multimedia software systems and validated on three other general software datasets. Results show that our threshold-based model performs excellently for multimedia software and satisfactorily for other general software.</p>
					<p><a href="https://lib.jucs.org/article/129859/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/129859/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Apr 2025 08:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>OntoKaire: an ontology-based reasoning for work-related stressors in industrial settings</title>
		    <link>https://lib.jucs.org/article/128779/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(5): 445-468</p>
					<p>DOI: 10.3897/jucs.128779</p>
					<p>Authors: Carlos Goetz, Rodrigo Simon Bavaresco, Wesllei Felipe Heckler, Gustavo Lazarotto Schroeder, Rafael Kunst, Jorge Luis Victória Barbosa</p>
					<p>Abstract: Stress is a mental health condition responsible for impacting industry through psychosomatic illnesses, loss of productivity, and accidents caused by stressful workplaces. Conversely, the literature indicates that fostering mental well-being among workers can boost motivation and performance while alleviating symptoms of stress. The fourth industrial revolution incorporated technologies into work that allowed the automation of processes and control of environments. The fifth revolution introduced the application of research and innovation aimed at a human-centered consciousness, enabling the advancement of mental health through sensors and wearables. Despite advancements in stress classification technologies, there remain opportunities for further research into identifying stress motivators within industrial work environments. In this sense, this paper proposes an ontology to identify stressors considering personal and environmental data, allowing knowledge generation related to work stressors for mitigating the problem. The methodology utilized in this ontology development consisted of seven stages and two evaluation phases. The findings addressed the four key competency questions outlined in the model. The results revealed potential stressful scenarios, including the timing of occurrence, shared locations, environmental factors, and identifying groups experiencing moments of stress. As a scientific contribution, this study presents the first ontology to address the identification of work-related stressors in the industrial environment.</p>
					<p><a href="https://lib.jucs.org/article/128779/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/128779/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Apr 2025 08:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Explainable AI and deep learning models for recommender systems: State of the art and challenges</title>
		    <link>https://lib.jucs.org/article/122380/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(4): 383-421</p>
					<p>DOI: 10.3897/jucs.122380</p>
					<p>Authors: Maroua Benleulmi, Ibtissem Gasmi, Nabiha Azizi, Nilanjan Dey</p>
					<p>Abstract: Recommender systems play a pivotal role in delivering personalized and relevant suggestions to users based on their preferences and activities. This paper presents a thorough overview of deep learning-based recommender systems, exploring how deep learning can enhance performance and overcome limitations. The survey encompasses fundamental recommender system models and delves into key deep learning models, focusing on the effective integration of deep learning techniques into recommender systems. Real-world applications highlight the effectiveness of these approaches in capturing complex, nonlinear patterns from large-scale data. The paper concludes by reflecting on challenges encountered in this research field and outlining potential future directions, offering valuable insights for academics and professionals working on recommender systems based on deep learning.</p>
					<p><a href="https://lib.jucs.org/article/122380/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/122380/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Mar 2025 10:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Formal Framework for Metamodeling in the Context of MDE</title>
		    <link>https://lib.jucs.org/article/121457/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(4): 338-362</p>
					<p>DOI: 10.3897/jucs.121457</p>
					<p>Authors: Liliana Favre</p>
					<p>Abstract: Metamodeling is a central concept in Model Driven Engineering (MDE). An important consideration in metamodeling is that secure metamodels are a prerequisite for secure software, since errors in a metamodel lead to errors in its instances (models). Formal methods can help solve this problem by providing systematic and rigorous techniques for reducing ambiguities and inconsistencies in the specification of metamodels. The goal of this article is to present a unified formal framework for metamodeling in the context of MDE, essentially based on MOF, the metamodeling foundation of the OMG industry standards. It is based on the Nereus metamodeling language and includes transformers for translating both MOF metamodels to Nereus metamodels and Nereus metamodels to MOF metamodels, with some prospects for future industrial use of these results. The Nereus language can be seen as a concrete syntax for MOF, extended by additional properties expressed by axioms. Transformers are defined starting from systems of transformation rules that allow automation of processes. An original real-world case in the context of model-driven reverse engineering is described.</p>
					<p><a href="https://lib.jucs.org/article/121457/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/121457/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Mar 2025 10:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Java Compiler Plugin for Type-Safe Inferences in Generics</title>
		    <link>https://lib.jucs.org/article/106159/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(4): 312-337</p>
					<p>DOI: 10.3897/jucs.106159</p>
					<p>Authors: Neha Kumari, Rajeev Kumar</p>
					<p>Abstract: The two most significant yet complex elements of Java generics are wildcards and type argument inference. Both processes rely on the compiler. Even though type argument inference and wildcard execution are implicit processes, a programmer should be aware of them to make the most of these features. A compiler error message reveals much about the code and the underlying process. If the error message is unambiguous and sound, it is easy for the programmer to debug the code. However, in the context of wildcard-type argument inference, the current Javac compiler emits cryptic and imprecise error messages. A programmer may be confused about the inference outcome and its failure, making the errors difficult to resolve. In this paper, we propose a few additions to the current wildcard-based type inference algorithm to obtain detailed and valuable error messages. We implement a Java compiler plugin tool based on the proposed algorithm. The plugin can be easily executed through the Java command line and gives comprehensive error messages that help programmers resolve errors more effectively.</p>
					<p><a href="https://lib.jucs.org/article/106159/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/106159/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Mar 2025 10:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Towards the Adoption of Blockchain to Trustworthy Interoperability in Industry 4.0 Systems: A Case Study</title>
		    <link>https://lib.jucs.org/article/125714/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(2): 189-206</p>
					<p>DOI: 10.3897/jucs.125714</p>
					<p>Authors: Ana Paula Allian, Frank Schnicke, Pablo Oliveira Antonino, Thomas Kuhn, Elisa Yumi Nakagawa</p>
					<p>Abstract: The rapid evolution of Industry 4.0 has brought forth transformative changes in manufacturing, accentuating the need for seamless interoperability among heterogeneous systems. However, the geographically distributed and decentralized nature of Industry 4.0 ecosystems presents a pressing challenge: ensuring trustworthy interoperability within a complex web of entities and intermediaries. This paper delves into the pivotal role of blockchain technology in addressing this challenge, aiming to bridge the gap between theoretical promises and practical applications. By examining the feasibility and efficacy of blockchain solutions in fostering trust and enabling interoperability within Industry 4.0 environments, we confront the pressing issue of data security, integrity, and reliability. Through the lens of seven blockchain-based solutions, we navigate the intricate landscape of Industry 4.0, offering insights into the trade-offs, risks, and potentials associated with blockchain adoption. Real-world case studies and practical demonstrations underscore the urgency and relevance of our research, shedding light on pathways for industry stakeholders to navigate the complexities of interoperability. Our findings not only contribute to advancing the discourse on blockchain&rsquo;s role in Industry 4.0 but also provide actionable strategies for addressing the overarching challenge of ensuring trustworthy interoperability in the digital age.</p>
					<p><a href="https://lib.jucs.org/article/125714/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/125714/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Feb 2025 08:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cross-Community Question Relevance Prediction for Stack Overflow and GitHub</title>
		    <link>https://lib.jucs.org/article/119772/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(1): 52-71</p>
					<p>DOI: 10.3897/jucs.119772</p>
					<p>Authors: Song Yu, Bugao Jiang, Danni Zhang, Zhifang Liao</p>
					<p>Abstract: As the open-source community has evolved, Stack Overflow (SO) has gained extensive usage. The question-and-answer community&rsquo;s mechanism for recommending related questions helps users discover more content relevant to their current problems, expediting issue resolution. However, recommending relevant questions within a single community limits both the amount and the diversity of available content, and the recommendation results rely heavily on the community&rsquo;s existing knowledge. Stack Overflow still harbors a substantial number of unresolved questions. To address this situation, this paper proposes a cross-community question relevance prediction model, CCQRP, to predict the relevance of Stack Overflow questions and GitHub (GH) issues and recommend relevant GitHub issues. CCQRP aims to help developers resolve problems effectively and enhance development efficiency. We design an embedding layer incorporating BERTOverflow and Bi-LSTM and devise a weighted attention matrix based on the named entity types of tokens. This matrix assigns different weights to tokens of varying named entity types during prediction, capturing critical information to predict the relevance of SO questions and GH issues. Due to the lack of existing datasets, we construct the Question-Issue (QI) dataset, consisting of Stack Overflow questions, GitHub issues, and the corresponding question-issue relevance, containing 240,000 related SO question-GH issue pairs and 470,000 unrelated pairs. We evaluate the effectiveness of CCQRP on QI. Compared to the latest models (MQDD, CodeBERT, ASIM), CCQRP improves F1-score by 0.60% to 10.86% and exhibits robust generalization capabilities.</p>
					<p><a href="https://lib.jucs.org/article/119772/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/119772/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jan 2025 16:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>Convolutional Neural Networks for Software Defect Categorization: An Empirical Validation</title>
		    <link>https://lib.jucs.org/article/117185/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(1): 22-51</p>
					<p>DOI: 10.3897/jucs.117185</p>
					<p>Authors: Ruchika Malhotra, Madhukar Cherukuri</p>
					<p>Abstract: The escalating complexity and scale of software systems have rendered them increasingly susceptible to a variety of defects. To empower maintenance teams to efficiently prioritize and resolve defects, Software Defect Categorization (SDC) models have emerged, classifying software defects into categories such as &quot;high,&quot; &quot;medium,&quot; or &quot;low.&quot; This study develops SDC models based on three critical defect attributes: i) the maintenance effort required to rectify a defect, ii) the change impact on the software induced by defect resolution, and iii) a combined approach that integrates both maintenance effort and change impact. Leveraging the prevailing advancements in computational power and storage capacity, the study presents a novel defect categorization model built upon Convolutional Neural Networks (CNNs). Extensive experiments were carried out on defect datasets from five Android operating system application modules, leading to the creation of 60 SDC models (5 datasets x 4 feature sets x 3 approaches). The results underscore the predictive potential of the CNN-based defect categorization model. Furthermore, SDC models rooted in the combined approach exhibit superior performance compared to models based solely on change impact and remain competitive with those anchored in maintenance effort.</p>
					<p><a href="https://lib.jucs.org/article/117185/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/117185/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jan 2025 16:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Computational Game Unit Balancing based on Game Theory</title>
		    <link>https://lib.jucs.org/article/121185/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(1): 3-21</p>
					<p>DOI: 10.3897/jucs.121185</p>
					<p>Authors: Emre Önal, Abdullah Bülbül</p>
					<p>Abstract: Optimizing game elements through iterative human playtests can be time-consuming and insufficient for games with complex intransitive mechanics. Imbalances in games often require the release of numerous balance patches. We present a computational method for making each game unit equally preferable against a uniform play strategy. We leverage concepts from game theory to model intricate relationships among intransitive entities. Matching units against each other is modeled as a symmetric zero-sum game, where unit selection represents a strategy and the error is quantified using payoff values derived from unit parameters. The algorithm takes the initial unit parameters provided by the game designer and optimizes them with minimal changes using gradient descent. Consequently, the payoff matrix converges to a state where a uniform strategy is a near Nash equilibrium, ensuring that each unit is equally preferable under the optimized condition. We implemented a testing environment based on fictitious play and verified our results on different scenarios. While the majority of game theory research focuses on finding optimal strategies given specific environmental conditions, this paper takes a different perspective within the context of game design. We explore game theoretic concepts to address the goal of designing environments that lead to desired strategy choices.</p>
					<p><a href="https://lib.jucs.org/article/121185/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/121185/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jan 2025 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Using Adaptive Content Recommendations to Improve Logic and Programming Teaching and Learning</title>
		    <link>https://lib.jucs.org/article/115016/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(12): 1645-1661</p>
					<p>DOI: 10.3897/jucs.115016</p>
					<p>Authors: Aluizio Haendchen Filho, Adson Marques da Silva Esteves, Hércules Antonio do Prado, Edilson Ferneda, André Luis Alice Raabe</p>
					<p>Abstract: The high dropout rate in Information Technology courses is a relevant problem in many countries, mainly because of the increasing demand for professionals in this sector. High dropout rates in these courses are usually related to difficulties with algorithms and programming subjects. Content recommendation systems have been proposed to mitigate this problem, employing adaptive learning environments that facilitate the learning process. This study presents a content recommendation system that uses learning paths to group students and provide personalized recommendations based on peers&#39; progress, building on the many group-based recommendation efforts reported in the literature. The system uses intelligent agents and clustering algorithms and was evaluated by submitting simulation results to the judgment of human experts, who significantly agreed with them. This initiative could make programming teaching more adaptive by using the groups&#39; knowledge. Facilitating learning is one of the key issues in reducing dropout rates and resolving the shortage of labor in the technological area in Portuguese-speaking countries.</p>
					<p><a href="https://lib.jucs.org/article/115016/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/115016/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Nov 2024 16:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Systematic Mapping of Configuration Management Activities in Software Product Line</title>
		    <link>https://lib.jucs.org/article/110887/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(11): 1484-1510</p>
					<p>DOI: 10.3897/jucs.110887</p>
					<p>Authors: Gonzalo P. Espinel-Mena, José L. Carrillo-Medina, Eddie E. Galarza, Mario Matias Urbieta</p>
					<p>Abstract: In the software product line (SPL), configuration management (CM) is a multidimensional problem that is attracting considerable attention in software development. Although much research has addressed this topic, there is no clear view of its current state. In this study, we used a systematic method to develop a map of configuration management across product lines and classify the relevant literature. The resulting map provides an overview of this research through the identification of the main CM activities, the types and trends of research, and the maturity of existing contributions. Because CM in SPL is still in its formative stage, we believe this work will contribute to providing a more common and coherent conceptual basis for its understanding. In addition, it can help to detect important research problems and gaps.</p>
					<p><a href="https://lib.jucs.org/article/110887/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/110887/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Oct 2024 16:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Microservices Patterns Recommendation based on Information Retrieval</title>
		    <link>https://lib.jucs.org/article/108974/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(11): 1455-1483</p>
					<p>DOI: 10.3897/jucs.108974</p>
					<p>Authors: Álex dos Santos Moura, Fábio Gomes Rocha, Michel S. Soares</p>
					<p>Abstract: Software developers encounter recurring problems during software development, which can be solved using proven solutions known as design patterns. Microservices architecture, a variant of Service-Oriented Architecture (SOA), presents common communication, deployment, and service definition challenges. However, selecting the appropriate design pattern from a vast pool of patterns to solve a given problem is difficult for novice and experienced developers alike. This paper proposes a recommendation tool based on Information Retrieval (IR) to assist developers in choosing a suitable microservices pattern for a given problem. The tool leverages textual descriptions provided by developers to produce relevant recommendations of microservices patterns. It has been evaluated on both toy and industrial problems, with promising results: the tool solved 60% of toy design problems, indicating that it can provide valuable and accurate recommendations. Furthermore, tests with industrial problems revealed that over 70% of the recommended patterns helped address the problems at hand. Interviews with developers working in the software industry corroborated the recommendation tool&rsquo;s effectiveness and practicality.</p>
					<p><a href="https://lib.jucs.org/article/108974/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/108974/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Oct 2024 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cross-device Portability of Machine Learning Models in Electromagnetic Side-Channel Analysis for Forensics</title>
		    <link>https://lib.jucs.org/article/109788/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(10): 1390-1423</p>
					<p>DOI: 10.3897/jucs.109788</p>
					<p>Authors: Lojenaa Navanesan, Nhien-An Le-Khac, Yossi Oren, Kasun De Zoysa, Asanka P. Sayakkara</p>
					<p>Abstract: The possession of smart devices has become ingrained in daily life. Therefore, smart devices such as IoT devices and smartphones are crucial sources of evidence in instances where criminal activity occurs. Due to the challenges of applying traditional digital forensic techniques to smart devices, it has recently been proposed in the literature to leverage electromagnetic side-channel analysis (EM-SCA) for this purpose. This paper identifies and discusses an important barrier that hinders the successful application of EM-SCA to digital forensics, namely the issue of cross-device portability of the machine learning (ML) models used for EM-SCA. Firstly, the paper empirically evaluates the possibility of using trained ML models to extract forensic insights from EM radiation data of IoT devices. During this empirical study, the inability to reuse a trained ML model across different devices is identified. Secondly, the paper surveys the literature for related work on using EM-SCA to gather information from smart devices, in order to identify whether any existing work has introduced potential approaches to enable cross-device portability of ML models in EM-SCA. The findings of this survey indicate that the identified problem still exists and requires further study, opening the door to future research.</p>
					<p><a href="https://lib.jucs.org/article/109788/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/109788/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Sep 2024 10:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>Accessibility Barriers for Blind Students in Teaching-learning Systems</title>
		    <link>https://lib.jucs.org/article/106239/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(10): 1343-1371</p>
					<p>DOI: 10.3897/jucs.106239</p>
					<p>Authors: Michele dos Santos Soares, Cássio Andrade Furukawa, Maria Istela Cagnin, Débora Maria Barroso Paiva</p>
					<p>Abstract: The use of digital technology by educators has received increasing attention in recent years. In spite of this, students with disabilities still face many accessibility obstacles when using digital technologies. In this context, the main goal of this paper is to identify the accessibility barriers faced by the community of blind students and to highlight the main factors that hinder this community from accessing learning objects. For this purpose, a Systematic Literature Review (SLR) was first conducted, which made it possible to collect the main accessibility problems identified by the scientific community. To complement and detail the information obtained in the SLR, a questionnaire was submitted to blind students, which revealed new difficulties from a practical point of view. Finally, the accessibility barriers found in the SLR and in the questionnaire responses were analyzed, and the results were related to the Web Content Accessibility Guidelines (WCAG), the main document explaining how to make web content accessible to people with disabilities. This work seeks to explain the main factors that hinder or prevent access to learning objects in teaching-learning systems, promote a discussion on alternatives for improving these resources, identify gaps, and guide more detailed studies on the subject.</p>
					<p><a href="https://lib.jucs.org/article/106239/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/106239/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Sep 2024 10:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Flexible Multilevel System for Mitre ATT&amp;CK Model-driven Alerts and Events Correlation in Cyberattacks Detection</title>
		    <link>https://lib.jucs.org/article/131686/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(9): 1184-1204</p>
					<p>DOI: 10.3897/jucs.131686</p>
					<p>Authors: Javier Muñoz-Calle, Rafael Estepa Alonso, Antonio Estepa Alonso, Jesús E. Díaz-Verdejo, Elvira Castillo Fernández, Germán Madinabeitia</p>
					<p>Abstract: Network monitoring systems can struggle to detect the full sequence of actions in a multi-step cyber attack, frequently resulting in multiple alerts (some of them false positives (FP)) and missed actions. The challenge of easing the job of security analysts by triggering a single, accurate alert per attack requires developing and evaluating advanced event correlation techniques and models that can devise relationships between the different observed events and alerts. This work introduces a flexible architecture designed for hierarchical and iterative correlation of alerts and events. Its key feature is the sequential correlation of operations targeting specific attack episodes or aspects. The architecture utilizes alerts from IDS or similar cybersecurity sensors, storing events and alerts in a non-relational database. Modules designed for knowledge creation then query these stored items to generate meta-alerts, which are also stored in the database. This approach facilitates creating more refined knowledge on top of the existing knowledge by creating specialized modules. For illustrative purposes, we present a case study in which we use this architectural approach to explore the feasibility of monitoring the progress of attacks of increasing complexity by increasing the levels of the hyperalerts defined, including the case of a multi-step attack that adheres to the ATT&amp;CK model. Although the mapping between the observations and the model components (i.e., techniques and tactics) is challenging, we could fully monitor the progress of two attacks and up to 5 out of 6 steps of the most complex attack by building up to three specialized modules. Despite some limitations due to the sensors and attack scenarios tested, the results indicate the architecture&rsquo;s potential for enhancing the detection of complex cyber attacks, offering a promising direction for future cybersecurity research.</p>
					<p><a href="https://lib.jucs.org/article/131686/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/131686/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 14 Sep 2024 16:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Empirical Evaluation of Large Language Models in Static Code Analysis for PHP Vulnerability Detection</title>
		    <link>https://lib.jucs.org/article/134739/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(9): 1163-1183</p>
					<p>DOI: 10.3897/jucs.134739</p>
					<p>Authors: Orçun Çetin, Emre Ekmekcioglu, Budi Arief, Julio Hernandez-Castro</p>
					<p>Abstract: Web services play an important role in our daily lives. They are used in a wide range of activities, from online banking and shopping to education, entertainment and social interactions. Therefore, it is essential to ensure that they are kept as secure as possible. However &ndash; as is the case with any complex software system &ndash; creating sophisticated software free from any security vulnerabilities is a very challenging task. One method to enhance software security is static code analysis. This technique can be used to identify potential vulnerabilities in source code before they are exploited by bad actors. This approach has been instrumental in tackling many vulnerabilities, but it is not without limitations. Recent research suggests that static code analysis can benefit from the use of large language models (LLMs). This is a promising line of research, but there are still very few and quite limited studies in the literature on the effectiveness of various LLMs at detecting vulnerabilities in source code. This is the research gap that we aim to address in this work. Our study examined five notable LLM chatbot models: ChatGPT-4, ChatGPT-3.5, Claude, Bard/Gemini, and Llama-2, assessing their abilities to identify 104 known vulnerabilities spanning the Top-10 categories defined by the Open Worldwide Application Security Project (OWASP). Moreover, we evaluated these LLMs&rsquo; false-positive rates using 97 patched code samples. We specifically focused on PHP vulnerabilities, given the language&rsquo;s prevalence in web applications. We found that ChatGPT-4 has the highest vulnerability detection rate, with over 61.5% of vulnerabilities found, followed by ChatGPT-3.5 at 50%. Bard has the highest rate of missed vulnerabilities, at 53.8%, and the lowest detection rate, at 13.4%. For all models, a significant percentage of vulnerabilities were classified as partially found, indicating a level of uncertainty or incomplete detection across all tested LLMs. Moreover, we found that ChatGPT-4 and ChatGPT-3.5 are consistently more effective across most categories compared to the other models, while Bard and Llama-2 display limited effectiveness in detecting vulnerabilities across the majority of the categories listed. Surprisingly, our findings reveal high false positive rates across all LLMs. Even the best-performing model (ChatGPT-4) had a false positive rate of nearly 63%, while several models performed far worse, with false positive rates of over 90%. Finally, simultaneously deploying multiple LLMs for static analysis resulted in only a marginal enhancement in vulnerability detection rates. We believe these results generalize to most other programming languages, and hence are far from being limited to PHP.</p>
					<p><a href="https://lib.jucs.org/article/134739/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/134739/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 14 Sep 2024 16:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Exploiting TTPs to Design an Extensible and Explainable Malware Detection System</title>
		    <link>https://lib.jucs.org/article/131753/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(9): 1140-1162</p>
					<p>DOI: 10.3897/jucs.131753</p>
					<p>Authors: Yashovardhan Sharma, Simon Birnbach, Ivan Martinovic</p>
					<p>Abstract: In recent years, numerous sophisticated malware detection systems have been proposed, many of which are based on machine learning. Though such systems attain impressive results, they are often designed with effectiveness as the main, if not only, requirement. As a result, the effectiveness of such systems, especially if based on deep learning models, often comes with (i) poor extensibility, being very difficult to adapt and/or extend to other settings, and (ii) poor explainability, since it is often not possible for humans to understand the reasons behind the model&rsquo;s predictions, making further analysis of threats a challenge. In this paper we show how it is possible to design an extensible and explainable yet effective malware detection system. Extensibility is obtained thanks to the exploitation of TTPs (Tactics, Techniques, and Procedures) from the popular MITRE ATT&amp;CK framework, an ontology of adversarial behaviour that allows us to divide the general problem of malware detection into the smaller problems of detecting the different types of malicious activity that can be carried out. Explainability is obtained by returning (i) which TTPs have been detected and are responsible for the classification of the entire behaviour as malicious, and (ii) why such TTPs have been classified as malicious. To demonstrate the viability of this approach we implement these ideas in a system called RADAR. We evaluate RADAR on a very large dataset comprising 2,286,907 malicious and benign samples, representing a total of 84,792,452 network flows. The experimental analysis confirms that the proposed methodology can be effectively exploited: RADAR&rsquo;s ability to detect malware is comparable to that of other state-of-the-art non-interpretable systems. To the best of our knowledge, RADAR is the first TTP-based system for malware detection that uses machine learning while being extensible and explainable.</p>
					<p><a href="https://lib.jucs.org/article/131753/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/131753/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 14 Sep 2024 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Editorial: Fighting Cybersecurity Risks from a Multidisciplinary Perspective</title>
		    <link>https://lib.jucs.org/article/131628/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(9): 1137-1139</p>
					<p>DOI: 10.3897/jucs.131628</p>
					<p>Authors: Steffen Wendzel, Aleksandra Mileva, Virginia N. L. Franqueira, Martin Gilje Jaatun</p>
					<p>Abstract: Digitization, powered by the Internet, artificial intelligence, interoperable data formats and communication standards, high-bandwidth mobile technology, and nano-technology, allows for an increasing number of new services that are tailored to the particular demands of end-users, industry and government organizations. However, these new digital services have also become the major focus of cybercrime. Whereas traditional research mostly covered purely technical aspects of cybercrime, it is becoming increasingly important to address cybercrime and cybersecurity in a multidisciplinary fashion, including legal, behavioral, technical and sociological aspects. This special issue offers a mixture of selected extended versions of papers presented at the European Interdisciplinary Cybersecurity Conference (EICC&rsquo;23), which took place in Stavanger, Norway, as well as submissions from an open call. We considered papers dealing with the above-mentioned risks and problems, new challenges, interdisciplinary issues, and innovative multidisciplinary solutions (defense mechanisms, methods, and countermeasures) for promoting cybersecurity in cyberspace. Overall, we received 15 submissions. Each paper received at least three reviews. After a first round of reviews, eight were rejected. The remaining seven papers underwent another round of reviews (five papers underwent a major revision and only two papers were scheduled to undergo a minor revision). Finally, the authors of these seven papers adequately addressed the reviewers&rsquo; comments, and their papers have thus been accepted for inclusion in this special issue. We would like to thank all authors who submitted their work to this special issue and all reviewers for their contributions. Further, we would like to thank the J.UCS team for accepting our special issue for inclusion in their journal. We hope that all readers will enjoy this special issue.</p>
					<p><a href="https://lib.jucs.org/article/131628/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/131628/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Editorial</category>
		    <pubDate>Sat, 14 Sep 2024 16:00:01 +0000</pubDate>
		</item>
	
		<item>
		    <title>Usa-DSL: a Process for Usability Evaluation of Domain-Specific Languages</title>
		    <link>https://lib.jucs.org/article/103264/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(8): 1023-1047</p>
					<p>DOI: 10.3897/jucs.103264</p>
					<p>Authors: Ildevana Poltronieri, Avelino Francisco Zorzo, Maicon Bernardino, Edson OliveiraJr</p>
					<p>Abstract: Software architects and developers often use Domain-Specific Languages (DSLs) to model or code applications. However, designing a DSL that effectively represents its domain can be a challenge, potentially contributing to poor uptake and usage. To the best of our knowledge, one issue is that DSL designers may evaluate their language&rsquo;s usability using ad hoc processes, due to a lack of expertise in usability evaluation. Additionally, current approaches lack well-defined processes and may not yield the desired results for DSL designers. Therefore, DSL designers require a well-defined usability evaluation process to assess how architects, developers, and end users perceive their DSL. This paper introduces Usa-DSL, a Usability Evaluation Process for Domain-Specific Languages. Usa-DSL aims to assist DSL designers in evaluating their languages in terms of ease and quality of use, without requiring deep knowledge of usability evaluation. We analyze the feasibility of Usa-DSL and show that it is a useful and user-friendly tool for evaluating DSLs.</p>
					<p><a href="https://lib.jucs.org/article/103264/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/103264/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Aug 2024 16:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Empirical Study on the Correctness and Effort to Integrate Feature Models</title>
		    <link>https://lib.jucs.org/article/94073/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(7): 880-908</p>
					<p>DOI: 10.3897/jucs.94073</p>
					<p>Authors: Vinicius Bischoff, Kleinner Farias</p>
					<p>Abstract: Feature model integration is pivotal in software development, particularly in evolving software product lines through new feature accommodations. Despite its significance, the influence of developers&rsquo; experience on integration efforts and correctness still needs to be more adequately understood. This study conducted a controlled experiment with 25 participants (18 students and seven professionals) following well-known guidelines to run empirical studies. Each participant addressed ten experimental tasks, encompassing 250 integration scenarios, to explore two research questions. The effort and correctness rate in integrating feature models were quantified, revealing that students exerted higher effort (29.23%) and achieved a higher number of correct integrations (39.53%) than professionals. Notably, this superiority lacked statistical significance. Additionally, this article highlights practical implications and noteworthy challenges for the scientific community, providing valuable insights for software development practices. The findings lay a foundation for future studies, delving into software development tasks where students and professionals may achieve comparable results. Finally, this study marks an initial step towards an ambitious agenda, empirically advancing the feature model integration field.</p>
					<p><a href="https://lib.jucs.org/article/94073/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/94073/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Jul 2024 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Digital Transformation of Public Services in a Startup-Based Environment: Job Perceptions, Relationships, Potentialities and Restrictions</title>
		    <link>https://lib.jucs.org/article/106979/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(6): 720-757</p>
					<p>DOI: 10.3897/jucs.106979</p>
					<p>Authors: George Marsicano, Edna Dias Canedo, Glauco V. Pedrosa, Cristiane S. Ramos, Rejane M. da C. Figueiredo</p>
					<p>Abstract: Digital transformation in public administration needs to be accompanied by more dynamic and intelligent strategies, which effect cultural change. Inspired by the business culture of startups, in 2021 the Brazilian government created the StartUp GOV.BR program to develop and accelerate the development of digital transformation projects within the Federal Government. This program aims to make digital transformation processes more proactive and flexible and generate more profitable operations. In this work, we investigated the perception of ICT practitioners (members of startups) about the program and the issues that surround it. Our goal was to identify relations, potentialities and restrictions of this program to contribute to outlining growth strategies, as well as the assets and capabilities needed to successfully transform digital public services in a startup-based environment. For this purpose, we conducted 23 focus groups with up to 12 people, totaling 175 participants. Then, we fully transcribed and qualitatively analyzed the data from each of the focus groups based on Grounded Theory. As a result, we developed maps of relationships between categories, along with narratives that help explain and understand the members&rsquo; perception of the StartUp GOV.BR program. We also listed 34 points for improvement and 62 actions to be taken to improve the program. The results achieved in this work can contribute to a research agenda of initiatives towards the Digital Transformation of public services in governments around the world combining innovative digital strategies based on the perspective of professionals.</p>
					<p><a href="https://lib.jucs.org/article/106979/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/106979/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2024 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Software Process Improvement by Managing Situational Method Engineering Knowledge</title>
		    <link>https://lib.jucs.org/article/110894/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(5): 645-673</p>
					<p>DOI: 10.3897/jucs.110894</p>
					<p>Authors: Razieh Dehghani, Raman Ramsin</p>
					<p>Abstract: Organizational processes have been recognized as valuable knowledge assets. Situational Method Engineering (SME) processes are particularly valuable as they are used for engineering other processes: SME processes help construct bespoke Software Development Methodologies (SDMs) for specific software-engineering project situations. Every SDM has a Software Development Process (SDP) at its heart, which specifies the activities that should be performed throughout the project, the products that should be produced, and the people that should be involved. Existing SME methods suffer from certain weaknesses that are rooted in loss of knowledge within their processes; for instance, the method engineers&#39; experience, which is a kind of tacit knowledge, is not properly captured and utilized in these processes. Managing SME process knowledge helps alleviate these weaknesses through reusing the software developers&#39; experience and maintaining the method engineers&#39; knowledge. We propose an evaluation framework that can be used for assessing an SME method&#39;s ability to manage process knowledge. We also provide a model that guides the improvement of existing SME methods in their support for Knowledge Management (KM), and also helps engineer new SME methods that provide adequate KM support. We have assessed the applicability of the proposed evaluation framework and improvement model by using them to enhance eight prominent SME methods, and also by applying them to four industrial case studies.</p>
					<p><a href="https://lib.jucs.org/article/110894/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/110894/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 May 2024 16:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>UP-Home: A Self-Adaptive Solution for Smart Home Security</title>
		    <link>https://lib.jucs.org/article/107050/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(4): 502-530</p>
					<p>DOI: 10.3897/jucs.107050</p>
					<p>Authors: Josival Silva, Nelson Rosa, Fernando Aires</p>
					<p>Abstract: Smart home devices are vulnerable to attacks that put their users&rsquo; security at risk. Vulnerabilities are discovered very frequently and can expose these devices through unsecured services. Meanwhile, the lack of standardisation in upgrade methods makes smart homes a potentially vulnerable environment. Furthermore, many manufacturers release their products and then abandon them, refusing to support security updates. As a result, security updates are needed to deal with the emergence of new attacks. There are several proposals to promote security in smart homes. However, solutions in which changes for security purposes occur with little or no human intervention are rare. This paper presents UP-Home, a self-adaptive solution that manages the security of smart homes. UP-Home aims to ensure that smart home devices meet the security requirements set by industry standards. The solution can continually identify and mitigate smart home security vulnerabilities. With autonomous computing techniques, UP-Home seeks to ensure the self-protection of devices and, consequently, the entire smart home. The UP-Home evaluation showed significant improvements in the security of the smart home without any human intervention.</p>
					<p><a href="https://lib.jucs.org/article/107050/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/107050/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Apr 2024 17:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>An SVR-based and Location-aware Method for Mobile QoS Prediction</title>
		    <link>https://lib.jucs.org/article/106314/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(3): 383-401</p>
					<p>DOI: 10.3897/jucs.106314</p>
					<p>Authors: Lifang Ren, Jing Li, Wenjian Wang</p>
					<p>Abstract: With the rapid development of intelligent mobile communication technology, the number of mobile services and the number of mobile users are both continuously increasing. Consequently, the services used by any single user can only account for a very small proportion of the existing services, which results in a sparse user-service quality of service (QoS) matrix. However, QoS is critical for service selection and service recommendation. Therefore, predicting the unknown values of the sparse QoS matrix is essential, yet the sparsity of QoS data makes prediction accuracy difficult to improve. Faced with this problem, this paper utilizes the outstanding generalization ability of support vector regression (SVR), and its property of depending only on support vectors, to overcome the difficulty brought by the sparsity of the data and predict the unknown QoS more accurately. Moreover, in the mobile environment, QoS values are closely related to the locations of the invoking users. Therefore, this paper improves the accuracy of QoS prediction by incorporating into the feature vectors not only the information of similar users but also the information of nearby users. On the other hand, the known QoS values of nearby users can be used to roughly estimate the unknown QoS values of a cold-start user, so as to alleviate the cold-start problem to some extent. Thus, a location-aware SVR-based method for QoS prediction (SVR4QP) is proposed. Compared with some classical QoS prediction algorithms, the experimental results show that in 1/3 of the cases SVR4QP is moderate; in 1/6 of the cases it is suboptimal; and in half of the cases it is optimal. Compared with some novel mobile QoS prediction methods, the results show that in 1/4 of the cases SVR4QP is moderate; in half of the cases it is suboptimal; and in 1/4 of the cases it is optimal. All these results indicate that SVR4QP provides comparatively more accurate mobile QoS prediction.</p>
					<p><a href="https://lib.jucs.org/article/106314/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/106314/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Mar 2024 16:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>Visualizing Portable Executable Headers for Ransomware Detection: A Deep Learning-Based Approach</title>
		    <link>https://lib.jucs.org/article/104901/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(2): 262-286</p>
					<p>DOI: 10.3897/jucs.104901</p>
					<p>Authors: Tien Quang Dam, Nghia Thinh Nguyen, Trung Viet Le, Tran Duc Le, Sylvestre Uwizeyemungu, Thang Le-Dinh</p>
					<p>Abstract: In recent years, the rapid evolution of ransomware has led to the development of numerous techniques designed to evade traditional malware detection methods. To address this issue, a novel approach is proposed in this study, leveraging machine learning to encode critical information from Portable Executable (PE) headers into visual representations of ransomware samples. The proposed method selects highly impactful features for data sample classification and encodes them as images based on predefined color rules. A deep learning model named peIRCECon (PE Header-Image-based Ransomware Classification Ensemble with Concatenating) is also developed by integrating prominent architectures, such as VGG16 and ResNet50, and incorporating the concatenating method to enhance ransomware detection and classification performance. Experimental results using self-collected datasets demonstrate the efficacy of this approach, achieving high accuracy of 99.85% in distinguishing between ransomware and benign samples. This promising approach holds the potential to significantly improve the effectiveness of ransomware detection and classification, thereby contributing to more robust cybersecurity defense systems.</p>
					<p><a href="https://lib.jucs.org/article/104901/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/104901/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Feb 2024 16:00:07 +0000</pubDate>
		</item>
	
		<item>
		    <title>Recommendation of Machine Learning Techniques for Software Effort Estimation using Multi-Criteria Decision Making</title>
		    <link>https://lib.jucs.org/article/110051/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(2): 221-241</p>
					<p>DOI: 10.3897/jucs.110051</p>
					<p>Authors: Ajay Kumar</p>
					<p>Abstract: Software Effort Estimation (SEE) is one of the essential tasks for the development of the software industry. Project managers can overcome budget and time overrun issues by accurately estimating a software project&#39;s development effort in the software life cycle. In prior studies, a variety of machine learning methods for SEE modeling were applied, but the outcomes for various performance or accuracy measures are inconclusive. Therefore, a mechanism for assessing machine learning approaches for SEE modeling in the context of several contradictory accuracy measures is desperately needed. This study addresses selecting the most appropriate machine learning technique for SEE modeling as a Multi-Criteria Decision Making (MCDM) problem. In the proposed approach, three MCDM methods &ndash; Weighted Aggregated Sum Product Assessment (WASPAS), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) &ndash; were applied to rank machine learning techniques on SEE performance based on multiple conflicting accuracy measures. To validate the proposed method, an experimental study was conducted over three SEE datasets using ten machine-learning techniques and six performance measures. Based on the MCDM rankings, Random Forest, Support Vector Regression, and Kstar are recommended as the most appropriate machine learning techniques for SEE modeling. The results show how effectively the suggested MCDM-based approach can recommend the appropriate machine learning technique for SEE modeling while considering various competing accuracy or performance measures altogether.</p>
					<p><a href="https://lib.jucs.org/article/110051/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/110051/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Feb 2024 16:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Proposal of Naturalistic Software Development Method</title>
		    <link>https://lib.jucs.org/article/105637/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(2): 179-203</p>
					<p>DOI: 10.3897/jucs.105637</p>
					<p>Authors: Lizbeth Alejandra Hernández-González, Ulises Juárez-Martínez, Jezreel Mejía, Alberto Aguilar-Laserre</p>
					<p>Abstract: Naturalistic programming purports to include natural language elements in programming languages to increase software expressiveness. Even though natural language is inherently ambiguous, it is richer and thus more expressive than any artificial language. Currently, the Naturalistic Programming Paradigm (NPP) is supported by its conceptual model and three general-purpose naturalistic programming languages that can generate executable binary code. Nevertheless, to date, no research efforts have been concentrated on applying the NPP within a software development process. To address this gap, in this article, we propose a naturalistic software development method to test the advantages of the NPP. The method focuses on the analysis and design stages of the software development process and seeks to contribute to closing the gap between the problem and the solution domains. We also present an example of an implementation using Cal-4700, a naturalistic programming language, showing the differences in expressiveness of programming with a traditional programming language, like Python.</p>
					<p><a href="https://lib.jucs.org/article/105637/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/105637/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Feb 2024 16:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Towards a set of metrics for hybrid (quantum/classical) systems maintainability</title>
		    <link>https://lib.jucs.org/article/99348/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(1): 25-48</p>
					<p>DOI: 10.3897/jucs.99348</p>
					<p>Authors: Ana Díaz Muñoz, Moisés Rodríguez Monje, Mario Gerardo Piattini Velthuis</p>
					<p>Abstract: Given the rapid evolution that has taken place in recent years in the software industry, and along with it the emergence of quantum software, there is a need to design an environment for measuring quality metrics for hybrid, classic-quantum software. In order to measure and evaluate the quality of classic software, there are models and standards, among which ISO/IEC 25000 stands out, which proposes a set of quality characteristics such as maintainability. However, there is currently no consensus for the measurement and evaluation of quantum software quality. In this paper we propose a series of adaptations to &ldquo;classic&rdquo; metrics, as well as a set of new measurements for hybrid maintainability. Finally, a first prototype of a measurement tool developed as a SonarQube plugin, capable of measuring these metrics in quantum developments, is also presented.</p>
					<p><a href="https://lib.jucs.org/article/99348/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/99348/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/99348/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Jan 2024 16:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Design and Evaluation using Technology Acceptance Model of an Architecture Conceptualization Framework System based on the ISO/IEC/IEEE 42020</title>
		    <link>https://lib.jucs.org/article/104938/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(12): 1510-1534</p>
					<p>DOI: 10.3897/jucs.104938</p>
					<p>Authors: Valdicélio Santos, Michel S. Soares</p>
					<p>Abstract: Among the difficulties in developing software-intensive systems are the necessity of managing and controlling data that must be held for decades, as well as describing the needs and concerns of a variety of stakeholders. Therefore, one cannot neglect a good Software Engineering practice, which is to develop software-intensive systems based on a solid software architecture. However, the processes related to the software architecture of software-intensive systems are often considered only from a low level of abstraction. A recent architectural Standard, ISO/IEC/IEEE 42020, defines six clauses for the architecture process, among them the Architecture Conceptualization process, which is the subject of this study. Considering that ISO/IEC/IEEE 42020 has only recently been published, given the importance of establishing a well-defined software architecture, and considering the difficulties of understanding an architectural Standard, this work proposes a framework, and then the design and evaluation of a web-based application to support software architects in carrying out the activities and tasks of the Architecture Conceptualization clause based on the framework described. ArchConcept was designed to address the high-level abstraction of the Standard ISO/IEC/IEEE 42020 and can be useful for software architects who want to follow its recommendations and achieve high-quality results in their work of software architecture conceptualization. A qualitative evaluation employing a questionnaire was carried out to obtain information about the perceptions of professionals regarding ArchConcept, according to the Technology Acceptance Model (TAM). As ArchConcept is focused on activities of Architecture Conceptualization, one of the early stages of a software project, the results found could be evidence of the short time dedicated to the initial phases of projects and their consequences.</p>
					<p><a href="https://lib.jucs.org/article/104938/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/104938/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/104938/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Dec 2023 08:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>OntoFoCE and ObE Forensics. Email-traceability supporting tools for digital forensics</title>
		    <link>https://lib.jucs.org/article/97822/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(12): 1482-1509</p>
					<p>DOI: 10.3897/jucs.97822</p>
					<p>Authors: Herminia Beatriz Parra de Gallo, Marcela Vegetti</p>
					<p>Abstract: This paper shows the research conducted to respond to a continuous requirement of justice regarding the application of scientifically supported forensic tools. Considering ontological engineering as the appropriate framework to respond to this requirement, the article presents OntoFoCE (Spanish abbreviation for Ontology for Electronic Mail Forensics), a specific ontology for the forensic analysis of emails. The purpose of this ontology is to help the computer expert in the validation of an email presented as judicial evidence. OntoFoCE is the fundamental component of the ObE Forensics (Ontology-based Email Forensics) tool. Although there are numerous forensic tools to analyze emails, the originality of the one proposed here lies in the implementation of semantic technologies to represent the traceability of the email transmission process. From that point on, it is possible to provide answers to the items of digital evidence subject to the expert examination. These answers make it possible to support these evidence items in the forensic analysis of an email and to guarantee results that are scientifically and technically accepted and valid for justice. Thus, the research question to be answered is: Is it possible to apply ontological engineering as scientific support to design and develop a forensic tool that provides automatic answers to the evidence items subject to expert examination in the forensic analysis of emails?</p>
					<p><a href="https://lib.jucs.org/article/97822/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/97822/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/97822/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Dec 2023 08:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cost-based Virtual Machine Scheduling for Data-as-a-Service</title>
		    <link>https://lib.jucs.org/article/99223/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(12): 1461-1481</p>
					<p>DOI: 10.3897/jucs.99223</p>
					<p>Authors: Ana Cristina Alves de Oliveira, Marco Aurélio Spohn, Christof Fetzer, Le Quoc Do, André Martin</p>
					<p>Abstract: Data-as-a-Service (DaaS) is a branch of cloud computing that supports &ldquo;querying the Web&rdquo;. Due to its ultrahigh scale, it is essential to establish rules when defining resources&rsquo; costs and guidelines for infrastructure investments. Those decisions should prioritize minimizing the incidence of agreement breaches that compromise the performance of cloud services, and optimize resources&rsquo; usage and services&rsquo; cost. This article addresses the cost problem of DaaS by developing a model that optimizes the costs of querying distributed data sources over virtual machines spread across multisite data centers. We have designed and analyzed a cost model for DaaS, besides implementing a scheduling system to perform cost-based VM assignment. To validate our model, we have studied and characterized a real-world DaaS system&rsquo;s network and processing workloads. On average, our cost-based scheduling performs at least twice as well as the traditional round-robin approach. Our model also supports load balancing and infrastructure scalability when combined with an adaptive cost scheme that prioritizes VM allocation within underutilized data centers and avoids sending VMs to data centers that are on the verge of becoming over-utilized.</p>
					<p><a href="https://lib.jucs.org/article/99223/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/99223/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/99223/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Dec 2023 08:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>Perceptual Learning Modules (PLM) in CS1: a Negative Result and a Methodological Warning</title>
		    <link>https://lib.jucs.org/article/96347/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(9): 988-1009</p>
					<p>DOI: 10.3897/jucs.96347</p>
					<p>Authors: Ricardo Caceffo, Jacques Wainer, Guilherme Gama, Islene Garcia, Rodolfo Azevedo</p>
					<p>Abstract: Perceptual Learning Modules (PLMs) are a variation of Perceptual Learning based on multiple-choice questionnaires. There is successful research on the use of PLMs in math and flight training. The possibility of designing and adopting PLMs in Introductory Programming Courses (CS1) is still an open area of study. The goal of this study is to test whether students that received PLM training on recognising segments of programs will perform better at writing programs. Two PLM interventions were administered to students. The first intervention was a nonrandom controlled experiment, in which students opted to answer the PLM questionnaire (N=40), while the control group consisted of students that did not answer it (N=629). The second intervention was a randomized controlled experiment with a placebo, in which students were randomly assigned to perform either the PLM questionnaire (N=51) or a placebo activity (N=51). The different forms of analysis of the first experiment results yielded Cohen&rsquo;s d ranging from 0.23 to 0.34 in favor of the PLM intervention. For the second experiment, the effect size was d = -0.11 against the PLM intervention, but neither result was statistically significant. We believe that the cautious conclusion is that there is a null effect in using a PLM activity as part of a CS1 course. The paper is also of interest because of the methodological decisions and techniques used.</p>
					<p><a href="https://lib.jucs.org/article/96347/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/96347/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/96347/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2023 08:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Naive Fracterm Calculus</title>
		    <link>https://lib.jucs.org/article/87563/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(9): 961-987</p>
					<p>DOI: 10.3897/jucs.87563</p>
					<p>Authors: Jan Bergstra, John V. Tucker</p>
					<p>Abstract: An outline is provided of a new perspective on elementary arithmetic, based on addition, multiplication, subtraction and division. This perspective is informal and unique, and may be considered naive when contrasted with the plurality of algebraic and logical axiomatic formalisations of elementary arithmetic.</p>
					<p><a href="https://lib.jucs.org/article/87563/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/87563/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/87563/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2023 08:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>diffReplication - An Energy-Aware Fault Tolerance Model for Silent Error Detection and Mitigation in Heterogeneous Extreme-scale Computing Environment</title>
		    <link>https://lib.jucs.org/article/94462/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(8): 892-910</p>
					<p>DOI: 10.3897/jucs.94462</p>
					<p>Authors: Longhao Li, Taieb Znati, Rami Melhem</p>
					<p>Abstract: At extreme scale, the frequency of silent errors &ndash; a class of errors that remain undetected by low-level error detection mechanisms &ndash; increases significantly with the computational complexity of the application and the scale of the computing infrastructure. As hardware and software advances are made to usher in the next scientific era of computing, developing new approaches to mitigate the impact of silent errors remains a challenging problem. In this work, we propose an energy-aware fault-tolerance model, referred to as diffReplication, to overcome silent errors. In the proposed model, the main process is associated with one replica that executes at the same rate as the main process, and one diffReplica that executes at a fraction of the main process&#39; execution rate. If the main and its replica reach consensus at the end of a computation phase, the state of the diffReplica is updated and computation is resumed. If the synchronization attempt results in a disagreement, however, the diffReplica increases its execution speed to complete the computation and quickly reach the synchronization barrier. Assuming a single error over any given synchronization interval, majority voting is used to reach consensus and tolerate silent errors. To further enhance its performance, diffReplication is augmented with speculative execution, whereby the main or its fast replica is selected to continue execution without waiting for the diffReplica. The selection process is based on the previous behaviour of the main and its replica. A performance analysis study is carried out to assess the performance of diffReplication, in terms of the energy savings and time-to-completion reduction it achieves. The experiments show that speculative execution reduces the time to completion at the cost of additional energy, and that dynamic decision-making balances energy consumption and time to completion.</p>
					<p><a href="https://lib.jucs.org/article/94462/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94462/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94462/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Aug 2023 18:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Modeling Strategy for the Verification of Context-Oriented Chatbot Conversational Flows via Model Checking</title>
		    <link>https://lib.jucs.org/article/91311/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(7): 805-835</p>
					<p>DOI: 10.3897/jucs.91311</p>
					<p>Authors: Geovana Ramos Sousa Silva, Genaína Nunes Rodrigues, Edna Dias Canedo</p>
					<p>Abstract: Verification of chatbot conversational flows is paramount to capturing and understanding chatbot behavior and predicting problems that would cause the entire flow to be restructured from scratch. The literature on chatbot testing is scarce, and the few works that approach this subject do not focus on verifying the communication sequences in tandem with the functional requirements of the conversational flow itself. However, covering all possible conversational flows of context-oriented chatbots through testing is not feasible in practice, given the many ramifications that would have to be covered by test cases. Alternatively, model checking provides model-based verification in a mathematically precise and unambiguous manner. Moreover, it can anticipate design flaws early in the software design phase that could lead to incompleteness, ambiguities, and inconsistencies. We postulate that finding design flaws in chatbot conversational flows via model checking early in the design phase may overcome quite a few verification gaps that are not feasible to close via current testing techniques for context-oriented chatbot conversational flows. Therefore, in this work, we propose a modeling strategy to design and verify chatbot conversational flows via the Uppaal model checking tool. Our strategy is materialized in the form of templates and a mapping of chatbot elements into Uppaal elements. To evaluate this strategy, we invited a few chatbot developers with different levels of expertise. The feedback from the participants revealed that the strategy is a great ally in the phases of conversational prototyping and design, as well as helping to refine requirements and reveal branching logic that can be reused in the implementation phase.</p>
					<p><a href="https://lib.jucs.org/article/91311/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/91311/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/91311/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jul 2023 16:00:07 +0000</pubDate>
		</item>
	
		<item>
		    <title>VMTools-RA: a Reference Architecture for Software Variability Tools</title>
		    <link>https://lib.jucs.org/article/97113/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(7): 649-690</p>
					<p>DOI: 10.3897/jucs.97113</p>
					<p>Authors: Ana P. Allian, Leandro F. Silva, Edson OliveiraJr, Elisa Y. Nakagawa</p>
					<p>Abstract: Currently, software systems must be appropriately developed to support a degree of variability that accommodates different requirements. To support such development, a diversity of tools has already been designed for variability management (i.e., identification, modeling, evaluation, and realization). However, due to this diversity, there is a lack of consensus on what software variability tools in fact are and even on what functionalities they should provide. Besides that, the building of new tools is still an effort- and time-consuming task. To support their building, we present VMTools-RA, a reference architecture that encompasses knowledge and practice for developing and evolving variability tools. Designed in a systematic way, VMTools-RA was evaluated through a controlled experiment with software developer practitioners and an instantiation of the architecture to implement a software variability tool, named SMartyModeling. The results show VMTools-RA to be feasible, and it can be considered an important contribution to the software variability community and to developers of variability-intensive software systems, who require specific tools developed faster and with less risk, which a reference architecture can provide.</p>
					<p><a href="https://lib.jucs.org/article/97113/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/97113/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/97113/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jul 2023 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cost-Effective Scheduling in Fog Computing: An Environment Based on Modified PROMETHEE Technique</title>
		    <link>https://lib.jucs.org/article/90429/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(4): 397-416</p>
					<p>DOI: 10.3897/jucs.90429</p>
					<p>Authors: Shefali Varshney, Rajinder Sandhu, P. K. Gupta</p>
					<p>Abstract: With the rising use of Internet of Things (IoT)-enabled devices, there is a significant increase in the use of smart applications that provide their responses in real time. This rising demand raises many issues, such as scheduling, cost, and overloading of servers. To overcome these issues, a cost-effective scheduling technique has been proposed for the allocation of smart applications. The aim of this paper is to increase the profit obtained by the Fog environment and to minimize the cost of smart applications at the user end. The proposed framework has been evaluated with the help of a test bed containing four analysis phases and is compared on the basis of five metrics: average allocation time, average profit by the Fog environment, average cost of smart applications, resource utilization, and the number of applications run within a given latency. The proposed framework performs better under all the provided metrics.</p>
					<p><a href="https://lib.jucs.org/article/90429/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/90429/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/90429/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Apr 2023 12:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>CIMLA: A Modular and Modifiable Data Preparation, Organization, and Fusion Infrastructure to Partially Support the Development of Context-aware MMLA Solutions</title>
		    <link>https://lib.jucs.org/article/84558/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(3): 265-297</p>
					<p>DOI: 10.3897/jucs.84558</p>
					<p>Authors: Shashi Kant Shankar, Adolfo Ruiz-Calleja, Luis P. Prieto, María Jesús Rodríguez-Triana, Pankaj Chejara, Sandesh Tripathi</p>
					<p>Abstract: Multimodal Learning Analytics (MMLA) solutions aim to provide a more holistic picture of a learning situation by processing multimodal educational data. Considering the contextual information of a learning situation is known to help in providing more relevant outputs to educational stakeholders. However, most MMLA solutions are still in the prototyping phase while dealing with the different dimensions of authentic MMLA situations that involve multiple cross-disciplinary stakeholders, such as teachers, researchers, and developers. One of the reasons they remain in the prototyping phase of the development lifecycle relates to the challenges that software developers face at different levels in developing context-aware MMLA solutions. In this paper, we identify these requirements and propose a data infrastructure called CIMLA. It includes different data processing components following a standard data processing pipeline and considers contextual information following a defined data structure. It has been evaluated in three authentic MMLA scenarios involving different cross-disciplinary stakeholders, following the Software Architecture Analysis Method. Its fitness was analyzed in each of the three scenarios, and developers were interviewed to assess whether it meets functional and non-functional requirements. Results showed that CIMLA supports modularity in developing context-aware MMLA solutions and that each of its modules can be reused, with required modifications, in the development of other solutions. In the future, the current involvement of a developer in customizing the configuration file to consider contextual information can be investigated.</p>
					<p><a href="https://lib.jucs.org/article/84558/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/84558/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/84558/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Mar 2023 10:30:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>Undergraduate research in software engineering. An experience and evaluation report</title>
		    <link>https://lib.jucs.org/article/95718/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(3): 203-221</p>
					<p>DOI: 10.3897/jucs.95718</p>
					<p>Authors: Gerardo Matturro</p>
					<p>Abstract: The purpose of this paper is to present an undergraduate research experience process model and the evaluation of seven years of its application in an undergraduate research program in software engineering. Undergraduate students who participated in research projects between 2015 and 2022 were surveyed to find out a) their motivations for participating in research projects in software engineering, b) the skills they consider they have acquired or improved by participating in those projects, and c) their perception of benefits and utility for their future work and professional activities. Results reveal that participation in real research projects in software engineering is highly valued by undergraduate students, who perceive benefits in the development of research and soft skills, and for their future professional activity. In addition, these undergraduate research projects and the process followed show that it is feasible to make original contributions to the body of knowledge of software engineering.</p>
					<p><a href="https://lib.jucs.org/article/95718/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/95718/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/95718/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Mar 2023 10:30:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Enhanced Testing Approach for Mobile Applications</title>
		    <link>https://lib.jucs.org/article/86295/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(2): 152-178</p>
					<p>DOI: 10.3897/jucs.86295</p>
					<p>Authors: Amira Samir, Huda Amin Maghawry, Nagwa Badr</p>
					<p>Abstract: Nowadays, an enormous number of mobile applications are continuously being launched to the market. As a result of this rapid process, there is a need to increase the speed of the testing process using enhanced approaches. This research aims to increase the effectiveness of the graphical user interface testing process for mobile applications. This is achieved by proposing an enhanced combinatorial-based metaheuristic approach. The proposed approach aims to maximize statement and branch coverage by applying Cuckoo search for event selection. The approach was compared to monkey, frequency, random and greedy approaches. Experiments were conducted on different mobile applications. Over the same testing time duration, the proposed approach achieved higher coverage than the other approaches, proving its effectiveness in mobile application testing.</p>
					<p><a href="https://lib.jucs.org/article/86295/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/86295/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/86295/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Feb 2023 10:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>Evaluations of Integrated Programming Environment for First-Year Students in Computer Engineering</title>
		    <link>https://lib.jucs.org/article/81329/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(1): 73-97</p>
					<p>DOI: 10.3897/jucs.81329</p>
					<p>Authors: Matias Salinas, Paul Leger, Hiroaki Fukuda, Nicolás Cardozo, Vannessa Duarte, Ismael Figueroa</p>
					<p>Abstract: Many factors influence the problems that currently exist in the learning-teaching process of programming. The use of an Integrated Development Environment (IDE) makes the experience a complicated process because these IDEs focus on professional programmers and not on novice learners. This also affects the classrooms of the university &ldquo;Pontificia Universidad Cat&oacute;lica de Valpara&iacute;so (PUCV)&rdquo; (Chile). The use of professional IDEs negatively affects the learning process of first-year students who face the development of algorithms for the first time. One of the IDEs widely used for teaching programming courses is Code::Blocks, which is a tool for professional developers. Through a heuristic and usability evaluation, we found that Code::Blocks has a complex user interface and a functional overload. Using these two findings, as well as recommendations given during these tests, we highlight the important aspects that an IDE for novice learners should have. Taking into account the previous observations and the state of the art and practice of IDEs, a functional IDE prototype, named Incre-IDLE, was developed. In addition to the Code::Blocks evaluations, this paper reports the results of a heuristic and usability evaluation of the functionalities provided by Incre-IDLE, applied to first-year students at PUCV. These results suggest that Incre-IDLE has a simple interface, is easy to install and use, and does not have functional overload (i.e., it does not require spending a considerable amount of time learning the tool). Concretely, the results show that 66.7% of the students could complete tasks easily and 100% of them found the GUI intuitive. Regarding the GUI, 83.3% considered the application interface &ldquo;very simple&rdquo;, and 66.7% found the text, concepts, and icons &ldquo;very understandable&rdquo;. The students also found the tool &ldquo;motivating&rdquo; (66.7%) or &ldquo;very motivating&rdquo; (33.3%). These results closely match the findings obtained by the heuristic evaluation of Incre-IDLE from the experts: 83.3% of them rated it as &ldquo;useful&rdquo; or &ldquo;very useful&rdquo;, and only 16.7% rated it as &ldquo;useless&rdquo;.</p>
					<p><a href="https://lib.jucs.org/article/81329/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/81329/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/81329/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Jan 2023 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Building an integrated requirements engineering process based on Intelligent Systems and Semantic Reasoning on the basis of a systematic analysis of existing proposals</title>
		    <link>https://lib.jucs.org/article/78776/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(11): 1136-1168</p>
					<p>DOI: 10.3897/jucs.78776</p>
					<p>Authors: Alexandra Corral, Luis E. Sánchez, Leandro Antonelli</p>
					<p>Abstract: Requirements Engineering is one of the fundamental activities in the software development process and is oriented toward what should be produced. One of the development team&rsquo;s most common problems is a lack of shared understanding of the discourse domain and of how to integrate and process excessive information originating from different sources. This may lead to errors of omission and the consequent production of incomplete and inconsistent artifacts, which will have a direct effect on the quality of the software. The use of machine learning techniques helps the development team produce successful software on the basis of the acquisition of knowledge and human experience with which to understand the domain of the application. This paper, therefore, presents a proposal for a new methodological process oriented toward the construction of a vocabulary concerning the application domain. The authors propose to do this by employing Natural Language Processing (NLP), ontologies and heuristics that will lead to the production of a Lexicon that is common to analysts and customers, both of whom will understand the universe of discourse, thus mitigating problems of completeness. This objective has been achieved by carrying out a Systematic Literature Review of the artificial intelligence techniques employed in the requirements engineering process, which led to the discovery that 41.37% use NLP, while 55.71% apply ontologies with semantic reasoners, which help solve the problem of language ambiguity, the structures in specifications or the identification of key concepts with which to establish traceability links. However, the review also showed that the problems regarding the comprehension and completeness of requirements have yet to be resolved.</p>
					<p><a href="https://lib.jucs.org/article/78776/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/78776/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/78776/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Nov 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Development and Evaluation of a Software Product Line for M-Learning Applications</title>
		    <link>https://lib.jucs.org/article/90663/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(10): 1058-1086</p>
					<p>DOI: 10.3897/jucs.90663</p>
					<p>Authors: Venilton FalvoJr, Anderson da Silva Marcolino, Nemésio Freitas Duarte Filho, Edson OliveiraJr, Ellen Francine Barbosa</p>
					<p>Abstract: The popularity of mobile devices across all social classes has motivated the development of mobile learning (m-learning) applications. The existing applications, even though they offer many benefits and facilities in relation to the teaching-learning process, still present problems and challenges, especially regarding development, reuse and architectural standardization. In particular, there is a growing adoption of the Software Product Line (SPL) concept in research that investigates these gaps. This paradigm enables organizations to explore the similarities and variabilities of their products, increasing the reuse of artifacts and, consequently, reducing costs and development time. In this context, we discuss how systematic reuse can improve the development of solutions in the m-learning domain. Therefore, this work presents the design, development and experimental evaluation of M-SPLearning, an SPL created to enable the systematic production of m-learning applications. Specifically, the conception of M-SPLearning ranges from the initial study for an effective domain analysis to the implementation and evaluation of its functional version. In this regard, the products have been experimentally evaluated by industry software developers, providing statistical evidence that the use of our SPL can speed up the time-to-market of m-learning applications, in addition to reducing their respective number of faults.</p>
					<p><a href="https://lib.jucs.org/article/90663/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/90663/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/90663/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Oct 2022 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Towards more trustworthy predictions: A hybrid evidential movie recommender system</title>
		    <link>https://lib.jucs.org/article/79777/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(10): 1003-1029</p>
					<p>DOI: 10.3897/jucs.79777</p>
					<p>Authors: Raoua Abdelkhalek, Imen Boukhris, Zied Elouedi</p>
					<p>Abstract: Recommender Systems (RSs) are considered popular tools that have revolutionized e-commerce and digital marketing. Their main goal is predicting users&rsquo; future preferences and providing accessible and personalized recommendations. However, uncertainty can spread at any level throughout the recommendation process, which may affect the results. In fact, the ratings given by users are often unreliable. The final predictions themselves may also be pervaded with uncertainty and doubt. Obviously, the reliability of the predictions cannot be fully certain and trustworthy. For the system to be effective, recommendations must inspire trust in the system and be reliable and credible. The user may speculate about the uncertainty behind a given recommendation, and may prefer a reliable recommendation offering a global overview of their preferences rather than an inappropriate one that contradicts their activities and objectives. While such imperfection cannot be ignored, traditional RSs are rarely able to deal with the uncertainty spreading around the prediction process, which may affect the credibility, transparency and trustworthiness of the RS. Thus, in this paper, we opt for the uncertain framework of belief function theory (BFT), which allows us to represent, quantify and manage imperfect evidence. By using the BFT, users&rsquo; preferences and the interactions between neighbors can be represented under uncertainty. Evidence from different information sources can then be combined, leading to more reliable results. The proposed approach is a hybrid evidential movie RS that uses different data sources and delivers a personalized user interface allowing a global overview of possible future preferences. This representation would increase users&rsquo; confidence in the system as well as their satisfaction. 
Experiments are performed on MovieLens data, enriched with additional features provided by the Internet Movie Database (IMDb) and Rotten Tomatoes. The new approach achieves promising results compared to traditional approaches in terms of MAE, NMAE and RMSE. It also reaches interesting Precision, Recall and F-measure values of 0.782, 0.792 and 0.787, respectively.</p>
					<p><a href="https://lib.jucs.org/article/79777/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/79777/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/79777/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Oct 2022 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Disassemble Byte Sequence Using Graph Attention Network</title>
		    <link>https://lib.jucs.org/article/76528/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(7): 758-775</p>
					<p>DOI: 10.3897/jucs.76528</p>
					<p>Authors: Jing Qiu, Feng Dong, Guanglu Sun</p>
					<p>Abstract: Disassembly is the basis of static analysis of binary code and is used in malicious code detection, vulnerability mining, software optimization, etc. Disassembly of arbitrary suspicious code blocks (e.g., suspicious traffic packets intercepted from the network) is a difficult task. Traditional disassembly methods require manual specification of the starting address and cannot automate the disassembly of arbitrary code blocks. In this paper, we propose a disassembly method based on a code extension selection network, combining the traditional linear sweep and recursive traversal methods. First, each byte of a code block is used as a disassembly start address, and all disassembly results (control flow graphs) are combined into a single flow graph. Then a graph attention network is trained to pick the correct subgraph (control flow graph) as the final result. In the experiments, compiler-generated executable files, executable files built from hand-written assembly code, data files, and byte sequences intercepted from code segments were tested, and the disassembly accuracy was 93%, showing that the method can effectively distinguish code from data.</p>
					<p><a href="https://lib.jucs.org/article/76528/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/76528/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/76528/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Jul 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Neuro-Fuzzy Hybridized Approach for Software Reliability Prediction</title>
		    <link>https://lib.jucs.org/article/80537/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(7): 708-732</p>
					<p>DOI: 10.3897/jucs.80537</p>
					<p>Authors: Ajay Kumar</p>
					<p>Abstract: Context: Reliability prediction is critical for software engineers in the current challenging scenario of increased demand for high-quality software. Even though various software reliability prediction models have been established so far, there is always a need for a more accurate model in today&#39;s competitive environment for producing high-quality software. Objective: This paper proposes a neuro-fuzzy hybridized method that integrates the self-organizing map (SOM) and fuzzy time series (FTS) forecasting for the reliability prediction of a software system. Methodology: In the proposed approach, SOM, a well-known unsupervised clustering algorithm, is incorporated with FTS forecasting to develop a hybrid model for software reliability prediction. To validate the proposed approach, an experimental study is done by applying the proposed neuro-fuzzy method to a software failure dataset. In addition, a comparative study was conducted to evaluate the performance of the proposed method by comparing it with some of the existing FTS models. Results: Experimental outcomes show that the proposed approach performs better than the existing FTS models. Conclusion: The results show that the proposed approach can be used efficiently in the software industry for software reliability prediction.</p>
					<p><a href="https://lib.jucs.org/article/80537/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/80537/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/80537/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Jul 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Pattern Language as Support to Software Measurement Planning for Statistical Process Control</title>
		    <link>https://lib.jucs.org/article/68237/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(7): 671-707</p>
					<p>DOI: 10.3897/jucs.68237</p>
					<p>Authors: Daisy Ferreira Brito, Monalessa Perini Barcellos, Gleison Santos</p>
					<p>Abstract: The growing interest of organizations in improving their software processes has led them to aim at achieving high maturity, where statistical process control (SPC) is required. One of the challenges involved in performing SPC is selecting measures suitable for it. Measures used in SPC can be found in the literature and can be reused by organizations, but the information is dispersed, not favoring reuse. From measures suggested in the literature or used in practical experiences, it is possible to identify patterns that can be used to support organizations in measurement planning. Patterns can be organized as pattern languages, which favor reuse and contribute towards increasing productivity. In this work, from the results of a systematic mapping and a survey, we identified measurement planning patterns in the Goal-Question-Metric format and organized them in a Measurement Planning Pattern Language (MePPLa). MePPLa was created by following a Systematic Approach for creating Measurement Planning Pattern Languages (SAMPPLa), also defined in this work. This paper presents SAMPPLa, MePPLa and the main results of a study carried out to evaluate MePPLa. The results showed that using MePPLa is viable and useful to aid in software measurement planning. Mainly, MePPLa contributes to increasing productivity when creating a measurement plan and the quality of the resulting measurement plan.</p>
					<p><a href="https://lib.jucs.org/article/68237/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/68237/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/68237/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Jul 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>MODELFY: A Model-driven Solution for Decision Making based on Fuzzy Information</title>
		    <link>https://lib.jucs.org/article/76030/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(5): 445-474</p>
					<p>DOI: 10.3897/jucs.76030</p>
					<p>Authors: María Castañeda, Mercedes G. Merayo, Juan Boubeta-Puig, Iván Calvo</p>
					<p>Abstract: There exist areas, such as disease prevention or inclement weather protocols, in which the analysis of information based on strict protocols requires a high level of rigor and security. In such situations, it would be desirable to apply formal methodologies that provide these features. In this scope, a formalism, the fuzzy automaton, has recently been proposed that captures two aspects relevant to fuzzy information analysis: imprecision and uncertainty. However, the models should be designed by domain experts, who have the required knowledge for the design of the processes but do not have the necessary technical knowledge. To address this limitation, this paper proposes MODELFY, a novel model-driven solution for designing a decision-making process based on fuzzy automata that allows users to abstract from technical complexities. With this goal in mind, we have developed a framework for fuzzy automaton model design based on a Domain-Specific Modeling Language (DSML) and a graphical editor. To improve the interoperability and functionality of this framework, it also includes a model-to-text transformation that translates the models designed using the graphical editor into a format that can be used by a tool for data analysis. The practical value of this proposal is also evaluated through a non-trivial medical protocol for detecting potential heart problems. The results confirm that MODELFY is useful for defining such a protocol in a user-friendly and rigorous manner, bringing fuzzy automata closer to domain experts.</p>
					<p><a href="https://lib.jucs.org/article/76030/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/76030/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/76030/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 May 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Scrum Watch: a tool for monitoring the performance of Scrum-based work teams</title>
		    <link>https://lib.jucs.org/article/67593/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(1): 98-117</p>
					<p>DOI: 10.3897/jucs.67593</p>
					<p>Authors: Florencia Vega, Guillermo Rodríguez, Fabio Rocha, Rodrigo Pereira dos Santos</p>
					<p>Abstract: Agile methods propose an approach for developing software based on an iterative and incremental life cycle model, in which needs and solutions evolve through collaboration between multi-functional, self-organized teams. As such, agile practices in work teams are gaining much momentum. To meet the demanding level of projects, agile software development also has to keep up with several challenges. In this context, the software industry has chosen to use several tools to ease development and communication between members of different teams. However, these tools generate overwhelming volumes of data that hamper decision-making by project managers. To address this issue, we present Scrum Watch, a tool-based approach that focuses on generating, through cloud-based technologies, graphic elements and reports that provide project managers with information to support decision making. Results obtained from an undergraduate Systems Engineering course through a capstone project confirm the feasibility of the proposed approach, which exploits the benefits of the availability and visualization of process and product metrics.</p>
					<p><a href="https://lib.jucs.org/article/67593/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/67593/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/67593/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jan 2022 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Middleware for the Internet of Things: a systematic literature review</title>
		    <link>https://lib.jucs.org/article/71693/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(1): 54-79</p>
					<p>DOI: 10.3897/jucs.71693</p>
					<p>Authors: Rodolfo Medeiros, Sílvio Fernandes, Paulo G. G Queiroz</p>
					<p>Abstract: The Internet of Things (IoT) emerged to describe a large-scale network of connected things that offers services to a large number of applications in different environments and domains. Middleware is software that seeks to facilitate the management of and communication among all these things, providing the functionalities necessary to manage things, discover and compose services, and perform communication. For this reason, several middleware solutions for IoT have been proposed. In this article, we conducted a systematic literature review to bring together middleware solutions for IoT, identifying the requirements and communication protocols used. In addition, we present some gaps and directions for future research in the development of IoT middleware.</p>
					<p><a href="https://lib.jucs.org/article/71693/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/71693/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/71693/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jan 2022 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>SoREn, How Dynamic Software Update Tools Can Help Cybersecurity Systems to Improve Monitoring and Actions</title>
		    <link>https://lib.jucs.org/article/66857/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(1): 27-53</p>
					<p>DOI: 10.3897/jucs.66857</p>
					<p>Authors: Sébastien Martinez, Christophe Gransart, Olivier Stienne, Virginie Deniau, Philippe Bon</p>
					<p>Abstract: Because stopping a service to apply updates raises issues, Dynamic Software Updating studies the application of updates to programs without disrupting the services they provide. This is achieved using specific mechanisms that perform updating tasks such as the modification of the program state. To achieve transparency, Dynamic Software Updating systems use pre-selected and pre-configured mechanisms. Developers provide patches that are transparently converted to dynamic updates. The cost of such transparency is often that applied patches cannot modify the general semantics of the updated program. Allowing dynamic modification of the general semantics of a running program is rarely considered. In the context of protecting communications between moving vehicles and uncontrolled infrastructure, SoREn (Security REconfigurable Engine) is designed to be dynamically reconfigurable. Its semantics can be transparently modified at runtime to change the security policy it enforces. Administrators can supply new policies to trigger a reconfiguration without developing new components. This paper details and discusses the design of SoREn, its meta-model linked to cybersecurity business concepts, and its automatic reconfiguration calculator allowing transparent application of reconfigurations.</p>
					<p><a href="https://lib.jucs.org/article/66857/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/66857/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/66857/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jan 2022 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Leveraging multifaceted proximity measures among developers in predicting future collaborations to improve the social capital of software projects</title>
		    <link>https://lib.jucs.org/article/76602/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(11): 1240-1271</p>
					<p>DOI: 10.3897/jucs.76602</p>
					<p>Authors: Amit Kumar, Sonali Agarwal</p>
					<p>Abstract: Social capital is an asset earned by people through their social connections. One of the motivations among developers to contribute to open source development and maintenance tasks is to earn social capital. Recent studies suggest that the social capital of a project has an impact on the sustained participation of developers in open source software (OSS). One way to improve the social capital of a project is to help developers connect with their peers. However, to the best of our knowledge, there is no prior research which attempts to predict future collaborations among developers and establish the significance of these collaborations for improving social capital at the project level. To address this research gap, in this paper, we model the past collaborations among developers on the version control system (VCS) and issue tracking system (ITS) as homogeneous and heterogeneous developer social networks (DSN). Along with novel path count based features, defined on the proposed heterogeneous DSN, multifaceted proximity features are used to generate a feature set for machine learning classifiers. Our experiments performed on 5 popular open source projects (Spark, Kafka, Flink, WildFly, Hibernate) indicate that the proposed approach can predict future collaborations among developers on both platforms, i.e. VCS as well as ITS, with significant accuracy (AUROC up to 0.85 and 0.9 for VCS and ITS respectively). A generic metric, recall of gain in social capital, is proposed to investigate the efficacy of these predicted collaborations in improving the social capital of the project. We also concretised this metric on various measures of social capital and found that collaborations predicted by our approach have significant potential to improve social capital at the project level (e.g. recall of gain in cohesion index up to 0.98 and recall of gain in average godfather index up to 0.99 for VCS). We also showed that the structure of the collaboration network has an impact on the accuracy and usefulness of predicted collaborations. Since past research suggests that many newcomers abandon open source projects due to social barriers they face after joining, our research outcomes can be used to build recommendation systems which might help retain such developers by improving their social ties based on similar skills/interests.</p>
					<p><a href="https://lib.jucs.org/article/76602/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/76602/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/76602/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Nov 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Understanding the Impact of Development Efforts in Code Quality</title>
		    <link>https://lib.jucs.org/article/72475/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(10): 1096-1127</p>
					<p>DOI: 10.3897/jucs.72475</p>
					<p>Authors: Ricardo Perez-Castillo, Mario Piattini</p>
					<p>Abstract: Today, there is no company that does not attempt to control or assure software quality to a greater or lesser extent. Software quality has been mainly studied from the perspectives of the software product and the software process. However, there is no thorough research about how code quality is affected by the context of software development projects. This study analyses how the evolution of the development effort (i.e., the number of developers and their contributions) influences code quality (i.e., the number of bugs, code smells, cloning, etc.). This paper presents a multiple case study that analyses 13 open-source projects from GitHub and SonarCloud, and retrieves more than 95,000 commits and more than 25,000 quality measures. The insight is that more developers or a higher number of commits do not necessarily imply worse quality levels. After applying a clustering algorithm, an inverse correlation is detected in some cases where specific efforts were made to improve code quality. The size of commits and the relative weight of developers in their teams might also affect measures like complexity or cloning. Project managers can therefore understand the mentioned relationships and consequently make better decisions based on the information retrieved from code repositories.</p>
					<p><a href="https://lib.jucs.org/article/72475/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/72475/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/72475/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Oct 2021 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Continuations and Aspects to Tame Callback Hell on the Web</title>
		    <link>https://lib.jucs.org/article/72205/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(9): 955-978</p>
					<p>DOI: 10.3897/jucs.72205</p>
					<p>Authors: Paul Leger, Hiroaki Fukuda, Ismael Figueroa</p>
					<p>Abstract: JavaScript is one of the main programming languages to develop highly rich responsive and interactive Web applications. In these kinds of applications, the use of asynchronous operations that execute callbacks is crucial. However, the dependency among nested callbacks, known as callback hell, can make it difficult to understand and maintain them, which will eventually mix concerns. Unfortunately, current solutions for JavaScript do not fully address the aforementioned issue. This paper presents Sync/cc, a JavaScript package that works on modern browsers. This package is a proof-of-concept that uses continuations and aspects that allow developers to write event handlers that need nested callbacks in a synchronous style, preventing callback hell. Unlike current solutions, Sync/cc is modular, succinct, and customizable because it does not require ad-hoc and scattered constructs, code refactoring, or adding ad-hoc implementations such as state machines. In practice, our proposal uses a) continuations to only suspend the current handler execution until the asynchronous operation is resolved, and b) aspects to apply continuations in a non-intrusive way. We test Sync/cc with a management information system that administers courses at a university in Chile.</p>
					<p><a href="https://lib.jucs.org/article/72205/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/72205/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/72205/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Sep 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Compiler and Language Support for Designing Mixed-Criticality Applications</title>
		    <link>https://lib.jucs.org/article/71831/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(8): 894-911</p>
					<p>DOI: 10.3897/jucs.71831</p>
					<p>Authors: Nermin Kajtazovic, Peter Hödl, Leo Happ Botler</p>
					<p>Abstract: Coexistence of software components and functions of different criticality in a single computing platform has challenged the safety community for the past two decades. Despite the efforts made so far, dealing with mixed-criticality has still left some room for improvement. One particular concern is that partitioning of hardware and software resources with regard to criticality (safety related, non-safety related) has direct implications on how safety measures need to be realised. For example, a self-test that must meet a certain diagnostic coverage for the microcontroller core by inspecting its instructions needs to cover only those instructions which are able to affect a safety function. Available software mechanisms and tools are, to a certain extent, still unable to deal with such a fine-grained selection of resources. In this work, we introduce a compiler extension and language support which enable accurate selection of data based on their criticality. The compiler extension serves to establish detailed traceability between the software code and its representation in runtime memory. With the language support, individual data elements can be classified based on the desired safety integrity level. As a result, safety measures that operate on data (e.g. the Abraham test for SRAM) can achieve better coverage. The method has been evaluated and applied to industrial safety controllers. We provide relevant performance figures and discuss possible applications of the method in other fields.</p>
					<p><a href="https://lib.jucs.org/article/71831/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/71831/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/71831/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Aug 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Lean integration of IT security and data privacy governance aspects into product development in agile organizations</title>
		    <link>https://lib.jucs.org/article/71770/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(8): 868-893</p>
					<p>DOI: 10.3897/jucs.71770</p>
					<p>Authors: Alexander Poth, Mario Kottke, Kerstin Middelhauve, Torsten Mahr, Andreas Riel</p>
					<p>Abstract: This article deals with the design of a product development-specific framework to support lean and adequate governance. This framework is based on layers of product-specific standards and regulations. The layers can be merged into a specific set to address the demands of a product to fit the state-of-the-art requirements of its domain. For the product domain, specific layers are presented with examples from IT security and data privacy for the software development phase. The approach is generic and can be extended to other domains like finance services or embedded products and their life-cycle phases.</p>
					<p><a href="https://lib.jucs.org/article/71770/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/71770/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/71770/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Aug 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cybersecurity Verification and Validation Testing in Automotive</title>
		    <link>https://lib.jucs.org/article/71833/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(8): 850-867</p>
					<p>DOI: 10.3897/jucs.71833</p>
					<p>Authors: Damjan Ekert, Jürgen Dobaj, Alen Salamun</p>
					<p>Abstract: The new generations of cars have a number of ECUs (Electronic Control Units) which are connected to a central gateway and need to pass cybersecurity integration tests to fulfil the homologation requirements of cars. Cars usually have a gateway server (a few have additional domain servers) running Linux and a large number of ECUs performing real-time control of actuators (ESP, EPS, ABS, etc.; usually multicore embedded controllers), connected by a real-time automotive-specific bus (CAN-FD) to the domain controller or gateway server. The norms (SAE J3061, ISO 21434) require cybersecurity-related verification and validation. For the verification, car manufacturers use a network test suite which runs more than 2000 test cases that have to be passed for homologation. These norms have an impact on how the car communication infrastructure is tested and which cybersecurity attack patterns are checked before a road release of an ECU/car. This paper describes typical verification and validation approaches in modern vehicles and how such test cases are derived and developed.</p>
					<p><a href="https://lib.jucs.org/article/71833/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/71833/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/71833/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Aug 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cybersecurity Threat Analysis, Risk Assessment and Design Patterns for Automotive Networked Embedded Systems: A Case Study</title>
		    <link>https://lib.jucs.org/article/72367/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(8): 830-849</p>
					<p>DOI: 10.3897/jucs.72367</p>
					<p>Authors: Jürgen Dobaj, Damjan Ekert, Jakub Stolfa, Svatopluk Stolfa, Georg Macher, Richard Messnarz</p>
					<p>Abstract: Cybersecurity has become a crucial challenge in the automotive sector. At the current stage, the framework described by the ISO/SAE 21434 is insufficient to derive concrete methods for the design of secure automotive networked embedded systems on the supplier level. This article describes a case study with actionable steps for designing secure systems and systematically eliciting traceable cybersecurity requirements to address this gap. The case study is aligned with the ISO/SAE 21434 standard and can provide the basis for integrating cybersecurity engineering into company-specific processes and practice specifications.</p>
					<p><a href="https://lib.jucs.org/article/72367/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/72367/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/72367/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Aug 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>ODD description methods for automated driving vehicle and verifiability for safety</title>
		    <link>https://lib.jucs.org/article/72333/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(8): 796-810</p>
					<p>DOI: 10.3897/jucs.72333</p>
					<p>Authors: Masao Ito</p>
					<p>Abstract: There is no standard method for describing the Operational Design Domain (ODD) of automated driving vehicles. The operating domain comprises many elements, including the external environment, and it is necessary to connect them with the internal state of the automated driving system. Its content ultimately requires the user&#39;s understanding. This paper summarises ODD description methods from the perspective of safety and also considers consistency with standards and guidelines.</p>
					<p><a href="https://lib.jucs.org/article/72333/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/72333/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/72333/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Aug 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Recent Advances in Cybersecurity and Safety Architectures in Automotive, IT, and Connected Services</title>
		    <link>https://lib.jucs.org/article/72072/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(8): 793-795</p>
					<p>DOI: 10.3897/jucs.72072</p>
					<p>Authors: Richard Messnarz, Ricardo Colomo-Palacios, Georg Macher, Andreas Riel, Miklos Biro</p>
					<p>Abstract: This is a special issue in cooperation with EuroSPI (www.eurospi.net). EuroSPI represents a large international network of renowned experts and an annual European conference series running successfully since its foundation in 1994. An international functional safety workshop has been held since 2013, extended to functional safety and cybersecurity from 2016 onwards, to which leading European and Asian industry and research have been actively contributing. High-quality, original papers about best practices for implementing functional safety and cybersecurity in automotive, IT, and connected services have been selected for this special issue. They provide insights into current state-of-the-art implementations in automotive safety and cybersecurity, IT security, and safety in future highly autonomous self-learning vehicles.</p>
					<p><a href="https://lib.jucs.org/article/72072/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/72072/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/72072/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Editorial</category>
		    <pubDate>Sat, 28 Aug 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Approach for Testing False Data Injection Attack on Data Dependent Industrial Devices</title>
		    <link>https://lib.jucs.org/article/70326/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(7): 774-792</p>
					<p>DOI: 10.3897/jucs.70326</p>
					<p>Authors: Mathieu Briland, Fabrice Bouquet</p>
					<p>Abstract: False data injection is an attack in which an attacker injects fabricated data into a system with the objective of changing the behaviour and decision-making of the system. Many industrial data-based devices are vulnerable to such attacks. This work presents an approach for testing False Data Injection Attacks that uses a Domain-Specific Language to generate altered data with two objectives: to provide sophisticated attack scenarios that increase the resilience of vulnerable systems against False Data Injection Attacks, and to train detection tools.</p>
					<p><a href="https://lib.jucs.org/article/70326/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/70326/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/70326/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jul 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Model-Driven Engineering for End-Users in the Loop in Smart Ambient Systems</title>
		    <link>https://lib.jucs.org/article/70515/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(7): 755-773</p>
					<p>DOI: 10.3897/jucs.70515</p>
					<p>Authors: Sylvie Trouilhet, Jean-Paul Arcangeli, Jean-Michel Bruel, Maroun Koussaifi</p>
					<p>Abstract: At the heart of cyber-physical and ambient systems, the user should permanently benefit from applications adapted to the situation and her/his needs. To do this, she/he must be able to configure her/his software environment and be supported as much as possible in that task. To this end, an intelligent &ldquo;engine&rdquo; assembles software components that are present in the ambient environment at the time and makes unanticipated applications emerge. The problem is to put the user &ldquo;in the loop&rdquo;, i.e., provide adapted and intelligible descriptions of the emerging applications, and present them so that the user can accept, modify or reject them. Besides, user feedback must be collected to feed the engine&rsquo;s learning process. Our approach relies on Model-Driven Engineering (MDE). However, differently from the regular use of MDE tools and techniques by engineers to develop software and generate code, our focus is on end-users. Models of component assemblies are represented and made editable for them. Based on a metamodel that supports modeling and description of component-based applications, a user interface provides multi-faceted representations of the emerging applications and captures user feedback. Our solution relies on several domain-specific languages and a transformation process, based on the established MDE tools (Gemoc studio, Eclipse Modeling Framework, EcoreTools, Sirius, Acceleo). It works in conjunction with the intelligent engine that builds the emerging applications and to which it provides learning data.</p>
					<p><a href="https://lib.jucs.org/article/70515/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/70515/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/70515/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jul 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Assembling the Web of Things and Microservices for the Management of Cyber-Physical Systems</title>
		    <link>https://lib.jucs.org/article/70325/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(7): 734-754</p>
					<p>DOI: 10.3897/jucs.70325</p>
					<p>Authors: Manel Mena, Javier Criado, Luis Iribarne, Antonio Corral</p>
					<p>Abstract: Cyber-Physical Systems (CPS) and Internet of Things (IoT) devices are handled by numerous different protocols. The management and connection to those devices tend to create usability and integrability issues. This brings about the need for a solution capable of facilitating the communication between different platforms and devices. The Web of Things (WoT) describes interfaces and interaction patterns among things, thereby abstracting itself from the underlying protocols used to manage those things and their implementation strategies. This paper describes the concept of Digital Dice, an abstraction of IoT devices and CPS capable of leveraging the advantages of microservices architectures and inspired by the concept of Digital Twins. A Digital Dice is a servient system of the WoT domain that represents a device by the features of the device, hence different WoT description models result in different microservices related to the particular thing. The paper explores the definition of Digital Dices and the conversion between WoT Thing Description Models and Digital Dices and the architecture that sustains the system.</p>
					<p><a href="https://lib.jucs.org/article/70325/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/70325/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/70325/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jul 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Formal Model for Configurable Business Process with Optimal Cloud Resource Allocation</title>
		    <link>https://lib.jucs.org/article/70978/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(7): 693-713</p>
					<p>DOI: 10.3897/jucs.70978</p>
					<p>Authors: Abderrahim Ait Wakrime, Souha Boubaker, Slim Kallel, Emna Guermazi, Walid Gaaloul</p>
					<p>Abstract: In today&rsquo;s competitive business environments, organizations increasingly need to model and deploy flexible and cost-effective business processes. In this context, configurable process models are used to offer flexibility by representing process variants in a generic manner. Hence, the behavior of similar variants is grouped in a single model holding configurable elements. Such elements are then customized and configured depending on specific needs. However, the decision to configure an element may be incorrect, leading to critical behavioral errors. Recently, process configuration has been extended to include Cloud resource allocation, to meet the need for business scalability by allowing access to on-demand IT resources. In this work, we propose a formal model based on a propositional satisfiability formula that allows finding correct element configurations, including resource allocation ones. In addition, we propose to select optimal configurations based on Cloud resource cost. This approach provides designers with correct and cost-effective configuration decisions.</p>
					<p><a href="https://lib.jucs.org/article/70978/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/70978/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/70978/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jul 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Enhancing GDPR compliance through data sensitivity and data hiding tools</title>
		    <link>https://lib.jucs.org/article/70369/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(7): 650-666</p>
					<p>DOI: 10.3897/jucs.70369</p>
					<p>Authors: Xabier Larrucea, Micha Moffie, Dan Mor</p>
					<p>Abstract: Since the emergence of the GDPR, several industries and sectors have been deploying informatics solutions to fulfil these rules. The health sector is considered a critical sector within Industry 4.0 because it manages sensitive data, and National Health Services are responsible for managing patients&rsquo; data. European NHS are converging toward a connected system allowing the exchange of sensitive information across different countries. This paper defines and implements a set of tools that extend the reference architectural model Industry 4.0 for the healthcare sector and are used to enhance GDPR compliance. These tools deal with data sensitivity and data hiding. A case study illustrates the use of these tools and how they are integrated with the reference architectural model.</p>
					<p><a href="https://lib.jucs.org/article/70369/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/70369/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/70369/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jul 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Advances and Challenges for Model and Data Engineering</title>
		    <link>https://lib.jucs.org/article/70972/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(7): 646-649</p>
					<p>DOI: 10.3897/jucs.70972</p>
					<p>Authors: Christian Attiogbé, Flavio Ferrarotti, Sofian Maabout</p>
					<p>Abstract: Following the stimulating discussions in the workshops held during the 9th International Conference on Model and Data Engineering (MEDI 2019), we proposed to edit a special issue compiling the fruitful research resulting from those discussions. This special issue on current research in model and data engineering of the Journal of Universal Computer Science is the outcome of that proposal. As such, it contains thoroughly revised and significantly extended versions of key papers discussed at MEDI 2019 workshops. The main objective of MEDI is to provide a forum for the dissemination of research accomplishments and to promote the interaction and collaboration between the models and data research communities. MEDI provides an international platform for the presentation of research on models and data theory, the development of advanced technologies related to models and data, and their advanced applications. This international scientific event, initiated by researchers from Euro-Mediterranean countries in 2011, also aims at promoting the creation of north-south scientific networks, projects and faculty/student exchanges. The following seven accepted papers nicely reflect the wide range of topics covered by MEDI conferences. In their paper &ldquo;Enhancing GDPR Compliance Through Data Sensitivity and Data Hiding Tools&rdquo;, Xabier Larrucea, Micha Moffie and Dan Mor consider the problem of fulfilling the rules set by the General Data Protection Regulation (GDPR) of the EU within the framework of the reference architectural model industry 4.0 for the healthcare sector. This is challenging due to the highly sensitive data managed by this sector and the need to share this data between different national healthcare providers within the EU. The authors propose and implement a series of valuable tools to enhance security and privacy in this context as well as compliance with the GDPR. 
They also illustrate through a case study the use of the proposed tools for sharing health records and their integration within the reference framework. In their paper &ldquo;BSO-MV: An Optimized Multiview Clustering Approach for Items Recommendation in Social Networks&rdquo;, Lamia Berkani, Lylia Betit and Louiza Belarif present a new approach to improve the accuracy and coverage of clustering-based recommendation systems for social networks. The approach is based on improving the results of multiview clustering by combining it with a bees swarm optimization algorithm. Through extensive experimentation with two real-world datasets, they are able to demonstrate the effectiveness of the proposed approach to significantly improve accuracy, outperforming other clustering-based approaches. In their paper &ldquo;A Formal Model for Configurable Business Process with Optimal Cloud Resource Allocation&rdquo;, Abderrahim Ait Wakrime, Souha Boubaker, Slim Kallel, Emna Guermazi and Walid Gaaloul propose a formal approach to analyse and verify configurable business process models as well as to optimize the cost of their implementation in the Cloud. The mechanism consists of transforming the problem into an equivalent Boolean satisfiability problem (SAT), which is then fed to a solver. This transformation is done by means of translation rules from configurable business processes to SAT. This model formalizes the different configurable process behaviors including control-flow and cloud resource allocations, enabling the derivation of correct configuration variants. Weighted partial SAT formulae are integrated in the model in order to optimize the global cloud resource allocation cost. In their paper &ldquo;Towards a Semantic Graph-based Recommender System: A Case Study of Cultural Heritage&rdquo;, Sara Qassimi and El Hassan Abdelwahed present a semantic graph-based recommender system of cultural heritage places. 
Their approach consists of first constructing an emergent description that semantically augments the information about the places of interest and then modelling through graphs the semantic relationships between similar cultural heritage places and their associated tags. Note that the unsupervised nature of folksonomy&rsquo;s tags semantically weakens the description of resources, which in turn hinders their indexing and decreases the quality of their classification and clustering. The semantic augmentation produced by the proposed method in the case study of cultural heritage places in Marrakesh city proves to be an effective tool to fight information overload and to produce better recommendations in this context. As such, the paper presents a valuable contribution that can be used to improve the quality of recommender systems in general. In their paper &ldquo;Assembling the Web of Things and Microservices for the Management of Cyber-Physical Systems&rdquo;, Manel Mena, Javier Criado, Luis Iribarne and Antonio Corral face the challenge of facilitating communication between the diverse devices and protocols used by Cyber-Physical Systems (CPS) and the Internet of Things (IoT). They propose an approach based on the concept of digital dice (an abstraction of various objects). The digital dice builds on the Web of Things standard. It is based on microservices and capable of handling the interaction and virtualization of IoT devices. This work introduces a technique to build, transform and compose digital dices from descriptions of &ldquo;things&rdquo;. A full transformation flow is presented and a case study is used to illustrate its implementation. 
The proposal is shown to be effective and flexible, improving the state of the art. In their paper &ldquo;Model-Driven Engineering for End-Users in the Loop in Smart Ambient Systems&rdquo;, Sylvie Trouilhet, Jean-Paul Arcangeli, Jean-Michel Bruel and Maroun Koussaifi present a Model-Driven Engineering (MDE) approach to involve the user in the process of constructing, at run time, component-based applications adapted to a situation and user needs, in the context of ambient systems. The proposed solution relies on several domain-specific languages and a transformation process, based on established MDE tools (Gemoc Studio, Eclipse Modeling Framework, EcoreTools, Sirius and Acceleo). In this context, the authors describe an innovative way of reinforcing the place of the user in the engineering loop. The authors propose an editor that allows the end user to be aware of the emerging applications resulting from this process, to understand their function and use, and to modify them if desired. From these actions, feedback data are extracted to improve the process. In their paper &ldquo;An Approach for Testing False Data Injection Attack on Data Dependent Industrial Devices&rdquo;, Mathieu Briland and Fabrice Bouquet present a domain-specific language (DSL) for generating test data for IoT devices/environments. The DSL is proposed for testing and simulating false data injection attacks (FDIA). First, the paper outlines a generic approach for FDIA and presents a list of possible sensor types and a categorization schema for data obtained from sensors. Then, the application of the DSL is illustrated using two examples: a simple one altering the data obtained from a temperature sensor and a more complex one concurrently altering the data obtained from three particle sensors. 
The authors show that their approach works well in the case study of the Flowbird parking meter system and discuss how it can be adapted to different application domains. We are grateful to all authors of journal articles in this issue, who contributed to a fine collection of research in model and data engineering. We would like to express our greatest thanks to all reviewers, who put in a lot of time reading the articles and making substantial suggestions for improvement, which ultimately led to the high quality of this issue. We also would like to thank the J.UCS evaluation committee for the opportunity to publish this collection of research articles as a special issue of the Journal of Universal Computer Science, and in particular the publishing managers Dana Kaiser and Johanna Zeisberg for their tireless assistance during the whole process. Last but not least, we would like to thank our host institutions, the University of Nantes and the Software Competence Center Hagenberg (SCCH), for their support and sponsoring of this special issue. In particular, Prof. Yamine Ait-Ameur and his host institute IRIT/INP-ENSEEIHT have collaborated significantly on this special issue in the framework of the COMET scientific partnership agreement with SCCH, and have also supported the MEDI conference from which it originated. Christian Attiogb&eacute;, Flavio Ferrarotti and Sofian Maabout (July, 2021)</p>
					<p><a href="https://lib.jucs.org/article/70972/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/70972/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/70972/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Editorial</category>
		    <pubDate>Wed, 28 Jul 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Security Reference Architecture for Cyber-Physical Systems (CPS)</title>
		    <link>https://lib.jucs.org/article/68539/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(6): 609-634</p>
					<p>DOI: 10.3897/jucs.68539</p>
					<p>Authors: Julio Moreno, David G. Rosado, Luis E. Sánchez, Manuel A. Serrano, Eduardo Fernández-Medina</p>
					<p>Abstract: Cyber-physical systems (CPS) are the next generation of engineered systems into which computing, communication, and control technologies are now being closely integrated. They play an increasingly important role in critical infrastructures, governments and everyday life. Security is crucial in CPS, but, unfortunately, they were not initially conceived as secure environments; if these security issues are to be addressed, then security must be considered from the very beginning of the system design. One way in which to solve this problem is by having a global perspective, which can be achieved by employing a Reference Architecture (RA), since it is a high-level abstraction of a system that could be useful in the implementation of complex systems. It is widely accepted that a Security Reference Architecture (SRA), which adds elements in order to address many security factors (integrity, confidentiality, availability, etc.) and facilitates the definition of the security requirements, is a good starting point when attempting to solve these kinds of cybersecurity problems and to protect the system from the beginning of the development. An SRA makes it possible to define the key elements of a specific environment, thus allowing a better understanding of the inherent elements of the environment, while promoting the integration of security aspects and mechanisms. The present paper, therefore, presents the definition of an SRA for CPS by using UML models in an attempt to facilitate secure CPS implementations.</p>
					<p><a href="https://lib.jucs.org/article/68539/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/68539/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/68539/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jun 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Mobile, Open and Social Language Learning Designs and Architectures</title>
		    <link>https://lib.jucs.org/article/68852/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(5): 413-424</p>
					<p>DOI: 10.3897/jucs.68852</p>
					<p>Authors: Agnes Kukulska-Hulme, Jorge Arus-Hota, Jesus Garcia Laborda</p>
					<p>Abstract: The emerging paradigm of mobile open social learning for languages (MOSL4L) integrates the three elements of mobile, open and social, and in so doing it creates the idea of a conceptually different language learning space. It is a space full of opportunity and challenge, relevant to a post-pandemic world in which we are looking for ways to build back better. The paper discusses tensions between formal and informal language learning and the nature of learning outcomes in MOSL4L. It focuses on the needs of individuals while also considering the characteristics of the virtual spaces in which they participate. It highlights the potential of micro experiences and learning moments as structures that are well aligned with MOSL4L. It suggests developments in language curricula to take account of communication challenges being identified in the contemporary world. Many more new learning designs and software architectures will have to be developed to match the possibilities generated by the MOSL4L space.</p>
					<p><a href="https://lib.jucs.org/article/68852/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/68852/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/68852/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Editorial</category>
		    <pubDate>Fri, 28 May 2021 15:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Formal Verification of Cloud and Fog Systems: A Review and Research Challenges</title>
		    <link>https://lib.jucs.org/article/66455/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(4): 341-363</p>
					<p>DOI: 10.3897/jucs.66455</p>
					<p>Authors: Fairouz Fakhfakh, Slim Kallel, Saoussen Cheikhrouhou</p>
					<p>Abstract: Cloud and Fog computing have been widely recognized as attractive solutions in both academic and industrial sectors. Despite their benefits, the adoption of Cloud and Fog computing still faces considerable challenges due to the increase of client requirements. A crucial issue, in this context, is how to verify the correctness of Cloud and Fog systems. The use of formal methods is an efficient means that provides real help for the designer to evaluate the behaviour of a system and prevent errors before its implementation. In this paper, we present a systematic literature review (SLR) on the current state of the art in this field. We collect the existing studies on the use of formal methods for proving the correctness of Cloud and Fog systems. The proposed approaches are compared based on some technical properties such as the verification methods, the verification tools, the considered properties, and the application domains. In addition, future directions which need more investigation are presented. We believe that our paper will be useful for industry and academic researchers to understand the existing contributions that deal with the correctness of Cloud and Fog systems. Moreover, it helps them to address several gaps in the literature.</p>
					<p><a href="https://lib.jucs.org/article/66455/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/66455/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/66455/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Apr 2021 19:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Communication architecture based on IoT technology to control and monitor pets feeding</title>
		    <link>https://lib.jucs.org/article/65094/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(2): 190-207</p>
					<p>DOI: 10.3897/jucs.65094</p>
					<p>Authors: Yadira Quiñonez, Carmen Lizarraga, Raquel Aguayo, David Arredondo</p>
					<p>Abstract: Technology is currently a significant benchmark in any application area; science and technology have permitted the invention of tools and devices that simplify daily activities by developing software engineering applications that provide automated solutions. In this sense, this work proposes two architectures that allow remote communication between the electronic device and the mobile application, using the GSM/GPRS communication services and the Twitter social network. This development aims to control dogs&#39; feeding adequately and healthily, providing the ration of food a dog needs according to its daily energy requirements. A nutritional assessment has also been performed considering different factors such as the size, breed, and weight of the dog to calculate the daily ration of healthy and balanced food according to daily energy requirements. Essentially, the electronic device consists of two parts: on the one hand, the electronic design is formed with an Arduino board, a Sim900 module to send and receive text messages, and the ESP8266 Wi-Fi serial transceiver module, which allows establishing the internet connection to receive the tweets that users post; both modules permit remote communication with the device using the Arduino board. On the other hand, the mobile application developed on Android uses a standard design according to the Google material design guidelines, allowing the owner to feed the dog, schedule the feeding, review the dog&#39;s food history, and receive alerts when the food is about to run out.</p>
					<p><a href="https://lib.jucs.org/article/65094/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/65094/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/65094/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Feb 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>K-Step Crossover Method based on Genetic Algorithm for Test Suite Prioritization in Regression Testing</title>
		    <link>https://lib.jucs.org/article/65241/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(2): 170-189</p>
					<p>DOI: 10.3897/jucs.65241</p>
					<p>Authors: P. K. Gupta</p>
					<p>Abstract: Software is an integration of numerous programming modules (e.g., functions, procedures, legacy systems, reusable components, etc.) that are tested and combined to build the entire system. However, some undesired faults may occur due to a change in modules while performing validation and verification. Retesting the entire software is a costly affair in terms of money and time. Therefore, to avoid retesting the entire software, regression testing is performed. In regression testing, an earlier created test suite is used to retest the software system&#39;s modified module. Regression testing works in three manners: minimizing test cases, selecting test cases, and prioritizing test cases. In this paper, a two-phase algorithm has been proposed that combines test case selection and test case prioritization for performing regression testing on several modules, ranging from small to very large procedural-language codebases. A textual differencing algorithm has been implemented for test case selection. Program statements modified between two modules are used for textual differencing and utilized to identify test cases that affect the modified program statements. In the next step, test case prioritization is implemented by applying a Genetic Algorithm for code/condition coverage. The genetic operators crossover and mutation have been applied over the initial population (i.e., test cases), taking code/condition coverage as the fitness criterion to provide a prioritized test suite. The prioritization algorithm can be applied over both the original and the reduced test suite depending upon the test suite&#39;s size or the need for accuracy. In the obtained results, the efficiency of the prioritization algorithms has been analyzed by the Average Percentage of Code Coverage (APCC) and the Average Percentage of Code Coverage with cost (APCCc). A comparison of the proposed approach is also done with previously proposed methods, and it is observed that APCC &amp; APCCc values reach higher percentages faster for the prioritized test suite than for the non-prioritized test suite.</p>
					<p><a href="https://lib.jucs.org/article/65241/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/65241/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/65241/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Feb 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Business Patterns Catalogue and Selection Proposal for the Conceptual Model of a Software Product</title>
		    <link>https://lib.jucs.org/article/65083/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(2): 135-151</p>
					<p>DOI: 10.3897/jucs.65083</p>
					<p>Authors: Oscar Carlos Medina, Manuel Pérez Cota, Brenda Elizabeth Meloni, Marcelo Martín Marciszack</p>
					<p>Abstract: A pattern is a model that allows reusing a successful solution to the same problem in a different context. A pattern implementation could be the elaboration of an analysis model to incorporate good-practice patterns into the Conceptual Modelling of Electronic Government systems. Defining a new pattern, and selecting a previously existing one from a limited set, called a catalogue, are essential activities that every analysis model must solve when using patterns. The present work describes a proposal to manage a Business Patterns catalogue that can be applied to the Conceptual Modelling of software products. Business Patterns allow modelling and designing business processes inside an organization, be it public or private. An application, called &ldquo;PatCat&rdquo; (Pattern Catalogue), was developed to test the proposal, using the Business Model of an Information System for a public education institution as a pilot. The introduction of patterns at the beginning of the Modelling Process allows simplifying and clarifying the requirements elicitation, amongst other benefits. Thus, a specific management application for a pattern catalogue is useful to standardize and speed up this software design task.</p>
					<p><a href="https://lib.jucs.org/article/65083/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/65083/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/65083/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Feb 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Instrument for Measuring Perception about Social and Human Factors that Influence Software Development Productivity</title>
		    <link>https://lib.jucs.org/article/65102/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(2): 111-134</p>
					<p>DOI: 10.3897/jucs.65102</p>
					<p>Authors: Liliana Machuca-Villegas, Gloria Piedad Gasca-Hurtado, Solbey Morillo Puente, Luz Marcela Restrepo Tamayo</p>
					<p>Abstract: In terms of productivity in software development, there is specific interest in identifying its influencing factors. For this purpose, several classification approaches have been previously used, which have already recognized technical factors, organizational factors, product factors, project factors, and personal factors. However, these approaches often focus on technical factors over social and human factors (SHFs). Nevertheless, in addition to the obvious technical aspects, the software development process involves problem-solving skills, cognitive aspects, and social interaction. In this sense, determining SHFs can lead software organizations to design strategies for improving team productivity. In this study, we first conducted a preliminary classification of the SHFs identified in the literature. Because this study seeks to assess the factors from the standpoint of software development professionals, we developed and validated an instrument to measure the perception of software development team members about SHFs that may be affecting their productivity. For this purpose, the first four stages of survey-based research were followed: objective definition, survey design, instrument construction, and instrument validity and reliability assessment. The instrument included 79 items assessing 13 different SHFs. After assessing both its validity and reliability, the results demonstrated that the instrument is a valid and reliable tool for measuring SHFs perception among software development team members.</p>
					<p><a href="https://lib.jucs.org/article/65102/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/65102/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/65102/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Feb 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Using a platform based on the Basic profile of ISO/IEC 29110 to reinforce DevOps environments</title>
		    <link>https://lib.jucs.org/article/65080/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(2): 91-110</p>
					<p>DOI: 10.3897/jucs.65080</p>
					<p>Authors: Mirna Munoz, Mario Negrete, Magdalena Arcilla-Cobián</p>
					<p>Abstract: The growth of software demand has increased competition, with customers expecting faster changes and better quality in the software products delivered. The need to satisfy the continuous requirements of the market, the need to keep the quality of products and services, as well as the need to improve their processes become difficult tasks for organizations. DevOps arises to handle this continuous change because it addresses the reduction of the gap between development and operations. However, the adoption of this new paradigm in organizations becomes a big challenge, mainly related to a cultural change. If the change of culture is not properly implemented, it could impact a team with negative consequences. In this context, there is no specific guidance that helps organizations with their implementation. To address this lack of guidance, this paper presents the Reinforced DevOps Guidance, which aims to help teams achieve an evolution of their software development, software delivery, and project management processes toward a proper DevOps implementation. The guidance uses a web platform that allows a dynamic implementation, helping teams understand the set of tasks to be followed and the impact of their implementation on their current organizations. This paper shows both an overview of the guidance, highlighting the web platform, and its application in a very small entity (VSE). The results show that the use of the guidance provides support toward addressing the effort in VSEs; gives information on the technology, process, and team aspects that should be improved; and allows the cultural change to proceed at a pace supported by VSEs.</p>
					<p><a href="https://lib.jucs.org/article/65080/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/65080/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/65080/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Feb 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Knowledge Intensive Software Engineering Applications</title>
		    <link>https://lib.jucs.org/article/65078/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(2): 87-90</p>
					<p>DOI: 10.3897/jucs.65078</p>
					<p>Authors: Jezreel Mejía, Rafael Valencia-García, Giner Alor-Hernández, José A. Calvo-Manzano</p>
					<p>Abstract: The use of Information and Communication Technologies (ICTs) has become a competitive strategy that allows organizations to position themselves within their market of action. In addition, the evolution, advancement and use of ICTs within any type of organization have created new domains of interest. In this context, knowledge-intensive software engineering applications are becoming crucial in organizations to support their performance. Knowledge-based technologies provide a consistent and reliable basis to face the challenges of organizing, manipulating and visualizing data and knowledge, playing a crucial role as the technological basis of the development of a large number of information systems. In software engineering, this involves the integration of various knowledge sources that are in constant change. Knowledge-intensive software applications are becoming more significant because the domains of many software applications are inherently knowledge-intensive and this knowledge is often not explicitly dealt with in software development. This impedes maintenance and reuse. Moreover, it is generally known that developing software requires expertise and experience, which are currently also implicit and could be made more tangible and reusable using knowledge-based or related techniques. Furthermore, organizations have recognized that software engineering applications are an optimal way of providing solutions, because this is a field that is constantly evolving due to new challenges. Examples of approaches that are directly related to this tendency are data analysis, software architectures, knowledge engineering, ontologies, conceptual modelling, domain analysis and domain engineering, business rules, workflow management, human and cultural factors, to mention but a few. 
Therefore, tools and techniques are necessary to capture and process knowledge in order to facilitate subsequent development efforts, especially in the domain of software engineering.</p>
					<p><a href="https://lib.jucs.org/article/65078/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/65078/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/65078/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Editorial</category>
		    <pubDate>Sun, 28 Feb 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Fuzzy Adaptive Data Packets Control Algorithm for IoT System Protection</title>
		    <link>https://lib.jucs.org/article/24142/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(11): 1435-1454</p>
					<p>DOI: 10.3897/jucs.2020.076</p>
					<p>Authors: Łukasz Apiecionek, Matusz Biedziak</p>
					<p>Abstract: One of the major problems for recent IT systems is attacks on their resources, known as Distributed Denial of Service (DDoS) attacks. Many servers accessible from the public network have been victims of such attacks or could be in the future. Unfortunately, there is still no effective method for protecting network servers against the source of the attack, while such an attack can block network resources for many hours. Existing solutions for protecting networks and IoT systems rely mainly on firewalls and IDS/IPS mechanisms, which is not sufficient. This article presents a method for minimizing DDoS attacks. The proposed method gives network administrators the ability to protect their servers and IoT network resources during an attack. The proposed fuzzy adaptive algorithm uses Ordered Fuzzy Numbers to predict the number of packets that can be passed over the network border gateway. The proposed solution gives ordinary users the opportunity to finish their work when an attack occurs.</p>
					<p><a href="https://lib.jucs.org/article/24142/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24142/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24142/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Nov 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Have Variability Tools Fulfilled the Needs of the Software Industry?</title>
		    <link>https://lib.jucs.org/article/24123/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(10): 1282-1311</p>
					<p>DOI: 10.3897/jucs.2020.067</p>
					<p>Authors: Ana Allian, Edson OliveiraJr, Rafael Capilla, Elisa Nakagawa</p>
					<p>Abstract: For nearly 30 years, industry and researchers have proposed many software variability tools to cope with the complexity of modeling variability in software development, followed by a number of publications on variability techniques built upon theoretical foundations. After more than 25 years of the practice of software variability, there are not many studies investigating the impact of software variability tools in the industry and the perception of practitioners. For this reason, we investigate in this research work how existing software variability tools fulfill the needs of companies demanding this kind of tool support. We conducted a survey with practitioners from companies in eight different countries in order to analyze the missing capabilities of software variability management tools, and we compared the results of the survey with the scientific literature through a systematic mapping study (SMS) to analyze whether the proposed solutions cover the needs required by practitioners. Our major findings indicate that many tools lack important qualities such as interoperability, collaborative work, code generation, scalability, impact analysis, and testing, while the results from the SMS show that such capabilities are, to some extent, found in some of the existing tools.</p>
					<p><a href="https://lib.jucs.org/article/24123/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24123/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24123/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Oct 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Using the Scientific Method as a Metaphor to Understand Modeling</title>
		    <link>https://lib.jucs.org/article/24119/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(9): 1230-1264</p>
					<p>DOI: 10.3897/jucs.2020.064</p>
					<p>Authors: Emilio Rodríguez-Priego, Francisco García-Izquierdo, Ángel Rubio</p>
					<p>Abstract: Although modeling is used to address complex problems, it is difficult to study modeling itself with an easy-to-understand model. Many authors have proposed such a model of modeling, but a consensus on the meaning of the basic modeling concepts has yet to materialize. We claim that any proposal regarding the fundamentals of modeling should address several objectives, such as to focus on the concept of model and define what it is, how a model is created and how it relates to the entities it models, or to explain the relationship between model and other basic concepts such as metamodel or (modeling-)language. In this paper, we present some of the most important elements of our proposal, named the Scientific Method approach to Modeling (SMM). Our proposal uses the Scientific Method as a metaphor to explain the mechanisms of modeling, since it provides well-known mechanisms constantly utilized when developing or understanding models: validation, analysis, synthesis and analogy. Inspired by these mechanisms, our proposal addresses the notion of model by including several constructors that allow us to better explain several complex modeling mechanisms extensively discussed in the literature, such as the metamodel notion.</p>
					<p><a href="https://lib.jucs.org/article/24119/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24119/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24119/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Sep 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Model Driven Software Engineering Meta-Workbenches: An XTools Approach</title>
		    <link>https://lib.jucs.org/article/24110/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(9): 1148-1176</p>
					<p>DOI: 10.3897/jucs.2020.060</p>
					<p>Authors: Tony Clark, Jens Gulden</p>
					<p>Abstract: Model Driven Software Engineering aims to provide a quality assured process for designing and generating software. Modelling frameworks that offer technologies for domain specific language and associated tool construction are called language workbenches. Since modelling is itself a domain, there are benefits to applying a workbench-based approach to the construction of modelling languages and tools. Such a framework is a meta-modelling tool, and those that can generate themselves are reflective meta-tools. This article reviews the current state of the art for modelling tools and proposes a set of reflective meta-modelling tool requirements. The XTools framework has been designed as a reflective meta-tool and is used as a benchmark.</p>
					<p><a href="https://lib.jucs.org/article/24110/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24110/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24110/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Sep 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Bite-Sized Virtual Reality Learning Applications: A Pattern-Based Immersive Authoring Environment</title>
		    <link>https://lib.jucs.org/article/24097/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(8): 947-971</p>
					<p>DOI: 10.3897/jucs.2020.051</p>
					<p>Authors: Robin Horst, Ramtin Naraghi-Taghi-Off, Linda Rau, Ralf Dorner</p>
					<p>Abstract: Bite-sized learning is a current educational trend in which educators divide content into relatively small, easily comprehensible chunks, called nuggets. In this paper, we introduce an authoring toolkit that relies on a VR implementation of nuggets and show that a nugget-based approach also facilitates the authoring of VR learning content. In particular, we present Immersive Nugget Tiles (IN-Tiles), a novel authoring toolkit aimed at authors who are not experts in VR. With IN-Tiles, manipulating VR nuggets and authoring VR learning content can be directly accomplished within a virtual environment, allowing authors to immediately experience the results of their authoring efforts in VR. We discuss the underlying concepts of IN-Tiles, specifically how to visualize VR nuggets in a virtual environment and how to present affordances that support authoring and manipulating VR nuggets. We report the results of a user study in which we evaluated the IN-Tiles toolkit and compared it to a conventional 2D authoring environment that also relies on component-based VR. The results support the hypothesis that nugget-based immersive authoring tools are suitable for successfully creating bite-sized VR applications and that authoring directly in VR has an added value, particularly for authors who are not IT specialists.</p>
					<p><a href="https://lib.jucs.org/article/24097/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24097/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24097/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Aug 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Utilizing Debugging Information of Applications in Memory Forensics</title>
		    <link>https://lib.jucs.org/article/24088/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(7): 805-826</p>
					<p>DOI: 10.3897/jucs.2020.044</p>
					<p>Authors: Mohammed Al-Saleh, Ethar Qawasmeh, Ziad Al-Sharif</p>
					<p>Abstract: The rapid development in the digital world has contributed to the dramatic increase in the number of cybercrimes. An application's volatile data that is kept in memory (RAM) could give clues on how a criminal has been using the application up to acquisition time. Unfortunately, application-level memory forensics has been conducted in an ad hoc manner because a forensic investigator has to come up with a new technique for each new application. This process has become problematic and exhausting. This paper proposes a general solution to investigate any application in memory. We heavily utilize applications' debugging information generated by compilers in our solution. Furthermore, we extend Volatility [Walters, 2007], an open-source memory forensic framework, by developing and integrating a plugin to investigate applications in memory. We design several experiments to evaluate the effectiveness of our plugin. Interestingly, our plugin can parse debugging information and extract variables' names and data types regardless of their scope and complexity. In addition, we experimented with a real-world application and succeeded in collecting vital information from it. By accurately computing the Virtual Addresses (VA) of variables along with their allocated memory sizes based on their types, we are able to extract their values from memory. In addition, we trace call stacks per thread to extract local variables' values. Finally, direct and indirect pointers are successfully dereferenced.</p>
					<p><a href="https://lib.jucs.org/article/24088/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24088/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24088/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jul 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Guidelines for Structuring Object-Oriented Product Configuration Models in Standard Configuration Software</title>
		    <link>https://lib.jucs.org/article/24005/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(3): 374-401</p>
					<p>DOI: 10.3897/jucs.2020.020</p>
					<p>Authors: Jeppe Rasmussen, Lars Hvam, Katrin Kristjansdottir, Niels Mortensen</p>
					<p>Abstract: Product configuration systems (PCSs) are increasingly being used in various industries to manage product knowledge and create the required specifications of customized products. Companies applying PCSs face significant challenges in modelling, structuring and documenting the systems. Some of the main challenges related to PCSs are formalising product knowledge conceptually and structuring the product features. The modelling techniques predominantly used to visualise and structure PCSs are the Unified Modelling Language (UML) notations, Generic Bill of Materials (GBOM) and Product Variant Master (PVM), associated with class collaboration cards (CRC-cards). These methods are used to both analyse and model the products and create a basis for implementation in a PCS by using an object-oriented approach. However, the modelling techniques do not consider that most commercial PCSs are not fully object-oriented; rather, they are expert systems with an inference engine and a knowledge base. Therefore, the constructed product models require modifications before implementation in the configuration software. The consequence is that what is supposedly a feasible structure of the product model is not always appropriate for implementation in standard PCS software. To address this challenge, this paper investigates best practice in modelling and implementation techniques for PCSs in standard software and alternative structuring methods used in object-oriented software design. The paper proposes a method for a modular design of a PCS in not fully object-oriented standard PCS software using design patterns. The proposed method was tested in a case company that suffered from a poorly structured product model in a not fully object-oriented PCS. The results show that maintainability can be improved by using design patterns in combination with an agile documentation approach.</p>
					<p><a href="https://lib.jucs.org/article/24005/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24005/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24005/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Mar 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Experimental Evaluation of Three Value Recommendation Methods in Interactive Configuration</title>
		    <link>https://lib.jucs.org/article/24003/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(3): 318-342</p>
					<p>DOI: 10.3897/jucs.2020.018</p>
					<p>Authors: Hélène Fargier, Pierre-François Gimenez, Jérôme Mengin</p>
					<p>Abstract: The present work deals with the recommendation of values in interactive configuration, with no prior knowledge about the user, but given a list of products previously configured and bought by other users ("sales histories"). The basic idea is to recommend, for a given variable at a given step of the configuration process, a value that has been chosen by other users in a similar context, where the context is defined by the variables that have already been decided, and the values that the current user has chosen for these variables. From this point, two directions have been explored. The first one is to select a set of similar configurations in the sales history (typically, the k closest ones, using a distance measure) and to compute the best recommendation from this set - this is the line proposed by [Coster et al., 2002]. The second one, that we propose here, is to learn a model from the entire sample as representation of the users' preferences, and to use it to recommend a pertinent value; three families of models are experimented: the Bayesian networks, the naive Bayesian networks and the lexicographic preferences trees.</p>
					<p><a href="https://lib.jucs.org/article/24003/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24003/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24003/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Mar 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Improving WalkSAT for Random 3-SAT Problems</title>
		    <link>https://lib.jucs.org/article/23998/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(2): 220-243</p>
					<p>DOI: 10.3897/jucs.2020.013</p>
					<p>Authors: Huimin Fu, Yang Xu, Shuwei Chen, Jun Liu</p>
					<p>Abstract: Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is called WalkSAT, which has wide influence and performs well on most random 3-SAT instances. However, the performance of WalkSAT lags far behind on random 3-SAT instances at ratios equal to or greater than the phase transition ratio. Motivated by this limitation, in the present work, firstly an allocation strategy is introduced and utilized in WalkSAT to determine the initial assignment, leading to a new algorithm called WalkSATvav. The experimental results show that WalkSATvav significantly outperforms the state-of-the-art SLS solvers on random 3-SAT instances at the phase transition from SAT Competition 2017. However, WalkSATvav cannot rival its competitors on random 3-SAT instances greater than the phase transition ratio. Accordingly, WalkSATvav is further improved for such instances by utilizing a combination of an improved genetic algorithm and an improved ant colony algorithm, which complement each other in guiding the search direction. The resulting algorithm, called WalkSATga, is far better than WalkSAT and significantly outperforms some previously known SLS solvers on random 3-SAT instances greater than the phase transition ratio from SAT Competition 2017. Finally, a new SAT solver called WalkSATlg, which combines WalkSATvav and WalkSATga, is proposed, which is competitive with the winner of the random satisfiable category of SAT Competition 2017 on random 3-SAT problems.</p>
					<p><a href="https://lib.jucs.org/article/23998/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23998/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23998/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Feb 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Scalable Distributed Metadata Server Based on Nonblocking Transactions</title>
		    <link>https://lib.jucs.org/article/23991/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(1): 89-106</p>
					<p>DOI: 10.3897/jucs.2020.006</p>
					<p>Authors: Kohei Hiraga, Osamu Tatebe, Hideyuki Kawashima</p>
					<p>Abstract: Metadata performance scalability is critically important in high-performance computing when accessing many small files from millions of clients. This paper proposes a design of a scalable distributed metadata server, PPMDS, for parallel file systems using multiple key-value servers. In PPMDS, the hierarchical namespace of a file system is efficiently managed by multiple servers. Multiple entries can be atomically updated using a nonblocking distributed transaction based on an algorithm of dynamic software transactional memory. This paper also proposes optimizations to further improve metadata performance by introducing server-side transaction processing, multiple readers, and a shared lock mode, which reduce the number of remote procedure calls and prevent unnecessary blocking. The performance evaluation shows scalable performance up to 3 servers, achieving 62,000 operations per second, a 2.58x performance improvement compared to a single metadata server.</p>
					<p><a href="https://lib.jucs.org/article/23991/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23991/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23991/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jan 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Intelligent Recommender System Based on Association Rule Analysis for Requirement Engineering</title>
		    <link>https://lib.jucs.org/article/23988/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(1): 33-49</p>
					<p>DOI: 10.3897/jucs.2020.003</p>
					<p>Authors: Mohammad Muhairat, Shadi Bi, Bilal Hawashin, Mohammad Elbes, Mahmoud Al-Ayyoub</p>
					<p>Abstract: Requirement gathering is a vital step in software engineering. Even though much recent research has concentrated on improving the requirement gathering process, many of these works lack completeness, especially when the number of users is large. Data Mining techniques have recently been employed in various domains with promising results. In this work, we propose an intelligent recommender system for requirement engineering based on association rule analysis, which is a main category in Data Mining. Such a recommender would contribute to enhancing the accuracy of the gathered requirements and provide more comprehensive results. The experiments conducted in this work show that FP-Growth outperformed Apriori in terms of execution time and space consumption, while both methods were efficient in terms of accuracy.</p>
					<p><a href="https://lib.jucs.org/article/23988/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23988/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23988/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jan 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Adopting Trust in Learning Analytics Infrastructure: A Structured Literature Review</title>
		    <link>https://lib.jucs.org/article/22691/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(13): 1668-1686</p>
					<p>DOI: 10.3217/jucs-025-13-1668</p>
					<p>Authors: George-Petru Ciordas-Hertel, Jan Schneider, Stefaan Ternier, Hendrik Drachsler</p>
					<p>Abstract: One key factor for the successful outcome of a Learning Analytics (LA) infrastructure is the ability to decide which software architecture concept is necessary. Big Data can be used to face the challenges LA holds. Additional challenges concerning privacy rights are introduced for Europeans by the General Data Protection Regulation (GDPR). Beyond that, the challenge of how to gain the trust of users remains. We found diverse architectural concepts in the domain of LA, and selecting an appropriate solution is not straightforward. Therefore, we conducted a structured literature review to assess the state of the art and provide an overview of Big Data architectures used in LA. Based on the examination of the results, we identify common architectural components and technologies and present them in the form of a mind map. Linking the findings, we propose an initial approach towards a Trusted and Interoperable Learning Analytics Infrastructure (TIILA).</p>
					<p><a href="https://lib.jucs.org/article/22691/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22691/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22691/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Dec 2019 00:00:00 +0000</pubDate>
		</item>
	
	</channel>
</rss>