
<rss version="2.0">
    <channel>
        <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
        <description>Latest 72 Articles from JUCS - Journal of Universal Computer Science</description>
        <link>https://lib.jucs.org/</link>
        <lastBuildDate>Sat, 18 Apr 2026 10:59:20 +0000</lastBuildDate>
        <generator>Pensoft FeedCreator</generator>
        <image>
            <url>https://lib.jucs.org/i/logo.jpg</url>
            <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
            <link>https://lib.jucs.org/</link>
            <description><![CDATA[Feed provided by https://lib.jucs.org/. Click to visit.]]></description>
        </image>
	
		<item>
		    <title>NirMACNet: A Novel Multi-Scale Adaptive Convolutional Network for NIR Spectroscopy</title>
		    <link>https://lib.jucs.org/article/143527/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(14): 1583-1606</p>
					<p>DOI: 10.3897/jucs.143527</p>
					<p>Authors: Nguyen Thi Hoang Phuong, Phan Minh Nhat, Nguyen Van Hieu</p>
					<p>Abstract: Near-infrared (NIR) spectroscopy has emerged as a valuable analytical technique for assessing the composition and quality of various materials. This study proposes NirMACNet, a novel convolutional neural network (CNN) architecture that incorporates a residual-based multi-scale kernel mechanism for enhanced prediction of compositional attributes. The model is evaluated on two distinct NIR spectral datasets, milk and soil, to demonstrate its generalization capability across domains. By leveraging multi-scale kernel operations, NirMACNet effectively captures diverse spectral patterns, while its deep architecture facilitates comprehensive feature extraction. To mitigate performance degradation commonly associated with deeper networks, residual learning is employed. Experimental results indicate that NirMACNet consistently outperforms state-of-the-art methods in terms of prediction accuracy. Future work will involve expanding the diversity of training datasets and investigating alternative architectural enhancements to further improve model robustness and applicability.</p>
					<p><a href="https://lib.jucs.org/article/143527/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/143527/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Dec 2025 08:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Enhancing Home-Based Rehabilitation Exercises with a Temporal Conditional Generative Adversarial Network</title>
		    <link>https://lib.jucs.org/article/141304/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(12): 1386-1413</p>
					<p>DOI: 10.3897/jucs.141304</p>
					<p>Authors: Fouzi Lezzar, Seif Eddine Mili</p>
					<p>Abstract: Physical rehabilitation is essential for restoring motor function; however, traditional methods often require therapist supervision, which can be costly and inaccessible. Home-based rehabilitation offers a practical alternative, but without real-time guidance, patients may develop incorrect movement patterns that impede progress. Existing approaches typically provide feedback only after exercises are completed, reducing their effectiveness. To overcome this limitation, we propose a Temporal Conditional Generative Adversarial Network (TCGAN)-based motion generation system that delivers real-time skeletal guidance tailored to each patient&rsquo;s body structure and positioning. By detecting key anatomical landmarks and generating adaptive motion sequences, the system ensures precise movement execution, minimizing errors and improving rehabilitation outcomes. Patients can mimic these movements, enabling them to perform exercises with greater accuracy and confidence. Quantitative and qualitative evaluations confirm the effectiveness of the generated exercises, thanks to an optimized architecture, an improved loss function, a refined training process, and fine-tuned TCGAN hyperparameters. Experimental results demonstrate a high degree of similarity between generated and real movements, with a Fr&eacute;chet Inception Distance (FID) score of 0.89 and strong temporal alignment, as shown by Dynamic Time Warping (DTW) scores ranging from 2.9 to 5.6 across nine rehabilitation exercises. These metrics underscore the system&rsquo;s realism and reliability.</p>
					<p><a href="https://lib.jucs.org/article/141304/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/141304/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Oct 2025 10:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>Posture Monitoring of Patients in Radiotherapy Scenarios Based on Stacked Grayscale 3-Channel Images</title>
		    <link>https://lib.jucs.org/article/130186/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(6): 648-665</p>
					<p>DOI: 10.3897/jucs.130186</p>
					<p>Authors: Yang Zhang, Ziwen Wei, Zhihua Liu, Xiaolong Wu, Junchao Qian</p>
					<p>Abstract: Purpose: Incorrect patient positioning during radiotherapy can significantly impact treatment efficacy and pose potential risks. This study aims to develop a model that can rapidly and effectively monitor the patient&rsquo;s postures during radiotherapy sessions using real-time video. Methods: The neural network utilized in this research employed a two-stream architecture, consisting of spatial and temporal streams. For the spatial stream, RGB frames from the videos were directly used as input. In the temporal stream, representative frames were extracted from the video to construct stacked grayscale 3-channel image (SG3I) frames. This approach enabled motion information to be captured by a 2D convolutional neural network (CNN) pre-trained on a large-scale dataset, eliminating the need for computationally expensive optical flow calculations. Additionally, an improved lightweight network architecture was employed. The model was trained and tested using volunteer videos collected from a radiotherapy center in a hospital. Results: The results demonstrated that the proposed model outperforms existing methods in terms of detection accuracy while achieving higher efficiency in frame generation. Conclusion: In this study, we introduced a cost-effective and highly accurate method for recognizing patients&rsquo; postures during radiotherapy. This approach could be readily deployed in any radiotherapy facility, ensuring treatment precision and patient safety.</p>
					<p><a href="https://lib.jucs.org/article/130186/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/130186/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 May 2025 10:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>Plant Leaf Recognition using OSSGabor filter and Vision Transformer</title>
		    <link>https://lib.jucs.org/article/129624/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(6): 623-647</p>
					<p>DOI: 10.3897/jucs.129624</p>
					<p>Authors: Thuy Phuong Khuat, Trang Van, Hoang Thien Van</p>
					<p>Abstract: Deep learning methods are increasingly used in automated plant species classification systems to support biodiversity conservation and ecological monitoring, particularly for medicinal plants. This study presents a novel approach to plant leaf recognition by integrating the Vision Transformer (ViT) model with the OSSGabor filter, termed the OGViT method. The OSSGabor filter is a leaf feature extraction technique that combines the responses of Gabor filters in 16 directions and optimizes their parameters using the Structural Similarity Index Measure (SSIM). These features capture intricate details such as leaf veins, texture, and frequency variations, which are essential for enabling ViT to fully leverage deep learning for leaf recognition. Experimental results on four public datasets&mdash;Swedish Leaf, Flavia, Folio, and UCI Leaf&mdash;demonstrate that the OGViT method outperforms state-of-the-art approaches, achieving accuracy scores of 100%, 100%, 100%, and 98.88%, respectively, with a 20% testing set and an 80% training set. This performance highlights the effectiveness of the proposed method for plant classification, offering a robust tool with potential applications in agriculture and biodiversity conservation.</p>
					<p><a href="https://lib.jucs.org/article/129624/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/129624/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 May 2025 10:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Novel GA-based Approach to Automatically Generate ConvLSTM Architectures for Human Activity Recognition</title>
		    <link>https://lib.jucs.org/article/131543/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(5): 494-518</p>
					<p>DOI: 10.3897/jucs.131543</p>
					<p>Authors: Sarah Khater, Magda B. Fayek, Mayada Hadhoud</p>
					<p>Abstract: Human activity recognition (HAR) is a challenging computer vision problem that requires recognizing and categorizing human actions using spatiotemporal data. In recent years, ConvLSTM has shown distinctive advances in manipulating spatiotemporal data. ConvLSTM-based architectures, like any deep learning architecture, require deciding on many hyperparameters apart from trainable weights. State-of-the-art designs for general purpose datasets already exist, but specific purpose applications require architecture designs that perform well on application-dependent datasets. The design of such architectures requires either many trials and errors, which consume time and resources, or an experienced architect. Neural architecture search (NAS) methods have been introduced to automate the design process and address the challenge of relying on expert knowledge when creating neural architectures. NAS enables rapid prototyping and experimentation, reducing the time spent on trial and error in manual design. One of the leading approaches in NAS is the Genetic Algorithm (GA), which plays a significant role in optimizing neural architectures. In this paper, a novel GA-based approach is proposed to automatically design ConvLSTM-based architectures from scratch for HAR applications. Our approach is based on a multi-objective GA that maximizes recognition accuracy and minimizes the number of trainable parameters and an overfitting measure. The experiments are conducted on the KTH, Weizmann, and UCF Sports datasets. The best classification accuracies from the generated models are 97.92%, 96.77%, and 94.87% for the KTH, Weizmann, and UCF Sports datasets, respectively. The experimental results show that the models automatically generated with the proposed approach outperform some of the state-of-the-art manually designed ConvLSTM-based architectures by margins of up to 9.92%, 5.77%, and 23.64% for KTH, Weizmann, and UCF Sports, respectively. We also compared our approach with other NAS approaches. Our approach is found to outperform some of the introduced approaches by approximately 2%, 11%, and 4% for KTH, Weizmann, and UCF Sports, respectively.</p>
					<p><a href="https://lib.jucs.org/article/131543/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/131543/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Apr 2025 08:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>EBAR: A Novel Machine Learning Model for Quantifying Chemical Concentrations using NIR Spectroscopy</title>
		    <link>https://lib.jucs.org/article/121757/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 31(4): 363-382</p>
					<p>DOI: 10.3897/jucs.121757</p>
					<p>Authors: Phan Minh Nhat, Ngo Le Huy Hien, Dinh Minh Toan, Le Viet Hung, Phan Binh, Phung Thi Anh, Nguyen Thi Hoang Phuong, Nguyen Van Hieu</p>
					<p>Abstract: The examination of cattle and poultry fertilizers using Near Infrared Reflectance Spectroscopy (NIR) provides a viable solution for determining optimal fertilizer composition for crop growth while mitigating adverse impacts on soil and groundwater quality. In recent studies, conventional machine learning models combined with spectral analysis have been used to ascertain cattle and poultry fertilizer concentrations. However, these traditional machine learning models encounter challenges in achieving data generalization, resulting in suboptimal prediction accuracy. To address this issue, this study proposes a synthesized machine learning model named EBAR (Error Based Accumulation Regression), which exhibits a commendable coefficient of determination, with an average R&sup2; = 0.865 across 7 chemical substances, surpassing the performance of existing traditional machine learning models. Additionally, a Backward Elimination technique is designed to identify crucial wavelength ranges for monitoring component concentrations. The research outcome is promising and acts as a novel benchmark for later models in determining component concentrations through NIR spectroscopy. Future research will gear toward expanding the datasets, increasing the number of fertilizer samples, extending the examined wavelength range, and improving the model&rsquo;s efficiency so that it can be applied to various types of food, including seafood, vegetables, and fruits.</p>
					<p><a href="https://lib.jucs.org/article/121757/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/121757/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Mar 2025 10:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>Diagnosis of Lung Cancer from Computed Tomography Scans with Deep Learning Methods</title>
		    <link>https://lib.jucs.org/article/116916/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(8): 1089-1111</p>
					<p>DOI: 10.3897/jucs.116916</p>
					<p>Authors: Furkan Berk Seyrek, Halil Yiğit</p>
					<p>Abstract: In recent years, rapid advancements in technology, particularly in the realm of artificial intelligence, have significantly transformed the landscape of lung cancer diagnosis. Early detection of lung cancer is pivotal in enhancing patient outcomes; however, traditional diagnostic methods are laborious and time-consuming. Leveraging the power of deep learning techniques, specifically utilizing established neural network architectures, offers a promising solution. This study focuses on the classification of lung images from computed tomography (CT) scans into cancerous and non-cancerous categories. By employing prevalent deep learning models, transfer learning, and rigorous evaluation metrics, this study aims to assess the efficiency of these models in accurately diagnosing lung cancer. The study uses a publicly available dataset and employs preprocessing and segmentation techniques to prepare the images for analysis. The performance of the deep learning models is evaluated on the basis of parameters such as accuracy, sensitivity, specificity, and F1 score. The results demonstrate remarkable accuracy rates, with specific architectures such as ResNet-152V2 and the proposed deep convolutional neural network architecture achieving a staggering 99.1% accuracy. These findings underscore the potential of deep learning techniques in revolutionizing lung cancer diagnosis, offering valuable support to healthcare professionals, and paving the way for more efficient and accurate diagnostic practices.</p>
					<p><a href="https://lib.jucs.org/article/116916/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/116916/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Aug 2024 16:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Embedded Neural Network Approach for Reinforcing Deep Learning: Advancing Hand Gesture Recognition</title>
		    <link>https://lib.jucs.org/article/110291/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(7): 957-977</p>
					<p>DOI: 10.3897/jucs.110291</p>
					<p>Authors: Anwar Mira, Olaf Hellwich</p>
					<p>Abstract: Deep neural networks (DNNs) can face limitations during training for recognition, motivating this study to improve recognition capabilities by optimizing deep learning features for hand gesture image recognition. We propose a novel approach that enhances features from well-trained DNNs using an improved radial basis function (RBF) neural network, targeting recognition within individual gesture categories. We achieve this by clustering images with a self-organizing map (SOM) network to identify optimal centers for RBF training. Our enhanced SOM, employing the Hassanat distance metric, outperforms the traditional K-Means method across a comparative analysis of various distance functions and an expanded number of cluster centers, accurately identifying hand gestures in images. Our training pipeline learns from hand gesture videos and static images, addressing the growing need for machines to interact with gestures. Despite challenges posed by gesture videos, such as sensitivity to hand pose sequences within a single gesture category and overlapping hand poses due to their high similarity and repetition, our pipeline achieved significant enhancement without requiring time-related training data. We also improve the recognition of static hand pose images within the same category. Our work advances DNNs by integrating deep learning features and incorporating SOM for RBF training.</p>
					<p><a href="https://lib.jucs.org/article/110291/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/110291/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Jul 2024 16:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>Multi-Class Microscopic Image Analysis of Protozoan Parasites Using Convolutional Neural Network</title>
		    <link>https://lib.jucs.org/article/112639/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(4): 420-432</p>
					<p>DOI: 10.3897/jucs.112639</p>
					<p>Authors: Sivaramasamy Elayaraja, Sunil Yeruva, Vlastimil Stejskal, Satish Nandipati</p>
					<p>Abstract: Protozoan parasites cause a wide range of devastating diseases in various kinds of organisms, including humans. These diseases may be lethal if not treated promptly. To detect specific disease-causing parasites, a wide range of immunological and molecular technologies are now widely available. However, all of these depend on the worker&#39;s expertise and are time-consuming, error-prone, and expensive. With the development of technology, convolutional neural networks have achieved excellent results in image classification compared to traditional biological techniques, cutting costs while attaining an overall higher accuracy and eliminating human error. Many models include numerous convolutional layers and offer an accuracy between 90 and 95 percent. In this study, 4740 microscopic images of protozoan parasites from six classes with a balanced dataset and an 80&ndash;20% split were classified using three convolutional layers with stochastic gradient descent as an optimizer. A 5-fold cross-validation approach is used to evaluate the proposed method. We also evaluate against deep learning models, namely VGG16, ResNet50, and InceptionV3. The performance evaluation of the proposed model shows an accuracy of 94%, with a precision range of 0.83-0.99 and a recall range of 0.76-1.00. The retrained model was able to recognize and classify all 6 different parasites. Except for the class Leishmania, where 24% of images are incorrectly classified as Plasmodium and Trichomonas, the model correctly identifies most cases.</p>
					<p><a href="https://lib.jucs.org/article/112639/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/112639/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Apr 2024 17:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Classification of CNC Vibration Speeds by Heralick Features</title>
		    <link>https://lib.jucs.org/article/106543/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(3): 363-382</p>
					<p>DOI: 10.3897/jucs.106543</p>
					<p>Authors: Melih Kuncan, Kaplan Kaplan, Yılmaz Kaya, Mehmet Recep Minaz, H. Metin Ertunç</p>
					<p>Abstract: In the contemporary landscape of industrial manufacturing, the concept of computer numerical control (CNC) has emerged due to the optimization of conventional machinery, distinguished by its remarkable precision and expeditious processing capabilities. These inherent advantages have seamlessly paved the way for the pervasive integration of CNC machines across a myriad of industrial manufacturing sectors. The present study embarks upon a comprehensive inquiry, delving into the intricate analysis of a specialized prototype CNC molding machine, encompassing a meticulous assessment of its structural rigidity, robustness, and propensity for vibrational occurrences. Moreover, an insightful exploration is undertaken to discern the intricate interplay between vibrational signals and intricate machining processes, particularly under diverse conditions such as the presence or absence of the cutting tool, and at varying rotational speeds denoted in revolutions per minute (RPM). The trajectory of this research voyage encompasses an extensive array of empirical experiments meticulously conducted on the prototype CNC machine, with synchronous real-time acquisition of vibrational data. This empirical journey starts by generating the first of two distinct datasets, meticulously designed to encompass an assemblage of seven distinct rotational speeds, spanning the spectrum from 18000 to 30000 RPM, thereby facilitating enhanced diversity within the dataset. In parallel, a secondary dataset is meticulously derived from the CNC machine operating in the absence of the cutting tool, thereby encapsulating an exhaustive range of 20 discrete RPM values. The extraction of pivotal features aimed at discerning between the vibrational signals arising from distinct conditions (i.e., those emanating from situations involving the presence or absence of the cutting tool) and the associated variance in CNC machine speeds is facilitated through an innovative framework grounded in co-occurrence matrices. The culmination of this methodological framework is the identification of discernible co-occurrence matrices, thereby facilitating the subsequent computation of Heralick features. The classification effort was performed systematically using 10-fold cross-validation analysis, covering a number of different machine learning models. The outcomes emanating from this intricate sequence of systematic methodologies underscore remarkable achievements. Specifically, the classification of vibrational signals corresponding to varying CNC machine speeds, contingent upon the presence or absence of the cutting tool, yields commendable accuracy rates of 94.27% and 94.16%, respectively. Notably, an exemplary accuracy rate of 100% is attained when classifying differing conditions (i.e., situations involving the presence or absence of the cutting tool) across specific RPM settings, prominently at 22000, 24000, 26000, 28000, and 30000 RPM.</p>
					<p><a href="https://lib.jucs.org/article/106543/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/106543/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Mar 2024 16:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>Visualizing Portable Executable Headers for Ransomware Detection: A Deep Learning-Based Approach</title>
		    <link>https://lib.jucs.org/article/104901/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(2): 262-286</p>
					<p>DOI: 10.3897/jucs.104901</p>
					<p>Authors: Tien Quang Dam, Nghia Thinh Nguyen, Trung Viet Le, Tran Duc Le, Sylvestre Uwizeyemungu, Thang Le-Dinh</p>
					<p>Abstract: In recent years, the rapid evolution of ransomware has led to the development of numerous techniques designed to evade traditional malware detection methods. To address this issue, a novel approach is proposed in this study, leveraging machine learning to encode critical information from Portable Executable (PE) headers into visual representations of ransomware samples. The proposed method selects highly impactful features for data sample classification and encodes them as images based on predefined color rules. A deep learning model named peIRCECon (PE Header-Image-based Ransomware Classification Ensemble with Concatenating) is also developed by integrating prominent architectures, such as VGG16 and ResNet50, and incorporating the concatenating method to enhance ransomware detection and classification performance. Experimental results using self-collected datasets demonstrate the efficacy of this approach, achieving a high accuracy of 99.85% in distinguishing between ransomware and benign samples. This promising approach holds the potential to significantly improve the effectiveness of ransomware detection and classification, thereby contributing to more robust cybersecurity defense systems.</p>
					<p><a href="https://lib.jucs.org/article/104901/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/104901/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Feb 2024 16:00:07 +0000</pubDate>
		</item>
	
		<item>
		    <title>Transfer Learning with EfficientNetV2S for Automatic Face Shape Classification</title>
		    <link>https://lib.jucs.org/article/104490/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(2): 153-178</p>
					<p>DOI: 10.3897/jucs.104490</p>
					<p>Authors: Petra Grd, Igor Tomičić, Ena Barčić</p>
					<p>Abstract: The classification of human face shapes, a pivotal aspect of one&rsquo;s appearance, plays a crucial role in diverse fields like beauty, cosmetics, healthcare, and security. In this paper, we present a multi-step methodology for face shape classification, harnessing the potential of transfer learning and a pretrained EfficientNetV2S neural network. Our approach comprises key phases, including preprocessing, augmentation, training, and testing, ensuring a comprehensive and reliable solution. The preprocessing step involves precise face detection, cropping, and image scaling, laying a solid foundation for accurate feature extraction. Our methodology utilizes a publicly available dataset of female celebrities, comprising five face shape classes: heart, oblong, oval, round, and square. By augmenting this dataset during training, we magnify its diversity, enabling better generalization and enhancing the model&rsquo;s robustness. With the EfficientNetV2S neural network, we employ transfer learning, leveraging pretrained weights to optimize accuracy, training speed, and parameter size. The result is a highly efficient and effective model, which outperforms state-of-the-art approaches on the same dataset, boasting an outstanding overall accuracy of 96.32%. Our findings demonstrate the efficiency of our approach, proving its potential in the field of face shape classification. The success of our methodology holds promise for various applications, offering valuable insights into beauty analysis, cosmetic recommendations, and personalized healthcare.</p>
					<p><a href="https://lib.jucs.org/article/104490/">HTML</a></p>
					
					<p><a href="https://lib.jucs.org/article/104490/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Feb 2024 16:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Image Filtering Techniques for Object Recognition in Autonomous Vehicles</title>
		    <link>https://lib.jucs.org/article/102428/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 30(1): 49-84</p>
					<p>DOI: 10.3897/jucs.102428</p>
					<p>Authors: Ngo Le Huy Hien, Ah-Lian Kor, Mei Choo Ang, Eric Rondeau, Jean-Philippe Georges</p>
					<p>Abstract: The deployment of autonomous vehicles has the potential to significantly lessen a variety of current harmful externalities (such as accidents, traffic congestion, security risks, and environmental degradation), making autonomous vehicles an emerging topic of research. In this paper, a literature review of autonomous vehicle development has been conducted, with the notable finding that autonomous vehicles will inevitably become an indispensable and greener future solution. Subsequently, 5 different deep learning models, YOLOv5s, EfficientNet-B7, Xception, MobilenetV3, and InceptionV4, have been built and analyzed for 2-D object recognition in the navigation system. While testing on the BDD100K dataset, YOLOv5s and EfficientNet-B7 appear to be the two best models. Finally, this study has proposed Hessian, Laplacian, and Hessian-based Ridge Detection filtering techniques to optimize the performance of these two models. The results demonstrate that these filters could increase the mean average precision by up to 11.81% and reduce detection time by up to 43.98% when applied to the YOLOv5s and EfficientNet-B7 models. Overall, all the experiment results are promising and could be extended to other domains for semantic understanding of the environment. Additionally, various filtering algorithms for multiple object detection and classification could be applied to other areas. Different recommendations and future work have been clearly defined in this study.</p>
					<p><a href="https://lib.jucs.org/article/102428/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/102428/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/102428/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Jan 2024 16:00:04 +0000</pubDate>
		</item>
	
		<item>
		    <title>Face Plastic Surgery Recognition Model Based on Neural Network and Meta-Learning Model</title>
		    <link>https://lib.jucs.org/article/98674/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(10): 1092-1115</p>
					<p>DOI: 10.3897/jucs.98674</p>
					<p>Authors: Rasha R. Atallah, Ahmad Sami Al-Shamayleh, Mohammed A. Awadallah</p>
					<p>Abstract: Facial recognition is a procedure for verifying a person&#39;s identity using the face, and is considered one of the biometric security methods. However, facial recognition methods face many challenges, such as face aging, wearing a face mask, having a beard, and undergoing plastic surgery, which decrease the accuracy of these methods. This study evaluates the impact of plastic surgery on face recognition models. The motivation for conducting the research in this aspect is that plastic surgery treatments not only change the shape and texture of the face but have also increased rapidly in this era. This paper proposes a model based on an artificial neural network with model-agnostic meta-learning (ANN-MAML) for plastic surgery face recognition. This study aims to build a framework for face recognition before and after undergoing plastic surgery based on an artificial neural network. Also, the study seeks to clarify the interaction between facial plastic surgery and facial recognition software to identify the issues involved. The researchers evaluated the proposed ANN-MAML&#39;s performance using the HDA dataset. The experimental results show that the proposed ANN-MAML learning model attained an accuracy of 90% in facial recognition using Rhinoplasty (nose surgery) images, 91% on Blepharoplasty (eyelid surgery) images, 94% on Brow lift (forehead surgery) images, and 92% on Rhytidectomy (facelift) images. Finally, the results of the proposed model were compared with baseline methods, a comparison that showed the superiority of ANN-MAML over the baselines.</p>
					<p><a href="https://lib.jucs.org/article/98674/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/98674/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/98674/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Oct 2023 18:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>PlantKViT: A Combination Model of Vision Transformer and KNN for Forest Plants Classification</title>
		    <link>https://lib.jucs.org/article/94657/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(9): 1069-1089</p>
					<p>DOI: 10.3897/jucs.94657</p>
					<p>Authors: Nguyen Van Hieu, Ngo Le Huy Hien, Luu Van Huy, Nguyen Huy Tuong, Pham Thi Kim Thoa</p>
					<p>Abstract: The natural ecosystem incorporates thousands of plant species, and distinguishing them is normally manual, complicated, and time-consuming. Since the task requires a large amount of expertise, identifying forest plant species relies on the work of a team of botanical experts. The emergence of Machine Learning, especially Deep Learning, has opened up a new approach to plant classification. However, the application of plant classification based on deep learning models remains limited. This paper proposes a model, named PlantKViT, combining the Vision Transformer architecture and the KNN algorithm to identify forest plants. The proposed model provides high efficiency and convenience for adding new plant species. The study experimented with the Resnet-152 and ConvNeXt networks and the PlantKViT model to classify forest plants. The training and evaluation were implemented on the DanangForestPlant dataset, containing 10,527 images and 489 species of forest plants. The accuracy of the proposed PlantKViT model reached 93%, significantly improved compared to the ConvNeXt model at 89% and the Resnet-152 model at only 76%. The authors also successfully developed a website and two applications, called &lsquo;plant id&rsquo; and &lsquo;Danangplant&rsquo;, on the iOS and Android platforms respectively. The PlantKViT model shows potential for forest plant identification not only on the conducted dataset but also worldwide. Future work should gear toward extending the dataset and enhancing the accuracy and performance of forest plant identification.</p>
					<p><a href="https://lib.jucs.org/article/94657/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94657/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94657/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2023 08:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>Cooperative Swarm Intelligence Algorithms for Adaptive Multilevel Thresholding Segmentation of COVID-19 CT-Scan Images</title>
		    <link>https://lib.jucs.org/article/93498/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(7): 759-804</p>
					<p>DOI: 10.3897/jucs.93498</p>
					<p>Authors: Muath Sabha, Thaer Thaher, Marwa M. Emam</p>
					<p>Abstract: The Coronavirus Disease 2019 (COVID-19) is widespread throughout the world and poses a serious threat to public health and safety. A COVID-19 infection can be recognized using computed tomography (CT) scans. To enhance the categorization, some image segmentation techniques are presented to extract regions of interest from COVID-19 CT images. Multi-level thresholding (MLT) is one of the simplest and most effective image segmentation approaches, especially for grayscale images like CT scan images. Traditional image segmentation methods use histogram approaches; however, these approaches encounter some limitations. Swarm-intelligence-inspired meta-heuristic algorithms have now been applied to solve MLT, which is deemed an NP-hard optimization task. Despite the advantages of using meta-heuristics to solve global optimization tasks, each approach has its own drawbacks. However, the common flaw for most meta-heuristic algorithms is that they are unable to maintain the diversity of their population during the search, which means they might not always converge to the global optimum. This study proposes a cooperative swarm intelligence-based MLT image segmentation approach that hybridizes the advantages of parallel meta-heuristics and MLT for developing an efficient image segmentation method for COVID-19 CT images. An efficient cooperative model-based meta-heuristic called CPGH is developed based on three practical algorithms: particle swarm optimization (PSO), grey wolf optimizer (GWO), and Harris hawks optimization (HHO). In the cooperative model, the applied algorithms are executed concurrently, and a number of potential solutions are moved across their populations through a procedure called migration after a set number of generations. The CPGH model can solve the image segmentation problem using MLT image segmentation. The proposed CPGH is evaluated using three objective functions, cross-entropy, Otsu&rsquo;s, and Tsallis, over COVID-19 CT images selected from open-source datasets. Various evaluation metrics covering peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and universal quality image index (UQI) were employed to quantify the segmentation quality. The overall ranking results of the segmentation quality metrics indicate that the performance of the proposed CPGH is better than that of the conventional PSO, GWO, and HHO algorithms and other state-of-the-art methods for MLT image segmentation. On the tested COVID-19 CT images, CPGH offered an average PSNR of 24.8062, SSIM of 0.8818, and UQI of 0.9097 using 20 thresholds.</p>
					<p><a href="https://lib.jucs.org/article/93498/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/93498/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/93498/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jul 2023 16:00:06 +0000</pubDate>
		</item>
	
		<item>
		    <title>A novel deep learning model with the Grey Wolf Optimization algorithm for cotton disease detection</title>
		    <link>https://lib.jucs.org/article/94183/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(6): 595-626</p>
					<p>DOI: 10.3897/jucs.94183</p>
					<p>Authors: Burak Gülmez</p>
					<p>Abstract: Plants are a major part of the ecosystem and are used by humans for various purposes. Cotton is one of the most important of these plants: cotton production is one of the most important sources of income for many countries and farmers in the world. Like other plants and living things, cotton can get diseases, and detecting these diseases is critical. In this study, a deep convolutional neural network model is developed for disease detection from cotton leaves. The model determines from a photograph whether the cotton is healthy or diseased. While establishing the model, care is taken to ensure that it is problem-specific, and the grey wolf optimization algorithm is used to find the most efficient architecture for the model. The proposed model has been compared with the ResNet50, VGG19, and InceptionV3 models that are frequently used in the literature. According to the results obtained, the proposed model has an accuracy value of 1.0, while the other models had accuracy values of 0.726, 0.934, and 0.943, respectively. The proposed model is thus more successful than the other models.</p>
					<p><a href="https://lib.jucs.org/article/94183/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94183/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94183/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Jun 2023 12:00:05 +0000</pubDate>
		</item>
	
		<item>
		    <title>Automated Classification of Cell Level of HEp-2 Microscopic Images Using Deep Convolutional Neural Networks-Based Diameter Distance Features</title>
		    <link>https://lib.jucs.org/article/96293/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(5): 432-445</p>
					<p>DOI: 10.3897/jucs.96293</p>
					<p>Authors: Mitchell Jensen, Khamael Al-Dulaimi, Khairiyah Saeed Abduljabbar, Jasmine Banks</p>
					<p>Abstract: To identify autoimmune diseases in humans, the analysis of HEp-2 staining patterns at cell level is the gold standard for clinical practice research communities. Automating this procedure is a complicated task due to variations in cell densities, sizes, shapes and patterns, overfitting of features, large-scale data volume, stained cells and poor quality of images. Several machine learning methods that analyse and classify HEp-2 cell microscope images currently exist. However, because of those challenges, accuracy is still not at the level required for medical applications and computer-aided diagnosis. The purpose of this work is to automate the classification of HEp-2 stained cells from microscopic images and to improve the accuracy of computer-aided diagnosis. This work proposes a Deep Convolutional Neural Networks (DCNNs) technique to classify HEp-2 cell patterns at cell level into six classes, employing the level-set method via an edge detection technique to segment the HEp-2 cell shape. The DCNNs are designed to identify cell-shape and fundamental distance features related to HEp-2 cell types. The effectiveness of the proposed method is investigated over a benchmarked dataset. The results show that the proposed method is highly superior compared with other methods on the benchmarked dataset and with state-of-the-art methods. The results demonstrate that the proposed method has excellent adaptability across variations in cell densities, sizes, shapes and patterns, overfitting of features, large-scale data volume, and stained cells under different lab environments. The accurate classification of HEp-2 staining patterns at cell level will help increase the accuracy of computer-aided diagnosis in the future.</p>
					<p><a href="https://lib.jucs.org/article/96293/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/96293/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/96293/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 May 2023 18:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>Semi-Supervised Semantic Segmentation for Identification of Irrelevant Objects in a Waste Recycling Plant</title>
		    <link>https://lib.jucs.org/article/87643/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(5): 419-431</p>
					<p>DOI: 10.3897/jucs.87643</p>
					<p>Authors: César Domínguez, Jónathan Heras, Eloy Mata, Vico Pascual, Lucas Fernández-Cedrón, Marcos Martínez-Lanchares, Jon Pellejero-Espinosa, Antonio Rubio-Loscertales, Carlos Tarragona-Pérez</p>
					<p>Abstract: In waste recycling plants, measuring the waste volume and weight at the beginning of the treatment process is key for a better management of resources. This task can be conducted by using orthophoto images, but it is necessary to remove from those images the objects, such as containers or trucks, that are not involved in the measurement process. This work proposes the application of deep learning for the semantic segmentation of those irrelevant objects. Several deep architectures are trained and compared, while three semi-supervised learning methods (PseudoLabeling, Distillation and Model Distillation) are proposed to take advantage of non-annotated images. In these experiments, the U-net++ architecture with an EfficientNetB3 backbone, trained with the set of labelled images, achieves the best overall multi Dice score of 91.23%. The application of semi-supervised learning methods further boosts the segmentation accuracy by between 1.31% and 2.59% on average.</p>
					<p><a href="https://lib.jucs.org/article/87643/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/87643/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/87643/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 May 2023 18:00:02 +0000</pubDate>
		</item>
	
		<item>
		    <title>Feature Fusion and NRML Metric Learning for Facial Kinship Verification</title>
		    <link>https://lib.jucs.org/article/89254/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 29(4): 326-348</p>
					<p>DOI: 10.3897/jucs.89254</p>
					<p>Authors: Fahimeh Ramazankhani, Mahdi Yazdian-Dehkord, Mehdi Rezaeian</p>
					<p>Abstract: Features extracted from facial images are used in various fields such as kinship verification. A kinship verification system determines the kin or non-kin relation between a pair of facial images by analysing their facial features. In this research, different texture and color features have been used along with a metric learning method to verify kinship for the four relations of father-son, father-daughter, mother-son and mother-daughter. First, effective features are fused and NRML metric learning is used to generate a discriminative feature vector; an SVM classifier is then used to verify the kinship relations. To measure the accuracy of the proposed method, the KinFaceW-I and KinFaceW-II databases have been used. The results of the evaluations show that the feature fusion and NRML metric learning methods are able to improve the performance of the kinship verification system. In addition to the proposed approach, the effect of extracting features from image blocks rather than from the whole image is investigated and the results are presented. The results indicate that feature extraction in block form can be effective in improving the final accuracy of kinship verification.</p>
					<p><a href="https://lib.jucs.org/article/89254/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/89254/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/89254/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Apr 2023 12:00:03 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Novel Image Super-Resolution Reconstruction Framework Using the AI Technique of Dual Generator Generative Adversarial Network (GAN)</title>
		    <link>https://lib.jucs.org/article/94134/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 967-983</p>
					<p>DOI: 10.3897/jucs.94134</p>
					<p>Authors: Loveleen Kumar, Manish Jain</p>
					<p>Abstract: Image superresolution (SR) is the process of enlarging and enhancing a low-resolution image. Image superresolution helps in industrial image enhancement, classification, detection, pattern recognition, surveillance, satellite imaging, medical diagnosis, image analytics, etc. It is of utmost importance to keep the features of the low-resolution image intact while enlarging and enhancing it. In this research paper, a framework is proposed that works in three phases and generates superresolution images while keeping low-resolution image features intact and reducing image blurring and artifacts. In the first phase, image enlargement is done, which enlarges the low-resolution image to the 2x/4x scale using two standard algorithms. The second phase enhances the image using an AI-empowered Generative adversarial network (GAN). We have used a GAN with dual generators and named it EffN-GAN (EfficientNet-GAN). Fusion is done in the last phase, wherein the final improved image is generated by fusing the enlarged image and the GAN output image. The fusion phase helps in reducing the artifacts. We have used the DIV2K dataset to train the GAN and further tested the results on the images of the Set5, Set14, B100, Urban100, and Manga109 datasets with ground truth of size 224x224x3. The obtained results were compared with state-of-the-art superresolution approaches based on important image quality parameters, namely Peak signal-to-noise ratio (PSNR), Structural similarity index (SSIM), and Visual information fidelity (VIF). The results show that the proposed framework for generating superresolution images from 2x/4x resolution-downgraded images improves the aforementioned image quality parameters significantly.</p>
					<p><a href="https://lib.jucs.org/article/94134/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94134/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94134/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>X-Ray Image Authentication Scheme Using SLT and Contourlet Transform for Modern Healthcare System</title>
		    <link>https://lib.jucs.org/article/94132/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 916-929</p>
					<p>DOI: 10.3897/jucs.94132</p>
					<p>Authors: Vijay Krishna Pallaw, Kamred Udham Singh</p>
					<p>Abstract: The convenience of networks has created a copyright dilemma for some multimedia works. Nowadays, every healthcare system relies on digital medical images for diagnosis. These medical images are transmitted through communication channels, so there is a risk of tampering and copyright violation. A digital watermarking system can ensure and guarantee that tampering and copyright violation are prevented. This study presents a nonblind digital watermarking approach for X-ray medical images based on the Contourlet transform (C.T.) and the Slantlet Transform (SLT). Since two-dimensional signals are represented flexibly by contourlet transforms, contourlets can be used efficiently to represent curves and smooth contours. At the same time, the SLT has better time-localization and smoothness properties. The maximum energy of an image is concentrated in the LL band when the SLT is employed; therefore, the LL band is used to embed the watermark, using the additive quantization method. The efficiency of our scheme is assessed by different quality parameters and compared with several existing schemes. The results of the experiment show that the proposed scheme performs better and is able to resist several attacks.</p>
					<p><a href="https://lib.jucs.org/article/94132/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94132/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94132/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Color Ultrasound Image Watermarking Scheme Using FRT and Hessenberg Decomposition for Telemedicine Applications</title>
		    <link>https://lib.jucs.org/article/94127/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 882-897</p>
					<p>DOI: 10.3897/jucs.94127</p>
					<p>Authors: Lalan Kumar, Kamred Udham Singh</p>
					<p>Abstract: Watermarking is a valuable technique for verifying medical images obtained through the internet for diagnosis. There is a greater need for security in medical images given ever-increasing security risks. This research presents a Finite Ridgelet Transform (FRT) and Hessenberg decomposition based watermarking scheme for medical images. The suggested paradigm is divided into two stages. Before watermark insertion, the FRT is applied to the medical image. The coefficients are grouped into blocks of 4 x 4 and each block is decomposed using Hessenberg decomposition. The second column of the Q matrix is used to insert the watermark using the additive quantization technique. The results obtained from our experiment show good visual quality of the watermarked images. The high PSNR value of 53.6121 and NC value of 1.0 show that our scheme performs better. Moreover, the performance of our scheme is robust against several attacks. These results imply that the proposed scheme is effective for medical image watermarking.</p>
					<p><a href="https://lib.jucs.org/article/94127/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94127/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94127/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Advances and Practical Applications of Deep and Shallow Machine Learning</title>
		    <link>https://lib.jucs.org/article/80697/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(3): 225-226</p>
					<p>DOI: 10.3897/jucs.80697</p>
					<p>Authors: Michal Choras, Robert Burduk, Rafal Kozik, Jörg Keller</p>
					<p>Abstract: </p>
					<p><a href="https://lib.jucs.org/article/80697/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/80697/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/80697/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Editorial</category>
		    <pubDate>Mon, 28 Mar 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Fastener Classification Using One-Shot Learning with Siamese Convolution Networks</title>
		    <link>https://lib.jucs.org/article/70484/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(1): 80-97</p>
					<p>DOI: 10.3897/jucs.70484</p>
					<p>Authors: Canan Tastimur, Erhan Akin</p>
					<p>Abstract: Deep Learning has been widely used in image-based applications such as object classification, object detection, and object recognition in recent years. Classifying highly similar objects is a very difficult problem: it is hard to classify datasets in which the similarity between objects of different classes is high. In this study, a Siamese Convolution Neural Network, which is a similarity-measurement-based network, has been applied to classify 6 types of screws, 5 types of nuts, and 7 types of bolts that are very similar to each other. This neural network is trained with the One-Shot Learning (OSL) technique. Thanks to the OSL technique, there is no need to use large datasets, nor large amounts of data from each class. Adding a new class to be classified is also made easier by the use of the OSL technique. The performance results of the proposed method are presented in detail in the article.</p>
					<p><a href="https://lib.jucs.org/article/70484/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/70484/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/70484/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jan 2022 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Novel Real-Time Edge-Cloud Big Data Management and Analytics Framework for Smart Cities</title>
		    <link>https://lib.jucs.org/article/71645/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(1): 3-26</p>
					<p>DOI: 10.3897/jucs.71645</p>
					<p>Authors: Roberto Cavicchioli, Riccardo Martoglia, Micaela Verucchi</p>
					<p>Abstract: Exposing city information to dynamic, distributed, powerful, scalable, and user-friendly big data systems is expected to enable the implementation of a wide range of new opportunities; however, the size, heterogeneity and geographical dispersion of data often makes it difficult to combine, analyze and consume them in a single system. In the context of the H2020 CLASS project, we describe an innovative framework aiming to facilitate the design of advanced big-data analytics workflows. The proposal covers the whole compute continuum, from edge to cloud, and relies on a well-organized distributed infrastructure exploiting: a) edge solutions with advanced computer vision technologies enabling the real-time generation of &ldquo;rich&rdquo; data from a vast array of sensor types; b) cloud data management techniques offering efficient storage, real-time querying and updating of the high-frequency incoming data at different granularity levels. We specifically focus on obstacle detection and tracking for edge processing, and consider a traffic density monitoring application, with hierarchical data aggregation features for cloud processing; the discussed techniques will constitute the groundwork enabling many further services. The tests are performed on the real use-case of the Modena Automotive Smart Area (MASA).</p>
					<p><a href="https://lib.jucs.org/article/71645/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/71645/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/71645/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jan 2022 10:30:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Deep Semi-Supervised Image Classification Algorithms: a Survey</title>
		    <link>https://lib.jucs.org/article/77029/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 27(12): 1390-1407</p>
					<p>DOI: 10.3897/jucs.77029</p>
					<p>Authors: Ani Vanyan, Hrant Khachatrian</p>
					<p>Abstract: Semi-supervised learning is a branch of machine learning focused on improving the performance of models when labeled data is scarce but there is access to a large number of unlabeled examples. Over the past five years there has been remarkable progress in designing algorithms which are able to achieve reasonable image classification accuracy with access to the labels for only 0.1% of the samples. In this survey, we describe most of the recently proposed deep semi-supervised learning algorithms for image classification and identify the main trends of research in the field. Next, we compare several components of the algorithms, discuss the challenges of reproducing the results in this area, and highlight recently proposed applications of the methods originally developed for semi-supervised learning.</p>
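					<p>A minimal pseudo-labeling sketch in PyTorch, one of the simplest ideas in this family of methods (a generic illustration, not any specific algorithm from the survey; the model and confidence threshold are assumptions):</p>
					<pre><code>
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, labeled_x, labeled_y, unlabeled_x, threshold=0.95):
    """Supervised loss plus a loss on confident predictions for unlabeled data."""
    sup = F.cross_entropy(model(labeled_x), labeled_y)
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold            # keep only confident pseudo-labels
    unsup = F.cross_entropy(model(unlabeled_x), pseudo, reduction="none")
    return sup + (unsup * mask.float()).mean()

model = torch.nn.Linear(10, 3)              # stand-in classifier
loss = pseudo_label_loss(model, torch.randn(4, 10), torch.tensor([0, 1, 2, 0]),
                         torch.randn(8, 10))
loss.backward()
</code></pre>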
					<p><a href="https://lib.jucs.org/article/77029/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/77029/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/77029/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Dec 2021 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Application of Multi-Descriptor Binary Shape Analysis for Classification of Electronic Parts</title>
		    <link>https://lib.jucs.org/article/24010/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(4): 479-495</p>
					<p>DOI: 10.3897/jucs.2020.025</p>
					<p>Authors: Kamil Maliński, Krzysztof Okarma</p>
					<p>Abstract: The rapidly growing availability of modern electronic and robotic solutions, also for home and amateur use, related to progress in home automation and the popularity of IoT systems, makes it possible for independent researchers and engineers to develop unique hardware solutions, often with the help of 3D printing technology. Although high-speed pick-and-place machines are used in many industrial applications for assembling small surface-mount devices (SMD), especially in the mass production of electronics, there are still applications where the traditional through-hole technology for Printed Circuit Boards (PCB) is utilised, particularly under mechanical, thermal or power conditions that prevent the use of SMD technology. One way of supporting such types of production and prototyping, in some cases with relatively less sophisticated robotic solutions, may be the application of vision systems that classify and recognize electronic parts using shape analysis of their packages, followed by optical recognition of markings. Another application of such methods may be the automatic vision-based verification of assembly quality and of the correct placement of electronic parts after completing the production. In the paper, experimental results obtained using various shape descriptors for the classification of electronic packages are presented. The initial results, obtained for a dedicated database of synthetic images, have also been verified and confirmed for natural images, leading to promising results.</p>
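					<p>A brief sketch of one classical binary shape descriptor of the kind compared in such studies, Hu moments via OpenCV (an illustration only; the paper evaluates several descriptors, and the file name is a placeholder):</p>
					<pre><code>
import cv2
import numpy as np

img = cv2.imread("package.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Seven Hu moments: rotation-, scale- and translation-invariant shape features.
hu = cv2.HuMoments(cv2.moments(binary)).flatten()
hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)        # common log scaling
print(hu)   # feed this 7-vector to any classifier (k-NN, SVM, ...)
</code></pre>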
					<p><a href="https://lib.jucs.org/article/24010/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24010/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24010/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Apr 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Study on Pattern Recognition with the Histograms of Oriented Gradients in Distorted and Noisy Images</title>
		    <link>https://lib.jucs.org/article/24009/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(4): 454-478</p>
					<p>DOI: 10.3897/jucs.2020.024</p>
					<p>Authors: Andrzej Bukała, Michał Koziarski, Bogusław Cyganek, Osman Koç, Alperen Kara</p>
					<p>Abstract: Histograms of oriented gradients (HOG) are still one of the most frequently used low-level features for pattern recognition in images. Despite their great popularity and simple implementation, the performance of HOG features has almost always been measured on relatively high-quality data, far from real conditions. To fill this gap, we experimentally evaluate their performance in more realistic conditions, based on images affected by different types of noise, such as Gaussian, quantization, and salt-and-pepper, as well as on images distorted by occlusions. Different scenarios were tested, such as including distortions during training as well as applying a proper denoising method in the recognition stage. As underpinned by the experimental results, the negative impact of distortions and noise on object recognition with HOG features can be significantly reduced by employing a proper denoising strategy.</p>
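					<p>A minimal sketch of comparing HOG features with and without denoising, using scikit-image and OpenCV (illustrative only; the parameters are assumptions, not the paper's settings):</p>
					<pre><code>
import cv2
import numpy as np
from skimage.feature import hog

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)    # placeholder path

# Add salt-and-pepper noise, then denoise with a median filter.
noisy = img.copy()
mask = np.random.rand(*img.shape)
noisy[mask < 0.02] = 0
noisy[mask > 0.98] = 255
denoised = cv2.medianBlur(noisy, 3)

# HOG descriptors of the noisy and denoised versions can then be compared.
f_noisy = hog(noisy, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
f_clean = hog(denoised, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print(np.linalg.norm(f_noisy - f_clean))
</code></pre>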
					<p><a href="https://lib.jucs.org/article/24009/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24009/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24009/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Apr 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Convolutional Neural Networks and Transfer Learning Based Classification of Natural Landscape Images</title>
		    <link>https://lib.jucs.org/article/23999/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(2): 244-267</p>
					<p>DOI: 10.3897/jucs.2020.014</p>
					<p>Authors: Damir Krstinić, Maja Braović, Dunja Božić-Štulic</p>
					<p>Abstract: Natural landscape image classification is a difficult problem in computer vision. Many classes that can be found in such images are often ambiguous and can easily be confused with each other (e.g. smoke and fog), and not just by a computer algorithm, but by a human as well. Since natural landscape video surveillance has become relatively pervasive in recent years, in this paper we focus on the classification of natural landscape images taken mostly from forest fire monitoring towers. Since these images usually lack the usual low- and middle-level features (e.g. sharp edges and corners), and since their quality is degraded by atmospheric conditions, the already difficult problem of natural landscape classification becomes even more challenging. In this paper we tackle the problem of automatic natural landscape classification by proposing and evaluating a classifier based on a pretrained deep convolutional neural network and transfer learning.</p>
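					<p>A minimal transfer-learning sketch with torchvision (a generic illustration of the approach, not the authors' architecture; the backbone and class count are assumptions):</p>
					<pre><code>
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace only the classifier head.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False                  # freeze the pretrained features
net.fc = nn.Linear(net.fc.in_features, 5)    # e.g. 5 landscape classes (assumed)

# Only net.fc.parameters() are then trained on the landscape dataset.
</code></pre>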
					<p><a href="https://lib.jucs.org/article/23999/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23999/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23999/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Feb 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Quality Assessment of Photographed 3D Printed Flat Surfaces Using Hough Transform and Histogram Equalization</title>
		    <link>https://lib.jucs.org/article/22621/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 701-717</p>
					<p>DOI: 10.3217/jucs-025-06-0701</p>
					<p>Authors: Jarosław Fastowicz, Krzysztof Okarma</p>
					<p>Abstract: Automatic visual quality assessment of objects created using additive manufacturing processes is one of the hot topics of the Industry 4.0 era. As 3D printing becomes more and more popular, also for everyday home use, a reliable visual quality assessment of printed surfaces attracts great interest. One of the most obvious reasons is the possibility of saving time and filament when low printing quality is detected, as well as the possibility of correcting some smaller imperfections during the printing process. The novel method presented in the paper can be successfully applied for the assessment of flat surfaces, almost independently of the filament's colour. It utilizes the assumption that the layers visible on high-quality printed surfaces appear as regular straight lines, which can be extracted using the Hough transform. However, for various colours of filaments, some preprocessing operations should be conducted to allow proper line detection for various samples. In the proposed method, additional brightness compensation has been used together with the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. Results obtained for a database of 88 photos of 3D printed samples, together with their scans, are encouraging and allow a reliable quality assessment of 3D printed surfaces for various colours of filaments.</p>
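					<p>A short sketch of the CLAHE-plus-Hough idea in OpenCV (an illustration of the pipeline described above, not the authors' exact parameters; the file name is a placeholder):</p>
					<pre><code>
import cv2
import numpy as np

gray = cv2.imread("printed_surface.png", cv2.IMREAD_GRAYSCALE)  # placeholder

# Contrast Limited Adaptive Histogram Equalization compensates uneven brightness.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq = clahe.apply(gray)

# High-quality prints show layer lines as regular straight lines.
edges = cv2.Canny(eq, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=5)
n = 0 if lines is None else len(lines)
print("detected layer lines:", n)   # few or irregular lines suggest low quality
</code></pre>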
					<p><a href="https://lib.jucs.org/article/22621/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22621/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22621/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Synthetic Image Translation for Football Players Pose Estimation</title>
		    <link>https://lib.jucs.org/article/22619/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 683-700</p>
					<p>DOI: 10.3217/jucs-025-06-0683</p>
					<p>Authors: Michał Sypetkowski, Grzegorz Sarwas, Tomasz Trzciński</p>
					<p>Abstract: In this paper, we present an approach for football player pose estimation on very low-resolution images. The camera recording the football match is far away from the pitch in order to register at least half of it. As a result, even using very high resolution cameras, the image area presenting every single player is very small. Additionally, variable weather conditions, shadows and reflections make this task very hard. Such images are very hard for a human to annotate. In our research we assume a lack of manually annotated training data from our target distribution. Instead of manually annotating a large dataset, we create a simple Python script for rendering synthetic images with perfect annotations. We then train a vanilla CycleGAN (Cycle-consistent Generative Adversarial Network) to transform the raw synthetic images into more realistic ones. We use the transformed images to train a CPN (Cascaded Pyramid Network) model. Without bells and whistles, we achieve similar precision on our images as the same CPN model trained on the COCO (Common Objects in Context) keypoints dataset.</p>
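					<p>The core of CycleGAN training is the cycle-consistency term; a minimal PyTorch sketch (a generic illustration, not the paper's training code; G and F stand for the two generators):</p>
					<pre><code>
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency(G, F, synthetic, real, lam=10.0):
    """G: synthetic -> realistic, F: realistic -> synthetic.
    Translating forth and back should reproduce the input image."""
    loss_syn = l1(F(G(synthetic)), synthetic)
    loss_real = l1(G(F(real)), real)
    return lam * (loss_syn + loss_real)     # added to the adversarial losses
</code></pre>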
					<p><a href="https://lib.jucs.org/article/22619/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22619/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22619/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Hybrid Stochastic GA-Bayesian Search for Deep Convolutional Neural Network Model Selection</title>
		    <link>https://lib.jucs.org/article/22617/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 647-666</p>
					<p>DOI: 10.3217/jucs-025-06-0647</p>
					<p>Authors: Waseem Rawat, Zenghui Wang</p>
					<p>Abstract: In recent years, deep convolutional neural networks (DCNNs) have delivered notable successes in visual tasks, and in particular in image-classification-related applications. However, they are sensitive to the selection of their architectural and learning hyperparameters, which impose an exponentially large search space on modern DCNN models. Traditional hyperparameter selection methods include manual model tuning and grid or random search, but these require expert domain knowledge or are computationally burdensome. On the other hand, Bayesian optimization and evolutionarily inspired techniques have surfaced as viable alternatives to the hyperparameter problem. In this work, an alternative automated system that combines the advantages of evolutionary processes and state-of-the-art Bayesian optimization is proposed. Specifically, the search space is first partitioned into a discrete architectural subspace and continuous and categorical learning-parameter subspaces, which are then efficiently traversed by a stochastic genetic search applied to the former, combined with a genetic-Bayesian search of the latter. Several sequential experiments on prominent image classification tasks reveal that the proposed method results in overall classification accuracy improvements over several well-established techniques, and in significant computational cost reductions compared to brute-force computation.</p>
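					<p>A toy sketch of the split search space: a genetic step over discrete architectural genes combined with a proposal over the continuous learning-rate gene (heavily simplified; the encoding is an assumption, and the random perturbation stands in for the Bayesian step):</p>
					<pre><code>
import random

def mutate(genome, p=0.2):
    """Genome: discrete architecture genes + a continuous learning-rate gene."""
    g = dict(genome)
    if random.random() < p:
        g["n_layers"] = random.choice([2, 3, 4, 5])
    if random.random() < p:
        g["filters"] = random.choice([16, 32, 64])
    # Continuous subspace: perturb log-uniformly (stand-in for the Bayesian step).
    g["lr"] = min(1e-1, max(1e-5, g["lr"] * 10 ** random.uniform(-0.5, 0.5)))
    return g

population = [{"n_layers": 3, "filters": 32, "lr": 1e-3} for _ in range(8)]
offspring = [mutate(g) for g in population]
print(offspring[0])
</code></pre>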
					<p><a href="https://lib.jucs.org/article/22617/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22617/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22617/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Fast Binarization of Unevenly Illuminated Document Images Based on Background Estimation for Optical Character Recognition Purposes</title>
		    <link>https://lib.jucs.org/article/22616/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 627-646</p>
					<p>DOI: 10.3217/jucs-025-06-0627</p>
					<p>Authors: Hubert Michalak, Krzysztof Okarma</p>
					<p>Abstract: One of the key operations during the image preprocessing step in Optical Character Recognition (OCR) algorithms is image binarization. Although for uniformly illuminated images, obtained typically by flatbed scanners, the use of a single global threshold may be sufficient for further recognition of individual characters, it cannot be applied directly to non-uniformly illuminated document images. Such a problem may occur when capturing photos of documents in unknown lighting conditions, making proper text recognition impossible in some parts of the image. Since the application of popular adaptive thresholding methods, e.g. Niblack, Sauvola and their modifications, based on the analysis of the neighbourhood of each pixel, is time consuming, a faster solution might be the division of images into blocks or the elimination of the non-uniform background. Such an approach can be considered a balanced solution filling the gap between global and local adaptive thresholding. The solution proposed in the paper, useful also for various mobile devices due to its limited computational requirements, is based on the approximation of the lighting distribution of the background using reduced-resolution images. The proposed method allows very good OCR results to be obtained, superior to typical adaptive binarization algorithms both in terms of the resulting OCR accuracy and computational efficiency.</p>
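					<p>A compact sketch of background-estimation binarization in OpenCV (illustrative only; the downscale factor and smoothing are assumptions, not the paper's values; the file name is a placeholder):</p>
					<pre><code>
import cv2

gray = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path
h, w = gray.shape

# Estimate the illumination background from a strongly reduced image.
small = cv2.resize(gray, (w // 16, h // 16), interpolation=cv2.INTER_AREA)
background = cv2.resize(cv2.GaussianBlur(small, (5, 5), 0), (w, h),
                        interpolation=cv2.INTER_LINEAR)

# Divide out the background, after which a single global threshold suffices.
normalized = cv2.divide(gray, background, scale=255)
_, binary = cv2.threshold(normalized, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("binary.png", binary)
</code></pre>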
					<p><a href="https://lib.jucs.org/article/22616/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22616/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22616/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Improving Person Re-identification by Segmentation-Based Detection Bounding Box Filtering</title>
		    <link>https://lib.jucs.org/article/22615/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 611-626</p>
					<p>DOI: 10.3217/jucs-025-06-0611</p>
					<p>Authors: Dominik Pieczyński, Marek Kraft, Michał Fularz</p>
					<p>Abstract: In this paper, a method for improving the quality of person re-identification results is presented. The method is based on the assumption that including segmentation information in the re-identification pipeline discards the automated detections that are of poor quality due to occlusions, misplaced regions of interest (ROI), multiple persons found within a single ROI, etc., using a simple check of the segment count, bounding box fill rate and aspect ratio. Assuming that a joint detector-segmenter approach is used, the additional cost associated with the use of the proposed approach is very low.</p>
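					<p>The described filter reduces to a few cheap checks per detection; a sketch (the thresholds are assumptions chosen for illustration):</p>
					<pre><code>
def keep_detection(n_segments, mask_area, box_w, box_h,
                   min_fill=0.4, min_ar=1.2, max_ar=4.0):
    """Discard detections with multiple person segments, poorly filled
    bounding boxes, or implausible aspect ratios."""
    if n_segments != 1:
        return False                       # empty box or several persons
    if mask_area / float(box_w * box_h) < min_fill:
        return False                       # box poorly filled by the mask
    ar = box_h / float(box_w)
    return min_ar <= ar <= max_ar          # standing-person aspect ratio

print(keep_detection(1, 5200, 80, 160))    # True: plausible single person
print(keep_detection(2, 9000, 120, 160))   # False: two persons in one ROI
</code></pre>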
					<p><a href="https://lib.jucs.org/article/22615/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22615/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22615/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Detection of Potholes Using a Deep Convolutional Neural Network</title>
		    <link>https://lib.jucs.org/article/23529/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 24(9): 1244-1257</p>
					<p>DOI: 10.3217/jucs-024-09-1244</p>
					<p>Authors: Lim Suong, Kwon Jangwoo</p>
					<p>Abstract: Poor road conditions like cracks and potholes can cause inconvenience to passengers, damage to vehicles, and accidents. Detecting those obstacles has become relevant due to the rise of autonomous vehicles. Although previous studies used various sensors and applied different image processing techniques, performance is still significantly lacking, especially when compared to the tremendous leaps in performance with computer vision and deep learning. This research addresses this issue with the help of deep learning-based techniques. We applied the You Only Look Once version 2 (YOLOv2) detector and propose a deep convolutional neural network (CNN) based on YOLOv2 with a different architecture and two models. Despite a limited amount of learning data and the challenging nature of pothole images, our proposed architecture is able to obtain a significant increase in performance over YOLOv2 (from 60.14% to 82.43% average precision).</p>
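					<p>Detector quality here is reported as average precision, which rests on intersection-over-union (IoU) matching; a minimal IoU helper for orientation (boxes as x1, y1, x2, y2):</p>
					<pre><code>
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 0.1428...
</code></pre>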
					<p><a href="https://lib.jucs.org/article/23529/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23529/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23529/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Sep 2018 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Real Time Path Finding for Assisted Living Using Deep Learning</title>
		    <link>https://lib.jucs.org/article/23150/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 24(4): 475-487</p>
					<p>DOI: 10.3217/jucs-024-04-0475</p>
					<p>Authors: Ugnius Malūkas, Rytis Maskeliūnas, Robertas Damaševičius, Marcin Woźniak</p>
					<p>Abstract: The paper presents a computer vision based system which performs real-time path finding for visually impaired or blind people. The semantic segmentation of camera images is performed using a deep convolutional neural network (CNN), which is able to recognize patterns across the image feature space. Of the three CNN architectures analysed (AlexNet, GoogLeNet and VGG), the fully connected VGG16 neural network is shown to perform best in the semantic segmentation task. The algorithm for extracting and finding paths, obstacles and path boundaries is presented. The experiments, performed using our own dataset (300 images extracted from two hours of video recorded while walking in an outdoor environment), show that the developed system is able to find paths, path objects and path boundaries with an accuracy of 96.1 ± 2.6%.</p>
					<p><a href="https://lib.jucs.org/article/23150/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23150/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23150/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Apr 2018 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>The Bag-of-Words Method with Different Types of Image Features and Dictionary Analysis</title>
		    <link>https://lib.jucs.org/article/23143/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 24(4): 357-371</p>
					<p>DOI: 10.3217/jucs-024-04-0357</p>
					<p>Authors: Marcin Gabryel</p>
					<p>Abstract: Algorithms from the field of computer vision are widely applied in various fields, including security, monitoring and automation, but also in multimodal human-computer interaction, where they are used for face detection, body tracking and object recognition. Designing algorithms that reliably perform these tasks with limited computing resources, in the presence of nearby people and objects in the background, changes in illumination, and camera pose variations, is a huge challenge for the field. Many of these problems are addressed with different classification methods. One of the many image classification algorithms is Bag-of-Words (BoW). Originally, the classic BoW algorithm was used mainly for natural language processing, so its direct application to computer vision problems may not be effective enough. The algorithm presented in this article contains a number of modifications that facilitate the use of many types of characteristic features extracted from an image, image representation analysis, and an adaptive clustering algorithm for creating a dictionary of image features. These modifications affect the classification results, which was confirmed in the experimental research.</p>
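					<p>A condensed Bag-of-Words pipeline for images (a generic illustration with ORB features and k-means, not the article's modified variant; paths and parameters are assumptions):</p>
					<pre><code>
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create()

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    return des

# 1) Build the visual dictionary from all training descriptors.
train_paths = ["img1.png", "img2.png"]                 # placeholder paths
all_des = np.vstack([descriptors(p) for p in train_paths]).astype(np.float32)
kmeans = KMeans(n_clusters=50, n_init=10).fit(all_des)

# 2) Represent an image as a histogram of visual words.
def bow_histogram(path):
    words = kmeans.predict(descriptors(path).astype(np.float32))
    return np.bincount(words, minlength=50) / len(words)

print(bow_histogram("img1.png"))   # feed to any classifier
</code></pre>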
					<p><a href="https://lib.jucs.org/article/23143/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23143/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23143/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Apr 2018 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>On Real-valued Visual Cryptographic Basis Matrices</title>
		    <link>https://lib.jucs.org/article/23749/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 21(12): 1536-1562</p>
					<p>DOI: 10.3217/jucs-021-12-1536</p>
					<p>Authors: Neil Buckley, Atulya Nagar, Subramanian Arumugam</p>
					<p>Abstract: Visual cryptography (VC) encodes an image into noise-like shares, which can be stacked to reveal a reduced-quality version of the original. The problem with encrypting colour images is that they must undergo heavy pre-processing to reduce them to binary, entailing significant quality loss. This paper proposes VC that works directly on intermediate grayscale values per colour channel and demonstrates real-valued basis matrices for this purpose. The resulting stacked shares produce a clearer reconstruction than in binary VC and, to the best of the authors' knowledge, this is the first method posing no restrictions on colour values while maintaining the ability to decrypt with human vision. Grayscale and colour images of differing entropies are encrypted using fuzzy OR and XOR, and their PSNR and structural similarities are compared with binary VC to demonstrate improved quality. The method is compared with previous research and its advantages highlighted, notably high-quality reconstructions with minimal processing.</p>
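					<p>A toy sketch of grayscale share generation and XOR reconstruction (a drastic simplification for intuition only, essentially a one-time pad; it is not the paper's real-valued basis-matrix construction):</p>
					<pre><code>
import numpy as np

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in image

# Share 1 is random noise; share 2 hides the secret relative to share 1.
share1 = rng.integers(0, 256, secret.shape, dtype=np.uint8)
share2 = np.bitwise_xor(secret, share1)

recovered = np.bitwise_xor(share1, share2)
assert np.array_equal(recovered, secret)   # XOR stacking restores the image
</code></pre>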
					<p><a href="https://lib.jucs.org/article/23749/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23749/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23749/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Nov 2015 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Video Semantic Analysis Framework based on Run-time Production Rules - Towards Cognitive Vision</title>
		    <link>https://lib.jucs.org/article/23266/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 21(6): 856-870</p>
					<p>DOI: 10.3217/jucs-021-06-0856</p>
					<p>Authors: Alejandro Zambrano, Carlos Toro, Marcos Nieto, Ricardo Sotaquira, Cesar Sanín, Edward Szczerbicki</p>
					<p>Abstract: This paper proposes a service-oriented architecture for video analysis which separates object detection from event recognition. Our aim is to introduce new tools to be considered on the pathway towards Cognitive Vision, as a support for the classical Computer Vision techniques that have been broadly used by the scientific community. In the article, we particularly focus on solving some of the reported scalability issues found in current Computer Vision approaches by introducing an experience-based approach built on the Set of Experience Knowledge Structure (SOEKS). In our proposal, object detection takes place client-side, while event recognition takes place server-side. In order to implement our approach, we introduce a novel architecture that aims at recognizing events defined by a user using production rules (a part of the SOEKS model) and the detections made by the client using their own algorithms for visual recognition. In order to test our methodology, we present a case study showing the scalability enhancements provided.</p>
					<p><a href="https://lib.jucs.org/article/23266/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23266/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23266/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 1 Jun 2015 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Compression Algorithm for Managing Digital Elevation Models in Mobile Devices</title>
		    <link>https://lib.jucs.org/article/23564/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 20(10): 1433-1442</p>
					<p>DOI: 10.3217/jucs-020-10-1433</p>
					<p>Authors: Rolando Quintero, Giovanni Guzman, Miguel Torres, Rolando Menchaca-Mendez, Marco Moreno-Ibarra, Felix Mata</p>
					<p>Abstract: Nowadays, there are many applications, such as disaster mitigation, surveying or geology support, where Digital Elevation Models (DEM) are useful in the field. A DEM typically requires a huge amount of data, making the tasks of DEM transmission over a wireless network, or of storing and displaying it on a mobile device, very complex. These tasks are important challenges in computer science research. To date, compression techniques have been used to compress DEMs with a high compression coefficient; nevertheless, if the user requires access to the file or to certain information about the raster data, it is necessary to decompress the entire DEM file. In consequence, these approaches are not well suited for applications focused on devices with limited hardware resources. In this paper, a novel compression/decompression technique is presented. This approach is capable of obtaining specific parameters, such as altitudes and contour lines of a sub-region of the DEM, without a full decompression stage. A detailed analysis of the properties and complexity of our approach is presented.</p>
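					<p>The key property, region access without full decompression, can be illustrated with a tiled scheme (a toy sketch, not the paper's algorithm; the tile size is an assumption):</p>
					<pre><code>
import zlib
import numpy as np

TILE = 64
dem = np.random.default_rng(0).integers(0, 4000, (512, 512)).astype(np.int16)

# Compress each tile independently so sub-regions can be decoded alone.
tiles = {(r, c): zlib.compress(dem[r:r + TILE, c:c + TILE].tobytes())
         for r in range(0, 512, TILE) for c in range(0, 512, TILE)}

def altitude(row, col):
    """Decode only the single tile containing (row, col)."""
    r, c = (row // TILE) * TILE, (col // TILE) * TILE
    tile = np.frombuffer(zlib.decompress(tiles[(r, c)]),
                         dtype=np.int16).reshape(TILE, TILE)
    return tile[row - r, col - c]

assert altitude(100, 200) == dem[100, 200]
</code></pre>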
					<p><a href="https://lib.jucs.org/article/23564/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23564/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23564/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Oct 2014 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>An Approach to Skew Detection of Printed Documents</title>
		    <link>https://lib.jucs.org/article/23103/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 20(4): 488-506</p>
					<p>DOI: 10.3217/jucs-020-04-0488</p>
					<p>Authors: Darko Brodić, Carlos A. B. Mello, Čedomir Maluckov, Zoran Milivojevic</p>
					<p>Abstract: In this paper, we propose an approach to estimate the text skew of printed documents. This is an important step for preventing errors in further stages of an automatic document processing system (such as text segmentation). Our approach is based on the statistical analysis of the height of the connected components. In a nutshell, our algorithm comprises four steps: (i) removal of redundant data; (ii) establishment of the connected components, which represent filled convex hulls around each text element; (iii) enlargement of these components using oriented morphological erosion; (iv) extraction of the longest connected component to obtain a first estimation of the text skew. Statistical moments are applied to this longest component to evaluate its orientation, and the global text skew of the document is identified. At the end of this process, the original document is rotated back by the calculated angle. The performance of the proposed algorithm is examined by testing on a custom dataset. The results support the robustness of our approach.</p>
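					<p>A compact sketch of estimating a component's orientation from statistical moments, the core measurement in the final step (illustrative only, using OpenCV moments rather than the authors' code):</p>
					<pre><code>
import cv2
import numpy as np

def orientation_deg(binary_component):
    """Orientation of a binary blob from second-order central moments."""
    m = cv2.moments(binary_component, binaryImage=True)
    # Standard axis-of-least-inertia formula.
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return np.degrees(theta)

# A synthetic slanted stripe should report its own slant.
img = np.zeros((200, 200), np.uint8)
cv2.line(img, (20, 150), (180, 110), 255, 9)
print(round(orientation_deg(img), 1))   # approx. -14 degrees
</code></pre>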
					<p><a href="https://lib.jucs.org/article/23103/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23103/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23103/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 1 Apr 2014 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Non-Marker based Mobile Augmented Reality and its Applications using Object Recognition</title>
		    <link>https://lib.jucs.org/article/23981/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(20): 2832-2850</p>
					<p>DOI: 10.3217/jucs-018-20-2832</p>
					<p>Authors: Daewon Kim, Doosung Hwang</p>
					<p>Abstract: As augmented reality technology has become more pervasive and applicable, it is easily seen in our daily lives regardless of field and scope. Existing camera-vision-based augmented reality techniques depend on marker-based approaches rather than real-world information. Augmented reality based on marker recognition is limited in its applicability and in providing an environment that guarantees the user's immersion in the relevant service applications. This study aims to implement an augmented reality technology for smart mobile terminals that uses the device's built-in camera together with image and video processing, without any markers, so that users can recognize multimedia objects from real-world images and build an augmented reality service in which 3D content connected to objects, and relevant information, are added to the real-world image. Object recognition from a real-world image involves comparison against preregistered reference information, where the similarity computation is reduced for faster running of the application, considering the characteristics of smart mobile devices. Furthermore, the design allows users to interact through touch events on the smart device after the 3D content is rendered on the terminal screen. Afterwards, users can browse object-related information on the web. The proposed augmented reality technology, appropriate for the smart mobile environment, was tested through several experiments and showed reliable performance.</p>
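					<p>Marker-less recognition against preregistered references is commonly done with local feature matching; a minimal OpenCV sketch (a generic illustration, not the authors' similarity measure; the file names are placeholders):</p>
					<pre><code>
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ref = cv2.imread("reference_object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

_, des_ref = orb.detectAndCompute(ref, None)
_, des_scene = orb.detectAndCompute(scene, None)

# Count good matches as a cheap similarity score for the mobile setting.
matches = bf.match(des_ref, des_scene)
good = [m for m in matches if m.distance < 40]
print("object recognized" if len(good) > 25 else "no match")
</code></pre>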
					<p><a href="https://lib.jucs.org/article/23981/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23981/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23981/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 1 Dec 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Low Complexity H.264/AVC Intraframe Coding for Wireless Multimedia Sensor Network</title>
		    <link>https://lib.jucs.org/article/23456/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(9): 1177-1193</p>
					<p>DOI: 10.3217/jucs-018-09-1177</p>
					<p>Authors: Xingang Liu, Jiantan Liu, Kook-Yeol Yoo, Haengrae Cho</p>
					<p>Abstract: For Wireless Multimedia Sensor Networks (WMSN), intraframe video coding is widely used because of its robust transmission and low computational complexity. Although the intraframe algorithm requires much less computation than interframe coding, the amount of computation should be reduced further for WMSN applications. In this paper, we propose an intra mode decision algorithm to reduce the computational complexity of intraframe H.264/AVC encoders. The proposed algorithm determines the candidate modes and skips the remaining modes based on the smoothness and directional similarity of each macroblock (MB). The simulation results show that the proposed algorithm achieves an 18% to 70% reduction in computational complexity compared with various conventional methods.</p>
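					<p>A toy sketch of gradient-based candidate mode pruning (a simplification for intuition; the thresholds and the mode mapping are assumptions, not the paper's decision rules):</p>
					<pre><code>
import numpy as np

def candidate_modes(mb):
    """Heuristically pick candidate H.264 intra modes for a 16x16 macroblock."""
    gy, gx = np.gradient(mb.astype(float))
    if np.hypot(gx, gy).mean() < 2.0:
        return ["DC"]                       # smooth block: DC prediction suffices
    if np.abs(gx).mean() > np.abs(gy).mean():
        # Intensity varies along x, so each column is internally constant:
        # copying from the row above (vertical mode) fits well.
        return ["vertical"]
    return ["horizontal"]

mb = np.tile(np.arange(16), (16, 1)) * 8    # intensity ramp along x
print(candidate_modes(mb))                  # ['vertical']
</code></pre>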
					<p><a href="https://lib.jucs.org/article/23456/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23456/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23456/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 1 May 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Hierarchical Graph-Grammar Model for Secure and Efficient Handwritten Signatures Classification</title>
		    <link>https://lib.jucs.org/article/29945/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(6): 926-943</p>
					<p>DOI: 10.3217/jucs-017-06-0926</p>
					<p>Authors: Marcin Piekarczyk, Marek Ogiela</p>
					<p>Abstract: One important subject associated with personal authentication capabilities is the analysis of handwritten signatures. Among the many known techniques, algorithms based on linguistic formalisms are also possible. However, such techniques require a number of algorithms for intelligent image analysis to be applied, allowing the development of new solutions in the field of personal authentication and the building of modern security systems based on the advanced recognition of such patterns. The article presents an approach based on the use of syntactic methods for the static analysis of handwritten signatures. The graph linguistic formalisms applied, such as the IE graph and ETPL(k) grammar, are characterised by considerable descriptive strength and a polynomial-time membership problem for syntactic analysis. For the purpose of representing the analysed handwritten signatures, new hierarchical (two-layer) HIE graph structures based on IE graphs have been defined. The two-layer graph description makes it possible to take into consideration both local and global features of the signature. The use of attributed graphs enables the storage of additional semantic information describing the properties of individual signature strokes. The verification and recognition of a signature consists in analysing whether its graph description belongs to the language describing the specimen database. Initial assessments show an average precision of just under 75%.</p>
					<p><a href="https://lib.jucs.org/article/29945/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29945/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29945/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Mar 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Color Image Restoration Using Neural Network Model</title>
		    <link>https://lib.jucs.org/article/29882/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(1): 107-125</p>
					<p>DOI: 10.3217/jucs-017-01-0107</p>
					<p>Authors: Satyadhyan Chickerur, Aswatha M</p>
					<p>Abstract: A neural network learning approach for color image restoration is discussed in this paper, and one possible solution for restoring images is presented. Here, neural network weights are treated as regularization parameter values instead of being specified explicitly. The weights are modified during training through the supply of training set data. The desired response of the network is the estimated value of the current pixel. This estimated value is used to modify the network weights such that the restored value produced by the network for a pixel is as close as possible to the desired response. One advantage of the proposed approach is that, once the neural network is trained, images can be restored without prior information about the noise/blurring model with which the image is corrupted.</p>
					<p><a href="https://lib.jucs.org/article/29882/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29882/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29882/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 1 Jan 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Fusion of Complementary Online and Offline Strategies for Recognition of Handwritten Kannada Characters</title>
		    <link>https://lib.jucs.org/article/29880/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(1): 81-93</p>
					<p>DOI: 10.3217/jucs-017-01-0081</p>
					<p>Authors: Rakesh Rampalli, Angarai Ramakrishnan</p>
					<p>Abstract: This work describes an online handwritten character recognition system working in combination with an offline recognition system. The online input data is also converted into an offline image and recognized in parallel by both online and offline strategies. Features are proposed for offline recognition, and a disambiguation step is employed in the offline system for samples for which the confidence level of the classifier is low. The outputs are then combined probabilistically, resulting in a classifier out-performing both individual systems. Experiments are performed for Kannada, a South Indian language, over a database of 295 classes. The accuracy of the online recognizer improves by 11% when the combination with the offline system is used.</p>
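					<p>A minimal sketch of probabilistic classifier fusion of the kind described (a generic log-linear pooling of posteriors; the weight is an assumption, not the paper's combination rule):</p>
					<pre><code>
import numpy as np

def fuse(p_online, p_offline, w=0.5):
    """Combine per-class posteriors from two recognizers (log-linear pooling)."""
    logp = w * np.log(p_online + 1e-12) + (1 - w) * np.log(p_offline + 1e-12)
    p = np.exp(logp - logp.max())
    return p / p.sum()

p_on = np.array([0.60, 0.30, 0.10])     # online recognizer posteriors
p_off = np.array([0.20, 0.70, 0.10])    # offline recognizer posteriors
print(fuse(p_on, p_off).argmax())       # fused decision: class 1
</code></pre>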
					<p><a href="https://lib.jucs.org/article/29880/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29880/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29880/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 1 Jan 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A New Approach to Water Flow Algorithm for Text Line Segmentation</title>
		    <link>https://lib.jucs.org/article/29875/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(1): 30-47</p>
					<p>DOI: 10.3217/jucs-017-01-0030</p>
					<p>Authors: Darko Brodić, Zoran Milivojevic</p>
					<p>Abstract: This paper proposes a new approach to the water flow algorithm for text line segmentation. The original method assumes hypothetical water flowing, under a few specified angles, across the document image frame from left to right and vice versa. As a result, unwetted image regions are extracted. These areas are of major importance for text line segmentation. The proposed modifications extend the allowed values of the water flow angle and enlarge the unwetted image regions. The results are encouraging, as they improve text line segmentation, which is the most challenging stage in document image processing.</p>
					<p><a href="https://lib.jucs.org/article/29875/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29875/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29875/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 1 Jan 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Pragmatic Qualitative Approach for Juxtaposing Shapes</title>
		    <link>https://lib.jucs.org/article/29698/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(11): 1410-1424</p>
					<p>DOI: 10.3217/jucs-016-11-1410</p>
					<p>Authors: Lledó Museros, Luis González-Abril, Francisco Velasco, Zoe Falomir</p>
					<p>Abstract: This paper presents a qualitative shape description scheme, defined in order to provide a formal theory that allows the construction of new shapes from a set of given shapes by using a juxtaposition operation. Specifically, the scheme is a pragmatic one, since it has been defined to be applied in the automatic and intelligent assembling of trencadís mosaics.</p>
					<p><a href="https://lib.jucs.org/article/29698/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29698/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29698/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 1 Jun 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Gabor Filter Aided 3D Ultra-Sonography Diagnosis System with WLAN Transmission Consideration</title>
		    <link>https://lib.jucs.org/article/29691/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(10): 1327-1342</p>
					<p>DOI: 10.3217/jucs-016-10-1327</p>
					<p>Authors: Wei-Ming Chen, Chi-Hsiang Lo, Han-Chieh Chao, Chun-Cheng Chang</p>
					<p>Abstract: A Gabor filter aided diagnosis system for 3-dimensional ultra-sonography (3DUS) under the WLAN environment is introduced. Due to the important relationship between breast tumour surface features and internal architecture, our system uses 3D inter-pixel correlations instead of 2D features. Gabor filters provide a multi-resolution representation of texture, which increases the capability of ultrasound technology in the differential diagnosis of solid breast tumours. Our experiments show that the performance of the proposed diagnostic method is effective. Moreover, physicians manipulate our diagnostic system using hand-held devices in the hospital. Because WLAN is unstable, our system must ensure good transmission quality. We therefore also focus on transmission control strategies that adapt to time-varying wireless network conditions, and analyze these strategies using competitive analysis techniques. The experiments show that the algorithms' performance is effective.</p>
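					<p>A short sketch of a Gabor filter bank for multi-resolution texture features with OpenCV (illustrative parameters, not the system's settings; the file name is a placeholder):</p>
					<pre><code>
import cv2
import numpy as np

img = cv2.imread("ultrasound_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder
img = img.astype(np.float32) / 255.0

features = []
for theta in np.arange(0, np.pi, np.pi / 4):        # 4 orientations
    for lambd in (4.0, 8.0):                        # 2 scales
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=lambd, gamma=0.5, psi=0)
        response = cv2.filter2D(img, cv2.CV_32F, kernel)
        features.append(response.mean())            # simple texture statistic

print(np.round(features, 4))   # 8-dimensional texture descriptor per image
</code></pre>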
					<p><a href="https://lib.jucs.org/article/29691/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29691/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29691/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 May 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Pose Estimation of Rotating Sensors in the Context of Accurate 3D Scene Modeling</title>
		    <link>https://lib.jucs.org/article/29686/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(10): 1269-1290</p>
					<p>DOI: 10.3217/jucs-016-10-1269</p>
					<p>Authors: Karsten Scheibe, Fay Huang, Reinhard Klette</p>
					<p>Abstract: Sensor-line cameras were designed for space missions in the 1980s and are used for various tasks, including panoramic imaging. Laser range-finders are able to generate dense depth maps (of isolated surface points). Panoramic sensor-line cameras and laser range-finders may both be implemented as rotating sensors, and we used them together in this way to accurately reconstruct 3D environments (such as, for example, large buildings). This article reviews related developments, followed by a detailed description of the designed calibration and pose estimation techniques, which have been used for both rotating sensors. Related experiments evaluate the accuracy of the calibrated sensor parameters and of the estimated poses.</p>
					<p><a href="https://lib.jucs.org/article/29686/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29686/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29686/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 May 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A General Framework for Multi-Human Tracking using Kalman Filter and Fast Mean Shift Algorithms</title>
		    <link>https://lib.jucs.org/article/29652/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(6): 921-937</p>
					<p>DOI: 10.3217/jucs-016-06-0921</p>
					<p>Authors: Ahmed Ali, Kenji Terada</p>
					<p>Abstract: The task of reliable detection and tracking of multiple objects becomes highly complex for crowded scenarios. In this paper, a robust framework is presented for multi-human tracking. The key contribution of the work is the use of a fast calculation of the mean shift algorithm to perform tracking in the cases when the Kalman filter fails due to measurement error. Local density maxima in the difference image - usually representing moving objects - are outlined by a fast non-parametric mean shift clustering procedure. The proposed approach has the robust ability to track moving objects, both separately and in groups, in consecutive frames under difficulties such as rapid appearance changes caused by image noise and occlusion.</p>
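					<p>A minimal constant-velocity Kalman tracker in OpenCV, the baseline component of such a framework (a generic illustration; the noise covariances are assumptions):</p>
					<pre><code>
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                # state: x, y, vx, vy; measure: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for x, y in [(10, 10), (12, 11), (14, 12)]:          # detections per frame
    prediction = kf.predict()                        # where the target should be
    kf.correct(np.array([[x], [y]], np.float32))     # update with measurement
print(prediction.ravel()[:2])   # when measurement fails, mean shift can take over
</code></pre>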
					<p><a href="https://lib.jucs.org/article/29652/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29652/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29652/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Mar 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>3D Head Pose and Facial Expression Tracking using a Single Camera</title>
		    <link>https://lib.jucs.org/article/29651/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(6): 903-920</p>
					<p>DOI: 10.3217/jucs-016-06-0903</p>
					<p>Authors: Lucas Terissi, Juan Gómez</p>
					<p>Abstract: Algorithms for 3D head pose and facial expression tracking using a single camera (monocular image sequences) are presented in this paper. The proposed method is based on a combination of feature-based and model-based approaches for pose estimation. A generic 3D face model, which can be adapted to any person, is used for the tracking. In contrast to other methods in the literature, the proposed method does not require a training stage. It only requires a frontal image of the face of the person to be tracked, to which the model is fitted manually through a graphical user interface. The algorithms were evaluated perceptually and quantitatively with two video databases. Simulation results show that the proposed tracking algorithms correctly estimate the head pose and facial expression, even when occlusions, changes in the distance to the camera, or the presence of other persons in the scene occur. Both perceptual and quantitative results are similar to those obtained with other methods proposed in the literature. Although the algorithms were not optimized for speed, they run near real time. Additionally, the proposed system delivers separate head pose and facial expression information. Since the information related to facial expression, which is represented by only six parameters, is independent of the head pose information, the tracking algorithms could also be used for facial expression analysis and video-driven facial animation.</p>
					<p><a href="https://lib.jucs.org/article/29651/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29651/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29651/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Mar 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Adaptive Binarization of Unconstrained Hand-Held Camera-Captured Document Images</title>
		    <link>https://lib.jucs.org/article/29563/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(18): 3343-3363</p>
					<p>DOI: 10.3217/jucs-015-18-3343</p>
					<p>Authors: Syed Bukhari, Faisal Shafait, Thomas Breuel</p>
					<p>Abstract: This paper presents a new adaptive binarization technique for degraded hand-held camera-captured document images. State-of-the-art locally adaptive binarization methods are sensitive to the values of their free parameters. This problem is more critical when binarizing degraded camera-captured document images because of distortions like non-uniform illumination, bad shading, blurring, smearing and low resolution. We demonstrate in this paper that local binarization methods are not only sensitive to the selection of free parameter values (whether found manually or automatically), but also to using constant free parameter values for all pixels of a document image. Some ranges of free parameter values are better for foreground regions, and other ranges are better for background regions. To overcome this problem, we present an adaptation of a state-of-the-art local binarization method such that two different sets of free parameter values are used for foreground and background regions, respectively. We use ridge detection for a rough estimation of the foreground regions in a document image. This information is then used to calculate an appropriate threshold using different sets of free parameter values for the foreground and background regions, respectively. Evaluation of the method using an OCR-based measure and a pixel-based measure shows that our method achieves better performance compared to state-of-the-art global and local binarization methods.</p>
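					<p>A compact sketch of the two-parameter-set idea on top of Sauvola thresholding with scikit-image (illustrative; the k values and the foreground mask are assumptions, whereas the paper estimates the foreground via ridge detection):</p>
					<pre><code>
import numpy as np
from skimage.filters import threshold_sauvola

def binarize_two_sets(gray, fg_mask, window=25, k_fg=0.1, k_bg=0.3):
    """Sauvola with a softer k for foreground and a stricter k for background."""
    t_fg = threshold_sauvola(gray, window_size=window, k=k_fg)
    t_bg = threshold_sauvola(gray, window_size=window, k=k_bg)
    threshold = np.where(fg_mask, t_fg, t_bg)
    return gray > threshold

gray = np.random.rand(100, 100)            # stand-in for a document image
fg_mask = np.zeros_like(gray, dtype=bool)
fg_mask[40:60, :] = True                   # stand-in for a ridge-based estimate
binary = binarize_two_sets(gray, fg_mask)
</code></pre>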
					<p><a href="https://lib.jucs.org/article/29563/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29563/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29563/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Dec 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Robust Extraction of Text from Camera Images using Colour and Spatial Information Simultaneously</title>
		    <link>https://lib.jucs.org/article/29562/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(18): 3325-3342</p>
					<p>DOI: 10.3217/jucs-015-18-3325</p>
					<p>Authors: Shyama Chowdhury, Soumyadeep Dhar, Karen Rafferty, Amit Das, Bhabatosh Chanda</p>
					<p>Abstract: The importance and use of text extraction from camera-based coloured scene images is rapidly increasing with time. Text within a camera-grabbed image can contain a huge amount of metadata about that scene. Such metadata can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, the detection of coloured scene text is a new challenge for all camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of any kind of text features, such as colour, font, size and orientation, as well as of the location of the probable text regions. In this paper, we document the development of a fully automatic and extremely robust text segmentation technique that can be used for any type of camera-grabbed frame, be it a single image or video. A new algorithm is proposed which can overcome the current problems of text segmentation. The algorithm exploits text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera-based images, it was found to outperform existing techniques. The proposed technique also overcomes any problems that can arise due to an unconstrained complex background. The novelty of the work arises from the fact that this is the first time that colour and spatial information have been used simultaneously for the purpose of text extraction.</p>
					<p><a href="https://lib.jucs.org/article/29562/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29562/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29562/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Dec 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Vascular Pattern Analysis towards Pervasive Palm Vein Authentication</title>
		    <link>https://lib.jucs.org/article/29370/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(5): 1081-1089</p>
					<p>DOI: 10.3217/jucs-015-05-1081</p>
					<p>Authors: Debnath Bhattacharyya, Poulami Das, Tai-hoon Kim, Samir Bandyopadhyay</p>
					<p>Abstract: In this paper we propose an image analysis technique for the vascular pattern of the hand palm, which in turn leads towards palm vein authentication of an individual. A near-infrared image of the palm vein pattern is taken and passed through three different processes, so that future authentication can be done accurately: a. Vascular Pattern Marker Algorithm (VPMA); b. Vascular Pattern Extractor Algorithm (VPEA); and c. Vascular Pattern Thinning Algorithm (VPTA). The resultant images are stored in a database; as vascular patterns are unique to each individual, future authentication can be done by comparing the pattern of veins in the palm of the person being authenticated with a pattern stored in the database.</p>
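					<p>The thinning stage (VPTA) corresponds to morphological skeletonization; a short sketch with scikit-image (a generic stand-in, not the paper's specific algorithm):</p>
					<pre><code>
import numpy as np
from skimage.morphology import skeletonize

# Stand-in for a binarized vein pattern extracted by the earlier stages.
veins = np.zeros((64, 64), dtype=bool)
veins[30:34, 5:60] = True                  # a thick horizontal "vein"

skeleton = skeletonize(veins)              # reduce to a one-pixel-wide centreline
print(veins.sum(), "->", skeleton.sum())   # far fewer pixels, same topology
</code></pre>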
					<p><a href="https://lib.jucs.org/article/29370/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29370/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29370/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Mar 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Graph-based Approach for Robust Road Guidance Sign Recognition from Differently Exposed Images</title>
		    <link>https://lib.jucs.org/article/29341/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(4): 786-804</p>
					<p>DOI: 10.3217/jucs-015-04-0786</p>
					<p>Authors: Andrey Vavilin, Kang-Hyun Jo</p>
					<p>Abstract: In this paper we present an approach to detect traffic guidance signs and recognise the structure of the junction information on them. The detection algorithm is based on using differently exposed images. These images are combined into one using a tone mapping technique in order to minimize the effects of bad environmental conditions and the low dynamic range of CCD cameras. This technique allows robust sign detection in various lighting conditions. To localize sign candidates, color segmentation is used. To minimize the number of false detections, filtering operations based on geometrical and color properties are applied. The recognition process is based on graph theory. Each sign candidate is decomposed into principal components, and the region which represents the junction structure is mapped into a graph. This graph is checked for possible mapping mistakes. Finally, the graph is analyzed in order to extract all possible paths of junction crossing. These paths must represent the real structure of the junction and correspond to the road law. The proposed method allows more effective detection in different lighting and environmental conditions, such as insufficient or excessive lighting, rain, or fog, compared with conventional approaches.</p>
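					<p>Combining differently exposed frames can be sketched with OpenCV's Mertens exposure fusion (one common tone-mapping-style technique, not necessarily the authors' method; the file names are placeholders):</p>
					<pre><code>
import cv2
import numpy as np

# Under-, normally- and over-exposed shots of the same scene.
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

merge = cv2.createMergeMertens()
fused = merge.process(exposures)           # float32 image in [0, 1]
fused8 = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("fused.jpg", fused8)           # input to colour segmentation
</code></pre>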
					<p><a href="https://lib.jucs.org/article/29341/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29341/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29341/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Feb 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Novel Multi-Layer Level Set Method for Image Segmentation</title>
		    <link>https://lib.jucs.org/article/29151/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(14): 2428-2452</p>
					<p>DOI: 10.3217/jucs-014-14-2428</p>
					<p>Authors: Xiao-Feng Wang, De-Shuang Huang</p>
					<p>Abstract: In this paper, a new multi-layer level set method is proposed for multi-phase image segmentation. The proposed method is based on the concept of image layers and an improved numerical solution of the bimodal Chan-Vese model. One level set function is employed for curve evolution, in a hierarchical form, in sequential image layers. In addition, a new initialization method and a more efficient computational method for the signed distance function are introduced. Moreover, the evolving curve can automatically stop on true boundaries in a single image layer according to a termination criterion based on the length change of the evolving curve. In particular, an adaptive improvement scheme is designed to speed up the curve evolution process in a queue of sequential image layers, and the detection of the background image layer is used to confirm the termination of the whole multi-layer level set evolution procedure. Finally, numerical experiments on synthetic and real images have demonstrated the efficiency and robustness of our method. Comparisons with the multi-phase Chan-Vese method also show that our method is less time-consuming and converges much faster.</p>
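					<p>The bimodal Chan-Vese model that the method builds on is available in scikit-image; the following is a minimal sketch of a single evolution (the multi-layer hierarchy, termination criterion and adaptive speed-up are the paper&rsquo;s own contributions and are omitted).</p>
<pre><code># Hedged sketch: bimodal Chan-Vese segmentation via scikit-image.
from skimage import data, img_as_float
from skimage.segmentation import chan_vese

image = img_as_float(data.camera())
# extended_output=True returns (segmentation, final level set, energy history).
seg, phi, energies = chan_vese(image, mu=0.25, extended_output=True)
foreground = seg   # boolean mask: one "image layer" in the paper's terminology
</code></pre>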
					<p><a href="https://lib.jucs.org/article/29151/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29151/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29151/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jul 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Implementation of a Prototype Positioning System for LBS on U-campus</title>
		    <link>https://lib.jucs.org/article/29148/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(14): 2381-2399</p>
					<p>DOI: 10.3217/jucs-014-14-2381</p>
					<p>Authors: Jaegeol Yim, Ilseok Ko, Jaesu Do, Jaehun Joo, Seunghwan Jeong</p>
					<p>Abstract: Location-based service is one of the most popular buzzwords in the field of U-cities. Positioning a user is an essential ingredient of a location-based system in a U-city. For outdoor positioning, practical GPS-based solutions have been introduced. However, the measurement error of GPS is too large for U-campus services, because the size of a campus is much smaller than that of a city. We propose the Relative-Interpolation Method to improve the accuracy of outdoor positioning. However, indoor positioning is also necessary for a U-campus because the GPS signal is not available inside buildings. For indoor positioning, various systems, including Cricket and Active Badge, have been introduced. These methods require special equipment dedicated to positioning. Our method does not require such equipment because it determines the user&rsquo;s position from the received signal strength indicators (RSSIs) of access points (APs) already installed for the WLAN. The algorithm we use for indoor positioning is a kind of fingerprinting method. However, our algorithm builds a decision tree instead of a look-up table in the off-line phase; therefore, the proposed method is faster than existing indoor positioning methods in the real-time phase. We integrated our indoor and outdoor positioning methods and implemented a prototype indoor-outdoor positioning system on a laptop. The experimental results are discussed in this paper. In implementing the prototype, we also implemented a C# library function which can be used to read the RSSIs from the APs.</p>
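					<p>The off-line/real-time split of the fingerprinting scheme maps naturally onto a decision-tree classifier; below is a hedged sketch with scikit-learn, using made-up RSSI survey data rather than the authors&rsquo; measurements.</p>
<pre><code># Hedged sketch: RSSI fingerprinting with a decision tree, mirroring the
# off-line (training) / real-time (lookup) split. All data is hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Off-line phase: RSSI vectors (one value per AP) surveyed at known cells.
rssi_samples = [[-45, -70, -80], [-47, -68, -82],   # cell "lab"
                [-72, -44, -66], [-70, -46, -64],   # cell "hall"
                [-80, -65, -42], [-78, -67, -44]]   # cell "cafe"
cells = ["lab", "lab", "hall", "hall", "cafe", "cafe"]
tree = DecisionTreeClassifier().fit(rssi_samples, cells)

# Real-time phase: a tree traversal replaces a linear scan of a look-up table.
print(tree.predict([[-46, -69, -81]]))   # -> ['lab']
</code></pre>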
					<p><a href="https://lib.jucs.org/article/29148/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29148/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29148/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jul 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Methodology for the Separation of Foreground/Background in Arabic Historical Manuscripts using Hybrid Methods</title>
		    <link>https://lib.jucs.org/article/28945/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(2): 284-298</p>
					<p>DOI: 10.3217/jucs-014-02-0284</p>
					<p>Authors: Wafa Boussellaa, Abderrazak Zahour, Adel Alimi</p>
					<p>Abstract: This paper presents a new color document image segmentation system suitable for historical Arabic manuscripts. Our system is composed of a hybrid method which couples a background light-intensity normalization algorithm with k-means clustering and maximum likelihood (ML) estimation for foreground/background separation. Firstly, the background normalization algorithm performs the separation between foreground and background; this foreground is used in later steps. Secondly, since the algorithm operates on luminance and distorts the contrast, these distortions are corrected with a gamma correction and contrast adjustment. Finally, the new enhanced foreground image is segmented into foreground/background on the basis of ML estimation, whose initial parameters are estimated by the k-means clustering algorithm. The segmented image is used to produce a final restored document image. The techniques are tested on a set of historical Arabic manuscript documents from the National Tunisian Library. The performance of the algorithm is demonstrated on real color manuscripts distorted by show-through effects, uneven background color and localized spots.</p>
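					<p>The k-means-initialized ML step can be sketched with scikit-learn by seeding a two-component Gaussian mixture (fitted by maximum likelihood via EM) from k-means centers; this is a generic stand-in for the paper&rsquo;s estimator, and the pixel data below is synthetic.</p>
<pre><code># Hedged sketch: two-class foreground/background labelling where k-means
# supplies the initial parameters for maximum-likelihood estimation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

pixels = np.random.rand(10000, 3)             # stand-in for manuscript RGB pixels
km = KMeans(n_clusters=2, n_init=10).fit(pixels)
gmm = GaussianMixture(n_components=2, means_init=km.cluster_centers_)
labels = gmm.fit_predict(pixels)              # 0/1: background vs foreground ink
</code></pre>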
					<p><a href="https://lib.jucs.org/article/28945/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28945/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28945/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Table-form Extraction with Artefact Removal</title>
		    <link>https://lib.jucs.org/article/28942/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(2): 252-265</p>
					<p>DOI: 10.3217/jucs-014-02-0252</p>
					<p>Authors: Luiz Antônio Pereira Neves, João De Carvalho, Jacques Facon, Flávio Bortolozzi</p>
					<p>Abstract: In this paper we present a novel methodology to recognize the layout structure of handwritten filled table-forms. The recognition methodology includes locating line intersections, correcting wrong intersections produced by what we call artefacts (overlapping data, broken segments and smudges), extracting the correct table-form cells, and using as little prior table-form knowledge as possible. To improve layout structure recognition, a novel artefact identification and deletion method is also proposed. To evaluate the effectiveness of the methodology, a database of 350 handwritten filled table-form images damaged by different types of artefacts was used. Experiments show that the artefact identification method improves the performance of the table-form structure extractor, which reached a success rate of 85%.</p>
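					<p>Locating line intersections, the first step above, is commonly done by extracting horizontal and vertical rulings with directional morphological openings and intersecting the two masks; here is a hedged OpenCV sketch of that standard approach, not the authors&rsquo; exact operator, and without their artefact handling.</p>
<pre><code># Hedged sketch: find table rulings and their crossings with directional
# morphology; smudges and broken segments are not treated here.
import cv2
import numpy as np

img = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)      # hypothetical scan
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 15, 10)
h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
h_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
v_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)
crossings = cv2.bitwise_and(h_lines, v_lines)           # candidate intersections
ys, xs = np.nonzero(crossings)
</code></pre>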
					<p><a href="https://lib.jucs.org/article/28942/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28942/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28942/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Metaclasses and Zoning Mechanism Applied to Handwriting Recognition</title>
		    <link>https://lib.jucs.org/article/28939/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(2): 211-223</p>
					<p>DOI: 10.3217/jucs-014-02-0211</p>
					<p>Authors: Cinthia Freitas, Luiz Oliveira, Simone B. K. Aires, Flávio Bortolozzi</p>
					<p>Abstract: The contribution of this paper is twofold. First, we investigate the use of confusion matrices in order to get some insight into how to better define perceptual zoning for character recognition. The features considered in this work are based on concavity/convexity deficiencies, which are obtained by labelling the background pixels of the input image. Four different perceptual zonings (symmetrical and non-symmetrical) are discussed. Experiments show that this zoning mechanism can be considered a reasonable alternative to exhaustive search algorithms. The second contribution is a methodology to define metaclasses for the problem of handwritten character recognition. The proposed approach is based on the disagreement among the characters, and it uses the Euclidean distance computed between confusion matrices. Through comprehensive experiments we demonstrate that the use of metaclasses can improve the performance of the system.</p>
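					<p>One plausible reading of the metaclass criterion is to cluster classes whose confusion profiles lie close in Euclidean distance; a hedged sketch with a hypothetical 3-class confusion matrix follows.</p>
<pre><code># Hedged sketch: merge easily-confused classes into metaclasses by the
# Euclidean distance between confusion-matrix rows (an interpretation of
# the criterion; the matrix and threshold below are hypothetical).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

conf = np.array([[60.0, 35.0,  5.0],    # e.g. 'u' often read as 'v'
                 [33.0, 62.0,  5.0],    # ... and vice versa
                 [ 2.0,  3.0, 95.0]])
rows = conf / conf.sum(axis=1, keepdims=True)   # per-class confusion profiles
merged = fcluster(linkage(pdist(rows), method="average"),
                  t=0.5, criterion="distance")
print(merged)   # -> [1 1 2]: the two confused classes form one metaclass
</code></pre>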
					<p><a href="https://lib.jucs.org/article/28939/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28939/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28939/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Performance Evaluation and Limitations of a Vision System on a Reconfigurable/Programmable Chip</title>
		    <link>https://lib.jucs.org/article/28759/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(3): 440-453</p>
					<p>DOI: 10.3217/jucs-013-03-0440</p>
					<p>Authors: José Fernández-Pérez, Francisco Sánchez-Fernández, Ricardo Carmona-Galán</p>
					<p>Abstract: This paper presents a survey of the characteristics of a vision system implemented in a reconfigurable/programmable chip (FPGA). System limitations and performance have been evaluated in order to derive specifications and constraints for further vision system synthesis. The system reported here has a conventional architecture: it consists of a central microprocessor (CPU) and the necessary peripheral elements for data acquisition, data storage and communications. It has been designed to stand alone, but a link to the programming and debugging tools running on a digital host (PC) is provided. In order to alleviate the computational load of the central microprocessor, we have designed a visual co-processor in charge of the low-level image processing tasks. It operates autonomously, commanded by the CPU, as another system peripheral. The complete system, without the sensor, has been implemented in a single reconfigurable chip as a SOPC. The incorporation of a dedicated visual co-processor, with specific circuitry for low-level image processing acceleration, enhances the system throughput, outperforming conventional processing schemes. However, time-multiplexing of the dedicated hardware remains a limiting factor for the achievable peak computing power. We have quantified this effect and sketched possible solutions, such as replication of the specific image processing hardware.</p>
					<p><a href="https://lib.jucs.org/article/28759/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28759/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28759/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Mar 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Real-time Architecture for Robust Motion Estimation under Varying Illumination Conditions</title>
		    <link>https://lib.jucs.org/article/28748/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 13(3): 363-376</p>
					<p>DOI: 10.3217/jucs-013-03-0363</p>
					<p>Authors: Javier Díaz, Eduardo Ros, Rafael Rodriguez-Gomez, Begoña Pino</p>
					<p>Abstract: Motion estimation from image sequences is a complex problem which requires high computing resources and, in most existing approaches, is highly affected by changes in illumination conditions. In this contribution we present a high-performance system that deals with this limitation. Robustness to varying illumination conditions is achieved by a novel technique that combines a gradient-based optical flow method with a non-parametric image transformation based on the Rank transform. The paper describes this method and quantitatively evaluates its robustness to different patterns of illumination change. This technique has been successfully implemented in a real-time system using reconfigurable hardware. Our contribution presents the computing architecture, including the resource consumption and the obtained performance. The final system is a device capable of computing motion from image sequences in real time, even under significant illumination changes. The robustness of the proposed system facilitates its use in multiple potential application fields.</p>
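					<p>The Rank transform responsible for the illumination robustness replaces each pixel with the number of neighbours darker than it, a quantity unchanged by any monotonically increasing brightness shift; a compact NumPy sketch of this standard transform follows.</p>
<pre><code># Hedged sketch: Rank transform; the output feeds the gradient-based
# optical flow stage. Borders wrap here (np.roll); crop them in practice.
import numpy as np

def rank_transform(img, radius=2):
    img = img.astype(np.float32)
    out = np.zeros(img.shape, dtype=np.uint8)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out += (img > neighbour).astype(np.uint8)   # count darker neighbours
    return out
</code></pre>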
					<p><a href="https://lib.jucs.org/article/28748/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28748/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28748/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Mar 2007 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Persian/Arabic Baffletext CAPTCHA</title>
		    <link>https://lib.jucs.org/article/28717/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(12): 1783-1796</p>
					<p>DOI: 10.3217/jucs-012-12-1783</p>
					<p>Authors: Mohammad Shirali-Shahreza, Mohammad Shirali-Shahreza</p>
					<p>Abstract: Nowadays, many daily human activities such as education, trade and discussions are carried out over the Internet. In activities such as registration on Internet web sites, hackers write programs that make automatic false registrations, wasting the resources of the web site and possibly stopping it from functioning. Therefore, human users should be distinguished from computer programs. To this end, this paper presents a method for distinguishing Persian- and Arabic-language users from computer programs, based on Persian and Arabic text. Our proposed algorithm is based on adding a background to the image of a meaningless, randomly generated Persian/Arabic word. The method relies on the difficulty of automatically separating the background from Persian/Arabic writing, due to the presence of many diacritical dots and signs.</p>
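					<p>A hedged sketch of the word-over-background idea with Pillow follows; Latin letters stand in for Persian/Arabic glyphs, which would additionally require a suitable font and right-to-left shaping.</p>
<pre><code># Hedged sketch: render a random word over a noisy background so that
# automatic text/background separation becomes difficult.
import random
import string
from PIL import Image, ImageDraw, ImageFont

img = Image.new("L", (220, 80), color=255)
draw = ImageDraw.Draw(img)
for _ in range(400):                      # cluttered dot background
    x, y = random.randrange(220), random.randrange(80)
    draw.point((x, y), fill=random.randrange(60, 200))
word = "".join(random.choices(string.ascii_lowercase, k=6))
draw.text((60, 30), word, fill=0, font=ImageFont.load_default())
img.save("captcha.png")
</code></pre>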
					<p><a href="https://lib.jucs.org/article/28717/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28717/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28717/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Dec 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Ridge Orientation Estimation and Verification Algorithm for Fingerprint Enhancement</title>
		    <link>https://lib.jucs.org/article/28691/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(10): 1426-1438</p>
					<p>DOI: 10.3217/jucs-012-10-1426</p>
					<p>Authors: Limin Liu, Tian-Shyr Dai</p>
					<p>Abstract: Fingerprint image enhancement is a common and critical step in fingerprint recognition systems. To enhance the images, most of the existing enhancement algorithms use filtering techniques, which can be categorized as isotropic or anisotropic according to the filter kernel. Isotropic filtering can properly preserve features in the input images but can hardly improve their quality. On the other hand, anisotropic filtering can effectively remove noise from the image, but only when a reliable orientation is provided. In this paper, we propose a ridge orientation estimation and verification algorithm which can not only generate an orientation field of the ridge flows but also verify its reliability. Experimental results show that, on average, over 51 percent of each image in the NIST-4 database has reliable orientations. Based on this algorithm, a hybrid fingerprint enhancement algorithm is developed which applies isotropic filtering to regions without reliable orientations and anisotropic filtering to regions with reliable orientations. Experimental results show the proposed algorithm can combine the advantages of both isotropic and anisotropic filtering techniques and generally improves the quality of fingerprint images.</p>
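					<p>A common basis for such algorithms (not necessarily the authors&rsquo; exact formulation) is the least-squares block orientation estimate from doubled gradient angles, with a coherence score standing in for the reliability check; a hedged sketch:</p>
<pre><code># Hedged sketch: per-block ridge orientation,
# theta = 0.5*atan2(2*sum(GxGy), sum(Gx^2 - Gy^2)) + pi/2, plus a coherence
# value as a simple stand-in for the paper's verification step.
import numpy as np
import cv2

def block_orientation(img, block=16):
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    rows, cols = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((rows, cols))
    coherence = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = np.s_[i*block:(i+1)*block, j*block:(j+1)*block]
            num = 2.0 * np.sum(gx[sl] * gy[sl])
            den = np.sum(gx[sl]**2 - gy[sl]**2)
            theta[i, j] = 0.5 * np.arctan2(num, den) + np.pi / 2
            energy = np.sum(gx[sl]**2 + gy[sl]**2) + 1e-9
            coherence[i, j] = np.hypot(num, den) / energy
    return theta, coherence   # low coherence -> treat orientation as unreliable
</code></pre>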
					<p><a href="https://lib.jucs.org/article/28691/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28691/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28691/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Oct 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Construction of Wavelets and Applications</title>
		    <link>https://lib.jucs.org/article/28677/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(9): 1278-1291</p>
					<p>DOI: 10.3217/jucs-012-09-1278</p>
					<p>Authors: Ildikó László, Ferenc Schipp, Samuel Kozaitis</p>
					<p>Abstract: A sequence of increasing translation-invariant subspaces can be defined by the Haar system (or, more generally, by wavelets). The orthogonal projection onto these subspaces generates a decomposition (multiresolution) of a signal. Regarding the rate of convergence and the number of operations, this kind of decomposition is much more favorable than the conventional Fourier expansion. In this paper, starting from Haar-like systems we introduce a new type of multiresolution. The transition to higher levels is in this case realized not by dilation but by a two-fold map. Starting from a convenient scaling function and a two-fold map, we introduce a large class of Haar-like systems. Among others, the original Haar system and Haar-like systems of trigonometric polynomials and of rational functions can be constructed in this way. We show that the restriction of a Haar-like system to an appropriate set can be identified with the original Haar system. Haar-like rational functions are used for the approximation of rational transfer functions, which play an important role in signal processing [Bokor1 1998, Schipp01 2003, Bokor3 2003, Schipp 2002].</p>
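					<p>For reference, here is one level of the classical Haar multiresolution step that the construction generalizes, written as orthonormal pairwise averages (projection onto the coarser subspace) and differences (detail).</p>
<pre><code># Hedged illustration: one Haar decomposition level and its inverse.
import numpy as np

def haar_step(x):
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # projection onto coarser subspace
    det = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return avg, det

def haar_inverse(avg, det):
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2)
    x[1::2] = (avg - det) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
a, d = haar_step(signal)
assert np.allclose(haar_inverse(a, d), signal)   # perfect reconstruction
</code></pre>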
					<p><a href="https://lib.jucs.org/article/28677/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28677/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28677/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Applications of Neighborhood Sequence in Image Processing and Database Retrieval</title>
		    <link>https://lib.jucs.org/article/28672/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(9): 1240-1253</p>
					<p>DOI: 10.3217/jucs-012-09-1240</p>
					<p>Authors: András Hajdu, János Kormos, Tamás Tóth, Krisztián Veréb</p>
					<p>Abstract: In this paper we show how the distance functions generated by neighborhood sequences provide flexibility in image processing algorithms and image database retrieval. Accordingly, we present methods for indexing and segmenting color images, where we use digital distance functions generated by neighborhood sequences to measure the distance between colors. Moreover, we explain the usability of neighborhood sequences within the field of image database retrieval, to find images similar to a given query image in a database. Our approach considers special distance functions to measure the distance between feature vectors extracted from the images, which allows more flexible queries for the users.</p>
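					<p>A distance generated by a neighborhood sequence can be computed by growing a region with alternating 4- and 8-neighbourhood dilations; below is a hedged sketch for the periodic octagonal sequence B = (1, 2).</p>
<pre><code># Hedged sketch: distance map generated by a neighborhood sequence; step k
# uses the 4- or 8-neighbourhood according to the (here periodic) sequence.
import numpy as np
from scipy.ndimage import binary_dilation

N4 = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
N8 = np.ones((3, 3), dtype=bool)

def ns_distance_map(shape, seed, sequence=(1, 2), steps=10):
    reached = np.zeros(shape, dtype=bool)
    reached[seed] = True
    dist = np.full(shape, -1)              # -1 marks "not yet reached"
    dist[seed] = 0
    for k in range(steps):
        se = N4 if sequence[k % len(sequence)] == 1 else N8
        grown = binary_dilation(reached, structure=se)
        dist[np.logical_and(grown, ~reached)] = k + 1
        reached = grown
    return dist

print(ns_distance_map((7, 7), (3, 3)))     # octagonal distance from the center
</code></pre>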
					<p><a href="https://lib.jucs.org/article/28672/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28672/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28672/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>The &quot;MEDIP-Platform Independent Software System for Medical Image Processing&quot; Project</title>
		    <link>https://lib.jucs.org/article/28670/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(9): 1229-1239</p>
					<p>DOI: 10.3217/jucs-012-09-1229</p>
					<p>Authors: András Hajdu, János Kormos, Zsolt Lencse, Lajos Trón, Miklós Emri</p>
					<p>Abstract: In this paper we present the structure and the results achieved by the R&D project IKTA-4, 6/2001 MEDIP - Platform independent software system for medical image processing, supported by the Hungarian Ministry of Education. The aim of the project was to develop a software background for our basic and applied research in the field of medical imaging that can also be used in clinical routine. Realization was based on the experience of university research teams in information technology and medical imaging, together with a company specializing in software and hardware development for nuclear medicine. The aims also reflect some former research and development activities of the participants; thus, some of them are well experienced in registration, segmentation and image fusion techniques. These experiences were also considered in the determination of the main purposes. The capabilities of the provided software library were demonstrated through test applications from the fields of orthopedics, oncology and nuclear medicine.</p>
					<p><a href="https://lib.jucs.org/article/28670/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28670/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28670/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Developing an Exact Quality and Classification System for Plant Improvement</title>
		    <link>https://lib.jucs.org/article/28659/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(9): 1154-1164</p>
					<p>DOI: 10.3217/jucs-012-09-1154</p>
					<p>Authors: József Berke, Zsolt Polgar, Zoltán Horváth, Tamás Nagy</p>
					<p>Abstract: In the field of potato research and breeding, there are several possibilities for the application of modern digital image processing and data collection/analysis techniques. One of the most obvious methods is multi/hyperspectral analysis. In our experiments, measurements were carried out in the visible as well as in the infrared, near-infrared and thermal wavelength ranges. For more advanced analysis we developed a multi/hyperspectral analysis method (spectral fractal dimension measurement and its application). In the following we summarize its basic elements and the integrated information system developed for potato research and breeding.</p>
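					<p>Fractal dimension is typically estimated by box counting, fitting the slope of log(box count) against log(1/box size); here is a generic sketch of that standard estimator (the authors&rsquo; spectral fractal dimension extends the idea across spectral bands, and its exact definition is not given in the abstract).</p>
<pre><code># Hedged sketch: generic box-counting fractal dimension of a binary mask.
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes touching the set
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.random.rand(64, 64) > 0.5                   # stand-in binary pattern
print(box_count_dimension(mask))
</code></pre>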
					<p><a href="https://lib.jucs.org/article/28659/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28659/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28659/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2006 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Stack Filter Design Using a Distributed Parallel Implementation of Genetic Algorithms</title>
		    <link>https://lib.jucs.org/article/27387/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 3(7): 821-834</p>
					<p>DOI: 10.3217/jucs-003-07-0821</p>
					<p>Authors: Peter Undrill, Kostas Delibasis, George Cameron</p>
					<p>Abstract: Stack filters are a class of non-linear spatial operators used for the suppression of noise in signals. In this work their design is formulated as an optimisation problem, and a method that uses Genetic Algorithms (GAs) to perform the configuration is explained. Because of its computational complexity, the process has been implemented as a distributed parallel GA using the Parallel Virtual Machine (PVM) software. We present the results of applying our stack filters to the restoration of magnetic resonance (MR) images corrupted with uniform, uncorrelated noise, showing improved statistical performance compared with the median filter and indicating better retention of image detail. The efficiency of the parallel implementation is examined, addressing both algorithmic and data decomposition, and showing that execution times can be significantly reduced by distributing the task across a network of heterogeneous processors.</p>
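					<p>A stack filter applies one positive Boolean function to every threshold slice of the signal and sums the results; with a majority vote it reduces to the median filter. Below is a small NumPy illustration of this threshold decomposition (the GA-designed Boolean functions of the paper are not reproduced).</p>
<pre><code># Hedged illustration: stack filtering by threshold decomposition; the
# positive Boolean function here is a majority vote (i.e. the median filter).
import numpy as np

def stack_filter(x, window=3, levels=8):
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for t in range(1, levels):                       # threshold decomposition
        slice_t = (xp >= t).astype(int)              # binary slice at level t
        views = np.lib.stride_tricks.sliding_window_view(slice_t, window)
        out += (views.sum(axis=1) * 2 > window).astype(x.dtype)
    return out

x = np.array([3, 3, 7, 2, 3, 4, 0, 4, 4], dtype=int)   # spiky test signal
print(stack_filter(x))                                  # spikes suppressed
</code></pre>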
					<p><a href="https://lib.jucs.org/article/27387/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/27387/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/27387/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jul 1997 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Robust Affine Matching Algorithm Using an Exponentially Decreasing Distance Function</title>
		    <link>https://lib.jucs.org/article/27154/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 1(8): 614-631</p>
					<p>DOI: 10.3217/jucs-001-08-0614</p>
					<p>Authors: Axel Pinz, Manfred Prantl, Harald Ganster</p>
					<p>Abstract: We describe a robust method for spatial registration which relies on the coarse correspondence of structures extracted from images, avoiding the establishment of point correspondences. These structures (tokens) are points, chains, polygons and regions at the level of the intermediate symbolic representation (ISR). The algorithm recovers conformal transformations (4 affine parameters), so that 2-dimensional scenes as well as planar structures in 3D scenes can be handled. The affine transformation between two different token sets is found by minimization of an exponentially decreasing distance function. As long as the token sets are kept sparse, the method is very robust against a broad variety of common disturbances (e.g. incomplete segmentations, missing tokens, partial overlap). The performance of the algorithm is demonstrated using simple 2D shapes, medical images, and remote sensing satellite images. The complexity of the algorithm is quadratic in the number of affine parameters.</p>
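					<p>The core idea, scoring a candidate transform by an exponentially decreasing function of token distances and minimizing over the four conformal parameters, can be sketched with SciPy on point tokens; chains, polygons and regions, as well as the paper&rsquo;s actual optimizer, are omitted, and the data below is synthetic.</p>
<pre><code># Hedged sketch: recover a conformal map (scale s, rotation a, shift tx, ty)
# by maximizing an exponentially decreasing score of nearest-token distances,
# with no point correspondences established.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.uniform(0, 100, size=(30, 2))                  # reference tokens
s0, a0, t0 = 1.2, 0.3, np.array([5.0, -8.0])           # ground-truth transform
R0 = np.array([[np.cos(a0), -np.sin(a0)], [np.sin(a0), np.cos(a0)]])
B = s0 * A @ R0.T + t0                                 # transformed token set

def cost(p, sigma=10.0):
    s, a, tx, ty = p
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    Aw = s * A @ R.T + np.array([tx, ty])
    d = np.linalg.norm(Aw[:, None, :] - B[None, :, :], axis=2).min(axis=1)
    return -np.exp(-d / sigma).sum()                   # reward nearby tokens

res = minimize(cost, x0=[1.0, 0.0, 0.0, 0.0], method="Nelder-Mead")
print(res.x)   # should approach [1.2, 0.3, 5.0, -8.0]
</code></pre>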
					<p><a href="https://lib.jucs.org/article/27154/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/27154/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/27154/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Aug 1995 00:00:00 +0000</pubDate>
		</item>
	
	</channel>
</rss>
	