<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
    <channel>
        <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
        <description>Latest 15 Articles from JUCS - Journal of Universal Computer Science</description>
        <link>https://lib.jucs.org/</link>
        <lastBuildDate>Fri, 13 Mar 2026 17:23:43 +0000</lastBuildDate>
        <generator>Pensoft FeedCreator</generator>
        <image>
            <url>https://lib.jucs.org/i/logo.jpg</url>
            <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
            <link>https://lib.jucs.org/</link>
            <description><![CDATA[Feed provided by https://lib.jucs.org/. Click to visit.]]></description>
        </image>
	
		<item>
		    <title>A Novel Image Super-Resolution Reconstruction Framework Using the AI Technique of Dual Generator Generative Adversarial Network (GAN)</title>
		    <link>https://lib.jucs.org/article/94134/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 967-983</p>
					<p>DOI: 10.3897/jucs.94134</p>
					<p>Authors: Loveleen Kumar, Manish Jain</p>
					<p>Abstract: Image superresolution (SR) is the process of enlarging and enhancing a low-resolution image. Image superresolution helps in industrial image enhancement, classification, detection, pattern recognition, surveillance, satellite imaging, medical diagnosis, image analytics, etc. It is of utmost importance to keep the features of the low-resolution image intact while enlarging and enhancing it. In this research paper, a framework is proposed that works in three phases and generates superresolution images while keeping low-resolution image features intact and reducing image blurring and artifacts. In the first phase, image enlargement is done, which enlarges the low-resolution image to the 2x/4x scale using two standard algorithms. The second phase enhances the image using an AI-empowered Generative Adversarial Network (GAN). We have used a GAN with dual generators and named it EffN-GAN (EfficientNet-GAN). Fusion is done in the last phase, wherein the final improved image is generated by fusing the enlarged image and the GAN output image. The fusion phase helps in reducing the artifacts. We have used the DIV2K dataset to train the GAN and further tested the results on images from the Set5, Set14, B100, Urban100, and Manga109 datasets with ground truth of size 224x224x3. The obtained results were compared with state-of-the-art superresolution approaches based on important image quality parameters, namely the Peak signal-to-noise ratio (PSNR), Structural similarity index (SSIM), and Visual information fidelity (VIF). The results show that the proposed framework for generating super-resolution images from 2x/4x resolution-downgraded images improves the aforementioned image quality parameters significantly.</p>
					<p><a href="https://lib.jucs.org/article/94134/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94134/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94134/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>X-Ray Image Authentication Scheme Using SLT and Contourlet Transform for Modern Healthcare System</title>
		    <link>https://lib.jucs.org/article/94132/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 916-929</p>
					<p>DOI: 10.3897/jucs.94132</p>
					<p>Authors: Vijay Krishna Pallaw, Kamred Udham Singh</p>
					<p>Abstract: The network&rsquo;s convenience has created a copyright dilemma for some multimedia works. Nowadays, every healthcare system relies on digital medical images for diagnosis. These medical images are transmitted through communication channels, so there is a risk of tampering and copyright violation. A digital watermarking system can ensure and guarantee that tampering and copyright violation are prevented. This study presents a nonblind digital watermarking approach for X-ray medical images based on the Contourlet transform (C.T.) and the Slantlet transform (SLT). Since two-dimensional signals are represented flexibly by contourlet transforms, curves and smooth contours can be represented efficiently. At the same time, the SLT has better time-localization &amp; smoothness properties. The maximum energy of an image is concentrated in the LL band when the SLT is employed. Therefore, the LL band is used to embed the watermark, and the additive quantization method has been used for the embedding. The efficiency of our scheme is assessed by different quality parameters and compared with several existing schemes. The experimental results show that the proposed scheme performs better and is able to resist several attacks.</p>
					<p><a href="https://lib.jucs.org/article/94132/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94132/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94132/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Color Ultrasound Image Watermarking Scheme Using FRT and Hessenberg Decomposition for Telemedicine Applications</title>
		    <link>https://lib.jucs.org/article/94127/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 28(9): 882-897</p>
					<p>DOI: 10.3897/jucs.94127</p>
					<p>Authors: Lalan Kumar, Kamred Udham Singh</p>
					<p>Abstract: Watermarking is a valuable technique for verifying medical images obtained through the internet for diagnosis. There is a greater need for security in medical images given ever-increasing security risks. This research presents a Finite Ridgelet Transform (FRT)-Hessenberg based watermarking scheme for medical images. The suggested paradigm is divided into two stages. Before watermark insertion, the FRT is applied to the medical image. The coefficients are grouped into blocks of 4 x 4, and each block is decomposed using Hessenberg decomposition. The second column of the Q matrix is used to insert the watermark using the additive quantization technique. The results obtained from our experiments show good visual quality of the watermarked images. The high PSNR value of 53.6121 and NC value of 1.0 show that our scheme performs better. Moreover, our scheme is robust against several attacks. These results imply that the proposed scheme is effective for medical image watermarking.</p>
					<p><a href="https://lib.jucs.org/article/94127/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/94127/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/94127/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 28 Sep 2022 10:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Application of Multi-Descriptor Binary Shape Analysis for Classification of Electronic Parts</title>
		    <link>https://lib.jucs.org/article/24010/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(4): 479-495</p>
					<p>DOI: 10.3897/jucs.2020.025</p>
					<p>Authors: Kamil Maliński, Krzysztof Okarma</p>
					<p>Abstract: The rapid growth in availability of modern electronic and robotic solutions, also for home and amateur use, related to the progress in home automation and the popularity of IoT systems, makes it possible to develop unique hardware solutions, also by independent researchers and engineers, often with the help of 3D printing technology. Although high-speed pick-and-place machines are used in many industrial applications for assembling small surface-mount devices (SMD), especially in the mass production of electronics, there are still applications where the traditional through-hole technology used in Printed Circuit Boards (PCB) is utilised, particularly under mechanical, thermal or power conditions that prevent the use of SMD technology. One possibility for supporting such types of production and prototyping, in some cases carried out by relatively less sophisticated robotic solutions, is the application of vision systems, making it possible to classify and recognize electronic parts using shape analysis of their packages as well as further optical recognition of markings. Another application of such methods is the automatic vision-based verification of assembly quality and of the correct placement of electronic parts after production is completed. In the paper, experimental results obtained using various shape descriptors for the classification of electronic packages are presented. The initial experiments, conducted on a dedicated database of synthetic images, have also been verified and confirmed for natural images, leading to promising results.</p>
					<p><a href="https://lib.jucs.org/article/24010/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/24010/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/24010/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Apr 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Convolutional Neural Networks and Transfer Learning Based Classification of Natural Landscape Images</title>
		    <link>https://lib.jucs.org/article/23999/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(2): 244-267</p>
					<p>DOI: 10.3897/jucs.2020.014</p>
					<p>Authors: Damir Krstinić, Maja Braović, Dunja Božić-Štulic</p>
					<p>Abstract: Natural landscape image classification is a difficult problem in computer vision. Many classes that can be found in such images are often ambiguous and can easily be confused with each other (e.g. smoke and fog), not just by a computer algorithm but by a human as well. Since natural landscape video surveillance has become relatively pervasive in recent years, in this paper we focus on the classification of natural landscape images taken mostly from forest fire monitoring towers. Since these images usually lack the usual low- and mid-level features (e.g. sharp edges and corners), and since their quality is degraded by atmospheric conditions, the already difficult problem of natural landscape classification becomes even more challenging. In this paper we tackle the problem of automatic natural landscape classification by proposing and evaluating a classifier based on a pretrained deep convolutional neural network and transfer learning.</p>
					<p><a href="https://lib.jucs.org/article/23999/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23999/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23999/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Feb 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Quality Assessment of Photographed 3D Printed Flat Surfaces Using Hough Transform and Histogram Equalization</title>
		    <link>https://lib.jucs.org/article/22621/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 701-717</p>
					<p>DOI: 10.3217/jucs-025-06-0701</p>
					<p>Authors: Jarosław Fastowicz, Krzysztof Okarma</p>
					<p>Abstract: Automatic visual quality assessment of objects created using additive manufacturing processes is one of the hot topics of the Industry 4.0 era. As 3D printing becomes more and more popular, also for everyday home use, a reliable visual quality assessment of printed surfaces attracts great interest. One of the most obvious reasons is the possibility of saving time and filament when low printing quality is detected, as well as the correction of some smaller imperfections during the printing process. The novel method presented in the paper can be successfully applied for the assessment of flat surfaces almost independently of the filament's colour. It utilizes the assumption that the layers visible on high-quality printed surfaces form regular straight lines, which can be extracted using the Hough transform. However, for various colours of filaments, some preprocessing operations should be conducted to allow proper line detection across samples. In the proposed method, additional brightness compensation has been used together with the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. Results obtained for a database of 88 photos of 3D printed samples, together with their scans, are encouraging and allow a reliable quality assessment of 3D printed surfaces for various colours of filaments.</p>
					<p><a href="https://lib.jucs.org/article/22621/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22621/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22621/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Fast Binarization of Unevenly Illuminated Document Images Based on Background Estimation for Optical Character Recognition Purposes</title>
		    <link>https://lib.jucs.org/article/22616/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 627-646</p>
					<p>DOI: 10.3217/jucs-025-06-0627</p>
					<p>Authors: Hubert Michalak, Krzysztof Okarma</p>
					<p>Abstract: One of the key operations in the image preprocessing step of Optical Character Recognition (OCR) algorithms is image binarization. Although for uniformly illuminated images, obtained typically by flatbed scanners, the use of a single global threshold may be sufficient for further recognition of individual characters, it cannot be applied directly in the case of non-uniformly illuminated document images. Such a problem may occur when capturing photos of documents in unknown lighting conditions, making proper text recognition impossible in some parts of the image. Since the application of popular adaptive thresholding methods, e.g. Niblack, Sauvola and their modifications, based on the analysis of the neighbourhood of each pixel, is time consuming, a faster solution might be the division of images into blocks or the elimination of the non-uniform background. Such an approach can be considered a balanced solution filling the gap between global and local adaptive thresholding. The solution proposed in the paper, useful also for various mobile devices due to its limited computational requirements, is based on the approximation of the lighting distribution of the background using reduced-resolution images. The proposed method yields very good OCR results, superior to typical adaptive binarization algorithms both in terms of the resulting OCR accuracy and computational efficiency.</p>
					<p><a href="https://lib.jucs.org/article/22616/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22616/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22616/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Real Time Path Finding for Assisted Living Using Deep Learning</title>
		    <link>https://lib.jucs.org/article/23150/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 24(4): 475-487</p>
					<p>DOI: 10.3217/jucs-024-04-0475</p>
					<p>Authors: Ugnius Malūkas, Rytis Maskeliūnas, Robertas Damaševičius, Marcin Woźniak</p>
					<p>Abstract: The paper presents a computer vision based system, which performs real time path finding for visually impaired or blind people. The semantic segmentation of camera images is performed using a deep convolutional neural network (CNN), which is able to recognize patterns across the image feature space. Out of the three different CNN architectures analysed (AlexNet, GoogLeNet and VGG), the fully connected VGG16 neural network is shown to perform best in the semantic segmentation task. The algorithm for extracting and finding paths, obstacles and path boundaries is presented. The experiments, performed using our own dataset (300 images extracted from two hours of video recorded while walking in an outdoor environment), show that the developed system is able to find paths, path objects and path boundaries with an accuracy of 96.1 ± 2.6%.</p>
					<p><a href="https://lib.jucs.org/article/23150/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23150/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23150/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Apr 2018 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A New Approach to Water Flow Algorithm for Text Line Segmentation</title>
		    <link>https://lib.jucs.org/article/29875/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 17(1): 30-47</p>
					<p>DOI: 10.3217/jucs-017-01-0030</p>
					<p>Authors: Darko Brodić, Zoran Milivojevic</p>
					<p>Abstract: This paper proposes a new approach to the water flow algorithm for text line segmentation. The original method assumes hypothetical water flows under a few specified angles to the document image frame, from left to right and vice versa. As a result, unwetted image frames are extracted. These areas are of major importance for text line segmentation. The proposed modifications consist of extending the range of water flow angles and enlarging the unwetted image frame function. The results are encouraging due to the improvement in text line segmentation, which is the most challenging stage in document image processing.</p>
					<p><a href="https://lib.jucs.org/article/29875/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29875/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29875/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 1 Jan 2011 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Robust Extraction of Text from Camera Images using Colour and Spatial Information Simultaneously</title>
		    <link>https://lib.jucs.org/article/29562/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(18): 3325-3342</p>
					<p>DOI: 10.3217/jucs-015-18-3325</p>
					<p>Authors: Shyama Chowdhury, Soumyadeep Dhar, Karen Rafferty, Amit Das, Bhabatosh Chanda</p>
					<p>Abstract: The importance and use of text extraction from camera based coloured scene images is rapidly increasing with time. Text within a camera grabbed image can contain a huge amount of meta data about that scene. Such meta data can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, detection of coloured scene text is a new challenge for all camera based images. Common problems for text extraction from camera based images are the lack of prior knowledge of any kind of text features such as colour, font, size and orientation, as well as of the location of the probable text regions. In this paper, we document the development of a fully automatic and extremely robust text segmentation technique that can be used for any type of camera grabbed frame, be it a single image or video. A new algorithm is proposed which can overcome the current problems of text segmentation. The algorithm exploits text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera based images, it was found to outperform existing techniques. The proposed technique also overcomes any problems that can arise due to an unconstrained complex background. The novelty of the work arises from the fact that this is the first time that colour and spatial information have been used simultaneously for the purpose of text extraction.</p>
					<p><a href="https://lib.jucs.org/article/29562/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29562/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29562/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Dec 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Vascular Pattern Analysis towards Pervasive Palm Vein Authentication</title>
		    <link>https://lib.jucs.org/article/29370/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(5): 1081-1089</p>
					<p>DOI: 10.3217/jucs-015-05-1081</p>
					<p>Authors: Debnath Bhattacharyya, Poulami Das, Tai-hoon Kim, Samir Bandyopadhyay</p>
					<p>Abstract: In this paper we propose an image analysis technique for the vascular pattern of the hand palm, which in turn leads towards palm vein authentication of an individual. A near-infrared image of the palm vein pattern is taken and passed through three different processes or algorithms to process the infrared image in such a way that future authentication can be done accurately or almost exactly. These three processes are: a. the Vascular Pattern Marker Algorithm (VPMA); b. the Vascular Pattern Extractor Algorithm (VPEA); and c. the Vascular Pattern Thinning Algorithm (VPTA). The resultant images are stored in a database. As the vascular patterns are unique to each individual, future authentication can be done by comparing the pattern of veins in the palm of the person being authenticated with a pattern stored in the database.</p>
					<p><a href="https://lib.jucs.org/article/29370/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29370/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29370/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 1 Mar 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Graph-based Approach for Robust Road Guidance Sign Recognition from Differently Exposed Images</title>
		    <link>https://lib.jucs.org/article/29341/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 15(4): 786-804</p>
					<p>DOI: 10.3217/jucs-015-04-0786</p>
					<p>Authors: Andrey Vavilin, Kang-Hyun Jo</p>
					<p>Abstract: In this paper we present an approach to detect traffic guidance signs and recognise the structure of the junction information on them. The detection algorithm is based on using differently exposed images. These images are combined into one using a tone mapping technique in order to minimize the effects of bad environmental conditions and the low dynamic range of CCD cameras. This technique allows robust sign detection in various lighting conditions. To localize sign candidates, color segmentation is used. To minimize the number of false detections, filtering operations based on geometrical and color properties are applied. The recognition process is based on graph theory. Each sign candidate is decomposed into principal components, and the region which represents the junction structure is mapped into a graph. This graph is checked for possible mapping mistakes. Finally, the graph is analyzed in order to extract all possible paths of junction crossing. These paths must represent the real structure of the junction and correspond to the road law. The proposed method allows more effective detection in different lighting and environmental conditions, such as insufficient or excessive lighting, rain, fog, etc., compared with conventional approaches.</p>
					<p><a href="https://lib.jucs.org/article/29341/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29341/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29341/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 28 Feb 2009 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>A Novel Multi-Layer Level Set Method for Image Segmentation</title>
		    <link>https://lib.jucs.org/article/29151/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(14): 2428-2452</p>
					<p>DOI: 10.3217/jucs-014-14-2427</p>
					<p>Authors: Xiao-Feng Wang, De-Shuang Huang</p>
					<p>Abstract: In this paper, a new multi-layer level set method is proposed for multi-phase image segmentation. The proposed method is based on the concept of image layers and an improved numerical solution of the bimodal Chan-Vese model. One level set function is employed for curve evolution in a hierarchical form across sequential image layers. In addition, a new initialization method and a more efficient computational method for the signed distance function are introduced. Moreover, the evolving curve can automatically stop on true boundaries in a single image layer according to a termination criterion which is based on the length change of the evolving curve. Specifically, an adaptive improvement scheme is designed to speed up the curve evolution process in a queue of sequential image layers, and the detection of the background image layer is used to confirm the termination of the whole multi-layer level set evolution procedure. Finally, numerical experiments on synthetic and real images have demonstrated the efficiency and robustness of our method. The comparisons with the multi-phase Chan-Vese method also show that our method is less time-consuming and converges much faster.</p>
					<p><a href="https://lib.jucs.org/article/29151/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29151/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29151/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jul 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Table-form Extraction with Artefact Removal</title>
		    <link>https://lib.jucs.org/article/28942/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(2): 252-265</p>
					<p>DOI: 10.3217/jucs-014-02-0252</p>
					<p>Authors: Luiz Antônio Pereira Neves, João De Carvalho, Jacques Facon, Flávio Bortolozzi</p>
					<p>Abstract: In this paper we present a novel methodology to recognize the layout structure of handwritten filled table-forms. The recognition methodology includes locating line intersections, correcting wrong intersections produced by what we call artefacts (overlapping data, broken segments and smudges), extracting correct table-form cells, and using as little prior table-form knowledge as possible. To improve layout structure recognition, a novel artefact identification and deletion method is also proposed. To evaluate the effectiveness of the methodology, a database composed of 350 handwritten filled table-form images damaged by different types of artefacts was used. Experiments show that the artefact identification method improves the performance of the table-form structure extractor, which reached a success rate of 85%.</p>
					<p><a href="https://lib.jucs.org/article/28942/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28942/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28942/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Applications of Neighborhood Sequence in Image Processing and Database Retrieval</title>
		    <link>https://lib.jucs.org/article/28672/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 12(9): 1240-1253</p>
					<p>DOI: 10.3217/jucs-012-09-1240</p>
					<p>Authors: András Hajdu, János Kormos, Tamás Tóth, Krisztián Veréb</p>
					<p>Abstract: In this paper we show how the distance functions generated by neighborhood sequences provide flexibility in image processing algorithms and image database retrieval. Accordingly, we present methods for indexing and segmenting color images, where we use digital distance functions generated by neighborhood sequences to measure the distance between colors. Moreover, we explain the usability of neighborhood sequences in the field of image database retrieval, to find images in a database similar to a given query image. Our approach considers special distance functions to measure the distance between feature vectors extracted from the images, which allows more flexible queries for the users.</p>
					<p><a href="https://lib.jucs.org/article/28672/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28672/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28672/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Thu, 28 Sep 2006 00:00:00 +0000</pubDate>
		</item>
	
	</channel>
</rss>