<?xml version="1.0" encoding="UTF-8"?>
<rss version="0.91">
    <channel>
        <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
        <description>Latest 4 Articles from JUCS - Journal of Universal Computer Science</description>
        <link>https://lib.jucs.org/</link>
        <lastBuildDate>Fri, 6 Mar 2026 20:21:41 +0000</lastBuildDate>
        <generator>Pensoft FeedCreator</generator>
        <image>
            <url>https://lib.jucs.org/i/logo.jpg</url>
            <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
            <link>https://lib.jucs.org/</link>
            <description><![CDATA[Feed provided by https://lib.jucs.org/. Click to visit.]]></description>
        </image>
	
		<item>
		    <title>Synthetic Image Translation for Football Players Pose Estimation</title>
		    <link>https://lib.jucs.org/article/22619/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 25(6): 683-700</p>
					<p>DOI: 10.3217/jucs-025-06-0683</p>
					<p>Authors: Michał Sypetkowski, Grzegorz Sarwas, Tomasz Trzciński</p>
					<p>Abstract: In this paper, we present an approach for football players pose estimation on very low-resolution images. The camera recording the football match is far away from the pitch in order to register at least half of it. As a result, even using very high resolution cameras, the image area presenting every single player is very small. Additionally, variable weather conditions or shadows and reflections, make this aim very hard. Such images are very hard to annotate by human. In our research we assume lack of manually annotated training data from our target distribution. Instead of manual annotation of large dataset, we create simple python script for rendering synthetic images with perfect annotations. Then we train vanilla CycleGAN (Cycle-consistent Generative Adversarial Networks) for transformation of raw synthetic images into more realistic. We use transformed images to train CPN (Cascaded Pyramid Networks) model. Without bells and whistles, we achieve similar precision on our images as the same CPN model trained with COCO (Common Objects in Context) keypoints dataset.</p>
					<p><a href="https://lib.jucs.org/article/22619/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22619/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22619/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Fri, 28 Jun 2019 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Non-Photorealistic Rendering of Neural Cells from their Morphological Description</title>
		    <link>https://lib.jucs.org/article/23342/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 21(7): 935-958</p>
					<p>DOI: 10.3217/jucs-021-07-0935</p>
					<p>Authors: Angela Mendoza, Susana Mata, Luis Pastor</p>
					<p>Abstract: Gaining a better understanding of the human brain continues to be one of the greatest and most elusive of challenges. Its extreme complexity can only be addressed through the coordinated and collaborative work of researchers from a range of disciplines. 3D visualization has proven to be a useful tool for simplifying the analysis of complex systems, where gaining meaningful understanding from unstructured raw data is almost impossible, such as in the case of the brain. This paper presents a novel approach for visualizing neurons directly from the morphological descriptions extracted by neuroscience laboratories, pursuing two goals: improving the readability of complex neuronal scenarios and avoiding the need to store 3D models of the intricate geometry of neurons, since such models are demanding of computer resources. The proposed rendering method involves illustration techniques that facilitate the visual analysis of dense neural scenes. The work presented here brings the field of neuroscience and the benefits of 3D visualization environments closer together, increasing the interpretability of massive neural scenarios through visual inspection. A preliminary user study has proven the utility of the proposed rendering techniques for the visual exploration of dense neuronal scenes. The feasibility of parallelizing the implemented algorithms has also been assessed, representing a further step towards interactive illustrative visualization of neuronal forests.</p>
					<p><a href="https://lib.jucs.org/article/23342/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23342/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23342/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 1 Jul 2015 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>On Succinct Representations of Textured Surfaces by Weighted Finite Automata</title>
		    <link>https://lib.jucs.org/article/29620/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 16(5): 586-603</p>
					<p>DOI: 10.3217/jucs-016-05-0586</p>
					<p>Authors: Jürgen Albert, German Tischler</p>
					<p>Abstract: Generalized finite automata with weights for states and transitions have been successfully applied to image generation for more than a decade now. Bilevel images (black and white), grayscale- or color-images and even video sequences can be effectively coded as weighted finite automata. Since each state represents a subimage within those automata the weighted transitions can exploit self-similarities for image compression. These "fractal" approaches yield remarkable results in comparison to the well-known standard JPEG- or MPEG-encodings and frequently provide advantages for images with strong contrasts. Here we will study the combination of these highly effective compression techniques with a generalization of weighted finite automata to higher dimensions, which establish d-dimensional relations between resultsets of ordinary weighted automata. For the applications we will restrict ourselves to three-dimensional Bezier spline-patches and to grayscale images as textures.</p>
					<p><a href="https://lib.jucs.org/article/29620/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29620/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29620/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 1 Mar 2010 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Gaze-based Interaction for Virtual Environments</title>
		    <link>https://lib.jucs.org/article/29218/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(19): 3085-3098</p>
					<p>DOI: 10.3217/jucs-014-19-3085</p>
					<p>Authors: Jorge Jimenez, Diego Gutierrez, Pedro Latorre</p>
					<p>Abstract: Abstract We present an alternative interface that allows users to perceive new sensations in virtual environments. Gaze-based interaction in virtual environments creates the feeling of controlling objects with the mind, arguably translating into a more intense immersion sensation. Additionally, it is also free of some of the most cumbersome aspects of interacting in virtual worlds. By incorporating a real-time physics engine, the sensation of moving something real is further accentuated. We also describe various simple yet effective techniques that allow eyetracking devices to enhance the three-dimensional visualization capabilities of current displays. Some of these techniques have the additional advantage of freeing the mouse from most navigation tasks. This work focuses on the study of existing techniques, a detailed description of the implemented interface and the evaluation (both objective and subjective) of the interface. Given that appropriate filtering of the data from the eye tracker used is a key aspect for the correct functioning of the interface, we will also discuss that aspect in depth.</p>
					<p><a href="https://lib.jucs.org/article/29218/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/29218/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/29218/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sat, 1 Nov 2008 00:00:00 +0000</pubDate>
		</item>
	
	</channel>
</rss>