
<rss version="2.0">
    <channel>
        <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
        <description>Latest 4 Articles from JUCS - Journal of Universal Computer Science</description>
        <link>https://lib.jucs.org/</link>
        <lastBuildDate>Fri, 13 Mar 2026 01:25:53 +0000</lastBuildDate>
        <generator>Pensoft FeedCreator</generator>
        <image>
            <url>https://lib.jucs.org/i/logo.jpg</url>
            <title>Latest Articles from JUCS - Journal of Universal Computer Science</title>
            <link>https://lib.jucs.org/</link>
            <description><![CDATA[Feed provided by https://lib.jucs.org/. Click to visit.]]></description>
        </image>
	
		<item>
		    <title>Authorship Studies and the Dark Side of Social Media Analytics</title>
		    <link>https://lib.jucs.org/article/23994/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 26(1): 156-170</p>
					<p>DOI: 10.3897/jucs.2020.009</p>
					<p>Authors: Patrick Juola</p>
					<p>Abstract: The computational analysis of documents to learn about their authorship (also known as authorship attribution and/or authorship profiling) is an increasingly important area of research and application of technology. This paper discusses the technology, focusing on its application to social media in a variety of disciplines. It includes a brief survey of the history as well as three tutorial case studies, and discusses several significant applications and societal benefits that authorship analysis has brought about. It further argues, though, that while the benefits of this technology have been great, it has created serious risks to society that have not been sufficiently considered, addressed, or mitigated.</p>
					<p><a href="https://lib.jucs.org/article/23994/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23994/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23994/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Tue, 28 Jan 2020 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>GerIE - An Open Information Extraction System for the German Language</title>
		    <link>https://lib.jucs.org/article/22920/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 24(1): 2-24</p>
					<p>DOI: 10.3217/jucs-024-01-0002</p>
					<p>Authors: Akim Bassa, Mark Kroll, Roman Kern</p>
					<p>Abstract: Open Information Extraction (OIE) allows relations to be extracted from text without the need for domain-specific training data. To date, most research on OIE has focused on the English language, and little or no research has been conducted on other languages, including German. To tackle this problem, we developed GerIE, an OIE system for the German language. We surveyed the literature on OIE in order to identify concepts that may apply to the German language. Our system is based on the output of a German dependency parser and a number of handcrafted rules for extracting propositions. To evaluate the system, we created two dedicated datasets: one derived from news articles and the other from encyclopedia texts. Our system achieves F-measures of up to 0.89 for correctly preprocessed sentences.</p>
					<p><a href="https://lib.jucs.org/article/22920/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/22920/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/22920/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Sun, 28 Jan 2018 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Computational Analysis of Medieval Manuscripts: A New Tool for Analysis and Mapping of Medieval Documents to Modern Orthography</title>
		    <link>https://lib.jucs.org/article/23978/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 18(20): 2750-2770</p>
					<p>DOI: 10.3217/jucs-018-20-2750</p>
					<p>Authors: Mushtaq Ahmad, Stefan Gruner, Muhammad Afzal</p>
					<p>Abstract: Medieval manuscripts and other written documents from that period contain valuable information about the people, religion, and politics of the medieval era, making the study of medieval documents a necessary prerequisite to gaining in-depth knowledge of medieval history. Although tool-less study of such documents is possible and has been ongoing for centuries, much subtle information remains locked in such manuscripts unless it is revealed by effective means of computational analysis. Automatic analysis of medieval manuscripts is a non-trivial task, mainly due to non-conforming styles, spelling peculiarities, and the lack of relational structures (hyperlinks) that could be used to answer meaningful queries. Natural Language Processing (NLP) tools and algorithms are used to carry out computational analysis of text data. However, due to the high percentage of spelling variations in medieval manuscripts, NLP tools and algorithms cannot be applied directly. If the spelling variations are mapped to standard dictionary words, then the application of standard NLP tools and algorithms becomes possible. In this paper we describe CAMM (Computational Analysis of Medieval Manuscripts), a web-based software tool that maps medieval spelling variations to a modern German dictionary. We describe the steps taken to acquire, reformat, and analyze the data and to produce putative mappings, as well as the steps taken to evaluate the findings. At the time of writing, CAMM provides access to 11275 manuscripts organized into 54 collections containing a total of 242446 distinctly spelled words. CAMM accurately corrects the spelling of 55% of the verifiable words. CAMM is freely available at http://researchworks.cs.athabascau.ca/.</p>
					<p><a href="https://lib.jucs.org/article/23978/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/23978/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/23978/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Wed, 01 Feb 2012 00:00:00 +0000</pubDate>
		</item>
	
		<item>
		    <title>Table-form Extraction with Artefact Removal</title>
		    <link>https://lib.jucs.org/article/28942/</link>
		    <description><![CDATA[
					<p>JUCS - Journal of Universal Computer Science 14(2): 252-265</p>
					<p>DOI: 10.3217/jucs-014-02-0252</p>
					<p>Authors: Luiz Antônio Pereira Neves, João De Carvalho, Jacques Facon, Flávio Bortolozzi</p>
					<p>Abstract: In this paper we present a novel methodology to recognize the layout structure of handwritten filled table-forms. The recognition methodology includes locating line intersections, correcting wrong intersections produced by what we call artefacts (overlapping data, broken segments, and smudges), extracting correct table-form cells, and using as little prior table-form knowledge as possible. To improve layout structure recognition, a novel artefact identification and deletion method is also proposed. To evaluate the effectiveness of the methodology, a database of 350 handwritten filled table-form images damaged by different types of artefacts was used. Experiments show that the artefact identification method improves the performance of the table-form structure extractor, which reached a success rate of 85%.</p>
					<p><a href="https://lib.jucs.org/article/28942/">HTML</a></p>
					<p><a href="https://lib.jucs.org/article/28942/download/xml/">XML</a></p>
					<p><a href="https://lib.jucs.org/article/28942/download/pdf/">PDF</a></p>
			]]></description>
		    <category>Research Article</category>
		    <pubDate>Mon, 28 Jan 2008 00:00:00 +0000</pubDate>
		</item>
	
	</channel>
</rss>
	