<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//TaxonX//DTD Taxonomic Treatment Publishing DTD v0 20100105//EN" "../../nlm/tax-treatment-NS0.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:tp="http://www.plazi.org/taxpub" article-type="research-article" dtd-version="3.0" xml:lang="en">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">109</journal-id>
      <journal-id journal-id-type="index">urn:lsid:arphahub.com:pub:3dc5f44e-8666-58db-bc76-a455210e8891</journal-id>
      <journal-title-group>
        <journal-title xml:lang="en">JUCS - Journal of Universal Computer Science</journal-title>
        <abbrev-journal-title xml:lang="en">jucs</abbrev-journal-title>
      </journal-title-group>
      <issn pub-type="ppub">0948-695X</issn>
      <issn pub-type="epub">0948-6968</issn>
      <publisher>
        <publisher-name>Journal of Universal Computer Science</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.3897/jucs.84130</article-id>
      <article-id pub-id-type="publisher-id">84130</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Research Article</subject>
        </subj-group>
        <subj-group subj-group-type="scientific_subject">
          <subject>Topic M - Knowledge Management</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Identifying Tweets with Personal Medication Intake Mentions using Attentive Character and Localized Context Representations</article-title>
      </title-group>
      <contrib-group content-type="authors">
        <contrib contrib-type="author" corresp="yes">
          <name name-style="western">
            <surname>Selvarajah</surname>
            <given-names>Jarashanth</given-names>
          </name>
          <email xlink:type="simple">jarashanth02@gmail.com</email>
          <uri content-type="orcid">https://orcid.org/0000-0002-3228-0145</uri>
          <xref ref-type="aff" rid="A1">1</xref>
        </contrib>
        <contrib contrib-type="author" corresp="yes">
          <name name-style="western">
            <surname>Nawarathna</surname>
            <given-names>Ruwan</given-names>
          </name>
          <email xlink:type="simple">ruwan.nawarathna@gmail.com</email>
          <uri content-type="orcid">https://orcid.org/0000-0001-5843-8919</uri>
          <xref ref-type="aff" rid="A2">2</xref>
        </contrib>
      </contrib-group>
      <aff id="A1">
        <label>1</label>
        <addr-line content-type="verbatim">Postgraduate Institute of Science, University of Peradeniya, Peradeniya, Sri Lanka</addr-line>
        <institution>Postgraduate Institute of Science, University of Peradeniya</institution>
        <addr-line content-type="city">Peradeniya</addr-line>
        <country>Sri Lanka</country>
      </aff>
      <aff id="A2">
        <label>2</label>
        <addr-line content-type="verbatim">Department of Statistics and Computer Science, University of Peradeniya, Peradeniya, Sri Lanka</addr-line>
        <institution>Department of Statistics and Computer Science, University of Peradeniya</institution>
        <addr-line content-type="city">Peradeniya</addr-line>
        <country>Sri Lanka</country>
      </aff>
      <author-notes>
        <fn fn-type="corresp">
          <p>Corresponding authors: Jarashanth Selvarajah (<email xlink:type="simple">jarashanth02@gmail.com</email>), Ruwan Nawarathna (<email xlink:type="simple">ruwan.nawarathna@gmail.com</email>).</p>
        </fn>
        <fn fn-type="edited-by">
          <p>Academic editor: </p>
        </fn>
      </author-notes>
      <pub-date pub-type="collection">
        <year>2022</year>
      </pub-date>
      <pub-date pub-type="epub">
        <day>28</day>
        <month>12</month>
        <year>2022</year>
      </pub-date>
      <volume>28</volume>
      <issue>12</issue>
      <fpage>1312</fpage>
      <lpage>1329</lpage>
      <uri content-type="arpha" xlink:href="http://openbiodiv.net/0624A484-21A7-554E-BB97-1C3363958557">0624A484-21A7-554E-BB97-1C3363958557</uri>
      <history>
        <date date-type="received">
          <day>21</day>
          <month>03</month>
          <year>2022</year>
        </date>
        <date date-type="accepted">
          <day>12</day>
          <month>10</month>
          <year>2022</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>Jarashanth Selvarajah, Ruwan Nawarathna</copyright-statement>
        <license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by-nd/4.0/" xlink:type="simple">
          <license-p>This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY-ND 4.0). This license allows reusers to copy and distribute the material in any medium or format in unadapted form only, and only so long as attribution is given to the creator. The license allows for commercial use.</license-p>
        </license>
      </permissions>
      <abstract>
        <label>Abstract</label>
        <p>Individuals with health anomalies often share their experiences on social media sites such as Twitter, which yields an abundance of data on a global scale. Social media data has become a leading source for building drug monitoring and surveillance systems. However, a proper assessment of such data requires discarding mentions that do not express drug-related personal health experiences. We automate this process with a novel deep learning model that combines character-level and word-level embeddings, embedding-level attention, convolutional neural networks (CNNs), bidirectional gated recurrent units (BiGRUs), and context-aware attention. An embedding for a word is produced by integrating the word-level and character-level embeddings through an embedding-level attention mechanism, which selects the salient features from both embeddings without expanding dimensionality. The resultant embedding is then analyzed independently by three CNN layers, each extracting distinct n-grams, and the output of each CNN layer is further processed by a BiGRU followed by an attention layer. In addition, the resultant embedding is itself encoded by a BiGRU with attention. These four outputs are summed and passed to a softmax classifier. The model is designed to cope with the intricate attributes of tweets, such as vernacular text, descriptive medical phrases, frequently misspelt words, abbreviations, and short messages. To evaluate performance, we built a dataset by combining tweets from two benchmark datasets designed for the same objective. Our model performs substantially better than existing models, including several customized Bidirectional Encoder Representations from Transformers (BERT) models, achieving an F1-score of 0.772.</p>
      </abstract>
    </article-meta>
  </front>
</article>
