JUCS - Journal of Universal Computer Science 28(12): 1312-1329, doi: 10.3897/jucs.84130
Identifying Tweets with Personal Medication Intake Mentions using Attentive Character and Localized Context Representations
Jarashanth Selvarajah‡, Ruwan Nawarathna§
‡ Postgraduate Institute of Science, University of Peradeniya, Peradeniya, Sri Lanka
§ Department of Statistics and Computer Science, University of Peradeniya, Peradeniya, Sri Lanka
Abstract

Individuals with health anomalies often share their experiences on social media sites such as Twitter, which yields an abundance of data on a global scale. Social media data has consequently become a leading source for building drug monitoring and surveillance systems. However, a proper assessment of such data requires discarding mentions that do not express drug-related personal health experiences. We automate this process by introducing a novel deep learning model. The model combines character-level and word-level embeddings, embedding-level attention, convolutional neural networks (CNN), bidirectional gated recurrent units (BiGRU), and context-aware attention. An embedding for each word is produced by integrating its word-level and character-level embeddings through an embedding-level attention mechanism, which selects the salient features of both embeddings without expanding dimensionality. The resultant embedding is analyzed independently by three CNN layers, each extracting distinct n-grams, and the output of each CNN layer is further processed by a BiGRU followed by an attention layer. In parallel, the resultant embedding is also encoded directly by a BiGRU with attention. These four outputs are summed and passed to a softmax classifier. The model is designed to cope with the intricate attributes of tweets, such as vernacular text, descriptive medical phrases, frequently misspelt words, abbreviations, and short messages. To evaluate performance, we built a dataset by combining tweets from two benchmark datasets designed for the same objective. Our model performs substantially better than existing models, including several customized Bidirectional Encoder Representations from Transformers (BERT) models, achieving an F1-score of 0.772.
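To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture: a gated embedding-level attention fusing word- and character-level vectors, three CNN branches each followed by a BiGRU with attention pooling, a fourth CNN-free BiGRU branch, and a summed output fed to a classifier. All names, layer sizes, the gating form of the embedding-level attention, the n-gram widths (3, 4, 5), and the class count are assumptions for illustration, not the paper's reported configuration.

# A minimal sketch of the architecture in the abstract, assuming PyTorch.
# Hyperparameters and attention formulations are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Context-aware attention pooling over a sequence of hidden states."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                 # h: (batch, seq, dim)
        scores = self.context(torch.tanh(self.proj(h)))   # (batch, seq, 1)
        weights = F.softmax(scores, dim=1)
        return (weights * h).sum(dim=1)                   # (batch, dim)

class MedicationIntakeClassifier(nn.Module):
    def __init__(self, vocab_size, dim=200, hidden=100, n_classes=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        # char_vecs (see forward) stands in for a per-word character-level
        # encoder, e.g. a char-CNN, assumed here as a precomputed input.
        self.char_proj = nn.Linear(dim, dim)
        # Embedding-level attention: a learned gate selects salient features
        # from the two embeddings without increasing dimensionality.
        self.gate = nn.Linear(2 * dim, dim)
        # Three CNN branches extracting different n-grams (widths assumed).
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, hidden, k, padding=k // 2) for k in (3, 4, 5)])
        self.grus = nn.ModuleList(
            [nn.GRU(hidden, hidden, bidirectional=True, batch_first=True)
             for _ in range(3)])
        # Fourth branch: BiGRU applied directly to the fused embedding.
        self.direct_gru = nn.GRU(dim, hidden, bidirectional=True,
                                 batch_first=True)
        self.attns = nn.ModuleList(
            [AdditiveAttention(2 * hidden) for _ in range(4)])
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, word_ids, char_vecs):
        # word_ids: (batch, seq); char_vecs: (batch, seq, dim)
        w = self.word_emb(word_ids)
        c = self.char_proj(char_vecs)
        g = torch.sigmoid(self.gate(torch.cat([w, c], dim=-1)))
        x = g * w + (1 - g) * c                # fused embedding, same dim
        branches = []
        for conv, gru, attn in zip(self.convs, self.grus, self.attns[:3]):
            f = F.relu(conv(x.transpose(1, 2))).transpose(1, 2)
            h, _ = gru(f)
            branches.append(attn(h))
        h, _ = self.direct_gru(x)              # CNN-free fourth branch
        branches.append(self.attns[3](h))
        # Sum the four branch outputs; softmax is applied implicitly by a
        # cross-entropy loss during training.
        return self.out(torch.stack(branches).sum(dim=0))

Note that the sigmoid gate keeps the fused embedding at the original dimensionality, which is the property the abstract attributes to the embedding-level attention; the exact attention used in the paper may differ.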

Keywords
Drug surveillance; character-level embedding; context-aware attention; convolutional neural networks; bidirectional gated recurrent units