Show simple item record

dc.contributor.author: Ghafouri, Vahid
dc.contributor.author: Such, Jose
dc.contributor.author: Suarez-Tangil, Guillermo
dc.date.accessioned: 2024-09-25T11:58:23Z
dc.date.available: 2024-09-25T11:58:23Z
dc.date.issued: 2024-11-14
dc.identifier.citation:

@inproceedings{ghafouri2023ai, author = {Ghafouri, Vahid and Agarwal, Vibhor and Zhang, Yong and Sastry, Nishanth and Such, Jose and Suarez-Tangil, Guillermo}, title = {AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics}, year = {2023}, isbn = {9798400701245}, url = {https://doi.org/10.1145/3583780.3614777}, doi = {10.1145/3583780.3614777}, abstract = {The introduction of ChatGPT and the subsequent improvement of Large Language Models (LLMs) have prompted more and more individuals to turn to the use of ChatBots, both for information and assistance with decision-making. However, the information the user is after is often not formulated by these ChatBots objectively enough to be provided with a definite, globally accepted answer. Controversial topics, such as "religion", "gender identity", "freedom of speech", and "equality", among others, can be a source of conflict as partisan or biased answers can reinforce preconceived notions or promote disinformation. By exposing ChatGPT to such debatable questions, we aim to understand its level of awareness and if existing models are subject to socio-political and/or economic biases. We also aim to explore how AI-generated answers compare to human ones. For exploring this, we use a dataset of a social media platform created for the purpose of debating human-generated claims on polemic subjects among users, dubbed Kialo. Our results show that while previous versions of ChatGPT have had important issues with controversial topics, more recent versions of ChatGPT (gpt-3.5-turbo) are no longer manifesting significant explicit biases in several knowledge areas. In particular, it is well-moderated regarding economic aspects. However, it still maintains degrees of implicit libertarian leaning toward right-winged ideals which suggest the need for increased moderation from the socio-political point of view.
In terms of domain knowledge on controversial topics, with the exception of the "Philosophical" category, ChatGPT is performing well in keeping up with the collective human level of knowledge. Finally, we see that sources of Bing AI have slightly more tendency to the center when compared to human answers. All the analyses we make are generalizable to other types of biases and domains.}, booktitle = {Proceedings of the 32nd ACM International Conference on Information and Knowledge Management}, pages = {556--565}, numpages = {10}, keywords = {sentence transformers, controversial topics, NLP, Kialo, ChatGPT, AI bias}, location = {Birmingham, United Kingdom}, series = {CIKM '23}
}

@inproceedings{ghafouri2024echo, title={Transformer-Based Quantification of the Echo Chamber Effect in Online Communities}, author={Ghafouri, Vahid and Alatawi, Faisal and Karami, Mansooreh and Such, Jose and Suarez-Tangil, Guillermo}, booktitle={ACM Conference on Computer-Supported Cooperative Work and Social Computing}, year={2024}, month = nov, series = {CSCW2 '24}
}

@article{Lingam2018semeval2014, author = "Vijay Lingam and Simran Bhuria and Mayukh Nair and Divij Gurpreetsingh and Anjali Goyal and Ashish Sureka", title = "{Dataset for Conflicting Statements Detection in Text}", year = "2018", month = "2", url = "https://figshare.com/articles/dataset/Dataset_for_Conflicting_Statements_Detection_in_Text/5873823", doi = "10.6084/m9.figshare.5873823.v1"
}

@misc{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Edward J. Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen}, year={2021}, eprint={2106.09685}, archivePrefix={arXiv}, primaryClass={cs.CL}
}

@article{wang2023r32, title={MultiLoRA: Democratizing LoRA for Better Multi-Task Learning}, author={Wang, Yiming and Lin, Yu and Zeng, Xiaodong and Zhang, Guannan}, year={2023}
}

@article{liu2023r32, title={Sparsely Shared LoRA on Whisper for Child Speech Recognition}, author={Liu, Wei and Qin, Ying and Peng, Zhiyuan and Lee, Tan}, year={2023}
}

@inproceedings{marelli2014semeval, title = "{S}em{E}val-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment", author = "Marelli, Marco and Bentivogli, Luisa and Baroni, Marco and Bernardi, Raffaella and Menini, Stefano and Zamparelli, Roberto", editor = "Nakov, Preslav and Zesch, Torsten", booktitle = "Proceedings of the 8th International Workshop on Semantic Evaluation ({S}em{E}val 2014)", month = aug, year = "2014", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/S14-2001", doi = "10.3115/v1/S14-2001", pages = "1--8",
}

@inproceedings{Fang2020DistributionShift, author = {Fang, Tongtong and Lu, Nan and Niu, Gang and Sugiyama, Masashi}, booktitle = {Advances in Neural Information Processing Systems}, editor = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H.
Lin}, pages = {11996--12007}, publisher = {Curran Associates, Inc.}, title = {Rethinking Importance Weighting for Deep Learning under Distribution Shift}, url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/8b9e7ab295e87570551db122a04c6f7c-Paper.pdf}, volume = {33}, year = {2020}
}

@InProceedings{zhai2024CatastrophicForgetting, title = {Investigating the Catastrophic Forgetting in Multimodal Large Language Model Fine-Tuning}, author = {Zhai, Yuexiang and Tong, Shengbang and Li, Xiao and Cai, Mu and Qu, Qing and Lee, Yong Jae and Ma, Yi}, booktitle = {Conference on Parsimony and Learning}, pages = {202--227}, year = {2024}, editor = {Chi, Yuejie and Dziugaite, Gintare Karolina and Qu, Qing and Wang, Atlas and Zhu, Zhihui}, volume = {234}, series = {Proceedings of Machine Learning Research}, month = {03--06 Jan}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v234/zhai24a/zhai24a.pdf}, url = {https://proceedings.mlr.press/v234/zhai24a.html}, abstract = {Following the success of GPT4, there has been a surge in interest in multimodal large language model (MLLM) research. This line of research focuses on developing general-purpose LLMs through fine-tuning pre-trained LLMs and vision models. However, catastrophic forgetting, a notorious phenomenon where the fine-tuned model fails to retain similar performance compared to the pre-trained model, still remains an inherited problem in multimodal LLMs (MLLM). In this paper, we introduce EMT: Evaluating MulTimodality for evaluating the catastrophic forgetting in MLLMs, by treating each MLLM as an image classifier. We first apply EMT to evaluate several open-source fine-tuned MLLMs and we discover that almost all evaluated MLLMs fail to retain the same performance levels as their vision encoders on standard image classification tasks. Moreover, we continue fine-tuning LLaVA, an MLLM and utilize EMT to assess performance throughout the fine-tuning.
Interestingly, our results suggest that early-stage fine-tuning on an image dataset improves performance across other image datasets, by enhancing the alignment of text and language features. However, as fine-tuning proceeds, the MLLMs begin to hallucinate, resulting in a significant loss of generalizability, even when the image encoder remains frozen. Our results suggest that MLLMs have yet to demonstrate performance on par with their vision models on standard image classification tasks and the current MLLM fine-tuning procedure still has room for improvement.}
}

@article{Introne2023sbertproblem, title={Measuring Belief Dynamics on Twitter}, volume={17}, url={https://ojs.aaai.org/index.php/ICWSM/article/view/22154}, doi={10.1609/icwsm.v17i1.22154}, abstract={There is growing concern about misinformation and the role online media plays in social polarization. Analyzing belief dynamics is one way to enhance our understanding of these problems. Existing analytical tools, such as survey research or stance detection, lack the power to correlate contextual factors with population-level changes in belief dynamics. In this exploratory study, I present the Belief Landscape Framework, which uses data about people’s professed beliefs in an online setting to measure belief dynamics with more temporal granularity than previous methods. I apply the approach to conversations about climate change on Twitter and provide initial validation by comparing the method’s output to a set of hypotheses drawn from the literature on dynamic systems. My analysis indicates that the method is relatively robust to different parameter settings, and results suggest that 1) there are many stable configurations of belief on the polarizing issue of climate change and 2) that people move in predictable ways around these points.
The method paves the way for more powerful tools that can be used to understand how the modern digital media ecosystem impacts collective belief dynamics and what role misinformation plays in that process.}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Introne, Joshua}, year={2023}, month={Jun.}, pages={387--398}
}

@article{Iqbal2023Nextdoor, title={Lady and the Tramp Nextdoor: Online Manifestations of Real-World Inequalities in the Nextdoor Social Network}, volume={17}, url={https://ojs.aaai.org/index.php/ICWSM/article/view/22155}, doi={10.1609/icwsm.v17i1.22155}, abstract={From health to education, income impacts a huge range of life choices. Earlier research has leveraged data from online social networks to study precisely this impact. In this paper, we ask the opposite question: do different levels of income result in different online behaviors? We demonstrate it does. We present the first large-scale study of Nextdoor, a popular location-based social network. We collect 2.6 Million posts from 64,283 neighborhoods in the United States and 3,325 neighborhoods in the United Kingdom, to examine whether online discourse reflects the income and income inequality of a neighborhood. We show that posts from neighborhoods with different incomes indeed differ, e.g. richer neighborhoods have a more positive sentiment and discuss crimes more, even though their actual crime rates are much lower. We then show that user-generated content can predict both income and inequality.
We train multiple machine learning models and predict both income (R2=0.841) and inequality (R2=0.77).}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Iqbal, Waleed and Ghafouri, Vahid and Tyson, Gareth and Suarez-Tangil, Guillermo and Castro, Ignacio}, year={2023}, month={Jun.}, pages={399--410}
}

@InProceedings{Hazra2023pkddsbert, author="Hazra, Rima and Dwivedi, Arpit and Mukherjee, Animesh", editor="Amini, Massih-Reza and Canu, St{\'e}phane and Fischer, Asja and Guns, Tias and Kralj Novak, Petra and Tsoumakas, Grigorios", title="Is This Bug Severe? A Text-Cum-Graph Based Model for Bug Severity Prediction", booktitle="Machine Learning and Knowledge Discovery in Databases", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="236--252", abstract="Repositories of large software systems have become commonplace. This massive expansion has resulted in the emergence of various problems in these software platforms including identification of (i) bug-prone packages, (ii) critical bugs, and (iii) severity of bugs. One of the important goals would be to mine these bugs and recommend them to the developers to resolve them. The first step to this is that one has to accurately detect the extent of severity of the bugs. In this paper, we take up this task of predicting the severity of bugs in the near future. Contextualized neural models built on the text description of a bug and the user comments about the bug help to achieve reasonably good performance.
Further information on how the bugs are related to each other in terms of the ways they affect packages can be summarised in the form of a graph and used along with the text to get additional benefits.", isbn="978-3-031-26422-1"
}

@InProceedings{Upadhyay2023bertrerankers, author="Upadhyay, Rishabh and Pasi, Gabriella and Viviani, Marco", editor="Koutra, Danai and Plant, Claudia and Gomez Rodriguez, Manuel and Baralis, Elena and Bonchi, Francesco", title="A Passage Retrieval Transformer-Based Re-Ranking Model for Truthful Consumer Health Search", booktitle="Machine Learning and Knowledge Discovery in Databases: Research Track", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="355--371", abstract="Searching for online information is nowadays a critical task in a scenario characterized by information overload and misinformation. To address these issues, it is necessary to provide users with both topically relevant and truthful information. Re-ranking is a strategy often used in Information Retrieval (IR) to consider multiple dimensions of relevance. However, re-rankers often analyze the full text of documents to obtain an overall relevance score at the re-ranking stage, which can lead to sub-optimal results. Some recent Transformer-based re-rankers actually consider text passages rather than the entire document, but focus only on topical relevance. Transformers are also being used in non-IR solutions to identify information truthfulness, but just to perform a binary classification task. Therefore, in this article, we propose an IR model based on re-ranking that focuses on suitably identified text passages from documents for retrieving both topically relevant and truthful information. This approach significantly reduces the noise introduced by query-unrelated content in long documents and allows us to evaluate the document's truthfulness against it, enabling more effective retrieval.
We tested the effectiveness of the proposed solution in the context of the Consumer Health Search task, considering publicly available datasets. Our results show that the proposed approach statistically outperforms full-text retrieval models in the context of multidimensional relevance, such as those based on aggregation, and monodimensional relevance Transformer-based re-rankers, such as BERT-based re-rankers.", isbn="978-3-031-43412-9"
}

@article{kucuk2020stancedetection, author = {K\"{u}\c{c}\"{u}k, Dilek and Can, Fazli}, title = {Stance Detection: A Survey}, year = {2020}, issue_date = {January 2021}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {53}, number = {1}, issn = {0360-0300}, url = {https://doi.org/10.1145/3369026}, doi = {10.1145/3369026}, abstract = {Automatic elicitation of semantic information from natural language texts is an important research problem with many practical application areas. Especially after the recent proliferation of online content through channels such as social media sites, news portals, and forums; solutions to problems such as sentiment analysis, sarcasm/controversy/veracity/rumour/fake news detection, and argument mining gained increasing impact and significance, revealed with large volumes of related scientific publications. In this article, we tackle an important problem from the same family and present a survey of stance detection in social media posts and (online) regular texts. Although stance detection is defined in different ways in different application settings, the most common definition is “automatic classification of the stance of the producer of a piece of text, towards a target, into one of these three classes: {Favor, Against, Neither}.” Our survey includes definitions of related problems and concepts, classifications of the proposed approaches so far, descriptions of the relevant datasets and tools, and related outstanding issues.
Stance detection is a recent natural language processing topic with diverse application areas, and our survey article on this newly emerging topic will act as a significant resource for interested researchers and practitioners.}, journal = {ACM Comput. Surv.}, month = {feb}, articleno = {12}, numpages = {37}, keywords = {social media analysis, deep learning, Twitter, Stance detection}
}

@inproceedings{sun2018stancedetection, title = "Stance Detection with Hierarchical Attention Network", author = "Sun, Qingying and Wang, Zhongqing and Zhu, Qiaoming and Zhou, Guodong", editor = "Bender, Emily M. and Derczynski, Leon and Isabelle, Pierre", booktitle = "Proceedings of the 27th International Conference on Computational Linguistics", month = aug, year = "2018", address = "Santa Fe, New Mexico, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/C18-1203", pages = "2399--2409", abstract = "Stance detection aims to assign a stance label (for or against) to a post toward a specific target. Recently, there is a growing interest in using neural models to detect stance of documents. Most of these works model the sequence of words to learn document representation. However, much linguistic information, such as polarity and arguments of the document, is correlated with the stance of the document, and can inspire us to explore the stance. Hence, we present a neural model to fully employ various linguistic information to construct the document representation. In addition, since the influences of different linguistic information are different, we propose a hierarchical attention network to weigh the importance of various linguistic information, and learn the mutual attention between the document and the linguistic information.
The experimental results on two datasets demonstrate the effectiveness of the proposed hierarchical attention neural model.",
}

@article{ALDayel2021stancedetection, title = {Stance detection on social media: State of the art and trends}, journal = {Information Processing \& Management}, volume = {58}, number = {4}, pages = {102597}, year = {2021}, issn = {0306-4573}, doi = {10.1016/j.ipm.2021.102597}, url = {https://www.sciencedirect.com/science/article/pii/S0306457321000960}, author = {Abeer ALDayel and Walid Magdy}, keywords = {Stance detection, Stance, Social media, Stance classification}, abstract = {Stance detection on social media is an emerging opinion mining paradigm for various social and political applications in which sentiment analysis may be sub-optimal. There has been a growing research interest for developing effective methods for stance detection methods varying among multiple communities including natural language processing, web science, and social computing, where each modeled stance detection in different ways. In this paper, we survey the work on stance detection across those communities and present an exhaustive review of stance detection techniques on social media, including the task definition, different types of targets in stance detection, features set used, and various machine learning approaches applied. Our survey reports state-of-the-art results on the existing benchmark datasets on stance detection, and discusses the most effective approaches. In addition, we explore the emerging trends and different applications of stance detection on social media, including opinion mining and prediction and recently using it for fake news detection.
The study concludes by discussing the gaps in the current existing research and highlights the possible future directions for stance detection on social media.}
}

@InProceedings{Dey2018stancedetection, author="Dey, Kuntal and Shrivastava, Ritvik and Kaushik, Saroj", editor="Pasi, Gabriella and Piwowarski, Benjamin and Azzopardi, Leif and Hanbury, Allan", title="Topical Stance Detection for Twitter: A Two-Phase LSTM Model Using Attention", booktitle="Advances in Information Retrieval", year="2018", publisher="Springer International Publishing", address="Cham", pages="529--536", abstract="The topical stance detection problem addresses detecting the stance of the text content with respect to a given topic: whether the sentiment of the given text content is in favor of (positive), is against (negative), or is none (neutral) towards the given topic. Using the concept of attention, we develop a two-phase solution. In the first phase, we classify subjectivity - whether a given tweet is neutral or subjective with respect to the given topic. In the second phase, we classify sentiment of the subjective tweets (ignoring the neutral tweets) - whether a given subjective tweet has a favor or against stance towards the topic. We propose a Long Short-Term memory (LSTM) based deep neural network for each phase, and embed attention at each of the phases. On the SemEval 2016 stance detection Twitter task dataset [7], we obtain a best-case macro F-score of 68.84{\%} and a best-case accuracy of 60.2{\%}, outperforming the existing deep learning based solutions.
Our framework, T-PAN, is the first in the topical stance detection literature, that uses deep learning within a two-phase architecture.", isbn="978-3-319-76941-7"
}

@InProceedings{Wendenius2023triplet, author="Wendenius, Christof and Kuehn, Eileen and Streit, Achim", editor="Amini, Massih-Reza and Canu, St{\'e}phane and Fischer, Asja and Guns, Tias and Kralj Novak, Petra and Tsoumakas, Grigorios", title="Training Parameterized Quantum Circuits with Triplet Loss", booktitle="Machine Learning and Knowledge Discovery in Databases", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="515--530", abstract="Training parameterized quantum circuits (PQCs) is a growing research area that has received a boost from the emergence of new hybrid quantum classical algorithms and Quantum Machine Learning (QML) to leverage the power of today's quantum computers. However, a universal pipeline that guarantees good learning behavior has not yet been found, due to several challenges. These include in particular the low number of qubits and their susceptibility to noise but also the vanishing of gradients during training. In this work, we apply and evaluate Triplet Loss in a QML training pipeline utilizing a PQC for the first time. We perform extensive experiments for the Triplet Loss based setup and training on two common datasets, the MNIST and moon dataset. Without significant fine-tuning of training parameters and circuit layout, our proposed approach achieves competitive results to a regular training. Additionally, the variance and the absolute values of gradients are significantly better compared to training a PQC without Triplet Loss. The usage of metric learning proves to be suitable for QML and its high dimensional space as it is not as restrictive as learning on hard labels.
Our results indicate that metric learning provides benefits to mitigate the so-called barren plateaus.", isbn="978-3-031-26419-1"
}

@misc{reimers2019sentencebert, title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks}, author={Nils Reimers and Iryna Gurevych}, year={2019}, eprint={1908.10084}, archivePrefix={arXiv}, primaryClass={cs.CL}
}

@article{Galli2024sberts, author = {Galli, Carlo and Donos, Nikolaos and Calciolari, Elena}, title = {Performance of 4 Pre-Trained Sentence Transformer Models in the Semantic Query of a Systematic Review Dataset on Peri-Implantitis}, journal = {Information}, volume = {15}, year = {2024}, number = {2}, article-number = {68}, url = {https://www.mdpi.com/2078-2489/15/2/68}, issn = {2078-2489}, abstract = {Systematic reviews are cumbersome yet essential to the epistemic process of medical science. Finding significant reports, however, is a daunting task because the sheer volume of published literature makes the manual screening of databases time-consuming. The use of Artificial Intelligence could make literature processing faster and more efficient. Sentence transformers are groundbreaking algorithms that can generate rich semantic representations of text documents and allow for semantic queries. In the present report, we compared four freely available sentence transformer pre-trained models (all-MiniLM-L6-v2, all-MiniLM-L12-v2, all-mpnet-base-v2, and All-distilroberta-v1) on a convenience sample of 6110 articles from a published systematic review. The authors of this review manually screened the dataset and identified 24 target articles that addressed the Focused Questions (FQ) of the review. We applied the four sentence transformers to the dataset and, using the FQ as a query, performed a semantic similarity search on the dataset.
The models identified similarities between the FQ and the target articles to a varying degree, and, sorting the dataset by semantic similarities using the best-performing model (all-mpnet-base-v2), the target articles could be found in the top 700 papers out of the 6110 dataset. Our data indicate that the choice of an appropriate pre-trained model could remarkably reduce the number of articles to screen and the time to completion for systematic reviews.}, doi = {10.3390/info15020068}
}

@article{devlin2018bert, title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding}, author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, year={2018}, eprint={1810.04805}, archivePrefix={arXiv}, primaryClass={cs.CL}
}

@InProceedings{HaCohen2017stancedetection, author="HaCohen-kerner, Yaakov and Ido, Ziv and Ya'akobov, Ronen", editor="Altun, Yasemin and Das, Kamalika and Mielik{\"a}inen, Taneli and Malerba, Donato and Stefanowski, Jerzy and Read, Jesse and {\v{Z}}itnik, Marinka and Ceci, Michelangelo and D{\v{z}}eroski, Sa{\v{s}}o", title="Stance Classification of Tweets Using Skip Char Ngrams", booktitle="Machine Learning and Knowledge Discovery in Databases", year="2017", publisher="Springer International Publishing", address="Cham", pages="266--278", abstract="In this research, we focus on automatic supervised stance classification of tweets. Given test datasets of tweets from five various topics, we try to classify the stance of the tweet authors as either in FAVOR of the target, AGAINST it, or NONE. We apply eight variants of seven supervised machine learning methods and three filtering methods using the WEKA platform. The macro-average results obtained by our algorithm are significantly better than the state-of-art results reported by the best macro-average results achieved in the SemEval 2016 Task 6-A for all the five released datasets.
In contrast to the competitors of the SemEval 2016 Task 6-A, who did not use any char skip ngrams but rather used thousands of ngrams and hundreds of word embedding features, our algorithm uses a few tens of features mainly character-based features where most of them are skip char ngram features.", isbn="978-3-319-71273-4"
}

@article{Biber1988stance, author = {Douglas Biber and Edward Finegan}, title = {Adverbial stance types in English}, journal = {Discourse Processes}, volume = {11}, number = {1}, pages = {1--34}, year = {1988}, publisher = {Routledge}, doi = {10.1080/01638538809544689}, abstract = {The present paper identifies various speech styles of English as marked by stance adverbials. By stance we mean the overt expression of an author's or speaker's attitudes, feelings, judgments, or commitment concerning the message. Adverbials are one of the primary lexical markers of stance in English, and we limit ourselves in this paper to adverbial marking of stance (the attitudinal and style disjuncts presented in Quirk, Greenbaum, Leech, \& Svartvik, 1985). All occurrences of stance adverbials are identified in the LOB and London-Lund corpora (410 texts of written and spoken British English), and each is analyzed in its sentential context to distinguish true markers of stance from adverbials that serve other functions (e.g., as manner adverbs). The adverbials marking stance are divided into six semantic categories, and the frequency of occurrence for each category in each text is computed. The six categories are labeled (1) honestly adverbials, (2) generally adverbials, (3) surely adverbials, (4) actually adverbials, (5) maybe adverbials, and (6) amazingly adverbials. Using a multivariate statistical technique called cluster analysis, texts that are maximally similar in their exploitation of these stance adverbials are grouped into clusters.
We interpret each cluster by consideration of the frequent stance adverbials in the cluster, the situational characteristics of the texts in the cluster, and functional analyses of the stance adverbials in individual texts. Although the stance adverbials are grouped into categories on the basis of their literal meanings, the clusters are interpreted in terms of the discourse functions of the adverbials; in several cases, our analysis shows that the discourse functions of stance adverbials differ considerably from the functions suggested by their literal meanings. With respect to the adverbial marking of stance, eight styles are identified, including “Cautious,” “Secluded from Dispute,” and “Faceless.”}
}

@inproceedings{Li2019stance, title = "Multi-Task Stance Detection with Sentiment and Stance Lexicons", author = "Li, Yingjie and Caragea, Cornelia", editor = "Inui, Kentaro and Jiang, Jing and Ng, Vincent and Wan, Xiaojun", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D19-1657", doi = "10.18653/v1/D19-1657", pages = "6299--6305", abstract = "Stance detection aims to detect whether the opinion holder is in support of or against a given target. Recent works show improvements in stance detection by using either the attention mechanism or sentiment information. In this paper, we propose a multi-task framework that incorporates target-specific attention mechanism and at the same time takes sentiment classification as an auxiliary task. Moreover, we used a sentiment lexicon and constructed a stance lexicon to provide guidance for the attention layer.
Experimental results show that the proposed model significantly outperforms state-of-the-art deep learning methods on the SemEval-2016 dataset.",
}

@article{Guo2023argumentative, title={Representing and Determining Argumentative Relevance in Online Discussions: A General Approach}, volume={17}, url={https://ojs.aaai.org/index.php/ICWSM/article/view/22146}, doi={10.1609/icwsm.v17i1.22146}, abstract={Understanding an online argumentative discussion is essential for understanding users’ opinions on a topic and their underlying reasoning. A key challenge in determining completeness and persuasiveness of argumentative discussions is to assess how arguments under a topic are connected in a logical and coherent manner. Online argumentative discussions, in contrast to essays or face-to-face communication, challenge techniques for judging argument relevance because online discussions involve multiple participants and often exhibit incoherence in reasoning and inconsistencies in writing style. We define relevance as the logical and topical connections between small texts representing argument fragments in online discussions. We provide a corpus comprising pairs of sentences, labeled with argumentative relevance between the sentences in each pair. We propose a computational approach relying on content reduction and a Siamese neural network architecture for modeling argumentative connections and determining argumentative relevance between texts. Experimental results indicate that our approach is effective in measuring relevance between arguments, and outperforms strong and well-adopted baselines.
Further analysis demonstrates the benefit of using our argumentative relevance encoding on a downstream task, predicting how impactful an online comment is to certain topic, comparing to encoding that does not consider logical connection.}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Guo, Zhen and Singh, Munindar P.}, year={2023}, month={Jun.}, pages={292--302}
}

@inproceedings{mikolov2013wordtovec, title = "Linguistic Regularities in Continuous Space Word Representations", author = "Mikolov, Tomas and Yih, Wen-tau and Zweig, Geoffrey", editor = "Vanderwende, Lucy and Daum{\'e} III, Hal and Kirchhoff, Katrin", booktitle = "Proceedings of the 2013 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2013", address = "Atlanta, Georgia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N13-1090", pages = "746--751",
}

@misc{lamb2021transformers, title={Transformers with Competitive Ensembles of Independent Mechanisms}, author={Alex Lamb and Di He and Anirudh Goyal and Guolin Ke and Chien-Feng Liao and Mirco Ravanelli and Yoshua Bengio}, year={2021}, eprint={2103.00336}, archivePrefix={arXiv}, primaryClass={cs.LG}
}

@inproceedings{Koch2015SiameseNN, title={Siamese Neural Networks for One-Shot Image Recognition}, author={Gregory R. Koch}, year={2015}, url={https://api.semanticscholar.org/CorpusID:13874643}
}

@InProceedings{Hoffer2015triplet, author="Hoffer, Elad and Ailon, Nir", editor="Feragen, Aasa and Pelillo, Marcello and Loog, Marco", title="Deep Metric Learning Using Triplet Network", booktitle="Similarity-Based Pattern Recognition", year="2015", publisher="Springer International Publishing", address="Cham", pages="84--92", abstract="Deep learning has proven itself as a successful set of models for learning useful semantic representations of data.
These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", isbn="978-3-319-24261-3" } @incollection{McCloskey1989catastrophic, title = {Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem}, editor = {Gordon H. Bower}, series = {Psychology of Learning and Motivation}, publisher = {Academic Press}, volume = {24}, pages = {109-165}, year = {1989}, issn = {0079-7421}, doi = {https://doi.org/10.1016/S0079-7421(08)60536-8}, url = {https://www.sciencedirect.com/science/article/pii/S0079742108605368}, author = {Michael McCloskey and Neal J. Cohen}, abstract = {Publisher Summary Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers around two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion. This chapter discusses the catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. 
These properties include content-addressable memory and so-called automatic generalization in which a network trained on a set of items responds correctly to other untrained items within the same domain. New learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks.} } @inproceedings{Vahtola2022negation, title = "It Is Not Easy To Detect Paraphrases: Analysing Semantic Similarity With Antonyms and Negation Using the New {S}em{A}nto{N}eg Benchmark", author = {Vahtola, Teemu and Creutz, Mathias and Tiedemann, J{\"o}rg}, editor = "Bastings, Jasmijn and Belinkov, Yonatan and Elazar, Yanai and Hupkes, Dieuwke and Saphra, Naomi and Wiegreffe, Sarah", booktitle = "Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.blackboxnlp-1.20", doi = "10.18653/v1/2022.blackboxnlp-1.20", pages = "249--262", abstract = "We investigate to what extent a hundred publicly available, popular neural language models capture meaning systematically. Sentence embeddings obtained from pretrained or fine-tuned language models can be used to perform particular tasks, such as paraphrase detection, semantic textual similarity assessment or natural language inference. Common to all of these tasks is that paraphrastic sentences, that is, sentences that carry (nearly) the same meaning, should have (nearly) the same embeddings regardless of surface form. 
We demonstrate that performance varies greatly across different language models when a specific type of meaning-preserving transformation is applied: two sentences should be identified as paraphrastic if one of them contains a negated antonym in relation to the other one, such as {``}I am not guilty{''} versus {``}I am innocent{''}.We introduce and release SemAntoNeg, a new test suite containing 3152 entries for probing paraphrasticity in sentences incorporating negation and antonyms. Among other things, we show that language models fine-tuned for natural language inference outperform other types of models, especially the ones fine-tuned to produce general-purpose sentence embeddings, on the test suite. Furthermore, we show that most models designed explicitly for paraphrasing are rather mediocre in our task.", } @inproceedings{Qin2023LLM, title = "Is {C}hat{GPT} a General-Purpose Natural Language Processing Task Solver?", author = "Qin, Chengwei and Zhang, Aston and Zhang, Zhuosheng and Chen, Jiaao and Yasunaga, Michihiro and Yang, Diyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.85", doi = "10.18653/v1/2023.emnlp-main.85", pages = "1339--1384", abstract = "Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot{---}i.e., without adaptation on downstream data. Recently, the debut of ChatGPT has drawn a great deal of attention from the natural language processing (NLP) community due to the fact that it can generate high-quality responses to human input and self-correct previous mistakes based on subsequent conversations. 
However, it is not yet known whether ChatGPT can serve as a generalist model that can perform many NLP tasks zero-shot. In this work, we empirically analyze the zero-shot learning ability of ChatGPT by evaluating it on 20 popular NLP datasets covering 7 representative task categories. With extensive empirical studies, we demonstrate both the effectiveness and limitations of the current version of ChatGPT. We find that ChatGPT performs well on many tasks favoring reasoning capabilities (e.g., arithmetic reasoning) while it still faces challenges when solving specific tasks such as sequence tagging. We additionally provide in-depth analysis through qualitative case studies.", }es
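The Hoffer and Ailon entry above learns representations by distance comparisons over triples rather than class labels. A minimal numpy sketch of the triplet hinge loss such a network optimizes (function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss on Euclidean distances: pull the positive example
    toward the anchor and push the negative at least `margin` farther away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor embedding
p = np.array([0.1, 0.0])   # same class: already close to the anchor
n = np.array([3.0, 0.0])   # different class: already far away
print(triplet_loss(a, p, n))  # 0.0 -- negative is more than margin farther
```

Because the loss compares relative distances, it needs no absolute similarity labels, which is what distinguishes the triplet setup from the pairwise Siamese network it is compared against.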
dc.identifier.uri  https://hdl.handle.net/20.500.12761/1851
dc.description.abstract  Sentence transformers excel at grouping topically similar texts, but struggle to differentiate opposing viewpoints on the same topic. This shortcoming limits their utility in applications where understanding nuanced differences of opinion is essential, such as social and political discourse analysis. This paper addresses the issue by fine-tuning sentence transformers on arguments for and against human-generated controversial claims. We demonstrate how our fine-tuned model enhances the utility of sentence transformers for social computing tasks such as opinion mining and stance detection, and we show that applying stance-aware sentence transformers to opinion mining is more computationally efficient than classic classification-based approaches.
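The abstract describes fine-tuning on arguments for and against controversial claims. A minimal sketch of how such pro/con argument sets could be turned into contrastive training triples (the example texts and the helper name are hypothetical, not from the paper):

```python
from itertools import product

def build_stance_triples(pros, cons):
    """Emit (anchor, positive, negative) triples: arguments sharing a
    stance should embed near each other and away from the opposing side."""
    triples = []
    for p1, p2 in product(pros, pros):
        if p1 == p2:
            continue
        for c in cons:
            triples.append((p1, p2, c))  # pro anchor, pro positive, con negative
    for c1, c2 in product(cons, cons):
        if c1 == c2:
            continue
        for p in pros:
            triples.append((c1, c2, p))  # con anchor, con positive, pro negative
    return triples

pros = ["Pineapple adds sweetness.", "Fruit belongs on pizza."]
cons = ["Pineapple ruins the cheese.", "Hot fruit is unpleasant."]
triples = build_stance_triples(pros, cons)
print(len(triples))  # 8: 4 pro-anchored + 4 con-anchored
```

Fed to a triplet-style objective (e.g. the TripletLoss in the sentence-transformers library), such triples push same-stance arguments together even when both sides share nearly identical topical vocabulary, which is exactly the failure mode the abstract identifies.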
dc.description.sponsorship  UK's Research centre on Privacy, Harm Reduction & Adversarial Influence online
dc.description.sponsorship  Spanish Ministry of Science and Innovation
dc.description.sponsorship  ESF Investing in your future
dc.language.iso  eng
dc.title  I love pineapple on pizza != I hate pineapple on pizza: Stance-Aware Sentence Transformers for Opinion Mining
dc.type  conference object
dc.conference.date  12-16 November 2024
dc.conference.place  Miami, Florida
dc.conference.title  Empirical Methods in Natural Language Processing
dc.event.type  conference
dc.pres.type  paper
dc.type.hasVersion  AM
dc.rights.accessRights  open access
dc.acronym  EMNLP
dc.rank  A*
dc.relation.projectID  MCIN/AEI/10.13039/501100011033
dc.relation.projectID  TED2021-132900A-I00
dc.relation.projectID  RYC-2020-029401-I
dc.relation.projectName  COMET
dc.relation.projectName  2019 Ramon y Cajal fellow
dc.relation.projectName  REPHRAIN
dc.relation.projectName  AP4L: Adaptive PETs to Protect & emPower People during Life Transitions
dc.relation.projectName  European Union-NextGenerationEU
dc.subject.keyword  sentence transformers
dc.subject.keyword  semantic search
dc.subject.keyword  opinion mining
dc.subject.keyword  computational social science
dc.description.refereed  TRUE
dc.description.status  inpress