
Investigating Large Language Models for Financial Causality Detection in Multilingual Setup

2024-01-22 · IEEE International Conference on Big Data (BigData) 2024

Neelesh K Shukla, Raghu Katikeri, Msp Raja, Gowtham Sivam, Shlok Yadav, Amit Vaid, Shreenivas Prabhakararao


Abstract

This paper presents our contribution to the Financial Document Causality Detection (FinCausal) task, a component of the FNP-2023 workshop. The FinCausal challenge centers on extracting cause-and-effect relationships from financial texts written in English and Spanish. Recent advances in Generative AI and Large Language Models (LLMs) have spurred investigations into their reasoning abilities, motivating our exploration of LLMs' potential for causal reasoning in the financial domain. This study also examines non-English languages, aiming to assess the capacity of LLMs on that front as well. Our investigation revealed that LLMs exhibit a remarkable ability to identify causal relationships, particularly when provided with a few task-specific, relevant examples. Our research also demonstrates that LLMs can process non-English text effectively when given the same English prompts together with language-comprehension instructions. A comparative analysis of OpenAI GPT-3.5 and GPT-4 concluded that GPT-4 is better suited for this purpose. Finally, our study shows that LLMs yield semantically similar causes and effects. This finding indicates that LLMs do not rely solely on surface content for their predictions, and hence the need for an evaluation approach for this task that also emphasizes semantic-similarity metrics.
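To make the approach concrete, the sketch below illustrates the two ideas the abstract describes: a few-shot prompt for cause/effect extraction (with an instruction telling the model how to handle non-English input) and a semantic-similarity score for evaluation. This is not the authors' code; the exemplar, instruction wording, and helper names are hypothetical, and `difflib.SequenceMatcher` stands in for the embedding-based similarity a real evaluation would likely use.

```python
# Illustrative sketch only: few-shot prompt construction for FinCausal-style
# extraction, plus a stand-in semantic-similarity score for evaluation.
from difflib import SequenceMatcher

# Hypothetical few-shot exemplar; the actual task exemplars differ.
EXAMPLES = [
    {
        "text": "Revenue fell 8% after the company lost a key contract.",
        "cause": "the company lost a key contract",
        "effect": "Revenue fell 8%",
    },
]

# Hypothetical instruction, including a language-comprehension hint for
# non-English (e.g., Spanish) inputs, as the abstract describes.
INSTRUCTIONS = (
    "Extract the cause and the effect from the financial text. "
    "If the text is not in English, first comprehend it in its own "
    "language, then answer with spans copied from the text."
)

def build_prompt(text: str) -> str:
    """Assemble instructions, few-shot exemplars, and the query text."""
    parts = [INSTRUCTIONS, ""]
    for ex in EXAMPLES:
        parts += [f"Text: {ex['text']}",
                  f"Cause: {ex['cause']}",
                  f"Effect: {ex['effect']}",
                  ""]
    parts += [f"Text: {text}", "Cause:"]
    return "\n".join(parts)

def semantic_similarity(pred: str, gold: str) -> float:
    """Character-overlap ratio as a cheap proxy for semantic similarity."""
    return SequenceMatcher(None, pred.lower(), gold.lower()).ratio()

prompt = build_prompt("Profits rose because demand for loans increased.")
# A prediction can paraphrase the gold span yet still score well:
score = semantic_similarity("demand for loans increased",
                            "increased demand for loans")
```

In practice the prompt would be sent to GPT-3.5 or GPT-4 via the OpenAI API, and the similarity score would come from sentence embeddings rather than string overlap; the point here is only the shape of the pipeline.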
