Ashish Kumar Jha, Rohit Nishant
With fake news now a serious concern facing researchers, practitioners, and policymakers alike, research is increasingly exploring the factors that lead to its proliferation. However, there is limited research on the role of temporal orientation, i.e., the emphasis on time. This paper examines whether a future temporal orientation (FTO), defined as a relative emphasis on the future observed in fake news titles and content, is associated with fake news sharing. We bring arguments grounded in evolutionary psychology to understand the underlying rationale driving this phenomenon. Our analysis of a Twitter dataset comprising 465,519 tweets suggests that FTO characterizes fake news and is positively associated with fake news sharing. Notably, fake news titles and the accompanying text differ in their FTO. Specifically, we show an inverted U-shaped relationship between fake news sharing and the difference in FTO between the title and accompanying text. As a practical implication of this analysis, efforts to limit the spread of fake news should pay more attention to how such news emphasizes the future.
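To make the FTO construct concrete, the following is a minimal illustrative sketch, not the authors' measurement procedure: it scores a text's future orientation as the share of tokens drawn from a small, purely hypothetical lexicon of future-oriented words, and computes the title-body FTO difference that the abstract relates to sharing.

```python
# Hypothetical lexicon for illustration only; the paper's actual
# operationalization of future temporal orientation is not shown here.
FUTURE_WORDS = {"will", "soon", "tomorrow", "upcoming", "future", "shall"}

def fto_score(text: str) -> float:
    """Fraction of tokens that are future-oriented (0.0 for empty text)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in FUTURE_WORDS for t in tokens) / len(tokens)

title = "markets will crash tomorrow"
body = "analysts reported mixed results last quarter"
fto_gap = fto_score(title) - fto_score(body)  # title-body FTO difference
```

Under the paper's inverted U-shaped finding, sharing would be highest at moderate values of such a gap, not at its extremes.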
Yu Shi, Vincent Charles, Joe Zhu
Ensuring financial sustainability is imperative for a financial institution's overall stability. To mitigate the risk of bank failure amid financial crises, effective management of financial sustainability performance becomes paramount. This study introduces a comprehensive framework for the accurate and efficient quantification, indexing, and evaluation of financial sustainability within the American banking industry. Our approach begins by conceptualizing financial sustainability as a multi-stage, multifactor structure. We construct a composite index through a three-stage network data envelopment analysis (DEA) and subsequently develop a random forest classification model to predict financial sustainability outcomes. The classification model attains an average testing recall rate of 84.34%. Additionally, we employ SHapley Additive exPlanations (SHAP) to scrutinize the impacts of contextual variables on financial sustainability performance across various substages and the overall banking process, as well as to improve the interpretability and transparency of the classification results. SHAP results reveal the significance and effects of contextual variables, and noteworthy differences in contextual impacts emerge among different banking substages. Specifically, loans and leases, interest income, total liabilities, total assets, and market capitalization positively contribute to the deposit stage; revenue to assets positively influences the loan stage; and revenue per share positively affects the profitability stage. This study serves the managerial objective of assisting banks in capturing financial sustainability and identifying potential sources of unsustainability. By unveiling the “black box” of financial sustainability and deciphering its internal dynamics and interactions, banks can enhance their ability to monitor and control financial sustainability performance more effectively.
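As a rough intuition for the multi-stage indexing idea, the sketch below is a toy construction, not the paper's network DEA model: it splits a bank's process into deposit, loan, and profitability stages, gives each a simple output/input ratio, and aggregates the ratios into a composite index via their geometric mean. Stage definitions and figures are invented for illustration.

```python
import math

def stage_ratio(outputs, inputs):
    """Simple output-to-input ratio for one processing stage."""
    return sum(outputs) / sum(inputs)

def composite_index(stages):
    """Geometric mean of stage ratios; stages = list of (outputs, inputs)."""
    ratios = [stage_ratio(o, i) for o, i in stages]
    return math.prod(ratios) ** (1 / len(ratios))

# Hypothetical bank: each tuple is (stage outputs, stage inputs).
bank = [
    ([800.0], [1000.0]),  # deposit stage: deposits raised vs. funding base
    ([600.0], [800.0]),   # loan stage: loans issued vs. deposits used
    ([90.0], [600.0]),    # profitability stage: net income vs. loans
]
idx = composite_index(bank)
```

A network DEA model would instead solve linear programs linking the stages through intermediate outputs; the geometric mean here only conveys how substage performance can roll up into one index.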
Maria D. Guillen, Vincent Charles, Juan Aparicio
Overfitting is a classical statistical issue that occurs when a model fits a particular observed data sample too closely, potentially limiting its generalizability. While Data Envelopment Analysis (DEA) is a powerful non-parametric method for assessing the relative efficiency of decision-making units (DMUs), its reliance on the minimal extrapolation principle can lead to concerns about overfitting, particularly when the goal extends beyond evaluating the specific DMUs in the sample to making broader inferences. In this paper, we propose an adaptation of Stochastic Gradient Boosting to estimate production possibility sets, mitigating overfitting while satisfying shape constraints such as convexity and free disposability. Our approach is not intended to replace DEA but to complement it, offering an additional tool for scenarios where generalization is important. Through simulation experiments, we demonstrate that the proposed method performs well compared to DEA, especially in high-dimensional settings. Furthermore, the new machine learning-based technique is compared to the Corrected Concave Non-parametric Least Squares (C2NLS), showing competitive performance. We also illustrate how the usual efficiency measures in DEA can be implemented under our approach. Finally, we provide an empirical example based on data from the Program for International Student Assessment (PISA) to demonstrate the applicability of the new method.
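To illustrate what a shape-constrained frontier estimate looks like, here is a minimal sketch under free disposability alone (an FDH-style estimator), which is our simplification, not the paper's boosting algorithm: the estimated maximal output at input level x is the largest output observed among units using no more input than x, yielding a non-decreasing step function.

```python
def fdh_frontier(data, x):
    """FDH-style frontier: max observed output among units with input <= x.

    data: list of (input, output) pairs; returns 0.0 when no unit is feasible.
    The resulting function is non-decreasing in x (free disposability).
    """
    feasible = [y for xi, y in data if xi <= x]
    return max(feasible) if feasible else 0.0

# Hypothetical single-input, single-output sample of four DMUs.
sample = [(1.0, 2.0), (2.0, 3.5), (3.0, 3.0), (4.0, 5.0)]
```

Because this estimator envelops every observation exactly, it inherits DEA's minimal-extrapolation overfitting risk; the paper's boosting adaptation regularizes such estimates while preserving shape constraints like the monotonicity shown here.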
Rohit Nishant, Tuan (Kellan) Nguyen, Thompson S. H. Teo, Pei-Fang Hsu
How do shareholders respond to technologies hyped in general discourse, e.g., artificial intelligence (AI), if a common understanding is lacking and the technologies are still evolving? Do they respond primarily to substantive signals in technology announcements, such as AI capabilities, or do rhetorical signals also play a significant role? Adopting signalling theory as a theoretical lens, we conceptualise announcements of AI capabilities as substantive signals and linguistic elements in the announcements pertaining to organisational time horizon and risk-reward considerations as rhetorical signals. Departing from the typical focus on bijective relationships, we consider holistic, complex configurations of interdependent factors using the qualitative comparative analysis (QCA) methodology. Notably, announcements pertaining to AI capabilities are not necessarily associated with positive market reactions; in fact, when all three types of AI are included in announcements without explicit consideration of risks, shareholders react negatively. We find that shareholder response is based on joint evaluation of substantive and rhetorical signals, and that these signals interact in a complex way to produce positive and negative market reactions. These findings motivate several propositions for market reactions to IT announcements, providing implications for both theory and practice.
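For readers unfamiliar with QCA, the sketch below shows the crisp-set consistency measure that underlies configurational claims like those in the abstract: the share of cases exhibiting a configuration that also show the outcome. The condition and case data are hypothetical stand-ins, not the study's calibration.

```python
def consistency(cases, config, outcome="positive_reaction"):
    """Crisp-set QCA consistency: P(outcome | configuration) over the cases.

    cases: list of dicts of binary memberships; config: dict of required
    condition values. Returns 0.0 when no case matches the configuration.
    """
    matching = [c for c in cases if all(c[k] == v for k, v in config.items())]
    if not matching:
        return 0.0
    return sum(c[outcome] for c in matching) / len(matching)

# Hypothetical announcement cases with binary condition memberships.
cases = [
    {"ai_capability": 1, "risk_language": 1, "positive_reaction": 1},
    {"ai_capability": 1, "risk_language": 0, "positive_reaction": 0},
    {"ai_capability": 1, "risk_language": 1, "positive_reaction": 1},
]
```

A configuration is typically deemed sufficient for the outcome when its consistency clears a threshold (often around 0.8); the study's finding that capability announcements without risk language draw negative reactions is exactly this kind of configurational pattern.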
Vignesh Yoganathan, Victoria-Sophie Osburg, Andrea Fronzetti Colladon, Vincent Charles, Waldemar Toporowski
Societal or population-level attitudes are aggregated patterns of different individual attitudes, representing collective general predispositions. As service robots become ubiquitous, understanding attitudes towards them at the population (vs. individual) level enables firms to expand robot services to a broad (vs. niche) market. Targeting population-level attitudes would benefit service firms because: 1) they are more persistent and thus stronger predictors of behavioral patterns, and 2) this approach is less reliant on personal data, whereas individualized services are vulnerable to AI-related privacy risks. As for service theory, ignoring broad unobserved differences in attitudes produces biased conclusions, and our systematic review of previous research highlights a poor understanding of potential heterogeneity in attitudes toward service robots. We present five diverse studies (S1-S5), utilizing multinational and ‘real world’ data (Ntotal = 89,541; years: 2012-2024). Results reveal a stable structure comprising four distinct attitude profiles (S1-S5): positive (“adore”), negative (“abhor”), indifferent (“ignore”), and ambivalent (“unsure”). The psychological need for interacting with service staff, and for autonomy and relatedness in technology use, function as attitude profile antecedents (S2). Importantly, the attitude profiles predict differences in post-interaction discomfort and anxiety (S3), satisfaction ratings and service evaluations (S4), and perceived sociability and uncanniness based on a robot’s humanlikeness (S5).
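The four-profile structure can be pictured with a deliberately simplified sketch: classifying a respondent by separate positive and negative attitude scores, where being high on both captures ambivalence. The threshold logic is our assumption for illustration; the studies derive profiles empirically from latent patterns in the data, not from fixed cutoffs.

```python
def attitude_profile(pos: float, neg: float, cut: float = 0.5) -> str:
    """Map positive/negative attitude scores (0-1) to one of four profiles.

    Threshold `cut` is a hypothetical illustration parameter.
    """
    if pos >= cut and neg >= cut:
        return "unsure"  # ambivalent: high on both dimensions
    if pos >= cut:
        return "adore"   # positive
    if neg >= cut:
        return "abhor"   # negative
    return "ignore"      # indifferent: low on both dimensions
```

Treating ambivalence and indifference as distinct profiles, rather than collapsing both into a "neutral" midpoint, is the key point this toy mapping is meant to convey.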
Rohit Nishant, Dirk Schneckenberg, MN Ravishankar
This paper presents a new perspective on the problem of bias in artificial intelligence (AI)-driven decision-making by examining the fundamental difference between AI and human rationality in making sense of data. Current research has focused primarily on software engineers’ bounded rationality and bias in the data fed to algorithms but has neglected the crucial role of algorithmic rationality in producing bias. Using a Weberian distinction between formal and substantive rationality, we inquire why AI-based algorithms lack the ability to display common sense in data interpretation, leading to flawed decisions. We first conduct a rigorous text analysis to uncover and exemplify contextual nuances within the sampled data. We then combine unsupervised and supervised learning, revealing that algorithmic decision-making characterizes and judges data categories mechanically as it operates through the formal rationality of mathematical optimization procedures. Next, using an AI tool, we demonstrate how the formal rationality embedded in AI-based algorithms limits their capacity to perform adequately in complex contexts, thus leading to bias and poor decisions. Finally, we delineate the boundary conditions and limitations of leveraging formal rationality to automate algorithmic decision-making. Our study provides a deeper understanding of the rationality-based causes of AI’s role in bias and poor decisions, even when data is generated in a largely bias-free context.
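The formal-versus-substantive rationality contrast can be made tangible with a deliberately crude toy of our own construction (not the paper's models): a classifier that judges purely on surface tokens applies the same mechanical rule everywhere, so a word that is harmless in one context is flagged identically in all contexts.

```python
# Hypothetical flagged-word set; purely for illustration.
FLAGGED = {"attack"}

def keyword_classifier(text: str) -> str:
    """Formally rational rule: flag any text containing a flagged token,
    regardless of context."""
    return "flag" if any(t in FLAGGED for t in text.lower().split()) else "ok"

medical = keyword_classifier("heart attack symptoms and first aid")
hostile = keyword_classifier("planning an attack on the server")
```

Both inputs receive the same verdict even though only one is threatening; a substantively rational reader would separate them. This context-blindness of mechanical category judgement is the source of bias the paper analyzes.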
Olivia Brown, Robert M. Davison, Stephanie Decker, David A. Ellis, James Faulconbridge, Julie Gore, Michelle Greenwood, Gazi Islam, Christina Lubinski, Niall G. MacKenzie, Renate Meyer, Daniel Muzio, Paolo Quattrone, M. N. Ravishankar, Tammar Zilber, Shuang Ren, Riikka M. Sarala, Paul Hibbert
The advent of generative artificial intelligence (GAI) has sparked both enthusiasm and anxiety as different stakeholders grapple with the potential to reshape the business and management landscape. This dynamic discourse extends beyond GAI itself to encompass closely related innovations that have existed for some time, for example, machine learning, thereby creating a collective anticipation of opportunities and dilemmas surrounding the transformative or disruptive capacities of these emerging technologies.