Bank financial sustainability evaluation: data envelopment analysis with random forest and Shapley additive explanations
European Journal of Operational Research, 2025

Yu Shi, Vincent Charles, Joe Zhu

Ensuring financial sustainability is imperative for a financial institution's overall stability. To mitigate the risk of bank failure amid financial crises, effective management of financial sustainability performance becomes paramount. This study introduces a comprehensive framework for the accurate and efficient quantification, indexing, and evaluation of financial sustainability within the American banking industry. Our approach begins by conceptualizing financial sustainability as a multi-stage, multifactor structure. We construct a composite index through a three-stage network data envelopment analysis (DEA) and subsequently develop a random forest classification model to predict financial sustainability outcomes. The classification model attains an average testing recall rate of 84.34%. Additionally, we employ SHapley Additive exPlanations (SHAP) to scrutinize the impacts of contextual variables on financial sustainability performance across various substages and the overall banking process, as well as to improve the interpretability and transparency of the classification results. SHAP results reveal the significance and effects of contextual variables, and noteworthy differences in contextual impacts emerge among different banking substages. Specifically, loans and leases, interest income, total liabilities, total assets, and market capitalization positively contribute to the deposit stage; revenue to assets positively influences the loan stage; and revenue per share positively affects the profitability stage. This study serves the managerial objective of assisting banks in capturing financial sustainability and identifying potential sources of unsustainability. By unveiling the “black box” of financial sustainability and deciphering its internal dynamics and interactions, banks can enhance their ability to monitor and control financial sustainability performance more effectively.
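The classification step described in the abstract can be sketched with scikit-learn's RandomForestClassifier and a recall score. The synthetic features below merely stand in for the paper's DEA-derived sustainability indices; all variable names and the data-generating process are illustrative assumptions, not the authors' bank dataset.

```python
# Minimal sketch of the classification step: a random forest predicting a
# binary sustainability label, evaluated by test-set recall.
# NOTE: the features are synthetic stand-ins for DEA efficiency scores;
# nothing here reproduces the paper's actual variables or results.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # hypothetical composite-index features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
recall = recall_score(y_te, clf.predict(X_te))
print(f"test recall: {recall:.2f}")
```

In the paper, SHAP attributions are then computed on the fitted forest (e.g. via the `shap` package's tree explainer) to interpret contextual-variable effects; that step is omitted here.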

Estimating non-overfitted convex production technologies: a stochastic machine learning approach
European Journal of Operational Research, 2025

Maria D. Guillen, Vincent Charles, Juan Aparicio

Overfitting is a classical statistical issue that occurs when a model fits a particular observed data sample too closely, potentially limiting its generalizability. While Data Envelopment Analysis (DEA) is a powerful non-parametric method for assessing the relative efficiency of decision-making units (DMUs), its reliance on the minimal extrapolation principle can lead to concerns about overfitting, particularly when the goal extends beyond evaluating the specific DMUs in the sample to making broader inferences. In this paper, we propose an adaptation of Stochastic Gradient Boosting to estimate production possibility sets that mitigate overfitting while satisfying shape constraints such as convexity and free disposability. Our approach is not intended to replace DEA but to complement it, offering an additional tool for scenarios where generalization is important. Through simulation experiments, we demonstrate that the proposed method performs well compared to DEA, especially in high-dimensional settings. Furthermore, the new machine learning-based technique is compared to the Corrected Concave Non-parametric Least Squares (C2NLS), showing competitive performance. We also illustrate how the usual efficiency measures in DEA can be implemented under our approach. Finally, we provide an empirical example based on data from the Program for International Student Assessment (PISA) to demonstrate the applicability of the new method.
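The paper's contribution is adapting Stochastic Gradient Boosting so that the estimated production possibility set satisfies convexity and free disposability. Those shape constraints are not reproduced below; this is only a sketch of the underlying unconstrained learner on a hypothetical single-input, single-output technology, with invented data.

```python
# Sketch of the baseline learner the method adapts: stochastic gradient
# boosting regressing output on input for a single-input technology.
# The paper's shape constraints (convexity, free disposability) are NOT
# enforced here; data and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, size=(300, 1))                       # input of each DMU
y = 3 * np.log(x[:, 0]) + rng.normal(scale=0.3, size=300)   # observed output

# subsample < 1.0 is what makes the boosting "stochastic": each tree is
# fit on a random fraction of the sample, which helps curb overfitting
gbr = GradientBoostingRegressor(n_estimators=300, subsample=0.5,
                                max_depth=2, random_state=1).fit(x, y)
pred = gbr.predict([[5.0]])
```

The `subsample` parameter is the stochastic element; constraining the resulting frontier to be concave/convex with free disposability is the part developed in the paper itself.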

Role of substantive and rhetorical signals in the market reaction to announcements on AI adoption: a configurational study
European Journal of Information Systems, 2024

Rohit Nishant, Tuan (Kellan) Nguyen, Thompson S. H. Teo, Pei-Fang Hsu

How do shareholders respond to technologies hyped in general discourse, e.g., artificial intelligence (AI), if a common understanding is lacking and the technologies are still evolving? Do they respond primarily to substantive signals in technology announcements, such as AI capabilities, or do rhetorical signals also play a significant role? Adopting signalling theory as a theoretical lens, we conceptualise announcements of AI capabilities as substantive signals and linguistic elements in the announcements pertaining to organisational time horizon and risk-reward considerations as rhetorical signals. Departing from the typical focus on bijective relationships, we consider holistic, complex configurations of interdependent factors using the qualitative comparative analysis (QCA) methodology. Notably, announcements pertaining to AI capabilities are not necessarily associated with positive market reactions; in fact, when all three types of AI are included in announcements without explicit consideration of risks, shareholders react negatively. We find that shareholder response is based on joint evaluation of substantive and rhetorical signals, and that these signals interact in a complex way to produce positive and negative market reactions. These findings motivate several propositions for market reactions to IT announcements, providing implications for both theory and practice.

Societal attitudes toward service robots: Adore, abhor, ignore, or unsure?
Journal of Service Research, 2024

Vignesh Yoganathan, Victoria-Sophie Osburg, Andrea Fronzetti Colladon, Vincent Charles, Waldemar Toporowski

Societal or population-level attitudes are aggregated patterns of different individual attitudes, representing collective general predispositions. As service robots become ubiquitous, understanding attitudes towards them at the population (vs. individual) level enables firms to expand robot services to a broad (vs. niche) market. Targeting population-level attitudes would benefit service firms because: 1) they are more persistent, thus, stronger predictors of behavioral patterns, and 2) this approach is less reliant on personal data, whereas individualized services are vulnerable to AI-related privacy risks. As for service theory, ignoring broad unobserved differences in attitudes produces biased conclusions, and our systematic review of previous research highlights a poor understanding of potential heterogeneity in attitudes toward service robots. We present five diverse studies (S1-S5), utilizing multinational and ‘real world’ data (total N = 89,541; years: 2012-2024). Results reveal a stable structure comprising four distinct attitude profiles (S1-S5): positive (“adore”), negative (“abhor”), indifferent (“ignore”), and ambivalent (“unsure”). The psychological need for interacting with service staff, and for autonomy and relatedness in technology use, function as attitude profile antecedents (S2). Importantly, the attitude profiles predict differences in post-interaction discomfort and anxiety (S3), satisfaction ratings and service evaluations (S4), and perceived sociability and uncanniness based on a robot’s humanlikeness (S5).

The formal rationality of artificial intelligence-based algorithms and the problem of bias
Journal of Information Technology, 2024

Rohit Nishant, Dirk Schneckenberg, MN Ravishankar

This paper presents a new perspective on the problem of bias in artificial intelligence (AI)-driven decision-making by examining the fundamental difference between AI and human rationality in making sense of data. Current research has focused primarily on software engineers’ bounded rationality and bias in the data fed to algorithms but has neglected the crucial role of algorithmic rationality in producing bias. Using a Weberian distinction between formal and substantive rationality, we inquire why AI-based algorithms lack the ability to display common sense in data interpretation, leading to flawed decisions. We first conduct a rigorous text analysis to uncover and exemplify contextual nuances within the sampled data. We then combine unsupervised and supervised learning, revealing that algorithmic decision-making characterizes and judges data categories mechanically as it operates through the formal rationality of mathematical optimization procedures. Next, using an AI tool, we demonstrate how formal rationality embedded in AI-based algorithms limits its capacity to perform adequately in complex contexts, thus leading to bias and poor decisions. Finally, we delineate the boundary conditions and limitations of leveraging formal rationality to automatize algorithmic decision-making. Our study provides a deeper understanding of the rationality-based causes of AI’s role in bias and poor decisions, even when data is generated in a largely bias-free context.
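The "unsupervised then supervised" combination described above can be illustrated with a toy pipeline: cluster TF-IDF representations of short texts, then train a classifier on the mechanically derived cluster labels. The corpus, labels, and model choices below are invented for illustration and do not reproduce the paper's data, tools, or analysis.

```python
# Toy sketch of combining unsupervised and supervised learning on text:
# KMeans assigns formal, optimization-driven categories; a classifier then
# "judges" texts by those learned patterns, with no access to contextual
# nuance. All documents here are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

docs = [
    "loan approved for the applicant with strong credit history",
    "credit application rejected due to missing income records",
    "applicant approved after review of repayment capacity",
    "application denied because of incomplete documentation",
] * 25  # repeated to give the models enough samples

X = TfidfVectorizer().fit_transform(docs)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised step: learn to reproduce the mechanical cluster labels.
clf = LogisticRegression().fit(X, clusters)
acc = clf.score(X, clusters)
```

The point the sketch makes visible is the paper's premise: every category here is produced by mathematical optimization over surface features, so any contextual nuance absent from those features is invisible to the resulting decisions.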