Sorry

This feed does not validate.

In addition, interoperability with the widest range of feed readers could be improved by implementing the following recommendation.

Source: https://aisel.aisnet.org/jais/recent.rss
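
As a quick, local sanity check on this feed, the short script below is a minimal sketch using only the Python standard library; the FEED_URL constant and the specific element checks are illustrative assumptions rather than this validator's rule set. It confirms that the feed parses as well-formed XML and that the channel carries the elements RSS 2.0 requires (title, link, description), then lists the item titles.

    # Minimal sketch: fetch the feed, confirm it is well-formed XML, and check
    # the channel elements RSS 2.0 requires. Not a substitute for the validator.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://aisel.aisnet.org/jais/recent.rss"  # the source listed below

    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        raw = resp.read()

    try:
        root = ET.fromstring(raw)  # raises ParseError if the XML is not well-formed
    except ET.ParseError as err:
        raise SystemExit(f"Feed is not well-formed XML: {err}")

    channel = root.find("channel")
    if root.tag != "rss" or channel is None:
        raise SystemExit("Not an RSS 2.0 document: missing <rss> root or <channel>")

    # RSS 2.0 requires title, link, and description on the channel.
    for required in ("title", "link", "description"):
        if channel.find(required) is None:
            print(f"Missing required channel element: <{required}>")

    for item in channel.findall("item"):
        print(item.findtext("title", default="(no title)"))

A check like this covers only well-formedness and a handful of required elements; the conformance and interoperability issues reported by this validator depend on its full rule set.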

  1. <?xml version="1.0" encoding="utf-8" ?>
  2. <rss version="2.0">
  3. <channel>
  4. <title>Journal of the Association for Information Systems</title>
  5. <copyright>Copyright (c) 2024 Association for Information Systems All rights reserved.</copyright>
  6. <link>https://aisel.aisnet.org/jais</link>
  7. <description>Recent documents in Journal of the Association for Information Systems</description>
  8. <language>en-us</language>
  9. <lastBuildDate>Wed, 03 Apr 2024 11:49:40 PDT</lastBuildDate>
  10. <ttl>3600</ttl>
  11.  
  12.  
  13.  
  14.  
  15.  
  16.  
  17.  
  18.  
  19. <item>
  20. <title>From Methodological Symmetry to Gaia: Latour’s Legacy and Untapped Potential for IS Research</title>
  21. <link>https://aisel.aisnet.org/jais/vol25/iss2/10</link>
  22. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/10</guid>
  23. <pubDate>Sat, 02 Mar 2024 23:26:37 PST</pubDate>
  24. <description>
  25. <![CDATA[
  26. <p>A defining concern for information systems (IS) is how to theorize its primary object of study, the digital artifact. Historically, approaches in IS have oscillated between technological determinist and social determinist ones. Bruno Latour’s works contributed to recalibrating a more symmetrical treatment of the two, and he played a distinctive role in stimulating the evolution of empirically committed, process-oriented theorizing in the IS field. We commemorate Latour’s work by revisiting his early influence on IS research and discuss some of the direct and indirect influences on contemporary IS, such as infrastructure studies and sociomateriality. With the interest in agentic technologies via AI, Latour’s perspectives on technological agency are more relevant than ever. In addition, we explore ideas that are relevant to IS but have yet to be taken up, thus representing untapped theoretical and methodological potential for future IS research—for example, approaching data as circulating reference or how his work could contribute to sustainability discussions in IS.</p>
  27.  
  28. ]]>
  29. </description>
  30.  
  31. <author>Margunn Aanestad et al.</author>
  32.  
  33.  
  34. </item>
  35.  
  36.  
  37.  
  38.  
  39.  
  40.  
  41. <item>
  42. <title>Optimal Launch Timing of Bug Bounty Programs for Software Products under Different Licensing Models</title>
  43. <link>https://aisel.aisnet.org/jais/vol25/iss2/8</link>
  44. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/8</guid>
  45. <pubDate>Sat, 02 Mar 2024 23:26:36 PST</pubDate>
  46. <description>
  47. <![CDATA[
  48. <p>An increasing number of software firms are utilizing bug bounty programs (BBPs) to detect bugs and enhance their product quality by leveraging the contributions of external ethical hackers. Although launching a BBP involves bounties as well as the costs of processing bug reports and fixing bugs, software firms can save failure costs and enjoy the benefits of greater user trust. The costs and benefits resulting from launching a BBP vary with launch timings and software licensing models. Hence, we investigate the optimal BBP launch strategies for software firms, using perpetual or subscription licensing models. Our findings reveal that under perpetual licensing, the firm has only two viable launch strategies: simultaneous launch, i.e., launching the software and the BBP simultaneously, and no launch. Under subscription licensing, however, delayed launch, i.e., launching the BBP later than the software release time, occurs as the optimal strategy when the failure cost is not high and the benefit of user trust is significant. Two distinct patterns in the relationship between the firm’s bug-fixing capability and its payoff are identified: a U-shaped pattern and an inverted U-shaped pattern. We uncover the conditions under which a firm should opt not to launch a BBP as its bug-fixing capability improves. This study offers insights into how firms can be motivated to launch BBPs to improve the overall reliability of their software.</p>
  49.  
  50. ]]>
  51. </description>
  52.  
  53. <author>Nan Feng et al.</author>
  54.  
  55.  
  56. </item>
  57.  
  58.  
  59.  
  60.  
  61.  
  62.  
  63. <item>
  64. <title>How Online Consumers Use Multiple Advice Sources: An Empirical Exploration Using Verbal Protocol Analysis</title>
  65. <link>https://aisel.aisnet.org/jais/vol25/iss2/9</link>
  66. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/9</guid>
  67. <pubDate>Sat, 02 Mar 2024 23:26:36 PST</pubDate>
  68. <description>
  69. <![CDATA[
  70. <p>With more online stores providing recommendations and reviews simultaneously from multiple advice sources—such as recommendation agents, other consumers, and experts—consumers are facing the challenge of deciding how to use wide-ranging and possibly conflicting sets of information to improve their product selection. Although previous studies have investigated the classical decision-making strategies used in preferential choice problems, most of these are not directly applicable to multiple advice source environments. In addition, since most of these studies have mainly focused on a variance model rather than a process model, they cannot fully explain how decision makers reach their product selection decisions. To shed light on the processes that online consumers use in making product selections when using multiple advice sources, this study: (1) explores if, how, and when consumers use consistency as a part of their decision-making strategies through developing process models; (2) identifies the consistency strategies utilized when using multiple advice sources; and (3) proposes a new consistency-based decision-making model. Through concurrent verbal protocol analysis, we identified four <em>consistency strategies</em> and found that the use of consistency strategies increases decision quality more than traditional nonconsistency strategies. Our findings are triangulated through the theoretical lens of cognitive dissonance theory, information search process model, and confirmation bias for rigorous validation. We also describe the theoretical and practical implications of our findings.</p>
  71.  
  72. ]]>
  73. </description>
  74.  
  75. <author>Hongki Kim et al.</author>
  76.  
  77.  
  78. </item>
  79.  
  80.  
  81.  
  82.  
  83.  
  84.  
  85. <item>
  86. <title>Nonverbal Peer Feedback and User Contribution in Online Forums: Experimental Evidence of the Role of Attribution and Emotions</title>
  87. <link>https://aisel.aisnet.org/jais/vol25/iss2/7</link>
  88. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/7</guid>
  89. <pubDate>Sat, 02 Mar 2024 23:26:35 PST</pubDate>
  90. <description>
  91. <![CDATA[
  92. <p>Peer feedback is often associated with an increase in the contributions of members in online communities. Verbal feedback (such as a review) can give details about how the recipient can improve their contribution, but it requires the recipient to read and process the feedback. Conversely, nonverbal feedback (such as an upvote) is easy to comprehend but does not convey much helpful information. Prior studies have mainly focused on the impact of verbal feedback. However, little has been done to explore the underlying mechanism of the effect of nonverbal peer feedback on people’s tendency to contribute more. We present two experimental studies conducted on Amazon Mechanical Turk. Study 1 demonstrates how verbal and nonverbal feedback impact user contributions differently. Next, building on attribution-emotion-action theory, we use Study 2 to establish a causal mechanism between nonverbal feedback and users’ knowledge contribution. Specifically, users who receive nonverbal peer feedback make internal and external attributions, which in turn impact their emotions and contribution decisions. We find that users receiving more positive feedback attribute this in equal measure internally to perceived self-efficacy and externally to perceived fairness, whereas users who receive negative feedback attribute it more to the lack of perceived fairness of peer feedback. These findings have important implications for both content-sharing platforms and researchers trying to better understand the drivers of online content-sharing behavior.</p>
  93.  
  94. ]]>
  95. </description>
  96.  
  97. <author>Ramesh Shankar et al.</author>
  98.  
  99.  
  100. </item>
  101.  
  102.  
  103.  
  104.  
  105.  
  106.  
  107. <item>
  108. <title>Qualitative Cusp Catastrophe Multi-Agent Simulation Model to Explore Abrupt Changes in Online Impulsive Buying Behavior</title>
  109. <link>https://aisel.aisnet.org/jais/vol25/iss2/6</link>
  110. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/6</guid>
  111. <pubDate>Sat, 02 Mar 2024 23:26:34 PST</pubDate>
  112. <description>
  113. <![CDATA[
  114. <p>We develop a qualitative catastrophe (a nonlinear sudden violent change) multi-agent simulation model to investigate the evolution of group behavior, specifically abrupt changes in online impulsive buying (OIB) behavior. Studies have rarely investigated the mechanism of abrupt changes in OIB at the group level. To address the research gaps and advance this area of research, we employed a sequential multiple-methods approach. First, we designed a questionnaire to obtain and analyze consumer data to identify OIB drivers. Second, we built a qualitative catastrophe model based on empirical findings to describe sudden changes in the OIB behavior of individuals by merging catastrophe theory (CT) and qualitative simulation (QSIM). Finally, grounded in the qualitative catastrophe model, we constructed an agent-based model (ABM) to simulate group-level OIB behavior. The empirical findings revealed the following. (1) Sudden changes in group-level OIB occur as consumers’ sense of <em>quantified self</em> increases when self-control is low. (2) The greater the number of consumers with a proving preference (who prefer to prove their competence and performance to others) in the group, the higher the possibility of catastrophe; the scale of catastrophe increases significantly with the enhancement of product information features. (3) We identify optimal gamification for sudden increases in group-level OIB; the larger the degree of social networking is, the higher the likelihood of catastrophe behavior. Our proposed combination of a survey study, QSIM, and an ABM is a plausible solution for behavioral research on consumers, and the integration paradigm could help maintain market stability and promote products.</p>
  115.  
  116. ]]>
  117. </description>
  118.  
  119. <author>Xiaochao Wei et al.</author>
  120.  
  121.  
  122. </item>
  123.  
  124.  
  125.  
  126.  
  127.  
  128.  
  129. <item>
  130. <title>Content and Style of Firm-Generated Posts on Social Media: A Study of User Engagement on Hedonic and Utilitarian Product Pages on Facebook</title>
  131. <link>https://aisel.aisnet.org/jais/vol25/iss2/5</link>
  132. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/5</guid>
  133. <pubDate>Sat, 02 Mar 2024 23:26:33 PST</pubDate>
  134. <description>
  135. <![CDATA[
  136. <p>Social media platforms are increasingly crucial in corporate engagement, but there is limited research delineating how page characteristics influence firm behavior on these platforms. This paper examines the relationship between firm-generated content and user engagement on product pages and whether a page’s characteristics and content motivations moderate this relationship. Analyzing a sample of 29,267 posts on 85 Facebook product pages of 49 Fortune 1000 firms, we studied the relationship between the content (i.e., informational vs. emotional) and style (i.e., formal vs. informal) of a given post and the level of user engagement. The key finding is that in the content dimension, “incongruous” posts, i.e., content not traditionally expected for a product page of a given type, generate more favorable engagement. Also, informal style posts achieve more favorable engagement irrespective of the product page characteristics. The secondary discovery is attributable to Facebook’s function as a social media network where acquaintances engage with one another. The platform’s unwritten protocol of communication largely emphasizes informality, which is likely the driver of this observation. These findings contribute to the burgeoning social media strategy literature illustrating how product page characteristics moderate effective social media content strategies.</p>
  137.  
  138. ]]>
  139. </description>
  140.  
  141. <author>Scott Schanke et al.</author>
  142.  
  143.  
  144. </item>
  145.  
  146.  
  147.  
  148.  
  149.  
  150.  
  151. <item>
  152. <title>Proposing Shocks and Dissatisfaction to Explain Quitting and Switching a Service: An Image Theory Perspective</title>
  153. <link>https://aisel.aisnet.org/jais/vol25/iss2/4</link>
  154. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/4</guid>
  155. <pubDate>Sat, 02 Mar 2024 23:26:32 PST</pubDate>
  156. <description>
  157. <![CDATA[
  158. <p>This study sheds light on how shocks, or events perceived as jarring, cause users to discontinue a service. We use image theory to develop a model of discontinuation (MOD), which includes five main concepts: <em>shock, script, image violation, service </em>and<em> task dissatisfaction</em>, and <em>search for and evaluation of superior alternatives</em>. MOD explains how two types of user discontinuation behavior—quitting and switching—are formed along seven paths that are either related to contextual or technological shocks or result from dissatisfaction with a task or service. Our first empirical study shows that more than 95% of the 467 ex-users surveyed followed one of these paths when discontinuing media streaming, social networking, and matchmaking services. In the second study, we developed a research model based on the MOD to understand how the psychological factors associated with shocks lead to the intention to discontinue. We evaluate this model with a scenario study in which we present the situation of users of a matchmaking service and present participants with a contextual shock. Data from 201 individuals show that users are likely to develop intentions to quit when they use an engaged script or experience an image violation. Interviews with a panel of practitioners confirmed that MOD provides a useful and applicable approach to understanding users’ quitting and switching behaviors in different contexts. We conclude our paper with a discussion of implications for research and practice.</p>
  159.  
  160. ]]>
  161. </description>
  162.  
  163. <author>Christian Maier et al.</author>
  164.  
  165.  
  166. </item>
  167.  
  168.  
  169.  
  170.  
  171.  
  172.  
  173. <item>
  174. <title>Research Perspectives: An Encompassing Framework for Conceptualizing Space in Information Systems: Philosophical Perspectives, Themes, and Concepts</title>
  175. <link>https://aisel.aisnet.org/jais/vol25/iss2/3</link>
  176. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/3</guid>
  177. <pubDate>Sat, 02 Mar 2024 23:26:31 PST</pubDate>
  178. <description>
  179. <![CDATA[
  180. <p>The conceptualization of <em>space</em> is integral to many of the diverse forms of information systems—for example, the physical space represented in geographical information systems and the virtual space of simulated worlds. Yet despite its importance and centrality, the conceptualization of space in information systems is not as sophisticated or mature as in other fields. A lack of attention to the diversity of perspectives on space hampers ongoing research and the re-visioning of phenomena that could lead to new insights in information systems. The aim of this paper is to develop an encompassing framework that provides a comprehensive view of philosophical perspectives, spatial themes, and concepts of space that are relevant to information systems. As a result of an extensive literature review, an encompassing framework is presented that includes four prominent spatial themes: representing space, differentiating space, disclosing space, and intuitive space. Each theme is related to its key characteristics and features and underlying philosophical perspectives. The paper demonstrates how the new framework can facilitate IS scholars’ expansive analysis in scholarly work and assist editors and reviewers in evaluating papers concerning space in IS and shows how the re-visioning of phenomena can lead to transformational shifts in understanding IS phenomena.</p>
  181.  
  182. ]]>
  183. </description>
  184.  
  185. <author>Amir Haj-Bolouri et al.</author>
  186.  
  187.  
  188. </item>
  189.  
  190.  
  191.  
  192.  
  193.  
  194.  
  195. <item>
  196. <title>Firm Competitive Structure and Consumer Reaction in Search Advertising</title>
  197. <link>https://aisel.aisnet.org/jais/vol25/iss2/2</link>
  198. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/2</guid>
  199. <pubDate>Sat, 02 Mar 2024 23:26:30 PST</pubDate>
  200. <description>
  201. <![CDATA[
  202. <p>Sponsored search advertising has become an important venue for firms competing for consumers. As a result, many keywords attract a large number of bidders, and the competing advertisers may be quite heterogeneous. We examine whether this heterogeneity impacts how consumers perceive and react to such competitions. To this end, we draw on the theory of <em>strategic groups</em> to prescribe the structure of the competitive environment and investigate how strategic groups impact consumers’ clicking and website-visit behavior when viewing sponsored search results. Our unique datasets that combine search results from Google and consumers’ clickstream data enable us to disentangle such an impact. We find strong positive externality for within-group competitors relative to across-group competitors: (1) consumers are more likely to co-visit two firms that belong to the same strategic group, as opposed to two firms from different groups when both firms appear in the search results; (2) the presence of a firm in the search results primes consumers to visit other firms from the same strategic group even when the other firms do not appear in the search results. Our findings contribute to the sponsored search and strategic group literature by theorizing and empirically verifying consumers’ website-visit behaviors from the strategic group perspective.</p>
  203.  
  204. ]]>
  205. </description>
  206.  
  207. <author>Cheng Nie et al.</author>
  208.  
  209.  
  210. </item>
  211.  
  212.  
  213.  
  214.  
  215.  
  216.  
  217. <item>
  218. <title>Judging the Wrongness of Firms in Social Media Firestorms: The Heuristic and Systematic Information Processing Perspective</title>
  219. <link>https://aisel.aisnet.org/jais/vol25/iss2/1</link>
  220. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss2/1</guid>
  221. <pubDate>Sat, 02 Mar 2024 23:26:29 PST</pubDate>
  222. <description>
  223. <![CDATA[
  224. <p>Social media firestorms pose a significant challenge for firms in the digital age. Tackling firestorms is difficult because the judgments and responses from social media users are influenced by not only the nature of the transgressions but also by the reactions and opinions of other social media users. Drawing on the heuristic-systematic information processing model, we propose a research model to explain the effects of social impact (the heuristic mode) and argument quality and moral intensity (the systematic mode) on perceptions of firm wrongness (the judgment outcome) as well as the effects of perceptions of firm wrongness on vindictive complaining and patronage reduction. We adopted a mixed methods approach in our investigation, including a survey, an experiment, and a focus group study. Our findings show that the heuristic and systematic modes of information processing exert both direct and interaction effects on individuals’ judgment. Specifically, the heuristic mode of information processing dominates overall and also biases the systematic mode. Our study advances the literature by offering an alternative explanation for the emergence of social media firestorms and identifying a novel context in which the heuristic mode dominates in dual information processing. It also sheds light on the formulation of response strategies to mitigate the adverse impacts resulting from social media firestorms. We conclude our paper with limitations and future research directions.</p>
  225.  
  226. ]]>
  227. </description>
  228.  
  229. <author>Tommy K. H. Chan et al.</author>
  230.  
  231.  
  232. </item>
  233.  
  234.  
  235.  
  236.  
  237.  
  238.  
  239. <item>
  240. <title>A Knowledge Management Perspective of Generative Artificial Intelligence</title>
  241. <link>https://aisel.aisnet.org/jais/vol25/iss1/15</link>
  242. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/15</guid>
  243. <pubDate>Mon, 01 Jan 2024 01:10:33 PST</pubDate>
  244. <description>
  245. <![CDATA[
  246. <p>In this editorial, revisiting Alavi and Leidner (2001) as a conceptual lens, we consider the organizational implications of generative artificial intelligence (GenAI) from a knowledge management (KM) perspective. We examine how GenAI impacts the processes of knowledge creation, storage, transfer, and application, highlighting both the opportunities and challenges this technology presents. In knowledge creation, GenAI enhances information processing and cognitive functions, fostering individual and organizational learning. However, it also introduces risks like AI bias and reduced human socialization, potentially marginalizing junior knowledge workers. For knowledge storage and retrieval, GenAI’s ability to quickly access vast knowledge bases significantly changes employee interactions with KM systems. This raises questions about balancing human-derived tacit knowledge with AI-generated explicit knowledge. The paper also explores GenAI’s role in knowledge transfer, particularly in training and cultivating a learning culture. Challenges include an overreliance on AI and risks in disseminating sensitive information. In terms of knowledge application, GenAI is seen as a tool to boost productivity and innovation, but issues like knowledge misapplication, intellectual property, and ethical considerations are critical. Conclusively, the paper argues for a balanced approach to integrating GenAI into KM processes. It advocates for harmonizing GenAI’s capabilities with human insights to effectively manage knowledge in contemporary organizations, ensuring both technological advances and ethical responsibility.</p>
  247.  
  248. ]]>
  249. </description>
  250.  
  251. <author>Maryam Alavi et al.</author>
  252.  
  253.  
  254. </item>
  255.  
  256.  
  257.  
  258.  
  259.  
  260.  
  261. <item>
  262. <title>Navigating Generative Artificial Intelligence Promises and Perils for Knowledge and Creative Work</title>
  263. <link>https://aisel.aisnet.org/jais/vol25/iss1/13</link>
  264. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/13</guid>
  265. <pubDate>Mon, 01 Jan 2024 01:10:32 PST</pubDate>
  266. <description>
  267. <![CDATA[
  268. <p>Generative artificial intelligence (GenAI) is rapidly becoming a viable tool to enhance productivity and act as a catalyst for innovation across various sectors. Its ability to perform tasks that have traditionally required human judgment and creativity is transforming knowledge and creative work. Yet it also raises concerns and implications that could reshape the very landscape of knowledge and creative work. In this editorial, we undertake an in-depth examination of both the opportunities and challenges presented by GenAI for future IS research.</p>
  269.  
  270. ]]>
  271. </description>
  272.  
  273. <author>Hind Benbya et al.</author>
  274.  
  275.  
  276. </item>
  277.  
  278.  
  279.  
  280.  
  281.  
  282.  
  283. <item>
  284. <title>The Societal Impacts of Generative Artificial Intelligence: A Balanced Perspective</title>
  285. <link>https://aisel.aisnet.org/jais/vol25/iss1/14</link>
  286. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/14</guid>
  287. <pubDate>Mon, 01 Jan 2024 01:10:32 PST</pubDate>
  288. <description>
  289. <![CDATA[
  290. <p>The discourse surrounding the societal impacts of generative artificial intelligence (GAI), exemplified by technologies like ChatGPT, often oscillates between extremes: utopian visions of unprecedented productivity and dystopian fears of humanity’s demise. This polarized perspective neglects the nuanced, pragmatic manifestation of GAI. In general, extreme views oversimplify the technology itself or its potential to address societal issues. The authors suggest a more balanced analysis, acknowledging that GAI’s impacts will unfold dynamically over time as diverse implementations interact with human stakeholders and contextual factors. While Big Tech firms dominate GAI’s supply, its demand is expected to evolve through experimentation and use cases. The authors argue that GAI’s societal impact depends on identifiable contingencies, emphasizing three broad factors: the balance between automation and augmentation, the congruence of physical and digital realities, and the retention of human bounded rationality. These contingencies represent trade-offs arising from GAI instantiations, shaped by technological advancements, stakeholder dynamics, and contextual factors, including societal responses and regulations. Predicting long-term societal effects remains challenging due to unforeseeable discontinuities in the technology’s trajectory. The authors anticipate a continuous interplay between GAI initiatives, technological advances, learning experiences, and societal responses, with outcomes depending on the above contingencies.</p>
  291.  
  292. ]]>
  293. </description>
  294.  
  295. <author>Rajiv Sabherwal et al.</author>
  296.  
  297.  
  298. </item>
  299.  
  300.  
  301.  
  302.  
  303.  
  304.  
  305. <item>
  306. <title>AI for Knowledge Creation, Curation, and Consumption in Context</title>
  307. <link>https://aisel.aisnet.org/jais/vol25/iss1/12</link>
  308. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/12</guid>
  309. <pubDate>Mon, 01 Jan 2024 01:10:31 PST</pubDate>
  310. <description>
  311. <![CDATA[
  312. ]]>
  313. </description>
  314.  
  315. <author>David Schwartz et al.</author>
  316.  
  317.  
  318. </item>
  319.  
  320.  
  321.  
  322.  
  323.  
  324.  
  325. <item>
  326. <title>Reimagining the Journal Editorial Process: An AI-Augmented Versus an AI-Driven Future</title>
  327. <link>https://aisel.aisnet.org/jais/vol25/iss1/10</link>
  328. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/10</guid>
  329. <pubDate>Mon, 01 Jan 2024 01:10:30 PST</pubDate>
  330. <description>
  331. <![CDATA[
  332. <p>The editorial process at our leading information systems journals has been pivotal in shaping and growing our field. But this process has grown long in the tooth and is increasingly frustrating and challenging its various stakeholders: editors, reviewers, and authors. The sudden and explosive spread of AI tools, including advances in language models, makes them a tempting fit in our efforts to ease and advance the editorial process. But we must carefully consider how the goals and methods of AI tools fit with the core purpose of the editorial process. We present a thought experiment exploring the implications of two distinct futures for the information systems powering today’s journal editorial process: an AI-augmented and an AI-driven one. The AI-augmented scenario envisions systems providing algorithmic predictions and recommendations to enhance human decision-making, offering enhanced efficiency while maintaining human judgment and accountability. However, it also requires debate over algorithm transparency, appropriate machine learning methods, and data privacy and security. The AI-driven scenario, meanwhile, imagines a fully autonomous and iterative AI. While potentially even more efficient, this future risks failing to align with academic values and norms, perpetuating data biases, and neglecting the important social bonds and community practices embedded in and strengthened by the human-led editorial process. We consider and contrast the two scenarios in terms of their usefulness and dangers to authors, reviewers, editors, and publishers. We conclude by cautioning against the lure of an AI-driven, metric-focused approach, advocating instead for a future where AI serves as a tool to augment human capacity and strengthen the quality of academic discourse. But more broadly, this thought experiment allows us to distill what the editorial process is about: the building of a premier research community instead of chasing metrics and efficiency. It is up to us to guard these values.</p>
  333.  
  334. ]]>
  335. </description>
  336.  
  337. <author>Galit Shmueli et al.</author>
  338.  
  339.  
  340. </item>
  341.  
  342.  
  343.  
  344.  
  345.  
  346.  
  347. <item>
  348. <title>Responsible Artificial Intelligence and Journal Publishing</title>
  349. <link>https://aisel.aisnet.org/jais/vol25/iss1/11</link>
  350. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/11</guid>
  351. <pubDate>Mon, 01 Jan 2024 01:10:30 PST</pubDate>
  352. <description>
  353. <![CDATA[
  354. <p>The aim of this opinion piece is to examine the responsible use of artificial intelligence (AI) in relation to academic journal publishing. The work discusses approaches to AI with particular attention to recent developments with generative AI. Consensus is noted around eight normative themes for principles for responsible AI and their associated risks. A framework from Shneiderman (2022) for human-centered AI is employed to consider journal publishing practices that can address the principles of responsible AI at different levels. The resultant AI principled governance matrix (AI-PGM) for journal publishing shows how countermeasures for risks can be employed at the levels of the author-researcher team, the organization, the industry, and by government regulation. The AI-PGM allows a structured approach to responsible AI and may be modified as developments with AI unfold. It shows how the whole publishing ecosystem should be considered when looking at the responsible use of AI—not just journal policy itself.</p>
  355.  
  356. ]]>
  357. </description>
  358.  
  359. <author>Shirley Gregor</author>
  360.  
  361.  
  362. </item>
  363.  
  364.  
  365.  
  366.  
  367.  
  368.  
  369. <item>
  370. <title>Peer Review in the Age of Generative AI</title>
  371. <link>https://aisel.aisnet.org/jais/vol25/iss1/9</link>
  372. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/9</guid>
  373. <pubDate>Mon, 01 Jan 2024 01:10:29 PST</pubDate>
  374. <description>
  375. <![CDATA[
  376. <p>Rapid advances in artificial intelligence (AI), including recent generative forms, are significantly impacting our lives and work. A key aspect of our work as IS researchers is the publishing of research articles, for which peer review serves as the primary means of quality control. While there have been debates about whether and to what extent AI can replace researchers in various domains, including IS, we lack an in-depth understanding of how AI can impact the peer review process. Considering the high volume of submissions and limited reviewer resources, there is a pressing need to use AI to augment the review process. At the same time, advances in AI have been accompanied by concerns about biases introduced by AI tools and the ethics of using them, among other issues such as hallucinations. Thus, critical issues to understand are: how can AI augment and potentially automate the review process, what are the pitfalls in doing so, and what are the implications for IS research and peer review practice. I will offer my views on these issues in this opinion piece.</p>
  377.  
  378. ]]>
  379. </description>
  380.  
  381. <author>Atreyi Kankanhalli</author>
  382.  
  383.  
  384. </item>
  385.  
  386.  
  387.  
  388.  
  389.  
  390.  
  391. <item>
  392. <title>The Other Reviewer: RoboReviewer</title>
  393. <link>https://aisel.aisnet.org/jais/vol25/iss1/8</link>
  394. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/8</guid>
  395. <pubDate>Mon, 01 Jan 2024 01:10:28 PST</pubDate>
  396. <description>
  397. <![CDATA[
  398. <p>The peer review process is a mainstay for informing publication decisions at many journals and conferences. It has several strengths that are well-accepted, such as providing a signal about the quality of published papers. Nonetheless, it has several limitations that have been documented extensively, such as reviewer biases affecting paper appraisals. To date, attempts to mitigate these limitations have had limited success. Accordingly, I consider how developments in artificial intelligence technologies—in particular, pretrained large language models with downstream fine-tuning—might be used to automate peer reviews. I discuss several challenges that are likely to arise if these systems are built and deployed and some ways to address these challenges. If the systems are deemed successful, I describe some characteristics of a highly competitive, lucrative marketplace for these systems that is likely to emerge. I discuss some ramifications of such a marketplace for authors, reviewers, editors, conference chairs, conference program committees, publishers, and the peer review process.</p>
  399.  
  400. ]]>
  401. </description>
  402.  
  403. <author>Ron Weber</author>
  404.  
  405.  
  406. </item>
  407.  
  408.  
  409.  
  410.  
  411.  
  412.  
  413. <item>
  414. <title>Human-in-the-Loop AI Reviewing: Feasibility, Opportunities, and Risks</title>
  415. <link>https://aisel.aisnet.org/jais/vol25/iss1/7</link>
  416. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/7</guid>
  417. <pubDate>Mon, 01 Jan 2024 01:10:27 PST</pubDate>
  418. <description>
  419. <![CDATA[
  420. <p>The promise of AI for academic work is bewitching and easy to envisage, but the risks involved are often hard to detect and usually not readily exposed. In this opinion piece, we explore the feasibility, opportunities, and risks of using large language models (LLMs) for reviewing academic submissions, while keeping the human in the loop. We experiment with GPT-4 in the role of a reviewer to demonstrate the opportunities and the risks we experience and ways to mitigate them. The reviews are structured according to a conference review form with the dual purpose of evaluating submissions for editorial decisions and providing authors with constructive feedback according to predefined criteria, which include contribution, soundness, and presentation. We demonstrate feasibility by evaluating and comparing LLM reviews with human reviews, concluding that current AI-augmented reviewing is sufficiently accurate to alleviate the burden of reviewing but not completely and not for all cases. We then enumerate the opportunities of AI-augmented reviewing and present open questions. Next, we identify the risks of AI-augmented reviewing, highlighting bias, value misalignment, and misuse. We conclude with recommendations for managing these risks.</p>
  421.  
  422. ]]>
  423. </description>
  424.  
  425. <author>Iddo Drori et al.</author>
  426.  
  427.  
  428. </item>
  429.  
  430.  
  431.  
  432.  
  433.  
  434.  
  435. <item>
  436. <title>New Frontiers in Information Systems Theorizing: Human-gAI Collaboration</title>
  437. <link>https://aisel.aisnet.org/jais/vol25/iss1/6</link>
  438. <guid isPermaLink="true">https://aisel.aisnet.org/jais/vol25/iss1/6</guid>
  439. <pubDate>Mon, 01 Jan 2024 00:45:13 PST</pubDate>
  440. <description>
  441. <![CDATA[
  442. <p>The <em>Journal of the Association for Information Systems</em> has long had a reputation for promoting theory development. Yet theory development can be experienced as risky and frustrating because of a lack of divergence and convergence—both in terms of ideas and in the social dynamics among human theorists. These dichotomies can stymie progress and lead to unfinished works. Misconceptions about theory can also hamper advances. We examine the ways in which generative artificial intelligence (gAI) tools may be useful in developing theory in information systems (IS) through human-gAI collaboration, thus forging new frontiers in IS theorizing.</p>
  443.  
  444. ]]>
  445. </description>
  446.  
  447. <author>Sirkka Jarvenpaa et al.</author>
  448.  
  449.  
  450. </item>
  451.  
  452.  
  453.  
  454.  
  455.  
  456. </channel>
  457. </rss>
  458.  