Healthcare systems worldwide are confronting a critical need to reconfigure how knowledge is generated, shared, and applied to support sustainable organisational transformation (Mukherjee et al., 2025; OECD, 2024). Over the past decade, health systems have faced mounting pressure to balance limited resources with growing expectations (WHO, 2025). Financial constraints, workforce shortages, and chronic underinvestment are set against rising demand for high-quality services, a growing prevalence of chronic conditions, and a rapidly ageing population (Kyriopoulos et al., 2025; McKee et al., 2021). These tensions have been further exacerbated by disruptive global events, most notably the COVID-19 pandemic, climate-related crises, and geopolitical instability, which have exposed structural weaknesses and underscored the need to rethink healthcare delivery models (Wade, 2023; WHO, 2025). These disruptions acted as both a stress test and an accelerator: they exposed structural vulnerabilities in pharmaceutical and device supply chains, the health workforce, and hospital capacity, while simultaneously demonstrating the potential of new biomedical technologies, digital platforms, and patient-centred processes (Duong et al., 2025; Fusternau et al., 2022). In this view, crises have functioned as “learning laboratories”, accelerating institutional awareness that resilience depends not only on technological readiness but also on the capacity to manage and reuse knowledge across systems and sectors. The lessons learned during these periods have translated into a renewed urgency to rethink how healthcare systems manage their assets and mobilise their knowledge resources to become more resilient and sustainable, raising the fundamental question of how innovation can generate value over time.
Addressing these challenges requires reconceptualising healthcare organisations as knowledge-intensive and complex systems capable of learning and adapting. The pursuit of resilience and sustainability in healthcare rests on the ability to integrate innovation with collaboration among diverse stakeholders in the value chain (Asperti et al., 2025). A growing body of research emphasises that intelligent and sustainable solutions, supported by collective behaviours, are critical levers for building healthcare services that can withstand crises while maintaining high quality (Emami et al., 2024; Zurynski et al., 2022). From this perspective, healthcare organisations are best understood as dynamic learning systems: adaptive entities that continuously create, absorb, and reorganise knowledge to respond to environmental pressures. They integrate multiple forms of knowledge (scientific, tacit, and relational), balancing the adoption of technological change with the preservation of professional expertise and human values.
Consequently, traditional assessment frameworks should also be revised, moving beyond clinical and economic metrics to include the social and environmental dimensions of value, as well as the acceptability of the technology (Foglia et al., 2024), thereby helping decision-makers align innovation investments with long-term resilience goals (Marsh et al., 2016). Embedding the principles of Responsible Innovation further ensures that technological progress remains ethically sound, socially desirable, and environmentally sustainable (Pacifico et al., 2018). Healthcare systems that manage to integrate sustainability and responsibility into innovation processes are more likely to attract funding, talent, and trust.
Furthermore, the rapid advancement of AI and digital technologies represents one of the most significant forces driving transformations in healthcare (Kraus et al., 2021). AI applications in healthcare hold the potential to revolutionise knowledge management by enabling the processing of vast amounts of data collected through sensors, wearables, and electronic health records. These data, when combined with advanced analytics and machine learning, support the identification of patterns, the prediction of clinical trajectories, and the formulation of evidence-based decisions in both clinical and managerial domains (Basile et al., 2025). In this way, AI contributes to more accurate diagnostics, enhances decision-making, and optimises resource allocation across healthcare organisations.
However, as AI-driven solutions mature, there is a pressing need to systematically evaluate their contribution to value creation and capture and to integrate such evaluations into decision-making processes to inform responsible adoption decisions (Di Bidino et al., 2024).
Yet the integration of AI also raises critical questions that extend beyond technical implementation (Li et al., 2024). On one hand, AI-based solutions can strengthen the responsiveness of healthcare systems, bringing them closer to citizens’ needs by supporting both operational and strategic decision-making. On the other hand, their diffusion challenges organisational structures and workflows, creating the need for reconfiguration of services and professional roles. Moreover, ethical and legal concerns surrounding AI, such as transparency, accountability, and the management of algorithmic bias, cannot be ignored (Gama and Magistretti, 2025). Healthcare professionals remain ultimately responsible for patient care, and intelligent technologies must be designed as tools that augment rather than replace human expertise (Jussupow et al., 2021). This human-centred approach to AI adoption requires a cultural shift within healthcare organisations, where digital literacy, ethical awareness, and interdisciplinary collaboration become integral components of professional development. In this context, affordance theory provides a valuable interpretive lens to understand how digital platforms can either enable or constrain value creation and the mitigation of algorithmic bias. Technological affordances are not fixed properties of artefacts but emerge from the interaction between technical features, organisational practices, and user perceptions (Leonardi, 2011). Exploring these dynamics can inform the design of digital platforms that enhance collaboration, transparency, and ethical use of artificial intelligence. Embedding an affordance-based perspective into the design and governance of digital technologies can therefore foster the development of socio-technical ecosystems capable of generating positive impacts while minimising bias and digital inequalities.
This requires the establishment of clear protocols, robust governance mechanisms, and strong cybersecurity safeguards to ensure trust, legitimacy, and inclusiveness in the adoption of AI.
These governance challenges are amplified when considering the broader landscape of digital health innovations, a critical domain where significant tensions accompany the promises of innovation. Remote monitoring systems, telehealth platforms, wearable devices, and home-based care solutions are increasingly integrated into healthcare delivery, with the potential to extend access, ensure continuity of care, and optimise resource use (Basile et al., 2025; Antonacci et al., 2023; Kraus et al., 2021). However, despite years of experimentation and numerous pilot projects, the evidence on the safety, efficacy, and cost-effectiveness of these technologies remains incomplete, varying widely depending on the clinical context, technological application, and patient population. There is thus an urgent need for shared evaluation frameworks and cross-sectoral learning platforms capable of translating local digital health experiences into transferable knowledge for policy and practice.
Beyond clinical effectiveness, digital health solutions generate complex organisational challenges. Their integration can reshape workloads, alter professional responsibilities, and create new training demands for staff (Jeilani and Hussein, 2025). Furthermore, their sustainable adoption often depends on the redesign of organisational models that can incorporate digital tools into established practices and, where necessary, create new professional roles to bridge the technological and clinical domains. This reconfiguration of structures and competences is essential if digital innovations are to move beyond isolated projects and become embedded in the everyday functioning of healthcare systems. These changes highlight a broader trend: the reconfiguration of healthcare professional roles. This reconfiguration is driven by two interrelated dynamics. First, workforce shortages and resource constraints have accelerated the redistribution of activities across professional and non-professional boundaries, transferring specific caring, administrative, or educational tasks to non-clinicians and, in some cases, to patients and communities (Bergey et al., 2019; Leong et al., 2021). Second, AI systems fundamentally reshape what constitutes professional expertise by redistributing tasks between human and non-human agents, reconfiguring how expertise is coordinated and applied in clinical practice (Tyskbo & Sergeeva, 2022). This dual transformation, driven by both workforce pressures and technological capabilities, requires healthcare organisations to rethink how work is organised and how professional boundaries are defined.
The revision of professional roles requires a reorganisation of healthcare knowledge assets, which can be understood across four dimensions: human, relational, structural, and informational capital. Addressing these dimensions means not only redefining competencies and skills (human capital), but also reshaping interprofessional relationships (relational capital), particularly to support new forms of collaboration and coordination (Olive et al., 2024), redesigning processes and organisational culture (structural capital), and updating the IT infrastructures that underpin these new ways of working (informational capital).
Understanding how healthcare organisations integrate AI and digital technologies to generate sustainable value provides insights into the organisational, cultural, and strategic factors that facilitate or impede successful implementation (Pinelli et al., 2025).
These dynamics underscore the need to develop healthcare systems that are not only technologically advanced but also socially and ethically grounded. Intelligent knowledge practices must extend beyond efficiency and cost containment to contribute to broader objectives of sustainability and equity. This involves aligning healthcare innovation with the Sustainable Development Goals (SDGs), Responsible Innovation and ESG principles, ensuring that progress in digital health supports long-term resilience and social responsibility (Ghebreyesus et al., 2018; Koebe et al., 2025).
Healthcare systems thus operate at the intersection of complexity, disruption, and responsibility. They are laboratories where the challenges of balancing technological innovation with human expertise, efficiency with equity, and short-term pressures with long-term sustainability are being tested. The diffusion of AI and the expansion of digital technologies, the redefinition of professional roles, and the pursuit of the SDGs all converge toward a new paradigm of healthcare as a dynamic and intelligent learning system. Its future will not be defined solely by the sophistication of technologies but by the capacity to embed them within organisational cultures, ethical frameworks, and collaborative practices that sustain resilience, foster inclusiveness, and create value for both patients and society at large.
This track aims to stimulate this debate, welcoming empirical and conceptual contributions that critically address how AI, digital innovations, and knowledge management practices interact with social and organisational aspects within healthcare organisations. Contributions are encouraged that examine knowledge creation, sharing, and application processes, analysing how human expertise and AI-based capabilities can be integrated to advance sustainability objectives.
Contributions may address, but are not limited to, the following topics of interest:
- How can healthcare organisations effectively manage their knowledge assets when introducing AI-driven and innovative healthcare products, services, and technologies?
- How does the acquisition of knowledge through AI and digital solutions influence healthcare organisations’ behaviour and lead to new governance models across patient care, research, and policy development?
- How do organisational redesign, evolving professional roles, knowledge asset reconfiguration, and organisational strategies influence the successful and sustainable integration of AI and digital technologies in healthcare organisations?
- How can healthcare organisations, conceived as dynamic learning systems, revise traditional operations and decision-making strategies to integrate social, environmental, and ethical dimensions of value, guiding innovation investments that enhance long-term resilience and sustainability?
- How can knowledge ecosystems that integrate human expertise with AI capabilities be effectively developed and sustained in healthcare organisations?
- How can AI solutions and technologies be efficiently applied within healthcare organisations to drive innovation, process redesign, and sustainability in healthcare delivery?
- How can the design of digital platforms be strategically leveraged to shape human–technology interactions that foster inclusion, ethical awareness, and sustainable value creation within healthcare organisations?
- How can AI solutions be designed to support knowledge co-creation between healthcare providers, patients, and caregivers, fostering a more participatory healthcare model?
- What competences and skills are required to transform AI-driven knowledge into healthcare innovations while safeguarding human judgment, tacit expertise, and relational intelligence?
- What is the relationship between a healthcare organisation’s ability to translate knowledge into value creation and sustainability outcomes (e.g., SDG/ESG commitments), primarily through AI and digital innovation?