Community insights on AI: reflections from the ELIXIR-UK All Hands 2025

8 December 2025

A summary of panel discussions and World Café conversations exploring how AI is shaping research, skills and infrastructure across the UK

The purpose of this report is to summarise the discussions that took place during the AI panel and World Café sessions at the ELIXIR-UK 2025 All Hands meeting.

The conversations captured here reflect the community’s current thinking, questions and concerns around AI in research. They are intended as background material to inform the subsequent phases of ELIXIR-UK’s AI work, including future consultations and strategy development.

The meeting was held on 23–24 October 2025 at Sandy Park in Exeter. The two-day event brought the community together to showcase progress and define future strategy. The agenda of day 2 focused on artificial intelligence (AI), with two dedicated sessions: an ELIXIR-UK AI panel sponsored by Basecamp Research, followed by a series of in-depth World Café roundtables to address key issues and develop a strategy for ELIXIR-UK in the rapidly evolving AI landscape.

This report reflects the views, experiences and priorities shared by participants on the day; it does not represent formal ELIXIR-UK positions or agreed recommendations.

AI in practice: reality, assumptions and reflections

Day 2 began with a lively session, “AI in Practice: Reality, Assumptions and Reflections”, moderated by Jamie Harrison (University of Exeter). The panel – Gavin Farrell (University of Limerick), Carla Greco (Basecamp Research), Dipali Singh (Quadram Institute Bioscience & AI BIO), and Mohab Helmy (University of Cambridge) – addressed “confessions” from the audience that revealed immediate concerns about the use of generative AI.

Key concerns included:

  • The erosion of critical thinking
  • A widening skills gap as students over-trust AI outputs
  • Fears of intellectual property leakage
  • Risks of sensitive data exposure through public-facing AI systems

The conversation quickly moved beyond user-level concerns into deeper issues around infrastructure, sustainability and validation. Panellists emphasised that while AI is powerful for detecting patterns (e.g., in DNA), any “black box” outputs require rigorous validation. Institutions also face the logistical overhead of building AI capability in-house, along with its financial and environmental costs.

The panel also discussed the so-called “API Apocalypse”: the growing risk that uncontrolled AI-driven data scraping will overwhelm public APIs. This could destabilise – “kill the goose”, as the panel put it – the open-access resources that ELIXIR provides and that the community maintains and depends on.
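
To make one common mitigation concrete, here is a minimal sketch of per-client token-bucket rate limiting for a public API. This is an illustrative example only – no specific mitigation was proposed on the day, and the client identifier and limits are hypothetical.

    import time
    from collections import defaultdict

    class TokenBucket:
        """Per-client token bucket: sustains `rate` requests per second,
        allowing short bursts of up to `capacity` requests."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate          # tokens refilled per second
            self.capacity = capacity  # maximum burst size
            self.tokens = defaultdict(lambda: capacity)
            self.last_seen = {}

        def allow(self, client_id: str) -> bool:
            now = time.monotonic()
            elapsed = now - self.last_seen.get(client_id, now)
            self.last_seen[client_id] = now
            # Refill in proportion to time elapsed, capped at capacity.
            self.tokens[client_id] = min(
                self.capacity, self.tokens[client_id] + elapsed * self.rate
            )
            if self.tokens[client_id] >= 1:
                self.tokens[client_id] -= 1
                return True
            return False  # budget exhausted; the API would answer HTTP 429

    # Hypothetical policy: 5 requests/second sustained, bursts of up to 20.
    limiter = TokenBucket(rate=5, capacity=20)
    if not limiter.allow("203.0.113.7"):
        print("429 Too Many Requests")

Scrapers that identify themselves and stay within such budgets can coexist with interactive users; the traffic the panel worried about does neither.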

A “fact or fiction” segment of the session debunked common AI myths and hype. The panel was unanimous that:

  • AI is not ready for wide clinical application in patient care and research; the necessary training, accountability, validation frameworks and regulatory oversight are not yet in place.
  • AI will not “replace” scientists, but it will change how they work.
  • The critical skill will be knowing how to use (and debug) AI tools and their outputs.

The key motto of the discussion was:
“You should not code [with gen AI] more than you are able to debug [with your own technical skills/knowledge].”

The group observed that this applies broadly: where users cannot evaluate a model’s outputs themselves, there should be mechanisms for independent verification to ensure the results are reliable.

Some societies and higher education institutions have their own guidelines, but many lack formal guidance. A minimum standard for ethical and responsible use would benefit all. 

The community strongly supported developing concrete, shared guidelines for ethical and responsible AI use, including disclosure of model versions, prompts and curation processes.

The session concluded with a strong call from the ELIXIR-UK community for ELIXIR Europe’s support in spearheading the development of responsible, explainable AI training and for the community to come together to establish best practices.

Alongside concerns, the group recognised meaningful opportunities: AI can accelerate research, reduce administrative burden and increase productivity – when used appropriately.

World Café discussions from our community

Following the panel, a dynamic World Café session invited attendees to rotate through discussions at six themed tables. Participants moved between topics in three rounds, enabling cross-pollination of ideas. These distilled community priorities are intended to inform ELIXIR-UK’s future AI strategies.

Across all tables, several cross-cutting themes emerged: 

  • The need for transparency and trust in AI-generated outputs
  • The importance of responsible governance and clear guidance
  • Concerns around the sustainability of data and compute infrastructure
  • Training that builds critical judgement rather than just technical skills and shortcuts

Below are the summaries of each table.

Munazah Andrabi at the ELIXIR-UK All Hands meeting in Exeter, 2025

Bias in AI models

Discussions highlighted that bias originates from multiple sources, including training data, algorithms and societal blind spots. The group noted that while bias can be harmful (e.g. in clinical decision making, where a model trained on a specific demographic or setting may perform poorly when applied to a different one), it can also be contextual (e.g. local models can perform well within the settings they are designed to serve, even if they are not broadly generalisable).

Key outcomes included the need for data transparency, AI literacy, more training on responsible AI use (not just technical skills), and inclusive governance to mitigate systemic bias. 

Another key recommendation was to introduce periodic reviews of models and tools – an “MOT”-like system – to ensure they remain fit for purpose.
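
As a loose illustration of what an “MOT”-style check might involve, the sketch below re-scores a model on a fresh evaluation set and flags it when performance drifts below the level recorded at approval. All names and the threshold are hypothetical, and the `score` convention is borrowed from scikit-learn-style estimators.

    # Hypothetical periodic "MOT" check for a deployed model.
    def mot_check(model, fresh_X, fresh_y,
                  approved_score: float, tolerance: float = 0.05) -> bool:
        """Return True if the model still performs within tolerance of the
        score it achieved when originally approved."""
        current = model.score(fresh_X, fresh_y)  # e.g. accuracy or R^2
        passed = current >= approved_score - tolerance
        if not passed:
            print(f"Review needed: score fell from {approved_score:.2f} "
                  f"to {current:.2f}")
        return passed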

How will careers change in the next 10 years? 

The consensus was that AI will change the nature of work rather than simply replace jobs. Routine tasks will become automated, which should, in theory, free people to focus on higher-level thinking. This would naturally increase the value of human-centric skills such as emotional intelligence, leadership and strategic decision-making. However, the group also raised concerns about whether these skills will continue to develop in younger generations who are growing up relying heavily on generative AI during their formative years.

Concerns discussed included the erosion of skills such as patience and creativity, and the risk that outputs become increasingly homogeneous. A key observation was the potential loss of critical thinking, which is essential for interpreting and evaluating AI-generated outputs.

Participants noted that, without guidance on appropriate use, awareness of risks, or grounding in best practice, new generations growing up with constant access to generative AI may not develop the depth of critical thinking needed to distinguish reliable information from AI “hallucinations” or misinformation. This point was strongly reinforced by attendees involved in higher education teaching.

The group agreed that this area requires urgent investment – not only in technical skill development, but also in targeted outreach programmes to support the development and reinforcement of foundational analytical skills.

Participants highlighted another concern: as AI streamlines tasks such as preparing grant submissions, the volume of applications is likely to rise, placing additional strain on review systems. This will require institutions to consider how AI could support review workflows. However, they also noted that using AI to assess AI-generated content may amplify existing model biases and create further challenges.

Skills needed for AI in research 

The group focused on the need for conceptual understanding over deep mathematical expertise. The most crucial skills identified were critical thinking, intuition to assess AI outputs, and the ability to spot “hallucinations”. Many of the discussions resonated with those highlighted in the session on career changes in the next 10 years.

Participants stressed the morning’s motto: researchers must “code with LLMs only as much as they can debug”. In the hands of users with the critical skills to assess them, these models are powerful tools for automating work; for those who generate outputs without the wherewithal to inspect the results carefully, they are a recipe for disaster.

The need for community-level skills to judge AI-generated results was once again flagged as essential.

Data access and how models are built 

This group discussed the technical and ethical hurdles to building models. Key barriers include limited GPU resources, the difficulty of extracting clean, machine-readable text from PDFs, and the need for improved data curation.
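
As a small illustration of the PDF barrier, the sketch below extracts plain text with the open-source pypdf library; the filename is hypothetical, and the frequently garbled output is precisely the curation problem the table described.

    # Illustrative only: plain-text extraction from a PDF with pypdf.
    from pypdf import PdfReader

    reader = PdfReader("example_paper.pdf")  # hypothetical filename
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    # Layout, tables, maths and figure context are often lost or mangled,
    # so the output still needs substantial human curation.
    print(text[:500])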

Participants highlighted significant ethical considerations, including the need to protect sensitive data, to obtain consent from data producers, to ensure they receive recognition, and to balance open access with preventing harmful data scraping.

How to ensure AI-generated content is properly referenced 

Participants explored the need for transparent disclosure when using AI in research. The group discussed a “traffic light” system to indicate the level of AI contribution, rather than a simple binary disclosure. 
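
To picture how such a system might work in practice, below is a hypothetical sketch of a machine-readable disclosure record; the field names and traffic-light levels are invented for illustration, as no schema was agreed at the meeting.

    from dataclasses import dataclass

    @dataclass
    class AIDisclosure:
        """Hypothetical AI-use disclosure record; every field is illustrative."""
        level: str              # "green" (minimal), "amber" (assisted), "red" (substantial)
        model: str              # model name and version used
        purpose: str            # what the AI contributed
        prompts_archived: bool  # whether prompts were retained for audit
        human_validated: bool   # author checked and owns the final content

    disclosure = AIDisclosure(
        level="amber",
        model="example-llm-v2",  # hypothetical identifier
        purpose="first-draft language editing of the methods section",
        prompts_archived=True,
        human_validated=True,
    )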

Key challenges include the rise of AI-generated “fake references” and the difficulty of detecting AI-written text. The consensus was that the author must remain responsible for validating and owning the final content, and for documenting how it was produced in a transparent manner.

Use of AI for increasing productivity 

This discussion concluded that productivity gains are nuanced and depend heavily on the user’s existing expertise. While AI is useful for overcoming the “blank page” and saving time on routine tasks, it can also lead to a loss of detail, creativity and quality. 

The group’s key insight was the need to train for “judgement” (knowing when AI is “good enough”) rather than just for speed, advocating for a “responsible sipping” of AI rather than “getting drunk” on it.

Summary and next steps

The themes and insights shared in this report reflect the perspectives of the ELIXIR-UK community and are presented to support ongoing conversation. They are not formal recommendations, nor do they represent an agreed position of ELIXIR-UK.

The insights from the panel and roundtables provided a clear springboard for developing ELIXIR-UK’s AI strategy. A dominant theme was the urgent need to go beyond technical upskilling and address the complex socio-ethical challenges of AI, including responsible use, accountability and algorithmic bias. These community-driven priorities demonstrate that the research infrastructure must evolve not only to provide AI tools but also to champion the training, standards and governance needed to use them safely.

As a key stakeholder in ELIXIR Europe, ELIXIR-UK will feed these findings into the development of a unified ELIXIR AI strategy at the European level, via venues such as the newly formed ELIXIR AI Ecosystem Focus Group. ELIXIR-UK members interested in contributing to this European-level work can join the mailing list linked at the bottom of the group’s webpage.

Eva Caamaño-Gutierrez at the ELIXIR-UK All Hands meeting in Exeter, 2025

Authors

  • Gavin Farrell (University of Padova and University of Limerick)
  • Phil Reed (University of Manchester)
  • Xenia Perez Sitja (Earlham Institute)
  • Ariadna Miquel Clopés (Earlham Institute)
  • Carole Goble (University of Manchester)
  • Carla Greco (Basecamp Research)
  • Dipali Singh (Quadram Institute Bioscience & AI in the Biosciences Network)
  • Mohab Helmy (University of Cambridge)
  • Jamie Harrison (University of Exeter)
  • Robert Andrews (Cardiff University)
  • Eva Caamaño-Gutierrez (University of Liverpool)
  • Craig Willis (University of Exeter)

Farrell, G., Reed, P., Pérez Sitjà, X., Miquel-Clopés, A., Goble, C., Greco, C., Singh, D., Helmy, M., Harrison, J., Andrews, R., Caamaño-Gutierrez, E., & Willis, C. (2025). Community insights on AI: reflections from the ELIXIR-UK All Hands 2025. 2025 ELIXIR-UK All Hands, Exeter, UK. Zenodo.