SAPEA panel discussion at MTA200: “Upholding Integrity in Scientific Advice: Key Principles and Challenges”

A wide-ranging discussion on academic freedom, AI in science policy, integrity and transparency, and working with policymakers took place at the Hungarian Academy of Sciences (MTA) during its 200th anniversary celebration and international conference in Budapest on 4 November. The day’s workshop was organised by Science Advice for Policy by European Academies (SAPEA), together with All European Academies (ALLEA) and MTA.

24 November 2025

The panel was moderated by Maura Hiney, Adjunct Professor of Research at University College Dublin. The panellists were Ana Marusic, Professor at the University of Split School of Medicine; Peter Lund, Professor Emeritus at Aalto University; Markus Prutsch, Associate Professor at Heidelberg University; and David Budtz Pedersen, Professor of Science Communication and Impact Studies at Aalborg University.

Opening the session, Professor Ana Marusic outlined the mounting pressures on publishers and researchers to uphold integrity. Editors, she said, are both “gatekeepers” and guardians of the scientific record, now confronted by an unprecedented wave of challenges linked to AI and commercial manipulation. She drew attention to the rise of paper mills – industrial-scale operations that churn out low-quality papers and unreliable data, and sometimes even sell authorship slots. “Everything is for sale,” she warned, describing what she called a black market in publishing that mirrors the illicit trade in drugs or arms. The financial scale of these networks, she noted, has become alarming, with major publishers forced to retract thousands of articles.

Ana Marusic Photo: mta.hu

To counter these developments, the publishing community is working on trust markers – tools designed to verify the identities of authors, reviewers and institutions, and to build transparency throughout the research process. Even widely used identifiers such as ORCID are no longer sufficient, Professor Marusic cautioned, as they can easily be falsified.

Generative AI has added further complexity. In early 2023, a handful of scientific papers even listed ChatGPT as a co-author. She explained that such attributions were swiftly withdrawn, as “generative AI cannot be an author”: it lacks the legal and ethical capacity to sign statements, transfer copyright or approve licences.

Professor Marusic noted that publishers and research organisations are now developing clearer standards for how AI may be used. Minor language editing, such as with Grammarly, need not be declared; but where AI contributes to analysis or interpretation, it must be documented and supervised by human researchers. Editors are also reminding peer reviewers that confidential manuscripts must never be uploaded into public AI systems, as this would breach intellectual property and confidentiality.

Despite these efforts, public trust remains fragile. She cited research suggesting that when scientists disclose their use of AI, readers may view their work with greater scepticism than if they had said nothing. “We are in a very fluid state,” she observed. “But transparency and integrity must guide how we move forward.”

Professor Peter Lund warned that AI cannot replace human judgement, creativity or ethical reflection. While algorithms can support data-driven tasks, he argued that they lack “ethical judgement, democratic values” and an understanding of human dignity. He cautioned that AI “can lie” and could mislead people into making poor decisions based on inaccurate information.

Peter Lund Photo: mta.hu

Professor Lund saw particular risks in using AI for complex, value-laden areas such as science policy. In engineering, where data and problems are well constrained, AI may perform well, but in policy contexts he urged far greater caution. “We are at the very beginning,” he said. “We are playing with fire with AI.”

Professor David Budtz Pedersen expanded on this theme, emphasising the uncertainty introduced when AI systems are applied to science advice. “Basically, we cannot ensure integrity,” he said. “The moment you start introducing these technologies you will start introducing a lot of uncertainty.” He linked this to science’s ongoing reproducibility crisis: outputs from large language models, he explained, cannot be replicated in the same way as traditional experiments. “You won’t get the same answer twice,” he said – a property that, in his view, undermines a cornerstone of scientific credibility.

David Budtz Pedersen Photo: mta.hu

Professor Pedersen distinguished between predictive AI – which already provides tangible benefits in areas such as healthcare and energy systems – and generative AI, which creates new text or content. The latter, he warned, should not be relied on for policy advice. He recalled that his research group’s 2023 paper in Nature, “AI Tools as Science Policy Advisers?”, prompted debate within both the European Parliament and the U.S. Congress. The paper argued that governments should build their own in-house AI capacities rather than depend on commercial systems, to preserve transparency and democratic oversight.

Adding to the discussion, Professor Markus Prutsch underlined that AI’s appeal often lies in an illusion of efficiency.

Markus Prutsch Photo: mta.hu

Having instant access to information gives users a sense of mastery, he said, but this can erode appreciation for how science actually functions – through rigorous evidence gathering and peer review. “AI can support and complement,” he said, “but it cannot replace scientific evidence in the policymaking context.” He further cautioned against the temptation, even within public administration, to prioritise speed and cost savings over quality of evidence.

The panel then turned to the role of policymakers in ensuring integrity and using evidence responsibly. Professor Pedersen described policymakers as the “lead users” of scientific information, noting that the effectiveness of evidence depends not only on how it is supplied, but also on whether it is absorbed and acted upon. He observed that policy processes have become increasingly compressed, leaving little time for nuanced consultation. In some cases, science is even perceived as an obstacle to rapid legislative action. To address this, he called for closer collaboration between researchers and policymakers, shared formats for advice, and mutual understanding of time constraints and expectations.

Professor Pedersen added that policymaking is often neither fully informed by science nor open to it. Beyond issues of timing, he said, lies a broader crisis of trust. In an era of misinformation and “alternative truths”, policymakers may question not only the evidence, but the motives of the experts themselves. He argued that science should not present itself as the ultimate truth, but as the best available knowledge at a given moment – knowledge that must be open to revision.

Photo: mta.hu

Professor Marusic offered a perspective from medicine, where scientific knowledge is known to evolve quickly. “The half-life of medical knowledge is about five years,” she said. “Half of what we know now will be obsolete in five years.” For this reason, she stressed the importance of continual engagement between academia and policymakers, and the ability to conduct rapid reviews during emergencies, as was done during the COVID-19 pandemic.

In the final segment, the panellists turned to the question of academic freedom and the growing pressures on scientists worldwide. Professor Prutsch called restrictions on academic freedom “one of the most fundamental threats to science-informed policymaking.” Even in long-established democracies, he warned, scholars are facing political, legal or financial constraints that undermine independence.

Professor Lund noted that many citizens and decision-makers lack a clear understanding of what academic freedom entails. He argued that scientists must take a more active role in explaining its value to society and policymakers. He also pointed to the need for diverse funding sources, including partnerships with regional banks, non-profits and industry, to strengthen institutional autonomy.

Professor Marusic reminded the audience that the scientific community must also take responsibility for maintaining its own integrity. During the pandemic, she said, politics sometimes entered the scientific process, and policies were shaped by ideology rather than evidence. “We allowed evidence to become politicised,” she reflected.

Professor Prutsch highlighted that policymakers are less concerned with credentials and degrees than with whether those working in science policy have “contextual competencies” – “the competencies to actually contextualise your research into the policymaking process.”