Scientific Integrity And Ethics


  • View profile for Danny Van Roijen

    🇪🇺 🇧🇪 EU Public Policy | Compliance | DPO | Keynote Speaker | Digital Technology | Healthcare

    9,846 followers

    A guide to sharing open healthcare data under the General Data Protection Regulation

    This paper, published in Scientific Data (Nature Portfolio), explores four successful open ICU healthcare databases to determine how open healthcare data can be shared appropriately in the EU. Based on the approaches of the databases, expert opinion, and literature research, four distinct approaches to openly sharing healthcare data are outlined, each with varying implications for data security, ease of use, sustainability, and implementability. The four approaches also have different implications for the different stakeholders: the patient, the user, and the data owner.

    Recommendations for sharing open healthcare data:
    1. Multidisciplinary team - Legal, ethical, clinical, technical, and economic experts are necessary
    2. External parties - To properly assess privacy concerns, external parties should be involved
    3. De-identification strategy - De-identify patient data using k-anonymisation
    4. Legal basis - Consent is not required; when pseudonymising, use another legal basis
    5. Transparency - Be transparent about what, how, and why data will be shared
    6. Commitment of hosting institution - The hosting institution should be able and willing to help resolve unforeseen obstacles
    7. Sharing portal - Data should be in the cloud and not downloadable; otherwise, strict governance is required

    de Kok, J.W.T.M., de la Hoz, M.Á.A., de Jong, Y. et al. A guide to sharing open healthcare data under the General Data Protection Regulation. Sci Data 10, 404 (2023). https://lnkd.in/ehhyRVDT
    Supplementary information: https://lnkd.in/eg9VXRjn

    #digitalhealth #healthdata #GDPR #pseudonymisation #anonymisation #ICU #intensivecare #cloud #datagovernance
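The k-anonymisation recommended above means that every combination of quasi-identifiers (age band, partial postcode, and so on) must be shared by at least k records, so no individual stands out. A minimal sketch of such a check, with illustrative field names and data that are not taken from the paper:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records (the k-anonymity property)."""
    combos = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(count >= k for count in combos.values())

# Hypothetical de-identified ICU records.
patients = [
    {"age_band": "60-69", "zip3": "902", "diagnosis": "sepsis"},
    {"age_band": "60-69", "zip3": "902", "diagnosis": "ARDS"},
    {"age_band": "70-79", "zip3": "913", "diagnosis": "sepsis"},
]

# The combination ("70-79", "913") occurs only once, so this table
# is not 2-anonymous; generalising or suppressing that record would fix it.
print(is_k_anonymous(patients, ["age_band", "zip3"], 2))  # False
```

In practice, tools achieve k-anonymity by generalising values (wider age bands, shorter postcode prefixes) or suppressing outlier records until the check passes.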

  • View profile for Jan Beger

    Global Head of AI Advocacy @ GE HealthCare

    85,440 followers

    This paper discusses the integration of human values into LLMs used in medical decision-making, highlighting the complexities and ethical considerations involved.
    1️⃣ Human values, reflecting individual and societal principles, inherently influence AI models from data selection through deployment.
    2️⃣ The incorporation of values in medical decision-making dates back to the 1950s, emphasizing the ongoing relevance of balancing probability and utility.
    3️⃣ Training data can both reveal and amplify societal biases, affecting AI outputs in medical applications like radiography and dermatology.
    4️⃣ A clinical example involving growth hormone treatment illustrates how values influence AI recommendations and decision-making among patients, doctors, and insurers.
    5️⃣ LLMs can be tuned to reflect specific human values through supervised fine-tuning and reinforcement learning from human feedback, but this raises questions about whose values are represented.
    6️⃣ Addressing value discrepancies, ensuring continuous retraining, and using population-level utility scores are vital for aligning AI with evolving human values.
    ✍🏻 Kun-Hsing Yu, Elizabeth Healey, Tze-Yun Leong, Isaac Kohane, Arjun Manrai. New England Journal of Medicine. May 30, 2024. DOI: 10.1056/NEJMra2214183

  • View profile for Dawid Hanak

    I help PhDs & Professors get more visibility for their research without sacrificing research time. Professor in Decarbonization supporting businesses in technical, environmental and economic analysis (TEA & LCA).

    54,305 followers

    Around 2% of academic papers submitted for peer review are fake (and the underground economy for authorship, citations, and editorial roles is growing...)

    Paper mills and fraudulent actors have industrialised the production of bogus research, exploiting the “publish or perish” culture. The result? A corrupt ecosystem that slows down lifesaving discoveries, particularly in medicine. Over 55,000 papers have been retracted to date; yet experts estimate that hundreds of thousands more remain undetected.

    And it's all down to funders and universities using journal metrics (h-index, citations) and the number of papers as evaluation criteria for postgraduate degrees, career advancement, grants, and so on. This erodes community trust, particularly in papers that may present breakthrough ideas and outputs.

    What can you do to improve the credibility of your research?
    - pre-register your work (OSF or Octopus)
    - share raw data and code
    - share preprints and any negative results

    As I engage with the community, I've been asked many times to join research as a co-author (for a fee...) or to accept a paper for a special issue (for a fee...) - this is a highly unethical practice. Needless to say, these papers were never published, and the appropriate editors were notified of this unethical behaviour.

    Have you had any experience with this, and how can we combat it?

    #science #scientist #research #publishing #academia #professor #phd #postdoc #postdoctoral

  • View profile for Brandeis Marshall, PhD, EMBA

    I help cross-functional teams execute their responsible AI and data strategies so they can create people-centered tech solutions | Leading DataedX Group™ + Black Women in Data

    11,815 followers

    We've seen some pretty troubling things happen in the tech world, especially in regard to marginalized communities. As a result, researchers are being pushed to think more deeply about ethics and how their work affects us as a society. There are a lot of new rules and guidelines popping up about how to do research responsibly, but we need to make sure these ethical considerations are built into research from the very beginning, not just tacked on all willy-nilly at the end.

    In response to this growing concern, several colleagues (Sara S., Yacine Jernite, Ben Marwick, Malvika Sharan, Kirstie Whitaker, Valentin Danchev) and I published a scholarly article titled 'Ten simple rules for building and maintaining a responsible data science workflow' in PLOS Computational Biology. This publication aims to help researchers do data science in a way that's both effective and ethical. It's open access, so feel free to read it yourself, pass it along to your colleagues, and/or add it as supplemental material in your classrooms.

    Click the link in the comments to check it out and have a great week, y'all! 👋🏾

  • View profile for Viktoria Cologna

    Group Leader at Eawag

    2,430 followers

    Our global study on the state of trust in scientists in 68 countries is now out in Nature Human Behaviour! 🥳

    Given narratives of a crisis of #trust in scientists, Niels G. Mede and I launched and led the Trust in Science and Science-Related Populism (TISP) Many Labs study to investigate trust in scientists around the world. With a consortium of 241 researchers at 179 institutions, we surveyed 71,922 individuals in 68 countries, providing the largest dataset on trust in scientists post-pandemic: https://lnkd.in/d9WsA5AN

    Here are some key findings:
    💡 Across 68 countries, trust in scientists is moderately high (mean trust = 3.62, on a scale from 1 = very low trust to 5 = very high trust), with strong differences across countries. Our study confirms and strengthens previous work that refutes the narrative of wide-ranging low trust in scientists.
    👐 Majorities perceive scientists to be qualified (78%), honest (57%), and concerned about people’s well-being (56%).
    🗣️ 83% of respondents believe that scientists should communicate about science with the general public.
    📣 49% believe that scientists should actively advocate for specific policies (23% disagree). 52% believe that scientists should be more involved in the policymaking process (22% disagree).
    ➡️ Check out the TISP app for country-specific results: https://lnkd.in/dVbYyYhD
    ➡️ For more information on the TISP Project: https://lnkd.in/dUtZBCfY
    ____
    On a more personal note: Leading the TISP Consortium has been an incredibly rewarding experience. I’m deeply grateful to everyone who contributed to making the TISP project a success—it was a truly collaborative effort. A special thanks goes to the study co-lead Niels G. Mede, whose dedication and kindness made all the difference.
I also want to express my heartfelt gratitude to the TISP Core Team—our advisory board of nine experts—who played a pivotal role in shaping the project's success, providing guidance on all sorts of matters, and always having an open ear: Sebastian Berger John C. Besley Cameron Brick Marina Joubert Ed Maibach Sabina Mihelj Oreskes Naomi Mike S. Schäfer Sander van der Linden The biggest thanks go out to the many co-authors for their trust (🥁 pun intended), patience, and support. This would not have been possible without all of you. Special thanks go to Oreskes Naomi for hosting me at Harvard University for two years (funded by the Swiss National Science Foundation SNSF). Leading such a big project on short-term postdoc contracts was professionally and personally challenging at times, and I am grateful to the SOCRATES CAS at University of Hanover, Mike S. Schäfer and his team at UZH Department of Communication and Media Research (IKMZ), David N. Bresch and the Weather and Climate Risks Group at ETH Zürich, and the Collegium Helveticum and its fellows, for their support in the final year of the project. 🙏

  • View profile for Vani Kola

    MD @ Kalaari Capital | I’m passionate and motivated to work with founders building long-term scalable businesses

    1,515,144 followers

    𝟵𝟬% 𝗼𝗳 𝗱𝗿𝘂𝗴𝘀 𝗳𝗮𝗶𝗹 𝗰𝗹𝗶𝗻𝗶𝗰𝗮𝗹 𝘁𝗿𝗶𝗮𝗹𝘀.

    Despite rigorous testing, the pharmaceutical industry continues to grapple with a high failure rate. It’s a decades-old problem that persists. The disconnect between animal models and human biology has led to inefficiencies and ethical concerns. It’s a moral tug-of-war that once seemed unresolvable: medical progress often came at the cost of animal welfare.

    More than 115 million animals are estimated to be used in drug testing globally each year. Shockingly, 95% of drugs shown to be safe and effective in animal tests fail in human trials. And nearly 99% of animals used in scientific experiments are not protected by federal animal welfare laws.

    𝗦𝗼 𝗵𝗼𝘄 𝗱𝗼 𝘄𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽 𝘀𝗮𝗳𝗲, 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲 𝗱𝗿𝘂𝗴𝘀 𝗳𝗼𝗿 𝗵𝘂𝗺𝗮𝗻𝘀 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗿𝗲𝗹𝘆𝗶𝗻𝗴 𝗵𝗲𝗮𝘃𝗶𝗹𝘆 𝗼𝗻 𝗮𝗻𝗶𝗺𝗮𝗹 𝘁𝗲𝘀𝘁𝗶𝗻𝗴?

    Scientists at Harvard University introduced a groundbreaking idea: creating replicas of human organs on tiny lab chips. Just as we’ve downsized from massive storage units to microchips in our devices, could we now miniaturize organs onto chips? I was thrilled to read about this development. I’ve often wondered about the moral cost of inducing disease in other living beings for the sake of our health.

    𝗢𝗿𝗴𝗮𝗻𝘀-𝗼𝗻-𝗰𝗵𝗶𝗽𝘀 (𝗢𝗼𝗖𝘀) are essentially tiny 3D cell cultures that act as a bridge between traditional animal testing and the complexities of human biology. OoC models consist of miniature tissue systems grown within microfluidic chips, lined with living human cells. These chips simulate human physiology, enabling drug development, disease modelling, and personalized medicine. With OoCs, researchers can create more accurate and efficient models for testing human drugs, reducing the likelihood of ineffective or harmful treatments. By mimicking a cell’s microenvironment on a chip, we can study genetic factors, explore new treatment avenues for complex conditions, and even address rare diseases with limited sample sizes.
    OoCs also enable biomaterial testing, helping evaluate the biocompatibility of materials used in medical devices. Recently, a firm called 𝗘𝗺𝘂𝗹𝗮𝘁𝗲 tested a Liver-on-a-Chip device with 27 drugs that had passed animal trials but were toxic to humans. The chip accurately flagged 87% of these harmful compounds.

    I'm truly excited about the integration of tissue engineering and microfabrication to advance our understanding of human biology ethically and effectively. I hope to see a future where research and commercial applications in this space grow rapidly, helping us build a more humane and progressive health tech ecosystem, one where millions of animals no longer have to suffer in the name of human progress.

    Watch this video by Harvard to learn more.

    #healthcare #technology #healthtech #innovation

  • View profile for Jan Geissler

    Founder and CEO Patvocates

    6,695 followers

    The recent European Court of Justice ruling (C-413/23 P, 4 September 2025) may quietly become one of the most influential data-governance decisions for health research in years, and a game changer for patient evidence.

    The Court clarified that not all pseudonymised information is automatically “personal data.” The decisive factor is whether the recipient of the data can realistically re-identify the individual. If not, such pseudonymised data might fall outside the full scope of GDPR — as long as governance, ethics and transparency remain robust, and as long as the data controllers – who still hold the key to re-identification – implement all safeguards, since for them the data remains personal and fully subject to GDPR.

    This “relative identifiability” test could profoundly reshape how we handle survey data, patient experience studies, PROs and preference research, or the retrospective use of clinical trial data.

    For those of us who have worked in data projects like the HARMONY Alliance Foundation, this decision feels like a validation of the de-facto anonymisation procedure we introduced with the involvement of patient organisations, a legal firm and an ethics committee in 2019. Our HARMONY approach - de-facto pseudonymisation, key separation, and controlled data access - anticipated precisely this logic: keeping data useful for science, yet safe and well governed, while making hundreds of thousands of patient datasets accessible to big data analytics without the need to re-consent.

    The ruling also raises important questions:
    • What counts as “realistically possible” re-identification remains context-dependent and will require careful assessment.
    • Data controllers must still comply with all GDPR obligations, including transparency about possible recipients and purposes.
    • The risk of re-identification can change over time as new data or technologies emerge.
    • Ethical considerations, public trust and the role of patient oversight remain crucial – particularly in sensitive domains such as rare diseases or genetic data.

    Research by Teodora Lalova-Spinks, to which we contributed last year, demonstrated that patient involvement in data governance increases patients’ willingness to agree to data sharing. As a bottom line, thoughtful data governance is paramount for data sharing and research use.

    This ruling does not open a free pass for data use – but it offers a clearer framework for sharing and analysing health data responsibly. We must design governance, transparency, and trust frameworks that ensure these new freedoms are used for patients, not at their expense. It will be important for patient organisations, researchers, ethics bodies and regulators to interpret and apply the ruling carefully in the coming months. https://lnkd.in/dSe65dDu

    What do you think Ernst Hafen, Teodora Lalova-Spinks?
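The key-separation idea behind pseudonymisation can be sketched in a few lines: identifiers are replaced by values derived with a secret key that only the data controller holds, so the recipient sees stable pseudonyms but cannot realistically reverse them. This is a minimal illustration of the general technique, not the HARMONY project's actual procedure; the function and key names are hypothetical:

```python
import hashlib
import hmac

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym via keyed hashing (HMAC-SHA256).
    The same id and key always yield the same pseudonym, so records
    can be linked across datasets, but without the key the mapping
    cannot realistically be reversed or recomputed."""
    digest = hmac.new(secret_key, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# The key stays with the data controller, separated from the shared data.
controller_key = b"kept-by-the-data-controller-only"
pseudonym = pseudonymise("patient-12345", controller_key)

# Recipients receive only pseudonyms; re-identification would require
# the key (or external auxiliary data), which is exactly what the
# "relative identifiability" question turns on.
```

Whether the recipient's copy still counts as personal data then depends on context: what auxiliary information exists, and whether re-identification by that recipient is realistically possible.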

  • View profile for Mark Maslin

    University Professor I Business strategist I Special Adviser I Public Speaker I Author I Environment and climate media expert

    23,558 followers

    𝐂𝐥𝐢𝐦𝐚𝐭𝐞 𝐜𝐡𝐚𝐧𝐠𝐞 𝐚𝐧𝐝 𝐡𝐞𝐚𝐥𝐭𝐡: 𝐞𝐦𝐛𝐞𝐝𝐝𝐢𝐧𝐠 𝐞𝐭𝐡𝐢𝐜𝐬 𝐢𝐧𝐭𝐨 𝐩𝐨𝐥𝐢𝐜𝐲 𝐚𝐧𝐝 𝐝𝐞𝐜𝐢𝐬𝐢𝐨𝐧 𝐦𝐚𝐤𝐢𝐧𝐠

    The Nuffield Council on Bioethics has produced a report exploring the importance of integrating ethics into measures that address anthropogenic climate change. I was one of a wide-ranging group of experts who helped inform the report through workshops and reviewed the final report and its recommendations.

    The report aims to inform policy development and decision-making in the UK, highlighting the need to address the intersections between #climatechange and #health. Climate change threatens the health of all species and the future of our planet. Urgent action is needed if we are to limit the adverse effects being experienced around the world. This report explores the importance of embedding ethics into policies that aim to address climate change. It is particularly relevant for those working in government departments and agencies focused on climate change mitigation and adaptation, as well as those in organisations whose remit intersects with climate change, health and the environment.

    The report makes two key recommendations to ensure that climate change measures are fair and effective:
    1. Policy and decision makers should recognise, consider and address the intersections between climate change and health when developing and implementing all climate measures.
    2. Ethics should be embedded in this process from the outset.

    UCL, UCL Department of Geography, UCL Grand Challenges

    #climatechange #netzero #socialjustice #climatejustice #justice #ethics #sustainability
    https://lnkd.in/eEZeaB8H

  • View profile for Volodymyr Semenyshyn

    President at SoftServe, PhD, Lecturer at MBA

    21,463 followers

    We've seen how biases in #AI can damage reputations and invite fines. However, a critical factor often overlooked is the cultural context of #ethics. What’s ethical in one region might not be in another, and AI regulations differ worldwide. Many AI ethics standards stem from Western perspectives, leading to bias in AI models. For instance, datasets like ImageNet underrepresent large global populations, which can result in skewed algorithms. Organizations must develop AI ethics frameworks that are globally informed yet locally adaptable. Engaging local teams can create dynamic policies that evolve with changing contexts. https://lnkd.in/d7HkbFhA #ArtificialIntelligence #AIethics

  • View profile for Dan Schell

    Chief Editor, Clinical Leader

    11,468 followers

    The Clinical Research Data Sharing Alliance (CRDSA) has released the following two new standards:

    CRDSA Std 1001: Standard for Sharing Clinical Study Data promotes data completeness, consistency, interoperability, and information transparency. These qualities are essential for the research community and, equally important, benefit data contributors by ensuring that their investment in data preparation time and resources will maximize research outcomes.

    CRDSA Std 2001: Standard for Secondary Analysis of Clinical Study Data aims to help researchers conduct robust analyses and objectively interpret the findings generated from the use of shared patient data. The standard encompasses the research process end-to-end, and its application will reduce the risk of inadvertent errors or bias that may lead to conclusions potentially detrimental to scientific understanding and patient care.

    Links for each are in the comments.

    #clinicaltrials #clinicaldata
