Risks of GenAI in Education

Summary

Generative AI (GenAI) in education offers powerful tools for personalized learning, but it also poses significant risks: dependency on automation, diminished critical thinking, and the erosion of human interaction in teaching. By understanding these risks, educators and policymakers can work to address the ethical and cognitive challenges of integrating AI into learning environments.

  • Promote critical thinking: Design curricula that emphasize problem-solving, creativity, and deep learning to prevent over-reliance on GenAI tools, which may lead to cognitive disengagement and reduced intellectual development.
  • Ensure ethical AI use: Advocate for stronger regulations and ethical guidelines on the use of GenAI, especially in protecting young and vulnerable learners from risks like misinformation, cyberbullying, and exploitation.
  • Invest in AI literacy: Equip students and educators with the skills needed to critically evaluate and responsibly engage with AI tools, while addressing potential biases and digital inequalities.
Summarized by AI based on LinkedIn member posts
  • Cristóbal Cobo

    Senior Education and Technology Policy Expert at International Organization

    37,678 followers

    This manifesto is a thought-provoking piece of work. 🤔 Some of the ideas explored here are not necessarily mainstream. 💡

    The Manifesto for Teaching and Learning argues that Generative AI (GenAI) poses a significant risk of displacing teachers by automating grading, assessments, lesson planning, and tutoring, thereby reducing reliance on human educators. 🏫📉 AI-powered adaptive learning platforms and chatbots can provide instant, personalized instruction, potentially replacing traditional teaching roles, especially in online education. 🖥️🤖 As institutions seek cost-cutting measures, AI-driven automation may lead to workforce reductions, particularly among adjunct faculty and teaching assistants. ⚠️ Additionally, the growing preference for AI-generated content over human instruction could devalue teachers' pedagogical expertise, shifting their role from knowledge facilitators to mere supervisors. 😕📚 The standardization of AI-led education further threatens personalized, human-centered teaching, diminishing mentorship, emotional intelligence, and deep learning engagement. 💔👩‍🏫

    Three Key Highlights from This Work

    1. GenAI as a Non-Neutral Force in Education 🏛️⚖️

    The paper argues that Generative AI is not a neutral tool. ⚠️ This idea challenges the widely accepted notion of AI as an unbiased, objective technology. The manifesto suggests that GenAI reflects and amplifies societal biases, reinforcing existing inequalities and marginalizing diverse voices. 🌍✊ This calls for a much more critical and conscious approach to how AI is designed, developed, and integrated into education systems, especially considering its potential to perpetuate injustice rather than drive progress. 🔄📢

    2. The Dehumanizing Impact of Over-Reliance on GenAI 💔🤖

    A radical claim in the paper is the potential erosion of essential human qualities in education, such as empathy, creativity, and emotional intelligence, if GenAI is overused. 🎭🧠 The authors argue that AI's growing role in education might replace or diminish the human-to-human connections that are crucial for student development, such as mentorship, emotional support, and nuanced understanding. 👩‍🏫❤️ This critique questions the broader societal trend toward technologizing personal and relational experiences. 📲😔

    3. A Call to Reframe Educational Purpose and Integrity 🔄📖

    The paper challenges the current approach to academic integrity in the age of GenAI, proposing that we need to rethink what we mean by "authentic learning." 🧐 It warns that GenAI could lead to superficial learning and undermine critical thinking if not carefully managed. 🚨 The manifesto calls for a shift in educational practices, from output-based assessments (where the focus is on correct answers) ✅ to a more process-oriented model that fosters critical thinking, creativity, and metacognition. 🏆📚

    Source: https://lnkd.in/ejzYAGQh

  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,793 followers

    "Children are intensive users of digital tools such as artificial intelligence (AI). Generative AI – AI that can create new content such as text, images, videos and music – is becoming increasingly sophisticated,making it difficult to distinguish user-generated content from AI-generated (synthetic) content. If not supervised properly, these tools might carry risks for children, whose cognitive capacities are still developing. The following are some key challenges associated with generative AI. Synthetic reality Children are particularly vulnerable to synthetic content such as deepfakes, and because of their still-developing cognitive abilities, can be manipulated more easily. A Massachusetts Institute of Technology Media Lab study has shown that 7-year-olds tend to attribute real feelings and personality to AI agents. Generative AI may also be used for malicious purposes towards children, including cyberbullying or online grooming. The increase in AI-generated online child sexual abuse is already a growing challenge for law enforcement. Reduced critical thinking Significant concerns focus on the potential consequences of AI-assisted learning for students' research, writing and argumentation skills, as generative AI's capability in data analysis and automation could reduce students' cognitive skills, in particular their critical thinking and problem solving. However, some research advocates integrating AI into learning tools to enhance critical thinking and problem solving, as this would help students develop the analytical skills needed for a technology-driven future. Digital divides and AI literacy According to UNESCO, AI literacy entails the skills and knowledge required for effective use of AI tools in everyday life, with an awareness of the risks and opportunities associated with them. Incorporating AI literacy is therefore essential for building foundational understanding and skills to bridge the digital divide and foster inclusion. Despite the pivotal role of learning development, AI literacy is still more commonly implemented at secondary schools and universities than it is at primary schools. From a gender perspective,the Organisation for Economic Co-operation and Development (OECD) highlights that AI may exacerbate gender disparities if gender equality issues are not addressed adequately whentheAI tools are trained. Moreover, AI tools are mainly trained on the world's three most spoken languages (Chinese, English and Spanish), thereby making AI less safe for people who speak low-resource languages (those for which limited linguistic data for training AI models are available), since AI tools are less precise in those languages. Educational stakeholders will likely have a key role to play in tackling these concerns by preparing teachers for an ethical use of AI and adapting curricula." By the European Parliament

  • Nate Hagens

    Educator, systems thinker, partner and alliance builder for the future of a living Earth and human culture

    23,835 followers

    While most industries are embracing artificial intelligence, citing profit and efficiency, the tech industry is pushing AI into education under the guise of 'inevitability'. But the focus on its potential benefits for academia eclipses the pressing (and often invisible) risks that AI poses to children, including the decline of critical thinking, the inability to connect with other humans, and even addiction. With the use of AI becoming more ubiquitous by the day, we must ask ourselves: can our education systems adequately protect children from the potential harms of AI?

    In this episode, I'm joined once again by philosopher of education Zak Stein to delve into the far-reaching implications of technology, especially artificial intelligence, for the future of education. Together, we examine the risks of over-reliance on AI for the development of young minds, as well as the broader impact on society and some of the biggest existential risks. Zak explores the ethical challenges of adopting AI into educational systems, emphasizing the enduring value of traditional skills and the need for a balanced approach to integrating technology with human values (not just the values of tech companies).

    What steps are available to us today, from interface design to regulation of access, to limit the negative effects of Artificial Intelligence on children? How can parents and educators keep alive the pillars of independent thinking and foundational learning as AI threatens them? Ultimately, is there a world where Artificial Intelligence could become a tool to amplify human connection and socialization, or might it replace them entirely?

    Watch/listen: https://lnkd.in/dfjdiV39

  • Amanda Bickerstaff

    Educator | AI for Education Founder | Keynote | Researcher | LinkedIn Top Voice in Education

    78,114 followers

    In a district-wide training I ran this summer, a school leader told me the story of her neurodivergent 16-year-old daughter, who was chatting with her Character AI best friend for an average of 6 hours a day. The school leader was clearly conflicted. Her daughter had trouble connecting with her peers, but her increasing over-reliance on a GenAI chatbot clearly had the potential to harm her. From that day on, we have encouraged those attending our trainings to learn more about the tool and to start having discussions with their students.

    So today, after giving a keynote on another AI risk, deepfakes, I was shocked to read the NYTimes article on the suicide of Sewell Setzer III. Sewell, a neurodivergent 14-year-old, had an intimate relationship with a Game of Thrones-themed AI girlfriend with whom he had discussed suicide.

    This should be an enormous warning sign to us all about the potential dangers of AI chatbots like Character AI (the third most popular chatbot after ChatGPT and Gemini). This tool allows users as young as 13 to interact with more than 18 million avatars without parental permission. Character AI also has little to no safeguards in place for harmful and sexual content, no warnings in place for data privacy, and no flags for those at risk of self-harm.

    We cannot wait for the tech community to commit to stronger safeguards for GenAI tools, stronger regulations on chatbots for minors, and student-facing AI literacy programs that go beyond ethical use. These safeguards are especially important in the context of the current mental health and isolation crisis amongst young people, which makes these tools very attractive.

    Link to the article in the comments. #GenAI #AIliteracy #AIethics #safety

  • Jason Gulya

    Exploring the Connections Between GenAI, Alternative Assessment, and Process-Minded Teaching | Professor of English and Communications at Berkeley College | Keynote Speaker | Mentor for AAC&U’s AI Institute

    39,583 followers

    I don't talk about "risks" of AI in my presentations anymore. I talk about harm.

    Because let's be honest, AI is harming our college students right now. Today. At this very moment. At this very second.

    I know because they tell me.

    -------------

    Here are the top 3 harms that my students mention:

    1️⃣ Decreased motivation
    → My students wonder: "What's the point of me doing this if AI can do it?"

    2️⃣ Stress
    → My students have heard "No one knows what's going to happen" again and again, for years.
    → They're being told that jobs may just vanish.
    → That is a tough spot to be in.

    3️⃣ The Devaluing of Human Relationships
    → My students mention this a lot!
    → They're feeling the pull, wondering whether they should interact more with bots or with the people around them.

    -------------

    Don't get me wrong. AI has a lot of potential for creating personalized experiences, encouraging critical thinking, and forcing us to rethink how we teach.

    But we need the full picture, as much of it as we can see. And that picture doesn't just include the benefits or the future risks to avoid. It includes the right now.

    Because let's be honest... many of our students have had GenAI at their fingertips for almost 2 years now. We have a lot of data for understanding how it is affecting them (and us). We just need to look closely.

  • Stephen Klein

    Founder & CEO, Curiouser.AI | Berkeley Instructor | Building Values-Based, Human-Centered AI | LinkedIn Top Voice in AI

    67,538 followers

    At this year's UCLA graduation, a student stepped onto the stage, opened his laptop, and proudly displayed ChatGPT on the screen. The caption read: "Thanks for getting me through college." The moment went viral.

    Most people saw a joke. It made me ill, mostly because so many people think it's funny. And so many others, even here on LinkedIn, are making money pushing GenAI into schools and onto kids. This idea that we can take a technology we know so little about and turn our kids over to it for profit amazes me. And Google is even pushing it onto 13-year-olds (parents have to opt out, not in).

    But maybe what we should see is a mirror. And what's showing up isn't just about one student or one tool. It's about how we define intelligence, learning, and integrity in a world where the boundaries between human and machine are blurring.

    Here's what we now know from peer-reviewed studies and large-scale surveys:

    92% of students in the UK now use GenAI tools in their education.¹
    MIT research shows that over-reliance on tools like ChatGPT leads to cognitive disengagement, poorer memory retention, and lower writing quality.²
    Academic misconduct involving AI is up 5x year-over-year in some countries.³
    ChatGPT users tend to score lower on exams and report increased procrastination.⁴
    Detection tools are flawed, leading to bias and false accusations, especially against non-native speakers.⁵

    What If We're Asking the Wrong Question?

    The right question isn't "Should students use AI?" Because the truth is: there are advantages to using tools like ChatGPT. When used well, they can:

    Accelerate research
    Break through creative blocks
    Translate complex concepts
    Help students learn by example
    Act as sparring partners for ideas

    The Ethical Line Isn't Just Technical. It's Personal.

    Because the goal of education is not just to graduate. It's to grow. Learning and growing require struggling. If we lose that, no AI tool will be able to give it back.

    This isn't just an academic concern. It's a preview of what's coming for every workplace, every leader, and every institution. The real disruption isn't that machines can write. It's that humans may forget why we write at all.

    ********************************************************************************

    The trick with technology is to avoid spreading darkness at the speed of light.

    I'm the Founder & CEO of Curiouser.AI, a Generative AI platform and strategic advisory focused on elevating organizations and augmenting human intelligence through strategic coaching and values-based leadership. I also teach Marketing and AI Ethics at UC Berkeley.

    If you're a CEO or board member committed to building a stronger, values-driven organization in the age of AI, reach out; we'd welcome the conversation. Visit curiouser.ai, DM me, or connect on Hubble: https://lnkd.in/gphSPv_e

  • Ellen Desmarais

    Chief Executive | Board Member | Double Impact Executive

    3,101 followers

    Take it from neuroscientist Jared Cooney Horvath, PhD, MEd: we risk losing our ability to develop higher-order thinking skills if we turn all knowledge accumulation over to GenAI. The "boring" learning work matters for developing critical reasoning. GenAI use comes with real tradeoffs. Writing in Harvard Business Review, he challenges the current view that GenAI can take over for educators, drawing on fascinating research and insights into how our brains actually work and how we as humans learn. https://lnkd.in/ePwRXHEm
