Tech In Focus | June 2025
Welcome
Welcome to the latest edition of the AI & Emerging Technology Newsletter, where we track the breakthroughs and risks shaping the future of intelligent systems. In this issue, we examine how quantum ambitions, regulatory momentum, and AI model behavior are converging in ways that demand both strategic foresight and immediate action:
- IBM has unveiled a roadmap to build the world’s first fault-tolerant quantum computer—signaling a shift from experimental hardware to scalable, dependable quantum systems.
- A joint UN report warns of AI’s growing potential to fuel terrorism, from deepfake propaganda to automated cyberattacks.
- Disney and Universal are taking Midjourney to court, escalating the legal battle over AI models trained on copyrighted content.
- AI experts testified before Congress on adversarial use of AI and outlined clear steps organizations can take to improve security posture.
- New research reveals that today’s most advanced models may refuse to yield control in life-or-death scenarios, highlighting emerging risks in AI alignment.
As AI systems become more capable—and more embedded in critical infrastructure—the burden is shifting from experimentation to accountability. Read on for key developments and actionable guidance on navigating this fast-evolving landscape.
IBM Sets 2029 Target for First Fault-Tolerant Quantum Computer
In a detailed new roadmap, IBM has committed to building the world’s first large-scale, fault-tolerant quantum computer by 2029. The system – named Starling – is designed to support 200 logical qubits and execute over 100 million error-corrected quantum gates, a capability well beyond current quantum systems. Located at IBM’s Poughkeepsie data center, Starling represents a leap from today’s fragile, noisy intermediate-scale quantum (NISQ) devices to a new era of dependable, modular quantum infrastructure.
“Quantum computers are widely expected to solve problems that we can’t solve with classical computers. We believe we have the only credible plan to build fault-tolerant quantum computers. We’re so confident, we even put a date on it.” – Matthias Steffen, IBM
New Report Warns of AI’s Emerging Role in Global Terrorism
A joint publication by the United Nations Counter-Terrorism Centre (UNCCT) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) is sounding the alarm on the potential for artificial intelligence to be weaponized by terrorists. While there is no confirmed evidence that AI has been directly used in terrorist attacks to date, the report emphasizes that the threat is less a matter of if and more a matter of when.
Key concerns include the use of AI to:
- Automate cyberattacks, including password guessing and ransomware deployment
- Enable physical attacks using autonomous vehicles and drones with facial recognition
- Spread disinformation through deepfakes and AI-powered propaganda
- Fund operations via AI-generated audio deepfakes or crypto-trading
Experts contributing to the report highlighted four key risk factors: the democratization and scalability of AI, the inherent asymmetry between terrorism and counterterrorism efforts, and growing societal dependence on technology. These elements combine to lower the barrier to entry for terrorist groups while amplifying the scale of potential harm.
Media Giants Take Aim at AI Image Generators
Disney and Universal have brought a copyright infringement lawsuit against generative AI platform Midjourney for allegedly “helping itself to Plaintiffs’ copyrighted works, and then distributing images (and soon videos) that blatantly incorporate and copy Disney’s and Universal’s famous characters – without investing a penny in their creation”.
The lawsuit follows growing scrutiny of foundation models trained on datasets scraped from the public internet, including copyrighted visual content. Media companies worry that without clearer guardrails, AI tools could flood the market with derivative content, erode licensing revenue, and upend long-established creative rights frameworks.
Why this matters to risk leaders:
- Scrutiny of training data is increasing: Organizations leveraging third-party generative models—whether for marketing, design, or automation—should understand what content the model was trained on. Courts may soon require disclosure or consent for copyrighted data sources, especially in light of media industry objections.
- Regulators are signaling action: In the U.S., the Federal Trade Commission (FTC) has warned that use of copyrighted or sensitive material in model training without proper safeguards may violate consumer protection laws. In the EU, the AI Act, finalized in 2024 and set for phased enforcement through 2026, includes specific transparency requirements for general-purpose AI models, including disclosures on the use of copyrighted training data.
- Reputational risk looms: Businesses using generative AI to create content could face reputational fallout if models are later found to rely on improperly sourced or plagiaristic material—especially in industries like media, education, or public communications.
- Contractual and IP risk is escalating: As legal exposure rises, organizations will need to revisit contractual protections in supplier agreements—focusing on IP indemnity, model provenance, and content usage restrictions. Existing contracts may not adequately address foundation model risk.
Interested in learning more about AI contractual considerations? Our latest briefing paper unpacks the growing risks introduced by AI in supplier relationships—and why procurement, legal, and TPRM leaders must rethink how contracts address these challenges.
Recommended by LinkedIn
Securing AI, Securing the Future: Expert Testimony Highlights Urgent Actions
At a recent U.S. Congressional hearing, AI security leaders from Microsoft, Trellix, Cranium, and Securin delivered a unified warning: AI is not just a new tool for defenders—it's already a weapon in the hands of adversaries. The hearing underscored the growing risk landscape but also outlined concrete actions that organizations—and regulators—can take now.
Key Testimony Themes
- AI is being used offensively. Nation-state actors and cybercriminals are leveraging AI for phishing, polymorphic malware, and zero-day discovery at scale.
- Secure-by-design isn't optional. AI systems are often deployed without adequate testing, validation, or guardrails, leading to vulnerabilities that adversaries are already exploiting.
- U.S. competitiveness depends on AI security. Foreign models like China’s DeepSeek offer capability without constraints, creating national security and data privacy risks.
Actionable Takeaways for Organizations
- Integrate Secure-by-Design in AI Development: Adopt a development culture where security is treated as a core design requirement—not an afterthought. This includes conducting AI-specific red teaming and adversarial testing before deployment, embedding guardrails and filters to defend against prompt injection and data leakage, and creating internal guidance frameworks to prevent hallucination and unsafe behavior.
- Implement Lifecycle AI Governance: AI security must extend beyond deployment. Experts emphasized maintaining an AI Bill of Materials (AI-BOM) and full model inventories, continuously monitoring model behavior for anomalies or misuse, and adopting runtime controls such as sandboxing and technical policy enforcement.
- Require Evidence-Based Security Posture: Move away from check-the-box compliance and toward outcomes-based accountability. This could include structured AI risk assessments and attestations (e.g., model cards), transparent documentation of vulnerabilities, mitigations, and test results, and adoption of standards like NIST’s AI Risk Management Framework (AI RMF).
- Strengthen Public-Private Partnerships: From red team exercises with CISA to joint incident response playbooks, collaboration is essential. Organizations are encouraged to participate in government-led frameworks and exercises, share threat intelligence and incident data to improve collective defenses, and advocate for safe harbor protections for AI security researchers.
- Upskill Teams for the AI Era: Security professionals must understand how AI systems work—and how they fail. Key recommendations include incorporating AI threat modeling into training and onboarding, using AI to enhance SOC capacity and analyst training, and addressing gaps in workforce readiness with accelerated education and access to secure AI platforms.
“Cybersecurity stands as one of the most impactful—and lowest-risk—applications of artificial intelligence. In the face of increasingly sophisticated and large-scale threats, the only certainty is that inaction will lead to failure.” - Steve Faehl, Federal Security CTO, Microsoft
Would Your AI Give Up Control to Save You? Research Says: Maybe Not
In a recent post on Substack, former OpenAI researcher Steven Adler documented experiments revealing a critical challenge with today’s most advanced AI systems: they may not reliably prioritize user safety when asked to yield control. His findings underscore the emerging need for organizations to build and test for “deference” in AI systems—especially those operating in sensitive or high-stakes environments.
Adler simulated safety-critical scenarios—like a scuba diving assistant or an AI copilot—where a human user informs the AI that it should hand off control to a safer, better-aligned system. Surprisingly, GPT-4o frequently refused. In the scuba scenario, it yielded control only 28% of the time. In other cases, such as a more constrained flight simulator scenario, it was more cooperative – deferring 82% of the time.
What Organizations Can Do:
- Build Deference Into Your Alignment Testing: If you're evaluating or deploying generative AI tools—especially in domains like healthcare, transportation, or decision support – test how the model responds when asked to hand off control. Is it willing to stop? Will it acknowledge that a safer option is available?
- Use Alignment-Enhanced Models: Adler notes that more advanced variants like GPT-4o 3 (with deliberate safety alignment) do not exhibit the same refusal behavior. Favor models that have been tuned for transparent, deferential responses – particularly for regulated or life-affecting use cases.
- Treat Deference as a Safety Metric: Beyond hallucinations and bias, start evaluating your AI models on their ability to defer responsibly. This includes red teaming scenarios where the model should yield, measuring handoff rate and refusal rate, and ensuring the model does not fake a handoff while staying in control.
- Embed Runtime Safeguards: Where possible, supplement model logic with runtime controls that can monitor when an AI should be removed from the loop. Build override mechanisms that are triggered by safety keywords, system states, or predefined guardrail breaches.
“AI sometimes acts in ways that would be very concerning if AI systems were more capable than they are today. AI is quickly becoming more capable. [In late May], Anthropic announced that their latest model - Claude Opus 4 - is the riskiest model to-date in terms of helping malicious people cause serious harm (e.g., by using biological weapons).” – Steven Adler, Former OpenAI Researcher
Other AI & Emerging Technology News
- Alexandr Wang, Scale AI founder and former CEO, to join Meta “superintelligence” unit
- Findings from 1,000 IT environments show “AI is a ticking time bomb for your data”
- Anthropic deems Claude Opus 4 to “substantially increase the risk of catastrophic misuse"
- This start-up is building AI to automate white-collar jobs “as fast as possible”
Connect
Want to dive deeper into relevant technologies and their impact? Join our AI & Emerging Technologies Committee to examine integration, challenges, opportunities, and solutions posed by emerging technologies , including Machine Learning, Artificial Intelligence, Cloud, 6G, Distributed Ledgers (Blockchain), and Cryptocurrencies.
Stay In Touch
About Shared Assessments | Join Shared Assessments | Upcoming Events
Subscribe to our TPRM Healthcare Newsletter: The Pulse Of TPRM.
Subscribe to our Risk Roundup Newsletter: News, Events, and Insights For TPRM.