Understanding AI-Driven Phishing Attacks
Explore top LinkedIn content from expert professionals.
Summary
AI-driven phishing attacks represent a sophisticated evolution of traditional phishing scams, where cybercriminals use artificial intelligence to craft realistic, highly targeted emails, calls, or even deepfake multimedia to deceive individuals and organizations. This advanced technology eliminates traditional red flags like poor grammar and obviously fraudulent formatting, making scams increasingly difficult to identify.
- Educate your network: Share knowledge about AI’s ability to generate realistic emails, voice clones, and even deepfake videos to help people identify potential scams.
- Use multi-step verification: Adopting two methods of communication or using multi-factor authentication can help confirm someone’s identity before acting on requests for sensitive information.
- Implement strong security measures: Incorporate tools with advanced behavioral analytics, anomaly monitoring, and real-time detection to counter AI-enabled phishing efforts.
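To make the last takeaway slightly more concrete, here is a toy sketch of heuristic email scoring in Python. The keyword lists, weights, and threshold are illustrative assumptions only; real products layer ML models, sender reputation, and URL/attachment analysis on top of simple rules like these.

```python
import re

# Toy heuristic phishing score -- illustrative only. The keyword lists,
# weights, and threshold are assumptions for demonstration; production
# tools combine many more signals with trained ML models.
URGENT_WORDS = {"urgent", "immediately", "suspended", "act now", "verify"}
CRED_WORDS = {"password", "login", "ssn", "credentials"}

def phishing_score(subject: str, body: str) -> int:
    text = f"{subject} {body}".lower()
    score = 0
    score += sum(2 for w in URGENT_WORDS if w in text)   # urgency pressure
    score += sum(3 for w in CRED_WORDS if w in text)     # asks for secrets
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text):
        score += 4                                       # raw-IP link
    return score

if __name__ == "__main__":
    s = phishing_score("URGENT: verify your login",
                       "Your account is suspended. Enter your password at http://203.0.113.5/")
    print(s, "-> review" if s >= 6 else "-> ok")
```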
-
Leveraging this new OpenAI real-time translator to phish via phone calls in the target's preferred language in 3… 2…

So far, AI has been used for believable translations in phishing emails. For example, my Icelandic customers are seeing a massive increase in phishing in their language in 2024. Previously, only about 350,000 people spoke Icelandic fluently; now AI can do it for the attacker.

We're going to see this real-time translation tool increasingly used to speak in the target's preferred language during phone-call-based attacks. These tools are easily integrated into the technology attackers already use to spoof caller ID, place calls, and voice clone. Now, in any language.

Educate your team, family, and friends. Make sure folks know:
- AI can voice clone
- AI can translate in real time to speak any language
- Caller ID is easily spoofed, with or without AI tools
- AI tools will keep increasing in believability

AI voice clone/spoof example here: https://lnkd.in/gPMVDBYC

Will this AI be used for good? Sure! Real-time translation is quite useful for people, businesses, and travel. We still need to educate folks on how AI is currently used to phish people and how real-time AI translation will increase scams across (previous) language barriers.

*What can we do to protect folks from attackers using AI to trick them?*
- Educate first: make sure folks around you know it's possible for attackers to use AI to voice clone and to deepfake video and audio (in real time during calls).
- Be politely paranoid: encourage your team and community to use two methods of communication to verify someone is who they say they are before sensitive actions like sending money, data, or access. For example, if you get a phone call from your nephew saying he needs bail money now, contact him a different way before sending money to confirm it's an authentic request. (A minimal code sketch of this two-channel rule follows this post.)
- Passphrase: consider agreeing on a passphrase with your loved ones to verify identity in emergencies. For example, if your sister calls you crying and says she needs $1,500 urgently, ask her to say the passphrase you agreed upon together, or contact her via another communication method, before sending money.
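To make the "politely paranoid" two-channel rule concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the channel names, confirm callback, and transfer function are stand-ins rather than any real API; the point is simply that a sensitive action should never proceed on one channel's say-so alone.

```python
from typing import Callable

# Hypothetical illustration of the "two methods of communication" rule:
# a sensitive action runs only after confirmation on a *second*,
# independent channel. All names here are made up for this sketch.
def confirm_via_text(request: str) -> bool:
    # Stand-in: in real life you'd text or call the person yourself,
    # using contact details you already have, never ones from the request.
    return input(f"Confirmed '{request}' by text? (y/n) ") == "y"

def send_money(amount: int, to: str) -> None:
    print(f"Sending ${amount} to {to}")

def sensitive_action(request: str,
                     second_channel_check: Callable[[str], bool],
                     action: Callable[[], None]) -> None:
    """Refuse the action unless an independent channel confirms it."""
    if second_channel_check(request):
        action()
    else:
        print("Not confirmed on a second channel -- do NOT act.")

sensitive_action("nephew needs $500 bail money",
                 confirm_via_text,
                 lambda: send_money(500, "nephew"))
```

-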
#phishingawareness Just a little reminder on #phishing as we might be distracted checking emails while off, or when we return from holiday to a bulging mailbox!

Phishing is the art of sending an email with the aim of getting users to open a malicious file or click on a link so the attacker can steal credentials. But most phishers aren't very good, and the success rate is relatively low: in 2021, the average click rate for a phishing campaign was 17.8%. However, cybercriminals now have AI to write their emails, which might well improve their phishing success rates. Here's why.

The old clues for telling whether something was a phishing mail were:
- It asks you to update or fill in personal information.
- The URL in the email and the URL that displays when you hover over the link differ from one another (a check you can automate; see the sketch after this post).
- The "From" address imitates a legitimate address, especially from a known brand.
- The formatting and design differ from what you usually receive from a brand.
- The content is badly written and may well include typos.
- There is a sense of urgency in the message, encouraging you to quickly perform an action.
- The email contains an attachment you weren't expecting.

While most of these are still valid, there are a few checks you can strike off your list due to the introduction of #AI. When a phisher is using a Large Language Model (LLM) like ChatGPT, a few simple instructions are all it takes to make the email look as if it came from the intended sender. And LLMs do not make grammatical errors or put extra spaces between words (unless you ask them to). They're not limited to one language either: AI can write the same mail in every desired language and make it look like you are dealing with a native speaker. It's also easier to create phishing emails tailored to the intended target.

All in all, the amount of work needed to create an effective phishing email has been reduced dramatically, and the number of phishing emails has gone up accordingly. In the last year, there's been a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing in particular. Because of AI, it's become much harder to recognize phishing emails, which makes things almost impossible for filtering software. According to email security provider Egress, 71% of email attacks created through AI go undetected.

This article gives you tips to raise your game (no paywall either): https://lnkd.in/g6FzYhcr
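The hover-the-link check in the list above is easy to automate. Below is a minimal sketch, using only the Python standard library, that flags anchor tags whose visible URL text points to a different domain than the actual href; the sample HTML and domains are made up for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchFinder(HTMLParser):
    """Flag <a> tags whose visible text looks like a URL but points
    to a different domain than the actual href -- a classic phishing clue."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            if shown.startswith("http"):
                shown_host = urlparse(shown).hostname
                real_host = urlparse(self._href).hostname
                if shown_host and real_host and shown_host != real_host:
                    self.mismatches.append((shown, self._href))
            self._href = None

# Fabricated example: the link *displays* the bank's URL but goes elsewhere.
finder = LinkMismatchFinder()
finder.feed('<a href="http://evil.example.net/login">https://mybank.com/login</a>')
print(finder.mismatches)  # [('https://mybank.com/login', 'http://evil.example.net/login')]
```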
-
🎣 👀 Do you really know who's on that video call with you?

🔍 Mandiant (now part of Google Cloud) analysts have uncovered evidence of commoditized #deepfake video proffered explicitly for #phishing attacks: "advertisements on hacker forums and #Telegram channels in English and Russian boasted of the software’s ability to replicate a person’s likeness to make an attempted extortion, fraud or #socialengineering exercise 'seem more personal in nature.' The going rate is as little as $20 per minute, $250 for a full video or $200 for a training session."

🚩 #AI promises to lower the marginal cost of many operations to near zero, and that can include malicious operations. The novelty here is the relatively low compute required to pull off REAL-TIME video deepfakes. This means technically unsophisticated threat actors can launch a malicious avatar that converses with employees rather than relying on a pre-scripted output. In fact, this innovation pretty much renders last year's #Phishing as a Service kits obsolete.

💥 In a Microsoft Teams video call earlier this year, #CyberArk's chairman, Udi Mokady, found himself staring at a deepfake of himself, created by a researcher at the company. "I was shocked. There I was, crouched over in a hoodie with my office in the background." This same attack was demonstrated live at #DEFCON31 (very sorry to have missed it).

When I talk with #securityawareness teams, they often say they're not prioritizing #AI stuff "just yet" because they're still drilling the basics. Sure, but threat actors are rapidly up-skilling and up-leveling. Why wouldn't you do the same for your workforce?

🔊 🎯 Any SMEs, evangelists, or executives with a public presence have provided more than enough data for training. Mokady's double was trained from audio on earnings calls. Smaller companies, where everyone knows one another, may be safe for now, but larger organizations are big game, and with many thousands of employees, relying on familiarity will not be an adequate #cybersecurity defense strategy.

Rick McElroy Tristan Morris Molly McLain Sterling Ashley Chackman 🔹️James McQuiggan Michael McLaughlin Julian Dobrowolski

#informationsecurity #deepfakes
---------
💯 Human-generated content
✅ I've personally read all linked content
https://lnkd.in/gMpmh9ap
-
Generative AI and the Emergence of Unethical Models: Examining WormGPT

It is surprising that it has taken malware developers this long to create an unethical GPT model. Enter WormGPT, a rogue variant of the GPT-J language model that brings the formidable power of generative AI into the threat actor supply chain, significantly increasing the risk of business email compromise (BEC) attacks.

WormGPT Overview: WormGPT is a tool built for malicious activities that harnesses AI technology. It has several notable capabilities, including unlimited character support, chat memory retention, and code formatting. Specifics regarding its training datasets, which reportedly revolve predominantly around malware, remain undisclosed.

Experiment Findings: Controlled experiments were conducted to evaluate WormGPT's potential for harm. In one such experiment, it was tasked with creating a manipulative email to deceive an account manager into paying a fraudulent invoice. The results were predictably alarming: the AI crafted a deceptive email with striking persuasive power, showcasing its capacity to orchestrate complex phishing and BEC attacks. These findings reflect the capabilities of generative AI resembling ChatGPT but devoid of ethical boundaries, and they underscore a long-speculated concern: the threat that generative AI tools could pose, even in the hands of inexperienced threat actors.

The Potential of Generative AI for BEC Attacks: Generative AI excels at producing near-perfect grammar, enhancing the perceived authenticity of deceptive emails. It also lowers the entry threshold, making sophisticated BEC attacks accessible to less skilled threat actors. As expected, the evolving landscape of cybersecurity brings new complexities and demands fortified defenses against these advanced threats.

The logical progression leads to the use of AI as a defense against AI. By leveraging AI to counter these AI-orchestrated threats, defenses can potentially outpace and block them before they even launch. Synthetic data generated from core threats and their variants can aid in bolstering defenses against an impending wave of similar attacks. Organizations will increasingly rely on AI tools to discover, detect, and resolve these sophisticated threats; simple header checks remain useful alongside them (see the sketch after this post).

As this reality unfolds, it becomes clear that the question was not if, but when. The road ahead demands both adaptability and tenacity. #cybersecurity #chatGPT
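Since the post centers on BEC, here is a minimal, hedged sketch of one classic non-AI heuristic referenced above: flagging messages whose Reply-To domain differs from the From domain. It uses only Python's standard email library; the sample message and domains are fabricated, and real defenses combine checks like this with ML-based content analysis.

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    """Return True if Reply-To routes replies to a different domain
    than From -- a frequent pattern in BEC and invoice-fraud emails."""
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

# Fabricated example: the display name and From address look legitimate,
# but replies are silently routed to an attacker-controlled domain.
sample = (
    "From: CFO Jane Doe <jane.doe@acme-corp.com>\n"
    "Reply-To: jane.doe@acme-corp-payments.xyz\n"
    "Subject: Urgent wire transfer\n\n"
    "Please pay the attached invoice today."
)
print(reply_to_mismatch(sample))  # True -> escalate for review
```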
-
Black Hat Recap!

It has been quite a challenge to narrow down my favorite presentations at Black Hat, but this one really stood out to me. There were several discussions about Artificial Intelligence (AI) and Large Language Models (LLMs), and I have been wondering how LLMs such as GPT-4 are playing into the creation of phishing emails. More and more, we have seen phishing emails getting more sophisticated. We are no longer hearing from a prince who wants to give you money!

Researchers Fredrik Heiding, a research fellow at Harvard, Jeremy Bernstein, a postdoctoral researcher at MIT, Bruce Schneier, a security expert and author, and Arun Vishwanath, founder and Chief Technologist at Avant Research Group, conducted a groundbreaking experiment to see how LLMs performed against human-led efforts to create effective phishing campaigns. Their target was students at Harvard University, using a Starbucks giveaway as the lure.

I won't give away the results of the experiment (they are in the article), but as a CISO, it is concerning how easy it is to create a phishing email. The creators no longer need to be native English speakers to craft an email that may be hard for a person to spot.

My takeaway: more than ever, Security Awareness training is critical for your organization. It will take humans to identify a phishing email, because AI and LLMs have made it easier to create realistic phishing emails that could bypass current defensive technology.

Oh, and Heiding also gave us a great reminder: the "Unsubscribe" link is often where the bad guys want you to click. Stay vigilant!

#AI #LLMs #PhishingEmails #SecurityAwareness #cybersecurity #CISOs
-
🔒 Welcome to Day 24 of the 30-Day #CyberSecureMindset Challenge: Shield Your Family from Summer Cyber Risks!

🚨 Cybercriminals Do Not Take Weekends Off! 🚨
As a retired Federal Bureau of Investigation (FBI) agent, I've witnessed first-hand how #cybercriminals work tirelessly day and night, without any break, to execute their malicious schemes. The recent emergence of #fraudgpt, an AI tool developed exclusively for offensive purposes, is a stark reminder of the ever-evolving cyber threats we face.

🔐 The Importance of a CyberSecure Mindset 🔐
FraudGPT, available on #darkweb marketplaces, poses a significant risk to individuals and organizations alike. This #aibot gives #cybercriminals the ability to effortlessly craft spear-phishing emails, create cracking tools, conduct carding, and even write malicious code. The potential for damage is vast, including #databreaches, #ransomware attacks, and business email compromise.

🛡️ No Boundaries for #Cybercriminals 🛡️
The evildoers behind FraudGPT claim the tool has no boundaries, offering exclusive features that cater to any individual's sinister intentions. With over 3,000 confirmed sales and reviews, it's evident that this AI-driven cybercrime is on the rise.

🎯 Take Action with a CyberSecure Mindset 🎯
Let's not be caught off guard! It's time to equip ourselves with the knowledge and tools necessary to shield against cyber threats. A CyberSecure Mindset is not just an option; it's a necessity in today's digital landscape.

🔍 Stay Informed: Keep yourself updated on the latest cybersecurity threats, including emerging tools like FraudGPT. Awareness is the first line of defense.

🔒 Implement Multi-Factor Authentication (MFA): Enable MFA on all your online accounts, especially sensitive ones like email and financial platforms. This adds an extra layer of security, making it harder for cybercriminals to gain unauthorized access.

⚠️ Be Cautious of Unsolicited Messages: Cybercriminals often use phishing emails and messages to trick users into revealing sensitive information. Be wary of unexpected emails or messages, especially those asking for personal data or login credentials.

💡 Use Strong, Unique Passwords: Avoid using the same password across multiple accounts. Create strong, complex passwords and consider using a password manager to keep them secure. (A toy passphrase-generator sketch follows this post.)

🔄 Regularly Update Software: Keep your devices and software up to date with the latest security patches. Cybercriminals often exploit vulnerabilities in outdated software to gain access to systems.

🚀 Call to Action 🚀
Join me in building a strong CyberSecure Mindset. Stay informed about the latest cybersecurity trends, adopt security best practices, and promote a culture of cyber resilience within your organization. Together, we can outsmart cybercriminals and safeguard our digital future.

#Cybersecurity #AIinCybercrime #CyberThreats #StayInformed #DefendAgainstCyberAttacks #CyberResilience #thesecrettocybersecurity
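As one concrete way to act on the strong-password advice above, here is a minimal sketch that generates a random passphrase with Python's secrets module, which is designed for security-sensitive randomness. The tiny word list is an illustrative stand-in; a real generator should draw from a large curated list such as the EFF diceware set.

```python
import secrets

# Tiny illustrative word list -- a real generator should use a large
# curated list (e.g., the EFF diceware list of 7,776 words).
WORDS = ["copper", "meadow", "violin", "glacier", "pepper",
         "lantern", "orbit", "thistle", "canyon", "ember"]

def make_passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Pick words with secrets.choice (a CSPRNG) rather than
    random.choice, whose output is predictable."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g., "orbit-pepper-glacier-ember-copper"
```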
-
Generative and Deepfake Fraud on the Rise: My Canary in the Coal Mine Is Getting Sick!

Every time I hang out with our amazing merchant members, I ask whether anyone is experiencing fraud attacks that use deepfakes or generative AI. While fraudsters have been using FraudGPT and other unlocked AI engines to do bad things like write better phishing emails and assist with writing code for bots, until this month I had not heard of real and consistent scams leveraging these new tools. This simple question has been my canary in the coal mine.

In the past few weeks, two merchants have given me very real examples of criminals leveraging these new tools in innovative and more sophisticated ways. In one example, the fraudster used audio samples to create fake audio of the consumer to bypass voice biometric authentication. In another, the fraudster used deepfake tech to create a video of themselves presenting required documents to pass screenings.

What does this mean? In a nutshell, it means what I have been saying for 25+ years:
- There is no silver bullet to solve fraud.
- Every awesome fraud tool works really well until it is adopted in the mainstream; then fraudsters crack it and its effectiveness diminishes. It's still valuable, just not as valuable.

To me this means voice and video biometrics are on the edge of being compromised. Fraud detection tools need to get more sophisticated, and fraudsters will keep improving their attacks too. Be alert: if you use voice or video as part of your fraud screening, train your team to watch for the fakes. I believe it's still pretty obvious through manual review, but it's here.

I played Christmas music on the drive home last night, because as I pondered what this means, I realized Black Friday/Cyber Monday is right around the corner. We are going to have to collaborate and stay connected this season, because this is a very real new threat merchants need to be prepared for, and it's going to appear in ways we never thought of.