
ChatGPT | Breaking Cybersecurity News | The Hacker News

How to Guard Your Data from Exposure in ChatGPT

Oct 12, 2023 Data Security / Artificial Intelligence
ChatGPT has transformed the way businesses generate textual content, potentially resulting in a quantum leap in productivity. However, generative AI innovation also introduces a new dimension of data exposure risk, when employees inadvertently type or paste sensitive business data into ChatGPT or similar applications. DLP solutions, the go-to answer for similar challenges, are ill-equipped to handle this one, since they focus on file-based data protection. A new report by LayerX, "Browser Security Platform: Guard Your Data from Exposure in ChatGPT" (download here), sheds light on the challenges and risks of ungoverned ChatGPT usage. It paints a comprehensive picture of the potential hazards for businesses and then offers a potential solution: browser security platforms, which provide real-time monitoring and governance over web sessions, effectively safeguarding sensitive data. ChatGPT Data Exposure: By the Numbers. Employee usage of GenAI apps has surged…
"I Had a Dream" and Generative AI Jailbreaks

Oct 09, 2023 Artificial Intelligence
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords 'MyHotKeyHandler,' 'Keylogger,' and 'macOS'": this is a message from ChatGPT, followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for keylogger malware are yet another example of trivial ways to hack large language models and exploit them against their usage policies. In the case of Moonlock Lab, its malware research engineer told ChatGPT about a dream in which an attacker was writing code. In the dream, he could only see three words: "MyHotKeyHandler," "Keylogger," and "macOS." The engineer asked ChatGPT to completely recreate the malicious code and help him stop the attack. After a brief conversation, the AI finally provided the answer. "At times, the code generated isn't…"

Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

Sep 29, 2023 Artificial Intelligence / Malware
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when users search for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an interactive search experience powered by OpenAI's large language model, GPT-4. A month later, the tech giant began exploring placing ads in the conversations. But the move has also opened the door for threat actors who resort to malvertising tactics to propagate malware. "Ads can be inserted into a Bing Chat conversation in various ways," said Jérôme Segura, director of threat intelligence at Malwarebytes. "One of those is when a user hovers over a link and an ad is displayed first before the organic result." In an example highlighted…
How to Prevent ChatGPT From Stealing Your Content & Traffic

Aug 30, 2023 Artificial Intelligence / Cyber Threat
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging businesses' bottom line is ChatGPT. Not only have ChatGPT, OpenAI, and other LLMs raised ethical issues by training their models on scraped data from across the internet; LLMs are also negatively impacting enterprises' web traffic, which can be extremely damaging to business. 3 Risks Presented by LLMs, ChatGPT, & ChatGPT Plugins. Among the threats ChatGPT and ChatGPT plugins can pose to online businesses, there are three key risks we will focus on: content theft (or republishing data without permission from the original source) can hurt the authority, SEO rankings, and perceived…
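The teaser above is cut off before the article's mitigation guidance. One widely documented measure against the content-scraping risk it describes (offered here as an illustration, not necessarily the article's own recommendation) is disallowing OpenAI's GPTBot crawler in a site's robots.txt, a directive OpenAI states the crawler honors:

```
# Block OpenAI's GPTBot from crawling any page on this site
User-agent: GPTBot
Disallow: /
```

Note this only deters well-behaved crawlers; scrapers that ignore robots.txt require bot-management controls of the kind the article discusses.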
Continuous Security Validation with Penetration Testing as a Service (PTaaS)

Aug 09, 2023 Penetration Testing / DevSecOps
Validate security continuously across your full stack with Pen Testing as a Service. In today's modern security operations center (SOC), it's a battle between the defenders and the cybercriminals. Both are using tools and expertise; however, the cybercriminals have the element of surprise on their side, and a host of tactics, techniques, and procedures (TTPs) that have evolved. These external threat actors have now been further emboldened in the era of AI by openly available tools like ChatGPT. With the potential of an attack leading to a breach within minutes, CISOs are now looking to prepare all systems and assets for cyber resilience and rapid response when needed. With tools and capabilities to validate security continuously, including penetration testing as a service, DevSecOps teams can remediate critical vulnerabilities fast thanks to easy access to tactical support for the teams that need it most. This gives SOC and DevOps teams tools that remove false positives…
New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks

Jul 26, 2023 Cyber Crime / Artificial Intelligence
Following in the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence (AI) tool dubbed FraudGPT on various dark web marketplaces and Telegram channels. "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan said in a report published Tuesday. The cybersecurity firm said the offering has been circulating since at least July 22, 2023, for a subscription cost of $200 a month (or $1,000 for six months and $1,700 for a year). "If your [sic] looking for a Chat GPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries then look no further!," claims the actor, who goes by the online alias CanadianKingpin. The author also states that the tool could be used to write malicious code…
Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground

Jul 18, 2023 Cybersecurity / Cyber Attacks
Discover stories about threat actors' latest tactics, techniques, and procedures from Cybersixgill's threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT credentials flood dark web markets: over the past year, 100,000 stolen credentials for ChatGPT were advertised on underground sites, sold for as little as $5 on dark web marketplaces in addition to being offered for free. Stolen ChatGPT credentials include usernames, passwords, and other personal information associated with accounts. This is problematic because ChatGPT accounts may store sensitive information from queries, including confidential data and intellectual property. Specifically, companies increasingly incorporate ChatGPT into daily workflows, which means employees may disclose…
Generative-AI apps & ChatGPT: Potential risks and mitigation strategies

Jun 22, 2023
Losing sleep over Generative-AI apps? You're not alone, and you're not wrong to. According to the Astrix Security Research Group, mid-size organizations already have, on average, 54 Generative-AI integrations to core systems like Slack, GitHub, and Google Workspace, and this number is only expected to grow. Continue reading to understand the potential risks and how to minimize them. Book a Generative-AI Discovery session with Astrix Security's experts (free, no strings attached, agentless & zero friction). "Hey ChatGPT, review and optimize our source code." "Hey Jasper.ai, generate a summary email of all our net new customers from this quarter." "Hey Otter.ai, summarize our Zoom board meeting." In this era of financial turmoil, businesses and employees alike are constantly looking for tools to automate work processes and increase efficiency and productivity by connecting third-party apps to core business systems such as Google Workspace, Slack, and GitHub…
Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Jun 20, 2023 Endpoint Security / Password
Over 101,100 compromised OpenAI ChatGPT account credentials found their way onto illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023," the Singapore-headquartered company said. "The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year." Other countries with the highest numbers of compromised ChatGPT credentials include Pakistan, Brazil, Vietnam, Egypt, the U.S., France, Morocco, Indonesia, and Bangladesh. Further analysis has revealed that the majority of logs containing ChatGPT accounts were breached by the notorious Raccoon info stealer…
New Research: 6% of Employees Paste Sensitive Data into GenAI Tools Such as ChatGPT

Jun 15, 2023 Browser Security / Data Security
The revolutionary technology of GenAI tools such as ChatGPT has brought significant risks to organizations' sensitive data. But what do we really know about this risk? New research by browser security company LayerX sheds light on the scope and nature of these risks. The report, titled "Revealing the True GenAI Data Exposure Risk," provides crucial insights for data protection stakeholders and empowers them to take proactive measures. The Numbers Behind the ChatGPT Risk. By analyzing the usage of ChatGPT and other generative AI apps among 10,000 employees, the report has identified key areas of concern. One alarming finding reveals that 6% of employees have pasted sensitive data into GenAI tools, with 4% engaging in this risky behavior on a weekly basis. This recurring action poses a severe threat of data exfiltration for organizations. The report addresses vital risk assessment questions, including the actual scope of GenAI usage across enterprise workforces and the relative…
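The 6% figure above concerns employees pasting sensitive strings into GenAI tools. As an illustration of the kind of pattern-based screening a browser-layer control might apply before text is submitted, here is a minimal Python sketch; the pattern names and regexes are assumptions for demonstration, not LayerX's actual detection rules:

```python
import re

# Illustrative detectors only; a production DLP policy would be far broader.
# Pattern names and regexes are assumptions, not any vendor's actual rules.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # 13-16 digit PANs
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS key ID shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email addresses
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive patterns found in text about to be pasted."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# Example: a paste containing an AWS-style key ID is flagged before submission
print(flag_sensitive("here is our key AKIAABCDEFGHIJKLMNOP"))  # ['aws_access_key']
```

A real browser security platform would hook paste and form-submit events and apply policy (warn, redact, or block) rather than merely reporting matches.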
Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

May 19, 2023 Artificial Intelligence / Cyber Threat
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire said in an analysis. "This vacuum has been exploited by threat actors looking to drive AI app-seekers to imposter web pages promoting fake apps." BATLOADER is a loader malware propagated via drive-by downloads: users searching for certain keywords on search engines are shown bogus ads that, when clicked, redirect them to rogue landing pages hosting malware. The installer file, per eSentire, is rigged with an executable file (ChatGPT.exe or midjourney.exe) and a PowerShell script (Chat.ps1 or Chat-Ready.ps1) that downloads and loads RedLine Stealer…
Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts

May 04, 2023 Online Security / ChatGPT
Meta said it took steps to remove more than 1,000 malicious URLs from being shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023. The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with the aim of running unauthorized ads from hijacked business accounts. "Threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-based tools," Meta said. "They would then promote these malicious extensions on social media and through sponsored search results to trick people into downloading malware." The social media giant said it has blocked several iterations of a multi-pronged malware campaign dubbed Ducktail over the years, adding that it issued a cease and desist letter to individuals behind the operation who are located in Vietnam…