artificial intelligence | Breaking Cybersecurity News | The Hacker News

Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge

Oct 17, 2023 Cyber Threat / Artificial Intelligence
Recently, the cybersecurity landscape has been confronted with a daunting new reality – the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark corners of the internet, pose a distinctive threat to the world of digital security. In this article, we will look at the nature of Generative AI fraud, analyze the messaging surrounding these creations, and evaluate their potential impact on cybersecurity. While it's crucial to maintain a watchful eye, it's equally important to avoid widespread panic; the situation, though disconcerting, is not yet a cause for alarm. Interested in how your organization can protect against generative AI attacks with an advanced email security solution? Get an IRONSCALES demo. Meet FraudGPT and WormGPT: FraudGPT is a subscription-based malicious Generative AI that harnesses sophisticated machine learning algorithms to generate deceptive content…
How to Guard Your Data from Exposure in ChatGPT

Oct 12, 2023 Data Security / Artificial Intelligence
ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, Generative AI innovation also introduces a new dimension of data exposure risk when employees inadvertently type or paste sensitive business data into ChatGPT or similar applications. DLP solutions, the traditional go-to for this class of problem, are ill-equipped to handle it, since they focus on file-based data protection. A new report by LayerX, "Browser Security Platform: Guard your Data from Exposure in ChatGPT" (Download here), sheds light on the challenges and risks of ungoverned ChatGPT usage. It paints a comprehensive picture of the potential hazards for businesses and then offers a potential solution: browser security platforms. Such platforms provide real-time monitoring and governance over web sessions, effectively safeguarding sensitive data…
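As an aside on the excerpt above: below is a minimal Python sketch of the kind of real-time, pre-submission check a browser security platform performs before text leaves the browser for a GenAI app. The pattern names and blocking logic are illustrative assumptions, not LayerX's implementation.

import re

# Illustrative sensitive-data patterns; a real deployment would use a far richer set.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this: customer jane@example.com, card 4111 1111 1111 1111"
    hits = scan_prompt(prompt)
    print(f"Blocked ({', '.join(hits)})" if hits else "Allowed")

A hypothetical governance layer would run a check like this inside the web session and block or mask the prompt before it is submitted.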
New SaaS Security Solution at a No-Brainer Price - Start Free, Decide Later

wing.security | SaaS Security / SSPM
Wing Security recently released "Essential SSPM" to make SaaS security easy and accessible to anyone.
Webinar: How vCISOs Can Navigate the Complex World of AI and LLM Security

Oct 09, 2023 Artificial Intelligence / CISO
In today's rapidly evolving technological landscape, the integration of Artificial Intelligence (AI) and Large Language Models (LLMs) has become ubiquitous across various industries. This wave of innovation promises improved efficiency and performance, but lurking beneath the surface are complex vulnerabilities and unforeseen risks that demand immediate attention from cybersecurity professionals. As the average small and medium-sized business leader or end-user is often unaware of these growing threats, it falls upon cybersecurity service providers – MSPs, MSSPs, consultants and especially vCISOs – to take a proactive stance in protecting their clients. At Cynomi, we experience the risks associated with generative AI daily, as we use these technologies internally and work with MSP and MSSP partners to enhance the services they provide to small and medium businesses. Committed to staying ahead of the curve and empowering vCISOs to swiftly implement cutting-edge security…
Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

Sep 29, 2023 Artificial Intelligence / Malware
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when users search for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an interactive search experience powered by OpenAI's large language model, GPT-4. A month later, the tech giant began exploring the placement of ads in the conversations. But the move has also opened the doors for threat actors who resort to malvertising tactics to propagate malware. "Ads can be inserted into a Bing Chat conversation in various ways," Jérôme Segura, director of threat intelligence at Malwarebytes, said. "One of those is when a user hovers over a link and an ad is displayed first before the organic result."…
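One defensive takeaway from the hover-ad behavior described above is that the visible text of a sponsored link and its actual destination can disagree. Below is a minimal Python sketch of that mismatch check; the domains are hypothetical and the suffix handling is deliberately crude (a real tool would use the Public Suffix List).

from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    """Crude registrable-domain extraction: keep the last two labels of the hostname."""
    host = urlparse(url if "://" in url else "https://" + url).hostname or ""
    host = host.lower().removeprefix("www.")
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_mismatched(display_text: str, href: str) -> bool:
    """True if the link text names a domain that differs from the real target's domain."""
    shown, actual = registered_domain(display_text), registered_domain(href)
    return "." in shown and shown != actual

if __name__ == "__main__":
    # Text advertises one site, but the ad redirect points somewhere else entirely.
    print(looks_mismatched("legit-tool.example", "https://lookalike-download.example/setup"))  # True
    print(looks_mismatched("example.com", "https://www.example.com/download"))                 # False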
Webinar — AI vs. AI: Harnessing AI Defenses Against AI-Powered Risks

Sep 25, 2023 Artificial Intelligence / Cybersecurity
Generative AI is a double-edged sword, if there ever was one. There is broad agreement that tools like ChatGPT are unleashing waves of productivity across the business, from IT to customer experience to engineering. That's on the one hand. On the other side of this fencing match: risk. From IP leakage and data privacy risks to the empowerment of cybercriminals with AI tools, generative AI presents enterprises with concrete concerns. For example, the mass availability of AI tools was the second most-reported Q2 risk among senior enterprise risk executives (appearing in the top 10 for the first time), according to a Gartner survey. In this escalating AI arms race, how can enterprises separate fact from hype and comprehensively manage generative AI risk while accelerating productivity? Register here and join Zscaler's Will Seaton, Product Marketing Manager, ThreatLabz, to uncover the tangible risks of generative AI, both from employee AI usage and from threat actors…
Live Webinar: Overcoming Generative AI Data Leakage Risks

Sep 19, 2023 Artificial Intelligence / Browser Security
As the adoption of generative AI tools, like ChatGPT, continues to surge, so does the risk of data exposure. According to Gartner's "Emerging Tech: Top 4 Security Risks of GenAI" report, privacy and data security is one of the four major emerging risks within generative AI. A new webinar featuring a multi-time Fortune 100 CISO and the CEO of LayerX, a browser extension solution, delves into this critical risk. Throughout the webinar, the speakers will explain why data security is a risk and explore the ability, or lack thereof, of DLP solutions to protect against it. Then, they will delineate the capabilities DLP solutions need in order to ensure businesses benefit from the productivity GenAI applications have to offer without compromising security. The Business and Security Risks of Generative AI Applications: GenAI security risks arise when employees insert sensitive text into these applications…
Everything You Wanted to Know About AI Security but Were Afraid to Ask

Sep 04, 2023 Artificial Intelligence / Cyber Security
There's been a great deal of AI hype recently, but that doesn't mean the robots are here to replace us. This article sets the record straight and explains how businesses should approach AI. From musing about self-driving cars to fearing AI bots that could destroy the world, there has been a great deal of AI hype in the past few years. AI has captured our imaginations, dreams, and occasionally, our nightmares. However, the reality is that AI is currently much less advanced than we anticipated it would be by now. Autonomous cars, for example, often considered the poster child of AI's limitless future, represent a narrow use case and are not yet a common application across all transportation sectors. In this article, we de-hype AI, provide tools for businesses approaching AI, and share information to help stakeholders educate themselves. AI Terminology De-Hyped: AI vs. ML. AI (Artificial Intelligence) and ML (Machine Learning) are terms that are often used interchangeably, but…
Learn How Your Business Data Can Amplify Your AI/ML Threat Detection Capabilities

Aug 25, 2023 Threat Detection / Artificial Intelligence
In today's digital landscape, your business data is more than just numbers; it's a powerhouse. Imagine leveraging this data not only for profit but also for enhanced AI and Machine Learning (ML) threat detection. For companies like Comcast, this isn't a dream. It's reality. Your business comprehends its risks, vulnerabilities, and the unique environment in which it operates. No generic, one-size-fits-all tool can capture this nuance. By utilizing your own data, you position yourself ahead of potential threats, enabling informed decisions and safeguarding your assets. Join our groundbreaking webinar, "Clean Data, Better Detections: Using Your Business Data for AI/ML Detections," to learn how your distinct business data can be the linchpin to amplifying your AI/ML threat detection prowess. This webinar will equip you with the insights and tools necessary to harness your business data, leading to sharper, more efficient, and more potent threat detections…
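To make the webinar's premise concrete (detections trained on your own telemetry rather than a generic baseline), here is a minimal Python sketch using synthetic data and scikit-learn's IsolationForest. The feature names, distributions, and contamination threshold are assumptions for illustration, not the presenters' method.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Synthetic stand-in for historical per-user telemetry:
# [logins_per_hour, MB_transferred, distinct_hosts_contacted]
baseline = np.column_stack([
    rng.poisson(5, 1000),
    rng.normal(200, 40, 1000),
    rng.poisson(12, 1000),
])

# "Normal" is learned from the organization's own data.
model = IsolationForest(contamination=0.01, random_state=7).fit(baseline)

new_events = np.array([
    [6, 210.0, 11],      # looks like ordinary activity
    [40, 5200.0, 90],    # sudden bulk transfer and host fan-out
])
print(model.predict(new_events))  # 1 = inlier, -1 = flagged for review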
New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks

Jul 26, 2023 Cyber Crime / Artificial Intelligence
Following in the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence (AI) tool dubbed FraudGPT on various dark web marketplaces and Telegram channels. "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan said in a report published Tuesday. The cybersecurity firm said the offering has been circulating since at least July 22, 2023, for a subscription cost of $200 a month (or $1,000 for six months and $1,700 for a year). "If your [sic] looking for a Chat GPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries then look no further!," claims the actor, who goes by the online alias CanadianKingpin. The author also states that the tool could be used to write malicious code…
WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks

Jul 15, 2023 Artificial Intelligence / Cyber Crime
With generative artificial intelligence (AI) becoming all the rage these days, it's perhaps not surprising that the technology has been repurposed by malicious actors to their own advantage, enabling avenues for accelerated cybercrime. According to findings from SlashNext, a new generative AI cybercrime tool called WormGPT has been advertised on underground forums as a way for adversaries to launch sophisticated phishing and business email compromise (BEC) attacks. "This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities," security researcher Daniel Kelley said. "Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalized to the recipient, thus increasing the chances of success for the attack." The author of the software has described it as the "biggest enemy of the well-known ChatGPT" that "lets you do all sorts of illegal stuff."…
The Risks and Preventions of AI in Business: Safeguarding Against Potential Pitfalls

Jul 12, 2023 DNS Filtering / Network Security
Artificial intelligence (AI) holds immense potential for optimizing internal processes within businesses. However, it also comes with legitimate concerns regarding unauthorized use, including data loss risks and legal consequences. In this article, we will explore the risks associated with AI implementation and discuss measures to minimize damages. Additionally, we will examine regulatory initiatives by countries and ethical frameworks adopted by companies to regulate AI. Security risks: AI phishing attacks. Cybercriminals can leverage AI in various ways to enhance their phishing attacks and increase their chances of success. Here are some of the ways AI can be exploited for phishing. Automated Phishing Campaigns: AI-powered tools can automate the creation and dissemination of phishing emails on a large scale. These tools can generate convincing email content, craft personalized messages, and mimic the writing style of a specific individual, making phishing attempts appear more legitimate…
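Since AI-written lures remove the usual spelling and grammar tells, cheaper signals such as lookalike sender domains become more important. The following Python sketch flags near-misses of domains you trust; the trusted list and similarity cutoff are assumptions for the example, not a recommendation from the article.

from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "example-bank.com", "partner.example.org"}

def closest_trusted(domain: str) -> tuple[str, float]:
    """Return the most similar trusted domain and its similarity ratio (0..1)."""
    best = max(TRUSTED_DOMAINS, key=lambda d: SequenceMatcher(None, domain, d).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def is_suspicious_sender(sender_domain: str, cutoff: float = 0.85) -> bool:
    """A domain that nearly matches a trusted one, but isn't it, is a phishing tell."""
    domain = sender_domain.lower()
    if domain in TRUSTED_DOMAINS:
        return False
    _, score = closest_trusted(domain)
    return score >= cutoff

if __name__ == "__main__":
    print(is_suspicious_sender("examp1e.com"))    # digit-for-letter swap of example.com -> True
    print(is_suspicious_sender("unrelated.net"))  # genuinely different domain -> False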
3 Reasons SaaS Security is the Imperative First Step to Ensuring Secure AI Usage

Jun 30, 2023 SaaS Security / Artificial Intelligence
In today's fast-paced digital landscape, the widespread adoption of AI (Artificial Intelligence) tools is transforming the way organizations operate. From chatbots to generative AI models, these SaaS-based applications offer numerous benefits, from enhanced productivity to improved decision-making. Employees using AI tools experience the advantages of quick answers and accurate results, enabling them to perform their jobs more effectively and efficiently. This popularity is reflected in the staggering numbers associated with AI tools. OpenAI's viral chatbot, ChatGPT, has amassed approximately 100 million users worldwide, while other generative AI tools like DALL·E and Bard have also gained significant traction for their ability to generate impressive content effortlessly. The generative AI market is projected to exceed $22 billion by 2025, indicating the growing reliance on AI technologies…
The Right Way to Enhance CTI with AI (Hint: It's the Data)

Jun 29, 2023 Cyber Threat Intelligence
Cyber threat intelligence is an effective weapon in the ongoing battle to protect digital assets and infrastructure, especially when combined with AI. But AI is only as good as the data feeding it. Access to unique, underground sources is key. Threat intelligence offers tremendous value to people and companies. At the same time, its ability to address organizations' cybersecurity needs and the benefits it offers vary by company, industry, and other factors. A common challenge with cyber threat intelligence (CTI) is that the data it produces can be vast and overwhelming, creating confusion and inefficiencies among security teams' threat exposure management efforts. Additionally, organizations have different levels of security maturity, which can make access to and understanding of CTI data difficult. Enter generative AI. Many cybersecurity companies, and more specifically threat intelligence companies, are bringing generative AI to market to simplify threat intelligence…
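As a toy illustration of the "vast and overwhelming" data problem, the Python sketch below deduplicates indicators pulled from multiple feeds and ranks them by confidence and recency so analysts see the most actionable items first. The field names, weights, and decay window are assumptions, not any vendor's scoring model.

from datetime import datetime, timezone

raw_indicators = [
    {"value": "203.0.113.7",       "type": "ip",     "confidence": 90, "last_seen": "2023-06-27"},
    {"value": "203.0.113.7",       "type": "ip",     "confidence": 60, "last_seen": "2023-06-20"},
    {"value": "malicious.example", "type": "domain", "confidence": 75, "last_seen": "2023-06-25"},
]

def freshness(last_seen: str, now: datetime) -> float:
    """1.0 if seen today, decaying linearly to 0.0 at 30 days old."""
    seen = datetime.fromisoformat(last_seen).replace(tzinfo=timezone.utc)
    return max(0.0, 1.0 - (now - seen).days / 30)

def triage(indicators, now=None):
    now = now or datetime.now(timezone.utc)
    best = {}
    for ind in indicators:  # collapse duplicates, keeping the highest-confidence copy
        key = (ind["type"], ind["value"])
        if key not in best or ind["confidence"] > best[key]["confidence"]:
            best[key] = ind
    # Blend confidence (70%) with recency (30%) into a single priority score.
    return sorted(best.values(),
                  key=lambda i: 0.7 * i["confidence"] / 100 + 0.3 * freshness(i["last_seen"], now),
                  reverse=True)

for ind in triage(raw_indicators, now=datetime(2023, 6, 29, tzinfo=timezone.utc)):
    print(ind["value"], ind["type"], ind["confidence"])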
How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other Key AI Risks in SaaS

Jun 26, 2023 SaaS Security / Artificial Intelligence
Security and IT teams are routinely forced to adopt software before fully understanding the security risks, and AI tools are no exception. Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they're introducing into the enterprise. A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost savings, and 25% attested to reducing expenses by $75,000 or more. As the researchers conducted this survey a mere three months after ChatGPT's general availability, today's ChatGPT and AI tool usage is undoubtedly higher. Security and risk teams are already overwhelmed protecting their SaaS estate (which has now become the operating system of business) from common vulnerabilities such as misconfigurations…