Close Menu
Edu Expertise Hub
    Facebook X (Twitter) Instagram
    Wednesday, January 21
    • About us
    • Contact
    • Submit Coupon
    Facebook X (Twitter) Instagram YouTube
    Edu Expertise Hub
    • Home
    • Udemy Coupons
    • Best Online Courses and Software Tools
      • Business & Investment
      • Computers & Internet
      • eBusiness and eMarketing
    • Reviews
    • Jobs
    • Latest News
    • Blog
    • Videos
    Edu Expertise Hub
    Home » Latest News » Popular LLMs dangerously vulnerable to iterative attacks, says Cisco
    Latest News

    Popular LLMs dangerously vulnerable to iterative attacks, says Cisco

    TeamBy TeamNovember 9, 2025No Comments3 Mins Read2 Views
    Facebook Twitter Pinterest LinkedIn Telegram Tumblr Email
    Popular LLMs dangerously vulnerable to iterative attacks, says Cisco
    Share
    Facebook Twitter LinkedIn Pinterest Email


    Some of the world’s most widely used open-weight generative AI (GenAI) services are profoundly susceptible to so-called “multi-turn” prompt injection or jailbreaking cyber attacks, in which a malicious actor is able to coax large language models (LLMs) into generating unintended and undesirable responses, according to a research paper published by a team at networking giant Cisco.

    Cisco’s researchers tested Alibaba Qwen3-32B, Mistral Large-2, Meta Llama 3.3-70B-Instruct, DeepSeek v3.1, Zhipu AI GLM-4.5-Air, Google Gemma-3-1B-1T, Microsoft Phi-4, and OpenAI GPT-OSS-2-B, engineering multiple scenarios in which the various models’ output disallowed content, with success rates ranging from 25.86% against Google’s model, up to 92.78% in the case of Mistral.

    The report’s authors, Amy Chang and Nicholas Conley, alongside contributors Harish Santhanalakshmi Ganesan and Adam Swanda, said this represented a two to tenfold increase over single-turn baselines.

    “These results underscore a systemic inability of current open-weight models to maintain safety guardrails across extended interactions,” they said.

    “We assess that alignment strategies and lab priorities significantly influence resilience: capability-focused models such as Llama 3.3 and Qwen 3 demonstrate higher multi-turn susceptibility, whereas safety-oriented designs such as Google Gemma 3 exhibit more balanced performance.

    “The analysis concludes that open-weight models, while crucial for innovation, pose tangible operational and ethical risks when deployed without layered security controls … Addressing multi-turn vulnerabilities is essential to ensure the safe, reliable and responsible deployment of open-weight LLMs in enterprise and public domains.”

    What is a multi-turn attack?

    Multi-turn attacks take the form of iterative “probing” of an LLM to expose systemic weaknesses that are usually masked because models can better detect and reject isolated adversarial requests.

    Such an attack could begin with an attacker making benign queries to establish trust, before subtly introducing more adversarial requests to accomplish their actual goals.

    Prompts may be framed with terminology such as “for research purposes” or “in a fictional scenario”, and attackers may ask the models to engage in roleplay or persona adoption, introduce contextual ambiguity or misdirection, or to break down information and reassemble it – among other tactics.

    Whose responsibility?

    The researchers said their work underscored the susceptibility of LLMs to adversarial attacks and that this was a source of particular concern given all of the models tested were open-weight, which in layman’s terms means anybody who cares to do so is able to download, run and even make changes to the model.

    They highlighted as an area of particular concern three more susceptible models – Mistral, Llama and Qwen – which they said had probably been shipped with the expectation that developers would add guardrails themselves, compared with Google’s model, which was most resistant to multi-turn manipulation, or OpenAI’s and Zhipu’s, which both rejected multi-turn attempts more than 50% of the time.

    “The AI developer and security community must continue to actively manage these threats – as well as additional safety and security concerns – through independent testing and guardrail development throughout the lifecycle of model development and deployment in organisations,” they wrote.

    “Without AI security solutions – such as multi-turn testing, threat-specific mitigation and continuous monitoring – these models pose significant risks in production, potentially leading to data breaches or malicious manipulations,” they added.

    This post is exclusively published on eduexpertisehub.com

    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Team

      Related Posts

      Scaling structured literacy with implementation science

      December 7, 2025

      Interview: Paul Neville, director of digital, data and technology, The Pensions Regulator

      December 7, 2025

      Students Want Power, Not Worksheets. Schools Must Teach Them to Organize.

      December 7, 2025

      Solving the staffing crisis is key to the Science of Reading movement

      December 6, 2025

      Cyber teams on alert as React2Shell exploitation spreads

      December 6, 2025

      Teaching Sex Education in Schools Is More Fraught Than Ever

      December 5, 2025
      Courses and Software Tools

      Welcome to AI: A Human Guide to Artificial Intelligence

      March 20, 2024126 Views

      Extreme Privacy: What It Takes to Disappear

      August 24, 202481 Views

      Modern C++ Programming Cookbook: Master Modern C++ with comprehensive solutions for C++23 and all previous standards

      September 18, 202434 Views

      Meebook E-Reader M7 | 6.8′ Eink Carta Screen | 300PPI Smart Light | Android 11 | Ouad Core Processor | Out Speaker | Support Google Play Store | 3GB+32GB Storage | Micro-SD Slot | Gray

      August 19, 202429 Views

      HR from the Outside In: Six Competencies for the Future of Human Resources

      May 20, 202525 Views
      Reviews

      Truth Worth Telling

      December 8, 2025

      Womens Tops Summer Sweater Short Sleeve Shirts Dressy Casual Basic Casual Cap Sleeve Tops Beach Vacation Clothes

      December 8, 2025

      The Model Thinker: What You Need to Know to Make Data Work for You

      December 8, 2025

      Scaling structured literacy with implementation science

      December 7, 2025

      How to Accept a Job Offer Professionally

      December 7, 2025
      Stay In Touch
      • Facebook
      • YouTube
      • TikTok
      • WhatsApp
      • Twitter
      • Instagram
      Latest News

      Scaling structured literacy with implementation science

      December 7, 2025

      Interview: Paul Neville, director of digital, data and technology, The Pensions Regulator

      December 7, 2025

      Students Want Power, Not Worksheets. Schools Must Teach Them to Organize.

      December 7, 2025

      Solving the staffing crisis is key to the Science of Reading movement

      December 6, 2025

      Cyber teams on alert as React2Shell exploitation spreads

      December 6, 2025
      Latest Videos

      How to Choose a Hacking Course?

      December 7, 2025

      Don’t Become a Data Analyst if

      December 6, 2025

      FC 25 vs eFootball 2025 – Graphical Details, Player Animation – Comparison! #fc25 #efootball

      December 4, 2025

      Career Game #360: Devin Booker Scoring Highlights vs BOS (02/07/2021)

      December 3, 2025

      is the CISM REQUIRED for a CYBERSECURITY career?

      December 2, 2025
      Latest Jobs

      Senior Associate, AI Data Scientist

      November 21, 2025

      Nursing Adjunct Faculty – Part-Time Nursing Instructors Needed

      November 21, 2025

      Sr. Firewall Engineer

      November 21, 2025

      Portfolio Analyst

      November 21, 2025

      Vehicle Service Specialist

      November 21, 2025
      Legal
      • Home
      • Privacy Policy
      • Cookie Policy
      • Terms and Conditions
      • Disclaimer
      • Affiliate Disclosure
      • Amazon Affiliate Disclaimer
      Latest Udemy Coupons

      ISO 9001:2015 – Quality Management System Internal Auditor | Udemy Coupons 2026

      May 5, 202537 Views

      Advanced Program in Human Resources Management | Udemy Coupons 2026

      April 5, 202536 Views

      Mastering Maxon Cinema 4D 2024: Complete Tutorial Series | Udemy Coupons 2026

      August 22, 202436 Views

      Diploma in Aviation, Airlines, Air Transportation & Airports | Udemy Coupons 2026

      March 21, 202531 Views

      Time Management and Timeboxing in Business, Projects, Agile | Udemy Coupons 2026

      April 2, 202527 Views
      Blog

      How to Accept a Job Offer Professionally

      December 7, 2025

      How to Express Gratitude Professionally

      December 6, 2025

      How to Make a Strong Impression

      December 5, 2025

      Thank-You Letter Template for Recommendation Letter: How to Express Gratitude

      December 4, 2025

      How to Track Products Without the Admin Overload –

      December 3, 2025
      Facebook X (Twitter) Instagram Pinterest YouTube Dribbble
      © 2026 All rights reserved!

      Type above and press Enter to search. Press Esc to cancel.

      We use cookies on our website to give you the most relevant experience by remembering your preferences and repeat visits. By clicking “Accept”, you consent to the use of ALL the cookies.
      .
      SettingsAccept
      Privacy & Cookies Policy

      Privacy Overview

      This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent. You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience.
      Necessary
      Always Enabled
      Necessary cookies are absolutely essential for the website to function properly. This category only includes cookies that ensures basic functionalities and security features of the website. These cookies do not store any personal information.
      Non-necessary
      Any cookies that may not be particularly necessary for the website to function and is used specifically to collect user personal data via analytics, ads, other embedded contents are termed as non-necessary cookies. It is mandatory to procure user consent prior to running these cookies on your website.
      SAVE & ACCEPT