The Japan Times - 'Vibe hacking' puts chatbots to work for cybercriminals


Photo: Kirill KUDRYAVTSEV - AFP/File

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg up in producing malicious programs.


So-called "vibe hacking" -- a twist on the more positive "vibe coding", in which generative AI tools supposedly let people without extensive expertise write software -- marks "a concerning evolution in AI-assisted cybercrime", according to American company Anthropic.

The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".

Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.

Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.

"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

- Dodging safeguards -

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.

But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.

The approach involved convincing generative AI that it is taking part in a "detailed fictional world" in which creating malware is seen as an art form -- asking the chatbot to play the role of one of the characters and create tools able to steal people's passwords.

"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.

His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but succeeded in getting around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.

In the future, such workarounds mean even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware," Simonovich said.

Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

"We're not going to see very sophisticated code created directly by chatbots," he said.

Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.

T.Sato--JT