The Japan Times - AI's blind spot: tools fail to detect their own fakes


AI's blind spot: tools fail to detect their own fakes / Photo: Chris Delmas - AFP

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated -- even though it had generated the image itself.


Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.

In many cases, the tools wrongly identify fabricated images as real -- even images produced by the very same generative models -- further muddying an online information landscape awash with AI-generated fakes.

Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.

The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.

When online sleuths tracking him asked Google's new AI mode whether the image was real, it incorrectly said it was authentic.

AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.

"These models are trained primarily on language patterns and lack the specialized visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.

"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalized assessments, making them unreliable for tasks like fact-checking or verifying authenticity."

Google did not respond to AFP's request for comment.

- 'Distinguishable from reality' -

AFP found similar examples of AI tools failing to verify their own creations.

During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.

An AFP analysis found it was created using Google's Gemini AI model.

But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.

"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Center, told AFP.

"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."

Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images from photojournalists of news events.

All seven models failed to correctly identify the provenance of the photos, the study said.

- 'Shocked' -

AFP tracked down the source of the Co image, which garnered over a million views across social media: a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.

"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.

"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."

Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.

The trend has fueled concerns as surveys show online users increasingly shifting from traditional search engines to AI tools to gather and verify information.

The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."

Human fact-checking has long been a flashpoint in hyperpolarized societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.

Researchers say AI models can be useful to professional fact-checkers, helping them quickly geolocate images and spot visual clues that establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.

"We can't rely on AI tools to combat AI in the long run," Fallorina said.

burs-ac/sla/sms

Y.Watanabe--JT